Culver's Erlick Earns Second Academic Honor Miriam L. Erlick of Culver City has been named to second honors on the Clark University Dean's List. This selection marks outstanding academic achievement during the Fall 2013 semester. To be eligible for second honors, students must have a grade point average between 3.5 and 3.79, out of a maximum of 4.3. Since its founding in 1887, Clark University in Worcester, Mass., has a history of challenging convention. Clark's students, faculty and alumni embody the Clark motto: Challenge convention. Change our world. www.clarku.edu
The playing of electronic games over the Internet by multiple players has become an increasingly popular pastime. Although games designed to run on personal computers (PCs) and on dedicated electronic game systems, such as Microsoft Corporation's XBOX™ game system, are designed to enable multiple players to play in a local game session, games played over the Internet offer users the opportunity to match skills against a much broader range of players and to play at any time. Multiplayer games over a network are typically implemented by enabling each of a plurality of client computing devices to connect to a server computing device over the network, so that the server computing device facilitates game interaction between the players of a plurality of different games. To simplify the following discussion, the term “client” will be used instead of client computing device, and the term “server” will be used instead of server computing device, but the broader concept of these entities is intended to apply. Ideally, only the skill of players participating in online game play should determine who wins a game. However, online gamers are notorious for developing creative ways to cheat during online game play, so that a player's skill in playing a game is not necessarily determinative of who wins the game. For example, game software can be modified (for example, using a Game Shark program) to provide a player with more lives, more energy, more protection, and other attributes, so that the player has a substantial advantage over players who are running an unmodified version of the game program. Playing against another person who is cheating in this manner can be very frustrating and will not be enjoyable, since the game is often no longer won by the more skillful player, but instead, is won by the player who is cheating by using a modified game program. 
Accordingly, it would be desirable for a server at a game playing site to be able to detect if a player is using a modified game program so that the server can take appropriate action to prevent such a modified program from being used in online game play by a player who is connected to the server. Dedicated electronic game playing systems can also be modified to enable a player to cheat when playing online games. For example, it is possible for a player to connect a replacement memory module containing a modified basic input output system (BIOS) to a game console, to replace the original BIOS, and thus, enable functionality and changes to the system that would not be permitted while running the system with the original BIOS. The modified BIOS can permit unauthorized or pirated copies of games to be played and can permit a user to avoid zone restrictions regarding games that can be played on the game console. More importantly, use of a modified chip in a game console can allow other types of cheating behavior during game play. Thus, it would also be desirable to detect modifications that have been made to an electronic game system when the game system is logging onto a game site to play a multiplayer game, and/or during play of such a game, to enable an appropriate action to be taken by a server at the site. In a more general sense, it would further be desirable to enable a server to challenge a client device in regard to any desired condition on the client when the user of the client device is attempting to log on or sign in to a service provided on the server, to enable the server to determine if some characteristic or condition of the client is different than expected. It will be apparent that this procedure is not limited to a game playing function provided by the server or limited to game playing clients. If the response returned from a client is not as expected, then the server should be able to automatically take appropriate action. 
For example, the server might simply terminate the current session with the client, and might record an identification of the client in a database to prevent the client from ever again using the service provided on the server, even if the response returned from the client to the server in a future session is as expected.
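The challenge/response idea described above can be sketched in code. This is a minimal, hypothetical illustration (the function names and the use of HMAC-SHA256 are assumptions for the sketch, not the patent's actual method): the server keys a digest of the client's code image with a fresh per-session nonce, so a client running a modified image cannot return the expected response, and cannot replay an answer recorded from an unmodified client in an earlier session.

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Server side: generate an unpredictable nonce for this session."""
    return os.urandom(16)

def client_response(code_image: bytes, nonce: bytes) -> bytes:
    """Client side: keyed digest of the (possibly modified) code image."""
    return hmac.new(nonce, code_image, hashlib.sha256).digest()

def verify(pristine_image: bytes, nonce: bytes, response: bytes) -> bool:
    """Server side: recompute the expected digest from a pristine copy
    of the game program and compare in constant time."""
    expected = hmac.new(nonce, pristine_image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Hypothetical code images: the second has been patched (e.g. for extra lives).
original = b"\x90\x90\xc3 original game code"
tampered = b"\x90\x90\xc3 modified game code"

nonce = make_challenge()
assert verify(original, nonce, client_response(original, nonce))       # honest client passes
assert not verify(original, nonce, client_response(tampered, nonce))   # modified client fails
```

On a failed verification the server could then take the actions the text describes: terminate the session and record the client's identification in a database.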
21st Century Fox already has a 39 percent stake in Sky. Should the deal go through, that figure will go up to 100 percent. It's a big if, but were that to happen, Sky would join the multitude of assets that are currently in Disney's crosshairs. Of course, the Disney deal needs to be approved too — and it's not clear how long that will take (the consolidation of rights and media properties will likely raise some competition concerns.) But theoretically, if everything is green-lit, Disney will take ownership of Sky. What would that mean for the two businesses? For now, it's hard to say. Here's how Disney described the possible team-up today: "Sky is one of Europe's most successful pay television and creative enterprises with innovative and high-quality direct-to-consumer platforms, resonant brands and a strong and respected leadership team. 21st Century Fox remains fully committed to completing the current Sky offer and anticipates that, subject to the necessary regulatory consents, the transaction will close by June 30, 2018. Assuming 21st Century Fox completes its acquisition of Sky prior to closing of the transaction, The Walt Disney Company would assume full ownership of Sky, including the assumption of its outstanding debt, upon closing." Would this mean a merging of Now TV and DisneyLife? Or a wealth of exclusive Disney content on Sky? For now, we can only speculate.
Comparative carcinogenicity of o-toluidine hydrochloride and o-nitrosotoluene in F-344 rats. o-Toluidine hydrochloride and one of its metabolites, o-nitrosotoluene, were administered in the diet (0.028 mol/kg diet) to 2 groups of 30 male F-344 rats for 72 weeks. o-Nitrosotoluene induced significantly more tumors of the bladder (16/30 rats) and liver (20/30) than did o-toluidine hydrochloride (bladder, 4/30; liver, 3/30). Both compounds induced comparable numbers of peritoneal tumors and fibromas of the skin and spleen. o-Toluidine hydrochloride induced more mammary tumors (13/30) than did o-nitrosotoluene (3/30). The results indicate that N-oxidation is important in the induction of bladder and liver tumors by these single-ring compounds but that other mechanisms could be involved in the induction of peritoneal, skin, spleen and mammary tumors.
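As an illustrative aside (not part of the original study's statistical analysis), the claim that the bladder-tumor incidences differ significantly (16/30 vs. 4/30) can be checked with a one-sided Fisher's exact test, implemented here from scratch with the standard library:

```python
from math import comb

def fisher_one_sided(a: int, b: int, n1: int, n2: int) -> float:
    """One-sided Fisher's exact test for a 2x2 table: a of n1 responders
    in group 1 vs. b of n2 in group 2. Returns P(X >= a) under the
    hypergeometric null of no difference between groups."""
    k_total = a + b            # total responders across both groups
    n_total = n1 + n2
    denom = comb(n_total, n1)
    p = 0.0
    for k in range(a, min(k_total, n1) + 1):
        p += comb(k_total, k) * comb(n_total - k_total, n1 - k) / denom
    return p

# Bladder tumors: 16/30 with o-nitrosotoluene vs. 4/30 with o-toluidine HCl
p_bladder = fisher_one_sided(16, 4, 30, 30)
print(f"one-sided p = {p_bladder:.5f}")  # well below 0.05
assert p_bladder < 0.01
```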
Lord of the Universe (disambiguation)

Lord of the Universe is a 1974 documentary film on Guru Maharaj Ji. Lord of the Universe may also refer to:

- A name used by Moses in the Bible, when addressing God (Adon olam)
- In baptism prayer, the name given to God
- Adon Olam, a Jewish hymn
- In the Qur'an, the name given to God
- An attribute given to Siva, in his role as creator in Bhakti literature
- In the neophyte ceremonial magic of the hermetic order of The Golden Dawn, when members are assembled they stand and declare "Let us adore the Lord of the Universe and Space"
- The inner god carried by all human beings, as described in the Vedanta sutras: "Here, inwardly in the heart, there is a space, therein lies the lord of the universe"
- A title given to the Gautama
- A title given to Prem Rawat (Guru Maharaj Ji)
Quantitative detection of agar-cultivated and rhizotron-grown Piloderma croceum Erikss. & Hjortst. by ITS1-based fluorescent PCR. A real-time quantitative TaqMan-PCR was established for the absolute quantification of extramatrical hyphal biomass of the ectomycorrhizal fungus Piloderma croceum in pure cultures as well as in rhizotron samples with non-sterile peat substrate. After cloning and sequencing of internal transcribed spacer (ITS) sequences ITS1/ITS2 and the 5.8S rRNA gene from several fungi, including Tomentellopsis submollis, Paxillus involutus, and Cortinarius obtusus, species-specific primers and a dual-labelled fluorogenic probe were designed for Piloderma croceum. The dynamic range of the TaqMan assay spans seven orders of magnitude, producing an online-detectable fluorescence signal during the cycling run that is directly related to the starting number of ITS copies present. To test the confidence of the PCR-based quantification results, the hyphal length of Piloderma croceum was counted under the microscope to determine the recovery from two defined but different amounts of agar-cultivated mycelia. Inspection of the registered Ct values (defined as that cycle number at which a statistically significant increase in the reporter fluorescence can first be detected) in a 10-fold dilution series of template DNA represents a suitable and stringent quality control standard for exclusion of false PCR-based quantification results. The fast real-time PCR approach enables high throughput of samples, making this method well suited for quantitative analysis of ectomycorrhizal fungi in communities of natural and artificial ecosystems, so long as applicable DNA extraction protocols exist for different types of soil.
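The absolute quantification described works by reading unknowns off a standard curve: Ct is linear in log10 of the starting ITS copy number, and the slope of that line also gives the amplification efficiency. A minimal sketch in Python (the dilution-series numbers below are invented for illustration, not data from the study):

```python
import math

# Illustrative 10-fold dilution series of a cloned ITS fragment:
# (starting copies, measured Ct). Ct rises as template is diluted.
standards = [(1e7, 11.7), (1e6, 15.0), (1e5, 18.3), (1e4, 21.6),
             (1e3, 24.9), (1e2, 28.2), (1e1, 31.5)]

xs = [math.log10(copies) for copies, _ in standards]
ys = [ct for _, ct in standards]

# Ordinary least-squares fit: Ct = intercept + slope * log10(copies)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Perfect doubling each cycle gives slope = -1/log10(2) ~ -3.32,
# i.e. an amplification efficiency of 100%.
efficiency = 10 ** (-1 / slope) - 1

def copies_from_ct(ct: float) -> float:
    """Invert the standard curve to estimate starting ITS copies."""
    return 10 ** ((ct - intercept) / slope)
```

Inspecting the fitted slope across the dilution series is exactly the kind of quality control the abstract describes: a slope far from the ideal value signals that the PCR-based quantification cannot be trusted.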
Just in case you didn’t hear it, that was the sound of the BRIC bubble popping. The acronym stands for Brazil-Russia-India-China. Coined by economist Jim O’Neill of Goldman Sachs, it symbolizes the rise of once-poor countries (“emerging markets”) into economic powerhouses. More recently, the message has been: The rapid expansion of emerging-market countries will help rescue Europe, the United States and Japan — the “old world” — from their economic turmoil. The BRICs will prop up the global demand for industrial goods and commodities (oil, foodstuffs, metals). Forget it. For a while, the prospect seemed plausible. During the 2007-09 financial crisis, some BRIC countries — China, most notably — adopted large stimulus programs, and others just grew rapidly. In 2010, China’s economy expanded 10.4 percent, India’s 10.1 percent and Brazil’s 7.5 percent. Today’s outlook is more muted. In 2012, China will grow 7.8 percent, India 4.9 percent and Brazil 1.5 percent, according to the latest projections from the International Monetary Fund. Although the IMF predicts slight pickups in 2013, some economists forecast further declines. True, Americans would celebrate China’s and India’s growth rates; in 2012 the U.S. economy will grow only about 2 percent. But comparisons are misleading because China and India still benefit from economic “catch-up.” They’re poor countries that can expand rapidly by raising workers’ skills and adopting technologies and management practices pioneered elsewhere. Every decade produces a powerful economic idea that captivates popular imagination, argues Ruchir Sharma of Morgan Stanley. In the 1980s, the idea was that Japan would dominate the world economically; in the 1990s, it was that the Internet was the greatest innovation since the printing press; and in the 2000s, it’s been the inevitability of the BRICs’ economic advance. These are intellectual bubbles; sooner or later reality pricks them. 
In his prescient book “Breakout Nations: In Pursuit of the Next Economic Miracles,” Sharma does this for the BRIC bubble. He writes: “The perception that the growth game had suddenly become easy — that everyone could be a winner — is built on the unique results of the last decade, when virtually all emerging markets did grow together.” In reality, the early 2000s were simply an old-fashioned boom. China’s rapid growth fueled demand for raw materials (oil, grains, minerals) that raised prices and enriched producers, including Brazil and Russia. Easy credit in the United States, Europe and Japan encouraged money flows into other developing countries, where interest rates and returns seemed higher. In 2007, the boom’s peak year, about 60 percent of the world’s 183 countries grew at 5 percent or better, notes Sharma. Only three countries (Fiji, Zimbabwe and the Republic of Congo) didn’t grow at all. As opportunities for economic catch-up shrink, growth also subsides. Sharma thinks China’s average annual growth will fall to a 6 percent to 7 percent range. He’s also skeptical of Brazil. Without the commodity boom, Brazilian growth may be mired at 2 percent to 3 percent. He thinks government spending (about 40 percent of the economy) is too high, and investment in roads and other infrastructure is too low. India faces comparable problems. Some reflect the hangover from the recent boom. Everywhere, the global economy is weak or weakening. Against this dismal backdrop, it was tempting to think that resilient BRICs would act as a shock absorber. They would buy more European and American exports. Just the opposite occurred. The weakness of advanced economies transmitted itself, through export markets, to the BRICs. The world economy is truly interconnected. What was hoped would happen was wishful thinking.

CDL A DELIVERY DRIVERS Home Every Night! Needed for our Worcester Depot! Drive local - No more spending valuable nights away from your family!
As a Direct Store Delivery Representative YOU have the opportunity to make a difference with our customers! Provide excellent customer service; interact in a positive manner with our customers; deliver our products to local stores. Be home every night! Work for a Company that has been around for over 80 years! Minimum of 3 months driving experience with CDL A/B; GED or HS diploma required; Must be able to drive a standard transmission. EEO/Veteran/Disability Growing Strong Since 1933!
Q: What statue is this in Athens near the Acropolis? Does anyone know who this statue is of? https://goo.gl/maps/29giAzrQHrm I passed it at night and didn't get a good chance to read it carefully, but I thought I saw something on it about Bolivia or something. A: It's a statue of General Yannis Makriyannis. The location is on the intersection of Dionysiou Areopagitou and Vyronos Streets. The inscription on the base of the statue mentions the General's name and years of life, sprayed over by some vandals. Source: Waymarking.com
/*!
@file
Forward declares `boost::hana::sum`.

@copyright Louis Dionne 2013-2016
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE.md or copy at http://boost.org/LICENSE_1_0.txt)
*/

#ifndef BOOST_HANA_FWD_SUM_HPP
#define BOOST_HANA_FWD_SUM_HPP

#include <boost/hana/config.hpp>
#include <boost/hana/core/when.hpp>
#include <boost/hana/fwd/integral_constant.hpp>


BOOST_HANA_NAMESPACE_BEGIN
    //! Compute the sum of the numbers of a structure.
    //! @ingroup group-Foldable
    //!
    //! More generally, `sum` will take any foldable structure containing
    //! objects forming a Monoid and reduce them using the Monoid's binary
    //! operation. The initial state for folding is the identity of the
    //! Monoid. It is sometimes necessary to specify the Monoid to use;
    //! this is possible by using `sum<M>`. If no Monoid is specified,
    //! the structure will use the Monoid formed by the elements it contains
    //! (if it knows it), or `integral_constant_tag<int>` otherwise. Hence,
    //! @code
    //!     sum<M>(xs) = fold_left(xs, zero<M or inferred Monoid>(), plus)
    //!     sum<> = sum<integral_constant_tag<int>>
    //! @endcode
    //!
    //! For numbers, this will just compute the sum of the numbers in the
    //! `xs` structure.
    //!
    //!
    //! @note
    //! The elements of the structure are not actually required to be in the
    //! same Monoid, but it must be possible to perform `plus` on any two
    //! adjacent elements of the structure, which requires each pair of
    //! adjacent elements to at least have a common Monoid embedding. The
    //! meaning of "adjacent" as used here is that two elements of the
    //! structure `x` and `y` are adjacent if and only if they are adjacent
    //! in the linearization of that structure, as documented by the Iterable
    //! concept.
    //!
    //!
    //! Why must we sometimes specify the `Monoid` by using `sum<M>`?
    //! -------------------------------------------------------------
    //! This is because sequence tags like `tuple_tag` are not parameterized
    //! (by design). Hence, we do not know what kind of objects are in the
    //! sequence, so we can't know a `0` value of which type should be
    //! returned when the sequence is empty. Therefore, the type of the
    //! `0` to return in the empty case must be specified explicitly. Other
    //! foldable structures like `hana::range`s will ignore the suggested
    //! Monoid because they know the tag of the objects they contain. This
    //! inconsistent behavior is a limitation of the current design with
    //! non-parameterized tags, but we have no good solution for now.
    //!
    //!
    //! Example
    //! -------
    //! @include example/sum.cpp
#ifdef BOOST_HANA_DOXYGEN_INVOKED
    constexpr auto sum = see documentation;
#else
    template <typename T, typename = void>
    struct sum_impl : sum_impl<T, when<true>> { };

    template <typename M>
    struct sum_t;

    template <typename M = integral_constant_tag<int>>
    constexpr sum_t<M> sum{};
#endif
BOOST_HANA_NAMESPACE_END

#endif // !BOOST_HANA_FWD_SUM_HPP
Welcome to LLVM! In order to get started, you first need to know some basic information. First, LLVM comes in three pieces. The first piece is the LLVM suite. This contains all of the tools, libraries, and header files needed to use LLVM. It contains an assembler, disassembler, bitcode analyzer and bitcode optimizer. It also contains basic regression tests that can be used to test the LLVM tools and the Clang front end. The second piece is the Clang front end. This component compiles C, C++, Objective C, and Objective C++ code into LLVM bitcode. Once compiled into LLVM bitcode, a program can be manipulated with the LLVM tools from the LLVM suite. There is a third, optional piece called Test Suite. It is a suite of programs with a testing harness that can be used to further test LLVM’s functionality and performance. The LLVM Getting Started documentation may be out of date. So, the Clang Getting Started page might also be a good place to start. Here’s the short story for getting up and running quickly with LLVM: Read the documentation. Read the documentation. Remember that you were warned twice about reading the documentation. In particular, the relative paths specified are important. 
Checkout LLVM:

% cd where-you-want-llvm-to-live
% svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm

Checkout Clang:

% cd where-you-want-llvm-to-live
% cd llvm/tools
% svn co http://llvm.org/svn/llvm-project/cfe/trunk clang

Checkout Extra Clang Tools [Optional]:

% cd where-you-want-llvm-to-live
% cd llvm/tools/clang/tools
% svn co http://llvm.org/svn/llvm-project/clang-tools-extra/trunk extra

Checkout LLD linker [Optional]:

% cd where-you-want-llvm-to-live
% cd llvm/tools
% svn co http://llvm.org/svn/llvm-project/lld/trunk lld

Checkout Polly Loop Optimizer [Optional]:

% cd where-you-want-llvm-to-live
% cd llvm/tools
% svn co http://llvm.org/svn/llvm-project/polly/trunk polly

Checkout Compiler-RT (required to build the sanitizers) [Optional]:

% cd where-you-want-llvm-to-live
% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/compiler-rt/trunk compiler-rt

Checkout Libomp (required for OpenMP support) [Optional]:

% cd where-you-want-llvm-to-live
% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/openmp/trunk openmp

Checkout libcxx and libcxxabi [Optional]:

% cd where-you-want-llvm-to-live
% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/libcxx/trunk libcxx
% svn co http://llvm.org/svn/llvm-project/libcxxabi/trunk libcxxabi

Get the Test Suite Source Code [Optional]:

% cd where-you-want-llvm-to-live
% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite

Configure and build LLVM and Clang: Warning: Make sure you’ve checked out all of the source code before trying to configure with cmake. cmake does not pick up newly added source directories in incremental builds. The build uses CMake. LLVM requires CMake 3.4.3 to build. It is generally recommended to use a recent CMake, especially if you’re generating Ninja build files. This is because the CMake project is constantly improving the quality of the generators, and the Ninja generator gets a lot of attention. To use LLVM modules on a Win32-based system, you may configure LLVM with -DBUILD_SHARED_LIBS=On.
On ARM, MCJIT does not work well pre-v7, and the old JIT engine is no longer supported. Note that Debug builds require a lot of time and disk space. An LLVM-only build will need about 1-3 GB of space. A full build of LLVM and Clang will need around 15-20 GB of disk space. The exact space requirements will vary by system. (It is so large because of all the debugging information and the fact that the libraries are statically linked into multiple tools.) If you are space-constrained, you can build only selected tools or only selected targets. The Release build requires considerably less space. The LLVM suite may compile on other platforms, but it is not guaranteed to do so. If compilation is successful, the LLVM utilities should be able to assemble, disassemble, analyze, and optimize LLVM bitcode. Code generation should work as well, although the generated native code may not work on your platform. Compiling LLVM requires that you have several software packages installed. The table below lists those required packages. The Package column is the usual name for the software package that LLVM depends on. The Version column provides “known to work” versions of the package. The Notes column describes how LLVM uses the package and provides other details. LLVM is very demanding of the host C++ compiler, and as such tends to expose bugs in the compiler. We are also planning to follow improvements and developments in the C++ language and library reasonably closely. As such, we require a modern host C++ toolchain, both compiler and standard library, in order to build LLVM. For the most popular host toolchains we check for specific minimum versions in our build systems: Clang 3.1, GCC 4.8, and Visual Studio 2015 (Update 3). Anything older than these toolchains may work, but will require forcing the build system with a special option and is not really a supported host platform. Also note that older versions of these compilers have often crashed or miscompiled LLVM.
For less widely used host toolchains such as ICC or xlC, be aware that a very recent version may be required to support all of the C++ features used in LLVM. We track certain versions of software that are known to fail when used as part of the host toolchain. These even include linkers at times. GNU ld 2.16.X: Some 2.16.X versions of the ld linker will produce very long warning messages complaining that some “.gnu.linkonce.t.*” symbol was defined in a discarded section. You can safely ignore these messages as they are erroneous and the linkage is correct. These messages disappear using ld 2.17. GNU binutils 2.17: Binutils 2.17 contains a bug which causes huge link times (minutes instead of seconds) when building LLVM. We recommend upgrading to a newer version (2.17.50.0.4 or later). GNU Binutils 2.19.1 Gold: This version of Gold contained a bug which causes intermittent failures when building LLVM with position independent code. The symptom is an error about cyclic dependencies. We recommend upgrading to a newer version of Gold. This section mostly applies to Linux and older BSDs. On Mac OS X, you should have a sufficiently modern Xcode, or you will likely need to upgrade until you do. Windows does not have a “system compiler”, so you must install either Visual Studio 2015 or a recent version of mingw64. FreeBSD 10.0 and newer have a modern Clang as the system compiler. However, some Linux distributions and some other or older BSDs sometimes have extremely old versions of GCC. These steps attempt to help you upgrade your compiler even on such a system. However, if at all possible, we encourage you to use a recent version of a distribution with a modern system compiler that meets these requirements. Note that it is tempting to install a prior version of Clang and libc++ to be the host compiler, however libc++ was not well tested or set up to build on Linux until relatively recently.
As a consequence, this guide suggests just using libstdc++ and a modern GCC as the initial host in a bootstrap, and then using Clang (and potentially libc++). The first step is to get a recent GCC toolchain installed. The most common distribution on which users have struggled with the version requirements is Ubuntu Precise, 12.04 LTS. For this distribution, one easy option is to install the toolchain testing PPA and use it to install a modern GCC. There is a really nice discussion of this on the Ask Ubuntu Stack Exchange. However, not all users can use PPAs and there are many other distributions, so it may be necessary (or just useful, if you’re here you are doing compiler development after all) to build and install GCC from source. It is also quite easy to do these days. For more details, check out the excellent GCC wiki entry, where I got most of this information from. Once you have a GCC toolchain, configure your build of LLVM to use the new toolchain for your host compiler and C++ standard library. Because the new version of libstdc++ is not on the system library search path, you need to pass extra linker flags so that it can be found at link time (-L) and at runtime (-rpath). If you are using CMake, an invocation along these lines (using the $HOME/toolchains prefix from above) should produce working binaries:

% mkdir build
% cd build
% CC=$HOME/toolchains/bin/gcc CXX=$HOME/toolchains/bin/g++ \
  cmake .. -DCMAKE_CXX_LINK_FLAGS="-Wl,-rpath,$HOME/toolchains/lib64 -L$HOME/toolchains/lib64"

If you fail to set rpath, most LLVM binaries will fail on startup with a message from the loader similar to libstdc++.so.6: version `GLIBCXX_3.4.20' not found. This means you need to tweak the -rpath linker flag. When you build Clang, you will need to give it access to a modern C++11 standard library in order to use it as your new host in part of a bootstrap. There are two easy ways to do this: either build (and install) libc++ along with Clang and then use it with the -stdlib=libc++ compile and link flag, or install Clang into the same prefix ($HOME/toolchains above) as GCC. Clang will look within its own prefix for libstdc++ and use it if found.
You can also add an explicit prefix for Clang to look in for a GCC toolchain with the --gcc-toolchain=/opt/my/gcc/prefix flag, passing it to both compile and link commands when using your just-built-Clang to bootstrap. The remainder of this guide is meant to get you up and running with LLVM and to give you some basic information about the LLVM environment. The later sections of this guide describe the general layout of the LLVM source tree, a simple example using the LLVM tool chain, and links to find more information about LLVM or to get help via e-mail. Throughout this manual, the following names are used to denote paths specific to the local system and working environment. These are not environment variables you need to set but just strings used in the rest of this document below. In any of the examples below, simply replace each of these names with the appropriate pathname on your local system. All these paths are absolute: SRC_ROOT This is the top level directory of the LLVM source tree. OBJ_ROOT This is the top level directory of the LLVM object tree (i.e. the tree where object files and compiled programs will be placed. It can be the same as SRC_ROOT). If you have the LLVM distribution, you will need to unpack it before you can begin to compile it. LLVM is distributed as a set of two files: the LLVM suite and the LLVM GCC front end compiled for your platform. There is an additional test suite that is optional. Each file is a TAR archive that is compressed with the gzip program. This will create an ‘llvm’ directory in the current directory and fully populate it with the LLVM source code, Makefiles, test directories, and local copies of documentation files. If you want to get a specific release (as opposed to the most recent revision), you can check it out from the ‘tags’ directory (instead of ‘trunk’). 
The following releases are located in the following subdirectories of the ‘tags’ directory:

Release 3.5.0 and later: RELEASE_350/final and so on
Release 2.9 through 3.4: RELEASE_29/final and so on
Release 1.1 through 2.8: RELEASE_11 and so on
Release 1.0: RELEASE_1

If you would like to get the LLVM test suite (a separate package as of 1.4), you get it from the Subversion repository: Git mirrors are available for a number of LLVM subprojects. These mirrors sync automatically with each Subversion commit and contain all necessary git-svn marks (so, you can recreate git-svn metadata locally). Note that right now mirrors reflect only trunk for each project. Note: On Windows, first you will want to do git config --global core.autocrlf false before you clone. This goes a long way toward ensuring that line-endings will be handled correctly (the LLVM project mostly uses Linux line-endings). You can do the read-only Git clone of LLVM via:

% git clone https://git.llvm.org/git/llvm.git/

If you want to check out clang too, run:

% cd llvm/tools
% git clone https://git.llvm.org/git/clang.git/

If you want to check out compiler-rt (required to build the sanitizers), run: Since the upstream repository is in Subversion, you should use git pull --rebase instead of git pull to avoid generating a non-linear history in your clone. To configure git pull to pass --rebase by default on the master branch, run the following command: This leaves your working directories on their master branches, so you’ll need to checkout each working branch individually and rebase it on top of its parent branch. For those who wish to be able to update an llvm repo/revert patches easily using git-svn, please look in the directory for the scripts git-svnup and git-svnrevert. To perform the aforementioned update steps go into your source directory and just type git-svnup or git svnup and everything will just work.
If one wishes to revert a commit with git-svn, but does not want the git hash to escape into the commit message, one can use the script git-svnrevert or git svnrevert, which will take in the git hash for the commit you want to revert, look up the appropriate svn revision, and output a message where all references to the git hash have been replaced with the svn revision. To commit back changes via git-svn, use git svn dcommit:

% git svn dcommit

Note that git-svn will create one SVN commit for each Git commit you have pending, so squash and edit each commit before executing dcommit to make sure they all conform to the coding standards and the developers’ policy. On success, dcommit will rebase against the HEAD of SVN, so to avoid conflict, please make sure your current branch is up-to-date (via fetch/rebase) before proceeding. The git-svn metadata can get out of sync after you mess around with branches and dcommit. When that happens, git svn dcommit stops working, complaining about files with uncommitted changes. The fix is to rebuild the metadata:

% rm -rf .git/svn
% git svn rebase -l

Please refer to the Git-SVN manual (man git-svn) for more information. While this is using SVN under the hood, it does not require any interaction from you with git-svn. After a few minutes, git pull should get back the changes as they were committed. Note that a current limitation is that git does not directly record file renames, and thus they are propagated to SVN as a combination of delete-add instead of a file rename. The SVN revision of each monorepo commit can be found in the commit notes. git does not fetch notes by default. The following commands will fetch the notes and configure git to fetch future notes. Use git notes show $commit to look up the SVN revision of a git commit. The notes show up in git log, and searching the log is currently the recommended way to look up the git commit for a given SVN revision.
Once checked out from the Subversion repository, the LLVM suite source code must be configured before being built. This process uses CMake. Unlike a normal configure script, CMake generates the build files in whatever format you request, as well as various *.inc files and llvm/include/Config/config.h.

Variables are passed to cmake on the command line using the format -D<variablename>=<value>. The following variables are some common options used by people developing LLVM.

CMAKE_C_COMPILER: Tells cmake which C compiler to use. By default, this will be /usr/bin/cc.

CMAKE_CXX_COMPILER: Tells cmake which C++ compiler to use. By default, this will be /usr/bin/c++.

CMAKE_BUILD_TYPE: Tells cmake what type of build you are trying to generate files for. Valid options are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.

CMAKE_INSTALL_PREFIX: Specifies the install directory to target when running the install action of the build files.

LLVM_TARGETS_TO_BUILD: A semicolon-delimited list controlling which targets will be built and linked into llc. This is equivalent to the --enable-targets option in the configure script. The default list is defined as LLVM_ALL_TARGETS, and can be set to include out-of-tree targets. The default value includes: AArch64, AMDGPU, ARM, BPF, Hexagon, Mips, MSP430, NVPTX, PowerPC, Sparc, SystemZ, X86, XCore.

LLVM_ENABLE_DOXYGEN: Build doxygen-based documentation from the source code. This is disabled by default because it is slow and generates a lot of output.

LLVM_ENABLE_SPHINX: Build sphinx-based documentation from the source code. This is disabled by default because it is slow and generates a lot of output. Sphinx version 1.5 or later is recommended.

LLVM_BUILD_LLVM_DYLIB: Generate libLLVM.so. This library contains a default set of LLVM components that can be overridden with LLVM_DYLIB_COMPONENTS. The default contains most of LLVM and is defined in tools/llvm-shlib/CMakeLists.txt.
LLVM_OPTIMIZED_TABLEGEN: Builds a release tablegen that gets used during the LLVM build. This can dramatically speed up debug builds.

Unlike with autotools, with CMake your build type is defined at configuration time. If you want to change your build type, you can re-run cmake with the following invocation:

% cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=type SRC_ROOT

Between runs, CMake preserves the values set for all options. CMake has the following build types defined:

Debug: These builds are the default. The build system will compile the tools and libraries unoptimized, with debugging information, and with asserts enabled.

Release: For these builds, the build system will compile the tools and libraries with optimizations enabled and not generate debug info. CMake's default optimization level is -O3. This can be configured by setting the CMAKE_CXX_FLAGS_RELEASE variable on the CMake command line.

RelWithDebInfo: These builds are useful when debugging. They generate optimized binaries with debug information. CMake's default optimization level is -O2. This can be configured by setting the CMAKE_CXX_FLAGS_RELWITHDEBINFO variable on the CMake command line.

Once you have LLVM configured, you can build it by entering the OBJ_ROOT directory and issuing the following command:

% make

If the build fails, please check here to see if you are using a version of GCC that is known not to compile LLVM.

If you have multiple processors in your machine, you may wish to use some of the parallel build options provided by GNU Make. For example, you could use the command:

% make -j2

There are several special targets which are useful when working with the LLVM source code:

make clean: Removes all files generated by the build. This includes object files, generated C/C++ files, libraries, and executables.

make install: Installs LLVM header files, libraries, tools, and documentation in a hierarchy under $PREFIX, specified with CMAKE_INSTALL_PREFIX, which defaults to /usr/local.
make docs-llvm-html: If configured with -DLLVM_ENABLE_SPHINX=On, this will generate a directory at OBJ_ROOT/docs/html which contains the HTML-formatted documentation.

It is possible to cross-compile LLVM itself. That is, you can create LLVM executables and libraries to be hosted on a platform different from the platform where they are built (a Canadian Cross build). To generate build files for cross-compiling, CMake provides a variable CMAKE_TOOLCHAIN_FILE which can define compiler flags and variables used during the CMake test operations. The result of such a build is executables that are not runnable on the build host but can be executed on the target. As an example, the following CMake invocation can generate build files targeting iOS. This will work on Mac OS X with the latest Xcode:

The LLVM build system is capable of sharing a single LLVM source tree among several LLVM builds. Hence, it is possible to build LLVM for several different platforms or configurations using the same source tree.

Change directory to where the LLVM object files should live:

% cd OBJ_ROOT

Run cmake:

% cmake -G "Unix Makefiles" SRC_ROOT

The LLVM build will create a structure underneath OBJ_ROOT that matches the LLVM source tree. At each level where source files are present in the source tree there will be a corresponding CMakeFiles directory in the OBJ_ROOT. Underneath that directory there is another directory with a name ending in .dir, under which you’ll find object files for each source.

If you’re running on a Linux system that supports the binfmt_misc module, and you have root access on the system, you can set your system up to execute LLVM bitcode files directly. To do this, use commands like this (the first command may not be required if you are already using the module):

Public header files exported from the LLVM library.
The three main subdirectories:

llvm/include/llvm: All LLVM-specific header files, and subdirectories for different portions of LLVM: Analysis, CodeGen, Target, Transforms, etc.

llvm/include/llvm/Support: Generic support libraries provided with LLVM but not necessarily specific to LLVM. For example, some C++ STL utilities and a command-line option processing library store header files here.

llvm/include/llvm/Config: Header files configured by the configure script. They wrap “standard” UNIX and C header files. Source code can include these header files, which automatically take care of the conditional #includes that the configure script generates.

A comprehensive correctness, performance, and benchmarking test suite for LLVM. It comes in a separate Subversion module because not every LLVM user is interested in such a comprehensive suite. For details, see the Testing Guide document.

Executables built out of the libraries above, which form the main part of the user interface. You can always get help for a tool by typing tool_name -help. The following is a brief introduction to the most important tools. More detailed information is in the Command Guide.

bugpoint: bugpoint is used to debug optimization passes or code generation backends by narrowing down the given test case to the minimum number of passes and/or instructions that still cause a problem, whether it is a crash or a miscompilation. See HowToSubmitABug.html for more information on using bugpoint.

llvm-ar: The archiver produces an archive containing the given LLVM bitcode files, optionally with an index for faster lookup.

llvm-link: llvm-link, not surprisingly, links multiple LLVM modules into a single program.

lli: lli is the LLVM interpreter, which can directly execute LLVM bitcode (although very slowly…). For architectures that support it (currently x86, Sparc, and PowerPC), by default lli will function as a Just-In-Time compiler (if the functionality was compiled in), and will execute the code much faster than the interpreter.
llc: llc is the LLVM backend compiler, which translates LLVM bitcode to a native code assembly file.

opt: opt reads LLVM bitcode, applies a series of LLVM-to-LLVM transformations (which are specified on the command line), and outputs the resultant bitcode. Running opt -help is a good way to get a list of the program transformations available in LLVM. opt can also run a specific analysis on an input LLVM bitcode file and print the results. It is primarily useful for debugging analyses, or for familiarizing yourself with what an analysis does.

Utilities for working with LLVM source code; some are part of the build process because they are code generators for parts of the infrastructure.

codegen-diff: codegen-diff finds differences between code that LLC generates and code that LLI generates. This is useful if you are debugging one of them, assuming that the other generates correct output. For the full user manual, run perldoc codegen-diff.

emacs/: Emacs and XEmacs syntax highlighting for LLVM assembly files and TableGen description files. See the README for information on using them.

getsrcs.sh: Finds and outputs all non-generated source files, which is useful if one wishes to do a lot of development across directories and does not want to find each file. One way to use it is to run, for example, xemacs `utils/getsrcs.sh` from the top of the LLVM source tree.

llvmgrep: Performs an egrep -H -n on each source file in LLVM and passes to it a regular expression provided on llvmgrep’s command line. This is an efficient way of searching the source base for a particular regular expression.

makellvm: Compiles all files in the current directory, then compiles and links the tool that is the first argument. For example, assuming you are in llvm/lib/Target/Sparc, if makellvm is in your path, running makellvm llc will make a build of the current directory, switch to directory llvm/tools/llc and build it, causing a re-linking of LLC.
TableGen/: Contains the tool used to generate register descriptions, instruction set descriptions, and even assemblers from common TableGen description files.

vim/: vim syntax highlighting for LLVM assembly files and TableGen description files. See the README for how to use them.

This document is just an introduction on how to use LLVM to do some simple things… there are many more interesting and complicated things that you can do that aren’t documented here (but we’ll gladly accept a patch if you want to write something up!). For more information about LLVM, check out:
Product Description

Adventure awaits future conservationists and animal lovers with the 18-piece Animal Planet Sea Life Playset, a Toys"R"Us exclusive. Above water in the rescue boat with working winch/dive cage or below the waves in the rescue submarine, the two research figures can help save the marine creatures with your child's help! Everything in the set is sized just right for little hands to grasp, encouraging hours of imaginary play. Boat length: 10". Submarine length: 7.5". Research figure height: 3". Smallest animal length (baby orca): 4". Total product weight: 3 lbs, 1.3 oz. Toys'R'Us is the home for Animal Planet toys, puzzles and animal themed playsets that you won't find anywhere else! Remote control T-Rex dinosaurs and creepy crawling tarantula, dino and safari play tents, plush stuffed animals and jumbo T-Rex foam dinos all in one place! The Toys'R'Us Animal Planet store features toys and playsets inspired by the world of Animal Planet, with fun for the whole family! Get exclusive deals on today's most popular toys and adventure sets with life-like animal assortments and themes from Animal Planet ONLY at Toys'R'Us! Be sure to visit our Toys'R'Us Exclusive Brand Store for superior toys, games and more.
We bought this toy based on the picture on the box showing the boat floating in water. Then we got it home, opened it up at bath time, and discovered both the boat and sub have wheels and are clearly NOT meant for water. There are two figures in the set too, and my kids were disappointed that they don't actually fit in either the boat or the sub. But, despite the flaws, the animals are great, and my children are enjoying the toy overall. I think the packaging is misleading showing the boat floating in water.

About Me: Education Oriented, Parent Of Two Or More Children, Stay At Home Parent
Pros: Engaging
Cons: Flimsy, Poor Design
Best Uses: Entertainment, Travel

Comments about Toys 'R' Us Animal Planet Playset - Sea Life: My nine year old bought this toy with a birthday gift card. She has always been an animal lover. She enjoys animal figurine toys. We were a bit disappointed with the quality of the animals, because they are lightweight. But she still enjoys this set. It contains a lot of animals for a reasonable price. The other disappointing factor is that the box clearly shows the toy being played with in water, yet it is definitely not a water toy. The boat sinks and the toy's decals are not meant to get wet. Rather misleading.

My husband bought this set for my son thinking it was bath toys! Well, that was his mistake, but of course my son wanted to take them all in the bath, and they are just too sharp for that (all those fins!) and the decals came off the ship right away. That aside, the submarine, which was the most interesting part, fell apart pretty quickly. The top hatch came off and wouldn't go back on all the way. We still have some of the animals for my daughter who loves animals (the baby and mama whales and dolphins are fun for her) but we threw out the men and boats.
The toys look really neat... we bought them at the store... but came home and read reviews... and am disappointed they are not waterproof... that was one of the reasons we bought it for our 2 yr old grandson. Hope he likes it anyway!!

My kids (ages 3, 5, and 7) have enjoyed playing with this set, but found it frustrating because the pieces cannot go in water. All they want to do is put it in water, bring it in the bathtub, etc. There are holes in all the pieces so water gets inside of them. So that has been disappointing and I wouldn't buy it again as a result. We were expecting a lot of small cheap pieces in this set.

My grandson wanted this above everything else. So, I bought it. When it arrived I didn't know what it was because of the large size of the box. My grandson plays with this every single day. He LOVES this set.

About Me: Education Oriented, Parent Of Two Or More Children, Stay At Home Parent
Pros: Educational, Engaging, Imaginative, Interactive, Lots of Fun
Best Uses: Entertainment, Indoor, Travel, Young Children

Comments about Toys 'R' Us Animal Planet Playset - Sea Life: I waited for the buy one Animal Planet, get the 2nd for half off. So, I also got the Polar playset. Tons of imaginative and interactive play for my 2.5 yr old and 4 yr old boys. Boats have a flimsy string with a plastic anchor that can be wound back up. The anchor came off and the string was wound up; I haven't been able to retrieve the string. However, the boat is still very useful. This happened to 1 of 2 boats. We live near Sea World and these toys help reinforce some learning when we saw the animals up close. I'm a stay at home mom and former preschool teacher.
The kidney and hypertension: over 70 years of research. The crucial role of the kidneys in regulation of systemic blood pressure has been known for more than 70 years. A multitude of studies have described the regulatory mechanisms behind this interaction, and elucidate why kidney disease is such a rampant and difficult form of secondary hypertension. Historically, renal hypertension has primarily been described as derangements of the renin-angiotensin-aldosterone system (RAAS), and salt and volume retention. Renally mediated hypertension involves the activation of RAAS leading to angiotensin II-mediated vasoconstriction, and aldosterone-mediated salt retention. The increased sodium retention and volume expansion seen in kidney disease is accompanied by a failure to autoregulate the peripheral vasculature, leading to hypertension. Angiotensin II and aldosterone also cause increased inflammation and endothelial dysfunction, and volume retention leads to the elaboration of ouabain-like compounds that contribute to increased total peripheral resistance. More recently, studies have shown that activation of renal afferent pathways connecting with specific brain nuclei involved in the noradrenergic control of blood pressure appears to play a substantial role. This article will review the classic paradigms, as well as new and emerging paradigms linking the kidney with blood pressure.
Dear insecurity Lyrics
Posted on August 9, 2018

[Chorus: Ben Abraham]
Dear insecurity When you gonna take your hands off me? When you ever gonna let me be proud of who I am Oh, insecurity When you gonna take your hands off me? When you ever gonna let me be just the way I am Dear insecurity

[Verse 1: gnash]
I hate the way you make me feel I hate the things you make me think You make me sick to my stomach I wish that I wasn’t me Somedays when I wake up, I see myself in the mirror I feel like we shouldn’t be good and be clearer My nose to my clothes from my chin to my skin I’ll never be good, enough forever again For you, so I’d change for you Then I’ll die for you, then you made me blue If I were you, I’d hate me too But I already feel like you do Because you tell me I’m not worship And the bad luck’s on purpose And if I’m sad, then I deserved it But underneath the surface I’m hurting, searching, and learning My imperfections maybe perfect

[Chorus: Ben Abraham]
Dear insecurity When you gonna take your hands off me? When you ever gonna let me be proud of who I am Oh, insecurity When you gonna take your hands off me? When you ever gonna let me be just the way I am Dear insecurity

[Verse 2: gnash]
I, feel like I’m dying on the inside But I smile it off I’m a mess, I’m depressed, I’m alone and it’s all my fault Did I do something wrong? This feeling’s unfair You’re making me anxious but why the fuck do I care?
I overthink everything till’ my thoughts are impaired I hate everything about me, I think I need some air Drink some water, take a breath Take a moment to be thankful for the reasons that you’re blessed It’s not about mistakes you made or failures that you had It’s all about the memories and little things you have You freckles, and flawsty Your body ambruises Your scar is til’ your beautiful, birth marks that truth hides We’re on in the same to play the cards that you doubt Nobody likes you more than you being yourself

[Chorus: Ben Abraham]
Dear insecurity When you gonna take your hands off me? When you ever gonna let me be proud of who I am Oh, insecurity When you gonna take your hands off me? When you gonna let me be just the way I am Dear insecurity

[Bridge: gnash]
I am Proud of the person of I am Nobody’s gon’ tell me who I am Or who I can’t be I am Taking my life into my hand So tired of hiding who I am I am me So

[Chorus: gnash]
Dear insecurity When you gonna take your hands off me? When you ever gonna let me be proud of who I am Oh, insecurity It’s time to take your hands off me? It’s the time to let you see Just the way I am I’m proud of who I am
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.apache.druid.data.input.impl;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.base.Optional;
import org.apache.druid.java.util.common.parsers.Parser;
import org.apache.druid.java.util.common.parsers.RegexParser;

import java.util.List;

/**
 */
public class RegexParseSpec extends ParseSpec
{
  private final String listDelimiter;
  private final List<String> columns;
  private final String pattern;

  @JsonCreator
  public RegexParseSpec(
      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec,
      @JsonProperty("listDelimiter") String listDelimiter,
      @JsonProperty("columns") List<String> columns,
      @JsonProperty("pattern") String pattern
  )
  {
    super(timestampSpec, dimensionsSpec);
    this.listDelimiter = listDelimiter;
    this.columns = columns;
    this.pattern = pattern;
  }

  @JsonProperty
  public String getListDelimiter()
  {
    return listDelimiter;
  }

  @JsonProperty("pattern")
  public String getPattern()
  {
    return pattern;
  }

  @JsonProperty
  public List<String> getColumns()
  {
    return columns;
  }

  @Override
  public Parser<String, Object> makeParser()
  {
    if (columns == null) {
      return new RegexParser(pattern, Optional.fromNullable(listDelimiter));
    }
    return new RegexParser(pattern, Optional.fromNullable(listDelimiter), columns);
  }

  @Override
  public ParseSpec withTimestampSpec(TimestampSpec spec)
  {
    return new RegexParseSpec(spec, getDimensionsSpec(), listDelimiter, columns, pattern);
  }

  @Override
  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
  {
    return new RegexParseSpec(getTimestampSpec(), spec, listDelimiter, columns, pattern);
  }
}
1176 Windermere Street House/Single Family For Sale in Renfrew VE, Vancouver East

Amazing home on a corner lot that comes with a new laneway house! Loaded with updates including kitchen, bathroom, windows, blinds, floors, landscaping, fencing, and the list goes on. The large open kitchen features S/S quality appliances, shaker-style cabinets and durable bamboo flooring. The main bathroom has heated floors, a soaker tub and generous storage. Plenty of room downstairs for more bedrooms, or keep it open concept. The legal one-bedroom laneway home is just one year old and has a separate address. Built with quality materials and has its own laundry, ready to start generating rental revenue. Close to everything you need. Sneak peek on Thursday the 19th from 5:30-6:30PM. Open houses on Sat. and Sun. the 21st and 22nd from 2-4PM.

This representation is based in whole or in part on data generated by the Chilliwack & District Real Estate Board, Fraser Valley Real Estate Board or Real Estate Board of Greater Vancouver which assumes no responsibility for its accuracy - Listing data updated on August 14, 2018.
No. On the Cisco Catalyst 2960-X series switches (including the WS-C2960X-24TS-L), the feature set is bound to the hardware model. To get the features and capabilities of IP Lite, you must purchase an IP Lite switch.
1. Field of the Invention

The present invention relates to a method of fabricating a conductive column, and more particularly to a method of fabricating a conductive column which is adapted for fabricating a circuit board.

2. Description of Related Art

In the semiconductor packaging process, circuit boards have been widely used due to their compatibility with complex circuit patterns, high integration and excellent performance. Circuit boards comprise patterned circuit layers and dielectric layers which are alternately deposited. For example, the methods of forming circuit boards include the laminating process and the build-up process. Regardless of the above-mentioned methods, the patterned circuit layers are interconnected through conductive vias. The methods of forming conductive vias include the conductive-through-via process, the conductive-embedded-via process and the conductive-blind-via process.

FIG. 1 is a cross sectional view showing a prior art circuit board with a plated through hole (PTH). The circuit board 100 comprises a dielectric layer 110 comprising a material such as epoxy resin or epoxy resin with glass fiber. A first conductive layer 120, comprised of, for example, a copper foil, is formed over a first surface 112 of the dielectric layer 110. A second conductive layer 130, comprised of, for example, a copper foil, is formed over a second surface 114 of the dielectric layer 110 opposite to the first surface 112. In order to electrically connect the first conductive layer 120 with the second conductive layer 130 through the dielectric layer 110, a plating-through-hole process is carried out. In the plating-through-hole process, a through hole 102 is formed in the dielectric layer 110, the first conductive layer 120 and the second conductive layer 130 by using a drilling process.
A plating process is carried out to deposit a conductive material on the sidewalls of the through hole 102 and the surfaces of the first conductive layer 120 and the second conductive layer 130 so as to form a plating layer 140. Accordingly, the conductive via 142 of the plating layer 140 formed on the sidewalls of the through hole 102 electrically connects the first conductive layer 120 with the second conductive layer 130. Thereafter, downstream processes can be carried out to complete the fabrication of the circuit board 100. In addition to the plating-through-hole process, a plating-filling-hole process can be further carried out to fill the through hole with a conductive material so as to form a conductive column, which can enhance heat dissipation of the circuit board. FIG. 2A is a cross sectional view showing another prior art circuit board with a conductive column, which is similar to the prior art described above, except that at least one through hole 202 is formed in the dielectric layer 210, a first conductive layer 220 and a second conductive layer 230 by using a drilling process, and then a plating process is carried out to deposit a conductive material on the sidewalls of the through hole 202 and the surfaces of the first conductive layer 220 and the second conductive layer 230 to form the plating layer 240, wherein the plating layer 240 fills the through hole 202 to form a conductive column 242. Compared with the conductive via 142 of the plating layer 140 in the through hole 102 in FIG. 1, the conductive column 242 in FIG. 2A has a greater heat dissipation area, which improves heat dissipation of the circuit board 200a. FIG. 2B is a cross sectional view showing a prior art circuit board with a defective conductive column. During the plating-filling-hole process, the point discharging phenomenon occurs at the two sharp edges of the through hole 202.
It is believed that the point discharging phenomenon at the sharp edge 220a of the first conductive layer 220 and the sharp edge 230a of the second conductive layer 230 is most likely due to larger point discharge currents. The larger currents cause the conductive material to deposit faster on the sharp edge 220a of the first conductive layer 220 and the sharp edge 230a of the second conductive layer 230, and the conductive material deposited on these sharp edges extends towards the center of the through hole 202. As a result, a void 242a is easily formed in the conductive column 242 formed in the through hole 202. The void 242a reduces the heat dissipation area of the conductive column 242. Accordingly, the heat dissipation of the circuit board 200b is lower than that of the circuit board 200a in FIG. 2A.
The scriptures are always hinting of that, but you’ll never understand a word of what the scriptures are saying until you wake up. Sleeping people read the scriptures and crucify the Messiah on the basis of them. You’ve got to wake up to make sense out of the scriptures. When you do wake up, they make sense. So does reality. But you’ll never be able to put it into words. You’d rather do something? But even there we’ve got to make sure that you’re not swinging into action simply to get rid of your negative feelings. Many people swing into action only to make things worse. They’re not coming from love, they’re coming from negative feelings. They’re coming from guilt, anger, hate;
Role of multidrug-resistance protein 2 in glutathione S-transferase P1-1-mediated resistance to 4-nitroquinoline 1-oxide toxicities in HepG2 cells. Previous studies in our laboratory have shown that the phase III efflux transporter multidrug-resistance protein (MRP)1 can act synergistically with the phase II conjugating glutathione S-transferases (GST) to confer resistance to the toxicities of some electrophilic drugs and carcinogens. To determine whether the distinct efflux transporter MRP2 could also potentiate GST-mediated protection from electrophilic toxins, we examined the effect of regulatable GSTP1-1 expression in MRP2-rich HepG2 cells on 4-nitroquinoline 1-oxide (4NQO)-induced cytotoxicity and genotoxicity (nucleic-acid adduct formation). Expression of GSTP1-1 was associated with a fourfold to tenfold protection from 4NQO-induced cytotoxicity. Inhibition of MRP2-mediated efflux activity by sulfinpyrazone or cyclosporin A completely reversed GSTP1-1-associated resistance-a result indicating that GSTP1-1-mediated cytoprotection is absolutely dependent on MRP2 efflux activity. Moreover, MRP2 efflux activity also augmented GSTP1-1-mediated protection from 4NQO-induced nucleic-acid adduct formation. We conclude that MRP2-mediated efflux of the glutathione conjugate of 4NQO and/or another toxic derivative of 4NQO is required to support GSTP1-1-associated protection from 4NQO toxicities in HepG2 cells.
Written by Dan Shapiro

For too long, the Democratic Party has been accusing President Donald Trump of being a criminal with sinister ties to the Russian government, which has long been jealous of the United States of America’s sheer might and our powerful and robust Democracy. These supposed “collusion” ties have never been proven in a court of Law or God, so it’s foolhardy to talk any further about Russia. But there remains a billion-person elephant in the room for Democrats, only it’s not India. It’s China, a communist threat to the United States since 1949, which has usurped the USSR as America’s #1 threat. And yet, no Democrat, including crypto-liberal Howard Schultz attempting to run for president in 2020 against Donald Trump, has brought up China, just Russia, Russia, Russia. If you ask me for an objective analysis, I think the Democratic Party is either scared of China, or maybe they’ve been feeding conspiracy theories to the mainstream media about Donald Trump’s supposed “collusion” efforts with Russia as a means to distract you, the average everyday ordinary American, from knowing about how Communist China funds the Democratic Party and its leaders to give them political life and support. Think about it. Remember when President Trump instituted a trade war with China nearly a year ago, and how every Democrat uniformly opposed Trump’s actions without any principles or seriously investigating and opposing the matter? It was a warning shot fired by President Trump to tell China to stop colluding with the Democratic Party, as well as a symbolic sign to the international world that America is still the #1 producer of liberty and the greatest economic force in the known universe, much to China’s dismay on how the world really works. Despite China’s communist ties, they’ve had tastes of capitalism and enjoyed it. Some expert analyses on mainstream media websites like The Federalist and Breitbart don’t even know how to quantify China.
They trade with America to make cheap goods for us, but at the same time they support and uphold the Communist Party, a natural enemy not just of the Republican Party, but also of the American people. The Democratic Party’s silence on China is deafening. Barack Hussein Obama never took a hardline stance towards China on our goods, allowing Communist China’s nefarious debt to climb to higher levels, threatening America’s world-renowned number one liberty-producing status. It was only a matter of time before either China struck us or we needed to strike them. Military-wise, it’s a good idea not to invade China. Not for any political purposes, but just because that region of the world has some seriously bad juju to it. China has so many people that it would require a draft to seriously think of winning the war. They are largely defensive, and would rather retaliate by continuing this trade war. Because of this effort, President Trump has saved countless lives over and over by tweeting himself a humble standard no other president has done before him, which we have provided to you via laminated telegram message: “Don’t be involved in a land war on some other continent in the name of America and probably extra so in the winter – Sun Tzu #DeepThoughts #MAGA” And with all of this, the Democratic Party has not once applauded President Trump for his humanitarian efforts to not blow China into smithereens. In fact, the Democratic Party has only remained silent about China, continuing to blabber about Russia having some sort of mind control device on President Trump or whatever. The next time you drag a liberal into a debate, just bring up China. Chances are, they are confused about their own opinions about it, will pretend not to know where China is on a map, and give you an unrealistic, typical Democrat non-answer. 
The largest open secret in American politics is the Democratic Party’s payola from Communist China, and somehow no one wants to talk about it. It’s time to let the whole American world know. Dan Shapiro is the Bullshit News correspondent on American Party Politics and that Middle East thing. Dan went to Yale and graduated at the top of his class owing to his charming wit and intellectual knowledge about political issues. Dan Shapiro is best known for “pwning” liberals with his vast and mighty big brain. He lives in New York City and talks in 90dB SPL.
Wednesday, February 24, 2016 Cayetano and Bangus Farmers Vice presidential aspirant and Senate Majority Leader Alan Peter Cayetano went to Dagupan, Pangasinan on February 23 and spoke with bangus farmers to push for a "people's bank" to facilitate capital lending. This is part of the "Ronda-Serye” listening tour with his running mate, Davao City Mayor Rodrigo Duterte, in which the tandem personally listens to ordinary Filipinos and presents their solutions to the country's problems. Media man Atong Remogat, dubbed Duterte's dead ringer in Pangasinan, introduced Philippine Councilors League President Maybelyn dela Cruz-Fernandez, who then spoke lauding the help Senator Cayetano had rendered to the PCL. Dagupan City Councilor Maybelyn is an actress
Introducing: The NEWEST Sauerkrock! WE’RE (ALMOST) BACK Due to an exciting but overwhelming demand over the holidays, the 5L Sauerkrock fermentation crock, Humble House’s inaugural product, has unfortunately been out of stock since early March. However, we have been very busy throwing and firing even MORE of your favorite fermentation crocks, and you can expect to see the 5L Sauerkrock fermentation crock back on the Amazon “shelves” in just TWO WEEKS! We also have a BIG surprise in store… A NEW SAUERKROCK FERMENTATION CROCK?! You heard that right – there’s a new Sauerkrock fermentation crock in town. Humble House is expanding our line of fermentation crocks by introducing a smaller 2L version of the beloved “Original” 5L Sauerkrock fermentation crock. Made specifically for those with less space or less demand for that probiotic goodness, it’s sure to be well-received and meet the needs of some of our biggest fans. Hey URBANITES – meet your new favorite fermentation crock, the 2L Sauerkrock “City”! THE 2L SAUERKROCK “CITY” For those whose cupboard real estate is lacking, or for whom the occasional serving of sauerkraut, kimchi or pickles for one or two people is sufficient, meet the 2L Sauerkrock “City”, which was designed just for you. Over 50% smaller than the Sauerkrock “Original”, the Sauerkrock “City” is perfect for your 400 square foot New York City apartment, the San Francisco flat you share with four roommates or even the Tiny House you have parked in your best friend’s Austin backyard. It is also the perfect fermentation crock for those “experimental” batches you want to test out but aren’t ready to commit a full batch to. The 2L Sauerkrock “City” has a 0.5 gallon capacity, the equivalent of four standard mason jars and, regardless of your situation, is sure to become your new favorite fermentation crock. BUT LET’S NOT FORGET WHERE IT ALL STARTED… THE 5L SAUERKROCK “ORIGINAL” Tested and trusted (and boasting 4.8 out of 5 stars on Amazon!) 
the 5L Sauerkrock “Original” is the standard in traditional home-fermenting and a kitchen staple. Capable of churning out a gallon of sauerkraut, kimchi or pickles at a time, the 5L Sauerkrock “Original” is best suited for small families or couples who just can’t get enough of that fermented, probiotic goodness. At just over a cubic foot in size, the 5L Sauerkrock “Original” takes up roughly the same amount of cabinet space as other kitchen essentials like a stand mixer or set of mixing bowls and is the best choice for those with average-size and larger kitchens. WHICH SAUERKROCK TO CHOOSE? Whatever your living situation, there is a Sauerkrock fermentation crock that is PERFECT for you. Check out what our customers and biggest fans are saying about the Sauerkrock fermentation crock and be on the lookout in the coming weeks for your favorite fermentation crocks to be back in stock and available for purchase on Amazon. If you have any questions about which Sauerkrock fermentation crock is right for you, please do not hesitate to reach out to one of our helpful Humble House team members at [email protected]. We can’t wait to hear from you!
Hydroxyl radical (•OH) is one of the reactive oxygen radicals. It has an extremely strong oxidizing ability, with an oxidation-reduction potential of 2.80 V, second only to that of the fluorine atom, and it is capable of reacting with most inorganic or organic substances at a diffusion-controlled rate, with reaction rate constants generally greater than 10⁸ L mol⁻¹ s⁻¹. In the field of environmental science, hydroxyl radicals are used for the degradation of organic pollutants and are the most important reactive intermediates in the advanced oxidation processes (AOPs) used for sewage treatment. There are many methods for generating hydroxyl radicals, which can be broadly classified into the chemical catalysis method, the ozone/hydrogen peroxide photolysis method, the photocatalysis method, the electrocatalysis method, the ray method, and so on. The chemical catalysis method generally uses the Fenton reaction, in which hydroxyl radicals are generated by the iron-ion-catalyzed decomposition of hydrogen peroxide. Although this method is simple, easy and cheap, a large amount of iron-containing sludge is generated when it is applied at larger scale, making subsequent processing inconvenient. In the photolysis method, ozone and hydrogen peroxide generate hydroxyl radicals under ultraviolet irradiation; however, hydroxyl radical precursors such as ozone and hydrogen peroxide must be added, and there are many side reactions. Generating hydroxyl radicals by photocatalysis, with semiconductor titanium dioxide particles and the like as the catalyst, requires keeping the catalyst in a suspended state and then separating the photocatalyst afterwards, which makes continuous operation difficult; moreover, dissolved oxygen has a strong influence on the generation of hydroxyl radicals by titanium dioxide photocatalysis. 
The electrocatalysis method places high demands on the dissolved oxygen in the water and on the catalytic components, and suffers from low current efficiency. The ray method involves high cost and greater harm to the human body. Therefore, the commonly used methods for generating hydroxyl radicals either involve many side reactions, have poor operability and low efficiency, or pose great harm to the environment or the human body. Accordingly, the existing methods for generating hydroxyl radicals each have their own problems, making them difficult to popularize and apply.
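To put such diffusion-controlled rate constants in perspective, the short Python sketch below estimates the pseudo-first-order half-life of a pollutant attacked by hydroxyl radicals. The rate constant and the steady-state •OH concentration used here are illustrative assumptions typical of the AOP literature, not values taken from this text.

```python
import math

# Second-order rate constant for the •OH + pollutant reaction (L mol^-1 s^-1).
# Diffusion-controlled reactions are typically 1e8-1e10; we assume 1e9 here.
k_oh = 1e9

# Assumed steady-state •OH concentration in an AOP reactor (mol/L).
# Values on the order of 1e-12 to 1e-10 M are commonly cited; we take 1e-11.
c_oh = 1e-11

# Pseudo-first-order rate constant: k' = k_OH * [•OH]
k_prime = k_oh * c_oh  # s^-1

# Half-life of the pollutant under these assumed conditions
t_half = math.log(2) / k_prime  # seconds

print(f"k' = {k_prime:.2e} s^-1, half-life = {t_half:.0f} s")
# prints: k' = 1.00e-02 s^-1, half-life = 69 s
```

Under these assumed numbers a pollutant is halved in roughly a minute, which is why maintaining a sufficient steady-state •OH concentration, rather than the intrinsic rate constant, is usually the limiting factor in the methods discussed above.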
Re-inflating the Ball: The Intel Approach to Loan Sales Charles T. Marshall | Jun 20, 2016 The world’s largest microprocessor maker may have a thing or two to teach commercial mortgage lenders about loan sales. Andy Grove, the colorful management guru of Intel in its formative years who died earlier this year, once described the secret to the company’s success as more than just taking the ball and running with it. “At Intel,” he explained, “you take the ball, let the air out, fold it up and put it in your pocket. Then you take another ball and run with it. When you’ve crossed the goal you take the first ball out of your pocket, re-inflate it and score two touchdowns instead of one.” In the secondary market game, loan sales have been largely reserved for failed banks and non-performing loans or as an end-of-the-bench substitute for securitization. Facing formidable macroeconomic and regulatory defenses as the current real estate cycle ages, however, lenders are increasingly learning that loan sales offer additional scoring opportunities. The market dislocation beginning in late 2015 and early 2016 caused, according to Gerard Sansosti and Daniel O’Donnell of the debt placement and loan sale advisory groups at HFF, “an uptick in the sale of performing loans, particularly floating rate, as portfolio lenders prune their loan holdings for exposure issues, scratch and dent sales, and attractive pricing.” Unacceptable concentrations of loans to one sponsor, in limited or challenged geographic areas or asset types, or with similar loan maturities, are among the exposure issues motivating loan sales. Portfolio lenders have not been the only loan sellers running with the ball. 
Confronted with widening spreads and less frequent securitizations, CMBS lenders too are pursuing loan sales as a cost-effective alternative. Private equity lenders, for example, often face both the holding and opportunity cost of having capital tied up in loans subject to increased warehouse interest charges and uncertain exits. Regulatory requirements such as risk retention and increased capital reserves offer additional motivation for banks and CMBS lenders to consider loan sale exits. Whether to sell, hold or securitize a mortgage loan, and similarly whether to originate or purchase, involves the interplay of a comparative matrix of factors. Transaction costs, retained liability, speed and certainty of exit, and pricing (and whether premium or discount to par) may vary significantly between a securitization and a loan sale exit. The roster of CMBS issuers is substantially smaller than that of potential loan purchase investors, and each has different tolerances for cash flow, location and stabilization of properties, and loan terms. Among investors, institutional purchasers may have more rigorous requirements than high yield investors. And depending on business objectives, each player may read the macroeconomic tea leaves differently. To maximize exit options, loan originators must also consider whether and how to tailor loan terms and underwriting to match the demands of these different purchasers. Non-institutional investors must evaluate whether they have the expertise (and stomach) for hands-on asset management, a cost that also impacts pricing. Whatever the game plan or the player, whole loan sellers and purchasers must address numerous documentation and legal issues. Sale Agreement. 
Loan sales have historically been documented like real estate transactions, with agreements requiring earnest money, a defined diligence period, some negotiated level of loan and property representations, physical delivery of loan documents generally via escrow closings with a title company and damages as remedy for termination and breach. While many transactions still employ this documentation approach, CMBS technology has refined the loan sale process. Industry-accepted representations and warranties and remedies, standard mortgage file documentation, advanced loan data capture and presentation, and custodial possession of documents have streamlined transactions. The loan sale agreement may mirror the terms of a mortgage loan purchase agreement (MLPA) executed in connection with a securitization, but increasingly transactions are documented as simply as a trade confirmation, going directly to an assignment and assumption agreement executed on the date of closing. Additional terms can mimic the scope of an MLPA, including for loans intended for securitization, post-closing securitization cooperation covenants. Identity of Loan Seller. Purchasers will insist on a parent company as loan seller to backstop loan representations and warranties. If loans have been assigned to affiliates, for example in connection with a loan warehouse financing, the loans will need to be reassigned as of the closing date. Warehouse financing, however, creates additional closing issues. Representations and Warranties. Securitization reps and warranties represent the standard for loan and property diligence and are obviously the expectation for CMBS loans, so dialing back could impact pricing. More conservatively, representations can be subject to post-closing adjustment to pick up any variation in those actually required for the securitization. 
In CMBS securitizations, SEC regulations require a certification of loan information by, and imposition of personal liability on, the chief executive officer of the issuer, which is often in turn required from each loan seller. In loan sale agreements, on the other hand, reps and warranties are made by the loan seller only, with no officer-level certification or liability. Portfolio, bridge and seasoned loan sales typically do not include such rigorous reps and warranties. Such loan sellers take the position that no rep or warranty should be provided to the extent a loan or property feature can be determined by due diligence review and that representations should be limited to seller’s knowledge or lack of receipt of written notice of an event and subject to all matters contained in the diligence file. The promulgated FDIC loan sale agreement, for example, conveys loans “as-is” with no reps and warranties, but provides for seller repurchase of a loan for certain specified material issues. Robust diligence information is the best answer for limited reps and warranties and enhances loan pricing. Prudent lenders are scrubbing loans as they close in order to ensure compliance with the most stringent reps and warranties and to anticipate either loan sale or securitization exit so that curative items such as missing documents or documentation errors can be corrected and loan and property level information and disclosures can be compiled in advance. Survival Period. The survival period of reps and warranties is negotiable. For a non-securitized pool of loans with pricing discount, a survival period of three to six months would be customary. Sales of loans intended for securitization often have a survival period of 18–24 months. 
As a reference point, absent parties’ contractual agreement otherwise, New York law, the typical governing law for loan sale transactions, applies a six-year statute of limitation on alleged breaches of reps and warranties that accrue from the date made (i.e., the date of loan sale closing). Purchasers may start with a “life-of-loan” ask, but that’s not market. Property-related conditions that existed at the time of sale can generally be determined within a short period of time, and causation for loan loss lessens with passage of time anyway. Loan document defects can often be determined by pre-closing diligence or during post-closing servicing (though of course the ultimate test for document defects is when remedies are exercised). Leverage in times of volatility may favor CMBS loan purchasers who insist on aligning reps and warranty survivability with their securitization exposure. Loan Document Schedule. Completeness of the loan document schedule in a loan sale agreement is important and should reference any sourcing/servicing interest strips or rights that may have been previously assigned by separate agreements and survive the sale. If documents are held by a custodian, the custodial receipt and loan document schedule should conform. Generally the transfer of a “mortgage file,” unless all such items are held by a custodian, is contemplated promptly after the closing. It is typical for CMBS loans to use an industry-standard mortgage file definition; for other loans, either a list of loan-specific mortgage file contents or a generic definition of items in the seller’s possession or control is used. Remedies. The typical CMBS loan sale remedial formula is a cure period and, if not cured, either an agreed-value settlement or a repurchase obligation. A cure period and cause of action for damages (actual damages, not speculative, punitive, or contingent) in lieu of repurchase obligations is also seen. 
Market volatility may dictate that the loan sale be documented more conservatively and aligned with securitization risks: the purchase price could be subject to reduction based on a recalculation of value at the purchaser’s ultimate securitization to take into account the pricing exposure an actual loan seller faces in the securitization, such as price concessions demanded by a B-piece buyer, allocable securitization transaction expenses and any subordination level required by rating agencies generally or for the specific loan. The repurchase remedy could apply also if, through no fault of the purchaser, the loan is kicked out of the purchaser’s intended securitization or if the securitization has not occurred by a specified date. The parties might include a negotiated right for the seller to substitute a separate loan if the purchased loan is deemed defective or kicked out of the securitization. Closing. Loan sale closings are simplest if loan documents are held by a custodian, in which case only original assignment documents need to be physically delivered at closing. This is particularly true when the seller, purchaser, or both have obtained warehouse/purchase financing secured by pledged loans. Rather than physical review and delivery of original documents at closing, custodians provide trust receipts and exception reports confirming the documents held, status of whether original or copy/recorded or unrecorded, and any exceptions noted in their review for approval or curative repair. The loan document transfer can be achieved by delivery of transfer documents to the custodian and simple bailee or escrow letters by which the custodian holds the documents for each party pending the closing. 
If one or more transaction parties have warehouse/purchase financing, a multiparty escrow agreement among loan facility lender(s), custodian(s), seller, purchaser, and escrow agent(s) will be needed to accommodate the assignment and purchase price payment and/or loan facility payoff, which will require lead time due to multiparty signoff. Assignment Documents. It is important to agree upon the form of assignment documents early in the process, particularly if multiple states and mortgages are involved (which is magnified in single-family rental loans) and/or one or more parties has a loan purchase facility. With loan facility parties as seller and/or purchaser, tiers of assignments to SPE affiliates may be required. This is one of the most time-consuming parts of the loan sale process, so planning and lead time are required. Whether performing loan sales are just a late-game Hail Mary pass or a permanent addition to a sophisticated secondary market playbook remains to be seen. Several factors may provide the answer: the increasing number of yield-chasing debt funds, with nimble, opportunistic business models, filling the capital gap in commercial real estate financing; the cresting wave of maturing loans originated at the peak of the prior real estate cycle; and streamlining of loan sale transactions with broadly accepted information gathering and legal process technologies developed for CMBS transactions. Just as single-family rental loans emerged as a new asset class in response to the Great Recession, performing loan sales may emerge from the current market volatility and regulation as a vital business option available throughout the game. Re-inflating the ball and scoring with performing loan sales has never been easier, or more timely.
Shapefiles ========== Reading ------- .. automodule:: landlab.io.shapefile.read_shapefile :members: :undoc-members: :show-inheritance:
Blog While my reflection on the first half of my term as moderator is in the works, I wanted first to offer up some thoughts that I am SURE will get some comments and, I hope, some good discussion: membership decline. As you know, the most recent membership numbers were just released and, for various arguable reasons, the PC(USA) declined in membership by 69,381 members. As we see these numbers announced each year, the theorizing and punditry around the decline is nothing new, and I suspect it will continue as long as there are people with opinions who care about the church. The prevailing reasons that are usually sent my way are basically three: We are in decline because we are too liberal, having stopped being a people of The Book and caving to cultural trends, especially around homosexuality. We are in decline because we are far too conservative, no longer live the love that Christ calls us to, and the world no longer sees us as a place of welcome. Our 1960s membership trends were but a blip in the history of churches in the United States . . . so numbers need to be taken in context. Now it is obviously easy to assign blame for our decline in membership, often falling into the far too simple rhetoric that there is indeed only ONE reason for our decline. Regardless of how you value the use of numbers as a measure of worth, I think that we are more nuanced than that, and that if we really think about it there are probably multiple reasons for our decline. Now as a new church development pastor, I have never been solely driven by numbers. Not surprisingly, like most things, I find God speaking to me in the gray, somewhere between only finding worth in numbers and thinking that numbers are silly and irrelevant. I think numbers are an important measurement that can give us some useful indications of trends and developments, but we can also get into trouble when our ONLY drive is numerical. 
In the end, I want us to impact lives that in turn impact the world, and I believe that if we are faithful to God's calling upon our lives, we will grow to the size that God hopes us to be. Still, our decline may give us some indications of our life together, and I am not immune from offering some thoughts on the issue. Now I have written on this before (Number 1 reason why PC(USA) churches are dying a slow, painful, sad, drawn-out death and other happy thoughts), but let me add something more as I have continued to listen to and reflect on what I am hearing as Moderator. I believe that one of the main factors in our failure to grow is that we still operate with an institutional worldview that is not built for the fluid, adaptive and complex nature of the world today. Theological and ideological perspectives aside, we – at all levels of our church life – still operate with a 1960s worldview that simply does not speak to the world today. We spoke well to the United States culture during a long stretch of our denominational life, but we have forgotten how to speak to the world in a way that offers a transformational experience of the Gospel life in a Presbyterian context. I grieve this because I have been so fed and formed by my Presbyterian heritage and deep theological history that I am compelled to find ways to meaningfully pass this rich tradition on to my kids. But sadly, as I look around the church, those under 35 are painfully absent. And while many of us would like to hold onto our youthful spirits for as long as we can, 60 is not the new 50 and 40 is not young. We who hold power and influence in the church must stop pretending that we are the future. We are not. In fact, as those with power and influence in the church, if we do not joyfully embrace our changing roles in our institutional life, we will die with no reason to expect resurrection. 
Simply put, we must ask ourselves hard questions and learn to adapt if we are to impact the world as Presbyterians for any length of time into the future. To get things started, here are some of the questions I think we need to address: Is Jesus enough? What ARE our essentials and non-negotiables as we gather as a denominational gathering of the Body of Christ? Do we live the Trinity? Do we fully understand the nature of living in community and living out our understanding of the Triune God? Are we committed to connectionalism, and if so, how committed are we to creating healthy Presbyteries? Because unless we have Presbyteries that are vibrant and at the heart of our lives together, we are no longer Presbyterian. Can we handle an abundance of manifestations of the Presbyterian family where congregations look, feel and operate in drastically different ways? Can we fathom the idea of the death of some parts of our structural and institutional life together, trusting that where resurrection is to happen it will happen? Are those who hold power and authority willing to create space for those who are not part of our life but will best be able to help us navigate our way into their world? Can we find a way for an institution to live the peace of Christ in a world of chaos? Will we be able to respond well even if the answer is, "We do not have the capacity to adapt; the time of our current way of being is done"? Can we truly embrace the unknown, yet joyfully strive to seek God's intentions? These are obviously not all the questions that we need to ask of ourselves, and as hard as it may be to believe, I would not want to place values on the answers to these questions. But if we do not venture into some deeper questions about our future, we will never fully be able to navigate our way into who God hopes us to become as a Presbyterian people. So . . . there you have it, what other questions do we need to ask of ourselves? What are more reasons for our decline? 
Does it even matter? What say ye Presby bloggosphere? 37 Comments Jesus Christ demands we measure Medicare overhead as a % of dollars and not per patient (see article below)? Jesus Christ demands we ignore Medicare fraud, when things like preventing fraud are the reasons for ‘overhead’? Jesus Christ teaches us that Medicaid and the VA are efficient? Verse, please? Jesus Christ demands we ignore the question of whether it is profits in the US that drive medical innovation, and whether the lives we may save now will be more than offset by lives lost in the future due to undeveloped medicines? Jesus demands we ignore the question of other nations’ lower costs being freeloading on the US, who pays for innovation? Jesus Christ demands we ignore the idea of “regulatory capture”, where the platonic ideal of a hypothetical perfect system meets lobbyists for entities to be impacted (or Jesus demands campaign finance ‘reform’ to make sure the wrong people don’t have enough political clout to stop the initiatives Jesus endorses)? Jesus Christ says this is constitutional? You consider that a settled issue? Does Jesus, or was Jesus silent about the 9th and 10th amendments to the US Constitution? http://www.layman.org/news.aspx?article=26247 Presbyterian Church (USA) Stated Clerk Gradye Parsons weighed in on the national health care reform debate Friday by reiterating a resolution from the denomination’s 2008 General Assembly that demeans the quality of American health care, condemns profits earned by private insurance companies, dedicates mission money to lobbying efforts and supports the call for a government-run, single-payer system… …A resolution approved by the 218th General Assembly, which met last summer in San Jose, Calif., outlines the denomination’s support of a single-payer system of health insurance for the country’s uninsured. 
The action also routed $25,000 from the denomination’s mission budget to a political action network called Presbyterian Health, Education and Welfare Association for the purpose of hosting 10 regional, one-day seminars supporting universal health care…. One more for Kevin, Article in Saturday’s Tennessean caught my eye. They talked about some of the churches in Middle TN presbytery, including Jim Kitchens of Second Pres.:http://www.tennessean.com/article/20090801/NEWS06/908010329/1017/NEWS03/Plun “Local churches have grown by doing the basics right, said the Rev. Jim Kitchens, pastor of Second Presbyterian in South Nashville. If they do a good job taking care of youth and children, that can draw in young families. Second Presbyterian has started several initiatives that bring in newcomers. This year, eight young adult volunteers are taking part in the Nashville Epiphany Project. They’ll attend the church, live in a Christian community, and volunteer in programs like the Martha O’Bryan Center. The church also is up front about being a more progressive church, and welcoming to people who are gay and lesbian. It’s part of the Faith and Justice Congregational Network, with ties to Sojourners, a progressive Christian magazine. That more inclusive view helped bring Kim Huguley and her husband to Second Presbyterian. “We were looking for a church with a bigger view of who is in God’s family,” she said. “We’d been looking for a church for several months. The first time we walked in, we knew it was a good match.” Kitchens tries not to worry about the future of his denomination. Less than half his congregation grew up Presbyterian, and most newcomers come from a number of denominational backgrounds. For most people, he said, denominational ties matter less than making sure a local church is a good fit.” A good example of a Reciprocating Church. Best, john Now we are talking. Excellent comment, Kevin. 
There is a lot to go into reasons why we are not affiliating with volunteer organizations. Most of these organizations had their big day post WW2 as our greatest generation was rebuilding society. Times have changed. Most people I know my age (48 and younger) are so strung out with suburbia, two jobs, who all knows. The last thing my folks see as relevant is the presbytery connection (this goes for all three congregations I have pastored), even as I have been quite involved and always urging and trying to find ways to connect. It isn’t that the presbytery is doing anything wrong, nor the people in the congregations. They are just on different paths. You are exactly right about getting involved as a citizen and how the church can facilitate that. For me it is about encouraging involvement, and giving people permission (and the property) to be creative and do stuff. Frankly, all the worry over this is moot. We are facing major, major changes caused by energy and environment. In the coming years (less than a decade), we will desperately need social networks for basic needs. The church will be re-created again as it has through history. Hi Bruce, I appreciate your views and especially the ideas around creating conversational space and a consideration of the death of some parts of our structural and institutional life together. Yes. Resurrection can occur. But it is not likely to occur without our being acted on by an outside force. It occurs to me that God (outside force) often intervenes with honest, authentic, reliable data. In my view, it is precisely this lack of data that keeps any remedy to the membership decline shrouded. I offer what I hope is a shroud-lifting comment that may be resurrection material. You decide. Follow. News of the steepest membership loss in twenty-five years comes as no surprise to Newark Presbytery in New Jersey. We address the evidence of these statistics every day. 
I am proud of our growing effort to build collaborative energy to increase the capacity of every one of our congregations to be viable, healthy, and effective. Each of our churches is a delivery station of the Good News. Moving forward is a challenge. How we respond to our membership decline is important. I continue to listen and engage in conversation with our denominational upstream in Louisville about our decline. The PC(USA) messages have included: Try harder at what you have been doing; Try something new; Invite neighbors to church; Blend your worship; Become multicultural; Support General Assembly Mission directly; Apply for grants; and in the meantime, Louisville will downsize the denominational structure (again). We still decline. The reason that these directives often fail to alter our experience of institutional trauma or the congregational outcomes from decades of decline is that Louisville attributes the decline, at least in part, to death, people being removed from the rolls, and to a “gradual” drifting away from our congregations. Gradual drifting? What’s gradual about twenty-five years of consistent decline? Even the Pew Forum, whose research was referenced by denominational execs, seems more like a distraction than a reason, as it identifies why people change religious affiliation rather than addressing the real reasons people do not affiliate at all. North Americans have consistently withdrawn their volunteer association affiliation for more than thirty years. The questions we ask define our assumptions. In this case, the PC(USA) and Pew ask the question: “How do our neighbors choose between Protestant and Roman Catholic affiliation?” suggesting that the focus of our concern is religious affiliation. The critical question is not Protestant v Catholic, or Christian v Muslim v Jewish, etc.
The critical, core question we must consider together is: “Why do people fail to affiliate with volunteer associations at all, church or otherwise?” Almost every volunteer association in America has been in decline for decades. From the Boy Scouts, Girl Scouts, AMA, PTA, Elks, Lions, etc., to the political, civic, religious, and professional groups, membership is down. There is a direct correlation between the membership decline of volunteer associations in North America and the associations’ lack of community engagement. Even more consequential, the corresponding benefit of these association networks in encouraging reciprocal behaviors (doing things for each other) has diminished. It has been documented that Americans have steadily reduced their investment in “outside the family” activities. Our North American cultural milieu has normalized self-engagement and isolation. Our increasingly time-shifted ways to connect have corresponded to the rise of social media sites and technologies. We no longer derive value from connecting in person. In short, the church has experienced a reduction in its membership. However, the reduction in membership corresponds to the church’s prior failure to return sufficient value to the community outside itself, which could have sustained the community gathering “at the church.” This destructive cycle has been perpetuated over the decades. As Presbyterians, we have focused on ourselves, mistakenly believing that our “decline” was a Presbyterian one. We seemed to think it was our problem. How many curriculums, conferences, coaching, and action plans directed us to do something within ourselves and our space without realizing it was our almost narcissistic framing of the problem and our solution that made the situation worse? As a denomination, we missed opportunities to lead a revival of the re-investment of social capital and volunteerism, and instead, with little reflection, followed the status quo.
The good news is that our decline can be reversed by swift and decisive realignment of our congregational resources to tangibly benefit the communities we are located in. Our disconnect from the community reduced the community’s connection to us. Instead of merely asking our congregants to bring a friend to church (a fine but insufficient remedy), we must ask our congregants to re-engage in their communities. We need to invite our congregants back into their communities. The Church is peculiarly well-suited for this transformational mandate of re-engaging communities since God has sent the Church into the world, not to be served, but to serve. We can lead our congregations as servants, empowering them to become a Reciprocating Church. A Reciprocating Church is a church that reinvests its experience of God’s love into the world, so that its community knows God loves it, too. A Reciprocating Church will ensure congruence between its congregation and building capacities and, by God’s grace, be a healthy and effective demonstration of the Christian gospel in the Church and the world. The opportunities to be a Reciprocating Church are huge. Let’s explore them, transforming together. Kevin ……………………………………………………. Dr. Kevin Yoho, General Presbyter, Newark Presbytery, PC(USA) [email protected] http://www.newarkpresbytery.org http://www.kevinyoho.com Twitter: @kevinyoho Susan, Birthrate? That is assuming we don’t evangelize and only grow by current members having babies—who then must stay in the PCUSA, a rather big assumption in today’s world where there isn’t much “brand loyalty” out there. The sort of thinking reflected in that Presbyterians Today bit is part of why we are where we are. Think about this—in 1969 the U.S. population was right at 200 million with reported church attendance of 46%. That would mean 92 million in church during an average week. In 2009 the U.S.
population is 306 million and even with the drop in church attendance to 40% by some studies, that means 122 million in church during an average week. This means, while we were undergoing this long-term loss of members and shrinking in size, the numbers of people in church increased by 30 million people. Let that soak in for a bit and you get a better picture of how bad things are going for us. We shrink while there is great growth in numbers going on. God’s blessings to you, Matt Ferguson Hillsboro, IL A couple of years ago an article in Presbyterians Today claimed that sociologically speaking 70% of our denominational decline since the 1960s was due to decreased birthrates (see July/Aug 2007 here: http://www.pcusa.org/research/gofigure/index.htm). I see this in the small town in which I serve where many friends in conservative / non-denom churches have three or four children. I showed the article to our Session, but only one Elder and I were of “child-bearing age” and neither of us was interested. Good job Bruce for increasing our odds — I’m just hoping to replace myself. I heard Stacey Johnson (I hope I’m remembering this right Stacey) suggest last year how amazing it would be for the powers-that-be in the institution to simply hand over the resources, investments, trusts to the young and see what kind of ministry might happen. Who knows what G-d might do? I am late to this conversation, but thought I would offer a few thoughts anyway. In the history of Christendom, people went to church (or were church members) because they were forced to by either political or social pressure. Now churches find that they need to market their wares. I appreciate your thoughtful questions but I wonder if the reasons aren’t due to our ineptitude but to social factors beyond our control? There is an assumption that church is good for people whether they think so or not. We just need to show these boneheads that we can overcome their superficial objections and meet their needs.
It could be that folks simply aren’t interested and are living perfectly happy and fulfilled lives without us. Hey Bruce, I’ll be thinking on your questions for some time. It looks as though books like “The Starfish and the Spider” should be mainstay guides for church leaders at the moment. Tod Bolsinger spent quite a lot of time reflecting on Presbyterian leadership in this starfish paradigm (http://bolsinger.blogs.com/weblog/starfish-and-the-spider/). It’s worth the time! I’d like to speak to Martha’s comment re: why we exist. For our children? For ourselves? For the glory of God (whatever that means? Because my hunch is that we differ on what this means. Back to the liberal/conservative debate.) If we take seriously The Great Commission, I believe we are supposed to exist to make disciples of all nations. In other words, we are to exist for those who do not yet follow Jesus. The problem is that we tend to exist for ourselves – to comfort our own harried souls, to serve our own needs and personal preferences, to have a place to be married/buried. The basic conflict in the congregation I serve seems to be this one: some believe the church exists for them and others believe the church exists for those “out there.” Good post, Bruce, as always. I know whenever these numbers come out, people tend to blame the loss on whatever it is that we don’t like about the church and then espouse as the answer whatever we think the church should be. A few observations: 1. Certainly you have to see some of this loss as due to issues related to how we are dealing with the homosexuality issue. A good portion of the loss was from congregations leaving and moving to more theologically conservative congregations. I don’t see how you can debate that and I suspect these kinds of losses will only continue. 2. If someone in business sees a company losing market share, the first thing they do is to look at other companies that are gaining market share.
I suspect the main reason we’re unwilling to do that is because most of the churches/denominations that are growing are theologically conservative (even if they are progressive and innovative in their programs or worship). Southern Baptists are probably the best example. But look also at Calvary Chapel, Willow Creek, Saddleback, and the wonderfully Calvinistic Mars Hill in Seattle. All rapidly growing while we shrink. All theologically classically orthodox. Why not see what makes them successful and then see how that can apply to us? 3. Bruce, I know there is much to admire in your list and it mirrors a lot of what I hear in the “emergent church” circles. But to be honest, most of the people I hear talking about this haven’t really shown it in action (with the exception of Erwin McManus at Mosaic). It’s often presented by people who have little real experience in congregations that have actually grown to significant numbers and stayed there. I see even your congregation has shrunk to the point where there are financial difficulties. These are all great theories, I just don’t see them working anywhere. There are some good general “my leaving the church” stories on exchristian.net if you skip over the atheist advocacy. “Yet, look at the impact and change they were able to bring about.” You mean by becoming the State Church after an emperor thought Jesus took his side in one of Rome’s civil wars because he had a “vision”, with said emperor then ordering the church around and having quite a say in the development of Christianity? 🙂 (Or more likely, a would-be emperor thinking the early church was a big, untapped source of political support for his becoming a military dictator over a big chunk of the world’s population and the church compromising itself to gain political influence over a new military dictator.) 
I just don’t think Jesus said one word about, for example, whether a national healthcare system will save money thru efficiency or thru arbitrary rationing, or whether it will trade access by having money for access by having political connections. Not to start a debate on that specific topic, but I’m just saying no matter how convinced you are that Jesus supports your end goal, that doesn’t mean that Jesus thinks your means will work or agrees with your method. All government activities are coercive – “Jesus would support this goal” doesn’t necessarily mean “Jesus would support having armed agents of The State make people act in accordance with this goal thru explicit and implicit threats of violence”. To bring this back to the topic, with fewer people being straight line political party supporters, maybe the key to church recovery ISN’T becoming indistinguishable from a political party with a laundry list of political actions to support. Do you want to become an organization that circles the wagons around something like Clinton’s perjury about adultery or W’s questionable use of WMD intelligence (or outright lies, take your pick) to get us into a war just to maintain its power and influence? That is the mindset that leads to a church circling the wagons around pedophile priests (that’s a reference to what’s been in the news, not a vague accusation that isn’t already in the public sphere). It’s not even a matter of agreement or disagreement. A local Big Baptist church pastor writes for the local paper as a pastor every now and then. His last two articles have been on lowering taxes and teaching creationism in schools. I agreed with the politics of the first and not of the second, but had the same reaction to BOTH when I saw them. My reaction was “you’re a minister, shut up about that”. 
Mark, Many young folks outside the church may think we are too conservative but, if you care to take a look, many of those young folks are searching for something solid and are finding their way to conservative churches. Just take a look at Tim Keller in PCA and what he is spreading through his work (recent Christianity Today feature on him) or Mark Driscoll and the whole Acts 29 group, and that the Roman Catholic Church returned to growth under Pope JP 2 and his strong movement back to a more conservative way, and that the Southern Baptist Church has, for the most part, been one of the few growing denominations, and . . . We have a growing number of younger folks here (19 years ago we were 80% over 65; now we likely have the reverse / are far, far younger). And yes, I do think we are politically liberal and that is a problem because the majority in the pew and on Session and even in the pulpit disagree with those pronouncements and it causes more division. JS Howard, You have a good viewpoint to consider. I heard Larry Osborne (North Coast Church) make the observation that the early church did not have political rallies, etc. and were living in a society worse than ours. Yet, look at the impact and change they were able to bring about. Maybe the link will take you there. If so, Osborne’s comments are on track about 3 minutes in http://www.northcoastchurch.com/fileadmin/audios/Simple/cs04/cs04player.html If that doesn’t work, his sermon from February 2 – 3, track 4, titled Ministry Made Simple. Bruce, I should say I like your first question on essentials. After all these years (I have written and spoken on this topic for nearly 20 years now—shows you how much I impact) “#1. in essentials, unity; #2. in non-essentials, liberty; #3. in all things, charity” but the key to part 2 in that observation is doing #1.
Until we do #1, we will never be able to allow for #2 because we will see everything as part of the debate about getting to #1 and we will then debate and argue over way too many things to make sure something is or isn’t included in #1 or doesn’t impact things when we finally try to define #1. Define the essentials, keep them few, and then we can more readily move on to liberty in non-essentials. God’s blessings to you all, Matt Ferguson, Hillsboro, IL I have a question for you. Pretend God Himself descends from Heaven and tells you that, no matter what, no politician or arm of government will ever listen to anything the church has to say, so political activity on the part of the church (for “family values” or for “social justice”) is pointless. Where would you have the church spend the time, money, and energy that would be freed up from political activity, pronouncements on proposed laws, and other attempts to gain power to make others do what you believe Jesus wants them to do? Matt, While you may find the PCUSA liberal, I think that if you ask the unchurched – particularly the young unchurched – you’ll find that in many ways the PCUSA is considered conservative. The political positions may be liberal, but compared to no faith at all, Robert’s Rules and sitting in rows listening to organ music with a bunch of gray-haired folks in suits is very conservative. That’s not all the PCUSA is, but that’s what it looks like to those who haven’t experienced it. Thanks Bruce for your words. It was very timely for me to read. I serve a small church that is declining and recently they were celebrating that the reason for existing as a church is “for our children.” These children, 20-somethings, are grown, have moved out of town, and come home maybe once or twice a year. As a pastor, it made my blood boil. This church (and I suspect it isn’t a unique situation) has forgotten why we have a church. I think you are correct that it has nothing to do with being too liberal or conservative.
As someone who leans one way or the other depending on the issue, I know that I have a lot of company in the church. Thank you all for your wonderfully thoughtful responses and interactions. I know it seems like I say that all the time, but hey, truth hurts 😉 I actually have not felt the need to respond to folks on this one because of the depth at which folks have obviously been thinking about this. Just a few thoughts . . . GEOFF – Thanks for your notes as always. First, as you know I come out of campus ministry and have always been an advocate of campus ministry. At the same time, I think there are some challenges that face ministries that have for so long been so tied into denominational support. I REALLY think that we all need to stop giving so much worth to the institution and stop seeing our only future to be lived with their support, especially fiscally. I think that if there is passionate ministry happening, we need to be able to find those who will support it. Yes, the denomination will need to be supportive as they can, but the realities of the future, like NCD’s and other like ministries, is that the very assumption that a national body will or should have “control” over the entities it supports is changing. Campus ministry is the most scrappy and I think most prepared for what is happening in the world as a whole and some need to embrace the opportunity to see new ways of building a ministry presence on our campuses. TALITHA – YES, great questions. I guess I don’t really care about preserving THIS particular institution but finding ways that core values of being Presbyterian may be preserved. Now if our connectional nature is over, that is one thing, but if we believe that the nature of our governance is important, there is a need for some kind of institutional structure. STUSHIE – I never said post-modernism is THE ANSWER, but if we can’t wrap our heads and hearts around the very nature of it we will never know what we stand against and what we embrace.
I know you may think so, but I am NOT an “everything goes” kind of guy. Now we may disagree about how we interpret our understandings of Jesus and the Trinity in our lives, but I can firmly say that in the midst of all of this I DO have absolute faith in Christ. Post-modernism is not the answer, Bruce. Absolute faith in Christ is. It’s as simple as that, but too many people don’t want to read or hear the truth. We have strayed from Christ and syncretized our beliefs to fit in with a worldview. We have forgotten that Christ is Sovereign of the World. Our loyalty needs to be re-aligned to Him. Hey Bruce- Always a thoughtful, reflective post and I thank you for raising these important questions. Without belaboring what’s been said above, mainly because I agree with the majority of the comments as to why we are declining (and it has nothing to do with being too liberal/conservative…), I would simply offer that we need to have some serious conversation and come to some serious conclusions as to what is essential and what is non-essential. I think this conversation particularly needs to take place on the local, congregational level. The young people I am around and in relationship with are seeking inspiration. They want to make a difference. They want to know if the church is a competent and credible vehicle to pour their resources into to make that difference. Because we cannot articulate with any kind of clarity the Gospel or the essentials of our faith, we are unable to present a compelling and inspiring vision to anyone, much less young people who still believe (thankfully!) that the world can change. It would be a truly courageous step if you, as the moderator, would press for this conversation around essentials. Surely we can come to some agreement on some basic tenets…like the Nicene Creed/Apostles’ Creed for instance?
From there we could engage congregations in a deep study of their identity in light of these “essential tenets,” which in turn could lead to a fruitful clarification/articulation of God’s mission for them. Peace, Doug Thanks Bruce for this and your earlier blog. At a small membership church meeting yesterday I stated that the first place we need to start, in terms of understanding and moving through these times of our collective lives, is that God has a future in mind for our church. Unless we grasp the depth of that, we’re only reacting to the chaos around us, instead of engaging (an overused word, admittedly) it. When tectonic plates collide, new worlds are formed. Peace Bruce, If those are the only 3 theories that you’re hearing, then you have to go incognito and visit some churches without a schedule or an invitation. I’m afraid that you may be getting insulated within the institution. Those who are speaking of the 4th theory – that we’re shrinking because we are failing to include/inspire the new generations – are right on the money. And more than that …. one thing that I learned at the Princeton seminar on Emerging Adulthood (18-29) was that we need to give young people real responsibility AND let them fail a bit while backstopping them. We have become risk-averse and afraid to try new things – or we try them with a top-down huge investment in a “program”. This is why things like Beau Weston’s “Re-building the Presbyterian Establishment” paper give me fits. They are trying to solve the opposite problem to the one that we have. 1 and 2 (too conservative and too liberal) aren’t mutually exclusive. They are both two sides of “Jesus died to give me political power – I know what Jesus wants and I’m going to use The State to make you obey him”. I left because of the combination of a too liberal denomination and too conservative churches. Some of us don’t like having political positions shoved down our throats by either side. And another question.
How much does it matter that we preserve our institution? Can we be a movement? (instead of, or at least alongside, our institutionality) If we “die” to the outer observer, have we planted seeds that will spring up in new ways? Bob Coote brought to my attention the fact that the mustard plant (to which Jesus likens the kingdom; now, the kingdom =/= pc(USA) but we can use the same metaphor) is an ANNUAL plant, not a permanent tree. It dies and re-grows every season. Can we be brave enough to die and re-grow every generation? Along the same line I know a pastor who is thinking she’s ready for a change, for a new job… she’s thinking of sending out a PIF. Knowing her awesome church I did slip her something to think about — maybe she’ll get a new job, a new position, a new activity description, at the same place! Could it happen on a wider scale? Grace and Peace, Bruce. Thanks for your timely blog on the church and membership. If the “Generation Theory” writers are correct we have entered a new generation cycle. This is the time for creating new organizational structures to serve the new generation cycle. The organizations created in the 1940s and 1950s have served well. Now it is time to think about new ways of being church. Personally, I believe that the decline is a positive sign as well as a sign of this period of transition. We might begin with asking how the church can better reflect the radical teachings of the prophets of Israel, Jesus and Paul. We may need to lose our life in order to let God show us our new life. I have further thoughts on my blog: http://www.saltandlightpages.com. Bruce, thank you for your very timely blog on numbers. If the “Generation Theory” folks are accurate, we are in fact in a period of time when new institutional structures are needed to meet the life of the new generation cycle. The ones created in the 1950s and 60s served well the needs of the generations at the time.
Perhaps we need to look at how the decline is an opportunity to follow God’s spirit into a new generation cycle that creates a church structure that reflects in a new way the radical teachings of the prophets, Jesus and Paul. A faithful time if we choose to let God make us faithful. It may be time to lose our life in order to find it. I have a few thoughts about a different way of being church on my blog: www.saltandlightpages.com Good post, Bruce. I think underlying our malaise, as well as that of other mainline churches in North America, is the assumption that we have or even know what “church” is. I don’t mean the shape of the church, traditional, emerging/emergent, etc. But I mean church in a really basic sense. One way I try to get at that is to ask folks to read Matthew 18:15-20 and then ask them if they can honestly, with a straight face, and no fingers crossed behind their backs, say they are in a church or know of a church that could and would stand for or even see a need for that level of accountability. I have not been in or known of any in my 30 years of ministry in the PCUSA. I believe the acids of individualism and choice and a consumeristic mindset make it nearly impossible for folks to even imagine the kind of commitment and accountability Jesus envisions for his people. And without that, everything else is pretty much window dressing! Peace, Lee Bruce, not sure where to post, here or FB. Thank you for a great reflection! In serving small rural churches I would say that the keyword is Institution. Most of our battles are about controlling or changing the constitution, the safeguard of the Institution. As you clearly point out, death is the future and the willingness to engage/travel with the Spirit through and into resurrection is our challenge. I believe the PCUSA focuses more on engaging and protecting the institution than on following God. In seminary our church history prof. talked about how the church is the oldest human institution on the planet.
I realize now I should have wept at those words. Will we recognize that Calvin and much of the creeds speak to yesterday’s world and not today’s? Will we be able to stop protecting turf and declare the core essentials? Mine? God: Salvation in Christ Jesus: Led by Spirit. Relationship is more important than knowledge. Rules kill us. Buildings are a close second. Community is not about control but blessing and healing. Worship without unity feels hollow to me. We are still wrapped up in the “if we build it they will come” mentality AND what I hear sometimes from my dear brothers and sisters is “we built so they should be coming! What’s wrong with THEM?!” Yet when I read the NT I see disciples who were going OUT into the culture and meeting people and serving them where they were both physically and spiritually. Sadly I think we are too much like the rich young ruler who could not give up all his possessions to follow Jesus. We in the North American Church have too many things to give up – land, buildings, budgets, programs, traditions, egos. Could we do as Jesus asked his disciples, to go out into all the various towns with no money or provisions? Trusting that someone will feed us and give us a place to sleep? All to perform miraculous healings and share the Good News? Which begs another question: has our church actually produced disciples who would be willing to give everything up to represent Jesus out in the world? Bruce, I believe the short reason for our decline is that we are creating more barriers to Jesus Christ than we are opening doors. I think this is the reason for decline in Christianity throughout the Western world, regardless of denomination. People don’t see Christ in the institutional heritages, occasional political infighting, and well-maintained properties. I don’t think most people see Christ in professional preaching, high-quality music, or well-crafted worship centers, either.
And those who do come to know Christ don’t aspire to committee participation, endless intellectual learning, or “friendly family” churches. We, individually and collectively, must reclaim a passionate, outrageous love affair with Christ. We need to grab hold of the experience of Christ that apocalyptically changed our lives, that motivates us to the point of martyrdom, that is the reason and source of new life. I suspect many of our brothers and sisters don’t even know Christ. I don’t say this to slight anyone, but to lift up a very real illness in the Body. That anyone could participate in a church and not intimately know our Lord is lamentable. We must cling to Christ and express him throughout our lives. If we cannot, we really should just close the doors. Great post, Geoff. Thank you, thank you, thank you. I am a mother of three young adults who grew up in the PCUSA, and I get really angry sometimes because I feel like the church has utterly let them down. I was just in a meeting last night where the youth program at my church was discussed, and I was a little disturbed that we were patting ourselves on the back for having a basically good youth program. But at age 18 we dump these students by the wayside, right at the time many are still struggling to understand what they believe in the face of a culture that tells them religion is irrelevant, and – in the case of strident atheists – dangerous. So absolutely, we as a denomination have to address this big time. (And while I’m venting here, I had one Session member tell me, when I suggested we needed to figure out how to reach 18-25 year olds, that “we don’t do enough to reach people our own (middle) age,” which is true, but it was a total smokescreen, because ultimately what she was suggesting was to do nothing at all in terms of reaching out to anyone.) Hello Bruce, Thanks for raising this important issue for our church.
This is the subject that I wanted to talk with you about in relation to campus ministry because I think it tells a lot about the future of the church. I work at Stanford as a campus minister and, as you may know, our denomination, along with other mainline denominations, began to cut back on funding of campus ministry back in the 60s through the 80s. So in my capacity as the campus minister for United Campus Christian Ministry (UCCM) at Stanford, I represent the Presbyterians, Methodists, UCC, American Baptists, & Disciples of Christ denominations. All of those denominations together are supporting part of my single half-time position. By comparison, the Catholic community at Stanford has 8 staff positions and the Jewish community has 9 staff! What do you think that tells us about the future of the church? One of the examples I use when I talk to congregations is I explain to them that major companies like Apple, Microsoft, HP, Dell, etc. spend millions of dollars in subsidizing the software and hardware purchases of students on campus for two reasons: 1) they know that this is an unparalleled opportunity to reach their prospective customers and 2) they know that if they can get their product into the hands of these prospective customers now, they will be much more likely to use their product after they leave the university. I believe it is the same situation that we have for communicating the value and importance of the church in the lives of the students. If we think that they will return to the church after they leave, we are making the wrong bet. Thanks much, Geoff Great post, for which I have no answers. The question about why we are declining is too complex to sum up neatly. The answers I may have given a year ago while serving a suburban church are vastly different from the answers I find now that I am serving a rural church. I serve a church that I am dragging back from the edge of becoming a statistic. It is in a town that time has forgotten.
The youngest people in our church are 8 and 14, and the next youngest is their mother, 39. Most young adults leave town for college and never come back, or they leave to find jobs. I am knocking myself out to help them let go of past mind-sets and practices that no longer serve them, trying to give them hope that if they are willing to embrace a new vision and mission for themselves in this small, depressed town, they will once again have a vital and transforming ministry. I've got key leaders on board, excited about a new future, but is that enough? They can only afford me for about 2 more months, after which they will, hopefully, continue the work but with a new structure of leadership. The questions you ask, and the answers people give, are good, but they don't work here, where people are leaving town in droves. I feel that the key here is economic development. That, paired with their new energy and vision, just might keep them from becoming another denominational statistic, and might even help them grow. So the questions about postmodernity and young adults and cultural shifts are all contextual. For some of us, the issues are more basic.

Bruce… thanks for your thoughtful set of questions regarding the Presbyterian Church, which apply to Christianity in general and other traditions as well… first off, while some focus/worry about numbers and size, I am way more concerned about what Kind of Church we are than what Size of Church we were, we are, we will become. Second, if we embraced the mysteries of God, life, love and each other more, and let go of thinking we have it all figured out, and let go of dogma and law… then God's love could flow more naturally in and through all of us.
As a gay man and Christian navigating our Church, it is my hope and prayer that we would stop throwing sticks in each others' path and let God's love be the bridge among all of God's children…

Bruce… I struggle with those who refuse to admit times are actually changing, as if that devalues the world of our past. I am constantly searching for ways to validate modern values while straddling the ways of postmodernity. I most appreciate the way you have embraced moving beyond the world of young adults. As an "insider" spokesperson for young adults in the PC(USA) for so long, it is important to recognize that the young, too, grow up. While I will still hold onto my young adult status for a few more years, I have put myself in check quite a few times recently. I am no longer the youngest voice or the only young adult voice. It's a great reminder for those who have been paving the way… we might need to get out of the way to allow even newer ideas to have space… and to meet them with hospitality. Good post, Bruce.

I enjoyed your article and thought you made some excellent points. Mainly, our culture continues to change around us, and as our denomination ages, our overall interest and enthusiasm to stay connected to that change decreases (not in all, but in many). I believe we must seek God's creation of a church within a church. We need to comfort and minister to those who enjoy the status quo, but also remember we have a calling to build up the church for new and future generations. Christ is before us and we need to follow him. All the best and in Christ, Tom

Bruce Reyes-Chow
One of those "consultant" types who spends his time blogging, teaching, speaking and writing. He also happens to be a Presbyterian Teaching Elder, father to three daughters, smug San Franciscan and FANatic of the Oakland Athletics Baseball club. Thanks for reading.
! RUN: %S/test_errors.sh %s %t %f18
! C1568 The procedure-name shall have been declared to be a separate module
! procedure in the containing program unit or an ancestor of that program unit.
module m1
  interface
    module subroutine sub1(arg1)
      integer, intent(inout) :: arg1
    end subroutine
    module integer function fun1()
    end function
  end interface
  type t
  end type
  integer i
end module

submodule(m1) s1
contains
  !ERROR: 'missing1' was not declared a separate module procedure
  module procedure missing1
  end
  !ERROR: 'missing2' was not declared a separate module procedure
  module subroutine missing2
  end
  !ERROR: 't' was not declared a separate module procedure
  module procedure t
  end
  !ERROR: 'i' was not declared a separate module procedure
  module subroutine i
  end
end submodule

module m2
  interface
    module subroutine sub1(arg1)
      integer, intent(inout) :: arg1
    end subroutine
    module integer function fun1()
    end function
  end interface
  type t
  end type
  !ERROR: Declaration of 'i' conflicts with its use as module procedure
  integer i
contains
  !ERROR: 'missing1' was not declared a separate module procedure
  module procedure missing1
  end
  !ERROR: 'missing2' was not declared a separate module procedure
  module subroutine missing2
  end
  !ERROR: 't' is already declared in this scoping unit
  !ERROR: 't' was not declared a separate module procedure
  module procedure t
  end
  !ERROR: 'i' was not declared a separate module procedure
  module subroutine i
  end
end module

! Separate module procedure defined in same module as declared
module m3
  interface
    module subroutine sub
    end subroutine
  end interface
contains
  module procedure sub
  end procedure
end module

! Separate module procedure defined in a submodule
module m4
  interface
    module subroutine a
    end subroutine
    module subroutine b
    end subroutine
  end interface
end module

submodule(m4) s4a
contains
  module procedure a
  end procedure
end submodule

submodule(m4:s4a) s4b
contains
  module procedure b
  end procedure
end
Q: LyX 2.0 formatted references to theorem/lemma/claim environments

In LyX 2.0, I'm trying to use a formatted reference to a claim, but I cannot get the "formatted" part, i.e. I can't get it to typeset "claim" as well as the claim number. I'm using the modules "Theorems (Numbered by type)" and "Theorems (Numbered by type within sections)". I have a paragraph in the "Claim" environment, with a label "claim:1". (I got the "claim:" part in there by editing ../Resources/layouts/theorems-refprefix.inc, but I had the same behavior before I made this change.) I later put in a formatted reference, which generates TeX code as "\claimref{1}". This typesets to "... 1.1 ..." in the output, i.e. just the section and claim counter numbers. What I'd like to see is "... claim 1.1 ...".

If I change these claims to theorem environments AND delete and remake the labels (so "thm:1") and update the references, the formatted reference typesets the word "theorem" for the reference, as desired. Lemma does not seem to work (though I may not have remade all the labels). I do see LyX adding this to the preamble:

\AtBeginDocument{\providecommand\claimref[1]{\ref{claim:#1}}}

I also see

\RS@ifundefined{thmref} {\def\RSthmtxt{theorem~}\newref{thm}{name = \RSthmtxt}} {}

which I can't trace the origin of, but it may explain why theorem works. Any ideas?

A: Add to the document preamble (Document --> Settings --> LaTeX preamble)

\newref{claim}{name=claim~}

and it will probably work. As mentioned by egreg in the comments, the refstyle package is used, and you have to tell it what it should insert for labels starting with claim:. refstyle also provides commands for start-of-sentence references (capital letter) and plural forms, and the words to insert in these cases can also be specified, e.g.

\newref{claim}{name=claim~,Name=Claim~,names=claims~,Names=Claims~}

I don't know, however, how to make LyX use these, other than through using an ERT with e.g. \Claimref{claim:1}.
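For reference, here is a minimal plain-LaTeX sketch (bypassing LyX entirely) of how these \newref options behave; the theorem setup and the label suffix "one" are illustrative assumptions, not part of the original question:

```latex
\documentclass{article}
\usepackage{amsthm}
\usepackage{refstyle}

% Hypothetical claim environment, numbered within sections as in the question
\newtheorem{claim}{Claim}[section]

% Tell refstyle what to typeset for labels prefixed "claim:"
\newref{claim}{name=claim~,Name=Claim~,names=claims~,Names=Claims~}

\begin{document}
\section{Something}

\begin{claim}
\label{claim:one}
This claim exists only to be referenced.
\end{claim}

% \claimref{one} should give "claim 1.1", \Claimref{one} "Claim 1.1",
% per the naming options above
See \claimref{one} mid-sentence. \Claimref{one} starts a sentence.
\end{document}
```

Compiling this standalone file is a quick way to check the refstyle side of the problem before involving LyX's generated preamble.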
See the refstyle documentation for details.

Example

Copy the following code to an empty file and save as a .lyx file, e.g. example.lyx. Open in LyX and compile. Screenshots of LyX view and PDF below.

#LyX 2.0 created this file. For more info see http://www.lyx.org/
\lyxformat 413
\begin_document
\begin_header
\textclass article
\begin_preamble
\newref{claim}{name=claim~}
\end_preamble
\use_default_options true
\begin_modules
theorems-bytype
theorems-sec-bytype
\end_modules
\maintain_unincluded_children false
\language english
\language_package default
\inputencoding auto
\fontencoding global
\font_roman default
\font_sans default
\font_typewriter default
\font_default_family default
\use_non_tex_fonts false
\font_sc false
\font_osf false
\font_sf_scale 100
\font_tt_scale 100
\graphics default
\default_output_format default
\output_sync 0
\bibtex_command default
\index_command default
\paperfontsize default
\spacing single
\use_hyperref false
\papersize default
\use_geometry false
\use_amsmath 1
\use_esint 1
\use_mhchem 1
\use_mathdots 1
\cite_engine basic
\use_bibtopic false
\use_indices false
\paperorientation portrait
\suppress_date false
\use_refstyle 1
\index Index
\shortcut idx
\color #008000
\end_index
\secnumdepth 3
\tocdepth 3
\paragraph_separation indent
\paragraph_indentation default
\quotes_language english
\papercolumns 1
\papersides 1
\paperpagestyle default
\tracking_changes false
\output_changes false
\html_math_output 0
\html_css_as_file 0
\html_be_strict false
\end_header

\begin_body

\begin_layout Section
Something
\end_layout

\begin_layout Claim
\begin_inset CommandInset label
LatexCommand label
name "claim:claim1"
\end_inset

I hereby claim that I don't use LyX.
\end_layout

\begin_layout Theorem
\begin_inset CommandInset label
LatexCommand label
name "thm:1"
\end_inset

LyX makes some things harder.
\end_layout

\begin_layout Standard
Inserting references:
\begin_inset CommandInset ref
LatexCommand formatted
reference "thm:1"
\end_inset

and
\begin_inset CommandInset ref
LatexCommand formatted
reference "claim:claim1"
\end_inset

.
\end_layout

\end_body
\end_document
Q: Any subring of $A$ is an ideal. If $A$ is an integral domain then $A$ is commutative

Suppose every subring of $A$ is an ideal. Show that if $A$ is an integral domain, then $A$ is commutative. Is my proof correct?

Let $a$ and $b$ be nonzero elements of $A$. The centralizer $C(a)=\{ x\in A \mid ax=xa\}$ is a subring, hence by hypothesis an ideal of $A$; since $a\in C(a)$ and ideals absorb multiplication by ring elements, $ba\in C(a)$. Then

$a(ba)=(ba)a$

$aba-baa=0$

$(ab-ba)a=0$

Since $A$ is an integral domain and $a$ is nonzero, this gives $ab-ba=0$, i.e. $ab=ba$.

A: This is completely correct.
Quick Look: Halo: Spartan Strike What do we want? Top-down view! When do we want it? Louder! Sit back and enjoy as the Giant Bomb team takes an unedited look at the latest video games. Apr. 22 2015 Cast: Jeff, Drew Posted by: Jason
---
title: Scaling cloud-native applications
description: Scaling cloud-native applications with Azure Kubernetes Service and Azure Functions to meet user demand in a cost-effective way.
ms.date: 05/13/2020
ms.openlocfilehash: d425976eed248461a9c2e4fe03596f9f6dfd2eba
ms.sourcegitcommit: 27db07ffb26f76912feefba7b884313547410db5
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 05/19/2020
ms.locfileid: "83613738"
---
# <a name="scaling-cloud-native-applications"></a>Scaling cloud-native applications

One of the most frequently cited advantages of moving to a cloud hosting environment is scalability: the ability of an application to accept additional user load without compromising performance for each user. It is most often achieved by breaking an application into small pieces that can each be given whatever resources they require. Cloud vendors enable massive scalability anytime, anywhere in the world.

In this chapter, we discuss the technologies that enable cloud-native applications to scale to meet user demand. These technologies include:

- Containers
- Orchestrators
- Serverless computing

>[!div class="step-by-step"]
>[Previous](centralized-configuration.md)
>[Next](leverage-containers-orchestrators.md)
Brailsford on Froome-Wiggins combo at 2014 Tour: ‘I’d love to do it’ LONDON (AFP) — Sky boss Dave Brailsford would love to see the last two Tour de France winners Chris Froome and Bradley Wiggins lining up together when the race heads to Great Britain next year. Froome, who won the centenary edition of cycling’s biggest race on Sunday, and Wiggins, who became the first British winner last year, reportedly do not enjoy the easiest of relationships. But Brailsford said he would still “absolutely love” to see the pair working together when the Tour starts in the northern English city of Leeds in 2014. “I’d love to do it,” he said, dismissing concerns that the riders’ apparent differences would not make for a happy team. Wiggins, 33, followed up his victory in 2012 with a gold medal in the Olympic time trial in London. He was also knighted by Queen Elizabeth II and won the BBC’s prestigious Sports Personality of the Year. But he has endured a frustrating 2013, hampered by illness and injury that forced him to withdraw from the Giro d’Italia in May and sit out the Tour de France. Wiggins said in June that he may not try to win the Tour again and suggested the 28-year-old Froome would be the man to beat in the race in the next few years. Brailsford, however, said that Wiggins was “very, very motivated” for his return this weekend in the Tour of Poland and in good shape to target the world championships in Florence, Italy, in September. The Sky principal said he was not ruling anything out for next season but having Froome as defending champion as the Tour heads to Britain was “brilliant.”
The traditional Christmas shutdown of the nation's railways began early for tens of thousands of rail passengers, with a main London terminus closed for six days. Paddington is closed from 24 to 29 December, as Network Rail undertakes its biggest-ever programme of Christmas engineering work. A trickle of baffled passengers arrived at Paddington station, most of them foreign visitors planning to take the train to Heathrow. They were told to take the Tube instead. Passengers from London to South Wales, Bristol and the South West were redirected to Ealing Broadway, a suburban station in west London. About half the normal number of inter-city services were running. There are also big projects taking place between now and the New Year in the Manchester area, in South Wales and on the line from Norwich to London.

A Great Western Railway spokesman said: "One of the reasons Network Rail carries out key engineering projects during the Christmas period is that it coincides with a significant drop in demand for rail services, so the work inconveniences the fewest customers possible." But he held open the possibility that future Christmas timetables could include services on both 25 and 26 December. "We are open to doing so in the future, should the current level of predicted demand increase to sustainable levels."

For most of the 20th century, trains ran on both Christmas Day and Boxing Day. Mark Carne, chief executive of Network Rail, said that for the vast majority of the railways, trains could run over the holiday period if the operators choose to do so: "90 per cent of the network is open. But no trains run on Christmas Day – and that's a decision taken by the train operating companies."

A spokesman for the Rail Delivery Group said: "Operating a normal train service on Christmas and Boxing Day would cost the taxpayer money as only a fraction of the normal number of passengers would be travelling.
It would also make it harder and more expensive to carry out vital engineering work as part of our Railway Upgrade Plan of over £50bn.” However, both Megabus and National Express are planning their biggest-ever schedules of inter-city and airport coaches for Christmas Day. The only domestic flights on Christmas Day are two round-trips between Heathrow and Manchester, but many international services are continuing as normal – with a number of UK airports expecting their busiest 25 December ever. A strike planned by some British Airways cabin crew at Heathrow for Christmas Day and Boxing Day has been called off. But pilots working for Virgin Atlantic have started a work-to-rule in a dispute over union recognition. They say they will work “strictly to contract”, which could involve refusing to be flexible in the event of disruption. Virgin Atlantic said it expects flights to be unaffected.
Who Pulls John Gray's Strings?

John Gray, emeritus professor of European Thought at the London School of Economics, is an enigma. He began his intellectual life on the left but moved right in the late 1970s, becoming a fan of Nobel Prize-winning free-market economist F.A. Hayek. Gray's libertarianism was tempered, however, by studying British philosopher Michael Oakeshott's critique of "rationalism in politics." During the 1990s, Gray was associated with New Labour—the center-left ideology that brought Tony Blair to power in Westminster—and he became a prominent critic of global capitalism with his 1998 book False Dawn. Recently he appears to have embraced something of a nihilistic stoicism, whose spirit suffuses The Soul of the Marionette. In these pages he undertakes a sort of jazz improvisation on the theme of human freedom, surveying an omnium-gatherum of earlier writers' and cultures' thoughts on the topic from the point of view of a "freedom-skeptic."

Gray sees the modern, supposedly secular belief in human freedom as a creed that will not admit its character: "Throughout much of the world … the Gnostic faith that knowledge can give humans a freedom that no other creature can possess has become the predominant religion." Gray finds the Gnostic frame of mind even among "hard-headed" scientists:

The crystallographer J. D. Bernal … envisioned 'an erasure of individuality and mortality' in which human beings would cease to be distinct physical entities … 'consciousness itself might end or vanish … becoming masses of atoms in space communicating by radiation, and ultimately perhaps resolving itself entirely into light.'

In another vignette of a thinker he finds relevant to his inquiry, Gray discusses the philosophy of the 19th-century Italian writer Giacomo Leopardi, most famous for penning the classic poem "L'Infinito." Leopardi was a staunch materialist who nevertheless found religion to be a necessary illusion.
He understood Christianity as an essential response to the rise of skepticism in Greco-Roman culture; in Leopardi’s view, “What was destroying the [ancient] world was the lack of illusion.” Christianity had now gone into decline, but this was not to be celebrated; as Gray quotes Leopardi, “There is no doubt that the progress of reason and the extinction of illusions produce barbarism.” What was arising from the “secular creeds” of his time was only “the militant evangelism of Christianity in a more dangerous form.” Gray finds Edgar Allan Poe’s vision of a world where “human reason could never grasp the nature of things” congenial and devotes several pages to the American poet. He also takes up the trope of the golem as evinced in Mary Shelley’s Frankenstein, declaring “Humans have too little self-knowledge to be able to fashion a higher version of themselves”—a view on the surface at odds with his later proclamations about the coming age of artificial intelligence. Continuing his odyssey, Gray arrives at the isle—or rather, planet—of Stanislav Lem’s novel Solaris (which was made into a 2002 movie starring George Clooney). It features a water-covered world involved in “ontological auto-metamorphosis.” According to the “heretical” scientific theories its discovery spawned, the planet has a “sentient ocean”: Lem was prefiguring something like the Gaia hypothesis of James Lovelock that Gray has invoked favorably here and in earlier works. Gray also takes interest in the work of renowned American science fiction writer Philip K. Dick, who wrote a series of novels that advanced one of the most compelling paranoid metaphysics of our time. 
Gray notes that Dick is an archetypal Gnostic, as shown by lines like “Behind the counterfeit universe lies God … it is not man who is estranged from God; it is God who is estranged from God.” For Dick, it is unlikely that anyone can ever penetrate to a “true” reality through the veil of illusion: “were we to penetrate [that veil] for any reason, this strange, veil-like dream would reinstate itself retroactively, in terms of our perceptions and in terms of our memory. The mutual dreaming would resume as before…” Dick ultimately concluded that the flawed world he lived in was just a costume concealing the good world that is the true reality. But if this is so, Gray asks, how did this veil come into being? If an all-powerful God created it, then He must have wanted the veil to exist. But if it is the creation of some sub-deity, a Demiurge, then the “top” God is not all-powerful since he could not prevent the veil from coming into being. Of course, this is the ancient problem of theodicy restated in different terms, but it is to Gray’s credit that he recognizes it at play in Dick’s oeuvre. And as Gray notes, Dick was a very modern Gnostic in that he incorporated into his philosophy the idea of an evolution towards higher states of being taking place over time. In fact, it is “not least when it is intensely hostile to religion” that modern thought most embraces tales of the historical redemption of humanity. Gray argues that “All modern philosophies in which history is seen as a process of human emancipation … are garbled versions of [the] Christian narrative.” The next section of the book, called “In the puppet theatre,” begins with a look at the Aztec penchant for mass ritual killing. 
He quotes anthropologist Inga Clendinnen at length on the gruesome nature of the practice, including descriptions like: “On high occasions warriors carrying gourds of human blood or wearing the dripping skins of their captives ran through the streets … the flesh of their victims seethed in domestic cooking pots; human thighbones, scraped and dried, were set up in the courtyard of the households…” Gray contends the Aztecs were superior to modern state-based killers in that their victims were not “seen as less than human.” But only two pages later he claims, “In the ritual killings, nothing was left of human pride. If they were warriors, the victims were denied any status they had in society” and were “trussed like deer,” which certainly makes it sound as though they were seen as less than human. In any case, Gray views Aztec society as a lesson in the inevitability of human violence. We tamp it down in one place, only to see it pop back up in another. He is skeptical of statistics that seem to show a long-term decline in violence. He cites violence-caused famines and epidemics, deaths in labor camps, the gigantic U.S. prison population, the revival of torture in the most “civilized” societies, and other modern atrocities to call these figures into doubt. And he sees the false sense that we have overcome this human tendency to violence in “enlightened” Western societies as connected to our arrogant approach in dealing with “unenlightened” societies: By intervening in societies of which they know nothing, western elites are advancing a future they believe is prefigured in themselves—a new world based on freedom, democracy and human rights. The results are clear—failed states, zones of anarchy and new and worse tyrannies; but in order that they may see themselves as world-changing figures, our leaders have chosen not to see what they have done. 
Gray turns his attention to French Marxist Guy Debord, finding “nothing of interest” in his standard Marxist schema but noting that Debord was ahead of his time in analyzing celebrity. With work no longer giving life meaning, it is necessary that our “culture of celebrity” offers everyone “fifteen minutes of fame” to reconcile us to the “boredom of the rest of [our] lives.” He quotes Debord on the rising social importance of “media status”: “Where ‘media status’ has acquired infinitely more importance than the value of anything one might actually be capable of doing, it is normal for this status to be readily transferable…” This quote gets at the heart of why in 2015 we see headline coverage of a dispute between singer Elton John and fashion designers Domenico Dolce and Stefano Gabbana on the proper form for the family. Are fashion designers or pop songwriters experts on child development or the ethics of the family? If not, why is anyone paying any attention to this feud? Well, because they are celebrities with high “media status,” and that status is “readily transferable” to any other field whatsoever. Bored modern individuals are also rootless. Gray sees the rise of the surveillance state as tied to that condition: When people are locked into local communities they are subject to continuous informal monitoring of their behaviour. Modern individualism tends to condemn these communities because they repress personal autonomy … The informal controls on behavior that exist in a world of many communities are unworkable in a world of highly mobile individuals, so … near-ubiquitous technological monitoring is a consequence of the decline of cohesive societies that has occurred alongside the rising demand for individual freedom. As The Soul of the Marionette draws to a close, Gray heads off into a sort of nature mysticism where his thinking is—to me, at least—at its most obscure. 
Considering climate change, he claims: “Whatever is done now, human expansion has triggered a shift that will persist for thousands of years. A sign of the planet healing itself, climate change will continue regardless of its impact on humankind.” But how does Gray know climate change is a “sign of the planet healing itself,” rather than, say, a sign of its decline or something the planet itself is completely indifferent to? Gray’s gloomy vision seeps through in his prognosis for the human race too: “However it ends, the Anthropocene”—the epoch of humanity’s rule—“will be brief.” Again, I wonder how Gray knows this? Here he appears as the anti-Hegel, somehow sussing out the future of man much like the German philosopher, but from pessimistic rather than optimistic presuppositions. Although Gray is an atheist and a materialist of some sort or another, he correctly understands what science can and can’t tell us: Nothing carries so much authority today as science, but there is actually no such thing as ‘the scientific world-view.’ Science is a method of inquiry, not a view of the world. Knowledge is growing at accelerating speed; but no advance in science will tell us whether materialism is true or false, or whether humans possess free will. He also gets at the deep meaning behind religious stories: “being divided from yourself goes with being self-aware. This is the truth in the Genesis myth: the Fall is not an event at the beginning of history, but the intrinsic condition of self-conscious beings.” (Albert Camus, like Gray a nonbeliever, understood this very well: see his novel The Fall.) Yet there is a problem with the coherence of Gray’s outlook. He urges us to adopt a stoical attitude towards our predicament as marionettes. But if we are free to choose our attitude, why are we not also free to make other choices about our lives? 
Then again, perhaps Gray isn't really to blame for this incoherence: it could be that some unknown puppeteer, pulling on Gray's strings, made him write this book.

11 Responses to Who Pulls John Gray's Strings?

Most of this is way, way over my head. I guess that's why I'm so delighted to learn that even an acclaimed modern thinker like John Gray sometimes writes simple things that I can actually understand – and agree with: "By intervening in societies of which they know nothing, western elites are advancing a future they believe is prefigured in themselves — a new world based on freedom, democracy and human rights. The results are clear — failed states, zones of anarchy and new and worse tyrannies; but in order that they may see themselves as world-changing figures, our leaders have chosen not to see what they have done."

"Nothing carries so much authority today as science, but there is actually no such thing as 'the scientific world-view.' Science is a method of inquiry, not a view of the world. Knowledge is growing at accelerating speed; but no advance in science will tell us whether materialism is true or false, or whether humans possess free will." Gray seems to do that annoying thing here where academics project their own ignorance. Neuroscience does, in fact, have a lot to say on the topic of free will (or lack thereof), and physics continues to do an excellent job of broadening our scope of the physical world, thus making words like "materialism" kind of pointless.

I've mostly read John Gray in short articles written for the London Review and other such periodicals, and he strikes me as just another in the long line of, if you like, 'popular' philosophers––'popular' meaning not widely read, but more or less non-technical and not substantially engaged in 'academic' philosophy––who locate the faults of modern society in its ideals: in the main 'progress' and 'rationalism,' maybe 'materialism' as well.
But they never manage to argue convincingly that ideas drive history––at best, a rationalist, progressive materialism is an incoherent ideological cocktail of ideas assimilated from different philosophical traditions which, very possibly, fit together only so as to serve the broader political, economic, and social interests of modern Western capitalist states. It reminds me of Bertrand Russell blaming Nazism and communism entirely on the German Romantic philosophers––those theorists had quite a bit of influence in the British universities (F.H. Bradley, Bernard Bosanquet) prior to the Great War, at which point, very conveniently, Hegel and Nietzsche became the scapegoats du jour and ideas which had been entirely present in England without leading to British totalitarianism (Locke’s liberal empiricism was good enough for British imperialism anyhow) were suddenly explanations for German aggression in Europe. In sum: either address the peculiarities of how ideas serve history, or spare me the philosopher’s prejudice about ideas determining history. And on a pedantic note: Stanislaw Lem’s “Solaris” was first adapted into a film by Andrei Tarkovsky in 1972. Sorry, as a film buff, I couldn’t let that oversight slide––you can’t mention the 2002 Soderbergh film and leave the classic Tarkovsky by the wayside (much better, much more famous). Good piece. I look forward to reading John Gray’s latest. One nitpick, however: Avoid at all costs the 2002 film version of Solaris and check out instead Andrei Tarkovsky’s 1972 classic, which is a far more realistic meditation on space travel and its effects on the human psyche. Science cannot ever answer the question of being: “why is there anything at all?” And neuroscience can never transcend the ego, properly understood. Phenomenology retains its legitimacy as a method no matter how refined the wielders of scalpels can become. John Gray would find an ill omen in a rainbow. 
One of the first documents found (Egyptian, 3000 BCE) talked about the tragedy (and absurdity)of life. Humanity has been going to hell for a long, long time. We ain’t going nowhere. Gray should not only stop to smell roses, he should plant a few for himself and others. Human history makes it easy to think any positive action is naive futility. It is also a hollow excuse for inaction. Even the smallest of good deeds makes the world a little bit better, if only for a moment. Mele, of course, demonstrates that misinterpretations of findings in neuroscience combined with philosophical ignorance on the part of neuroscientists do not demonstrate the absence of free will, but the lack of a basic education in the humanities on the part of neuroscientists. For those, contra Einstein, who want to keep it simple, more simple than is possible, I recommend avoiding Mele’s book at all cost. There is also Denis Noble’s book, the Music of Life, addressing the failure of reductionism, and the increasing empirical case for holism as demonstrated by the use of mathematical complex systems for biological modeling: And then there is E.O. Wilson and others reviving group selection in evolution, and people like Peter Turchin who are using ideas like group selection and complex systems to model the development of human society. Needless to say, the underlying anthropology is much closer to Aristotle than Locke (which to the philosophically astute means the underlying ontology is not only post-Newtonian but also pre-Modern). “…This is the truth in the Genesis myth: the Fall is not an event at the beginning of history, but the intrinsic condition of self-conscious beings.” Both are true. The Fall is an event at the beginning of history AND the intrinsic condition of self-conscious beings. It might be that the Fall is the outcome of creating self-conscious beings and is thus intrinsic to them. We inherit the condition by being human; the Fall captures that reality.
Comparison of the effects of supplementation with whey mineral and potassium on arterial tone in experimental hypertension. The aim of this work was to compare the effects of supplementation of rat chow diet with potassium (K+) and whey mineral concentrate (Whey), a diet rich in milk minerals, on blood pressure and arterial responses in vitro in spontaneously hypertensive rats (SHR). Thirty young SHR and twenty Wistar-Kyoto rats (WKY) were allocated into five groups: SHR, Whey-SHR, K(+)-SHR, WKY and Whey-WKY. Whey-supplementation was performed by adding 25% whey mineral concentrate to the chow, which in particular increased the intake of potassium (from 1.0% to 3.6%) and also that of calcium (from 1.0% to 1.3%) and magnesium (from 0.2% to 0.3%) in the rats. The K(+)-SHR were given extra potassium chloride (KCl) so that the final potassium content in the chow was 3.6%. Blood pressures were measured indirectly by the tail-cuff method. Responses of mesenteric arterial rings were examined in standard organ chambers after 12 study weeks. During the 12-week study systolic blood pressures in control SHR increased steadily from 160 to about 230 mmHg, while supplementation with either Whey or potassium had a clear antihypertensive effect of about 50 mmHg in the hypertensive rats. Blood pressures in the WKY and Whey-WKY groups remained comparable during the whole study. In noradrenaline-precontracted arterial rings, endothelium-dependent relaxation to acetylcholine (ACh), as well as endothelium-independent relaxations to nitroprusside and isoprenaline were attenuated in untreated SHR, while all these dilatory responses were similarly improved by Whey and potassium supplementation. The cyclooxygenase inhibitor diclofenac, which reduces the synthesis of dilatory and constricting prostanoids, clearly enhanced the relaxation to ACh in untreated SHR, but was without effect in the other groups. 
In the presence of the nitric oxide synthase inhibitor NG-nitro-L-arginine methyl ester the relaxation to ACh was markedly reduced in all SHR groups, whereas in the two WKY groups, distinct relaxations to ACh were still present. The remaining responses were partially prevented by tetraethylammonium, an inhibitor of calcium-activated potassium channels, and the difference between untreated and potassium-supplemented SHR was abolished. When endothelium-mediated hyperpolarization of smooth muscle was prevented by precontracting the preparations with 50 mM KCl, only marginal differences were observed in relaxations to ACh between untreated SHR and the other groups. Interestingly, the impaired endothelium-independent relaxations to cromakalim, a hyperpolarizing vasodilator acting via ATP-sensitive potassium channels, were normalized by Whey mineral and potassium diets. Supplementation with Whey mineral and a comparable dose of potassium similarly opposed the development of experimental genetic hypertension, an effect which was associated with improved arterial dilatory properties. Both supplements augmented the hyperpolarization-related component of arterial relaxation, increased the sensitivity of smooth muscle to nitric oxide, and decreased the production of vasoconstrictor prostanoids. Therefore, the beneficial effects of the Whey diet could be attributed to increased intake of potassium in SHR.
Steve Hilton and wife Rachel Whetstone married for a decade; know their married life and children

Living happily with the same person through a decade of marriage is not common these days. But the former Director of Strategy, Steve Hilton, and his wife, Rachel Whetstone, have managed to defy the modern trend and stay together for such a long time. Many fans of Steve and Rachel want to know about the romance between them. So today you will get to know about their relationship, how they have kept it strong to date, and about their children.

What made Steve Hilton and Rachel Whetstone stay strong to date?

What are the most important things in a relationship? Love and trust, obviously. A couple that maintains those two things will surely have a long, strong relationship. Likewise, Steve and Rachel, busy as they are in their professional lives, have trusted each other completely, and that has kept their relationship smooth to date.

[ CAPTION: Steve Hilton and Rachel Whetstone ][ SOURCE: Daily Mail ]

Although they have faced a lot of ups and downs in their lives, they have always supported each other. The couple has been living a lavish life with their children in their own house in Palo Alto, and there are no rumors of separation or divorce to date.

Steve Hilton and Rachel Whetstone's married life

Steve Hilton married his longtime sweetheart, Rachel Whetstone, in 2008. The pair first met at Conservative Central Office. After that first meeting they became friends, and slowly but steadily their friendship turned into a romantic relationship.

[ CAPTION: Steve Hilton and wife Rachel Whetstone ]

The couple dated for a very long time before getting married. However, neither of them has disclosed further details of their dating life or wedding day.

Does the couple share children together?

Well, from their long romantic relationship the couple has been blessed with two children. The elder son is named Ben Hilton; however, the name of their second child has not been revealed.

[ CAPTION: Steve Hilton ][ SOURCE: Evening Standard ]

Steve and Rachel were also the godparents of Ivan Cameron, the eldest son of former UK Prime Minister David Cameron. Ivan, who was severely disabled, died at the age of six. Although many years have passed, the couple still share the same relationship as before and are enjoying their married life. Despite their busy schedules, they have managed to spend quality time with their children. We hope they stay together for the rest of their lives.
Professor wants to observe illegal assisted suicides Canada’s university professors are preparing to defend the right of a Metro Vancouver researcher to witness illegal assisted suicides in the name of increasing understanding of the right-to-die movement. By Vancouver Sun, July 3, 2008 The Canadian Association of University Teachers (CAUT) has formed a high-level committee to investigate claims that Kwantlen Polytechnic University sociologist Russel Ogden was unjustly denied the chance to research new techniques for assisted suicide. “In the face of it, it looks as if there has been a violation of academic freedom,” James Turk, executive director of the CAUT, said Wednesday in an interview from Ottawa. The CAUT has formed what Turk calls a “blue-ribbon committee” to look into why the Kwantlen administration is effectively blocking Ogden from researching assisted suicides, even after the university-college’s ethics committee approved his research three years ago. For more than 14 years, Ogden has engaged in controversial and ground-breaking research into scores of underground assisted suicides (often known as “Nu Tech deathing”) by people dealing with AIDS, cancer and other terminal illnesses. Ogden has frequently run into opposition from university administrators who fear their institutions could wind up in trouble for allowing him to possibly skirt the edges of the law. In 2003, Ogden was awarded $143,000 in damages after it was determined that Britain’s Exeter University had illicitly backed out of an agreement to protect the identities of scores of people Ogden found had taken part in illegal assisted suicides.
More recently, Ogden has discovered that more than 19 British Columbians have committed suicide through an increasingly widespread technique known as “helium in a bag.” Helium is seen as a swift, highly lethal and painless way to die without involving physicians or drugs. Helium is also nearly undetectable in toxicological probes. The latest confrontation over Ogden’s pioneering research techniques has arisen at the same time that assisted suicide has become big news in Washington state. Former Democratic governor Booth Gardner, who struggles with Parkinson’s disease, is campaigning for a November ballot initiative on doctor-assisted euthanasia, which will go ahead if state supporters gather 225,000 signatures by today. However, the CAUT worries that Ogden is being blocked from continuing legitimate research into the right-to-die movement by Kwantlen officials. Despite receiving earlier ethics board approval, Ogden has since been told by Kwantlen’s administration he cannot “engage in any illegal activity, including attending at an assisted death,” says a CAUT letter written by Turk, which was addressed to eight academics and administrators. A copy was obtained by The Vancouver Sun. Neither Ogden nor Kwantlen University officials were available for comment Wednesday. The CAUT’s Turk maintains that, although assisted suicide is illegal in Canada (unlike in the state of Oregon, as well as the countries of Switzerland, Belgium and the Netherlands), it is neither illegal to commit suicide nor against the law to witness an assisted death in this country. “Witnessing an illegal act, such as a husband murdering his wife, is not illegal behaviour on your part,” Turk said. Therefore, Turk said, it would not be illegal for Ogden to witness an assisted suicide, since he would be neither discouraging nor encouraging it. 
It’s important, Turk said, for academic researchers to be given the freedom to try to “understand politically unpopular behaviour.” Even while a Canwest poll last year showed three-quarters of Canadians approve of assisted suicide, compared to 48 per cent of Americans, Turk said researchers like Ogden are being held back by university administrators “who might think the [federal] government is going to get mad at them.” The high-level CAUT committee that will review Ogden’s case and issue its findings in a few months includes Kevin Haggerty, a sociologist at the University of Alberta; John McLaren, professor emeritus of law at the University of Victoria; and Lorraine Weir, an English professor at the University of B.C.
Q: Android Intents - use of Context argument

For the Intent constructor - Intent(Context context, Class myClass) - what exactly does the context argument specify? Also, do we ever need to set it to the context of any other application?

A: According to the Context documentation: Interface to global information about an application environment. This is an abstract class whose implementation is provided by the Android system. It allows access to application-specific resources and classes, as well as up-calls for application-level operations such as launching activities, broadcasting and receiving intents, etc. Or in other words, it is a class that provides access to your application.

Also, do we ever need to set it to the context of any other application? No, the context of your application is provided by Android. Normally, for 'normal' applications you do not need to bother with the context, unless you need to activate your application from another application or send a message between two running applications. If you want to launch another application, you do not need its context, as you normally do not have the context of another application. Instead, you can ask Android for it (using the application's package name), in the form of an Intent: Intent launchIntent = getPackageManager().getLaunchIntentForPackage("com.package.address"); startActivity(launchIntent); See Launch an application from another application on Android for more information.
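To make the constructor's two arguments concrete, here is a minimal sketch; note that the activity names `MainActivity` and `DetailActivity` and the extra key are invented for illustration, and this code only runs inside an Android app, not standalone:

```java
// Inside MainActivity (an Activity is itself a Context), start another
// Activity belonging to the same application:
Intent intent = new Intent(this, DetailActivity.class); // 'this' supplies the Context
intent.putExtra("item_id", 42);                         // optional extra data for the target
startActivity(intent);                                  // the framework resolves and launches the target
```

The Context tells Android which application package the target class belongs to, which is how the framework resolves `DetailActivity` to a concrete component.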
JERUSALEM — Israeli police say they forcibly removed dozens of Jewish protesters trying to prevent a Christian ritual from taking place at a holy site revered in both religions. Police spokeswoman Luba Samri says the skirmish took place Monday at a site revered by Jews as the tomb of the biblical King David and by Christians as the site of Jesus’ Last Supper. On Sunday, dozens of Jewish protesters also attempted to block Christian prayer there for the holiday of Pentecost. A status-quo arrangement permits Christian prayer at the site on specific holidays. The Vatican is lobbying Israel for more access to the site, which fundamentalist Jewish Israelis oppose. The Custodia of Terra Santa, a Vatican representative in Jerusalem, said the events were “grave.”
# can use variables like {build} and {branch}
version: 1.2.{build}

pull_requests:
  do_not_increment_build_number: true

branches:
  only:
    - develop

configuration:
  - Debug
  - Release

environment:
  matrix:
    - VS_VERSION_MAJOR: 12
    - VS_VERSION_MAJOR: 14
      BOOST_ROOT: C:\Libraries\boost_1_59_0

platform:
  - Win32
  - x64

before_build: "scripts\\appveyor.bat"

build:
  parallel: true
  project: build/cereal.sln
  verbosity: minimal

test_script: "scripts\\appveyor.bat test"

artifacts:
  - path: build\Testing
  - path: out
At a mass meeting of Allied women war workers held in Paris yesterday evening the following message from the Prime Minister was read:- I extremely regret that it is impossible for me to fulfil my undertaking to address the great gathering of women war workers in Paris. I regret it all the more because I was very anxious to bear testimony to the tremendous part which women have played in this vital epoch in human history. They have not only borne their burden of sorrow and separation with unflinching fortitude and patience; they have assumed an enormous share of the burdens necessary to the practical conduct of the war. If it had not been for the splendid manner in which the women came forward to work in hospitals, in munition factories, on the land, in administrative offices of all kinds, and in war work behind the lines, often in daily danger of their lives, Great Britain and, as I believe, all the Allies would have been unable to withstand the enemy attacks during the past few months. For this service to our common cause humanity owes them unbounded gratitude. In the past I have heard it said that women were not fit for the vote because they would be weak when it came to understanding the issues and bearing the strains of a great war. My recent experience in South Wales confirmed me in the conviction that the women there understand perfectly what is at stake in this war. I believe that they recognise as clearly as any that there can be no peace, no progress, no happiness in the world so long as the monster of militarism is able to stalk unbridled and unashamed among the weaker peoples. To them this war is a crusade for righteousness and gentleness, and they do not mean to make peace until the Allies have made it impossible for another carnival of violence to befall mankind. I am certain that this resolution of the women of South Wales is but typical of the spirit of the women in the rest of Great Britain. 
This war was begun in order that force and brutality might crush out freedom among men. Its authors cannot have foreseen that one of its main effects would be to give to women a commanding position and influence in the public affairs of the world. To their ennobling influence we look not only for strength to win the war but for inspiration during the great work of reconstruction which we shall have to undertake after victory is won. The women who have flocked to France to work for the Allies are among the foremost leaders of this great movement of regeneration. My message to their representatives gathered together in Paris is this: "Well done; carry on. You are helping to create a new earth for yourselves and for your children." D. LLOYD GEORGE.
= Post Release (Successful) :Notice: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at. http://www.apache.org/licenses/LICENSE-2.0 . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. :page-partial: The release process consists of: * the release manager xref:comguide:ROOT:cutting-a-release.adoc[cutting the release] * members of the Apache Isis PMC xref:comguide:ROOT:verifying-releases.adoc[verifying] and voting on the release * the release manager performing post-release tasks, for either a successful or an xref:comguide:ROOT:post-release-unsuccessful.adoc[unsuccessful] vote (the former is documented below) For a vote to succeed, there must be at least three +1 votes from PMC members, and the vote must have been open at least 72 hours. If there are not three +1 votes after this time, then it is perfectly permissible to keep the vote open longer. This section describes the steps to perform if the vote has been successful. == Inform dev ML Post the results to the `[email protected]` mailing list: [source,subs="attributes+"] ---- [RESULT] [VOTE] Apache Isis Core release {page-isisrel} ---- using the body (alter last line as appropriate): [source] ---- The vote has completed with the following result: +1 (binding): ... list of names ... +1 (non binding): ... list of names ... -1 (binding): ... list of names ... -1 (non binding): ... list of names ... The vote is SUCCESSFUL. 
I'll now go ahead and complete the post-release activities. ---- == Release to Maven Central CAUTION: We release to Maven Central before anything else; we don't want to push the git tags (an irreversible action) until we know that this has worked ok. From the http://repository.apache.org[ASF Nexus repository], select the staging repository and select 'release' from the top menu. image::release-process/nexus-release-1.png[width="600px",link="{imagesdir}/release-process/nexus-release-1.png"] This moves the release artifacts into an Apache releases repository; from there they will be automatically moved to the Maven repository. == Set environment variables As we did for the cutting of the release, we set environment variables to parameterize the following steps: [source,bash,subs="attributes+"] ---- export ISISJIRA=ISIS-9999 # <.> export ISISTMP=/c/tmp # <.> export ISISREL={page-isisrel} # <.> export ISISRC=RC1 # <.> export ISISBRANCH=release-$ISISREL-$ISISRC export ISISART=isis env | grep ISIS | sort ---- <.> set to an "umbrella" ticket for all release activities. (One should exist already, xref:comguide:ROOT:post-release-successful.adoc#create-new-jira[created at] the beginning of the development cycle now completing). <.> adjust by platform <.> adjust as required <.> adjust as necessary if there was more than one attempt to release Open up a terminal, and switch to the correct release branch: [source,bash,subs="attributes+"] ---- git checkout $ISISBRANCH ---- == Update tags Replace the `-RCn` tag with another without the qualifier. You can do this using the `scripts/promoterctag.sh` script; for example: [source,bash,subs="attributes+"] ---- sh scripts/promoterctag.sh $ISISART-$ISISREL $ISISRC ---- This script pushes the tag under `refs/tags/rel`. As per Apache policy (communicated on 10th Jan 2016 to Apache PMCs), this path is 'protected' and is unmodifiable (guaranteeing the provenance that the ASF needs for releases). 
== Update JIRA === Close tickets Close all JIRA tickets for the release, or move them to future releases if not yet addressed. Any tickets that were partially implemented should be closed, and new tickets created for the functionality on the ticket not yet implemented. === Generate Release Notes From the root directory, generate the release notes for the current release, in Asciidoc format; eg: [source,bash,subs="attributes+"] ---- sh scripts/jira-release-notes.sh ISIS $ISISREL > /tmp/1 ---- [NOTE] ==== This script uses 'jq' to parse JSON. See the script itself for details of how to install this utility. ==== === Mark the version as released In JIRA, go to the link:https://issues.apache.org/jira/plugins/servlet/project-config/ISIS/versions[administration section] for the Apache Isis project and update the version as being released. In the link:https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=87[Kanban view] this will have the effect of marking all tickets as released (clearing the "done" column). [#create-new-jira] === Create new JIRA Create a new JIRA ticket as a catch-all for the _next_ release. == Update Release Notes In the main `isis` repo (ie containing the asciidoc source): * Create a new `relnotes.adoc` file to hold the release notes generated above. + This should live in `antora/components/relnotes/modules/ROOT/pages/yyyy/vvv/relnotes.adoc` ** where `yyyy` is the year ** where `vvv` is the version number * Update the `nav.adoc` file to reference these release notes + In `antora/components/relnotes/ROOT/nav.adoc` * Update the table in the `about.adoc` summary + In `antora/components/relnotes/ROOT/pages/about.adoc` * Update the `doap_isis.rdf` file (which provides a machine-parseable description of the project) with details of the new release. Validate using the http://www.w3.org/RDF/Validator/[W3C RDF Validator] service. + TIP: For more on DOAP files, see these link:http://projects.apache.org/doap.html[Apache policy docs]. 
* Update the link:https://github.com/apache/isis/blob/master/STATUS[STATUS] file (in the root of Apache Isis' source) with details of the new release. * Commit the changes + [source,bash,subs="attributes+"] ---- git add . git commit -m "$ISISJIRA: updates release notes, STATUS and doap_isis.rdf" ---- == Release Source Zip As described in the link:http://www.apache.org/dev/release-publishing.html#distribution_dist[Apache documentation], each Apache TLP has a `release/TLP-name` directory in the distribution Subversion repository at link:https://dist.apache.org/repos/dist[https://dist.apache.org/repos/dist]. Once a release vote passes, the release manager should `svn add` the artifacts (plus signature and hash files) into this location. The release is then automatically pushed to http://www.apache.org/dist/[http://www.apache.org/dist/] by `svnpubsub`. Only the most recent release of each supported release line should be contained here; old versions should be deleted. Each project is responsible for the structure of its directory. The directory structure of Apache Isis reflects the directory structure in our git source code repo: [source] ---- isis/ core/ ---- If necessary, check out this directory structure: [source,bash] ---- svn co https://dist.apache.org/repos/dist/release/isis isis-dist ---- Next, add the new release into the appropriate directory, and delete any previous release. 
The `upd.sh` script can be used to automate this: [source,bash] ---- old_ver=$1 new_ver=$2 # constants repo_root=https://repository.apache.org/content/repositories/releases/org/apache/isis zip="source-release.zip" asc="$zip.asc" md5="$zip.md5" # # isis-core # type="core" fullname="isis-parent" pushd isis-core curl -O $repo_root/$type/$fullname/$new_ver/$fullname-$new_ver-$asc svn add $fullname-$new_ver-$asc curl -O $repo_root/$type/$fullname/$new_ver/$fullname-$new_ver-$md5 svn add $fullname-$new_ver-$md5 curl -O $repo_root/$type/$fullname/$new_ver/$fullname-$new_ver-$zip svn add $fullname-$new_ver-$zip svn delete $fullname-$old_ver-$asc svn delete $fullname-$old_ver-$md5 svn delete $fullname-$old_ver-$zip popd ---- [source,bash,subs="attributes+"] ---- sh upd.sh [previous_release] {page-isisrel} ---- The script downloads the artifacts from the Nexus release repository, adds the artifacts to subversion and deletes the previous version. Double check that the files are correct; there is sometimes a small delay in the files becoming available in the release repository. It should be sufficient to check just the `.md5` or `.asc` files to confirm that they look valid (that is, aren't HTML 404 error pages): [source,bash,subs="attributes+"] ---- vi `find . -name *.md5` ---- Assuming all is good, commit the changes: [source,subs="attributes+"] ---- svn commit -m "publishing isis source releases to dist.apache.org" ---- If the files are invalid, then revert using `svn revert . --recursive` and try again in a little while. == Final website updates Apply any remaining documentation updates: * If there have been documentation changes made in other branches since the release branch was created, then merge these in. 
* If there have been updates to any of the schemas, copy them over: ** copy the new schema(s) from + `api/schema/src/main/resources/o.a.i.s.xxx` + to its versioned counterpart: + `antora/supplemental-ui/schema/xxx/xxx-ver.xsd` ** ensure the non-versioned schema is the same as the highest versioned one + `antora/supplemental-ui/schema/xxx/xxx.xsd` * Commit the changes: + [source,bash,subs="attributes+"] ---- git add . git commit -m "$ISISJIRA: merging in final changes to docs" ---- We are now ready to xref:#generate-website[generate the website]. [#generate-website] == Generate website We use Antora to generate the site, not only the version being released but also any previous versions listed in `site.yml`. This is done using the `content.sources.url[].branches` properties. We use branches for all cases - note that the branch name appears in the generated UI. If there are patches to the documentation, we move the branches. We therefore temporarily modify all of the `antora.yml` files (and update `index.html`) and create a branch for this change; then we update `site.yml` with a reference to that new branch. All of this is changed afterwards. === Create doc branch First, we prepare a doc branch to reference: * Update all `antora.yml` files, eg using an IDE: + ** `version: latest` -> `version: {page-isisrel}` * Commit all these changes: + [source,bash,subs="attributes+"] ---- git add . git commit -m "$ISISJIRA: bumps antora.yml and index.html to $ISISREL" ---- We now create a branch to reference in the `site.yml`, later on. * We create the `{page-isisrel}` branch. + This mirrors the "rel/isis-{page-isisrel}" used for the formal (immutable) release tag, but is a branch because it allows us to move it, and must have this simplified name as it is used in the "edit page" link of the site template. 
+ [source,bash,subs="attributes+"] ---- git branch {page-isisrel} git push origin {page-isisrel} ---- Finally, revert the last commit (backing out changes to `antora.yml` files): [source,bash,subs="attributes+"] ---- git revert HEAD ---- === Update `index.html` & `site.yml` & generate Lastly, we update `index.html` and then `site.yml`: * Update the home page of the website, `antora/supplemental-ui/index.html` + Note that this isn't performed in the docs branch (xref:#create-doc-branch[previous section]) because the supplemental files are _not_ versioned as a doc component: ** update any mention of `master` -> `{page-isisrel}` + This should be the two sets of starter app instructions for helloworld and simpleapp. ** update any mention of `latest` -> `{page-isisrel}` + This should be in hyperlinks, `<a href="docs/...">` * Now update `site.yml` + This will reference the new branch (and any previous branches). Every content source needs to be updated: + ** `branches: HEAD` -> `branches: {page-isisrel}` * Commit this change, too (there's no need to push): + [source,bash,subs="attributes+"] ---- git add . git commit -m "$ISISJIRA: adds tag to site.yml" ---- We are now in a position to actually generate the Antora website: * Generate the website: + [source,bash,subs="attributes+"] ---- sh preview.sh ---- + This will write to `antora/target/site`; we'll use the results in the xref:#publish-website[next section]. Finally, revert the last commit (backing out changes to `site.yml`): [source,bash,subs="attributes+"] ---- git revert HEAD ---- [#update-the-algolia-search-index] == Update the Algolia search index === Index the site Create an `algolia.env` file holding the `APP_ID` and the admin `API_KEY`, in the root of `isis-site`: [source,ini] .algolia.env ---- APPLICATION_ID=... API_KEY=... ---- CAUTION: This file should not be checked into the repo, because the API_KEY allows the index to be modified or deleted. 
We use the Algolia-provided link:https://hub.docker.com/r/algolia/docsearch-scraper[docker image] to run the crawl (as per the link:https://docsearch.algolia.com/docs/run-your-own/#run-the-crawl-from-the-docker-image[docs]): [source,bash] ---- cd content docker run -it --env-file=../algolia.env -e "CONFIG=$(cat ../algolia-config.json | jq -r tostring)" algolia/docsearch-scraper ---- This posts the index up to the link:https://algolia.com[Algolia] site. NOTE: Additional config options for the crawler can be found link:https://www.algolia.com/doc/api-reference/crawler/[here]. [#publish-website] == Publish website We now copy the results of the Antora website generation over to the `isis-site` repo: * In the `isis-site` repo, check out the `asf-site` branch: + [source,bash,subs="attributes+"] ---- cd ../isis-site git checkout asf-site git pull --ff-only ---- * Still in the `isis-site` repo, delete all the files in `content/` _except_ for the `schema` and `versions` directories: + [source,bash,subs="attributes+"] ---- pushd content for a in $(ls -1 | grep -v schema | grep -v versions) do rm -rf $a done popd ---- * Copy the generated Antora site to the `isis-site` repo's `content/` directory: + [source,bash,subs="attributes+"] ---- cd ../isis cp -Rf antora/target/site/* ../isis-site/content/. ---- * Back in the `isis-site` repo, commit the changes and preview: + [source,bash,subs="attributes+"] ---- cd ../isis-site git add . git commit -m "$ISISJIRA : production changes to website" sh preview.sh ---- * If everything looks ok, then push the changes to make live, and switch back to the `isis` repo: + [source,bash,subs="attributes+"] ---- git push origin asf-site cd ../isis ---- == Merge in release branch Because we release from a branch, the changes made in the release branch should be merged back into the `master` branch. 
In the `isis` repo (adjust if not on RC1): [source,bash,subs="attributes+"] ---- git checkout master # update master with latest git pull git merge release-{page-isisrel}-RC1 # merge branch onto master git push origin --delete release-{page-isisrel}-RC1 # remote branch no longer needed git branch -d release-{page-isisrel}-RC1 # branch no longer needed ---- == Bump \{page-isisrel} in `site.yml` In the `site.yml` file, bump the version of `\{page-isisrel}`, and commit. == Update the ASF Reporter website Log the new release in the link:https://reporter.apache.org/addrelease.html?isis[ASF Reporter website]. == Announce the release Announce the release to the link:mailto:[email protected][users mailing list]. For example, for a release of Apache Isis Core, use the following subject: [source,subs="attributes+"] ---- [ANN] Apache Isis version {page-isisrel} Released ---- And use the following body (summarizing the main points as required): [source,subs="attributes+"] ---- The Apache Isis team is pleased to announce the release of Apache Isis {page-isisrel}. New features in this release include: * ... Full release notes are available on the Apache Isis website at [1]. You can access this release directly from the Maven central repo [2]. Alternatively, download the release and build it from source [3]. Enjoy! --The Apache Isis team [1] http://isis.apache.org/relnotes/{page-isisrel}/about.html [2] https://search.maven.org [3] https://isis.apache.org/docs/{page-isisrel}/downloads/how-to.html ---- == Blog post link:https://blogs.apache.org/roller-ui/login.rol[Log onto] the http://blogs.apache.org/isis/[Apache blog] and create a new post. Copy-n-pasting the above mailing list announcement should suffice. == Update dependencies With the release complete, now is a good time to bump versions of dependencies (so that there is a full release cycle to identify any possible issues). 
You will probably want to create a new JIRA ticket for these updates (or, if minor, use the "catch-all" JIRA ticket raised earlier for the next release).

=== Merge in any changes from `org.apache:apache`

Check (via link:http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache%22%20a%3A%22apache%22[search.maven.org]) whether there is a newer version of the Apache parent `org.apache:apache`. If there is, merge these changes into the `isis-parent` POM.

=== Update plugin versions

The `maven-versions-plugin` should be used to determine if there are newer versions of any of the plugins used to build Apache Isis. Since this goes off to the internet, it may take a minute or two to run:

[source,bash]
----
mvn versions:display-plugin-updates > /tmp/foo
grep "\->" /tmp/foo | /bin/sort -u
----

Review the generated output and make updates as you see fit. (Before updating, though, please search for known issues with the newer versions.)

=== Update dependency versions

The `maven-versions-plugin` should be used to determine if there are newer versions of any of Isis' dependencies. Since this goes off to the internet, it may take a minute or two to run:

[source,bash]
----
mvn versions:display-dependency-updates > /tmp/foo
grep "\->" /tmp/foo | /bin/sort -u
----

Update any of the dependencies that are out-of-date.

That said, do note that some dependencies may show up with a newer version when in fact that "version" is an old, badly named one. Also, there may be new versions that you do not wish to move to, eg release candidates or milestones. For example, here is a report showing both of these cases:

[source,bash]
----
[INFO]   asm:asm ..................................... 3.3.1 -> 20041228.180559
[INFO]   commons-httpclient:commons-httpclient .......... 3.1 -> 3.1-jbossorg-1
[INFO]   commons-logging:commons-logging ......... 1.1.1 -> 99.0-does-not-exist
[INFO]   dom4j:dom4j ................................. 1.6.1 -> 20040902.021138
[INFO]   org.datanucleus:datanucleus-api-jdo ................ 3.1.2 -> 3.2.0-m1
[INFO]   org.datanucleus:datanucleus-core ................... 3.1.2 -> 3.2.0-m1
[INFO]   org.datanucleus:datanucleus-jodatime ............... 3.1.1 -> 3.2.0-m1
[INFO]   org.datanucleus:datanucleus-rdbms .................. 3.1.2 -> 3.2.0-m1
[INFO]   org.easymock:easymock ................................... 2.5.2 -> 3.1
[INFO]   org.jboss.resteasy:resteasy-jaxrs ............. 2.3.1.GA -> 3.0-beta-1
----

For these artifacts you will need to search the http://search.maven.org[Maven central repo] directly yourself to confirm whether any newer, legitimate versions exist.
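Since the bogus entries in such a report tend to follow recognizable patterns (date-stamped "versions", vendor forks), a small filter can thin the list before manual review. The following is only a sketch using our own heuristic regex, not part of the release process; the sample report is inlined via a heredoc, whereas in practice you would run the same `grep` over the `/tmp/foo` file generated above:

```shell
# Sketch: filter a versions-plugin report to drop date-stamped or vendor-fork
# "updates". The sample report below is inlined for illustration only; in
# practice, pipe /tmp/foo through the final grep instead.
cat > /tmp/dep-updates.txt <<'EOF'
[INFO]   asm:asm ..................................... 3.3.1 -> 20041228.180559
[INFO]   commons-httpclient:commons-httpclient .......... 3.1 -> 3.1-jbossorg-1
[INFO]   org.easymock:easymock ................................... 2.5.2 -> 3.1
EOF
# Drops the date-stamped asm line and the jboss fork; keeps the easymock line.
grep "\->" /tmp/dep-updates.txt | sort -u | grep -Ev '(19|20)[0-9]{6}|jboss'
```

The regex is deliberately crude; anything it keeps still needs the manual check against Maven central described above.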
Q: Getting the rectangle's four points given only one latitude and one longitude

I am reading this: http://www.panoramio.com/api/widget/api.html#photo-widget to build a JavaScript photo widget. Under Request -> request object table, it is written:

name: rect

example value: {'sw': {'lat': -30, 'lng': 10.5}, 'ne': {'lat': 50.5, 'lng': 30}}

meaning: This option is only valid for requests where you do not use the ids option. It indicates that only photos that are in a certain area are to be shown. The area is given as a latitude-longitude rectangle, with sw at the south-west corner and ne at the north-east corner. Each corner has a lat field for the latitude, in degrees, and a lng field for the longitude, in degrees. Northern latitudes and eastern longitudes are positive, and southern latitudes and western longitudes are negative. Note that the south-west corner may be more "eastern" than the north-east corner if the selected rectangle crosses the 180° meridian

But usually we are only given one latitude and one longitude. What kind of expressions should I write to build the two corner points as stated above, to cover the pictures around the single point I have in hand?

For example, I have in Paris:

lat: 48.8566667
lng: 2.3509871

I want to cover pictures in a 10 km rectangle around it. Thanks.

A: Here's the answer I got from the Panoramio Forum, by QuentinUK.

You can't do a 10 km distance directly because that implies a circular region, and the API can only do rectangular regions. So you might as well approximate (best is to use Vincenty's formulae) and calculate an angle +/- around the point.

function requestAroundLatLong(lat, lng, km) {
    // angle per km = 360 / (2 * pi * 6378) = 0.0089833458
    var angle = km * 0.0089833458;
    var myRequest = new panoramio.PhotoRequest({
        'rect': {'sw': {'lat': lat - angle, 'lng': lng - angle},
                 'ne': {'lat': lat + angle, 'lng': lng + angle}}
    });
    return myRequest;
}

var widget = new panoramio.PhotoWidget('wapiblock', requestAroundLatLong(48.8566667, 2.3509871, 10), myOptions);
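The degree-per-kilometre constant in the answer can be factored into a standalone helper that does not depend on the Panoramio library at all (the name `rectAround` is ours, not part of any API). Note this is the same flat-earth approximation as above: it ignores that a degree of longitude shrinks by cos(latitude) away from the equator, so the box over-covers east-west at Paris:

```javascript
// Standalone sketch of the same approximation (no Panoramio dependency).
// 1 km is roughly 360 / (2 * pi * 6378) degrees, i.e. 0.0089833458.
function rectAround(lat, lng, km) {
  var angle = km * 0.0089833458;
  return {
    sw: { lat: lat - angle, lng: lng - angle },
    ne: { lat: lat + angle, lng: lng + angle }
  };
}

// Example: the Paris point from the question, with km = 10.
var r = rectAround(48.8566667, 2.3509871, 10);
console.log(JSON.stringify(r));
```

The returned object has exactly the `sw`/`ne` shape the `rect` option expects, so it can be dropped into `panoramio.PhotoRequest` directly.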
/*
 * Copyright 2015 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.springframework.social.connect.web;

import static java.util.Arrays.*;

import java.util.List;
import java.util.Map.Entry;

import javax.servlet.http.HttpServletRequest;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.social.connect.Connection;
import org.springframework.social.connect.ConnectionFactory;
import org.springframework.social.connect.support.OAuth1ConnectionFactory;
import org.springframework.social.connect.support.OAuth2ConnectionFactory;
import org.springframework.social.oauth1.AuthorizedRequestToken;
import org.springframework.social.oauth1.OAuth1Operations;
import org.springframework.social.oauth1.OAuth1Parameters;
import org.springframework.social.oauth1.OAuth1Version;
import org.springframework.social.oauth1.OAuthToken;
import org.springframework.social.oauth2.AccessGrant;
import org.springframework.social.oauth2.OAuth2Operations;
import org.springframework.social.oauth2.OAuth2Parameters;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.context.request.NativeWebRequest;
import org.springframework.web.context.request.WebRequest;

/**
 * Provides common connect support and utilities for Java web/servlet environments.
 * Used by {@link ConnectController} and {@link ProviderSignInController}.
 * @author Keith Donald
 */
public class ConnectSupport {

	private final static Log logger = LogFactory.getLog(ConnectSupport.class);

	private boolean useAuthenticateUrl;

	private String applicationUrl;

	private String callbackUrl;

	private SessionStrategy sessionStrategy;

	public ConnectSupport() {
		this(new HttpSessionSessionStrategy());
	}

	public ConnectSupport(SessionStrategy sessionStrategy) {
		this.sessionStrategy = sessionStrategy;
	}

	/**
	 * Flag indicating if this instance will support OAuth-based authentication instead of the traditional user authorization.
	 * Some providers expose a special "authenticateUrl" the user should be redirected to as part of an OAuth-based authentication attempt.
	 * Setting this flag to true has {@link #buildOAuthUrl(ConnectionFactory, NativeWebRequest) oauthUrl} return this authenticate URL.
	 * @param useAuthenticateUrl whether to use the authenticate URL or not
	 * @see OAuth1Operations#buildAuthenticateUrl(String, OAuth1Parameters)
	 * @see OAuth2Operations#buildAuthenticateUrl(OAuth2Parameters)
	 */
	public void setUseAuthenticateUrl(boolean useAuthenticateUrl) {
		this.useAuthenticateUrl = useAuthenticateUrl;
	}

	/**
	 * Configures the base secure URL for the application this controller is being used in e.g. <code>https://myapp.com</code>. Defaults to null.
	 * If specified, will be used to generate OAuth callback URLs.
	 * If not specified, OAuth callback URLs are generated from {@link HttpServletRequest HttpServletRequests}.
	 * You may wish to set this property if requests into your application flow through a proxy to your application server.
	 * In this case, the HttpServletRequest URI may contain a scheme, host, and/or port value that points to an internal server not appropriate for an external callback URL.
	 * If you have this problem, you can set this property to the base external URL for your application and it will be used to construct the callback URL instead.
	 * @param applicationUrl the application URL value
	 */
	public void setApplicationUrl(String applicationUrl) {
		this.applicationUrl = applicationUrl;
	}

	/**
	 * Configures a specific callback URL that is to be used instead of calculating one based on the application URL or current request URL.
	 * When set this URL will override the default behavior where the callback URL is derived from the current request and/or a specified application URL.
	 * When set along with applicationUrl, the applicationUrl will be ignored.
	 * @param callbackUrl the callback URL to send to providers during authorization. Default is null.
	 */
	public void setCallbackUrl(String callbackUrl) {
		this.callbackUrl = callbackUrl;
	}

	/**
	 * Builds the provider URL to redirect the user to for connection authorization.
	 * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
	 * @param request the current web request
	 * @return the URL to redirect the user to for authorization
	 * @throws IllegalArgumentException if the connection factory is not OAuth1 based.
	 */
	public String buildOAuthUrl(ConnectionFactory<?> connectionFactory, NativeWebRequest request) {
		return buildOAuthUrl(connectionFactory, request, null);
	}

	/**
	 * Builds the provider URL to redirect the user to for connection authorization.
	 * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
	 * @param request the current web request
	 * @param additionalParameters parameters to add to the authorization URL.
	 * @return the URL to redirect the user to for authorization
	 * @throws IllegalArgumentException if the connection factory is not OAuth1 based.
	 */
	public String buildOAuthUrl(ConnectionFactory<?> connectionFactory, NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
		if (connectionFactory instanceof OAuth1ConnectionFactory) {
			return buildOAuth1Url((OAuth1ConnectionFactory<?>) connectionFactory, request, additionalParameters);
		} else if (connectionFactory instanceof OAuth2ConnectionFactory) {
			return buildOAuth2Url((OAuth2ConnectionFactory<?>) connectionFactory, request, additionalParameters);
		} else {
			throw new IllegalArgumentException("ConnectionFactory not supported");
		}
	}

	/**
	 * Complete the connection to the OAuth1 provider.
	 * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
	 * @param request the current web request
	 * @return a new connection to the service provider
	 */
	public Connection<?> completeConnection(OAuth1ConnectionFactory<?> connectionFactory, NativeWebRequest request) {
		String verifier = request.getParameter("oauth_verifier");
		AuthorizedRequestToken requestToken = new AuthorizedRequestToken(extractCachedRequestToken(request), verifier);
		OAuthToken accessToken = connectionFactory.getOAuthOperations().exchangeForAccessToken(requestToken, null);
		return connectionFactory.createConnection(accessToken);
	}

	/**
	 * Complete the connection to the OAuth2 provider.
	 * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
	 * @param request the current web request
	 * @return a new connection to the service provider
	 */
	public Connection<?> completeConnection(OAuth2ConnectionFactory<?> connectionFactory, NativeWebRequest request) {
		if (connectionFactory.supportsStateParameter()) {
			verifyStateParameter(request);
		}
		String code = request.getParameter("code");
		try {
			AccessGrant accessGrant = connectionFactory.getOAuthOperations().exchangeForAccess(code, callbackUrl(request), null);
			return connectionFactory.createConnection(accessGrant);
		} catch (HttpClientErrorException e) {
			logger.warn("HttpClientErrorException while completing connection: " + e.getMessage());
			logger.warn(" Response body: " + e.getResponseBodyAsString());
			throw e;
		}
	}

	private void verifyStateParameter(NativeWebRequest request) {
		String state = request.getParameter("state");
		String originalState = extractCachedOAuth2State(request);
		if (state == null || !state.equals(originalState)) {
			throw new IllegalStateException("The OAuth2 'state' parameter is missing or doesn't match.");
		}
	}

	protected String callbackUrl(NativeWebRequest request) {
		if (callbackUrl != null) {
			return callbackUrl;
		}
		HttpServletRequest nativeRequest = request.getNativeRequest(HttpServletRequest.class);
		if (applicationUrl != null) {
			return applicationUrl + connectPath(nativeRequest);
		} else {
			return nativeRequest.getRequestURL().toString();
		}
	}

	// internal helpers

	private String buildOAuth1Url(OAuth1ConnectionFactory<?> connectionFactory, NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
		OAuth1Operations oauthOperations = connectionFactory.getOAuthOperations();
		MultiValueMap<String, String> requestParameters = getRequestParameters(request);
		OAuth1Parameters parameters = getOAuth1Parameters(request, additionalParameters);
		parameters.putAll(requestParameters);
		if (oauthOperations.getVersion() == OAuth1Version.CORE_10) {
			parameters.setCallbackUrl(callbackUrl(request));
		}
		OAuthToken requestToken = fetchRequestToken(request, requestParameters, oauthOperations);
		sessionStrategy.setAttribute(request, OAUTH_TOKEN_ATTRIBUTE, requestToken);
		return buildOAuth1Url(oauthOperations, requestToken.getValue(), parameters);
	}

	private OAuth1Parameters getOAuth1Parameters(NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
		OAuth1Parameters parameters = new OAuth1Parameters(additionalParameters);
		parameters.putAll(getRequestParameters(request));
		return parameters;
	}

	private OAuthToken fetchRequestToken(NativeWebRequest request, MultiValueMap<String, String> requestParameters, OAuth1Operations oauthOperations) {
		if (oauthOperations.getVersion() == OAuth1Version.CORE_10_REVISION_A) {
			return oauthOperations.fetchRequestToken(callbackUrl(request), requestParameters);
		}
		return oauthOperations.fetchRequestToken(null, requestParameters);
	}

	private String buildOAuth2Url(OAuth2ConnectionFactory<?> connectionFactory, NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
		OAuth2Operations oauthOperations = connectionFactory.getOAuthOperations();
		String defaultScope = connectionFactory.getScope();
		OAuth2Parameters parameters = getOAuth2Parameters(request, defaultScope, additionalParameters);
		String state = connectionFactory.generateState();
		parameters.add("state", state);
		sessionStrategy.setAttribute(request, OAUTH2_STATE_ATTRIBUTE, state);
		if (useAuthenticateUrl) {
			return oauthOperations.buildAuthenticateUrl(parameters);
		} else {
			return oauthOperations.buildAuthorizeUrl(parameters);
		}
	}

	private OAuth2Parameters getOAuth2Parameters(NativeWebRequest request, String defaultScope, MultiValueMap<String, String> additionalParameters) {
		OAuth2Parameters parameters = new OAuth2Parameters(additionalParameters);
		parameters.putAll(getRequestParameters(request, "scope"));
		parameters.setRedirectUri(callbackUrl(request));
		String scope = request.getParameter("scope");
		if (scope != null) {
			parameters.setScope(scope);
		} else if (defaultScope != null) {
			parameters.setScope(defaultScope);
		}
		return parameters;
	}

	private String connectPath(HttpServletRequest request) {
		String pathInfo = request.getPathInfo();
		return request.getServletPath() + (pathInfo != null ? pathInfo : "");
	}

	private String buildOAuth1Url(OAuth1Operations oauthOperations, String requestToken, OAuth1Parameters parameters) {
		if (useAuthenticateUrl) {
			return oauthOperations.buildAuthenticateUrl(requestToken, parameters);
		} else {
			return oauthOperations.buildAuthorizeUrl(requestToken, parameters);
		}
	}

	private OAuthToken extractCachedRequestToken(WebRequest request) {
		OAuthToken requestToken = (OAuthToken) sessionStrategy.getAttribute(request, OAUTH_TOKEN_ATTRIBUTE);
		sessionStrategy.removeAttribute(request, OAUTH_TOKEN_ATTRIBUTE);
		return requestToken;
	}

	private String extractCachedOAuth2State(WebRequest request) {
		String state = (String) sessionStrategy.getAttribute(request, OAUTH2_STATE_ATTRIBUTE);
		sessionStrategy.removeAttribute(request, OAUTH2_STATE_ATTRIBUTE);
		return state;
	}

	private MultiValueMap<String, String> getRequestParameters(NativeWebRequest request, String... ignoredParameters) {
		List<String> ignoredParameterList = asList(ignoredParameters);
		MultiValueMap<String, String> convertedMap = new LinkedMultiValueMap<String, String>();
		for (Entry<String, String[]> entry : request.getParameterMap().entrySet()) {
			if (!ignoredParameterList.contains(entry.getKey())) {
				convertedMap.put(entry.getKey(), asList(entry.getValue()));
			}
		}
		return convertedMap;
	}

	private static final String OAUTH_TOKEN_ATTRIBUTE = "oauthToken";

	private static final String OAUTH2_STATE_ATTRIBUTE = "oauth2State";

}
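The `verifyStateParameter` check above is the CSRF guard at the heart of the OAuth2 callback handling: the `state` value cached in the session before the redirect must match the one the provider echoes back, and a `null` value is rejected. A self-contained sketch of just that comparison (the class and method names here are ours for illustration, not Spring Social's):

```java
// Standalone sketch of the OAuth2 "state" comparison performed by
// verifyStateParameter(): null or mismatching values are rejected.
import java.util.UUID;

public class StateCheckSketch {

	// Assumption: a random UUID is a good-enough stand-in for the
	// unguessable token produced before the authorization redirect.
	static String generateState() {
		return UUID.randomUUID().toString();
	}

	// Mirrors the guard: reject null, reject anything that differs
	// from the session-cached value.
	static boolean stateMatches(String returned, String cached) {
		return returned != null && returned.equals(cached);
	}

	public static void main(String[] args) {
		String cached = generateState();
		System.out.println(stateMatches(cached, cached));   // prints true
		System.out.println(stateMatches("forged", cached)); // prints false
	}
}
```

In the real class the cached value is additionally removed from the session on first read (`extractCachedOAuth2State`), so a replayed callback fails the same check.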
--- abstract: 'We prove the full range of estimates for a five-linear singular integral of Brascamp-Lieb type. The study is methodology-oriented with the goal to develop a sufficiently general technique to estimate singular integral variants of Brascamp-Lieb inequalities that are not of Hölder type. The invented methodology constructs localized analysis on the entire space from local information on its subspaces of lower dimensions and combines such tensor-type arguments with the generic localized analysis. A direct consequence of the boundedness of the five-linear singular integral is a Leibniz rule which captures nonlinear interactions of waves from transversal directions.' address: - 'Department of Mathematics, Cornell University, Ithaca, NY ' - 'Laboratoire de Mathématiques, Université de Nantes, Nantes' author: - Camil Muscalu - Yujia Zhai title: 'Five-Linear Singular Integral Estimates of Brascamp-Lieb Type' --- Introduction ============ Background and Motivation ------------------------- Brascamp-Lieb inequalities refer to inequalities of the form $$\begin{aligned} \label{classical_bl} \displaystyle \int_{\mathbb{R}^n} \big|\prod_{j=1}^{m}F_j(L_j(x))\big| dx \leq \text{BL}(\textbf{L,p})\prod_{j=1}^{m}\left(\int_{\mathbb{R}^{k_j}}|F_j|^{p_j}\right)^{\frac{1}{p_j}},\end{aligned}$$ where $\text{BL}(\textbf{L,p})$ represents the Brascamp-Lieb constant depending on $\textbf{L} := (L_j)_{j=1}^m$ and $\textbf{p} := (p_j)_{j=1}^m$. For each $1 \leq j \leq m$, $L_j: R^{n} \rightarrow R^{k_j}$ is a linear surjection and $p_j \geq 1$. One equivalent formulation of (\[classical\_bl\]) is $$\begin{aligned} \label{classical_bl_exp} \displaystyle \bigg(\int_{\mathbb{R}^n} \big|\prod_{j=1}^{m}F_j(L_j(x))\big|^r dx\bigg)^{\frac{1}{r}} \leq \text{BL}(\textbf{L},r\textbf{p})\prod_{j=1}^{m}\left(\int_{\mathbb{R}^{k_j}}|F_j|^{rp_j}\right)^{\frac{1}{rp_j}},\end{aligned}$$ for any $r > 0$. 
Brascamp-Lieb inequalities have been well-developed in [@bl], [@bcct], [@bbcf], [@bbbf], [@chv]. Examples of Brascamp-Lieb inequalities consist of Hölder’s inequality and the Loomis-Whitney inequality. Singular integral estimates corresponding to Hölder’s inequality have been studied extensively, including boundedness of single-parameter paraproducts [@cm] and multi-parameter paraproducts [@cptt], [@cptt_2], single-parameter flag paraproducts [@c_flag], bilinear Hilbert transform [@lt], multilinear operators of arbitrary rank [@mtt2002], etc. But it is of course natural to ask if there are similar singular integral estimates corresponding to Brascamp-Lieb inequalities that are not necessarily of Hölder type. This question was asked to us by Jonathan Bennett during a conference in Matsumoto, Japan, in February 2016. Since then, we adopted the informal definition of *singular integral estimate of Brascamp-Lieb type* as the singular integral estimate which is reduced to a classical Brascamp-Lieb inequality when the kernels are replaced by Dirac distributions. For the readers familiar with the recent expository work of Durcik and Thiele in [@dt2], this is similar to the generic estimate (2.3) from [@dt2]. So far, to the best of our knowledge, the only research article in the literature where the term “singular Brascamp-Lieb” has been used is the recent work by Durcik and Thiele [@dt]. However, we would like to emphasize that the basic inequalities[^1] corresponding to the “cubic singular expressions” considered in [@dt] are still of Hölder type, and the term “singular Brascamp-Lieb” was used to underline that the necessary and sufficient boundedness condition (1.6) of [@dt] is of the same flavor as the one for classical Brascamp-Lieb inequalities stated as (8) in [@bcct]. 
Techniques to tackle multilinear singular integral operators corresponding to Hölder’s inequality [@cm], [@cptt], [@cptt_2], [@c_flag], [@lt], [@mtt2002] usually involve localizations on phase space subsets of the full-dimension. In contrast, the understanding of singular integral estimates corresponding to Brascamp-Lieb inequalities with $k_j < n$ for some $k_j$ in (\[classical\_bl\_exp\]) (and thus not of Hölder scaling) is far beyond satisfaction. The ultimate goal would be to develop a general methodology to treat a large class of singular Brascamp-Lieb estimates that are not of Hölder type. It is natural to believe that such an approach would need to extract and integrate local information on subspaces of lower dimensions. Also due to its multilinear structure, localizations on the entire space could be necessary as well and a hybrid of both localized analyses would be demanded. The subject of our study in this present paper is one of the simplest multilinear operators, whose complete understanding cannot be reduced to earlier results[^2] and which requires such a new type of analysis. More precisely, it is the five-linear operator defined by $$\begin{aligned} \label{bi_flag_int} & T_{K_1K_2}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) \nonumber \\ = & p.v. \displaystyle \int_{\mathbb{R}^{10}} K_1\big((t_1^1, t_1^2),(t_2^1,t_2^2))K_2((s_1^1,s_1^2), (s_2^1,s_2^2), (s_3^1,s_3^2)\big) \cdot \nonumber \\ &\quad \quad \quad f_1(x-t_1^1-s_1^1)f_2(x-t_2^1-s_2^1)g_1(y-t_1^2-s_1^2)g_2(y-t_2^2-s_2^2)h(x-s_3^1,y-s_3^2) d\vec{t_1} d\vec{t_2} d \vec{s_1} d\vec{s_2} d\vec{s_3},\end{aligned}$$ where $\vec{t_i} = (t_i^1, t_i^2)$, $\vec{s_j} = (s_j^1,s_j^2)$ for $i = 1, 2$ and $j = 1,2,3$. 
In (\[bi\_flag\_int\]), $K_1$ and $K_2$ are Calderón-Zygmund kernels that satisfy $$\begin{aligned} & |\nabla K_1(\vec{t_1}, \vec{t_2})| \lesssim \frac{1}{|(t_1^1,t_2^1)|^{3}}\frac{1}{|(t_1^2,t_2^2)|^{3}}, \nonumber \\ & |\nabla K_2(\vec{s_1}, \vec{s_2}, \vec{s_3})| \lesssim \frac{1}{|(s_1^1,s_2^1,s_3^1)|^{4}}\frac{1}{|(s_1^2,s_2^2, s_3^2)|^{4}} .\end{aligned}$$ As one can see, the operator $T_{K_1K_2}$ takes two functions depending on the $x$ variable ($f_1$ and $f_2$), two functions depending on the $y$ variable ($g_1$ and $g_2$) and one depending on both $x$ and $y$ (namely $h$) into another function of $x$ and $y$. Our goal is to prove that $T_{K_1K_2}$ satisfies the mapping property $$L^{p_1}(\mathbb{R}_x) \times L^{q_1}(\mathbb{R}_x) \times L^{p_2}(\mathbb{R}_y) \times L^{q_2}(\mathbb{R}_y) \times L^{s}(\mathbb{R}^2) \rightarrow L^{r}(\mathbb{R}^2)$$ for $1 < p_1, p_2, q_1, q_2, s \leq \infty$, $r >0$, $(p_1,q_1), (p_2,q_2) \neq (\infty, \infty)$ with $$\label{bl_exp} \frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} = \frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r}.$$ To verify that the boundedness of $T_{K_1K_2}$ qualifies as a singular integral estimate of Brascamp-Lieb type, one can remove the singularities by setting $$\begin{aligned} & K_1(\vec{t_1}, \vec{t_2}) = \delta_{\textbf{0}}(\vec{t_1}, \vec{t_2}), \nonumber \\ & K_2(\vec{s_1}, \vec{s_2}, \vec{s_3}) = \delta_{\textbf{0}}(\vec{s_1}, \vec{s_2}, \vec{s_3}),\end{aligned}$$ and express its boundedness explicitly as $$\begin{aligned} \label{flag_bl} \|f_1(x) f_2(x) g_1(y) g_2(y) h(x,y)\|_{r} \lesssim \|f_1\|_{L^{p_1}(\mathbb{R}_x)}\|f_2\|_{L^{q_1}(\mathbb{R}_x)}\|g_1\|_{L^{p_2}(\mathbb{R}_y)} \|g_2\|_{L^{q_2}(\mathbb{R}_y)}\|h\|_{L^{s}(\mathbb{R}^2)}.\end{aligned}$$ The above inequality follows from Hölder's inequality and the Loomis-Whitney inequality, which, in this simple two-dimensional case, is the same as Fubini's theorem.
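To make the reduction explicit, here is a short sketch of how (\[flag\_bl\]) follows, with the auxiliary exponent $w$ (our notation, not used elsewhere in the paper) defined by $\frac{1}{w} := \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2}$, so that $\frac{1}{r} = \frac{1}{w} + \frac{1}{s}$: $$\begin{aligned} \|f_1 f_2 g_1 g_2 h\|_{L^{r}(\mathbb{R}^2)} & \leq \|(f_1 f_2) \otimes (g_1 g_2)\|_{L^{w}(\mathbb{R}^2)} \|h\|_{L^{s}(\mathbb{R}^2)} \nonumber \\ & = \|f_1 f_2\|_{L^{w}(\mathbb{R}_x)} \|g_1 g_2\|_{L^{w}(\mathbb{R}_y)} \|h\|_{L^{s}(\mathbb{R}^2)} \nonumber \\ & \leq \|f_1\|_{L^{p_1}(\mathbb{R}_x)}\|f_2\|_{L^{q_1}(\mathbb{R}_x)}\|g_1\|_{L^{p_2}(\mathbb{R}_y)}\|g_2\|_{L^{q_2}(\mathbb{R}_y)}\|h\|_{L^{s}(\mathbb{R}^2)},\end{aligned}$$ where the first step is Hölder's inequality in $L^{r}(\mathbb{R}^2)$, the second is Fubini's theorem applied to the tensor product, and the last is Hölder's inequality in each variable separately.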
Clearly, it is an inequality of the same type as (\[classical\_bl\_exp\]), with a different homogeneity than Hölder. Moreover, this reduction shows that (\[bl\_exp\]) is indeed a necessary condition on the boundedness exponents of (\[flag\_bl\]) and thus of (\[bi\_flag\_int\]).

Connection with Other Multilinear Objects
-----------------------------------------

The connection with other well-established multilinear operators that we will describe next justifies that $T_{K_1K_2}$ defined in (\[bi\_flag\_int\]) is a reasonably simple and interesting operator to study, with the hope of inventing a general method that can handle a large class of singular integral estimates of Brascamp-Lieb type with non-Hölder scaling. Let $\mathcal{M}(\mathbb{R}^d)$ denote the set of all bounded symbols $m \in L^{\infty}(\mathbb{R}^d)$ smooth away from the origin and satisfying the Marcinkiewicz-Hörmander-Mihlin condition $$\left|\partial^{\alpha} m(\xi) \right| \lesssim \frac{1}{|\xi|^{|\alpha|}}$$ for any $\xi \in \mathbb{R}^d \setminus \{0\}$ and sufficiently many multi-indices $\alpha$. The simplest singular integral operator which corresponds to the two-dimensional Loomis-Whitney inequality would be $$\label{tensor_ht} T_{m_1m_2}(f^x, g^y)(x,y) := \int_{\mathbb{R}^2} m_1(\xi)m_2(\eta) {\widehat}{f}(\xi) {\widehat}{g}(\eta) e^{2 \pi i x \xi} e^{2\pi i y\eta}d\xi d\eta,$$ where $m_1, m_2 \in \mathcal{M}(\mathbb{R})$. (\[tensor\_ht\]) is a tensor product of Hilbert transforms whose boundedness is well-known. The bilinear variant of (\[tensor\_ht\]) can be expressed as $$\begin{aligned} \label{tensor_para} &T_{m_1m_2}(f_1^x,f_2^x, g_1^y, g_2^y)(x,y) \nonumber \\ := & \int_{\mathbb{R}^4} m_1(\xi_1,\xi_2) m_2(\eta_1,\eta_2) {\widehat}{f_1}(\xi_1) {\widehat}{f_2}(\xi_2){\widehat}{g_1}(\eta_1) {\widehat}{g_2}(\eta_2)e^{2 \pi i x(\xi_1+\xi_2)}e^{2 \pi i y(\eta_1+\eta_2)} d\xi_1 d\xi_2 d\eta_1 d\eta_2,\end{aligned}$$ where $m_1, m_2 \in \mathcal{M}(\mathbb{R}^2)$.
It can be separated as a tensor product of single-parameter paraproducts whose boundedness are proved by Coifman-Meyer’s theorem [@cm]. To avoid trivial tensor products of single-parameter operators, one then completes (\[tensor\_para\]) by adding a generic function of two variables thus obtaining $$\begin{aligned} \label{bi_pp} &T_{b}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y})(x,y) \nonumber \\ :=& \int_{\mathbb{R}^4} b((\xi_1,\eta_1),(\xi_2,\eta_2),(\xi_3,\eta_3)) {\widehat}{f_1 \otimes g_1}(\xi_1, \eta_1) {\widehat}{f_2 \otimes g_2}(\xi_2, \eta_2) {\widehat}{h}(\xi_3,\eta_3) \nonumber \\ & \quad \quad \cdot e^{2 \pi i x(\xi_1+\xi_2+ \xi_3)}e^{2 \pi i y(\eta_1+\eta_2+ \eta_3)} d\xi_1 d\xi_2 d\eta_1 d\eta_2,\end{aligned}$$ where $$\begin{aligned} & \left|\partial^{\alpha}_{(\xi_1,\xi_2,\xi_3)} \partial^{\beta}_{(\eta_1,\eta_2, \eta_3)} b \right| \lesssim \frac{1}{|(\xi_1,\xi_2,\xi_3)|^{|\alpha|}|(\eta_1,\eta_2,\eta_3)|^{|\beta|}}\end{aligned}$$ for sufficiently many multi-indices $\alpha$ and $\beta$. Such a multilinear operator is indeed a bi-parameter paraproduct whose theory has been developed by Muscalu, Pipher, Tao and Thiele [@cptt]. It also appeared naturally in nonlinear PDEs, such as Kadomtsev-Petviashvili equations studied by Kenig [@k]. To reach beyond bi-parameter paraproducts, one then replaces the singularity in each subspace by a flag singularity. In one dimension, the corresponding trilinear operator takes the form $$\label{flag} T_{m_1m_2}(f_1,f_2,f_3)(x) := \int_{\mathbb{R}^3} m_1(\xi_1,\xi_2)m_2(\xi_1,\xi_2,\xi_3) {\widehat}{f_1}(\xi_1) {\widehat}{f_2}(\xi_2) {\widehat}{f_3}(\xi_3) e^{2 \pi i x(\xi_1+\xi_2+\xi_3)} d\xi_1 d\xi_2 d\xi_3,$$ where $m_1 \in \mathcal{M}(\mathbb{R}^2)$ and $m_2 \in \mathcal{M}(\mathbb{R}^3)$. The operator (\[flag\]) was studied by Muscalu [@c_flag] using time-frequency analysis which applies not only to the operator itself, but also to all of its adjoints. 
Miyachi and Tomita [@mt] extended the $L^p$-boundedness for $p>1$ established in [@c_flag] to all Hardy spaces $H^p$ with $p > 0$. The single-parameter flag paraproduct and its adjoints are closely related to various nonlinear partial differential equations, including nonlinear Schrödinger equations and water wave equations as discovered by Germain, Masmoudi and Shatah [@gms]. Its bi-parameter variant is indeed related to the subject of our study and is equivalent to (\[bi\_flag\_int\]): $$\begin{aligned} \label{bi_flag_mult} T_{ab}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \int_{\mathbb{R}^6} & a((\xi_1,\eta_1),(\xi_2,\eta_2)) b((\xi_1,\eta_1),(\xi_2,\eta_2),(\xi_3,\eta_3)) {\widehat}{f_1 \otimes g_1}(\xi_1, \eta_1) {\widehat}{f_2 \otimes g_2}(\xi_2, \eta_2) \nonumber \\ & \cdot {\widehat}{h}(\xi_3,\eta_3) e^{2\pi i x(\xi_1+\xi_2+\xi_3)}e^{2\pi i y(\eta_1+\eta_2+\eta_3)} d \xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3,\end{aligned}$$ where $$\begin{aligned} & \left|\partial^{\alpha_1}_{(\xi_1,\xi_2)} \partial^{\beta_1}_{(\eta_1,\eta_2)} a\right| \lesssim \frac{1}{|(\xi_1,\xi_2)|^{|\alpha_1|}|(\eta_1,\eta_2)|^{|\beta_1|}}, \nonumber \\ & \left|\partial^{\alpha_2}_{(\xi_1,\xi_2,\xi_3)} \partial^{\beta_2}_{(\eta_1,\eta_2, \eta_3)} b \right| \lesssim \frac{1}{|(\xi_1,\xi_2,\xi_3)|^{|\alpha_2|}|(\eta_1,\eta_2,\eta_3)|^{|\beta_2|}},\end{aligned}$$ for sufficiently many multi-indices $\alpha_1, \beta_1, \alpha_2$ and $\beta_2$. 
The equivalence can be derived with $$\begin{aligned} & a = {\widehat}{K_1}, \nonumber \\ & b = {\widehat}{K_2}.\end{aligned}$$ The general bi-parameter trilinear flag paraproduct is defined on larger function spaces where the tensor products are replaced by general functions in the plane.[^3] From this perspective, $T_{ab}$ or equivalently $T_{K_1K_2}$ defined in (\[bi\_flag\_mult\]) and (\[bi\_flag\_int\]) respectively can be viewed as a trilinear operator with the desired mapping property $$T_{ab}: L^{p_1}_{x}(L^{p_2}_y) \times L^{q_1}_{x}(L^{q_2}_y) \times L^{s}(\mathbb{R}^2) \rightarrow L^{r}(\mathbb{R}^2)$$ for $ 1 < p_1, p_2, q_1, q_2, s \leq \infty$, $r > 0$, $(p_1, q_1), (p_2,q_2) \neq (\infty, \infty)$ and $\frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} = \frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r}$, where the first two function spaces are restricted to be tensor-product spaces. The condition that $(p_1, q_1), (p_2,q_2) \neq (\infty, \infty)$ is inherited from single-parameter flag paraproducts and can be verified by the unboundedness of the operator when $f_1, f_2 \in L^{\infty}(\mathbb{R}_x)$ are constant functions. Lu, Pipher and Zhang [@lpz] showed that the general bi-parameter flag paraproduct can be reduced to an operator given by a symbol with better singularity using an argument inspired by Miyachi and Tomita [@mt]. The boundedness of the reduced multiplier operator still remains open. The reduction allows an alternative proof of $L^p$-boundedness for (\[bi\_flag\_mult\]) as long as $p \neq \infty$. However, we emphasize again that we will not take this point of view now, and instead, we treat our operator $T_{ab}$ as a five-linear operator.
Methodology
-----------

As one may notice from the last section, the five-linear operator $T_{ab}$ (or $T_{K_1K_2}$) contains the features of the bi-parameter paraproduct defined in (\[bi\_pp\]) and the single-parameter flag paraproduct defined in (\[flag\]), which hints that the methodology would embrace localized analyses of both operators. Nonetheless, it is by no means a simple concatenation of two existing arguments. The methodology includes

1.  **tensor-type stopping-time decomposition**, which refers to an algorithm that first implements a one-dimensional stopping-time decomposition for each variable and then combines information for different variables to obtain estimates for operators involving several variables;

2.  **general two-dimensional level sets stopping-time decomposition**, which refers to an algorithm that partitions the collection of dyadic rectangles such that the dyadic rectangles in each sub-collection intersect with a certain level set non-trivially;

and the main novelty lies in (i) the construction of two-dimensional stopping-time decompositions from stopping-time decompositions on one-dimensional subspaces; (ii) the hybrid of tensor-type and general two-dimensional level sets stopping-time decompositions in a meaningful fashion.

The methodology outlined above is considered to be robust in the sense that it captures all local behaviors of the operator. The robustness may also be verified by the entire range of estimates obtained. After closer inspection of the technique, it would not be surprising that the technique gives estimates involving $L^{\infty}$-norms. In particular, the tensor-type stopping-time decompositions process information on each subspace independently. As a consequence, when some function defined on some subspace lies in $L^{\infty}$, one simply “forgets” about that function and glues the information from the subspaces in an intelligent way specified later.
Structure --------- The paper is organized as follows: the main theorems are stated in Chapter 2, followed by preliminary definitions and theorems introduced in Chapter 3. Chapter 4 describes the reduced discrete model operators and the estimates one needs to obtain for the model operators, while the reduction procedure is postponed to Appendix II. Chapter 5 gives the definitions and estimates for the building blocks in the argument - sizes and energies. Chapters 6 - 9 focus on estimates for the model operators in the Haar case. All four chapters start with a specification of the stopping-time decompositions used. Chapter 10 extends all the estimates in the Haar setting to the general Fourier case. It is also important to notice that Chapter 6 develops an argument for one of the simpler model operators with emphasis on the key geometric feature implied by a stopping-time decomposition, that is, the sparsity condition. Chapter 7 focuses on a more complicated model which requires not only the sparsity condition, but also a Fubini-type argument which is discussed in detail. Chapters 8 and 9 are devoted to estimates involving $L^{\infty}$-norms, and the arguments for those cases are similar to the ones in Chapter 6 in the sense that the sparsity condition is sufficient to obtain the results. Acknowledgements. ----------------- We thank Jonathan Bennett for the inspiring conversation we had in Matsumoto, Japan, in February 2016, that triggered our interest in considering and understanding singular integral generalizations of Brascamp-Lieb inequalities, and, in particular, the study of the present paper. We also thank Guozhen Lu, Jill Pipher and Lu Zhang for discussions about their recent work in [@lpz]. Finally, we thank Polona Durcik and Christoph Thiele for the recent conversation which clarified the similarities and differences between the results in [@dt] and those in our paper and [@bm2]. The first author was partially supported by a Grant from the Simons Foundation.
The second author was partially supported by the ERC Project FAnFArE no. 637510. Main Results ============ We state the main results in Theorem \[main\_theorem\] and \[main\_thm\_inf\]. Theorem \[main\_theorem\] proves the boundedness when $p_i, q_i$ are strictly between $1$ and infinity whereas Theorem \[main\_thm\_inf\] deals with the case when $p_i = \infty$ or $q_j = \infty$ for some $i\neq j$. \[main\_theorem\] Suppose $a \in L^{\infty}(\mathbb{R}^4)$, $b\in L^{\infty}(\mathbb{R}^6)$, where $a$ and $b$ are smooth away from $\{(\xi_1,\xi_2) = 0 \} \cup \{(\eta_1,\eta_2) = 0 \}$ and $\{(\xi_1, \xi_2,\xi_3) = 0 \} \cup \{(\eta_1,\eta_2,\eta_3) = 0\}$ respectively and satisfy the following Marcinkiewicz conditions: $$\begin{aligned} & |\partial^{\alpha_1}_{\xi_1} \partial^{\alpha_2}_{\eta_1} \partial^{\beta_1}_{\xi_2} \partial^{\beta_2}_{\eta_2} a(\xi_1,\eta_1, \xi_2,\eta_2)| \lesssim \frac{1}{|(\xi_1,\xi_2)|^{\alpha_1 + \beta_1}} \frac{1}{|(\eta_1,\eta_2)|^{\alpha_2+\beta_2}}, \nonumber \\ & |\partial^{\bar{\alpha_1}}_{\xi_1} \partial^{\bar{\alpha_2}}_{\eta_1} \partial^{\bar{\beta_1}}_{\xi_2} \partial^{\bar{\beta_2}}_{\eta_2}\partial^{\bar{\gamma_1}}_{\xi_3} \partial^{\bar{\gamma_2}}_{\eta_3}b(\xi_1,\eta_1, \xi_2,\eta_2, \xi_3, \eta_3)| \lesssim \frac{1}{|(\xi_1,\xi_2, \xi_3)|^{\bar{\alpha_1} + \bar{\beta_1}+\bar{\gamma_1}}} \frac{1}{|(\eta_1,\eta_2, \eta_3)|^{\bar{\alpha_2}+\bar{\beta_2}+ \bar{\gamma_2}}}\end{aligned}$$ for sufficiently many multi-indices $\alpha_1,\alpha_2,\beta_1,\beta_2, \bar{\alpha_1}, \bar{\alpha_2},\bar{\beta_1},\bar{\beta_2}, \bar{\gamma_1}, \bar{\gamma_2} \geq 0$. 
For $f_1, f_2, g_1,g_2 \in \mathcal{S}(\mathbb{R})$ and $h \in \mathcal{S}(\mathbb{R}^2)$ where $\mathcal{S}(\mathbb{R})$ and $\mathcal{S}(\mathbb{R}^2)$ denote the Schwartz spaces, define $$\begin{aligned} \label{bi_flag} \displaystyle T_{ab}(f^x_1, f^x_2, g_1^y ,g^y_2,h^{x,y}) := \int_{\mathbb{R}^6} & a(\xi_1,\eta_1,\xi_2,\eta_2) b(\xi_1,\eta_1,\xi_2,\eta_2,\xi_3,\eta_3) \nonumber \\ & \hat{f_1}(\xi_1)\hat{f_2}(\xi_2)\hat{g_1}(\eta_1)\hat{g_2}(\eta_2)\hat{h}(\xi_3,\eta_3) \nonumber \\ & e^{2\pi i x(\xi_1+\xi_2+\xi_3)}e^{2\pi i y(\eta_1+\eta_2+\eta_3)} d \xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3.\end{aligned}$$ Then for $1< p_1, p_2, q_1, q_2 < \infty$, $1 < s \leq \infty$, $r > 0$, $\frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} =\frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r} $, $T_{ab}$ satisfies the following mapping property $$T_{ab}: L^{p_1}(\mathbb{R}_x) \times L^{q_1}(\mathbb{R}_x) \times L^{p_2}(\mathbb{R}_y) \times L^{q_2}(\mathbb{R}_y) \times L^{s}(\mathbb{R}^2) \rightarrow L^{r}(\mathbb{R}^2).$$ \[main\_thm\_inf\] Let $T_{ab}$ be defined as (\[bi\_flag\]). Then for $1< p < \infty$, $1 < s \leq \infty$, $r >0$, $\frac{1}{p} + \frac{1}{s} = \frac{1}{r}$, $T_{ab}$ satisfies the following mapping property $$\begin{aligned} T_{ab}: & L^{p}(\mathbb{R}_x) \times L^{\infty}(\mathbb{R}_x) \times L^{p}(\mathbb{R}_y) \times L^{\infty}(\mathbb{R}_y) \times L^{s} \rightarrow L^{r} \nonumber \end{aligned}$$ where $p_1 = p_2 = p$ as imposed by (\[bl\_exp\]). The cases $(i)$ $q_1 = q_2 < \infty$ and $p_1= p_2= \infty$; $(ii)$ $p_1 = q_2 < \infty$ and $p_2 = q_1 = \infty$; $(iii)$ $q_1 = p_2 < \infty$ and $p_1 = q_2 = \infty$ follow from the same argument by symmetry. Restricted Weak-Type Estimates ------------------------------ For the Banach estimates when $r > 1$, Hölder’s inequality involving hybrid square and maximal functions is sufficient. The argument resembles the Banach estimates for the single-parameter flag paraproduct.
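To illustrate the shape of such a Hölder argument (a purely schematic sketch: the operators $U_i$ below are placeholders, and the precise pointwise domination used for each model operator is specified later), suppose a model operator is dominated pointwise by a product of the square and maximal operators $M, S, SS, MS, SM, MM$ of Chapter 3, applied to its five inputs $F_1, \ldots, F_5$. Then for $r > 1$ Hölder's inequality followed by the $L^{p_i}$-boundedness of each factor yields the Banach estimates:

```latex
% Schematic Hölder step in the Banach case r > 1, assuming a pointwise
% domination |\Pi(F_1,\dots,F_5)| \lesssim \prod_i U_i F_i with each U_i one
% of the bounded square/maximal operators M, S, SS, MS, SM, MM:
\|\Pi(F_1,\dots,F_5)\|_{L^r}
  \lesssim \Big\| \prod_{i=1}^{5} U_i F_i \Big\|_{L^r}
  \leq \prod_{i=1}^{5} \|U_i F_i\|_{L^{p_i}}
  \lesssim \prod_{i=1}^{5} \|F_i\|_{L^{p_i}},
\qquad \sum_{i=1}^{5} \frac{1}{p_i} = \frac{1}{r}.
```

In the bi-parameter setting the exponents are of course mixed (the one-dimensional functions are measured in $x$ or $y$ only), but the tensor structure of the first four inputs makes the same Hölder bookkeeping go through.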
The quasi-Banach estimates when $r < 1$ are trickier and require a careful treatment. In this case, we use multilinear interpolation and reduce the desired estimates specified in Theorem \[main\_theorem\] and Theorem \[main\_thm\_inf\] to the following restricted weak-type estimates for the associated multilinear form[^4]. \[thm\_weak\] Let $T_{ab}$ denote the operator defined in (\[bi\_flag\]). Suppose that $1< p_1, p_2, q_1, q_2 < \infty$, $1 < s <2$, $0 < r <1$, $\frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} =\frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r}$. Then for any measurable sets $F_1 \subseteq \mathbb{R}_{x}, F_2 \subseteq \mathbb{R}_{x}, G_1\subseteq \mathbb{R}_y, G_2\subseteq \mathbb{R}_y, E \subseteq \mathbb{R}^2$ of positive and finite measure and any measurable functions $f_1, f_2, g_1, g_2, h$ with $|f_1(x)| \leq \chi_{F_1}(x)$, $|f_2(x)| \leq \chi_{F_2}(x)$, $|g_1(y)| \leq \chi_{G_1}(y)$, $|g_2(y)| \leq \chi_{G_2}(y)$ and $h \in L^{s}(\mathbb{R}^2)$, there exists $E' \subseteq E$ with $|E'| > |E|/2$ such that the multilinear form associated to $T_{ab}$ satisfies $$\label{thm_weak_explicit} |\Lambda(f_1^x, f_2^x, g_1^y, g_2^y, h^{xy},\chi_{E'}) | \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}|E|^{\frac{1}{r'}}.$$ \[thm\_weak\_inf\] Let $T_{ab}$ denote the operator defined in (\[bi\_flag\]). Suppose that $1< p < \infty$, $1 < s < 2$, $0 < r < 1$, $\frac{1}{p} + \frac{1}{s} = \frac{1}{r}$.
Then for any measurable sets $F_1\subseteq \mathbb{R}_{x}$, $G_1 \subseteq \mathbb{R}_y$, $E \subseteq \mathbb{R}^2$ of positive and finite measure and any measurable functions $f_1, g_1$ with $|f_1(x)| \leq \chi_{F_1}(x)$, $|g_1(y)| \leq \chi_{G_1}(y)$, $f_2 \in L^{\infty}(\mathbb{R}_x)$, $g_2 \in L^{\infty}(\mathbb{R}_y)$, $h \in L^{s}(\mathbb{R}^2)$, there exists $E' \subseteq E$ with $|E'| > |E|/2$ such that the multilinear form associated to $T_{ab}$ satisfies $$\label{thm_weak_inf_explicit} |\Lambda(f_1^x, f_2^x, g_1^y, g_2^y, h^{xy},\chi_{E'}) | \lesssim |F_1|^{\frac{1}{p}} |G_1|^{\frac{1}{p}} \|f_2\|_{L_x^{\infty}} \|g_2\|_{L_y^{\infty}} \|h\|_{L^s(\mathbb{R}^2)}|E|^{\frac{1}{r'}}.$$ Theorem \[thm\_weak\] and \[thm\_weak\_inf\] hint at the necessity of localization, and the major subset $E'$ of $E$ is constructed based on the philosophy of localizing the operator where it is well-behaved. The reduction of Theorem \[main\_theorem\] and \[main\_thm\_inf\] to Theorem \[thm\_weak\] and \[thm\_weak\_inf\] respectively will be postponed to Appendix I. In brief, it depends on the interpolation of multilinear forms described in Lemma 9.6 of [@cw] and a tensor-product version of the Marcinkiewicz interpolation theorem. Application - Leibniz Rule -------------------------- A direct corollary of Theorem \[main\_theorem\] is a Leibniz rule which captures the nonlinear interaction of waves coming from transversal directions. In general, Leibniz rules refer to inequalities involving norms of derivatives, where the derivatives are defined in terms of Fourier transforms. More precisely, for $\alpha \geq 0$ and $f \in \mathcal{S}(\mathbb{R}^d)$ a Schwartz function in $\mathbb{R}^d$, define the homogeneous derivative of $f$ as $$D^{\alpha}f := \mathcal{F}^{-1}\left(|\xi|^{\alpha}{\widehat}{f}(\xi)\right).$$ Leibniz rules are closely related to the boundedness of multilinear operators discussed in Section 1.2.
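A standard heuristic (a classical elementary inequality, not part of the paper's argument) for why only the highest-order derivatives appear on the right-hand side of such Leibniz rules is the symbol estimate for $|\xi|^{\alpha}$:

```latex
% Elementary bound for the homogeneous symbol: for all \xi_1, \xi_2 \in \mathbb{R},
|\xi_1 + \xi_2|^{\alpha} \leq
\begin{cases}
  |\xi_1|^{\alpha} + |\xi_2|^{\alpha}, & 0 < \alpha \leq 1,\\[2pt]
  2^{\alpha - 1}\left( |\xi_1|^{\alpha} + |\xi_2|^{\alpha} \right), & \alpha \geq 1.
\end{cases}
```

The first case follows from the subadditivity of $t \mapsto t^{\alpha}$ and the second from its convexity. On the Fourier side, the symbol $|\xi_1+\xi_2|^{\alpha}$ of $D^{\alpha}$ acting on a product can therefore always be redistributed, up to constants, onto one frequency variable at a time, which is why each term on the right-hand side carries the full derivative on exactly one factor.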
For example, the boundedness of one-parameter paraproducts gives rise to a Leibniz rule by Kato and Ponce [@kp]. For $f, g \in \mathcal{S}(\mathbb{R}^d)$ and $\alpha > 0$ sufficiently large, $$\label{lb_para} \| D^{\alpha} (fg)\|_r \lesssim \|D^{\alpha} f \|_{p_1} \|g \|_{q_1} + \| f \|_{p_2} \|D^{\alpha}g \|_{q_2}$$ with $1 < p_i, q_i < \infty, \frac{1}{p_i}+ \frac{1}{q_i} = \frac{1}{r}, i= 1,2.$ The inequality in (\[lb\_para\]) generalizes the trivial and well-known Leibniz rule for $\alpha = 1$ and states that the derivative of a product of two functions can be dominated by the terms which involve the highest-order derivative hitting one of the functions. The reduction of (\[lb\_para\]) to the boundedness of one-parameter paraproducts is routine (see Chapter 2 in [@cw] for details) and can be applied to other Leibniz rules with their corresponding multilinear operators, including the boundedness of our operator $T_{ab}$ and its Leibniz rule stated in Theorem \[lb\_main\] below. The Leibniz rule stated in Theorem \[lb\_main\] deals with partial derivatives, where the partial derivative of $f \in \mathcal{S}(\mathbb{R}^d)$ is defined, for $(\alpha_1,\ldots, \alpha_d)$ with $\alpha_1, \ldots, \alpha_d \geq 0$, as $$D_1^{\alpha_1}\cdots D_d^{\alpha_d}f := \mathcal{F}^{-1}\left(|\xi_1|^{\alpha_1} \cdots |\xi_d|^{\alpha_d}{\widehat}{f}(\xi_1,\ldots, \xi_d)\right).$$ \[lb\_main\] Suppose $f_1, f_2 \in \mathcal{S}(\mathbb{R}_x)$, $ g_1, g_2 \in \mathcal{S}(\mathbb{R}_y)$ and $h \in \mathcal{S}(\mathbb{R}^2).$ Then for $\beta_1, \beta_2, \alpha_1, \alpha_2 > 0$ sufficiently large and $1 < p^j_1, p^j_2, q^j_1, q^j_2, s^j \leq \infty$, $r >0$, $(p^j_1, q^j_1), (p^j_2, q^j_2) \neq (\infty, \infty)$, $\frac{1}{p^j_1} + \frac{1}{q^j_1} + \frac{1}{s^j}= \frac{1}{p^j_2} + \frac{1}{q^j_2} + \frac{1}{s^j}= \frac{1}{r} $ for each $j = 1, \ldots, 16 $, $$\begin{aligned} & \|D_1^{\beta_1} D_2^{\beta_2}(D_1^{\alpha_1}D_2^{\alpha_2}(f_1^x f_2^x g_1^y g_2^y) h^{x,y})\|_{L^r(\mathbb{R}^2)} \nonumber \\ \lesssim & \ \ \text{sum of \ \ }16 \text{\ \ terms of the forms: \ \ } \nonumber \\ & \|D_1^{\alpha_1+\beta_1}f_1\|_{L^{p^1_1}(\mathbb{R})} \|f_2\|_{L^{q^1_1}(\mathbb{R})} \|D_2^{\alpha_2 + \beta_2}g_1\|_{L^{p^1_2}(\mathbb{R})} \|g_2\|_{L^{q^1_2}(\mathbb{R})} \|h\|_{L^{s^1}(\mathbb{R}^2)} + \nonumber \\ & \|f_1\|_{L^{p^2_1}(\mathbb{R})} \|D_1^{\alpha_1+\beta_1}f_2\|_{L^{q^2_1}(\mathbb{R})} \|D_2^{\alpha_2 + \beta_2}g_1\|_{L^{p^2_2}(\mathbb{R})} \|g_2\|_{L^{q^2_2}(\mathbb{R})} \|h\|_{L^{s^2}(\mathbb{R}^2)} + \nonumber \\ & \|D_1^{\alpha_1+\beta_1}f_1\|_{L^{p^3_1}(\mathbb{R})} \|f_2\|_{L^{q^3_1}(\mathbb{R})} \|D_2^{\alpha_2}g_1\|_{L^{p^3_2}(\mathbb{R})} \|g_2\|_{L^{q^3_2}(\mathbb{R})} \|D_2^{\beta_2}h\|_{L^{s^3}(\mathbb{R}^2)} + \ldots\end{aligned}$$ The reasoning for the number “16” is that (i) for $\alpha_1$, there are $2$ possible distributions of the highest-order derivative, thus yielding 2 terms; (ii) for $\alpha_2$, there are $2$ terms for the same reason as in (i); (iii) for $\beta_1$, it can hit $h$ or one of the functions coming from the dominant terms of $D^{\alpha_1}(f_1 f_2)$, which has two choices as illustrated in (i), thus generating $2 \times 2 = 4$ terms; (iv) for $\beta_2$, there are $4$ terms for the same reason as in (iii). Combining (i)-(iv), one obtains the count $4 \times 4 = 16$. As commented at the beginning of this section, $f_1$ and $f_2$ in Theorem \[lb\_main\] can be viewed as waves coming from one direction while $g_1$ and $g_2$ are waves from the orthogonal direction. The presence of $h$, as a generic wave in the plane, makes the interaction nontrivial. Preliminaries ============= Terminology ----------- We will first introduce some notation which will be useful throughout the paper. Suppose $I \subseteq \mathbb{R}$ is an interval.
Then we say a smooth function $\phi$ is *adapted to $I$* if $$|\phi^{(l)}(x)| \leq C_l C_M \frac{1}{|I|^l} \frac{1}{\big(1+\frac{|x-x_I|}{|I|}\big)^M}$$ for sufficiently many derivatives $l$, where $x_I$ denotes the center of the interval $I$. \[bump\] Suppose $\mathcal{I}$ is a collection of dyadic intervals. Then a family of $L^2$-normalized bump functions $(\phi_I)_{I \in \mathcal{I}}$ is *lacunary* if and only if for every $ I \in \mathcal{I}$, $$\text{supp}\ \ {\widehat}{\phi_I} \subseteq [-4|I|^{-1}, -\frac{1}{4}|I|^{-1}] \cup [\frac{1}{4}|I|^{-1}, 4|I|^{-1}].$$ A family of $L^2$-normalized bump functions $(\phi_I)_{I \in \mathcal{I}}$ is *non-lacunary* if and only if for every $ I \in \mathcal{I}$, $$\text{supp}\ \ {\widehat}{\phi_I} \subseteq [-4|I|^{-1}, 4|I|^{-1}].$$ We usually denote bump functions in a lacunary family by $(\psi_I)_I$ and those in a non-lacunary family by $({\varphi}_I)_I$. A simplified variant of the bump functions given in Definition \[bump\] is specified as follows: Haar wavelets correspond to a lacunary family of bump functions and $L^2$-normalized indicator functions are analogous to a non-lacunary family of bump functions. \[bump\_walsh\] Define $$\psi^H(x) := \begin{cases} 1 \ \ \text{for}\ \ x \in [0,\frac{1}{2})\\ -1 \ \ \text{for}\ \ x \in [\frac{1}{2},1).\\ \end{cases}$$ Let $I := [n2^{k},(n+1)\cdot2^k)$ denote a dyadic interval. Then the Haar wavelet on $I$ is defined as $$\psi^H_I(x) := 2^{-\frac{k}{2}}\psi^H(2^{-k}x-n).$$ The $L^2$-normalized indicator function on $I$ is expressed as $${\varphi}^H_I(x) := |I|^{-\frac{1}{2}}\chi_{I}(x).$$ We shall remark that the boundedness of the multilinear forms described in Theorem \[thm\_weak\] and \[thm\_weak\_inf\] can be reduced to estimates of discrete model operators which are defined in terms of bump functions of the form specified in Definition \[bump\].
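As a quick sanity check of the normalizations in Definition \[bump\_walsh\] (a standard computation): for $I = [n2^k, (n+1)\cdot 2^k)$, so that $|I| = 2^k$,

```latex
% L^2-normalization of the Haar wavelet and the indicator function on I:
\|\psi^H_I\|_{L^2}^2
  = \int_{\mathbb{R}} 2^{-k}\,|\psi^H(2^{-k}x - n)|^2 \, dx
  = 2^{-k} \int_{I} 1 \, dx
  = 2^{-k} \cdot 2^{k} = 1,
\qquad
\|{\varphi}^H_I\|_{L^2}^2 = |I|^{-1} \int_{I} 1 \, dx = 1.
```

Moreover $\int_{\mathbb{R}} \psi^H_I \, dx = 0$, the mean-zero property corresponding to the lacunary (Fourier support away from the origin) condition of Definition \[bump\].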
The precise statements are included in Theorem \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\] and the proof is discussed in Appendix II. However, we will first study the simplified model operators with the general bump functions replaced by the Haar wavelets and indicator functions defined in Definition \[bump\_walsh\]. The arguments for the simplified models capture the main challenges while avoiding some technical aspects. We will leave the generalization and the treatment of the technical details to Chapter 10. The simplified models will be referred to as Haar models, and we will highlight the occasions when the Haar models are considered. Useful Operators - Definitions and Theorems ------------------------------------------- We now give explicit definitions of the Hardy-Littlewood maximal function, the discretized Littlewood-Paley square function and the hybrid square-and-maximal functions that will appear naturally in the argument. The *Hardy-Littlewood maximal operator* $M$ is defined as $$Mf(\vec{x}) = \sup_{\vec{x} \in B}\frac{1}{|B|}\int_{B}|f(\vec{u})|d\vec{u}$$ where the supremum is taken over all open balls $B \subseteq \mathbb{R}^d$ containing $\vec{x}$. Suppose $\mathcal{I}$ is a finite family of dyadic intervals and $(\psi_I)_I$ a lacunary family of $L^2$-normalized bump functions. The *discretized Littlewood-Paley square function operator* $S$ is defined as $$Sf(x) = \bigg(\sum_{I \in \mathcal{I}}\frac{|\langle f, \psi_I\rangle|^2 }{|I|}\chi_{I}(x)\bigg)^{\frac{1}{2}}.$$ Suppose $\mathcal{R}$ is a finite collection of dyadic rectangles. Let $(\phi_R)_{R \in \mathcal{R}}$ denote the family of $L^2$-normalized bump functions with $\phi_R = \phi_I \otimes \phi_J$ where $R= I \times J$. 1. the *double square function operator* $SS$ is defined as $$\displaystyle SSh(x,y) = \bigg(\sum_{I \times J } \frac{|\langle h, \psi_{I} \otimes \psi_J \rangle|^2 }{|I||J|} \chi_{I \times J} (x,y)\bigg)^{\frac{1}{2}};$$ 2.
the *hybrid maximal-square operator* $MS$ is defined as $$MSh(x,y) = \sup_{I}\frac{1}{|I|^{\frac{1}{2}}} \bigg(\sum_{J} \frac{|\langle h, {\varphi}_I \otimes \psi_J \rangle|^2}{|J|} \chi_{J}(y)\bigg)^{\frac{1}{2}}\chi_I(x);$$ 3. the *hybrid square-maximal operator* $SM$ is defined as $$\displaystyle SMh(x,y) = \bigg(\sum_{I} \frac{\big(\sup_{J}\frac{|\langle h,\psi_I \otimes {\varphi}_J \rangle|}{|J|^{\frac{1}{2}}}\chi_J(y) \big)^2}{|I|}\chi_{I}(x)\bigg)^{\frac{1}{2}};$$ 4. the *double maximal function* $MM$ is defined as $$MM h(x,y) = \sup_{(x,y) \in R} \frac{1}{|R|}\int_{R}|h(s,t)| ds dt,$$ where the supremum is taken over all dyadic rectangles in $\mathcal{R}$ containing $(x,y)$. The following theorem about the operators defined above is used frequently in the argument. The proof of the theorem and other contexts where the hybrid operators appear can be found in [@cw], [@cf] and [@fs]. \[maximal-square\] 1. $M$ is bounded in $L^{p}(\mathbb{R}^{d})$ for $1< p \leq \infty$ and $M: L^{1} \longrightarrow L^{1,\infty}$. 2. $S$ is bounded in $L^{p}(\mathbb{R})$ for $1< p < \infty$. 3. The hybrid operators $SS, MS, SM, MM$ are bounded in $L^{p}(\mathbb{R}^2)$ for $1 < p < \infty$. Discrete Model Operators ======================== In this chapter, we will introduce the discrete model operators whose boundedness implies the estimates specified in Theorem \[thm\_weak\] and Theorem \[thm\_weak\_inf\]. The reduction procedure follows a routine treatment which has been discussed in [@cw]. The details will be included in Appendix II for the sake of completeness. The model operators are usually more desirable because they are more “localizable”. The discrete model operators are defined as follows. \[discrete\_model\_op\] Suppose $\mathcal{I}, \mathcal{J}, \mathcal{K}$, $\mathcal{L}$ are finite collections of dyadic intervals.
Suppose $\displaystyle(\phi^i_I)_{I\in \mathcal{I}}$, $ (\phi^j_J)_{J \in \mathcal{J}}$, $(\phi^k_K)_{K \in \mathcal{K}}$, $(\phi^{l}_L)_{L \in \mathcal{L}}$, $i, j, k, l = 1, 2, 3$ are families of $L^2$-normalized bump functions adapted to $I, J, K, L$ respectively. We further assume that at least two families of $(\phi^i_I)_{I\in \mathcal{I}}, i = 1, 2, 3, $ are lacunary. Same conditions are assumed for families $ (\phi^j_J)_{J \in \mathcal{J}}$, $(\phi^k_K)_{K \in \mathcal{K}} $ and $(\phi^l_L)_{L \in \mathcal{L}} $. In some models, we specify the lacunary and non-lacunary families by explicitly denoting the functions in the lacunary family as $\psi$ and those in the non-lacunary family as ${\varphi}$. Let $\#_1, \#_2$ denote some positive integers. Define 1. $$\Pi_{\text{flag}^0 \otimes \text{paraproduct}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|} \langle B_I(f_1,f_2),{\varphi}_I^1 \rangle \langle g_1,\phi^1_J \rangle \langle g_2, \phi^2_J \rangle \langle h, \psi_I^{2} \otimes \phi_{J}^2 \rangle \psi_I^{3} \otimes \phi_{J}^3$$ where $$B_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x).$$ 2. $$\Pi_{\text{flag}^{\#_1} \otimes \text{paraproduct}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|} \langle B^{\#_1}_I(f_1,f_2),{\varphi}_I^1 \rangle \langle g_1,\phi^1_J \rangle \langle g_2, \phi^2_J \rangle \langle h, \psi_I^{2} \otimes \phi_{J}^2 \rangle \psi_I^{3} \otimes \phi_{J}^3$$ where $$B^{\#_1}_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \sim 2^{\#_1} |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x).$$ 3. 
$$\Pi_{\text{flag}^0 \otimes \text{flag}^0}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),{\varphi}_I^1 \rangle \langle \tilde{B_J}(g_1, g_2), {\varphi}_J^1 \rangle \langle h, \psi_I^{2} \otimes \psi_J^{2} \rangle \psi_I^{3} \otimes \psi_J^{3}$$ where $$B_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x),$$ $$\tilde{B}_J(g_1,g_2)(y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \geq |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^3(y).$$ 4. $$\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),{\varphi}_I^1 \rangle \langle \tilde{B}_J^{\#_2}(g_1, g_2), {\varphi}_J^1 \rangle \langle h, \psi_I^{2} \otimes \psi_J^{2} \rangle \psi_I^{3} \otimes \psi_J^{3}$$ where $$B_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x),$$ $$\tilde{B}_J^{\#_2}(g_1,g_2)(y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \sim 2^{\#_2}|J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^3(y).$$ 5. 
$$\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B^{\#_1}_I(f_1,f_2),{\varphi}_I^1 \rangle \langle \tilde{B}^{\#_2}_J(g_1, g_2), {\varphi}_J^1 \rangle \langle h, \psi_I^{2} \otimes \psi_J^{2} \rangle \psi_I^{3} \otimes \psi_J^{3}$$ where $$B^{\#_1}_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \sim 2^{\#_1} |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x),$$ $$\tilde{B}_J^{\#_2}(g_1,g_2)(y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \sim 2^{\#_2} |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^3(y).$$ \[thm\_weak\_mod\] Let $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$, $ \Pi_{\text{flag}^{\#_1} \otimes \text{paraproduct}}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}$ and $\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}$ be multilinear operators specified in Definition \[discrete\_model\_op\]. Then all of them satisfy the mapping property stated in Theorem \[thm\_weak\], where the constants are independent of $\#_1,\#_2$ and the cardinalities of the collections $\mathcal{I}, \mathcal{J}, \mathcal{K}$ and $\mathcal{L}$. \[thm\_weak\_inf\_mod\] Let $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$, $ \Pi_{\text{flag}^{\#_1} \otimes \text{paraproduct}}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}$ and $\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}$ be multilinear operators specified in Definition \[discrete\_model\_op\]. Then all of them satisfy the mapping property stated in Theorem \[thm\_weak\_inf\], where the constants are independent of $\#_1,\#_2$ and the cardinalities of the collections $\mathcal{I}, \mathcal{J}, \mathcal{K}$ and $\mathcal{L}$. 
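For orientation, the multilinear form $\Lambda$ appearing in Theorems \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\] is obtained by pairing the model operator against the last function; for instance, for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ one has, directly from Definition \[discrete\_model\_op\],

```latex
% Dualized form of the model operator \Pi_{flag^0 \otimes flag^0},
% paired against the characteristic function of the major subset E':
\Lambda(f_1, f_2, g_1, g_2, h, \chi_{E'})
  = \sum_{I \times J \in \mathcal{I} \times \mathcal{J}}
    \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}}
    \langle B_I(f_1,f_2), {\varphi}_I^1 \rangle\,
    \langle \tilde{B}_J(g_1,g_2), {\varphi}_J^1 \rangle\,
    \langle h, \psi_I^{2} \otimes \psi_J^{2} \rangle\,
    \langle \chi_{E'}, \psi_I^{3} \otimes \psi_J^{3} \rangle,
```

and it is this discrete sum that the stopping-time decompositions of the subsequent chapters estimate term by term.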
The following chapters are devoted to the proofs of Theorem \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\], which in turn imply Theorem \[thm\_weak\] and \[thm\_weak\_inf\]. We will mainly focus on the discrete model operators defined in $(3)$ (Chapter 7) and $(5)$ (Chapter 6), whose arguments contain all the essential tools needed for the other discrete models. Sizes and Energies ================== The notions of size and energy appeared first in [@mtt] and [@mtt2]. Since they will play important roles in the main arguments, the explicit definitions of sizes and energies are introduced and some useful properties are highlighted in this chapter. Let $\mathcal{I}$ be a finite collection of dyadic intervals. Let $(\psi_I)_{I \in \mathcal{I}}$ denote a lacunary family of $L^2$-normalized bump functions and $({\varphi}_I)_{I \in \mathcal{I}}$ a non-lacunary family of $L^2$-normalized bump functions. Define (1) $$\text{size}_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}}) := \sup_{I \in \mathcal{I}} \frac{|\langle f, {\varphi}_I\rangle|}{|I|^{\frac{1}{2}}};$$ (2) $$\text{size}_{\mathcal{I}}((\langle f, \psi_I \rangle)_{I \in \mathcal{I}}) := \sup_{I_0 \in \mathcal{I}} \frac{1}{|I_0|}\left\Vert \bigg(\sum_{\substack{I \subseteq I_0 \\ I \in \mathcal{I}}} \frac{|\langle f, \psi_I \rangle|^2}{|I|} \chi_{I}\bigg)^{\frac{1}{2}}\right\Vert_{1,\infty};$$ (3) $$\text{energy} _{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}}) := \sup_{n \in \mathbb{Z}} 2^{n} \sup_{\mathbb{D}_n} \sum_{I \in \mathbb{D}_n} |I|$$ where $\mathbb{D}_n$ ranges over all collections of disjoint dyadic intervals in $\mathcal{I}$ satisfying $$\frac{|\langle f,{\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^n;$$ (4) $$\text{energy} _{\mathcal{I}}((\langle f, \psi_I \rangle)_{I \in \mathcal{I}}) := \sup_{n \in \mathbb{Z}} 2^{n} \sup_{\mathbb{D}_n} \sum_{I \in \mathbb{D}_n} |I|$$ where $\mathbb{D}_n$ ranges over all collections of disjoint dyadic intervals in $\mathcal{I}$
satisfying $$\frac{1}{|I|}\left\Vert \bigg(\sum_{\substack{\tilde{I} \subseteq I \\ \tilde{I} \in \mathcal{I}}} \frac{|\langle f, \psi_{\tilde{I}} \rangle|^2}{|\tilde{I}|} \chi_{\tilde{I}}\bigg)^{\frac{1}{2}}\right\Vert_{1,\infty} > 2^{n};$$ (5) For $t>1$, define $$\text{energy}^{t}_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}}) := \left(\sum_{n \in \mathbb{Z}}2^{tn}\sup_{\mathbb{D}_n}\sum_{I \in \mathbb{D}_n}|I| \right)^{\frac{1}{t}}$$ where $\mathbb{D}_n$ ranges over all collections of disjoint dyadic intervals in $\mathcal{I}$ satisfying $$\frac{|\langle f,{\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^n.$$ Useful Facts about Sizes and Energies ------------------------------------- The following propositions describe facts about sizes and energies which will be heavily employed later on. Propositions \[JN\] and \[size\] are routine and the proofs can be found in Chapter 2 of [@cw]. Proposition \[energy\_classical\] consists of two parts - the first part is discussed in [@cw] while the second part is less standard. We will include the proof of both parts in Section 5.3 for the sake of completeness. Proposition \[size\_cor\], Proposition \[B\_en\_global\] and Proposition \[B\_en\] highlight the useful size and energy estimates involving the operators $B$ and $\tilde{B}$ in the Haar model. The emphasis on the Haar model assumption keeps track of the arguments we need to modify for the general Fourier case. It is noteworthy that Proposition \[B\_en\_global\] describes a “global” energy estimate, while Proposition \[size\_cor\] and \[B\_en\] take into consideration that the operators $B$ and $\tilde{B}$ are localized to intersect certain level sets which carry crucial information for the estimates of the sizes and energies for $B$ and $\tilde{B}$.
While the proof of Proposition \[B\_en\_global\] follows from the boundedness of paraproducts ([@cm], [@cw]), the arguments for Proposition \[size\_cor\] and Proposition \[B\_en\] require localization and a more careful treatment that will be discussed in subsequent sections. \[JN\] Let $\mathcal{I}$ be a finite collection of dyadic intervals. For any sequence $(a_I)_{I \in \mathcal{I}}$ and $r > 0$, define the BMO-norm for the sequence as $$\|(a_I)_I\|_{\text{BMO}(r)} := \sup_{I_0 \in \mathcal{I}}\frac{1}{|I_0|^{\frac{1}{r}}} \left\Vert \left(\sum_{I \subseteq I_0} \frac{|a_I|^2}{|I|}\chi_{I}(x)\right)^{\frac{1}{2}}\right\Vert_r.$$ Then for any $0 < p < q < \infty$, $$\|(a_I)_I\|_{\text{BMO}(p)} \simeq \|(a_I)_I \|_{\text{BMO}(q)}.$$ \[size\] Suppose $f \in L^1(\mathbb{R})$. Then $$\text{size}_{\mathcal{I}}\big((\langle f,{\varphi}_I \rangle)_I\big), \text{size}_{\mathcal{I}}\big((\langle f,\psi_I \rangle)_I \big) \lesssim \sup_{I \in \mathcal{I}}\frac{1}{|I|}\int_{\mathbb{R}}|f|\tilde{\chi}_I^M dx$$ for $M > 0$, where the implicit constant depends on $M$ and $\tilde{\chi}_I$ is an $L^{\infty}$-normalized bump function adapted to $I$. \[energy\_classical\] 1. Suppose $f \in L^1(\mathbb{R})$. Then $$\text{energy}_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_I), \text{energy}_{\mathcal{I}}((\langle f, \psi_I \rangle)_I) \lesssim \|f\|_1.$$ 2. Suppose $f \in L^t(\mathbb{R})$ for $t >1$. Then $$\text{energy}^t_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_I) \lesssim \|f\|_t.$$ \[B\_en\_global\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Suppose that $\mathcal{K}$ and $\mathcal{L}$ are finite collections of dyadic intervals.
Define $$\begin{aligned} & B(f_1,f_2)(x):= \sum_{K \in \mathcal{K}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \phi_K^3 (x), \nonumber \\ & \tilde{B}(g_1,g_2)(y):= \sum_{L \in \mathcal{L}} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1\rangle \langle g_2, \phi_L^2 \rangle \phi_L^3 (y).\end{aligned}$$ 1. Then for any $0 < \rho,\rho'<1$, one has $$\begin{aligned} & \text{energy}_{\mathcal{I}}((\langle B(f_1,f_2), {\varphi}_I \rangle)_{I \in \mathcal{I}}) \lesssim |F_1|^{\rho}|F_2|^{1-\rho}, \nonumber \\ & \text{energy}_{\mathcal{J}}((\langle \tilde{B}(g_1,g_2), {\varphi}_J \rangle)_{J \in \mathcal{J}}) \lesssim |G_1|^{\rho'}|G_2|^{1-\rho'}. \nonumber \\\end{aligned}$$ 2. Suppose that $t,s >1$. Then for any $0 \leq \theta_1, \theta_2, \zeta_1, \zeta_2 <1$, with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2 = \frac{1}{s}$, one has $$\begin{aligned} & \text{energy}^t_{\mathcal{I}}((\langle B(f_1,f_2), {\varphi}_I \rangle)_{I \in \mathcal{I}}) \lesssim |F_1|^{\theta_1}|F_2|^{\theta_2}, \nonumber \\ & \text{energy}^s_{\mathcal{J}}((\langle \tilde{B}(g_1,g_2), {\varphi}_J \rangle)_{J \in \mathcal{J}}) \lesssim |G_1|^{\zeta_1}|G_2|^{\zeta_2}. \nonumber \\\end{aligned}$$ It is not difficult to observe that Proposition \[B\_en\_global\] follows immediately from Proposition \[energy\_classical\] and the following lemma. \[B\_global\_norm\] Suppose that $1 < p_1,p_2 \leq \infty $ and $ 1 < q_1,q_2 \leq \infty$ with $(p_i, q_i) \neq (\infty, \infty)$ for $i = 1, 2$. Further assume that $\frac{1}{t} := \frac{1}{p_1}+ \frac{1}{q_1} <1$ and $\frac{1}{s} := \frac{1}{p_2}+ \frac{1}{q_2} <1$. 
Then for any $f_1 \in L^{p_1}$, $f_2 \in L^{q_1}$, $g_1 \in L^{p_2}$ and $g_2 \in L^{q_2}$, $$\begin{aligned} & \|B(f_1,f_2)\|_{t} \lesssim \|f_1\|_{L^{p_1}} \|f_2\|_{L^{q_1}}, \nonumber \\ & \|\tilde{B}(g_1,g_2)\|_{s} \lesssim \|g_1\|_{L^{p_2}} \|g_2\|_{L^{q_2}}.\end{aligned}$$ By identifying $B$ and $\tilde{B}$ as one-parameter paraproducts, one sees that Lemma \[B\_global\_norm\] is a restatement of the Coifman-Meyer theorem on the boundedness of paraproducts [@cm]. We will now turn our attention to local size estimates for $(\langle B_I^{\#_1,H}, {\varphi}_I \rangle)_I$ and $(\langle \tilde{B}_J^{\#_2,H}, {\varphi}_J \rangle)_J$ and local energy estimates for $(\langle B_I^H, {\varphi}_I \rangle )_I$ and $(\langle \tilde{B}_J^H, {\varphi}_J \rangle)_J$ in the Haar model. The precise definitions of the operators $B_I^{\#_1,H}, \tilde{B}_J^{\#_2,H}, B_I^H$ and $\tilde{B}_J^H$ are stated as follows. \[B\_def\] Suppose that $I$ and $J$ are fixed dyadic intervals and $\mathcal{K}$ and $\mathcal{L}$ are finite collections of dyadic intervals. Suppose that $(\phi_{K}^{i})_{K \in \mathcal{K}}, (\phi_{L}^{j})_{L \in \mathcal{L}}$ for $i, j = 1,2$ are families of $L^2$-normalized bump functions. Further assume that $(\phi_K^{3,H})_{K \in \mathcal{K}}$ and $(\phi_L^{3,H})_{L \in \mathcal{L}} $ are families of Haar wavelets or $L^2$-normalized indicator functions. A family of Haar wavelets is considered to be a lacunary family and a family of $L^2$-normalized indicator functions a non-lacunary family. Suppose that at least two of the families $(\phi_{K}^{1})_K, (\phi_{K}^2)_K$ and $(\phi_{K}^{3,H})_K$ are lacunary and that at least two of the families $(\phi_{L}^{1})_L, (\phi_{L}^2)_L$ and $(\phi_{L}^{3,H})_L$ are lacunary.
Let (i) $$\begin{aligned} \label{B_size_haar} & B_I^{\#_1,H}(f_1, f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \sim 2^{\#_1} |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^{3,H}(x), \nonumber \\ & \tilde{B}_J^{\#_2,H}(g_1,g_2) (y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \sim 2^{\#_2} |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^{3,H}(y);\end{aligned}$$ (ii) $$\begin{aligned} & B_{I}^H(f_1, f_2)(x) := \sum_{K: |K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \phi_K^{3,H} (x), \nonumber \\ & \tilde{B}_{J}^H(g_1, g_2)(y) := \sum_{L: |L| \geq |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1\rangle \langle g_2, \phi_L^2 \rangle \phi_L^{3,H} (y).\end{aligned}$$ In the Haar model, for any fixed dyadic intervals $I$ and $K$, the pairing is non-degenerate, $\langle \phi_K^{3,H}, {\varphi}_I^H \rangle \neq 0$, only when $K \supseteq I$. This observation provides natural localizations for the sequence $(\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}$, and thus for the sequences $(\langle f_1, \phi_K^1 \rangle)_{K}$ and $(\langle f_2, \phi_K^2 \rangle)_{K}$, as stated explicitly in the following lemma. \[B\_size\] Suppose that $S$ is a measurable subset of $\mathbb{R}_{x}$ and $S'$ a measurable subset of $\mathbb{R}_{y}$.
If $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap S \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap S' \neq \emptyset$ for any $J \in \mathcal{J}'$, then $$\text{size}_{\mathcal{I'}}((\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) \lesssim \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}},$$ $$\text{size}_{\mathcal{J}'}((\langle \tilde{B}_J^{\#_2,H}, {\varphi}^H_J \rangle)_{J \in \mathcal{J}'}) \lesssim \sup_{L \cap S' \neq \emptyset}\frac{|\langle g_1, \phi_L^1 \rangle|}{|L|^{\frac{1}{2}}} \sup_{L \cap S' \neq \emptyset}\frac{|\langle g_2, \phi_L^2 \rangle|}{|L|^{\frac{1}{2}}}.$$ The localization generates more quantitative and useful estimates for the sizes involving $B_I^{\#_1,H}$ and $\tilde{B}_J^{\#_2,H}$ when $S$ and $S'$ are level sets of the Hardy-Littlewood maximal functions $Mf_1, Mf_2$ and $Mg_1, Mg_2$, as elaborated in the following proposition. One notational comment is that $C_1, C_2$ and $C_3$ used throughout the paper denote some sufficiently large constants greater than 1. \[Local Size Estimates in the Haar Model\]\[size\_cor\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Let $n_1, m_1, n_2, m_2$ denote some integers. Let $\mathcal{U}_{n_1,m_1}:=\{x: Mf_1(x) \leq C_12^{n_1} |F_1|\} \cap \{ x: Mf_2(x) \leq C_1 2^{m_1} |F_2|\}$ and $\mathcal{U}'_{n_2,m_2} := \{y: Mg_1(y) \leq C_2 2^{n_2} |G_1|\} \cap \{y: Mg_2(y) \leq C_2 2^{m_2} |G_2|\}$.
If $ \mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$, then $$\text{size}_{\mathcal{I'}}((\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) \lesssim (C_1 2^{n_1}|F_1|)^{\alpha_1} (C_1 2^{m_1}|F_2|)^{\beta_1},$$ $$\text{size}_{\mathcal{J}'}((\langle \tilde{B}_J^{\#_2,H}, {\varphi}^H_J \rangle)_{J \in \mathcal{J}'}) \lesssim (C_2 2^{n_2}|G_1|)^{\alpha_2} (C_2 2^{m_2}|G_2|)^{\beta_2},$$ for any $ 0 \leq \alpha_1, \alpha_2, \beta_1, \beta_2 \leq 1$. The proof of the proposition follows directly from Lemma \[B\_size\] and the trivial estimates $$\sup_{K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \lesssim \min(C_1 2^{n_1}|F_1|,1),$$ $$\sup_{K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \lesssim \min(C_1 2^{m_1}|F_2|,1),$$ $$\sup_{L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset}\frac{|\langle g_1, \phi_L^1 \rangle|}{|L|^{\frac{1}{2}}} \lesssim \min(C_2 2^{n_2}|G_1|,1),$$ $$\sup_{L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset}\frac{|\langle g_2, \phi_L^2 \rangle|}{|L|^{\frac{1}{2}}} \lesssim \min(C_2 2^{m_2}|G_2|,1).$$ We will also explore local energy estimates, which are “stronger” than the global energy estimates. Heuristically, in the case when $f_1 \in L^{p_1}$ and $ f_2 \in L^{q_1}$ with $|f_1| \leq \chi_{F_1}$ and $|f_2| \leq \chi_{F_2}$ for $p_1,q_1>1$ and close to $1$, the global energy estimates would not yield the desired boundedness exponents for $|F_1|$ and $|F_2|$, whereas one can take advantage of the local energy estimates to obtain the result. In the Haar model, a perfect localization can be achieved for energy estimates involving the bilinear operators $B^H_{I}$ and $\tilde{B}^H_J$ specified in Definition \[B\_def\](ii).
In particular, the corresponding energy estimates can be compared to the energy estimates for $(\langle B^{n_1,m_1}_0, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$ where $B^{n_1,m_1}_0$ and $\tilde{B}^{n_2,m_2}_0$ are localized operators defined as follows. Let $\mathcal{U}_{n_1,m_1}, \mathcal{U}'_{n_2,m_2}$ be the level sets described in Proposition \[size\_cor\], and suppose that $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$. Define $$B^{n_1,m_1}_0(f_1,f_2)(x):= \begin{cases} \displaystyle \sum_{K: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset} \frac{1}{|K|^{\frac{1}{2}}}|\langle f_1, \psi_K^1\rangle| |\langle f_2, \psi_K^2 \rangle| |{\varphi}_K^{3,H} (x)| \ \ \text{if}\ \ \phi_K^{3,H} \ \ \text{is}\ \ L^2 \text{-normalized indicator func.} \\ \displaystyle \sum_{K: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1\rangle \langle f_2, \psi_K^2 \rangle \psi_K^{3,H} (x) \quad \ \ \ \text{if}\ \ \phi_K^{3,H} \ \ \text{is Haar wavelet}, \\ \end{cases}$$ $$\tilde{B}^{n_2,m_2}_0(g_1,g_2)(y):= \begin{cases} \displaystyle \sum_{L: L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset} \frac{1}{|L|^{\frac{1}{2}}}|\langle g_1, \psi_L^1\rangle| |\langle g_2, \psi_L^2 \rangle| |{\varphi}_L^{3,H} (y)| \ \ \text{if}\ \ \phi_L^{3,H} \ \ \text{is}\ \ L^2 \text{-normalized indicator func.} \\ \displaystyle \sum_{L: L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, {\varphi}_L^1\rangle \langle g_2, \psi_L^2 \rangle \psi_L^{3,H} (y) \quad \ \ \ \text{if}\ \ \phi_L^{3,H} \ \ \text{is Haar wavelet}.
\\ \end{cases}$$ We would like to emphasize that $B^{n_1,m_1}_0$ and $\tilde{B}^{n_2,m_2}_0$ are localized to intersect the level sets $\mathcal{U}_{n_1,m_1}$ and $\mathcal{U}'_{n_2,m_2}$ nontrivially. It is not difficult to imagine that the energy estimates for $(\langle B^{n_1,m_1}_0, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$ would be better than the “global” energy estimates (i.e. $\text{energy}((\langle B(f_1, f_2), {\varphi}_I\rangle)_{I})$ and $\text{energy}((\langle \tilde{B}(g_1, g_2), {\varphi}_J\rangle)_{J})$), since one can now employ the information about intersections with level sets to control $$\frac{|\langle f_1, \phi_I \rangle|}{|I|^{\frac{1}{2}}}, \frac{|\langle f_2, \phi_I \rangle|}{|I|^{\frac{1}{2}}}, \frac{|\langle g_1, \phi_J \rangle|}{|J|^{\frac{1}{2}}}, \frac{|\langle g_2, \phi_J \rangle|}{|J|^{\frac{1}{2}}}.$$ The energy estimates for $(\langle B^H_I, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^H_J, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$ can indeed be reduced to the energy estimates for $(\langle B^{n_1,m_1}_0, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$, as stated in Lemma \[localization\_haar\]. \[localization\_haar\] Suppose that $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$.
Then $$\begin{aligned} & \text{energy}_{\mathcal{I}'}((\langle B^H_I, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) \leq \text{energy}_{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}), \nonumber \\ & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^H_J, {\varphi}^H_J \rangle)_{J \in \mathcal{J}'}) \leq \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}).\end{aligned}$$ The following local energy estimates will play a crucial role in the proof of our main theorem. \[B\_en\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Assume that $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$. Further assume that $\frac{1}{p_1} + \frac{1}{q_1}=\frac{1}{p_2}+ \frac{1}{q_2} > 1$. (i) Then for any $0 \leq \theta_1,\theta_2 <1$ with $\theta_1 + \theta_2 = 1$ and $0 \leq \zeta_1,\zeta_2 <1$ with $\zeta_1 + \zeta_2= 1$, one has $$\begin{aligned} &\text{energy}_{\mathcal{I}'}((\langle B^H_I, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}, \nonumber \\ & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^H_J, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}. \end{aligned}$$ (ii) Suppose that $t,s >1$.
Then for any $0 \leq \theta_1,\theta_2, \zeta_1, \zeta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2= \frac{1}{s}$, one has $$\begin{aligned} & \text{energy}^{t} _{\mathcal{I}'}((\langle B^H_I, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2}2^{n_1(\frac{1}{p_1} - \theta_1)}2^{m_1(\frac{1}{q_1} - \theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}, \nonumber \\ & \text{energy}^{s} _{\mathcal{J}'}((\langle \tilde{B}^H_J, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2} - \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}. \end{aligned}$$ The condition that $$\label{diff_exp} \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2}+ \frac{1}{q_2} > 1$$ is required in the proof of the proposition. Moreover, the energy estimates in Proposition \[B\_en\] are useful precisely in the range of exponents specified in (\[diff\_exp\]). A simpler argument without the use of Proposition \[B\_en\] can be applied in the other case $$\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2}+ \frac{1}{q_2} \leq 1.$$ \[loc\_easy\_haar\] Thanks to the localization specified in Lemma \[localization\_haar\], it suffices to prove that $$\text{energy}_{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}),$$ $$\text{energy}_{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'})$$ satisfy the same estimates as the right-hand sides of the inequalities in Proposition \[B\_en\]; equivalently, 1.
for any $0 \leq \theta_1,\theta_2 <1$ with $\theta_1 + \theta_2 = 1$ and $0 \leq \zeta_1,\zeta_2 <1$ with $\zeta_1 + \zeta_2= 1$, $$\begin{aligned} &\text{energy}_{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}},\nonumber \\ & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}; \end{aligned}$$ 2. for any $0 \leq \theta_1,\theta_2,\zeta_1, \zeta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2= \frac{1}{s}$, $$\begin{aligned} & \text{energy}^{t} _{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2}2^{n_1(\frac{1}{p_1} - \theta_1)}2^{m_1(\frac{1}{q_1} - \theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}, \nonumber \\ & \text{energy}^{s} _{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2} - \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}. \end{aligned}$$ Due to Proposition \[energy\_classical\], the proofs of (i’) and (ii’), and thus of (i) and (ii), can be reduced to verifying Lemma \[B\_loc\_norm\]. \[B\_loc\_norm\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Fix $t,s \geq 1$.
Then for any $0 \leq \theta_1,\theta_2, \zeta_1, \zeta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2= \frac{1}{s}$, one has $$\begin{aligned} & \|B_0^{n_1,m_1}(f_1,f_2)\|_t \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}, \nonumber \\ & \|\tilde{B}_0^{n_2,m_2}(g_1,g_2)\|_{s} \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}. \nonumber \\\end{aligned}$$ Proof of Proposition \[energy\_classical\] ------------------------------------------ (i) One notices that there exist an integer $n_0$ and a disjoint collection of intervals, denoted by $\mathbb{D}^{0}_{n_0}$, such that $$\text{energy} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) = 2^{n_0} \sum_{\substack{I \in \mathbb{D}_{n_0}^{0}\\ I \in \mathcal{I'}}}|I|\label{energy_1}$$\[B\_energy\] where for any $I \in \mathbb{D}^0_{n_0}$, $$\frac{|\langle f, {\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^{n_0}.$$ Meanwhile, for any $x \in I$, $$Mf(x) \geq \frac{|\langle f, {\varphi}_I \rangle|}{|I|^{\frac{1}{2}}}, $$ which implies that $$I \subseteq \{Mf(x) > 2^{n_0}\}$$ for any $I \in \mathbb{D}^0_{n_0}$ satisfying the inequality above.
Then by the disjointness of $\mathbb{D}^0_{n_0}$, one can estimate the energy as follows $$\text{energy} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) \leq 2^{n_0 } |\{Mf > 2^{n_0}\}| \leq \|Mf\|_{1,\infty} \lesssim \|f\|_1.$$ (ii) One observes that for each $n$, there exists a disjoint collection of intervals, denoted by $\mathbb{D}^{0}_n$, such that $$\text{energy}^{t} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) = \bigg(\sum_{n}2^{tn} \sum_{\substack{I \in \mathbb{D}_n^{0}\\ I \in \mathcal{I'}}}|I|\bigg)^{\frac{1}{t}}\label{energy_p}$$\[B\_energy\] where for any $I \in \mathbb{D}^0_n$, $$\frac{|\langle f, {\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^{n}.$$ By the same reasoning as in (i), $$I \subseteq \{Mf(x) > 2^{n}\}.$$ Then by the disjointness of $\mathbb{D}^0_n$, one can estimate the energy as follows $$\text{energy}^{t} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) \leq \big(\sum_{n}2^{tn } |\{Mf(x) > 2^{n}\}|\big)^{\frac{1}{t}} \lesssim \|Mf\|_{t}.$$ One can then apply the boundedness of the maximal operator $M: L^{t} \rightarrow L^{t}$ for $t >1$ to derive $$\|Mf\|_{t} \lesssim \|f\|_{t}.$$ Proof of Lemma \[B\_size\] --------------------------------- Without loss of generality, we will prove the first size estimate; the second follows from the same argument. One recalls that, by definition, $$\text{size}_{\mathcal{I'}}((\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) = \frac{|\langle B^{\#_1,H}_{I_0}(f_1,f_2),{\varphi}_{I_0}^H \rangle|}{|I_0|^{\frac{1}{2}}}$$ for some $I_0 \in \mathcal{I}'$, which satisfies $I_0 \cap S \neq \emptyset$ by assumption.
Then $$\begin{aligned} \frac{|\langle B^{\#_1,H}_{I_0}(f_1,f_2),{\varphi}_{I_0}^H \rangle|}{|I_0|^{\frac{1}{2}}} \leq & \frac{1}{|I_0|}\sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{1}{|K|^{\frac{1}{2}}}|\langle f_1, \phi_K^1 \rangle| |\langle f_2, \phi_K^2 \rangle| |\langle |I_0|^{\frac{1}{2}}{\varphi}^H_{I_0},\phi_K^{3,H} \rangle| \nonumber \\ = & \frac{1}{|I_0|}\sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} |\langle |I_0|^{\frac{1}{2}}{\varphi}_{I_0}^H, |K|^{\frac{1}{2}}\phi_K^{3,H} \rangle|. \nonumber \\\end{aligned}$$ Since $ {\varphi}_{I_0}^H$ and $\phi_K^{3,H}$ are compactly supported on $I_0$ and $K$ respectively with $|I_0| \leq |K|$, one has $$\langle |I_0|^{\frac{1}{2}}{\varphi}_{I_0}^H, |K|^{\frac{1}{2}}\phi_K^{3,H} \rangle \neq 0$$ only if $$I_0 \subseteq K.$$ By the hypothesis that $I_0 \cap S \neq \emptyset$, one derives that $K \cap S\neq \emptyset$ and $$\begin{aligned} \frac{|\langle B^{\#_1,H}_{I_0}(f_1,f_2),{\varphi}_{I_0}^H \rangle|}{|I_0|^{\frac{1}{2}}} \leq &\frac{1}{|I_0|} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}}\sum_{K:|K|\sim 2^{\#_1}|I_0|}|\langle |I_0|^{\frac{1}{2}}{\varphi}^H_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,H} \rangle| \nonumber \\ \lesssim & \frac{1}{|I_0|} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S\neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \cdot |I_0|,\end{aligned}$$ where the last inequality holds trivially given that $|I_0|^{\frac{1}{2}}{\varphi}^H_{I_0}$ is the $L^{\infty}$-normalized characteristic function of $I_0$ and $|K|^{\frac{1}{2}}\phi_K^{3,H}$ is bounded in absolute value by the characteristic function of $K$. This completes the proof of the lemma.
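The containment principle used in this proof - the pairing of the $L^\infty$-normalized indicator of $I_0$ against a Haar-type function on $K$ vanishes unless $I_0 \subseteq K$ - can be sanity-checked numerically. The following Python sketch is purely illustrative (the encoding of dyadic intervals and the function names are our own conventions, not part of the argument): it evaluates $\langle \mathbf{1}_I, h_K \rangle$ exactly for an $L^{\infty}$-normalized Haar function $h_K$ and confirms that the pairing vanishes whenever $I \not\subseteq K$.

```python
from fractions import Fraction

def dyadic(j, k):
    """Dyadic interval [k/2^j, (k+1)/2^j) inside [0,1), encoded as (left endpoint, length)."""
    return (Fraction(k, 2 ** j), Fraction(1, 2 ** j))

def pair_indicator_haar(I, K, grid_j):
    """Exact value of <1_I, h_K> over [0,1), computed on a dyadic grid of mesh
    2^{-grid_j} (which must be at least as fine as the finer of I and K), where
    h_K = +1 on the left half of K and -1 on the right half."""
    step = Fraction(1, 2 ** grid_j)
    total = Fraction(0)
    x = Fraction(0)
    while x < 1:
        in_I = I[0] <= x < I[0] + I[1]
        in_K = K[0] <= x < K[0] + K[1]
        if in_I and in_K:
            # sign of the Haar function on the grid cell containing x
            total += step if x < K[0] + K[1] / 2 else -step
        x += step
    return total
```

One also sees that the diagonal case $I = K$ degenerates to zero by the mean-zero property of $h_K$, so in this pairing only strictly coarser scales $K \supsetneq I$ contribute.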
Proof of Lemma \[localization\_haar\] ------------------------------------- Suppose that for any $I \in \mathcal{I}'$, $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$. By definition of energy, there exist $n \in \mathbb{Z}$ and a disjoint collection of dyadic intervals $\mathbb{D}^0_{n}$ such that $$\text{energy} _{\mathcal{I}'}((\langle B^H_I, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) := 2^{n} \sum_{\substack{I \in \mathbb{D}^0_{n}\\ I \in \mathcal{I'}}}|I| \label{energy}$$\[B\_energy\] where $$\label{st_interval} \frac{|\langle B^H_I, {\varphi}^H_I \rangle|}{|I|^{\frac{1}{2}}} > 2^{n}.$$ **Case I. $(\phi^3_K)_K$ is lacunary.** One recalls that in the Haar model, $$\langle B^H_I, {\varphi}_I^H \rangle:= \sum_{\substack{K \in \mathcal{K} \\ |K| \geq |I|}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^H_I,\psi_K^{3,H} \rangle$$ where ${\varphi}^H_I$ is an $L^2$-normalized indicator function of $I$ and $\psi_K^{3,H}$ is a Haar wavelet on $K$. It is not difficult to observe that $$\label{haar_biest_cond} \langle {\varphi}^H_I,\psi_K^{3,H} \rangle \neq 0 \implies K \supseteq I.$$ Given $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$, one deduces that any $K$ contributing to the sum satisfies $K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$.
As a consequence, $$\begin{aligned} \label{haar_biest} \langle B^H_I, {\varphi}^H_I \rangle =& \sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset \\ |K| \geq |I|}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^H_I,\psi_K^{3,H} \rangle \nonumber \\ = &\sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^H_I,\psi_K^{3,H} \rangle.\end{aligned}$$ Let $$B^{n_1,m_1}_0(f_1, f_2)(x) := \sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \psi_K^{3,H}(x).$$ Then $$\langle B^H_I, {\varphi}^H_I \rangle = \langle B^{n_1,m_1}_0, {\varphi}^H_I \rangle.$$ In the Haar model, equation (\[haar\_biest\]) trivially holds due to (\[haar\_biest\_cond\]). This technique of replacing the operator defined in terms of $I$ (namely $B_I^H$) by another operator independent of $I$ (namely $B^{n_1,m_1}_0$) is called the **biest trick**; it allows clean energy estimates for $$\begin{aligned} & \text{energy}((\langle B_0^{n_1,m_1}, {\varphi}_I \rangle)_{I \in \mathcal{I}}), \nonumber \\ & \text{energy}((\langle \tilde{B}_0^{n_2,m_2}, {\varphi}_J \rangle)_{J \in \mathcal{J}}), \nonumber \end{aligned}$$ and yields the local energy estimates described in Proposition \[B\_en\]. **Case II: $(\phi^3_K)_K$ is non-lacunary.** Since $\phi^{3,H}_K$ and ${\varphi}_I^H$ are $L^2$-normalized indicator functions of $K$ and $I$ respectively, the conditions $|K| \geq |I|$ and $\langle {\varphi}_I^H, \phi^{3,H}_K \rangle \neq 0$ imply that $K \supseteq I$. As a result, $K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ given $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$.
Then $$\begin{aligned} \frac{|\langle B_I^H, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}} = & \frac{1}{|I|^{\frac{1}{2}}} \bigg|\sum_{\substack{K \in \mathcal{K} \\ K \supseteq I \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^{H}_I,{\varphi}_K^{3,H} \rangle \bigg| \nonumber \\ \leq & \frac{1}{|I|^{\frac{1}{2}}} \sum_{\substack{K \in \mathcal{K} \\ K \supseteq I \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| \langle |{\varphi}^{H}_I|,|{\varphi}_K^{3,H}| \rangle.\end{aligned}$$ One can drop the condition $K \supseteq I$ in the sum and bound the above expression by $$\frac{|\langle B_I^H, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}} \leq \frac{1}{|I|^{\frac{1}{2}}} \sum_{\substack{K \in \mathcal{K}\\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| \langle |{\varphi}_I^H|,|{\varphi}_K^{3,H}| \rangle.$$ One can define the localized operator in this case $$B^{n_1,m_1}_0(f_1,f_2)(x) := \displaystyle \sum_{\substack{K \in \mathcal{K}\\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| |{\varphi}_K^{3,H}|(x).$$ The discussion above yields that $$\frac{|\langle B_I^H, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}} \leq \frac{|\langle B_0^{n_1,m_1}, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}}$$ and therefore $$\text{energy}_{\mathcal{I}'}(\langle B_I^H, {\varphi}_I^H \rangle_{I \in \mathcal{I'}}) \leq \text{energy}_{\mathcal{I}'}(\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle_{I \in \mathcal{I'}}).$$ This completes the proof of the lemma. $B^{n_1,m_1}_0$ is perfectly localized in the sense that the dyadic intervals (that matter) intersect $\mathcal{U}_{n_1,m_1}$ nontrivially, given that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$.
As will be seen from the proof of Lemma \[B\_loc\_norm\], such localization is essential in deriving the desired estimates. In the general Fourier case, more effort is needed to create similar localizations, as will be discussed in Chapter 10. Proof of Lemma \[B\_loc\_norm\] ------------------------------- The estimates described in Lemma \[B\_loc\_norm\] can be obtained by an argument very similar to the proof of the boundedness of one-parameter paraproducts discussed in Chapter 2 of [@cw]. We include the customized proof here since the argument depends on a one-dimensional stopping-time decomposition which is also an important ingredient for the tensor-type stopping-time decompositions that will be introduced in later chapters. ### One-dimensional stopping-time decomposition - maximal intervals Since the collection of dyadic intervals $\mathcal{K}$ is finite, there exists some $K_1 \in \mathbb{Z}$ such that $$\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{\frac{1}{2}}} \leq C_1 2^{K_1} \text{energy}_{\mathcal{K}}((\langle f_1, {\varphi}_K \rangle)_{K \in \mathcal{K}})$$ for every $K \in \mathcal{K}$. We can pick the largest interval $K_{\text{max}}$ such that $$\frac{|\langle f_1, {\varphi}^1_{K_{\text{max}}} \rangle|}{|K_{\text{max}}|^{\frac{1}{2}}} > C_1 2^{K_1-1}\text{energy}_{\mathcal{K}}((\langle f_1, {\varphi}_K \rangle)_{K \in \mathcal{K}}).$$ Then we define a tree $$U:= \{K \in \mathcal{K}: K \subseteq K_{\text{max}}\},$$ and let $K_U := K_{\text{max}}$, usually called the tree-top. Now we look at $\mathcal{K} \setminus U$ and repeat the above step to choose maximal intervals and collect their subintervals into the corresponding trees. Since $\mathcal{K}$ is finite, the process will eventually end. We then collect all such $U$’s in a set $\mathbb{U}_{K_1-1}$. Next we repeat the above algorithm on $\displaystyle \mathcal{K} \setminus \bigcup_{U \in \mathbb{U}_{K_1-1}} U$. We thus obtain a decomposition $\displaystyle \mathcal{K} = \bigcup_{k}\bigcup_{U \in \mathbb{U}_{k}}U$.
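The greedy selection just described is purely combinatorial, and it may be helpful to see it in executable form. The following Python sketch is an illustration under simplified conventions of our own (intervals are encoded as `(left, length)` pairs, `weight[K]` stands in for $\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{\frac{1}{2}}}$, and `levels` is the decreasing sequence of thresholds $C_1 2^{k-1}\,\text{energy}$); it is not the formal construction used in the proofs.

```python
def stopping_time(intervals, weight, levels):
    """Greedy tree-selection sketch.  At each threshold in `levels`
    (given in decreasing order), repeatedly pick a maximal remaining
    interval whose weight exceeds the threshold and collect all
    remaining subintervals into one tree U with that tree-top K_U."""
    remaining = set(intervals)
    trees = {}  # threshold -> list of (tree-top K_U, tree U)
    for lam in levels:
        trees[lam] = []
        while True:
            candidates = [K for K in remaining if weight[K] > lam]
            if not candidates:
                break
            # a maximal candidate: a dyadic interval of maximal length
            # is never strictly contained in another candidate
            top = max(candidates, key=lambda K: K[1])
            tree = {K for K in remaining
                    if top[0] <= K[0] and K[0] + K[1] <= top[0] + top[1]}
            trees[lam].append((top, tree))
            remaining -= tree
    return trees, remaining
```

For dyadic intervals, the tree-tops selected at a fixed threshold are pairwise disjoint - a candidate contained in another candidate is absorbed into the larger tree first - which is the disjointness underlying Proposition \[st\_prop\].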
If instead the sequence is formed in terms of bump functions in a lacunary family, then the same procedure can be performed with $$\frac{1}{|K|} \left\Vert \bigg(\sum_{K' \subseteq K}\frac{|\langle f_2, \psi_{K'} \rangle|^2 }{|K'|}\chi_{K'}\bigg)^{\frac{1}{2}}\right\Vert_{1,\infty}.$$ The next proposition summarizes the information from the stopping-time decomposition, and the details of the proof are included in Chapter 2 of [@cw]. \[st\_prop\] Suppose $\displaystyle \mathcal{K} = \bigcup_{k}\bigcup_{U \in \mathbb{U}_{k}}U$ is a decomposition obtained from the stopping-time algorithm specified above. Then for any $k \in \mathbb{Z}$, one has $$\displaystyle 2^{k-1}\text{energy}_{\mathcal{K}}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}}) \leq \text{size}_{\bigcup_{U \in \mathbb{U}_k}U}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}}) \leq \min(2^{k}\text{energy}_{\mathcal{K}}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}}),\text{size}_{\mathcal{K}}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}})).$$ In addition, $$\sum_{U \in \mathbb{U}_k} |K_{U}| \lesssim 2^{-k}.$$ The next lemma follows from the stopping-time decomposition, Proposition \[st\_prop\] and Proposition \[JN\], and its proof is discussed carefully in Chapter 2.9 of [@cw]. It plays an important role in proving Lemma \[B\_loc\_norm\], as can be seen in Section 5.2.2. \[s-e\] Suppose $\mathcal{K}$ is a finite collection of dyadic intervals. Then for any $0 \leq \theta_1, \theta_2, \theta_3 <1$ with $\theta_1 + \theta_2 + \theta_3 = 1$, $$\bigg|\sum_{K \in \mathcal{K}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1,\phi_K \rangle \langle f_2,\phi_K \rangle \langle f_3,\phi_K\rangle \bigg| \lesssim \prod_{i=1}^3 \text{size}_{\mathcal{K}} \big((\langle f_i, \phi_K \rangle)_{K \in \mathcal{K}} \big)^{1-\theta_i}\text{energy}_{\mathcal{K}}\big((\langle f_i, \phi_K \rangle)_{K \in \mathcal{K}} \big)^{\theta_i}.$$ ### Proof of Lemma \[B\_loc\_norm\] 1.
**Estimate of $ \|B_0^{n_1,m_1}\|_1$.** For any $\eta \in L^{\infty}$ one has $$\begin{aligned} |\langle B_0^{n_1,m_1},\eta \rangle |\leq & \sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| |\langle \eta, \phi_K^{3} \rangle |, \nonumber \\\end{aligned}$$ where $$\phi^{3}_{K}:=\begin{cases} \psi^{3,H}_K \quad \quad \ \ \text{in Case}\ \ I\\ |{\varphi}^{3,H}_{K}| \quad \quad \text{in Case} \ \ II. \end{cases}$$ Let $\mathcal{K}'$ denote the sub-collection $$\mathcal{K'}:= \{K \in \mathcal{K}: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset \}.$$ Then, one can now apply Lemma \[s-e\] to obtain $$\begin{aligned} & |\langle B_0^{n_1,m_1}, \eta \rangle | \nonumber \\ \lesssim & \text{\ \ size}_{\mathcal{K}'} ((\langle f_1, \phi^1_K \rangle)_{K})^{1-\theta_1}\text{size}_{\mathcal{K}'}((\langle f_2, \phi^2_K \rangle)_{K})^{1-\theta_2} \text{size}_{\mathcal{K}'}((\langle \eta, \phi^3_K \rangle)_{K})^{1-\theta_3} \nonumber \\ & \text{\ \ energy} _{\mathcal{K}'}((\langle f_1, \phi^1_K\rangle)_{K})^{\theta_1}\text{energy} _{\mathcal{K}'}((\langle f_2, \phi^2_K\rangle)_{K})^{\theta_2} \text{energy} _{\mathcal{K}'}((\langle \eta, \phi^3_K\rangle)_{K})^{\theta_3},\end{aligned}$$ for any $0 \leq \theta_1,\theta_2, \theta_3 <1$ with $\theta_1 + \theta_2 + \theta_3 = 1$. 
By applying Proposition \[size\] and the fact that $K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $K \in \mathcal{K}'$, one deduces that $$\begin{aligned} \label{f_size} & \text{size}_{\mathcal{K}'}((\langle f_1, \phi^1_K \rangle)_{K}) \lesssim \min(C_1 2^{n_1}|F_1|,1), \nonumber \\ & \text{size}_{\mathcal{K}'}((\langle f_2, \phi^2_K \rangle)_{K}) \lesssim \min(C_1 2^{m_1}|F_2|,1).\end{aligned}$$ One also recalls that $\eta \in L^{\infty}$, which gives $$\label{inf_size} \text{size}_{\mathcal{K}'}((\langle \eta, \phi^3_K \rangle)_{K}) \lesssim \|\eta\|_{L^{\infty}}.$$ By choosing $\theta_3 = 0$ and combining the estimates (\[f\_size\]), (\[inf\_size\]) with the energy estimates described in Proposition \[energy\_classical\], one obtains $$\begin{aligned} |\langle B_0^{n_1,m_1}, \eta \rangle |\lesssim & (C_12^{n_1}|F_1|)^{\alpha_1(1-\theta_1)} (C_1 2^{m_1}|F_2|)^{\beta_1(1-\theta_2)} \|\eta\|_{L^{\infty}}|F_1|^{\theta_1}|F_2|^{\theta_2} \nonumber \\ = & C_1^{\alpha_1(1-\theta_1)+ \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1)+\theta_1}|F_2|^{\beta_1(1-\theta_2)+\theta_2} \|\eta\|_{L^{\infty}},\end{aligned}$$ where $\theta_1 + \theta_2 = 1$, $ 0 \leq \alpha_1, \beta_1 \leq 1$. Therefore, one can conclude that $$\begin{aligned} & \|B_0^{n_1,m_1}\|_1 \lesssim C_1^{\alpha_1(1-\theta_1)+ \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1)+\theta_1}|F_2|^{\beta_1(1-\theta_2)+\theta_2}. \end{aligned}$$ By choosing $\alpha_1(1-\theta_1)+\theta_1 = \frac{1}{p_1}$ and $\beta_1(1-\theta_2)+\theta_2 = \frac{1}{q_1}$, which is possible given $\frac{1}{p_1} + \frac{1}{q_1} > 1$, one obtains the desired result. 2. **Estimate of $\| B_0^{n_1,m_1}\|_{t}$ for $t >1$.** We will first prove the restricted weak-type estimates for $B_0^{n_1,m_1}$ specified in Claim \[en\_weak\_p\]; the strong-type estimates in Claim \[en\_strong\_p\] then follow from the standard interpolation technique.
\[en\_weak\_p\] $\| B_0^{n_1,m_1}(f_1,f_2)\|_{\tilde{t},\infty} \lesssim C_1^{\frac{1}{p_1} + \frac{1}{q_1}-\theta_1 -\theta_2}2^{n_1(\frac{1}{p_1}-\theta_1)}2^{m_1(\frac{1}{q_1}-\theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}},$ where $\theta_1 + \theta_2 = \frac{1}{\tilde{t}}$ and $\tilde{t} \in (t-\delta, t+ \delta)$ for some $\delta > 0 $ sufficiently small. \[en\_strong\_p\] $\| B_0^{n_1,m_1}(f_1,f_2)\|_{\tilde{t}} \lesssim C_1^{\frac{1}{p_1} + \frac{1}{q_1}-\theta_1-\theta_2} 2^{n_1(\frac{1}{p_1}-\theta_1)}2^{m_1(\frac{1}{q_1}-\theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}},$ where $\theta_1 + \theta_2 = \frac{1}{t}$. By restricted weak-type duality, it suffices to prove that for any measurable set $S$ of finite measure, $$|\langle B_0^{n_1,m_1}, \chi_S \rangle| \lesssim C_1^{\frac{1}{p_1} + \frac{1}{q_1}-\theta_1-\theta_2}2^{n_1(\frac{1}{p_1}-\theta_1)}2^{m_1(\frac{1}{q_1}-\theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|S|^{\frac{1}{\tilde{t}'}}$$ where $\theta_1 + \theta_2 = \frac{1}{\tilde{t}}$. The multilinear form can be estimated using an argument similar to the one in part (1). In particular, let $$\mathcal{K}' := \{K \in \mathcal{K}: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset\}.$$ Then $$\begin{aligned} \label{linear_form_p} & |\langle B_0^{n_1,m_1}, \chi_S \rangle| \nonumber \\ \lesssim & \text{\ \ size}_{\mathcal{K}'}((\langle f_1, \phi^1_K \rangle)_{K})^{1-\theta_1}\text{size}_{\mathcal{K}'}((\langle f_2, \phi^2_K \rangle)_{K})^{1-\theta_2} \text{size}_{\mathcal{K}'}((\langle \chi_S, \phi^3_K \rangle)_{K})^{1-\theta_3} \nonumber \\ & \text{\ \ energy} _{\mathcal{K}'}((\langle f_1, \phi^1_K\rangle)_{K})^{\theta_1}\text{energy} _{\mathcal{K}'}((\langle f_2, \phi^2_K\rangle)_{K})^{\theta_2} \text{energy} _{\mathcal{K}'}((\langle \chi_S , \phi^3_K\rangle)_{K})^{\theta_3},\end{aligned}$$ for any $0 \leq \theta_1,\theta_2, \theta_3 <1$ with $\theta_1 + \theta_2 + \theta_3 = 1$. The size and energy estimates involving $f_1, f_2$ in part (1) are still valid.
Here $\phi^3_K$ are defined differently in Cases $I$ and $II$. In both cases, however, one has the same straightforward estimates $$\begin{aligned} \label{set_size_en} & \text{size}_{\mathcal{K}'}((\langle \chi_S , \phi^3_K \rangle)_{K}) \lesssim 1, \nonumber \\ & \text{energy} _{\mathcal{K}'}((\langle \chi_S , \phi^3_K\rangle)_{K}) \lesssim |S|. \end{aligned}$$ By plugging the estimates (\[set\_size\_en\]) and (\[f\_size\]) into (\[linear\_form\_p\]), one has $$\begin{aligned} |\langle B_0^{n_1,m_1}, \chi_S \rangle| \lesssim & (C_1 2^{n_1}|F_1|)^{\alpha_1(1-\theta_1)} (C_1 2^{m_1}|F_2|)^{\beta_1(1-\theta_2)}|F_1|^{\theta_1}|F_2|^{\theta_2}|S|^{\theta_3} \nonumber \\ = & C_1^{\alpha_1(1-\theta_1) + \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1) + \theta_1} |F_2|^{\beta_1(1-\theta_2)+ \theta_2} |S|^{\theta_3},\end{aligned}$$ for any $0 \leq \alpha_1, \beta_1 \leq 1$. Let $\theta_3 = \frac{1}{\tilde{t}'}$; then $\theta_1 + \theta_2 = \frac{1}{\tilde{t}}$. One can then conclude $$\begin{aligned} \|B_0^{n_1,m_1}\|_{\tilde{t},\infty} \lesssim & C_1^{\alpha_1(1-\theta_1) + \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1) + \theta_1} |F_2|^{\beta_1(1-\theta_2)+ \theta_2}.\end{aligned}$$ Since $\frac{1}{p_1} + \frac{1}{q_1} >1$, one can choose $0 \leq \alpha_1, \beta_1 \leq 1$ and $\theta_1,\theta_2$ with $\theta_1 + \theta_2 = \frac{1}{\tilde{t}} \sim \frac{1}{t}$ such that $$\alpha_1(1-\theta_1) + \theta_1 = \frac{1}{p_1},$$ $$\beta_1(1-\theta_2)+ \theta_2 = \frac{1}{q_1}.$$ The claim then follows.
Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$ - Haar Model ======================================================================================================== In this chapter, we will first specify the localization for the discrete model $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$, which can be viewed as a starting point for the stopping-time decompositions. Then we will introduce the different stopping-time decompositions used in the estimates. Finally, we will discuss how to apply information from the multiple stopping-time decompositions to obtain estimates. The organization of Chapters 7-9 will follow the same scheme. Localization ------------ The definition of the exceptional set, which sets the starting point for the stopping-time decompositions, is expected to be compatible with the stopping-time algorithms involved. There will be two types of stopping-time decompositions undertaken for the estimates of $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$ - one is the *tensor-type stopping-time decomposition* and the other is the *general two-dimensional level sets stopping-time decomposition*. While the second algorithm is related to a generic exceptional set (denoted by $\Omega^2$), the first algorithm aims to integrate information from two one-dimensional decompositions, which corresponds to the creation of a two-dimensional exceptional set (denoted by $\Omega^1$) as a Cartesian product of two one-dimensional exceptional sets. One defines the exceptional set, denoted by $\tilde{\Omega}$, as follows.
Let $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}$$ with $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \displaystyle \Omega^1 := &\bigcup_{\tilde{n} \in \mathbb{Z}}\{Mf_1 > C_1 2^{\tilde{n}}|F_1|\} \times \{Mg_1 > C_2 2^{-\tilde{n}}|G_1|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{n}} \in \mathbb{Z}}\{Mf_2 > C_1 2^{\tilde{\tilde{n}}}|F_2|\} \times \{Mg_2 > C_2 2^{-\tilde{\tilde{n}}}|G_2|\}\cup \nonumber \\ &\bigcup_{\tilde{\tilde{\tilde{n}}} \in \mathbb{Z}}\{Mf_1 > C_1 2^{\tilde{\tilde{\tilde{n}}}}|F_1|\} \times \{Mg_2 > C_2 2^{-\tilde{\tilde{\tilde{n}}}}|G_2|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{\tilde{\tilde{n}}}} \in \mathbb{Z}}\{Mf_2 > C_1 2^{\tilde{\tilde{\tilde{\tilde{n}}}} }|F_2|\} \times \{Mg_1 > C_2 2^{-\tilde{\tilde{\tilde{\tilde{n}}}} }|G_1|\}, \nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s(\mathbb{R}^2)}\}. \nonumber \\\end{aligned}$$ \[subset\] By the boundedness of the Hardy–Littlewood maximal operator and the double square function operator, it is not difficult to check that if $C_1, C_2, C_3 \gg 1$, then $ |\tilde{\Omega}| \ll 1. $ For different model operators, we will define different exceptional sets based on the stopping-time decompositions to be employed. Nevertheless, their measures can be controlled similarly using the boundedness of the maximal operator and the hybrid maximal and square operators. By scaling invariance, we will assume without loss of generality that $|E| = 1$ throughout the paper. Let $$\label{set_E'} E' := E \setminus \tilde{\Omega};$$ then $ |E'| \sim |E| $ and thus $|E'| \sim 1$. Our goal is to show that (\[thm\_weak\_explicit\]) holds with the corresponding subset $E' \subseteq E$ (which will be different for each discrete model operator).
In the current setting, this is equivalent to proving that the multilinear form $$\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the following restricted weak-type estimate $$|\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ It is noteworthy that the discrete model operators are perfectly localized to $E'$ in the Haar model. In particular, $$\begin{aligned} \label{haar_local} \langle \Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}, \chi_{E'} \rangle := & \displaystyle \sum_{I \times J \in \mathcal{R}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^{1,H} \rangle \langle \tilde{B_J}(g_1, g_2), \phi_J^{1,H} \rangle \langle h, \phi_I^{2,H} \otimes \phi_J^{2,H} \rangle \langle \chi_{E'}, \phi_I^{3,H} \otimes \phi_J^{3,H} \rangle \nonumber \\ = & \displaystyle \sum_{\substack{I \times J \in \mathcal{R} \\ I \times J \cap \tilde{\Omega}^c \neq \emptyset}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^{1,H} \rangle \langle \tilde{B_J}(g_1, g_2), \phi_J^{1,H} \rangle \langle h, \phi_I^{2,H} \otimes \phi_J^{2,H} \rangle \langle \chi_{E'}, \phi_I^{3,H} \otimes \phi_J^{3,H} \rangle, \end{aligned}$$ because if $I \times J \cap \tilde{\Omega}^c = \emptyset$, then $I \times J \cap E' = \emptyset$ and thus $ \langle \chi_{E'}, \phi_I^{3,H} \otimes \phi_J^{3,H} \rangle = 0, $ which means that dyadic rectangles satisfying $I \times J \cap \tilde{\Omega}^c = \emptyset$ do not contribute to the multilinear form. In the Haar model, we will rely heavily on the localization (\[haar\_local\]) and consider only the dyadic rectangles $I \times J \in \mathcal{R}$ such that $I \times J \cap \tilde{\Omega}^c \neq \emptyset$.
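The localization mechanism is concrete: a Haar function is supported on its interval, so the coefficient against $\chi_{E'}$ vanishes for any rectangle disjoint from $E'$. The following toy computation (discrete, $L^2$-normalized Haar functions on unit cells — a hypothetical discretization used only for illustration) makes this explicit:

```python
# Localization in the Haar model: h_I is supported on I, so the
# coefficient <chi_{E'}, h_I (x) h_J> vanishes whenever E' misses I x J.
def haar(I):
    """Discrete L^2-normalized Haar function on I = (start, length)."""
    s, l = I
    return {s + i: (1 if i < l // 2 else -1) / l ** 0.5 for i in range(l)}

def coeff(E, I, J):
    """Inner product of the indicator of the cell set E with h_I (x) h_J."""
    hI, hJ = haar(I), haar(J)
    return sum(hI[x] * hJ[y] for (x, y) in E if x in hI and y in hJ)

E_prime = {(0, 0), (1, 2), (3, 3)}          # a toy set of unit cells
assert coeff(E_prime, (8, 4), (8, 4)) == 0  # E' misses I x J = [8,12)^2
assert coeff(E_prime, (0, 4), (0, 4)) != 0  # E' meets I x J = [0,4)^2
```

Exactly as in (\[haar\_local\]), only rectangles meeting the support of $\chi_{E'}$ survive the pairing.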
Tensor-type stopping-time decomposition I - level sets ------------------------------------------------------ The first tensor-type stopping-time decomposition, denoted by the *tensor-type stopping-time decomposition I*, will be performed to obtain estimates for $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$. It aims to recover intersections with two-dimensional level sets from intersections with one-dimensional level sets for each variable. Another tensor-type stopping-time decomposition, denoted by the *tensor-type stopping-time decomposition II*, involves maximal intervals and plays an important role in the discussion for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$. We will focus on the *tensor-type stopping-time decomposition I* in this chapter. ### One-dimensional stopping-time decompositions - level sets One can perform a one-dimensional stopping-time decomposition on $\mathcal{I} := \{I: I \times J \in \mathcal{R}\}$. Let $$\Omega^{x}_{N_1} := \{ Mf_1 > C_1 2^{N_1}|F_1|\},$$ for some $N_1 \in \mathbb{Z}$ and $$\mathcal{I}_{N_1} := \{I \in \mathcal{I}: |I \cap \Omega^{x}_{N_1}| > \frac{1}{10}|I| \}.$$ Define $$\Omega^{x}_{N_1-1} := \{ Mf_1 > C_1 2^{N_1-1}|F_1|\},$$ and $$\mathcal{I}_{N_1-1} := \{I \in \mathcal{I} \setminus \mathcal{I}_{N_1}: |I \cap \Omega^{x}_{N_1-1}| > \frac{1}{10}|I| \}.$$ Iterating this procedure generates the sets $(\Omega^{x}_{n_1})_{n_1}$ and $(\mathcal{I}_{n_1})_{n_1}$.
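The iteration just described can be sketched in a few lines. The discretization below (unit cells on $\{0,\dots,63\}$ and toy nested level sets, both assumptions made purely for illustration, not the actual maximal-function level sets) assigns each interval to the highest level at which it meets the $\frac{1}{10}$-overlap criterion:

```python
# Schematic one-dimensional stopping-time decomposition: each interval I
# is assigned to the largest level n with |I ∩ Omega_n| > |I|/10, where
# (Omega_n)_n is a decreasing family of sets; processing levels from the
# top down mirrors the exclusion I in I \ I_{N_1} at the next level.
def decompose(intervals, omega, n_max, n_min):
    """intervals: list of (start, length); omega(n): set of unit cells."""
    levels = {}
    remaining = list(intervals)
    for n in range(n_max, n_min - 1, -1):
        cells = omega(n)
        selected = [I for I in remaining
                    if len(set(range(I[0], I[0] + I[1])) & cells) > I[1] / 10]
        levels[n] = selected
        remaining = [I for I in remaining if I not in selected]
    return levels

# Toy nested level sets Omega_n = {0, ..., min(16 * 2^{-n}, 64) - 1}.
omega = lambda n: set(range(min(16 * 2 ** (-n), 64)))
intervals = [(0, 4), (8, 8), (32, 16)]
levels = decompose(intervals, omega, 0, -3)
# (0,4) and (8,8) are selected at level 0; (32,16) only at level -2.
```

The same routine, run with the level sets of $Mf_2$, $Mg_1$ or $Mg_2$, produces the other three one-dimensional decompositions of this section.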
Independently define $$\Omega^{x}_{M_1} := \{ Mf_2 > C_1 2^{M_1}|F_2|\},$$ for some $M_1 \in \mathbb{Z}$ and $$\mathcal{I}_{M_1} := \{I \in \mathcal{I}: |I \cap \Omega^{x}_{M_1}| > \frac{1}{10}|I| \}.$$ Define $$\Omega^{x}_{M_1-1} := \{ Mf_2 > C_1 2^{M_1-1}|F_2|\},$$ and $$\mathcal{I}_{M_1-1} := \{I \in \mathcal{I} \setminus \mathcal{I}_{M_1}: |I \cap \Omega^{x}_{M_1-1}| > \frac{1}{10}|I| \}.$$ Iterating this procedure generates the sets $(\Omega^{x}_{m_1})_{m_1}$ and $(\mathcal{I}_{m_1})_{m_1}$. Now define $\mathcal{I}_{n_1,m_1} := \mathcal{I}_{n_1} \cap \mathcal{I}_{m_1}$, which yields the decomposition $\displaystyle \mathcal{I} = \bigcup_{n_1,m_1}\mathcal{I}_{n_1,m_1}$. The same algorithm can be applied to $\mathcal{J}:= \{J: I \times J \in \mathcal{R}\}$ with respect to the level sets in terms of $Mg_1$ and $Mg_2$, which produces the sets (i) $(\Omega^{y}_{n_2})_{n_2}$ and $(\mathcal{J}_{n_2})_{n_2}$, where $$\Omega^{y}_{n_2} := \{ Mg_1 > C_2 2^{n_2}|G_1|\},$$ and $$\mathcal{J}_{n_2} := \{J \in \mathcal{J} \setminus \mathcal{J}_{n_2+1}: |J \cap \Omega^{y}_{n_2}| > \frac{1}{10}|J| \}.$$ (ii) $(\Omega^{y}_{m_2})_{m_2}$ and $(\mathcal{J}_{m_2})_{m_2}$, where $$\Omega^{y}_{m_2} := \{ Mg_2 > C_2 2^{m_2}|G_2|\},$$ and $$\mathcal{J}_{m_2} := \{J \in \mathcal{J} \setminus \mathcal{J}_{m_2+1}: |J \cap \Omega^{y}_{m_2}| > \frac{1}{10}|J| \}.$$ One thus obtains the decomposition $\displaystyle \mathcal{J} = \bigcup_{n_2, m_2} \mathcal{J}_{n_2,m_2}$, where $\mathcal{J}_{n_2,m_2} := \mathcal{J}_{n_2} \cap \mathcal{J}_{m_2}$. ### Tensor product of two one-dimensional stopping-time decompositions - level sets If we assume that all dyadic rectangles satisfy $I \times J \cap \tilde{\Omega}^{c} \neq \emptyset$ as in the Haar model, then we have the following observation. \[obs\_indice\] If $I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}$, then $n_1 , m_1 ,n_2, m_2 \in \mathbb{Z}$ satisfy $n_1+n_2 < 0$, $m_1 + m_2 < 0$, $n_1 + m_2 < 0$ and $m_1 + n_2 < 0$.
(Equivalently, for every $I \times J$ with $I \times J \cap \tilde{\Omega}^{c} \neq \emptyset$, one has $I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}$ for some $n_2, m_2 \in \mathbb{Z}$ and $n, m > 0$.) The observation shows how a rectangle $I \times J$ intersects a two-dimensional level set is closely related to how the corresponding intervals intersect one-dimensional level sets (namely $I \in \mathcal{I}_{n_1,m_1}$ and $J \in \mathcal{J}_{n_2,m_2}$ with $n_1 + n_2 < 0$ and $m_1 + m_2 < 0$), as commented at the beginning of the section. Given $I \in \mathcal{I}_{n_1}$, one has $|I \cap \{ Mf_1 > C_1 2^{n_1}|F_1|\}| > \frac{1}{10} |I|$; similarly, $J \in \mathcal{J}_{n_1'}$ implies that $|J \cap \{ Mg_1 > C_2 2^{n_1'}|G_1|\}| > \frac{1}{10}|J|$. If $n_1 + n_1' \geq 0$, then $\{ Mf_1 > C_1 2^{n_1}|F_1|\} \times\{ Mg_1 > C_2 2^{n_1'}|G_1|\} \subseteq \Omega^1 \subseteq \Omega$. Then $|I \times J \cap \Omega| > \frac{1}{100}|I \times J|$, which implies that $I \times J \subseteq \tilde{\Omega}$ and contradicts the assumption. The same reasoning applies to the remaining pairs of indices. General two-dimensional level sets stopping-time decomposition -------------------------------------------------------------- With the assumption that $R \cap \tilde{\Omega}^c \neq \emptyset$, one has that $$|R\cap \Omega^2| \leq \frac{1}{100}|R|,$$ where $$\Omega^2 = \{ SSh >C_3 \|h\|_s\}.$$ Then define $$\Omega^2_{-1}:= \{SSh > C_3 2^{-1}\|h\|_{L^s}\}$$ and $$\mathcal{R}_{-1} := \{R \in \mathcal{R}: |R \cap \Omega^2_{-1}| > \frac{1}{100}|R|\}.$$ Successively define $$\Omega^2_{-2}:= \{SSh > C_3 2^{-2}\|h\|_{L^s}\}$$ and $$\mathcal{R}_{-2} := \{R \in \mathcal{R} \setminus \mathcal{R}_{-1}: |R \cap \Omega^2_{-2}| > \frac{1}{100}|R|\}.$$ Iterating this two-dimensional stopping-time decomposition generates the sets $(\Omega^2_{k_1})_{k_1 \leq 0}$ and $(\mathcal{R}_{k_1})_{k_1 \leq 0}$.
Independently one can apply the same algorithm involving $SS\chi_{E'}$, which generates $(\Omega^2_{k_2})_{k_2 \leq K}$ and $(\mathcal{R}_{k_2})_{k_2 \leq K}$, where $K$ can be arbitrarily large. The existence of $K$ is guaranteed by the finite cardinality of the collection of dyadic rectangles. Sparsity condition ------------------ One important property following from the *tensor-type stopping-time decomposition I - level sets* is the sparsity of dyadic intervals at different levels. Such a geometric property plays an important role in the arguments for the main theorems. \[sparsity\] Suppose that $\displaystyle \mathcal{J} = \bigcup_{n_2 \in \mathbb{Z}} \mathcal{J}_{n_2}$ is a decomposition of dyadic intervals with respect to $Mg_1$ as specified in Section 6.3. For any fixed $n_2 \in \mathbb{Z}$, suppose that $J_0 \in \mathcal{J}_{n_2 - 10}$. Then $$\displaystyle \sum_{\substack{J \in \mathcal{J}_{n_2}\\J \cap J_0 \neq \emptyset}} |J| \leq \frac{1}{2}|J_0|.$$ To prove the proposition, one needs the following claim about pointwise estimates for $Mg_1$ on $J \in \mathcal{J}_{n_2}$: \[ptwise\] Suppose that $\bigcup_{n_2}\mathcal{J}_{n_2}$ is a partition of dyadic intervals generated from the stopping-time decomposition described above. If $J \in \mathcal{J}_{n_2}$, then for any $y \in J,$ $$ Mg_1(y)> 2^{-7} \cdot C_2 2^{n_2}|G_1|.$$ We will first explain why the proposition follows from the claim and then prove the claim. One recalls that all the intervals are dyadic, which means that if $J \cap J_0 \neq \emptyset$, then either $$J \subseteq J_0$$ or $$J_0 \subseteq J.$$ If $J_0 \subseteq J$, then the claim implies that $$J_0 \subseteq J \subseteq \{ Mg_1 > C_2 2^{n_2-7}|G_1|\}.$$ But $J_0 \in \mathcal{J}_{n_2-10}$ implies that $$\big|J_0 \cap \{ Mg_1 > C_2 2^{n_2 - 7}|G_1|\}\big| \leq \frac{1}{10}|J_0|,$$ which is a contradiction.
Now suppose that $J \subseteq J_0$ and that $$\displaystyle \sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}} |J| > \frac{1}{2}|J_0|.$$ Then one can derive from $J \in \mathcal{J}_{n_2}$ that $$\big|J\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| > \frac{1}{10}|J|.$$ Therefore $$\sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}} \big|J\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| > \frac{1}{10}\sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}}|J| > \frac{1}{20}|J_0|.$$ But by the disjointness of $(J)_{J \in \mathcal{J}_{n_2}}$, $$\sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}} \big|J\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| \leq \big|J_0\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big|.$$ Thus $$\big|J_0\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| > \frac{1}{20}|J_0|.$$ Now the claim, with slight modifications, implies that $J_0 \subseteq \{Mg_1 > C_2 2^{n_2-8}|G_1| \}$. But $J_0 \in \mathcal{J}_{n_2-10}$, which gives the necessary condition that $$\big|J_0\cap \{Mg_1 > C_2 2^{n_2-8}|G_1| \} \big| \leq \frac{1}{10}|J_0|,$$ reaching a contradiction. We will now prove the claim. Without loss of generality, we assume that $g_1$ is non-negative, since if it is not, we can always replace it by $|g_1|$, where $Mg_1 = M(|g_1|)$. We prove the claim case by case: Case (i): $\forall y \in \{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J$, there exists $J_{y} \subseteq J$ such that $\text{ave}_{J_y}(g_1) > C_2 2^{n_2}|G_1|;$ Case (ii): There exists $y_0 \in \{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J$ and $J_{y_0} \nsubseteq J$ such that $\text{ave}_{J_{y_0}}(g_1) > C_2 2^{n_2}|G_1|$ and Case (iia): $\frac{1}{40}|J| \leq |J_{y_0} \cap J|$ and $|J_{y_0}| \leq |J|$; Case (iib): $\frac{1}{40}|J| \leq |J_{y_0} \cap J|$ and $|J_{y_0}| > |J|$; Case (iic): $|J_{y_0} \cap J| < \frac{1}{40}|J|$. *Proof of (i):* In Case (i), one observes that $\{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J$ can be rewritten as $\{M(g_1\cdot \chi_J) > C_2 2^{n_2}|G_1|\} \cap J$.
Thus $$C_2 2^{n_2}|G_1||\{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J| = C_2 2^{n_2}|G_1||\{M(g_1\chi_J) > C_2 2^{n_2}|G_1|\} \cap J| \leq \|g_1\chi_J\|_1.$$ One recalls that $|\{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J| > \frac{1}{10}|J|$, which implies that $$C_2 2^{n_2}|G_1|\cdot \frac{1}{10}|J| \leq \|g_1\chi_J\|_1,$$ or equivalently, $$\frac{\|g_1\chi_J\|_1}{|J|} \geq \frac{1}{10}C_2 2^{n_2}|G_1|.$$ Therefore $Mg_1 > 2^{-4} C_2 2^{n_2}|G_1|$ on $J$. *Proof of (ii)*: We will prove that if either (iia) or (iib) holds, then $Mg_1 > 2^{-7} C_2 2^{n_2}|G_1|$. If neither (iia) nor (iib) happens, then (iic) has to hold, and in that case $Mg_1 > 2^{-7} C_2 2^{n_2}|G_1|$ as well. If there exists $y_0 \in \{Mg_1 > C_2 2^{n_2}|G_1|\}$ such that (iia) holds, then $$\frac{\|g_1 \chi_{J_{y_0}} \|_1}{|J_{y_0}|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0}|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0}\cap J|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{\frac{1}{40}|J|},$$ where the last inequality follows from $\frac{1}{40}|J| \leq |J_{y_0} \cap J|$. Moreover, $|J_{y_0}| \leq |J|$ and $J_{y_0} \cap J \neq \emptyset$ imply that $|J_{y_0} \cup J| \leq 2|J|$. Thus $$\frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{\frac{1}{40}|J|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{\frac{1}{40}\cdot\frac{1}{2}|J_{y_0} \cup J|},$$ which implies $$\frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0} \cup J|} > \frac{1}{80}C_2 2^{n_2}|G_1|,$$ and as a result $Mg_1 > 2^{-7} C_2 2^{n_2}|G_1|$ on $J$. If there exists $y_0 \in \{Mg_1 > C_2 2^{n_2}|G_1|\}$ such that (iib) holds, then $$\frac{\|g_1 \chi_{J_{y_0}} \|_1}{|J_{y_0}|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0}|} = \frac{2\|g_1 \chi_{J_{y_0} \cup J} \|_1}{2|J_{y_0}|} \leq \frac{2\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0} \cup J|},$$ where the last inequality follows from $|J_{y_0}| > |J|$. As a consequence, $$\frac{2\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0} \cup J|} > C_2 2^{n_2}|G_1|,$$ and $Mg_1 > 2^{-1} C_22^{n_2}|G_1|$ on $J$.
If none of (i), (iia), (iib) happens, then for $\mathcal{S}_{(iic)} := \{y: Mg_1(y) > C_2 2^{n_2}|G_1| \text{\ \ and\ \ } (i) \text{\ \ does not hold}\}$, one direct geometric observation is that $|\mathcal{S}_{(iic)} \cap J| \leq \frac{1}{20}|J|$. In particular, suppose $y \in \mathcal{S}_{(iic)}$; then any $J_{y}$ with $\text{ave}_{J_{y}}(g_1) > C_2 2^{n_2}|G_1|$ has to contain the left endpoint or the right endpoint of $J$, which we denote by $J_{\text{left}}$ and $J_{\text{right}}$. If $J_{\text{left}} \in J_{y}$, then the assumption that neither (iia) nor (iib) holds implies that $$|J_{y} \cap J| < \frac{1}{40} |J|,$$ and thus $$|[J_{\text{left}}, y]| < \frac{1}{40}|J|.$$ The same implication holds for $y \in \mathcal{S}_{(iic)}$ with $J_{\text{right}} \in J_{y}$. Therefore, for any $y \in \mathcal{S}_{(iic)}$, $|[J_{\text{left}}, y]| < \frac{1}{40}|J|$ or $|[y, J_{\text{right}}]| < \frac{1}{40}|J|$, which yields $$\big|\mathcal{S}_{(iic)} \cap J\big|< \frac{1}{20}|J|.$$ Since $\big|\{Mg_1> C_2 2^{n_2}|G_1|\} \cap J\big| > \frac{1}{10}|J|$, $$\bigg|\big(\{Mg_1> C_2 2^{n_2}|G_1|\} \setminus \mathcal{S}_{(iic)}\big) \cap J \bigg| > \frac{1}{20}|J|,$$ in which case one can apply the argument for (i) with $\{Mg_1> C_2 2^{n_2}|G_1|\}$ replaced by $\{Mg_1> C_2 2^{n_2}|G_1|\} \setminus \mathcal{S}_{(iic)}$ to conclude that $$Mg_1 > 2^{-5}C_2 2^{n_2} |G_1|$$ on $J$. This completes the proof of the claim. \[sp\_2d\] Let $\mathcal{R}_0$ be an arbitrary collection of dyadic rectangles and define $\mathcal{J}:= \{J: R = I \times J \in \mathcal{R}_0 \}$. Suppose that $\displaystyle \mathcal{J} = \bigcup_{n_2 \in \mathbb{Z}} \mathcal{J}_{n_2}$ is a decomposition of dyadic intervals with respect to $Mg_1$ as specified in Section 6.3 so that $\displaystyle \mathcal{R}_0 = \bigcup_{n_2 \in \mathbb{Z}} \bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2} \\ }} R $ is a decomposition of dyadic rectangles in $\mathcal{R}_0$.
Then $$\sum_{n_2 \in \mathbb{Z}} \bigg|\bigcup_{\substack{R = I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}}R\bigg| \lesssim \bigg|\bigcup_{R \in \mathcal{R}_0} R \bigg|. $$ Proposition \[sparsity\] gives a sparsity condition for intervals in the $y$-direction, which is sufficient to generate sparsity for dyadic rectangles in $\mathbb{R}^2$. In particular, $$\begin{aligned} \sum_{n_2 \in \mathbb{Z}} \bigg|\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}}R\bigg| =& \sum_{i = 0}^9 \sum_{n_2 \equiv i \ \ \text{mod} \ \ 10} \bigg|\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}}R\bigg| \nonumber \\ \lesssim & \sum_{i = 0}^9 \bigg|\bigcup_{n_2 \equiv i \ \ \text{mod} \ \ 10}\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}} R \bigg| \nonumber \\ \leq &10 \bigg|\bigcup_{n_2 \in \mathbb{Z}}\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}} R \bigg| \nonumber\\ = & 10 \big|\bigcup_{R \in \mathcal{R}_0} R \big|,\end{aligned}$$ where the first inequality follows from the sparsity condition in Proposition \[sparsity\], applied within each residue class (any two levels in the same class differ by at least $10$). The picture below illustrates from a geometric point of view why the two-dimensional sparsity condition (Proposition \[sp\_2d\]) follows naturally from the one-dimensional sparsity (Proposition \[sparsity\]). In the figure, $A_1, A_2 \in \mathcal{I} \times \mathcal{J}_{n_2+20}$, $B \in \mathcal{I} \times \mathcal{J}_{n_2+10}$ and $C \in \mathcal{I} \times \mathcal{J}_{n_2}$ for some $n_2 \in \mathbb{Z}$.
*(Figure: overlapping dyadic rectangles $A_1$, $A_2$, $B$ and $C$ at the levels indicated above; rectangles at levels separated by $10$ or more have sparse overlap.)*

Summary of stopping-time decompositions ---------------------------------------

I. *Tensor-type stopping-time decomposition I* on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}$ with $n_1 + n_2 < 0$, $m_1 + m_2 < 0$, $n_1 + m_2 < 0$, $m_1 + n_2 < 0$.

II. *General two-dimensional level sets stopping-time decomposition* on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}$ with $k_1 < 0$, $k_2 \leq K$.

Application of stopping-time decompositions ------------------------------------------- With the stopping-time decompositions specified above, one can rewrite the multilinear form as [$$\begin{aligned} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ = &\bigg|\displaystyle \sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{I}_{n_1, m_1} \times \mathcal{J}_{n_2, m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle \langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle \cdot\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle \langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle \bigg| \nonumber \\ \leq &
\sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \cdot \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} |I| |J|.\nonumber \end{aligned}$$]{} One recalls from the *general two-dimensional level sets stopping-time decomposition* that $I \times J \in \mathcal{R}_{k_1,k_2} $ only if $$|I\times J \cap (\Omega^2_{k_1})^c | \geq \frac{99}{100}|I\times J|$$ and $$|I\times J \cap (\Omega^2_{k_2})^c | \geq \frac{99}{100}|I \times J|,$$ with $\Omega^2_{k_1} := \{ SSh > C_3 2^{k_1}\|h\|_s\}$ and $\Omega^2_{k_2}:= \{ SS\chi_{E'} > C_3 2^{k_2}\}$.
As a result, $$|I \times J| \sim |I \times J \cap (\Omega^2_{k_1})^c \cap (\Omega^2_{k_2})^c|.$$ One can therefore rewrite the multilinear form as [$$\begin{aligned} \label{form12} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ \lesssim & \sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \cdot \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \nonumber \\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ \cdot |I\times J \cap (\Omega^2_{k_1})^c \cap (\Omega^2_{k_2})^c| \nonumber \\ \leq & \sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \displaystyle \sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}\cdot \nonumber \\ &\quad \quad \quad \quad \quad \int_{(\Omega^2_{k_1})^c \cap (\Omega^2_{k_2})^c} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}}\chi_{I}(x) \chi_{J}(y) dx dy. \nonumber \\\end{aligned}$$]{} We will now estimate each component in (\[form12\]) separately for clarity.
### Estimate for the integral One can apply the Cauchy–Schwarz inequality to the integrand and obtain $$\begin{aligned} \label{integral12} & \int_{(\Omega^2_{k_1})^c \cap (\Omega^2_{k_2})^c} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}}\chi_{I}(x) \chi_{J}(y) dx dy \nonumber \\ \leq & \int_{(\Omega^2_{k_1})^c \cap (\Omega^2_{k_2})^c} \bigg(\sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}}\frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|^2}{|I||J|} \chi_I(x)\chi_J(y)\bigg)^{\frac{1}{2}} \nonumber \\ &\quad \quad \quad \quad \quad \quad \bigg(\sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}}\frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|^2}{|I||J|}\chi_{I}(x) \chi_{J}(y)\bigg)^{\frac{1}{2}} dxdy \nonumber \\ \leq &\displaystyle \int_{(\Omega^2_{k_1})^c \cap (\Omega^2_{k_2})^c} SSh(x,y) SS\chi_{E'}(x,y) \cdot \chi_{\bigcup_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J}(x,y) dxdy. \nonumber \\\end{aligned}$$ Based on the *general two-dimensional level sets stopping-time decomposition*, the hybrid functions are controlled pointwise on the domain of integration.
In particular, for any $(x,y) \in (\Omega^2_{k_1})^c \cap (\Omega^2_{k_2})^c$, $$\begin{aligned} & SSh(x,y) \lesssim C_3 2^{k_1} \|h\|_s, \nonumber \\ & SS\chi_{E'}(x,y) \lesssim C_3 2^{k_2}.\end{aligned}$$ As a result, the integral can be estimated by $$\begin{aligned} C_3^2 2^{k_1}\|h\|_s 2^{k_2} \bigg| \bigcup_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J \bigg|.\nonumber \\\end{aligned}$$ ### Estimate for $ \sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} $ and $ \sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}$ One recalls the algorithm in the *tensor-type stopping-time decomposition I - level sets*, which incorporates the following information. $$I \in \mathcal{I}_{n_1, m_1}$$ implies that $$|I \cap \{ Mf_1 < C_1 2^{n_1}|F_1|\}| \geq \frac{9}{10}|I|,$$ $$|I \cap \{Mf_2 < C_1 2^{m_1}|F_2|\}| \geq \frac{9}{10}|I|,$$ which translates into $$I \cap \{ Mf_1 < C_1 2^{n_1}|F_1|\} \cap \{Mf_2 < C_1 2^{m_1}|F_2|\} \neq \emptyset.$$ Then one can recall Proposition \[size\_cor\] to estimate $$\sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim C_1^2 (2^{n_1}|F_1|)^{\alpha_1} (2^{m_1}|F_2|)^{\alpha_2},$$ for any $0 \leq \alpha_1,\alpha_2 \leq 1$. Similarly, one can apply Proposition \[size\_cor\] with $\mathcal{U}'_{n_2,m_2}:= \{ Mg_1 < C_2 2^{n_2}|G_1|\} \cap \{Mg_2 < C_2 2^{m_2}|G_2|\}$ to conclude that $$\sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_2^2 (2^{n_2}|G_1| )^{\beta_1}(2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \beta_1,\beta_2 \leq 1$.
By choosing $\alpha_1 = \frac{1}{p_1}, \alpha_2 = \frac{1}{q_1}, \beta_1 = \frac{1}{p_2}, \beta_2 = \frac{1}{q_2}, $ the multilinear form can therefore be estimated by $$\begin{aligned} \label{linear_form_fixed_scale} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ \lesssim & C_1^2 C_2^2 C_3^2\sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{n_1 \frac{1}{p_1}}2^{m_1\frac{1}{q_1}}2^{n_2 \frac{1}{p_2}} 2^{m_2 \frac{1}{q_2}}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}}} R\bigg| \nonumber. \\\end{aligned}$$ One recalls that $$\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2},$$ then $$\begin{aligned} \label{exp_size} 2^{n_1 \frac{1}{p_1}}2^{m_1\frac{1}{q_1}}2^{n_2 \frac{1}{p_2}} 2^{m_2 \frac{1}{q_2}} = & 2^{n_1\frac{1}{p_2}} 2^{n_1(\frac{1}{q_2} - \frac{1}{q_1})}2^{m_1 \frac{1}{q_1}} 2^{n_2\frac{1}{p_2}}2^{m_2(\frac{1}{q_2} - \frac{1}{q_1})} 2^{m_2\frac{1}{q_1}} \nonumber \\ = & (2^{n_1 + n_2})^{\frac{1}{p_2}} (2^{n_1+m_2})^{\frac{1}{q_2} - \frac{1}{q_1}}(2^{m_1+m_2})^{\frac{1}{q_1}}.\end{aligned}$$ By the definition of exceptional sets, $ 2^{n_1 + n_2} \lesssim 1, 2^{n_1 + m_2} \lesssim 1, 2^{m_1 + n_2} \lesssim 1, 2^{m_1 + m_2} \lesssim 1 $.
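The rearrangement in (\[exp\_size\]) is a purely algebraic identity that only uses $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2}$; it can be confirmed numerically (the particular reciprocal exponents below are illustrative assumptions chosen only to satisfy that constraint):

```python
# Sanity check of the identity (exp_size): with a_i = 1/p_i, b_i = 1/q_i
# and a_1 + b_1 = a_2 + b_2,
#   2^{n_1 a_1} 2^{m_1 b_1} 2^{n_2 a_2} 2^{m_2 b_2}
#     = (2^{n_1+n_2})^{a_2} (2^{n_1+m_2})^{b_2-b_1} (2^{m_1+m_2})^{b_1}.
import math
import random

random.seed(1)
a1, b1 = 0.7, 0.6          # 1/p_1, 1/q_1 (illustrative values)
a2 = 0.8                   # 1/p_2
b2 = a1 + b1 - a2          # forces 1/p_1 + 1/q_1 = 1/p_2 + 1/q_2
for _ in range(100):
    n1, m1, n2, m2 = (random.uniform(-5, 5) for _ in range(4))
    lhs = 2 ** (n1 * a1 + m1 * b1 + n2 * a2 + m2 * b2)
    rhs = (2 ** (n1 + n2)) ** a2 * (2 ** (n1 + m2)) ** (b2 - b1) \
          * (2 ** (m1 + m2)) ** b1
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```

This is the step that converts the sum over the four individual indices into a sum over the combined indices $n_1+n_2$, $n_1+m_2$ and $m_1+m_2$.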
Then $$n := -(n_1 + n_2) > 0,$$ $$m := -(m_1 + m_2) > 0.$$ Without loss of generality, one further assumes that $\frac{1}{q_2} \geq \frac{1}{q_1}$ (with $q_1$ and $q_2$ swapped in the opposite case), which implies that $$(2^{n_1+m_2})^{\frac{1}{q_2}- \frac{1}{q_1}} \lesssim 1.$$ Now (\[linear\_form\_fixed\_scale\]) can be bounded by $$\begin{aligned} \label{linear_almost} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ \lesssim & C_1^2 C_2^2 C_3^2\sum_{\substack{n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} 2^{-n \frac{1}{p_2}}2^{-m \frac{1}{q_1}}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R\bigg|. \end{aligned}$$ With $k_1, k_2, n, m$ fixed, one can apply the sparsity condition (Proposition \[sp\_2d\]) repeatedly and obtain the following bound for the expression $$\label{nested_area} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R \bigg| \lesssim \sum_{m_2 \in \mathbb{Z}} \bigg| \bigcup_{\substack{R \in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-m-m_2} \times \mathcal{J}_{m_2}}} R \bigg| \lesssim \bigg|\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\bigg| \leq \min( \big|\bigcup_{R\in \mathcal{R}_{k_1}} R\big|, \big|\bigcup_{R\in \mathcal{R}_{k_2}} R\big|).$$ The arbitrariness of the collection of rectangles in Proposition \[sp\_2d\] provides the compatibility of the different stopping-time decompositions. In the current setting, the collection denoted $\mathcal{R}_0$ in Proposition \[sp\_2d\] is chosen to be $\mathcal{R}_{k_1, k_2}$.
The sparsity condition allows one to combine the *tensor-type stopping-time decomposition I* and the *general two-dimensional level sets stopping-time decomposition* and to obtain information from both stopping-time decompositions. The readers who are familiar with the proof of single-parameter paraproducts [@cw] or bi-parameter paraproducts [@cptt], [@cw] might recall that (\[nested\_area\]) employs a different argument from the previous ones [@cptt], [@cw]. In particular, by the previous reasoning, one would fix $n_2, m_2 \in \mathbb{Z}$ and obtain $$\label{old} \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R \bigg| \lesssim \min( \big|\bigcup_{R\in \mathcal{R}_{k_1}} R\big|, \big|\bigcup_{R\in \mathcal{R}_{k_2}} R\big|).$$ However, the expression on the right hand side of (\[old\]) is independent of $n_2$ or $m_2$, which gives a divergent series when the sum is taken over all $n_2, m_2 \in \mathbb{Z}$. This explains the novelty and necessity of the sparsity condition (Proposition \[sp\_2d\]) for our argument. To estimate the right hand side of (\[nested\_area\]), one recalls from the *general two-dimensional level sets stopping-time decomposition* that $R \in \mathcal{R}_{k_1}$ implies $$\big|R \cap \Omega^2_{k_1-1} \big| > \frac{1}{100}|R|,$$ or equivalently $$\displaystyle \bigcup_{R\in \mathcal{R}_{k_1}} R \subseteq \{ M (\chi_{\Omega^2_{k_1-1}}) > \frac{1}{100}\}.$$ As a result, $$\begin{aligned} \label{rec_area_1} \bigg|\bigcup_{R\in \mathcal{R}_{k_1}} R\bigg| \leq & \big|\{ M (\chi_{\Omega^2_{k_1-1}}) > \frac{1}{100}\}\big| \lesssim |\Omega^2_{k_1-1}|=|\{ SSh > C_3 2^{k_1} \|h\|_{L^s}\}| \lesssim C_3^{-s}2^{-k_1s},\end{aligned}$$ where the last inequality follows from the boundedness of the double square function described in Proposition \[maximal-square\].
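For clarity, the last inequality in (\[rec\_area\_1\]) is Chebyshev's inequality combined with the $L^s$-boundedness of the double square function: $$\big|\{ SSh > C_3 2^{k_1} \|h\|_{L^s}\}\big| \leq \big(C_3 2^{k_1} \|h\|_{L^s}\big)^{-s} \|SSh\|_{L^s}^s \lesssim C_3^{-s} 2^{-k_1 s}.$$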
By a similar reasoning and the fact that $|E'| \sim 1$, $$\begin{aligned} \label{rec_area_2} \bigg|\bigcup_{R\in \mathcal{R}_{k_2}} R\bigg| \leq & \big|\{ M (\chi_{\Omega^2_{k_2-1}}) > \frac{1}{100}\}\big| \lesssim |\Omega^2_{k_2-1}|=|\{ SS(\chi_{E'}) > C_3 2^{k_2}\}| \lesssim C_3^{-\gamma}2^{-k_2\gamma},\end{aligned}$$ for any $\gamma >1$. Interpolation between (\[rec\_area\_1\]) and (\[rec\_area\_2\]) yields $$\label{int_area} \bigg|\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\bigg| \lesssim 2^{-\frac{k_1s}{2}}2^{-\frac{k_2\gamma}{2}},$$ and by plugging (\[int\_area\]) into (\[nested\_area\]), one has $$\label{rec_area_hybrid} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R \bigg| \lesssim 2^{-\frac{k_1s}{2}}2^{-\frac{k_2\gamma}{2}},$$ for any $\gamma >1$. One combines the estimates (\[rec\_area\_hybrid\]) and (\[linear\_almost\]) to obtain $$\begin{aligned} |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim &C_1^2 C_2^2 C_3^2 \sum_{\substack{n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-n \frac{1}{p_2}}2^{-m \frac{1}{q_1}}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{k_1(1-\frac{s}{2})} \| h \|_{L^s} 2^{k_2(1-\frac{\gamma}{2})}. \nonumber \\\end{aligned}$$ The geometric series $\displaystyle \sum_{k_1<0}2^{k_1(1-\frac{s}{2})}$ is convergent given that $s <2$. For $\displaystyle\sum_{k_2 \leq K}2^{k_2(1-\frac{\gamma}{2})}$, one can choose $\gamma >1$ sufficiently large for the range $0 \leq k_2 \leq K$, and $\gamma >1$ close to $1$ for $k_2 <0$.
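Two elementary facts are implicit above. The "interpolation" step (\[int\_area\]) is simply the inequality $\min(a,b) \leq a^{\frac{1}{2}}b^{\frac{1}{2}}$ for $a, b > 0$, applied to the two bounds (\[rec\_area\_1\]) and (\[rec\_area\_2\]). Moreover, since (\[rec\_area\_2\]) holds for every $\gamma > 1$, the exponent may be chosen differently in the two ranges of $k_2$, which gives the uniform (in $K$) bound $$\sum_{k_2 < 0} 2^{k_2(1-\frac{\gamma_0}{2})} + \sum_{0 \leq k_2 \leq K} 2^{k_2(1-\frac{\gamma_1}{2})} \lesssim 1,$$ with any fixed $1 < \gamma_0 < 2$ in the first sum (so that $1 - \frac{\gamma_0}{2} > 0$) and any fixed $\gamma_1 > 2$ in the second (so that $1 - \frac{\gamma_1}{2} < 0$).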
One thus concludes that $$\begin{aligned} |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim &C_1^2 C_2^2 C_3^2 |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\| h \|_{L^s}.\end{aligned}$$ One important observation is that, thanks to Lemma \[B\_size\], the sizes $$\sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}}$$ and $$\sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}$$ can be estimated in exactly the same way as $$\text{size}_{\mathcal{I}_{n_1}}\big( (f_1,\phi_I)_I \big) \cdot \text{size}_{\mathcal{I}_{m_1}}\big( (f_2,\phi_I)_I \big)$$ and $$\text{size}_{\mathcal{J}_{n_2}}\big( (g_1,\phi_J)_J \big) \cdot \text{size}_{\mathcal{J}_{m_2}}\big( (g_2,\phi_J)_J \big)$$ respectively. Based on this observation, it is not difficult to verify that the discrete models $\Pi_{\text{flag}^{\#1} \otimes \text{paraproduct}}$ and $\Pi_{\text{paraproduct}\otimes \text{paraproduct}}$ can be estimated by essentially the same argument as $\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}$. In addition, $\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}$ can be studied similarly to $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$. Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ - Haar Model ================================================================================================ The argument in Chapter 6 is not sufficient for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ because the localized sizes $$\begin{aligned} & \sup_{I \cap S \neq \emptyset} \frac{|\langle B_I^H, {\varphi}^{1,H}_I \rangle |}{|I|^{\frac{1}{2}}}, \nonumber \\ & \sup_{J \cap S' \neq \emptyset} \frac{|\langle \tilde{B}_J^H, {\varphi}^{1,H}_J \rangle|}{|J|^{\frac{1}{2}}}\end{aligned}$$ cannot be controlled without information about the corresponding level sets.
In particular, one needs to impose the additional assumption that $$\begin{aligned} &I \cap \{MB^H \leq C_1 2^{l_1}\|B^H\|_1\} \neq \emptyset, \nonumber \\ &J \cap \{M\tilde{B}^H \leq C_2 2^{l_2}\|\tilde{B}^H\|_1\} \neq \emptyset,\end{aligned}$$ where $$\begin{aligned} & B^H (f_1, f_2)(x):= \sum_{K \in \mathcal{K}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2\rangle \phi_{K}^{3,H}(x), \nonumber \\ &\tilde{B}^H (g_1, g_2)(y):= \sum_{L \in \mathcal{L}} \frac{1}{|L|^{\frac{1}{2}}} \langle g_1, \phi_L^1\rangle \langle g_2, \phi_L^2\rangle \phi_{L}^{3,H}(y).\end{aligned}$$ However, while the sizes of $B^H$ and $\tilde{B}^H$ can be controlled in this way, they lose the information from the localization (e.g. $K \cap \{ Mf_1 \leq C_1 2^{n_1}|F_1|\} \neq \emptyset$ for some $n_1 \in \mathbb{Z}$) and are thus far from satisfactory. It is indeed the energies which capture such local information and compensate for the loss from the size estimates in this scenario. Localization ------------ As one would expect from the definition of the exceptional set, the *tensor-type stopping-time decompositions* and the *general two-dimensional level sets stopping-time decomposition* are involved in the argument.
We define the set $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \displaystyle \Omega^1 := &\bigcup_{n_1 \in \mathbb{Z}}\{Mf_1 > C_1 2^{n_1}|F_1|\} \times \{Mg_1 > C_2 2^{-n_1}|G_1|\}\cup \nonumber \\ & \bigcup_{m_1 \in \mathbb{Z}}\{Mf_2 > C_1 2^{m_1}|F_2|\} \times \{Mg_2 > C_2 2^{-m_1}|G_2|\}\cup \nonumber \\ &\bigcup_{l_1 \in \mathbb{Z}} \{MB^H > C_1 2^{l_1}\| B^H\|_1\} \times \{M\tilde{B}^H > C_2 2^{-l_1}\| \tilde{B}^H \|_1\},\nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s}\}, \nonumber \\\end{aligned}$$ and $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}.$$ Let $$E' := E \setminus \tilde{\Omega}.$$ Then a similar argument as in Remark \[subset\] yields that $|E'| \sim |E|$, where $|E|$ can be assumed to be $1$ by scaling invariance. We aim to prove that the multilinear form $$\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the following restricted weak-type estimate $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ Tensor-type stopping-time decomposition II - maximal intervals -------------------------------------------------------------- ### One-dimensional stopping-time decomposition - maximal intervals One applies the stopping-time decomposition described in Section 5.5.1 to the sequences $$\big(\frac{|\langle B^H_{I}(f_1,f_2), {\varphi}^{1,H}_I \rangle|}{|I|^{\frac{1}{2}}}\big)_{I \in \mathcal{I}}$$ and $$\big(\frac{|\langle \tilde {B}^H_{J}(g_1,g_2), {\varphi}^{1,H}_J \rangle|}{|J|^{\frac{1}{2}}}\big)_{J \in \mathcal{J}}.$$ We will briefly recall the algorithm and introduce some necessary notation for the sake of clarity.
Since $\mathcal{I}$ is finite, there exists some $L_1 \in \mathbb{Z}$ such that $\frac{|\langle B^H_{I}(f_1,f_2), {\varphi}^{1,H}_I \rangle|}{|I|^{\frac{1}{2}}} \leq C_1 2^{L_1} \|B^H\|_1$ for every $I \in \mathcal{I}$. There exists a largest interval $I_{\text{max}}$ such that $$\frac{|\langle B^H_{I_{\text{max}}}(f_1,f_2), {\varphi}^{1,H}_{I_{\text{max}}} \rangle|}{|I_{\text{max}}|^{\frac{1}{2}}} \geq C_1 2^{L_1-1}\|B^H\|_1.$$ Then we define a *tree* $$T := \{I \in \mathcal{I}: I \subseteq I_{\text{max}}\},$$ and the corresponding *tree-top* $$I_T := I_{\text{max}}.$$ Now we repeat the above step on $\mathcal{I} \setminus T$ to choose maximal intervals and collect their subintervals in the corresponding trees, a process which terminates thanks to the finiteness of $\mathcal{I}$. Then one collects all such $T$’s in a set $\mathbb{T}_{L_1-1}$ and repeats the above algorithm on $\displaystyle \mathcal{I} \setminus \bigcup_{T \in \mathbb{T}_{L_1-1}} T$. Eventually the algorithm generates a decomposition $\displaystyle \mathcal{I} = \bigcup_{l_1}\bigcup_{T \in \mathbb{T}_{l_1}}T$. One simple observation is that the above procedure can be applied to general sequences indexed by dyadic intervals. One can thus apply the same algorithm to $\mathcal{J} := \{J: I \times J \in \mathcal{R}\}$. We denote the decomposition by $\displaystyle \mathcal{J} = \bigcup_{l_2}\bigcup_{S \in \mathbb{S}_{l_2}}S$ with respect to the sequence $\big(\frac{|\langle \tilde {B}^H_{J}(g_1,g_2), {\varphi}^{1,H}_J \rangle|}{|J|^{\frac{1}{2}}}\big)_{J \in \mathcal{J}}$, where $S$ is a collection of dyadic intervals analogous to $T$ and is also called a *tree*, and $J_S$ denotes the corresponding *tree-top* analogous to $I_T$. ### Tensor product of two one-dimensional stopping-time decompositions - maximal intervals If $I \times J \cap \tilde{\Omega}^{c} \neq \emptyset$ and $I \times J \in T \times S$ with $T \in \mathbb{T}_{l_1}$ and $S \in \mathbb{S}_{l_2}$, then $l_1, l_2 \in \mathbb{Z}$ satisfy $l_1 + l_2 < 0$.
Equivalently, $I \times J \in T \times S$ with $T \in \mathbb{T}_{-l - l_2}$ and $S \in \mathbb{S}_{l_2}$ for some $l_2 \in \mathbb{Z}$, $l> 0$. Indeed, $I \in T$ with $T \in \mathbb{T}_{l_1}$ means that $I \subseteq I_T$, where $\frac{|\langle B^H_{I_T}(f_1,f_2), {\varphi}^{1,H}_{I_T} \rangle|}{|I_T|^{\frac{1}{2}}} > C_12^{l_1} \|B^H\|_1$. By the biest trick, $\frac{|\langle B^H_{I_T}(f_1,f_2), {\varphi}^{1,H}_{I_T} \rangle|}{|I_T|^{\frac{1}{2}}} = \frac{|\langle B^H(f_1,f_2), {\varphi}^{1,H}_{I_T} \rangle|}{|I_T|^{\frac{1}{2}}} \leq MB^H(x)$ for any $x \in I_T$. Thus $I_T \subseteq \{ MB^H > C_12^{l_1} \|B^H\|_1\}$. By a similar reasoning, $J \in S$ with $S \in \mathbb{S}_{l_2}$ implies that $J \subseteq J_S \subseteq \{ M\tilde{B}^H > C_22^{l_2} \|\tilde{B}^H\|_1\}$. If $l_1 + l_2 \geq 0$, then $\{ MB^H > C_12^{l_1} \|B^H\|_1\} \times \{ M\tilde{B}^H > C_2 2^{l_2}\| \tilde{B}^H\|_1\} \subseteq \Omega^1 \subseteq \Omega$. As a consequence, $I \times J \subseteq \Omega \subseteq \tilde{\Omega}$, which is a contradiction. Summary of stopping-time decompositions --------------------------------------- The notions of *tensor-type stopping-time decomposition I* and *general two-dimensional level sets stopping-time decomposition* introduced in Chapter 6 will be applied without further specification. ------------------------------------------------------------------------------------- ------------------- ----------------------------------------------------------------------------------------- I. Tensor-type stopping-time decomposition I on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}$ $(n_2, m_2 \in \mathbb{Z}, n, m > 0)$ II\. Tensor-type stopping-time decomposition II on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in T \times S$, with $T \in \mathbb{T}_{-l-l_2}$, $S \in \mathbb{S}_{l_2}$ $(l_2 \in \mathbb{Z}, l> 0)$ III\.
General two-dimensional level sets stopping-time decomposition $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2} $      on $\mathcal{I} \times \mathcal{J}$ $(k_1 <0, k_2 \leq K)$ ------------------------------------------------------------------------------------- ------------------- ----------------------------------------------------------------------------------------- Application of stopping-time decompositions ------------------------------------------- One first rewrites the multilinear form with the partition of dyadic rectangles specified in the stopping-time algorithm: $$\begin{aligned} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim &\displaystyle \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2} \\ S \in \mathbb{S}_{l_2}}}\sum_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n - n_2, -m - m_2} \times \mathcal{J}_{n_2, m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}}| \langle B_I^H(f_1,f_2),{\varphi}_I^{1,H} \rangle| |\langle \tilde{B}_J^H(g_1,g_2),{\varphi}_J^{1,H} \rangle| \nonumber \\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot |\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle| |\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|. 
\nonumber \\ $$ One can now apply exactly the same argument as in Section 6.6.1 to estimate the multilinear form by $$\begin{aligned} \label{form_00} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}} \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \displaystyle & \sup_{I \in T} \frac{|\langle B_I^H(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \sup_{J \in S} \frac{|\langle \tilde{B}_J^H(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \cdot \nonumber \\ & 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n - n_2, -m - m_2} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|.\end{aligned}$$ Fixing $-l-l_2$ and $T \in \mathbb{T}_{-l-l_2}$, one recalls the *tensor-type stopping-time decomposition II* to conclude that $$\label{ave_1} \sup_{I \in T } \frac{|\langle B_I^H(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim C_1 2^{-l-l_2} \|B^H\|_1.$$ By a similar reasoning, $$\label{ave_2} \sup_{J \in S } \frac{|\langle \tilde{B}^H_J(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_2 2^{l_2} \|\tilde{B}^H\|_1.$$ By applying the estimates (\[ave\_1\]) and (\[ave\_2\]) to (\[form\_00\]), $$\begin{aligned} \label{form00_set} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}} | & \lesssim C_1 C_2 C_3^2 \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-l}\|B^H\|_1 \|\tilde{B}^H \|_1\cdot 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}}
\bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n - n_2, -m - m_2} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|. \end{aligned}$$ Estimate for nested sum of dyadic rectangles -------------------------------------------- One can estimate the nested sum (\[ns\]) below in two ways: one with the application of the sparsity condition, and the other with a Fubini-type argument which will be introduced in Section 7.5.2. $$\label{ns} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T\times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|.$$ Both arguments aim to combine different stopping-time decompositions and to extract useful information from them. Roughly speaking, the sparsity-condition argument employs the geometric property, namely Proposition \[sp\_2d\], of the *tensor-type stopping-time decomposition I* and applies the analytical implication of the *general two-dimensional level sets stopping-time decomposition*. Meanwhile, the Fubini-type argument focuses on the hybrid of the *tensor-type stopping-time decomposition I - level sets* and the *tensor-type stopping-time decomposition II - maximal intervals*. As implied by the name, the Fubini-type argument attempts to estimate the measures of two-dimensional sets by the measures of their projected one-dimensional sets. The approaches to estimating the projected one-dimensional sets differ depending on which tensor-type stopping-time decomposition is under consideration. ### Sparsity condition. The first approach relies on the sparsity condition and mimics the argument in Chapter 6. In particular, fixing $n, m, l, k_1$ and $k_2$, one estimates (\[ns\]) as follows.
$$\begin{aligned} & \sum_{l_2}\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg| \nonumber \\ \leq & \underbrace{ \sup_{l_2}\Bigg(\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J\in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|\Bigg)^{\frac{1}{2}}}_{SC-I} \nonumber \\ & \cdot \underbrace{\sum_{l_2}\Bigg(\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}\\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg| \Bigg)^{\frac{1}{2}}}_{SC-II}.\end{aligned}$$ **Estimate of $SC-I$.** Fix $l, n, m, k_1, k_2$ and $l_2$. Then by the *one-dimensional stopping-time decomposition - maximal intervals*, for any $I \in T$ and $I' \in T'$ such that $T, T' \in \mathbb{T}_{-l-l_2}$ and $T \neq T'$, one has $I \cap I' = \emptyset$.
Hence for any fixed $n_2$ and $m_2$, one can rewrite $$\begin{aligned} \label{SC-I} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|= & \bigg|\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R} _{k_1,k_2} }} I \times J \bigg|, \nonumber \\\end{aligned}$$ where the right hand side of (\[SC-I\]) can be trivially bounded by $$\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|.$$ One can then recall the sparsity condition highlighted as Proposition \[sp\_2d\] and reduce the nested sum of measures of unions of rectangles to the measure of the corresponding union of rectangles. More precisely, $$\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg| \bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2} }} I \times J \bigg| \sim \bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg|,$$ where the right hand side can be estimated by $$\bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg| \leq \bigg| \bigcup_{I \times J \in \mathcal{R}_{k_1,k_2}}I \times J\bigg| \lesssim \min(2^{-k_1s},2^{-k_2\gamma}),$$ for any $\gamma >1$. The last inequality follows directly from (\[rec\_area\_1\]) and (\[rec\_area\_2\]).
Since the above estimates hold for any $l_2 \in \mathbb{Z}$, one can conclude that $$SC-I \lesssim \min(2^{-\frac{k_1s}{2}},2^{-\frac{k_2\gamma}{2}}).$$ **Estimate of $SC-II$.** One invokes (\[SC-I\]) and Proposition \[sp\_2d\] to obtain $$\label{SC-II} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg| \sim \bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg|.$$ One enlarges the collection of rectangles by dropping the restriction that the rectangles lie in $\mathcal{R}_{k_1,k_2}$ and estimates the right hand side of (\[SC-II\]) by $$\label{SC-II2} \bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} }} I \times J\bigg|,$$ which is indeed the measure of the union of the rectangles collected in the *tensor-type stopping-time decomposition II - maximal intervals* at a certain level.
In other words, $$\bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} I \times J\bigg| = \bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_T \times J_S\bigg|.$$ Then $$\label{fb_simple} SC-II \leq \sum_{l_2 \in \mathbb{Z}}\bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_{T} \times J_{S}\bigg|^{\frac{1}{2}},$$ whose estimate follows from a Fubini-type argument that plays an important role in the proof. We will focus on the development of this Fubini-type argument in a separate section and discuss its applications in other useful estimates for the proof. ### Fubini argument. Alternatively, one can apply a Fubini-type argument to estimate (\[ns\]), in the sense that the measure of some two-dimensional set is estimated by the product of the measures of its projected one-dimensional sets. To introduce this argument, we will first look into (\[fb\_simple\]), which requires a simpler version of the argument. **Estimate of (\[fb\_simple\]) - Introduction of the Fubini argument.** As illustrated before, one first rewrites the measure of a two-dimensional set in terms of the measures of two one-dimensional sets as follows. $$\begin{aligned} \label{fb_simple_2d} & \sum_{l_2 \in \mathbb{Z}}\bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_{T} \times J_{S}\bigg|^{\frac{1}{2}} \nonumber \\ \leq & \bigg( \sum_{l_2 \in \mathbb{Z}}\big|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \big|\bigg)^{\frac{1}{2}}\bigg( \sum_{l_2 \in \mathbb{Z}}\big|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \big|\bigg)^{\frac{1}{2}},\end{aligned}$$ where the last step follows from the Cauchy-Schwarz inequality.
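For the reader's convenience, (\[fb\_simple\_2d\]) hides two elementary steps. First, the union over $T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}$ factors as a product of one-dimensional unions, $$\bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_{T} \times J_{S}\bigg| = \bigg|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \bigg| \cdot \bigg|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \bigg|.$$ Second, the Cauchy-Schwarz inequality in $l_2$, $$\sum_{l_2 \in \mathbb{Z}} a_{l_2}^{\frac{1}{2}} b_{l_2}^{\frac{1}{2}} \leq \bigg(\sum_{l_2 \in \mathbb{Z}} a_{l_2}\bigg)^{\frac{1}{2}}\bigg(\sum_{l_2 \in \mathbb{Z}} b_{l_2}\bigg)^{\frac{1}{2}},$$ is applied with $a_{l_2} = \big|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \big|$ and $b_{l_2} = \big|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \big|$.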
To estimate the measures of the one-dimensional sets appearing above, one can convert them to the form of “global” energies and apply the energy estimates specified in Proposition \[B\_en\_global\]. In particular, (\[fb\_simple\_2d\]) can be rewritten as $$\begin{aligned} \label{SC-II-en} & \bigg( \sum_{l_2 \in \mathbb{Z}}(C_12^{-l-l_2} \|B^H\|_1)^{1+\delta}\big|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \big|\bigg)^{\frac{1}{2}}\bigg( \sum_{l_2 \in \mathbb{Z}}(C_22^{l_2} \|\tilde{B}^H\|_1)^{1+\delta}\big|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \big|\bigg)^{\frac{1}{2}} \cdot 2^{l\frac{(1+\delta)}{2}}\|B^H\|_{1}^{-\frac{1+\delta}{2}}\|\tilde{B}^H\|_{1}^{-\frac{1+\delta}{2}}, \nonumber \\ $$ for any $\delta >0$. One notices that for fixed $l$ and $l_2$, $$\{I_T: T \in \mathbb{T}_{-l-l_2} \}$$ is a disjoint collection of dyadic intervals according to the *one-dimensional stopping-time decomposition - maximal interval*. Thus $$\label{en_global} \sum_{l_2 \in \mathbb{Z}}(C_12^{-l-l_2} \|B^H\|_1)^{1+\delta}\big|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \big| = \sum_{l_2 \in \mathbb{Z}}(C_12^{-l-l_2} \|B^H\|_1)^{1+\delta}\sum_{T\in \mathbb{T}_{-l-l_2}}|I_{T}|$$ is indeed a “global” $L^{1+\delta}$-energy for which one can apply the energy estimates to obtain the bound $$|F_1|^{\mu_1(1+\delta)}|F_2|^{\mu_2(1+\delta)},$$ where $\delta, \mu_1, \mu_2 >0$ with $\mu_1 + \mu_2 = \frac{1}{1+\delta}$. Similarly, one can apply the same reasoning to the measure of the set in the $y$-direction to derive $$\label{SC-II-y} \sum_{l_2 \in \mathbb{Z}}(C_22^{l_2} \|\tilde{B}^H\|_1)^{1+\delta}\big|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \big| \lesssim |G_1|^{\nu_1(1+\delta)}|G_2|^{\nu_2(1+\delta)},$$ for any $\nu_1, \nu_2 > 0$ with $\nu_1 + \nu_2 = \frac{1}{1+\delta}$. 
By plugging (\[en\_global\]) and (\[SC-II-y\]) into (\[SC-II-en\]), one derives that $$\sum_{l_2 \in \mathbb{Z}}\bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_{T} \times J_{S}\bigg|^{\frac{1}{2}} \lesssim 2^{l \frac{(1+\delta)}{2}} |F_1|^{\frac{\mu_1(1+\delta)}{2}}|F_2|^{\frac{\mu_2(1+\delta)}{2}}|G_1|^{\frac{\nu_1(1+\delta)}{2}}|G_2|^{\frac{\nu_2(1+\delta)}{2}}\|B^H\|_1^{-\frac{1+\delta}{2}}\|\tilde{B}^H\|_1^{-\frac{1+\delta}{2}},$$ for any $\delta,\mu_1,\mu_2,\nu_1,\nu_2 >0$ with $\mu_1+ \mu_2 = \nu_1+ \nu_2 = \frac{1}{1+\delta}$. The reason for leaving the expressions $\|B^H\|_1^{-\frac{1+\delta}{2}}$ and $\|\tilde{B}^H\|_1^{-\frac{1+\delta}{2}}$ untouched will become clear later. In short, $\|B^H\|_1$ and $\|\tilde{B}^H\|_1$ will appear in estimates for other parts; we will keep them as they are for the exponent-counting and then invoke the estimates for $\|B^H\|_1$ and $\|\tilde{B}^H\|_1$ at the end. By combining the estimates for $SC-I$ and $SC-II$, one can conclude that (\[ns\]) is majorized by $$\label{ns_sp} 2^{-\frac{k_2\gamma}{2}}2^{l \frac{(1+\delta)}{2}} |F_1|^{\frac{\mu_1(1+\delta)}{2}}|F_2|^{\frac{\mu_2(1+\delta)}{2}}|G_1|^{\frac{\nu_1(1+\delta)}{2}}|G_2|^{\frac{\nu_2(1+\delta)}{2}}\|B^H\|_1^{-\frac{1+\delta}{2}}\|\tilde{B}^H\|_1^{-\frac{1+\delta}{2}},$$ for $\gamma >1 $, $\delta,\mu_1,\mu_2,\nu_1,\nu_2 >0$ with $\mu_1+ \mu_2 = \nu_1+ \nu_2 = \frac{1}{1+\delta}$. The framework for estimating the measure of two-dimensional sets by those of the corresponding one-dimensional sets, as illustrated by (\[fb\_simple\_2d\]), is the so-called “Fubini-type” argument which we will employ heavily from now on.
**Estimate of (\[ns\]) - Application of the Fubini argument.** It is not difficult to observe that (\[ns\]) can also be estimated by $$\label{set_00} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T\times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} }} I \times J \bigg|= \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg|.$$ One now rewrites the above expression and separates it into two parts. Both parts can be estimated by the Fubini-type argument, whereas the methods used to estimate the projected one-dimensional sets are different. More precisely, (\[set\_00\]) can be separated as $$\begin{aligned} &\underbrace{\sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \sum_{l_2 \in \mathbb{Z}}\bigg(\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\bigg)^{\frac{1}{2}}\bigg(\sum_{\substack{S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg|\bigg)^{\frac{1}{2}}}_{\mathcal{A}} \times \nonumber \\ & \underbrace{\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sup_{l_2 \in \mathbb{Z}}\bigg(\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\bigg)^{\frac{1}{2}}\bigg(\sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg|\bigg)^{\frac{1}{2}}}_{\mathcal{B}}.\nonumber \\\end{aligned}$$ To estimate $\mathcal{A}$, one first notices that for any fixed $n, m, n_2, m_2, l, l_2$ and
a fixed tree $T \in \mathbb{T}_{-l-l_2}$, a dyadic interval $I \in T \cap \mathcal{I}_{-n-n_2,-m-m_2}$ means that (i) $I \subseteq I_T$ where $I_T$ is the tree-top interval as implied by the *one-dimensional stopping-time decomposition - maximal intervals*; (ii) $I \cap \{Mf_1 \leq C_1 2^{-n-n_2+1}|F_1| \} \cap \{Mf_2 \leq C_1 2^{-m-m_2+1}|F_2| \} \neq \emptyset$. By (i) and (ii), one can deduce that $$I_T \cap \{Mf_1 \leq C_1 2^{-n-n_2+1}|F_1| \} \cap \{Mf_2 \leq C_1 2^{-m-m_2+1}|F_2| \} \neq \emptyset.$$ As a consequence, $$\label{a_x} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \leq \sum_{\substack{T \in \mathbb{T}_{-l-l_2} \\ I_T \cap (\Omega^{-n-n_2,-m-m_2}_x)^c \neq \emptyset}}|I_T|.$$ A similar reasoning applies to the term involving intervals in the $y$-direction and yields $$\label{a_y} \sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg| \leq \sum_{\substack{S \in \mathbb{S}_{l_2} \\J_S \cap (\Omega^{n_2,m_2}_y)^c \neq \emptyset}}|J_S|.$$ By applying the Cauchy-Schwarz inequality together with (\[a\_x\]) and (\[a\_y\]), one obtains $$\begin{aligned} \label{a_pre_en} \mathcal{A} \leq & \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \bigg(\sum_{l_2 \in \mathbb{Z}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2} \\ I_T \cap (\Omega^{-n-n_2,-m-m_2}_x)^c \neq \emptyset}}|I_T|\bigg)^{\frac{1}{2}}\cdot \bigg(\sum_{l_2 \in \mathbb{Z}}\sum_{\substack{S \in \mathbb{S}_{l_2} \\J_S \cap (\Omega^{n_2,m_2}_y)^c \neq \emptyset}}|J_S|\bigg) ^{\frac{1}{2}}.\end{aligned}$$ One then “completes” the expression (\[a\_pre\_en\]) to produce localized energy-like terms as follows.
$$\begin{aligned} & \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \underbrace{\bigg[\sum_{l_2 \in \mathbb{Z}}(C_1 2^{-l-l_2}\|B^H\|_1)^2\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\ I_T \cap (\Omega^{-n-n_2,-m-m_2}_x)^c \neq \emptyset}}|I_{T}|\bigg]^{\frac{1}{2}}}_{\mathcal{A}^1}\cdot \underbrace{\bigg[\sum_{l_2 \in \mathbb{Z}}(C_2 2^{l_2}\|\tilde{B}^H\|_1)^{2} \sum_{\substack{S \in \mathbb{S}_{l_2}\\ J_S \cap (\Omega^{n_2,m_2}_y)^c \neq \emptyset}} |J_S |\bigg]^{\frac{1}{2}}}_{\mathcal{A}^2} \nonumber \\ &\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1}.\nonumber \\ \end{aligned}$$ It is not difficult to recognize that $\mathcal{A}^1$ and $\mathcal{A}^2$ are $L^2$-energies. Moreover, they follow stronger local energy estimates described in Proposition \[B\_en\]. $\mathcal{A}^1$ is indeed an $L^2$ energy localized to $\{Mf_1 \leq C_1 2^{-n-n_2}|F_1| \} \cap \{Mf_2 \leq C_1 2^{-m-m_2}|F_2| \}$. Then Proposition \[B\_en\] gives the estimate $$\label{a_1} \mathcal{A}^1 \lesssim (C_1 2^{-n-n_2})^{\frac{1}{p_1}-\theta_1} (C_1 2^{-m-m_2})^{\frac{1}{q_1} - \theta_2}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}},$$ for any $0 \leq \theta_1, \theta_2 < 1$ satisfying $\theta_1 + \theta_2 = \frac{1}{2}$. One applies the same reasoning to $\mathcal{A}^2$ to deduce that $$\label{a_2} \mathcal{A}^2 \lesssim C_2^{2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2}- \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}},$$ where $0 \leq \zeta_1, \zeta_2 < 1$ and $\zeta_1 + \zeta_2 = \frac{1}{2}$. 
One can now combine the estimates for $\mathcal{A}^1$ (\[a\_1\]) and $\mathcal{A}^2$ (\[a\_2\]) to derive $$\begin{aligned} \mathcal{A} \lesssim & C_1^{2}C_2^{2}\sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}2^{(-n-n_2)(\frac{1}{p_1}- \theta_1)}2^{(-m-m_2)(\frac{1}{q_1} - \theta_2)}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2}- \zeta_2)} \cdot \nonumber \\ & |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1}.\end{aligned}$$ One observes that the following two conditions are equivalent: $$\label{exp_1} \frac{1}{p_1} - \theta_1 = \frac{1}{p_2} - \zeta_1 \iff \frac{1}{q_1} - \theta_2 = \frac{1}{q_2} - \zeta_2.$$ The equivalence is imposed by the fact that $$\begin{aligned} \label{exp_2} &\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2}, \nonumber \\ &\theta_1 + \theta_2 = \zeta_1 + \zeta_2 = \frac{1}{2} .\end{aligned}$$ Indeed, (\[exp\_2\]) implies $$\Big(\frac{1}{p_1} - \theta_1\Big) - \Big(\frac{1}{p_2} - \zeta_1\Big) + \Big(\frac{1}{q_1} - \theta_2\Big) - \Big(\frac{1}{q_2} - \zeta_2\Big) = 0,$$ so one of the two differences vanishes if and only if the other does. With the choice $0 \leq \theta_1, \zeta_1 < 1$ with $\theta_1- \zeta_1 = \frac{1}{p_1} - \frac{1}{p_2}$, one has $$\label{a_estimate} \mathcal{A} \lesssim C_1^2 C_2^{2}2^{-n(\frac{1}{p_1} - \theta_1)}2^{-m(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1}.$$ (\[exp\_1\]) and (\[exp\_2\]) together impose the condition $$\label{pair_exp} \left|\frac{1}{p_1} - \frac{1}{p_2}\right| = \left|\frac{1}{q_1} - \frac{1}{q_2}\right| < \frac{1}{2}.$$ Without loss of generality, one can assume that $\frac{1}{p_1} \geq \frac{1}{p_2}$ and $\frac{1}{q_1} \leq \frac{1}{q_2}$. Then either (\[pair\_exp\]) holds or $$\frac{1}{p_1} - \frac{1}{p_2} = \frac{1}{q_2} - \frac{1}{q_1} > \frac{1}{2},$$ which implies $$\left|\frac{1}{p_1} - \frac{1}{q_2}\right| = \left|\frac{1}{p_2} - \frac{1}{q_1}\right| < \frac{1}{2}.$$ Then one can switch the roles of $g_1$ and $g_2$ to “pair” the functions as $f_1$ with $g_2$ and $f_2$ with $g_1$.
A parallel argument can be applied to obtain the desired estimates. One can apply another Fubini-type argument to estimate $\mathcal{B}$ with $l, n$ and $m$ fixed. Such an argument again relies heavily on the localization. First of all, for any fixed $l_2 \in \mathbb{Z}$, $$\{I: I \in T, T \in \mathbb{T}_{-l-l_2} \}$$ is a disjoint collection of dyadic intervals. Thus $$\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \leq \bigg|\bigcup_{\substack{ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|.$$ One then recalls the point-wise estimate stated in Claim \[ptwise\] to deduce $$\bigcup_{\substack{ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \subseteq \{ Mf_1 > C_1 2^{-n-n_2-10}|F_1|\} \cap \{ Mf_2 > C_1 2^{-m-m_2-10}|F_2|\},$$ and for arbitrary but fixed $l_2 \in \mathbb{Z}$, $$\label{b_x} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\leq \big|\{ Mf_1 > C_1 2^{-n-n_2-10}|F_1|\} \cap \{ Mf_2 > C_1 2^{-m-m_2-10}|F_2|\} \big|.$$ A similar reasoning applies to the intervals in the $y$-direction and yields that for any fixed $l_2 \in \mathbb{Z}$, $$\label{b_y} \sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg| \leq \big|\{ Mg_1 > C_2 2^{n_2-10}|G_1|\} \cap \{ Mg_2 > C_2 2^{m_2-10}|G_2|\} \big|.$$ To apply the above estimates, one notices that there exists some $\tilde{l}_2 \in \mathbb{Z}$, possibly depending on $n, m, l, n_2, m_2$, such that $$\begin{aligned} \mathcal{B} =& \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg(\sum_{T \in \mathbb{T}_{-l-\tilde{l}_2}}\bigg|\bigcup_{\substack{ I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \bigg)^{\frac{1}{2}}\bigg(\sum_{S \in \mathbb{S}_{\tilde{l}_2}}\bigg|\bigcup_{\substack{ J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg| \bigg)^{\frac{1}{2}}.
\nonumber\end{aligned}$$ One can further “complete” $\mathcal{B}$ in the following manner for appropriate use of the Cauchy-Schwarz inequality. $$\begin{aligned} \mathcal{B} = & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg((C_12^{-n-n_2}|F_1|)^{\mu(1+\epsilon)}(C_12^{-m-m_2}|F_2|)^{(1-\mu)(1+\epsilon)}\sum_{T \in \mathbb{T}_{-l-\tilde{l}_2}}\bigg|\bigcup_{\substack{ I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \bigg)^{\frac{1}{2}} \cdot \nonumber \\ & \quad \quad \ \ \bigg((C_22^{n_2}|G_1|)^{\mu(1+\epsilon)}(C_2 2^{m_2}|G_2|)^{(1-\mu)(1+\epsilon)}\sum_{S \in \mathbb{S}_{\tilde{l}_2}}\bigg|\bigcup_{\substack{ J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg|\bigg)^{\frac{1}{2}}\nonumber \\ &\quad \quad \ \ \cdot 2^{n\cdot\frac{1}{2}\mu(1+\epsilon)}2^{m\cdot \frac{1}{2}(1-\mu)(1+\epsilon)}|F_1|^{-\frac{1}{2}\mu(1+\epsilon)}|F_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)}|G_1|^{-\frac{1}{2}\mu(1+\epsilon)}|G_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)} \nonumber \\ \leq & \underbrace{\bigg[\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_12^{-n-n_2}|F_1|)^{\mu(1+\epsilon)}(C_12^{-m-m_2}|F_2|)^{(1-\mu)(1+\epsilon)}\sum_{T \in \mathbb{T}_{-l-\tilde{l}_2}}\bigg|\bigcup_{\substack{ I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\bigg]^{\frac{1}{2}}}_{\mathcal{B}^1} \cdot \nonumber \\ &\underbrace{\bigg[\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_22^{n_2}|G_1|)^{\mu(1+\epsilon)}(C_2 2^{m_2}|G_2|)^{(1-\mu)(1+\epsilon)}\sum_{S \in \mathbb{S}_{\tilde{l}_2}}\bigg|\bigcup_{\substack{ J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg|\bigg]^{\frac{1}{2}}}_{\mathcal{B}^2}\nonumber \\ &\ \ \cdot 2^{n\cdot\frac{1}{2}\mu(1+\epsilon)}2^{m\cdot \frac{1}{2}(1-\mu)(1+\epsilon)}|F_1|^{-\frac{1}{2}\mu(1+\epsilon)}|F_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)}|G_1|^{-\frac{1}{2}\mu(1+\epsilon)}|G_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)}, \nonumber \end{aligned}$$ for any $\epsilon > 0$, $0 < \mu <1$, where the inequality follows from the Cauchy-Schwarz
inequality. To estimate $\mathcal{B}^1$, one recalls (\[b\_x\]), which holds for any fixed $l_2 \in \mathbb{Z}$, to obtain $$\begin{aligned} \label{B_1} \mathcal{B}^1 \lesssim & \bigg[\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_1 2^{-n-n_2}|F_1|)^{\mu(1+\epsilon)}(C_1 2^{-m-m_2}|F_2|)^{(1-\mu)(1+\epsilon)}\big| \{ Mf_1 > C_1 2^{-n-n_2}|F_1|\} \cap \{ Mf_2 > C_1 2^{-m-m_2}|F_2|\} \big|\bigg]^{\frac{1}{2}} \nonumber \\ \leq & \bigg[\int (Mf_1(x))^{\mu(1+\epsilon)}(Mf_2(x))^{(1-\mu)(1+\epsilon)} dx\bigg]^{\frac{1}{2}} \nonumber \\ \leq & \Bigg[\bigg(\int (Mf_1(x))^{\mu(1+\epsilon) \frac{1}{\mu}} dx\bigg)^{\mu}\bigg(\int (Mf_2(x))^{(1-\mu)(1+\epsilon) \frac{1}{1-\mu}} dx\bigg)^{1-\mu}\Bigg]^{\frac{1}{2}},\end{aligned}$$ where the second step follows from the two-parameter layer-cake bound $\sum_{j,k \in \mathbb{Z}} 2^{ja}2^{kb}\big|\{F > 2^j\} \cap \{G > 2^k\}\big| \lesssim \int F(x)^a G(x)^b \, dx$, valid for $a, b > 0$, and the last step follows from Hölder’s inequality. One can now use the mapping property of the Hardy-Littlewood maximal operator $M: L^{p} \rightarrow L^{p}$ for any $p >1$ to deduce that $$\begin{aligned} \label{piece} &\bigg(\int (Mf_1(x))^{1+\epsilon} dx\bigg)^{\mu} \lesssim \|f_1\|_{1+\epsilon}^{(1+\epsilon)\mu} = |F_1|^{\mu}, \nonumber \\ &\bigg(\int (Mf_2(x))^{1+\epsilon} dx\bigg)^{1-\mu} \lesssim \|f_2\|_{1+\epsilon}^{(1+\epsilon)(1-\mu)} = |F_2|^{1-\mu}.\end{aligned}$$ By plugging the estimate (\[piece\]) into (\[B\_1\]), $$\label{B_1_final} \mathcal{B}^1 \lesssim |F_1|^{\frac{\mu}{2}}|F_2|^{\frac{1-\mu}{2}}.$$ By the same argument with $-n-n_2$ and $-m-m_2$ replaced by $n_2$ and $m_2$ respectively, one obtains $$\label{B_2_final} \mathcal{B}^2 \lesssim |G_1|^{\frac{\mu}{2}}|G_2|^{\frac{1-\mu}{2}}.$$ Combining the estimates for $\mathcal{B}^1$ (\[B\_1\_final\]) and $\mathcal{B}^2$ (\[B\_2\_final\]) yields $$\label{b_estimate} \mathcal{B} \lesssim |F_1|^{-\frac{\mu}{2}\epsilon}|F_2|^{-\frac{1-\mu}{2}\epsilon}|G_1|^{-\frac{\mu}{2}\epsilon}|G_2|^{-\frac{1-\mu}{2}\epsilon}2^{n\cdot\frac{1}{2}\mu(1+\epsilon)}2^{m\cdot \frac{1}{2}(1-\mu)(1+\epsilon)}.$$ By applying the results for both $\mathcal{A}$ (\[a\_estimate\])
and $\mathcal{B}$ (\[b\_estimate\]), one concludes with the following estimate for (\[ns\]). $$\begin{aligned} \label{ns_fb} & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T\times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg| \nonumber \\ \lesssim & C_1^{2} C_2^{2}2^{-n(\frac{1}{p_1} - \theta_1-\frac{1}{2}\mu(1+\epsilon))}2^{-m(\frac{1}{q_1}- \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))} \nonumber \\ & |F_1|^{\frac{1}{p_1}-\frac{\mu}{2}\epsilon}|F_2|^{\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon}|G_1|^{\frac{1}{p_2}-\frac{\mu}{2}\epsilon}|G_2|^{\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon}\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1},\end{aligned}$$ for any $0 \leq \theta_1, \theta_2 < 1$ with $\theta_1 + \theta_2 = \frac{1}{2}$, $0 <\mu<1$ and $\epsilon > 0$. One can now interpolate between the estimates obtained with two different approaches, namely (\[ns\_sp\]) and (\[ns\_fb\]), to derive the following bound for (\[ns\]). 
$$\begin{aligned} \label{ns_sum_result} &C_1^{2}C_2^{2} 2^{-\frac{k_2\gamma\lambda}{2}}2^{-n(\frac{1}{p_1}-\theta_1-\frac{1}{2}\mu(1+\epsilon))(1-\lambda)}2^{-m(\frac{1}{q_1} - \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))(1-\lambda)} \cdot \nonumber \\ & (2^{l})^{\lambda\frac{(1+\delta)}{2}+(1-\lambda)}\|B^H\|_1^{-\lambda\frac{(1+\delta)}{2}-(1-\lambda)}\|\tilde{B}^H\|_1^{-\lambda\frac{(1+\delta)}{2}-(1-\lambda)} \cdot \nonumber \\ & |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon)}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon)}|G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)},\end{aligned}$$ for some $0 \leq \lambda \leq 1$. By applying (\[ns\_sum\_result\]) to (\[form00\_set\]), one has $$\begin{aligned} & |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \nonumber \\ \lesssim & C_1^{3}C_2^{3} C_3^{3} \| h \|_{L^s}\cdot \nonumber \\ & \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}}2^{-l\lambda(1-\frac{1+\delta}{2})}2^{k_1}2^{k_2(1-\frac{\lambda\gamma}{2})}2^{-n(\frac{1}{p_1}-\theta_1-\frac{1}{2}\mu(1+\epsilon))(1-\lambda)}2^{-m(\frac{1}{q_1}-\theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))(1-\lambda)} \nonumber \\ &\cdot |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon)}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon)}|G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)} \nonumber \\ & \cdot \|B^H\|_1^{\lambda(1-\frac{1+\delta}{2})}\|\tilde{B}^H\|_1^{\lambda(1-\frac{1+\delta}{2})}. 
\end{aligned}$$ One notices that there exist $\epsilon > 0$, $0 < \mu < 1$ and $0 <\theta_1<\frac{1}{2}$ such that $$\begin{aligned} \label{nec_condition} &\frac{1}{p_1}-\theta_1-\frac{1}{2}\mu(1+\epsilon) > 0, \nonumber \\ &\frac{1}{q_1} - \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon) > 0.\end{aligned}$$ Observe that (\[nec\_condition\]) imposes a necessary condition on the range of exponents. In particular, $$\label{>1/2} \frac{1}{p_1} + \frac{1}{q_1} - (\theta_1 + \theta_2) > \frac{1}{2}\mu(1+ \epsilon) + \frac{1}{2}(1-\mu)(1+\epsilon).$$ Using the fact that $\theta_1 + \theta_2 = \frac{1}{2}$, one can rewrite (\[>1/2\]) as $$\frac{1}{p_1} + \frac{1}{q_1} > 1+ \frac{\epsilon}{2}.$$ As a consequence, the case $1< \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} < 2 $ can be treated by the current argument. Meanwhile, the case $0 < \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} \leq 1$ follows from a simpler argument which resembles the one for the estimates involving $L^{\infty}$-norms and will be postponed to Section 9. By (\[nec\_condition\]), the geometric series in $2^{-n}$ and $2^{-m}$ are convergent. The convergence of the series in $2^{k_1}$ is trivial. One also observes that for any $0 < \lambda < 1$ and $0 < \delta < 1$, $$\lambda(1-\frac{1+\delta}{2}) > 0,$$ which implies that the series in $2^{-l}$ is convergent. One can separate the cases when $k_2 > 0 $ and $k_2 \leq 0$ and select $\gamma >1$ in each case to make the series in $2^{k_2}$ convergent.
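To make the selection of $\gamma$ concrete: since the sparsity estimate allows an arbitrary choice of $\gamma > 1$, and $\frac{2}{\lambda} > 2$ for $0 < \lambda < 1$, the two ranges of $k_2$ can be summed separately as $$\sum_{k_2 \leq 0} 2^{k_2(1-\frac{\lambda\gamma}{2})} \lesssim 1 \quad \text{for any } 1 < \gamma < \frac{2}{\lambda}, \qquad \sum_{0 < k_2 \leq K} 2^{k_2(1-\frac{\lambda\gamma}{2})} \lesssim 1 \quad \text{for any } \gamma > \frac{2}{\lambda},$$ with both bounds uniform in $K$.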
Therefore, one can estimate the multilinear form by $$\begin{aligned} & |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \nonumber \\ \lesssim & C_1^3 C_2^3 C_3^{3}\| h \|_{L^s} \|B^H\|_1^{\lambda(1-\frac{1+\delta}{2})}\|\tilde{B}^H\|_1^{\lambda(1-\frac{1+\delta}{2})} \nonumber \\ &\cdot |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon)}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon)}|G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)}, \nonumber \\\end{aligned}$$ where one can apply Proposition \[B\_global\_norm\] to derive $$\begin{aligned} & \|B^H\|_1 \lesssim |F_1|^{\rho}|F_2|^{1-\rho}, \nonumber \\ & \|\tilde{B}^H\|_1 \lesssim |G_1|^{\rho'}|G_2|^{1-\rho'},\end{aligned}$$ with the corresponding exponents positive, as guaranteed by the fact that $0 < \lambda,\delta < 1$. One thus obtains $$\begin{aligned} \label{exp00} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}|\lesssim& C_1^3 C_2^3 C_3^{3}\|h\|_s |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon) + \rho\lambda(1-\frac{1+\delta}{2})}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon) + (1-\rho)\lambda(1-\frac{1+\delta}{2})} \nonumber \\ & \cdot |G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)+ \rho'\lambda(1-\frac{1+\delta}{2})}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)+(1-\rho')\lambda(1-\frac{1+\delta}{2})}.\end{aligned}$$ With a slight abuse of notation, we use $\tilde{p_i}$ and $\tilde{q_i}$, $i = 1,2$, to represent $p_i$ and $q_i$ in the above argument. From now on, $p_i$ and $q_i$ stand for the boundedness exponents specified in the main theorem.
One has the freedom to choose $1 < \tilde{p_i}, \tilde{q_i}< \infty$, $0 < \mu,\lambda < 1$ and $\epsilon > 0 $ such that $$\begin{aligned} \label{exp_tilde} & \lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{\tilde{p_1}}-\frac{\mu}{2}\epsilon) + \rho\lambda(1-\frac{1+\delta}{2}) = \frac{1}{p_1} \nonumber \\ & \lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{\tilde{q_1}}-\frac{1-\mu}{2}\epsilon) + (1-\rho)\lambda(1-\frac{1+\delta}{2}) = \frac{1}{q_1} \nonumber \\ & \lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{\tilde{p_2}}-\frac{\mu}{2}\epsilon)+ \rho'\lambda(1-\frac{1+\delta}{2}) = \frac{1}{p_2} \nonumber \\ & \lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{\tilde{q_2}}-\frac{1-\mu}{2}\epsilon)+(1-\rho')\lambda(1-\frac{1+\delta}{2}) = \frac{1}{q_2}.\end{aligned}$$ To see that the above equations can hold, one can view the parts without $\tilde{p_i}$ and $\tilde{q_i}$ as perturbations which can be made small. More precisely, when $0 < \delta < 1$ is close to $1$, $$\lambda(1-\frac{1+\delta}{2}) \ll 1.$$ When $0 < \lambda < 1$ is close to $0$, one has $$\lambda \frac{\mu_1(1+\delta)}{2}, \lambda \frac{\mu_2(1+\delta)}{2}, \lambda \frac{\nu_1(1+\delta)}{2}, \lambda\frac{\nu_2(1+\delta)}{2} \ll 1$$ and $$\begin{aligned} &\frac{1}{p_i} - (1-\lambda)(\frac{1}{\tilde{p_i}}-\frac{\mu}{2}\epsilon) \ll 1,\nonumber \\ & \frac{1}{q_i} - (1-\lambda)(\frac{1}{\tilde{q_i}}-\frac{1-\mu}{2}\epsilon)\ll 1,\end{aligned}$$ for $ i = 1,2$. It is also necessary to check that $\tilde{p_i}$ and $\tilde{q_i}$ satisfy the conditions which have been used to obtain (\[exp00\]), namely $$\begin{aligned} \frac{1}{\tilde{p_1}} + \frac{1}{\tilde{q_1}} = & \frac{1}{\tilde{p_2}} + \frac{1}{\tilde{q_2}} > 1.\end{aligned}$$ One can easily verify the first equation and the second inequality by manipulating (\[exp\_tilde\]).
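To sketch the verification of the first equation: adding the first two equations of (\[exp\_tilde\]) and subtracting the sum of the last two, the terms involving $\rho$ and $\rho'$ each contribute $\lambda(1-\frac{1+\delta}{2})$ and cancel, as do the terms involving $\epsilon$, since $\frac{\mu}{2}\epsilon + \frac{1-\mu}{2}\epsilon = \frac{\epsilon}{2}$ on both sides. Using $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2}$, one is left with $$(1-\lambda)\Big(\frac{1}{\tilde{p_1}} + \frac{1}{\tilde{q_1}} - \frac{1}{\tilde{p_2}} - \frac{1}{\tilde{q_2}}\Big) = \lambda\frac{(1+\delta)}{2}\big((\nu_1 + \nu_2) - (\mu_1 + \mu_2)\big),$$ so the first equation holds provided the energy exponents are chosen symmetrically, with $\mu_1 + \mu_2 = \nu_1 + \nu_2$.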
As a result, we have derived that $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim C_1^3 C_2^3 C_3^{3}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}.$$ Proof of Theorem \[thm\_weak\_inf\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}}$ - Haar Model =============================================================================================================== One can mimic the proof in Chapter 6 with a change of perspective on size estimates. More precisely, one applies trivial size estimates for the functions $f_2$ and $g_2$ lying in $L^{\infty}$, while paying attention to the fact that $f_1$ and $g_1$ may lie in $L^p$ for any $p >1$. This perspective is reflected in the stopping-time decomposition and in the definition of the exceptional set. Localization ----------- One defines $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \Omega^1 := &\bigcup_{n_1 \in \mathbb{Z}}\{Mf_1 > C_1 2^{n_1}\|f_1\|_{p}\} \times \{Mg_1 > C_2 2^{-n_1}\|g_1\|_{p}\}, \nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s}\}, \nonumber \\\end{aligned}$$ and $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}.$$ Let $$E' := E \setminus \tilde{\Omega}.$$ It is not difficult to check that, provided $C_1, C_2$ and $C_3$ are sufficiently large, $|E'| \sim |E|$, where $|E|$ can be assumed to be 1. It suffices to prove that the multilinear form defined as $$\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the restricted weak-type estimate $$|\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ Summary of stopping-time decompositions.
----------------------------------------

I. Tensor-type stopping-time decomposition I on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{I}'_{-n-n_2} \times \mathcal{J}'_{n_2}$ $(n_2 \in \mathbb{Z}, n > 0)$

II. General two-dimensional level sets stopping-time decomposition on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}$ $(k_1 < 0, k_2 \leq K)$

where $$\begin{aligned} \mathcal{I}'_{-n-n_2} := & \{ I \in \mathcal{I} \setminus \mathcal{I}'_{-n-n_2+1}: \left| I \cap \Omega'^x_{-n-n_2}\right| > \frac{1}{10}|I| \}, \nonumber \\ \mathcal{J}'_{n_2} := & \{ J \in \mathcal{J} \setminus \mathcal{J}'_{n_2+1}: \left| J \cap \Omega'^y_{n_2}\right| > \frac{1}{10}|J| \},\end{aligned}$$ with $$\begin{aligned} \Omega'^x_{-n-n_2} := &\{Mf_1> C_1 2^{-n-n_2}\|f_1\|_p \}, \nonumber \\ \Omega'^y_{n_2} := & \{Mg_1> C_2 2^{n_2} \|g_1\|_p \}.\end{aligned}$$ Application of stopping-time decompositions ------------------------------------------- One can now apply the stopping-time decompositions and follow the same argument in Section 6 to deduce that [$$\begin{aligned} \label{form11_inf} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber\\ \lesssim &\bigg|\displaystyle \sum_{\substack{n> 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{n_2 \in \mathbb{Z}}\sum_{\substack{I \times J \in \mathcal{I}'_{-n-n_2} \times \mathcal{J}'_{n_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle \langle \tilde{B}_J^{\#_2,H} (g_1,g_2),{\varphi}_J^{1,H} \rangle \langle h, \psi_I^{2,H} \otimes \psi_J^{2,H}
\rangle \langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle \bigg| \nonumber \\\nonumber \\ \lesssim & \sum_{\substack{n> 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{n_2 \in \mathbb{Z}}\sup_{I \in \mathcal{I}'_{-n-n_2}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \cdot \sup_{J \in \mathcal{J}'_{n_2}}\frac{| \langle \tilde{B}_J^{\#_2,H} (g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}\cdot C_3 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \nonumber \\ &\quad \quad \quad \bigg|\big(\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\big) \cap \big(\bigcup_{I \in \mathcal{I}'_{-n-n_2}} I \times \bigcup_{J \in \mathcal{J}'_{n_2}}J\big)\bigg|. \end{aligned}$$]{} To estimate $\displaystyle \sup_{I \in \mathcal{I}'_{-n-n_2}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} $, one can now apply Lemma \[B\_size\] with $S:= \{Mf_1 \leq C_1 2^{-n-n_2}\|f_1\|_{p} \}$ and obtain $$\sup_{I \in \mathcal{I}'_{-n-n_2}}\frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S \neq \emptyset} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}},$$ where by the definition of $S$, $$\sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{\frac{1}{2}}} \lesssim C_12^{-n-n_2}\|f_1\|_p,$$ and by the fact that $f_2 \in L^{\infty}$, $$\sup_{K \cap S \neq \emptyset} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \lesssim \|f_2\|_{\infty}.$$ As a result, $$\label{est_x} \sup_{I \in \mathcal{I}'_{-n-n_2}}\frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim C_12^{-n-n_2}\|f_1\|_p\|f_2\|_{\infty}.$$ By a similar reasoning, $$\label{est_y} \sup_{J \in \mathcal{J}'_{n_2}}\frac{|\langle
\tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_2 2^{n_2}\|g_1\|_p\|g_2\|_{\infty}.$$ Combining the estimates (\[est\_x\]) and (\[est\_y\]) with (\[form11\_inf\]), one concludes that $$\begin{aligned} |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim &C_1 C_2 C_3^2 \sum_{\substack{n > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-n}\|f_1\|_p \|g_1\|_p C_3 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \sum_{n_2 \in \mathbb{Z}}\bigg|\big(\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\big) \cap \big(\bigcup_{I \in \mathcal{I}'_{-n-n_2}} I \times \bigcup_{J \in \mathcal{J}'_{n_2}}J\big)\bigg|\nonumber \\ \lesssim &C_1 C_2 C_3^2 \sum_{\substack{n > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-n}\|f_1\|_p \|g_1\|_p C_3 2^{k_1(1-\frac{s}{2})} \| h \|_{L^s} 2^{k_2(1-\frac{\gamma}{2})}, \nonumber \end{aligned}$$ where the last inequality follows from the sparsity condition. With a proper choice of $\gamma >1$, one obtains the desired estimate. Proof of Theorem \[thm\_weak\_inf\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ - Haar Model ===================================================================================================== One interesting fact is that when $$\label{easy_case} \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} \leq 1,$$ Theorem \[thm\_weak\_mod\] can be proved by a simpler argument, as remarked in Chapter 7. Theorem \[thm\_weak\_inf\_mod\] for the model $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ can then be viewed as a sub-case and proved by the same argument. The key idea is that in the case specified in (\[easy\_case\]), one no longer needs the localization of the operator $B$ in the proof. Let $$\frac{1}{t} := \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2},$$ so that the exponent condition (\[easy\_case\]) translates to $$t \geq 1.$$ Localization.
------------- One first defines $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \displaystyle \Omega^1 := &\bigcup_{l_2 \in \mathbb{Z}} \{MB > C_1 2^{-l_2}\| B\|_t\} \times \{M\tilde{B} > C_2 2^{l_2}\|\tilde{B}\|_t\}, \nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s}\}, \nonumber \\\end{aligned}$$ and $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}.$$ Let $$E' := E \setminus \tilde{\Omega}.$$ We note that $t \geq 1$ allows one to use the mapping property of the Hardy-Littlewood maximal operator, which plays an essential role in the estimate of $|\Omega|$. A straightforward computation shows $|E'| \sim |E|$ given that $C_1, C_2$ and $C_3$ are sufficiently large. It suffices to assume that $|E'| \sim |E| = 1$ and to prove that the multilinear form $$\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the following restricted weak-type estimate $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ Summary of stopping-time decompositions.

----------------------------------------

General two-dimensional level sets stopping-time decomposition on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}$ $(k_1 < 0, k_2 \leq K)$

One performs the *general two-dimensional level sets stopping-time decomposition* with respect to the hybrid maximal-square functions as specified in the definition of the exceptional set.
It will be evident from the argument below that no stopping-time decomposition is necessary for the maximal functions involving $B$ and $\tilde{B}$. A brief explanation is that only “averages” of $B$ and $\tilde{B}$ are required, while the measure of the sets where those averages are attained is not. As a consequence, the macro-control of the averages is sufficient, and the stopping-time decomposition, which can be seen as a more delicate “slice-by-slice” or “level-by-level” partition, is not necessary. More precisely, $$\begin{aligned} \label{form00_inf} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| = &\displaystyle \bigg|\sum_{\substack{ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),{\varphi}_I^{1,H} \rangle \langle \tilde{B}_J(g_1, g_2), {\varphi}_J^{1,H} \rangle \langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle \langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle \bigg|\nonumber \\ \lesssim & \sum_{\substack{k_1 < 0 \\ k_2 \leq K}}\displaystyle \sup_{I \times J \in \mathcal{I}\times \mathcal{J}} \bigg(\frac{|\langle B_I(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J(g_1, g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \bigg) \cdot C_3^2 2^{k_1}\|h\|_s 2^{k_2} \bigg|\bigcup_{\substack{I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J\bigg|.\end{aligned}$$ By the same reasoning applied in previous chapters, one has $$\bigg|\bigcup_{\substack{I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J\bigg| \lesssim \min(C_3^{-1}2^{-k_1s}, C_3^{-\gamma}2^{-k_2 \gamma}),$$ for any $\gamma >1$.
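Since $\min(a,b) \leq a^{\frac{1}{2}}b^{\frac{1}{2}}$ for $a, b > 0$, the sparsity bound above can be used in the symmetric form $$\bigg|\bigcup_{\substack{I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J\bigg| \lesssim C_3^{-\frac{1+\gamma}{2}}2^{-\frac{k_1 s}{2}}2^{-\frac{k_2 \gamma}{2}},$$ which accounts for the factors $2^{k_1(1-\frac{s}{2})}$ and $2^{k_2(1-\frac{\gamma}{2})}$ in the estimate of the multilinear form.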
Meanwhile, an argument similar to the proof of Observation 2 in Section 7.2.2 implies that $$\frac{|\langle B_I(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J(g_1, g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_1 C_2 \|B\|_t \|\tilde{B}\|_t,$$ for any $I \times J$ with $I \times J \cap \tilde{\Omega}^c \neq \emptyset$, as assumed in the Haar model. As a consequence, $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim C_1 C_2 C_3^2 \sum_{\substack{k_1 < 0 \\ k_2 \leq K}}\|B\|_t \|\tilde{B}\|_t C_3 2^{k_1(1-\frac{s}{2})} \| h \|_{L^s(\mathbb{R}^2)} 2^{k_2(1-\frac{\gamma}{2})} \lesssim \|B\|_t \|\tilde{B}\|_t,$$ with an appropriate choice of $\gamma>1$. One can now invoke Proposition \[B\_global\_norm\] to complete the proof of Theorem \[thm\_weak\_mod\]. In particular, $$\begin{aligned} \label{B_easy} \|B\|_t \lesssim & \|f_1\|_{p_1} \|f_2\|_{q_1} \nonumber \\ \|\tilde{B}\|_t \lesssim & \|g_1\|_{p_2} \|g_2\|_{q_2},\end{aligned}$$ while the case described in Theorem \[thm\_weak\_inf\_mod\] is when $q_1 = q_2 = \infty$ and (\[B\_easy\]) can be rewritten as $$\|B\|_p \lesssim \|f_1\|_{p} \|f_2\|_{\infty},$$ $$\|\tilde{B}\|_p \lesssim \|g_1\|_{p} \|g_2\|_{\infty}.$$ 1. One notices that Theorem \[thm\_weak\_inf\_mod\] in the Haar model is proved directly with generic functions in $L^p$ and $L^s$ spaces for $1< p < \infty, 1 < s < 2$. 2. The above argument proves Theorem \[thm\_weak\_mod\] in the Haar model for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ with the range of exponents described in (\[easy\_case\]), which completes the proof of Theorem \[thm\_weak\_mod\] - Haar model. Generalization to Fourier Case ============================== We will first highlight where we have used the assumption about the Haar model in the proof and then modify those partial arguments to prove the general case. We have used the following implications specific to the Haar model. 1. Let $\chi_{E'} := \chi_{E \setminus \tilde{\Omega}}$.
Then $$\label{Haar_loc_bipara} \langle \chi_{E'}, \phi^H_{I} \otimes \phi^H_J \rangle \neq 0 \iff I \times J \cap \tilde{\Omega}^c \neq \emptyset.$$ As a result, what contributes to the multilinear forms in the Haar model are the dyadic rectangles $I \times J \in \mathcal{R}$ satisfying $I \times J \cap \tilde{\Omega}^c \neq \emptyset$, which is a condition we heavily used in the proofs of the theorems in the Haar model. 2. For any dyadic intervals $K$ and $I$ with $|K| \geq |I|$, $$\langle \phi^{3,H}_K, \phi^H_I \rangle \neq 0$$ if and only if $$K \supseteq I.$$ Therefore, in the non-degenerate case, this imposes the geometric containment condition we have employed for the localizations of the operator $B$. 3. In the case when $(\phi^{3,H}_K)_K$ is a family of Haar wavelets, the observation highlighted as (\[haar\_biest\_cond\]) generates the biest trick (\[haar\_biest\]), which is essential in the energy estimates. We will focus on how to generalize the proofs of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $ and $\Pi_{\text{flag}^{0} \otimes \text{flag}^{0}}$ and discuss how to tackle the restrictions listed above as items 1-3, which we refer to as $H(I)$, $H(II)$ and $H(III)$ respectively. The generalizations of the arguments for other model operators and for Theorem \[thm\_weak\_inf\_mod\] follow from the same ideas. Generalized Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $ ---------------------------------------------------------------------------------------------------------- ### Localization and generalization of $H(I)$ The argument for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $ in Chapter 6 takes advantage of the localization of spatial variables, as stated in $H(I)$.
The following lemma, whose proof is included in Chapter 3 of [@cw], allows one to decompose the original bump function into bump functions with compact support, so that a perfect localization in spatial variables can be achieved; it can be viewed as a generalized $H(I)$. \[decomp\_compact\] Let $I \subseteq \mathbb{R}$ be an interval. Then any smooth bump function $\phi_I$ adapted to $I$ can be decomposed as $$\phi_I = \sum_{\tau \in \mathbb{N}} 2^{-100 \tau} \phi_I^{\tau}$$ where for each $\tau \in \mathbb{N}$, $\phi_I^{\tau}$ is a smooth bump function adapted to $I$ and $\text{supp}(\phi_I^{\tau}) \subseteq 2^{\tau} I$. If $\int \phi_I = 0$, then the functions $\phi_I^{\tau}$ can be chosen such that $\int \phi_I^{\tau} = 0$. The multilinear form associated to $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $ in the general case can now be rewritten as $$\begin{aligned} \label{compact} \Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#_2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \displaystyle \sum_{\tau_1,\tau_2 \in \mathbb{N}}2^{-100(\tau_1+\tau_2)}\sum_{I \times J \in \mathcal{R}}& \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B^{\#_1}_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}^{\#_2}_J(g_1,g_2), \phi_J^1 \rangle \nonumber \\ & \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle.
\nonumber \\\end{aligned}$$ For $\tau_1, \tau_2 \in \mathbb{N}$ fixed, define $$\Lambda_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}}^{\tau_1, \tau_2} (f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}):= \sum_{I \times J \in \mathcal{R}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B^{\#_1}_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}^{\#_2}_J(g_1,g_2), \phi_J^1 \rangle \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle.$$ It suffices to prove that for any fixed $\tau_1,\tau_2 \in \mathbb{N}$, $$\label{linear_fix_fourier} |\Lambda_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}}^{\tau_1, \tau_2}| \lesssim (2^{\tau_1+ \tau_2})^{\Theta}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}},$$ for some $0 < \Theta < 100$, thanks to the fast decay $2^{-100(\tau_1+\tau_2)}$ in the decomposition of the original multilinear form (\[compact\]). One first redefines the exceptional set with $C_1$, $C_2$ and $C_3$ replaced by $C_1 2^{10\tau_1}$, $C_2 2^{10\tau_2}$ and $C_3 2^{10\tau_1+10\tau_2}$ respectively. In particular, let $$\begin{aligned} & C_1^{\tau_1} := C_1 2^{10\tau_1}, \nonumber\\ & C_2^{\tau_2} := C_2 2^{10\tau_2}, \nonumber\\ & C_3^{\tau_1,\tau_2} := C_3 2^{10\tau_1+10\tau_2}.
\nonumber\end{aligned}$$ Then define $$\begin{aligned} \displaystyle \Omega_1^{\tau_1, \tau_2} := &\bigcup_{\tilde{n} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{n}}|F_1|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{n}}|G_1|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{n}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{n}}}|F_2|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{n}}}|G_2|\}\cup \nonumber \\ &\bigcup_{\tilde{\tilde{\tilde{n}}} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{n}}}}|F_1|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{n}}}}|G_2|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{\tilde{\tilde{n}}}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{\tilde{n}}}} }|F_2|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{\tilde{n}}}} }|G_1|\},\nonumber \\ \Omega_2^{\tau_1,\tau_2} := & \{SSh > C_3^{\tau_1, \tau_2} \|h\|_{L^s(\mathbb{R}^2)}\}. \nonumber \\\end{aligned}$$ One also defines $$\begin{aligned} & \Omega^{\tau_1,\tau_2} := \Omega_1^{\tau_1,\tau_2} \cup \Omega_2^{\tau_1,\tau_2}, \nonumber \\ & \tilde{\Omega}^{\tau_1,\tau_2} := \{M(\chi_{\Omega^{\tau_1,\tau_2}})> \frac{1}{100} \}, \nonumber\\ & \tilde{\tilde{\Omega}}^{\tau_1,\tau_2} := \{M(\chi_{ \tilde{\Omega}^{\tau_1,\tau_2}})> \frac{1}{2^{2\tau_1+ 2\tau_2}} \},\end{aligned}$$ and finally $$\tilde{\Omega} := \bigcup_{\tau_1,\tau_2\in \mathbb{N}}\tilde{\tilde{\Omega}}^{\tau_1,\tau_2}.$$ It is not difficult to verify that $|\tilde{\Omega}| \ll 1$ given that $C_1, C_2$ and $C_3$ are sufficiently large. One can then define $E' := E \setminus \tilde{\Omega}$, where $|E'| \sim |E|$ as desired. For such $E'$, one has the following simple but essential observation. 
\[start\_point\] For any fixed $\tau_1, \tau_2 \in \mathbb{N}$ and any dyadic rectangle $I \times J$, $$\langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle \neq 0$$ implies that $$I \times J \cap (\tilde{\Omega}^{\tau_1,\tau_2})^c \neq \emptyset.$$ We will prove the equivalent contrapositive statement. Suppose that $I \times J \cap(\tilde{\Omega}^{\tau_1,\tau_2})^c = \emptyset$, or equivalently $I \times J \subseteq \tilde{\Omega}^{\tau_1,\tau_2}$. Since $|I \times J| = 2^{-(\tau_1+\tau_2)}|2^{\tau_1}I \times 2^{\tau_2}J|$, it follows that $$|2^{\tau_1}I \times 2^{\tau_2}J \cap \tilde{\Omega}^{\tau_1,\tau_2}| > \frac{1}{2^{2\tau_1+2\tau_2}}|2^{\tau_1}I \times 2^{\tau_2}J|,$$ which implies that $$2^{\tau_1}I \times 2^{\tau_2}J \subseteq \tilde{\tilde{\Omega}}^{\tau_1,\tau_2} \subseteq \tilde{\Omega}.$$ Since $E' \cap \tilde{\Omega} = \emptyset$ and $\text{supp}(\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J}) \subseteq 2^{\tau_1}I \times 2^{\tau_2}J$, one can conclude that $$\langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle = 0,$$ which completes the proof of the observation. \[st\_general\] Observation \[start\_point\] provides a starting point for the stopping-time decompositions with fixed parameters $\tau_1$ and $\tau_2$. More precisely, suppose that $\mathcal{R}$ is an arbitrary finite collection of dyadic rectangles. Then with fixed $\tau_1, \tau_2 \in \mathbb{N}$, let $\displaystyle \mathcal{R} := \bigcup_{n_1, n_2 \in \mathbb{Z}}\mathcal{I}^{\tau_1}_{-n_1} \times \mathcal{J}^{\tau_2}_{n_2}$ denote the *tensor-type stopping-time decomposition I - level sets* introduced in Chapter 6, where $\mathcal{I}^{\tau_1}_{n_1}$ and $\mathcal{J}^{\tau_2}_{n_2}$ are defined in the same way as $\mathcal{I}_{n_1}$ and $\mathcal{J}_{n_2}$ with $C_1$ and $C_2$ replaced by $C_1^{\tau_1}$ and $C_2^{\tau_2}$. By the argument for Observation \[obs\_indice\] in Chapter 6, one can deduce the same conclusion: if $I \times J \in \mathcal{R}$ satisfies $I \times J \cap (\tilde{\Omega}^{\tau_1,\tau_2})^c \neq \emptyset$, then $n_1 + n_2 < 0$.
Due to Remark \[st\_general\], one can perform the stopping-time decompositions specified in Chapter 6 with $C_1, C_2$ and $C_3$ replaced by $C_1^{\tau_1}$, $C_2^{\tau_2}$ and $C_3^{\tau_1,\tau_2}$ respectively and adopt the argument without issues. The only difference in the resulting estimate is the appearance of the factors $O(2^{50\tau_1})$, $O(2^{50\tau_2})$ and $O(2^{50\tau_1+50\tau_2})$, which is not of concern, as illustrated in (\[linear\_fix\_fourier\]). The only “black-box” used in Chapter 6 is the local size estimates (Proposition \[size\_cor\]), which need a more careful treatment and will be explored in the next subsection. ### Local size estimates and generalization of H(II) We will focus on the estimates of $\text{size}(\langle B^{\#_1}_I, {\varphi}_I\rangle)_I$, whose argument applies to $\text{size}(\langle \tilde{B}^{\#_2}_J, {\varphi}_J\rangle)_J$ as well. It suffices to prove Lemma \[B\_size\] in the Fourier case; the local size estimates described in Proposition \[size\_cor\] then follow immediately. One first applies Lemma \[decomp\_compact\] to create a setting of compactly supported bump functions so that the same localization described in Chapter 5 can be achieved. Suppose that $I \cap S \neq \emptyset$ for any $I \in \mathcal{I}'$. Then $$\begin{aligned} \text{size}_{\mathcal{I}'}((\langle B_I^{\#_1}, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) =& \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}}, \nonumber \\\end{aligned}$$ for some $I_0 \in \mathcal{I}'$ such that $I_0 \cap S \neq \emptyset$.
Consider $$\begin{aligned} \label{form_f} & \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}} = \frac{1}{|I_0|^{\frac{1}{2}}}\bigg|\sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100 \tau_3}2^{-100{\tau_4}}\sum_{K: |K| \sim 2^{\#_1}|I_0|} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I_0}, \phi^{3,\tau_4}_K\rangle\bigg|, \end{aligned}$$ where ${\varphi}^1_{I}$ denotes an $L^{2}$-normalized smooth bump function adapted to $I$, ${\varphi}^{1,\tau_3}_{I}$ is an $L^{2}$-normalized bump function adapted to $I$ with $\text{supp}({\varphi}^{1,\tau_3}_{I}) \subseteq 2^{\tau_3}I$, and $\phi^{3,\tau_4}_K$ is an $L^2$-normalized bump function with $\text{supp}(\phi^{3,\tau_4}_K)\subseteq 2^{\tau_4}K$. By the compactness of the supports, one has that if $$\langle {\varphi}^{1,\tau_3}_{I}, \phi^{3, \tau_4}_K\rangle \neq 0,$$ then $$2^{\tau_3} I \cap 2^{\tau_4}K \neq \emptyset.$$ Recalling also that $I \cap S \neq \emptyset$ and $|I| \leq |K|$, it follows that $$\label{geometry_fourier} \frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}.$$ Therefore, one can apply (\[geometry\_fourier\]) and rewrite (\[form\_f\]) as $$\begin{aligned} \label{size_B_f} & \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}} \nonumber \\ \leq &\sum_{\tau_3,\tau_4 \in \mathbb{N}} 2^{-100\tau_3}2^{-100\tau_4} \frac{1}{|I_0|} \sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} |\langle |I_0|^{\frac{1}{2}}{\varphi}^{1,\tau_3}_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,\tau_4} \rangle| \nonumber \\ \leq & \sum_{\tau_3,\tau_4 \in \mathbb{N}} 2^{-100\tau_3}2^{-100\tau_4}\frac{1}{|I_0|} \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \nonumber \\ & \quad \cdot \sum_{K:|K|\sim 2^{\#_1}|I_0|}|\langle |I_0|^{\frac{1}{2}}{\varphi}^{1,\tau_3}_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,\tau_4} \rangle|. \end{aligned}$$ One notices that $$\begin{aligned} \label{size_f_1} & \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \nonumber \\ \lesssim& 2^{\tau_3+\tau_4}\sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_1, \phi_{2^{\tau_3+\tau_4}K}^1 \rangle|}{|2^{\tau_3+\tau_4}K|^{\frac{1}{2}}} \nonumber\\ \leq & 2^{\tau_3+ \tau_4} \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_1, \phi_{K'}^1 \rangle|}{|K'|^{\frac{1}{2}}},\end{aligned}$$ where $K' := 2^{\tau_3+\tau_4}K$ denotes the interval with the same center as $K$ and length $2^{\tau_3 + \tau_4}|K|.$ Similarly, $$\label{size_f_2} \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \lesssim 2^{\tau_3+ \tau_4} \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_2, \phi_{K'}^2 \rangle|}{|K'|^{\frac{1}{2}}}.$$ Moreover, $$\begin{aligned} \label{disjoint} & \sum_{K:|K|\sim 2^{\#_1}|I_0|}|\langle |I_0|^{\frac{1}{2}}{\varphi}^{1,\tau_3}_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,\tau_4} \rangle| \nonumber \\ \leq & \sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{1}{\left(1+\frac{\text{dist}(K,I_0)}{|K|}\right)^{100}} |I_0| \nonumber \\ \lesssim & |I_0| \sum_{k \in \mathbb{N}}(1+k)^{-100} \nonumber \\ \lesssim & |I_0|.\end{aligned}$$ By combining (\[size\_f\_1\]), (\[size\_f\_2\]) and (\[disjoint\]), one can estimate (\[size\_B\_f\]) as $$\begin{aligned} & \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}} \nonumber \\ \lesssim & \frac{1}{|I_0|}\sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100 \tau_3}2^{-100{\tau_4}} 2^{2(\tau_3+ \tau_4)} \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_1, \phi_{K'}^1 \rangle|}{|K'|^{\frac{1}{2}}} \sup_{K' \cap S \neq \emptyset}\frac{|\langle f_2, \phi_{K'}^2 \rangle|}{|K'|^{\frac{1}{2}}} |I_0|\nonumber \\ \lesssim & \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_1, \phi_{K'}^1 \rangle|}{|K'|^{\frac{1}{2}}} \sup_{K' \cap S \neq \emptyset}\frac{|\langle f_2, \phi_{K'}^2 \rangle|}{|K'|^{\frac{1}{2}}},\end{aligned}$$ which is exactly the same estimate as for the corresponding term in Lemma \[B\_size\]. This completes the proof of Theorems \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}}$. Generalized Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ ----------------------------------------------------------------------------------------------- ### Local energy estimates and generalization of H(III) The delicacy of the argument for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ with the lacunary family $(\phi_K^3)_K$ lies in the localization and the application of the biest trick for the energy estimates. It is worth noting that Lemma \[decomp\_compact\] fails to generate the local energy estimates. In particular, one can decompose $$\langle B_I(f_1,f_2), {\varphi}_I^1\rangle = \sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100 \tau_3}2^{-100{\tau_4}}\sum_{K: |K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle.$$ Then by the geometric observation (\[geometry\_fourier\]) implied by the non-degenerate condition $ \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle \neq 0$, $$\label{loc_attempt_fourier} \langle B_I(f_1,f_2), {\varphi}_I^1\rangle = \sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100 \tau_3}2^{-100{\tau_4}}\sum_{\substack{K:|K| \geq |I| \\ \frac{\text{dist}(K,I)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle.$$ The localization has been obtained in (\[loc\_attempt\_fourier\]).
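For completeness, the geometric observation (\[geometry\_fourier\]) can be sketched as follows (a routine computation, under the convention that $2^{\tau}I$ denotes the interval concentric with $I$ of length $2^{\tau}|I|$). If $2^{\tau_3}I \cap 2^{\tau_4}K \neq \emptyset$, $I \cap S \neq \emptyset$ and $|I| \leq |K|$, then $$\text{dist}(K,S) \leq \text{dist}(K,I) + |I| \leq \big(2^{\tau_3}|I| + 2^{\tau_4}|K|\big) + |K| \lesssim 2^{\tau_3+\tau_4}|K|,$$ which is exactly (\[geometry\_fourier\]); the same computation with $S$ replaced by $I$ itself justifies the constraint $\frac{\text{dist}(K,I)}{|K|} \lesssim 2^{\tau_3+\tau_4}$ appearing in (\[loc\_attempt\_fourier\]).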
Nonetheless, for each fixed $\tau_3$ and $\tau_4$, the two sums below need not coincide: $$\sum_{\substack{K:|K| \geq |I| \\ \frac{\text{dist}(K,I)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle \neq \sum_{\substack{K: \frac{\text{dist}(K,I)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle.$$ The reason is that ${\varphi}_I^{1,\tau_3}$ and $\psi_K^{3,\tau_4}$ are general $L^2$-normalized bump functions instead of Haar wavelets and $L^2$-normalized indicator functions, so the “if and only if” condition $$\label{haar_biest_tri} \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle \neq 0 \iff |K| \geq |I|$$ is no longer valid, and it is insufficient to derive the biest trick. The biest trick is crucial in the local energy estimates, as should be evident from the previous analysis. In order to use the biest trick in the Fourier case, one needs to exploit the compact Fourier supports instead of the compact spatial supports used in the Haar model. As a consequence, one cannot simply apply Lemma \[decomp\_compact\] to localize the energy term involving $B$ as in (\[loc\_attempt\_fourier\]), since the bump functions $ {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K$ are compactly supported in space and hence cannot be compactly supported in frequency, by the uncertainty principle. To achieve the biest trick, one needs to apply a generalized localization.
One first recalls that the Littlewood-Paley decomposition imposes that $\text{supp}(\widehat{{\varphi}_I^1}) \subseteq \omega_I$ and $\text{supp}(\widehat{\psi_K^3}) \subseteq \omega_K$, where the frequency intervals are positioned as follows: when $|K| \geq |I|$, the two components of $\omega_K$, of the form $\pm[\frac{1}{4}|K|^{-1}, 4|K|^{-1}]$, lie inside $\omega_I$, whereas when $|K| < |I|$, they lie outside $\omega_I$. [Figure: the relative positions of $\omega_K$ and $\omega_I$ on the frequency axis in the two cases $|K| \geq |I|$ and $|K| < |I|$.] As one may notice, by Plancherel, $$\label{biest_fourier} \langle {\varphi}^{1}_{I}, \psi^{3}_K \rangle \neq 0 \iff \omega_K \subseteq \omega_I \iff |K| \geq |I|,$$ which yields the biest trick as desired. Meanwhile, we would like to attain some localization for the energy. In particular, fix any $n_1, m_1$ and define the level set $$\Omega_{n_1,m_1}^x := \{Mf_1 > C_12^{n_1}|F_1|\} \cap \{Mf_2 > C_12^{m_1}|F_2|\};$$ then one would like to reduce $\text{energy}(\langle B_I, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}$ to $\text{energy}(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}$, where $$B^{n_1,m_1}_0 := \sum_{\substack{K \in \mathcal{K}\\ K \cap \Omega_{n_1,m_1}^x \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \psi_K^3.$$ One observes that since $\psi_K^3$ and ${\varphi}_I^1$ are not compactly supported in $K$ and $I$ respectively, one cannot deduce that $K \cap \Omega_{n_1,m_1}^x \neq \emptyset$ given that $|K| \geq |I|$.
The localization in the Fourier case is attained in a more analytic fashion. One decomposes the sum $$\begin{aligned} \label{d_0} \frac{|\langle B_I, {\varphi}^1_I \rangle|}{|I|^{\frac{1}{2}}} = & \frac{1}{|I|^{\frac{1}{2}}}\bigg|\sum_{K: |K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle\bigg| \nonumber \\ = & \frac{1}{|I|^{\frac{1}{2}}}\bigg|\sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}^{n_1,m_1}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle + \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0^{n_1,m_1}}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}_I^1, \psi_K^3 \rangle \bigg|,\end{aligned}$$ where $$\mathcal{K}_d^{n_1,m_1} := \{K: 1 + \frac{\text{dist}(K,\Omega_{n_1,m_1}^x)}{|K|} \sim 2^{d} \},$$ and $$\mathcal{K}_0^{n_1,m_1} := \{K : K \cap \Omega^{x}_{n_1,m_1} \neq \emptyset\}.$$ Ideally, one would like to “omit” the former term, which is reasonable once $$\label{energy_needed} \sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}^{n_1,m_1}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \ll \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0^{n_1,m_1}}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle,$$ so that one can apply the previous argument discussed in Chapter 6.
In the other case, when $$\sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}^{n_1,m_1}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \gtrsim \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0^{n_1,m_1}}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle,$$ local energy estimates are not necessary to achieve the result. The following lemma generates estimates for the former term and provides a guideline for the separation of cases. The notation in the lemma is consistent with the previous discussion. \[en\_loc\] Suppose that $d >0$. Then $$\frac{1}{|I|^{\frac{1}{2}}}\bigg| \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}^{n_1,m_1}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \bigg| \lesssim 2^{-Nd}(C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1},$$ for any $0 \leq \alpha_1,\beta_1 \leq 1$ and some $N \gg 1$. 1. One simple but important fact is that for any fixed $d>0$, $n_1$ and $m_1$, $\mathcal{K}_d^{n_1,m_1}$ is a disjoint collection of dyadic intervals. 2. With the first comment in mind, one can apply exactly the same argument as in Section 10.1 to prove the lemma. Based on the estimates described in Lemma \[en\_loc\], one has that $$\begin{aligned} \label{threshold} \frac{1}{|I|^{\frac{1}{2}}}\bigg|\sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}^{n_1,m_1}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \bigg| & \lesssim \sum_{d>0} 2^{-Nd} (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1} \nonumber \\ & \lesssim (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}, \end{aligned}$$ for any $0 \leq \alpha_1, \beta_1 \leq 1$.
One can then use the upper bound in (\[threshold\]) to proceed with the case-by-case discussion. **Case I: There exists $0 \leq \alpha_1, \beta_1 \leq 1$ such that $\frac{|\langle B_I, {\varphi}^1_I \rangle|}{|I|^{\frac{1}{2}}} \gg (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}.$** In Case I, (\[energy\_needed\]) holds and the dominant term in expression (\[d\_0\]) has to be $$\frac{1}{|I|^{\frac{1}{2}}}\sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0^{n_1,m_1}}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle,$$ which provides a localization for energy estimates involving $B$. In particular, in the current case $$\begin{aligned} & \text{energy}((\langle B_I, {\varphi}_I^1\rangle)_{I}) \lesssim \text{energy}(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}, \nonumber \\ & \text{energy}^t((\langle B_I, {\varphi}_I^1\rangle)_{I}) \lesssim \text{energy}^t(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}, \nonumber\end{aligned}$$ for any $t >1$. Furthermore, $$\begin{aligned} & \text{energy}(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset} \lesssim \|B^{n_1,m_1}_{0}\|_1, \nonumber \\ & \text{energy}^t(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset} \lesssim \|B^{n_1,m_1}_{0}\|_t,\end{aligned}$$ for any $t > 1$, where the bounds for $\|B^{n_1,m_1}_{0}\|_1$ and $\|B^{n_1,m_1}_{0}\|_t$ follow from the same estimates as for their Haar variants described in Chapter 5. We will explicitly state the local energy estimates in this case. \[localized\_energy\_fourier\_x\] Suppose that $n_1, m_1 \in \mathbb{Z}$ are fixed and suppose that $\mathcal{I}'$ is a finite collection of dyadic intervals such that for any $I \in \mathcal{I} '$, $I$ satisfies 1. $I \in \mathcal{I}_{n_1,m_1}$; 2.
$I \in T $ with $T \in \mathbb{T}_{l_1}$ for some $l_1$ satisfying the condition that there exists some $ 0 \leq \alpha_1, \beta_1 \leq 1$ such that $$\label{loc_condition_x} 2^{l_1}\|B\|_1 \gg (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}.$$ (i) Then for any $0 \leq \theta_1,\theta_2 <1$ with $\theta_1 + \theta_2 = 1$, one has $$\begin{aligned} &\text{energy}_{\mathcal{I}'}((\langle B_I, {\varphi}_I\rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}.\end{aligned}$$ (ii) Suppose that $t >1$. Then for any $0 \leq \theta_1, \theta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$, one has $$\begin{aligned} & \text{energy}^{t} _{\mathcal{I}'}((\langle B_I, {\varphi}_I\rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2}2^{n_1(\frac{1}{p_1} - \theta_1)}2^{m_1(\frac{1}{q_1} - \theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}. \end{aligned}$$ A parallel statement holds for dyadic intervals in the $y$-direction, which will be stated for ease of reference later on. \[localized\_energy\_y\] Suppose that $ n_2, m_2 \in \mathbb{Z}$ are fixed and suppose that $\mathcal{J}'$ is a finite collection of dyadic intervals such that for any $J \in \mathcal{J} '$, $J$ satisfies 1. $J \in \mathcal{J}_{n_2,m_2}$; 2.
$J \in S $ with $S \in \mathbb{S}_{l_2}$ for some $l_2$ satisfying the condition that there exists some $0 \leq \alpha_2, \beta_2 \leq 1$ such that $$\label{loc_condition_y} 2^{l_2}\|\tilde{B}\|_1 \gg (C_22^{n_2}|G_1|)^{\alpha_2} (C_22^{m_2}|G_2|)^{\beta_2}.$$ (i) Then for any $0 \leq \zeta_1,\zeta_2 <1$ with $\zeta_1 + \zeta_2= 1$, one has $$\begin{aligned} & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}_J, {\varphi}_J \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}. \end{aligned}$$ (ii) Suppose that $s >1$. Then for any $0 \leq \zeta_1, \zeta_2 <1$ with $\zeta_1 + \zeta_2= \frac{1}{s}$, one has $$\begin{aligned} & \text{energy}^{s} _{\mathcal{J}'}((\langle \tilde{B}_J, {\varphi}_J \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2} - \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}. \end{aligned}$$ We would like to highlight that the localization of energies is attained under the additional conditions (\[loc\_condition\_x\]) and (\[loc\_condition\_y\]), in which case one obtains the local energy estimates stated in Propositions \[localized\_energy\_fourier\_x\] and \[localized\_energy\_y\], which can be viewed as analogues of Proposition \[B\_en\]. **Case II: For any $0 \leq \alpha_1, \beta_1 \leq 1$, $ \frac{|\langle B_I, {\varphi}^1_I \rangle|}{|I|^{\frac{1}{2}}} \lesssim (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}.$** In this alternative case, the size estimates are favorable and a simpler argument can be applied without invoking the local energy estimates.
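Schematically, and only as a heuristic reformulation of the case hypothesis rather than an additional estimate, Case II can be recast as a pure size bound by taking the infimum over the admissible exponents: $$\text{size}((\langle B_I, {\varphi}^1_I\rangle)_{I}) \lesssim \inf_{0 \leq \alpha_1, \beta_1 \leq 1} (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1},$$ so that the favorable size bound can enter the size-energy interpolation directly, without localizing $B$.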
### Proof Part 1 - Localization In this last section, we will explore how to implement the case-by-case analysis and generalize the argument in the proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ when $(\phi^3_K)_K$ and $(\phi^3_L)_L$ are **lacunary** families and $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} >1 $, which is the trickiest part of the argument in Chapter 7 to generalize. The generalized argument can be viewed as a combination of the discussions in Chapters 6 and 7. One first defines the exceptional set $\tilde{\Omega}$ as follows: for any $\tau_1,\tau_2 \in \mathbb{N}$, define $$\Omega^{\tau_1,\tau_2} := \Omega_1^{\tau_1,\tau_2}\cup \Omega_2^{\tau_1,\tau_2}$$ with $$\begin{aligned} \displaystyle \Omega_1^{\tau_1,\tau_2} := &\bigcup_{\tilde{n} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{n}}|F_1|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{n}}|G_1|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{n}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{n}}}|F_2|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{n}}}|G_2|\}\cup \nonumber \\ &\bigcup_{\tilde{\tilde{\tilde{n}}} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{n}}}}|F_1|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{n}}}}|G_2|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{\tilde{\tilde{n}}}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{\tilde{n}}}} }|F_2|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{\tilde{n}}}} }|G_1|\}\cup \nonumber \\ & \bigcup_{l_2 \in \mathbb{Z}}\{MB > C_1^{\tau_1}2^{-l_2}\|B\|_1\} \times \{M\tilde{B} > C_2^{\tau_2} 2^{l_2}\|\tilde{B}\|_1\}, \nonumber \\ \Omega_2^{\tau_1,\tau_2} := & \{SSh > C_3^{\tau_1,\tau_2} \|h\|_{L^s(\mathbb{R}^2)}\}, \nonumber \\\end{aligned}$$ and $$\begin{aligned} & \tilde{\Omega}^{\tau_1,\tau_2} := \{ M\chi_{\Omega^{\tau_1,\tau_2}} > \frac{1}{100}\}, \nonumber \\ & \tilde{\tilde{\Omega}}^{\tau_1,\tau_2} := \{ M\chi_{\tilde{\Omega}^{\tau_1,\tau_2}} >\frac{1}{2^{2\tau_1+2\tau_2}}\},\end{aligned}$$ and finally $$\tilde{\Omega} := \bigcup_{\tau_1,\tau_2 \in \mathbb{N}}\tilde{\tilde{\Omega}}^{\tau_1,\tau_2}.$$ Let $$E' := E \setminus \tilde{\Omega},$$ where $|E'| \sim |E| =1$ given that $C_1, C_2$ and $C_3$ are sufficiently large constants. Our goal is to prove that $$\begin{aligned} \Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{xy}, \chi_{E'}) := \displaystyle \sum_{\tau_1,\tau_2 \in \mathbb{N}}2^{-100(\tau_1+\tau_2)}\sum_{I \times J \in \mathcal{R}}& \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}_J(g_1,g_2), \phi_J^1 \rangle \nonumber \\ & \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle \nonumber\end{aligned}$$ satisfies the restricted weak-type estimate $$\label{final_linear} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}.$$ For $\tau_1, \tau_2 \in \mathbb{N}$ fixed, let $$\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}^{\tau_1, \tau_2} (f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}):= \sum_{I \times J \in \mathcal{R}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}_J(g_1,g_2), \phi_J^1 \rangle \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle;$$ then (\[final\_linear\]) can be reduced to proving that for any fixed $\tau_1, \tau_2 \in \mathbb{N}$, $$|\Lambda_{\text{flag}^0 \otimes \text{flag}^{0}}^{\tau_1, \tau_2}| \lesssim (2^{\tau_1+ \tau_2})^{\Theta}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}$$ for some $0 < \Theta < 100$. ### Proof Part 2 - Summary of stopping-time decompositions.
For any fixed $\tau_1,\tau_2 \in \mathbb{N}$, one can carry out exactly the same stopping-time algorithms as in Chapter 7, with $C_1, C_2$ and $C_3$ replaced by $C_1^{\tau_1}$, $C_2^{\tau_2}$ and $C_3^{\tau_1,\tau_2}$ respectively. The resulting level sets, trees and collections of dyadic rectangles will follow the same notation as before, with extra indices $\tau_1$ and $\tau_2$:

I. Tensor-type stopping-time decomposition I on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{I}^{\tau_1}_{-n-n_2,-m-m_2} \times \mathcal{J}^{\tau_2}_{n_2,m_2}$ ($n_2, m_2 \in \mathbb{Z}$, $n > 0$);

II. Tensor-type stopping-time decomposition II on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in T \times S $ with $T \in \mathbb{T}_{-l-l_2}^{\tau_1}$, $S \in \mathbb{S}_{l_2}^{\tau_2}$ ($l_2 \in \mathbb{Z}$, $l > 0$);

III. General two-dimensional level sets stopping-time decomposition on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}$ ($k_1 <0$, $k_2 \leq K$).

### Proof Part 3 - Application of stopping-time decompositions. As one may recall, the multilinear form is estimated based on the stopping-time decompositions, the sparsity condition and the Fubini-type argument.
$$\begin{aligned} &|\Lambda_{\text{flag}^0 \otimes \text{flag}^{0}}^{\tau_1, \tau_2}| \nonumber \\ = & \bigg| \sum_{\substack{l >0 \\ n> 0 \\ m> 0 \\ k_1 < 0 \\ k_2 \leq K}}\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}}\sum_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in T \times S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}\frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}(g_1,g_2), \phi_J^1 \rangle \langle h, \phi_I^2 \otimes \phi_J^2 \rangle \langle \chi_{E'}, \phi_I^{3,\tau_1} \otimes \phi_J^{3,\tau_2} \rangle\bigg| \nonumber \\ \lesssim &C_1^{\tau_1}C_2^{\tau_2}(C_3^{\tau_1,\tau_2})^2\sum_{\substack{l >0 \\ n> 0 \\ m> 0 \\ k_1 < 0 \\ k_2 \leq K}}2^{k_1} \|h\|_s 2^{k_2}\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2} \|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg|.
\nonumber \\\end{aligned}$$ The nested sum $$\begin{aligned} \label{ns_fourier} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2}\|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \end{aligned}$$ can be estimated using the same sparsity condition as for (\[ns\_sp\]) and a modified Fubini argument, as discussed in the following two subsections. ### Proof Part 4 - Sparsity condition One invokes the sparsity condition (Theorem \[sparsity\]) and the argument in Chapter 6 to obtain the following estimate for (\[ns\_fourier\]). $$\begin{aligned} \label{ns_fourier_sp} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2}\|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \nonumber \\ \lesssim & 2^{-\frac{k_2\gamma}{2}}2^{-l(1- \frac{(1+\delta)}{2})} |F_1|^{\frac{\mu_1(1+\delta)}{2}}|F_2|^{\frac{\mu_2(1+\delta)}{2}}|G_1|^{\frac{\nu_1(1+\delta)}{2}}|G_2|^{\frac{\nu_2(1+\delta)}{2}}\|B\|_1^{1-\frac{1+\delta}{2}}\|\tilde{B}\|_1^{1-\frac{1+\delta}{2}}.\end{aligned}$$ For any $0 < \delta \ll1$, Lemma \[B\_global\_norm\] implies that $$\begin{aligned} & \|B\|_1^{1-\frac{1+\delta}{2}} \lesssim |F_1|^{\rho(1-\frac{1+\delta}{2})}|F_2|^{(1-\rho)(1-\frac{1+\delta}{2})}, \nonumber \\ & \|\tilde{B}\|_1^{1-\frac{1+\delta}{2}} \lesssim |G_1|^{\rho'(1-\frac{1+\delta}{2})}|G_2|^{(1-\rho')(1-\frac{1+\delta}{2})}.\end{aligned}$$ Therefore, (\[ns\_fourier\_sp\]) can be majorized by
$$\label{ns_fourier_sp_final} 2^{-\frac{k_2\gamma}{2}}2^{-l(1- \frac{(1+\delta)}{2})} |F_1|^{\frac{\mu_1(1+\delta)}{2}+\rho(1-\frac{1+\delta}{2})}|F_2|^{\frac{\mu_2(1+\delta)}{2}+(1-\rho)(1-\frac{1+\delta}{2})}|G_1|^{\frac{\nu_1(1+\delta)}{2}+\rho'(1-\frac{1+\delta}{2})}|G_2|^{\frac{\nu_2(1+\delta)}{2}+(1-\rho')(1-\frac{1+\delta}{2})}.$$ ### Proof Part 5 - Fubini argument The separation of cases based on the levels of the stopping-time decompositions for $ \big(\frac{|\langle B^H_{I}(f_1,f_2), {\varphi}^{1,H}_I \rangle|}{|I|^{\frac{1}{2}}}\big)_{I \in \mathcal{I}} $ and $ \big(\frac{|\langle \tilde {B}^H_{J}(g_1,g_2), {\varphi}^{1,H}_J \rangle|}{|J|^{\frac{1}{2}}}\big)_{J \in \mathcal{J}} $, in particular the ranges of $l_2$ in the *tensor-type stopping-time decomposition I*, plays an important role in the modified Fubini-type argument. With $l \in \mathbb{N}$ fixed, the ranges of the exponent $l_2$ are defined as follows: $$\begin{aligned} \mathcal{EXP}_1^{l,-n-n_2,n_2,-m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{for any}\ \ 0 \leq \alpha_1, \beta_1, \alpha_2, \beta_2 \leq 1, \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1\lesssim (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2} \}, \nonumber \\ \mathcal{EXP}_2^{l,-n-n_2,n_2, -m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{there exists} \ \ 0 \leq \alpha_1, \beta_1 \leq 1 \ \ \text{such that} \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1 \gg (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad \text{for any} \ \ 0 \leq \alpha_2, \beta_2 \leq 1, \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2}\}, \nonumber \\
\mathcal{EXP}_3^{l,-n-n_2,n_2,-m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{for any} \ \ 0 \leq \alpha_1, \beta_1 \leq 1, \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1 \lesssim (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad \text{there exists} \ \ 0 \leq \alpha_2, \beta_2 \leq 1 \ \ \text{such that} \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \gg (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2}\}, \nonumber \\ \mathcal{EXP}_4^{l,-n-n_2,n_2,-m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{there exists} \ \ 0 \leq \alpha_1, \beta_1, \alpha_2, \beta_2 \leq 1 \ \ \text{such that} \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1 \gg (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \gg (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2}\}. 
\nonumber \end{aligned}$$ One decomposes the sum into four parts based on the ranges specified above: $$\begin{aligned} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2} \|\tilde{B}\|_1\sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber \\ = &\underbrace{ \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ }} \sum_{l_2\in \mathcal{EXP}_1^{l,-n-n_2,n_2,-m-m_2,m_2}}}_I+ \underbrace{\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ }} \sum_{l_2\in \mathcal{EXP}_2^{l,-n-n_2,n_2,-m-m_2,m_2}}}_{II} + \nonumber \\ & \underbrace{\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ }} \sum_{l_2\in \mathcal{EXP}_3^{l,-n-n_2,n_2,-m-m_2,m_2}} }_{III}+ \underbrace{\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ }} \sum_{l_2\in \mathcal{EXP}_4^{l,-n-n_2,n_2,-m-m_2,m_2}}}_{IV}.\end{aligned}$$ One denotes the four parts by $I$, $II$, $III$ and $IV$ and will derive estimates for each part separately.
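It may be worth recording why the four ranges are exhaustive. Writing $A$ (resp. $B$) for the set of $l_2$ for which the $\lesssim$-condition on $2^{-l-l_2}\|B\|_1$ (resp. on $2^{l_2}\|\tilde{B}\|_1$) holds for all admissible exponents, and reading $\gg$ as the failure of the corresponding $\lesssim$-condition, the definitions give
$$\mathcal{EXP}_1 = A \cap B, \quad \mathcal{EXP}_2 = A^c \cap B, \quad \mathcal{EXP}_3 = A \cap B^c, \quad \mathcal{EXP}_4 = A^c \cap B^c,$$
so that $\mathbb{Z} = \mathcal{EXP}_1 \sqcup \mathcal{EXP}_2 \sqcup \mathcal{EXP}_3 \sqcup \mathcal{EXP}_4$, where the superscripts on $\mathcal{EXP}_i$ have been suppressed.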
The multilinear form can thus be decomposed correspondingly as follows: $$\begin{aligned} |\Lambda_{\text{flag}^0 \otimes \text{flag}^{0}}^{\tau_1, \tau_2}| \lesssim & \underbrace{C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2\sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot I}_{\Lambda_{I}^{\tau_1, \tau_2}} + \underbrace{C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot II}_{\Lambda_{II}^{\tau_1, \tau_2}} + \nonumber \\ &\underbrace{ C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2\sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot III}_{\Lambda_{III}^{\tau_1, \tau_2}} + \underbrace{C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2\sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot IV}_{\Lambda_{IV}^{\tau_1, \tau_2}}.\end{aligned}$$ It suffices to prove that each part satisfies the bound $$(C_1^{\tau_1}C_2^{\tau_2} C_3^{\tau_1,\tau_2})^{\Theta}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}$$ for some constant $0<\Theta<100$. With a slight abuse of notation, we will abbreviate $\mathcal{EXP}_i^{l,-n-n_2,n_2,-m-m_2,m_2}$ as $\mathcal{EXP}_i$, for $i = 1,2,3,4.$ **Estimate of $\Lambda_I^{\tau_1,\tau_2}$.** Although the localization of energies cannot be applied to $I$ at all, one observes that energy estimates are in fact unnecessary here.
In particular, $$\begin{aligned} \label{I} I \lesssim & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \sum_{l_2\in \mathcal{EXP}_1}2^{-l-l_2}\|B\|_1 2^{l_2}\|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber \\ \leq & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \bigg(\sup_{l_2\in \mathcal{EXP}_1} 2^{-l-l_2}\|B\|_1\bigg)\bigg(\sup_{l_2\in \mathcal{EXP}_1} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg|\bigg) \bigg(\sum_{l_2\in \mathcal{EXP}_1}2^{l_2}\|\tilde{B}\|_1\bigg). $$ We will estimate the expressions in the parentheses separately. (i) It is trivial from the definition of $\mathcal{EXP}_1$ that for any $0 \leq \alpha_1, \beta_1 \leq 1$, $$\sup_{l_2\in \mathcal{EXP}_1} 2^{-l-l_2}\|B\|_1 \lesssim (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1}.$$ (ii) The last expression is a geometric series with the largest term bounded by $$\label{I_i} (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \alpha_2, \beta_2 \leq 1$ according to the definition of $\mathcal{EXP}_1$. As a result, $$\sum_{l_2 \in \mathcal{EXP}_1} 2^{l_2} \|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \alpha_2, \beta_2 \leq 1$.
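To spell out the geometric-series step in (ii): since $2^{l_2}\|\tilde{B}\|_1$ is increasing in $l_2$, the set of $l_2$ satisfying the second condition in the definition of $\mathcal{EXP}_1$ is contained in a half-line $(-\infty, L]$ whose endpoint $L$ still satisfies the defining bound (assuming the set is nonempty; otherwise the estimate is trivial), so that
$$\sum_{l_2 \in \mathcal{EXP}_1} 2^{l_2}\|\tilde{B}\|_1 \leq \sum_{l_2 \leq L} 2^{l_2}\|\tilde{B}\|_1 = 2^{L+1}\|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2}.$$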
(iii) For any fixed $-n-n_2, -m-m_2, n_2,m_2, l_2, \tau_1,\tau_2$, $$\begin{aligned} \label{I_ii} &\{I_T: I_T \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \ \ \text{and} \ \ T \in \mathbb{T}^{\tau_1}_{-l-l_2} \}, \nonumber \\ & \{J_S: J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2} \ \ \text{and} \ \ S \in \mathbb{S}^{\tau_2}_{l_2} \}\end{aligned}$$ are disjoint collections of dyadic intervals. Therefore $$\begin{aligned} \label{I_iii} &\sup_{l_2\in \mathcal{EXP}_1} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber \\ \leq & \sup_{l_2}\bigg|\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}}\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber\\ \leq & \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg|.\end{aligned}$$ One can now plug in the estimates (\[I\_i\]), (\[I\_ii\]) and (\[I\_iii\]) into (\[I\]) and derive that for any $0 \leq \alpha_1, \beta_1, \alpha_2, \beta_2 \leq 1$, $$\begin{aligned} & I \nonumber \\ \lesssim & (C_1^{\tau_1})^2(C_2^{\tau_2})^2\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(2^{-n-n_2}|F_1|)^{\alpha_1} (2^{-m-m_2}|F_2|)^{\beta_1}(2^{n_2}|G_1|)^{\alpha_2} (2^{m_2}|G_2|)^{\beta_2}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \nonumber \\ \leq & (C_1^{\tau_1})^2(C_2^{\tau_2})^2 \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in
\mathbb{Z}}}(2^{-n-n_2}|F_1|)^{\alpha_1} (2^{-m-m_2}|F_2|)^{\beta_1}(2^{n_2}|G_1|)^{\alpha_2} (2^{m_2}|G_2|)^{\beta_2}\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg|. \end{aligned}$$ By letting $\alpha_1 = \frac{1}{p_1}$, $\beta_1 = \frac{1}{q_1}$, $\alpha_2 = \frac{1}{p_2}$ and $\beta_2 = \frac{1}{q_2}$ and applying the argument for the choice of indices in Chapter 6, one has $$I \lesssim (C_1^{\tau_1})^2(C_2^{\tau_2})^2 2^{-n\frac{1}{p_2}}2^{-m\frac{1}{q_1}} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg|,$$ where $$\begin{aligned} \label{I_measure} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \lesssim \min(2^{-k_1s},2^{-k_2\gamma}),\end{aligned}$$ for any $\gamma >1$. The estimate is a direct application of the sparsity condition described in Proposition \[sp\_2d\] that has been used extensively before.
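The way (\[I\_measure\]) produces decaying factors in both $k_1$ and $k_2$ is the elementary inequality $\min(a,b) \leq a^{\frac{1}{2}}b^{\frac{1}{2}}$ for $a,b>0$:
$$2^{k_1} \cdot 2^{k_2} \cdot \min(2^{-k_1 s},2^{-k_2\gamma}) \leq 2^{k_1} \cdot 2^{k_2} \cdot 2^{-\frac{k_1 s}{2}}2^{-\frac{k_2\gamma}{2}} = 2^{k_1(1-\frac{s}{2})}\, 2^{k_2(1-\frac{\gamma}{2})},$$
and one may then choose $1 < \gamma < 2$ so that $1-\frac{\gamma}{2} > 0$ and the resulting geometric series over $k_2 \leq K$ converges; the series over $k_1 < 0$ converges whenever $1 - \frac{s}{2} > 0$.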
One can now apply (\[I\_measure\]) to conclude that $$\begin{aligned} |\Lambda_I^{\tau_1,\tau_2}| = & C_1^{\tau_1}C_2^{\tau_2}(C_3^{\tau_1,\tau_2})^2 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot I \nonumber \\ \lesssim & (C_1^{\tau_1}C_2^{\tau_2}C_3^{\tau_1,\tau_2})^{6} \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1(1-\frac{s}{2})}\|h\|_s 2^{k_2(1-\frac{\gamma}{2})} 2^{-n\frac{1}{p_2}}2^{-m\frac{1}{q_1}} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}},\end{aligned}$$ and achieves the desired bound with an appropriate choice of $\gamma>1$. **Estimate of $\Lambda_{II}^{\tau_1,\tau_2}$.** One first observes that the estimates for $\Lambda_{II}^{\tau_1,\tau_2}$ apply to $\Lambda_{III}^{\tau_1,\tau_2}$ by symmetry. Notice that $$\begin{aligned} \label{II} II \leq & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg( \sum_{l_2 \in \mathcal{EXP}_2} 2^{l_2} \|\tilde{B}\|_1\bigg) \bigg(\sup_{l_2 \in \mathcal{EXP}_2}2^{-l-l_2}\|B\|_1\sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ I_T \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1}}}|I_T|\bigg)\bigg(\sup_{l_2}\sum_{\substack{S \in \mathbb{S}_{l_2}^{\tau_2} \\ J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2}}}|J_S|\bigg).\end{aligned}$$ (i) The first expression is a geometric series which can be bounded by $$\label{II_i} (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \alpha_2, \beta_2 \leq 1$ (up to some constant, as discussed in the estimate of $I$). (ii) The second term in (\[II\]) can be considered as a localized $L^{1,\infty}$-energy. In addition, given the restriction that $l_2 \in \mathcal{EXP}_2$, one can apply the localization and the corresponding energy estimates described in Proposition \[localized\_energy\_fourier\_x\].
In particular, for any $0 \leq \theta_1, \theta_2 < 1$ with $\theta_1 + \theta_2 = 1$, $$\begin{aligned} \label{II_ii} & \sup_{l_2 \in \mathcal{EXP}_2}2^{-l-l_2}\|B\|_1\sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ I_T \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1}}}|I_T| \nonumber \\ \lesssim & (C_1^{\tau_1}2^{-n-n_2})^{\frac{1}{p_1}-\theta_1}(C_1^{\tau_1} 2^{-m-m_2})^{\frac{1}{q_1}- \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}.\end{aligned}$$ (iii) For any fixed $n_2,m_2,l_2$ and $\tau_2$, $\{J_S: J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2} \ \ \text{and} \ \ S \in \mathbb{S}_{l_2}^{\tau_2}\}$ is a disjoint collection of dyadic intervals, which implies that $$\begin{aligned} \label{II_iii} \sup_{l_2}\sum_{\substack{S \in \mathbb{S}_{l_2}^{\tau_2} \\ J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2}}}|J_S| & \leq \big| \bigcup_{\substack{J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2}}}J_S\big| \nonumber \\ & \lesssim \big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big|,\end{aligned}$$ where the last inequality follows from the pointwise estimates indicated in Claim \[ptwise\].
By combining (\[II\_i\]), (\[II\_ii\]) and (\[II\_iii\]), one can majorize (\[II\]) as $$\begin{aligned} \label{II_final} II \lesssim & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_2^{\tau_2} 2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2} 2^{m_2}|G_2|)^{\beta_2}(C_1^{\tau_1}2^{-n-n_2})^{\frac{1}{p_1}- \theta_1}(C_1^{\tau_1} 2^{-m-m_2})^{\frac{1}{q_1} - \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}} \nonumber \\ & \quad \quad \cdot \big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big| \nonumber \\ & \leq \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} (C_1^{\tau_1}2^{-n-n_2})^{\frac{1}{p_1} - \theta_1}(C_1^{\tau_1} 2^{-m-m_2})^{\frac{1}{q_1} - \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}\nonumber \\ & \quad \quad \quad \cdot (C_2^{\tau_2} 2^{n_2})^{\alpha_2 - (1+\epsilon)(1-\mu)}(C_2^{\tau_2 }2^{m_2})^{\beta_2-(1+\epsilon)\mu}|G_1|^{\alpha_2- (1+\epsilon)(1-\mu)} |G_2|^{\beta_2-(1+\epsilon)\mu} \cdot \nonumber \\ &\quad \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_2^{\tau_2}2^{n_2}|G_1|)^{(1+\epsilon)(1-\mu)}(C_2^{\tau_2}2^{m_2}|G_2|)^{(1+\epsilon)\mu}\big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big|.\end{aligned}$$ By the Hölder-type argument introduced in Chapter 7, one can estimate the expression $$\begin{aligned} \label{II_fub} & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_2^{\tau_2}2^{n_2}|G_1|)^{(1+\epsilon)(1-\mu)}(C_2^{\tau_2}2^{m_2}|G_2|)^{(1+\epsilon)\mu}\big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big| \nonumber \\ \lesssim & |G_1|^{1-\mu} |G_2|^{\mu}.\end{aligned}$$ Therefore, by plugging in (\[II\_fub\]) and some simplifications, (\[II\_final\]) can be majorized by $$\begin{aligned} & II \nonumber \\ \lesssim & (C_1^{\tau_1} C_2^{\tau_2})^2\sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} (2^{-n-n_2})^{\frac{1}{p_1} - \theta_1}( 
2^{-m-m_2})^{\frac{1}{q_1} - \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}} (2^{n_2})^{\alpha_2 - (1+\epsilon)(1-\mu)}(2^{m_2})^{\beta_2-(1+\epsilon)\mu}|G_1|^{\alpha_2-\epsilon(1-\mu)} |G_2|^{\beta_2-\epsilon\mu}. $$ One would like to choose $0 \leq \alpha_2, \beta_2 \leq 1, 0 < \mu < 1$ and $\epsilon>0$ such that $$\begin{aligned} \label{exp_cond_fourier} & \alpha_2-\epsilon(1-\mu) = \frac{1}{p_2}, \nonumber \\ & \beta_2 - \epsilon\mu = \frac{1}{q_2}. \end{aligned}$$ Meanwhile, one can also achieve the equalities $$\begin{aligned} & \frac{1}{p_1} - \theta_1 = \alpha_2-(1+\epsilon)(1-\mu), \nonumber \\ & \frac{1}{q_1} - (1-\theta_1) = \beta_2-(1+\epsilon)\mu, \end{aligned}$$ which combined with (\[exp\_cond\_fourier\]), yield $$\begin{aligned} & \frac{1}{p_1} - \theta_1 = \frac{1}{p_2} - (1-\mu), \nonumber \\ & \frac{1}{q_1} - (1-\theta_1) = \frac{1}{q_2} -\mu.\end{aligned}$$ Thanks to the condition that $$\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2},$$ one only needs to choose $0 < \theta_1, \mu < 1$ such that $$\frac{1}{p_1} - \frac{1}{p_2} = \theta_1- (1-\mu).$$ To sum up, one has the following estimate for II: $$\label{II_ns} II \lesssim (C_1^{\tau_1} C_2^{\tau_2})^2 2^{-n(\frac{1}{p_1}- \theta_1)}2^{-m(\frac{1}{q_1}- (1-\theta_1))}|F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}.$$ Last but not least, one can interpolate between the estimates (\[II\_ns\]) and (\[ns\_fourier\_sp\_final\]) obtained from the sparsity condition to conclude that $$\begin{aligned} \label{ns_fourier_fb_final} |\Lambda_{II}^{\tau_1,\tau_2}| = & C_1^{\tau_1} C_2^{\tau_2}(C_3^{\tau_1,\tau_2})^2 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot II \nonumber \\ \lesssim & (C_1^{\tau_1} C_2^{\tau_2}C_3^{\tau_1,\tau_2})^6 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2(1-\frac{\lambda\gamma}{2})} 2^{-l\lambda(1- 
\frac{(1+\delta)}{2})}2^{-n(1-\lambda)(\frac{1}{p_1}- \theta_1)}2^{-m(1-\lambda)(\frac{1}{q_1}- (1-\theta_1))} \nonumber \\ & \cdot |F_1|^{(1-\lambda)\frac{1}{p_1}+\lambda\frac{\mu_1(1+\delta)}{2}+\lambda\rho(1-\frac{1+\delta}{2})} |F_2|^{(1-\lambda)\frac{1}{q_1}+\lambda\frac{\mu_2(1+\delta)}{2}+\lambda(1-\rho)(1-\frac{1+\delta}{2})} \nonumber \\ &\cdot |G_1|^{(1-\lambda)\frac{1}{p_2}+\lambda\frac{\nu_1(1+\delta)}{2}+\lambda\rho'(1-\frac{1+\delta}{2})} |G_2|^{(1-\lambda)\frac{1}{q_2}+\lambda\frac{\nu_2(1+\delta)}{2}+\lambda(1-\rho')(1-\frac{1+\delta}{2})}. \end{aligned}$$ One has enough degrees of freedom to choose the indices and obtain the desired estimate: (i) for any $0 < \lambda,\delta < 1$, the series $\displaystyle \sum_{l>0}2^{-l\lambda(1- \frac{(1+\delta)}{2})}$ is convergent; (ii) one notices that for $0 < \theta_1 < 1$, $\displaystyle \sum_{n>0}2^{-n(1-\lambda)(\frac{1}{p_1}- \theta_1)}$ and $\displaystyle \sum_{m>0}2^{-m(1-\lambda)(\frac{1}{q_1}- (1-\theta_1))}$ converge if $$\begin{aligned} &\frac{1}{p_1} - \theta_1>0, \nonumber \\ &\frac{1}{q_1} - (1-\theta_1)>0, \end{aligned}$$ which implies that $$\frac{1}{p_1} + \frac{1}{q_1} > 1.$$ This is the condition we impose on the exponents $p_1$ and $q_1$. The proof for the range $\frac{1}{p_1} + \frac{1}{q_1} \leq 1$ follows from a simpler argument. (iii) One can identify (\[ns\_fourier\_fb\_final\]) with (\[exp00\]) and choose the indices to match the desired exponents for $|F_1|,|F_2|, |G_1|$ and $|G_2|$ in exactly the same fashion. **Estimate of $\Lambda_{IV}^{\tau_1,\tau_2}$.** When $l_2 \in \mathcal{EXP}_4$, one has the localization property that the main contribution to $$\sum_{|K| \geq |I|}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle \chi_{E'}, \psi_K^3\rangle$$ comes from $$\sum_{K \supseteq I}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle \chi_{E'}, \psi_K^3\rangle$$ as in the Haar model.
As a consequence, it is not difficult to check that the argument in Section 7 applies to the estimate of $IV$, where one employs the localized energy estimates stated in Propositions \[localized\_energy\_fourier\_x\] and \[localized\_energy\_y\] instead of Proposition \[B\_en\] and derives that $$\begin{aligned} \label{ns_fourier_fb_iv} IV \lesssim (C_1^{\tau_1} C_2^{\tau_2})^2 2^{-n(\frac{1}{p_1} - \theta_1-\frac{1}{2}\mu(1+\epsilon))}2^{-m(\frac{1}{q_1}- \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))} |F_1|^{\frac{1}{p_1}-\frac{\mu}{2}\epsilon}|F_2|^{\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon}|G_1|^{\frac{1}{p_2}-\frac{\mu}{2}\epsilon}|G_2|^{\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon}. \end{aligned}$$ By interpolating between (\[ns\_fourier\_fb\_iv\]) and (\[ns\_fourier\_sp\]), which parallel the estimates for the nested sum obtained from the Fubini argument and the sparsity condition developed in Section 7, one achieves the desired bound. When only one of the families $(\phi_K)_{K \in \mathcal{K}}$ and $(\phi_L)_{L \in \mathcal{L}}$ is lacunary, a simplified argument suffices. Without loss of generality, we assume that $(\psi_K)_{K \in \mathcal{K}}$ is a lacunary family while $({\varphi}_L)_{L \in \mathcal{L}}$ is a non-lacunary family. One can then split the argument into two parts depending on the range of the exponent $l_2$: (i) $l_2 \in \{l_2 \in \mathbb{Z}: 2^{l_2}\|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2}(C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2} \}$; (ii) $l_2 \in \{l_2 \in \mathbb{Z}: 2^{l_2}\|\tilde{B}\|_1 \gg (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2}(C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2} \}$, where Case (i) can be treated by the same argument as for $II$ and Case (ii) by the reasoning for $IV$. This completes the proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ in the general case.
As commented at the beginning of this section, the arguments for Theorems \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\] developed in the Haar model can be generalized to the Fourier setting, which completes the proof of the main theorems. Appendix I - Multilinear Interpolations ======================================= This chapter is devoted to various multilinear interpolations that allow one to reduce Theorem \[main\_theorem\] to Theorem \[thm\_weak\] (and Theorem \[main\_thm\_inf\] to Theorem \[thm\_weak\_inf\] correspondingly). We will start from the statement of Theorem \[thm\_weak\] and implement interpolations step by step to reach Theorem \[main\_theorem\]. Throughout this chapter, we will consider $T_{ab}$ as a trilinear operator with its first two function spaces restricted to tensor-product spaces. Interpolation of Multilinear forms ---------------------------------- One may recall that Theorem \[thm\_weak\] covers all the restricted weak-type estimates except for the case $2 \leq s \leq \infty$. We will apply the interpolation of multilinear forms to fill in the gap. In particular, let $T^*_{ab}$ denote the adjoint operator of $T_{ab}$ such that $$\langle T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h),l\rangle = \langle T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, l), h \rangle.$$ Due to the symmetry between $T_{ab}$ and $T^*_{ab}$, one concludes that the multilinear form associated to $T^{*}_{ab}$ satisfies $$|\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, l)| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} |H|^{\frac{1}{r'}} |L|^{\frac{1}{s}}$$ for all measurable sets $F_1, F_2 \subseteq \mathbb{R}_x$, $G_1, G_2 \subseteq \mathbb{R}_y$, $H, L \subseteq \mathbb{R}^2$ of positive and finite measure and every measurable function $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $|h| \leq \chi_{H}$ and $|l| \leq \chi_{L}$ for $i, j = 1, 2$. The notation and the range of exponents agree with the ones in Theorem \[thm\_weak\].
One can now apply the interpolation of multilinear forms described in Lemma 9.6 of [@cw] to attain the restricted weak-type estimate with $1 < s \leq \infty$: $$\label{s=inf} |\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, l)| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}}|H|^{\frac{1}{s}} |L|^{\frac{1}{r'}}$$ where $\frac{1}{s} = 0$ if $s= \infty$. For $1 < s < \infty$, one can fix $f_1, g_1, f_2, g_2$ and apply the linear Marcinkiewicz interpolation theorem to prove the strong-type estimates for $h \in L^s(\mathbb{R}^2)$. The next step is to validate the same result for $h \in L^{\infty}$. One first rewrites the multilinear form associated to $T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h)$ as $$\begin{aligned} \label{linear_form_interp} \Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, \chi_{E'}) := & \langle T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h), \chi_{E'}\rangle \nonumber \\ = & \langle T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, \chi_{E'}), h\rangle. \nonumber\\\end{aligned}$$ Let $Q_N := [ -N,N]^2$ denote the square of side length $2N$ centered at the origin in $\mathbb{R}^2$; then (\[linear\_form\_interp\]) can be expressed as $$\begin{aligned} & \displaystyle \lim_{N \rightarrow \infty} \int_{Q_N} T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, \chi_{E'})(x) h(x) dx \nonumber \\ = & \lim_{N \rightarrow \infty}\int T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, \chi_{E'})(x) (h\cdot\chi_{Q_N})(x) dx \nonumber \\ = & \lim_{N \rightarrow \infty}\int T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h\cdot\chi_{Q_N})(x) \chi_{E'}(x) dx \nonumber \\ = & \lim_{N \rightarrow \infty}\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h\cdot\chi_{Q_N}, \chi_{E'}).\end{aligned}$$ Let $\tilde{h}:= \frac{h \chi_{Q_N}}{\|h\|_{\infty}}$, where $|\tilde{h}| \leq \chi_{Q_N}$ with $|Q_N| = 4N^2 < \infty$.
One can thus invoke (\[s=inf\]) to conclude that $$\begin{aligned} |\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h\chi_{Q_N}, \chi_{E'})| =& \|h\|_{\infty} \cdot |\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, \tilde{h}, \chi_{E'})| \nonumber \\ \lesssim & |F_1|^{\frac{1}{p_1}}|G_1|^{\frac{1}{p_2}}|F_2|^{\frac{1}{q_1}}|G_2|^{\frac{1}{q_2}}\|h\|_{\infty}|E|^{\frac{1}{r'}}.\end{aligned}$$ As the bound for the multilinear form is independent of $N$, passing to the limit as $N \rightarrow \infty$ yields that $$|\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, \chi_{E'})| \lesssim |F_1|^{\frac{1}{p_1}}|G_1|^{\frac{1}{p_2}}|F_2|^{\frac{1}{q_1}}|G_2|^{\frac{1}{q_2}}\|h\|_{\infty}|E|^{\frac{1}{r'}}.$$ Combined with the statement in Theorem \[thm\_weak\], one has that for any $1 < p_1,p_2, q_1,q_2 < \infty$, $1<s \leq \infty$, $0 < r < \infty$, $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} = \frac{1}{r} - \frac{1}{s}$, $$\label{restricted_weak} \|T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h)\|_{r,\infty} \lesssim |F_1|^{\frac{1}{p_1}}|G_1|^{\frac{1}{p_2}}|F_2|^{\frac{1}{q_1}}|G_2|^{\frac{1}{q_2}}\|h\|_{s}$$ for all measurable sets $F_1, F_2 \subseteq \mathbb{R}_x$, $G_1, G_2 \subseteq \mathbb{R}_y$ of positive and finite measure and every measurable function $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$ for $i, j = 1, 2$. Tensor-type Marcinkiewicz Interpolation ---------------------------------------- The next and final step is to attain strong-type estimates for $T_{ab}$ from (\[restricted\_weak\]). We first fix $h \in L^{s}$ and define $$T^{h}(f_1 \otimes g_1, f_2 \otimes g_2) := T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2,h).$$ One can then apply the following tensor-type Marcinkiewicz interpolation theorem to each $T^h$ so that Theorem \[main\_theorem\] follows. \[tensor\_interpolation\] Let $1 < p_1,p_2, q_1, q_2< \infty$ and $ 0 < t < \infty$ such that $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} = \frac{1}{t}$.
Suppose a multilinear tensor-type operator $T(f_1 \otimes g_1, f_2 \otimes g_2)$ satisfies the restricted weak-type estimates for any $\tilde{p_1}, \tilde{p_2}, \tilde{q_1}, \tilde{q_2}$ in a neighborhood of $p_1, p_2, q_1, q_2$ respectively with $ \frac{1}{\tilde{p_1}} + \frac{1}{\tilde{q_1}} = \frac{1}{\tilde{p_2}} + \frac{1}{\tilde{q_2}} = \frac{1}{\tilde{t}}$; that is, $$\|T(f_1 \otimes g_1, f_2 \otimes g_2) \|_{\tilde{t},\infty} \lesssim |F_1|^{\frac{1}{\tilde{p_1}}} |G_1|^{\frac{1}{\tilde{p_2}}} |F_2|^{\frac{1}{\tilde{q_1}}} |G_2|^{\frac{1}{\tilde{q_2}}}$$ for any measurable sets $F_1 \subseteq \mathbb{R}_{x} , F_2 \subseteq \mathbb{R}_{x}, G_1\subseteq \mathbb{R}_y, G_2\subseteq \mathbb{R}_y$ of positive and finite measure and any measurable functions $|f_1(x)| \leq \chi_{F_1}(x)$, $|f_2(x)| \leq \chi_{F_2}(x)$, $|g_1(y)| \leq \chi_{G_1}(y)$, $|g_2(y)| \leq \chi_{G_2}(y)$. Then $T$ satisfies the strong-type estimate $$\|T(f_1 \otimes g_1, f_2 \otimes g_2) \|_{t} \lesssim \|f_1\|_{p_1} \|g_1\|_{p_2} \|f_2\|_{q_1} \|g_2\|_{q_2}$$ for any $f_1 \in L^{p_1}(\mathbb{R}_x)$, $f_2 \in L^{q_1}(\mathbb{R}_x)$, $g_1 \in L^{p_2}(\mathbb{R}_y)$ and $g_2 \in L^{q_2}(\mathbb{R}_y)$. The proof of the theorem resembles the argument for the multilinear Marcinkiewicz interpolation (see [@bm]) with minor modifications. Appendix II - Reduction to Model Operators ========================================== Littlewood-Paley Decomposition ------------------------------ ### Set Up Let ${\varphi}\in \mathcal{S}(\mathbb{R})$ be a Schwartz function with $\text{supp} {\widehat}{{\varphi}} \subseteq [-2,2]$ and ${\widehat}{{\varphi}}(\xi) = 1$ on $[-1,1]$. Let $${\widehat}{\psi}(\xi) = {\widehat}{{\varphi}}(\xi) - {\widehat}{{\varphi}}(2\xi)$$ so that $\text{supp} {\widehat}{\psi} \subseteq [-2,-\frac{1}{2}] \cup [\frac{1}{2}, 2]$.
Now for every $k \in \mathbb{Z}$, define $${\widehat}{\psi}_{k}(\xi) := {\widehat}{\psi}(2^{-k}\xi).$$ One important observation is that $$\sum_{k \in \mathbb{Z}} {\widehat}{\psi}_k(\xi) = 1 \quad \text{for every} \ \xi \neq 0.$$ We will adopt the notation *lacunary* for $({\psi}_k)_k$ and *non-lacunary* for $({{\varphi}}_k)_k$. ### Special Symbols We will first focus on a special case of the symbols, and the general case will be studied as an extension afterwards. Suppose that $$a(\xi_1,\eta_1,\xi_2,\eta_2) = a_1(\xi_1,\xi_2)a_2(\eta_1,\eta_2)$$ $$b(\xi_1,\eta_1,\xi_2,\eta_2,\xi_3,\eta_3) = b_1(\xi_1,\xi_2,\xi_3) b_2(\eta_1,\eta_2,\eta_3)$$ where $$a_1(\xi_1,\xi_2) = \sum_{k_1} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2)$$ $$b_1(\xi_1,\xi_2,\xi_3) = \sum_{k_2} {\widehat}{\phi}_{k_2}(\xi_1) {\widehat}{\phi}_{k_2}(\xi_2) {\widehat}{\phi}_{k_2}(\xi_3)$$ At least one of the families $({\phi}_{k_1}(\xi_1))_{k_1}$ and $({\phi}_{k_1}(\xi_2))_{k_1}$ is lacunary and at least one of the families $({\phi}_{k_2}(\xi_1))_{k_2}$, $({\phi}_{k_2}(\xi_2))_{k_2}$ and $({\phi}_{k_2}(\xi_3))_{k_2}$ is lacunary. Moreover, $$a_2(\eta_1,\eta_2) = \sum_{j_1} {\widehat}{\phi}_{j_1}(\eta_1) {\widehat}{\phi}_{j_1}(\eta_2)$$ $$b_2(\eta_1,\eta_2,\eta_3) = \sum_{j_2} {\widehat}{\phi}_{j_2}(\eta_1) {\widehat}{\phi}_{j_2}(\eta_2) {\widehat}{\phi}_{j_2}(\eta_3)$$ where at least one of the families $({\phi}_{j_1}(\eta_1))_{j_1}$ and $({\phi}_{j_1}(\eta_2))_{j_1}$ is lacunary and at least one of the families $({\phi}_{j_2}(\eta_1))_{j_2}$, $({\phi}_{j_2}(\eta_2))_{j_2}$ and $({\phi}_{j_2}(\eta_3))_{j_2}$ is lacunary.
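The telescoping identity $\sum_k {\widehat}{\psi}_k(\xi) = 1$ from the Set Up above can be verified numerically. The sketch below is purely illustrative: it replaces the Schwartz function ${\varphi}$ by a hypothetical piecewise-cosine profile `phihat` that merely satisfies ${\widehat}{{\varphi}} = 1$ on $[-1,1]$ and $\text{supp}\, {\widehat}{{\varphi}} \subseteq [-2,2]$ (it is not Schwartz, but the telescoping argument only uses these two properties).

```python
import math

def phihat(xi):
    # Stand-in for the Fourier transform of phi: equal to 1 on [-1, 1],
    # supported in [-2, 2], with a smooth cosine ramp in between.
    a = abs(xi)
    if a <= 1.0:
        return 1.0
    if a >= 2.0:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * (a - 1.0)))

def psihat(xi, k):
    # psi_k-hat(xi) = psi-hat(2^{-k} xi), psi-hat(xi) = phi-hat(xi) - phi-hat(2 xi);
    # hence psi_k-hat is supported in {2^{k-1} <= |xi| <= 2^{k+1}}.
    return phihat(2.0 ** (-k) * xi) - phihat(2.0 ** (-k + 1) * xi)

def lp_partial_sum(xi, K):
    # Telescoping: sum_{k=-K}^{K} psi_k-hat(xi)
    #            = phi-hat(2^{-K} xi) - phi-hat(2^{K+1} xi),
    # which tends to 1 as K -> infinity for every fixed xi != 0.
    return sum(psihat(xi, k) for k in range(-K, K + 1))

if __name__ == "__main__":
    for xi in (0.1, 0.5, 3.7, 10.0, -2.3):
        print(xi, lp_partial_sum(xi, 40))
```

For each nonzero sample point, `lp_partial_sum(xi, 40)` agrees with $1$ up to floating-point error, consistent with the partition of unity away from the origin.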
Then $$\begin{aligned} a_1(\xi_1,\xi_2) b_1(\xi_1,\xi_2,\xi_3) = & \sum_{k_1,k_2} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) {\widehat}{\phi}_{k_2}(\xi_1) {\widehat}{\phi}_{k_2}(\xi_2) {\widehat}{\phi}_{k_2}(\xi_3) \nonumber \\ = & \underbrace{\sum_{k_1 \approx k_2}}_{I^1} + \underbrace{\sum_{k_1 \ll k_2}}_{II^1} + \underbrace{\sum_{k_1 \gg k_2}}_{III^1}.\end{aligned}$$ Case $I^1$ gives rise to the symbol of a paraproduct. More precisely, $$I^1 = \sum_{k} {\widehat}{\tilde{\phi}}_{k}(\xi_1) {\widehat}{\tilde{\phi}}_{k}(\xi_2) {\widehat}{\phi}_{k}(\xi_3)$$ where ${\widehat}{\tilde{\phi}}_{k}(\xi_1) := {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_2}(\xi_1)$ and ${\widehat}{\tilde{\phi}}_{k}(\xi_2) := {\widehat}{\phi}_{k_1}(\xi_2) {\widehat}{\phi}_{k_2}(\xi_2)$ when $k := k_1 \approx k_2$. The above expression can be completed as $$I^1 = \sum_{k} {\widehat}{\tilde{\phi}}_{k}(\xi_1) {\widehat}{\tilde{\phi}}_{k}(\xi_2) {\widehat}{\phi}_{k}(\xi_3) {\widehat}{\phi}_{k}(\xi_1 + \xi_2 + \xi_3)$$ and at least two of the families $({\tilde{\phi}}_{k}(\xi_1))_{k}$, $({\tilde{\phi}}_{k}(\xi_2))_{k}$, $({\phi}_{k}(\xi_3))_{k}$, $({\phi}_{k}(\xi_1 + \xi_2 + \xi_3))_{k}$ are lacunary. Cases $II^1$ and $III^1$ can be treated similarly. In Case $II^1$, the sum is non-degenerate when $(\phi_{k_2}(\xi_1))_{k_2}$ and $(\phi_{k_2}(\xi_2))_{k_2}$ are non-lacunary.
In particular, one has $$II^1 = \sum_{k_1 \ll k_2} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) {\widehat}{{\varphi}}_{k_2}(\xi_1) {\widehat}{{\varphi}}_{k_2}(\xi_2) {\widehat}{\psi}_{k_2}(\xi_3)$$ In the case when the symbols are assumed to take the special form, the above expression can be rewritten as $$\sum_{k_1 \ll k_2} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) {\widehat}{\psi}_{k_2}(\xi_3),$$ which can be “completed" as $$\label{completion} \sum_{k_1 \ll k_2} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{\phi}_{k_1}(\xi_1+\xi_2) {\widehat}{{\varphi}}_{k_2}(\xi_1+\xi_2) {\widehat}{\psi}_{k_2}(\xi_3){\widehat}{\psi}_{k_2}(\xi_1+\xi_2+\xi_3)$$ The exact same argument can be applied to $a_2(\eta_1,\eta_2)b_2(\eta_1,\eta_2,\eta_3)$ so that the symbol can be decomposed as $$\underbrace{\sum_{j_1 \approx j_2}}_{I^2} + \underbrace{\sum_{j_1 \ll j_2}}_{II^2} + \underbrace{\sum_{j_1 \gg j_2}}_{III^2}$$ where $$I^2 = \sum_{j} {\widehat}{\tilde{\phi}}_{j}(\eta_1) {\widehat}{\tilde{\phi}}_{j}(\eta_2) {\widehat}{\phi}_{j}(\eta_3) {\widehat}{\phi}_{j}(\eta_1+\eta_2+\eta_3)$$ with at least two of the families $({\tilde{\phi}}_{j}(\eta_1))_{j}$, $({\tilde{\phi}}_{j}(\eta_2))_{j}$, $({\phi}_{j}(\eta_3))_{j}$ and $({\phi}_{j}(\eta_1+\eta_2+\eta_3))_j$ being lacunary.
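The “completion" in (\[completion\]) implicitly relies on support considerations; the following sketch (a standard observation, with $\lesssim$ hiding harmless constants) records them:

```latex
\xi_1,\ \xi_2 \in \operatorname{supp}\widehat{\phi}_{k_1}
  \ \Longrightarrow\ |\xi_1 + \xi_2| \lesssim 2^{k_1},
\qquad
|\xi_3| \sim 2^{k_2},\ k_1 \ll k_2
  \ \Longrightarrow\ |\xi_1 + \xi_2 + \xi_3| \sim 2^{k_2}.
```

Hence bump factors such as $\widehat{\phi}_{k_1}(\xi_1+\xi_2)$, $\widehat{\varphi}_{k_2}(\xi_1+\xi_2)$ and $\widehat{\psi}_{k_2}(\xi_1+\xi_2+\xi_3)$, chosen equal to $1$ on these regions, may be inserted without changing the value of the symbol, which is exactly how (\[completion\]) is obtained.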
Cases $II^2$ and $III^2$ have similar expressions, where $$II^2 = \sum_{j_1 \ll j_2} {\widehat}{\phi}_{j_1}(\eta_1) {\widehat}{\phi}_{j_1}(\eta_2){\widehat}{\phi}_{j_1}(\eta_1+\eta_2) {\widehat}{{\varphi}}_{j_2}(\eta_1+\eta_2) {\widehat}{\psi}_{j_2}(\eta_3){\widehat}{\psi}_{j_2}(\eta_1+\eta_2+\eta_3).$$ One can now combine the decompositions and analysis for $a_1,a_2,b_1$ and $b_2$ to study the original operator: $$\begin{aligned} T_{ab}(f_1 \otimes g_1,f_2 \otimes g_2, h) = T_{ab}^{I^1I^2} + T_{ab}^{I^1 II^2} + T_{ab}^{I^1 III^2} + T_{ab}^{II^1 I^2} + T_{ab}^{II^1 II^2} + T_{ab}^{II^1 III^2} + T_{ab}^{III^1 I^2} + T_{ab}^{III^1 II^2} + T_{ab}^{III^1 III^2}.\end{aligned}$$ Because of the symmetry between the frequency variables $(\xi_1,\xi_2,\xi_3)$ and $(\eta_1,\eta_2,\eta_3)$ and the symmetry between the cases of frequency scales $k_1 \ll k_2$ and $k_1 \gg k_2$, $j_1 \ll j_2$ and $j_1 \gg j_2$, it suffices to consider the following operators; the others can be treated by the same argument. 1. $T_{ab}^{I^1I^2}$ is a bi-parameter paraproduct; 2.
$$\begin{aligned} T_{ab}^{II^1 I^2} = & \displaystyle \sum_{\substack{k_1 \ll k_2 \\ j \in \mathbb{Z}}} \int {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{\phi}_{k_1}(\xi_1+\xi_2) {\widehat}{{\varphi}}_{k_2}(\xi_1+\xi_2) {\widehat}{\psi}_{k_2}(\xi_3){\widehat}{\psi}_{k_2}(\xi_1+\xi_2+\xi_3) \nonumber \\ & \quad \quad \quad {\widehat}{\tilde{\phi}}_{j}(\eta_1) {\widehat}{\tilde{\phi}}_{j}(\eta_2) {\widehat}{\phi}_{j}(\eta_3) {\widehat}{\phi}_{j}(\eta_1+\eta_2+\eta_3) {\widehat}{f_1}(\xi_1) {\widehat}{f_2}(\xi_2) {\widehat}{g_1}(\eta_1) {\widehat}{g_2}(\eta_2) {\widehat}{h}(\xi_3,\eta_3)\nonumber \\ & \quad \quad \quad \cdot e^{2\pi i x(\xi_1+\xi_2+\xi_3)} e^{2\pi i y(\eta_1+\eta_2+\eta_3)}d\xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3 \nonumber \\ = & \sum_{\substack{k_1 \ll k_2 \\ j \in \mathbb{Z}}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * {\varphi}_{k_2} \bigg) ( g_1 * \tilde{\phi}_{j}) (g_2 * \tilde{\phi}_{j}) (h * \psi_{k_2}\otimes \phi_{j}) * \psi_{k_2}\otimes \phi_{j}, \nonumber \\\end{aligned}$$ where at least two of the families $(\phi_{k_1})_{k_1}$ are lacunary and at least two of the families $(\phi_{j})_{j}$ are lacunary. 3.
$$\begin{aligned} T_{ab}^{II^1 II^2} = & \displaystyle \sum_{\substack{k_1 \ll k_2 \\ j_1 \ll j_2}} \int {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{\phi}_{k_1}(\xi_1+\xi_2) {\widehat}{{\varphi}}_{k_2}(\xi_1+\xi_2) {\widehat}{\psi}_{k_2}(\xi_3){\widehat}{\psi}_{k_2}(\xi_1+\xi_2+\xi_3) \nonumber \\ & \quad \quad \quad {\widehat}{\phi}_{j_1}(\eta_1) {\widehat}{\phi}_{j_1}(\eta_2){\widehat}{\phi}_{j_1}(\eta_1+\eta_2) {\widehat}{{\varphi}}_{j_2}(\eta_1+\eta_2) {\widehat}{\psi}_{j_2}(\eta_3){\widehat}{\psi}_{j_2}(\eta_1+\eta_2+\eta_3) \nonumber \\ & \quad \quad \quad {\widehat}{f_1}(\xi_1) {\widehat}{f_2}(\xi_2) {\widehat}{g_1}(\eta_1) {\widehat}{g_2}(\eta_2) {\widehat}{h}(\xi_3,\eta_3) \cdot e^{2\pi i x(\xi_1+\xi_2+\xi_3)} e^{2\pi i y(\eta_1+\eta_2+\eta_3)}d\xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3 \nonumber \\ = & \sum_{\substack{k_1 \ll k_2 \\ j_1 \ll j_2}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * {\varphi}_{k_2} \bigg) \bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * {\varphi}_{j_2} \bigg) \nonumber \\ & \quad \ \ \ \ \cdot (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2},\end{aligned}$$ where at least two of the families $(\phi_{k_1})_{k_1}$ are lacunary and at least two of the families $(\phi_{j_1})_{j_1}$ are lacunary.

### General Symbols

The extension from special symbols to general symbols can be treated as specified in Chapter 2.13 of [@cw]. With abuse of notation, we will proceed as in the previous section, keeping in mind that the bump functions are no longer necessarily equal to $1$ on their supports, which prevents the simple manipulations used before. One notices that $I^1$ generates a bi-parameter paraproduct as previously. In Case $II^1$, since $k_1 \ll k_2$, ${\widehat}{{\varphi}}_{k_2}(\xi_1)$ and ${\widehat}{{\varphi}}_{k_2}(\xi_2)$ behave like ${\widehat}{{\varphi}}_{k_2}(\xi_1 + \xi_2)$. One could obtain (\[completion\]) as a result.
To make the argument rigorous, one considers the Taylor expansions $${\widehat}{{\varphi}}_{k_2}(\xi_1) = {\widehat}{{\varphi}}_{k_2}(\xi_1 + \xi_2) + \sum_{l_1> 0} \frac{{\widehat}{{\varphi}}^{(l_1)}_{k_2}(\xi_1+ \xi_2)}{{l_1}!}(-\xi_2)^{l_1}$$ $${\widehat}{{\varphi}}_{k_2}(\xi_2) = {\widehat}{{\varphi}}_{k_2}(\xi_1 + \xi_2) + \sum_{l_2> 0} \frac{{\widehat}{{\varphi}}^{(l_2)}_{k_2}(\xi_1+ \xi_2)}{{l_2}!}(-\xi_1)^{l_2}$$ There is some abuse of notation here: ${\widehat}{{\varphi}}_{k_2}(\xi_1+ \xi_2)$ does not represent the same function in the two equations. The two instances correspond to ${\widehat}{{\varphi}}_{k_2}(\xi_1)$ and ${\widehat}{{\varphi}}_{k_2}(\xi_2)$ respectively, and share the common feature that $({{\varphi}}_{k_2}(\xi_1))_{k_2}$ and $({{\varphi}}_{k_2}(\xi_2))_{k_2}$ are non-lacunary families of bump functions. Let ${\widehat}{\tilde{{\varphi}}}_{k_2}(\xi_1+\xi_2)$ denote the product of the two; one can then rewrite $II^1$ as $$\begin{aligned} &\underbrace{\sum_{k_1 \ll k_2} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) {\widehat}{\tilde{{\varphi}}}_{k_2}(\xi_1 + \xi_2){\widehat}{\psi}_{k_2}(\xi_3)}_{II^1_0} + \nonumber \\ & \underbrace{\sum_{\substack{0 < l_1+l_2 \leq M}}\sum_{k_1 \ll k_2}{\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) \frac{{\widehat}{{\varphi}}_{k_2}^{(l_1)}(\xi_1 + \xi_2)}{{l_1}!} \frac{{\widehat}{{\varphi}}_{k_2}^{(l_2)}(\xi_1 + \xi_2)}{{l_2}!} (-\xi_1)^{l_2}(-\xi_2)^{l_1} {\widehat}{\psi}_{k_2}(\xi_3)}_{II^1_1} + \nonumber \\ &\underbrace{\sum_{\substack{l_1 + l_2 > M }}\sum_{k_1 \ll k_2}{\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) \frac{{\widehat}{{\varphi}}_{k_2}^{(l_1)}(\xi_1 + \xi_2)}{{l_1}!} \frac{{\widehat}{{\varphi}}_{k_2}^{(l_2)}(\xi_1 + \xi_2)}{{l_2}!} (-\xi_1)^{l_2}(-\xi_2)^{l_1} {\widehat}{\psi}_{k_2}(\xi_3) }_{II^1_{\text{rest}}}, \nonumber \\\end{aligned}$$ where $M \gg |\alpha_1|$. One observes that $II^1_0$ can be “completed" to obtain (\[completion\]) as desired.
One can simplify $II^1_1$ as $$\begin{aligned} II^1_1 & = \sum_{\substack{0 < l_1 + l_2\leq M}} \sum_{\mu=100}^{\infty} \sum_{k_2 = k_1 + \mu} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) \frac{{\widehat}{{\varphi}}_{k_2}^{(l_1)}(\xi_1 + \xi_2)}{{l_1}!} \frac{{\widehat}{{\varphi}}_{k_2}^{(l_2)}(\xi_1 + \xi_2)}{{l_2}!} (-\xi_1)^{l_2}(-\xi_2)^{l_1} {\widehat}{\psi}_{k_2}(\xi_3) \nonumber \\ & = \sum_{\substack{0 < l_1 + l_2 \leq M }} \sum_{\mu=100}^{\infty} \sum_{k_2 = k_1 + \mu} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) 2^{-k_2 l_1}{\widehat}{{\varphi}}_{k_2,l_1}(\xi_1 + \xi_2) 2^{-k_2 l_2}{\widehat}{{\varphi}}_{k_2,l_2}(\xi_1 + \xi_2) (-\xi_1)^{l_2}(-\xi_2)^{l_1} {\widehat}{\psi}_{k_2}(\xi_3) \nonumber \\ & \sim \sum_{\substack{0 < l_1+l_2 \leq M }} \sum_{\mu=100}^{\infty} \sum_{k_2 = k_1 + \mu} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2) 2^{-k_2 l_1}{\widehat}{{\varphi}}_{k_2,l_1}(\xi_1 + \xi_2) 2^{-k_2 l_2}{\widehat}{{\varphi}}_{k_2,l_2}(\xi_1 + \xi_2) 2^{k_1 l_1}2^{k_1 l_2} {\widehat}{\psi}_{k_2}(\xi_3) \nonumber \\ & = \sum_{\substack{0 < l_1+l_2 \leq M }} \sum_{\mu=100}^{\infty} 2^{-\mu(l_1+l_2)}\sum_{k_2 = k_1 + \mu} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{{\varphi}}_{k_2,l_1}(\xi_1 + \xi_2) {\widehat}{{\varphi}}_{k_2,l_2}(\xi_1 + \xi_2) {\widehat}{\psi}_{k_2}(\xi_3) \nonumber \\ &= \sum_{\substack{0 < l_1+l_2 \leq M }}\sum_{\mu=100}^{\infty} 2^{-\mu(l_1+l_2)} \underbrace{ \sum_{k_2 = k_1 + \mu} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{\tilde{{\varphi}}}_{k_2,l_1,l_2}(\xi_1 + \xi_2) {\widehat}{\psi}_{k_2}(\xi_3)}_{II_{1,\mu}^{1}}, \nonumber \\\end{aligned}$$ where $ {\tilde{{\varphi}}}_{k_2,l_1,l_2}(\xi_1 + \xi_2) $ denotes an $L^{\infty}$-normalized non-lacunary bump function with Fourier support at scale $2^{k_2}$.
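The factors $2^{-k_2 l_i}$ extracted in the second line come from the dilation structure of the bumps. Assuming, as is standard (and as in the lacunary case above), that ${\widehat}{{\varphi}}_{k_2}(\xi) = {\widehat}{{\varphi}}(2^{-k_2}\xi)$ for a fixed bump ${\widehat}{{\varphi}}$, the chain rule gives:

```latex
\widehat{\varphi}_{k_2}^{(l)}(\xi)
  = \frac{d^{l}}{d\xi^{l}}\Big[\widehat{\varphi}\big(2^{-k_2}\xi\big)\Big]
  = 2^{-k_2 l}\,\widehat{\varphi}^{(l)}\big(2^{-k_2}\xi\big)
  =: 2^{-k_2 l}\,\widehat{\varphi}_{k_2,l}(\xi),
```

where ${\widehat}{{\varphi}}_{k_2,l}$ is again an $L^{\infty}$-normalized bump adapted to scale $2^{k_2}$. Moreover, on the support of ${\widehat}{\phi}_{k_1}$ one has $|\xi_i| \lesssim 2^{k_1}$, so $|(-\xi_i)^{l}| \lesssim 2^{k_1 l}$; combining the two bounds and writing $k_2 = k_1 + \mu$ produces the decay factor $2^{-k_2 l} 2^{k_1 l} = 2^{-\mu l}$ used in the third and fourth lines.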
One notices that $II_{1,\mu}^{1}$ has a form similar to (\[completion\]) and can be rewritten as $$\sum_{k_2 = k_1 + \mu} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{\phi}_{k_1}(\xi_1+\xi_2) {\widehat}{\tilde{{\varphi}}}_{k_2,l_1,l_2}(\xi_1+\xi_2) {\widehat}{\psi}_{k_2}(\xi_3){\widehat}{\psi}_{k_2}(\xi_1+\xi_2+\xi_3)$$ Meanwhile, $$\begin{aligned} II^1_{\text{rest}} = & \sum_{\substack{l_1+l_2 > M }}\sum_{\mu=100}^{\infty} 2^{-\mu(l_1+l_2)} \sum_{k_2 = k_1 + \mu} {\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{\tilde{{\varphi}}}_{k_2,l_1,l_2}(\xi_1 + \xi_2) {\widehat}{\psi}_{k_2}(\xi_3)\nonumber \\ \leq &\sum_{\mu=100}^{\infty} 2^{-\mu M} \underbrace{\sum_{k_2 = k_1 + \mu} \sum_{\substack{l_1 +l_2 > M }}{\widehat}{\phi}_{k_1}(\xi_1) {\widehat}{\phi}_{k_1}(\xi_2){\widehat}{\tilde{{\varphi}}}_{k_2,l_1,l_2}(\xi_1 + \xi_2) {\widehat}{\psi}_{k_2}(\xi_3)}_{II^{1}_{\text{rest},\mu}},\nonumber \\\end{aligned}$$ where $m^1_{\mu} := II^{1}_{\text{rest},\mu}$ is a Coifman-Meyer symbol satisfying $$\left|\partial^{\alpha_1} m^1_{\mu}\right| \lesssim 2^{\mu |\alpha_1|}\frac{1}{|(\xi_1,\xi_2)|^{|\alpha_1|}}$$ for sufficiently many multi-indices $\alpha_1$. The same procedure can be applied to study $a_2(\eta_1,\eta_2)b_2(\eta_1,\eta_2,\eta_3)$. One can now combine all the arguments above to decompose and study $$T_{ab} = T_{ab}^{I^1I^2} + T_{ab}^{I^1 II^2} + T_{ab}^{I^1 III^2} + T_{ab}^{II^1 I^2} + T_{ab}^{II^1 II^2} + T_{ab}^{II^1 III^2} + T_{ab}^{III^1 I^2} + T_{ab}^{III^1 II^2} + T_{ab}^{III^1 III^2}$$ where each operator takes the form $$\displaystyle \int_{\mathbb{R}^6} \text{symbol} \cdot {\widehat}{f_1}(\xi_1) {\widehat}{f_2}(\xi_2) {\widehat}{g_1}(\eta_1) {\widehat}{g_2}(\eta_2) {\widehat}{h}(\xi_3,\eta_3)e^{2\pi i x(\xi_1+\xi_2+\xi_3)} e^{2\pi i y(\eta_1+\eta_2+\eta_3)}d\xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3$$ with the symbol for each operator specified as follows. 1.
$T_{ab}^{I^1I^2}$ is a bi-parameter paraproduct as in the special case. 2. $T_{ab}^{II^1 I^2}$: $(II^{1}_0 + II^{1}_{1} + II^{1}_{\text{rest}}) \otimes I^2$ where the operator associated with each symbol can be written as (i) $$T^{II_0^1 I^2} := \sum_{\substack{k_1 \ll k_2 \\ j \in \mathbb{Z}}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * {\varphi}_{k_2} \bigg) ( g_1 * \tilde{\phi}_{j}) (g_2 * \tilde{\phi}_{j}) (h * \psi_{k_2}\otimes \phi_{j}) * \psi_{k_2}\otimes \phi_{j}$$ (ii) $$T^{II^1_1 I^2} := \sum_{\substack{0 < l_1+l_2 \leq M }}\sum_{\mu= 100}^{\infty} 2^{-\mu(l_1+l_2)} T^{II^1_{1,\mu}I^2}$$ with $$T^{II^1_{1,\mu}I^2}:= \sum_{\substack{k_2 = k_1 + \mu \\ j \in \mathbb{Z}}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \tilde{{\varphi}}_{k_2,l_1,l_2} \bigg)( g_1 * \tilde{\phi}_{j}) (g_2 * \tilde{\phi}_{j}) (h * \psi_{k_2}\otimes \phi_{j}) * \psi_{k_2}\otimes \phi_{j}$$ (iii) $$T^{II^1_{\text{rest}}I^2} := \sum_{\mu= 100}^{\infty}2^{-\mu M} T^{II^1_{\text{rest},\mu}I^2}$$ One notices that $II^1_{\text{rest},\mu}$ and $I^2$ are Coifman-Meyer symbols. $T^{II^1_{\text{rest},\mu}I^2}$ is therefore a bi-parameter paraproduct and one can apply the Coifman-Meyer theorem on paraproducts to derive a bound of type $O(2^{|\alpha_1|\mu})$, which would suffice due to the decay factor $2^{-\mu M}$. 3.
$T^{II^1 II^2}$: $(II^1_0 + II^1_1 + II^1_{\text{rest}}) \otimes (II^2_0 + II^2_1 + II^2_{\text{rest}})$ where the operator associated with each symbol can be written as (i) $$T^{II_0^1 II_0^2} := \sum_{\substack{k_1 \ll k_2 \\ j_1 \ll j_2}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * {\varphi}_{k_2} \bigg) \bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * {\varphi}_{j_2} \bigg) (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2}$$ (ii) $$T^{II^1_1 II_0^2} := \sum_{\substack{0 < l_1+l_2 \leq M }}\sum_{\mu= 100}^{\infty} 2^{-\mu(l_1+l_2)} T^{II^1_{1,\mu}II^2_0}$$ with $$T^{II^1_{1,\mu}II^2_0}:= \sum_{\substack{k_2 = k_1 + \mu \\ j_1 \ll j_2}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \tilde{{\varphi}}_{k_2,l_1,l_2} \bigg)\bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * {\varphi}_{j_2} \bigg) (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2}$$ (iii) $$T^{II^1_{\text{rest}}II^2_0}:= \sum_{\mu= 100}^{\infty}2^{-\mu M} T^{II^1_{\text{rest},\mu}II^2_0}$$ where $T^{II^1_{\text{rest},\mu}II^2_0}$ is a multiplier operator with the symbol $$m^1_{\mu}\otimes II^2_0$$ which generates a model similar to $T^{I^1 II^2_0}$ or, by symmetry, $T^{II^1_0 I^2}$.
(iv) $$T^{II^1_1 II^2_1}:= \sum_{\substack{0 < l_1+l_2 \leq M \\ 0 < l_1' + l_2' \leq M'}}\sum_{\mu,\mu'= 100}^{\infty} 2^{-\mu(l_1+l_2)} 2^{-\mu'(l_1'+l_2')}T^{II^1_{1,\mu}II^2_{1,\mu'}}$$ with $$T^{II^1_{1,\mu}II^2_{1,\mu'}}:= \sum_{\substack{k_2 = k_1 + \mu \\ j_2 = j_1 + \mu'}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \tilde{{\varphi}}_{k_2,l_1,l_2} \bigg)\bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * \tilde{{\varphi}}_{j_2,l_1',l_2'} \bigg) (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2}$$ (v) $$T^{II^1_{\text{rest}} II^2_1} := \sum_{\mu= 100}^{\infty}2^{-\mu M} T^{II^1_{\text{rest},\mu}II^2_1}$$ where $T^{II^1_{\text{rest},\mu}II^2_1}$ has the symbol $$m^1_{\mu}\otimes II^2_1$$ which generates a model similar to $T^{I^1 II^2_1}$ or $T^{II^1_1 I^2}$. (vi) $$T^{II^1_{\text{rest}}II^2_{\text{rest}}} := \sum_{\mu,\mu'= 100}^{\infty}2^{-\mu M}2^{-\mu'M'} T^{II^1_{\text{rest},\mu}II^2_{\text{rest},\mu'}}$$ where $ T^{II^1_{\text{rest},\mu}II^2_{\text{rest},\mu'}}$ is associated with the symbol $$m^1_{\mu}\otimes m^2_{\mu'}$$ which generates a model similar to $T^{II^1_{\text{rest},\mu}I^2}$, $T^{I^1 II^2_0}$ or $T^{II^1_0 I^2}$. 4. $T^{III^1 II^2}$, $T^{III^1 I^2}$ and $T^{III^1 III^2}$ can be studied by exactly the same reasoning as $T^{II^1II^2}$, $T^{II^1 I^2}$ and $T^{II^1 II^2}$, by the symmetry between the symbols $II$ and $III$.
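Throughout the case analysis above, the rest terms are harmless because the decay factors dominate the growth of the operator bounds. The following sketch records the two geometric-series estimates that are used implicitly (the second bound uses the crude count of at most $M^2$ pairs $(l_1,l_2)$):

```latex
\sum_{\mu = 100}^{\infty} 2^{-\mu M}\, O\big(2^{|\alpha_1|\mu}\big)
  \lesssim \sum_{\mu = 100}^{\infty} 2^{-\mu (M - |\alpha_1|)} < \infty
  \quad \text{since } M \gg |\alpha_1|,
\qquad
\sum_{0 < l_1 + l_2 \leq M}\ \sum_{\mu = 100}^{\infty} 2^{-\mu(l_1+l_2)}
  \lesssim M^2 \sum_{\mu = 100}^{\infty} 2^{-\mu} < \infty.
```

This is why it suffices to prove bounds for the individual model operators $T^{II^1_{1,\mu}I^2}$, $T^{II^1_{1,\mu}II^2_{1,\mu'}}$, etc., that grow at most polynomially (in fact, sub-geometrically) in $\mu$ and $\mu'$.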
Discretization
--------------

With the discretization procedure specified in Chapter 2.2 of [@cw], one can reduce the above operators to the following discrete model operators listed in Theorem (\[thm\_weak\]):

---------------------------------- -------------------- ------------------------------------------------------- $T^{II^1_0 I^2}$ $ \longrightarrow$ $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$ $T^{II^1_{1,\mu} I^2}$ $ \longrightarrow$ $ \Pi_{\text{flag}^{\mu} \otimes \text{paraproduct}}$ $T^{II^1_0 II^2_0}$ $ \longrightarrow$ $ \Pi_{\text{flag}^0 \otimes \text{flag}^0} $ $T^{II^1_0 II^2_{1,\mu'}}$ $ \longrightarrow$ $\Pi_{\text{flag}^0 \otimes \text{flag}^{\mu'}} $ $T^{II^1_{1,\mu} II^2_{1,\mu'}}$ $ \longrightarrow$ $\Pi_{\text{flag}^{\mu} \otimes \text{flag}^{\mu'}} $ ---------------------------------- -------------------- -------------------------------------------------------

Benea, C. and Muscalu, C. *Quasi-Banach valued inequalities via the helicoidal method*, J. Funct. Anal. 273, no. 4, 1295-1353, \[2017\].
Benea, C. and Muscalu, C. *Mixed norm estimates via the helicoidal method*, Preprint \[2020\].
Bennett, J., Bez, N., Buschenhenke, S. and Flock, T. C. *The nonlinear Brascamp-Lieb inequality for simple data*, Preprint, arXiv:1801.05214.
Bennett, J., Bez, N., Cowling, M. G. and Flock, T. C. *Behaviour of the Brascamp-Lieb constant*, Bull. Lond. Math. Soc. 49, no. 3, 512-518, \[2017\].
Bennett, J., Carbery, A., Christ, M. and Tao, T. *The Brascamp-Lieb inequalities: finiteness, structure and extremals*, Geom. Funct. Anal. 17, 1343-1415, \[2007\].
Brascamp, H. J. and Lieb, E. H. *Best constants in Young’s inequality, its converse, and its generalization to more than three functions*, Adv. Math. 20, 151-173, \[1976\].
Carbery, A., Hänninen, T. S. and Valdimarsson, S. *Multilinear duality and factorisation for Brascamp-Lieb-type inequalities with applications*, arXiv:1809.02449.
Chang, S.-Y. A. and Fefferman, R.
*Some recent developments in Fourier analysis and $H^p$ theory on product domains*, Bull. Amer. Math. Soc., vol. 12, 1-43, \[1985\].
Coifman, R. R. and Meyer, Y. *Operateurs multilineaires*, Hermann, Paris, \[1991\].
Durcik, P. and Thiele, C. *Singular Brascamp-Lieb inequalities with cubical structure*, arXiv:1809.08688.
Durcik, P. and Thiele, C. *Singular Brascamp-Lieb: a survey*, arXiv:1904.08844.
Fefferman, C. and Stein, E. *Some maximal inequalities*, Amer. J. Math., vol. 93, 107-115, \[1971\].
Germain, P., Masmoudi, N. and Shatah, J. *Global Solutions for the Gravity Water Waves Equation in Dimension 3*, \[2009\].
Kato, T. and Ponce, G. *Commutator estimates and the Euler and Navier-Stokes equations*, Comm. Pure Appl. Math., 41, 891-907, \[1988\].
Kenig, C. *On the local and global well-posedness theory for the KP-I equation*, Ann. Inst. H. Poincaré Anal. Non Linéaire 21, 827-838, \[2004\].
Lacey, M. and Thiele, C. *On Calderón’s conjecture*, Ann. of Math. (2), 149(2), 475-496, \[1999\].
Lu, G., Pipher, J. and Zhang, L. *Bi-parameter trilinear Fourier multipliers and pseudo-differential operators with flag symbols*, arXiv:1901.00036.
Miyachi, A. and Tomita, N. *Estimates for trilinear flag paraproducts on $L^{\infty}$ and Hardy spaces*, Math. Z. 282, 577-613, \[2016\].
Muscalu, C. *Paraproducts with flag singularities I. A case study*, Rev. Mat. Iberoamericana, vol. 23, 705-742, \[2007\].
Muscalu, C., Pipher, J., Tao, T. and Thiele, C. *Bi-parameter paraproducts*, Acta Math., vol. 193, 269-296, \[2004\].
Muscalu, C., Pipher, J., Tao, T. and Thiele, C. *Multi-parameter paraproducts*, Rev. Mat. Iberoamericana, pp. 963-976, \[2006\].
Muscalu, C. and Schlag, W. *Classical and Multilinear Harmonic Analysis*, \[2013\].
Muscalu, C., Tao, T. and Thiele, C. *Multi-linear operators given by singular multipliers*, J. Amer. Math. Soc. 15, 469-496, \[2002\].
Muscalu, C., Tao, T. and Thiele, C. *$L^p$ estimates for the biest I: The Walsh case*, Math. Ann.
329, 401-426, \[2004\].
Muscalu, C., Tao, T. and Thiele, C. *$L^p$ estimates for the biest II: The Fourier case*, Math. Ann., 329, 427-461, \[2004\].

[^1]: Basic inequalities refer to the inequalities obtained for Dirac kernels.

[^2]: Many cases of arbitrary complexity follow from the mixed-norm estimates for vector-valued inequalities in the paper by Benea and the first author [@bm2].

[^3]: Its boundedness is at present an open question, raised by the first author of the article on several occasions.

[^4]: The multilinear form $\Lambda$ associated to an $n$-linear operator $T(f_1, \ldots, f_n)$ is defined as $\Lambda(f_1, \ldots, f_n, f_{n+1}) := \langle T(f_1,\ldots, f_n), f_{n+1}\rangle$.
/*
 * Copyright 2020 Expedia, Inc
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.expediagroup.graphql.plugin.gradle

import com.github.mustachejava.DefaultMustacheFactory
import com.github.mustachejava.MustacheFactory
import com.github.tomakehurst.wiremock.WireMockServer
import com.github.tomakehurst.wiremock.client.MappingBuilder
import com.github.tomakehurst.wiremock.client.WireMock
import com.github.tomakehurst.wiremock.core.WireMockConfiguration
import com.github.tomakehurst.wiremock.matching.ContainsPattern
import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.BeforeEach
import java.io.BufferedReader
import java.io.File
import java.io.StringWriter

abstract class GraphQLGradlePluginAbstractIT {

    // unsure if there is a better way - correct values are set from Gradle build
    // when running directly from IDE you will need to manually update those to correct values
    private val gqlKotlinVersion = System.getProperty("graphQLKotlinVersion") ?: "4.0.0-SNAPSHOT"
    private val kotlinVersion = System.getProperty("kotlinVersion") ?: "1.3.72"
    private val junitVersion = System.getProperty("junitVersion") ?: "5.6.2"

    val testSchema = loadResource("mocks/schema.graphql")
    val introspectionResult = loadResource("mocks/IntrospectionResult.json")
    val testQuery = loadResource("mocks/JUnitQuery.graphql")
    val testResponse = loadResource("mocks/JUnitQueryResponse.json")

    @BeforeEach
    fun setUp() {
        WireMock.reset()
        WireMock.stubFor(stubSdlEndpoint())
        WireMock.stubFor(stubIntrospectionResult())
        WireMock.stubFor(stubGraphQLResponse())
    }

    fun stubSdlEndpoint(delay: Int? = null): MappingBuilder = WireMock.get("/sdl")
        .withResponse(content = testSchema, contentType = "text/plain", delay = delay)

    fun stubIntrospectionResult(delay: Int? = null): MappingBuilder = WireMock.post("/graphql")
        .withRequestBody(ContainsPattern("IntrospectionQuery"))
        .withResponse(content = introspectionResult, delay = delay)

    fun stubGraphQLResponse(delay: Int? = null): MappingBuilder = WireMock.post("/graphql")
        .withRequestBody(ContainsPattern("JUnitQuery"))
        .withResponse(content = testResponse, delay = delay)

    private fun MappingBuilder.withResponse(content: String, contentType: String = "application/json", delay: Int? = null) = this.willReturn(
        WireMock.aResponse()
            .withStatus(200)
            .withHeader("Content-Type", contentType)
            .withBody(content)
            .withFixedDelay(delay ?: 0)
    )

    fun loadResource(resourceName: String) = ClassLoader.getSystemClassLoader().getResourceAsStream(resourceName)?.use {
        BufferedReader(it.reader()).readText()
    } ?: throw RuntimeException("unable to load $resourceName")

    fun loadTemplate(templateName: String, configuration: Map<String, Any> = emptyMap()): String {
        val testApplicationMustache = mustacheFactory.compile("templates/$templateName.mustache")
        return testApplicationMustache.execute(StringWriter(), configuration).toString()
    }

    internal fun File.generateBuildFile(contents: String) {
        val buildFileContents = """
            import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
            import com.expediagroup.graphql.plugin.config.TimeoutConfig
            import com.expediagroup.graphql.plugin.generator.GraphQLClientType
            import com.expediagroup.graphql.plugin.generator.ScalarConverterMapping
            import com.expediagroup.graphql.plugin.gradle.graphql
            import com.expediagroup.graphql.plugin.gradle.tasks.GraphQLDownloadSDLTask
            import com.expediagroup.graphql.plugin.gradle.tasks.GraphQLGenerateClientTask
            import com.expediagroup.graphql.plugin.gradle.tasks.GraphQLIntrospectSchemaTask

            plugins {
                id("org.jetbrains.kotlin.jvm") version "$kotlinVersion"
                id("com.expediagroup.graphql")
                application
            }

            repositories {
                mavenLocal()
                mavenCentral()
            }

            tasks.withType<KotlinCompile> {
                kotlinOptions {
                    jvmTarget = "1.8"
                }
            }

            dependencies {
                implementation("org.jetbrains.kotlin:kotlin-stdlib:$kotlinVersion")
                implementation("com.expediagroup:graphql-kotlin-ktor-client:$gqlKotlinVersion")
                implementation("com.expediagroup:graphql-kotlin-spring-client:$gqlKotlinVersion")
                testImplementation("org.junit.jupiter:junit-jupiter-api:$junitVersion")
                testImplementation("org.junit.jupiter:junit-jupiter-engine:$junitVersion")
            }

            $contents
        """.trimIndent()
        val buildFile = File(this, "build.gradle.kts")
        buildFile.writeText(buildFileContents)
    }

    internal fun File.createTestFile(fileName: String, subDirectory: String? = null): File {
        val targetDirectory = if (subDirectory != null) {
            File(this, subDirectory)
        } else {
            this
        }
        targetDirectory.mkdirs()
        return File(targetDirectory, fileName)
    }

    companion object {
        internal val wireMockServer: WireMockServer = WireMockServer(WireMockConfiguration.wireMockConfig().dynamicPort())
        internal val mustacheFactory: MustacheFactory = DefaultMustacheFactory()

        @BeforeAll
        @JvmStatic
        fun oneTimeSetup() {
            wireMockServer.start()
            WireMock.configureFor(wireMockServer.port())
        }

        @AfterAll
        @JvmStatic
        fun oneTimeTearDown() {
            wireMockServer.stop()
        }
    }
}
"This changes everything." "[Fishlegs] Well, by my calculations, Hiccup, for the Dragon Blade to ignite in those kind of wind conditions, it would require" "An additional half jar of Monstrous Nightmare gel." "Precisely!" "Are you thinking what I'm thinking, Fishlegs?" "[both] Build a new handle to hold twice as much gel." "Of course." "Now, what gauge cylinder will we use?" "My brain says 10, but my heart says 13..." "That's a language we'll never understand." "[growls in approval] [chuckles] Yeah." "You know, we too have a language that you will never understand." "This is news?" "No, seriously." "We created our own secret twin language." "Yeah, just in case we ever got captured and needed to communicate in code." "Okay, I know I'm gonna regret asking this, but what exactly is this secret language of yours?" "It's complex, so try and follow along with that pretty little head of yours." "Ello-hay, [snorts] Uffnut-Ray. [snorts]" "Ello-hay, [snorts] Uffnut-Tay. [snorts]" "We call it Boar Latin!" "[Ruffnut] Yeah." "Genius, right?" "[snorts]" "Oh, wait, uh..." "En-ius-jay, Ight-ray?" "[snorts]" "Can you believe it only took us 11 years to come up with that?" "I mean, 15 with the research and development." "What's with all the snorting?" "Uh, hello?" "It's called "Boar Latin." Uh, boar. [snorts]" "Yeah, heard of it?" "Can't have the Latin without the boar." "Then it'd just be Latin." "Duh." "Everyone speaks Latin these days." "[chuckles] Ummy-Day. [snorts]" "Erk-jay. [snorts]" "Hey, just for the record, I understand everything you guys are saying." "That's ingenious, Hiccup." "I wouldn't have thought of it if you hadn't suggested Changewing acid." "Uh, I come back from patrol for this?" "[imitating Fishlegs] "Oh, you're so smart, Hiccup."" "[imitating Hiccup] "Oh, no, actually you're the smartest, Fishlegs."" "[imitating Fishlegs] "Oh, you're so pretty."" "[imitating Hiccup] "Oh, actually you're so pretty." "We're both pretty." "Let's hug."" "Ah!" "Ook-Lay. 
[snorts]" "Uh, what was that?" "Boar Latin." "I'll explain later." "[Terrible Terror growls]" "Terror Mail." "We'll continue this discussion later, Fishlegs." "Huh?" "What is it?" "Urgent message from the Defenders of the Wing." "Mala needs help." "[whizzing]" "All right, gang." "Fan out and keep your eyes peeled." "We have no idea what we're flying into." "[woman 1] They're here!" "[man 1] Welcome!" "[crowd cheering]" "Look, there they are!" "[man 2] Welcome, Riders!" "[man 3] I see them now!" "[crowd continues cheering]" "Hiccup Haddock, thank the ancients, you received our message." "Mala." "Throk." "What happened?" "Is it Hunters?" "No." "Something much worse." "[laughs] This would appear to be an egg-mergency." "Or some might call it "emergency egg-may." [snorts]" "Has something happened to Tuffnut?" "Nope." "This is pretty much a daily thing." "Is that" "An Eruptodon egg." "Unlike other dragons, Eruptodons only produce a single egg in their lifetime." "Our tribe has been waiting generations for our Great Protector to have an heir." "And now, it has finally happened." "So, this should be a time for celebration, shouldn't it?" "[all sighing]" "If it were that simple." "An Eruptodon egg can only hatch under very special conditions." "The dragon is born of flame and its egg requires the life-giving lava of its ancestral nesting site, a cavern deep inside the Grand Volcano." "So, what's it doin' out here?" "[rumbling] [growls wearily]" "Easy girl." "It's all right." "The birth weakened our already aged Great Protector, so much so that she cannot fly to the sacred site." "We were able to spare the egg, but without proper nesting, it will not hatch." "Our only option is to transport the egg ourselves before the lava rises and floods the cavern." "Whoa!" "[screams] -[Mala screams] [laughs] [sniffs]" "The future of our entire civilization rests on this egg's survival." "If it fails to hatch, the Great Protector will not have an heir." 
"And if there is no Eruptodon to eat the lava from the volcano, the island and our tribe is doomed." "Mala, we will deliver that egg into the volcano." "[laughs] When you say "we will," you actually mean "you will," right?" "Okay, great." "Check you later." "By the looks of the lava, we have a small window, but it should be enough time to get in and get out." "Exactly." "You thinking what I'm thinking, Fishlegs?" "[retches] Here it comes." "Another Hicclegs lovefest." "I'll fly the egg down." "I'll fly the egg down." "[all gasp]" "Uh, Hiccup, I think you mean I should fly the egg down, because Gronckles are accustomed to lava." "Well, that's true, yeah, but a Night Fury has the distinct speed advantage, don't you think?" "Gentlemen." "Time is waning." "Hiccup and the Night Fury will fly the egg to the cavern." "Right." "Yes." "Okay." "What just happened?" "I have no idea, but Hicclegs just got very interesting." "Our armor is coated with a layer of heat-resistant Eruptodon saliva." "Ugh." "It should help protect against the effects of the volcano." "And, speaking of heat..." "[Toothless grunting]" "Gronckle Iron tail fin." "[clinking] [whistles]" "Uh, Fishlegs, look" "The sacred cavern is located on the south side of the volcano's interior." "May the spirits of our fallen warriors guide your wings, Hiccup Haddock." "[grunts]" "Okay, uh, I guess all I need now is the egg." "The egg is my responsibility." "I'm going with you." "My Queen, let me go instead." "No, Throk." "A Queen must always be willing to risk her life for her people." "[growls]" "[lava bubbling]" "[grunts] Come on." "Keep going, bud." "[Mala grunts] [growls]" "Down there." "I see it." "Toothless, wing right." "[Toothless grunting]" "[Hiccup] Whoa!" "What is happening, Hiccup?" "If that tail gives out, the three of us and the egg are done for." "[Hiccup grunting] -[Toothless growling]" "[Mala groaning]" "My queen." "Hiccup, what happened?" "Quickly." "Replace the Night Fury's tail." 
"You must go back." "I don't have another one." "I could try to make one out of" "No." "There's no time." "Oh, no." "I feared this would happen." "The egg has spent too much time outside the nesting site." "It requires the life-giving lava." "And if it isn't delivered soon, it will become hard as stone." "Then..." "Then, what?" "It will never hatch." "And it will die." "[crowd clamoring]" "There is no need for panic." "We must stay calm." "[grunting in pain]" "These herbs will help you regain your strength, Great Protector." "Looks like ol' Throk might be a little "cray-cray." [laughs, snorts]" "Oh!" "Wow!" "I had no idea" "you spoke our language." "Rother-bay. [snorts] [both laughing]" "What?" "[grunting in pain] [crowd continues clamoring]" "Fear not." "The egg will be delivered into the Grand Volcano as promised." "Hey, uh, Fishlegs, look, [stammers] about earlier" "Yeah, earlier." "Right." "Weird." "So weird." "[chuckles]" "Well, I thought we could put our heads together again and see if we can come up with a solution." "She really needs us." "Yes." "I agree." "Great." "Great." "Yeah, well, I've been giving it a lot of thought." "Me, too." "Perfect." "Then you must be thinking what I'm thinking, right?" "Scale down the cliff." "Submerge the egg in a lava bath." "[sighs in exasperation]" "Lava bath?" "Oh, come on." "We could never maintain its temperature." "Lava cools." "Scale down the cliff?" "Were you being serious about that?" "Well, what do you think I'm" "Guys, remember earlier when you both agreed Gronckles were good in lava conditions?" "Maybe Fishlegs and Meatlug should give it a try?" "You know, Eruptodons, Gronckles, both Boulder class." "Hey, we tried it your way." "Why not just" "Excellent idea." "We leave at once." "Uh..." "Come, Hiccup." "We don't know what we'll find and may need your help." "[growling]" "[growls excitedly]" "We are nearing the sacred nesting site." "Okay, girl." "Take us home." "Can't this Gronckle fly any faster?" 
"[grunts] -[exclaims]" "Fishlegs, why are you stopping?" "[lava explodes]" "Impressive." "Okay, girl, let's go." "[screeching]" "We must transport the egg to the end of these caverns before the lava floods in." "Quickly." "[screeching]" "Rother-bay [snorts] Notlout-say [snorts] e-way [snorts] elcome-way [snorts] ou-yay. [snorts]" "For the millionth time, you two," "I don't understand anything you've been saying for the last three hours!" "Shh." "It's okay, Boar Brother Snotlout." "Don't." "No need to hide your proud roots." "Please stop." "You're among your Boar Latin family now." "[screams]" "Or should we say, "amily-fay." [snorts]" "Okay, that lava is getting a little too close to the entrance." "Not to worry, Astrid Hofferson." "Queen Mala knows this volcano better than anyone on the island." "[grunts]" "They've been down there for a long time." "Yes." "The lava is rising quickly." "They should've returned by now." "That settles it." "We're going in." "[growls] [lava explodes]" "I agree." "But how?" "Those explosions are too dangerous and getting worse." "If we just had a way to get down safely without dragons." "Actually, we might have a "lan-pay." -[snorts]" "[screams] -[laughs]" "We'll make it." "Uh, hey, you guys, what are those?" "Whoa!" "Hmm." "There was a time when the tribal elders would climb down into these caverns and sacrifice themselves for the good of the tribe." "Right, right, right, right." "But, what are these figures?" "I have never seen those before." "Uh, the egg?" "We should keep moving." "[distant roaring]" "Um, what was that?" "Uh, I'm not sure." "[screeching]" "Please tell me those aren't bats." "Yeah, they're definitely not bats." "[screeching getting louder]" "[Fishlegs screams]" "Guard the egg!" "[Meatlug grunts]" "This must have been what the carving was trying to warn about." "But there's too many." "They're as relentless as Speed Stingers, so we should probably... [clangs]" "Not exactly what I had in mind." 
"I'll direct them away while you and Meatlug get Mala and the egg to safety." "Hiccup, these dragons eat fire." "Fishlegs, that is abundantly clear." "[Mala] No!" "Mala!" "[grunting]" "Stay away." "They outnumber her three to one." "Then we need to even the odds." "[screeching]" "Ah, now what?" "I don't know." "I thought you had the idea." "[Meatlug grunting] [screeching in panic] [growls]" "Hiccup Haddock." "No!" "[screeching]" "Fishlegs Ingerman." "We'll never make it to her in time." "[growling]" "Meatlug, roll!" "Meatlug, fly!" "[growls quizzically, groans] [screams] Stop!" "No!" "[growling]" "The Diving Bell was your big "lan-pay"?" "You flew all the way to Berk for a big hunk of metal to dangle over fire?" "Why not bring back a frying pan?" "How "umb-day" are they?" "[laughs, snorts]" "It's uncanny." "There's no trace of an accent." "To talk that eloquently, he must be at least a quarter boar." "Maybe two-fifths." "He is hairy in strange places." "Hey!" "Actually, I believe this could work." "If we were to invert it, and then coat it in Eruptodon saliva." "It won't last long but should be enough to reach the cavern, find them, and raise them to safety." "I'll get to work." "Oh, great, now what?" "Which direction?" "Left." "Right." "Oh, Gods." "What is going on with us?" "We're just not thinking." "We need to clear our heads." "Right, right." "Good idea." "Right." "Left?" "[gasps]" "Ugh!" "Oh, maybe we're cursed." "No." "There is a perfectly logical explanation for what's happening." "Of course, I can't think of it right now." "Mala, what direction do you think?" "Hiccup, where's Mala?" "Mala!" "She snuck off down those corridors." "Mala!" "Mala!" "[grunts annoyingly]" "Okay, Barf and Belch, take us in." "[Barf and Belch grunting]" "[lava bubbling]" "Grace of the ancients." "The Eruptodon saliva worked." "Yes." "But we need to move faster." "Guys, let's pick up the "ace-pay." [snorts]" "[laughs] They made it." "Great." "Time for the big swing." 
"[metal creaking]" "[Astrid screams] -[splashes]" "[Astrid grunting]" "[Toothless growling]" "[chomps]" "Toothless, what are you doing?" "[screams] Toothless!" "Pull up, you crazy Night Fury." "[gasps] Oh, that's not good." "[growls in pain]" "Okay, you're the "idea dragon." Now what?" "Mala!" "Mala!" "Mala!" "Mala!" "I must have offended the gods, Hiccup." "That's why we're being punished." "I should've never taken Odin's name in vain." "Never!" "Oh, come on, Fishlegs." "That has nothing to do with this." "There, you see, our fortunes are finally changing for the better." "Leave me." "This is my duty." "My people." "Mala, you don't know what they're capable of." "[screeching] [all scream]" "Unhand me." "I command you." "Sacrificing yourself won't do anyone any good." "Hiccup, look." "[screeching]" "Are they frustrated at not being able to crack the shell, or is it something else?" "Not sure, but it doesn't seem predatory." "[rumbling]" "Wait, Fishlegs, are you thinking what I'm thinking?" "You know, I think I actually am!" "There is no more time for indecision." "Mala..." "[screeching]" "Oh, for the love of Odi" "Seriously?" "Sorry." "I really gotta work on that." "[bubbling] [explosion]" "[groans] Th..." "Th..." "Throk." "So tired." "[groans]" "Don't close your eyes, Astrid." "We must stay awake for Hiccup Haddock and Queen Mala." "Hi..." "Hi..." "Hiccup." "[sizzling] [screeching]" "That is the Egg of the Great Protector." "I command you." "Return it." "[screeching] [grunting with effort] [screeching furiously] [rumbling]" "Mala, give us the egg." "Absolutely not." "We have a plan." "Do you?" "We do." "And I think you should hear us out on this one." "Those dragons won't let you pass, but we have a way to get the egg to the nesting place." "Trust us, Mala." "[rumbling] [screeching]" "Hiccup, what are you doing?" "They don't want to harm the egg." "They're not predators, okay?" "They're here to help Eruptodon eggs reach their sacred nesting place." "Yeah." 
"Those cave drawings weren't a warning from your ancestors." "They were historical records, instructions." "[screeching]" "Their attacks were to keep us humans from damaging the egg." "[rumbling loudly]" "We have to go." "Now." "[growling] [screeching]" "Ooh." "This is gonna be close." "Give us all you got!" "[Hiccup] Come on." "Yes!" "Hey!" "Down there!" "Look behind you!" "Hiccup, there in the lava!" "Let's go, Fishlegs." "Right there with you." "[grunts] [grunts]" "Come on, Throk." "[grunts] Go!" "That Gronckle clearly cannot hold us all." "My mission was to get my queen to safety." "By the ancients." "By the ancients." "[roars]" "[Mala] Yes!" "[Fishlegs] [laughs]" "[grunting affectionately]" "The egg is in good hands, Mala." "Exactly what I was gonna say." "[both laughing]" "It's nice to see things are back to normal." "Whatevs." "I sort of liked the new Hicclegs." "The other kind is "otally-tay" [snorts] "oring-bay." [snorts]" "A true master linguist." "[growls in approval] [chuckles] You said it "Ookfang-hay." [snorts]" "[screeches]" "[screeching excitedly]"
Q: Why doesn't `mypy` detect bad TypedDict usage inside a function

I have the following Python module:

    from typing import TypedDict

    class P(TypedDict):
        x: int

    def return_p() -> P:
        return {'x': 5}

    p = return_p()
    p['abc'] = 1

    def test():
        p = return_p()
        p['abc'] = 2

When I run mypy on it, it rightfully complains about the line `p['abc'] = 1`, but ignores the exact same issue on the line `p['abc'] = 2`. This happens on Windows 10, with Python 3.8 and mypy 0.781. The same behavior occurs with Python 3.7 (there I need to import `TypedDict` from `typing_extensions`). What's going on?

A: This is because `test()` is not typed: by default, mypy does not check the bodies of functions that have no type annotations. Adding type hints to its signature will make its body checkable:

    def test() -> None:
        p = return_p()
        p['abc'] = 2
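As an alternative to annotating every function, running mypy with its `--check-untyped-defs` flag should make it check the bodies of unannotated functions as well. Also note that none of this matters at runtime: a `TypedDict` class constructs a plain `dict`, so the extra key is accepted without error when the module actually runs. A minimal sketch of that runtime behaviour (the errors exist only at mypy check time):

```python
from typing import TypedDict  # use typing_extensions on Python < 3.8

class P(TypedDict):
    x: int

def return_p() -> P:
    return {'x': 5}

# At runtime a TypedDict is just a plain dict, so the assignment that
# mypy flags as an error executes without complaint.
p = return_p()
p['abc'] = 1

print(type(p) is dict)  # True: no special runtime type is involved
print(p)                # {'x': 5, 'abc': 1}
```

This is why the static check matters: the mistake is invisible at runtime until some consumer of `P` breaks.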
1. Introduction {#s0005} =============== The success of coral reefs in oligotrophic environments is owed to the symbiotic association of the habitat-forming scleractinian corals with photosymbionts from the genus *Symbiodinium* (zooxanthellae). These algal symbionts enable the coral host to access the pool of dissolved inorganic nitrogen and phosphorus in the water column in addition to the nutrient uptake by heterotrophic feeding ([@bb0040], [@bb0065], [@bb0210], [@bb0130], [@bb0275], [@bb0070], [@bb0115], [@bb0225]). Moreover, the zooxanthellae recycle ammonium excreted as metabolic waste product by the host, thereby efficiently retaining nitrogen within the holobiont ([@bb0210], [@bb0235], [@bb0295]). The nutrient limitation experienced by the zooxanthellae *in hospite* in oligotrophic conditions results in a skewed chemical balance of the cellular nitrogen and phosphorus content relative to the available carbon. As a result, photosynthetic carbon fixation can be uncoupled from cellular growth, facilitating the translocation of a large proportion of photosynthates to the coral host ([@bb0205], [@bb0215], [@bb0085], [@bb0075]). Reefs and the provision of their valuable ecosystem services are globally threatened by climate change and a range of anthropogenic pressures ([@bb0125], [@bb0190], [@bb0260], [@bb0150], [@bb0165], [@bb0010], [@bb0155], [@bb0055], [@bb0185]). In this context, it has become increasingly clear that the nutrient environment plays a defining role in determining coral reef resilience ([@bb0055], [@bb0080], [@bb0270], [@bb0025], [@bb0110], [@bb0015]). The ratio of dissolved inorganic nitrogen to phosphorus in the marine environment can be interpreted as an indicator of whether photosynthetic primary production is limited by the availability of nitrogen or phosphorus. 
In coral reef waters, N:P ratios were found in an approximate range from 4.3:1 to 7.2:1 ([@bb0265], [@bb0045], [@bb0105]) which is lower than the canonical Redfield ratio of 16:1, considered optimal to sustain phytoplankton growth ([@bb0240]). Consequently, many processes in coral reefs tend to be nitrogen limited ([@bb0110]). Natural nutrient levels in coral reef ecosystems are impacted by the rising anthropogenic nutrient input into the oceans, especially into coastal waters, via the atmospheric deposition of combustion products, agricultural activities, erosion and sewage discharge ([@bb0080], [@bb0025], [@bb0055]). Since a number of these sources of nutrient enrichment can be influenced at the local scale ([@bb0020], [@bb0175], [@bb0005]), the management of nutrification is a promising tool for coral reef protection which also holds potential to mitigate some of the negative effects of rising sea water temperatures on these ecosystems ([@bb0055]). It has been conceptualised that some direct negative effects of eutrophication on the *Symbiodinium* stress tolerance may be caused, paradoxically, by an associated deprivation of nutrients vital for the physiological functioning of the coral symbionts ([@bb0310], [@bb0055]). The resulting nutrient starvation can occur for example when the availability of one type of essential nutrient (e.g. phosphate) decreases relative to the cellular demand, resulting in imbalanced and unacclimated growth ([@bb0220]). High nitrate concentrations in combination with low phosphate availability have previously been shown to result in phosphate starvation of the algal symbiont and increased susceptibility of corals to heat- and light-stress-induced bleaching ([@bb0310]). In principle, this condition could not only result from an increased cellular demand due to nutrient (nitrogen) -- accelerated cell proliferation rates but also from a selective decrease of one specific nutrient type ([@bb0220]). 
Relevant shifts of the nutrient balance in natural reef environments were reported, for example, for the reefs of Discovery Bay in Jamaica where enrichment with groundwater-borne nitrate resulted in a dissolved inorganic nitrogen to phosphorus ratio of 72:1, coral decline and phase shifts to macroalgal dominance ([@bb0180]). However, the functioning of the coral-*Symbiodinium* association can be severely impaired not only by the imbalanced availability of nutrients, but also by a combined deprivation of both nitrogen and phosphorus ([@bb0250]). In this light, the expected nutrient impoverishment of oceanic waters that could result from global warming or the rapid uptake of dissolved inorganic nutrients by ephemeral phytoplankton blooms could possibly act in combination with increased heat stress levels to accelerate reef decline ([@bb0055], [@bb0245]). Due to the fast uptake of dissolved inorganic nutrients by benthic communities, it is often difficult to measure the level of nutrient exposure in coral reefs ([@bb0110]). Consequently, biomarkers are required that inform about the nature of the nutrient stress which corals and their symbionts experience under certain conditions ([@bb0035], [@bb0055]). Recently, we have demonstrated that bleaching and reduced growth of corals resulting from the deprivation of dissolved inorganic nitrogen and phosphorus is reflected by the ultrastructure of zooxanthellae ([@bb0250]). The undersupply with nutrients manifests in a larger symbiont cell size, increased accumulation of lipid bodies, higher numbers of starch granules and a striking fragmentation of their accumulation bodies. We have exploited the potential of these biomarkers to detect nutrient stress imposed on the coral-*Symbiodinium* association and explored the response of the algal ultrastructure to skewed dissolved inorganic nitrogen to phosphorus ratios. 2. Materials and methods {#s0010} ======================== 2.1.
Coral culture {#s0015} ------------------ We used *Symbiodinium* clade C1 associated with *Euphyllia paradivisa* as model to establish in long-term experiments the responses of the coral holobiont and zooxanthellae biomarkers to different nutrient environments. We exposed the corals to high nitrogen-low phosphorus (HN/LP) and low nitrogen--high phosphorus (LN/HP) conditions and compared them to corals experiencing nutrient replete (HN/HP) and low nutrient (LN/LP) conditions ([@bb0250]). We note that the attributes "high" and "low" are introduced to facilitate comparison of the nutrient conditions in the context of our experiment and do not necessarily represent all natural reef environments. Imbalanced nutrient conditions were established in individual aquarium systems within the experimental mesocosm of the Coral Reef Laboratory at the National Oceanography Centre Southampton ([@bb0050]): high nitrogen/low phosphorus (HN/LP = \~ 38 μM NO~3~^−^/\~0.18 μM PO~4~^−^; N:P ratio = 211:1) and low nitrogen/high phosphorus (LN/HP = \~ 0.06 μM NO~3~^−^/\~3.6 μM PO~4~^−^; N:P ratio = 1: 60). The ammonium levels found in our mesocosm are very low (\< 0.7% of total dissolved inorganic nitrogen) compared to the combined nitrite (\~ 10%) and nitrate concentrations (\~ 90%) ([@bb0310]). Therefore, the measured NO~3~^−^ concentrations (combined NO~2~^−^/NO~3~^−^) represent largely the total dissolved inorganic nitrogen pool that could be accessed by the zooxanthellae in the present experiment. All experimental systems were supplemented with iron and other trace elements by weekly dosage of commercially available solutions (Coral Colours, Red Sea) and partial water changes with freshly made artificial seawater using the Pro-Reef salt mixture (Tropic Marin). Both the holobiont and the zooxanthellae phenotypes were dominated by the response to the dissolved inorganic nutrient environment and largely unaffected by heterotrophic feeding by the host in our previous study ([@bb0250]). 
However, to avoid any potential influence of nutrients in particulate form, the corals were not provided with food in the present experiments. Colonies of *Euphyllia paradivisa* ([@bb0050]) were cultured under the two imbalanced N:P ratios for \> 6 months at a constant temperature of 25 °C and a 10/14 h light/dark cycle. Corals in the HN/LP treatment were first maintained at lower light intensity (∼ 80 μmol m^− 2^ s^− 1^) due to the mortality risk caused by prolonged exposure to this nutrient ratio at higher light levels ([@bb0310]). Light intensities were gradually ramped up to ∼ 150 μmol m^− 2^ s^− 1^ over 7 days and corals were kept under these conditions for 4 months prior to sampling. The corals from the LN/HP treatment experienced a photon flux of ∼ 150 μmol m^− 2^ s^− 1^ throughout the experiment. The results of the analyses were contrasted to those described in [@bb0250] where corals were cultured under comparable light and temperature conditions but at different nutrient levels (high nitrogen/high phosphorus (HN/HP = \~ 6.5 μM NO~3~^−^/\~0.3 μM PO~4~^−^) vs low nitrogen/low phosphorus (LN/LP = \~ 0.7 μM NO~3~^−^/\~0.006 μM PO~4~^−^)). 2.2. Measurements of dissolved inorganic nutrients {#s0020} -------------------------------------------------- Nitrate concentrations were measured by zinc reduction of nitrate to nitrite followed by a modified version of the Griess reaction as described in ([@bb0140]) using commercially available reagents (Red Sea Aquatics UK Ltd), according to the manufacturer\'s instructions. The resultant colour change was measured using a custom programmed colorimeter at 560 nm (DR900, HACH LANGE) calibrated with nitrate standard solution in the range 0 to 20 mg l^− 1^ NO~3~. Phosphate concentrations were measured using the PhosVer 3 (Ascorbic Acid) method (\#8048, HACH LANGE) using the same colorimeter (DR900, HACH LANGE) with the program specified by the manufacturer. 2.3.
Determination of polyp size {#s0025} -------------------------------- The size of the live polyp (i.e. the part of the corallite covered by tissue) was determined by the end of the treatments. First, the corals were removed from the water to ensure full retraction of the polyp tissue. After a drip-off period of \~ 2 min, the mean diameters of the individual polyps were measured by averaging the longest and the shortest diameter of oval corallites (Fig. S1). In the case of round corallites, two measurements were taken along two orthogonal lines through the centre. The mean extension of the live tissue cover of the outer parts of the corallites was determined by measuring and averaging its extension at 5 measuring points spaced out evenly around the corallite. The live polyp volume was calculated using these measurements assuming a cylindrical shape of the polyp. 2.4. Photosynthetic efficiency (Fv/Fm) {#s0030} -------------------------------------- A Diving PAM (Walz) was used to determine the Photosystem II (PSII) maximum quantum efficiency (Fv/Fm) as a measure of stress experienced by the zooxanthellae. Measurements were taken under dim light exposure after 12 h of dark acclimation ([@bb0300]). A reduction of Fv/Fm below 0.5 was considered to be an indicator of stress as these lower values can indicate PSII damage when measured after dark recovery ([@bb0120]). 2.5. Transmission electron microscopy {#s0035} ------------------------------------- ### 2.5.1. Sample preparation and imaging {#s0040} For each experimental treatment, three tentacles of *E. paradivisa* (one per colony) were sampled from fully expanded polyps 1 h after the start of the light period. Tentacles were removed from the centre of each polyp to ensure that they were maximally exposed to light. Specimens were fixed and imaged as described in ([@bb0250]). 
Briefly, tentacles were fixed (3% glutaraldehyde, 4% formaldehyde, 0.1 M PIPES buffer containing 14% sucrose at pH 7.2) and then cut to obtain only the central section of each tentacle, post-fixed using 1% osmium tetroxide, stained with 2% uranyl acetate and dehydrated with a graded ethanol series before being embedded in Spurr\'s resin. Semi-thin tentacle sections (\~ 240 nm) were cut and stained with 1% toluidine blue and 1% borax for light microscope observations. For each specimen, 3--5 thin sections (\< 100 nm thick) were obtained that were \> 20 μm apart from each other to eliminate the possibility of imaging the same algal cell twice. For each experimental treatment, at least nine sections originating from all three tentacles were produced. Sections were stained with lead citrate and imaged on a Hitachi H7000 transmission electron microscope. For each grid square (Cu200), the 3--4 largest zooxanthellae were imaged in order to analyse only cells that were cut close to their central plane, thus being representative of the maximal cell diameter. For each tentacle, a minimum of 30 zooxanthellae cells were imaged, using 3 or more sections. A total of 100 micrographs of individual zooxanthellae (× 6000 magnification) were acquired for each treatment. ### 2.5.2. Micrograph analysis {#s0045} All micrographs were analysed using Fiji ([@bb0255]). The size of individual zooxanthellae cells was deduced from the cell section area (*n* = 100). Furthermore, the area occupied by lipid bodies, starch granules and uric acid crystals was determined for each cell and presented as a percentage of the cell section area (*n* = 100). Accumulation body integrity was measured as the degree of fragmentation, determined by counting the number of fissures in the periphery ([@bb0250]). The accumulation body was only analysed when it was clearly visible in the section. For this parameter, a mean was derived for each processed tentacle per treatment (*n* = 3).
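The per-cell metrics described above (organelle section areas expressed as a percentage of the cell section area) amount to a simple normalisation. A minimal sketch of that calculation, with hypothetical area values in μm² (this is an illustration, not the authors' actual Fiji workflow):

```python
# Sketch (not the authors' Fiji macro): express each organelle's measured
# section area as a percentage of the zooxanthella cell section area, as
# done here for lipid bodies, starch granules and uric acid crystals.
def organelle_percentages(cell_area_um2, organelle_areas_um2):
    """Return {organelle: % of cell section area} for one imaged cell.

    cell_area_um2: total cell section area (hypothetical units, um^2).
    organelle_areas_um2: dict of summed organelle section areas per type.
    """
    if cell_area_um2 <= 0:
        raise ValueError("cell section area must be positive")
    return {name: 100.0 * area / cell_area_um2
            for name, area in organelle_areas_um2.items()}

# Example with made-up numbers for a single micrograph:
cell = organelle_percentages(
    100.0, {"lipid_bodies": 10.0, "starch_granules": 5.0, "uric_acid": 0.0})
print(cell)  # {'lipid_bodies': 10.0, 'starch_granules': 5.0, 'uric_acid': 0.0}
```

Repeating this over the 100 imaged cells per treatment yields the distributions summarised in the results.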
The zooxanthellae density was determined by measuring the area of the endoderm and counting the contained zooxanthellae, using semi-thin sections imaged under a light microscope at × 40 magnification (*n* = 3). While the relative differences between samples from the respective treatments are unaltered, the present method produces absolute numbers which are higher compared to published values ([@bb0250]). 2.6. Statistical analysis {#s0050} ------------------------- For the morphological parameters of zooxanthellae, statistical replication was achieved by analysing 100 distinct algal cells from three tentacles and from different areas within each tentacle (*n* = 100) ([@bb0250]; Table S1). Data from nutrient replete (HN/HP) and low nutrient (LN/LP) treatments ([@bb0250]) were analysed for comparison. A mean value of zooxanthellae density was obtained for each processed tentacle (*n* = 3) (Table S2). Data were tested for normality using the Shapiro-Wilk test and log transformed if found to be non-normally distributed. Statistically significant effects resulting from the difference in dissolved inorganic nutrient availability were determined by one-way analysis of variance (ANOVA) (Table S3), followed by Tukey\'s post hoc test for pairwise comparison (Table S4). Data that were not normally distributed after transformation were, therefore, determined by the non-parametric Kruskal-Wallis one way ANOVA on ranks. *P* \< 0.05 was considered to be significant in all instances. 3. Results {#s0055} ========== 3.1. Effects on the coral holobiont {#s0060} ----------------------------------- Corals exposed to the imbalanced, HN/LP conditions, displayed a smaller polyp size and a bleached appearance that closely resembled the phenotype observed in low nutrient water (LN/LP) ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 2](#f0010){ref-type="fig"}a). In contrast, the corals kept under LN/HP imbalanced nutrient levels showed a similar phenotype to the nutrient replete (HN/HP) treatment. 
The bleached appearance of the polyps from HN/LP conditions was associated with low numbers of zooxanthellae in the tentacle tissue ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 2](#f0010){ref-type="fig"}, [Table 1](#t0005){ref-type="table"}), similar to the low nutrient LN/LP treatment. In contrast, the symbiont numbers in the tissue of LN/HP exposed corals were comparable to those of corals from nutrient replete (HN/HP) conditions ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 2](#f0010){ref-type="fig"}).Fig. 1Effect of dissolved inorganic nutrient availability on polyp size, and on zooxanthellae density and ultrastructure. Panels on the left hand side show representative photographs of *Euphyllia paradivisa* polyps from each experimental treatment. Panels in the central column show light microscope images of tentacle endoderm cross sections (× 40 magnification). Panels on the right hand side show micrographs of individual zooxanthellae which represent a mean ultrastructure (*n* = 100) resulting from the respective treatments (× 6000 magnification). HN/HP = high nitrogen/high phosphorus, LN/LP = low nitrogen/low phosphorus, HN/LP = high nitrogen/low phosphorus, LN/HP = low nitrogen/high phosphorus. AB = accumulation body, ch = chloroplast, LB = lipid body, N = nucleus with condensed chromosomes, P = pyrenoid, S = starch granule, U = uric acid crystals.Fig. 1.Fig. 2Effect of dissolved inorganic nutrient availability on polyp size and on zooxanthellae density. (a) Coral polyp volume, (b) zooxanthellae density. HN/HP = high nitrogen/high phosphorus, LN/LP = low nitrogen/low phosphorus, HN/LP = high nitrogen/low phosphorus, LN/HP = low nitrogen/high phosphorus. Mean ± s.d. Statistically significant differences are indicated by the use of different letters (one-way ANOVA, Tukey\'s test, *P* \< 0.05).Fig. 
2.

Table 1. *Symbiodinium* biomarker patterns characteristic for different nutrient environments.

| Nutrient condition | Nutrient replete HN/HP | Low nutrients LN/LP | Imbalanced HN/LP | Imbalanced LN/HP |
|---|---|---|---|---|
| Zooxanthellae nutrient status | Nutrient replete growth | N/P co-limitation | P-starved | N-limited |
| Zooxanthellae density | Normal | Low | Low | Normal |
| Polyp size | Normal | Small | Small | Normal |
| Coral health | Normal | Bleached | Bleached | Normal |
| Zooxanthellae health (Fv/Fm) | Normal \> 0.5 | Normal \> 0.5 | Stressed \< 0.5 | Normal \> 0.5 |
| *Zooxanthellae ultrastructural biomarkers* | | | | |
| Cell size | Small | Increased | Increased | Small |
| Lipid body content | Low | Increased | Increased | Increased |
| Starch granule content | Low | Increased | Increased | Increased |
| Uric acid crystal content | n.d. | n.d. | Increased | n.d. |
| Accumulation body fragmentation | n.d. | Increased | n.d. | n.d. |

3.2. Effects on the *Symbiodinium* ultrastructure {#s0070}
-------------------------------------------------

The analysis of TEM micrographs revealed that the size of zooxanthellae from the imbalanced HN/LP treatment and the low nutrient condition was significantly increased compared to those from the nutrient replete and the imbalanced LN/HP treatments ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 3](#f0015){ref-type="fig"}, [Table 1](#t0005){ref-type="table"}). The low nutrient (LN/LP) condition and both types of nutrient imbalance increased the content of lipid bodies and starch granules in the symbiont cells ([Fig. 3b,c](#f0015){ref-type="fig"}) in comparison to corals from the HN/HP treatment. A biochemical assay using the lipophilic dye Nile Red (see supplementary material for method) confirmed that the increased cellular content of lipid bodies is due to an accumulation of neutral lipids (Fig. S2A). The lipid content remained stable over the day (Fig. S2B). Only the imbalanced HN/LP condition resulted in a marked increase in the content of uric acid crystals ([Fig. 3d](#f0015){ref-type="fig"}, [Table 1](#t0005){ref-type="table"}).
Interestingly, none of the imbalanced nutrient treatments caused the fragmentation of the accumulation body characteristic of the low nutrient condition ([Fig. 3e](#f0015){ref-type="fig"}). 3.3. Effects on *Symbiodinium* photosynthetic efficiency (Fv/Fm) {#s0065} ---------------------------------------------------------------- Compared to nutrient replete conditions, zooxanthellae from specimens from the imbalanced HN/LP treatment showed a reduction in the maximum quantum efficiency (Fv/Fm) with values of 0.34 ± 0.05 after dark recovery being indicative of PSII damage or disturbance ([Fig. 3](#f0015){ref-type="fig"}f, [Table 1](#t0005){ref-type="table"}). In contrast, Fv/Fm values \> 0.5 were recorded for zooxanthellae of corals from the other treatments.Fig. 3Effect of dissolved inorganic nutrient availability on zooxanthellae ultrastructure and Fv/Fm. (a) Cell size (*n* = 100), (b) lipid body accumulation (*n* = 100), (c) starch granule accumulation (*n* = 100), (d) uric acid crystal accumulation (*n* = 100), (e) accumulation body fragmentation (*n* = 3), (f) Fv/Fm (*n* = 5). HN/HP = high nitrogen/high phosphorus, LN/LP = low nitrogen/low phosphorus, HN/LP = high nitrogen/low phosphorus, LN/HP = low nitrogen/high phosphorus. Box plots: the vertical line within each box represents the median. The box extends from the first to the third quartile and whiskers extend to the smallest and largest non-outliers. Outliers are not shown. Bar chart: mean ± s.d. Statistically significant differences are indicated by the use of different letters (one-way ANOVA, Tukey\'s test, *P* \< 0.05).Fig. 3. 4. Discussion {#s0075} ============= We used ultrastructural biomarkers of zooxanthellae to gain novel insights into the response of the coral -- *Symbiodinium* symbiosis to imbalanced nutrient environments and to analyse the role of nitrogen and phosphorus for the functioning of this association and potential implications for coral reef management. 
We used the reef coral *Euphyllia paradivisa* harbouring *Symbiodinium* sp. (clade C1) as a model system, exposed the corals to HN/LP and to LN/HP conditions and compared them to specimens from nutrient replete (HN/HP) and low nutrient (LN/LP) treatments ([@bb0250]). 4.1. Effect of high nitrate/low phosphate conditions {#s0080} ---------------------------------------------------- Recently, we demonstrated that corals exposed to HN/LP conditions were more susceptible to bleaching when exposed to heat stress and/or elevated light levels ([@bb0310]). The detrimental effects were linked to the relative undersupply with phosphorus that can result from the higher demand of the proliferating algal populations rather than to the high nitrogen levels. Phosphate starvation in *Symbiodinium* sp. resulted in a drop of photosynthetic efficiency associated with changes in the ratio of phospho- and sulfo-lipids ([@bb0310]). In other photosynthetic organisms, similar responses to phosphate stress could be attributed to critical changes in the properties of photosynthetic membranes ([@bb0095]). Hence, our findings provided a potential mechanistic link between nutrient stress, the malfunctioning of the photosynthetic machinery and the observed bleaching response. With their low zooxanthellae numbers, bleached appearance and small polyp size, the corals from the HN/LP treatment under elevated light levels resembled the low-nutrient phenotype (LN/LP) previously described ([@bb0250]). These two treatments also had similar effects on the ultrastructure of zooxanthellae, with cell size and the accumulation of carbon-rich storage bodies (lipid bodies and starch granules) being increased in comparison to zooxanthellae from nutrient replete conditions. Similar structural changes were found to be indicative of nutrient limitation in zooxanthellae and free-living microalgae ([@bb0145], [@bb0200], [@bb0160], [@bb0195], [@bb0305], [@bb0250]). 
Those characteristics have been interpreted as indicators of an uncoupling of carbon fixation from cellular growth. In this state, the nutrient-limited cells sustain a high photosynthetic production while their energy demand is reduced due to slower proliferation rates ([@bb0250], [@bb0160], [@bb0285], [@bb0280]). Since corals from the HN/LP conditions were supplied with excess nitrogen, the nutrient limitation phenotype of corals and symbionts can be clearly attributed to the undersupply with phosphate. Importantly, under both nutrient replete and low nutrient conditions, the photosynthetic efficiency measured as Fv/Fm was in the healthy range (\> 0.5). In contrast, Fv/Fm was strongly reduced in the imbalanced HN/LP treatment, indicative of failing photosynthesis due to phosphate starvation ([@bb0310], [@bb0055]). At the ultrastructural level, the phosphate starvation phenotype resulting from nitrogen enrichment in combination with low phosphate supply can be clearly distinguished from the low-nutrient phenotype by the pronounced accumulation of uric acid crystals. This finding is in line with previous studies that observed comparable deposits in zooxanthellae in response to nitrate enrichment, forming a transitory storage of assimilated nitrogen ([@bb0030], [@bb0170]). Finally, the phosphate-starved zooxanthellae lack the intriguing fragmentation pattern of the accumulation body, characteristic of strongly nutrient-limited zooxanthellae ([@bb0250]). 4.2. Effect of low nitrate/high phosphate conditions {#s0085} ---------------------------------------------------- Despite the relative undersupply of nitrogen in the low nitrate/high phosphate treatment, the polyp size and zooxanthellae density of these corals were comparable to those from the replete nutrient treatment. However, the ultrastructural biomarkers revealed signs of nutrient limitation such as elevated levels of lipid bodies and starch granules in symbiont cells from corals under LN/HP.
In light of previous findings, the effects of the low nitrogen supply could be interpreted as causing an uncoupling of carbon fixation and cellular growth that manifests in the increased accumulation of carbon-rich storage products. However, as indicated by the smaller cell size and the high number of symbiont cells within the coral tissue, comparable to those from corals experiencing high nutrient levels ([@bb0250], [@bb0160], [@bb0285], [@bb0280]), cell proliferation rates are still high enough to sustain these zooxanthellae densities. These results, together with the high Fv/Fm values of zooxanthellae from LN/HP corals, suggest that the N-limitation sustains a slower but chemically balanced growth while maintaining functional photosynthesis.

4.3. Differential effects of N and P undersupply and critical thresholds {#s0090}
------------------------------------------------------------------------

Our results suggest that symbiotic corals can tolerate an undersupply with nitrogen much better than an undersupply with phosphorus. These findings likely reflect an adaptation of the algal symbionts to the nutrient environment of coral reefs where processes are mostly nitrogen limited ([@bb0045], [@bb0105], [@bb0265], [@bb0110]). In agreement with this assumption, previous studies found a trend that nitrogen enrichment stimulates zooxanthellae growth and results in higher zooxanthellae densities, often without obvious negative effects on the corals ([@bb0080]). It cannot be ruled out, however, that nitrogen-fixation by coral-associated microbes in the presence of high phosphate concentration might potentially relieve some of the nitrogen-undersupply of the corals ([@bb0230]). The present study clearly shows that phosphate deficiency, alone or in combination with a low supply of nitrate, results in a severe disturbance of the symbiotic partnership as indicated by the loss of coral tissue and zooxanthellae.
Phosphate starvation of zooxanthellae induced by nitrogen enrichment and resulting high N:P ratios has previously been shown to disturb the photosynthetic capacity of zooxanthellae and increase the vulnerability of corals to light- and heat stress-mediated bleaching ([@bb0310]). The fact that normal photosynthetic efficiency is retained by zooxanthellae in corals from the LN/LP treatment suggests that an undersupply with phosphate has less severe consequences when the algae become limited by nitrogen. This can be explained by the reduced P-demand of the non-/slow-growing algal population ([@bb0055]). The concentrations of dissolved inorganic nutrients in our LN/LP treatment (\~ 0.7 μM / \~ 0.006 μM) suggest that at measured nitrate concentrations \< 0.7 μM the impact of a skewed N:P ratio becomes less pronounced. In our experiments, a phosphate concentration of \~ 0.3 μM at a N:P ratio of 22:1 yielded an overall healthy phenotype. Accordingly, it is likely that the absolute N:P ratio also becomes less critical for the proper functioning of the symbionts when phosphate concentrations exceed a vital supply threshold (\> 0.3 μM), even when the symbionts are rapidly proliferating. In contrast, a phosphate concentration of \~ 0.18 μM at a \~ 10-fold higher N:P ratio (211:1) yielded a bleached phenotype with the remaining symbionts showing signs of stress (Fv/Fm \< 0.4). Therefore, the P-threshold at which corals can become stressed in the presence of high N concentrations can be as high as 0.18 μM. Effects of P deficiency can be expected to become worse if supply from other sources, such as particulate food or internal reserves, is low.

4.4. Implications for environmental monitoring and coral reef management {#s0095}
------------------------------------------------------------------------

Our study suggests that phosphate can become critically limiting even at concentrations ≤ 0.18 μM if the N:P ratios well exceed 22:1.
This appears surprising since phosphate concentrations in this range are commonly considered ambient or high in natural reef environments. However, [@bb0180] reports phosphate concentrations of 0.1--0.18 μM at N:P ratios in the range of 33--72 to be associated with phosphate limitation of macroalgae in the declining reefs of Discovery Bay (Jamaica). These data suggest that the critical threshold values determined by our laboratory study can indeed be found in reef environments impacted by eutrophication. However, it is important to note that nutrient values measured in the water column of natural or experimental mesocosm settings represent a steady-state equilibrium that depends on their production and uptake by organisms. Since these fluxes vary spatially and temporally among reef regions, the measured nutrient concentrations have to be considered in the context of the respective environment. Consequently, there is an urgent need to refine these thresholds and quantify the absolute amounts of nutrients and the associated fluxes that are responsible for the observed biological effects. These values are required to provide reliable and effective target values for management purposes. Also of particular interest in the context of the present work is the role of phytoplankton blooms. Stimulated by nutrient enrichment in the first place, coastal blooms can limit primary production by depleting essential nutrients or shifting their ratio over time and space ([@bb0055]). Critically, the depletion of dissolved inorganic phosphorus has been reported in the aftermath of phytoplankton blooms that were initially set off by elevated nitrogen levels ([@bb0060], [@bb0100], [@bb0135]). Such a lack of phosphate may render benthic corals more susceptible to stress, bleaching and associated mortality ([@bb0310]).
Indeed, previous studies have observed a correlation between elevated nitrogen concentrations, increased phytoplankton densities and coral bleaching ([@bb0290], [@bb0315], [@bb0055]). Preventing the enrichment of coral reef waters with excess nitrogen should consequently be a management priority. However, it is important to note that other forms of nutrient enrichment can also have a plethora of direct and indirect negative effects on corals and their symbionts (reviewed by [@bb0055]). Therefore, the reduction of nutrient enrichment must generally be high on the agenda of coral reef management ([@bb0245]). The extended set of cumulative, ultrastructural biomarkers provided here ([Table 1](#t0005){ref-type="table"}) can be used to identify different forms of nutrient stress in *Euphyllia* sp. associated with *Symbiodinium* (C1). These biomarkers hold promise to indicate nutrient stress also in other symbiotic coral species and in various reef settings. Importantly, they have the potential to become part of the toolkit that is required for an in-depth understanding of the nutrient environment in coral reefs by bridging knowledge gaps left by traditional measurements of nutrient levels in the water column. Our findings highlight the key role of phosphorus in sustaining zooxanthellae numbers and coral biomass and for the proper functioning of symbiont photosynthesis, thereby contributing to the critical understanding of the importance of phosphorus for the functioning of symbiotic corals ([@bb0090]).

Author contributions {#s0100}
====================

JW and CD provided the research question and the experimental set-up. SR, JW and CD designed the experiments. SR conducted experiments and analysed data. SR, JW and CD interpreted data. AR contributed to the maintenance of the experimental set-up.

Appendix A. Supplementary data {#s0105}
==============================

Supplementary material (Image 1)

We acknowledge the Biomedical Imaging Unit, University of Southampton (A.
Page) for access to the TEM and T. Stead (School of Biological Sciences, Royal Holloway) for discussing zooxanthellae micrographs. We thank Laura Muras (Kopernikus Gymnasium Wasseralfingen) for helping with the polyp size measurements and Luke Morris (University of Southampton) for conducting the Nile Red assays. The study was funded by NERC (NE/K00641X/1 to JW), the European Research Council under the European Union\'s Seventh Framework Programme (FP/2007--2013)/ERC Grant Agreement no. 311179 to JW, and a Vice Chancellor Award studentship to JW. We thank Tropical Marine Centre (London) and Tropic Marin (Wartenberg) for sponsoring the *Coral Reef Laboratory* at the University of Southampton. Supplementary data to this article can be found online at <http://dx.doi.org/10.1016/j.marpolbul.2017.02.044>.
A new Pew Charitable Trusts report found that policies like “dig once” requirements can encourage better collaboration with internet service providers to expand access. States are helping bring broadband internet to rural areas by enacting a range of policies that identify barriers to expansion and reduce bureaucracy that can hamper build-outs, a new Pew Charitable Trusts report has found. For instance, the report highlights California’s and West Virginia’s implementation of “dig once” policies, which can encourage collaboration with utility and transportation infrastructure projects and help overcome barriers to connectivity. Eleven states have adopted dig once policies, which can require the consultation or inclusion of broadband infrastructure during road construction, according to Broadband Now. These policies can reduce costs to communities by eliminating the need to dig up roads at a later time to expand broadband access. California adopted a dig once policy in 2016 that requires the state Department of Transportation to notify internet service providers of planned roadwork projects. “Its aim was to streamline the process for these providers to deploy infrastructure along state highways by identifying opportunities to bury fiber-optic cables in ground that has already been opened for roadwork,” the Pew report states. West Virginia adopted a similar policy in 2018 that allows the Division of Highways to lease access to rights of way to internet service providers. It also establishes a process to notify utilities, including broadband providers, of upcoming roadwork projects. In the Pew report, researchers reviewed policies and measures that nine states are taking to expand broadband access in rural areas. The report acknowledges that each state has different resources, may work with different service providers, and is likely at a different stage of expanding broadband access.
But, overall, it highlights five common “promising practices” that researchers found among the states. Those practices include engaging stakeholders, creating a policy framework, planning and capacity building, providing funding and operations support, and evaluating programs as they evolve. Other examples of ways that states can address barriers to connectivity through legislation include Tennessee’s decision to allow electric cooperatives to provide internet service. Further, Colorado now allows utility companies to use existing easements on private land for commercial broadband service so long as property owners are notified, the report states. In its review of state policies, the Pew report also identified the establishment of specific broadband speed definitions and goals as helpful in guiding expansion. “These measures, often set in statute, create a framework for broadband expansion efforts, providing clarity to providers and communities as they make decisions about investing in broadband infrastructure,” the report states. The report highlights Minnesota as an example of adopting legislation that establishes the state’s goal to connect all homes and businesses to download speeds of 100 megabits per second (Mbps) and upload speeds of 20 Mbps by 2026. The state legislature later created the Office of Broadband Development “to facilitate broadband expansion and help the state make progress toward these goals,” the report states. “While no silver bullet will ensure better broadband connectivity, officials at all levels of government can gain insights from these examples on how to bring this critical service to areas that remain unserved,” the report states. While the Federal Communications Commission has pledged $20 billion to subsidize the construction of high-speed broadband networks in rural America over the next 10 years, concern about the accuracy of the FCC’s broadband access maps has led states to step up their own efforts.
The Pew report emphasizes that while much of the broadband conversation revolves around federal expansion efforts, “states play a critical role in deploying broadband, and their efforts are making a significant difference in expanding access.”
Manville gun The Manville gun was a stockless, semi-automatic, revolver type gun, introduced in 1935 by Charles J. Manville. The Manville Gun was a large weapon, with a heavy cylinder being rotated for each shot by a clockwork-type spring. The spring was wound manually during the reloading. By 1938 Manville had introduced three different bore diameter versions of the gun, based on 12-gauge, 26.5-mm, or 37-mm shells. Due to poor sales, Manville guns ceased production in 1943. Manville 12-Bore Gun The original, 1935, steel-and-aluminum weapon held twenty-four rounds of 12-gauge × 2.75-inch (18.5×70mmR) shells in a spring-driven rotary-cylinder that had to be wound counter-clockwise before firing. It consisted of a steel barrel of , a rotating aluminum-alloy ammo cylinder, a single-piece steel body and foregrip, and wooden pistol grips. Loading and unloading were effected by unscrewing two thick, large-headed knobbed screws at the top of the weapon's cylinder that allowed the disassembly of the weapon into two halves. The forend and cylinder were the front half and the pistol-grip and cylinder backplate were the back half. The weapon's striker was engaged by rotating and then pushing in a knob at the back of the pistol grip (reversed to disengage it—rendering it safe). Each cylinder in the weapon had its own firing pin assembly. When the trigger is pulled the striker is cocked; when the trigger "breaks", the striker is released and hits the firing pin, firing the shell. 26.5mm Manville Machine-Projector In 1936, Manville introduced a version that held eighteen rounds of 26.5mm (1-inch) bore shells. This design fired 26.5mm × 3.15 inch Short (26.5mm × 80mmR) flare, smoke, and riot gas shells. Explosive shells were not available and the cylinder walls are too thin for shot-shells. The weapon is similar to the earlier 12-gauge version, except the barrel was either or , and used hard rubber rear grips instead of wood. 
The First Model 26.5 was a larger-bore version of the 12-gauge shotgun, using the same two securing screws. The Second Model 26.5 differed in that it used a long, thick metal locking bar with a turned-down bolt-handle, like the metal bolt on a bolt-action rifle, which locked into a recess machined into the frame. This slid through a round sleeve atop each half of the weapon to secure the two halves. When the bolt was unlatched and pulled to the rear, the back-plate was turned to the operator's right using the rear grip, allowing access to the cylinder. The operator could then pull out the spent shells and reload fresh ones. Barrel and cylinder inserts were available to allow it to fire 12-gauge shells or clusters of .38 Special cartridges. 37mm Manville Gas Gun In 1938, Manville introduced a twelve round gun with a 37mm (1.5-inch) bore. This version fired 37mm × 5.5 inch Long (37mm × 127mmR) flare, smoke, or tear gas shells and was designed for police and security use. It was meant to be used in an indirect fire mode and had its barrel mounted at the bottom of the cylinder rather than the top. Its greater weight prohibited its use by any but the strongest of men, since it was designed to be fired from a tripod or pintle mount. History The Indiana National Guard used 26.5mm Manville guns to break up mobs of strikers during the Terre Haute General Strike of 1935. They fired flare and tear gas shells at strikers until they dispersed. Police and military forces found the Manville guns to be large and heavy, resulting in limited sales. The Manville company ceased production of the weapons in 1943, after which Charles Manville destroyed all machinery, dies, diagrams and notes. Related guns Hawk Engineering MM-1 A gun with a similar design, the Hawk MM-1, was introduced in the 1970s. It was chambered in 37mmR and 40×46mmR and had a 12-round cylinder. 
The XM-18E1R The 1980 film The Dogs of War used a Second Model 26.5mm Manville Machine-Projector as the weapon of choice for the lead protagonist, Shannon (Christopher Walken). In the film, the weapon is called a "XM-18E1R projectile-launcher", deemed capable of firing munitions far beyond the actual Manville gun. Dialogue and literature in the film suggests that it fires fragmentation, grenade, tactical, anti-tank, anti-personnel and "flashette" (sic) shells. Rate of fire was touted as "18 rounds in five seconds". The actual "shells" used during the film were 12-gauge blanks set in cylinder adapters.
Surprise! New York Area Airports Still Suck November 4, 2010 In a groundbreaking new report [pdf], the U.S. Department of Transportation’s Inspector General has discovered that flight delays at John F. Kennedy, LaGuardia, and Newark Airports remain terrible and make New Yorkers — and, indirectly, people throughout the rest of the country (yes, we’re that important) — absolutely miserable. While the numbers have actually improved somewhat (the Associated Press says one-quarter of flights landing at metro area airports are either late or canceled, while the DOT report says that number was “over 40 percent” back in 2007), New York City airports still cause a major headache for travelers thanks to three main reasons: 1. There’s Too Many Damn Passengers! The report claims that between 1999 and 2008, the number of flights leaving New York has increased by 8 percent — “the equivalent of adding a mid-sized airport’s flight operations (e.g., Albuquerque or New Orleans) without building a single new runway,” it says. You know whose fault that is? JetBlue (whose New York departures skyrocketed from 5,071 to 64,881 in eight years) and Delta, who opened an international hub at JFK that boosted the airline’s departures by 46 percent. 2. There’s Too Many Damn Planes! Did you know the newest runway at any of the three New York area airports was built at Newark in the early 1970s? Turns out that with so much air traffic, they run out of places to squeeze those planes into when something pesky and unexpected (like bad weather, for instance) rears its ugly head. Sadly, the DOT doesn’t expect anyone to fix this soon — according to the report, the Port Authority conducted a study in 2007 that found it was “not feasible” to install a new runway at JFK. Such a project, the Port Authority said, would be too expensive and controversial, especially since it would require building within “a protected environmental area.” Oh, well. More sitting on the tarmac for you! 3.
Sky Space Is Finite, Too Not only are there too many planes for the runway, there are too many planes for the sky! The report calls New York’s airspace “the densest in the country” due to the proximity of the region’s three airports to each other. This means one airport’s problem can cause a domino effect for the others in order to clear airspace and keep the jets from crashing into each other in the sky or something. (Which would probably be worse than a delayed flight, if you think about it.) The report also blames the Federal Aviation Administration for a “flight caps” rule it imposed in 2008. Unfortunately, the FAA’s goal was to reduce delays’ severity, not to reduce the number of delays, so they just set the caps based on how many planes could zoom through New York on a clear, sunny, perfect-for-air-travel day. (Or, as the DOT puts it, the FAA’s limits “are too generous and are based on good weather conditions, resulting in a glut of flights when the weather turns ugly.”)
Q: What's wrong with my? I have a problem with a decorator. I'm trying to write my own decorator with an optional argument. This is how it's done now:

    def CheckPremissions(manager=1):
        def wrap(func):
            def wrapper(request, *args, **kwargs):
                if request.user.is_anonymous():
                    return HttpResponseRedirect(reverse('login'))
                logged_user = getRelatedWorker(request.user)
                if (logged_user == None):
                    return HttpResponseRedirect('accounts/no_worker_error.html')
                if self.manager != 0:
                    try:
                        dzial = Dzial.objects.get(kierownik=logged_user)
                    except Dzial.DoesNotExist:
                        isManager = False
                    else:
                        isManager = True
                    if not isManager:
                        return HttpResponseRedirect('accounts/denied_logged.html')
                return func(request, *args, **kwargs)
            return wrapper
        return wrap

The code looks good (to me), but when I use the decorator, I get the following error:

    Environment:

    Request Method: GET
    Request URL: http://127.0.0.1:8080/applications/show
    Django Version: 1.4.1
    Python Version: 2.7.3

    Traceback:
    File "/home/marcin/projekt/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
      188. response = middleware_method(request, response)
    File "/home/marcin/projekt/lib/python2.7/site-packages/django/middleware/common.py" in process_response
      94. if response.status_code == 404:

    Exception Type: AttributeError at /applications/show
    Exception Value: 'function' object has no attribute 'status_code'

What am I doing wrong?

A: I suspect you are applying the decorator incorrectly. You need to call it to specify the manager parameter:

    @CheckPremissions()
    def someview(request):
        pass

or to specify it explicitly:

    @CheckPremissions(manager=0)
    def someview(request):
        pass

You have a different problem in your decorator as well; you refer to self.manager in the code:

    if self.manager != 0:

but this is no instance, and there is no self parameter. I think you meant:

    if manager:

(where you can test for the variable to be non-zero by treating it as a boolean).
Oh, and you may want to fix the spelling of the decorator; you probably meant CheckPermissions instead. :-)
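As a framework-free sketch of the same pattern (the names `check_permissions`, `manager_view`, and the plain `user` dict are invented stand-ins, not the Django code from the question): a decorator that takes an optional argument is really a factory that returns the actual decorator, so it must always be applied with parentheses, and the argument is read from the enclosing scope rather than from any `self`.

```python
from functools import wraps

def check_permissions(manager=1):
    # Factory: called first (with optional arguments), returns the real decorator.
    def wrap(func):
        @wraps(func)  # preserve the wrapped function's name and docstring
        def wrapper(user, *args, **kwargs):
            # 'manager' comes from the enclosing scope -- no 'self' anywhere.
            if manager and not user.get("is_manager"):
                return "denied"
            return func(user, *args, **kwargs)
        return wrapper
    return wrap

@check_permissions()           # parentheses required, even with no arguments
def manager_view(user):
    return "manager ok"

@check_permissions(manager=0)  # or pass the argument explicitly
def open_view(user):
    return "open ok"
```

Writing `@check_permissions` without the parentheses would pass the view function itself as the `manager` argument, which is exactly the kind of mix-up that produces a bare function where Django later expects a response object.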
Q: Mount a RAM disk on boot I'm running different applications for development purposes: Apache, MySQL, Tomcat and a couple of other products. I would like to set up the logging verbosity of all applications to debug, but when doing so, the disk gets a lot of activity. Therefore I would like to create a RAM disk, for example 512 megabytes and get it mounted at boot time, so I can set the path of the log files to the RAM disk. I looked into /etc/fstab, but there is a notice that this file is deprecated. A: You can create and mount a RAM disk with the following Terminal (i.e. shell) command: diskutil erasevolume HFS+ "diskName" `hdiutil attach -nomount ram://2048` Where 2048 can be any number and represents the number of 512 byte blocks you want to allocate. So 1,000,000 will get you 512,000,000 bytes. (Of course, you have to leave out the commas.) So edit the command to your liking and put it in a shell script, then add that shell script to your login items. If you really need to run it at boot instead of when you log in, that is a lot more involved. Either you don't need to do it, because you're not starting Apache etc. before you log in, or you've already done it to get Apache to start before you log in, in which case you can just piggy back on whichever daemon you are starting first.
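The sector arithmetic above can be wrapped in a small script suitable for a login item (a sketch only; the script name, the `RAMDisk` volume name, and the 512 MB default are arbitrary choices). Since `ram://` counts 512-byte sectors, a size in megabytes converts as MB × 1024 × 1024 / 512 = MB × 2048:

```shell
#!/bin/sh
# make_ramdisk.sh -- print (not run) the command for a RAM disk of $1 MB.
# Review the printed command, then pipe it to sh to actually create the disk.
SIZE_MB=${1:-512}                # default to 512 MB
SECTORS=$((SIZE_MB * 2048))      # 512-byte sectors per megabyte
echo "diskutil erasevolume HFS+ \"RAMDisk\" \$(hdiutil attach -nomount ram://$SECTORS)"
```

Invoked as `./make_ramdisk.sh 512 | sh`, this would create the 512 MB disk described in the answer; printing first keeps the sketch safe to experiment with.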
What is an ORCID ID and why do researchers need one? For researchers, situations such as surname changes, identical names, or misspelled names can cause problems in keeping their publications together. Because of these situations, it can sometimes be difficult to find all of a researcher's articles. This problem also negatively affects analyses of an author or their affiliated institution, introducing errors into the data.

How can ORCID help you? A platform called ORCID (Open Researcher and Contributor ID) was created so that all of a researcher's articles can be gathered under a single account. The precise aim of ORCID is to keep all of a researcher's work under one account by using a single digital identifier. An ORCID ID records a researcher's information reliably, accurately, and permanently throughout their career, including publications, grants, education, employment, and other biographical details. ORCID is a community-based, non-profit initiative, and the identifier is a 16-digit numbered URI compatible with the ISO Standard (ISO 27729), known as the International Standard Name Identifier (ISNI).

Benefits of ORCID:

- It distinguishes you from other authors with the same or similar names.
- It ensures that your work is clearly attributed to you.
- It keeps your career and education information together, especially if you have taken or used different names during your career.
- It makes it easier for potential collaborators, funders, prospective employers, conference organizers, and publishers to find your research outputs.
- It keeps your information together throughout your academic career and represents you on academic platforms.
- It saves you time by adding efficiency to different workflows. For example, it reduces the need to repeatedly enter biographical or bibliographical data when filling out a grant application or an article submission form.
- It can link to a wide range of research outputs and activities, such as articles, data sets, performances, and even peer reviews.
- Because it is open source, it can be integrated with other academic services.

As a result of work carried out in cooperation between TÜBİTAK ULAKBİM and YÖK, it was decided to use ORCID information. Researchers are encouraged to include their ORCID ID in their articles. Click here to create an ORCID account. Source: ORCID
1. Technical Field This invention relates to construction material. In greater detail, this invention relates to construction material reclaimed from ash from municipal waste combustors (MWC) traditionally disposed of by landfill, which can be used safely as roadbase material. 2. Prior Art Of the waste arising from urban activity, refuse consisting chiefly of combustibles has been disposed of by incineration, the ash thereof deposited in landfills. In recent years, however, sites for landfill disposal of combustor ash have decreased, making disposal increasingly difficult. Thus, there have been plans to reuse useful components included in MWC ash, thereby reducing the volume of combustor ash disposed of in landfills, and to employ such ash as a resource. For example, Japanese Patent Application Publication Sho 56-124481 proposes processing MWC ash by recovering metal from such ash, followed by comminuting with a comminutor, adding sintering auxiliary agent for blending and granulating, and sintering and solidifying the resulting granulated substance in a sintering furnace. However, enormous energy is required to sinter and solidify granulated substances in a sintering furnace, and this is not only contrary to the tide of energy conservation, but it results in solidified substances not price competitive with natural aggregate. On the other hand, Japanese Patent Application Publication Hei 1-16555 proposes mixing an appropriate amount of pit soil with combustor ash, then mixing a suitable quantity of unslaked lime therein, to create reclaimed earth. However, in many cases there are minute quantities of lead, chromium and other hazardous heavy metals in the combustor ash, presenting dangers in use for landfill or housing lots when merely mixed with pit soil and unslaked lime. 
In addition, Japanese Patent Application Publication Hei 7-96263 proposes treating waste combustor ash by comminuting such ash, producing particle substances with particle size distribution of 10 mm or less in particle diameter, adjusting moisture content therein to 10 to 15% in weight, adding cement bonding agent at a ratio of 10 to 15% in weight relative to particle substances, followed by mulling and pressure molding in a mold to create concrete blocks. However, this method requires cement bonding agent and the pressure molding process. Moreover, despite stabilization of heavy metals contained in the combustor ash through such means as insolubility by the strong alkalinity of the cement, adsorption on hydrate surfaces of cement minerals, intra-hydrate replacement of atoms or radicals with metallic ions, physical sealing by cement gels, etc., it cannot be denied that after much time there is a risk of heavy metal leaching, concrete cracking, concrete deterioration and heavy metal discharge. Consequently, there has been demand for the development of materials for the economic and safe recycling of useful substances from MWC ash. As a result of research to solve the foregoing problem, it was discovered that construction material can be derived from sorting, separating and drying MWC ash, the particles having maximum particle diameter of 5-40 mm with wide particle distribution, and containing only small quantities of moisture and unburned substances, with heavy metal immobilization agent added, and being mulled, such that the California Bearing Ratio (revised CBR) is 20% or more, making it useful and safe for use in various kinds of roadbase materials.
In other words, this invention provides: (1) Construction material characterized in that it is obtained from sorting, separating and drying municipal waste combustor ash, the particles having a maximum particle diameter of 5-40 mm, a U-coefficient of 10 or more, and ignition loss of 10% or less in weight, with heavy metal immobilization agent added, and being mulled; and (2) The construction material set forth in paragraph (1), wherein the heavy metal immobilization agent is phosphoric acid and/or ferrous sulfate. None of the methods of the prior art provide the desired end product. In one aspect of the present invention there is provided construction material comprising granular material reclaimed from the ash of a municipal waste combustor wherein the ash is subjected to sorting and separating to recover metals. The granular material includes particles having a maximum particle diameter of 5 to 40 mm, a U-coefficient of 10 or more, and an ignition loss of 10% or less in weight and is subjected to at least one heavy metal immobilization agent. Other aspects include the heavy metal immobilization agent being phosphoric acid. Also, the ash is further subjected to a second heavy metal immobilization agent consisting of ferrous sulfate. In another aspect of the invention the ash is subjected to both of two heavy metal immobilization agents comprising phosphoric acid and ferrous sulfate. Additionally, the ash is further subjected to drying and also is further subjected to drying before subjecting the ash to at least one heavy metal immobilization agent. The ash is further subjected to mulling after subjecting the ash to at least one heavy metal immobilization agent. In another aspect of the invention steps are combined such that the ash is further subjected to drying before subjecting the ash to at least one heavy metal immobilization agent and further subjecting the ash to mulling after subjecting the ash to at least one heavy metal immobilization agent.
Finally, in another aspect of the invention the ash is sequentially subjected to (1) sorting according to particle size; (2) comminuting particles greater than 40 mm in size to a lesser size; (3) blending the particles of various sorted sizes to create a U-coefficient of 10 or greater; (4) drying; (5) at least one heavy metal immobilization agent; and (6) mulling. The ash is further subjected to a second heavy metal immobilization agent, wherein at least one heavy metal immobilization agent is phosphoric acid and another agent is ferrous sulfate. The granular material includes non-ferrous metals.
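The claims repeatedly require a U-coefficient (uniformity coefficient, conventionally Cu = D60/D10, the ratio of the particle diameters at 60% and 10% cumulative passing) of 10 or greater. Purely as an illustration of how one might check a sieve-analysis curve against that requirement, here is a minimal sketch; the distribution values below are invented for the example and are not taken from the patent:

```python
import bisect

def u_coefficient(diams_mm, cum_pct):
    """Uniformity coefficient Cu = D60 / D10.

    diams_mm: sieve sizes in mm, ascending.
    cum_pct:  cumulative percent passing at each sieve, ascending (0-100).
    Percentile diameters are linearly interpolated between sieve points.
    """
    def d_at(p):
        i = bisect.bisect_left(cum_pct, p)
        if i == 0:
            return diams_mm[0]       # below the curve: clamp to finest sieve
        if i == len(cum_pct):
            return diams_mm[-1]      # above the curve: clamp to coarsest sieve
        p0, p1 = cum_pct[i - 1], cum_pct[i]
        d0, d1 = diams_mm[i - 1], diams_mm[i]
        return d0 + (d1 - d0) * (p - p0) / (p1 - p0)

    return d_at(60.0) / d_at(10.0)

# Invented example curve: maximum particle diameter 40 mm, wide distribution.
diams = [0.1, 0.5, 2.0, 10.0, 40.0]
cum   = [5.0, 15.0, 40.0, 75.0, 100.0]
cu = u_coefficient(diams, cum)
print(f"Cu = {cu:.1f}, meets the >= 10 requirement: {cu >= 10}")
```

A wide, well-graded distribution like this one yields a large Cu, consistent with the specification's emphasis on "wide particle distribution" for good roadbase compaction.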
Q: Geodesics Examples Can someone provide me examples of connected Riemannian manifolds containing two points between which there are: (i) infinitely many geodesics (up to reparametrization), and (ii) no geodesics. Thank you A: Ok, so for (i) we can consider $S^{2}$ with a pair of antipodal points: every great circle through them is a geodesic joining them, so there are infinitely many. For (ii) we can consider $\mathbb{R}^{n}\setminus\lbrace 0\rbrace$ (with the Euclidean metric) and the points $x$ and $-x$: geodesics for the Euclidean metric are straight segments, and the only straight segment joining $x$ to $-x$ passes through the origin, which has been removed. Thanks anyway and have a nice weekend :)
Q: Cluster point, accumulation point once again. I'm taking two different math courses simultaneously. While in the first one we defined an accumulation point $c$ of a set $A$ as a point for which: $c$ is a cluster point if every punctured (reduced) open ball $B$ around $c$ satisfies $B \cap A \ne \emptyset$. The second one defined a cluster point (which I understand is the same as accumulation point) of a sequence $\{x_n\}^{\infty}_n$ as: $\forall \epsilon > 0$ and $\forall N$, $\exists n$ such that $x_n$ belongs to the open ball around $c$ of radius $\epsilon$. After that the second course used the first definition as the definition of a limit (or cluster) point of a set (I know that they're not the same, but that is what the book says). I'm truly at a loss. Can someone shed light on this matter? Are both definitions equivalent, or does it depend on the lecturer and the book? Thanks in advance. A: I'm assuming you are working with metric spaces, as opposed to general topological spaces, as your language suggests. One issue is that there is a notion of a cluster point for a set and a cluster point for a sequence. Note that your first definition is for what it means to say $c$ is a cluster point of a set $A$. The second definition is what it means to say $c$ is a cluster point of the sequence $\{x_n\}_{n \in \mathbb N}$. There is an error in your second definition. It should read: $c$ is a cluster point of a sequence $\{x_n\}_{n \in \mathbb N}$ if $\forall \epsilon > 0\ \forall N \in \mathbb{N}\ \exists n \geq N : x_n \in B_\epsilon(c)$, where $B_\epsilon(c) := \{y \in X \mid d(y,c)< \epsilon\}$ is the open ball of radius $\epsilon$. This is the same as saying that for any open ball $B$ around $c$ there are infinitely many $n \in \mathbb N$ s.t. $x_n \in B$. These are two distinct notions of a cluster point. They are closely related but not the same.
import os

# Base directory of this package (the directory containing this file)
DATA_BASE_DIR = os.path.dirname(os.path.realpath(__file__))

# Bundled datasets and trained models live alongside the package
DATA_SET_DIR = os.path.join(DATA_BASE_DIR, 'datasets')
MODELS_DIR = os.path.join(DATA_BASE_DIR, 'models')
1. A man complained to the Independent Press Standards Organisation on behalf of himself and his family that the Edinburgh Evening News had breached Clause 1 (Accuracy) of the Editors’ Code of Practice in publishing a front page teaser image with the caption “Who lives in a house like this?” on 13 September 2014. 2. The newspaper had published a photograph of the complainant’s house on the front page, referring readers to pages 4 & 5 for the full story. These pages featured an entirely unrelated article about a man convicted of sexual offences, with the story about the complainant’s house appearing on pages 6 and 7. 3. The complainant said that the error inaccurately suggested that a sex offender lived in his home. 4. The newspaper apologised for the error, offered to write a private letter of apology to the complainant, and published the following clarification on its corrections page in print on page 2 and online: An item on our front page on September 13 2014, featured a picture of [the complainant]’s house displaying a giant Yes banner alongside the headline “Who lives in a house like this?” Readers were incorrectly referred to pages four and five for the full story, where there was a report about a sex offender, rather than to pages six and seven where the report about [the complainant] actually appeared. We are very sorry for the error and any embarrassment caused. 5. The newspaper also offered to make a donation of £50 to a local charity, at the request of the complainant. 6. The complainant was concerned that the correction linked his name with the offences; at his request, it was removed. He did not consider a £50 donation to be sufficient, nor did he feel that a letter of apology would address the damage caused by the original article and picture. Relevant Code Provisions 7. Clause 1 (Accuracy) i) The Press must take care not to publish inaccurate, misleading or distorted information, including pictures. 
ii) A significant inaccuracy, misleading statement or distortion once recognised must be corrected, promptly and with due prominence, and – where appropriate – an apology published. Findings of the Committee 8. The newspaper had failed to take care not to publish inaccurate information on its front page. The newspaper should have checked on which pages the article about the complainant’s house would appear, and made sure that the front page accurately reflected this. In failing to do so, it had breached Clause 1 (i). The inaccuracy was a significant one, as it could suggest to a casual reader that the inhabitants of the house pictured were linked to the offences described on pages 4 and 5. A correction was therefore necessary to comply with the terms of Clause 1 (ii). Conclusions 9. The complaint was upheld. Remedial Action Required 10. Having upheld the complaint, the Committee considered what remedial action should be required. The Committee has the power to require the publication of a correction and/or adjudication; the nature, extent and placement is to be determined by IPSO. It may inform the publication that further remedial action is required to ensure that the requirements of the Editors’ Code are met. 11. The correction published by the newspaper had identified the original inaccuracy, made clear the true position, and had included an apology, which was necessary to comply with the terms of Clause 1 (ii). The newspaper had offered to correct the error promptly, as soon as it had been brought to its attention. The Committee also noted that the front page teaser photograph had been used prominently on pages 6 and 7, where the article about the complainant appeared, making clear to readers that the front page image was associated with the story on those pages, rather than that on pages 4 and 5. 
Given that the original front page had not suggested a link between the complainant and the sex offences, and that this would only have been made by readers who turned to pages 4 and 5, the positioning of the correction on page 2 was appropriate. The newspaper’s offer was therefore a sufficient remedy under the terms of Clause 1 (ii). The Committee understood the complainant’s objection to the correction, as it linked his name with the offences. It was unfortunate that the complainant had been on holiday at the time at which the correction was published, and so had not been able to approve it. The Committee welcomed the newspaper’s willingness to remove it from its website at the request of the complainant.
1. INTRODUCTION {#gepi21989-sec-0010} =============== Hundreds of studies have searched for gene--gene and gene--environment interaction effects in human data, with the underlying motivation of identifying, or at least accounting for, potential biological interaction. So far, this quest has been quite unsuccessful, and the large number of methods that have been developed to improve detection (Aschard et al., [2012b](#gepi21989-bib-0006){ref-type="ref"}; Cordell, [2009](#gepi21989-bib-0016){ref-type="ref"}; Gauderman, Zhang, Morrison, & Lewinger, [2013](#gepi21989-bib-0024){ref-type="ref"}; Hutter et al., [2013](#gepi21989-bib-0032){ref-type="ref"}; Thomas, [2010a](#gepi21989-bib-0048){ref-type="ref"}; Wei, Hemani, & Haley, [2014](#gepi21989-bib-0051){ref-type="ref"}) has not qualitatively changed this situation. This lack of discovery in the face of a substantial research investment has been discussed in several review papers that pointed out a number of issues specific to interaction tests, including exposure assessment, time‐dependent effects, confounding and multiple comparisons (Aschard et al., [2012b](#gepi21989-bib-0006){ref-type="ref"}; Bookman et al., [2011](#gepi21989-bib-0009){ref-type="ref"}; Thomas, [2010b](#gepi21989-bib-0049){ref-type="ref"}). While these factors are obvious barriers to the identification of interaction effects, it appears that some of the limitations of standard regression‐based interaction tests that pertain to the nature of interaction effects are underestimated. Previous work showed that the detection of some interaction effects requires larger sample sizes than the detection of marginal effects of similar size (Aiken, West, & Reno, [1991](#gepi21989-bib-0002){ref-type="ref"}; Greenland, [1983](#gepi21989-bib-0028){ref-type="ref"}), although this is not an absolute rule. 
Understanding the theoretical basis of this lack of power can help us optimize study design to improve the detection of interaction effects in human traits and diseases, and open the path for new methods development. Moreover, the interpretation of effect estimates from interaction models often suffers from various imprecisions. Compared to marginal models, the coding scheme for interacting variables can impact effect estimates and association signals for the main effects (Aiken et al., [1991](#gepi21989-bib-0002){ref-type="ref"}; Andersen & Skovgaard, [2010](#gepi21989-bib-0003){ref-type="ref"}). Also, the current strategy to derive the contribution of interaction effects to the variance of an outcome greatly disadvantages interaction effects and is inappropriate when the goal of a study is not prediction but to assess the relative importance of an interaction term from a biological perspective. While alternative approaches exist, they have so far not been considered in genetic association studies. Finally, the development of new pairwise gene--gene and gene--environment interaction tests is reaching some limits, because the number of prior assumptions that can be leveraged to improve power (e.g. gene--environment independence (Piegorsch, Weinberg, & Taylor, [1994](#gepi21989-bib-0043){ref-type="ref"}) or the presence of a marginal genetic effect for interacting variants (Dai, Kooperberg, Leblanc, & Prentice, [2012](#gepi21989-bib-0017){ref-type="ref"})) is limited when only two predictors are considered. With the exponential increase of available genetic and nongenetic data, the development and application of multivariate interaction tests offer new opportunities to build powerful approaches and move the field forward. 2. METHODS AND RESULTS {#gepi21989-sec-0020} ====================== 2.1. 
Coding scheme and effect estimates {#gepi21989-sec-0030} --------------------------------------- Consider an interaction effect between a single nucleotide polymorphism (SNP) *G* and an exposure *E* (which can be an environmental exposure or another genetic variant) on a quantitative outcome *Y*. For simplicity I assume in all further derivations that *E* is normally distributed with variance 1, and that *G* and *E* are independent. The simplest and most commonly assumed underlying model for *Y* when testing for an interaction effect between *G* and *E* is defined as follows: $$Y = \beta_{G} \times G + \beta_{E} \times E + \beta_{GE} \times G \times E + \varepsilon$$ where $\beta_{G}$ is the main effect of *G*, $\beta_{E}$ is the main effect of *E*, *β* ~GE~ is a linear interaction between *G* and *E*, and ε, the residual, is normally distributed with its mean and variance σ^2^ set so that *Y* has mean 0 and variance 1 (hence the absence of an intercept term in the above equation). One can then evaluate the impact of applying a linear transformation of the genotype and/or the exposure when testing for main and interaction effects. For example, assuming *E* has a mean \> 0 and *G* is defined as the number of coded alleles in the generative model, *Y* can be rewritten as a function of $G_{std}$ and $E_{std}$, the standardized *G* and *E*: $$Y = \beta_{G}^{\prime} \times G_{std} + \beta_{E}^{\prime} \times E_{std} + \beta_{GE}^{\prime} \times G_{std} \times E_{std} + \varepsilon^{\prime}$$where $\beta_{G}^{\prime}$, $\beta_{E}^{\prime}$, and $\beta_{GE}^{\prime}$ are the main effects of $G_{std}$ and $E_{std}$ and their interaction. 
Relating the standardized and unstandardized equations, we obtain (supplementary Appendix A): $$\beta_{G}^{\prime} = \left( {\beta_{G} + \beta_{GE} \times \mu_{E}} \right) \times \sigma_{G}$$ $$\beta_{E}^{\prime} = \left( {\beta_{E} + \beta_{GE} \times \mu_{G}} \right) \times \sigma_{E}$$ $$\beta_{GE}^{\prime} = \beta_{GE} \times \sigma_{E} \times \sigma_{G}$$where $\mu_{G}$, $\sigma_{G}$, $\mu_{E}$, and $\sigma_{E}$ are the means and standard deviations of *G* and *E*, respectively. Hence, the estimated main effects of $G_{std}$ and $E_{std}$ not only scale with the standard deviations of *G* and *E* but can also change qualitatively if there is an interaction effect (i.e. the direction of the effect can change). In comparison, the interaction effect $\beta_{GE}^{\prime}$ remains qualitatively similar; however, because $\beta_{GE}^{\prime}$ does not scale with $\sigma_{GE}$, the standard deviation of the interaction term, but with the standard deviations of *G* and *E*, the interpretation of the relative importance of the interaction effect can change (see Section [2.3](#gepi21989-sec-0050){ref-type="sec"}). Which coding scheme for *G* and *E* makes the most biological sense can only be discussed on a case-by-case basis (Aiken et al., [1991](#gepi21989-bib-0002){ref-type="ref"}). Indeed, defining the optimal coding for a biological question can be very challenging, and as noted in previous work, "*most mathematical models are convenient fictions and would certainly be rejected given sufficient sample size*" (Clayton, [2009](#gepi21989-bib-0014){ref-type="ref"}). Yet, it is important to recognize that the coding scheme should be chosen carefully when testing an interaction, as different codings can correspond to qualitatively different relative contributions of each predictor (the main and interaction terms) to the outcome. 
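The coefficient relations above can be checked numerically. The sketch below simulates the generative model with illustrative parameter values (not taken from the paper), fits the standardized model by ordinary least squares, and compares the fitted coefficients with the closed-form expressions:

```python
import numpy as np

# Simulate Y = bG*G + bE*E + bGE*G*E + eps with a noncentered exposure
# (illustrative parameter values, not from the paper)
rng = np.random.default_rng(0)
n = 200_000
G = rng.binomial(2, 0.3, n).astype(float)   # SNP coded 0/1/2
E = rng.normal(2.0, 1.0, n)                 # exposure with mean > 0
bG, bE, bGE = 0.1, 0.2, 0.05
Y = bG * G + bE * E + bGE * G * E + rng.normal(0.0, 1.0, n)

def ols(cols, y):
    """OLS coefficients, intercept dropped from the output."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

Gs = (G - G.mean()) / G.std()               # standardized predictors
Es = (E - E.mean()) / E.std()
coef_std = ols([Gs, Es, Gs * Es], Y)

# Closed-form standardized coefficients from the relations above
pred = np.array([(bG + bGE * E.mean()) * G.std(),
                 (bE + bGE * G.mean()) * E.std(),
                 bGE * E.std() * G.std()])
print(np.round(coef_std, 3), np.round(pred, 3))
```

Note in particular how the main effect of $G_{std}$ absorbs part of the interaction effect through the $\beta_{GE} \times \mu_{E}$ term.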
This is illustrated in Figure [1](#gepi21989-fig-0001){ref-type="fig"}, which shows the contribution of a pure interaction effect ($\beta_{G} = \beta_{E} = 0$ and $\beta_{GE} \neq 0$) to *Y*. When *G* and *E* are centered, the interaction term has a positive contribution in the most extreme subgroups (low exposure and homozygote for the protective allele vs. high exposure and homozygote for the risk allele) and a negative contribution in the opposite heterogeneous subgroups (low exposure and homozygote for the risk allele vs. high exposure and homozygote for the protective allele, Fig. [1](#gepi21989-fig-0001){ref-type="fig"}A). Conversely, when *G* and *E* are positive or null only, the interaction term corresponds to a monotonic increase (or decrease if the interaction effect is opposite to the main effects) of the magnitude of the genetic and environmental effects (Fig. [1](#gepi21989-fig-0001){ref-type="fig"}B). Hence, allowing *G* and/or *E* to take negative values in the generative model implies an interaction effect that can be difficult to interpret from a biological perspective. Furthermore, one can easily show that when the mean of the exposure increases while its variance is fixed (e.g. if an increase in an environmental exposure affects the entire population), an interaction effect will appear more and more as a sole genetic effect (supplementary Fig. S1). Overall, coding schemes that can be related through linear transformations are mathematically equivalent (they produce the same outcome values as long as the predictor effects are rederived to account for the transformation). However, the coding scheme should not be overlooked because of this equivalence, and as shown in the example of Figure [1](#gepi21989-fig-0001){ref-type="fig"}, variable coding should be justified whenever the interpretation of effect estimates matters. ![Example of a gene by exposure interaction effect on height. 
Pattern of contribution of a hypothetical genetic‐by‐exposure interaction term to human height when shifting the location of the genetic and exposure burdens in the generative model. The genetic burden can correspond to the number of coded alleles for a given SNP, and the exposure burden can correspond to the measure of an environmental exposure. Examples of simple codings are defined in parentheses on both axes, and the resulting contributions to each specific combination of genetic and environmental values are defined on each panel for a given interaction effect parameter $\beta_{GE}$. In (A) the interaction is defined as the product of a centered genetic variant and a centered exposure. Such an encoded interaction induces a positive contribution to the outcome for the two extreme groups: (i) maximum exposure burden and maximum genetic burden, and (ii) lowest exposure burden and lowest genetic burden; and a negative contribution to the outcome for the two opposite groups: (iii) maximum exposure burden and lowest genetic burden, and (iv) lowest exposure burden and maximum genetic burden. In (B) genetic and exposure burdens are encoded on their natural scale and are therefore positive or null. Such coding induces a contribution of the interaction effect to the outcome that is monotonic with increasing genetic and exposure burden.](GEPI-40-678-g001){#gepi21989-fig-0001} Fortunately, the choice of a specific coding scheme, how to interpret effect estimates when modeling an interaction, and the motivation for adding nonlinear terms in general have already been debated, and several general guidelines have been proposed (see for example the review by Robert J. Friedrich (Friedrich, [1982](#gepi21989-bib-0022){ref-type="ref"})). The consensus is that, if the range of the independent variables naturally includes zero (e.g. smoking status, genetic variants), there is no problem in interpreting the estimated main and interaction effects. 
For an interaction effect between A and B, the main effect of A corresponds to the effect of A when B is absent, and conversely. On the contrary, if the range of the variables does not naturally encompass zero, then the observed estimates "*will be extrapolations beyond the observed range of experience*" (Friedrich, [1982](#gepi21989-bib-0022){ref-type="ref"}). Centering the variables can be an option to address this concern. In that case, the main effect of A would represent the effect of A among individuals having the mean value of B, and conversely. However, as mentioned previously, using centered variables induces a less interpretable interaction term. I suggest that a reasonable alternative would consist of shifting the exposure values so that the minimum value is close to 0, or alternatively of using ordinal categories of the exposure (e.g. high vs. low BMI, as done to define obesity), so that the main effect of A would correspond to the effect at the lowest observed value of B in the population, and conversely. 2.2. Power considerations {#gepi21989-sec-0040} ------------------------- The power of the tests from the interaction model and from a marginal genetic model defined as $Y = \beta_{mG} \times G + \varepsilon_{m}$ can be compared by deriving the noncentrality parameters (*ncp*) of the predictors of interest. 
Assuming all effects are small, so that σ^2^, the residual variance, is close to 1, these *ncps* can be approximated by (see supplementary Appendix B): $$ncp_{G} \approx N \times \sigma_{G}^{2} \times \beta_{G}^{2} \times \frac{\sigma_{E}^{2}}{\mu_{E}^{2} + \sigma_{E}^{2}}$$ $$ncp_{E} \approx N \times \sigma_{E}^{2} \times \beta_{E}^{2} \times \frac{\sigma_{G}^{2}}{\mu_{G}^{2} + \sigma_{G}^{2}}$$ $$ncp_{GE} \approx N \times \sigma_{E}^{2} \times \sigma_{G}^{2} \times \beta_{GE}^{2} = N \times \beta_{GE}^{{}^{\prime}2}$$ $$ncp_{mG} \approx N \times \sigma_{G}^{2} \times \left( {\beta_{G} + \beta_{GE} \times \mu_{E}} \right)^{2} = N \times \beta_{G}^{{}^{\prime}2}$$ Note that in such a scenario, adjusting for the effect of *E* in the marginal genetic model has a minor impact on $ncp_{mG}$. The above equations indicate first that the significance of the marginal test of *G* ($ncp_{mG}$) and the interaction test ($ncp_{GE}$) are invariant to the coding used in the model tested, while the significance of the tests of the main genetic and exposure effects can change dramatically when shifting the mean of *G* and *E*. Second, as illustrated in Figure [2](#gepi21989-fig-0002){ref-type="fig"}, depending on the parameters of the distribution of the exposure and the genetic variants in the generative model, the relative power of each test can be dramatically different. For example, if the genetic variant has only a main linear effect but is not interacting with the exposure, we obtain $ncp_{G} = ncp_{mG} \times \sigma_{E}^{2}/\left( {\mu_{E}^{2} + \sigma_{E}^{2}} \right)$, so that testing for $\beta_{mG}$ will be much more powerful than testing for $\beta_{G}$ if the mean of *E* is large, although there is no interaction effect here. When the generative model includes an interaction effect only ($\beta_{G} = \beta_{E} = 0$ and $\beta_{GE} \neq 0$), we obtain $ncp_{mG} = ncp_{GE} \times \mu_{E}^{2}/\sigma_{E}^{2}$. 
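The last relation can be illustrated by simulation. The sketch below (with illustrative parameter values, not ones used in the paper) generates data under a pure interaction model and compares the average Wald chi-square statistics of the marginal and interaction tests with $1 + ncp$ from the approximations above:

```python
import numpy as np

# Monte Carlo sketch of the ncp approximations under a pure interaction model
# (beta_G = beta_E = 0, beta_GE != 0). Parameter values are illustrative.
rng = np.random.default_rng(2)
n, reps, maf, bGE, muE = 5000, 200, 0.3, 0.05, 2.0

def wald_chisq(cols, y, j):
    """Wald chi-square statistic for the j-th predictor (after the intercept)."""
    X = np.column_stack([np.ones(len(y))] + cols)
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return b[j + 1] ** 2 / cov[j + 1, j + 1]

stat_m, stat_int = [], []
for _ in range(reps):
    G = rng.binomial(2, maf, n).astype(float)
    E = rng.normal(muE, 1.0, n)
    Y = bGE * G * E + rng.normal(0.0, 1.0, n)          # pure interaction effect
    stat_m.append(wald_chisq([G], Y, 0))               # marginal genetic test
    stat_int.append(wald_chisq([G, E, G * E], Y, 2))   # interaction test

varG = 2 * maf * (1 - maf)
ncp_int = n * varG * bGE ** 2            # ~5.25 (sigma_E^2 = 1)
ncp_m = ncp_int * muE ** 2               # ~21, per the relation above
mean_m, mean_int = np.mean(stat_m), np.mean(stat_int)
print(mean_m, 1 + ncp_m, mean_int, 1 + ncp_int)  # mean Wald stat ~ 1 + ncp
```

With $\mu_E = 2$, the marginal test statistic is on average about $\mu_E^2 = 4$ times larger in *ncp* than the interaction statistic, even though the generative model contains no main genetic effect.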
Again, the marginal test of the genetic effect can be dramatically more powerful than the test of the interaction effect, even though the generative model includes only an interaction term and no main effect. ![Relative power of the joint test of main genetic and interaction effects. Power comparison for the tests of the main genetic effect (*main.G*), the interaction effect (*int.GxE*), and the joint effect (*Joint G.GxE*) from the interaction model, and the test of the marginal genetic effect (*mar.G*). The outcome *Y* is defined as a function of a genetic variant *G* coded as \[0,1,2\] with a minor allele frequency of 0.3, and the interaction of *G* with an exposure *E* normally distributed with variance 1 and mean $\overline{E}$. The genetic and interaction effects vary so that they explain 0% and 0.04% (A), 0.1% and 0.1% (B), 0.6% and 0.1% (C) with effects in opposite directions, and 0.4% and 0% (D) of the variance of *Y*, respectively. Power and $\rho_{G,G \times E}$, the correlation between *G* and the $G \times E$ interaction term (E), were plotted for a sample size of 10,000 individuals and increasing $\overline{E}$ from 0 to 5.](GEPI-40-678-g002){#gepi21989-fig-0002} It follows that the power to detect an interaction effect explaining for example 1% of the variance of *Y* (where the variance explained by a predictor *X* is defined as $\beta_{X}^{2} \times \sigma_{X}^{2}$) but inducing no marginal genetic effect (i.e. when *E* is centered as in Fig. [1](#gepi21989-fig-0001){ref-type="fig"}A) is much higher than for an interaction explaining the same amount of variance but whose effect can be captured by a marginal term (i.e. when *E* is not centered as in Fig. [1](#gepi21989-fig-0001){ref-type="fig"}B--D). This result is a direct consequence of the covariance between $\beta_{G}$ and $\beta_{GE}$ that arises when the exposure is not centered in the generative model (Fig. [2](#gepi21989-fig-0002){ref-type="fig"}E). 
This covariance equals $\mu_{E} \times \sigma_{G}^{2}$ (supplementary Appendix C). It induces uncertainty in the estimation of the predictor effects, which decreases the significance of the estimates in the interaction model. With increasing intercorrelation between predictors it becomes impossible to disentangle the effect of one predictor from another: the standard errors of the effect estimates grow without bound and the power decreases toward the null (Farrar & Glauber, [1967](#gepi21989-bib-0021){ref-type="ref"}). As shown in the simulation study in supplementary Figures S2 and S3, these results appear consistent for both linear and logistic regression and when assuming a non‐normal distribution of the exposure. This leads to the nonintuitive situation where the power to detect a relatively simple and parsimonious interaction effect from a biological perspective--defined as the product of a genetic variant and an exposure both coded to be positive or null--is very small; and in most scenarios where the main genetic and interaction effects do not cancel each other (see e.g. Weiss, [2008](#gepi21989-bib-0052){ref-type="ref"}) the marginal association test of *G* would be more powerful. In comparison, a more exotic interaction effect, as defined in Figure [1](#gepi21989-fig-0001){ref-type="fig"}A and supplementary Figure S1E, would be both much easier to detect and not captured by a screening of marginal genetic effects. 2.3. Proportion of variance explained {#gepi21989-sec-0050} ------------------------------------- In genetic association studies the proportion of variance explained by an interaction term is commonly evaluated as the amount of variance of the outcome it can explain on top of the marginal linear effects of the interacting factors (Hill, Goddard, & Visscher, [2008](#gepi21989-bib-0031){ref-type="ref"}). 
Following the aforementioned principle, one can derive the contributions of *G* ($r_{G}^{2}$), *E* ($r_{E}^{2}$), and $G \times E$ ($r_{GE}^{2}$) to the variance of the outcome using the estimates from the standardized model, in which the interaction term is independent of *G* and *E* (supplementary Appendix D): $$r_{G}^{2} = \beta_{G}^{{}^{\prime}2} = \left( {\beta_{G} + \beta_{GE} \times \mu_{E}} \right)^{2} \times \sigma_{G}^{2}$$ $$r_{E}^{2} = \beta_{E}^{{}^{\prime}2} = \left( {\beta_{E} + \beta_{GE} \times \mu_{G}} \right)^{2} \times \sigma_{E}^{2}$$ $$r_{GE}^{2} = \beta_{GE}^{{}^{\prime}2} = \left( {\beta_{GE} \times \sigma_{E} \times \sigma_{G}} \right)^{2}.$$ The total variance explained by the predictors in the interaction model equals $r_{model}^{2} = r_{G}^{2} + r_{E}^{2} + r_{GE}^{2}$. It follows that one can draw various scenarios where the estimated main effects of *E* and *G* are equal to zero but *E* and *G* still have a nonzero contribution to the variance of *Y* because of the interaction effect. Consider the simple example where *G* and *E* are binary variables and have a pure synergistic effect, that is, the effect of *G* and *E* is observed only in the exposed subjects carrying the risk allele. Following the above equations, if *G* and *E* have frequencies of, e.g., 0.3 and 0.7 and $\beta_{GE} = 0.5$, the contributions of *G*, *E*, and $G \times E$ to the variance of the outcome equal 2.56%, 0.47% and 1.10%, respectively. More generally, Figure [3](#gepi21989-fig-0003){ref-type="fig"} shows that depending on the frequency of the causal allele and the distribution of the exposure in the generative model, the vast majority of the contribution of the interaction term to the variance of *Y* will be attributed to either the genetic variant or the exposure. 
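The binary example above can be reproduced directly from the $r^{2}$ formulas; with these parameters the formulas give approximately 2.57%, 0.47% and 1.10%, in close agreement with the values quoted in the text:

```python
# Check of the binary pure-synergy example from the text: G and E binary with
# frequencies 0.3 and 0.7, beta_GE = 0.5, and no main effects.
fG, fE, bGE = 0.3, 0.7, 0.5
muG, varG = fG, fG * (1 - fG)     # mean/variance of a Bernoulli variable
muE, varE = fE, fE * (1 - fE)
r2_G = (bGE * muE) ** 2 * varG    # (beta_G + beta_GE*mu_E)^2 * var_G, beta_G = 0
r2_E = (bGE * muG) ** 2 * varE
r2_GE = bGE ** 2 * varE * varG
print(f"{100 * r2_G:.2f}% {100 * r2_E:.2f}% {100 * r2_GE:.2f}%")  # → 2.57% 0.47% 1.10%
```

Most of the interaction's contribution is attributed to the more common factor, here the exposure-side term loading onto the genetic variant.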
This is in agreement with previous work showing that even if a large proportion of the genetic effect on a given trait is induced by interaction effects, the observed contribution of interaction terms to the heritability can still be very small (Hill et al., [2008](#gepi21989-bib-0031){ref-type="ref"}). Because such interaction effects make only a small contribution to $r_{model}^{2}$ on top of the marginal effects of *E* and *G*, they have very limited utility for prediction purposes in the general population (Aschard et al., [2012a](#gepi21989-bib-0004){ref-type="ref"}; Aschard, Zaitlen, Lindstrom, & Kraft, [2015](#gepi21989-bib-0007){ref-type="ref"}). ![Examples of attribution of phenotypic variance explained by an interaction effect. Proportion of variance of an outcome *Y* explained by a genetic variant *G*, an exposure *E* and their interaction *G* × *E* in a model harboring a pure interaction effect only ($Y = \beta_{GE} \times G \times E + \varepsilon$). The exposure *E* follows a normal distribution with a standard deviation of 1 and a mean of 0 (A), 2 (B), and 4 (C). The genetic variant is biallelic with a risk allele frequency increasing from 0.01 to 0.99. The interaction effect is set so that the maximum of the variance explained by the model equals 1%.](GEPI-40-678-g003){#gepi21989-fig-0003} Still, this is a strong limitation when the goal is not prediction but to understand the underlying architecture of the trait under study and to evaluate the relative importance of main and interaction effects from a public health perspective. Lewontin (Lewontin, [1974](#gepi21989-bib-0037){ref-type="ref"}) highlighted similar issues, showing that the analysis of causes and the analysis of variance are not necessarily overlapping concepts. 
His work presents various scenarios where "*the analysis of variance will give a completely erroneous picture of the causative relations between genotype, environment, and phenotype because of the particular distribution of genotypes and environments in a given population*." Since then, a number of theoretical studies have explored the issue of assigning importance to correlated predictors (Budescu, [1993](#gepi21989-bib-0011){ref-type="ref"}; Chao, Zhao, Kupper, & Nylander‐French, [2008](#gepi21989-bib-0012){ref-type="ref"}; Darlington, [1968](#gepi21989-bib-0018){ref-type="ref"}; Green, Carroll, & DeSarbo, [1978](#gepi21989-bib-0026){ref-type="ref"}) and several alternative measures have been proposed. To my knowledge, none of these measures has been considered so far in human genetic association studies. The advantages and limitations of these alternative methods have been debated for years and no clear consensus has emerged; however, Pratt's axiomatic justification (Pratt, [1987](#gepi21989-bib-0045){ref-type="ref"}) for one of these methods---further presented in the literature as the Product Measure (Bring, [1996](#gepi21989-bib-0010){ref-type="ref"}), Pratt index or Pratt's measure (Thomas, Hughes, & Zumbo, [1998](#gepi21989-bib-0050){ref-type="ref"})---makes it a relevant substitute. For a predictor $X_{i}$, the Pratt index, which we refer to hereafter as $r^{2*}$, is defined as the product of $\beta_{X_{i}}$, the standardized coefficient from the multivariate model (where all predictors are scaled to have mean 0 and variance 1, including the interaction term), times its marginal (or zero‐order) correlation with the outcome $cor\left( {Y,X_{i}} \right)$, i.e. $r_{X_{i}}^{2*} = \beta_{X_{i}} \times cor\left( {Y,X_{i}} \right)$. By definition, $r_{X_{i}}^{2*}$ attributes a predictor's importance as a direct function of its estimated effect and therefore addresses the previously raised concern. 
Among other relevant properties, it depends only on regression coefficients, multiple correlation, and residual variance but not on higher moments, and it does not change under (nonconstant) linear transformations of predictors other than $X_{i}$. It also has convenient additivity properties, as it satisfies the condition $r_{G}^{2*} + r_{E}^{2*} + r_{GE}^{2*} = r_{model}^{2}$ (supplementary Appendix D), so that the overall contribution of the predictors is the sum of their individual contributions; for example, the cumulative contribution of multiple interaction effects can easily be evaluated by summing $r_{X_{i}}^{2*}$. The Pratt index has also received criticisms (Bring, [1996](#gepi21989-bib-0010){ref-type="ref"}; Chao et al., [2008](#gepi21989-bib-0012){ref-type="ref"}), in particular for allowing $r_{X}^{2*}$ to be negative (Thomas et al., [1998](#gepi21989-bib-0050){ref-type="ref"}). Pratt's answer to this concern is that $r_{X_{i}}^{2*}$ only describes the average contribution of a predictor to the outcome variance in one dimension and is therefore, like any one‐dimensional measure, a suboptimal representation of the complexity of the underlying model. For example, a negative $r_{X_{i}}^{2*}$ means that if we were able to remove the effect of $X_{i}$, the variance of the outcome would increase because of the correlation of $X_{i}$ with other predictors (see the example in supplementary Appendix D). 
From a practical perspective, $r_{X_{i}}^{2*}$ can be expressed as a function of the estimated effects and the means and variances of *E* and *G* (supplementary Appendix D), and can be derived using estimates from a standard regression model: $$r_{G}^{2*} = \left( {\beta_{G}^{2} + \beta_{G} \times \beta_{GE} \times \mu_{E}} \right) \times \sigma_{G}^{2}$$ $$r_{E}^{2*} = \left( {\beta_{E}^{2} + \beta_{E} \times \beta_{GE} \times \mu_{G}} \right) \times \sigma_{E}^{2}$$ $$\begin{array}{ccl} r_{GE}^{2*} & = & {\beta_{GE}^{2} \times \sigma_{GE}^{2} + \beta_{GE}} \\ & & {\times \left( {\beta_{G} \times \mu_{E} \times \sigma_{G}^{2} + \beta_{E} \times \mu_{G} \times \sigma_{E}^{2}} \right)} \\ \end{array}$$ As shown in Figure [4](#gepi21989-fig-0004){ref-type="fig"} and supplementary Figure S4, the Pratt index can recover the pattern of the causal model in situations where the standard approach would underestimate the importance of the interaction effects. It can therefore be of great use in future studies to evaluate the importance of potentially modifiable exposures that influence the genetic component of multifactorial traits. ![Relative importance of an interaction term as defined by the Pratt index. Contribution of a genetic variant *G* with minor allele frequency of 0.5, a normally distributed exposure *E* with mean of 4 and variance of 1, and their interaction *G* × *E*, to the variance of a normally distributed outcome *Y*, based on the standard approach---the marginal contribution of *E* and *G* and the increase in *r* ^2^ when adding the interaction term---(gray boxes), and based on the Pratt index (blue boxes), across 10,000 replicates of 5,000 subjects. For illustration purposes, the predictors jointly explain 10% of the variance of *Y*.
In scenario (A) all G, E, and G × E have equal contributions, while in scenarios (B), (C), and (D) there is no interaction effect, no exposure effect, and no genetic effect, respectively.](GEPI-40-678-g004){#gepi21989-fig-0004}

Table [1](#gepi21989-tbl-0001){ref-type="table-wrap"} illustrates the differences between the two approaches for two confirmed interaction effects on body mass index (BMI). In case (1), the authors identified and replicated an interaction between soda consumption and a genetic risk score (GRS) of 32 BMI SNPs. Case (2) is a replication of a previously identified interaction between a GRS of 12 BMI SNPs and physical activity (Ahmad et al., [2013](#gepi21989-bib-0001){ref-type="ref"}). Following the formulas above and using approximations of the means and variances of the genetic and exposure variables (supplementary Tables S1 and S2), I estimated the contribution of each term using the standard approach and the Pratt index, after rederiving effect estimates for a model where predictor values (for the GRS and the exposure) are shifted so that the minimum observed values equal 0---as suggested earlier. This resulted in major differences in the relative importance of the three predictors, the contribution of the interaction effect as derived with the Pratt index being substantially higher in both cases (increasing from 4.4% to 10.8% for case 1, and from 0.4% to 15.7% for case 2). Case 1 highlights in particular that reducing soda consumption might have a greater impact on reducing the average BMI in the population than one would expect when focusing on the amount of variance explained as defined in the standard approach. An important caveat here is that the Pratt index is sensitive to location shifts of the predictors (as performed in this analysis) and the results from Table [1](#gepi21989-tbl-0001){ref-type="table-wrap"} would change if a different transformation were applied to the predictors (i.e. if the minimum possible value was defined differently).
In comparison, the standard approach is robust to linear transformations of the predictors.

###### Relative importance of GRS by exposure interaction effect from real data example

| Reference | Contribution to BMI | Standard[a](#gepi21989-tbl1-note-0002){ref-type="fn"} | Pratt index |
| --- | --- | --- | --- |
| 32 BMI SNPs × soda consumption | Total | 0.011 | 0.011 |
|  | % of genetic | 91.0% | 70.5% |
|  | % of environment | 4.6% | 18.7% |
|  | % of interaction | 4.4% | 10.8% |
| 12 BMI SNPs × physical activity | Total | 0.016 | 0.016 |
|  | % of genetic | 43.8% | 49.1% |
|  | % of environment | 55.8% | 35.2% |
|  | % of interaction | 0.4% | 15.7% |

BMI, body mass index; SNP, single nucleotide polymorphisms. [a] Variance explained for the interaction effect is derived as the variance explained on top of the marginal contribution from the genes and the environment. John Wiley & Sons, Ltd.

2.4. Improving detection through multivariate interaction tests {#gepi21989-sec-0060}
---------------------------------------------------------------

Using statistical techniques such as the Pratt index can provide clues on the importance of interaction effects; however, it does not help in mapping interactions. Increasing power mostly relies on two principles: increasing sample size, and leveraging assumptions on the underlying model. The case‐only test, which assumes independence between the genetic variant and the exposure, and a two‐step strategy that selects candidate variants for interaction testing based on their marginal linear effects, are good examples of the latter principle (Dai et al., [2012](#gepi21989-bib-0017){ref-type="ref"}; Gauderman et al., [2013](#gepi21989-bib-0024){ref-type="ref"}; Mukherjee, Ahn, Gruber, & Chatterjee, [2012](#gepi21989-bib-0041){ref-type="ref"}). However, only a limited number of assumptions can be made for a single variant by a single exposure interaction test.
With the overwhelming wave of genomic and environmental data, I suggest that a major path to move the field forward is to extend this principle while considering jointly more parameters. This has actually already been applied over the past few years with the joint test of main genetic and interaction effects (Kraft, Yen, Stram, Morrison, & Gauderman, [2007](#gepi21989-bib-0034){ref-type="ref"}). The *ncp* of such a joint test can be expressed as a function of the main and interaction estimates ($\beta_{G}$ and $\beta_{GE}$), their variances ($\sigma_{\beta_{G}}^{2}$ and $\sigma_{\beta_{GE}}^{2}$), and their covariance γ (supplementary Appendix E). By accounting for γ, the joint test recovers most of the power lost by the univariate tests of the main genetic and interaction effects (i.e. the situation where neither the interaction effect nor the main genetic effect is significant, while the joint test is; see e.g. SNP rs11654749 in Hancock et al., [2012](#gepi21989-bib-0030){ref-type="ref"}). More importantly, in the presence of both main and interaction effects, it can outperform the marginal test of *G*. This comes at the cost of decreased precision: if the test is significant, one cannot conclude whether the association signal is driven by the main or the interaction effect. Moreover, this holds only if the contribution of the interaction effect on top of the marginal effect is large enough to balance the increase in the number of degrees of freedom (Aschard, Hancock, London, & Kraft, [2010](#gepi21989-bib-0005){ref-type="ref"}; Clayton & McKeigue, [2001](#gepi21989-bib-0013){ref-type="ref"}) (Fig. [2](#gepi21989-fig-0002){ref-type="fig"}).
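The joint test just described can be formed directly from summary estimates. A minimal sketch (Python with numpy/scipy; the inputs are illustrative, not taken from any cited study) of the 2‐*df* Wald statistic $b^{T}V^{-1}b$, where $b = \left( \beta_{G},\beta_{GE} \right)$ and *V* is their variance--covariance matrix with off‐diagonal γ:

```python
import numpy as np
from scipy.stats import chi2

def joint_test_2df(beta_g, beta_ge, var_bg, var_bge, gamma):
    """2-df joint test of the main genetic and interaction effects from
    their estimates, their variances, and their covariance gamma."""
    b = np.array([beta_g, beta_ge])
    V = np.array([[var_bg, gamma], [gamma, var_bge]])
    stat = float(b @ np.linalg.solve(V, b))  # Wald form b' V^{-1} b
    return stat, chi2.sf(stat, 2)
```

With γ = 0 the statistic reduces to the sum of the two squared *z*‐scores, e.g. `joint_test_2df(0.2, 0.1, 0.01, 0.01, 0.0)` gives a statistic of 5.0.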
Application of the joint test of the main genetic effect and a single gene by exposure interaction term is now relatively common in the GWAS setting (Hamza et al., [2011](#gepi21989-bib-0029){ref-type="ref"}; Hancock et al., [2012](#gepi21989-bib-0030){ref-type="ref"}; Manning et al., [2012](#gepi21989-bib-0039){ref-type="ref"}). However, exploring multivariate interactions with multiple exposures further is limited by practical considerations. Existing software to perform the joint test in a meta‐analysis context (Aschard et al., [2010](#gepi21989-bib-0005){ref-type="ref"}; Manning et al., [2011](#gepi21989-bib-0040){ref-type="ref"}) only allows the analysis of a single interaction term, mostly because it requires the variance‐covariance matrix between estimates, which is not provided by popular GWAS software. Leveraging the results from the previous sections, one can show that the *ncp* of the joint test of the main genetic effect and interactions with *l* independent exposures can be expressed as the sum of the *ncp* from the tests of *G* and of the $G \times E_{cent.i}$, where $E_{cent.i}$ is the centered exposure *i* (supplementary Appendix E): $$ncp_{G + GE} = N \times \sigma_{G}^{2} \times \beta_{G}^{{}^{\prime\prime}2} + \sum\limits_{i = 1...l}\left\lbrack {N \times \sigma_{G}^{2} \times \sigma_{E}^{2} \times \beta_{GE_{cent.i}}^{{}^{\prime\prime}2}} \right\rbrack$$where $\beta_{G}^{{}^{\prime\prime}}$ and $\beta_{GE_{cent.i}}^{{}^{\prime\prime}}$ are the effects of *G* and $G \times E_{cent.i}$. Such a test is robust to a non‐normal distribution of the exposure and to modest correlation (\<0.1) between the genetic variant and the exposures, but it is sensitive to moderate correlation (\>0.1) between exposures (supplementary Figs. S5 and S6). Hence, one can perform a meta‐analysis of a joint test including multiple interaction effects using existing software simply by centering the exposures.
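The *ncp* expression above translates directly into an approximate power calculation through a noncentral chi‐square with $l + 1$ *df*. A minimal sketch (Python with scipy; all parameter values passed in are illustrative):

```python
from scipy.stats import chi2, ncx2

def joint_test_power(n, var_g, beta_g, betas_ge, var_es, alpha=5e-8):
    """Approximate power of the (1 + l)-df joint test of the main genetic
    effect and l interactions with centered exposures, using
    ncp = N*s2G*bG^2 + sum_i N*s2G*s2Ei*bGEi^2."""
    ncp = n * var_g * beta_g ** 2
    ncp += sum(n * var_g * v * b ** 2 for b, v in zip(betas_ge, var_es))
    df = 1 + len(betas_ge)
    crit = chi2.ppf(1.0 - alpha, df)  # central chi-square threshold
    return ncx2.sf(crit, df, ncp)     # P(noncentral chi-square > crit)
```

For example, `joint_test_power(10_000, 0.5, 0.05, [0.03, 0.02], [1.0, 1.0])` evaluates the 3‐*df* test of one main effect and two interactions at genome‐wide significance.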
In brief, one would first perform a standard inverse‐variance meta‐analysis to derive chi‐squares for the $l + 1$ terms of the model considered, and then sum all chi‐squares to form a chi‐square with $l + 1$ *df*. Importantly, centering the exposures is of interest only when testing jointly multiple interactions and the main genetic effect. In comparison, the combined test of multiple interaction effects can simply be performed by summing chi‐squares from each independent interaction test or from interaction tests derived in a joint model. As previously, the validity of this approach relies on independence between the genetic variant and the exposures, and between the exposures. Finally, a more general solution that should be explored in future studies would consist, as proposed for the analysis of multiple phenotypes (e.g. Zhu et al., [2015](#gepi21989-bib-0053){ref-type="ref"}), in estimating the correlation between all tests considered (main genetic effect and/or multiple interaction effects) using genome‐wide summary statistics in order to form a multivariate test. A second major direction for the development of multivariate tests is to assume that the effects of multiple genetic variants depend on a single "scaling" variable *E*. An emerging approach consists in testing for interaction between the scaling variable and a genetic risk score (GRS), derived as the weighted sum of the risk alleles.
Several interaction effects have been identified using this strategy (Ahmad et al., [2013](#gepi21989-bib-0001){ref-type="ref"}; Fu et al., [2013](#gepi21989-bib-0023){ref-type="ref"}; Langenberg et al., [2014](#gepi21989-bib-0035){ref-type="ref"}; Pollin et al., [2012](#gepi21989-bib-0044){ref-type="ref"}; Qi, Cornelis, Zhang, van Dam, & Hu, [2009](#gepi21989-bib-0046){ref-type="ref"}; Qi et al., [2012](#gepi21989-bib-0047){ref-type="ref"}), some being replicated in independent studies (Ahmad et al., [2013](#gepi21989-bib-0001){ref-type="ref"}; Qi et al., [2012](#gepi21989-bib-0047){ref-type="ref"}). This relative success, as compared to univariate analysis, has generated discussion regarding potential underlying mechanisms (Aschard et al., [2015](#gepi21989-bib-0007){ref-type="ref"}; Ebbeling & Ludwig, [2013](#gepi21989-bib-0020){ref-type="ref"}; Goran, [2013](#gepi21989-bib-0025){ref-type="ref"}; Greenfield, Samaras, & Campbell, [2013](#gepi21989-bib-0027){ref-type="ref"}; Malavazos, Briganti, & Morricone, [2013](#gepi21989-bib-0038){ref-type="ref"}). Overall, testing for an interaction effect between a GRS and a single exposure expands the principle of a joint test of multiple interactions while leveraging the assumption that, for a given choice of coded alleles, most interaction effects go in the same direction. It is similar in essence to the burden test that has been widely used for rare variant analysis (Lee, Abecasis, Boehnke, & Lin, [2014](#gepi21989-bib-0036){ref-type="ref"}). In its simplest form it can be expressed as the sum of all interaction effects, and it therefore captures deviation of the mean of the interaction effects from 0. When interaction effects are null on average, a joint test of all interaction effects (as previously described) will likely be the most powerful approach, as it allows interaction effects to be heterogeneous.
Conversely, if interactions tend to go in the same direction, the GRS‐based test can outperform other approaches (Fig. [5](#gepi21989-fig-0005){ref-type="fig"}). Of course, in a realistic scenario, a number of non‐interacting SNPs would be included in the GRS, diluting the overall interaction signal and therefore decreasing power. However, the gain in power for the multivariate approaches can remain substantial even when a large proportion of the SNPs tested (e.g. 95% in the example from Fig. [5](#gepi21989-fig-0005){ref-type="fig"}) is not interacting with the exposure. Table [2](#gepi21989-tbl-0002){ref-type="table-wrap"} illustrates the power achieved by these tests in the examples used for Table [1](#gepi21989-tbl-0001){ref-type="table-wrap"}. ![Advantages and limitations of testing interaction effects with a genetic risk score. Examples of power comparison for the combined analysis of interaction effects between 20 SNPs and a single exposure. Power was derived for three scenarios: the interaction effects are normally distributed (upper panels) and (A) centered, (B) slightly positive so that 25% of the interactions are negative, and (C) positive only. Three tests are compared while increasing sample size from 0 to 10,000: the joint test of all interaction terms, the genetic risk score by exposure interaction test, and the test of the strongest interaction effect (pairwise test) after correction for the 20 tests performed (middle panels).
The lower panels show power of the three tests for a sample size of 5,000, when including 1--400 non‐interacting SNPs on top of the 20 causal SNPs in the analysis and after accounting for multiple testing in the pairwise test.](GEPI-40-678-g005){#gepi21989-fig-0005}

###### Genetic risk score by exposure interaction in real data

|  |  | 32 BMI SNPs × soda consumption | 12 BMI SNPs × physical activity |
| --- | --- | --- | --- |
| Reported *P*‐value | *Best SNP* [a](#gepi21989-tbl2-note-0002){ref-type="fn"} | 0.0030 | 0.0030 |
|  | *GRS from paper* [c](#gepi21989-tbl2-note-0004){ref-type="fn"} | \<0.001 | 0.016 |
| Derived *P*‐value[a](#gepi21989-tbl2-note-0002){ref-type="fn"} | *wGRS* | 0.000028 | 0.0027 |
|  | *uGRS* | 0.00019 | 0.015 |
|  | *chi2* | 0.014 | 0.050 |
| Power[b](#gepi21989-tbl2-note-0003){ref-type="fn"} | *Best SNP* | 0.43 | 0.68 |
|  | *wGRS* | 0.99 | 0.85 |
|  | *uGRS* | 0.96 | 0.54 |

SNP, single nucleotide polymorphisms; wGRS, weighted GRS; uGRS, unweighted GRS; chi2, sum of individual interaction chi‐squares. [a] *P*‐values derived from individual SNP by exposure interaction estimates, not corrected for the number of SNPs tested. [b] Power is approximated based on the effect estimate, for an alpha level of 5% and sample sizes similar to those used in the corresponding study. [c] For soda consumption the authors used a weighted GRS; for physical activity, an unweighted GRS. John Wiley & Sons, Ltd.

Finally, as shown in supplementary Appendix F, assuming the SNPs in the GRS are independent from each other, the GRS by *E* interaction test can be derived from individual interaction effect estimates.
More precisely, consider testing the effect of a weighted GRS on *Y*: $$Y \sim \gamma_{GRS} \times GRS + \gamma_{E} \times E + \gamma_{INT} \times GRS \times E$$where $\gamma_{GRS}$, $\gamma_{E}$, and $\gamma_{INT}$ are the main effect of the weighted GRS, the main effect of *E*, and the interaction effect between *E* and the GRS, respectively. The test of $\gamma_{INT}$ is asymptotically equivalent to the meta‐analysis of the $\gamma_{G_{i} \times E}$, the interaction effects between $G_{i}$ and *E*, using an inverse‐variance weighted sum to derive a 1 *df* chi‐square, i.e. (see supplementary Appendix F, supplementary Figs. S7 and S8, and Dastani et al., [2012](#gepi21989-bib-0019){ref-type="ref"}): $$\left( \frac{{\hat{\gamma}}_{INT}}{{\hat{\sigma}}_{\gamma_{INT}}} \right)^{2} = \frac{\left( {\sum_{m}\frac{w_{i} \times {\hat{\gamma}}_{G_{i} \times E}}{{\hat{\sigma}}_{\gamma_{G_{i} \times E}}^{2}}} \right)^{2}}{\sum_{m}\frac{w_{i}^{2}}{{\hat{\sigma}}_{\gamma_{G_{i} \times E}}^{2}}}\mspace{6mu}$$where $w_{i}$ is the weight given to SNP *i*. A number of strategies can be used for the weighting scheme. Assuming equal effect sizes for all interaction effects, one should weight each SNP by the inverse of its standard deviation ($w_{i} = 1/\sigma_{G_{i}}$). Alternatively, others have used weights proportional to the marginal genetic effects of the SNPs, assuming the magnitudes of the marginal and interaction effects are correlated. Obviously, the relative power of each of these weighting schemes depends on their relevance with regard to the true underlying model. Finally, applying GRS‐based interaction tests implicitly supposes that a set of candidate genetic variants has been identified.
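The inverse‐variance weighted statistic above is straightforward to compute from per‐SNP interaction estimates. A minimal sketch (Python with numpy/scipy; the inputs are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def grs_interaction_chi2(gamma_ge, se_ge, weights):
    """1-df GRS x E interaction chi-square built from per-SNP interaction
    estimates gamma_ge, their standard errors se_ge, and SNP weights,
    via the weighted inverse-variance sum."""
    g = np.asarray(gamma_ge, dtype=float)
    s2 = np.asarray(se_ge, dtype=float) ** 2   # per-SNP variances
    w = np.asarray(weights, dtype=float)
    stat = np.sum(w * g / s2) ** 2 / np.sum(w ** 2 / s2)
    return stat, chi2.sf(stat, 1)
```

With a single SNP and unit weight the statistic reduces to the usual squared *z*‐score, e.g. `grs_interaction_chi2([0.3], [0.1], [1.0])` gives 9.0.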
The current rationale consists in assuming that most interacting variants also display a marginal linear effect, and studies have therefore focused on GWAS hits; however, other screening methods can be used (Aschard, Zaitlen, Tamimi, Lindstrom, & Kraft, [2013](#gepi21989-bib-0008){ref-type="ref"}; Pare, Cook, Ridker, & Chasman, [2010](#gepi21989-bib-0042){ref-type="ref"}). Moreover, existing knowledge, such as functional annotation (Consortium, [2004](#gepi21989-bib-0015){ref-type="ref"}) or existing pathway databases (Kanehisa et al., [2014](#gepi21989-bib-0033){ref-type="ref"}), can be leveraged to refine the sets of SNPs to be aggregated into a GRS.

3. DISCUSSION {#gepi21989-sec-0070}
=============

Advancing knowledge of how genetic and environmental factors combine to influence human traits and diseases remains a key objective of research in human genetics. Ironically, the simplest and most parsimonious biological interaction models---those in which the effect of a genetic variant is either enhanced or decreased depending on a common exposure---are probably the most difficult to identify. Furthermore, the contribution of such interaction effects can be dramatically underestimated when measured as the drop in *r* ^2^ if the interaction term were removed from the model. Here, I argue for the use of new approaches and analytical strategies to address these concerns. This includes using methods such as the Pratt index to evaluate the relative importance of interaction effects in genetic association studies. These methods can highlight important modifiable exposures influencing genetic mechanisms, which could be neglected with the existing approach. Regarding detection, and besides increasing sample size, increasing power to detect interaction effects in future studies will likely rely mostly on leveraging additional assumptions on the underlying model.
In the big data era, where millions of genetic variants are measured alongside multiple environmental exposures and endo‐phenotypes, this means using multivariate models. A variety of powerful statistical tests can be devised assuming multiple environmental exposures interact with multiple genetic variants. As shown in this study, the application of such approaches can dramatically improve power to detect interactions that would be missed by standard univariate tests. While these methods come at the cost of decreased precision---i.e. a significant signal would point to multiple potential culprits---they can identify interaction effects that would potentially be of greater clinical relevance than univariate pairwise interactions (Aschard et al., [2012a](#gepi21989-bib-0004){ref-type="ref"}, [2015](#gepi21989-bib-0007){ref-type="ref"}; Qi et al., [2012](#gepi21989-bib-0047){ref-type="ref"}). Understanding the strengths and limitations of standard statistical methods is key to overcoming today\'s challenges in the identification of interaction effects in human traits and diseases. By deciphering the basic principles of interaction tests, this perspective aims to provide a comprehensive guideline for performing interaction effect analyses in genetic association studies and to open the path for future method development.

Supporting information
======================

Additional Supporting Information may be found online in the supporting information tab for this article.

###### 

Appendix A: Effect estimates from standardized and unstandardized predictors

Appendix B: Non‐centrality parameters for marginal and interaction models

Appendix C: Variance‐covariance for the GxE term and its estimated effect

Appendix D: Derivation of the Pratt index

Appendix E: Joint test of main and interaction effects

Appendix F: GRS‐based test, joint test and univariate test of multiple interaction effects

Figure S1.
Linear interaction effect across different coding schemes

Figure S2. Power comparison for linear regression

Figure S3. Power comparison for logistic regression

Figure S4. The Pratt index across multiple interactions

Figure S5. Joint test of main genetic effect and multiple interaction effects in a linear regression

Figure S6. Joint test of main genetic effect and multiple interaction effects in a logistic regression

Figure S7. GRS‐based statistic and meta‐analysis of single SNP estimates in linear regression

Figure S8. GRS‐based statistic and meta‐analysis of single SNP estimates in logistic regression

###### Click here for additional data file.

I am grateful to Peter Kraft, Noah Zaitlen, Ami Joshi, John Pratt, and Donald Halstead for helpful discussions and comments. I also thank Shafqat Ahmad, Paul Franks, and Qibin Qi for sharing details on their analyses of SNP by physical activity interaction and SNP by soda consumption interaction, respectively. This research was funded by NIH grant R21HG007687. The author has no conflict of interest to declare.
INFECTIOUS diseases are pervasive. So pervasive, in fact, that without effective mechanisms of resistance, host populations can be quickly reduced in size or even driven to extinction. For instance, chestnut blight effectively wiped out the American chestnut, which had little if any resistance to this novel pathogen, after its introduction to North America in the early 1900s ([@bib1]; [@bib2]). Similarly, when Myxoma virus was introduced to Australia in the 1950s, local rabbit populations were almost entirely susceptible, resulting in millions of deaths and the decimation of local populations ([@bib24]). Human populations, too, have been heavily affected by infectious disease in the past, perhaps most notably during the 1918 influenza pandemic that killed \>50 million people before fading away in 1920 ([@bib14]; [@bib27]). Although these examples are striking and demonstrate the impact of unchecked infectious disease, they are far from the norm. More commonly, host populations have effective mechanisms of resistance against pathogens they encounter regularly ([@bib25]), with significant variability between populations depending on their history of exposure ([@bib5]; [@bib30]). The existence of substantial variation in resistance to infectious disease within host populations has generated hope that it may be possible to identify the genes conferring resistance. Identifying such resistance genes would pave the way for genetic engineering of resistant crops and livestock, focus drug development efforts on likely targets, and open the door to gene therapeutic approaches within human populations. As the genomic revolution has progressed, it has become increasingly common to search for these "resistance genes" using genome-wide association studies (GWAS) ([@bib22]; [@bib26]). Loosely speaking, these studies compare the marker genotypes of individuals infected with disease and those uninfected and ask which loci predict an individual's infection status. 
The GWAS approach has now been used to successfully identify a range of candidate genes thought to be important in resistance to infectious disease in plants and animals ([@bib9]; [@bib15]; [@bib29]; [@bib32]; [@bib12]). Despite the successes of the GWAS approach in some cases, it is becoming increasingly recognized that the approach has significant limitations. For instance, GWAS are most powerful when resistance depends on common genetic variants with relatively large phenotypic effects ([@bib18]). In addition, which candidate genes are identified by this method may depend on the environment in which the study is conducted ([@bib28]). These limitations apply to GWAS in general, not just those studies focused on infectious disease, and are widely recognized. When GWAS are used to understand the genetic basis of resistance to infectious disease, however, a potentially more important problem arises. Specifically, the resistance genes identified within the host population may depend on the genetic composition of the infectious disease itself ([@bib22]). This sensitivity of the GWAS approach to the genetic composition of the infectious disease becomes acute any time genotype-by-genotype (G × G) interactions exist; in other words, when particular combinations of host and pathogen genes yield resistance whereas other combinations lead to susceptibility. These G × G interactions may have drastic effects on the results of genetic association studies and our understanding of disease resistance ([@bib17]), similar to the effects of gene-by-environment interactions. One particularly disconcerting possibility is that rapid pathogen evolution or host--pathogen coevolution will cause the host resistance genes that can be identified by GWAS to fluctuate rapidly over time. Here we quantitatively explore the performance of GWAS when resistance to infectious disease involves G × G interactions between host and disease. 
We begin by presenting a general mathematical model of an association study to investigate disease resistance and evaluate the role of G × G interactions for several forms of host--parasite interactions. We then simulate host--pathogen coevolution to illustrate the extent to which G × G interactions may vary across time and/or space. We conclude by reanalyzing published genome-wide association data ([@bib8]) of *Daphnia magna* resistance to its *Pasteuria ramosa* pathogen, distinguishing regions of the genome associated with overall health from those involved in resistance specific to a particular *P. ramosa* strain.

Model {#s1}
=====

We consider a scenario, common in practice, where host resistance is measured as a continuous quantitative trait. This would be the case, for instance, if host resistance is assessed by measuring viral load, duration of infection, or damage to host tissues. Our model assumes that host resistance depends on the value of a quantitative trait in the host, $z_{H},$ relative to the value of a quantitative trait in the pathogen, $z_{P}.$ Specifically, we assume host susceptibility, *S*, is given by the following function:$$S = f\left( {z_{H} - z_{P}} \right).$$The function *f* is sufficiently general to accommodate many commonly observed resistance mechanisms. For instance, in the interaction between the snail *Biomphalaria glabrata* and its trematode parasites, resistance depends on the relative quantities of reactive oxygen molecules in the snail ($z_{H}$) and reactive oxygen scavenging molecules produced by the parasite ($z_{P}$) ([@bib6]; [@bib20]). In cases like these, the function *f* may take a sigmoid form which we call the phenotypic-difference model ([Figure 1A](#fig1){ref-type="fig"}) ([@bib23]; [@bib3]):

Figure 1. Host--parasite interaction models. Susceptibility to infection as a function of the distance between host and pathogen phenotypes, $z_{H} - z_{P},$ for the (A) phenotypic-difference and (B) phenotypic-matching model.
Red curves show exact functions whereas black curves are the quadratic approximations.$$f\left( {z_{H} - z_{P}} \right) = \frac{1}{1 + e^{\alpha{({z_{H} - z_{P}})}}}.$$In contrast, in the interaction between the schistosome parasite, *Schistosoma mansoni*, and its snail host, *B. glabrata*, resistance depends on the degree to which the conformation of defensive FREP molecules produced by the snail ($z_{H}$) match the conformation of parasite mucin molecules ($z_{P}$) and successfully bind to them ([@bib19]). In such cases, the function *f* may take a Gaussian form which we call a phenotypic-matching model ([Figure 1B](#fig1){ref-type="fig"}) ([@bib16]):$$f\left( {z_{H} - z_{P}} \right) = e^{- \alpha{({z_{H} - z_{P}})}^{2}}.$$To study the effects of genetic interactions on susceptibility to infection, *S*, we must integrate genetics into our phenotypic model. For a haploid host and pathogen where $z_{H}$ and $z_{P}$ depend on $n_{H}$ and $n_{P}$ biallelic loci, respectively, we can write general expressions for these phenotypes as functions of alleles present in each species:$$\begin{matrix} {z_{H} = b_{H0} + {\sum\limits_{i = 1}^{n_{H}}{b_{Hi}X_{Hi}}} + {\sum\limits_{\substack{i,j \\ i \neq j}}^{n_{H}}{b_{Hi,Hj}X_{Hi}X_{Hj}}} + {\sum\limits_{\substack{i,j,k \\ i \neq j \neq k}}^{n_{H}}{b_{Hi,Hj,Hk}X_{Hi}X_{Hj}X_{Hk}}} + \ldots + \epsilon_{H}} \\ {z_{P} = b_{P0} + {\sum\limits_{i = 1}^{n_{P}}{b_{Pi}X_{Pi}}} + {\sum\limits_{\substack{i,j \\ i \neq j}}^{n_{P}}{b_{Pi,Pj}X_{Pi}X_{Pj}}} + {\sum\limits_{\substack{i,j,k \\ i \neq j \neq k}}^{n_{P}}{b_{Pi,Pj,Pk}X_{Pi}X_{Pj}X_{Pk}}} + \ldots + \mathit{\epsilon}_{P}} \\ \end{matrix}$$In these expressions, $X_{Mi}$ is an indicator variable describing the allelic state (0 or 1) of an individual of species *M* at locus *i*, $b_{M0}$ is the phenotype of an individual of species *M* with all "0" alleles, and $b_{Mi}$ is the additive effect of carrying a "1" allele at locus *i* in species *M*. 
The remaining coefficients ($b_{Mi,Mj},$ $b_{Mi,Mj,Mk},$ *etc*.) describe epistatic interactions among loci. Finally, $\epsilon_{M}$ captures an environmental contribution to the phenotype of species *M*, which is assumed to have mean 0, a constant variance, and be uncorrelated with an individual's phenotype. Substituting Equation 4 into Equation 1 yields a model of host susceptibility as a function of host and pathogen genotypes. Our goal now is to use this genetic model to predict the sensitivity of GWAS to the genetic composition of the pathogen population. We will explore both traditional, single-species GWAS approaches and a novel approach that takes genetic information from both host and pathogen into account (co-GWAS). Our investigation will rely on a pair of complementary approaches. First, we will develop and analyze analytical approximations that quantify the sensitivity of GWAS and co-GWAS approaches to changes in pathogen genotype frequencies. These analytical approximations will rely on simplified genotype--phenotype maps and will not explicitly integrate evolution and coevolution. Second, we will develop and analyze simulations that allow us to explore the consequences of rapid pathogen evolution and coevolution between the species on the performance of both GWAS and co-GWAS approaches. Analytical Approximation {#s2} ======================== To simplify the genetic model of resistance developed in the previous section sufficiently for mathematical analysis, we begin by considering the case where $n_{H} = n_{P} = 2.$ In addition, we assume that the phenotypes of host and pathogen are not too far from one another, such that the quantity $z_{H} - z_{P}$ is small relative to the extent of phenotypic specificity (*α* in Equations 2 and 3). Under this assumption, Equation 1 can be approximated by its second order Taylor series expansion. 
This allows the genetic model of susceptibility to be simplified to the following approximate expression:$$\begin{matrix} {S \approx f\left( 0 \right) + f^{\prime}\left( 0 \right)\left\lbrack {\left( {b_{H0} + b_{H1}X_{H1} + b_{H2}X_{H2} + \epsilon_{H}} \right) - \left( {b_{P0} + b_{P1}X_{P1} + b_{P2}X_{P2} + \epsilon_{P}} \right)} \right\rbrack} \\ {+ \frac{1}{2}f^{''}\left( 0 \right)\left\lbrack {\left( {b_{H0} + b_{H1}X_{H1} + b_{H2}X_{H2} + \epsilon_{H}} \right) - \left( {b_{P0} + b_{P1}X_{P1} + b_{P2}X_{P2} + \epsilon_{P}} \right)} \right\rbrack^{2} + \mathcal{O}\left\lbrack \left( {z_{H} - z_{P}} \right)^{3} \right\rbrack,} \\ \end{matrix}$$where primes indicate derivatives with respect to the distance between host and pathogen phenotypes. With (5) in hand, we have a model that predicts host resistance as a function of host and pathogen genotypes. In the following two sections, we will use (5) to investigate how the genetic composition of the pathogen population influences the results of GWAS and co-GWAS. Extending these models to complete G × G association studies requires a large number of pathogen loci ($n_{P} \gg 2$) and thus may be computationally prohibitive. For many pathogens, however, strain type or subtype may be known and capture much of the relevant genetic variation in the pathogen population. In these cases, tracking pathogen types can greatly reduce the effective number of loci, even to $n_{P} = 2$ as in Equation 5. Such simplifications should allow us to expand beyond two host loci to a whole host genome ($n_{H} \gg 2$), while avoiding the computational complexity of tracking all possible genetic interactions between the full host genome and the full parasite genome. Single-species GWAS {#s3} ------------------- We envision a standard GWAS where susceptibility to infection has been measured for some number of host individuals, each of which has also been genotyped at a large number of marker loci. 
To focus our model on the effects of species interactions, we will assume these data accurately provide us with the genotypes of individuals at the two host resistance loci. Using these data, the goal of the genetic association study is to partition variation in host susceptibility among these loci according to their effects. This can be done by fitting susceptibility with a linear combination of the genetic indicator variables:$$S \approx \beta_{H0} + \beta_{H1}X_{H1} + \beta_{H2}X_{H2} + \beta_{H1,H2}X_{H1}X_{H2},$$where the *β* coefficients can be found using least squares regression. The biological interpretation of this linear model is straightforward. The intercept coefficient, $\beta_{H0},$ is the expected host resistance when both 0 host alleles are present. The coefficients $\beta_{H1}$ and $\beta_{H2}$ are the inferred additive effects of the 1 alleles at the first and second loci, respectively, and $\beta_{H1,H2}$ captures the epistatic interaction between the two host 1 alleles. Solving for the coefficients in (6) we have (see Supplemental Material, *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)):$$\begin{matrix} {\beta_{H0} = f\left( 0 \right) + f^{\prime}\left( 0 \right)\left\lbrack {{\overset{\sim}{b}}_{H0}-\left( {{\overset{\sim}{b}}_{P0} + b_{P1}q_{P1} + b_{P2}q_{P2}} \right)} \right\rbrack + \frac{1}{2}f^{''}\left( 0 \right)\left\{ {\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)^{2} +} \right.} \\ \left.
{b_{P1}^{2}q_{P1} + b_{P2}^{2}q_{P2} + 2\left\lbrack {\left( {b_{P1}q_{P1} + b_{P2}q_{P2}} \right)\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right) - b_{P1}b_{P2}\left( {q_{P1}q_{P2} + D_{P}} \right)} \right\rbrack} \right\} \\ {\beta_{Hi} = f^{\prime}\left( 0 \right)b_{Hi} + \frac{1}{2}f^{''}\left( 0 \right)\left\lbrack {b_{Hi}^{2} + 2b_{Hi}\left( {b_{H0} - b_{P0} - q_{P1}b_{P1} - q_{P2}b_{P2}} \right)} \right\rbrack} \\ {\beta_{H1,H2} = f^{''}\left( 0 \right)b_{H1}b_{H2},} \\ \end{matrix}$$for *i* = {1,2}, where $f\left( 0 \right),$ $f^{\prime}\left( 0 \right),$ and $f^{''}\left( 0 \right)$ are the resistance function and its first and second derivative evaluated at 0 as in Equation 5, and where ${\overset{\sim}{b}}_{H0} = b_{H0} + \epsilon_{H}$ and ${\overset{\sim}{b}}_{P0} = b_{P0} + \epsilon_{P}.$ Importantly, these expressions for the coefficients depend on the allele frequency at the pathogen loci, $q_{P1}$ and $q_{P2},$ as well as the linkage disequilibrium between them, $D_{P}.$ Note that the relevant allele frequencies and linkage disequilibrium are among pathogens to which the host is exposed, which may not be equivalent to the pathogen population as a whole. As a result of the dependence of the coefficients in (7) on the pathogen allele frequencies and linkage disequilibrium, the allelic effects (*β*'s) inferred by a host-only GWAS can be quite sensitive to the genetic composition of the pathogen population ([Figure 2](#fig2){ref-type="fig"}). Changes in pathogen allele frequency can alter the magnitude and sign of the inferred effects. From a practical standpoint, if susceptibility is assayed in two host populations that are exposed to pathogen populations that differ greatly in their allele frequencies, one may find a host allele has a protective effect in one population but increases risk in the other. 
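This sensitivity can be made concrete with a short sketch (ours, in Python rather than the paper's *Mathematica*) that evaluates the $\beta_{Hi}$ expression of Equation 7 under the phenotypic-matching model, for which $f(0) = 1,$ $f^{\prime}(0) = 0,$ and $f^{''}(0) = -2\alpha.$ The effect sizes below are illustrative choices, not values from the paper:

```python
def beta_Hi(b_Hi, b_H0, b_P0, b_P1, b_P2, q_P1, q_P2, alpha=1.0):
    """Inferred additive effect of host locus i in a host-only GWAS (Equation 7),
    specialized to the phenotypic-matching model: f'(0) = 0, f''(0) = -2*alpha."""
    f_prime, f_double_prime = 0.0, -2.0 * alpha
    return (f_prime * b_Hi
            + 0.5 * f_double_prime
            * (b_Hi**2 + 2 * b_Hi * (b_H0 - b_P0 - q_P1 * b_P1 - q_P2 * b_P2)))

# Identical host genetics scored against two pathogen populations that differ
# only in allele frequency: the inferred effect flips sign.
rare   = beta_Hi(0.5, b_H0=0.0, b_P0=0.0, b_P1=0.5, b_P2=0.5, q_P1=0.0, q_P2=0.0)
common = beta_Hi(0.5, b_H0=0.0, b_P0=0.0, b_P1=0.5, b_P2=0.5, q_P1=1.0, q_P2=1.0)
print(rare, common)  # rare < 0 < common
```

With these (hypothetical) parameters, the same host allele appears protective when the pathogen "1" alleles are rare but appears to increase risk when they are common, exactly the sign flip described above.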
Similar to hidden host population structure, uncontrolled differences in the pathogen population can greatly alter the inferences of single-species GWAS. ![Host-only model with resistance dependent on phenotypic differences (A,C) or phenotypic matching (B,D) between hosts and parasites. (A and B) Allelic effects inferred using the host-only design from Equation 6: $\beta_{0}$ (black), $\beta_{H1}$ and $\beta_{H2}$ (solid red lines), $\beta_{H1,H2}$ (dashed red). (C and D) Variation explained by host additive effects only (solid line), and host additive and epistatic effects (dashed line) as given by the host-only model in (6).](779fig2){#fig2} A second result that can be drawn from Equation 7 is that when the resistance function is approximately linear, $f^{''}\left( 0 \right) = 0,$ the inferred additive and epistatic effects, $\beta_{H1},\ \beta_{H2},$ and $\beta_{H1,H2}$ are independent of the pathogen allele frequencies. For example, in contrast to the nonlinear phenotypic-matching model where the inferred effects vary with pathogen allele frequency, the inferred effects remain constant in the approximately linear phenotypic-difference model ([Figure 2](#fig2){ref-type="fig"}). A third conclusion from Equation 7 is that, at least under the assumption that $z_{H} - z_{P}$ is small, the epistatic interaction between the host loci, $\beta_{H1,H2},$ is independent of pathogen genetics. We will explore the consequences of this dependence on the pathogen allele frequencies for the stability of GWAS-inferred effects across evolutionary time (See the *Host--Parasite Coevolution* section below). In addition to identifying the allelic effects on host resistance, an important metric of GWAS performance is the proportion of phenotypic variation explained by the identified causative loci. 
Given the dependence of the estimated allelic effects on pathogen allele frequencies, we calculated the total phenotypic variation explained by the host loci across the range of pathogen allele frequencies ([Figure 2, C and D](#fig2){ref-type="fig"}). When the pathogen population is monomorphic ($q_{P1} = q_{P2} = 0\,\text{or}\, 1$), the host loci can explain 100% of the genetic variation in the phenotype. If the pathogen population is polymorphic, however, the host-only approach may explain as little as 10% of the variation. Partitioning the total variation explained into the additive and epistatic contributions demonstrates that, due to changes in the inferred additive effect size $\beta_{Hi},$ the relative contribution of additive and epistatic effects also varies with pathogen allele frequency and depends on the form of the host--parasite interaction. Two-species co-GWAS {#s4} ------------------- The results derived in the previous section demonstrate that traditional single-species GWAS may be sensitive to the genetic composition of the pathogen population at loci involved in host--pathogen specificity. In this section, we attempt to overcome this problem by developing an alternative GWAS design in which both host and pathogen genetics are incorporated. In contrast to the traditional method where only host genotypes are recorded, this design requires that both host and pathogen genotypes are known.
As with Equation 6, we now attempt to fit host resistance as a linear function of the allelic indicator variables, but we include pathogen indicators as well as interaction terms between host and pathogen loci:$$\begin{matrix} {S \approx \beta_{0} + \beta_{H1}X_{H1} + \beta_{H2}X_{H2} + \beta_{H1,H2}X_{H1}X_{H2} + \beta_{P1}X_{P1} + \beta_{P2}X_{P2} + \beta_{P1,P2}X_{P1}X_{P2}} \\ {+ \beta_{H1,P1}X_{H1}X_{P1} + \beta_{H1,P2}X_{H1}X_{P2} + \beta_{H2,P1}X_{H2}X_{P1} + \beta_{H2,P2}X_{H2}X_{P2}.} \\ \end{matrix}$$As with Equation 6, the coefficients of this equation have straightforward biological interpretations. The intercept, $\beta_{0},$ describes the expected host resistance when all host and pathogen loci have 0 alleles. Terms 2, 3, 5, and 6 describe the additive effects of each individual host and pathogen 1 allele; and terms 4 and 7 describe the epistatic interactions between loci within the host and pathogen, respectively. The remaining four terms describe the G × G interactions between pairs of host and pathogen loci. Despite the complexity of Equation 8, and hence the logistical and computational challenges of applying it, the expressions for each of these coefficients in terms of the host and pathogen phenotypic effects are simple (see *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)):$$\begin{matrix} {\beta_{0} = f\left( 0 \right) + f^{\prime}\left( 0 \right)\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right) + \frac{1}{2}f^{''}\left( 0 \right)\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)^{2}} \\ {\beta_{Hi} = f^{\prime}\left( 0 \right)b_{Hi} + \frac{1}{2}f^{''}\left( 0 \right)b_{Hi}\left\lbrack {b_{Hi} + 2\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)} \right\rbrack\quad\text{for}\ i = \left\{ {1,2} \right\}} \\ {\beta_{H1,H2} = f^{''}\left( 0 \right)b_{H1}b_{H2}} \\ {\beta_{Pi} = - f^{\prime}\left( 0 \right)b_{Pi} + \frac{1}{2}f^{''}\left( 0 \right)b_{Pi}\left\lbrack {b_{Pi} - 2\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)} \right\rbrack\quad\text{for}\ i = \left\{ {1,2} \right\}} \\ {\beta_{P1,P2} = f^{''}\left( 0 \right)b_{P1}b_{P2}} \\ {\beta_{Hi,Pj} = - f^{''}\left( 0 \right)b_{Hi}b_{Pj}\quad\text{for}\ i = \left\{ {1,2} \right\},\ j = \left\{ {1,2} \right\}.} \\ \end{matrix}$$
Comparing the equations in (9) with the coefficients in (7) reveals an important conclusion: the effect sizes no longer depend on either the pathogen allele frequencies or the linkage disequilibrium between them ([Figure 3, A and B](#fig3){ref-type="fig"}). This result suggests that the two-species, co-GWAS approach is more robust to changes in the genetic composition of the pathogen population and thus may be much less sensitive to rapid evolution and spatial genetic structuring within the pathogen population. ![Host--pathogen model with phenotypic-difference (A,C) or phenotypic-matching (B,D) based resistance. (A and B) Allelic effects inferred using the host-parasite design from Equation 8: $\beta_{0}$ (black), $\beta_{Hi}$ (solid red), $\beta_{H1,H2}$ (dashed red), $\beta_{Pi}$ (solid blue), $\beta_{P1,P2}$ (dashed blue), and $\beta_{Hi,Pj}$ (dashed purple). (C and D) Variation explained by host additive effects only (solid red), host additive and epistatic effects (dashed red), host and pathogen additive and epistatic effects (dashed blue), and a full host--pathogen model as given in Equation 8.](779fig3){#fig3} In addition to stabilizing the estimated allelic effects across pathogen allele frequencies, the total phenotypic variation explained by the co-GWAS greatly exceeds that of the host-only GWAS. For the two-locus case explored here, the co-GWAS approach can explain 100% of the variation regardless of pathogen allele frequency ([Figure 3, C and D](#fig3){ref-type="fig"}). The contributions of additive, epistatic, and G × G interactions do, however, vary with pathogen allele frequency. As with the host-only approach, when the pathogen population is monomorphic the host effects explain all of the observed phenotypic variation. In summary, unlike the host-only model, the effect size coefficients (Equation 9) and the total variation explained no longer vary with pathogen allele frequency.
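As a numerical check on this independence, the sketch below (ours; illustrative effect sizes, not values from the paper) builds the co-GWAS coefficients for the phenotypic-matching model and confirms that the linear model of Equation 8 reproduces the second-order susceptibility of Equation 5 exactly for every host--pathogen genotype combination, with no pathogen allele frequency appearing anywhere:

```python
from itertools import product

# Phenotypic-matching model near zero distance: f(0)=1, f'(0)=0, f''(0)=-2*alpha.
alpha = 1.0
f0, fp, fpp = 1.0, 0.0, -2.0 * alpha
bH0, bH1, bH2 = 0.1, 0.4, 0.3   # illustrative host effect sizes (our choice)
bP0, bP1, bP2 = 0.0, 0.5, 0.2   # illustrative pathogen effect sizes

d0 = bH0 - bP0  # baseline phenotypic distance
beta0  = f0 + fp * d0 + 0.5 * fpp * d0**2
betaH  = [fp * b + 0.5 * fpp * b * (b + 2 * d0) for b in (bH1, bH2)]
betaP  = [-fp * b + 0.5 * fpp * b * (b - 2 * d0) for b in (bP1, bP2)]
betaHH = fpp * bH1 * bH2
betaPP = fpp * bP1 * bP2
betaHP = [[-fpp * bh * bp for bp in (bP1, bP2)] for bh in (bH1, bH2)]

def S_quadratic(x1, x2, y1, y2):
    """Susceptibility from the second-order Taylor expansion (Equation 5)."""
    d = d0 + bH1 * x1 + bH2 * x2 - bP1 * y1 - bP2 * y2
    return f0 + fp * d + 0.5 * fpp * d**2

def S_coGWAS(x1, x2, y1, y2):
    """The regression model of Equation 8 with the coefficients derived above."""
    xs, ys = (x1, x2), (y1, y2)
    s = beta0 + betaH[0] * x1 + betaH[1] * x2 + betaHH * x1 * x2
    s += betaP[0] * y1 + betaP[1] * y2 + betaPP * y1 * y2
    s += sum(betaHP[i][j] * xs[i] * ys[j] for i in range(2) for j in range(2))
    return s

# The two agree for all 16 host-pathogen genotype combinations.
max_err = max(abs(S_quadratic(*g) - S_coGWAS(*g)) for g in product((0, 1), repeat=4))
```

Because the coefficients are built from effect sizes alone, no re-estimation is needed when pathogen allele frequencies shift, which is the robustness claimed above.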
This contrast between the host-only and co-GWAS approaches is particularly relevant any time the composition of the pathogen population is likely to differ between the sample used for the association study and the population in which the resulting inferences are applied. In the following section we explore how temporal changes in the host and pathogen populations driven by coevolution affect the reproducibility of GWAS over time and, by extension, space. Host--Parasite Coevolution {#s5} ========================== To simulate host--parasite coevolution, we envision a system where each host comes into contact with a single parasite each generation. The probability that this contact results in infection is determined by host susceptibility, *S*, which is a function of the host and parasite genotype. Infected hosts experience a fitness cost $\xi_{H},$ whereas their infecting parasites receive a fitness benefit $\xi_{P}.$ In the absence of infection, both hosts and parasites have a fitness of 1. Together, these assumptions lead to the following fitness of a host with genotype $\left\{ {X_{H1},X_{H2}} \right\}$ that comes into contact with a pathogen with genotype $\left\{ {X_{P1},X_{P2}} \right\}:$$$W_{H} = 1 - \xi_{H}S\left( {X_{H1},X_{H2},X_{P1},X_{P2}} \right);$$whereas the pathogen has a fitness of$$W_{P} = 1 + \xi_{P}S\left( {X_{H1},X_{H2},X_{P1},X_{P2}} \right).$$Given these fitnesses, we simulate allele frequencies and linkage disequilibrium over time assuming random mating, a per-locus mutation rate of μ, and a recombination rate *r* (see *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)). We then use Equations 7 and 9 to calculate the inferred allelic effect sizes by using a host-only GWAS or co-GWAS for each generation over the course of coevolution for both the phenotypic-difference and phenotypic-matching models ([Figure 4](#fig4){ref-type="fig"}).
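A deliberately minimal sketch (ours, not the paper's *Mathematica* simulation) of the coevolutionary recursions implied by Equations 10 and 11 is given below, simplified to one biallelic locus per species, random encounters, deterministic dynamics, and no mutation or recombination; the selection coefficients $\xi_{H} = \xi_{P} = 0.2$ are arbitrary choices:

```python
import math

def S_match(x_h, x_p, alpha=2.0):
    """Phenotypic-matching susceptibility (Equation 3) with z set to the allelic state."""
    return math.exp(-alpha * (x_h - x_p) ** 2)

def generation(p, q, xi_h=0.2, xi_p=0.2):
    """One generation; p = frequency of the host '1' allele, q = pathogen '1' allele."""
    # Marginal fitness of each host allele, averaging over pathogen encounters (Eq. 10).
    w_h = [1 - xi_h * (q * S_match(x, 1) + (1 - q) * S_match(x, 0)) for x in (0, 1)]
    # Marginal fitness of each pathogen allele (Eq. 11).
    w_p = [1 + xi_p * (p * S_match(1, y) + (1 - p) * S_match(0, y)) for y in (0, 1)]
    p_next = p * w_h[1] / (p * w_h[1] + (1 - p) * w_h[0])
    q_next = q * w_p[1] / (q * w_p[1] + (1 - q) * w_p[0])
    return p_next, q_next

p, q = 0.5, 0.1
trajectory = [(p, q)]
for _ in range(100):
    p, q = generation(p, q)
    trajectory.append((p, q))
```

Starting from these initial frequencies, the host allele that mismatches the common pathogen type is favored, after which the pathogen chases the host, producing the Red Queen-style oscillations that drive the temporal instability of host-only GWAS estimates discussed above.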
![Allelic effects over coevolutionary time. Top row: Phenotypes $z_{H}$ (red) and $z_{P}$ (blue) simulated over coevolutionary time in the phenotypic-difference (left) and phenotypic-matching models (right). Middle row: Coefficients estimated under the host-only model (7) (black is $\beta_{0},$ solid red is $\beta_{Hi},$ dashed red is $\beta_{H1,H2}$). Bottom row: Coefficients estimated under the host--pathogen model (9) (black is $\beta_{0},$ solid red is $\beta_{Hi},$ dashed red is $\beta_{H1,H2},$ blue is $\beta_{Pi},$ dashed blue is $\beta_{P1,P2},$ purple dashed is $\beta_{Hi,Pj}$). Because epistatic and G × G interactions are absent in the phenotypic-difference model, their allelic effects all overlap at 0 and hence are not all visible.](779fig4){#fig4} As expected, using the host-only GWAS approach, the inferred allelic effects can vary over time but only under the quadratic-shaped, phenotypic-matching model. As noted above, the estimated effects can even change sign, having large positive values when sampled in one generation and large negative values when sampled only a few generations later. In contrast, the inferred effects remain constant in the co-GWAS approach regardless of the coevolutionary model. In terms of the phenotypic variation explained, the host-only approach explains only a portion of genetically determined phenotypic variation, whereas the co-GWAS approach can explain up to 100%. The contribution of different genetic components to the total variation explained remains approximately constant under the phenotypic-difference model but varies rapidly as allele frequency changes in the phenotypic-matching model. 
Data availability {#s6} ----------------- The analysis, numerical simulations, and scripts to generate the original figures were coded in Wolfram *Mathematica* 11 ([File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)) and are available for download from the Dryad Digital Repository (DOI: <https://doi.org/10.5061/dryad.tb25q>). Daphnia--Pasteuria GWAS {#s7} ======================= Taken together, our analytical model and simulations illustrate that incorporating pathogen genetic information into the search for disease genes can greatly increase the explanatory power and repeatability of genome scans. Testing these theoretical predictions with biological data is a critical step in evaluating the power of the co-GWAS approach relative to a traditional single-species GWAS. Analysis of biological data will include several complications that we ignored above, including finite sample sizes, arbitrary forms of coevolutionary interactions, and complex genomic architectures. Unfortunately, we know of no studies that include full host and parasite genomic data as well as the outcome of infection experiments. Further, the computational tools to perform a co-GWAS in the form of Equation 8 do not yet exist. We can, however, use recently published data by [@bib8] on the susceptibility of *D. magna* to two *P. ramosa* strains, C1 and C19, as a preliminary test of our analytical predictions. In particular, we compare the results of genome scans for C1 and C19 susceptibility analyzed separately to a single genome scan for susceptibility using all the data but ignoring pathogen strain type. Our analytical model predicts that, despite having half the sample size, the separate genome scans for C1 and C19 resistance should reveal loci that determine host--parasite specificity, whereas the full data scan will have lower power to do so. 
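The intuition behind this prediction can be illustrated with a toy calculation (ours; the effect sizes are hypothetical, not from the *Daphnia* data): a host allele with opposite effects on susceptibility to two strains has a near-zero marginal effect once the strains are pooled, so pooling hides exactly the loci that mediate specificity.

```python
def susceptibility(x, strain):
    """Hypothetical strain-specific effects: allele 1 protects against C1
    but predisposes to C19 (an assumed, illustrative genotype-phenotype map)."""
    return 0.5 + (-0.3 if strain == "C1" else 0.3) * x

effect_C1  = susceptibility(1, "C1")  - susceptibility(0, "C1")   # negative
effect_C19 = susceptibility(1, "C19") - susceptibility(0, "C19")  # positive
pooled = 0.5 * (effect_C1 + effect_C19)  # effect in a 50:50 pooled sample: ~0
```

In the pooled scan such a locus contributes almost no marginal signal despite having a large effect in each strain-specific scan, which is why the strain-stratified analyses are predicted to retain more power at specificity loci.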
Note that strain type captures almost all of the relevant genetic information in this case, given that the parasite is clonal. The original data set, provided on Dryad by the authors ([@bib8]), sampled 97 *D. magna* clones from three distinct geographic regions---1 site in Germany, 1 in Switzerland, and 11 sites in Finland---and provided the sequence at 6403 SNPs. Host susceptibility (S: susceptible; R: resistant) to infection by each *P. ramosa* strain, C1 and C19, was determined by assessing whether fluorescently labeled spores attached to the host's esophagus ([@bib11]). All four possible combinations of susceptibility and resistance to the two strains (SS, SR, RS, and RR) were present. By performing two separate association studies, one for each strain, [@bib8] used this experimental design to identify genomic regions associated with susceptibility to a specific parasite strain. Following the methods in the original work, we compare their results to a third genome scan including all the data, a total of 194 samples, ignoring the *Pasteuria* strain type tested. All genome scans were performed using the R package GenABEL, adjusting for population structure and repeated measures of the same host genotype using the EIGENSTRAT method ([@bib4]). To accurately assess which genomic regions are associated with susceptibility to C1, C19, and/or "overall" susceptibility from the complete data set, we used the *Daphnia* genetic map constructed by [@bib10] to array the scaffolds into 10 linkage groups. To limit the detection of false positives, we followed an approach analogous to that used in [@bib8] where SNPs were only considered significantly associated with a given susceptibility phenotype if there existed four SNPs in a 10-cM region with a log-likelihood score \>2 ([Figure 5](#fig5){ref-type="fig"}). Multiple genomic regions are significantly associated with susceptibility to C1, C19, and to disease susceptibility in the complete data set without strain information.
Four linkage groups (4, 5, 7, and 9), with a total of 28 significant SNPs, are associated with C1 susceptibility. Three linkage groups (1, 4, and 7) with 38 SNPs are associated with C19 susceptibility, and two linkage groups (4 and 5) with 35 SNPs are associated with susceptibility in the complete data set. Thus, while the complete data set has twice as many measures of disease susceptibility, it has less power to detect genetic regions underlying disease susceptibility because of the lack of parasite information. ![GWAS of *D. magna* susceptibility. Genetic associations of each SNP with C1 (red ●), C19 (blue ▪), and overall susceptibility in the complete data set without parasite-type information (green ▴). Hence, each SNP is represented three times, once for each genome-wide scan. Note that closely linked SNPs often overlap with one another and are not all individually visible. Significant SNPs are shown in color while those below the log-likelihood threshold of two or that are not clustered within a 10-cM region of three other significant SNPs are shown in gray. The 10 linkage groups are delineated by vertical dashed lines.](779fig5){#fig5} The contrast between the associations for C1 and C19 susceptibility and those for overall susceptibility in the complete data set provides additional information about the nature of the genetic basis of resistance. Genomic regions associated with the overall resistance regardless of parasite type, particularly when these regions are also associated with C1 and C19 resistance, provide increased resistance regardless of the parasite strain tested and are consistent with general host health and nonspecific immune response. By contrast, sites that are not associated with overall resistance---despite the data set having twice the size---but are associated with either C1 or C19, are good candidates for loci that act in a parasite-specific manner.
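The clustering criterion described above can be sketched as a simple filter (our reading of the rule: a SNP is retained only if at least four above-threshold SNPs, itself included, fall within a 10-cM window):

```python
def clustered_significant(positions_cM, scores, threshold=2.0,
                          window_cM=10.0, min_cluster=4):
    """Return positions of SNPs passing both the log-likelihood threshold and
    the clustering rule: >= min_cluster above-threshold SNPs within window_cM."""
    candidates = [p for p, s in zip(positions_cM, scores) if s > threshold]
    hits = []
    for p, s in zip(positions_cM, scores):
        if s > threshold:
            # Count includes the focal SNP itself.
            neighbors = sum(1 for c in candidates if abs(c - p) <= window_cM)
            if neighbors >= min_cluster:
                hits.append(p)
    return hits
```

An isolated high-scoring SNP is discarded under this rule, whereas a run of four nearby high-scoring SNPs is retained, which is how the analysis trades single-SNP false positives for regional signals.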
Examining [Figure 5](#fig5){ref-type="fig"}, we therefore conclude that linkage group 4 and possibly 5 are involved in general health and resistance. In contrast, the regions on the far left and right of linkage group 7 as well as the regions on linkage groups 1 and 9, which are associated only with C1 or C19 resistance, are indicative of parasite-specific resistance loci. These conclusions are in agreement with the hypothesized model and previous molecular work on *Daphnia* resistance to *Pasteuria*. In particular, resistance to *Pasteuria* is hypothesized to be controlled by a three-locus, matching-alleles system. One of these loci (the C locus) determines overall host susceptibility regardless of pathogen strain and is thought to reside on linkage group 4 ([@bib7]). In the absence of protection from the C locus, a second "A locus" is thought to confer resistance to C1 when the dominant allele is present. The regions detected on linkage groups 7 and 9 in the hosts exposed to C1 are thus candidates for such C1-specific resistance. Finally, if the C locus and A locus are both homozygous recessive, a third "B locus" determines susceptibility to the C19 strain. Such a locus would likely be hard to detect in a GWAS due to epistasis between the A, B, and C loci; nevertheless, the regions associated with only C19 resistance (on linkage groups 1 and 7) would be candidates for such a B locus. Overall we conclude that significant SNPs obtained without accounting for parasite type may signal general health status. Against this background, a co-GWAS can help identify genomic regions that are likely critical to host--parasite specificity and variation in host susceptibility. Discussion {#s8} ========== Identifying genes that determine a host's susceptibility to infection is a promising frontier with a wide range of applications, including agriculture and human health.
Yet, as our mathematical models demonstrate, association studies focusing on identifying genes in a single species without accounting for the genetics of the interacting species can drastically affect our ability to detect disease genes involved in host--pathogen specificity and limit our ability to account for the genetic variation in disease susceptibility. When the genetic composition of the pathogen population varies over time and/or space, this can further lead to inconsistencies in the results of genetic association studies. Finally, using previously published data on *D. magna* resistance to its *Pasteuria* parasite, we illustrate that performing association studies with and without information about pathogen type can be used to distinguish genomic regions affecting general *vs.* specific resistance to pathogens. Consistent with current models for *Daphnia*--*Pasteuria* interactions, we identify one region associated with general health as well as candidate regions more directly involved in mediating host--pathogen specificity. The mathematical analysis presented above focuses on host--pathogen interactions of a specific form, given by Equation 1. Although we have relied on an approximation that assumes weak phenotype differences, *i.e.*, $z_{H} - z_{P}$ is small, we postulate that the power to detect strain-specific resistance genes will be increased whenever parasite information is incorporated, even when genes have major effects and phenotypic differences become large. Similarly, the methods used above can be extended to include alternative interaction types such as a "matching-alleles" interaction (see *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)). The expressions for the *β* coefficients under this interaction model are unruly and difficult to interpret.
Using a numerical approach, we observe that once again G × G interactions can explain a significant proportion of the variation in susceptibility ([Figure S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FigureS1.pdf) available on Dryad), particularly in highly variable pathogen populations. In contrast to the phenotypic-difference and phenotypic-matching models, however, under this matching-alleles interaction the co-GWAS approach (Equation 8) no longer explains all of the variation in susceptibility, and the coefficients vary with pathogen allele frequency. This is a result of higher order interactions not included in our model. Hence, although the co-GWAS approach performs significantly better than a single-species approach, it will not always capture the full genetic basis of infection because of the second order approximation used in Equation 8. Regardless of the form of the interaction, our analytical models and simulations illustrate that incorporating pathogen genetics into the search for disease genes can greatly increase the explanatory power and repeatability of genome scans. Unfortunately, several logistical and computational challenges preclude applying a full two-species GWAS. Most notably, such a design requires additional genetic data that is not currently available. More specifically, this design requires genotyping all hosts and the pathogens to which they are exposed, not just the host--parasite combinations observed in infected individuals. Future exploration is warranted to determine whether uninfected individuals can simply be treated as unknown with respect to pathogen exposure, and what the consequences of doing so would be for the statistical power of our approach. The complexity of the two-species design (Equation 8) relative to that of a single-species design (Equation 6) also introduces computational challenges.
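A back-of-envelope count of the regression coefficients (our sketch, not a calculation from the paper) makes the scale of this challenge concrete: the number of terms in a design with all additive, pairwise within-species, and pairwise between-species interactions grows quadratically in the number of loci.

```python
from math import comb

def n_coefficients(n_H, n_P):
    """Number of regression coefficients in a two-species design with all additive,
    pairwise within-species, and pairwise between-species (G x G) terms, as in Eq. 8."""
    return (1                              # intercept
            + n_H + n_P                    # additive effects
            + comb(n_H, 2) + comb(n_P, 2)  # within-species epistasis
            + n_H * n_P)                   # between-species G x G interactions

print(n_coefficients(2, 2))         # the 11 coefficients of Equation 8
print(n_coefficients(10_000, 100))  # over 5 x 10^7 at modest genomic scales
```

Even with only 10,000 host markers and 100 pathogen markers, the design already has tens of millions of coefficients, which is why reducing the pathogen to a small number of strain types is so valuable.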
In addition to requiring larger sample sizes, estimating the effects of the large number of potential G × G interactions in a full host--genome by parasite--genome study is computationally unrealistic. Beyond the large number of pairwise interactions between hosts and pathogens, depending on the form of the interaction, higher order genetic interactions may be necessary to fully explain the variation in susceptibility. These higher order interactions can be particularly important as the number of loci underlying susceptibility, $n_{H}$ and $n_{P},$ increases. Although incorporating complete pathogen genetic data may be infeasible, there often exists some form of pathogen typing, which is largely indicative of the pathogen's genotype and may be sufficient for the purposes of a host genome-wide scan. For example, despite its vast diversity, Hepatitis C virus has been subdivided into seven genotypes ([@bib13]; [@bib21]), which may capture much of the relevant variation in host susceptibility. The *Daphnia*--*Pasteuria* data set we analyzed provides a valuable test case for a two-species co-GWAS. In this study, we know exactly to which pathogen type individuals have been exposed, which is generally not known in natural populations. This information may have increased the power of the study to detect loci underlying C1 and C19 susceptibility. Despite this increased power, we chose to use the arguably lenient significance threshold of a log-likelihood score \>2 plus clustering of four or more SNPs, as in the original article. Requiring more stringent threshold corrections for multiple testing, such as a Bonferroni correction, does not yield any significant SNPs. Given the correspondence between the GWAS results and those of functional studies ([@bib7]), however, many of the observed SNPs are arguably not false positives.
Using the log-likelihood threshold of two combined with the clustering criterion, we observe fewer genomic regions associated with overall susceptibility when parasite information is not incorporated than when conducting GWAS with exposure to either C1 or C19, despite the complete data set containing twice the number of data points. As an alternative to analyzing the complete data set, we could hold the sample size constant in a combined analysis by randomly choosing whether the host was exposed to C1 or to C19 for each host genotype ([Figure S2](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FigureS2.pdf) available on Dryad). Interestingly, this "mixed" GWAS not only identifies the same regions on linkage groups 4 and 5 but also identifies regions on linkage groups 1, 9, and 10, as found in the single pathogen-type GWAS. The fact that this mixed analysis picks up some of the potentially parasite-specific loci is likely due to randomly sampling an excess of C1- or C19-tested clones. Consistent with this interpretation, exactly which parasite-specific regions are identified varies with the random sample chosen. Nevertheless, as with the complete data set, a comparison between C1, C19, and mixed susceptibility provides additional information about which genes are involved in general health *vs.* parasite-specific susceptibility. The results presented here highlight several important avenues for future research. First and foremost, designing genome-wide association methods that allow for G × G interactions is critically important, as is the collection of genotypic data from hosts and pathogens. This could be approached, for example, by adapting GWAS designs and analyses used to detect gene-by-environment interactions ([@bib31]). Recognizing the importance of host--pathogen genetic interactions is also key to understanding the applicability and limitations of single-species association scans.
Developing metrics that capture relevant variability in host and pathogen populations may facilitate the application of these results. Finally, incorporating G × G interactions into our association studies will also enable us to understand what mathematical models of host--parasite interactions best predict the genetic interactions observed in natural systems, allowing for further refinements of the models. Supplementary Material {#s9} ====================== Supplemental material is available online at [www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1). We thank Matt Osmond and two anonymous reviewers for their many helpful suggestions that improved this manuscript. This project was supported by a fellowship from the University of British Columbia to A.M., a National Science Foundation grant to S.L.N. (DEB 1450653), and a Natural Sciences and Engineering Research Council of Canada grant to S.P.O. (RGPIN-2016-03711). Communicating editor: W. Stephan
Previously, in a supermarket or a retail store, an electronic device such as a POS (Point of Sale) terminal having a screen for the operator and a screen for the customer has been used to conduct merchandise sales data processing. The POS terminal is arranged in a checkout area and used by a cashier to conduct sales processing; a keyboard is arranged on the upper side of its main body. Near the middle of the upper side of the main body, a display section for the salesclerk is arranged so that it faces the keyboard side and its angle can be adjusted in the vertical direction via a shaft. Near the rear end of the upper side of the main body, a display section for the customer is arranged so that it faces the side opposite the display section for the salesclerk and its angle can be adjusted in the horizontal direction via a shaft. In the interior of the main body of the POS terminal, in the vicinity of the keyboard, a receipt printer and a journal printer are arranged side by side in parallel at the upper surface of the main body. A cover used for loading a continuous-feed paper into the receipt printer and a cover used for loading a continuous-feed paper into the journal printer are arranged side by side in the upper surface of the main body. In addition, a receipt outlet of the receipt printer is arranged in the cover.
Monday, May 16, 2005

The Germans... The Germans!!!

"Yes, You Did! You Invaded Poland!"

OK, I couldn't resist tossing in an inappropriate (but extremely funny) Fawlty Towers quote into my little screed against German Nationalists. To get the whole scoop, see the episode titled, by coincidence, 'The Germans'.

Why go nuts over Nazis? 'Cause the poxy Krauts have invaded my email! They apparently needed some lebensraum in my inbox, because the anschluss started this weekend. About 10 times a day for the past zwei days, I get another random piece of email from somewhere, all in German, and all having something to do with the regrettable state of affairs in Der Vaterland.

I'm not really sure what these emails say. All the Hochdeutsch I know, I learned from war movies. This does me little good with emails or conversing with actual Germans, mind you, but it does do wonders for scaring the odd Frenchman I run across. Usually, I get to keep whatever they drop as they hightail it in the opposite direction.

I tried running these emails through Sherlock, which has a pretty good translator, but I'm thinking these von Ribbentrop wannabes are using a lot of slang, and the resulting translations are kind of garbled. Here's an example:

To understand that yearning of ethnic German with landsmen in Germany becomes very quite suitably again intercoursing, must registered understand a deep of the moosehead feeling in the national socialistic concept of people relationship in the most logical extreme. To Amerikanischers a intercourse of the different European ethnic groups, that a bigger untenable population of Whites, an ethnic homogeneous partnership do not support the skin diver suit, when basis of that of compactness national is almost inconceivable on an emotional height. Yet without this deeply rooted sense of the member, white Americans of annihilation racial almost more certainly stand within two hundred years vis-à-vis.
Fortunately however is to be understood at least possible, national intercourse on an intellectual height, with which big wads of hope that an loving basis can be formed by the men with large melon heads. This social circlejerk and infusing of the educational base in white Americans correct accordingly is for the promoting of a European racial partnership in the United States one of the large goals of the national alliance.

I'm guessing some pinhead signed me up on account of my refusal to embrace their liberal mindset. After all, if you can't convince someone through reasoned argument, then obviously they're a closet Nazi and need to be put on the Hitlerjugend mailing lists. Well, I've signed a few people up for "Stop Bedwetting Now" help manuals back in the day, so this is just karma taking a chunk out of my tailsection. So be it.
ATHENS — A former Greek culture minister, several employees of the Finance Ministry and a number of business leaders are on a list of more than 2,000 Greeks said to have accounts in a Swiss bank, according to a respected investigative magazine. The Greek magazine, Hot Doc, published the list on Saturday, raising the stakes in a heated battle over which current and former government officials had seen the original list passed on by France two years ago — and whether they had used it to check for possible tax evasion. Hot Doc said its version of the list matches the one that Christine Lagarde, then the French finance minister and now the head of the International Monetary Fund, had given her Greek counterpart in 2010 to help Greece crack down on rampant tax evasion as it was trying to steady its economy. The 2,059 people on the list are said to have had accounts in a Geneva branch of HSBC. Questions about the handling of the original list reached a near frenzy in Athens last week as two former finance ministers were pressed to explain why the government appeared to have taken no action on the list. The subject has touched a nerve among average Greeks at a time when the Parliament is expected to vote on a new 13.5 billion euro austerity package that could further reduce their standards of living. The publication of the list is likely to exacerbate Greeks’ anger that their political leaders might have been reluctant to investigate the business elite, with whom they often have close ties, even as middle- and lower-class Greeks have struggled with higher taxes and increasingly ardent tax collectors.
Progesterone receptor isoform (A/B) ratio of human fetal membranes increases during term parturition. The role of progesterone in the control of human parturition remains unsettled. Because there is no systemic progesterone withdrawal before the onset of labor, a 'functional progesterone withdrawal' has been proposed to be operative before human parturition. This may be accomplished by a change in the density of the progesterone receptor (PR) isoforms in myometrium and fetal membranes. The purpose of our study was to determine if spontaneous term labor is associated with changes of PR isoforms (PR-A and PR-B) in the fetal membranes. Fetal membranes were obtained from women undergoing elective cesarean delivery at term (not in labor group), and from women with a vaginal delivery (labor group). The expression of PR isoforms was assessed by Western blot analysis of amnion and chorio-decidua. Densitometric analysis of PR-A/PR-B ratio was performed. Immunohistochemistry with specific antibodies to PR-A and PR-B was done. Nonparametric statistics were used for analysis. 1) The predominant isoform of PR in women not in labor was PR-B, and PR-A in patients in labor. The ratio of PR-A/PR-B in fetal membranes was significantly higher in women in labor than in those not in labor (for amnion, median 4.3, range [0.9-8.4] vs median 0.4, range [0.3-2.6], P < .001; for chorio-decidua, median 2.0, range [1.1-19.2] vs median 1.2, range [0.1-2.0], P < .05). 2) Fetal membranes expressed both types of PR. 3) Immunohistochemistry showed the presence of PR-A and PR-B in the cytoplasm of amnion epithelial cells, chorion trophoblast, and decidual cells. Human parturition at term is associated with changes in PR isoforms in the fetal membranes and, thus, a local 'functional progesterone withdrawal' may operate in human parturition through this mechanism.
Data can be found at: <http://hdl.handle.net/2445/151737>.

Introduction {#sec001}
============

Malaria caused by *Plasmodium vivax* (*Pv*) is a neglected tropical disease, especially during pregnancy, of worldwide distribution \[[@pntd.0008155.ref001]\]. The negative effects of malaria in pregnant women and their offspring have been better described for malaria caused by *P*. *falciparum* (*Pf*), whereas fewer reports have investigated the outcomes of *Pv* infection in pregnancy \[[@pntd.0008155.ref002]\]. To address this gap, we performed a multicenter cohort study (the PregVax project) to characterize the burden and health consequences of malaria caused by *Pv* in pregnant women from five malaria endemic areas \[[@pntd.0008155.ref003]\]. Within that cohort, we set out to characterize in more depth and breadth the immune responses induced in pregnant women when infected or exposed to *Plasmodium* parasites \[[@pntd.0008155.ref004]--[@pntd.0008155.ref007]\], and how they might correlate with negative clinical outcomes. As part of this investigation, here we aim to better understand the cellular immune mediators circulating in the blood of pregnant women, whose immune system is altered due to pregnancy \[[@pntd.0008155.ref008]\], when facing a parasite infection like *Pv* that has been associated with inflammation, particularly in the case of severe disease \[[@pntd.0008155.ref009]\]. A recent study by Singh et al \[[@pntd.0008155.ref010]\] has shown increased levels of inflammatory markers in vivax malaria during pregnancy, but it only included three cytokines (IL-6, IL-1β and TNF). For a more comprehensive evaluation of the multiple effects that *Pv* infection may elicit in the immune system of pregnant women, a wider set of cellular biomarkers of different functions, including chemokines and growth factors as well as T helper (T~H~)-related and regulatory cytokines, needs to be studied.
This is particularly relevant to further understand the role of CCL11 during pregnancy and its association with *Pv* infection, as we previously showed decreased blood concentrations of this chemokine in pregnant women compared to non-pregnant individuals and in malaria-exposed compared to malaria-naïve individuals \[[@pntd.0008155.ref007]\]. In an initial recent analysis, we evaluated the effect of pregnancy and of residing in tropical countries, where exposure to infectious diseases is more common, on the concentration of cytokines in plasma samples from women at different times during gestation and after puerperium, as well as in the three blood compartments (periphery, cord, placenta) \[[@pntd.0008155.ref008]\]. We found that the concentrations of circulating cytokines were the highest at postpartum (at least 10 weeks after delivery), with higher values at delivery compared to the first antenatal clinic visit. Furthermore, anti-plasmodial antibodies (markers of malaria exposure) correlated with cytokine concentrations postpartum, but not during pregnancy, suggesting that pregnancy had a greater effect than malaria exposure on cytokine levels. Additionally, no strong associations between cytokines and gestational age were detected. In the present study, we assess the relationships between cytokine, chemokine and growth factor plasma concentrations, delivery outcomes and presence of *Pv*, in the PregVax cohort. Our multi-biomarker multicenter study is the first one to characterize the immunological signature of *Pv* infection in pregnancy to this extent. 
Materials and methods {#sec002}
=====================

Study design and population {#sec003}
---------------------------

This analysis was done in the context of the PregVax project, a cohort study of 9,388 pregnant women from five countries where malaria is endemic: Brazil (BR), Colombia (CO), Guatemala (GT), India (IN) and Papua New Guinea (PNG), enrolled between 2008 and 2012 at the first antenatal visit, and followed up until delivery. A venous blood sample was collected to perform immunological assays in the following participants: a) any women with *Pv* infection (with or without *Plasmodium* coinfections) at any visit from any country, b) a random subcohort (approximately 10% of total cohort) assigned as the immunology cohort, at enrolment and delivery. Bleedings at delivery included peripheral, cord and placental (only CO and PNG) blood. *Pv* and *Pf* (studied as a possible confounder in co-infected women) parasitaemias were assessed at every visit in Giemsa-stained blood slides that were read onsite. An external validation of parasitemia results was performed by expert microscopists in a subsample of slides (100 per country) at the Hospital Clinic and at the Hospital Sant Joan de Deu in Barcelona, Spain. Submicroscopic *Pv* and *Pf* infections were also determined at enrolment and delivery by real time-PCR in a group of participants, which included the immunological subcohort. Malaria symptoms and hemoglobin (Hb, g/dL) levels were also recorded at enrolment and delivery, as well as neonatal birth weight (g). The protocol was approved by the national and/or local ethics committees of each site, the CDC IRB (USA) and the Hospital Clinic Ethics Review Committee (Barcelona, Spain). Written informed consent was obtained from all study participants. All human subjects were adults.

Isolation of plasma {#sec004}
-------------------

Five to 10 mL of blood were collected aseptically in heparinized tubes.
Plasma was separated by centrifuging at 600 g for 10 min at room temperature, aliquoted and stored at -80ºC. Samples from BR, CO, GT and PNG were shipped to the Barcelona Institute for Global Health on dry ice. The measurement of cytokines, chemokines and growth factors (hereinafter together referred to as biomarkers) was performed at ISGlobal, Barcelona (Spain) to minimize inter-site variability. Samples from India were analyzed at ICGEB, Delhi.

Multiplex bead array assay {#sec005}
--------------------------

The biomarkers were analyzed in thawed plasmas with a multiplex suspension detection system *Cytokine Magnetic 30-Plex Panel* (Invitrogen, Madrid, Spain) which allows the detection of the following biomarkers: epidermal growth factor (EGF), Eotaxin/CCL11, fibroblast growth factor (FGF), granulocyte colony-stimulating factor (G-CSF), granulocyte-macrophage colony-stimulating factor (GM-CSF), hepatocyte growth factor (HGF), interferon (IFN)-α, IFN-γ, interleukin (IL)-1RA, IL-1β, IL-2, IL-2R, IL-4, IL-5, IL-6, IL-7, IL-8/CXCL8, IL-10, IL-12(p40/p70), IL-13, IL-15, IL-17, IFN-γ induced protein (IP-10/CXCL10), monocyte chemoattractant protein (MCP-1/CCL2), monokine induced by IFN-γ (MIG/CXCL9), macrophage inflammatory protein (MIP)-1α/CCL3, MIP-1β/CCL4, regulated on activation, normal T cell expressed and secreted (RANTES/CCL5), tumor necrosis factor (TNF), and vascular endothelial growth factor (VEGF). Fifty μL of the plasmas were tested in single replicates (dilution 1:2, as recommended by the vendor). Each plate contained serial dilutions (1:3) of a standard sample of known concentration of each analyte provided by the manufacturer, as well as a blank control and a reference sample control for quality control purposes, all of them in duplicates. Upper and lower values of the standard curves for each analyte are displayed in [S1 Table](#pntd.0008155.s002){ref-type="supplementary-material"}. The assays were carried out according to the manufacturer's instructions.
Beads were acquired on the BioPlex100 system (Bio-Rad, Hercules, CA) and concentrations calculated using the Bioplex software. When values were out of range (OOR) according to the software, a value three-times lower than the lowest standard concentration was assigned (as standard dilutions were 1:3) for OOR values under the curves, and a value three-times higher than the highest standard concentration was assigned for OOR values above the curve. Moreover, the software extrapolated values below and above the lower and higher concentrations, respectively, of the standard curves when they fitted into the curves and were not OOR. These values were kept with the exception of those three-times below the lowest standard concentration and three-times above the highest standard concentration, for which those respective values were assigned. In addition, the cytokine TGF-β1 was analyzed in all plasmas except those from India, with a DuoSet ELISA kit (R&D Systems). Following the vendor's recommendations, latent TGF-β1 was activated to its immunoreactive form with HCl and neutralized with NaOH/HEPES. A 40-fold plasma dilution was used.

*Plasmodium* spp. detection by real time-PCR {#sec006}
--------------------------------------------

From the whole cohort (9,388 women), 1500 recruitment and 1500 different delivery samples were randomly selected for PCR. Samples from BR, CO, GT, and half of the samples from PNG, were analyzed at the *Istituto Superiore di Sanità* (Rome, Italy), as described \[[@pntd.0008155.ref003]\]. The threshold for positivity for each species was established as a cycle threshold \<45, according to negative controls. *Pv* diagnosis for IN samples was performed in Delhi following Rome's protocol adapted for the instrument sensitivity (the third step amplification 72ºC for 25 sec instead of 5 sec).
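The out-of-range (OOR) rule described above for the multiplex assay readout can be sketched as a small helper. The flag encoding and function name are illustrative, not the Bioplex software's actual interface:

```python
def impute_concentration(value, lowest_std, highest_std):
    """Apply the OOR rule: standards were diluted 1:3, so readings flagged
    below the standard curve get lowest_std / 3 and readings flagged above
    get highest_std * 3; extrapolated numeric values are kept but clamped
    to those same 3-fold bounds.

    value: a float, or the strings "OOR<" / "OOR>" for flagged readings
           (hypothetical encoding of the software's OOR flags).
    """
    low_cap = lowest_std / 3.0
    high_cap = highest_std * 3.0
    if value == "OOR<":          # below the curve, not extrapolable
        return low_cap
    if value == "OOR>":          # above the curve, not extrapolable
        return high_cap
    # extrapolated value: keep it, but never beyond the 3-fold bounds
    return min(max(value, low_cap), high_cap)
```

For example, with a standard curve spanning 3–8,000 pg/mL, a reading flagged below the curve would be recorded as 1 pg/mL and one flagged above as 24,000 pg/mL.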
Approximately half of the PNG samples were analyzed for submicroscopic infections in Madang, following a protocol \[[@pntd.0008155.ref011]\] similar to that from Rome, except that the threshold for positivity for each species was established as cycle threshold \<40, according to negative controls. DNA was extracted from whole blood-spot filter paper.

Sample selection and statistical methods {#sec007}
----------------------------------------

It was not possible to measure the biomarkers in all the plasma samples available due to budget constraints. Therefore, an initial 50 enrolment samples per country (35 in IN, 235 total) and their paired delivery samples were randomly selected. However, follow-up rates were low and when paired recruitment/delivery samples available were \<50, random delivery samples were included to achieve N = 50 (35 in IN). Thus, 129/235 delivery samples were paired to recruitment samples. In addition, because malaria prevalence was generally low in our random subset, we performed a case-control selection including all the available samples from women with a *Pv* infection (diagnosed by microscopy and/or PCR) at recruitment (N = 49) or delivery (N = 18) and similar numbers of randomly-selected samples with a negative *Pv* PCR result, matched by country (N = 62 and N = 7 respectively). Finally, 144 placental plasmas, 125 peripheral plasmas collected at delivery and paired to the placental samples and 112 cord plasmas were analysed. A flow chart with all samples analysed is provided in [S1 Fig](#pntd.0008155.s001){ref-type="supplementary-material"}. Data from the 5 countries were combined in the analysis. Except when otherwise specified, *Pv* infection was defined as either a positive smear or a PCR positive result (or both). Overall, our aim was to search for the associations between cytokine concentrations and health outcomes (malaria infection, Hb levels and birth weight). In this regard, we performed two separate investigations.
First, a cross-sectional analysis in which health outcomes and cytokine concentrations were examined at the same timepoint, either recruitment or delivery. Second, a longitudinal analysis of the effect of cytokine concentrations at recruitment on health outcomes at delivery. To study the association of biomarkers with *Pv* infection, first a principal component analysis (PCA) was performed. In the PCA, a large set of possibly correlated variables (e.g. cytokines) is transformed into a small set of linearly uncorrelated variables called principal components (PC), which may be interpreted as clusters of cytokines. For each PC, we show the contribution (loading score) of each cytokine, considering the generally accepted cut-off established at loading score = 0.3. For further analyses, we only considered the seven PCs that accounted more for the variance of the data set (Eigenvalue ≥1, Kaiser-Guttman criterion). To use PCs as variables (each representing several cytokines at a time), PC scores were predicted for each subject and logistic regression models were estimated with PCs as independent variables and *Pv* infection as the dependent variable. We excluded TGF-β from the PCA analysis as this cytokine was analyzed by a different technique (ELISA) and was not measured in IN. Finally, to assess if our data set was suitable for PCA, we ran the Kaiser-Meyer-Olkin (kmo) test for sampling adequacy. To assess the association between individual biomarker concentrations and *Pv* infection, we used the Mann-Whitney test in the crude analysis (corrected for multiple comparisons with the Benjamini-Hochberg method) and estimated multivariable logistic regression models adjusting for the following variables: site, age at recruitment, gestational age, parity, delivery mode (vaginal vs caesarean birth) for the analysis with cytokine concentration at delivery and *Pf* infection. At delivery, three different blood compartments were investigated: periphery, placenta and cord. 
For this objective, pairwise statistical significance was interpreted based on 95% confidence intervals (CI), and considered significant when the interval did not include 1. The same adjusted regression model was estimated to analyze the association between peripheral plasma biomarker concentration at recruitment and future (at delivery) *Pv* infection. Furthermore, the association of biomarkers with submicroscopic and microscopic *Pv* infections was assessed with linear regression models adjusted by site. Finally, the association between biomarker concentrations in plasma with maternal Hb levels and birth weight was assessed using multivariable linear regression models, adjusted for site, age (at recruitment), Hb levels (at recruitment), parity, delivery mode for the analysis with cytokine concentration at delivery, *Pv* and *Pf* infection during pregnancy. For this objective, pairwise statistical significance was interpreted based on 95% confidence intervals, and considered significant when the interval did not include 0. Overall, significance was defined at p\<0.05. Analyses and graphs were performed using Stata/SE 10.1 (College Station, TX, USA) and GraphPad Prism (La Jolla, CA, USA).

Results {#sec008}
=======

Characteristics of the study population {#sec009}
---------------------------------------

A total of 987 plasma samples belonging to 572 pregnant women were analyzed for biomarker concentration, comprising 346 peripheral plasma samples collected at recruitment, 385 peripheral plasmas at delivery, 112 cord plasmas and 144 placental plasmas. Unfortunately, only 129 samples were paired between recruitment and delivery due to low follow-up rates. The study population characteristics at baseline are provided in [Table 1](#pntd.0008155.t001){ref-type="table"}. The number of infection cases by country, method and timepoint are provided in [S2 Table](#pntd.0008155.s003){ref-type="supplementary-material"}.
10.1371/journal.pntd.0008155.t001

###### Baseline characteristics of study population. This refers to all the women included in the study at both timepoints.

![](pntd.0008155.t001){#pntd.0008155.t001g}

| Characteristic | BR | CO | GT | IN | PNG |
|---|---|---|---|---|---|
| Age (years)^a,b^ | 23.3 (6.0) \[90\] | 22.3 (5.8) \[115\] | 25.0 (7.7) \[91\] | 23.5 (3.5) \[58\] | 25.5 (5.7) \[210\] |
| Gravidity^d^: 0 | 26 (29%) | 34 (30%) | 30 (33%) | 27 (47%) | 69 (38%) |
| Gravidity: 1--3 | 42 (47%) | 56 (49%) | 31 (34%) | 29 (50%) | 76 (42%) |
| Gravidity: 4+ | 22 (24%) | 25 (21%) | 30 (33%) | 2 (3%) | 37 (20%) |
| GA at recruitment (weeks)^d^: 0--12 | 15 (17%) | 28 (24%) | 5 (6%) | 1 (2%) | 11 (6%) |
| GA at recruitment: 13--24 | 30 (34%) | 42 (37%) | 32 (36%) | 30 (52%) | 87 (48%) |
| GA at recruitment: 25+ | 43 (49%) | 45 (39%) | 52 (58%) | 27 (47%) | 84 (46%) |
| GA at delivery (weeks, Ballard method)^d^: 0--37 | 5 (8%) | 29 (39%) | 6 (10%) | 29 (69%) | 49 (33%) |
| GA at delivery: 38--41 | 54 (92%) | 42 (57%) | 35 (58%) | 13 (31%) | 83 (56%) |
| GA at delivery: 42+ | 0 (0%) | 3 (4%) | 19 (32%) | 0 (0%) | 16 (11%) |
| BMI (kg/m^2^)^a,b^ | 25.7 (4.4) \[89\] | 23.5 (3.5) \[114\] | 25.8 (3.9) \[91\] | 23.1 (4.5) \[58\] | 23.7 (3.3) \[173\] |
| Hemoglobin (g/dL)^a,b^ | 11.3 (1.3) \[90\] | 10.9 (1.6) \[114\] | 11.1 (1.5) \[82\] | 9.5 (1.6) \[58\] | 9.52 (1.5) \[214\] |
| Birth weight (g)^a,g^ | 3166.1 (526.4) \[61\] | 3224.11 (408.7) \[82\] | 3151.9 (537.3) \[60\] | 3031.7 (436.5) \[42\] | 2923.1 (493.3) \[159\] |
| Delivery mode^d^: V | 52 (79) | 66 (82) | 44 (71) | 34 (79) | 135 (100) |
| Delivery mode: C | 14 (21) | 15 (18) | 18 (29) | 9 (21) | 0 (0) |
| Syphilis screening^d^: POS | 0 (0) | 7 (10) | N/A | 0 (0) | 5 (4) |
| Syphilis screening: NEG | 56 (100) | 64 (90) | N/A | 13 (100) | 123 (96) |

^a^ Arithmetic Mean (standard deviation) \[number\]. ^b^ At recruitment. ^c^ One-way ANOVA. ^d^ n (percentage). ^e^ Chi-squared test. ^f^ Fisher's exact test. ^g^ Birth weight excluding twins. PNG: Papua New Guinea. GA: gestational age (weeks). BMI: body mass index. V: vaginal. C: cesarean section. POS: positive. NEG: negative. N/A: not available.

Association of *Pv* infection with plasma biomarker concentration at recruitment {#sec010}
--------------------------------------------------------------------------------

Considering the large number of cytokine variables, we first performed a PCA to reduce the dimensionality of the data. The kmo test for sampling adequacy resulted in kmo = 0.85, which according to the literature may be considered meritorious \[[@pntd.0008155.ref012]\]. In the PCA analysis, seven PCs contributed most to the variance of the data ([S3 Table](#pntd.0008155.s004){ref-type="supplementary-material"}) and were further considered for regression analyses. Of those seven PCs, three had a positive association with *Pv* infection: PC3, PC5 and PC7 ([S4 Table](#pntd.0008155.s005){ref-type="supplementary-material"}).
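The retention step just described (keep PCs with eigenvalue ≥ 1 under the Kaiser-Guttman criterion, then use per-subject PC scores as covariates in a logistic regression) can be sketched in a few lines of numpy. This is a minimal illustration on simulated data, not the study's actual code:

```python
import numpy as np

def pca_kaiser(X):
    """PCA on standardized data, retaining components with eigenvalue >= 1.

    X: (n_subjects, n_biomarkers) array of concentrations.
    Returns (scores, eigenvalues, loadings) for the retained PCs; the
    per-subject scores can then be entered as covariates in a logistic
    regression on infection status.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each biomarker
    corr = np.corrcoef(Z, rowvar=False)        # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)    # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals >= 1.0                      # Kaiser-Guttman criterion
    scores = Z @ eigvecs[:, keep]              # per-subject PC scores
    return scores, eigvals[keep], eigvecs[:, keep]
```

Note that this plain eigendecomposition omits the varimax rotation applied in Table 2, which redistributes loadings across the retained components without changing the subspace they span.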
Then we analyzed which cytokines contributed most to those PCs associated with *Pv* infection ([Table 2](#pntd.0008155.t002){ref-type="table"}). The PC3-proinflammatory-chemokine group showed the highest contribution by CXCL8, CCL4 and CCL3. The PC5-antiinflammatory-inflammatory group had the highest contribution by IL-10, CXCL10, IL-6 and CCL2, whereas the PC7-CCL5-T~H~ group had the highest contribution by CCL5, IL-4, IL-2R and IL-12 ([Table 2](#pntd.0008155.t002){ref-type="table"}).

10.1371/journal.pntd.0008155.t002

###### Loading scores for principal component analysis at recruitment.

![](pntd.0008155.t002){#pntd.0008155.t002g}

| Variable | PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7 | Unexplained |
|---|---|---|---|---|---|---|---|---|
| TNF | | 0.429 | | | | | | 0.271 |
| IL-1β | | 0.344 | | | | | | 0.261 |
| IL-6 | | | | | **0.356** | | | 0.211 |
| IL-10 | | | | | **0.506** | | | 0.334 |
| IL-1RA | | | | | | | | 0.179 |
| IFN-α | | | | | | | | 0.316 |
| CXCL8 | | | **0.565** | | | | | 0.271 |
| CCL3 | | | **0.406** | | | | | 0.262 |
| CCL4 | | | **0.426** | | | | | 0.267 |
| CCL2 | | | | | **0.312** | | | 0.290 |
| CXCL10 | | | | | **0.489** | | | 0.337 |
| CXCL9 | | | | | | 0.474 | | 0.439 |
| CCL11 | | | | | | 0.634 | | 0.376 |
| CCL5 | | | | | | | **0.695** | 0.247 |
| IFN-γ | | | | 0.487 | | | | 0.426 |
| IL-12 | | | | | | | **0.341** | 0.317 |
| IL-2 | 0.494 | | | | | | | 0.218 |
| IL-15 | 0.351 | | | | | | | 0.319 |
| IL-2R | | | | | | | **0.352** | 0.361 |
| IL-4 | | | | | | | **0.392** | 0.552 |
| IL-5 | | 0.456 | | | | | | 0.229 |
| IL-13 | | | | 0.447 | | | | 0.377 |
| IL-17 | | 0.468 | | | | | | 0.244 |
| EGF | 0.373 | | | | | | | 0.371 |
| FGF | 0.497 | | | | | | | 0.224 |
| HGF | | | | | | | | 0.403 |
| VEGF | | | | | | | | 0.495 |
| G-CSF | | | | 0.456 | | | | 0.487 |
| GM-CSF | | | | | | | | 0.347 |
| IL-7 | | 0.405 | | | | | | 0.250 |

Loading scores for each principal component (PC) with Eigenvalue \>1 and proportion of unexplained variance after varimax rotation. In bold the PCs with a positive association with *P*. *vivax* infection. Only shown if loading score \>0.3.

As the PCA is quite exploratory, we also analyzed biomarkers individually. We found that *Pv*-infected women had higher plasma concentrations of proinflammatory biomarkers IL-6, CXCL8, CCL3, CCL4 and CCL2, of T~H~1-related cytokines IL-12, IL-15 and IL-2R, and of growth factor VEGF ([Fig 1](#pntd.0008155.g001){ref-type="fig"}) than uninfected women, consistent with the PCA analysis. After adjusting for other confounders (see [materials and methods](#sec002){ref-type="sec"}), we found a positive association of *Pv* infection with proinflammatory biomarkers IL-6, IL-1β, CCL4, CCL2, CXCL10 and TNF (borderline non-significant for the latter); the antiinflammatory IL-10; the chemokine CCL5; the T~H~1-related cytokines IL-12 and IL-2R; the T~H~2-related cytokine IL-5; and the growth factors FGF, HGF, VEGF and IL-7 ([Table 3](#pntd.0008155.t003){ref-type="table"}). In contrast, a negative association was observed with CCL11 plasma concentration ([Table 3](#pntd.0008155.t003){ref-type="table"}).

![Effect of *Plasmodium vivax* infection on peripheral plasma biomarker concentrations at recruitment.\
Box plots represent median (white line), and 25^th^ and 75^th^ percentiles (lower and upper hinge respectively) of biomarker concentrations in peripheral plasma at recruitment, in *P*. *vivax* infected (I, N = 54) and uninfected (U, N = 247) pregnant women. Concentrations for all biomarkers are expressed in pg/mL. P-value corresponds to the Mann-Whitney test corrected for multiple comparisons with the Benjamini-Hochberg method. \*p\<0.05, \*\*p\<0.01, \*\*\*p\<0.001.](pntd.0008155.g001){#pntd.0008155.g001}

10.1371/journal.pntd.0008155.t003

###### Association of plasma biomarker concentration with *P*. *vivax* infection.
![](pntd.0008155.t003){#pntd.0008155.t003g}

| Biomarker | Recruitment OR (95% CI) | Delivery, periphery OR (95% CI) | Delivery, placenta OR (95% CI) | Delivery, cord OR (95% CI) |
|---|---|---|---|---|
| TNF | 1.03 (1.00; 1.06) | 1.00 (0.88; 1.12) | 1.02 (0.89; 1.18) | 1.02 (0.97; 1.08) |
| IL-1β | **1.06 (1.02; 1.09)** | 1.00 (0.89; 1.13) | 1.06 (0.93; 1.21) | 1.00 (0.95; 1.05) |
| IL-6 | **1.09 (1.03; 1.15)** | 0.96 (0.87; 1.05) | 0.91 (0.81; 1.04) | 1.04 (0.98; 1.10) |
| IL-10 | **1.09 (1.02; 1.17)** | 0.97 (0.86; 1.09) | **1.17 (1.02; 1.34)** | 1.06 (0.54; 2.07) |
| IL-1RA | 1.06 (0.98; 1.15) | 1.06 (0.97; 1.16) | 1.04 (0.94; 1.16) | 1.11 (0.94; 1.30) |
| TGF-β | 1.01 (0.88; 1.17) | 1.20 (0.97; 1.47) | 1.01 (0.87; 1.16) | 0.91 (0.79; 1.06) |
| IFN-α | 1.05 (0.93; 1.19) | **1.33 (1.11; 1.58)** | 1.09 (0.94; 1.26) | 1.15 (0.78; 1.70) |
| CXCL8 | 1.02 (0.98; 1.07) | 1.03 (0.98; 1.08) | 0.99 (0.92; 1.08) | 1.05 (1.00; 1.11) |
| CCL3 | 1.04 (0.95; 1.13) | 1.08 (0.94; 1.24) | 0.89 (0.74; 1.07) | 1.06 (0.87; 1.28) |
| CCL4 | **1.09 (1.02; 1.17)** | 0.90 (0.77; 1.04) | 1.01 (0.91; 1.12) | 1.06 (0.98; 1.15) |
| CCL2 | **1.24 (1.10; 1.41)** | 1.01 (0.93; 1.09) | 1.06 (0.97; 1.16) | 1.07 (0.99; 1.16) |
| CXCL10 | **1.17 (1.06; 1.29)** | 1.04 (0.91; 1.18) | 0.93 (0.79; 1.09) | 0.97 (0.82; 1.13) |
| CXCL9 | 1.06 (0.97; 1.15) | 1.03 (0.94; 1.14) | 1.05 (0.92; 1.19) | 1.01 (0.92; 1.12) |
| CCL11 | **0.88 (0.77; 0.99)** | 1.01 (0.87; 1.18) | 0.93 (0.80; 1.09) | 0.91 (0.76; 1.08) |
| CCL5 | **1.13 (1.01; 1.27)** | 1.08 (0.93; 1.26) | 0.93 (0.84; 1.02) | 1.01 (0.84; 1.22) |
| IFN-γ | 1.16 (0.84; 1.59) | 1.38 (0.89; 2.14) | 1.00 (N/A) | 0.56 (0.01; 36.96) |
| IL-12 | **1.37 (1.11; 1.68)** | **1.57 (1.15; 2.15)** | 0.97 (0.77; 1.23) | 0.84 (0.60; 1.18) |
| IL-2 | 1.04 (0.97; 1.11) | 0.98 (0.89; 1.09) | 1.06 (0.96; 1.17) | 1.03 (0.95; 1.11) |
| IL-15 | 1.03 (0.98; 1.08) | 1.03 (0.91; 1.16) | 1.05 (0.93; 1.20) | 0.97 (0.83; 1.13) |
| IL-2R | **1.12 (1.02; 1.24)** | 0.99 (0.87; 1.14) | 0.91 (0.75; 1.09) | 0.94 (0.76; 1.15) |
| IL-4 | 0.94 (0.73; 1.22) | 1.04 (0.83; 1.31) | 0.92 (0.52; 1.64) | 0.69 (0.06; 8.61) |
| IL-5 | **1.04 (1.01; 1.08)** | 1.00 (0.91; 1.11) | 1.15 (0.87; 1.51) | 1.00 (0.95; 1.06) |
| IL-13 | 1.10 (0.99; 1.22) | 0.98 (0.86; 1.12) | 0.91 (0.70; 1.19) | 0.84 (0.57; 1.22) |
| IL-17 | 1.03 (0.99; 1.07) | 1.10 (0.91; 1.33) | 0.75 (0.22; 2.53) | 1.01 (0.93; 1.10) |
| EGF | 0.98 (0.89; 1.08) | 0.96 (0.85; 1.09) | 1.04 (0.90; 1.20) | 0.97 (0.81; 1.16) |
| FGF | **1.08 (1.02; 1.14)** | 0.93 (0.85; 1.03) | 1.08 (0.97; 1.19) | 1.03 (0.93; 1.13) |
| HGF | **1.06 (1.01; 1.10)** | 0.96 (0.90; 1.03) | 1.00 (0.94; 1.06) | 1.05 (0.93; 1.20) |
| VEGF | **1.09 (1.02; 1.17)** | 0.97 (0.84; 1.12) | 1.05 (0.89; 1.23) | 1.05 (0.92; 1.19) |
| G-CSF | 1.10 (0.88; 1.36) | 1.03 (0.74; 1.43) | 0.96 (0.75; 1.24) | 1.06 (0.88; 1.29) |
| GM-CSF | 1.02 (0.99; 1.06) | 1.04 (0.97; 1.11) | 1.09 (0.98; 1.21) | 1.00 (0.95; 1.05) |
| IL-7 | **1.05 (1.01; 1.08)** | 0.97 (0.88; 1.07) | 0.99 (0.84; 1.16) | 1.03 (0.91; 1.17) |

Multivariable logistic regression models adjusting for the following variables: site, age at recruitment, gestational age, parity, delivery mode (just for delivery samples) and *P*. *falciparum* infection were estimated. *P*. *vivax* infection cases included those diagnosed by either PCR or microscopy. Odds ratio (OR) per 25% increase in biomarker concentration. Recruitment, N = 275; delivery periphery, N = 199 (infection rates by *Plasmodium spp*. and timepoint in [S2 Table](#pntd.0008155.s003){ref-type="supplementary-material"}). Placenta N = 75 (61 *Pv*-, 14 *Pv*+). Cord N = 82 (57 *Pv*-, 25 *Pv*+). In bold if 95% confidence interval (CI) does not include 1. N/A: regression model could not be estimated as all samples considered have the same value for IFN-γ concentration (9.6 pg/mL).

Finally, we investigated the different effect (if any) of submicroscopic and microscopic *Pv* infections, i.e. infection density, on biomarker plasma concentration at recruitment. Results were interpreted based on 95% CI. On the one hand, *Pv* microscopic but not submicroscopic infections were associated with elevated plasma concentrations of proinflammatory biomarkers TNF, IL-1β, IL-6, CXCL8, CCL2, CXCL10, CXCL9; the antiinflammatory IL-10 and IL-1RA; the chemokine CCL5; the T~H~1-related cytokine IL-2R; the T~H~2-related cytokine IL-5; the T~H~17-related cytokine IL-17 and the growth factors VEGF and GM-CSF ([Table 4](#pntd.0008155.t004){ref-type="table"}).
On the other hand, *Pv* submicroscopic but not microscopic infections were associated with elevated plasma concentrations of IL-2, FGF and IL-7 ([Table 4](#pntd.0008155.t004){ref-type="table"}). Of note, with this stratification the negative association with CCL11 levels was lost ([Table 4](#pntd.0008155.t004){ref-type="table"}).

10.1371/journal.pntd.0008155.t004

###### Association of plasma biomarker concentration with microscopic and submicroscopic *P*. *vivax* infection.

![](pntd.0008155.t004){#pntd.0008155.t004g}

| Biomarker | Recruitment PCR+, effect (95% CI) | Recruitment Microscopy+, effect (95% CI) | Delivery PCR+, effect (95% CI) | Delivery Microscopy+, effect (95% CI) |
|---|---|---|---|---|
| TNF | 0.94 (-0.32; 2.20) | **1.53 (0.17; 2.88)** | 0.58 (-0.14; 1.31) | **1.89 (0.46; 3.32)** |
| IL-1β | 1.31 (-0.09; 2.71) | **2.98 (1.48; 4.48)** | 0.20 (-0.50; 0.90) | 0.21 (-1.18; 1.60) |
| IL-6 | 0.30 (-0.77; 1.36) | **1.89 (0.75; 3.03)** | **-0.68 (-1.23; -0.13)** | 0.29 (-0.80; 1.38) |
| IL-10 | 0.38 (-0.37; 1.14) | **1.75 (0.94; 2.56)** | -0.22 (-0.71; 0.27) | **1.17 (0.19; 2.14)** |
| IL-1RA | 0.46 (-0.19; 1.10) | **0.74 (0.05; 1.43)** | 0.17 (-0.34; 0.68) | 0.07 (-0.94; 1.08) |
| TGF-β | -0.01 (-0.44; 0.42) | -0.05 (-0.49; 0.39) | 0.04 (-0.25; 0.32) | 0.26 (-0.29; 0.81) |
| IFN-α | 0.26 (-0.15; 0.68) | 0.21 (-0.23; 0.65) | **0.34 (0.03; 0.65)** | 0.02 (-0.60; 0.64) |
| CXCL8 | -0.14 (-1.16; 0.88) | **1.18 (0.09; 2.28)** | 0.39 (-0.67; 1.45) | 1.13 (-0.98; 3.24) |
| CCL3 | **0.65 (0.04; 1.26)** | **1.15 (0.50; 1.81)** | -0.10 (-0.48; 0.28) | -0.16 (-0.92; 0.59) |
| CCL4 | -0.01 (-0.60; 0.57) | 0.31 (-0.32; 0.93) | -0.26 (-0.68; 0.15) | -0.66 (-1.47; 0.16) |
| CCL2 | 0.62 (-0.25; 1.49) | **1.76 (0.82; 2.69)** | -0.07 (-0.63; 0.49) | -0.20 (-1.31; 0.92) |
| CXCL10 | 0.36 (-0.32; 1.04) | **1.07 (0.34; 1.80)** | -0.1 (-0.51; 0.32) | 0.03 (-0.80; 0.85) |
| CXCL9 | 0.19 (-0.39; 0.78) | **0.67 (0.04; 1.30)** | -0.21 (-0.75; 0.33) | 0.36 (-0.72; 1.43) |
| CCL11 | 0.03 (-0.43; 0.49) | -0.43 (-0.93; 0.06) | -0.14 (-0.53; 0.24) | -0.14 (-0.90; 0.63) |
| CCL5 | 0.26 (-0.19; 0.70) | **0.62 (0.14; 1.10)** | 0.04 (-0.34; 0.42) | 0.46 (-0.30; 1.22) |
| IFN-γ | 0.00 (-0.17; 0.16) | 0.00 (-0.18; 0.18) | 0.02 (-0.17; 0.20) | -0.10 (-0.47; 0.27) |
| IL-12 | **0.30 (0.03; 0.57)** | **0.38 (0.09; 0.67)** | 0.17 (-0.04; 0.38) | 0.20 (-0.21; 0.62) |
| IL-2 | **0.75 (0.08; 1.43)** | 0.28 (-0.45; 1.01) | -0.14 (-0.68; 0.39) | -0.80 (-1.85; 0.26) |
| IL-15 | **0.76 (0.12; 1.40)** | **1.01 (0.32; 1.70)** | 0.14 (-0.34; 0.63) | -0.36 (-1.33; 0.60) |
| IL-2R | 0.44 (-0.10; 0.98) | **1.07 (0.49; 1.65)** | 0.03 (-0.30; 0.35) | 0.13 (-0.51; 0.77) |
| IL-4 | 0.02 (-0.17; 0.20) | -0.05 (-0.25; 0.15) | -0.06 (-0.32; 0.21) | -0.16 (-0.69; 0.37) |
| IL-5 | 0.86 (-0.42; 2.13) | **1.89 (0.52; 3.26)** | 0.02 (-0.72; 0.76) | **1.68 (0.21; 3.15)** |
| IL-13 | -0.05 (-0.62; 0.53) | 0.23 (-0.38; 0.85) | -0.05 (-0.42; 0.32) | -0.32 (-1.05; 0.41) |
| IL-17 | 0.55 (-0.50; 1.60) | **1.28 (0.16; 2.41)** | 0.57 (0.03; 1.12) | -0.12 (-1.20; 0.96) |
| EGF | -0.26 (-0.81; 0.28) | -0.21 (-0.79; 0.38) | -0.28 (-0.63; 0.08) | -0.15 (-0.85; 0.56) |
| FGF | **0.99 (0.34; 1.64)** | 0.59 (-0.11; 1.28) | -0.40 (-0.93; 0.12) | -0.32 (-1.36; 0.72) |
| HGF | 0.90 (-0.25; 2.04) | 0.17 (-1.05; 1.39) | -0.65 (-1.38; 0.08) | 0.13 (-1.26; 1.52) |
| VEGF | 0.07 (-0.40; 0.55) | **0.98 (0.48; 1.48)** | -0.36 (-0.73; 0.01) | -0.13 (-0.87; 0.60) |
| G-CSF | 0.09 (-0.18; 0.35) | 0.06 (-0.22; 0.34) | -0.09 (-0.26; 0.08) | 0.13 (-0.20; 0.47) |
| GM-CSF | 0.78 (-0.46; 2.03) | **1.52 (0.18; 2.86)** | 0.39 (-0.40; 1.18) | -0.27 (-1.81; 1.27) |
| IL-7 | **1.58 (0.32; 2.83)** | 1.18 (-0.17; 2.53) | 0.31 (-0.52; 1.14) | 1.55 (-0.06; 3.17) |

Multivariable logistic regression models adjusting for site. CI: confidence interval. Neg: no infection detected by either PCR or microscopy; effects are expressed relative to this reference category (effect = 0, columns omitted here). PCR+: PCR positive and microscopy negative. Microscopy+: smear positive regardless of PCR result. Effect (expected change): change in mean concentration, measured in pg/mL. Recruitment, N = 76: Neg N = 26; PCR+ N = 36; Microscopy+ N = 14. Delivery, N = 89: Neg N = 61; PCR+ N = 24; Microscopy+ N = 4. In bold if the 95% CI does not include 0.
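The adjusted models behind Tables 3 and 4 report effects per 25% increase in biomarker concentration. As a rough sketch of how such an odds ratio can be obtained (the synthetic data, the true effect size and the `fit_logistic` helper are illustrative assumptions, not the study's actual code), one could write:

```python
import numpy as np

def fit_logistic(x, y, n_iter=25):
    """Logistic regression of y on x via Newton-Raphson (intercept added)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        grad = X.T @ (y - p)                       # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 2000
log_conc = rng.normal(5.0, 1.0, n)   # hypothetical log biomarker concentration
true_logit = -6.0 + 1.2 * log_conc   # built-in positive association
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

beta = fit_logistic(log_conc, y)
# If the model uses log concentration, the OR per 25% increase is
# exp(beta * ln(1.25)), which is how a single "per 25%" number arises.
or_per_25pct = np.exp(beta[1] * np.log(1.25))
print(f"OR per 25% increase in concentration: {or_per_25pct:.2f}")
```

With a positively associated simulated biomarker, the printed OR comes out above 1, mirroring the bolded entries in Table 3; flipping the sign of the simulated effect would push it below 1, as observed for CCL11.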
Association of *Pv* infection with plasma biomarker concentration at delivery {#sec011}
-----------------------------------------------------------------------------

In the PCA at delivery, seven PCs again had eigenvalues \>1 (KMO = 0.86, [S5 Table](#pntd.0008155.s006){ref-type="supplementary-material"}). However, regression models showed no association of any PC with *Pv* infection at delivery ([S6 Table](#pntd.0008155.s007){ref-type="supplementary-material"}). We did not observe differences between *Pv*-infected and uninfected women in plasma biomarker levels in the crude analysis (not shown) in any compartment. In the adjusted analysis, we observed a positive association of *Pv* infection with IFN-α and IL-12 peripheral plasma concentrations and with IL-10 placental plasma concentration ([Table 3](#pntd.0008155.t003){ref-type="table"}). After stratifying by *Plasmodium* infection density, submicroscopic infection was associated with increased peripheral concentrations of IFN-α and decreased concentrations of IL-6, while microscopic infections were associated with elevated levels of TNF, IL-10 and IL-5 ([Table 4](#pntd.0008155.t004){ref-type="table"}).

Plasma biomarker concentration and delivery outcomes {#sec012}
----------------------------------------------------

Hb levels at delivery were positively associated with CCL11 and FGF peripheral plasma concentrations at recruitment ([Table 5](#pntd.0008155.t005){ref-type="table"}) and with CXCL9 placental plasma concentration ([Table 6](#pntd.0008155.t006){ref-type="table"}), and negatively associated with IL-1RA and G-CSF cord plasma concentrations ([Table 6](#pntd.0008155.t006){ref-type="table"}).
Birth weight showed no association with any biomarker at recruitment ([Table 5](#pntd.0008155.t005){ref-type="table"}), and was negatively associated with peripheral IL-4 concentration at delivery ([Table 6](#pntd.0008155.t006){ref-type="table"}).

10.1371/journal.pntd.0008155.t005

###### Association of biomarkers at recruitment with hemoglobin levels at delivery and birth weight.

![](pntd.0008155.t005){#pntd.0008155.t005g}

| Biomarker | Hemoglobin (g/dL), effect (95% CI) | Birth weight (g), effect (95% CI) |
|---|---|---|
| TNF | 0.01 (-0.02; 0.05) | -3.83 (-11.40; 3.75) |
| IL-1β | 0.01 (-0.03; 0.04) | 0.38 (-7.23; 8.00) |
| IL-6 | -0.02 (-0.07; 0.03) | -8.75 (-21.92; 4.41) |
| IL-10 | 0.00 (-0.06; 0.07) | -16.00 (-32.73; 0.73) |
| IL-1RA | 0.01 (-0.06; 0.08) | -3.38 (-21.29; 14.53) |
| TGF-β | -0.08 (-0.20; 0.04) | -1.15 (-32.95; 30.65) |
| IFN-α | -0.01 (-0.12; 0.11) | -24.09 (-52.01; 3.82) |
| CXCL8 | 0.01 (-0.03; 0.06) | -6.34 (-16.67; 3.98) |
| CCL3 | 0.00 (-0.08; 0.09) | -3.60 (-24.23; 17.03) |
| CCL4 | 0.02 (-0.04; 0.08) | -3.48 (-18.15; 11.19) |
| CCL2 | 0.03 (-0.06; 0.13) | 1.47 (-22.06; 25.00) |
| CXCL10 | -0.03 (-0.12; 0.06) | -15.18 (-37.20; 6.85) |
| CXCL9 | -0.01 (-0.09; 0.08) | -6.56 (-27.38; 14.25) |
| CCL11 | **0.15 (0.03; 0.26)** | -9.84 (-38.48; 18.80) |
| CCL5 | 0.00 (-0.12; 0.11) | 1.41 (-26.42; 29.23) |
| IFN-γ | -0.15 (-0.49; 0.19) | -50.53 (-135.09; 34.03) |
| IL-12 | 0.04 (-0.17; 0.24) | 4.84 (-44.48; 54.16) |
| IL-2 | 0.05 (-0.01; 0.11) | 12.87 (-2.32; 28.06) |
| IL-15 | 0.01 (-0.06; 0.07) | 3.55 (-12.91; 20.00) |
| IL-2R | -0.02 (-0.11; 0.07) | -7.58 (-30.77; 15.60) |
| IL-4 | 0.00 (-0.18; 0.17) | -18.35 (-62.30; 25.59) |
| IL-5 | 0.02 (-0.02; 0.06) | -4.17 (-12.20; 3.87) |
| IL-13 | 0.01 (-0.08; 0.11) | -3.97 (-26.94; 19.00) |
| IL-17 | 0.02 (-0.02; 0.06) | 1.56 (-7.41; 10.53) |
| EGF | 0.05 (-0.02; 0.12) | 10.98 (-6.24; 28.20) |
| FGF | **0.06 (0.00; 0.12)** | 3.26 (-11.52; 18.04) |
| HGF | 0.01 (-0.03; 0.05) | -2.61 (-12.52; 7.30) |
| VEGF | 0.00 (-0.10; 0.11) | -17.65 (-35.44; 0.13) |
| G-CSF | 0.02 (-0.02; 0.06) | -6.24 (-14.68; 2.20) |
| GM-CSF | -0.01 (-0.19; 0.16) | -13.46 (-54.75; 27.84) |
| IL-7 | 0.03 (-0.01; 0.07) | 0.10 (-8.52; 8.72) |

Multivariable linear regression models adjusting for the following variables: site, age at recruitment, hemoglobin (Hb) at recruitment for the analysis of Hb at delivery, gravidity, gestational age, delivery mode, and *P*. *falciparum* and *P*. *vivax* infection. Effect: change in Hb levels (g/dL) or birth weight (g) per 25% increase in biomarker concentration. N = 145. In bold if the 95% confidence interval (CI) does not include 0.

10.1371/journal.pntd.0008155.t006

###### Association of biomarkers at delivery with hemoglobin levels at delivery and birth weight.

![](pntd.0008155.t006){#pntd.0008155.t006g}

| Biomarker | Periphery Hb, effect (95% CI) | Periphery BW, effect (95% CI) | Placenta Hb, effect (95% CI) | Placenta BW, effect (95% CI) | Cord Hb, effect (95% CI) | Cord BW, effect (95% CI) |
|---|---|---|---|---|---|---|
| TNF | 0.03 (-0.04; 0.10) | 0.19 (-18.88; 19.27) | 0.01 (-0.15; 0.17) | 26.49 (-6.86; 59.85) | 0.00 (-0.05; 0.05) | -4.84 (-16.71; 7.03) |
| IL-1β | -0.03 (-0.09; 0.03) | -6.18 (-23.11; 10.75) | 0.00 (-0.15; 0.14) | 22.00 (-1.36; 45.37) | 0.00 (-0.04; 0.05) | -0.26 (-10.49; 9.97) |
| IL-6 | 0.01 (-0.04; 0.06) | 5.35 (-7.69; 18.38) | 0.03 (-0.03; 0.09) | 10.48 (-1.89; 22.86) | -0.03 (-0.08; 0.02) | -9.03 (-20.49; 2.44) |
| IL-10 | -0.03 (-0.10; 0.04) | -14.67 (-33.31; 3.97) | 0.11 (-0.14; 0.36) | 26.79 (-27.44; 81.03) | -0.03 (-0.71; 0.64) | -9.31 (-169.30; 150.69) |
| IL-1RA | -0.03 (-0.09; 0.03) | -7.08 (-23.32; 9.17) | 0.10 (-0.01; 0.21) | 10.36 (-13.19; 33.90) | **-0.15 (-0.29; -0.01)** | -25.49 (-58.66; 7.68) |
| TGF-β | -0.02 (-0.18; 0.13) | -16.36 (-58.59; 25.88) | 0.00 (-0.10; 0.10) | 3.58 (-17.84; 25.00) | -0.06 (-0.19; 0.07) | 11.11 (-20.40; 42.62) |
| IFN-α | -0.02 (-0.13; 0.08) | -22.77 (-50.47; 4.93) | 0.15 (-0.05; 0.35) | 29.95 (-11.01; 70.92) | -0.05 (-0.40; 0.31) | 4.66 (-77.58; 86.89) |
| CXCL8 | 0.00 (-0.03; 0.03) | 0.69 (-7.62; 8.99) | 0.03 (-0.03; 0.08) | 8.70 (-3.21; 20.60) | -0.03 (-0.08; 0.02) | -9.25 (-20.08; 1.57) |
| CCL3 | 0.07 (-0.01; 0.14) | -6.41 (-26.16; 13.34) | 0.07 (-0.07; 0.22) | 22.50 (-7.74; 52.74) | -0.15 (-0.32; 0.03) | -4.51 (-45.79; 36.77) |
| CCL4 | 0.01 (-0.06; 0.08) | -0.73 (-19.43; 17.97) | 0.00 (-0.10; 0.10) | 10.83 (-9.85; 31.50) | -0.01 (-0.08; 0.06) | -13.29 (-29.61; 3.03) |
| CCL2 | 0.01 (-0.04; 0.06) | -1.50 (-15.10; 12.09) | 0.04 (-0.06; 0.14) | 6.60 (-15.10; 28.30) | -0.02 (-0.09; 0.05) | -11.58 (-27.52; 4.35) |
| CXCL10 | 0.05 (-0.03; 0.13) | 14.89 (-7.40; 37.18) | **0.08 (0.00; 0.17)** | 3.02 (-15.73; 21.78) | 0.02 (-0.11; 0.15) | -17.21 (-47.33; 12.91) |
| CXCL9 | 0.02 (-0.05; 0.08) | 2.32 (-14.58; 19.21) | **0.09 (0.01; 0.17)** | -0.86 (-19.02; 17.30) | -0.02 (-0.11; 0.07) | -7.02 (-27.43; 13.39) |
| CCL11 | 0.01 (-0.08; 0.10) | 3.75 (-20.49; 27.99) | 0.02 (-0.13; 0.17) | 15.21 (-17.60; 48.03) | 0.10 (-0.06; 0.27) | 33.55 (-4.87; 71.98) |
| CCL5 | -0.05 (-0.15; 0.05) | 19.81 (-5.78; 45.41) | 0.03 (-0.06; 0.12) | 6.22 (-14.02; 26.46) | -0.04 (-0.22; 0.14) | 16.95 (-23.90; 57.81) |
| IFN-γ | 0.05 (-0.15; 0.24) | -8.66 (-60.06; 42.75) | -11.83 (-62.25; 38.60) | -1625.89 (-1.3e+04; 9369.71) | 4.10 (-0.39; 8.59) | 258.64 (-857.77; 1375.05) |
| IL-12 | -0.02 (-0.17; 0.13) | -5.63 (-44.85; 33.59) | 0.01 (-0.17; 0.18) | 36.44 (-1.27; 74.16) | -0.01 (-0.30; 0.29) | -9.16 (-78.97; 60.64) |
| IL-2 | **-0.07 (-0.13; -0.01)** | 1.96 (-13.98; 17.89) | 0.09 (-0.05; 0.23) | -7.16 (-38.20; 23.88) | 0.00 (-0.08; 0.08) | 12.17 (-5.97; 30.31) |
| IL-15 | -0.06 (-0.14; 0.02) | 6.61 (-13.80; 27.02) | 0.09 (-0.02; 0.20) | 6.93 (-17.89; 31.75) | 0.02 (-0.12; 0.15) | -21.09 (-52.08; 9.90) |
| IL-2R | -0.03 (-0.13; 0.06) | -10.66 (-35.41; 14.09) | 0.15 (-0.04; 0.34) | 13.51 (-28.76; 55.79) | -0.04 (-0.23; 0.15) | -11.77 (-57.33; 33.79) |
| IL-4 | -0.02 (-0.15; 0.11) | **-43.01 (-76.74; -9.27)** | N/A | N/A | 2.56 (-0.20; 5.33) | 2.56 (-0.25; 5.36) |
| IL-5 | 0.01 (-0.05; 0.06) | -9.12 (-23.96; 5.72) | -0.10 (-0.31; 0.11) | 5.65 (-40.90; 52.19) | -0.02 (-0.07; 0.03) | 5.30 (-6.61; 17.21) |
| IL-13 | 0.03 (-0.04; 0.11) | -17.75 (-37.69; 2.19) | 0.01 (-0.21; 0.24) | -2.60 (-51.12; 45.93) | 0.01 (-0.33; 0.36) | 3.93 (-76.19; 84.05) |
| IL-17 | -0.01 (-0.12; 0.10) | -22.26 (-51.12; 6.59) | 0.18 (-0.27; 0.63) | 60.09 (-33.19; 153.37) | 0.04 (-0.06; 0.14) | -8.10 (-31.69; 15.49) |
| EGF | -0.07 (-0.15; 0.02) | -5.31 (-27.93; 17.32) | 0.08 (-0.04; 0.21) | 1.85 (-26.51; 30.20) | -0.01 (-0.18; 0.15) | -18.33 (-56.15; 19.48) |
| FGF | -0.04 (-0.10; 0.02) | 5.93 (-9.57; 21.43) | 0.05 (-0.05; 0.15) | -5.45 (-27.93; 17.04) | 0.03 (-0.08; 0.14) | -6.93 (-33.37; 19.51) |
| HGF | 0.00 (-0.05; 0.04) | -7.04 (-19.74; 5.66) | 0.00 (-0.06; 0.07) | -1.62 (-15.52; 12.27) | -0.08 (-0.20; 0.04) | -20.29 (-48.62; 8.04) |
| VEGF | -0.01 (-0.09; 0.08) | 6.27 (-16.05; 28.60) | 0.11 (-0.01; 0.23) | 16.61 (-9.45; 42.67) | -0.08 (-0.20; 0.03) | -26.18 (-52.87; 0.50) |
| G-CSF | -0.06 (-0.21; 0.09) | -21.01 (-60.86; 18.84) | 0.06 (-0.08; 0.20) | 20.24 (-9.51; 49.98) | **-0.21 (-0.37; -0.04)** | 20.83 (-19.02; 60.69) |
| GM-CSF | -0.01 (-0.05; 0.04) | -5.97 (-18.69; 6.75) | -0.03 (-0.26; 0.20) | 31.96 (-18.02; 81.94) | 0.01 (-0.04; 0.06) | 3.61 (-7.06; 14.28) |
| IL-7 | 0.00 (-0.05; 0.06) | -3.19 (-17.42; 11.04) | -0.01 (-0.15; 0.12) | 13.18 (-15.61; 41.97) | -0.04 (-0.16; 0.08) | -15.21 (-42.00; 11.58) |

Multivariable linear regression models adjusting for the following variables: site, age at recruitment, hemoglobin (Hb) at recruitment for the analysis of Hb at delivery, Hb at delivery for the analysis of birth weight (BW), gravidity, delivery mode, and *P*. *falciparum* and *P*. *vivax* infection. Effect: change in Hb levels (g/dL) or BW (g) per 25% increase in biomarker concentration. Periphery, N = 188; Placenta, N = 75; Cord, N = 81. In bold if the 95% confidence interval (CI) does not include 0. N/A: the regression model could not be estimated because all placental samples considered have the same value for IL-4 concentration (38.96 pg/mL).

Discussion {#sec013}
==========

We report an exhaustive profiling of plasma biomarkers, including cytokines, chemokines and growth factors, in malaria in pregnancy caused by *Pv*, and their association with poor delivery outcomes. We separated the analysis of samples obtained at the first antenatal visit from those obtained at delivery, as previous analyses in this cohort showed differences in most biomarker concentrations between recruitment and delivery \[[@pntd.0008155.ref008]\]. However, recruitment samples, which were collected in the first, second and third trimesters of pregnancy, were not further categorized, because the correlation between women's gestational age and biomarker concentration in plasma was low in all cases in this cohort \[[@pntd.0008155.ref008]\]. It is well known that *Plasmodium spp*.
infection is accompanied by an inflammatory response that seems to correlate with the severity of malaria disease \[[@pntd.0008155.ref009],[@pntd.0008155.ref013]--[@pntd.0008155.ref018]\]. Also, placental inflammation has been shown in *Pf* malaria in pregnancy and linked to poor delivery outcomes \[[@pntd.0008155.ref019]--[@pntd.0008155.ref022]\]. However, the peripheral compartment in malaria during pregnancy has been less well studied, especially for *Pv* malaria in pregnancy. Here we showed that, in pregnant women at enrolment, *Pv* infection is associated with a broad proinflammatory response. First, the exploratory PCA showed that two of the three clusters of cytokines associated with *Pv* infection were mainly proinflammatory: PC3, composed of CXCL8, CCL4 and CCL3, and PC5, composed of CXCL10, IL-6, CCL2 and IL-10. In agreement with this, the crude and adjusted analyses showed positive associations of IL-6, IL-1β, CXCL8, CCL3, CCL4, CCL2 and CXCL10 with *Pv* infection. *Pv* microscopic but not submicroscopic infections accounted for this inflammatory response. A study in India has recently shown that women with *Pv* malaria in pregnancy have higher levels of IL-6, TNF and IL-1β in peripheral plasma than uninfected pregnant women \[[@pntd.0008155.ref010]\]. However, all the above associations of infection with inflammation were lost at delivery: no PC and no individual proinflammatory biomarker showed any association with *Pv* infection at delivery, and only microscopic infections at delivery showed a positive association with TNF levels. We \[[@pntd.0008155.ref008]\] and others \[[@pntd.0008155.ref023],[@pntd.0008155.ref024]\] have shown that labor is accompanied by a peripheral proinflammatory response, which may have masked any differences in inflammatory biomarker concentrations between infected and uninfected women.
Moreover, there is controversy about *Pv* cytoadhesion to the placenta, but we have reported placental *Pv* monoinfections with no signs of placental inflammation \[[@pntd.0008155.ref025]\]. Despite the strong evidence of a potent inflammatory response triggered by *Pv* infection during pregnancy, our results did not show an impact of inflammation at recruitment on delivery outcomes. Moreover, in this cohort no poor delivery outcomes were attributed to *Pv* infection during pregnancy, except for anemia in symptomatic *Pv*-infected women \[[@pntd.0008155.ref003]\]. According to our data, we propose that an antiinflammatory response could be compensating for the excessive inflammation. Thus, IL-10 (an antiinflammatory cytokine) clustered with CXCL10, IL-6 and CCL2 (all proinflammatory biomarkers) in PC5, which was associated with *Pv* infection at recruitment. Moreover, we showed a positive association of *Pv* infection with peripheral IL-10 concentration at recruitment, peripheral IL-10 at delivery (an association restricted to microscopic infections) and placental IL-10 concentration at delivery. In addition, cord IL-1RA levels showed a negative association with Hb levels at delivery. Others have shown that, in non-pregnant individuals, *Pv* infection induces a proinflammatory response associated with an immunomodulatory profile mediated by IL-10 and TGF-β \[[@pntd.0008155.ref017]\] and with production of IL-10 and expansion of regulatory T cells \[[@pntd.0008155.ref026]\]. We also studied T~H~-related biomarkers and found that, while *Pv* infection was only vaguely associated with the T~H~2-related IL-5, infection was consistently and positively associated with T~H~1 cytokines (but not IFN-γ) in the PCA (where IL-12 clustered with IL-2R in PC7), as well as in the crude and adjusted analyses. Among them, the strongest association was observed with IL-12 plasma concentration, the key cytokine in T~H~1 differentiation.
Moreover, CCL11, which has been associated with T~H~2 responses in allergic reactions and is able to recruit T~H~2 lymphocytes \[[@pntd.0008155.ref027]\], showed a negative association with *Pv* infection, supporting the hypothesis that *Pv* malaria in pregnancy triggers a T~H~1 response. Elevated levels of IFN-γ have been positively, negatively and not associated with placental *Pf* malaria (reviewed in \[[@pntd.0008155.ref028]\]), and the role of the T~H~1 arm in malaria-related poor delivery outcomes is also controversial \[[@pntd.0008155.ref021],[@pntd.0008155.ref029]\]. According to our present and previous data, we propose that failing to mount a T~H~1 response might worsen delivery outcomes: we showed here that T~H~2-type IL-4 peripheral plasma concentration at delivery was negatively associated with birth weight, while previous flow cytometry results from PNG women in this cohort showed a protective association of circulating T~H~1 cells (CD3^+^-CD4^+^-IFN-γ^+^-IL-10^-^) with birth weight \[[@pntd.0008155.ref006]\]. CCL11, a chemokine poorly studied in the context of malaria, was the only biomarker among the 31 studied to show a negative association (OR\<1) with *Pv* infection, although the association was lost when stratifying by infection density. We had previously shown that non-pregnant women heavily exposed to malaria had lower levels of circulating CCL11 than malaria-naïve controls \[[@pntd.0008155.ref007]\]. Thus, it seems that a low CCL11 concentration could be used as a marker of malaria infection/exposure. Our analysis supports this finding, as a higher CCL11 plasma concentration at recruitment was associated with higher Hb levels at delivery, and we had previously established that clinical *Pv* infection is associated with maternal anemia \[[@pntd.0008155.ref003]\]. However, from this study we cannot determine whether this association is causal and/or what role (if any) CCL11 plays in *Pv* infection. Our study presents some limitations.
Blood samples were collected in heparin vacutainers; therefore, we cannot rule out contamination of plasma by platelet-derived factors. Also, the restricted number of samples with quantified parasitemia, the low concentration of some cytokines, and the number of women lost to follow-up prevented us from performing more detailed prospective analyses of the relationship between certain cytokines and malaria. Moreover, we did not collect information on important and prevalent infectious diseases at some of the study sites, such as helminth infections. However, this should not bias the CCL11 analysis, as helminths would actually provoke an increase in CCL11, while we observed a malaria-associated decrease in this chemokine. In conclusion, our data show that while T~H~1 and proinflammatory responses are dominant during *Pv* infection in pregnancy, antiinflammatory cytokines may compensate for excessive inflammation, avoiding poor delivery outcomes, whereas skewing towards a T~H~2 response may trigger worse delivery outcomes. CCL11, a chemokine largely neglected in the field of malaria, emerges as an important marker of exposure to, or mediator of, this condition.

Supporting information {#sec014}
======================

###### Flow chart of sample selection for the study.

(DOCX)

###### Click here for additional data file.

###### Upper and lower values of the biomarker standard curves.

(DOCX)

###### Click here for additional data file.

###### *Plasmodium* infection case number by country. 1: n (percentage).

(DOCX)

###### Click here for additional data file.

###### Principal component analysis of biomarkers at recruitment. PC: principal component. N = 301.

(DOCX)

###### Click here for additional data file.

###### Association of principal components with *P*. *vivax* infection at recruitment. After varimax rotation, principal component scores were predicted and used as independent variables in logistic regression models. OR: odds ratio.
CI: confidence interval. In bold if p\<0.05.

(DOCX)

###### Click here for additional data file.

###### Principal component analysis of biomarkers at delivery. PC: principal component. N = 281.

(DOCX)

###### Click here for additional data file.

###### Association of principal components with *P*. *vivax* infection at delivery. After varimax rotation, principal component scores were predicted and used as independent variables in logistic regression models. OR: odds ratio. CI: confidence interval.

(DOCX)

###### Click here for additional data file.

The authors thank all the volunteers who consented to participate in this study and the staff involved in field and laboratory work at each institution; Sergi Sanz for support in data management and statistical analysis; Gemma Moncunill and Ruth Aguilar for help with the cytokine data analysis; and Mireia Piqueras, Sam Mardell, and Laura Puyol for management and administrative support.

[^1]: The authors have declared that no competing interests exist.
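As a closing illustration of the PCA recipe used throughout the paper (retain components with eigenvalue \>1, varimax-rotate, then inspect loadings above 0.3), here is a minimal sketch on synthetic data; the sample size, the two-factor structure and the `varimax` helper are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Standard varimax rotation of a PCA loading matrix."""
    L = loadings
    p, k = L.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() - crit < tol:
            break
        crit = s.sum()
    return L @ R

rng = np.random.default_rng(1)
# Hypothetical standardized biomarker matrix: 300 women x 8 biomarkers,
# generated from two latent factors so the PCA has a clear structure.
factors = rng.normal(size=(300, 2))
weights = np.array([[1, 0], [1, 0], [1, 0], [0, 1],
                    [0, 1], [0, 1], [.5, .5], [.5, .5]])
X = factors @ weights.T + 0.3 * rng.normal(size=(300, 8))
X = (X - X.mean(axis=0)) / X.std(axis=0)

eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

keep = eigval > 1                                   # Kaiser criterion
loadings = eigvec[:, keep] * np.sqrt(eigval[keep])  # component loadings
rotated = varimax(loadings)
big = np.abs(rotated) > 0.3                         # loadings worth reporting
print(f"{keep.sum()} PCs retained; loadings above 0.3:\n{big.astype(int)}")
```

After rotation, each retained component tends to load strongly on one block of correlated variables, which is what makes the PC "clusters" described in the Results readable.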
Weekend Rewind: Texas A&M

Texas A&M had made sure that Hutchinson (Kan.) Community College outside linebacker Kenny Flowers knew that he was a priority. The constant contact and effort that the Aggies put into recruiting the 6-foot-2, 228-pound prospect paid off in a big way last week when he called the coaches to commit to Texas A&M. Flowers made the commitment on Friday afternoon, and the December graduate said he is excited to be joining the program in a couple of months. He chose the Aggies over offers from Arkansas, Kansas, Kansas State and West Virginia. It was Texas A&M's aggressiveness in its pursuit that made the difference.

ABOUT THIS BLOG: On The Trail is ESPN RecruitingNation's home for all the latest news and information. With some of the nation's top recruiting writers contributing, OTT provides the latest details about commitments, visits and other notes to give fans the most comprehensive recruiting news source in the country.
Update on Julian Assange's Vault 7 Press Conference

I have decided to break Julian Assange's press conference of March 9th, 2017 on the subject of WikiLeaks Vault 7 "Year Zero" into separate videos covering Assange's answers to each of the questions, because the repercussions and fallout of this are going to affect virtually every sphere … political, social, cultural, financial, moral etc. I also think Vault 7 may be the hammer we will eventually use to smash globalisation and the New World Order, so it's important to get the information out there.

WikiLeaks Press Release

Today, Tuesday 7 March 2017, WikiLeaks begins its new series of leaks on the U.S. Central Intelligence Agency. Code-named "Vault 7" by WikiLeaks, it is the largest ever publication of confidential documents on the agency. Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized "zero day" exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive. "Year Zero" introduces the scope and direction of the CIA's global covert hacking program, its malware arsenal and dozens of "zero day" weaponized exploits against a wide range of U.S. and European company products, including Apple's iPhone, Google's Android and Microsoft's Windows and even Samsung TVs, which are turned into covert microphones. Since 2001 the CIA has gained political and budgetary preeminence over the U.S. National Security Agency (NSA).
The CIA found itself building not just its now infamous drone fleet, but a very different type of covert, globe-spanning force — its own substantial fleet of hackers. The agency's hacking division freed it from having to disclose its often controversial operations to the NSA (its primary bureaucratic rival) in order to draw on the NSA's hacking capacities. By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI), had over 5,000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other "weaponized" malware. Such is the scale of the CIA's undertaking that by 2016, its hackers had utilized more code than that used to run Facebook. The CIA had created, in effect, its "own NSA" with even less accountability and without publicly answering the question as to whether such a massive budgetary spend on duplicating the capacities of a rival agency could be justified. In a statement to WikiLeaks, the source details policy questions that they say urgently need to be debated in public, including whether the CIA's hacking capabilities exceed its mandated powers and the problem of public oversight of the agency. The source wishes to initiate a public debate about the security, creation, use, proliferation and democratic control of cyberweapons. Once a single cyber 'weapon' is 'loose', it can spread around the world in seconds, to be used by rival states, cyber mafia and teenage hackers alike. Julian Assange, WikiLeaks editor, stated that "There is an extreme proliferation risk in the development of cyber 'weapons'. Comparisons can be drawn between the uncontrolled proliferation of such 'weapons', which results from the inability to contain them combined with their high market value, and the global arms trade. But the significance of "Year Zero" goes well beyond the choice between cyberwar and cyberpeace.
The disclosure is also exceptional from a political, legal and forensic perspective." WikiLeaks has carefully reviewed the "Year Zero" disclosure and published substantive CIA documentation while avoiding the distribution of 'armed' cyberweapons until a consensus emerges on the technical and political nature of the CIA's program and how such 'weapons' should be analyzed, disarmed and published. WikiLeaks has also decided to redact and anonymise some identifying information in "Year Zero" for in-depth analysis. These redactions include tens of thousands of CIA targets and attack machines throughout Latin America, Europe and the United States. While we are aware of the imperfect results of any approach chosen, we remain committed to our publishing model and note that the quantity of published pages in "Vault 7" part one ("Year Zero") already eclipses the total number of pages published over the first three years of the Edward Snowden NSA leaks.
Energy- and momentum-resolved exchange and spin-orbit interaction in cobalt film by spin-polarized two-electron spectroscopy. Spontaneous ordering of electronic spins in ferromagnetic materials is one of the best known and most studied examples of quantum correlations. Exchange correlations are responsible for long-range spin order, and the spin-orbit interaction (SOI) can create preferred crystalline directions for the spins, i.e., magnetic anisotropy. The presented experimental data illustrate how novel spin-polarized two-electron spectroscopy in reflection mode allows observation of the localization of spin-dependent interactions in energy-momentum space. Comparison of spin-orbit asymmetries in spectra of a Co film and clean W(110) may indicate the presence of interface-specific proximity effects, providing important clues to the formation of preferred orientations for the magnetic moment of the Co film. These results may help to understand the microscopic origin of interface magnetic anisotropy.
Are you tired of dealing with leaks in your basement? Have you recently discovered cracks in your foundation or mold in your crawl space? If so, the experts at Victor Barke's Complete Basement Systems stand ready and willing to help. With over 40 years of experience, Victor Barke's Complete Basement Systems is your trusted source for basement waterproofing, foundation repair, crawl space repair, and more. All of our professionals are highly trained and certified to handle any of the issues that may be plaguing your home. We also work only with the most proven and innovative products that the industry has to offer. With products such as dehumidifiers, sump pumps, drainage systems, crawl space jacks and more, we can ensure that you have quality solutions that will last your home for years to come.

Kasota, MN's Experts in Basement Waterproofing & Repair

It's a known fact that water can destroy your basement and cause thousands of dollars in damage. If you are suffering from a wet basement because of leaks or flooding, we provide affordable, permanent basement waterproofing solutions throughout Kasota. As an authorized member of the Basement Systems®, Inc. network, we have access to a large line of cutting-edge products with a proven track record of success throughout North America. We are so confident in the quality of our waterproofing products and work that we provide a lifetime warranty on all perimeter waterproofing systems.

Crawl Space Encapsulation & Insulation Throughout Kasota, MN

If you've been having problems with your crawl space, our services can help. Some symptoms of a crawl space problem include uneven floors, foul odors in the home, drywall cracks in the interior, and heightened allergy or asthma symptoms. Mold is also a common problem, as it can easily grow in any crawl space and eventually cause it to rot altogether.
Besides adding braces to crawl spaces, we can also professionally insulate and encapsulate the area with a CleanSpace vapor barrier. Installing sump pumps and dehumidifiers where necessary prevents moisture problems that lead to water damage and mold growth. Our crawl space system comes with a 25-year warranty that you can count on. This basement dehumidifier is an absolute must in any below-grade space. You will be able to control the humidity, which needs to be at or below 50% relative humidity. The CompleteAire Dehumidification System covers approx. 2200 sq ft and comes with a 5-year warranty. Our WaterGuard Drainage System will collect and channel any water that enters the basement, either through the floor or walls. The water will flow into the newly installed SuperSump Pump System, which pumps the water away from the house. WaterGuard is installed along the perimeter and finished over with new concrete. The concrete will lighten in color as it cures. The walls now have a beautiful vapor barrier installed, called the CleanSpace Wall System, to protect the stone foundation walls from moisture. Work Requests From Kasota, MN Vicinity of Clifford Dr in Kasota We have gotten water in our basement with the last few heavy rainstorms. Need to see how to get it water tight. Vicinity of Shanaska Creek Rd in Kasota Need a beaver dam type system on one wall, about 40 feet. Do you do this type? Vicinity of Oak Ridge Road in Kasota Water is coming into the basement when it rains. Isolated to one room. Basement is wood and a walkout.
Mid
[ 0.5924050632911391, 29.25, 20.125 ]
In a Nutshell Dine on the likes of monkfish carpaccio and tabbouleh at the Michelin-recommended Made Bar and Kitchen, located in Camden The Fine Print Validity: Expires 24 Oct 2014. Purchase: Limit 1 per 1, 2 or 4 people. May buy 5 additional as gifts. Booking: Required online via: http://www.groupon-reservation.co.uk/. 24 hours in advance. 24-hour cancellation policy. Restrictions: Valid Tue-Sat 5pm-9pm. Must be 18 or older. New customers only. A 12.5% discretionary service charge based on the pre-discounted value will be added to the bill. £3 surcharge on the rib eye steak option only. Valid for eat-in only. Menu is seasonal and subject to change. Two-hour booking time slots. Valid for option purchased only. Original values: Based on highest priced option. Valid for starter up to the value of £8.50; main up to the value of £15.50; side up to the value of £4.50 and drink (if applicable) up to the value of £6 each only. Verified using our merchant's website on 18 Jul 2014. Made Bar & Kitchen The Deal Diners can tuck into a two-course meal made up of items from the bar and kitchen's dinner menu, with the likes of monkfish carpaccio (usually £8.50), duck croquettes (£7) or soup of the day (£4.50) served to start. Mains such as whole-grilled seafood minute casserole (£15.50), quinoa green tabbouleh (£13.50) or a lamb burger (£13.50) follow, accompanied by a side with options including rocket salad (£4.50), or chunky chips with parmesan and truffle oil (£3.50). Options are available for visitors to add a little extra sparkle to their meal with a 125ml glass of Prosecco each (£6) on arrival. Customers should note that menu items are subject to change.
Choose from the following options for a two-course meal: £13 for one (Up to 54% off) £16 for one with Prosecco (Up to 54% off) £26 for two (Up to 54% off) £31 for two with Prosecco on arrival (Up to 55% off) £52 for four (Up to 54% off) £62 for four with Prosecco on arrival (Up to 56% off) Located within the trendy Camden area, the Michelin-recommended Made Bar and Kitchen serves up breakfasts, lunches and dinners, as well as weekend brunches and an array of wines and cocktails. The restaurant is handily placed for a visit before or after attending a show at one of the local theatres, and is also within walking distance of Camden Town station and Camden Market. Chalk Farm tube station is also located nearby. Reviews TripAdvisor (58 Reviews) Although the venue is still to be extensively reviewed in its current form, visitors to Made Bar and Kitchen's previous incarnation awarded it 4/5 on TripAdvisor. Users were eager to praise the attentive service, as well as the standard of the food available. Special praise went to the convenient location, both for the theatre and for visiting Camden. Beyond that, the restaurant was also listed under the Bib Gourmand section of the Michelin guide for 2013 and 2014 for the quality of its offering.
Mid
[ 0.593607305936073, 32.5, 22.25 ]
# Don't edit this file. It's generated automatically!
# If you want to update global dependencies, modify fixed-requirements.txt
# and then run 'make requirements' to update requirements.txt for all
# components.
# If you want to update dependencies for a single component, modify the
# in-requirements.txt for that component and then run 'make requirements' to
# update the component requirements.txt
RandomWords
mock==2.0.0
nose
nose-parallel==0.3.1
nose-timer==0.7.5
psutil==5.6.6
pyrabbit
rednose
unittest2
webtest
Low
[ 0.47522522522522503, 26.375, 29.125 ]
Q: Object Not Defined with Knockout Data Mapping Plugin

I'm trying to use the mapping plugin to make children objects' properties observable. I have the following:

// setData defined here
var mapping = {
    create: function (options) {
        // customize at the root level.
        var innerModel = ko.mapping.fromJS(options.data);

        innerModel.cardCount = ko.computed(function () {
            debugger;
            return this.cards().length; // cards not defined - "this" is Window for some reason
        });

        innerModel.deleteCard = function (card) {
            // Pending UI
            // call API here
            // On success, complete
            this.cards.remove(card);
        }.bind(this);

        innerModel.addCard = function () {
            //debugger;
            // Pending UI
            // Call API here
            // On success, complete
            this.cards.push(dummyCard);
            //this.cardToAdd("");
        }.bind(this);

        return innerModel;
    }
};

var SetViewModel = ko.mapping.fromJS(setData, mapping);
ko.applyBindings(SetViewModel);

When I run this in chrome debugger, I get "Object [Object global] has no method cards". Cards should be an observable array. What am I doing wrong?

A:

innerModel.cardCount = ko.computed(function () {
    debugger;
    return this.cards().length; // cards not defined - "this" is Window for some reason
});

this is inside the anonymous function you're creating and is therefore bound to the global object. If you want to reference innerModel you'll have to do so directly, or bind innerModel to the function.

innerModel.cardCount = ko.computed(function () {
    return innerModel.cards().length;
});

or

var computedFunction = function () {
    return this.cards().length;
};
innerModel.cardCount = ko.computed(computedFunction, innerModel);

(Passing innerModel as the second argument tells Knockout to use it as this when evaluating the computed; calling computedFunction.apply(innerModel) instead would invoke the function immediately and hand its return value to ko.computed, which is not what you want.)
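The pitfall can be reproduced without Knockout at all. A minimal plain-JavaScript sketch (hypothetical `model` object standing in for `innerModel`) of why an unbound callback loses its owner and how `bind` or a closure restores it:

```javascript
// A "model" holding an array, mimicking innerModel.cards.
var model = { cards: ["a", "b", "c"] };

// Unbound: when this function is later invoked as a plain call,
// `this` is NOT model (it is the global object, or undefined in
// strict mode), so this.cards would not exist.
function cardCount() {
    return this.cards.length;
}

// Binding fixes the owner once and for all:
var boundCount = cardCount.bind(model);
console.log(boundCount()); // 3

// Closing over the object works too, and is what the accepted
// fix does by referencing innerModel directly:
var closureCount = function () {
    return model.cards.length;
};
console.log(closureCount()); // 3
```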
Mid
[ 0.642131979695431, 31.625, 17.625 ]
Benchmarking the DFT methodology for assessing antioxidant-related properties: quercetin and edaravone as case studies. The overall objective was to identify an accurate computational electronic-structure method to virtually screen phenolic compounds through their antioxidant and free-radical scavenging activity. The impact of a key parameter of the density functional theory (DFT) approach was studied: the performance of the 21 most commonly used exchange-correlation functionals in the evaluation of the main energetic parameters related to the activities of two prototype antioxidants, namely quercetin and edaravone, is thus detailed. These functionals have been chosen among those belonging to three different families of hybrid functionals, namely global, range-separated, and double hybrids. Other computational parameters have also been considered, such as basis set and solvent effects. The selected parameters, namely bond dissociation enthalpy (BDE), ionization potential (IP), and proton dissociation enthalpy (PDE), allow a mechanistic evaluation of the antioxidant activities of free-radical scavengers. Our results show that all the selected functionals provide a coherent picture of these properties, predicting the same order of BDEs and PDEs. However, with respect to the reference values obtained at the CBS-QB3 level, the errors significantly vary with the functional. Although it is difficult to evidence a global trend from the reported data, it clearly appears that LC-ωPBE, M05-2X, and M06-2X are the most suitable approaches for the considered properties, giving the lowest cumulative mean absolute errors. These methods are therefore suggested for an accurate and fast evaluation of energetic parameters related to an antioxidant activity via free-radical scavenging.
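As a hedged aside, the abstract does not spell out how BDE, IP, and PDE are defined; the standard working definitions for a phenolic antioxidant ArOH (hydrogen atom transfer, and the two steps of single-electron transfer followed by proton transfer) are the enthalpy differences:

```latex
% Hydrogen atom transfer (HAT):
\mathrm{BDE} = H(\mathrm{ArO}^{\bullet}) + H(\mathrm{H}^{\bullet}) - H(\mathrm{ArOH})
% Single electron transfer (SET), first step:
\mathrm{IP} = H(\mathrm{ArOH}^{\bullet+}) + H(\mathrm{e}^{-}) - H(\mathrm{ArOH})
% Proton transfer from the radical cation, second step:
\mathrm{PDE} = H(\mathrm{ArO}^{\bullet}) + H(\mathrm{H}^{+}) - H(\mathrm{ArOH}^{\bullet+})
```

Lower BDE favors the HAT route, while IP and PDE together characterize the SET-PT route; all three are computed from the same set of optimized species, which is why the functional choice affects them consistently.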
High
[ 0.6910994764397901, 33, 14.75 ]
"...and once you have tasted flight you will walk the earth with your eyes turned skyward, for there you have been and there you long to return." Leonardo da Vinci The Lake District has a remarkably selective memory when it comes to commemorating its heroes. The most revered are those who celebrate its undoubted beauty, for example William Wordsworth, John Ruskin and Alfred Wainwright, with Beatrix Potter sitting close by. However, the Lake District has a proud history of industry and enterprise that demands recognition, none more so than the exploits of Captain Edward William Wakefield in the early years of the Twentieth Century. In the early morning of 25 November 1911 a hydro-aeroplane called "Waterbird" took off from the waters of Windermere, flew for a short time, and alighted safely. Herbert Stanley Adams was the pilot on this historic occasion, though the whole enterprise had been the brainchild of barrister landowner E W Wakefield of Kendal. This was one of the very first successful flights from water in the world, and is now recognised as the first successful complete flight from water, and safely back again, in Britain. The flight is all the more admirable because when Wakefield embarked on his project in 1909 he did so whilst flying in the face of accepted wisdom; powered flight was not thought possible from water. Drawing in part on the skills of a local boat builder he was to prove the doubters wrong. Perhaps the achievements of Wakefield, and his pilot Adams, would have survived much more prominently in the folk memory of the area were it not for his dispute with a certain Beatrix Potter and Canon Rawnsley, and their supporters. The issues that divided these three equally strong willed personalities are still very much alive today, and the relationship between nature and machine has always been uneasy, especially in the Lake District. 
We can, however, begin the process that will surely see Edward Wakefield take his rightful place in the Lake District "Hall of Fame". As if the rediscovery of archive material relating to E W Wakefield and his flying exploits were not exciting enough, a further trove of recently discovered personal letters and documents offer an invaluable insight into a remarkable man of his time, and one who has much to say to today's world. Captain Edward William Wakefield was among the spectators at an aviation meeting in Blackpool in October 1909. Having been fascinated by constructing model aeroplanes as a child, his passion for flight was reignited at this meeting. Having witnessed crashes at Blackpool, Wakefield developed his idea that flying could be safer by taking off and alighting on water. This idea was further reinforced through the death of Charles Stewart Rolls in July 1910, the first British pilot to lose his life in a powered aircraft. In early 1910 Wakefield began to make preparations for his extraordinary project, which was to have a successful hydro-aeroplane. His land in the Lake District included an area known as the Hill of Oaks that stood on the shores of Windermere. Trees were cleared and a road constructed that zig-zagged down to the Lake where his hangar was to be sited. Whilst the Hill of Oaks was being prepared, Wakefield visited France & southern England to research the practicalities of combining aeroplanes with floats. By the autumn of 1910 Wakefield placed an advertisement in Flight magazine for a second-hand Bleriot aeroplane and an aero engine in good condition. Amongst those who replied was A.V. Roe & Company of Brownsfield Mills, Manchester. The original negotiations between Wakefield & Roe were superseded by the news that an American by the name of Glenn Curtiss had made a successful flight from water in California. 
Wakefield quickly judged that the American's aircraft was the first really practical hydro-aeroplane, and so it was decided between Wakefield and the Roes that he should pay them to build a version of the Curtiss aeroplane, complete with a supplied Gnome engine. The Curtiss biplane that Wakefield ordered from the Roe Company was taken to the Brooklands test site in May and was ready to fly by the end of June 1911. The adapted Curtiss aeroplane was then ready to return to Windermere, where floats would be fitted and experiments on water would begin. It was at this point that Wakefield met an able and willing young pilot who was prepared to come up to Cumbria and work with him on his project. Herbert Stanley Adams was the pilot who played a key role in the success of Waterbird. A local company, Borwicks, delivered a completed float in August 1911. The float was based on the design for the float used on the Curtiss plane, but adapted to take account of the results of Wakefield's earlier experiments. In addition to the float, Wakefield fitted two side balancers to the wing tips. These became popularly known as Wakefield sausages. Initial tests on water proved unsuccessful until Wakefield asked Borwicks to make hydroplane steps in the float. [Caption: 'This is the original drawing by A.V. Roe & Company showing the amended design for a Curtiss Biplane, dated 9 March 1911'] After weeks of strong winds and rain, the weather improved dramatically, and on 25 November 1911 Adams took Waterbird out onto the Lake, successfully flew her and alighted safely. Sadly Wakefield was not there to witness this first flight, but his excitement can be seen in his correspondence to his wife. This was the first successful flight from water in the British Empire. There had been two earlier and notable attempts to fly on water, by Commander Schwann at Barrow Dock and Oscar Gnosspelius at Windermere, but neither alighted safely.
The Lakes Flying Company was founded in January 1912, and included Wakefield, Adams and the Earl of Lonsdale. Almost immediately Wakefield and the Company found themselves the focus of a campaign that was to make national headlines. The protest Committee, led by Canon Rawnsley and Beatrix Potter, were vehemently opposed to Wakefield and his flying activities in the Lake District and this would lead to a public inquiry into the issue. Eventually the matter was resolved in Wakefield’s favour and flying did continue though only after much heated argument and debate. One of Wakefield’s supporters was a certain Winston Churchill, MP, First Lord of the Admiralty, who supported his activities on Windermere. At the same time planning permission was granted for new hangars at Cockshott Point, Bowness. Flying activities continued on Windermere until around 1916 although Edward Wakefield went on to serve in Flanders during the First World War. Edward Wakefield (1862-1941) was one of Britain's most important aviation pioneers, now recognised as one of the fathers of the Royal Navy's Fleet Air Arm. It was his plane, Waterbird, that on 25 November 1911 made the first successful flight from water in the UK from Windermere. Born into a prosperous Lakeland family, Edward Wakefield trained as a banker and lawyer. But from an early age his restless disposition, combined with a strong sense of religious duty and Victorian patriotism, drove him to wider pastures. He was active in charity work, mainly with children in need, in London in the 1890s and again in the early 1900s. On the outbreak of the Boer War in 1899 he joined the Carlisle based Border Regiment and saw two years active service in South Africa. Attending a flying demonstration in 1909 he was told that casualties were inevitable when flying from land. He decided that flying from water would be much safer. Helped by considerable wealth and self-confidence, he set out to prove it. He built hangars on Lake Windermere. 
He bought and tested one of the earliest Avro planes, which he named Waterbird, for experimentation and adaption. National publicity followed. A strong protest campaign led by Beatrix Potter and Canon Rawnsley was foiled with government help. Soon his Hill of Oaks base became a centre for Admiralty testing and, by WW1, for the large-scale training of naval pilots whose graduates fought, and all too often died, all over the Western and Mediterranean fronts. In 1914, despite advancing age (he was then 52) Wakefield re-joined the army, spent three years training troops, commanded a Labour Battalion on the Western front, served in Italy and ended the War as Chief Church Army Commissioner for France and Belgium. His health badly damaged, he spent the rest of his life in Kendal, active as Mayor, Chair of Magistrates, local landowner and supporter of good causes. He died in 1941. His wife Mary pre-deceased him in 1921. He had one child, Marion, who many years later fondly reminisced of helping sew fabric for Waterbird's wings and foiling pre-WW1 German spies. His grandson, James Gordon (1913-98), was also a distinguished figure in aviation history - pioneering air-sea rescue dinghies and revolutionary wood epoxy construction techniques for Mosquito aircraft and Horsa gliders in World War 2.
Mid
[ 0.641509433962264, 34, 19 ]
Live Stream Video Live Stream Video Live Stream Video Live Stream Video Live 3ABN Radio Live Media Live Media Live Media Giving God is pleased with personal giving. It's in the Bible, Exodus 35:22, TLB. "Both men and women came, all who were willing-hearted. They brought to the Lord their offerings of gold, jewelry—earrings, rings from their fingers, necklaces—and gold objects of every kind." God is pleased when we give generously. It's in the Bible, Ezra 2:68-69, TLB. "Some of the leaders were able to give generously toward the rebuilding of the Temple, and each gave as much as he could. The total value of their gifts amounted to $300,000 of gold, $170,000 of silver, and 100 robes for the priests." God is pleased with regular sacrificial giving. It's in the Bible, II Corinthians 8:2, TLB. "Though they have been going through much trouble and hard times, they have mixed their wonderful joy with their deep poverty, and the result has been an overflow of giving to others." Generosity takes preparation and planning. It's in the Bible, Leviticus 19:9-10, TLB. "When you harvest your crops, don't reap the corners of your fields, and don't pick up stray grains of wheat from the ground. It is the same with your grape crop—don't strip every last piece of fruit from the vines, and don't pick up the grapes that fall to the ground. Leave them for the poor and for those traveling through, for I am Jehovah your God." Giving can be an act of worship. It's in the Bible, Matthew 2:11, NIV. "On coming to the house, they saw the child with His mother Mary, and they bowed down and worshipped Him. Then they opened their treasures and presented Him with gifts of gold and of incense and of myrrh." Gifts should be given willingly. It's in the Bible, II Corinthians 9:7, NIV. "Each man should give what he has decided in his heart to give, not reluctantly or under compulsion, for God loves a cheerful giver." Each person should give as much as he or she is able. 
It's in the Bible, II Corinthians 8:12, TLB. "If you are really eager to give, then it isn't important how much you have to give. God wants you to give what you have, not what you haven't." Much is required of those who have been given much. It's in the Bible, Luke 12:48, NIV. "From everyone who has been given much, much will be demanded; and from the one who has been entrusted with much, much more will be asked." Giving tithes and offerings assures the blessings of God. It's in the Bible, Malachi 3:8,10, NIV. "Will a man rob God? Yet you rob Me. But you ask, 'How do we rob You?' In tithes and offerings…Bring the whole tithe into the storehouse, that there may be food in My house. Test Me in this, says the Lord Almighty, and see if I will not throw open the floodgates of heaven and pour out so much blessing that you will not have room enough for it." Generosity comes back to the giver. It's in the Bible, Luke 6:38, TLB. "For if you give, you will get! Your gift will return to you in full and overflowing measure, pressed down, shaken together to make room for more, and running over. Whatever measure you use to give—large or small—will be used to measure what is given back to you." God is the ultimate example of giving. It's in the Bible, John 3:16, TLB. "For God loved the world so much that He gave His only Son so that anyone who believes in Him shall not perish but have eternal life."
Mid
[ 0.617283950617283, 37.5, 23.25 ]
Q: Building pandas dataframe from nested data from a mongo db cursor

I have nested data from a range of documents that I'd like to put in one Pandas dataframe (with only certain properties). Once I have my cursor I tried looping through the documents and grabbing what I needed.

all_df_real = []
for doc in cursor_real:
    single_real_df = pd.DataFrame(doc['data']['prices'])
    all_df_real.append(single_real_df)
return all_df_real

Ideally I wanted to create one big dataframe with all of the data and prices, so that I could then merge it to another dataframe which has rows that are missing values that will come from the all_df_real dataframe. The result I get however is a list, because I created an empty array to append the single_real_df to. Can someone help me figure out how to create a dataframe from multiple documents (which I grouped based on a time range), obtaining only the nested information? Initially I queried the database using find_one, but ran into problems because of the date range of documents I needed. Or am I going the wrong way about this by creating one dataframe at a time based on the documents from my cursor and trying to make one big dataframe from that...?

supporting info

This is what one of my documents looks like

{
    "_id" : ObjectId("1"),
    "modelRun" : ISODate("2016-11-23T13:04:00.000+0000"),
    "createdDateTime" : ISODate("2016-11-23T13:30:04.408+0000"),
    "Type" : "r",
    "data" : {
        "prices" : [
            { "timeStamp" : ISODate("2016-11-23T14:00:00.000+0000"), "value" : 58.48 },
            { "timeStamp" : ISODate("2016-11-23T15:00:00.000+0000"), "value" : 55.01 },
            { "timeStamp" : ISODate("2016-11-23T16:00:00.000+0000"), "value" : 62.0 },
            { "timeStamp" : ISODate("2016-11-23T17:00:00.000+0000"), "value" : 52.92 }
            #..etc..
        ]
    }
}

This is how I grabbed my cursor

def grab_real_cursor(self, model_dt_till):
    query_real = {'Type': 'r',
                  'modelRun': {"$gte": model_dt_till,
                               "$lte": model_dt_till + pd.Timedelta(days=1)}}
    cursor = self._collection.find(query_real)
    return cursor

UPDATE

I tried creating an empty dataframe with just the column names, but now instead of getting a list of all the data like before:

[           timeStamp  value
 0 2016-11-23 13:00:00  54.98
 1 2016-11-23 14:00:00  58.48
 2 2016-11-23 15:00:00  55.01
 3 2016-11-23 16:00:00  62.00
 #.. etc,
            timeStamp  value
 0 2016-11-23 14:00:00  58.48
 1 2016-11-23 15:00:00  55.01
 2 2016-11-23 16:00:00  62.00
 3 2016-11-23 17:00:00  52.92
]

all_df_real = pd.DataFrame(columns=['timeStamp', 'value'])

I now get an empty dataframe

Empty DataFrame
Columns: [timeStamp, value]
Index: []

A: I'm still learning Pandas, and after updating my question with more attempts I came across concat. Since

all_df_real = []
for doc in cursor_real:
    single_real_df = pd.DataFrame(doc['data']['prices'])
    all_df_real.append(single_real_df)
return all_df_real

returned:

[           timeStamp  value
 0 2016-11-23 13:00:00  54.98
 1 2016-11-23 14:00:00  58.48
 2 2016-11-23 15:00:00  55.01
 3 2016-11-23 16:00:00  62.00
 #.. etc,
            timeStamp  value
 0 2016-11-23 14:00:00  58.48
 1 2016-11-23 15:00:00  55.01
 2 2016-11-23 16:00:00  62.00
 3 2016-11-23 17:00:00  52.92
]

A list made up of dataframes, I could just return result = pd.concat(all_df_real).
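A minimal, self-contained sketch of the same pattern (with hypothetical inline dicts standing in for the Mongo documents, since no database is needed to show the point) demonstrating that `pd.concat` turns the list of per-document frames into one DataFrame:

```python
import pandas as pd

# Two fake documents shaped like the Mongo results (hypothetical data).
docs = [
    {"data": {"prices": [
        {"timeStamp": "2016-11-23T14:00:00", "value": 58.48},
        {"timeStamp": "2016-11-23T15:00:00", "value": 55.01},
    ]}},
    {"data": {"prices": [
        {"timeStamp": "2016-11-23T16:00:00", "value": 62.00},
        {"timeStamp": "2016-11-23T17:00:00", "value": 52.92},
    ]}},
]

# One DataFrame per document, exactly as in the loop above...
frames = [pd.DataFrame(doc["data"]["prices"]) for doc in docs]

# ...then a single frame; ignore_index renumbers rows 0..n-1 so the
# per-document indices (0, 1, 0, 1, ...) don't repeat.
result = pd.concat(frames, ignore_index=True)
print(len(result))  # 4
```

With `ignore_index=True` the combined frame can then be merged against the other DataFrame without duplicate-index surprises.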
Mid
[ 0.581081081081081, 32.25, 23.25 ]
Auditory attention in early Parkinson's disease: an impairment in focused attention. Focused attention in the auditory modality was studied in a group of Parkinson patients and compared to matched controls using a dichotic monitoring task. Parkinson patients detected more phonemic distractors on the unattended input than the normal controls, despite a high level of ipsilateral responses for target detection and target discrimination. This impairment in focused attention may be attributed to degenerative changes in the ascending monoamine pathways, which have been implicated in the role of auditory attention.
High
[ 0.6925566343042071, 26.75, 11.875 ]
**AGENCY:** Office of the Secretary, HHS. **ACTION:** Notice. **SUMMARY:** Notice is hereby given that the Office of Research Integrity (ORI) and the Assistant Secretary for Health have taken final action in the following case: *Caroline E. Garey, Boston College:* Based on the Report and Addendum of the Boston College Research Misconduct Investigation Committee and additional analysis conducted by ORI in its oversight review, the U.S. Public Health Service (PHS) finds that Ms. Caroline E. Garey, former doctoral student, Boston College, engaged in scientific misconduct by falsifying research supported by National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health (NIH), grant R01 NS23355. Specifically, as a graduate student at Boston College, Ms. Garey falsified restriction fragment length polymorphism (RFLP) data for ABP and DBA backcross mice DNA samples by misrepresenting results from multiple assays of identical backcross ABP DNA samples as being from different animals and misrepresenting the autoradiograms of backcross ABP DNA samples as the results from experiments on backcross DBA mice. Ms. Garey reported this falsified data in her doctoral dissertation, "Defect in the ceruloplasmin gene associated with epilepsy in the EL mouse," and in an article in Nature Genetics 6:426-431, 1994. She caused her falsified data to be reported by her laboratory director in NINDS, NIH, grant application 2 R01 NS23355-08A1 and at an international workshop on epilepsy on September 24, 1994. Ms. Garey also fabricated a translation table that she used to assign falsified RFLP data to individual backcross DBA mice. As a result of falsifying these assays over a minimum of two and one-half years, none of Ms. Garey's research can be considered reliable and the Nature Genetics publication has been retracted. These actions adversely and materially affected the laboratory's ongoing research on the genetic causes of epilepsy. Ms.
Garey also has engaged in a pattern of dishonest conduct that indicates that she is not presently responsible to be a steward of Federal funds. This pattern of behavior includes (1) a history of falsely claiming that she has performed scientific experiments when she has not, and (2) repeated instances in which she has misrepresented her credentials to prospective employers, colleagues, customers, and the general public as including a Ph.D. degree even though Boston College refused to grant her a doctoral degree because of her scientific misconduct. The publication affected is: • Garey, C.E., Schwarzman, A.L., Rise, M.L., & Seyfried, T.N. "Ceruloplasmin gene defect associated with epilepsy in EL mice." Nature Genetics 6:426-431, 1994 (retracted in Nature Genetics 11:104, 1995). (1) To exclude herself from any contracting or subcontracting with any agency of the United States Government and from eligibility for, or involvement in, nonprocurement transactions (*e.g.*, grants and cooperative agreements) of the United States Government as defined in 45 CFR part 76 (Debarment Regulations); (2) To exclude herself from serving in any advisory capacity to PHS, including but not limited to service on any PHS advisory committee, board, and/or peer review committee. **FOR FURTHER INFORMATION CONTACT:** Director, Division of Investigative Oversight, Office of Research Integrity, 5515 Security Lane, Suite 700, Rockville, MD 20852, (301) 443-5330. Chris Pascal, Director, Office of Research Integrity.
Mid
[ 0.5756929637526651, 33.75, 24.875 ]
// water_ring.c.inc f32 water_ring_calc_mario_dist(void) { f32 marioDistX = o->oPosX - gMarioObject->header.gfx.pos[0]; f32 marioDistY = o->oPosY - (gMarioObject->header.gfx.pos[1] + 80.0f); f32 marioDistZ = o->oPosZ - gMarioObject->header.gfx.pos[2]; f32 marioDistInFront = marioDistX * o->oWaterRingNormalX + marioDistY * o->oWaterRingNormalY + marioDistZ * o->oWaterRingNormalZ; return marioDistInFront; } void water_ring_init(void) { cur_obj_init_animation(0); o->oWaterRingScalePhaseX = (s32)(random_float() * 4096.0f) + 0x1000; o->oWaterRingScalePhaseY = (s32)(random_float() * 4096.0f) + 0x1000; o->oWaterRingScalePhaseZ = (s32)(random_float() * 4096.0f) + 0x1000; //! This normal calculation assumes a facing yaw of 0, which is not the case // for the manta ray rings. It also errs by multiplying the normal X by -1. // This cause the ring's orientation for the purposes of collision to be // different than the graphical orientation, which means that Mario won't // necessarily collect a ring even if he appears to swim through it. 
    o->oWaterRingNormalX = coss(o->oFaceAnglePitch) * sins(o->oFaceAngleRoll) * -1.0f;
    o->oWaterRingNormalY = coss(o->oFaceAnglePitch) * coss(o->oFaceAngleRoll);
    o->oWaterRingNormalZ = sins(o->oFaceAnglePitch);
    o->oWaterRingMarioDistInFront = water_ring_calc_mario_dist();

    // Adding this code will alter the ring's graphical orientation to align with the faulty
    // collision orientation:
    //
    // o->oFaceAngleYaw = 0;
    // o->oFaceAngleRoll *= -1;
}

void bhv_jet_stream_water_ring_init(void) {
    water_ring_init();
    o->oOpacity = 70;
    cur_obj_init_animation(0);
    o->oFaceAnglePitch = 0x8000;
}

// sp28 = arg0
// sp2c = ringManager
void water_ring_check_collection(f32 avgScale, struct Object *ringManager) {
    f32 marioDistInFront = water_ring_calc_mario_dist();
    struct Object *ringSpawner;

    if (!is_point_close_to_object(o, gMarioObject->header.gfx.pos[0],
                                  gMarioObject->header.gfx.pos[1] + 80.0f,
                                  gMarioObject->header.gfx.pos[2], (avgScale + 0.2) * 120.0)) {
        o->oWaterRingMarioDistInFront = marioDistInFront;
        return;
    }

    if (o->oWaterRingMarioDistInFront * marioDistInFront < 0) {
        ringSpawner = o->parentObj;
        if (ringSpawner) {
            if ((o->oWaterRingIndex == ringManager->oWaterRingMgrLastRingCollected + 1)
                || (ringSpawner->oWaterRingSpawnerRingsCollected == 0)) {
                ringSpawner->oWaterRingSpawnerRingsCollected++;
                if (ringSpawner->oWaterRingSpawnerRingsCollected < 6) {
                    spawn_orange_number(ringSpawner->oWaterRingSpawnerRingsCollected, 0, -40, 0);
#ifdef VERSION_JP
                    play_sound(SOUND_MENU_STAR_SOUND, gDefaultSoundArgs);
#else
                    play_sound(SOUND_MENU_COLLECT_SECRET
                                   + (((u8) ringSpawner->oWaterRingSpawnerRingsCollected - 1) << 16),
                               gDefaultSoundArgs);
#endif
                }
                ringManager->oWaterRingMgrLastRingCollected = o->oWaterRingIndex;
            } else {
                ringSpawner->oWaterRingSpawnerRingsCollected = 0;
            }
        }
        o->oAction = WATER_RING_ACT_COLLECTED;
    }

    o->oWaterRingMarioDistInFront = marioDistInFront;
}

void water_ring_set_scale(f32 avgScale) {
    o->header.gfx.scale[0] = sins(o->oWaterRingScalePhaseX) * 0.1 + avgScale;
    o->header.gfx.scale[1] = sins(o->oWaterRingScalePhaseY) * 0.5 + avgScale;
    o->header.gfx.scale[2] = sins(o->oWaterRingScalePhaseZ) * 0.1 + avgScale;
    o->oWaterRingScalePhaseX += 0x1700;
    o->oWaterRingScalePhaseY += 0x1700;
    o->oWaterRingScalePhaseZ += 0x1700;
}

void water_ring_act_collected(void) {
    f32 avgScale = (f32) o->oTimer * 0.2 + o->oWaterRingAvgScale;

    if (o->oTimer >= 21) {
        o->activeFlags = ACTIVE_FLAG_DEACTIVATED;
    }

    o->oOpacity -= 10;
    if (o->oOpacity < 0) {
        o->oOpacity = 0;
    }

    water_ring_set_scale(avgScale);
}

void water_ring_act_not_collected(void) {
    f32 avgScale = (f32) o->oTimer / 225.0 * 3.0 + 0.5;
    //! In this case ringSpawner and ringManager are the same object,
    // because the Jet Stream Ring Spawner is its own parent object.
    struct Object *ringSpawner = o->parentObj;
    struct Object *ringManager = ringSpawner->parentObj;

    if (o->oTimer >= 226) {
        o->oOpacity -= 2;
        if (o->oOpacity < 3) {
            o->activeFlags = ACTIVE_FLAG_DEACTIVATED;
        }
    }

    water_ring_check_collection(avgScale, ringManager);
    water_ring_set_scale(avgScale);
    o->oPosY += 10.0f;
    o->oFaceAngleYaw += 0x100;
    set_object_visibility(o, 5000);

    if (ringSpawner->oWaterRingSpawnerRingsCollected == 4
        && o->oWaterRingIndex == ringManager->oWaterRingMgrLastRingCollected + 1) {
        o->oOpacity = sins(o->oTimer * 0x1000) * 200.0f + 50.0f;
    }

    o->oWaterRingAvgScale = avgScale;
}

void bhv_jet_stream_water_ring_loop(void) {
    switch (o->oAction) {
        case WATER_RING_ACT_NOT_COLLECTED:
            water_ring_act_not_collected();
            break;
        case WATER_RING_ACT_COLLECTED:
            water_ring_act_collected();
            break;
    }
}

void spawn_manta_ray_ring_manager(void) {
    struct Object *ringManager = spawn_object(o, MODEL_NONE, bhvMantaRayRingManager);
    o->parentObj = ringManager;
}

void water_ring_spawner_act_inactive(void) {
    //! The Jet Stream Ring Spawner is its own parent object. The code may have been copied
    // from the Manta Ray, which spawns rings but also has a Ring Manager object as its
    // parent. The Jet Stream Ring Spawner functions as both a spawner and a Ring Manager.
    struct Object *currentObj = o->parentObj;
    struct Object *waterRing;

    //! Because the index counter overflows at 10000, it's possible to wait
    // for about 4 hours and 38 minutes if you miss a ring, and the index will
    // come around again.
    if (o->oTimer == 300) {
        o->oTimer = 0;
    }

    if ((o->oTimer == 0) || (o->oTimer == 50) || (o->oTimer == 150) || (o->oTimer == 200)
        || (o->oTimer == 250)) {
        waterRing = spawn_object(o, MODEL_WATER_RING, bhvJetStreamWaterRing);
        waterRing->oWaterRingIndex = currentObj->oWaterRingMgrNextRingIndex;
        currentObj->oWaterRingMgrNextRingIndex++;
        if (currentObj->oWaterRingMgrNextRingIndex >= 10001) {
            currentObj->oWaterRingMgrNextRingIndex = 0;
        }
    }
}

void bhv_jet_stream_ring_spawner_loop(void) {
    switch (o->oAction) {
        case JS_RING_SPAWNER_ACT_ACTIVE:
            water_ring_spawner_act_inactive();

            if (o->oWaterRingSpawnerRingsCollected == 5) {
                spawn_mist_particles();
                spawn_default_star(3400.0f, -3200.0f, -500.0f);
                o->oAction = JS_RING_SPAWNER_ACT_INACTIVE;
            }
            break;
        case JS_RING_SPAWNER_ACT_INACTIVE:
            break;
    }
}

void bhv_manta_ray_water_ring_init(void) {
    water_ring_init();
    o->oOpacity = 150;
}

void manta_water_ring_act_not_collected(void) {
    f32 avgScale = (f32) o->oTimer / 50.0f * 1.3 + 0.1;
    struct Object *ringSpawner = o->parentObj;
    struct Object *ringManager = ringSpawner->parentObj;

    if (avgScale > 1.3) {
        avgScale = 1.3;
    }

    if (o->oTimer >= 151) {
        o->oOpacity -= 2;
        if (o->oOpacity < 3) {
            o->activeFlags = ACTIVE_FLAG_DEACTIVATED;
        }
    }

    water_ring_check_collection(avgScale, ringManager);
    water_ring_set_scale(avgScale);
    set_object_visibility(o, 5000);

    if (ringSpawner->oWaterRingSpawnerRingsCollected == 4
        && o->oWaterRingIndex == ringManager->oWaterRingMgrLastRingCollected + 1) {
        o->oOpacity = sins(o->oTimer * 0x1000) * 200.0f + 50.0f;
    }

    o->oWaterRingAvgScale = avgScale;
}

void bhv_manta_ray_water_ring_loop(void) {
    switch (o->oAction) {
        case WATER_RING_ACT_NOT_COLLECTED:
            manta_water_ring_act_not_collected();
            break;
        case WATER_RING_ACT_COLLECTED:
            water_ring_act_collected();
            break;
    }
}
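The `water_ring_check_collection` logic above detects that Mario swam through a ring by checking whether his signed distance in front of the ring changed sign between frames (`o->oWaterRingMarioDistInFront * marioDistInFront < 0`). A minimal Python sketch of that sign-flip test (names are illustrative, not from the decompilation):

```python
def crossed_ring(prev_dist: float, curr_dist: float) -> bool:
    """True when a signed plane distance changes sign between two
    frames, i.e. the point passed through the ring's plane."""
    return prev_dist * curr_dist < 0

# Walk a point through the plane: distances shrink, flip sign, keep going.
samples = [3.0, 1.0, -2.0, -4.0]
crossings = [crossed_ring(a, b) for a, b in zip(samples, samples[1:])]
```

Note the edge case: a frame that lands exactly on the plane gives a product of zero, which the strict `< 0` test does not count as a crossing, matching the C code's behavior.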
Mid
[ 0.5447154471544711, 33.5, 28 ]
Lithium-ion (Li-ion) batteries are currently the best-performing batteries and have already become the standard for portable electronic devices. In addition, they have penetrated, and are rapidly gaining ground in, other industries such as automotive and electrical storage. The enabling advantages of such batteries are a high energy density combined with good power performance. A Li-ion battery typically contains a number of so-called Li-ion cells, which in turn contain a positive (cathode) electrode, a negative (anode) electrode and a separator, all immersed in an electrolyte. The most frequently used Li-ion cells for portable applications employ electrochemically active materials such as lithium cobalt oxide or lithium nickel manganese cobalt oxide for the cathode and a natural or artificial graphite for the anode. It is known that one of the important limiting factors influencing a battery's performance, and in particular its energy density, is the active material in the anode. Therefore, to improve the energy density, newer electrochemically active materials based on e.g. tin, aluminium and silicon have been investigated and developed over the last decades, such developments mostly relying on the principle of alloying the active material with Li as Li is incorporated during use. The best candidate appears to be silicon, as theoretical capacities of 3579 mAh/g or 2200 mAh/cm3 can be obtained, far larger than that of graphite (372 mAh/g) and those of the other candidates. Note that throughout this document silicon is intended to mean the element Si in its zerovalent state, while the term Si will be used to indicate the element regardless of its oxidation state, zerovalent or oxidised. However, one drawback of using a silicon-based electrochemically active material in an anode is its large volume expansion during charging, which is as high as 300% when the lithium ions are fully incorporated, e.g.
by alloying or insertion, in the anode's active material, a process often called lithiation. The large volume expansion of silicon-based materials during Li incorporation may induce stresses in the silicon, which in turn could lead to mechanical degradation of the material. Repeated over successive charging and discharging cycles of the Li-ion battery, this mechanical degradation of the silicon electrochemically active material may reduce the life of a battery to an unacceptable level. In an attempt to alleviate the deleterious effects of the volume change of the silicon, many research studies have shown that reducing the silicon material to submicron or nanosized particles, typically with an average size smaller than 500 nm and preferably smaller than 150 nm, and using these as the electrochemically active material may prove a viable solution. In order to accommodate the volume change, composite particles are usually used, in which the silicon particles are mixed with a matrix material, usually a carbon-based material, but possibly also a silicon-based alloy or SiO2. In the present invention, only composites having carbon as matrix material are considered. A further negative effect of silicon is that a thick SEI, a Solid-Electrolyte Interface, may be formed on the anode. An SEI is a complex reaction product of the electrolyte and lithium; it leads to a loss of lithium availability for electrochemical reactions and therefore to a poor cycle performance, i.e. the capacity loss per charging-discharging cycle. A thick SEI may further increase the electrical resistance of a battery and thereby limit the achievable charging and discharging rates. In principle SEI formation is a self-terminating process that stops as soon as a 'passivation layer' has formed on the silicon surface.
However, because of the volume expansion of silicon, both the silicon and the SEI may be damaged during charging (lithiation) and discharging (de-lithiation), thereby freeing new silicon surface and leading to a new onset of SEI formation. In the art, the above lithiation/de-lithiation mechanism is generally quantified by a so-called coulombic efficiency, defined as the ratio (in %, per charge-discharge cycle) of the energy removed from a battery during discharge to the energy used during charging. Most work on silicon-based anode materials is therefore focused on improving said coulombic efficiency. Current methods to make such silicon-based composites are based on mixing the individual ingredients (e.g. silicon and carbon or a precursor for the intended matrix material) during preparation of the electrode paste formulation, or on a separate composite manufacturing step that is carried out either via dry milling/mixing of silicon and host material (possibly followed by a firing step), or via wet milling/mixing of silicon and host material (followed by removal of the liquid medium and a possible firing step). Despite the advances in the art of negative electrodes and the electrochemically active materials contained therein, there is still a need for yet better electrodes with the ability to further optimize the performance of Li-ion batteries. In particular, for most applications, negative electrodes having improved capacities and coulombic efficiencies are desirable.
Therefore, the invention concerns a composite powder for use in an anode of a lithium ion battery, whereby the particles of the composite powder comprise a carbon matrix material and silicon particles dispersed in this matrix material, whereby the composite powder further comprises silicon carbide, whereby the ordered domain size of the silicon carbide, as determined by the Scherrer equation applied to the X-ray diffraction SiC peak having a maximum at 2θ between 35.4° and 35.8°, when measured with a copper anticathode producing Kα1 and Kα2 X-rays with a wavelength equal to 0.15418 nm, is at most 15 nm, preferably at most 9 nm and more preferably at most 7 nm. The Scherrer equation (P. Scherrer; Göttinger Nachrichten 2, 98 (1918)) is a well-known equation for calculating the size of ordered domains from X-ray diffraction data. In order to avoid machine-to-machine variations, standardized samples can be used for calibration. The composite powder according to the invention has a better cycle performance than traditional powders. Without being bound by theory, the inventors believe that the silicon carbide improves the mechanical bond between the silicon particles and the carbon matrix material, so that stresses on the interface between the silicon particles and the matrix material, e.g. those associated with expansion and contraction of the silicon during use of the battery, are less likely to lead to a disconnection of the silicon particles from the matrix material. This, in turn, allows for a better transfer of lithium ions from the matrix to the silicon and vice versa. Additionally, less silicon surface is then available for the formation of an SEI. Preferably said silicon carbide is present on the surface of said silicon particles, so that said silicon carbide forms a partial or complete coating of said silicon particles and so that the interface between said silicon particles and said carbon is at least partly formed by said silicon carbide.
It is noted that silicon carbide formation may also occur with the traditional materials, if silicon embedded in carbon or a carbon precursor is overheated, typically to well over 1000° C. However, this will in practice not lead to a limited, superficial formation of chemical Si-C bonds, as is shown to be beneficial in the present invention, but to a complete conversion of silicon to silicon carbide, leaving no silicon to act as anode active material. Also, in such circumstances a highly crystalline silicon carbide is formed. The silicon carbide in a powder according to the present invention is present as a thin layer of very small silicon carbide crystals or poorly crystalline silicon carbide, which manifests itself, on an X-ray diffractogram of the composite powder, as a peak having a maximum at 2θ between 35.4° and 35.8° with a width at half the maximum height of more than 1.0°, equivalent to an ordered domain size of 9 nm as determined by the Scherrer equation applied to the SiC peak on the X-ray diffractogram at 2θ = 35.6°, when measured with a copper anticathode producing Kα1 and Kα2 X-rays with a wavelength equal to 0.15418 nm. Preferably, the composite powder has an oxygen content of 3 wt % or less, and preferably 2 wt % or less. A low oxygen content is important to avoid too much lithium consumption during the first battery cycles. Preferably the composite powder has a particle size distribution with d10, d50 and d90 values, whereby (d90−d10)/d50 is 3 or lower. The d50 value is defined as the diameter of a particle of the composite powder corresponding to a 50 weight % cumulative undersize particle size distribution. In other words, if for example d50 is 12 μm, 50% of the total weight of particles in the tested sample are smaller than 12 μm. Analogously, d10 and d90 are the particle sizes below which 10% and 90%, respectively, of the total weight of particles lies.
A narrow PSD is of crucial importance since small particles, typically below 1 μm, result in a higher lithium consumption caused by electrolyte reactions. Excessively large particles, on the other hand, are detrimental to the final electrode swelling. Preferably less than 25% by weight, and more preferably less than 20% by weight, of all Si present in the composite powder is present in the form of silicon carbide, as Si present in the form of silicon carbide is not available as anode active material capable of being lithiated and delithiated. In order to have an appreciable effect, more than 0.5% by weight of all Si present in the composite powder should be present in the form of silicon carbide. The invention further concerns a method of manufacturing a composite powder, preferably a composite powder as described above according to the invention, comprising the following steps:
A: Providing a first product comprising one or more of products I, II and III;
B: Providing a second product being carbon or a precursor for carbon, preferably pitch, whereby said precursor can be thermally decomposed to carbon at a temperature less than a first temperature;
C: Mixing said first and second products to obtain a mixture;
D: Thermally treating said mixture at a temperature less than said first temperature;
whereby product I is: silicon particles having on at least part of their surface silicon carbide; whereby product II is: silicon particles that can be provided on at least part of their surface with silicon carbide by being exposed to a temperature less than said first temperature and by being provided on their surface with a compound containing C atoms and capable of reacting with silicon at a temperature less than said first temperature to form silicon carbide; and whereby product III is: silicon particles that can be provided on at least part of their surface with silicon carbide by being exposed to a temperature less than said first temperature and by being
provided on their surface with a precursor compound for silicon carbide, said precursor compound comprising Si atoms and C atoms and being capable of being transformed into silicon carbide at a temperature less than said first temperature; whereby said first temperature is 1075° C. and preferably 1020° C.
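The ordered-domain-size criterion above comes from the Scherrer equation, D = Kλ/(β·cos θ), where β is the peak's full width at half maximum in radians. A quick sketch of the calculation behind the stated equivalence between a 1.0° FWHM and a domain size of roughly 9 nm (the shape factor K is an assumption here; the text does not specify it, and values between about 0.9 and 1.0 are common):

```python
import math

def scherrer_domain_size(wavelength_nm, two_theta_deg, fwhm_deg, k=0.9):
    """Ordered domain size D = K * lambda / (beta * cos(theta)),
    with beta (the FWHM) converted from degrees to radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation, SiC peak at 2-theta = 35.6 deg, FWHM = 1.0 deg
d = scherrer_domain_size(0.15418, 35.6, 1.0)  # roughly 8-9 nm
```

A broader peak gives a smaller domain size, which is why the claim bounds the domain size from above by bounding the peak width from below.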
Mid
[ 0.622222222222222, 35, 21.25 ]
Q: java for loop 1st char of 1st string then last char of second string

When I input two strings of equal length, my code works just fine, but when the strings have different lengths, it throws a StringIndexOutOfBoundsException. Here is my code... I need to output a5b4c3d21. This is not homework, I am just studying String manipulation through for loops. Thank you in advance.

String name1 = "abcd";
String name2 = "12345";
String temp = "";

for (int i = 1; i <= name1.length() || i <= name2.length(); i++) {
    temp = temp + name1.charAt(i - 1) + name2.charAt(name2.length() - i);
}
System.out.println(temp);

A: You need to do bounds checking:

for (int i = 1; i <= name1.length() || i <= name2.length(); i++) {
    if (i <= name1.length()) {
        temp += name1.charAt(i - 1);
    }
    if (i <= name2.length()) {
        temp += name2.charAt(name2.length() - i);
    }
}

You could use the conditional operator to make it a bit less verbose:

for (int i = 1; i <= name1.length() || i <= name2.length(); i++) {
    temp += (i <= name1.length() ? name1.charAt(i - 1) : "")
          + (i <= name2.length() ? name2.charAt(name2.length() - i) : "");
}

but I recommend the first version, which is clearer.
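For comparison, the accepted answer's bounds-checked interleaving can be sketched in Python (a translation for illustration, not part of the original thread):

```python
def interleave(name1: str, name2: str) -> str:
    """Pair the i-th character of name1 (from the front) with the
    i-th character of name2 (from the back), skipping whichever
    string has run out of characters."""
    out = []
    for i in range(1, max(len(name1), len(name2)) + 1):
        if i <= len(name1):
            out.append(name1[i - 1])              # front of name1
        if i <= len(name2):
            out.append(name2[len(name2) - i])     # back of name2
    return "".join(out)

result = interleave("abcd", "12345")  # "a5b4c3d21"
```

The two `if` guards play the same role as the Java answer's bounds checks: each string contributes a character only while its index is still in range.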
Mid
[ 0.563218390804597, 30.625, 23.75 ]
Q: DAL without web2py

I am using web2py to power my web site. I decided to use the web2py DAL for a long-running program that runs behind the site. This program does not seem to update its data or the database (sometimes).

from gluon.sql import *
from gluon.sql import SQLDB
from locdb import *
# contains
# db = SQLDB("mysql://user/pw@localhost/mydb", pool_size=10)
# db.define_table('orders', Field('status', 'integer'), Field('item', 'string'),
#                 migrate='orders.table')

orderid = 20  # there is a row with id == 20 in table orders

# when I do
db(db.orders.id == orderid).update(status=6703)
db.commit()

It does not update the database, and a select on orders with this id shows the correct data. In some circumstances a "db.rollback()" after a commit seems to help. Very strange to say the least. Have you seen this? More importantly, do you know the solution?

UPDATE: Correction: The select in question is done within the program, not outside it. Sometimes, when doing a series of updates, some will work and be available outside and some will not be available. Also, some queries will return the data they originally returned even though the data has changed in the DB since the original query. I am tempted to dump this approach and move to another method, any suggestions?

A: This problem has been resolved: MySQL runs at isolation level REPEATABLE READ (that is, once the transaction starts, the data reflected in the select output will not change till the transaction ends). It needed changing the isolation level to READ COMMITTED, and that resolved the issue. By the way, READ COMMITTED is the isolation level at which Oracle and MSSQL run by default. This can be set in the my.cnf. Details in the link below:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html
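The snapshot behaviour described in the answer can be illustrated with a toy model: under REPEATABLE READ the first read inside a transaction pins a snapshot that later reads keep returning, while READ COMMITTED sees each newly committed change (a simplified sketch, not a real MySQL client):

```python
class Session:
    """Toy model of the two isolation levels involved in the question.

    REPEATABLE READ: the first read pins a snapshot; later reads in the
    same transaction return that snapshot even after another session
    commits. READ COMMITTED: every read sees the latest committed data.
    """

    def __init__(self, db, isolation):
        self.db = db            # shared dict standing in for committed state
        self.isolation = isolation
        self.snapshot = None    # pinned copy for REPEATABLE READ

    def read(self, key):
        if self.isolation == "REPEATABLE READ":
            if self.snapshot is None:
                self.snapshot = dict(self.db)   # pin at first read
            return self.snapshot.get(key)
        return self.db.get(key)                 # READ COMMITTED: latest

db = {"status": 0}
rr = Session(db, "REPEATABLE READ")
rc = Session(db, "READ COMMITTED")

first_rr = rr.read("status")   # pins the snapshot: 0
db["status"] = 6703            # another session commits an update
stale = rr.read("status")      # still 0: the long-running program's symptom
fresh = rc.read("status")      # 6703
```

In real MySQL the corresponding fix is `SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED` (or `transaction-isolation = READ-COMMITTED` in `my.cnf`); note also that `db.commit()`/`db.rollback()` end the current transaction and release the pinned snapshot, which is consistent with the asker's observation that a stray `db.rollback()` sometimes appeared to help.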
High
[ 0.6792452830188671, 31.5, 14.875 ]
Platy pigments are composed of a plurality of laminar platelets coated with one or more reflecting/transmitting layers. Typically, effect pigments are a laminar platy substrate such as natural mica or glass flake that has been coated with a metal oxide layer. A description of effect pigments' properties can be found in the Pigment Handbook, Volume I, Second Edition, pp. 829-858, John Wiley & Sons, NY 1988, which is incorporated herein by reference. If colorless metal oxides are used to coat the laminar platy substrate, effect pigments exhibit pearl-like luster as a result of reflection and refraction of light, and depending on the thickness of the metal oxide layer, they can also exhibit interference color effects. If colored metal oxides are used, the observed effects depend on reflection, refraction and absorption. Platy pigments, such as effect pigments (also known as pearlescent pigments or nacreous pigments), are used to impart a pearlescent luster, metallic luster and/or multi-color effect approaching iridescence, to a material. It is, for instance, common to include platy pigments in cosmetic and personal care compositions, to contribute to or provide color, luster and/or pleasing tactile properties. Natural mica and metal oxide-coated natural mica have a surface hydrophilicity character that may not be suitable for interaction with skin. In addition, the hydrophilicity can affect the distribution or dispersion of the pigment in a cosmetic composition. Surface modification of natural mica or metal oxide-coated natural mica with hydrophobic materials is known. See, for instance, U.S. Pat. Nos. 4,640,943; 5,326,392; and 6,780,826; and U.S. Pat. App. Pub. 2004/0223929. There is an on-going need in the art for platy substrates and platy pigments with improved properties.
High
[ 0.6965174129353231, 35, 15.25 ]
Abstract There are several reports of glomerulonephritis (GN) in diabetics or patients with diabetic glomerulosclerosis. Cases of rapidly progressive GN where crescentic histologic changes are superimposed on diabetic glomerulosclerosis are very unusual. We report the case of a patient with type I diabetes mellitus, who developed rapidly progressive renal insufficiency. Renal biopsy disclosed anti-glomerular basement membrane nephritis superimposed on classical diabetic glomerulosclerosis. From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine. This record was last updated on 07/03/2016 and may not reflect the most current and accurate biomedical/scientific data available from NLM. The corresponding record at NLM can be accessed at https://www.ncbi.nlm.nih.gov/pubmed/11936433
High
[ 0.693877551020408, 31.875, 14.0625 ]
The present invention relates to coordinating protocol stacks for computer systems or electronic devices. More particularly, the invention relates to the dynamic loading and unloading of protocol stacks under signaling control. Protocol stacks are the hierarchy of protocols that allow computers and devices to communicate over a network. These networking protocol stacks have traditionally been statically loaded. That is, once loaded into memory, the protocol stacks will typically remain loaded in memory until they are explicitly unloaded by a user. The static loading of protocol stacks can be acceptable in some environments, but there are an increasing number of environments and situations where it provides less than desirable results. As a first example, smaller mobile devices such as cell phones and personal digital assistants (PDAs) are becoming more and more popular. Although including many of the same components as their larger, less mobile brethren, these mobile devices usually have fewer resources, such as the memory in which the protocol stacks are loaded. Additionally, the smaller mobile devices often need to run operating system vendor applications that use non-sharable protocol stacks while periodically needing to use that same stack for other purposes. Thus, static loading of protocol stacks can be troublesome on devices, such as smaller mobile devices, that are resource limited. In order to address this resource-constraint problem, what is usually done is to limit the number or capability of the applications supported on the platform to a level that never exceeds the available memory when all the protocol stacks are loaded. In some cases, a more costly model of the device, which includes greater memory and possibly the applications to take advantage of this memory, may be introduced. In still other cases, the ability to add plug-in extra-cost memory to a mobility platform may be provided.
The major disadvantage to these solutions is that at any given price point, the functionality that can be offered to customers is less than that which could be achieved if memory were used more efficiently. Another situation where static loading of protocol stacks can result in less than desirable results is when one attempts to share non-sharable protocol stacks. Non-sharable protocol stacks are stacks that either cannot be loaded or used or are not designed to be loaded or used at the same time, typically by different applications. For example, a company's own application may use some of the same protocol stacks as Microsoft's NetMeeting. However, the protocol stacks in NetMeeting may not be designed to be shared in this manner. What is typically done to alleviate this problem of sharing non-sharable protocol stacks is that the users switch between applications by manually unloading the protocol stacks of one application and then loading the protocol stacks of the other application. Depending on the applications, the protocol stacks can be unloaded manually or automatically when the application is exited. In theory, the problem of sharing non-sharable protocol stacks can also be solved by integrating into a single application all the needed functionality so there is no need to load a potentially conflicting application. The goal of providing all the needed functionality is difficult to achieve for single users and even harder for large groups that may want a very diverse set of utilities. Alternatively, one could only purchase applications that do not have protocol stack sharing restrictions. This may be impractical as it limits one's choices of applications and results in higher resource utilization such as memory. Finally, it is possible for the vendor to support truly shared use of its protocol stacks, both by its own graphical user interface (GUI) and by programmatic callers. 
However, most vendors, even the largest and most successful, usually do not invest the extra effort required to make this solution work.
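The load-on-demand model described above can be sketched with reference-counted module loading. The following is a minimal illustration in Python, assuming a hypothetical `StackManager` class (not part of any real product); a real implementation would load native protocol stack images rather than Python modules:

```python
import importlib
import sys

class StackManager:
    """Loads a protocol stack module on demand and unloads it when the
    last user releases it, freeing memory on constrained devices."""

    def __init__(self):
        self.refcounts = {}  # module name -> number of active users

    def acquire(self, name):
        # Load the stack only when a caller actually needs it.
        if self.refcounts.get(name, 0) == 0:
            importlib.import_module(name)
        self.refcounts[name] = self.refcounts.get(name, 0) + 1
        return sys.modules[name]

    def release(self, name):
        # Unload once no application is using the stack, so a
        # non-sharable stack can be handed to the next application.
        self.refcounts[name] -= 1
        if self.refcounts[name] == 0:
            del sys.modules[name]
```

The point of the sketch is the reference count: two applications that can share a stack simply overlap their acquire/release calls, while a non-sharable stack is fully unloaded between users instead of occupying memory permanently.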
just to clarify, r u trying to create html code that has syntax highlighting on the front-end of an html page? if so there’s lots of ways to do this, i’ve used something called [codemirror][1] before which essentially lets u create a “code editor” in the browser (front-end). see mr.doob’s rad implementation here: http://mrdoob.com/projects/htmleditor/ >> but for simple bits of code what i’ve done is create code snippets on github via [gists][2] && then embed them in html pages >> [here’s an example][3]

Yeah, it would be easier using HTML than openFrameworks, but it’s an embedded application, not a web app. In my head it would mean having to track all the drawing positions and change colors based on the syntax highlighting I desire, which I can already sense being an utter headache. It’s already displaying markup, so finding what to highlight is easy enough; it’s displaying it that’s not.
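The position-tracking approach is less painful if the markup is first split into (color, text) spans that the renderer just walks left to right. Here is a rough sketch of that splitting step in Python; the color names and `COLORS` table are made up for illustration, and an openFrameworks app would do the same split in C++ and draw each span with its own fill color:

```python
import re

# Hypothetical palette; an embedded renderer would map these to RGB.
COLORS = {"tag": "blue", "text": "black"}

def highlight_spans(markup):
    """Split markup into (color, text) spans so a renderer can draw
    each span at the running x-position in its own color."""
    spans = []
    # Capturing split keeps the tags themselves as tokens.
    for token in re.split(r"(<[^>]*>)", markup):
        if not token:
            continue
        kind = "tag" if token.startswith("<") else "text"
        spans.append((COLORS[kind], token))
    return spans
```

With spans in hand, the draw loop only needs to advance an x-offset by the measured width of each span, which sidesteps tracking positions per character.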
Hemingway man charged with criminal sexual conduct with a minor

By Michaele Duke, Staff reporter
Wednesday, May 14, 2014

A former schoolteacher has been charged with criminal sexual conduct with a minor - third degree. According to reports, Charles Leo Demay, 78, of East Society Street in Hemingway, was arrested Wednesday, May 7, after the victim's family reported the incident to Hemingway police. According to a Williamsburg County Sheriff's Office booking report, Demay was also charged with disseminating obscene material to a person under the age of 18 and providing persons under the age of 18 with tobacco products. However, Hemingway Police Chief Bryan Todd added that the charges have been upgraded to criminal sexual conduct with a minor - second degree and contributing to the delinquency of a minor. Both warrants were served Friday, May 9, according to Todd. Chief Todd declined to release the incident report, citing the nature of the information. During a Monday, May 12, telephone interview, Todd said the incident occurred at Demay's residence. He said that after the alleged incident occurred, the victim returned home and told a family member. The family member then contacted the police. Todd said that after being Mirandized, Demay was subjected to a taped interview. During the interview Demay admitted to the assault. At that time he was arrested and was booked into the Williamsburg County Detention Center, where he remains. Criminal sexual conduct with a minor - second degree is a Class C felony and can carry up to 20 years in prison. Disseminating harmful materials to a minor is a Class F felony and can carry up to a 10-year prison sentence. The Hemingway Police Department continues to investigate the case. Law enforcement is asking any potential victims of similar incidents to contact the Hemingway office at (843) 558-2424.
The field of the present invention is jet reaction thrust control of attitude (pitch, yaw and roll) of a moving body. More particularly, the present invention relates to jet reaction thrust control of a vehicle wherein successively oppositely directed and selectively variable impulse bits are applied to the vehicle. The vehicle comprises a source of pressurized motive gas communicating to pairs of attitude control nozzles which are oppositely disposed with respect to a control axis, and bistable valving controlling flow of the motive gas to the nozzles. The magnitude of the impulse bits applied to the vehicle is the time integral of thrust, and duration of thrust for each pulse. The oppositely directed thrust pulses from the attitude control nozzles are changed stepwise between near-zero, and a determined level. Because the impulse bits are applied to the vehicle in successively opposite directions, the vehicle is forced into an attitude oscillation. This attitude oscillation is known to those in the art as a limit cycle, and the control method is known as a Pulse Duration Modulation (PDM) or bang-bang system. In applications where the impulse bits are sufficiently small in comparison to the vehicle mass and are applied at a frequency above the oscillation threshold frequency of the vehicle, the latter responds only to the net average force without oscillation. A conventional PDM vehicle control system utilizing bistable valve structure is known in accord with U.S. Pat. No. 3,278,140, issued Oct. 11, 1966 to K. C. Evans. According to the teaching of Evans, one of three bistable valves is utilized to control each one of the pitch, yaw, and roll axis of a vehicle. Each of the bistable valves comprises a body defining a nozzle directing a motive gas stream toward a splitter which defines a bifurcated flow path. Control ports are located on each side of the fluid stream as it travels toward the splitter. 
The chamber through which the fluid stream travels is configured to cause a vacuum in the control ports in response to flow of the fluid stream. Consequently, closing one of the control ports to atmosphere while leaving the opposite port open causes rarification switching of the fluid stream toward the closed port. With a control scheme as taught by Evans, a vehicle must comprise at least three bistable valve apparatus to control the attitude of the vehicle in each one of the pitch, yaw, and roll axis. Consequently, the control apparatus may comprise a considerable portion of the total weight of a small vehicle. When it is desired to make a vehicle which is man-portable, or which can be carried upon other small, light-weight vehicles, the total weight of the vehicle is a critical design parameter. In such circumstances, the weight of a control device as taught by Evans may be a prohibitively large portion of the permissible vehicle weight. Of course, if the control scheme according to Evans is utilized nevertheless, the performance of the vehicle may fall short of that required. Another aspect of the Evans control scheme which is not entirely satisfactory is the use of rarification switching of the fluid stream between its two positions within the bistable valve. Such a switching arrangement can result in the PDM characteristic of the control system being variable and dependent upon atmospheric pressure level. As a result, a vehicle which operates satisfactorily near sea level may have an unsatisfactorily slow PDM rate when used at a higher altitude. Still another undesirable aspect of the Evans teaching when applied to small vehicles is the use of three separate motive fluid streams to effect control in the three control axes. Because each of the fluid streams must be flowing so long as control of the vehicle is to be effected, a considerable portion of vehicle total energy may be dissipated by the three streams in combination.
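The relationship between pulse timing and the net average force can be illustrated numerically. This is a generic sketch of PDM averaging over rectangular pulses, not a model of the Evans valve; thrust is in newtons and times in seconds:

```python
def impulse_bit(thrust, pulse_duration):
    """Impulse bit = time integral of thrust; for a rectangular
    pulse this is simply thrust times pulse duration (N*s)."""
    return thrust * pulse_duration

def net_average_force(thrust, forward_on, reverse_on, period):
    """Net average force over one PDM period when oppositely
    directed nozzles fire for different fractions of that period."""
    forward = impulse_bit(thrust, forward_on)
    reverse = impulse_bit(thrust, reverse_on)
    return (forward - reverse) / period

# Equal on-times in each direction cancel: zero net force, pure limit cycle.
assert net_average_force(10.0, 0.05, 0.05, 0.1) == 0.0
```

Biasing the on-times (say 80 ms forward vs. 20 ms reverse in a 100 ms period) produces a proportional net force, which is why a vehicle whose oscillation threshold frequency is below the pulse rate responds only to that average.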
Synopsis: Home video changed the way the world consumed films. The cultural and historical impact of the VHS tape was enormous. REWIND THIS! is a documentary that traces the ripples of that impact by examining the myriad aspects of art, technology, and societal perceptions that were altered by the creation of videotape. — IPF Productions.

Directed by Josh Johnson, Rewind This! chronicles the rise and fall of the Video Home System, better known by its acronym, VHS. Produced by Carolee Mitchell with cinematography and editing by Christopher Palmer, Rewind This! was shot over the course of three years as the filmmakers interviewed film critics, filmmakers, and industry professionals in the US, Canada, and Japan. Over 100 interviews were conducted in attempting to tell the story of the VHS revolution – which meant following collectors (or VHS Warriors, if you will) such as Zack Carlson and visiting video tape meccas like I Luv Video. Obscure, ridiculous titles like Heavy Metal Parking Lot, Bubba Smith’s workout video, Bubba Until It Hurts, and Rolling Vengeance, a movie about a killer monster truck with murderous modifications like a big-ass drill, showcase VHS’s varied, bizarre collection of movies. All your old favorites are there: Street Trash, Basket Case, and Cannibal Holocaust, complete with thoughts on the death of hand-painted cover art and Hollywood’s desire for photoshopped, “floating head” art that focuses on the celebrities in the film rather than the story. Back in the ’80s, you judged the quality of a film based on its cover art – it’s what jumped out at you on the rental store shelves that made you throw down a few bucks to check it out. I remember renting Troll, Ghoulies, Chopping Mall – so many absurd video tapes – based solely on how sweet the box covers were.
The most interesting aspect of Johnson’s film, aside from revisiting these forgotten gems of the direct-to-video days, is the revelation that – though the DVD format is extensive – only 40-50% of movies released on VHS made it to disc format – that’s a lot of movies that haven’t been seen. The filmmakers explore this somewhat surprising fact and bring in archivists to examine the loss of these films as we move away from physical media. Rewind This! made its world premiere at this year’s South by Southwest Film Festival and is easily one of my favorite movies of 2013 – it’s informative while being nostalgic and entirely entertaining. Johnson, Mitchell, and Palmer have crafted one hell of a great documentary – and for children of the ’80s who remember exploring the dusty shelves of their local video store, Rewind This! will take you back to Saturday night sleepovers spent renting tapes and eating pizza, watching movies and hoping the movie itself lived up to the box art.
Pharmacological nutrition after burn injury. Burn patients develop pathophysiological alterations, which include extensive nitrogen loss, malnutrition, a markedly increased metabolic rate and immunologic deficiency. This predisposes burn patients to frequent infections, poor wound healing, increased length of hospitalization and increased mortality. Nutritional support requires high-protein, high-energy diets, preferably administered enterally soon after injury. The effects of increased dietary components such as glutamine, arginine and (n-3) fatty acids and related compounds have been evaluated in burn victims. These components, when supplied in quantities two to seven times those in normal diets of healthy persons, appear to have beneficial pharmacological effects on the pathophysiological alterations associated with burns. However, the efficacy of immune-enhancing diets remains to be convincingly shown.
Q: Java encoding issue while reading stream

I am trying to download contents from an ftp folder. There is one xml file which starts with the standard xml declaration: < ?xml version="1.0" encoding="utf-8"?> When I read these files (using java.net.Socket), get the input stream and then try to convert it to a String, somehow I get some new characters, and the whole xml document starts with '?', e.g. "?< ?xml version="1.0" encoding="utf-8"?>....." BufferedInputStream reader = new BufferedInputStream(sock.getInputStream()); Then I am getting a string from this reader using the following code. StringBuilder sb = new StringBuilder(); String line; BufferedReader br = new BufferedReader(new InputStreamReader(reader)); while ((line = br.readLine()) != null) { sb.append(line); } System.out.println(sb.toString()); Not sure what's happening here. Why am I getting special characters introduced? Any suggestions would be appreciated. I then used the following code to read the file, and in the console I see some special characters: BufferedReader reader = new BufferedReader(new FileReader("c:/Users/appd922/DocumentMeta06122014.xml")); StringBuffer sb = new StringBuffer(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line); } String output = sb.toString(); System.out.println("reading from file"+output); I got output starting "reading from file< ?xml version..... Where am I getting these special characters? Note - ignore the space in the xml file line given above; I could not write it here without that space.

A: Those characters are called a BOM, Byte Order Mark. If you set the encoding of the InputStreamReader to 'UTF-8', you will see that they are interpreted as a single character, the BOM character. Unfortunately, you have to handle this character yourself, because Java won't do it for you: java utf-8 and bom. Usually you just strip your stream of it. Good luck.
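The pitfall is easy to reproduce outside Java too. As an illustrative sketch (not the asker's Java code), Python makes the two behaviors explicit: a plain UTF-8 decode keeps the BOM as a stray character, while the dedicated 'utf-8-sig' codec strips it the way the answer recommends doing by hand in Java:

```python
import codecs

# A UTF-8 file that begins with a byte order mark (bytes EF BB BF).
data = codecs.BOM_UTF8 + b'<?xml version="1.0" encoding="utf-8"?>'

# Decoding as plain UTF-8 keeps the BOM as U+FEFF: the stray
# "special character" seen before the XML declaration.
plain = data.decode("utf-8")
assert plain[0] == "\ufeff"

# The utf-8-sig codec consumes the BOM, leaving clean XML.
clean = data.decode("utf-8-sig")
assert clean.startswith("<?xml")
```

In Java the equivalent fix is to peek at the first bytes of the stream and skip the three-byte UTF-8 BOM before wrapping the stream in a reader.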
yesterday my sound just stopped working. I didn't install or uninstall anything new that I can think of. does anyone know what the problem might be? please help:(

El Sitherino 05-20-2003, 09:24 PM
sometimes my wire gets pulled out when i pull out the keyboard too far. it could be something as simple as that. or it could be the speakers just died.

greedo626 05-20-2003, 09:34 PM
yeah, I accidentally yanked the cord out of the old speakers (oops). I reinstalled the sound driver and messed with the cord and plug and now they work. whoopie:D . thanks

El Sitherino 05-20-2003, 09:36 PM
Originally posted by greedo626
yeah, I accidentally yanked the cord out of the old speakers (oops). I reinstalled the sound driver and messed with the cord and plug and now they work. whoopie:D . thanks

np. it happens to the best of us.
A two- to four-lane roadway providing mobility and access. Collector streets can be found in residential neighborhoods, commercial and industrial areas, and central business districts. Collectors usually have minimal access control, and the right-of-way is typically 80 feet. Collectors are designed to move traffic from local roads to secondary arterials.

A method used to characterize the spectral properties of a light source and specify the appropriate light source type in architectural design; light sources of the same color can vary in quality (CRI is used to describe light source quality); lower color temperatures are warmer (yellow/red) while high color temperatures are cooler (blue); the standard unit for color temperature is Kelvin, K (e.g. candlelight is 1500 K, daylight at noon is 5500 K).

Transportation pathway allowing movement between activity centers; a corridor may encompass single or multiple transportation routes and facilities, adjacent land uses, and the connecting street network.

Color Rendering Index - A system used to describe the effect of a light source on the color/appearance of an object, compared to a reference source. The index uses a scale from 0-100 and is used as a quality distinction.

An extension of the sidewalk or curb line into the parking lane to reduce the effective street width. Also known as curb bulb-outs or neckdowns, curb extensions significantly improve pedestrian crossings by reducing the pedestrian crossing distance, visually and physically narrowing the roadway, improving the ability of pedestrians and motorists to see each other, and reducing the time that pedestrians are in the street. Curb extensions are only appropriate where there is an on-street parking lane. Curb extensions should not extend more than 6 feet from the curb, and must not extend into travel lanes, bicycle lanes or shoulders. The turning needs of larger vehicles, such as school buses, need to be considered in curb extension design.
Fuel Smarts

Illinois Suspends Fuel Tax

Expect to see pump prices drop in Illinois in the next week. The state General Assembly has suspended the 5 percent sales tax on gasoline and diesel fuel for six months, effective Saturday, July 1. The agreement was given the go-ahead yesterday by lawmakers and Gov. George Ryan. The deal resembles the proposal that Ryan made before calling a rare special legislative session Wednesday to address high gas prices. Gas station operators will have two weeks to post a sticker on each pump announcing the cut. Those who don't post the stickers will face $500-a-day fines under the agreement. The agreement will keep Illinois competitive with Indiana, which has already moved to temporarily cut its sales tax on fuel. The plan will cost $180 million and will be paid for by $75 million in unexpected revenues from the state's economy and by possible 2 percent budget cutbacks. Some people have expressed concern that gas stations won't pass along the savings to the consumer. Republicans dropped their idea to fine gas station operators and oil companies that did not pass on the savings because there were questions as to how that would be proved. However, Fred Serpe, executive director of the Illinois Transportation Association, says his group is not concerned. "There are enough people monitoring this situation, ranging from truckers and consumers to the media, that we believe everyone will see some immediate relief from these high fuel prices."
package core

import (
	"encoding/json"
	"fmt"
	"math"
	"math/big"
	"sort"
	"strings"
	"testing"
	"time"

	"gopkg.in/square/go-jose.v2"

	"github.com/letsencrypt/boulder/test"
)

// challenges.go

func TestNewToken(t *testing.T) {
	token := NewToken()
	fmt.Println(token)
	tokenLength := int(math.Ceil(32 * 8 / 6.0)) // 32 bytes, b64 encoded
	if len(token) != tokenLength {
		t.Fatalf("Expected token of length %d, got %d", tokenLength, len(token))
	}

	collider := map[string]bool{}
	// Test for very blatant RNG failures:
	// Try 2^20 birthdays in a 2^72 search space...
	// our naive collision probability here is 2^-32...
	for i := 0; i < 1000000; i++ {
		token = NewToken()[:12] // just sample a portion
		test.Assert(t, !collider[token], "Token collision!")
		collider[token] = true
	}
}

func TestLooksLikeAToken(t *testing.T) {
	test.Assert(t, !LooksLikeAToken("R-UL_7MrV3tUUjO9v5ym2srK3dGGCwlxbVyKBdwLOS"), "Accepted short token")
	test.Assert(t, !LooksLikeAToken("R-UL_7MrV3tUUjO9v5ym2srK3dGGCwlxbVyKBdwLOS%"), "Accepted invalid token")
	test.Assert(t, LooksLikeAToken("R-UL_7MrV3tUUjO9v5ym2srK3dGGCwlxbVyKBdwLOSU"), "Rejected valid token")
}

func TestSerialUtils(t *testing.T) {
	serial := SerialToString(big.NewInt(100000000000000000))
	test.AssertEquals(t, serial, "00000000000000000000016345785d8a0000")

	serialNum, err := StringToSerial("00000000000000000000016345785d8a0000")
	test.AssertNotError(t, err, "Couldn't convert serial number to *big.Int")
	if serialNum.Cmp(big.NewInt(100000000000000000)) != 0 {
		t.Fatalf("Incorrect conversion, got %d", serialNum)
	}

	badSerial, err := StringToSerial("doop!!!!000")
	test.AssertEquals(t, fmt.Sprintf("%v", err), "Invalid serial number")
	fmt.Println(badSerial)
}

func TestBuildID(t *testing.T) {
	test.AssertEquals(t, "Unspecified", GetBuildID())
}

const JWK1JSON = `{
  "kty": "RSA",
  "n": "vuc785P8lBj3fUxyZchF_uZw6WtbxcorqgTyq-qapF5lrO1U82Tp93rpXlmctj6fyFHBVVB5aXnUHJ7LZeVPod7Wnfl8p5OyhlHQHC8BnzdzCqCMKmWZNX5DtETDId0qzU7dPzh0LP0idt5buU7L9QNaabChw3nnaL47iu_1Di5Wp264p2TwACeedv2hfRDjDlJmaQXuS8Rtv9GnRWyC9JBu7XmGvGDziumnJH7Hyzh3VNu-kSPQD3vuAFgMZS6uUzOztCkT0fpOalZI6hqxtWLvXUMj-crXrn-Maavz8qRhpAyp5kcYk3jiHGgQIi7QSK2JIdRJ8APyX9HlmTN5AQ",
  "e": "AQAB"
}`
const JWK1Digest = `ul04Iq07ulKnnrebv2hv3yxCGgVvoHs8hjq2tVKx3mc=`
const JWK2JSON = `{
  "kty":"RSA",
  "n":"yTsLkI8n4lg9UuSKNRC0UPHsVjNdCYk8rGXIqeb_rRYaEev3D9-kxXY8HrYfGkVt5CiIVJ-n2t50BKT8oBEMuilmypSQqJw0pCgtUm-e6Z0Eg3Ly6DMXFlycyikegiZ0b-rVX7i5OCEZRDkENAYwFNX4G7NNCwEZcH7HUMUmty9dchAqDS9YWzPh_dde1A9oy9JMH07nRGDcOzIh1rCPwc71nwfPPYeeS4tTvkjanjeigOYBFkBLQuv7iBB4LPozsGF1XdoKiIIi-8ye44McdhOTPDcQp3xKxj89aO02pQhBECv61rmbPinvjMG9DYxJmZvjsKF4bN2oy0DxdC1jDw",
  "e":"AQAB"
}`

func TestKeyDigest(t *testing.T) {
	// Test with JWK (value, reference, and direct)
	var jwk jose.JSONWebKey
	err := json.Unmarshal([]byte(JWK1JSON), &jwk)
	if err != nil {
		t.Fatal(err)
	}
	digest, err := KeyDigestB64(jwk)
	test.Assert(t, err == nil && digest == JWK1Digest, "Failed to digest JWK by value")
	digest, err = KeyDigestB64(&jwk)
	test.Assert(t, err == nil && digest == JWK1Digest, "Failed to digest JWK by reference")
	digest, err = KeyDigestB64(jwk.Key)
	test.Assert(t, err == nil && digest == JWK1Digest, "Failed to digest bare key")

	// Test with unknown key type
	_, err = KeyDigestB64(struct{}{})
	test.Assert(t, err != nil, "Should have rejected unknown key type")
}

func TestKeyDigestEquals(t *testing.T) {
	var jwk1, jwk2 jose.JSONWebKey
	err := json.Unmarshal([]byte(JWK1JSON), &jwk1)
	if err != nil {
		t.Fatal(err)
	}
	err = json.Unmarshal([]byte(JWK2JSON), &jwk2)
	if err != nil {
		t.Fatal(err)
	}

	test.Assert(t, KeyDigestEquals(jwk1, jwk1), "Key digests for same key should match")
	test.Assert(t, !KeyDigestEquals(jwk1, jwk2), "Key digests for different keys should not match")
	test.Assert(t, !KeyDigestEquals(jwk1, struct{}{}), "Unknown key types should not match anything")
	test.Assert(t, !KeyDigestEquals(struct{}{}, struct{}{}), "Unknown key types should not match anything")
}

func TestIsAnyNilOrZero(t *testing.T) {
	test.Assert(t, IsAnyNilOrZero(nil), "Nil seen as non-zero")
	test.Assert(t, IsAnyNilOrZero(false), "False bool seen as non-zero")
	test.Assert(t, !IsAnyNilOrZero(true), "True bool seen as zero")
	test.Assert(t, IsAnyNilOrZero(0), "Zero num seen as non-zero")
	test.Assert(t, !IsAnyNilOrZero(uint32(5)), "Non-zero num seen as zero")
	test.Assert(t, !IsAnyNilOrZero(-12.345), "Non-zero num seen as zero")
	test.Assert(t, IsAnyNilOrZero(""), "Empty string seen as non-zero")
	test.Assert(t, !IsAnyNilOrZero("string"), "Non-empty string seen as zero")
	test.Assert(t, IsAnyNilOrZero([]byte{}), "Empty byte slice seen as non-zero")
	test.Assert(t, !IsAnyNilOrZero([]byte("byte")), "Non-empty byte slice seen as zero")

	type Foo struct {
		foo int
	}
	test.Assert(t, IsAnyNilOrZero(Foo{}), "Empty struct seen as non-zero")
	test.Assert(t, !IsAnyNilOrZero(Foo{5}), "Non-empty struct seen as zero")

	var f *Foo
	test.Assert(t, IsAnyNilOrZero(f), "Pointer to uninitialized struct seen as non-zero")

	test.Assert(t, IsAnyNilOrZero(1, ""), "Mixed values seen as non-zero")
	test.Assert(t, IsAnyNilOrZero("", 1), "Mixed values seen as non-zero")
}

func TestUniqueLowerNames(t *testing.T) {
	u := UniqueLowerNames([]string{"foobar.com", "fooBAR.com", "baz.com", "foobar.com", "bar.com", "bar.com", "a.com"})
	sort.Strings(u)
	test.AssertDeepEquals(t, []string{"a.com", "bar.com", "baz.com", "foobar.com"}, u)
}

func TestValidSerial(t *testing.T) {
	notLength32Or36 := "A"
	length32 := strings.Repeat("A", 32)
	length36 := strings.Repeat("A", 36)
	isValidSerial := ValidSerial(notLength32Or36)
	test.AssertEquals(t, isValidSerial, false)
	isValidSerial = ValidSerial(length32)
	test.AssertEquals(t, isValidSerial, true)
	isValidSerial = ValidSerial(length36)
	test.AssertEquals(t, isValidSerial, true)
}

func TestRetryBackoff(t *testing.T) {
	assertBetween := func(a, b, c float64) {
		t.Helper()
		if a < b || a > c {
			t.Fatalf("%f is not between %f and %f", a, b, c)
		}
	}

	factor := 1.5
	base := time.Minute
	max := 10 * time.Minute

	expected := base
	backoff := RetryBackoff(1, base, max, factor)
	assertBetween(float64(backoff), float64(expected)*0.8, float64(expected)*1.2)

	expected = time.Second * 90
	backoff = RetryBackoff(2, base, max, factor)
	assertBetween(float64(backoff), float64(expected)*0.8, float64(expected)*1.2)

	expected = time.Minute * 10 // should be truncated
	backoff = RetryBackoff(7, base, max, factor)
	assertBetween(float64(backoff), float64(expected)*0.8, float64(expected)*1.2)
}
The Atlanta Hawks look for their NBA-leading 48th victory when they host the Houston Rockets on Tuesday. The Hawks have won four in a row since dropping a 105-80 decision to the Toronto Raptors on Feb. 20, and are coming off a 93-91 triumph against the Miami Heat to open up a 13 1/2 game lead over the Washington Wizards atop the Southeast Division. Atlanta has won 14 of its last 15 games at home and hopes to continue the trend by beating the Rockets for the third straight time. Houston has reeled off five consecutive victories following an impressive 105-103 overtime win against the Cleveland Cavaliers on Sunday. The red-hot Rockets have won five games in a row against Eastern Conference opponents but haven't beaten the Hawks since Nov. 27, 2013, which marked their sixth straight win in the series. Houston remains 1 1/2 games behind the Memphis Grizzlies in the race for the Southwest Division title and looks to gain more ground by snapping a two-game skid on the road. TV: 7:30 p.m. ET, NBA TV, ROOT (Houston), SportSouth (Atlanta) ABOUT THE ROCKETS (41-18): James Harden continued to make his case for MVP by recording 33 points, eight rebounds, five assists and three steals in the emotional win over the Cavs. "I'm just out here trying to prove myself and trying to win games," Harden told reporters. "I know how hard I've worked to put myself in this situation." Harden was assessed a flagrant foul 1 after kicking LeBron James below the belt and was suspended for one game without pay by the NBA on Monday for his actions. ABOUT THE HAWKS (47-12): Paul Millsap led the way with 22 points and Dennis Schroder added 16 points and 10 assists as Atlanta rested DeMarre Carroll, Al Horford, Jeff Teague and Pero Antic in the win over the Heat. "I think there is a lot of belief and faith in our entire roster," coach Mike Budenholzer told reporters. "We've got a lot of players who feel like they can step up and contribute." 
Kyle Korver is shooting a league-high 50 percent from 3-point range but went 2-of-7 from beyond the arc on Saturday.
Yeah, that’s why I use it. And unfortunately for me, there are at least three fairly large networks of people I’d have to convince to shift off of it in order for it to become superfluous to me: Professional colleagues. Tons of discussion that once happened on mailing lists now happens in a somewhat more scattered way on Facebook, in either Facebook groups or on the walls of particularly well-connected people. Some of this also happens on Twitter, but more happens on Facebook, in part because Twitter discussions are more confusing (no threading, short messages, no groups, etc.). I think this is especially the case in my line of work (academia) because Facebook originated as a university social network. Local events. Basically everything around here is advertised on Facebook. Some but not all things also have websites. Sometimes RSVP’ing is accepted only via Facebook. Family. Just about all of my family, in multiple countries, are on Facebook, so I can use it to both passively keep up to date with what they’re doing, and update them with what I’m doing. It would be hard to move this elsewhere. Even if you compromise on the “passive” aspect, there’s no other way to even actively send them photos and things like that, besides snail-mail; I have a number of relatives (especially in Greece) who use Facebook but not email. Yeah, badly phrased on #1, I should’ve said “a social network for universities”. It’s really widespread among academics of a certain age group because of that, though. Around 2004–06, basically “everyone”, students and staff, got on Facebook, and most have remained. How do you use Google Photos for that? I use an Android smartphone and my photos are synced to Google Photos by default, but how do I send them to relatives? Many don’t use email, so I can’t send a link to an album. SMS is rarely used among my social group. I don’t even have many of their phone numbers. Also, I’m not sure switching from one giant corporation to another is really a solution. 
Google Photos is surely not a social network (yet) but giving Google more business doesn't necessarily make anything better for anyone. Semi-anti-capitalist rant here, but in my more cynical moments I think we're living in the internet equivalent of robber baron days. We have huge corporations that are extracting value from us, buying any would-be competitor to shut them down or make them part of the corporation. If competitiveness is an important aspect of capitalism, I think huge corporations are an indicator that the system is not properly competitive. But I'm also not sure how the world can support multiple Facebooks, if the primary value is network effects. I think it'd be nice if the government passed regulation that required social networks to be like the railways: standardize the transit protocol so that social networks can federate. That way I can be on whatever social network I want and my family can be on whatever one is extracting value from them and we can still communicate. It doesn't matter what internet provider or telecom provider I have, I can talk to anyone on any of them. I have no idea how you could do the same with search, though. But I haven't really thought it all through that much.

When you define community as "people who get along because they agree and are basically the same" instead of "people who work together in spite of their differences" you are going to have an exceptionally difficult time making sense of real-world politics. Unfortunately many of us are used to online communities where we are safely enveloped by similar opinions. Even before Facebook.

That's usually true. You can counter it a bit on a personal level by picking a diverse set of friends representing a lot of different segments of America. Throw in different kinds of intellectuals with that. You get a more realistic perspective on Facebook, esp when significant events happen.

Can you explain further?
I personally don’t think it’s facebook’s job to act as parental figures towards their user base. I also notice that “death threats” is often used as a stand-in for “insults” or “trolling”, so I’m naturally a bit skeptical about complaints like this. The issue of parental figures is not related to the point I’m trying to make. What I’m saying is these sites love to talk about their userbase as a “community,” but I find that ridiculous. Is everyone on Twitter part of the Twitter community? No, that’s silly. If we subdivide further, is everyone talking tech on Twitter part of the Twitter tech community? I still find that ridiculous for a few reasons. First, it’s much too large. Second, there is little in the way of shared norms/values. These companies abuse this word because they know we want to be known and valued somewhere. The mechanics of these sites do not make that easy, however. Most social media sites are loud and noisy, and not given to nuance. I find social media a decent place to find some like-minded individuals, but actually cultivating those relationships requires more substantial effort than likes/retweets. But what are said networks built on? Short, pithy messages, and cheap signals of affection. It might seem like I’m getting pissy about the semantics of community (and I am), but I don’t find social media really enables deep relationships, just a bunch of shallow ones. “But it was never meant to foster deep relationships!” you might argue. Okay, then don’t call it a community!