Design of Microtremor Monitoring Tools Using an Accelerometer Sensor on an Android Mobile to Determine the Natural Building Frequency in the UNS Library. The design of a microtremor monitoring tool that uses the accelerometer sensor of an Android phone to determine the natural frequency of the UNS Library building has been completed. This study aims to determine the resonance ratio between the soil and the UNS Library building. The main sensor for recording microtremor activity is the accelerometer sensor of an Android phone. Microtremor activity was recorded on every floor of the UNS Library building and on the ground surface outside the building. The recorded data were sent to a server by a telemetry method. The data stored on the server were then displayed as graphs on the microtremor monitoring web page. The microtremor data were processed with the FFT to determine the dominant frequency. From the dominant frequency, the resonance ratio between the soil and the UNS Library building could be found. The resonance ratio is 69.35-94.48% in the NS component and 70.42-98.61% in the EW component, with low resonance status on each floor of the building. Introduction. Microtremor is a very small, continuous ground vibration that originates from various sources such as traffic, wind, and human activity. Microtremor can also be interpreted as a continuous natural harmonic vibration of the soil, trapped in surface sediments and reflected by a fixed layer or boundary layer, caused by micro vibrations below the soil surface and other natural activities. Microtremor can be measured by a microtremor meter device that
measures amplitude and period. In seismic studies, lighter lithology carries a higher risk when shaken by earthquake waves, because it produces greater wave amplification compared with more compact rock [1]. One important factor that can be used to predict earthquake hazard for a building is the measurement of resonance between the natural frequency of the building and that of the ground below it [2]. If the building's frequency is close to the natural frequency of the underlying material, seismic vibration will give rise to resonance in the building, which increases the stress on the structure [3]. Nowadays many technologies have been created to analyze natural phenomena such as earthquakes. Today's technology is very complex: it serves not only as a medium of communication between humans, but also as a medium of information between humans and their natural surroundings. Android is one of the operating systems (OS) that dominates the smartphone market. Android is open source, which allows users to optimize the functions of the devices available on the smartphone [4]. Android devices generally contain a motion sensor, the accelerometer. The accelerometer sensor on Android is used to determine the orientation of motion of the smartphone; it measures the acceleration of movement in units of m/s2 [5]. The sensor is very sensitive and can detect microtremor vibration. This study aims to determine the resonance ratio between the soil and the UNS Library building using the accelerometer
sensor on an Android mobile. Collecting natural vibration data of the soil and of the buildings around the UNS Library Building. Data collection of ground and building frequencies was done by installing an Android mobile phone at ground level around the UNS Library and on every floor of the UNS Library. The Android mobile was placed on a mounting and leveled so that its axes were exactly in their intended positions. Data at each point were taken for approximately 30 minutes to 1 hour. The vibration data recorded during that period were then converted from the time domain into the frequency domain. This conversion used the Fast Fourier Transform (FFT) method, F(f) = Σ x(t)[cos(2πft) − i sin(2πft)], where F_EW, F_NS and F_UD are the signal functions in the frequency domain on the EW, NS and UD axes. Each vibration measurement point yields results for all three axes. From these three axes, the HVSR (Horizontal to Vertical Spectral Ratio) and FSR (Floor Spectral Ratio) can be analyzed. The equation for HVSR is shown in (6), and the equations for FSR are shown in (7) and (8). Natural soil vibrations are analyzed using HVSR analysis, and then each floor of the building is analyzed using FSR. The HVSR value is obtained from the ratio between the natural ground vibration frequencies on the horizontal axes (EW and NS) and the natural ground vibration frequency on the vertical axis (UD). The FSR value is obtained
from the ratio between the vibration frequency of the building and the frequency of the ground on the same axis. FSR EW is the floor spectral ratio on the EW axis, and FSR NS is the floor spectral ratio on the NS axis. The data processing continues with the calculation of the resonance ratio between ground and building using equation (9), where R is the resonance ratio, fb is the resonance frequency of the building, and ft is the resonance frequency of the soil. The resonance level of a building with respect to earthquakes is classified into three categories: low if the resonance value R > 25%, medium if R = 15-25%, and high if R < 15% [6]. Natural vibration data of the soil and buildings around the UNS Library Building. Microtremor data retrieval was done after 12 pm, as shown in Figure 2. Data collection on each floor was done with a duration of 30 to 60 minutes. Before running the program, the Android device was placed on the mounting and its leveling was adjusted. The mounting was fixed to the floor using double-sided foam tape. An example of a microtremor recording graph on the 1st floor is shown in Figure 3. After the microtremor data were obtained, the dominant frequency, HVSR, and FSR values were calculated. The determination of the dominant frequency uses the Fast Fourier Transform, the Short-Time Fourier Transform, and the vibration spectrum; these functions are widely available in modern programming languages such as C, C#, C++, IDL, Java, and Python. The determination of the dominant
frequency value was done by processing the microtremor data in Matlab R2013a from the MathWorks developer [7]. After the dominant frequency was obtained, the HVSR and FSR values could be determined. The dominant frequency data are shown in Table 1, and the dominant frequency graph is shown in Figure 4. The calculation of the HVSR value refers to equation (6); the obtained HVSR value is 3.407772. The calculation of the FSR values refers to equations (7) and (8); the FSR value on each floor of the UNS Library is shown in Table 2. Analysis of microtremor data results. Data analysis was done by comparing the FSR value of each floor of the UNS Library with the HVSR value of the surrounding soil. The FSR values in Table 2 were processed using equation (9). The resonance level of a building with respect to earthquakes is classified into three categories: low if R > 25%, medium if R = 15-25%, and high if R < 15% [6]. The results show that the resonance ratio on each floor of the UNS Library building is low. The resonance ratio value on each floor is different. The difference is due to the height of the soil surface, the combination of the air column underneath, the layout of each floor, and other factors. The function of the space on each floor also affects the natural frequency of that floor. On the basement floor, the room is below the ground level outside the UNS Library. In this basement, the room
is not sealed; it is a rather large room with several pillars. The frequency in this basement space is high compared with the spaces above it. The frequency of the basement floor on the NS and EW axes almost matches the frequency of the ground, whereas the frequency value on the z axis is larger. The 1st floor has a larger room because it is used as the library's front office. On the 1st floor there is an internal server of the UNS Library, located to the south of the data retrieval point. From Table 2 it is known that the dominant frequency value on the Y axis (NS) is greater than on the X axis (EW); this is due to the active internal server of the UNS Library. The 2nd floor is used for the administration room and a meeting room. On the 3rd to 6th floors, half of each floor functions as a reading room, and the other half as an information center, a mushola (prayer room), and bathrooms. On almost every floor the frequency values are nearly the same, except on the 6th floor, where the dominant frequency value is small. On the 7th floor, the function and layout differ from the floors below; the 7th floor serves as the UNS Museum. The resonance ratio in the NS direction varies between 69.35 and 94.48%, and on the EW axis between 70.42 and 98.61%, as shown in Table 3. The greater the resonance ratio, the better, because the natural frequency at that location is further from the natural frequency of the ground. Table 2 shows the
dominant frequency values varying on each axis. In this study, the dominant frequencies of the building analyzed are those on the horizontal NS and EW axes, since the orientation of vibration that significantly affects the strength of the building is the horizontal direction. The tower-like shape of the building makes it very vulnerable to collapse when subjected to vibration, especially vibration with a horizontal orientation. Usually, the frequency values in the NS and EW directions are close to equal, or the difference is small. In this study, however, the average dominant frequencies in the NS and EW directions differ markedly. This can be caused by the buildings located around the UNS Library; Figure 5 shows the location of the UNS Library. The average dominant frequency in the NS direction is greater than in the EW direction. This is because of the vibration activity of the flanking buildings to the north and south, and because of the thickness profile of the top soil. Conclusion. The natural frequency values of the UNS Library structure are shown in Table 2. The data obtained show that the resonance ratios of the building on each floor of the UNS Library are low. The resonance ratio value on each floor is different. The difference is due to the height of the soil surface, the combination of the air column underneath, the layout of each floor, and other factors. The function of the space on each floor also affects the natural
frequency of that floor. The resonance ratio in the NS direction varies between 69.35 and 94.48%, and on the EW axis between 70.42 and 98.61%. The greater the resonance ratio, the better, because the natural frequency at that location is further from the natural frequency of the ground. The average dominant frequency in the NS direction is greater than in the EW direction because of the vibration activity of the flanking buildings to the north and south, and because of the thickness profile of the top soil.
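The processing chain described in the paper — FFT of the recorded vibration to find the dominant frequency, then the building-to-ground resonance ratio and its classification — can be sketched as follows. This is a minimal NumPy illustration, not the authors' Matlab code; the sampling rate, the synthetic 2.5 Hz test signal, and the exact form of equation (9) (taken here as R = |ft − fb|/ft × 100%, which is not reproduced in the text) are assumptions:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) from the FFT amplitude spectrum
    of a time-domain record sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return freqs[peak]

def resonance_ratio(f_building, f_ground):
    """Assumed reading of equation (9): R = |ft - fb| / ft * 100%."""
    return abs(f_ground - f_building) / f_ground * 100.0

def resonance_level(r):
    """Classification from [6]: low if R > 25%, medium if 15-25%, high if R < 15%."""
    if r > 25.0:
        return "low"
    if r >= 15.0:
        return "medium"
    return "high"

# Synthetic 60 s microtremor-like record sampled at 100 Hz (assumed values):
fs = 100.0
t = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
record = np.sin(2 * np.pi * 2.5 * t) + 0.1 * rng.standard_normal(t.size)

f_dom = dominant_frequency(record, fs)          # -> 2.5 Hz
r = resonance_ratio(f_building=2.8, f_ground=10.0)
print(f_dom, round(r, 2), resonance_level(r))
```

With these hypothetical frequencies the ratio falls at 72%, i.e. in the "low resonance" band reported for every floor of the building.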
Exchange coupling controlled ferrite with dual magnetic resonance and broad frequency bandwidth in microwave absorption. Ti-doped barium ferrite powders BaFe12−xTixO19 (x = 0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 and 0.8) were synthesized by the sol-gel method. The phase structure and morphology were analyzed by X-ray diffraction (XRD) and scanning electron microscopy, respectively. The powders were also studied for their magnetic properties and microwave absorption. Results show that the Ti-doped barium ferrites (BFTO) exist in single phase and exhibit hexagonal plate-like structure. The anisotropy field Ha of the BFTO decreases almost linearly with the increase in Ti concentration, which leads to a shift of the natural resonance peak toward low frequency. Two natural resonance peaks appear, which can be assigned to the double values of the Landé factor g that are found to be ∼2.0 and ∼2.3 in the system and can be essentially attributed to the existence of Fe3+ ions and the exchange coupling effect between Fe3+ and Fe2+ ions, respectively. Such a dual resonance effect contributes a broad magnetic loss peak and thus a high attenuation constant, and leads to a dual reflection loss (RL) peak over the frequency range between 26.5 and 40 GHz. The high attenuation constants are between 350 and 500 at peak position. The optimal RL reaches around −45 dB and the practicable frequency bandwidth is beyond 11 GHz. This suggests that the BFTO powders could be used as microwave absorbing materials with extraordinary properties. Introduction. Recently, electronic devices such as local-area networks, mobile phones, satellite television sets and radar
systems have been widely used for wireless communication based on electromagnetic (EM) waves in the gigahertz (GHz) range, with the advantage of large data transmission [1,2]. Meanwhile, the emergence of EM interference, EM wave pollution and other problems has triggered extensive studies on the applications of microwave absorbing materials, which can absorb unwanted EM signals and are expected to be used in military technology, microwave darkrooms, human health, etc [3,4]. Researchers all around the world have focused on obtaining novel materials with excellent microwave absorption properties, i.e. strong absorption and broad bandwidth. Microwave radiation in the frequency range between 26.5 and 40 GHz has the characteristics of both centimeter waves and millimeter waves; materials that absorb in this range can thus work not only as all-weather materials but also as high-resolution probes. Today many kinds of radar operate in the widely applied 26.5-40 GHz band, so extensive research on microwave absorption over this frequency range is very important. Microwave absorbing ability depends strongly on material properties, including complex permeability, complex permittivity and resistivity. Ferrites exhibit outstanding microwave absorption properties and are widely employed in military and civil fields due to their high resistivity and severe EM energy attenuation, especially near the natural resonance frequency of magnetic
moments [5][6][7][8]. As for the M-type barium ferrite BaFe12O19 (BFO), its fairly large magnetic anisotropy, large saturation magnetization, high coercive force and excellent chemical stability make it one of the most promising microwave absorbers for the future [9,10]. As a matter of fact, the natural resonance, which is related to the presence of Fe3+ ions in the ferrite, contributes strongly to magnetic loss and thus to a high attenuation constant as well as a strong EM wave loss at the resonant frequency. In addition, the magnetic properties of the M-type barium ferrite depend strongly on the substitution of the Fe3+ ions in different sites by other cations or cationic combinations, including Al3+, Cr3+, Co2+-Ti4+, Co2+-Ru4+ and so on [11][12][13]. As is known, if the Fe3+ ions are substituted by foreign, more highly charged ions in the ferrite, some of the remaining Fe3+ ions will convert into Fe2+ ions to preserve electrical neutrality [14][15][16][17], and an exchange interaction between Fe2+ and Fe3+ ions will occur [18]. Consequently, a new natural resonance associated with the exchange coupling between Fe2+ and Fe3+ ions will appear in the ferrite, but at a resonant frequency different from that related to the Fe3+ ions. This implies that if the exchange coupling occurs in the ferrite, a new resonance peak and thus a new magnetic loss peak are expected to appear at another
frequency position than that resulting from the Fe3+ ions [19,20]. Therefore, double natural resonance peaks will be generated simultaneously in the ferrite. This will probably bring about two magnetic loss peaks and two absorption peaks, which expand the absorption bandwidth in the EM wave spectrum; it is thus attractive to obtain a ferrite with excellent absorption properties. Herein, we propose Ti-doped barium ferrites BaFe12−xTixO19 (BFTO) synthesized by the sol-gel method. It is worth noting that BFO is a typical ferrite with strong EM wave absorption, and Ti4+ is a typical dopant with a relatively high electron valence and an ionic radius close to that of Fe3+ in the ferrite [21,22]. In this case, the Ti4+ ions can easily substitute for the Fe3+ ions in the BFO ferrite to engender exchange coupling between Fe3+ and Fe2+ in addition to the existing Fe3+ centers. In this work, the resonances with respect to both the Fe3+ ions and the exchange coupling between Fe3+ and Fe2+ were investigated in detail. Dual magnetic loss peaks with different magnetic resonances were successfully obtained and controlled in the Ti-doped ferrite. A broad bandwidth beyond 11 GHz was obtained with dual reflection loss (RL) peaks over the frequency range from 26.5 to 40 GHz. Experimental procedure. Ti-doped barium ferrite powders were synthesized by the sol-gel method. Barium nitrate (Ba(NO3)2), tetrabutyl titanate (Ti(OC4H9)4), ferric nitrate
nonahydrate (Fe(NO3)3·9H2O), citric acid, ammonia, absolute ethyl alcohol and deionized water were selected as raw materials. According to the composition BaFe12−xTixO19 (x = 0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 and 0.8), the metal nitrates and an appropriate amount of citric acid were first dissolved in the deionized water by stirring to obtain a clear solution (1); then specific amounts of Ti(OC4H9)4 and citric acid were dissolved in absolute ethyl alcohol by stirring to get a clear solution (2). Solution (2) was slowly added into solution (1) to get a clear solution (3), and ammonia was used to adjust the pH value to 7.0. The obtained solution was evaporated with stirring to form viscous sol precursors, then dried at 120 °C for 24-48 h and heat treated at 1200 °C for 3 h to obtain the BFTO powders. The phase structures of the ferrite powders were identified by XRD (PANalytical B V Empyrean 200895, Cu Kα radiation). The microstructure studies were conducted by scanning electron microscopy (SEM) (Hitachi SU-70 FESEM). The magnetic properties were measured by a magnetic property measurement system (MPMS-XL-5). The microwave absorbing properties were measured by an Agilent vector network analyzer (E8363C PNA), and the RL was calculated from the EM parameters in the frequency range of 26.5-40 GHz. The samples were first mixed with paraffin in a weight ratio of 8:3 and heated at 80 °C, and then the mixtures
were filled into holders of different thickness for the measurement of the EM parameters. Results and discussion. Figure 1 shows the XRD patterns of the BaFe12−xTixO19 (x = 0, 0.3, 0.6 and 0.8) powders sintered at 1200 °C for 3 h. All the BFTO powders with different titanium contents exhibit only the M-type barium ferrite phase. The lattice constants 'a' and 'c' are listed in table 1; they slowly increase with increasing Ti content. Moreover, the ratio 'c/a' also increases, because the c-axis grows more than the a-axis with increasing Ti4+ content, since the ionic radius of Ti4+ (0.68 Å) is larger than that of Fe3+ (0.64 Å) [23]. Figure 2 shows the SEM morphologies of the BaFe12−xTixO19 (x = 0, 0.3, 0.6 and 0.8) ferrite powders. Typical plate-like particles of M-type hexagonal barium ferrite form in all samples. The particle size varies between 400 nm and 2 µm; it first decreases and then increases gradually with increasing Ti content in the ferrite. When x = 0.3-0.4, the mean particle size is at its minimum of only about 600 nm. It grows with increasing Ti content, especially above x = 0.5, from about 600 nm at x = 0.4 to 2 µm at x = 0.8. The magnetic hysteresis loops of BaFe12−xTixO19 with x = 0, 0.3, 0.6 and 0.8 are shown in figure 3(a). It can be
found that the magnetization increases steeply at first and then slowly with increasing applied magnetic field, and the areas of the hysteresis loops as well as the coercive force (Hc) are typically large. All the ferrites approach saturation under an applied field of up to 3 T. The saturation magnetization (Ms) and the anisotropy field (Ha) of all the samples are obtained according to the law of approach to saturation (LAS), an effective method to describe the magnetization of polycrystalline magnetic material in the approach-to-saturation stage [24]. According to LAS, the magnetization (M) versus magnetic field (H) can be expressed as M = Ms(1 − A/H − B/H²) + χpH (1), where A is the inhomogeneity parameter, which can be neglected at high fields, B is the anisotropy parameter and χp is the high-field differential susceptibility. In the case of hexagonal symmetry, B may be expressed as B = Ha²/15 [24]. Using the experimental data within the high-field range (approaching saturation) shown in figure 3(a) and equation (1), both Ms and Ha can be obtained. Figure 3(b) shows the dependence of Hc, Ms and Ha on Ti content. Note that Ha decreases linearly from 15.43 to 10.34 kOe as x changes from 0 to 0.8. Likewise, there is also a reduction in Ms, though not a large one, from 72.38 emu g−1 (x = 0) to 59.82 emu g−1 (x = 0.8). However, Hc increases from 3.03
to 3.43 kOe as x changes from 0 to 0.3, and then decreases greatly from 3.43 to 1.06 kOe as x goes from 0.3 to 0.8. It is known that the Fe3+ ions are located in three kinds of coordination sites, comprising five different crystallographic positions, in the magnetoplumbite structure of BFO. The first is the octahedral site, which includes three variants represented as 12k, 2a and 4f2; the second is the trigonal bipyramidal site, represented as 2b; the third is the tetrahedral site, represented as 4f1. The Fe3+ ions in the 12k, 2a and 2b sites have up-spin, while those in the 4f1 and 4f2 sites have spin in the opposite direction. It has been reported that the Ti4+ ions prefer the octahedral sites 12k, 2a and 4f2 for stability because of their rare-gas outer electron shell structure [26,27]. They can also occupy the 2b site at high doping levels [16]. For the magnetic properties of the ferrite, the anisotropy field Ha is related to the sites occupied by the Ti4+ ions: when the Fe3+ ions in different sites are substituted by Ti ions, a different anisotropy field is induced in the ferrite. As is known, the Fe3+ ions in the 4f2 and 2b sites contribute greatly to the anisotropy field [25]; thus the decrease in Ha reflects the preferential replacement of Fe3+ ions in the 4f2 and 2b
sites by Ti ions, even though the lattice distortion accompanying the substitution of Fe3+ by Ti (table 1) slightly increases the anisotropy field. The more substitution, the lower the Ha of the ferrite, as shown in figure 3(b). In addition, substitution of up-spin Fe3+ ions by non-magnetic Ti ions reduces the magnetization of the ferrite, while substitution of down-spin Fe3+ ions leads to an increase in magnetization [28]. The slow reduction of Ms shown in figure 3 therefore suggests more substitution of Ti4+ ions for Fe3+ ions in the 12k, 2a and 2b sites than in the 4f2 site. That is to say, the substitution of Ti4+ ions into the 2b site, rather than other sites, is the most probable preferential process in the BFTO ferrite. Moreover, the coercive force depends strongly on the anisotropy field [26], but it also changes in general with the grain size of the phase particles [29]: the smaller the phase particle, the larger the Hc, owing to the larger density of grain boundaries and thus more pinning centers that impede domain wall motion. As shown in figure 2, the particle size of the ferrite powders decreases with increasing x up to 0.3 and then increases with increasing x up to 0.8. As a result, Hc increases below x = 0.3 and then decreases above 0.3, as shown in figure 3(b). However, the trend of the coercive force
that depends on the doping content of Ti still shows an overall decrease in the ferrite. This implies that the coercive force is controlled dominantly by Ha, based on the substitution of Fe by Ti in the 2b site. Obviously, the Ti4+ ions doped into the ferrite contribute mainly a large part of the substitution for Fe3+ ions in the 2b sites of the magnetoplumbite structure of BFTO. Figure 4 shows the frequency dependence of the real part (ε′) and the imaginary part (ε″) of the complex permittivity and of the real part (μ′) and the imaginary part (μ″) of the complex permeability for the BFTO samples over 26.5-40 GHz. There are no significant differences between ε′ (and ε″) in the ferrites with and without doping of Ti, although the ε′ values of all doped samples are larger than those of the undoped sample over the whole frequency range. However, typical resonance phenomena of the magnetic moments can be seen for all the samples doped with Ti4+ ions. The imaginary part of the complex permeability, μ″, increases from low values to a high of about 0.5 and then decreases with frequency. The resonance peaks are asymmetric, which is more obvious in the samples with high Ti content; when x = 0.7 and 0.8, a distinct shoulder appears in μ″ as a function of frequency. In general, the magnetic loss of a ferromagnetic material mainly originates from magnetic hysteresis loss, domain wall resonance, eddy current loss and natural resonance [30,31]. The
magnetic hysteresis loss is negligible under a weak applied field. Domain wall resonance mainly occurs at frequencies below a few GHz. The eddy current effect is responsible for magnetic loss mainly in low-resistivity materials [32], but not in M-type BaFe12O19 with its high resistivity [9]. Doping with Ti probably makes some contribution to enhancing the eddy current effect by decreasing the resistivity, but it is limited and can be ignored compared with the natural resonance loss. Therefore, the magnetic loss peaks depend mainly on the natural resonance in the BFTO ferrites in the frequency range of 26.5-40 GHz. The resonance frequency (fr) obviously shifts toward low frequency, from 39.60 to 29.46 GHz, as the Ti4+ content increases from x = 0.2 to 0.8. It is known that the natural resonance frequency (fr) is proportional to the anisotropy field (Ha), as expressed by fr = (γ/2π)Ha (2), where γ is the gyromagnetic ratio [33]. The value of γ/2π is 1.4g GHz kOe−1, where g is the Landé factor. Hence, the shift of the natural resonance toward low frequency results from the decreasing Ha: the higher the natural resonance frequency, the higher the anisotropy field of the system. The high Ha of the undoped M-type barium ferrite, which can be inferred from the trend of increasing frequency with decreasing Ti content, contributes a
higher natural resonance frequency above 40 GHz, which agrees with previous reports [34]. Furthermore, double peaks appear in the magnetic loss μ″ curves of the doped samples, as shown in figure 4(d). The μ″ curves can be separated mathematically into two single peaks, as illustrated in figure 5 for the samples BaFe11.4Ti0.6O19 and BaFe11.2Ti0.8O19. Fe3+ ions are known to give a Landé factor g of about 2.0 [20,35], while the exchange coupling between Fe3+ and Fe2+ gives a g factor greater than 2.0 [20]. According to equation (2), the two values of g can be obtained from the resonance frequencies revealed by the two peaks: they are 2.02 and 2.31 for BaFe11.4Ti0.6O19 and 2.03 and 2.32 for BaFe11.2Ti0.8O19, respectively. Furthermore, the same amount of Fe3+ ions in the phase structure changes into Fe2+ ions as Ti4+ ions substitute for Fe3+ in order to maintain charge balance. The exchange coupling between Fe2+ and Fe3+ ions is thus generated in the ferrite, with a g value of about 2.3. Obviously, the peaks at low and high frequency can be ascribed to the Fe3+ ions and to the exchange coupling between Fe3+ and Fe2+, respectively. The increase in the intensity of the high-frequency peak with dopant can therefore be attributed to the increase in Fe2+ ions and the appearance of exchange coupling between Fe3+ and Fe2+. What is more interesting is that the double resonance peaks actually contribute
two magnetic loss peaks in the ferrite within a broad frequency range. This can contribute effectively to improving the EM wave absorption properties, and therefore BFTO is attractive as a high-quality material for EM wave absorption. Figure 6(a) shows the reflection loss (RL) of the BFTO samples as a function of frequency over the range from 26.5 to 40 GHz. Two RL peaks, with intensities up to ∼40 dB or above, appear especially for the samples with high Ti content, such as x from 0.6 to 0.8. The lower the Ti doping content, the higher the absorption peak frequency. RL is usually used to characterize the microwave absorbing property of materials; it can be calculated from the complex permittivity and permeability according to transmission line theory using the expressions Zin = Z0 √(μr/εr) tanh[j(2πfd/c)√(μrεr)] (3) and RL (dB) = 20 log10|(Zin − Z0)/(Zin + Z0)| (4), where Zin is the input impedance at the absorber surface, Z0 is the characteristic impedance of free space, εr and μr are the complex permittivity (εr = ε′ − jε″) and permeability (μr = μ′ − jμ″), respectively, f is the microwave frequency, d is the thickness of the absorber and c is the velocity of light [6,36]. A peak indicates that absorption occurs strongly, with high absorption intensity, at a characteristic frequency. For the as-prepared BFTO, the high intensity shows that the absorbing property is quite strong at different frequencies for different samples, in which
all the curves shown in figure 6(a) are obtained with the appropriate thickness according to equations (3) and (4). Figure 6(b) illustrates the two RL curves calculated separately using the data from the two separated resonance peaks shown in figure 5. They indicate that a single resonance peak contributes only one absorption peak, and double resonance peaks correspondingly provide dual absorption peaks. The absorption peak at low frequency depends on the low-frequency resonance peak with Landé factor g = 2, and that at high frequency is matched with the resonance peak at relatively high frequency with Landé factor g above 2.3. Therefore, the peak frequency decreases gradually from high to low, following the resonance peak, with increasing Ti content as shown in figure 4(d); double absorption peaks typically begin to appear at x = 0.6, while the intensity of the high-frequency peak gradually increases. The strong absorption can be attributed to the high attenuation constant of the Ti-doped ferrites. The attenuation constant α is calculated as α = (√2 πf/c) √[(μ″ε″ − μ′ε′) + √((μ″ε″ − μ′ε′)² + (μ′ε″ + μ″ε′)²)] (5), where f is the EM wave frequency and c is the velocity of light [37]. The frequency dependence of the attenuation constant of the BFTO is shown in figure 7. All the ferrite samples with different Ti4+ contents have a high attenuation constant throughout the test frequency range of 26.5-40 GHz. Meanwhile, their attenuation constants at the peak positions lie between 350 and
500, in which the highest one is 3-10 times higher than those reported by other researchers [37,38]. The attenuation constant shows almost the same trend with frequency as the magnetic loss shown in figure 4(d) and is associated with the absorption properties shown in figure 6(a); for instance, both exhibit two characteristic peaks in the ferrites with x above 0.6. It is thus most probably dominated by the permittivity and permeability, as shown in equation (5). The absorption peak hence appears following the peak in the attenuation constant. Obviously, the magnetic and dielectric losses can promote EM wave attenuation, and the magnetic loss peak obtained around the natural resonance frequency contributes to the outstanding absorbing properties of the BFTO. Besides the strong absorption at the characteristic frequency of the absorption peak, the BFTO with high Ti content, especially from x = 0.5 to 0.8, shows broad bandwidths with an RL of less than −10 dB over a frequency range beyond 11 GHz. An absorber is considered practicable and acceptable as an absorbing material when its RL is less than −10 dB, which is equal to 90% attenuation of the power of the radiation incident into the absorber. The RLs of the BFTO are clearly less than −10 dB over the frequency range between the two absorption peaks. This implies that the BFTO, with two types of natural resonance as well as strong peaks of magnetic loss, makes the absorption bandwidth
broader than 11 GHz. Taking the BaFe12−xTixO19 ferrite as an example, the bandwidths are 11.44, 11.73 and 11.32 GHz at x = 0.6, 0.7 and 0.8, respectively. That is to say, both the resonance of magnetic moments based on Fe3+ and the exchange coupling between Fe3+ and Fe2+ ions simultaneously contribute to the magnetic loss for EM wave absorption in Ti-doped ferrites. The absorption peak based on the exchange coupling between Fe3+ and Fe2+ ions increases gradually, and dual peaks are typically generated with increasing Ti above x = 0.6, as shown in figure 6(a). The broadband absorbing property thus appears in the BFTO ferrites, in which the strongest microwave absorption (RL around −45 dB) and a practicable bandwidth of 11 GHz are obtained with x varying from 0.5 to 0.8, making them attractive candidates for use as absorbing materials with extraordinary properties.

Conclusion

In this paper, single-phased Ti-doped barium ferrites (BFTO) with a typical hexagonal plate-like structure were synthesized successfully by the sol-gel method. The Ti4+ ions doped into the ferrite substitute for the Fe3+ ions in the 2b sites of the magnetoplumbite phase structure of BFTO. Some unsubstituted Fe3+ ions in the phase structure change into Fe2+ ions in order to maintain the charge balance in the ferrite, resulting in two kinds of resonances of magnetic moments, at relatively low and high frequency, based on Fe3+ ions with Landé factor g ∼ 2.0 and on the exchange
coupling between Fe3+ and Fe2+ with g ∼ 2.3, respectively. Simultaneously, these resonances supply dual magnetic loss peaks, promoting the EM wave attenuation of Ti-doped ferrites. The high attenuation constant with dual resonance peaks dominates the absorption, contributing strong double RL peaks as well as a considerably broad absorbing bandwidth. The BFTO ferrites can be used as excellent microwave absorbing materials with strong absorption and broad bandwidths in the frequency range between 26.5 and 40 GHz. The optimal RL reaches around −45 dB and the practicable bandwidth is beyond 11 GHz.
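The transmission-line expressions for RL and the attenuation constant, equations (3)-(5), can be evaluated numerically. The sketch below assumes illustrative permittivity and permeability values for a metal-backed single-layer absorber in the 26.5-40 GHz band; the sample values and function names are hypothetical, not measured BFTO data:

```python
import cmath
import math

C = 3e8  # speed of light (m/s)

def reflection_loss(eps_r, mu_r, f, d):
    """RL (dB) of a metal-backed single-layer absorber via transmission line theory.

    Z_in = Z0*sqrt(mu_r/eps_r)*tanh(j*2*pi*f*d/c*sqrt(mu_r*eps_r)),
    RL = 20*log10(|Z_in - Z0| / |Z_in + Z0|); impedances are normalized to Z0 = 1.
    """
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f * d / C * cmath.sqrt(mu_r * eps_r))
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

def attenuation_constant(eps_r, mu_r, f):
    """Attenuation constant alpha (Np/m) from eps_r = e1 - j*e2 and mu_r = m1 - j*m2."""
    e1, e2 = eps_r.real, -eps_r.imag
    m1, m2 = mu_r.real, -mu_r.imag
    a = m2 * e2 - m1 * e1
    return (math.sqrt(2) * math.pi * f / C) * math.sqrt(
        a + math.sqrt(a * a + (m1 * e2 + m2 * e1) ** 2))

# Illustrative material parameters (hypothetical, for demonstration only)
eps, mu = 4.0 - 2.0j, 1.5 - 0.8j
f, d = 30e9, 1.0e-3  # 30 GHz, 1 mm coating

rl = reflection_loss(eps, mu, f, d)
alpha = attenuation_constant(eps, mu, f)
print(f"RL = {rl:.1f} dB, alpha = {alpha:.0f} Np/m")
```

An RL below −10 dB at the chosen thickness corresponds to more than 90% of the incident power being absorbed; sweeping f and d with these functions produces the kind of RL map behind figure 6(a).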
Strong Minimizers of the Calculus of Variations on Time Scales and the Weierstrass Condition

We introduce the notion of strong local minimizer for problems of the calculus of variations on time scales. Simple examples show that on a time scale a weak minimum is not necessarily a strong minimum. A time scale form of the Weierstrass necessary optimality condition is proved, which makes it possible to include and generalize both the continuous-time and discrete-time conditions in one and the same result.

Introduction

Dynamic equations on time scales is a recent subject that allows the unification and extension of the study of differential and difference equations in one and the same theory [10]. The calculus of variations on time scales was introduced in 2004 with the papers of Martin Bohner [6] and Roman Hilscher and Vera Zeidan [15]. Roughly speaking, in [6] the basic problem of the calculus of variations on time scales with given boundary conditions is introduced, and time scale versions of the classical necessary optimality conditions of Euler-Lagrange and Legendre are proved, while in [15] necessary as well as sufficient conditions for variable end-point calculus of variations problems on time scales are established. Since the two pioneer works [6,15] and the understanding that much remains to be done in the area [13], several recent studies have been dedicated to the calculus of variations on time scales: the time scale Euler-Lagrange equation was proved for problems with double delta-integrals [9] and for problems with higher-order delta-derivatives [14]; a correspondence between the existence of variational symmetries and the existence of conserved
quantities along the respective Euler-Lagrange delta-extremals was established in [5]; optimality conditions for isoperimetric problems on time scales with multiple constraints and Pareto optimality conditions for multiobjective delta variational problems were studied in [20]; a weak maximum principle for optimal control problems on time scales was obtained in [16]. Such results may also be formulated via the nabla-calculus on time scales, and seem to have interesting applications in economics [1,2,3,21]. In all the works available in the literature on time scales, the variational extrema are regarded in a weak local sense. Differently, here we consider strong solutions of problems of the calculus of variations on time scales. In Section 2 we briefly review the necessary results of the calculus on time scales. The reader interested in the theory of time scales is referred to [10,11], while for the classical continuous-time calculus of variations we refer to [12,19], and to [18] for the discrete-time setting. In Section 3 the concept of strong local minimum is introduced (cf. Definition 3.1), and an example of a problem of the calculus of variations on the time scale T = {1/n : n ∈ N} ∪ {0} is considered, showing that the standard weak minimum used in the literature on time scales is not necessarily a strong minimum (cf. Example 3.2). Our main result is a time scale version of the Weierstrass necessary optimality condition for a strong local minimum (cf. Theorem 3.3). We end with Section 4, illustrating our main result with the particular cases of discrete-time and
the q-calculus of variations [4].

Time Scales Calculus

In this section we introduce basic definitions and results that will be needed in the rest of the paper. For a more general theory of the calculus on time scales, we refer the reader to [10,11]. A nonempty closed subset of R is called a time scale and is denoted by T. The forward jump operator σ : T → T is defined by σ(t) := inf{s ∈ T : s > t}, and the backward jump operator ρ : T → T by ρ(t) := sup{s ∈ T : s < t}. If σ(t) > t, we say that t is right-scattered, while if ρ(t) < t we say that t is left-scattered. Also, if t < sup T and σ(t) = t, then t is called right-dense, and if t > inf T and ρ(t) = t, then t is called left-dense. The set T^κ is defined as T without the left-scattered maximum of T (in case it exists). The graininess function µ : T → [0, ∞) is defined by µ(t) := σ(t) − t. A function f : T → R is rd-continuous if it is regulated and if it is continuous at all right-dense points t ∈ T. Following [15], a function f is piecewise rd-continuous (we write f ∈ C_prd) if it is regulated and if it is rd-continuous at all, except possibly finitely many, right-dense points t ∈ T. We say that a function f : T → R is delta differentiable at t ∈ T^κ with delta derivative f^Δ(t) provided the limit f^Δ(t) = lim_{s→t, s≠σ(t)} [f(σ(t)) − f(s)]/[σ(t) − s] exists; at a right-scattered point this gives f^Δ(t) = [f(σ(t)) − f(t)]/µ(t), for T = R we recover the usual derivative, and for T = q^{N_0} we get the usual derivative of quantum calculus [17]. Let f, g : T → R be delta differentiable at t ∈ T^κ. Then (see, e.g., [10]) the product rule (fg)^Δ(t) = f^Δ(t)g(t) + f^σ(t)g^Δ(t) holds, where we abbreviate here and throughout
the text f ∘ σ by f^σ. A function f is piecewise continuously delta differentiable (we write f ∈ C^1_prd) if f is continuous and f^Δ exists for all, except possibly finitely many, t ∈ T^κ, and f^Δ ∈ C_rd. It is known that piecewise rd-continuous functions possess an antiderivative, i.e., there exists a function F with F^Δ = f, and in this case the delta integral is defined by ∫_a^b f(t) Δt = F(b) − F(a). If T = R, the delta integral coincides with the classical Riemann integral. The delta integral has the following properties (see, e.g., [10]): (i) if f ∈ C_prd and t ∈ T^κ, then ∫_t^{σ(t)} f(τ) Δτ = µ(t) f(t).

The Weierstrass Necessary Condition

Let T be a bounded time scale. Throughout we let t0, t1 ∈ T with t0 < t1. For an interval [t0, t1] ∩ T we simply write [t0, t1]. The problem of the calculus of variations on time scales under consideration has the form

minimize L[x] = ∫_{t0}^{t1} f(t, x^σ(t), x^Δ(t)) Δt   (3.1)

over all x ∈ C^1_prd satisfying the boundary conditions

x(t0) = α, x(t1) = β, α, β ∈ R.   (3.2)

A function x ∈ C^1_prd is said to be admissible if it satisfies conditions (3.2). Let us consider two norms in C^1_prd:

‖x‖_0 = sup_{t ∈ [t0,t1]} |x(t)| and ‖x‖_1 = ‖x‖_0 + sup_{t ∈ [t0,t1]^κ \ T} |x^Δ(t)|,

where here and subsequently T denotes the set of points of [t0, t1]^κ where x^Δ(t) does not exist. The norms ‖·‖_0 and ‖·‖_1 are called the strong and the weak norm, respectively. The strong and weak norms lead to the
following definitions for local minimum: an admissible function x̄ is said to be a weak (respectively strong) local minimum for (3.1)-(3.2) if there exists δ > 0 such that L[x̄] ≤ L[x] for all admissible x with ‖x − x̄‖_1 < δ (respectively ‖x − x̄‖_0 < δ). A weak minimum may not necessarily be a strong minimum. Consider problem (3.3) on the time scale T = {1/n : n ∈ N} ∪ {0} (note that we need to add zero in order to have a closed set). Let us show that x̄(t) = 0, 0 ≤ t ≤ 1, is a weak local minimum for (3.3). In the topology induced by ‖·‖_1 consider the open ball of radius 1 centered at x̄; we use the notation B^k_r for the ball of radius r in the norm ‖·‖_k, k = 0, 1. For every x ∈ B^1_1(x̄) it follows that L[x] ≥ 0. This proves that x̄ is a weak local minimum for (3.3), since L[x̄] = 0. Now let us consider the admissible function x_d. For every δ > 0 there is a d such that ‖x_d − x̄‖_0 < δ, and x^Δ_d(t) = 0 for all t ≠ t_0, σ(t_0). Hence |x^Δ_d(t)|, 0 ≤ t ≤ 1, can take arbitrarily large values, since µ(t) = t²/(1 − t) → 0 as t → 0. Note that for every δ > 0 we can choose d and t_0 such that x_d ∈ B^0_δ(x̄) and d/µ(σ(t_0)) > 1. Finally, it follows that the trajectory x̄ cannot be a strong minimum for (3.3). From now on we assume that f : [t0, t1]^κ
× R × R → R has continuous partial derivatives f_x and f_v with respect to the second and third variables, respectively. Define the Weierstrass excess function E by

E(t, x, r, q) := f(t, x, q) − f(t, x, r) − (q − r) f_v(t, x, r).

This function is utilized in the following theorem.

Theorem 3.3 (Weierstrass necessary optimality condition on time scales). Let T be a time scale, t0, t1 ∈ T, t0 < t1. Assume that the function f(t, x, r) in problem (3.1)-(3.2) satisfies condition (3.4) for each (t, x) ∈ [t0, t1]^κ × R, all r1, r2 ∈ R and γ ∈ [0, 1]. Let x̄ be a piecewise continuous function. If x̄ is a strong local minimum for (3.1)-(3.2), then

E(t, x̄^σ(t), x̄^Δ(t), q) ≥ 0   (3.5)

for all t ∈ [t0, t1]^κ and all q ∈ R, where we replace x̄^Δ(t) by x̄^Δ(t−) and x̄^Δ(t+) at the finitely many points t where x̄^Δ(t) does not exist.

Second, we suppose that a ∈ [t0, t1]^κ, a < t1, is a right-dense point and [a, b] ∩ T is an interval between two successive points where x̄^Δ(t) does not exist. Then there exists a sequence {ε_k : k ∈ N} ⊂ [t0, t1] with lim_{k→∞} ε_k = a. Let τ be any number such that σ(τ) ∈ [a, b) and q ∈ R. We define the function x : [t0, t1] ∩
T → R as follows: given δ > 0, for any q one can choose τ such that ‖x − x̄‖_0 < δ, so that, by Theorem 5.37 in [7] and Theorem 7.1 in [8], we obtain (3.7). Invoking the relation φ^{Δ1Δ2} = φ^{Δ2Δ1} (see Theorem 6.1 in [8]), integration by parts applies; since x̄ verifies the Euler-Lagrange equation (see [6]), on account of the above, from (3.6)-(3.7) we obtain the inequality (3.5) at the point under consideration. To establish condition (3.5) for all t ∈ [t0, t1]^κ, we consider the limit t → t1 from the left when t1 is left-dense, and the limits t → t_p from the left and from the right when t_p ∈ T.

Remark 3.5. Let T be a time scale with µ(t) depending on t and such that the time scale interval [t0, t1] may be written as [t0, t1] = L ∪ U with µ(t) = 0 for all t ∈ L and µ(t) ≠ 0 for all t ∈ U. An example of such a time scale is the Cantor set [10]. Then, for t ∈ U condition (3.4) is trivially satisfied, while for t ∈ L (3.4) is nothing more than convexity of f with respect to r. Let now T = q^N, q > 1. If x̄ is a local minimum of problem (3.1) subject to x(t0) = α, x(t1) = β, α, β ∈ R, and
the function f(t, x, r) is convex with respect to r ∈ R for each (t, x) ∈ [t0, t1) × R, then

E(t, x̄^σ(t), x̄^Δ(t), p) ≥ 0

for all t ∈ [t0, t1) and all p ∈ R.
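The basic objects above (forward jump σ, graininess µ, delta derivative) are easy to experiment with numerically on a finite truncation of a time scale. The sketch below is illustrative only: it works on a finite sorted set of points and checks, for the scale T = {1/n : n ∈ N} ∪ {0} of Example 3.2, that µ(t) = t²/(1 − t) and that the delta derivative of f(t) = t² at a right-scattered point equals σ(t) + t:

```python
from bisect import bisect_right

def sigma(ts, t):
    """Forward jump operator on a finite sorted time scale: smallest point > t (t itself at the max)."""
    i = bisect_right(ts, t)
    return ts[i] if i < len(ts) else t

def mu(ts, t):
    """Graininess mu(t) = sigma(t) - t."""
    return sigma(ts, t) - t

def delta_derivative(ts, f, t):
    """f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t) at a right-scattered point t."""
    m = mu(ts, t)
    if m == 0:
        raise ValueError("t is right-dense; use a limit instead of the difference quotient")
    return (f(sigma(ts, t)) - f(t)) / m

# Finite truncation of T = {1/n : n in N} together with {0}
T = sorted([0.0] + [1.0 / n for n in range(1, 101)])

t = 1.0 / 5
print(mu(T, t), t * t / (1 - t))                                  # both equal 1/(5*4) = 0.05
print(delta_derivative(T, lambda s: s * s, t), sigma(T, t) + t)   # both equal 1/4 + 1/5
```

On T = hZ the same difference quotient is the forward difference (f(t + h) − f(t))/h, and on T = q^{N_0} it is the q-derivative, which is why the time-scale results specialize to both the discrete and quantum cases.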
Evaluation of halophilic microalgae isolated from the Rabigh Red Sea coastal area for biodiesel production: screening and biochemical studies

Different water samples from the Red Sea coastal area at Rabigh city, Saudi Arabia, were studied for their dominant algal species. Microalgal isolation was carried out by the dilution method, and the isolates were examined morphologically using F/2 as a growth medium. Dry weight and the main biochemical composition (protein, carbohydrates, lipids) of all species were determined at the end of growth, and biodiesel characteristics were estimated. Nannochloropsis sp., Dunaliella sp., Tetraselmis sp., Prorocentrum sp., Chlorella sp., Nitzschia sp., Coscinodiscus sp., and Navicula sp. were the most dominant species in the collected water samples and were used for further evaluation. Nannochloropsis sp. surpassed all other isolates in biomass production, with a maximum recorded dry weight of 0.89 g L⁻¹, followed by Dunaliella sp. (0.69 g L⁻¹). The highest crude protein content was observed in Nitzschia sp. (38.21%), while the highest lipid contents were recorded in Nitzschia sp. (19.17%) and Dunaliella sp. (18.01%); Nannochloropsis sp. showed 13.38% lipids, with the lowest lipid content recorded in Coscinodiscus sp. (10.09%). Based on growth, lipid content, and biodiesel characteristics, the present study suggests Dunaliella sp. and Nitzschia sp. as promising candidates for further large-scale biodiesel production.

Introduction

Due to the energy shortage and the negative environmental consequences of excessive fossil fuel utilization, scientific research has been directed towards exploring alternative energy sources that meet the world demand and mitigate climate change through carbon dioxide sequestration and/or emission reduction (Jia et al., 2014; Abomohra et al., 2017). Among different resources, biomass-based fuels have
been proposed as sustainable, renewable and eco-friendly alternatives (Jia et al., 2014; Almutairi, 2020a,b). Edible oils, non-edible lignocellulosic biomass, and municipal waste have been discussed as biofuel feedstocks (Abomohra et al., 2021a; Abomohra et al., 2021b; Li et al., 2021). However, edible and non-edible biomass, known as first- and second-generation biofuel feedstocks, respectively, have serious economic and environmental impacts, raising critical sustainability and food safety issues (Doan et al., 2011; Abomohra et al., 2016; El Arroussi et al., 2017; Munisamy et al., 2018). Recently, algal biomass has attracted much attention as a third-generation feedstock for biodiesel production (Wang et al., 2018; Abomohra et al., 2021c; Almutairi et al., 2021). Microalgae have numerous advantages over the other biofuel generations: relatively high lipid and biomass production, no need for arable land since they can be grown in brackish or saline water, tolerance of extreme environments, and no need to apply pesticides or herbicides. Moreover, microalgae grow photosynthetically, fixing carbon dioxide and thereby reducing the greenhouse gas impact (Selvarajan et al., 2015; Li et al., 2015). Also, microalgal cells utilize phosphorus and nitrogen from wastewater, adding the extra advantage of bioremediation (Abdelaziz et al., 2014; Shao et al., 2018), together with their amenability to genetic transformation and gene editing (Barati et al., 2021). Therefore, microalgal biodiesel is considered a potential renewable fuel to replace fossil diesel without affecting crop products or competing for agricultural land. Nevertheless, commercial biodiesel production from microalgal biomass is not economically feasible yet due to the high production cost (Gumbi et al., 2017; Abomohra et al., 2018), attributed to
the utilization of freshwater and nutrients and the harvesting cost (Khan et al., 2018). Therefore, enhancement of upstream and downstream processes to achieve cost-effective biomass production is of great importance. Such enhancements include optimization of cultivation conditions, innovative lipid extraction techniques, co-product development and management, and combining microalgal cultivation with wastewater treatment or seawater desalination (Abdelaziz et al., 2014; Taleb et al., 2016; Li et al., 2015). However, screening of microalgae for high lipid production with a suitable fatty acid (FA) profile using seawater is considered a bottleneck for enhancing the process economy. Therefore, the objective of the present screening study is to isolate and evaluate different indigenous microalgal isolates from the Red Sea coastal area for high lipid and biodiesel production. Marine microalgae were isolated and their biochemical composition was determined. Biodiesel yield and characteristics were also determined for all isolates in order to recommend a promising species.

Seawater sampling

Seawater samples were collected from the Red Sea coastal area at Rabigh city, Saudi Arabia. Using a plankton net of 20-µm mesh, about 1 L of seawater at each location was filtered to remove seaweeds and suspended particles; the water samples were then collected in sterile tubes, moved to sterilized flasks, and incubated under light for enrichment. Microalgal cells were isolated and purified by serial dilution followed by cultivation in Petri plates (Vu et al., 2018) containing sterile F/2 medium (Guillard and Ryther, 1962) and incubated under continuous illumination (120 µmol m⁻² s⁻¹) at 25 °C.

Morphological identification

After isolation and purification, 8 marine microalgal strains were isolated in axenic
culture and used in further experiments. The isolates were identified according to their morphological features (Hoek et al., 1995) using a light microscope with a 100× objective lens (Olympus BX53).

Inoculum preparation

The obtained axenic cultures were first scaled up in 300 mL glass tubes, then cultivation was performed in 14 L fully transparent Plexiglas columns (El et al., 2015). Finally, the aeration was turned off to allow gravity settling overnight. The upper layer was discarded, and the remaining biomass was dewatered by centrifugation (4000 rpm, 10 min). The obtained cells were washed three times with pre-sterilized artificial seawater and then used in the subsequent experiments.

Microalgal growth

In all experiments, microalgae were incubated under continuous light (120 µmol m⁻² s⁻¹) with aeration by filtered air enriched with 3% CO₂.

Growth parameters

During the whole cultivation period, growth of the 8 isolates was determined in triplicate as dry weight (g L⁻¹). Samples were centrifuged for 10 min at 3000×g, and the cell pellet was transferred to a pre-weighed 1.5 mL Eppendorf tube. The sample was then freeze-dried at −80 °C and the cell weight of the dried sample was calculated. Biomass productivity was determined by measuring the wet and dry biomass on the last day of the experiment and was calculated as previously described (Abomohra and Almutairi, 2020).

Chemical and biochemical analysis

Cell metabolites including proteins, carbohydrates and lipids were analyzed. Based on the method of Ma and Zuazaga (1942), total protein (%) was determined as T.N × 6.25 and defined as crude protein, while
true protein was measured through determination of soluble protein by trichloroacetic acid (TCA); the true protein content was then calculated by subtracting the soluble protein from the total protein. The method of Dubois et al. (1956) was used to measure the total carbohydrate content. The dinitrosalicylic acid (DNS) method was used to estimate reducing sugars, with glucose as a standard (Miller, 1959). Lipid extraction was performed by Soxhlet extraction using n-hexane as solvent (Reda et al., 2020). Fatty acid methyl esters (FAMEs) were prepared and analyzed according to the modified method of Christie (1993), as described previously (Almarashi et al., 2020).

Biodiesel properties

The main biodiesel characteristics, including cetane number (CN), long-chain saturated factor (LCSF), saponification value (SV), cold filter plugging point (CFPP), iodine value (IV), and degree of unsaturation (DU), were evaluated for the different isolates according to Ramos et al. (2009) and Francisco et al. (2010), where F represents each fatty acid proportion (as % of total fatty acids), D represents the number of double bonds, and M_w is the fatty acid molecular weight.

Growth and biomass production

The eight strains isolated in the present study belong to Chrysophyta (Nannochloropsis sp.), Chlorophyta (Dunaliella sp., Chlorella sp., and Tetraselmis sp.), Dinoflagellates (Prorocentrum sp.) and Bacillariophyta (Nitzschia sp., Navicula sp., and Coscinodiscus sp.), revealing the dominance of Chlorophytes and Bacillariophytes in the water samples. Different growth patterns were observed among the studied isolates as measured by dry weight (Fig. 1). Nannochloropsis sp. surpassed all other isolates in biomass accumulation (0.89 g L⁻¹) and biomass
productivity (0.05 g L⁻¹ d⁻¹), while Nitzschia sp. showed the lowest dry weight (0.33 g L⁻¹) and biomass productivity (0.011 g L⁻¹ d⁻¹). After Nannochloropsis sp., Dunaliella sp. showed a high biomass productivity of 0.047 g L⁻¹ d⁻¹.

Protein content

Under similar growth conditions, crude protein in the different isolates ranged from 21% to 38% (Fig. 2A). Among the different isolates, Nitzschia sp. showed the highest crude protein content, while the Chlorophyte Dunaliella sp. showed the minimum. Nitzschia sp., which exhibited the highest protein content compared with the other examined algae, also showed the highest soluble protein (2.04%) and a high true protein content (36.17%). A moderate protein content was detected in Nannochloropsis (28.19%).

Carbohydrates

Total carbohydrates of the Chlorophyta genera were higher than those of most other genera, ranging from 22.06% in Chlorella sp. and 38.16% in Tetraselmis sp. up to a maximum of 39.07% in Dunaliella sp. (Fig. 3). Overall, the highest carbohydrate content of 42.13% was recorded in Prorocentrum sp., while the lowest carbohydrate content of 19.36% was recorded in Nitzschia sp. Reducing sugars are the most important fraction for the metabolism of algal cells and for other beneficial industrial uses. Bacillariophyte species showed relatively low reducing sugar contents (3.15-5.08% on a dry weight basis), Chrysophytes represented a moderate level (5.81%), and the highest content was recorded in the Dinoflagellates (8.98%). Overall, based on the relatively high content of carbohydrates/reducing sugars recorded in most of the studied species, they can be used as a proper feedstock for bioethanol production through
fermentation.

Lipids

Lipids and their fractions have been discussed in recent years as the most important algal biomass constituent for renewable energy production in the form of biodiesel, and many factors affect their content in microalgal cells. In the current study, total lipid contents of 10.09-19.17% were recorded in the eight tested species (Fig. 4). Among all, Nitzschia sp. showed the maximum lipid content (19.17%), while the lowest was recorded in Coscinodiscus sp. (10.09%).

Fatty acids and biodiesel properties

In the present study, the FAME composition varied with the isolate (Table 1), ranging from C8:0 to C24:0 in carbon chain length. Myristic acid (C14:0) was most abundant (30.77%) in the Chlorophyte Tetraselmis sp., while Prorocentrum sp. showed the lowest content (1.32%) (Table 1). Fatty acids within the range C14-C18 represented from 52.22% in Coscinodiscus sp. to 83.05% in Prorocentrum sp. The fatty acid composition markedly affects the properties of the produced biodiesel, where long-chain and saturated fatty acids determine the initial energy content of the biodiesel. Polyunsaturated fatty acids (PUFAs) are key components maintaining the membrane fluidity of cells and preventing cell rupture (Hyun et al., 2016). As shown in Table 2, the saponification values of the produced FAMEs ranged from 200.19 in Coscinodiscus sp. to 219.53 in Nannochloropsis sp. A high saponification value indicates short-chain fatty acids, which in turn reduce the net energy obtained compared with long-chain fatty acids. Cetane number is a main parameter
to indicate biodiesel quality, reflecting its ignition quality (Francisco et al., 2010). Among the different isolates, Dunaliella sp. showed the maximum cetane number of 32.07 (Table 2). In addition, the European biodiesel standard (EN 14214, 2008) recommends an iodine value of at most 120 g I₂/100 g oil for the best engine performance (Abomohra et al., 2021c). The present study showed compatible iodine values in the isolates Dunaliella sp., Chlorella sp., Tetraselmis sp., Prorocentrum sp., and Navicula sp. (63.38, 86.57, 111.37, 118.71, and 86.88 g I₂/100 g oil, respectively).

Discussions

Within the same microalgal growth conditions, variation in biomass accumulation of the different isolates might be attributed to the specific requirements of each species and/or medium suitability. Nutritional requirements and salinity margins differ from one species to another, which explains the recorded variation in the growth and biochemical composition of the different genera under the same growth conditions. Although F/2 has been suggested as a common growth medium for marine microalgae, the current study confirmed that its growth-promoting effect differs significantly from one species to another. It can also be noted that Prorocentrum sp. had a higher dry weight compared with Nannochloropsis sp.; however, it showed lower biomass productivity, which is attributed to its slow growth over a longer incubation period.

Fig. 1. Dry weight, biomass productivity, and growth rate of different marine water isolates grown in F/2 medium. The same series with different letters showed significant differences (P < 0.05).

A.W. Almutairi, Saudi Journal of Biological Sciences 29 (2022) 103339

Regarding Dunaliella sp., Costa et al. (2004) stated that the growth rate of Dunaliella was enhanced by 30% using seawater-enriched F/2 medium. In addition, a recent study (Abomohra et al., 2020a) confirmed the potential of Dunaliella sp. to grow at extreme salinity levels up to 250‰. Thus, supplementation of nutrients to the medium could enhance microalgal growth and further improve the biomass production rate (Xin et al., 2010). Overall, the response of different microalgal species to the growth medium depends mainly on their specific biological requirements, which results in considerable differences among genera and species. Due to the environmentally friendly cultivation of microalgae, without competition with the agriculture industry, and their various unique metabolites with remarkable biological qualities, microalgae are widely discussed as a potential alternative sustainable source for many industrial products (Bhosale et al., 2010; Khan et al., 2017; Shao et al., 2019). However, selection of the promising candidate for a target product is still the main challenge for commercialization. Natural habitats, rather than the taxa to which species belong, approximately define the biochemical profile within the same margin. For instance, Chlorella vulgaris and Scenedesmus obliquus differ greatly in their biochemical composition from Dunaliella sp. due to the salinity margin of their natural habitats. After carbon, nitrogen represents the second constituent of cellular minerals due to its important role in cell metabolism. Protein is the main nitrogen-containing constituent, and protein content ranges from 10% to about 60% in microalgae (Salbitani and Carfagna, 2021). Both environmental conditions and nutritional status markedly influence this content, and the first indicator of drastic conditions is the reduction in protein
content (El-Sayed, 1999; Almutairi, 2020a,b). Published values of algal protein content vary widely, even within a single species: three different Dunaliella sp. isolates (ABRIINW-B1, GT/1, and 11) showed protein contents of 19%, 41% and 31%, respectively, under the same conditions (Gharajeh et al., 2020). Other species of Dunaliella are rich in protein, with 40-57% found in Dunaliella salina (Milledge, 2011) and 49% in Dunaliella bioculata (Van Krimpen et al., 2013). Microalgal proteins are currently used in aquaculture feed and provided as a functional health food due to their complete profile of essential amino acids (EAAs) (Schwenzfeier et al., 2011; Koyande et al., 2019). In that context, Xu-xiong et al. (2004) found that the exponential phase of Nannochloropsis has the highest crude protein content (33.99%), while the lowest (28.33%) was recorded in the stationary phase. However, it showed higher amino acid contents (225.02 mg g⁻¹ and 214.82 mg g⁻¹) in the stationary phase and the phase of declining relative growth, respectively, with the lowest amino acid content (98.87 mg g⁻¹) in the exponential phase. Moreover, the total lipid content of algae in the phase of declining relative growth was significantly higher than in the exponential and stationary phases. Under unfavorable conditions such as high salinity, nitrogen depletion and high light irradiation, protein decomposition takes place in many ways, coupled with growth failure. In this case, there is a net conversion of protein into other nitrogenous compounds, which are easily lost from the algal growth media. To avoid the loss of dry weight regardless of the
protein decomposition, photosynthesis should be properly extended through carotenogenesis, which is the main method used to increase the lipid content of microalgae (El-Sayed, 2010). In terrestrial plants and microalgae, polysaccharides are used as structural elements and for energy storage. Compared with treated lignocellulosic biomass or conventional sugar resources, microalgal carbohydrates are a preferred alternative in fermentation processes (Xu et al., 2019). Microalgal biomass and its low-value residues can be converted into high-quality bioethanol through fermentation (Lam and Lee, 2012; Chew et al., 2017). Naturally, many enzymes can degrade lignocellulosic starch to simple sugars for easier transport across the gut lumen, but cannot digest many more complex polysaccharides, known as dietary fibers (Saiki, 1906), which can be fermented in the large intestine depending on the enzymes produced by the microbiome (Cian et al., 2015). Algal cell walls are different, since they contain unique polysaccharides and polyuronides that may be acetylated, methylated, sulfated, or pyruvylated (Stiger-Pouvreau et al., 2016). Carbohydrates are among the main growth metabolites of algal biomass, and certain sugars are much desired for different purposes. Saline-grown algae also surpass freshwater- or brackish-water-grown algae as a source of functional carbohydrate components, at the expense of protein content. For instance, 37.6% available carbohydrates were obtained for Nannochloropsis sp. by Rebolloso-Fuentes et al. (2001), compared with 28.8% protein content. In addition, saccharides from algae have various antitumor, antimicrobial, antiviral, anticoagulant, and fibrinolytic properties (Dere et al., 2003). Thus, the present study can serve as an information source for further wide-range applications
of the studied marine microalgae for different carbohydrates-based industrial purposes. The rise in reducing sugars content was found to be positively correlated with the total carbohydrates content; reducing sugars from microalgae are a good feedstock for bioethanol production (El et al., 2015;El-Sayed et al., 2017;El-Sayed et al., 2020). Compared to other biodiesel feedstocks, microalgae can accumulate higher amounts of cellular lipids that can be further converted to biodiesel by transesterification. Thus, high lipid content of a certain species is one main criterion to select a proper microalgal strain as a renewable biodiesel source (Wang et al., 2022). Lipid content is widely varied within the same genus; for example, lipid contents of 42%, 47%, and 36% were recorded in different isolates of Dunaliella sp. (Ahmed et al., 2017;Khan et al., 2017). Concerning Dunaliella, the lipid content in the present study was above the middle of the reported data range for this genus, while it is similar to that of Dunaliella salina reported by Adarme-Vega et al. (2014). Regarding fatty acids, microalgae are considered the original producers of long-chain and very-long-chain PUFAs (>18C and >20C, respectively), including n3 and n6 fatty acids. These fatty acids enter the human body through the food chain by fish feeding on phytoplankton, which serves as the subsequent nutritional source (Kainz et al., 2009). These fatty acids can be used for nutrition, which promotes the biorefinery approach of microalgal biomass. In addition, the fatty acid profile significantly influences the main biodiesel characteristics (Abomohra et al., 2020b). Due to the varied fatty | 3,070,044 | 249810635 | 0 | 16
acid profile among the different studied species, they showed different biodiesel characteristics. However, most of the estimated characteristics were within the ranges recommended by international standards. Taken together, the present study suggests Dunaliella sp. and Navicula sp. as potential candidates for biodiesel production, which requires future studies on the optimization of growth conditions for enhanced biodiesel and biomass production. Conclusions This work aimed to evaluate different dominant microalgal species isolated from marine water at the coastal area of the Red Sea for possible application as biodiesel feedstocks. Results showed that Nannochloropsis sp. has the highest biomass yield, but it showed low lipid content (13.38%) and non-desirable biodiesel properties regarding cetane number and iodine value. On the other hand, Dunaliella sp. and Navicula sp. showed the highest lipid contents among the studied species (18.01% and 19.17%, respectively), with biodiesel characteristics complying with the international standards. Therefore, these two isolates are recommended as promising candidates for further studies and large-scale lipid production, which could enhance biodiesel production from marine microalgae. In addition to biodiesel production, the present screening study could serve as an information pool for further wide-range applications of the studied marine microalgae at the Rabigh coastal area for different industrial purposes. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Table 2 The main fuel properties of biodiesel produced from the different marine water isolates grown in F/2 medium. | 3,070,045 | 249810635 | 0 | 16
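The biodiesel characteristics discussed above (cetane number and iodine value checked against international standards) are commonly estimated from a FAME profile via empirical relations for saponification value (SV), iodine value (IV) and a Krisnangkura-type cetane number (CN) equation. The sketch below uses a hypothetical three-component profile, not the measured composition of the studied isolates:

```python
# Sketch: estimating key biodiesel properties from a fatty acid methyl ester
# (FAME) profile with widely used empirical relations. The example profile is
# hypothetical, NOT the measured composition of the isolates in the study.

# fatty acid -> (methyl ester molecular weight, double bonds, mass % of total)
profile = {
    "C16:0": (270.45, 0, 40.0),  # palmitic acid methyl ester
    "C18:1": (296.49, 1, 40.0),  # oleic acid methyl ester
    "C18:3": (292.46, 3, 20.0),  # linolenic acid methyl ester
}

# SV = sum(560 * N_i / M_i); IV = sum(254 * D_i * N_i / M_i)
sv = sum(560.0 * pct / mw for mw, _, pct in profile.values())
iv = sum(254.0 * db * pct / mw for mw, db, pct in profile.values())
# Krisnangkura-type relation: CN = 46.3 + 5458 / SV - 0.225 * IV
cn = 46.3 + 5458.0 / sv - 0.225 * iv

print(f"SV = {sv:.1f} mg KOH/g, IV = {iv:.1f} g I2/100 g, CN = {cn:.1f}")
# EN 14214 requires CN >= 51 and IV <= 120 g I2/100 g
print("meets EN 14214 CN/IV limits:", cn >= 51 and iv <= 120)
```

Note how the relations encode the text's point: more double bonds (higher PUFA share) raise IV and depress CN, which is why highly unsaturated profiles can fail the standard limits.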
A Measure to Target Antipoverty Policies in the European Union Regions The reformed cohesion policy (CP), which is the major investment tool in the European Union (EU) for delivering the Europe 2020 targets, will soon make available substantial funds to improve the quality of life of the EU citizens through supporting the economic and social development of the EU’s regions and cities. Because the reformed CP has intensified the emphasis on measuring results, also with respect to reducing poverty and social exclusion, this paper is about measuring poverty to better target EU local policies. We propose a measurement of poverty at the sub-national level in the EU by means of three poverty components describing absolute poverty, relative poverty and earnings and incomes. The core data source is the cross-sectional European Statistics on Income and Living Conditions (EU-SILC) micro-data, waves 2007–2009. Data reliability at the sub-national level is statistically assessed and the regional level is described whenever possible. To calculate the poverty components, an inequality-adverse type of aggregation is applied in order to limit compensability across indicators populating a component. No aggregation is, however, performed across the three components. In the computations of income-related indicators, individual disposable income adjusted for housing costs, used as a proxy for the costs of living, is used. Poverty is confirmed to be a multi-faceted phenomenon with clear within-country variability. This variation depends on the type of region likely linked to the urbanisation level and, consequently, to the costs of living. The proposed measure may serve to better target anti-poverty measures at | 3,070,046 | 255280096 | 0 | 16 |
the local, sub-national level in the EU. Introduction The European Union (EU) cohesion policy (CP) is an integrated approach to support the economic and social development of regions. One of the main objectives of the CP is to improve the level of well-being of people across the EU. The reformed CP for the period 2014-2020, approved by the European Parliament in November 2013, represents the EU's most important investment tool for delivering the Europe 2020 targets 1 : creating growth and jobs, tackling climate change and energy dependence, and reducing poverty and social exclusion. It also sets out new conditions for funding and intensifies the emphasis on measuring results with respect to delivering the Europe 2020 targets. Many aspects of well-being and standard of living have indeed a straightforward link to policies, which are mostly defined at regional and local levels. As such, the CP lies at the core of the EU policy objective of improving the quality of life of its citizens. However, to fulfil the objectives of the CP, it is important to know how to measure people's quality of life. Its measurement goes well beyond Gross Domestic Product (GDP). The rethinking of economic growth following the economic and financial crisis added another impetus to developing alternative measures of quality of life, well-being and living standards. While there are numerous initiatives and concrete examples of socioeconomic well-being indicators at the national level, the availability of regional indicators is rather scarce and usually limited to one country. There is, however, a multidimensional measure of poverty | 3,070,047 | 255280096 | 0 | 16 |
officially used in the EU, namely the 'at risk of poverty or social exclusion' (AROPE) rate, which is reported not only at the country level, but also for different geographical levels (NUTS levels 2 ) and different density of population areas. This measure, using both income and non-income indicators and referring to the situation of people either at risk of poverty, or severely materially deprived or living in a household with a very low work intensity, informs about the shares of poor. Yet, it does not take into account other measures of poverty, like the poverty depth and intensity, and does not include the variability of the costs of living across regions. We provide a more detailed and systematic measurement of poverty in the EU regions to better target anti-poverty policies at the local level. The paper is organised as follows. Poverty measures presents different approaches to the measurement of poverty and Conceptualisation of poverty measures describes the proposed conceptualisation of poverty. In A note on the adjustment for housing costs the importance of adjusting for costs of living is discussed, especially when going subnational. Micro-data sources, with special emphasis on sub-national level representativeness, are presented in Data source and reliability while The three components of regional poverty presents the three poverty aggregated measures. The statistical approach, adopted for the setting-up of these aggregated measures, is presented in Aggregated measures. The distribution of poverty across EU regions presents three poverty measures and discusses our reasons for not proceeding with the computation of a final, single measure | 3,070,048 | 255280096 | 0 | 16 |
of regional poverty. Finally, Summary summarises the main outcomes. Poverty Measures No one questions any longer that poverty and well-being are multidimensional concepts (Lustig 2011). Many recent studies not only address poverty by means of numerous dimensions, such as poverty in education, health and living standards, but also include monetary and non-monetary indices (Alkire and Foster 2011a, b;Antony and Visweswara Rao 2007;Atkinson et al. 2002, 2004, 2010;Betti et al. 2012;Bubbico and Dijkstra 2011;Callander et al. 2012;Merz and Rathjen 2014;Ravallion 2011;Rojas 2011;Wagle 2008;Weziak-Bialowolska and Dijkstra 2014). However, the notion of poverty is understood differently in different contexts (Callander et al. 2012). According to Wagle (2008) and Saunders (2005), there are three main approaches in the conceptualisation and operationalisation of poverty: economic well-being, capability and social inclusion. Nevertheless, an analysis of their basis and meaning reveals that the capability approach considerably stems from the economic well-being approach. The economic well-being concept links poverty to the economic deprivation that, in turn, relates to material aspects and/or standards of living (Boulanger et al. 2009;Wagle 2008). Thus, the perfect measure of poverty in terms of economic well-being should be a combination of income, consumption and welfare. Although the measurement of income is not a problematic issue, at least to some extent, the measurement of consumption level and welfare is not straightforward. For these reasons, the level of disposable income is often used as a proxy of consumption (Decancq and Lugo 2013). The capability approach, proposed by Sen (1993), expands the notion of poverty from welfare, | 3,070,049 | 255280096 | 0 | 16
consumption and income to broader concepts like freedom, well-being and capabilities. In his approach poverty is understood as a state of capability or functioning deprivation that happens when people lack freedom and opportunities to acquire or expand their abilities. Capabilities are things persons are able to do or which enable them to lead the life they currently have. Functioning represents the achievement that a person is capable of realising, or, as modified by Sen (2002) later on, the ability to make outcomes happen. Freedom is a principle determinant of individual initiative and social effectiveness that enhances the ability of individuals to help themselves, which implies that the use of freedom is part of what wellbeing is. According to Sen (1999) there are five distinct freedoms: political freedoms, economic facilities, social opportunities, transparency guarantees and protective security, which determine what people are 'capable' of becoming or doing (achieving). The social inclusion approach is the opposite to social exclusion, which relates to a condition of systematic isolation, rejection, humiliation, lack of social support, and denial of participation (Wagle 2008). It focuses on deficiencies, while the capability approach focuses on possibilities and abilities. The last two approaches expand the economic notion of poverty by including the sociological point of view. Conceptualisation of Poverty Measures In this paper we limit ourselves to poverty understood as economic well-being, or economic deprivation measured in absolute and relative terms at the sub-national level, optimally at the second level of the NUTS, namely NUTS 2, which are basic regions for the application of regional | 3,070,050 | 255280096 | 0 | 16 |
policies. It implies that no measures of poverty related to education or health, which are two of the most frequently occurring non-income poverty dimensions, are used. Although we are aware of the consequent limits, this is intentional as in this paper we focus on poverty and not on well-being or quality of life in a broader sense. The multidimensional measure of poverty at the regional level is assumed to consist of three components: 1. Absolute Poverty; 2. Relative Poverty and 3. Earnings and Incomes. Indicators populating the poverty components are listed in Fig. 1 and described in The three components of regional poverty. Their choice results from both theoretical considerations and data availability and quality. For each component an aggregated measure, which ensures non-full compensability between indicators, is provided. It means that a deficit in one variable cannot be entirely offset by a surplus in another. To be in line with the variety of poverty definitions to assess multidimensional poverty, we use both monetary and non-monetary indicators and take into account subjective measures, by including several self-assessed indicators of absolute poverty. No direct measure of perceived poverty level is included in the analysis due to the lack of reliable data at the sub-national level. To the best of our knowledge, our approach features the following innovative points. i) We focus on regional variability because the EU regions, not the countries, are the key elements of the EU's regional policy (Becker et al. 2010) and local differences in poverty are essential for targeted anti-poverty policies. ii) We | 3,070,051 | 255280096 | 0 | 16 |
take into account the housing costs, which are a crucial factor in the computation of an individual's disposable income as, due to highly diversified rental and purchase prices across regions, they can strongly affect actual disposable incomes. iii) We adopt an inequality-adverse type of aggregation, generalised mean of order β=0.5, within each poverty component. This approach prizes higher the improvement in those indicators that perform poorly, thus not allowing for full compensation between indicators. Such an approach is in line with recent developments in the field, such as the Human Development Index (Klugman et al. 2011), based on the geometric mean (generalised mean of order β=0) since 2011 and the Material Condition Index proposed by the OECD (Ruiz 2011). A Note on the Adjustment for Housing Costs The inclusion of housing costs in the computation of the individual disposable income shall be conceptually justified. Most authors do not include costs of living in their computation of disposable income and income poverty (Wagle 2008;Whiting 2004;Wong 2005). It may result from the fact that the classical definition of disposable income says that it is an income remaining after deduction of taxes and social security charges, and available to be spent or saved. However, in order to reliably compare regions, both within and across countries, the inclusion of cost of living in the estimation of actually disposable income is especially important. Indeed, as shown by Dijkstra (2013), the cost of living can differ substantially across areas with different degrees of urbanisation. Adjusting the income for cost of living at | 3,070,052 | 255280096 | 0 | 16 |
the sub-national level is, however, quite challenging as there are no harmonised data on the within-country living costs in the EU. Still, an approximated approach can be proposed. On the one hand, it is known that services such as telecommunications, postal services and energy are provided at the same cost throughout a country and most tradable goods do not differ substantially in cost between the EU countries. On the other hand, housing costs do substantially differ between different areas of a country and between countries. Therefore, they can be seen as one of the key contributors to differences in cost of living in developed countries and thus in regional poverty distribution (Kemeny and Storper 2012;Wong 2005). It was shown by many researchers-Hutto et al. (2011), Jolliffe (2006), Kemeny and Storper (2012) and Ziliak (2010); Miranti et al. (2011) and Tanton et al. (2010) for Australia; and Massari et al. (2010) for Italy-that the impact of accounting for housing cost differences across rural and non-rural areas is considerable. Not adjusting for differences in cost of living leads to a significant overestimation of poverty in low-cost areas and an underestimation of poverty in high-cost areas. It may also result in a complete reversal of poverty rankings, as claimed by Jolliffe (2006). A recurrent argument against the inclusion of housing costs is that a within-country variation of housing costs is not only due to differences in prices but also due to different preferences of the households. In other words, some households are willing to pay more | 3,070,053 | 255280096 | 0 | 16
for housing services because they opt for different-better quality, higher size-housing solutions. This is certainly true for a number of households, but do they constitute the majority? Dijkstra (2013) recently provided the evidence of the scale of the problem by showing that: 1. housing costs in the EU cities are in all EU countries higher than the national average (the only exception is Germany); 2. housing costs in rural areas in all EU countries are lower than the national average (with the exception of Belgium). How does this affect poverty rate related measures? In the countries with large differences in housing costs across areas with different population densities, adjusting the at-risk-of-poverty rate for housing costs will significantly change the poverty incidence. This means that housing costs do affect individual disposable income especially of the poorer part of the population, which is exactly the reason for taking them into account in a poverty-related analysis like ours. Data Source and Reliability Our core data source is the cross-sectional European Statistics on Income and Living Conditions (EU-SILC) micro-data that describe different aspects of living standards at the household and individual level in the EU. Three waves (2007, 2008 and 2009) are used in the computations. In populating the poverty concepts, our requirement is to describe the sub-national level, optimally the NUTS 2 level. However, the level finally described is determined by the availability of the regional identifier in the database (Table 6. in the Appendix) and by data reliability analysis. Indicators from the Eurostat regional statistics are also used. | 3,070,054 | 255280096 | 0 | 16 |
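The mechanism described in this section — deducting housing costs can reshuffle which households count as income-poor across urban and rural areas — can be sketched as follows. All household figures are hypothetical, not EU-SILC records:

```python
# Sketch: at-risk-of-poverty status (below 60% of median income) before and
# after deducting annual housing costs. Urban households here have higher
# incomes but much higher housing costs; all numbers are hypothetical.

# name -> (equivalised disposable income, annual housing cost)
households = {
    "urban-1": (40000, 15000), "urban-2": (35000, 14000),
    "urban-3": (30000, 13000), "urban-4": (18000, 12500),
    "rural-1": (15000, 2000),  "rural-2": (12000, 1500),
    "rural-3": (9500, 1200),   "rural-4": (8000, 1200),
}

def poor_set(incomes):
    """Names of people below 60% of the median income."""
    ranked = sorted(incomes.values())
    n = len(ranked)
    median = (ranked[n//2 - 1] + ranked[n//2]) / 2 if n % 2 == 0 else ranked[n//2]
    line = 0.6 * median
    return {name for name, y in incomes.items() if y < line}

before = poor_set({k: y for k, (y, hc) in households.items()})
after = poor_set({k: y - hc for k, (y, hc) in households.items()})
print("poor before adjustment:", sorted(before))  # rural households only
print("poor after adjustment: ", sorted(after))   # an urban household enters
```

In this toy data the adjustment moves one rural household out of poverty and one urban household into it, illustrating why unadjusted figures overestimate poverty in low-cost areas and underestimate it in high-cost ones.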
Being aware of the country-level representativeness of the EU-SILC data, we tried to make the best use of currently available data. Methodological approaches to increase the reliability at the sub-national level of data designed to be representative at the national level only are broadly described in the literature devoted to the use of the EU-SILC (Lelkes and Zolyomi 2008;Longford et al. 2012;Verma et al. 2010;Ward 2009). The approach adopted here is rather pragmatic and combines two of the most popular approaches for EU-SILC micro-data analysis. First, sub-national data reliability is assessed by comparing the EU-SILC weighted sample size for different gender-age classes with the Eurostat-based population share in the same gender-age classes (age classes: 0-14, 15-34, 35-54, 55-74, 75+). The level of significance of the differences is approximated by the t-statistic. Significant discrepancies, at level α=0.05, account for 7.7, 4.0 and 0.0 % of all the cases for the EU-SILC 2007, 2008 and 2009 waves, respectively (more details in Annoni et al. (2012)). To reduce the impact of sample sizes detected as not reliable enough, first, the sub-national level for France is moved from NUTS 2 to NUTS 1, as also adopted by Ward (2009), who employed the EU-SILC at the sub-national level. Second, two problematic Spanish regions, namely Ciudad Autónoma de Ceuta (ES63) and Ciudad Autónoma de Melilla (ES64), are discarded from the analysis, while keeping the NUTS 2 level for the rest of the country. Then, each indicator is computed for each wave separately and then averaged across the three waves, 2007-2009, to improve the precision | 3,070,055 | 255280096 | 0 | 16
of the poverty measurement. 4 Consequently, the lowest feasible and most appropriate (in terms of regional representativeness) geographical level adopted in our analysis is as follows: the lowest sub-national spatial level (NUTS). Unfortunately, many big countries lack the regional identifier in the EU-SILC database. We tried to solve this problem by examining country-specific household surveys with regard to their sub-national representativeness and similarity of poverty-related questions. 6 However, the analysis showed non-optimal data reliability at the regional level and many comparability problems with the EU-SILC-derived indicators, especially with respect to the definition of disposable income components. Results are not shown here but can be found in Annoni et al. (2012). Absolute Poverty The Absolute Poverty component measures the individual's capacity to afford basic needs and includes the following indicators calculated at the regional level: (1) material deprivation rate as used by Eurostat (Törmälehto and Sauli 2010), (2) material 4 We are aware that by averaging across the 2007, 2008 and 2009 waves we provide a snapshot of poverty in the EU just before the start in 2008 of the financial and then economic crisis. The crisis has very likely worsened the picture presented here. With the availability of micro-data for the 2012 wave, expected in 2014, we plan to repeat the analysis with the three most recent waves, namely 2010-2012. We recall that the EU-SILC income data refer to the total annual income of households in the year prior to the survey with the exception of the United Kingdom (for which the household income is | 3,070,056 | 255280096 | 0 | 16
calculated on the basis of current income) and Ireland (where the calculation is based on a moving income reference period covering part of the year of the interview and part of the year prior to the survey) (Fusco et al. 2010). 5 Cyprus, Estonia, Lithuania, Luxembourg, Latvia and Malta are small countries with no administrative regions. Germany, Denmark, the Netherlands, Portugal, Slovakia and the United Kingdom do not provide regional identifiers, making it impossible to disaggregate the indicators at the sub-national level. 6 Even if the poverty-related questions are present in country-specific household surveys, they are often not formulated in the same way with respect to their content, wording, answer categories, etc., thus hampering comparability. deprivation depth, (3) percentage of people experiencing difficulty in making ends meet, (4) percentage of people experiencing problems with their dwelling, (5) percentage of people living in over-crowded houses, (6) percentage of people who cannot afford necessary medical treatments, and (7) percentage of people who cannot afford necessary dental treatments. A detailed description of the indicators is presented in Table 1. Relative Poverty The Relative Poverty component includes the three well-known Foster-Greer-Thorbecke (FGT) measures: poverty incidence P 0 , poverty depth P 1 and poverty severity P 2 (Foster et al. 1984, 2010), calculated according to the general formula: P α = (1/n) ∑ i=1 q [(z−y i )/z] α (1), where α is a real positive number, y=(y 1 ,y 2 ,…,y n ) is a vector of properly defined income in increasing order, z>0 is a predefined poverty line, n is the total number of individuals under analysis, (z-y | 3,070,057 | 255280096 | 0 | 16
i )/z is the normalised income gap of individual i, and q is the number of individuals having income not greater than the poverty line z. The parameter α can be seen as a parameter of 'poverty aversion': the higher α, the higher the relevance assigned to the poorest poor (Foster et al. 2010). Table 1 (excerpt). Material deprivation rate: Inability to afford some items considered by most people to be desirable or even necessary to lead an adequate life. It is defined as the proportion of people lacking at least three out of nine items describing these consumption goods and activities that are typical in a society, irrespective of people's preferences with respect to these items (Törmälehto and Sauli 2010). Material deprivation depth: Unweighted mean number of items lacked by the deprived population (Eurostat 2010). Percentage of people who cannot afford necessary dental treatments: Computed by combining two questions from the EU-SILC in order to describe situations when dental needs are unmet due to economic reasons only. a Some previous analyses show that values of the crowding index higher than 2 are associated with critically low socioeconomic status (Melki et al. 2004). P 0 , P 1 and P 2 are computed for all EU regions using national poverty lines (defined as 60 % of the median national disposable income) and individual disposable income adjusted for cost of living, as described shortly below. The national, instead of the regional, disposable income is used to compute poverty lines in order to highlight the differences between regions within the same | 3,070,058 | 255280096 | 0 | 16
country, as suggested by Betti et al. (2012). Specifically, individual disposable income adjusted for housing costs is computed as follows: income = (HY020 × HY025 − 12 × HH070) / HX050, where: -HY020 is the total household disposable income; in the EU-SILC it represents a comparable measure of household income across the EU 7 ; -HH070 is monthly total housing costs; they comprise structural insurance, services and charges (sewage removal, refuse removal, etc.), taxes on dwelling, regular maintenance and repairs, cost of utilities (water, gas, electricity and heating), mortgage interest payments for owners, rent payments for tenants, housing benefits for households whose house is rented for free; -HY025 is a within-household non-response inflation factor used to correct for non-response distortions; -HX050 is the equivalised household size according to the modified OECD approach: HX050 = 1 + 0.5 × (HM 14+ − 1) + 0.3 × HM 13 − , where HM 14+ is the number of household members aged 14 and over and HM 13 − is the number of members aged 13 or less. Following suggestions by Eurostat, housing costs are deducted from both the individual disposable income and the poverty line, so as not to weaken too much the link between poverty and low living standards. 7 Disposable income is the most common indicator of economic resources used in poverty studies (McNamara et al. 2006). The EU-SILC defines household disposable income as the sum of a number of household and personal income components (Eurostat 2010): (1) gross (or net) personal income components of all household members, like employee income, company car, profits or losses from self-employment, unemployment benefits and other benefits (+); (2) gross (or net) income components at household level, like | 3,070,059 | 255280096 | 0 | 16
income from rental of a property or land, family or housing -related allowances, interests or profit from capital investments, regular (+); (3) inter-household cash transfers received and other types of household incomes (+); and (4) deductions, like taxes on income, social insurance and wealth, inter-household cash transfer paid (−). Earnings and Incomes The Earnings and Incomes component describes the monetary aspects of standards of living with three indicators: compensation of employees, net adjustable household income and median regional income. Compensation of employees captures the working conditions in the region, in terms of salaries, while the net adjusted household income provides the income corrected for the cost of services financed or subsidised by the government. Without this type of adjustment, household income is generally underestimated in countries with extensive public services, like in the Nordic member states, and overestimated in those where households have to pay for most of these services (EC 2010). The median regional income is computed from the equivalised household disposable income after correcting for housing costs. Detailed definitions of the indicators in this component are provided in Table 2. The choice of the median instead of the mean in the computation of regional average incomes is driven by the fact that, as the distribution of income is skewed, '… median consumption (income, wealth) provides a better measure of what is happening to the "typical" individual or household than average consumption (income or wealth) …' (Stiglitz et al. (2009, pp. 13-14)). SAS® ver. 9.2 was used for indicator extraction and computations. Aggregated Measures The | 3,070,060 | 255280096 | 0 | 16 |
issue of aggregating indicators into a single, composite index is a widely debated topic in socioeconomics, especially when measuring poverty and quality of life (Decancq and Lugo 2013;Lustig 2011;Ravallion 2011;Wagle 2008). Table 2. Compensation of employees: It refers to gross wages, salaries and other benefits earned by individuals in economies other than those in which they are resident, for work performed and paid for by residents of those economies. Compensation of employees includes salaries paid to seasonal and other short-term workers (less than 1 year), to the employees of embassies and of other territorial enclaves that are not considered part of the national economy and to cross-border workers. Net adjustable household income: It is household disposable income that is adjusted for social transfers in kind. Social transfers in kind are goods and services such as education, healthcare and other public services that are provided by the government for free or below provision cost. It includes income from economic activity (wages and salaries; profits of self-employed business owners), property income (dividends, interests and rents), social benefits in cash (retirement pensions, unemployment benefits, family allowances, basic income support, etc.), and social transfers in kind (goods and services, such as healthcare, education and housing, received either free of charge or at reduced prices). Median regional income: It is a median of equivalised household total disposable income after correcting for total housing costs. The aggregation process always implies, explicitly or implicitly, the choice of weights to be assigned to different, suitably selected and scaled indicators and the aggregation method. Both issues play | 3,070,061 | 255280096 | 0 | 16
a crucial role in determining the trade-offs between the different aspects measured (OECD-JRC 2008). Although we are aware that multi-criteria methods are analytical instruments to study these kinds of problems, like the counting method proposed by Alkire and Foster (2011a) or the purely multi-criteria approaches based on partial order (Annoni 2007;Annoni and Bruggemann 2009;Bruggemann and Carlsen 2012), within each poverty component we opt for a classical aggregation technique, as we assume, test and confirm an internal consistency of each component. Indicators are then aggregated only within each poverty component. For all the regions in the analysis three separate aggregated measures are computed: Absolute Poverty Index (API), Relative Poverty Index (RPI) and Earnings and Incomes Index (EII). Following recommendations by different scholars on the topic, see for example Ravallion (2011) and Stiglitz et al. (2009), no aggregation is performed across the three components. They indeed describe different, and sometimes contradicting, aspects of people's standards of living, which implies that it would make little sense to provide a single, aggregated measure of the three. Within each component we: (i) check for statistical internal consistency; (ii) standardise indicators by means of weighted z-scores; (iii) adopt an inequality-adverse type of aggregation; and (iv) use equal weights. Principal Component Analysis (PCA) (Morrison 2005) is employed for internal consistency assessment. The aim is to check to what extent indicators within the same component measure the same latent variable. Internal consistency, which is related to the level of correlation or association among indicators, if established, reduces the effect of different weighting scheme on | 3,070,062 | 255280096 | 0 | 16 |
the final, aggregated measure (Decancq and Lugo 2013; Hagerty and Land 2007; Michalos 2011). In our case, the selected indicators show a good level of internal consistency for all three components (Table 3 summarises the PCA outcomes). It can be seen that the share of variance explained by the first principal component (PC) is always very high. It varies from 74 % for Absolute Poverty to 95 % for Relative Poverty, suggesting that the indicators included are indeed measuring a single latent phenomenon in each of the three components. The analysis of the loadings, which are always statistically significant, shows that almost all the indicators contribute to the first PC to the same extent, supporting our choice of equal weights. The only exception is the 'share of people living in crowded houses' indicator, which shows the lowest loading among the Absolute Poverty indicators, namely 0.29, whereas all other indicators have a loading value higher than 0.37. According to the well-known principle, particularly true when speaking of wellbeing, that deficiency in one element leads to a general failure, good living standards are ensured only if all poverty indicators are at satisfactory levels. This implies, in turn, that shortages in one indicator of a poverty component cannot be fully compensated with surpluses in another indicator. In the aggregation procedure, full compensability can be avoided with generalised weighted means; this is supported in the literature of multidimensional poverty and inequality (Decancq and Lugo 2013; Ruiz 2011). Let x_ij denote the value of indicator j (j = 1, …, q) for region i (i = 1, …, n). For each region
the vector x = (x_1, …, x_q) is assumed to be available at a certain time point, with all indicators positively oriented with respect to the latent phenomenon under analysis. A generalised mean of order β is defined as

μ_β(x) = (Σ_{j=1}^{q} w_j f(x_j)^β)^{1/β}    (2)

where f(x_j) represents the transformed (standardised) indicators, and the vector w = (w_1, …, w_q) contains the indicator weights, such that w_1 + … + w_q = 1. Our approach is based on the assumption that 0 < β < 1. Under this assumption the generalised mean is said to be inequality-adverse: a rise in the level of one indicator in the lower tail of the distribution will increase the overall mean by more than a similar rise in the upper tail, thus giving more importance to low levels (Ruiz 2011). Generalised means of the type (2) satisfy a series of mathematical properties required for aggregated measures, especially in the field of welfare and inequality (Ruiz 2011). In our case we are particularly interested in the marginal substitution rate between indicators j and k, MSR_{j,k}, which is defined as

MSR_{j,k} = (∂μ_β/∂x_k) / (∂μ_β/∂x_j) = (w_k/w_j) · (f'(x_k)/f'(x_j)) · (f(x_k)/f(x_j))^{β−1}

In case of aggregation of type (2), MSR_{j,k} thus depends on three elements:
1. weight dependency: w_k/w_j;
2. transformation dependency: f'(x_k)/f'(x_j), where f' indicates the first derivative of the function f;
3. level dependency: (f(x_k)/f(x_j))^{β−1}.
Weight dependency is generally recognised and corresponds to the role of the weights when performing linear aggregations. Transformation dependency is more subtle and not always clear to interpret. It influences the role of the indicators in a composite measure. For example, if we choose z-score standardisation, as in
our case, the transformation-related element of MSR_{j,k} is the ratio of standard deviations σ_j/σ_k of the original indicators. The level dependency links the different indicator levels (values) with the order β of the mean. The order β has the role of balancing the achievements between the two indicators j and k. Given that the indicator orientation is positive (the higher, the better), as β increases, more importance is given to the upper tail of the indicator distribution; as β decreases, greater weight is given to the lower tail. The generalised mean of power β = 0.5 is adopted. However, the influence of different values of β in the interval [0,1] (from the geometric to the arithmetic mean) on final scores and ranks is tested through a Monte-Carlo exercise for each poverty component (Annoni et al. 2012; Saisana et al. 2005). The analysis shows only very minor differences in region scores and ranks, as expected given the high internal consistency of the indicators within each component (Decancq and Lugo 2013; Hagerty and Land 2007; Michalos 2011).

The Distribution of Poverty Across EU Regions

Absolute Poverty Index

Figure 2 shows API scores for regions within each country. The countries are ordered from the best (low poverty levels) to the worst (high poverty levels), according to the weighted country average. The best countries, with the lowest levels of absolute poverty, are the EU Scandinavian countries (Finland, Denmark and Sweden), Luxembourg and the Netherlands. Central and eastern European (CEE) countries, Hungary, Poland, Latvia, Romania and Bulgaria, are the worst performing ones, with
the last three characterised by an especially inferior situation regarding absolute poverty. In terms of within-country variability, which could not be measured for all the countries due to the limitation of data availability, Spanish, Italian, Romanian and Bulgarian regions are those showing the highest levels of variability (read inequality), while Swedish, Finnish, Polish and Greek regions show the lowest. Table 7 in the Appendix lists all the regions sorted from the lowest to the highest API scores (normalised from 0 to 100). The best 20 % of the regions, corresponding to low levels of absolute poverty, include all Finnish and Swedish regions, two out of three Austrian regions (AT2 and AT3), the Netherlands, Denmark, Luxembourg, one Belgian region (BE2) and a few regions in the northern part of Spain (ES13, from ES21 to ES24). The worst 20 % of regions are almost all from the CEE countries, namely all Romanian and Bulgarian regions, five out of six Polish regions, Latvia and one Hungarian region (HU3). The only exception is insular Italy, comprising Sardinia and Sicily (ITG).

Relative Poverty Index

The poverty picture resulting from RPI scores changes considerably with respect to the one derived from API scores, confirming the intrinsic difference between absolute and relative measures of poverty (RPI scores are presented in Fig. 3). In this case, the lowest levels of relative poverty are observed in two southern European countries, namely Cyprus and Malta, in one CEE country, namely Slovenia, and in Austria and Luxembourg. At the other end of the
scale are three CEE countries (Bulgaria, Latvia and Romania), but also one southern European country, namely Greece, and the United Kingdom. With respect to the RPI, the importance of sub-national analysis in measuring poverty is easily noticeable. It can be noted that the same country may comprise regions belonging to both the top and bottom performers. The most striking examples are regions in Belgium, Spain and Italy in which, even with all the precautions needed due to regional data limitations, within-country variability of the RPI is extremely high. RPI scores are shown in Table 8 in the Appendix, with regions reordered from best to worst. The best regions (with scores lower than the P20 percentile) include four French regions (FR20, FR40, FR50 and FR70), three Czech regions (CZ01, CZ02 and CZ03), two Hungarian regions (HU1 and HU2), two Austrian regions (AT2 and AT3), two Spanish regions (ES12 and ES22), one Italian region (ITD), one Belgian region (BE2), Cyprus, Slovenia and Malta. Among the worst performers (scores of the RPI above P80) are Latvia, the United Kingdom, five out of eight Romanian regions, two Greek regions (GR1 and GR2), two southern Italian regions (ITG and ITF) and one Bulgarian region (BG3).

Earnings and Incomes Index

In terms of EII scores (Fig. 4), the highest overall income and earnings values are clearly in Luxembourg, which is followed by the Netherlands and Austria. Then, slightly lower performance characterises the group of Belgium, France, Cyprus, the United Kingdom and Germany. The lowest overall income and earnings values are in the CEE countries,
such as Estonia, Poland, Latvia, Bulgaria and Romania. Also in this case the sub-national variability, when measured, is relevant, especially in France, Italy, Spain, the Czech Republic, Hungary and Romania, highlighting the presence of high levels of inequality. Table 9 in the Appendix lists all the regions reordered according to EII scores. The group of most affluent regions includes Luxembourg, the Netherlands, Cyprus, all Austrian regions, two French regions (FR10 and FR70), two Belgian regions (BE1 and BE2), three Spanish regions (ES21, ES22 and ES30), two northern Italian regions (ITC and ITD), one Czech region (CZ01) and the Swedish capital region (SE1). (Fig. 3 shows RPI scores with countries reordered according to the weighted country average; an explanation of the country codes is provided in Table 6 in the Appendix.) At the bottom end of the distribution, where scores of the EII are below the P20 percentile, one can find almost all Romanian regions (apart from the capital region RO32), two Bulgarian regions (BG3 and BG4), Latvia, Estonia, five out of six Polish regions (apart from the capital region PL1) and two Hungarian regions (HU2 and HU3).

Shall we Aggregate Further?

The three poverty measures describe the concept of poverty from considerably different perspectives. Two of them, Absolute Poverty and Income and Earnings, are absolute measures of economic deprivation: the former in terms of non-financial household-related aspects, the latter in terms of income-related levels. Relative poverty is instead intrinsically different. It is indeed by construction a 'relative' concept, which basically captures the level of deprivation people experience compared to
those living in the same area. Low values of relative poverty do not necessarily imply that people are well-off; they indicate a low level of heterogeneity of poverty across the population. Our statistical analysis supports this reasoning. Table 4 shows that the three indices are interrelated both in terms of classical Pearson's correlation (left side of the table) and rank correlation (right side of the table). Correlation levels are always statistically significant, even if the RPI shows the lowest values. PCA outcomes (Table 5) indicate the presence of a strong first latent dimension accounting for 72 % of total variance, almost equally explained by the three indices, as can be seen from the loadings of the first PCA component. Still, there is a second component of non-negligible relevance that accounts for 21 % of the variance. It is mostly driven by the RPI (with a loading of 0.84) and is also characterised by negative loadings of the API (−0.21) and the EII. This can be interpreted as follows. On the one hand, there are some regions in the EU with pockets of poverty, implying that a part of the population is classified as poor both in absolute terms and compared to other people in the region. These situations positively contribute to the correlation level between absolute and relative measures of poverty. On the other hand, there are regions where poverty is homogeneously spread and people are classified as poor in absolute terms but not in relative ones (in regions in which most of the population is
worse off, relative poverty cannot be high by definition). What is the worst case between the two? It is not up to us to decide. As our aim is to detect such situations, we must mention that the detection is biased if further aggregation is carried out, as it would level off contrasting conditions. This is the main reason for not aggregating further in this case. Table 10 in the Appendix provides separate regional rankings for the three indices. Among the three components of poverty, the concepts of absolute and relative poverty in particular are substantially different and sometimes even in conflict. The scatterplot in Fig. 5 compares the API with the RPI. The scatterplot is divided into four quadrants (low-low, high-low, high-high and low-high) for easier interpretation. Most of the regions are either in the low-low or in the high-high quadrant. This indicates that for these regions low absolute poverty corresponds to low relative poverty, or high absolute poverty to high relative poverty. The top-left part of the plot comprises regions where, despite low absolute poverty levels, relative poverty can be deep and severe. As these regions may experience a high level of living standards inequality, this emphasises the presence of pockets of deprivation. This is the case of the United Kingdom and some regions in southern Europe, such as the south-western Spanish regions (ES43, ES42, ES61, ES62 and ES70), the north-western regions in Greece (GR1 and GR2) and southernmost Italy (ITF), even if the latter is very close to the border of the quadrant. The contrary can be said for the regions in
the bottom-right part of the scatterplot, which includes regions experiencing high material deprivation with rather low relative poverty. These are generally regions in the CEE countries, such as Bulgarian, Hungarian, Polish and Romanian regions. People living there are deprived, but the deprivation is almost equally spread across the population.

Summary

In the framework of the cohesion policy, the European Union (EU) provides funds to regions lagging behind with the aim of reducing poverty and social exclusion, among others. In this respect, there is a considerable need for measuring tools enabling better identification of regions both most in need and where investments are expected to have the highest impact. In this study, we measure poverty, understood as economic wellbeing, across the EU at the sub-national level. The proposed conceptualisation of poverty comprises three components for which aggregated measures are computed: the Absolute Poverty Index, the Relative Poverty Index and the Earnings and Incomes Index. These indices evaluate poverty in absolute and in relative terms, taking into account monetary and non-monetary indicators by means of objective and self-assessed measures. Our core data set is the main EU data source on living conditions and income, the EU-SILC, waves 2007, 2008 and 2009. Because the EU-SILC is designed to be representative only at the country level, going regional is quite a challenge. Therefore, the appropriateness of regional analysis is statistically checked. Results suggest that for most regions the level of sub-national representativeness is acceptable. Yet, specific actions are taken to correct discrepancies in some cases. Eventually, poverty is assessed for a
total of 88 EU regions using 13 indicators. This does not mean that we are not aware of the shortcomings and limitations of this approach. On the contrary, we consider our analysis an exercise rather than a final recipe, one that should raise awareness of the importance of the availability of reliable regional data. Apart from the sub-national level, our study features two novelties: the adjustment of disposable income for housing costs and the adoption of a generalised weighted mean to aggregate indicators within a component, to penalise inequality and mitigate compensability. No aggregation is, however, performed across the three poverty components, which, being intrinsically different, sometimes provide very different pictures of regional poverty. In particular, the comparison of absolute and relative poverty measures shows that there are quite a few regions in which people are well-off in absolute terms but not in relative ones and vice-versa. This clearly shows the multidimensionality of the poverty concept and justifies not further aggregating the three poverty measures so as not to blur the actual picture. Multi-criteria analysis would help in this case and is indeed the approach adopted in an ongoing project on the same data. Preliminary results flag particular regions for which the aggregation can hide important contrasting patterns across the poverty measures. Poverty was also shown to be a local concept, with high levels of within-country variability. This implies that, to be effective, the EU needs more targeted local policies and monitoring. We see some implications for future research. First, in-depth
empirical research, for example employing individual-level data and multi-level modelling, is needed to test the usefulness of the three indices of poverty. Second, the availability of the most recent 2012 EU-SILC wave, not yet released at the time when this paper was written, will allow us to repeat the analysis for the 2010-12 period and compare pre- versus post-crisis poverty levels. Third, estimating the poverty indices over time will enable monitoring regional policy effectiveness. Last, a multi-criteria analysis of the three indices by partial order tools would allow summarising the overall picture across the EU while preserving the intrinsically multidimensional nature of poverty.

Acknowledgments

We would like to thank Lewis Dijkstra, Directorate-General for Regional and Urban Policy of the European Commission, who initiated and funded the project on which this analysis is based. He constantly guided the analysis during all of its steps and provided essential inputs, in particular on the conceptualisation of the poverty measures, advice on indicators and the recommendation about the inclusion of housing costs.
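The inequality-adverse aggregation used for the three poverty indices can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the function name and the toy indicator values are ours, the indicators are assumed to be already standardised, positively oriented and shifted to a positive scale (fractional powers require positive inputs), and equal weights with β = 0.5 are used as in the study.

```python
import numpy as np

def generalised_mean(z, w, beta=0.5):
    """Generalised weighted mean of order beta.

    For 0 < beta < 1 the mean is inequality-adverse: improving a
    low-valued indicator raises the aggregate more than an equal
    improvement of a high-valued one.

    z : positive, positively-oriented (standardised) indicator values
    w : non-negative weights summing to 1
    """
    z = np.asarray(z, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.sum(w * z ** beta) ** (1.0 / beta)

# Toy region with one weak and one strong indicator, equal weights.
w = np.array([0.5, 0.5])
base = generalised_mean([1.0, 9.0], w)            # (0.5*1 + 0.5*3)^2 = 4.0

# A one-unit gain on the weak indicator helps more than on the strong one.
gain_low = generalised_mean([2.0, 9.0], w) - base
gain_high = generalised_mean([1.0, 10.0], w) - base
print(gain_low > gain_high)   # True: the aggregation penalises inequality
```

With β = 1 the formula reduces to the familiar weighted arithmetic mean, while β approaching 0 gives the weighted geometric mean; β = 0.5 sits between the two, which is exactly the range swept by the Monte-Carlo robustness check.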
Effectiveness of Detection-based and Regression-based Approaches for Estimating Mask-Wearing Ratio

Estimating the mask-wearing ratio in public places is important as it enables health authorities to promptly analyze and implement policies. Methods for estimating the mask-wearing ratio on the basis of image analysis have been reported. However, there is still a lack of comprehensive research on both methodologies and datasets. Most recent reports straightforwardly propose estimating the ratio by applying conventional object detection and classification methods. It is feasible to use regression-based approaches to estimate the number of people wearing masks, especially for congested scenes with tiny and occluded faces, but this has not been well studied. A large-scale and well-annotated dataset is still in demand. In this paper, we present two methods for ratio estimation that leverage either a detection-based or regression-based approach. For the detection-based approach, we improved the state-of-the-art face detector, RetinaFace, used to estimate the ratio. For the regression-based approach, we fine-tuned the baseline network, CSRNet, used to estimate the density maps for masked and unmasked faces. We also present the first large-scale dataset, the "NFM dataset," which contains 581,108 face annotations extracted from 18,088 video frames in 17 street-view videos. Experiments demonstrated that the RetinaFace-based method has higher accuracy under various situations and that the CSRNet-based method has a shorter operation time thanks to its compactness.

I. INTRODUCTION

Throughout the Covid-19 pandemic, we have seen that the use of masks has helped to prevent infection. The situation is improving, but a system to estimate the rate of mask use would be an
important analysis tool for public health officials. Several studies [1,2] have focused on developing a method for automatically estimating the mask-wearing ratio from images or videos, such as those captured by surveillance cameras. However, this task is very challenging due to the small face areas, severe occlusion, and cluttered background common in images and videos of congested streets. Furthermore, a large-scale and well-annotated dataset for measuring the performance of proposed methods is lacking. The pioneering work [3]- [5] mostly focused on detecting masked and unmasked faces in images by using a face detector, such as Faster R-CNN or YOLO, followed by a masked/unmasked classifier. The obtained results can be further processed and used to compile statistics and issue warnings. Due to the lack of an established dataset, a small number of images crawled from the Internet were used to evaluate the performance of the proposed methods. The results demonstrated the potential application of these methods to practical systems, e.g., surveillance systems. Unfortunately, to push the work forward, several problems need to be tackled. First, existing methods are based solely on face detection, with no consideration given to approaches for related tasks such as crowd counting. Further research comparing different approaches is necessary to identify the strengths and weaknesses of each. Second, since the images were crawled from the Internet, the faces generally had high resolution (larger than 32 × 32 pixels) and were in frontal view. They thus differed greatly from surveillance camera images in which the face areas are typically very small and unclear due | 3,070,075 | 244709698 | 0 | 16 |
to the distance between the camera and the subjects. It is thus necessary to investigate the performance of face mask estimation under real-world conditions. In addition to the currently utilized face detection methods, it is also feasible to use other crowd-counting approaches [6] to estimate the number of people wearing masks. Like the mask-wearing ratio estimation task, the crowd-counting task has to tackle congested scenes captured by street-view cameras. Recently introduced convolutional neural network (CNN)-based crowd-counting methods are especially efficient for congested pedestrian flows thanks to the utilization of density maps. Early crowd-counting efforts [7,8] have taken a detection-based approach, using face or head detectors to count the number of people, but this is computationally demanding. More recent efforts [9,10] have taken a regression-based approach, using a CNN to accurately and quickly predict density maps. Detection-based methods are inefficient for handling tiny and occluded objects, which are common in street-view images, while regression-based ones can tackle them effectively. Regression-based methods predict the number of people without their localization in images. Therefore, several efforts have focused on aggregate regression-based counting and localizing using an end-to-end network [10]- [12]. Although the regression-based methods have achieved good performance on crowd counting, their effectiveness on mask-wearing ratio estimation has not been investigated. We have evaluated and compared detection-based and regression-based methods on their ability to estimate the mask-wearing ratio. For the detection-based approach, we used an improved RetinaFace [13] face detector enhanced with a bi-directional feature pyramid network (BiFPN) [14] and trained using the focal loss function [15] to effectively | 3,070,076 | 244709698 | 0 | 16 |
classify masked/unmasked faces. For the regression-based approach, we used the Congested Scene Recognition Network (CSRNet) [9], an easily trained regression network. To compare these methods, we annotated approximately 580,000 face bounding boxes extracted from about 18,000 video frames from 17 street-view videos recorded in several Japanese cities, in both daytime and nighttime, before and during the Covid-19 pandemic. The contributions of this paper are threefold:
• First, we present a comparative evaluation of two approaches to estimating the mask-wearing ratio: the detection-based approach and the regression-based approach.
• Second, we introduce a large-scale dataset of images extracted from street-view videos for use in estimating the face mask ratio. Our dataset contains 18,088 video frames with more than 580,000 face annotations. To the best of our knowledge, this is the first face mask dataset containing images extracted from street-view videos.
• Third, we present the results of comprehensive experiments to evaluate the detection- and regression-based approaches in terms of both accuracy and operation speed. Their advantages and disadvantages are also discussed.
The remainder of the paper is organized as follows. Section 2 summarizes related work. Sections 3 and 4 introduce the RetinaFace- and CSRNet-based mask-wearing ratio estimation methods used, respectively, for the detection-based and regression-based approaches in the experiments. Section 5 presents our NFM dataset. The experimental results are given and discussed in Section 6. Finally, the key points are summarized and future work is mentioned in Section 7.

II. RELATED WORK

A. Detection-based Approach

Methods using the detection-based approach estimate the mask-wearing ratio by detecting faces, classifying them as masked or unmasked,
and tallying the number of each. Many methods have been proposed for the face detection part, such as Single Stage Headless (SSH) [16], PyramidBox [17], and RetinaFace [13]. However, traditional face detection methods face difficulties in working with faces wearing masks, especially in challenging situations (e.g., crowded areas at night and bad weather conditions). In pioneering work, Ge et al. created a dataset dubbed Masked Faces (MAFA) [18] to overcome the lack of datasets with images of masked faces. The MAFA dataset consists of 30,811 images crawled from the Internet with 35,806 masked faces (occluded by a face mask or another object). They used a locally linear embedding CNN (LLE-CNN) for detecting masked faces. Wang et al. subsequently proposed a face attention network (FAN) [19] for leveraging context information. Experimental results on MAFA demonstrated that FAN outperforms the LLE-CNN by more than 10% mean average precision (mAP). Recently, with the spread of Covid-19, several efforts have been devoted to detecting face masks only. Loey et al. [3] investigated the accuracy of a well-known object detector, YOLOv2 [20] with a ResNet-50 backbone, for detecting only medical face masks. They collected images from two public datasets from the Kaggle community (the Medical Masks Dataset with 682 images and the Face Mask Dataset with 853 images) to create a dataset with 1415 images. The investigation showed that YOLOv2 outperformed the LLE-CNN method [18] (81.0% vs 76.1% mAP). Batagelj et al. [21] investigated the effectiveness of off-the-shelf face detectors for masked and unmasked faces. They constructed a dataset from the
MAFA and WiderFace datasets that consists of 41,934 images with 63,072 face annotations (face size at least 40 × 40 pixels). The results showed that RetinaFace achieved the highest accuracy among the evaluated detectors. Furthermore, to compute the safety impact on the community, Almalki et al. [1] presented a mask-wearing detection (MWD) system for estimating the percentage of people wearing a mask and the percentage of people not wearing one or wearing one incorrectly. The MWD system adds a layer at the end of the YOLOv3 detector [22] to classify faces. A super-resolution CNN architecture is used to pre-process an image before it is input to the detector. For evaluation, a new MWD dataset containing 526 images was created using images from Google Images. The system detected masked/unmasked faces with 71% mAP. Similarly, aiming to automatically detect violations of face mask-wearing and physical distancing protocols among construction workers, Razavi et al. [2] created a face mask dataset containing 1853 images and used it and the Faster R-CNN [23] with the Inception ResNet-V2 network to detect face masks.

B. Regression-based Approach

Unlike the detection-based approach, methods using the regression-based approach directly predict the number of faces without detecting them. They have become mainstream methods for crowd counting thanks to the effectiveness of density maps. A CNN [9,24] is typically used to predict a density map from which the count can be derived quickly. Several groups [10]-[12] have recently proposed combining density map estimation and counting with detection in a unified network. Idrees et
al. [11] proved that count prediction, density maps, and detection are interrelated and can be efficiently solved by training a CNN with the proposed composition loss. Similarly, Liu et al. [12] presented a crowd-counting method that can detect human heads. This method can be trained with only point annotations. For crowd counting, the question of which is the better approach, detection-based or regression-based, also arises. As compared by Liu et al. [10], a detection-based method counts people accurately in low-density areas but is unreliable in congested areas. On the other hand, a regression-based method tends to overestimate the number of people in low-density areas. Gomez et al. [25] compared regression- and detection-based methods for counting fruit and grains in an image. They concluded that the approaches are comparable when the density in the image is low and that regression is more accurate when the density is higher.

III. RETINAFACE-BASED MASK-WEARING RATIO ESTIMATION

A. Improved RetinaFace-based Detector

The pipeline of the improved RetinaFace-based detector used for our evaluation is illustrated in Figure 1. It consists of three modules: a feature pyramid network for extracting features, a context module for integrating context information, and two prediction heads for bounding box regression and masked/unmasked face classification.

1) Bi-directional Feature Pyramid Network: We use a BiFPN [14] for extracting multi-scale features at different resolutions. The architecture of the BiFPN is illustrated in Figure 1. Given a list of input pyramid features m^in = (m^in_1, m^in_2, ...), where m^in_i represents the feature map at level i in the
pyramid, which has a resolution of 1/2^i that of the input images. The BiFPN fuses the different feature layers and then creates a list of better features:

m^out = (m^out_1, m^out_2, ...)

The conventional FPN [26] uses feature maps from level 3 to 7 in the input feature pyramid m^in = (m^in_3, ..., m^in_7) and aggregates them in a top-down manner:

m^out_i = Conv(m^in_i + Resize(m^out_{i+1}))

where Resize(·) is usually an upsampling or downsampling operation for resolution matching, and Conv is usually a convolutional operation for feature processing. Instead of fusion in a top-down manner, the BiFPN integrates feature layers in both directions: top-down and bottom-up. For example, feature layer m^out_6 is computed as

m^td_6 = Conv((w_1 · m^in_6 + w_2 · Resize(m^in_7)) / (w_1 + w_2 + ε))
m^out_6 = Conv((w'_1 · m^in_6 + w'_2 · m^td_6 + w'_3 · Resize(m^out_5)) / (w'_1 + w'_2 + w'_3 + ε))

where m^td_6 is the intermediate top-down feature at level 6, and w_i and w'_i are learnable weights.

2) Context Module: Inspired by SSH [16] and RetinaFace [13], we also apply independent context modules to the feature pyramid levels to increase the receptive field size and leverage the context information. The use of sequential 3 × 3 filters in the context module increases the size of the receptive field in proportion to the stride of the corresponding layer, which increases the target scale of each detection module.

3) Prediction Heads:

a) Anchors: We use anchor boxes with different sizes on feature maps, similar to their use in RetinaFace [13]. There are three aspect ratios (1:2, 1:1, 2:1) for each anchor box. A length-K one-hot vector and a 4-coordinate vector are assigned to each anchor. The one-hot vector is the classification target, and the 4-coordinate vector is the bounding box
regression target. Specifically, K = 3, corresponding to three labels: masked, unmasked, and background.

b) Masked/Unmasked Face Classification Head: We attached a fully convolutional network (FCN) subnet to each BiFPN level, similar to its use in RetinaFace [13], for predicting the probability of a masked/unmasked face at each anchor position. Each FCN ("face classification subnet") consists of four 3 × 3 convolutional layers, each with 256 filters, followed by ReLU activations and sigmoid activations.

c) Bounding Box Regression Head: To regress the offset from each anchor box to a nearby ground-truth face, another FCN is attached to each pyramid level in parallel with the face classification subnet. The architecture of this subnet is similar to that of the classification subnet except that it outputs four values corresponding to the relative offsets between the anchor and the ground-truth box.

d) Multi-task Loss: For any training anchor i, we minimize the multi-task loss

L = L_obj(p_i, p*_i) + λ_1 p*_i L_cls(p̂_i, p̂*_i) + λ_2 p*_i L_box(t_i, t*_i)

where λ_1 and λ_2 are loss-balancing parameters. L_obj(p_i, p*_i) is the binary cross entropy, in which p_i is the predicted probability of anchor i being a face; p*_i is 1 for a positive anchor and 0 for a negative anchor. A major element of L_cls(p̂_i, p̂*_i) is an improved binary cross-entropy loss, the "focal loss" [15], where p̂_i is the predicted probability of anchor i being a masked face, and p̂*_i is 1 for a masked face anchor and 0 for an unmasked face anchor. The focal loss addresses the imbalance ratio between the
numbers of masked and unmasked faces. The face box regression loss is represented by $L_{box}(t_i, t^*_i)$, where $t_i = (t_{x_i}, t_{y_i}, t_{w_i}, t_{h_i})$ and $t^*_i = (t^*_{x_i}, t^*_{y_i}, t^*_{w_i}, t^*_{h_i})$ denote the coordinates of the predicted boxes and the ground-truth ones associated with the positive anchor, respectively.

B. Ratio Estimation
Given an input image, the improved RetinaFace-based detector detects both masked and unmasked faces. The detected faces are filtered using a confidence threshold (which was set to 0.5 in the experiments). The numbers of detected masked faces, unmasked faces, and all faces are then tallied. The mask-wearing ratio is simply calculated by dividing the number of masked faces by the total number of detected faces. In detail, the improved RetinaFace-based detector is trained using the stochastic gradient descent (SGD) optimizer with the momentum and weight decay set to 0.9 and 0.0005, respectively. The learning rate starts at 0.01 and is divided by 10 at 50 and 68 epochs. Our training process stops after 80 epochs.

IV. CSRNET-BASED MASK-WEARING RATIO ESTIMATION
A. Dilated Convolutional Neural Network - CSRNet
CSRNet [9] provides an accurate way to count by regression. CSRNet uses a conventional CNN for extracting features, followed by several dilated convolution layers for predicting a density map. The CSRNet architecture is visualized in Figure 2. Given an input image, the network first extracts the image features using convolutional
3 × 3 and max-pooling layers (similar to the VGG-16 architecture). It then predicts the density map by using dilated convolutional 3 × 3 and 1 × 1 layers. The resolution of the output density map is 1/8th the original resolution. We can straightforwardly apply this method to mask-wearing estimation because it produces high-quality density maps while having a pure convolutional structure. In the training stage, a ground-truth density map is generated for each image on the basis of the annotated faces. The computation is similar to that used by CSRNet in that geometry-adaptive kernels are used to handle highly crowded scenes. Conventional density maps are computed by convolving a Gaussian kernel, which is normalized to 1, to blur the face annotations. For each face annotation, a geometry-adaptive kernel estimates the appropriate standard deviation of the Gaussian kernel by considering the distance to the $k$ nearest face annotations. The ground truth (face locations) of an image with $N$ labeled faces (i.e., masked and unmasked) is represented as $H(x) = \sum_{i=1}^{N} \delta(x - x_i)$, a discrete function, where $\delta(x - x_i)$ indicates whether there is a face at pixel $x_i$ (1 or 0). A density map (a continuous function) is generated by convolving $H(x)$ with a Gaussian kernel; the geometry-adaptive kernel is defined as $F(x) = \sum_{i=1}^{N} \delta(x - x_i) * G_{\sigma_i}(x)$ with parameter $\sigma_i = \beta d_i$ (standard deviation), where $d_i$ indicates the average distance to the $k$ nearest face annotations. In our experiments, we used the configuration used by Zhang et al. [27], with $\beta = 0.3$
and $k = 3$. For training CSRNet, the Euclidean distance is used to measure the distance between the estimated density map and the ground truth. The loss function is defined as $L(\Theta) = \frac{1}{2N} \sum_{i=1}^{N} \left\| D(X_i; \Theta) - D^{GT}_i \right\|_2^2$, where $X_i$ is the input image and $\Theta$ is the set of learnable parameters of CSRNet. The estimated density map of input image $X_i$ is denoted by $D(X_i; \Theta)$, $D^{GT}_i$ is the ground-truth density map, and $N$ is the size of the training batch.

B. Ratio Estimation
To predict the mask-wearing ratio, we need to estimate the numbers of masked and unmasked faces. To this end, we train two CSRNet models to separately predict these numbers. After obtaining the numbers, the mask-wearing ratio is obtained as one minus the ratio of unmasked faces to the total number of faces (masked and unmasked). A visualization of the CSRNet-based mask-wearing ratio estimation pipeline is shown in Figure 3. In detail, we first train a CSRNet model to estimate the number of faces. The obtained model is then fine-tuned to estimate the numbers of masked and unmasked faces separately. The SGD optimizer is applied with the momentum and weight decay set to 0.95 and 0.0005, respectively. In the experiments, we used a fixed learning rate of $10^{-6}$ for the training and terminated the training after 45 epochs.

A. Dataset Creation
We created a face mask dataset containing 581,108 face annotations extracted from 18,088 video frames in 17 street-view videos obtained from the Rambalac YouTube channel. The details of the videos are
summarized in Table I. The videos were taken in multiple places, at various times, before and during the Covid-19 pandemic. The total length of the videos is approximately 56 hours. As shown in the table, 6 videos were shot before the pandemic, and 11 were shot during the pandemic. The images in our dataset thus have various face mask ratios. After creating our dataset, we extracted and selected frames for annotating. This process comprised three steps.
• Step 1 - Extract raw frames: for each video, we extracted a frame every 2 seconds.
• Step 2 - Detect faces: we applied the RetinaFace detector with the ResNet-50 pretrained model (trained on the WiderFace dataset [28]) to the extracted frames to count all faces.
• Step 3 - Select frames containing faces: we excluded raw frames containing very few face samples from our dataset, leaving us with 18,088 video frames.

B. Image Annotation
An image annotation comprises a bounding box and a label. Four coordinates (left, top, right, bottom) were used to denote a bounding box. The area of the face to which the bounding box was applied was the smallest square area surrounded by the hairline (upper forehead/hairline), lower jaw, and front of the ears. In addition to annotating front-facing faces, we also annotated side-facing ones taken at an angle sufficient to confirm whether a mask was worn. The size of the quadrangle for an annotated face was required to be 10 × 10 pixels or more. For occluded faces, if the occlusion was judged to be more than half of the face
area, annotation was not performed. For each annotated face, one of three labels was attached with the bounding box:
• "Masked": a face wearing a mask.
• "Unmasked": a face not wearing a mask.
• "Unknown": a face for which it could not be determined whether a mask was worn due to image quality or environmental conditions.
If the mask was not properly worn, such as when the mask was stretched under the chin or hung on an ear, the "Unmasked" label was assigned. After the manual annotation, we performed verification on 20% of the annotations (manually double-checked) to identify annotation mistakes.

[Figure caption: Mask-wearing ratios were estimated for four scenarios: a) a sparse scene with a low ratio; b) a dense scene with a low ratio; c) a sparse scene with a high ratio; and d) a dense scene with a high ratio. GT: ground-truth ratio. Images were extracted from videos on the Rambalac channel (https://www.youtube.com/c/Rambalac/videos). Faces are blurred for anonymity.]

C. Dataset Statistics
The statistics for our dataset are plotted in Figure 4 and summarized in Table II. As expected, the number of masked faces is smaller than that of unmasked ones because Japanese residents were strongly encouraged to stay home during the pandemic period. On average, there are more than 30 annotated faces in each image, as shown in Table III.

A. Evaluation Metrics
To evaluate prediction accuracy, we used the mean absolute error (MAE) and Pearson correlation metrics. The MAE metric is defined as $\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} |c_i - c^{gt}_i|$, where $N$ is the total number of images in the testing set, and $c_i$ and $c^{gt}_i$ are the predicted and ground-truth counts, respectively,
for image $i$. The Pearson correlation coefficient $\gamma$ is defined as $\gamma = \frac{\sum_{i=1}^{N} (c_i - \bar{c})(c^{gt}_i - \bar{c}^{gt})}{\sqrt{\sum_{i=1}^{N} (c_i - \bar{c})^2} \sqrt{\sum_{i=1}^{N} (c^{gt}_i - \bar{c}^{gt})^2}}$, where $\bar{c}$ and $\bar{c}^{gt}$ are the means of $c$ and $c^{gt}$, respectively. Likewise, to evaluate the mask-wearing ratio, we computed the Pearson correlation coefficient using the estimated and ground-truth ratios.

B. Results
We first evaluated the ability of the improved RetinaFace-based detector to detect faces. We labeled each annotated face as "L," "M," or "S." "S" was assigned to a face for which both dimensions were from 8 to 16 pixels, "L" was assigned to a face for which both dimensions were greater than 32 pixels, and "M" was assigned to the remaining faces. We excluded faces for which any dimension was smaller than eight pixels. The face detection results using the conventional object detection metric, average precision (AP), are shown in Table IV. We set the IoU (intersection over union) threshold to 0.4 because the faces in our dataset were small. The improved RetinaFace-based detector detected "L" faces with an accuracy of 91.2% for unmasked faces and 86.5% for masked faces in terms of AP. The accuracy was lower for smaller faces, especially for "S" faces, but was nevertheless higher than with the original detector. The improved RetinaFace-based detector was effective for faces larger than 16 pixels and outperformed the original detector overall by 2.6% mAP. Next, we evaluated the mask-wearing ratio. To reduce the noise introduced by images containing very few faces, we set a threshold $k$ on the number of faces per image, meaning that images with fewer than $k$
faces were excluded. From observation of the images in the NFM dataset, we set $k = 5$. Table V shows the results for the RetinaFace-based and CSRNet-based methods in terms of the MAE and correlation coefficient. The RetinaFace-based method produced good results for both metrics. The MAE scores for predicting the number of masked faces, unmasked faces, and all faces were 2.41, 10.80, and 12.55, respectively. All the estimations had correlation coefficients greater than 0.8. The correlation coefficient between the estimated mask-wearing ratio and the ground truth was 0.94. As described above, two CSRNet models were used for the CSRNet-based method to predict the total number of faces and the number of unmasked faces. Although the CSRNet-based method estimated the total number of faces and unmasked faces with higher correlation coefficients than the RetinaFace-based one (0.91 and 0.92, respectively), the correlation coefficient for the final estimated mask-wearing ratio was only 0.73. This is because CSRNet does not work effectively on classification tasks (e.g., masked/unmasked). As shown in Table V, CSRNet did poorly on estimating the number of masked faces (0.38 correlation coefficient). The estimation results for both methods are also shown in Figure 5. The RetinaFace-based method accurately predicted the ratios for all four scenarios (sparse/dense scenes with low/high ratios), while the CSRNet-based one performed worse; the comparison in Figure 6 highlights the better performance of the improved RetinaFace-based method. Furthermore, we computed the average mask-wearing ratio for each video in our NFM dataset to evaluate the applicability of the estimation methods used in our experiments. As shown in Table VI, the improved RetinaFace-based
method produced accurate estimations for all videos: the estimated ratios are close to the actual ones. Taking a closer look at the effects of environmental conditions, we computed the accuracy of both methods on video frames extracted under daytime and nighttime conditions. As shown in Table VII, both methods work better on video frames extracted in the daytime. This is because the camera can capture higher-quality images with clearer faces in the daytime. This enables better features to be extracted, which enables the ratios to be estimated more precisely.

C. Operation Speed
We used the same machine (Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20 GHz with one Tesla V-100 GPU card) to calculate the operation speed of the proposed methods. As described above and shown in Table V, the improved RetinaFace-based estimation method clearly outperformed the CSRNet-based one in terms of both the MAE and correlation coefficient metrics. However, it was slower. While the CSRNet-based estimation method operated at 3.17 FPS, the RetinaFace-based one operated at 0.81 FPS (because RetinaFace is a one-stage object detector, it predicts the bounding boxes for masked faces and unmasked faces at the same time). An advantage of the RetinaFace-based method is its ability to accurately estimate the mask-wearing ratio, while a disadvantage is its low operation speed. In contrast, the CSRNet-based one can operate four times faster but with less accuracy. Furthermore, the CSRNet models are more compact.

VII. CONCLUSION AND FUTURE WORK
We have presented the first comparative evaluation of detection-based and regression-based approaches for estimating the
mask-wearing ratio. For detection-based estimation, we used an improved RetinaFace-based face detector enhanced with a bi-directional feature pyramid network and trained using the focal loss function. For regression-based estimation, we used two CSRNet models to estimate the total number of faces and the number of unmasked faces in video images. Evaluation of these methods on our large-scale face mask dataset (581,108 annotations) revealed the advantages and disadvantages of each approach. Future work includes integrating the two approaches into a unified framework that can be jointly trained. This framework should enable efficient switching between settings to achieve accurate estimations under different conditions.
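The detection-based ratio computation and the evaluation metrics used in the experiments above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the detection tuples, the sample counts, and the helper names are hypothetical stand-ins for real detector output; only the arithmetic (confidence filtering at 0.5, ratio = masked faces / all detected faces, MAE, and the Pearson correlation coefficient) follows the text.

```python
# Minimal sketch of detection-based mask-wearing ratio estimation and the
# evaluation metrics (MAE, Pearson correlation). Detections are hypothetical
# (label, confidence) pairs standing in for the detector's output.
from math import sqrt

CONF_THRESHOLD = 0.5  # confidence threshold used to filter detections


def mask_wearing_ratio(detections):
    """detections: list of (label, confidence), label in {"masked", "unmasked"}.
    Returns (# masked faces) / (# all detected faces) after filtering."""
    kept = [label for label, conf in detections if conf >= CONF_THRESHOLD]
    if not kept:
        return 0.0
    return kept.count("masked") / len(kept)


def mae(pred, gt):
    """Mean absolute error between predicted and ground-truth counts."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)


def pearson(pred, gt):
    """Pearson correlation coefficient between two count sequences."""
    n = len(pred)
    mp, mg = sum(pred) / n, sum(gt) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(pred, gt))
    sp = sqrt(sum((p - mp) ** 2 for p in pred))
    sg = sqrt(sum((g - mg) ** 2 for g in gt))
    return cov / (sp * sg)


# Hypothetical detections for one image: three masked and one unmasked face
# above the threshold, plus one low-confidence detection that is filtered out.
dets = [("masked", 0.9), ("masked", 0.8), ("unmasked", 0.7),
        ("masked", 0.6), ("unmasked", 0.3)]
print(mask_wearing_ratio(dets))          # 3 masked / 4 kept = 0.75
print(mae([10, 20, 30], [12, 18, 33]))   # (2 + 2 + 3) / 3 ≈ 2.33
```

The same `mae` and `pearson` helpers apply unchanged to the regression-based pipeline, since both methods are evaluated on predicted versus ground-truth counts per image.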
Human Intestinal Epithelial Cells Release Antiviral Factors That Inhibit HIV Infection of Macrophages

As a rich source of CD4+ T cells and macrophages, the gastrointestinal (GI) tract is a major target site for HIV infection. The interplay between GI-resident macrophages and intestinal epithelial cells (IECs) constitutes an important element of GI innate immunity against pathogens. In this study, we investigated whether human IECs have the ability to produce antiviral factors that can inhibit HIV infection of macrophages. We demonstrated that IECs possess functional toll-like receptor 3 (TLR3), the activation of which resulted in induction of key interferon (IFN) regulatory factors (IRF3 and IRF7), IFN-β, IFN-λ, and CC chemokines (MIP-1α, MIP-1β, RANTES), the ligands of the HIV entry co-receptor CCR5. In addition, TLR3-activated IECs release exosomes that contained anti-HIV factors, including IFN-stimulated genes (ISGs: ISG15, ISG56, MxB, OAS-1, GBP5, and Viperin) and HIV restriction miRNAs (miRNA-17, miRNA-20, miRNA-28, miRNA-29 family members, and miRNA-125b). Importantly, treatment of macrophages with supernatant (SN) from the activated IEC cultures inhibited HIV replication. Further studies showed that IEC SN could also induce the expression of antiviral ISGs and cellular HIV restriction factors (Tetherin and APOBEC3G/3F) in HIV-infected macrophages. These findings indicated that IECs might act as an important element in GI innate immunity against HIV infection/replication.

Keywords: human intestinal epithelial cells, HIV, macrophages, toll-like receptor 3, interferons, IFN-stimulated genes, exosomes

Introduction
The gastrointestinal (GI) tract has the largest mucosal surface in the body and serves as an important barrier between pathogens in the external environment and the body's sterile internal environment (1). Tight epithelial junctions together with the GI immune system protect the host from pathogenic invasion. The GI tract is rich in HIV target cells, mainly activated CD4+ T cells and macrophages. Therefore, the GI tract is a major site for HIV infection. As first-layer cells in the GI tract, intestinal epithelial cells (IECs) are constantly exposed
to HIV or HIV-infected cells, which could have a profound impact on the immune and barrier functions of the GI tract (2). In addition, IECs express galactosylceramide and the HIV co-receptor CCR5 (3), which facilitate translocation of CCR5-tropic HIV from the apical to the basolateral surface via vesicular transcytosis (4,5).

Activation of IECs Inhibit HIV | Frontiers in Immunology | www.frontiersin.org | February 2018 | Volume 9 | Article 247

Central to the capacity of IECs to maintain barrier and immunoregulatory functions is their ability to act as frontline sensors of their microbial encounters and to integrate commensal bacteria-derived signals into antimicrobial and immunoregulatory responses (6). Studies have shown that IECs express pattern-recognition receptors (PRRs) that enable them to act as dynamic sensors of the microbial environment and as active participants in directing mucosal immune cell responses (7). Among PRRs, toll-like receptor 3 (TLR3) in conjunction with TLR7 and TLR9 constitutes an effective system to monitor viral infection and replication. TLR3 is known to recognize viral double-stranded RNA (dsRNA), while TLR7 and TLR9 detect single-stranded RNA (ssRNA) and cytosine-phosphate-guanine DNA, respectively (8). Therefore, functional TLR3, TLR7, and TLR9 in IECs play a crucial role in virus-mediated GI innate immune responses (9). Macrophages present in the GI system constitute a major cellular reservoir for HIV due to the abundance of these cells at mucosal sites. GI-resident macrophages represent the largest population of mononuclear phagocytes in the body (10). In the rectum, there are more than three times as many CD68+ macrophages expressing CCR5 as those
in the colon (4). The high expression of CCR5 on rectal macrophages suggests that the most distal sections of the gut may be especially vulnerable to HIV infection. Macrophages constitute up to 10% of infected cells in HIV-infected individuals (11,12). HIV-infected macrophages can transfer virus with high multiplicity to CD4+ T cells and reduce the viral sensitivity to antiretroviral therapy and neutralizing antibodies (13,14). Mucosa-infiltrating macrophages also play a role in systemic HIV spread (5). Macrophage activation contributes to HIV-mediated inflammation, as macrophages can produce and release inflammatory cytokines that induce systemic immune activation, a hallmark of HIV disease progression. Conversely, macrophages play an important role in the host defense against HIV infection. Macrophages are a major producer of type I interferons (IFNs). Our earlier investigations (15,16) showed that TLR3 activation of macrophages produced multiple intracellular HIV restriction factors and potently suppressed HIV infection/replication. However, the ability of macrophages to produce type I IFNs is significantly compromised by HIV infection. HIV blocks IFN induction in macrophages by inhibiting the function of a key kinase (TBK1) in the IFN signaling pathway through the viral accessory proteins Vpr and Vif (17). In addition, HIV infection downregulates the antiviral IFN-stimulated genes (ISGs) ISG15, OAS-1, and IFI44 in primary macrophages (18). Exosomes play a key role in intercellular communication and innate immune regulation. A recent study showed that exosomes are formed in an endocytic compartment of multivesicular bodies (19). Exosomes are involved in many biological processes such as tissue injury and immune responses by the transfer of antigens, antigen
presentation (20), and the shuttling of proteins, mRNAs, and miRNAs between cells (21). As such, it has been postulated that exosomes mediate intercellular communication by delivering functional factors to recipient cells (22). IEC lines can also secrete exosomes bearing accessory molecules that constitute a link between luminal antigens and the local immune system (23). Studies have documented that bystander cells can produce and release exosomes that contain multiple antiviral factors able to inhibit viral replication in target cells, including hepatitis B virus (24), HCV (25), and HIV (26,27). Evidently, the interplay between GI-resident macrophages and IECs has a key role in GI innate immunity against viral infections. Unlike macrophages, IECs are not a host for HIV infection/replication, and it is unlikely that HIV has a direct and negative impact on the functions of IECs. However, because IECs in the GI tract have to encounter a number of stimuli and immune cells, including HIV-infected macrophages (28), the activation of these non-immune cells in the GI tract is inevitable. Recent studies (19,29) have shown that IECs can be induced to express and secrete specific arrays of cytokines, chemokines, and antimicrobial defense molecules, which is crucial for activating intestinal mucosal innate and adaptive immune responses. However, there is little information about whether IECs are involved in GI innate immunity against HIV infection. Specifically, it is unknown whether IECs possess functional TLRs that can be immunologically activated to produce anti-HIV factors. Therefore, this study aimed to determine whether IECs have the ability to mount TLR3-IFN-mediated antiviral
activities against HIV infection of macrophages.

Cell Culture
The human intestinal epithelial cell line NCM460, originally derived from the normal colonic mucosa of a 68-year-old Hispanic male, was expanded in RPMI-1640 medium (30). Cells were cultured at 37°C with 5% CO2 and 100% humidity, and the culture medium was changed every 3 days. To polarize IECs, we used a transwell system (31,32), in which IECs (1 × 10⁵ cells/well) were grown on a 0.4 µm pore size, 6.5 mm diameter transwell insert. The transepithelial electrical resistance was measured with an ohm meter. The cell cultures were considered to constitute a polarized epithelial monolayer when resistances were ≥600 Ω × cm² and stable (33). Purified human peripheral blood monocytes were purchased from the Human Immunology Core at the University of Pennsylvania (Philadelphia, PA, USA). The Core has Institutional Review Board approval for blood collection from healthy donors. Freshly isolated monocytes were cultured in 48-well plates (2.5 × 10⁵ cells/well) in DMEM containing 10% FBS. Macrophages refer to 7-day cultured monocytes.

Exosome Isolation
IECs were transfected with poly I:C (0.1, 1, or 10 µg/ml) for 4 h, and fresh culture medium containing 10% exosome-free FBS was added. At 48 h post-transfection, IEC supernatant (SN) was collected, and exosomes were isolated through multiple rounds of centrifugation and filtration as previously reported (24). Briefly, 10 ml of SN were centrifuged at 300 × g for 10 min to remove floating
cells, then at 2,000 × g for 10 min and 10,000 × g for 30 min to remove cell debris, shedding vesicles, and apoptotic bodies. Finally, the exosome pellet was collected by ultracentrifugation at 100,000 × g for 70 min. For further purification, the pellet was washed with phosphate-buffered saline (1× PBS) (Gibco, NY, USA) and centrifuged at 100,000 × g for 70 min. The pellet was resuspended in 100 µl of 1× PBS and immediately stored at −80°C until use.

Immunofluorescence of Exosomes
Macrophages were cultured at a density of 2.0 × 10⁵ cells/well in 48-well plates. Exosomes isolated from IEC SN were labeled with the PKH67 fluorescent dye according to the manufacturer's protocol (Sigma-Aldrich). Purified PKH67-labeled exosomes were incubated with macrophages and cultured at 37°C for 18 h in a CO2 incubator. Macrophages were then stained with the PKH26 fluorescent dye for the membrane and Hoechst 33342 for the nuclei and washed three times with 1× PBS. The cells were photographed under a confocal microscope (Nikon A1R, Nikon, Japan).

qRT-PCR Quantification of mRNA and miRNA
Total RNA from cultured cells was extracted with Tri-Reagent (Molecular Research Center, OH, USA) as previously described (34). Total RNA (1 µg) was subjected to reverse transcription (RT) using reagents from Promega (Promega, WI, USA). The RT reaction was carried out with random primers for 1 h at 42°C and terminated by incubating the reaction mixture at 99°C for 5 min, and the mixture was then kept at 4°C. The resulting cDNA was then used as a template for qPCR quantification. The qPCR was performed
with iQ SYBR Green Supermix (Bio-Rad Laboratories, CA, USA) as previously described (35). Thermal cycling conditions were as follows: initial denaturation at 95°C for 3 min, followed by 40 cycles of 95°C for 10 s and 60°C for 1 min. miRNA was extracted from IEC-derived exosomes using the miRNeasy Mini Kit (Qiagen, CA, USA) in accordance with the manufacturer's instructions and reverse-transcribed with a miScript Reverse Transcription Kit (Qiagen, CA, USA). qRT-PCR was carried out using miScript Primer Assays and the miScript SYBR Green PCR Kit from Qiagen as previously described (36). Synthetic Caenorhabditis elegans miRNA-39 (cel-miR-39) was used as a spiked-in miRNA for normalization.

Western Blot
Total cell lysates of IECs transfected with poly I:C were prepared using the cell extraction buffer (Thermo Fisher Scientific, MA, USA) according to the manufacturer's instructions. Equal amounts of protein lysates (30 µg) were separated on 4-12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) precast gels and transferred to an Immobilon-P membrane (Millipore, Eschborn, Germany). The blots were incubated with primary antibodies in 5% nonfat milk in PBS with 0.05% Tween 20 (PBST) overnight at 4°C (IRF3, 1:1,000; Phospho-IRF3, 1:1,000; IRF7, 1:1,000; Phospho-IRF7, 1:1,000; GAPDH, 1:5,000; β-actin, 1:5,000; EEA1, 1:1,000; CD63, 1:1,000; LAMP2, 1:2,000; Alix

Cytometric Bead Array (CBA) Assay
The CBA assay was performed to simultaneously measure the levels of the CC chemokines MIP-1α, MIP-1β, and RANTES in cell culture supernatant, according to the instructions of the manufacturer (BD Biosciences, CA, USA). prior to infection with DNase I-treated HIV Bal for 3 h. Cellular DNA, including genomic and viral DNA products,