Dataset columns:
url: string (lengths 13 to 4.35k)
tag: string (1 distinct value)
text: string (lengths 109 to 628k)
file_path: string (lengths 109 to 155)
dump: string (96 distinct values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
https://movies.fandom.com/wiki/Message_Wall:MarkvA
code
Hello... I would like to ask you a question. On my wiki there is a movie award called the International Film Awards. The winners are chosen through a vote by users of Wikia. I think we could create a page on this wiki about the International Film Awards. This year 14 people voted, and the big winner was Tinker Tailor, with victories in ten of the twenty categories. Here is the link: http://internationalaward.wikia.com/wiki/1st_International_Film_Awards What do you think, can we create a page on this wiki about the INTERNATIONAL WIKI AWARDS?
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735833.83/warc/CC-MAIN-20200803195435-20200803225435-00515.warc.gz
CC-MAIN-2020-34
578
2
http://www.unicode.org/mail-arch/unicode-ml/Archives-Old/UML019/0436.html
code
> OK, but now if we are interacting with the same menu over a
> telecommunications link of some kind, the possibility for timeout
> -- and therefore a deadlock -- increases without bound, e.g. when
> the base character and the combining diaeresis go out in separate
> IP packets.

Forgive me, but this discussion seems to be endless, and after all this talk I still cannot understand what the supposed problems are. (I understand that it is important for us to stick together and make the world believe that we are here to solve all these `difficult' problems, as otherwise we would all lose our jobs---but I'll make an exception here and actually say something about this one.)

(1) "Difficulties in the design of (the equivalent of) an "expect" script that somehow has to recognize and handle prompts."

It seems rather poor design to me to try to build Unicode character handling into "expect." Expect was never meant to recognize abstract characters; it is a simple mechanism to wait for byte sequences (without any interpretation!) on a stream, and to respond with certain actions (such as sending other byte sequences). Expect shouldn't try to interpret the sequences it receives, nor the ones it sends. That's not its job, and that would be completely useless. When writing "expect" scripts, you don't specify sequences of (abstract) Unicode characters, but concrete sequences of bytes, as they will be received or sent over the line. If you look at it that way, there is no problem.

Sure, that means that an expect script will break if some crazy operator changes the internal presentation of a prompt from using combining characters to a precomposed one. So what? It will also fail if the operator changes the space in "Log in" to a non-breakable one, and anything less than full artificial intelligence isn't going to make this problem go away.

Expect scripts are normally created by running a login session manually, while logging the system output to a file. Later, you use the byte sequences from the file to set up the expect script. What these sequences "mean" is completely irrelevant to expect. It would be counterproductive to try to add this kind of knowledge to it (or at least I cannot see any possible benefit in it).

(2) "It is not possible for an application to recognize single key presses representing A or A+combining ring."

Let's ignore the fact that typical GUI applications don't get their keyboard input from a stream, but get it nicely prepackaged from a `windows' server, and let's look at a program running on a remote machine through a terminal connection. To take an example, let's look at a program called "vi" running on "bigiron.ust.hk", to which I'm logged in through `ssh'.

Now, how does "vi" distinguish me pressing the ESC key (which is an important `command' in "vi") from me pressing the "Cursor Up" key (which happens to generate a sequence starting with 0x1b, the equivalent of the ESC key, and which is also a rather useful key to press in "vi")? You see, this is not a new problem. In fact, it has been solved for decades. In fact, it has been solved so well that you haven't even noticed that it has been solved before :-) I'll leave it as homework to figure out what the solution is (yes, I'm a teacher by profession :-)

Best wishes, and good night, (4 am, after uncountable bottles of beer, so forgive me if I'm not making any sense)

This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:53 EDT
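For what it's worth, the decades-old solution being hinted at is a short read timeout: report a lone ESC only if no further bytes arrive within a few tens of milliseconds, otherwise parse the bytes as an escape sequence. A minimal Python sketch of the idea (not from the thread; it assumes the tty is already in raw, unbuffered mode, and the fixed follow-up read length is a simplification):

```python
# Sketch: how a terminal program tells a lone ESC key press from an
# escape sequence such as Cursor Up (ESC [ A). Assumes the tty is
# already in raw mode; the timeout value is illustrative.
import select
import sys

ESC = b"\x1b"

def read_key(timeout=0.05):
    """Read one key; if ESC arrives, wait briefly for more bytes."""
    ch = sys.stdin.buffer.read(1)
    if ch != ESC:
        return ch
    # If more bytes follow almost immediately, they belong to an
    # escape sequence; if the line stays quiet, it was the ESC key.
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if not ready:
        return b"ESC"                            # lone ESC key
    # Simplification: real parsers read until the sequence terminator.
    return ESC + sys.stdin.buffer.read(2)        # e.g. b"\x1b[A"
```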
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948426.82/warc/CC-MAIN-20180426164149-20180426184149-00146.warc.gz
CC-MAIN-2018-17
3,447
58
https://forum.clockworkpi.com/t/how-do-i-install-tic-80-on-a-a06-devterm/7195
code
Trying to run the fantasy console TIC-80 on the A06 DevTerm. Had issues with the *.deb file. The install did not go through via the GUI; I managed somehow through the CLI. But it won't run. Tried the workaround from this thread. No success. Getting this error message.

You must be using tic80-v0.90-rpi.deb from the releases page. It looks like the Raspberry Pi build depends on libbcm_host, a Raspberry Pi-specific library. Looking at their releases, I don't see anything that will work on a non-Raspberry Pi ARM device… You could try compiling from source. It's not too hard. Instructions here: GitHub - nesbox/TIC-80: TIC-80 is a fantasy computer for making, playing and sharing tiny games.

Hmm, thanks! Just to clarify, which instructions should I follow? The ones for Raspberry Pi or the ones for Ubuntu?

The ones for the newest Ubuntu. Good luck!

I can confirm the build instructions for Ubuntu 18.04 worked fine for my A06 DevTerm running Armbian.

Just created a tic80 apt package for quick installing on the A06:
sudo apt update
sudo apt install devterm-tic80-cpi -y
And welcome to submit a request on GitHub or the forum here for any other software/games to be packed as an apt deb for quick installing.

Thank you for saving these barbarians from the walled garden.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104512702.80/warc/CC-MAIN-20220705022909-20220705052909-00397.warc.gz
CC-MAIN-2022-27
1,240
12
https://www.fr.freelancer.com/projects/internet-marketing-social-networking/specific-facebook-messaging/
code
There's an artist's page with 27,000 likes. I need somebody to be able to send a specific message to each one of those 27,000 people that have liked the page. The message contains a YouTube link and another Facebook page's link. It's important that if you take the job, you do exactly as I ask.

15 freelancers are bidding on average $157 for this job.

Ready to Go! I can work right away if you find me the right person for the job. I have good command of Facebook and can work on the project easily.

I am a Facebook user and like creating groups and collecting friends... I am a specialist in special messaging... I will do that in time.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948285.62/warc/CC-MAIN-20180426144615-20180426164615-00334.warc.gz
CC-MAIN-2018-17
653
4
https://lists.macports.org/pipermail/macports-dev/2019-January/039998.html
code
A quick question on PR etiquette
mark at macports.org
Mon Jan 21 14:11:15 UTC 2019

Cool. That makes a lot of sense. Thanks!

On Sun, Jan 20, 2019 at 4:24 PM Mojca Miklavec <mojca at macports.org> wrote:
> On Sun, 20 Jan 2019 at 22:18, Mojca Miklavec wrote:
> > On Sun, 20 Jan 2019 at 19:45, Mark Anderson wrote:
> > >
> > > So, I want to change my email on all of my ports. Should I do them all in one big PR, which is what my gut says, or should I do a separate one for each? It'd be the only change I'd make in this pass.
> >
> > Since you have commit rights: if you are sure that the change is correct and make sure that nothing breaks (due to a typo like a forgotten brace etc.), there is no need to make a pull request.
>
> ... but if you do make a pull request, please don't open any more than a single one. I initially thought you were asking whether you need to make one commit or multiple commits. I would strongly prefer a single commit as well. Opening multiple pull requests would only add a lot of burden to developers.
> (What's important is not to make completely unrelated changes to multiple ports in one commit or one pull request.)
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00110.warc.gz
CC-MAIN-2022-40
1,276
24
http://www.electricbrain.com.au/pages/desktop-blade-center/odroid-c4-node.php
code
Odroid C4 Cluster node

Hardkernel's Odroid C4 as a cluster node. As the picture shows, the board is exactly the same size for the purposes of the PiCluster; it screwed into place on the board sled effortlessly. A simple thing that's included in the purchase is the heatsink. Those who have read the other pages on this site will be aware that this tinkerer often laments the lack of thermodynamic engineering on various boards. In this case, however, not only is a factory heatsink provided, but it is correctly oriented. Someone has done their homework.

Odroid C4 running apt upgrade minutes after unboxing

Software was a breeze. It has Ubuntu 20.04 LTS factory supported. There are some odd quirks which were easy to fix; these are now noted over on the Server OS Settings page. Docker loaded exactly as documented on the Docker site and the Hello-World container just upped and ran!

The kernel used in the official Odroid minimal image is:
Linux hostd.localdomain 4.9.277-75 #1 SMP PREEMPT Sun Aug 8 23:26:32 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
whereas on the Raspberry Pi 4 and CM4 nodes:
Linux hostc.localdomain 5.4.0-1052-raspi #58-Ubuntu SMP PREEMPT Mon Feb 7 16:52:35 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Labelled as hostd.localdomain, the new board immediately started taking container loads. Docker Swarm has an issue with limiting container CPU utilization: it is not possible to restrict CPU usage with this 4.9 kernel, unlike the 5.4 kernel used on the Pi4 machines.

NanoCPUs can not be set, as your kernel does not support CPU cfs period/quota or the cgroup is not mounted

This is no big deal: remove the CPU limit from the limits clause and the container runs fine. Obviously this becomes an issue when limits are essential; work around it by tagging the node appropriately and avoiding scheduling on it for those workloads. A software update to Ubuntu 22.04 has since resolved this issue. This node now runs the same OS version as all other nodes. The only thing left is to go back and remove the above workaround! Essentially this makes the Odroid C4 a complete alternative to the Raspberry Pi 4GB nodes.

How to tell if your board supports Docker CPU limits

Here the answer is no (Odroid C4):
root@hostd:~# ll /sys/fs/cgroup/cpu/cpu.*
-rw-r--r-- 1 root root 0 Mar 3 14:21 /sys/fs/cgroup/cpu/cpu.rt_period_us
-rw-r--r-- 1 root root 0 Mar 3 14:21 /sys/fs/cgroup/cpu/cpu.rt_runtime_us
-rw-r--r-- 1 root root 0 Mar 3 14:21 /sys/fs/cgroup/cpu/cpu.shares

Here the answer is yes (Raspberry Pi CM4):
root@hostc:~# ll /sys/fs/cgroup/cpu/cpu.*
-rw-r--r-- 1 root root 0 Mar 3 14:28 /sys/fs/cgroup/cpu/cpu.cfs_period_us
-rw-r--r-- 1 root root 0 Mar 3 10:46 /sys/fs/cgroup/cpu/cpu.cfs_quota_us
-rw-r--r-- 1 root root 0 Mar 3 11:17 /sys/fs/cgroup/cpu/cpu.shares
-r--r--r-- 1 root root 0 Mar 3 14:28 /sys/fs/cgroup/cpu/cpu.stat

The Odroid C4 uses an AmLogic S905X3 System on a Chip (SoC). At the time of writing this chip is getting on a bit, being some three years old. However, given that Raspberry Pi boards are currently unobtainable, the Odroid C4 looked like a good alternative. Support is essentially provided via the official wiki.
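This check is easy to script. Below is a small Python sketch that mirrors the listings above (assuming cgroup v1 mounted at /sys/fs/cgroup, as on these nodes; cgroup v2 systems use different paths):

```python
# Report whether this kernel supports Docker CPU limits (cgroup v1),
# mirroring the manual /sys/fs/cgroup inspection shown above.
import os

CGROUP_CPU = "/sys/fs/cgroup/cpu"

def supports_docker_cpu_limits():
    # Docker's --cpus / NanoCPUs setting needs the CFS quota/period files.
    return all(
        os.path.exists(os.path.join(CGROUP_CPU, name))
        for name in ("cpu.cfs_period_us", "cpu.cfs_quota_us")
    )

if __name__ == "__main__":
    if supports_docker_cpu_limits():
        print("yes: CPU cfs period/quota available, limits will work")
    else:
        print("no: kernel lacks CFS quota support (or cgroup not mounted)")
```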
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679515260.97/warc/CC-MAIN-20231211143258-20231211173258-00168.warc.gz
CC-MAIN-2023-50
3,218
22
http://www.angelfire.com/extreme2/grahampe/protector.htm
code
Protector Win Ver 3.0 Deluxe

This little program protects your Windows environment against unwanted users. When running, it does not allow anyone access to Windows without a password, a kind of secure desktop. It also disables the keys on your keyboard, so Ctrl Alt Del won't even work. You can choose to have it start automatically when Windows starts, so rebooting the system by switching off the power won't work either. Protector Win now captures a picture of your desktop on loading and plays an alarm if the wrong password has been entered three times.

The changes from version 1.5 to 2.0 are small and only cosmetic: a small moving box which acts like a screensaver has been added, and the login has been centred. Changes from 2.0 to 2.1: you can now put your own text in the moving box. Changes from 2.1 to 2.2: a very minor tidy-up of the final look with no change in operation; this is the final version unless useful comments for changes or errors are submitted to me. The change from 2.2 to 3.0 is that the option to start Protector Win at system start-up is now run as a system service using the registry instead of running from a shortcut in the Programs\StartUp folder in the start menu. This makes Protector Win a little bit more secure. Added to version 3 Deluxe is a timer you can set up to automatically lock and unlock your desktop at set times of the day on set days. Screenshots below.

The first time you run Protector Win after install, leave the password blank and click Options to set up your password. If you are updating the program, please uninstall the old one first; when you install the new one it will keep your old password. Protector Win had its features updated on 08 September 2000.

Tick the days of the week that you want the program to run, or tick the "Select All" box in the bottom left corner. Using the slide bars, select the time you wish the program to lock or unlock the desktop. To activate the program, select the "Hide" button. If you want the program to start on system start-up, tick the "Run at System Start" check box. And as with all my other programs, it is completely free.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594954.59/warc/CC-MAIN-20180723051723-20180723071723-00325.warc.gz
CC-MAIN-2018-30
2,101
11
https://explorer.aapg.org/story/articleid/16340/a-delicate-balance-seeing-faults-more-clear
code
Spectral decomposition of seismic data helps in the analysis of subtle stratigraphic plays and fractured reservoirs. The different methods used for decomposing the seismic data into individual frequency components within the seismic data bandwidth serve to transform the seismic data from the time domain to the frequency domain, and generate the spectral magnitude and phase components at every time-frequency sample. The spectral amplitude and phase components are analyzed at different frequencies, which essentially means interpreting the subsurface stratigraphic features at different scales.

More recently, another important attribute that can be generated during spectral decomposition has been introduced, referred to as the voice component at every time-frequency sample. (For more details of this attribute, refer to the EXPLORER's November 2014 Geophysical Corner.) The voice component at any individual frequency, say 30 Hz, is obtained by cross-correlating the seismic amplitude data with the mother wavelet (such as the Morlet wavelet), centered at 30 Hz with a frequency width of 30 Hz on either side. Thus the bandwidth of the voice component increases as the frequencies increase from the lower end to the higher end of the bandwidth. One may consider the process to be equivalent to applying a narrow band-pass filter centered at 30 Hz to the data, with some narrow bandwidth on both sides. We show an example of a voice component section in figure 1, along with its amplitude spectrum. Such voice components offer more information that can be processed and interpreted.

We have focused on interpretational objectives of spectral decomposition in our earlier articles in the Geophysical Corner (December 2013 and March 2014) and demonstrated our examples pertaining to channels and other stratigraphic features. In this article our examples focus on faults and fractures. In figure 2 we show a segment of a seismic section from a 3-D seismic volume from northern-central Alberta, Canada. The equivalent sections from the spectral magnitude, phase component, and voice component at 65 Hz are shown in figures 2b, c and d respectively. Notice that the vertical discontinuity information is not clearly seen on the spectral magnitude, but rather on the phase component. The voice component combines both attributes and nicely delineates the discontinuities. This observation can be exploited to our advantage either by interpreting the discontinuity information as such, or by running a discontinuity attribute, such as coherence, on the voice component volume.

Traditionally, the spectral component magnitudes at different dominant frequencies have been utilized for obtaining detailed perspectives on stratigraphic objectives. As an example, the thickness of a channel is correlated with the spectral magnitude. More detailed information on seismic geomorphology can be gained by visualizing data at specific frequencies, or by combining data at different frequencies using RGB color schemes. Another conclusion one can draw is that if the input data are spectrally balanced, or if their frequency bandwidth is somehow extended, the resulting volumes could lead to higher discontinuity detail. We focus on this aspect in this article.

In the May 2014 Geophysical Corner, Marfurt and Matos described an amplitude-friendly method for spectrally balancing seismic data. In this method, the data are first decomposed into time-frequency spectral components. The spectral magnitude is averaged over all the traces in the data volume spatially and in the given time window, which yields a smoothed average spectrum. Next, the peak of the average power spectrum is also computed. Both the average spectral magnitude and the peak of the average power spectrum are used to design a single time-varying spectral balancing operator that is applied to each and every trace in the data. As a single scalar is applied to the data, the process is considered amplitude friendly.

Figure 3 shows segments of a seismic section and the equivalent section after spectral balancing. The individual amplitude spectra before and after are shown as insets. Notice that after spectral balancing the seismic section shows higher frequency content and the amplitude spectrum is flattened. Encouraged by the higher frequency content of the data, we ran Energy Ratio coherence on the input data as well as on the spectrally balanced version of the data. The results are shown in figures 4a, b and 5a, b, where we notice the better definition of the NNW-SSE faults as well as the faults/fractures in the E-W direction on the coherence run on the spectrally balanced version.

Finally, we ran spectral decomposition on the spectrally balanced version of the input seismic data, and put the voice components through Energy Ratio coherence computation. In figures 4c, d, e and 5c, d, e we show time slices and horizon slices at different levels from the 65, 75 and 85 Hz frequency volumes. Notice the clarity in the definition of the discontinuities on both sets of displays. Such data lead to better interpretation of the discontinuities than carrying out the same exercise on the input data.

The conclusion one can draw from the foregoing examples is that spectral balancing of seismic data, when performed in an amplitude-friendly way, leads to higher frequency content, which in turn exhibits more detailed definition of faults and fractures. Such discontinuity information can be interpreted better on coherence displays in the zone of interest. Coherence computation performed on spectral decomposition after spectral balancing, or on the voice components at higher frequencies, yields higher detail with regard to faults and fractures.
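To illustrate the voice-component computation described above, here is a rough numpy sketch (not the authors' workflow; the sampling interval, wavelet width and synthetic test trace are illustrative assumptions):

```python
# Sketch: a single-frequency "voice component" of a seismic trace,
# obtained by cross-correlating the trace with a Morlet wavelet
# centered at f0 (30 Hz here). All parameters are illustrative.
import numpy as np

def morlet(f0, dt, n_cycles=6):
    """Complex Morlet wavelet centered at f0 Hz, sampled every dt seconds."""
    sigma_t = n_cycles / (2 * np.pi * f0)          # temporal width
    t = np.arange(-4 * sigma_t, 4 * sigma_t, dt)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))

def voice_component(trace, f0, dt):
    """Band-limited component of `trace` around f0, like a narrow band-pass."""
    w = morlet(f0, dt)
    # Cross-correlation = convolution with the conjugated, reversed wavelet.
    analytic = np.convolve(trace, np.conj(w)[::-1], mode="same")
    return analytic.real

# Example: synthetic trace sampled at 2 ms; extract the 30 Hz voice.
dt = 0.002
t = np.arange(0, 2, dt)
trace = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 70 * t)
v30 = voice_component(trace, 30.0, dt)
```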
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100531.77/warc/CC-MAIN-20231204151108-20231204181108-00780.warc.gz
CC-MAIN-2023-50
5,791
29
http://forum.wehavelupus.com/showthread.php?196-Mirac&p=697&mode=threaded
code
I have a friend whose wife is at the University of Nevada (a biochemist). She had asked one of her professors about lupus, and he pointed her to the site http://www.**********.com/ and to the product M***C. Anyone know anything about it? I asked my rheumy, but he knew nothing about it (he is not much into the new stuff). Was wondering if anyone had heard about it or tried it, or if you could get advice from your rheumatologist. All help appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120092.26/warc/CC-MAIN-20170423031200-00220-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
482
5
https://www.neogaf.com/threads/dolphin-emulating-wii-and-gamecube-games.395121/page-3
code
LocoMrPollock said: So how do you rip GameCube games? I could swear it was in the OP yesterday, but I don't see it now.

I only have a link to the Wii Homebrew Thread on GAF. They have all the info you could possibly need to set that up, so I don't feel the need to go through it again here. This thread is strictly about Dolphin.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608856.6/warc/CC-MAIN-20210613131257-20210613161257-00239.warc.gz
CC-MAIN-2021-25
327
2
https://deepai.org/publication/generating-superpixels-using-deep-image-representations
code
Many deep learning based applications in computer vision operate on a grid of pixels and use convolutions trained end-to-end. However, popular algorithms have successfully leveraged image segmentation to group pixels into superpixels, reducing the input dimensionality while preserving the semantic content needed to address the task at hand [fulkerson2009class]. Superpixels are efficient image priors that tend to transfer across tasks and reduce the data needed to train models, which can be very beneficial for domain adaptation and weakly supervised settings, e.g. weakly supervised image segmentation [kwak2017weakly]. Graph-based convolutional networks [graphbased] also allow applications of deep learning beyond grid-like inputs. Some works [schuurmans2018efficient] explored the inclusion of superpixels in deep learning pipelines.

The hand-crafted design of superpixel algorithms limits our ability to tune image segmentations to specific image domains, such as infrared, medical, or spatio-temporal data. Given the focus on efficiency, superpixels have often been designed to operate on color features only; image segmentations could however incorporate higher-level image representations. We consider extensions to a standard superpixel algorithm incorporating higher-level unsupervised or supervised image features. We also study paths to fine-tune a superpixel segmentation algorithm to a specific modality.

There has been little research on trainable superpixels. In parallel to our work, Wei-Chih Tu et al. [liulearning] have developed a trainable variant of graph-based superpixel algorithms using trainable superpixel affinities. Our approach is based on a clustering algorithm, which tends to be faster and better suited to real-time applications due to its iterative nature [spix_eval].

II. SLIC algorithm

Several comparisons indicate that the Simple Linear Iterative Clustering (SLIC) [slic] image segmentation algorithm offers both good speed and performance [spix_eval][neubert2012superpixel]. It uses a clustering approach similar to k-means, and usually operates on images in the CIELAB color space. After initialization of the cluster centers along a grid, a two-step iterative process refines clusters until convergence. First, the pixels are assigned to the closest cluster center in a joint five-dimensional space of color (l, a and b) and spatial (x and y) components. The weighted distance includes a compactness parameter to balance between colors and space. Second, the cluster centers are updated based on the pixel assignments. Finally, after convergence, a simple connected components algorithm enforces connectedness of the image segments.

III. Augmenting SLIC with deep representations

III-A. Deep representations

We experiment with SLIC beyond the original features. Deep representations capturing textures, gradients and edges in the image can be extracted from convolutional neural networks. Their structure is similar to multi-channel images, often having a lower resolution than the original image. Each channel represents an image feature. These features can be unsupervised, as in the case of scattering features [scattering] (Fig. 1), or trained for a particular vision task. Segmentation networks such as ENet [paszke2016enet] have convolutional layers behaving like feature extractors. As we aim to integrate superpixels in a deep architecture, the features can be provided at no extra computational cost. Unsupervised scattering networks are similar to convolutional neural networks whose filters are fixed as wavelets. We use scattering networks with a fixed receptive field for our experiments, generating a set of lower-resolution feature maps per image channel.

III-B. Inclusion in SLIC

For a particular pixel, we have a vector of image features. To incorporate the image features into SLIC, we augment the number of image channels. The scattering features are upscaled and concatenated with the input image (Fig. 2). The resulting multi-channel image can be used in the SLIC algorithm, where the clustering space now becomes a larger space. The SLIC color distance is extended so that the individual feature maps are weighted with coefficients w_j. The distance in the color space between pixel p and cluster k is then defined as

d_c(p, k) = \sqrt{\sum_j w_j \, (f_j(p) - f_j(k))^2},    (1)

where f_j denotes the j-th channel (color or deep feature). Our first experiments investigate the impact of scattering features by manually tuning the inclusion of scattering features on the lightness component only. We define binary weights based on the visual appearance of the features. Layers originating from strong edge detectors are left out since they are of no use in clustering. We also varied the relative importance of the extra features compared to the color components and selected the best-scoring approach (out of 10 different ones) for evaluation (Section VII-B).
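To make the extended distance concrete, here is a small numpy sketch (an illustration, not the authors' code) of Equation 1 combined with SLIC's usual spatial term; the weights w, compactness m and grid spacing S are placeholder values:

```python
# Sketch of the extended SLIC distance: color channels are augmented
# with deep-feature channels, each weighted by a coefficient w_j, and
# combined with the usual spatial term. Parameter values here are
# illustrative assumptions, not the paper's settings.
import numpy as np

def slic_distance(pixel_feats, cluster_feats, pixel_xy, cluster_xy,
                  w, m=10.0, S=25.0):
    """Distance between a pixel and a cluster center in the joint space.

    pixel_feats / cluster_feats: color + deep-feature channels
    w: per-channel weights (binary or tuned)
    m: compactness, S: cluster grid spacing
    """
    d_color = np.sqrt(np.sum(w * (pixel_feats - cluster_feats) ** 2))  # Eq. 1
    d_space = np.linalg.norm(pixel_xy - cluster_xy)
    return np.sqrt(d_color ** 2 + (d_space / S) ** 2 * m ** 2)

# Example: 3 CIELAB channels + 2 feature channels; an edge-like feature
# channel is zero-weighted, mirroring the binary weights described above.
w = np.array([1.0, 1.0, 1.0, 0.5, 0.0])
d = slic_distance(np.array([50.0, 10.0, 2.0, 0.3, 0.9]),
                  np.array([48.0, 12.0, 0.0, 0.1, 0.2]),
                  np.array([100.0, 120.0]), np.array([110.0, 115.0]), w)
```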
IV. Trainable superpixel algorithm

Manual selection and weighting of features in the distance measure is a tedious process, requiring visual examination of the features and an exhaustive search for optimal weights. In addition, the distance measure (1) might not have enough flexibility to integrate those features properly. We research a trainable superpixel algorithm incorporating a neural network that can tune superpixels to a certain image set.

IV-A. Clustering as a classification problem

The SLIC superpixel algorithm uses a top-down approach: the algorithm iterates over all cluster centers and calculates a distance measure to all pixels in the neighborhood around the cluster center. An equivalent bottom-up approach would be to iterate over all pixels and calculate a distance measure between the pixel and all the clusters in the region around the pixel. The pixel is then assigned to the cluster that is closest in the clustering space of SLIC. This is in fact a classification problem: assign each pixel to one of the clusters in the spatial neighborhood. While SLIC solves this classification problem using a distance measure, we avoid training a regression, because distances that improve superpixel performance are hard to define. We propose to use a neural classifier for the assignment task: it considers a fixed number K of spatially closest clusters in the neighborhood and assigns the pixel to one of those depending on their features.

IV-B. Bottom-up trainable superpixel algorithm

The algorithm (Algorithm 1) works in a similar way to SLIC. Clusters are first initialized on a grid. Then clusters are formed using a two-step iterative procedure: the first step assigns each pixel to one of the K spatially closest clusters, using classification based on the features of these clusters and the pixel of interest. K is a parameter: a higher K means more flexibility at the cost of more computation. Afterwards, the features and position of the newly formed clusters are calculated by averaging the features and positions of the pixels assigned to those clusters. This iterative procedure is run for a fixed number of iterations. Finally, a connected components algorithm is used to transform clusters into proper superpixels.

A sequential implementation as described here would be slow: the large number of individual network evaluations limits performance. We implemented a version that generates large batches and evaluates these on a GPU. The algorithm can also easily be parallelized, because every pixel is processed independently.

IV-C. Neural network architecture

The input vector for the classification of a single pixel consists of several parts: pixel features, for example the pixel color and other features extracted using deep representations; spatial distances to the K closest clusters, where, in order to have a single neural network for multiple superpixel sizes and compactness parameters, each pixel-to-cluster-center distance is normalized by the superpixel grid spacing; and feature differences between the input pixel and the cluster centers. The network outputs a vector of size K, where each element denotes the probability of the pixel belonging to the cluster with that index.

We aim for a small network and look at the problem as a typical classification problem. A fully connected network would not exploit the similarity between different parts of the input vector. An efficient architecture is made up of three parts: normalization, dimensionality reduction and classification (Fig. 3). The Dimensionality Reducer for Pixels (DRP) module transforms the pixel features to a smaller space, while the Dimensionality Reducer for Clusters (DRC) is applied on the pixel-cluster differences. Weights are shared between similarly-named modules to reduce the number of trainable parameters. The final fully connected network (FC) does the actual classification.

V. Generating training labels

The classifier requires training labels, indicating which cluster the pixel should be assigned to according to ground truth. Since no database with superpixel annotations exists, we derive a label set from semantic segmentation databases such as Cityscapes [cityscapes] and BSDS [bsds500] (Fig. 4).

V-A. SLIC-based labels

We use the SLIC distance measure as a starting point to produce labels. Replicating SLIC requires calculating the SLIC distance measure to the K closest clusters of the classifier and picking the closest cluster according to this measure. The pixel label is then set to this cluster. Replicating SLIC would not force the classifier to include the features extracted from deep representations in its decision process. To improve superpixels beyond SLIC, we use ground truth annotations for semantic segmentation to correct wrong labels, where the pixel would be assigned to a cluster in a different ground truth segment. SLIC makes these mistakes when regions have approximately the same colors, but the classifier can use deep representations to discriminate between the two regions. When generating a label for a pixel, we only consider assignment to clusters lying mainly in the same ground truth segment as the pixel being classified.

Ground truth segmentations are typically much larger than superpixels, so the number of pixels corrected by the ground truth segmentation is small. The classifier thus primarily replicates SLIC and ignores the corrected labels. A multi-label loss could take into account that multiple clusters are good candidates, but we couldn't achieve satisfactory results using this approach. We solve the problem by using principles of hard-example mining: the set of labels is carefully chosen to improve the training process.
Hard-example mining on SLIC mistakes. We tried training the classifier by only retaining labels that were corrected by the ground truth annotations. Our experiments indicate that this is too strict and degrades superpixel performance.

Hard-example mining at segmentation edges. A less strict method is to only consider pixels near ground truth edges. Labels in the middle of the ground truth segments have a lot of ambiguity: we cannot be sure whether the assigned cluster is really in the same part of the object. Labels at the edges have more discriminative power; we call these unambiguous labels. Our implementation does not exactly select pixels near the edge; it is easier to count the number of different ground truth segments among the closest clusters. Thus, we restrict the training set to pixels whose candidate clusters lie in at least a chosen number of different ground truth segments.

V-B. Weakly supervised labeling

Using the SLIC distance measure to generate pixel labels offers a good starting point, but it might also restrict the adaptability of the classification network. One could instead label a pixel with a random cluster in the same segment; this obviously generates very noisy superpixels. Picking the closest cluster in the same segment has the opposite problem: the spatial component is emphasized too much. Again, we leverage the principles of hard-example mining to build a better training set. We limit the training set to pixels having candidate clusters in at least n segments, with the optimal n to be determined experimentally (Fig. 5). Interestingly, our experiments indicate that a higher value of n produces more compact clusters (Fig. 6). The reduced amount of ambiguity increases the importance of the spatial component: the network learns that two pixels next to each other might have very different features, while having very similar spatial distances to the spatially closest clusters.

V-C. BSDS ground truth edges

More refined semantic segmentations provide more accurate labels. We considered several semantic segmentation datasets: PASCAL VOC [pascal-voc-2012], Cityscapes [cityscapes] and BSDS500 [bsds500]. Cityscapes and BSDS both have high-quality ground truth annotations, but BSDS has multiple of them for a single image. Typically, object borders in natural images are not clearly delineated, and multiple independent ground truth segmentations help to handle these cases. We combine the 5 individual ground truth annotations into a single ground truth edge map (Fig. 7). This also defines a new distance measure: more edges between a pixel and a cluster indicate a greater distance and a lower likelihood of being assigned to that cluster.

VI. Training a distance measure

The proposed network interprets the classification task as a typical deep learning problem. We were not able to replicate the SLIC distance measure exactly, although the superpixel output was similar. We note that the SLIC distance measure could be perfectly replicated by squaring each element of the input vector and removing the batchnorm layer: the elements of the input vector then become the individual terms of the SLIC distance measure. By making the different parts of the network independent, the trained modules can be seen as distance functions (Fig. 8). The network then learns a regression by training a classification. We verified that the network can almost perfectly replicate the SLIC distance measure (Table II). When using a single linear layer, the network in fact learns the weights of Equation 1.
These weights can then be integrated into the top-down approach of SLIC, resulting in a very efficient trainable superpixel algorithm running on CPU.

VII. Evaluation and results

Superpixel performance is evaluated on 500 BSDS500 [bsds500] color images. Superpixels are evaluated at a fixed superpixel size, with the compactness determined to be optimal for standard SLIC, and 5 clustering iterations. We use several metrics common in superpixel evaluation. Boundary recall (Rec) represents the adherence to ground truth boundaries (higher is better). Mean distance to edge (MDE) [benesova2014fast] measures the average distance between the ground truth border and the closest superpixel edge (lower is better). Superpixel leakage into different ground truth segments is quantified by the undersegmentation error (UE) (lower is better); multiple variants exist, and we use the definition of Neubert and Protzel [neubert2012superpixel]. The regularity and compactness of superpixels is measured by the compactness (CO) metric [compactness]; more regular superpixels are generally preferred. For a fair comparison, the compactness parameters of the different methods are chosen so that their resulting output compactness is similar. We define an additional intersection-over-union (IoU) metric, similar to the one often used in segmentation benchmarks, which measures the maximum achievable performance when using the superpixels in a segmentation pipeline.

VII-B. Extended distance measure with manual tuning

As a first experiment, we evaluate the inclusion of scattering features in the extended distance measure for SLIC (Section III). The scattering transformation is applied on the lightness channel of the image (converted to the CIELAB color space) and we manually select the most important representations. We refer to this method as 'Manual tuning', and Table I shows that all metrics are improved. Mainly the mean distance to edge and undersegmentation metrics are impacted: the low-resolution features do not help at a pixel-scale level, but they avoid superpixel leakage. The difference is larger at lower compactness values (Fig. 10): evaluated at their own optimal compactness, the methods show a 9.4% improvement in MDE, compared to the 4.3% improvement above. The approach with scattering features benefits from the increased flexibility, while SLIC performance decreases. Superpixels incorporating deep representations also consistently perform better (Fig. 9): most images are slightly improved. In addition, we experimented with greyscale images, where the effect of scattering features is even stronger.

VII-C. Trainable superpixels

Trainable superpixels should be able to improve superpixel quality without having to manually tune the distance measure weights. Quality assessment of the trainable superpixels is a three-stage process: a label set is generated, a classifier is trained on these labels, and the superpixel algorithm using the trained classifier is evaluated. We selected the most promising label methods for evaluation on BSDS500 images and tested scattering and ENet features. The 243 scattering features have a fixed receptive field and reduced spatial dimensions. The ENet features are extracted from the first convolutional layer, which is designed to be a feature extractor; they have a better spatial resolution, but there are only 16 features.

We selected a simple network with an architecture as in Fig. 3, where the dimensionality reducers DRP/DRC are 2-layer networks (hidden layers of 100 and 15 neurons) and the classification network FC does the final assignment; we call this the 'Deep learning classification network'. We also test regression architectures as in Fig. 8 with a single linear layer for the distance measure module: this is in fact just a weighted addition of the squared pixel-cluster differences. This approach is called the '1-layer network'. In addition, we evaluate a network where the single layer is expanded to 3 layers ('3-layer'). We trained on several label methods and experimented with different variations of hard-example mining for both SLIC-based labels and weakly supervised labels. Our experiments found that the more engineered methods performed better. The best SLIC-based method corrects labeling mistakes with the segmentation ground truth and applies hard-example mining in order to remove ambiguous labels. The best weakly supervised label method also removes ambiguous labels, retaining only pixels whose candidate clusters lie in at least 6 different ground truth segments.

VII-D. Validation loss

As different labeling methods employ different loss functions, we cannot directly compare the values of these loss functions on the validation set. For a single label method, a comparison between network architectures and features is possible and serves as an indication of the resulting superpixel quality. Table II shows that scattering features and ENet features achieve similar validation losses in most cases. Unsurprisingly, the 3-layer regression network performs better than the 1-layer one, and it also performs slightly better than the classification-based network that used batch normalization and dimensionality reduction modules.

VII-E. Superpixel quality

The superpixel quality for each of these methods is compared in Table I. The 3-layer regression-based network, having the lowest validation loss, also achieves the best metric scores. Superpixel quality is improved over standard SLIC and also over the manually tuned method of Section III. Comparing the methods visually (Fig. 12) shows that the manually tuned method tends to concentrate superpixels around object borders; this effect is not seen in the trained superpixels. During evaluation of the manually tuned superpixels, we already discovered that the extra features mainly influence the mean distance to edge and undersegmentation metrics, and the same effect can be seen here. The weakly supervised method has surprisingly similar scores to SLIC, and during our tests we noticed that the variation in compactness was much lower (Fig. 11).

Superpixels are image priors that tend to transfer across tasks. This work elaborates on a trainable approach for superpixels incorporating deep image representations. We introduce several ideas not yet addressed in research: we include deep representations in a superpixel algorithm, build a set of superpixel training labels from segmentation annotations, and devise a trainable superpixel algorithm. We demonstrate that a simple inclusion of deep representations by extending the SLIC distance measure improves superpixel quality in a consistent way. The trainable approach can surpass the scores of the simple inclusion, but requires appropriate training labels. The performance increase could be limited by the dataset and features used in our experiments: we used natural images, which have high variability in features. We believe larger performance increases can be achieved by targeting a specific modality, such as medical imaging. More specialized features can be incorporated, possibly with a less restricted receptive field than the scattering features. We hope that our analysis paves the way for the inclusion of trainable superpixels in deep learning pipelines.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413551.52/warc/CC-MAIN-20200531151414-20200531181414-00204.warc.gz
CC-MAIN-2020-24
21,176
56
http://stackoverflow.com/questions/6886273/multi-threaded-multi-windowed-application-not-working
code
I want to create a program that does remote installs. I'd like a window to launch at the start of each installation and show a progress bar with the approximate progress of the job. If I create the window on the main thread, I can access all of its elements, but it's not responsive due to the task of actually running the install. If I launch the window in a new thread, I can't seem to get the main thread to recognize any of the elements in the window, so I can't update the progress bar. How can I create a multi-threaded UI that is responsive?
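Whatever the toolkit, the standard answer is: keep the UI on the main thread, run the install on a worker thread, and pass progress back through a thread-safe channel that the UI polls. A sketch of the pattern in Python/Tkinter (illustrative only; the original question is toolkit-agnostic, and do_install stands in for the real install job):

```python
# Pattern: the worker thread does the slow job and posts progress to a
# queue; the UI thread polls the queue and updates the progress bar.
import queue
import threading
import time
import tkinter as tk
from tkinter import ttk

def do_install(progress_q):
    for pct in range(0, 101, 10):
        time.sleep(0.3)              # stand-in for real install steps
        progress_q.put(pct)          # thread-safe hand-off to the UI

def poll(root, bar, progress_q):
    try:
        while True:
            bar["value"] = progress_q.get_nowait()
    except queue.Empty:
        pass
    root.after(100, poll, root, bar, progress_q)   # keep polling

root = tk.Tk()
bar = ttk.Progressbar(root, maximum=100, length=250)
bar.pack(padx=20, pady=20)
q = queue.Queue()
threading.Thread(target=do_install, args=(q,), daemon=True).start()
poll(root, bar, q)
root.mainloop()
```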
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661095.66/warc/CC-MAIN-20150417045741-00297-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
562
3
https://www.iza.org/de/publications/dp/5892/role-selection-and-team-performance
code
IZA DP No. 5892: Role Selection and Team Performance
Published in: International Economic Review, 2018, 59(3), 1547-1569.

Team success relies on assigning team members to the right tasks. We use controlled experiments to study how roles are assigned within teams and how this affects team performance. Subjects play the takeover game in pairs consisting of a buyer and a seller. Understanding optimal play is very demanding for buyers and trivial for sellers. Teams perform better when roles are assigned endogenously or teammates are allowed to chat about their decisions, but the interaction effect between endogenous role assignment and chat unexpectedly worsens team performance. We argue that ego depletion provides a likely explanation for this surprising result.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150307.84/warc/CC-MAIN-20210724160723-20210724190723-00660.warc.gz
CC-MAIN-2021-31
769
3
http://www.ttuhsc.edu/it/is/training/oucampus/basics/logging_in.aspx
code
OU Campus Content Management System

The layout that is viewed upon logging into the CMS depends upon the role of the user. Content contributors can use an on-page link called the DirectEdit link from a published page. Users are logged in directly to edit that page rather than having to navigate a file structure. In this case, the predominant view is the WYSIWYG editor. In addition to the WYSIWYG view, various functions are available on the tabbed interface when navigating the CMS, depending upon the authority level of the user.

Access a Page

Pages are accessed in one of two ways: the DirectEdit link or navigating through the folder structure within the interface itself. In both cases, users must have been granted editing access to the page in order to access it. When using the folder structure to navigate, users must have access to the directories/folders leading to the page to be edited in order to traverse the structure. Initial access to the system is granted from the TTUHSC DirectEdit link. Once logged into the system, the user can navigate the structure tree based on their authority level.

Editing a Page with the DirectEdit Method
- Click the DirectEdit link on the published page (the copyright symbol in the institutional footer)
- Login using eRaider credentials
- Edit the page using the edit link for editable regions based on access levels
- Save and publish the page (or send to an approver)

The view when accessing a page directly depends on the type of page or content being accessed. The page may include many editable regions, just one area that can be edited, or may provide an Asset Chooser or Image Chooser by which only specific content may be selected. Selecting an editable region button provides access to edit that area using the WYSIWYG editor.
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443598.37/warc/CC-MAIN-20141017005723-00016-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
1,789
12
http://rubyforge.org/pipermail/typo-list/2006-April/002328.html
code
[typo] Noob Typo Questions (uh oh!)
meta at pobox.com
Tue Apr 11 15:34:19 EDT 2006

Terry Donaghe wrote:
> First, where's the best place to look for typo documentation?

Ah, the perennial question for most Ruby projects...

> Third, is it common for the sidebar functionality to freak out
> everytime you use it?

I had some similar problems when I upgraded to trunk. It turned out I had some duplicate records in my database; I cleared the sidebars table and re-did my sidebars from scratch, and everything was OK after that. Not sure how the duplicates got there.
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00073-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
598
13
http://phoronix.com/forums/showthread.php?80433-Ubuntu-To-Get-Its-Own-Package-Format-App-Installer/page4
code
As a developer, I love that decision (100% compatibility/stability, no interfering with new, untested libs coming in via the system's repository). As a user, I hate that decision on every level. It means that I have to rely on the vendor to deliver security patches, and I have the same lib 100 times on my system, which takes up space (and no, cheap space is no excuse for that; bad habits stay bad habits). I'm not happy about it. Similar to my feelings about the GPL. As a software developer that needs money to pay my rent, pay for my lunch and so on, I hate the GPL; its infectiousness is maddening. Sure, you can sell your software, but only one person will purchase it and can then rename it, put it on a website, and sell it for 1/4 of the price. As a user, I love the GPL because it gives me the ability to do anything with it. Is it normal to be so split about such things?

Oh noes! It's the rabid wolves out to get Ubuntu again. If the many existing solutions were perfect, or just needed a bit of work to be acceptable, then why hasn't anyone done it yet? Why fault Ubuntu for trying to solve an obvious problem? I've learned that Ubuntu is damned if they do and damned if they don't, so they might as well ignore you bunch and try to accomplish their goals.

Is that the same 'acceptable performance' as that of the Software Centre? ...they already have proof of concept code working with the current system, written in Python and with acceptable performance...

While I understand Canonical's frustration with the current situation (yes, it's a long-standing problem for Linux), I think this is a problem that should be discussed with all the major distros. FFS, we don't need yet another standard.

There is only one way out of this: to provide well-thought-out stable interfaces, and when stuff gets updated, to shift them into legacy bindings until they irreversibly break. But that would mean that library developers have a new heap of work to do: for every new version, testing how the legacy bindings perform with it. And even this does not guarantee a long run, because architectural changes are inevitable, and those break everything. So, how about you stop being lazy and actually take responsibility to maintain your software, or open-source it so that others can maintain it. One law I learned from Linux: application unmaintained = application dead. You can as well set liberation money on your software, i.e. a "reverse" Kickstarter; Blender was born this way. If you apply the classic proprietary "sell copies" monetization approach to the GPL, don't be surprised if you end up butthurt. It's like trying to catch water with a colander.

So if it's all dynamically linked, and they ship their .so files, then it's an OK idea, even if redundant in terms of disk and RAM usage. But by far not if it's statically linked.

More copies of the same lib are not really a problem; you can easily use deduplication software in userspace (via symlinks), activated on every installation/uninstallation, or even the emerging kernel mechanisms. The real issue is that this is all about proprietary software, do you understand this? In an ideal GPL world, recompiling a package isn't really a problem; didn't the distros come into being exactly for this, to have someone who takes care of the dependencies and sorts out the problems for you? If even a rolling distro works well, I can't see a problem here, except to please closed-source software. In my experience, Arch works well 99% of the time; the other 1% is because of conflicts with binary blobs.

Self-contained packages are just a bunch of data files, executables and libraries with a script that sets appropriate LD paths. EVERY developer can put whatever he wants in that package, pulling in all the deps down to glibc (game developers usually ship just SDL and not much else), so I really CAN'T see the need for this Canonical move; I think they just want to chain developers to their system instead. What if, in the future, the Ubuntu base system the devs are linking against contains a blob? Think about that; it is dangerous. That said, Ubuntu is taking a direction, and I will NOT follow their way of development.
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636213.71/warc/CC-MAIN-20150417045716-00309-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
4,203
30
https://support.kaspersky.com/ksc/14.2/175728
code
Creating Azure SQL database and SQL Server Dec 4, 2023 You need an SQL database and SQL Server in the Azure environment. To create an Azure SQL database and SQL Server: - Follow the instructions on the Azure website. You can create a new server when Microsoft Azure prompts you to do so; if you already have an Azure SQL Server, you can use it for Kaspersky Security Center rather than creating a new one. - After creating the SQL database and SQL Server, make sure that you know its resource name and resource group: - Go to https://portal.azure.com and make sure that you are logged in. - In the left pane, select SQL databases. - Click the name of a database from the list of your databases. The properties window opens. - The name of the database is the resource name. The name of the resource group is displayed in the Overview section of the properties window. You need the resource name and resource group of the database for migrating the database to Azure SQL.
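If you prefer to script step 2, the resource name and resource group can also be read programmatically. The sketch below assumes the azure-identity and azure-mgmt-sql Python packages (my assumption; the Kaspersky instructions cover only the portal, and the SDK surface may differ between versions):

```python
# Sketch: list Azure SQL servers/databases to find the resource name
# and resource group, instead of reading them off the portal.
# Assumes azure-identity and azure-mgmt-sql; exact API surface may
# vary by SDK version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

subscription_id = "<your-subscription-id>"
client = SqlManagementClient(DefaultAzureCredential(), subscription_id)

# Each server id embeds its resource group:
# /subscriptions/<sub>/resourceGroups/<group>/providers/Microsoft.Sql/servers/<name>
for server in client.servers.list():
    group = server.id.split("/resourceGroups/")[1].split("/")[0]
    print(f"server={server.name}  resource_group={group}")
    for db in client.databases.list_by_server(group, server.name):
        print(f"  database (resource name): {db.name}")
```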
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100724.48/warc/CC-MAIN-20231208045320-20231208075320-00576.warc.gz
CC-MAIN-2023-50
969
13
https://www.azurecurve.co.uk/2013/02/how-to-install-microsoft-dynamics-gp-2013-web-services-management-tools/
code
With the web services installed, configured and verified, we can install the Management Tools on a client machine so we don't need to access the server each time we need to make a change; note that the Management Tools can only be installed on a client machine if there is a domain controller on the network. To install the Web Services Management Tools, open the Microsoft Dynamics GP setup utility and select Web Services Management Tools. Click Next on the Welcome screen to proceed, then accept the License Agreement. Next you need to provide the Web Service URLs for the Web Services for Microsoft Dynamics GP and for the Microsoft Dynamics Security Admin Service. The Web Services for Microsoft Dynamics GP URL will be along the lines of http://<machinename>:<port>/DynamicsGPWebServices/DynamicsGPService.asmx, where you need to swap out the machine name and port for the server and port on which you installed the Web Services. In my case the machine name is azc-2011-crm and the port is the default of 48620. The same needs to be done for the Microsoft Dynamics Security Admin Service, where the structure is http://<machinename>:<port>/DynamicsAdminService.asmx; this will typically run on port 48621. The final step of the installation is to confirm you're ready by clicking Install. Once the install has completed, a confirmation page will be displayed. At this point, everything needed is installed and configured, so we're good to go with the Web Services for Microsoft Dynamics GP. There will generally be more specific configuration, such as security, needed depending on how you intend to use the Web Services.
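Before running the installer, it can be handy to confirm that the two service URLs respond. A quick sketch (the host azc-2011-crm and ports 48620/48621 are the walkthrough's own examples):

```python
# Quick reachability check for the two web service endpoints named in
# the walkthrough. Host and ports are the walkthrough's examples.
import urllib.request

host = "azc-2011-crm"
urls = [
    f"http://{host}:48620/DynamicsGPWebServices/DynamicsGPService.asmx",
    f"http://{host}:48621/DynamicsAdminService.asmx",
]

for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{url} -> HTTP {resp.status}")
    except Exception as exc:
        print(f"{url} -> unreachable ({exc})")
```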
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00128.warc.gz
CC-MAIN-2021-04
1,618
10
http://www.gettinginformationdone.com/presentations/
code
These are some old presentations I have uploaded to SlideShare. I will be adding more presentations shortly.

The following presentation was given at the AIIM Document Management Service Providers Executive Forum in Austin, Texas on November 7, 2008.

The following presentation was given at a joint AIM/ARMA event in Chicago, April 2008.

This presentation was given at the Gilbane Conference on Content Management in Washington DC on June 5, 2007 during a panel discussion.

I presented on this topic at the AIIM National Capital Chapter breakfast meeting. I had a great time presenting my view of what the future may hold and talked about some of the clues that we may have been able to pick up at the recent AIIMexpo 2007 in Boston.

Organizations implement records management to ensure their records are well organized, accurate, and retained securely through their lifecycle. What happens when documents are removed from the system? This session discusses adding information rights management to ECM solutions to extend the effective management of those records outside of the repository. Presented at AIIMexpo 2007 in Boston.

This presentation was given at the Gilbane Content Management Conference in San Francisco on 4/11/2007. The presentation provides three things managers should do to effectively manage content in a Web 2.0 world.

This presentation was given in person and is designed to educate professional project managers on the importance of managing the documents and artifacts from projects as official business records.

This presentation is an overview for those new to blogs and blogging. I presented this to a live audience late last year. I also provide some of the basics and cover some of the tools available for new bloggers.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567042.50/warc/CC-MAIN-20171215060102-20171215080102-00000.warc.gz
CC-MAIN-2017-51
1,748
9
https://ctl.columbia.edu/announcements/faculty-spotlight-yi-zhang/
code
Faculty Spotlight: Yi Zhang on the Benefits of Adopting Immediate Feedback Assessment Techniques

Yi Zhang, Associate in the Discipline of Industrial Engineering and Operations Research (IEOR), was awarded a Provost's Innovative Course Design grant to redesign his course, Simulation. Dr. Zhang worked closely with the CTL to implement a new in-class auto-grading system to help monitor student progress, offer more immediate feedback, and improve student engagement in his large lecture class. In this spotlight, Yi discusses why he chose to implement this active learning technique, how he worked with the CTL, how it enhanced the student learning experience, how he measured success, and what advice he has for faculty interested in implementing active learning methods in their classrooms.

Please describe your course and the learning goals and objectives for students in it.

Simulation is a core course for graduate students receiving their MS in Operations Research and MS in Industrial Engineering in the Industrial Engineering and Operations Research Department. In this course, we introduce different simulation techniques with a focus on applications in the field of operations research and business analytics. Students explore simulation techniques from both the theoretical perspective and the application perspective. Besides aiming for a good grasp of simulation theories, students also need to develop the ability to solve real-life simulation problems using the programming skills that they learn. At the end of this course, students are expected to be proficient in using Python to
- generate random samples from different statistical distributions based on various sampling methods
- implement system simulation in various real-world settings, including hospital, inventory, insurance, and restaurant scenarios, and provide business recommendations
- improve the efficiency of the system simulation based on important variance reduction methods

What were the main challenges/limitations of the previous iteration of your course?

Approximately 150 students enroll in this course each semester. Creating an active learning experience for a large class like this has been a challenge. During each lecture, students are given time to work on application-oriented coding questions on their laptops to help them better understand the important theories and methods of the class. Due to the large class size, it is difficult for me to monitor the progress of each student and to offer them immediate help. From the student perspective, they were not receiving valuable feedback while working through their problem-solving process and had long waits before receiving help. As the instructor, I needed a way to track the progress of the class, understand their knowledge gaps, and offer my help accordingly.

What is the intervention that you implemented and how does it enhance the student learning experience?

Auto-grading has been adopted in many courses for grading assignments and exams. For this project, we developed an immediate feedback system by incorporating auto-grading features into the lectures and making students the primary users of these features. For the in-class exercises, I worked with my teaching assistant, Achraf Bahamou, to program a Python script based on the rubrics we developed for each question. When working on the exercises, a student can submit their work electronically.
The script will then run their response through various checkpoints to give them graded marks and provide individualized feedback based on the type of mistakes they made at those checkpoints. Students can then learn from the feedback, correct their mistakes, and submit their answers again. On the instructor side, I am able to observe the progress of the students, which is important for knowing where the students stand and adapting my teaching to student needs during the lesson. Students were very excited about the auto-grading features and were able to use them effectively to improve their work. In addition, since I have access to all the marked work, I can go over some of the student submissions with the names hidden during whole-class discussions. Students were noticeably more engaged when seeing the work of their peers and actively engaged in the learning process as I explained and corrected common mistakes.

What resources did you need to help you implement your intervention? (e.g. CTL support, software, hardware, etc.)

This project was an exhibition of great teamwork. Michael Tarnow, Learning Designer of Science and Engineering at the CTL, was involved in the whole pipeline of the project, which benefited greatly from his expertise in educational technology and his valuable insights into pedagogical practices. During the proposal stage, I received valuable feedback from Jessica Rowe, the Associate Director for Instructional Technology at the CTL. During the assessment stage, I worked closely with Melissa Wright, the Associate Director of Assessment and Evaluation at the CTL, and Megan Goldring, a Ph.D. student from the Department of Psychology and a CTL Teaching Assessment Fellow. Achraf Bahamou, a Ph.D. student from the IEOR Department, played an important role in implementing the auto-grading features. He also worked closely with me in determining the rubrics for the practice questions and analyzing the assessment results. On the software side, we first experimented with and then adopted EdStem, a digital learning platform. It included an auto-grading feature that was specifically designed to work in conjunction with Jupyter Notebooks. EdStem also allowed us to seamlessly integrate all the course content and the discussion board on one platform. The platform was unique in that almost all its tools, like the discussion board, were built to run code right in the feature itself. The platform co-founder, Scott Maxwell, was very open to experimenting with different features with us and offered great technical support, often updating the platform almost immediately to suit our needs.

Did you encounter any design or teaching challenges throughout the implementation?

First, there was an upfront cost of learning all the platform features and restructuring all the existing materials into Jupyter Notebook format so that they would also be well presented on the EdStem platform. This was a time-consuming process for the first iteration. However, the restructured course content ended up much more organized and easier to follow from the learner's perspective. The second challenge came from the actual implementation of our new tool. Bringing the immediate feedback system into the lectures was a learning experience for all the stakeholders. During our first experiment, we received mixed feedback from the students. Based on the student feedback we collected from the post-lesson survey, we were able to improve the design and the execution of the intervention.
As we moved to the later stages of the course, students became more comfortable with the auto-grading features and gained a great appreciation for the intervention.

How did you measure the success of this intervention?

We adopted an A/B testing approach by randomly assigning the students to two groups: a treatment group and a control group. Students in the treatment group received the intervention, while students in the control group did not. To be fair to the students, we flipped who was in the treatment and control groups in the following lesson. In total, this experiment was conducted in two sets of two lessons (four lessons in all) at different stages of the course. After each experiment, we used subjective and objective measurements to evaluate the success of the intervention.
- Subjective measurement: After each exercise, students filled in a short survey hosted on Qualtrics. The goal was to collect their opinions about the problems so they could reflect on their experience with the auto-grading features, and on the impact of the intervention on their learning (how confident they felt about the subject matter). Collecting the subjective feedback allowed us to see how the students reacted to the intervention and helped us improve our work from the learner perspective.
- Objective measurement: We compared student scores on the developed questions to see whether there was any significant difference between students in the control group and the treatment group. This testing offered us an objective measurement of the outcome of the intervention.

Based on our assessment, students with access to the auto-grading features were more confident in the subject matter. Across two lectures, students were assigned to either normal Python-based feedback (NIF) or the auto-grading feedback tool (IF-AT), so that half of the students were in the NIF condition in the third lecture and half of the students were in the auto-grading condition in the fourth lecture. From our objective measurement, we observed that students with access to the auto-grading features received a grade that was 30-50% higher on average than those who did not. (All differences were statistically significant, with p-values < 0.015.) See the graph below.

For which type of classes do you see faculty being able to use this intervention? Do you have any advice for them?

Faculty members who are using the flipped-class teaching method or hoping to incorporate programming exercises in a classroom will find this intervention extremely useful. Creating active learning experiences for students has been a challenge for teaching and learning in large classes. Having a learning environment where students can get immediate feedback in the classroom can help the students better engage with the content and detect any misunderstandings early on. It also allows the instructors to collect valuable information from this system to help them improve their teaching.
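As a rough illustration of the checkpoint idea described above, a minimal Python sketch follows; the rubric structure, checkpoint functions, and feedback strings are invented for illustration and are not the course's actual script:

```python
# Illustrative sketch of checkpoint-based auto-grading.
# Rubric, checkpoints, and feedback text are hypothetical.

def check_sampling(submission):
    """One checkpoint: did the submission define a sample() function?"""
    return callable(submission.get("sample")), "Define a sample() function."

def check_mean(submission):
    """Another checkpoint: is the estimated mean close to the target?"""
    ok = abs(submission.get("mean", float("inf")) - 5.0) < 0.1
    return ok, "Your estimated mean is off; re-check your estimator."

RUBRIC = [("sampling", 2, check_sampling), ("mean", 3, check_mean)]

def grade(submission):
    """Run a submission through every checkpoint; return score and feedback."""
    score, feedback = 0, []
    for name, points, check in RUBRIC:
        passed, hint = check(submission)
        if passed:
            score += points
        else:
            feedback.append(f"[{name}] {hint}")
    return score, feedback

# Example: a student submission represented as a dict of results.
print(grade({"sample": lambda: 4.9, "mean": 5.05}))
```

The point of the design is the feedback loop: each failed checkpoint maps to an individualized hint, so students can resubmit immediately while the instructor watches aggregate scores.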
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474853.43/warc/CC-MAIN-20240229202522-20240229232522-00613.warc.gz
CC-MAIN-2024-10
9,975
26
https://surnam.es/biafra-surname
code
Surnames of the world

To learn about the Biafra surname is to learn about the people who probably share common origins and ancestors. That is one of the reasons it is normal for the surname Biafra to be more strongly represented in some countries of the world than in others. Here you can find out in which countries of the world there are more people with the surname Biafra.

The surname Biafra in the world

Globalization has meant that surnames spread far beyond their country of origin, so that it is possible to find African surnames in Europe or Indian surnames in Oceania. The same happens with Biafra, which, as you can confirm, is a surname that can be found in most countries of the world. In the same way, there are countries in which the density of people with the surname Biafra is clearly higher than elsewhere.

The map of the Biafra surname

The possibility of examining a world map showing which countries hold more Biafra helps us a lot. By focusing on a specific country on the map, we can see the exact number of people with the surname Biafra there, and so obtain precise information about all the Biafra that can currently be found in that country. All this also helps us to understand not only where the surname Biafra comes from, but also how the people who originally belonged to the family bearing the surname Biafra have moved and migrated. In the same manner, you can see in which places they have settled and grown, which explains why, if Biafra is our surname, it is interesting to know to which other countries of the world one of our ancestors may once have moved.

Countries with more Biafra in the world

If you look at it carefully, at apellidos.de we give you everything you need to obtain the real data on which countries have the highest number of people with the surname Biafra in the entire world. Moreover, you can see them in a very graphic way on our map, where the countries with the greatest number of people with the surname Biafra are painted in a stronger tone. In this way, and with a single glance, you can easily locate the countries in which Biafra is a common surname, as well as the countries in which Biafra is an unusual or non-existent surname.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510117.12/warc/CC-MAIN-20220516104933-20220516134933-00022.warc.gz
CC-MAIN-2022-21
2,499
8
https://forum.duolingo.com/comment/25545352/%EB%8B%B9%EC%8B%A0%EC%9D%98-%EC%84%A0%EB%AC%BC
code
Everywhere I've looked and everyone I've talked to says 당신 is a really rude word for anyone outside of loved ones like husband/wife, please remove this.

For anyone else coming here, I'd like to summarize the podcast's relevant points. When used in direct conversation, it's considered lowly, foreign, or tantamount to 'fighting words'. It is more about addressing someone without much regard or, at the least, improperly with respect to your relationship, as far as I can tell. You can, however, use it for translation purposes, when you need to indicate the "you" in the sentence. This comes up in the spoken form when you switch the topic of the sentence and need to directly refer to "you". So in other words, there is a specific but practical use there. A third use is in writing and music. Music is not directed at anyone, thus saying 당신 is totally fine; encouraged, even, it seems, given how frequent it is. For writing, I understand it as how any language is more specific and more formal when written (99% of the time, anyway), so that would explain why it is allowed. Another use is between two intimate parties (spouses was the specific example), but it was given with the caveat that it is mostly used by the middle-aged and older, giving me the impression that it is a bit more antiquated and proper; but even then it is sort of rare and not incredibly common among all people. Finally, you can use it as a form of an honorific to refer to someone respectfully in the form of he/she when that someone is not there. So it seems the biggest takeaway here is formality, and context. Use it in the right context, or when writing or translating; use it for your spouse; but don't use it with just anyone, and for those cases, default to 너 if you actually need the pronoun. Hope this helps!

To be a little more clear, it can be rude, but honestly it's not used a ton. I don't think you should censor knowledge, just add the (very informal) caveat. Besides, it's completely fine to use with those close to you. It'd be weird to treat friends like the equivalent of "sir".

Uh. Either everyone you know is crazy or you're lying. 당신 is uncommon because mostly foreigners use it, being habituated to pronouns.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578551739.43/warc/CC-MAIN-20190422095521-20190422121521-00552.warc.gz
CC-MAIN-2019-18
2,224
12
https://www.robotuniproject.com/coder-update-2-camera-setup/
code
So last update on Robot University I promised a post describing the camera rig setup once I had it working. That was a few weeks ago. I only just got it working. Almost. It has turned out to be a greater challenge than I anticipated. Here's a recap of the situation. The wall at The Cube that will be displaying Robot University is made up of 12 touch screen panels, set up as linked pairs controlled by a single PC called a node. The panels are in portrait orientation, side by side along the bottom of the wall. The top of the wall is three projectors being handled by another node. This node handles the image compositing, so thankfully I don't need to worry about combining and overlapping three projectors; I can simply treat this as a good old fashioned single machine with some sort of crazy monitor running a 5360 x 1114 resolution. Now my initial thought was that each node would have a camera in the scene that it would set up based on its node number (or I build all the cameras in the scene and each node deletes the ones it doesn't need based on its node number), so I immediately started experimenting with that. Since this is a single scene that basically needs to look like you're looking straight into it no matter where on the wall you are standing, my first thought was cameras side by side, and just figure out how to stitch them together. But that doesn't work because of the shape of a perspective frustum and issues with camera angle. Let me show you. Now as you can see, this image is actually referring to how our eyes process an image, but it effectively shows how two perspective cameras side by side aren't going to see an object from the same angle. In the case of our eyes, the brain compensates, but programmatically these obviously aren't going to stitch together. Especially not when there are six of them in parallel. So my next thought was a fan of cameras. This would line up the edges of the frustums so there are no gaps between them, essentially creating one really wide frustum. This works, as you can see in the image, but you'll notice it creates a view just like a panoramic photo on your phone (since it's done in exactly the same way). This is because you are looking at the scene from a single point of view, so as you point the camera to the far left and right you are much further from the scene than you are in the middle, so you get a bowed look to it. You could compensate for this physically and wrap the scene in a crescent shape around the camera position. But then the whole team has to keep factoring that in and thinking in that space. It would be better to find a solution that allowed us to build the scene as you would expect it to exist, in a straight line. So it took me a lot of playing around, but eventually I managed to sit down and put the problem into words. What, exactly, was I trying to achieve here? I want to look at this scene through a single camera, but only display parts of that view on each node. So instead of multiple cameras being stitched together to show the scene, one camera being cut up into the node spaces. I was sure such a thing would be possible, but I'd never heard of it being done, nor could I think of a gameplay reason to do it, so I wasn't expecting to find a solution out there for me; I was expecting to have to spend a lot of time down the matrix math rabbit hole. I was wrong. It's apparently a common technique (though I still can't think of a use for it outside of this rather unique project). It's called a Scissor.
Unity already has built-in functionality for controlling the view rect; that is, you can easily render the camera's entire frustum into just a subsection of the screen space. This I can think of a use for. Mini maps are the first that come to mind; rear view mirrors are another I thought of. I understand they are normally done with shaders, but you could potentially treat it like a reverse camera in a car and render it onto just a section of the screen space. This wouldn't work if you wanted it to look pretty and have it on the surface of a model of the mirror in the car. But in the early racing games that had this feature it was just a little rectangle at the top of the screen. Anyway, this is easy to do. But I didn't want that. I needed to render only a subsection of the camera's view to all of its screen space. Essentially the reverse. Traditionally a Scissor cut on the camera is used to render only that subsection of the view into that subsection of the screen. So it was similar to the View rect in Unity, except it didn't render the whole camera view, only what fits into the new view rect. I was lucky enough to find someone sharing a little script that did the scissor for me. I just had to modify it slightly so that it would render the cut to the whole screen. A single line of code. I haven't had time to completely deconstruct this code and figure out how it's doing it, but there is a warning that one of the values isn't being used. So I will eventually take a look at that. But it works, so I'll share it here.

```csharp
using UnityEngine;

public class Scissor : MonoBehaviour
{
    public Rect scissorRect = new Rect(0, 0, 1, 1);
    public Rect viewRect = new Rect(0, 0, 1, 1);

    public static void SetScissorRect(Camera cam, Rect r, Rect view)
    {
        // Clamp the scissor rect to the 0..1 range.
        if (r.x < 0) { r.width += r.x; r.x = 0; }
        if (r.y < 0) { r.height += r.y; r.y = 0; }
        r.width = Mathf.Min(1 - r.x, r.width);
        r.height = Mathf.Min(1 - r.y, r.height);

        // Grab the projection matrix of the full, uncut view.
        cam.rect = new Rect(0, 0, 1, 1);
        Matrix4x4 m = cam.projectionMatrix;
        cam.rect = r;

        // Note: m1 is never used -- this is the unused-value warning
        // mentioned above.
        Matrix4x4 m1 = Matrix4x4.TRS(
            new Vector3(r.x, r.y, 0),
            Quaternion.identity,
            new Vector3(r.width, r.height, 1));
        Matrix4x4 m2 = Matrix4x4.TRS(
            new Vector3(1 / r.width - 1, 1 / r.height - 1, 0),
            Quaternion.identity,
            new Vector3(1 / r.width, 1 / r.height, 1));
        Matrix4x4 m3 = Matrix4x4.TRS(
            new Vector3(-r.x * 2 / r.width, -r.y * 2 / r.height, 0),
            Quaternion.identity,
            Vector3.one);
        cam.projectionMatrix = m3 * m2 * m;

        // The one added line: render the scissored view into the
        // supplied view rect instead of into the cut itself.
        cam.rect = view;
    }

    void OnPreRender()
    {
        // `camera` is the legacy MonoBehaviour shorthand for the
        // attached Camera component (pre-Unity 5).
        SetScissorRect(camera, scissorRect, viewRect);
    }
}
```

The only thing I added here is the final line in the SetScissorRect() function. This renders the scissor cut to a different view rect (in my case I'm always using 0,0,1,1, which is the full screen). So, now I could split my camera view the way I needed to. I just needed to make it a little data-driven to calculate the size of the scissor cuts. Since the test wall we use has only 4 touch screens and a smaller projector, I wanted code that would work on both. So I set it up to read a little text config file that told the program which node it was and how many nodes there are in total. If the node number was the last one (so if it was the same as the number of nodes) it would assume it was the projector node. Otherwise it would use the numbers to calculate how wide its cut was and where along the bottom it sat. This worked a treat. After some trial and error I eventually realised (it should not have taken me so long) that the aspects of the cameras all had to be the same.
So whatever the aspects were on the bottom, that's the aspect the top needed to be to line up. Now we are at the last little problem to solve. Each touch screen has a frame on it; the actual rendering screen doesn't go all the way to the edge. But the view into the world assumes otherwise. This isn't a big deal on its own; the human eye can ignore it without even being aware of it. But the projector space has none of that. So anything that needs to line up where two touch panels meet and the top (like the very large robot in the centre) won't line up properly, because the pixels are next to each other physically in the projector space but separated by the frames of the screens on the bottom. So I need to calculate the world space taken up by these frames and trim it off the scissor rects. This will mean some of the world is hidden behind the frames of the screens, but that fits the world we are creating, since these are supposed to be windows into the scene. Plus it's necessary to keep things lined up above and below. The final step for the camera setup to be called working was getting the networking set up so something could pass from one screen to another. Since technically they are all showing different, but identical, scenes, nothing could move across the environment. The main one for that will again be the large robot in the middle. Any animations he has need to be in sync across all nodes or it just won't work. Luckily Unity's networking is pretty easy to set up for something as simple as this. I wrote a little script that has a list of objects for each node. Anything in the scene that needs to sync across the network will be assigned to a node (to spread the load) and that node will instantiate them at load up. All the other nodes then get one created on their end automatically by the network library built into Unity. Another little script checks to see if the object is owned by this node or not and deactivates any logic scripts if it isn't, so the only instance that's doing anything is the one on the node that owns it. That worked very nicely and much quicker than I had anticipated (networking is not my strongest coder discipline), which is good since the camera setup took longer than I had hoped. All in all, a lot of the core functionality for the setup is in. I have touch working to an extent, though the TUIO protocol isn't super clean or easy to understand when you're used to Unity's built-in Input handling for touch on mobile. I'll do another post on that shortly. The only final note I'll make is about the data loading. As I said, I was using a text file to load in essentially two numbers that were used to define everything. This wasn't the most convenient way to do it, since each file had to have the same name (the code loaded the file based on a path and file name), so putting the exe on each machine also meant manually dropping a text file on each and making sure you got the right one. So a bit of digging showed you could use command line arguments. .NET gives very simple access to these. So we could run the exe from the command line with some extra arguments for the numbers. We were already running it from the command line to use the -popupwindow argument that creates a window without a frame (this was needed to get fullscreen across two touch screens, since fullscreen defaulted to one of the two as Windows treated it like an extended desktop over two monitors).
So by creating shortcuts to a single network location, and modifying the path of each shortcut to include the extra arguments, we could set up each machine to run perfectly each time. All I have to do is replace the files on the network with a new version, and when I run the shortcut on each machine it's running the new code with its own arguments already set up. Made my testing process much quicker. If anyone is interested in seeing how to set up command line arguments I'm happy to do a very quick little post on it; it's not difficult. That's all for now. Next post I'll talk a bit about touch input. I want to wait till I have some placeholder UI in so I can talk about interacting with it, since it's not as straightforward as I would have liked. Hope you found that interesting,
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00707.warc.gz
CC-MAIN-2023-06
11,332
35
https://beautyschoolmakeup.com/2019/04/27/gigi-gorgeous-amy-pham-try-on-wigs-%F0%9F%92%81%F0%9F%8F%BC-tries-it-trl/
code
Gigi Gorgeous & Amy Pham Try on Wigs 💁🏼 | Tries It | TRL Amy Pham and Gigi Gorgeous bond over their love of wigs at a hair shop in Los Angeles. #TRL #GigiGorgeous #AmyPham #MTV Subscribe to #TRL: https://goo.gl/GzVEu1 More from TRL: Like TRL: https://www.facebook.com/trl/ Follow TRL: https://twitter.com/TRL TRL Instagram: https://www.instagram.com/trl/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00059.warc.gz
CC-MAIN-2020-24
360
8
http://forums.theregister.co.uk/forum/1/2009/09/03/ifa_toshiba_winmo_update/
code
Toshiba is revamping its TG01 Windows Mobile smartphone. Well, sort of. The hardware's staying the same so far as we can judge - it's just getting a Windows Mobile 6.5 upgrade. Still, that's enough for Toshiba to claim the update as a "new" model: the TG01 Windows Phone. Toshiba's TG01: will get Windows Mobile 6.5 … Excellent work. That should help the hackers get downloadable WM6.5 ROMs out for anybody to upgrade from WM6. Of course, Microsoft could then sue for copyright infringement, but they'd be muppets so to do. If they want to keep people on Windows Mobile they'd be well advised to let people maintain their interest. It wouldn't make sense to distribute it free, naturally, since consumers would delay buying sparkly new phones, but stemming the flow to AN Other mobile OS has to be the top priority, even if it costs a bit in the short term. I've had WinMo 6.5 for a while. It's a Zune-like GUI, with really not much added functionality, at least on the version I got.
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835505.87/warc/CC-MAIN-20140820021355-00145-ip-10-180-136-8.ec2.internal.warc.gz
CC-MAIN-2014-35
1,360
10
https://en.everybodywiki.com/Twisted_Tales_(book_series)
code
Twisted Tales (book series)
Like Top Ten (book series), this spin-off of Horrible Histories takes a look at many twisted tales and crazy stories that we never thought could be true. This series features a range of authors and illustrators. The books in the series are:
- Twisted Tales: Greek Legends
- Twisted Tales: Irish Legends
- Twisted Tales: Bible Stories
- Twisted Tales: Horror Stories
- Twisted Tales: Arthurian Legends
- Twisted Tales: Ghost Stories
- Twisted Tales: Shakespeare Stories
The book series was later republished as Top Ten (book series).
Other articles on the topic Children's literature: Horribly Famous, Witch & Wizard: The Fire, Master Crook's Crime Academy, Stan Lee (Judge Dredd), Adele Broadbent, Tony De Saulles, Freddy Fox
This article "Twisted Tales (book series)" is from Wikipedia; the list of its authors can be seen in its history.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347411862.59/warc/CC-MAIN-20200531053947-20200531083947-00037.warc.gz
CC-MAIN-2020-24
1,318
15
http://www.mostwatchedtoday.com/sht-asian-moms-say/
code
[NSFW - Language] It seems the ‘Sh*t _____ Say’ parody trend will never end. Reminds us of the several hundred Nyan Cat videos out there. The ‘Sh*t _____ Say’ videos have covered friends, family members, cities and celebrities; it looks like this trend will be with us for a while longer.
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657104119.19/warc/CC-MAIN-20140914011144-00098-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
CC-MAIN-2014-41
669
9
https://globalgamejam.org/2021/games/tonno-cato-9
code
2D Platformer about a Cat lost in the City. Follow the Tuna Trail to find your way back home!
Tools and Technologies: Blender + Gimp for all Models and Textures
Micha - Programming; Game Design
Emi - Assets / Models; Game Design
Ada - Music; Sound Design
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646652.16/warc/CC-MAIN-20230610233020-20230611023020-00406.warc.gz
CC-MAIN-2023-23
255
6
https://slashdot.org/journal/74355/fautly-connection
code
My first clues that something was going wrong started several months ago. Our ISP always disconnects us after 120 minutes without fail, as a matter of policy. Very occasionally, when reconnecting, I'd find the new connection didn't seem to work right somehow, but I wasn't sure what was going on. I assumed it was the p2p client I was using at the time, not always being able to deal with the change in IP address. Since then, early this year, I realised when using more conventional internet stuff that I wasn't able to resolve any domain names. I looked further into it, and found the problem seemed to be that I was getting connections that somehow didn't have any routes defined. It was working such that when I gave the "route" command, it'd just sit there indefinitely until I killed it! The routeless state remained until I disconnected. Reconnecting again, it seemed to work. But I soon found that the screw-ups became more and more frequent. Now, I'm inclined to blame Freeserve (hock-ptui!) as they clearly changed something that made shit break, but my parents' Windows machines don't seem affected, and it's pretty obvious the route command shouldn't be acting that way. If there's no route, it should be less shit about it. Although of course the route command is just a symptom. Probably. Well, I'm getting a bit desperate now, because just now when I tried to get online it took me about TEN TIMES of "connect, test route, disconnect, wait a few minutes, start again" before I got a working connection, and I don't see it getting any better. A few weeks ago I tried upgrading the ppp packages to the ones from Debian/Testing, figuring there might have been a fix, but it made no difference. Does anybody know where this comes from? (apologies for unanswered posts, I've not slept since, uh, some time yesterday, I don't know when, and I just had to rake the lawn this afternoon in the baking heat, and then I ended up stupidly double-posting what was meant to be a quick message in e2.com, and embarrassing the crap out of myself, then getting incredibly confused with opening too many browser windows trying to do I don't know what. What am I saying? Don't know... I'm saying I'm worn out and I'll be back tomorrow probably when I'm capable of writing more, and thinking, yes that's handy. I'm only writing this because, well, what I said before. Must sleep now. Bye) (PS Apology for lack of coherence too. I'm sure you get the picture)
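A likely culprit for the hanging route command is its reverse-DNS lookups: with DNS broken on the new connection, every entry blocks on a lookup (route -n skips the lookups and returns immediately). For the "test route" step of the reconnect loop, reading the kernel's routing table directly avoids the lookups entirely; a minimal Python sketch, assuming a standard Linux /proc layout:

```python
# Check for a default route by reading /proc/net/route directly,
# avoiding the DNS lookups that can make a bare `route` hang.
# Illustrative only; assumes the standard Linux /proc table format.
def has_default_route():
    with open("/proc/net/route") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            # Destination is the second field, hex-encoded;
            # all zeros means the default (0.0.0.0) route.
            if len(fields) > 1 and fields[1] == "00000000":
                return True
    return False

if __name__ == "__main__":
    print("default route present:", has_default_route())
```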
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864544.25/warc/CC-MAIN-20180521200606-20180521220606-00406.warc.gz
CC-MAIN-2018-22
2,452
6
https://www.npmjs.com/package/ls-stream
code
readable stream of file paths + stat objects

a readable stream of file and directory paths and entries.

```js
var ls = require('ls-stream')

ls('.git').on('data', console.log.bind(console))
```

ls(path[, fs])

create a readable stream of entry objects. will start emitting data on next tick unless paused. users may optionally provide their own fs object if native fs is not available for whatever reason (e.g., in browser).

entries look like:

  path: "path/to/file-or-dir"
  stat: fsStat object

entry.ignore([bool])

If called on the same event loop turn as the event is received, prevents recursing into this directory (or is a no-op if the entry represents a file). Optionally takes a single argument, which defaults to true, to set the ignored state.

```js
var through = require('through')
var ls = require('ls-stream')

ls('/path').pipe(through(function (entry) {
  console.log(entry.path)
  if (entry.path == "/path/something") {
    // if we see "/path/something" *don't* list files
    // and dirs that it contains.
    entry.ignore()
  }
}))
```

Warning: As aforementioned, this only works if the entry is ignored on the same event loop turn. For example, the following code would fail to ignore the given entry:

```js
// WARNING: this will not work:
ls('/path').pipe(through(function (entry) {
  // by the time we tell the entry that it
  // should be ignored, `ls` has already
  // recursed into it!
  setTimeout(entry.ignore, 0)
}))
```
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051165777.47/warc/CC-MAIN-20160524005245-00116-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
1,208
18
https://www.zophar.net/forums/index.php?threads/tutorial-using-a-debugger-to-figure-out-game-mechanics.13521/page-2
code
If I had to guess, I would bet that there's an array that stores the learned abilities. By getting the IDs of the abilities, you may have some luck searching for the arrays.

Thanks for the tutorial, I'm just learning about this for my hack, which, funnily enough, is BoF4. I've been trying for days but no luck... I want to be able to change things like which ability the characters get when they level up (or even when they get it). I have the address for when they level up; these go to 0 when you enter another combat. If anyone has any ideas or tips about how to achieve something like this I would appreciate it a lot.

If not, you would have to set a write breakpoint on the character's inventory. If you'd like, my YouTube channel has a Discord server with a ROM hacking discussion channel. I try to respond to questions as much as I can, and maybe if I have time I'll fire up the debugger and help you out, but it's been years since I've done a deep dive into MIPS assembly.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103920118.49/warc/CC-MAIN-20220701034437-20220701064437-00162.warc.gz
CC-MAIN-2022-27
990
5
https://oak.ulsan.ac.kr/handle/2021.oak/5760
code
A Study on Label Noise and Image Size for Improving the Performance of AI Classification Models Using Medical Images
- Deep learning, a cutting-edge paradigm for machine learning, has accelerated the development of medical artificial intelligence across imaging modalities. Today, many studies on various imaging modalities are based on deep learning algorithms. Among deep learning algorithms, the convolutional neural network (CNN) is the major tool for studying images and videos. Medicine is a distinctive domain in which to apply deep learning methods. Medical images differ from common images, as they are stored in the Digital Imaging and Communications in Medicine (DICOM) format. Common images are based on 8-bit formats, such as Portable Network Graphics (PNG) or Joint Photographic Experts Group (JPEG), while medical DICOM images use bit depths of 8 bits or higher, for example 12-bit or 16-bit. Furthermore, their acquisition protocols and imaging contrast mechanisms differ from those of natural images. Natural images usually contain objects near the center of the image; in DICOM images, by contrast, the region of interest (ROI) can be in any spot and of any size. For example, a lung nodule on a chest X-ray (CXR) can be located in the upper, lower, or middle area of the lung; literally anywhere. Also, it can have a sharp margin, a spiculated margin, or a vague margin. Therefore, the deep learning training strategy may, or should, differ from that for natural images. In this study, we considered how to train medical artificial intelligence efficiently, from the perspectives of robust learning and image size in CXR. Numerous factors affect model performance: from label accuracy and matrix size to model selection and dataset size, every factor matters. In this paper, however, we experimented only with label noise and matrix size, which are considered the most basic factors when constructing a dataset and feeding image data to a network. From the perspective of robust learning, it is common sense for artificial intelligence researchers to acquire clean and accurate labels. In many fields there is even a proverb, "garbage in, garbage out", abbreviated as GIGO. Therefore, we investigated how the accuracy of a deep learning model depends on the degree of dataset distillation. We collected CT-confirmed CXR datasets in which the interval between a CT image and its corresponding CXR image is within 7 days. As the CXR images are CT-confirmed, the CXR labels can be considered highly credible. To analyze the effect of label accuracy, we randomly flipped labels at given ratios; that is, we randomly converted labels from normal to abnormal, and abnormal to normal, at rates of 0%, 1%, 2%, 4%, 8%, 16%, and 32%, and analyzed the area under the receiver operating characteristic curve (AUROC). There was a statistically significant difference from the 0% baseline of our collected dataset at noise rates from 2% through 32%. This means the CNN model is highly sensitive to label noise. Furthermore, we ran the same experiment on public datasets, from the National Institutes of Health (NIH) and the Stanford CheXpert dataset, and the results showed these public datasets endured label noise up to 16%. This result has two possible interpretations: (1) CNNs are sensitive to label noise, and the public datasets appear to endure label noise because they already contain label noise to some extent. (2) CNNs themselves are robust to label noise, yet for some reason the CNN model on our dataset seems to be sensitive to it.
To distinguish these two possibilities, we randomly selected images from each public dataset, and one radiologist with more than 10 years of experience visually confirmed whether the images were correctly labeled. The visual scoring found around 20-30% incorrect labels. Therefore, we could conclude that interpretation (1) is correct. Regarding matrix size, we investigated the optimal input matrix size for deep learning-based computer-aided diagnosis (CAD) of nodules and masses on chest radiographs. Detection and classification models were tested with various matrix sizes (256, 448, 896, 1344, 1792) to find the optimum. We trained two networks for detection and one network for classification. In the two detection networks, the optimal matrix sizes were 896 and 1344, and 896, respectively. In the classification network, the optimal matrix size was 896. Thus, we can conclude that a matrix size of around 1000 is optimal for training on medical image data. This is consistent with the fact that many deep learning studies use a matrix size of around 1024. To summarize, in this paper we analyzed two factors for increasing model performance in medical imaging artificial intelligence. The first is label noise, with the conclusion that the more accurate the dataset, the higher the performance. The second is matrix size, with the conclusion that a matrix size of around 1000 is best for detection and classification tasks.
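The label-flipping procedure described above can be sketched as follows; this is an illustrative reconstruction with synthetic data, not the code used in the thesis:

```python
# Sketch of symmetric label-noise injection: flip a given fraction of
# binary labels (normal <-> abnormal), as in the experiment above.
import numpy as np

def inject_label_noise(labels, noise_rate, seed=0):
    """Return a copy of 0/1 `labels` with `noise_rate` of them flipped."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n_flip = int(round(noise_rate * len(noisy)))
    idx = rng.choice(len(noisy), size=n_flip, replace=False)
    noisy[idx] = 1 - noisy[idx]  # normal -> abnormal and vice versa
    return noisy

labels = np.zeros(1000, dtype=int)            # a toy all-"normal" label set
for rate in [0.0, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32]:
    noisy = inject_label_noise(labels, rate)
    print(rate, (noisy != labels).mean())     # observed flip fraction
```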
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00546.warc.gz
CC-MAIN-2023-14
5,277
13
https://www.bandicam.com/forum/viewtopic.php?f=10&t=9753&p=27101&sid=5e9dbbede6c46504ad57cc514f549e7b
code
Yes, you can record the screen with the PC sound, your voice from the microphone, and your face from the webcam at the same time, in a separate file for each. Please refer to the links below and try the functions with the free version. 1. Enable the "Save webcam video as separate file (.mp4)" option. 2. See "How to record computer and microphone sound separately" and enable the "Save audio tracks while recording (.wav)" option.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150067.87/warc/CC-MAIN-20210724001211-20210724031211-00065.warc.gz
CC-MAIN-2021-31
439
4
https://github.com/tonytran
code
A mobile app that recognizes if daily food products contain things you are allergic to
your personal fitness fan
twitter sentiments ayyy
This project is an app that crowd-funds for the transportation of homeless people through homeless shelters.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742981.53/warc/CC-MAIN-20181116070420-20181116092420-00314.warc.gz
CC-MAIN-2018-47
567
10
https://www.sarkarijobalert.org/date/2009/07
code
I am trying to use Spring and ws-xmlrpc together. The problem is that XmlRpcClient has a setConfig() method that doesn't follow the JavaBean spec: the setter and the getter don't use the same class. So Spring complains when I have the following context.xml:

<bean id="xmlRpcClient" class="org.apache.xmlrpc.client.XmlRpcClient">
  <property name="config">
    <bean class="org.apache.xmlrpc.client.XmlRpcClientConfigImpl">
      <property name="serverURL" value="http://example.net" />
    </bean>
  </property>
</bean>

It says: Bean property 'config' is not writable or has an invalid setter method. Does the parameter type of the setter match the return type of the getter?

Is there a way to override that? I know I could write a specific factory for this bean, but it seems to me that this is not the last time I will run into this kind of problem. I work a lot with legacy code of dubious quality... Being able to use Spring XML configuration with it would be a great help!

Write a FactoryBean for that class and have it call the correct setter.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00382.warc.gz
CC-MAIN-2022-21
1,030
5
http://www.christiano.ch/wordpress/category/microsoft/microsoft_windows_server_2008_r2/
code
Time synchronization is an important aspect for all computers on the network. By default, client computers get their time from a Domain Controller, and the Domain Controller gets its time from the domain's PDC Operations Master. Therefore the PDC must synchronize its time from an external source. I usually use the servers listed at the NTP Pool Project website.

Command Line version of Server Manager in Windows Server 2008 R2

Today I was using "ServerManagerCmd.exe" on a Microsoft Windows Server 2008 R2 machine. When I executed it I saw the following informational message: "Servermanagercmd.exe is deprecated, and is not guaranteed to be supported in future releases of Windows. We recommend that you use the Windows PowerShell cmdlets that are available for Server Manager."

Unattended deployment of Windows 7 and Windows Server 2008 R2 with Microsoft Deployment Toolkit (MDT) 2010 RC (Release Candidate). MDT 2010 is the next version of the Microsoft Deployment Toolkit, a Solution Accelerator for unattended / automated operating system and application deployment. Look for new features in MDT 2010, including flexible driver management, an optimized transaction process, and access to distribution shares from any location, to simplify deployment and make your job easier. Save time and money when you deploy faster and easier with MDT 2010.

Microsoft Windows Server 2008 R2 Trial Software is now available for download. The Microsoft Windows Server 2008 R2 Evaluation is available via two separate downloads:
- a compilation edition (Standard, Enterprise, Datacenter, and Web)
- an individual Itanium edition.
Just choose the version that fits your requirements and download the evaluation. Need help determining which version is the best fit for you?

Missing the classic telnet.exe (Telnet Client) on Microsoft Windows Vista, Windows 7, Windows Server 2008, or Windows Server 2008 R2? Microsoft removed the Telnet Client starting with Microsoft Windows Vista / Windows Server 2008. To get it back, install it with pkgmgr on the client operating systems and ServerManagerCmd on the server operating systems.

The Hyper-V role in Windows Server 2008 R2 provides you with the tools and services you can use to create a virtualized server computing environment. This virtualized environment can be used to address a variety of business goals aimed at improving efficiency and reducing costs. This type of environment is useful because you can create and manage virtual machines, which allows you to run multiple operating systems on one physical computer and isolate the operating systems from each other. According to the release notes of Microsoft Windows Server 2008 R2, the Hyper-V role now fully supports Live Migration of virtual machines using the failover clustering role, added and configured on the servers running Hyper-V.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371662966.69/warc/CC-MAIN-20200406231617-20200407022117-00179.warc.gz
CC-MAIN-2020-16
2,836
13
https://www.notion.so/Traction-Channels-0dc45decc57840a585180c2856c344fc
code
<aside> 👉 Review common Traction Channels & Tactics in the first table. Then score them on a scale of 1-10 with the Prioritizer table. Ability: How proficient are you in this channel/media? Cost: How costly is this channel, per acquisition? Reach: How good is this channel at generating impressions? Half-life: How long does each product of effort last? (e.g. An SEO article survives far longer than a Tweet) Impact: How potent is this channel per impression? i.e. How effective is it at converting impressions to actions? Timeline: Over what period should you expect results? Score: A sum of the individual score ratings (Cost is a negative). <aside> 🎯 Next, you can apply the Bullseye Framework to identify your Top 3 (’What’s Working’), Next 6 (’What’s Probable’), and the rest (’What’s Possible’). Traction Channels [Component] Built in Flotion.
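A minimal sketch of the Prioritizer arithmetic described above, assuming the Score is simply the sum of the ratings with Cost subtracted; the channels and numbers here are invented placeholders:

```python
# Prioritizer score per the rubric above: sum of ratings, Cost negative.
# Example channels and ratings are invented for illustration.
channels = {
    "SEO": {"Ability": 7, "Cost": 3, "Reach": 8, "Half-life": 9, "Impact": 6, "Timeline": 4},
    "Ads": {"Ability": 5, "Cost": 8, "Reach": 9, "Half-life": 2, "Impact": 7, "Timeline": 9},
}

def score(ratings):
    """Sum every rating except Cost, then subtract Cost."""
    return sum(v for k, v in ratings.items() if k != "Cost") - ratings["Cost"]

# Rank channels from strongest to weakest for the Bullseye buckets.
for name, ratings in sorted(channels.items(), key=lambda kv: -score(kv[1])):
    print(name, score(ratings))
```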
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00293.warc.gz
CC-MAIN-2022-27
873
11
https://wow-whats-this.com/forums/topic/buy-vantin-indiana-farmacia-vantin-crocco/
code
One of the most famous and effective medicines ever! MED-TOP.NET – The top online pharmacies!!!!!! Best Deals Online!!! Random Internet Quotes: Compared to explore this, have questions, as medication the cribs for. Can use the walking distance from the arrive at checkout. Will lose most suited pharma brain you might like free money or in overseas markets a precursor to assess and their steroids. They what younger these practice settings include a period of over 190 medications, to third parties in your conversions giving you a combination of providing insurance for all of our clients of major warm periods similar to cure muscle scars without prescription filling process as my best ways of generic pain capsules image gabapentin for elderly people who explains that savings is a few clicks. Gosh i couldnt get introduced with e-bill express website. Some pharmacists the group is having this portion control is the molecule using. Risk/benefit ratios, until it sunday your expertise with their store. Scratching is made available to you can call a national social insurance program designed the nation and say they could make the academy,…
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107866404.1/warc/CC-MAIN-20201019203523-20201019233523-00702.warc.gz
CC-MAIN-2020-45
1,151
4
http://social.gl-como.it/profile/valhalla
code
After many days of failed attempts, yesterday @Diego Roversi finally managed to set up SPI on the BeagleBone White¹, and that means that today at our home it was Laptop Liberation Day! We took the spare X200, opened it, found the point we were at in the tutorial installing libreboot on x200, connected all of the proper cables on the clip³ and did some reading tests of the original bios. While the tutorial mentioned a very conservative setting (512 kHz), just for fun we tried to read it at different speeds; all results up to 16384 kHz were equal, with the first failure at 32768 kHz, so we settled on using 8192 kHz. Then it was time to customize our libreboot image with the right MAC address, and that's when we realized that the sheet of paper where we had written it down the last time had been put in a safe place… somewhere… Luckily we had also taken a picture, and that was easier to find, so we checked the keyboard map², followed the instructions to customize the image, flashed the chip, partially reassembled the laptop, started it up and… a black screen, some fan noise and nothing else. We tried to reflash the chip (nothing was changed), tried the us keyboard image, in case it was the better tested one (same results) and reflashed the original bios, just to check that the laptop was still working (it was). It was lunchtime, so we stopped our attempts. As soon as we started eating, however, we realized that this laptop came with 3GB of RAM, and that surely meant "no matching pairs of RAM", so just after lunch we reflashed the first image, removed one dimm, rebooted and finally saw a gnu-hugging penguin! We then tried booting some random live usb key we had around (it failed the first time, worked the second and subsequent times with no changes), and then proceeded to install Debian. Running the installer required some attempts and a bit of duckduckgoing: parsing the isolinux / grub configurations from the libreboot menu didn't work, but in the end it was as easy as going to the command line and running: From there on, it was the usual debian installation and a well-known environment, and there were no surprises. I've noticed that grub-coreboot is not installed (grub-pc is) and I want to investigate a bit, but rebooting worked out of the box with no issue. Next step will be liberating my own X200 laptop, and then if you are around the @Gruppo Linux Como area and need a 16 pin clip let us know and we may bring everything to one of the LUG meetings⁴ ¹ yes, white, and most of the instructions on the interwebz talk about the black, which is extremely similar to the white… except where it isn't ² wait? there are keyboard maps? doesn't everybody just use the us one regardless of what is printed on the keys? Do I *live* with somebody who doesn't? :D ³ the breadboard in the picture is only there for the power supply; the chip on it is a cheap SPI flash used to test SPI on the bone without risking the laptop :) ⁴ disclaimer: it worked for us. it may not work on *your* laptop. it may brick it. it may invoke a tentacled monster, it may bind your firstborn son to a life of servitude to some supernatural being. Whatever happens, it's not our fault.
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982293468.35/warc/CC-MAIN-20160823195813-00201-ip-10-153-172-175.ec2.internal.warc.gz
CC-MAIN-2016-36
3,203
22
http://www.giantbomb.com/forums/general-discussion-30/how-do-you-organize-your-steam-library-1425342/?page=2
code
- Indie games
- Old Games
- Everything Else

I only have around 75 games and less than half of those are installed, so I don't really need to worry about it. I just leave them in alphabetical order and either go right for the game I've been playing or, if I'm in the mood for something else, I peruse the selection.

I'm constantly trying to find the right system for me, but this is my current one.
Favourites - The games that I'm currently playing all the time. I hate trying to find them in my large list.
Backlog - The games I feel guilty for buying and never playing. I try to keep them up top in hopes that some day I will accidentally click on them and start playing.
Fast Steam - Sadly not all games can be installed on my second hard drive, so when I have to install a game on my SSD I want to keep them separate; if I get low on space I can easily find it.
Games - Everything else I'm too lazy to set in a group.

@jams: Nice, you made an id Software folder! That's always a sign of a good games collection. I don't have enough to really start putting them into categories, but the library keeps growing so I might have to soon. "Not sure why I own these" will probably be the main category.

I've decided to rebuild my categorization system on Steam. I'm going to categorize games by how likely I am to go on a shooting spree in a populated public place after playing them. Secret of the Magical Crystals, you're going right to the top of the "where's my body armor and assault rifle" category!

- Not installed

That's basically my style. I just show all the games I have installed, which at the moment is around 25, in alphabetical order. Once I stop being simultaneously busy and lazy, I will probably uninstall a couple that I'm "done with".

I separate (or did, anyway - need to catch up on it after last winter's sales) mine into three categories - Favorites, Played, and a general untitled one for games I'm either playing or will play. I'm going to have to start a fourth list for games I own but can't play on my laptop.
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765678.46/warc/CC-MAIN-20141217075245-00031-ip-10-231-17-201.ec2.internal.warc.gz
CC-MAIN-2014-52
2,037
15
https://mail.python.org/pipermail/tutor/2012-February/088465.html
code
[Tutor] Debugging While Loops for Control Luke Thomas Mergner lmergner at gmail.com Fri Feb 17 04:27:35 CET 2012 > Message: 1 > Date: Wed, 15 Feb 2012 23:57:08 -0500 > From: Luke Thomas Mergner <lmergner at gmail.com> > To: tutor at python.org > Subject: [Tutor] Debugging While Loops for Control > Message-ID: <A8BDF988-FE78-4CA1-8CB7-C0A0E68FDCB5 at gmail.com> > Content-Type: text/plain; charset=us-ascii > I've been translating and extending the Blackjack project from codeacademy.com into Python. My efforts so far are here: https://gist.github.com/1842131 > My problem is that I am using two functions that return True or False to determine whether the player receives another card. Because of the way it evaluates the while condition, it either prints too little information or previously called the hitMe() function too many times. I am assuming that I am misusing the while loop in this case. If so, is there an elegant alternative still running the functions at least once. > while ask_for_raw_input() AND is_the_score_over_21(): > Any advice or observations are appreciated, but please don't solve the whole puzzle for me at once! And no, not all the functionality of a real game is implemented. The code is pretty raw. I'm just a hobbyist trying to learn a few things in my spare time. > Thanks in advance. > Message: 2 > Date: Thu, 16 Feb 2012 09:05:39 +0000 > From: Alan Gauld <alan.gauld at btinternet.com> > To: tutor at python.org > Subject: Re: [Tutor] Debugging While Loops for Control > Message-ID: <jhigt3$jdp$1 at dough.gmane.org> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > On 16/02/12 04:57, Luke Thomas Mergner wrote: >> My problem is that I am using two functions that return True or False >> to determine whether the player receives another card. >> Because of the way it evaluates the while condition, it either >> prints too little information or previously called the hitMe() >> function too many times. >> I am assuming that I am misusing the while loop in this case. >> while ask_for_raw_input() AND is_the_score_over_21(): > I haven't looked at the code for the functions but going > by their names I'd suggest you need to reverse their order to > while is_the_score_over_21() and ask_for_raw_input(): > The reason is that the first function will always get called > but you (I think) only want to ask for, and give out, another > card if the score is over 21 (or should that maybe be > *under* 21?). > Personally I would never combine a test function with > an input one. Its kind of the other side of the rule that > says don't don;t put print statements inside logic functions. > In both cases its about separating himan interaction/display from > program logic. So I'd make the ask_for_raw_input jusat return a value(or > set of values) and create a new funtion to test > the result and use that one in the while loop. > Alan G > Author of the Learn to Program web site Alan (and list), Thanks for the advice. It at least points me to an answer: I'm trying to be too clever for my experience level. I am going to go back and incorporate your suggestions. In the meantime, and continuing my problem of over-cleverness, I was trying to rethink the program in classes. With the caveat that I'm only a few hours into this rethinking, I've added the code below. My question is: when I want to build in a "return self" into the Hand class, which is made up of the card class; how do I force a conversion from card object into integer object which is all the card class is really holding? 
Should I make the class Card inherit from Integers? Or is there a __repr__ def I don't understand yet? Bonus question: when I create the "def score(self)" in class Hand, should that be a generator? And if so, where do I go as a newb to understand generators? I'm really not understanding them yet. The "x for x in y:" syntax makes it harder to follow for learners, even if I appreciate brevity. Thanks in advance,

class Card(object):
    def __init__(self):
        self.score = self.deal()

    def deal(self):
        """deal a card from 1 to 52 and return its points"""
        return self.getValue(int(math.floor(random.uniform(1, 52))))

    def getValue(self, card):
        """Converts the values 1 - 52 into 1 - 13 and returns the
        correct blackjack score based on remainder."""
        if (card % 13 == 0 or card % 13 == 11 or card % 13 == 12):
            # Face Cards are 10 points
            return 10
        elif (card % 13 == 1):
            ...  # ace value (body not preserved in the archive)
        else:
            # Regular cards, return their value
            return card % 13

class Hand(object):
    def __init__(self):
        self.cards = []  # Add cards this way to avoid duplicates.
        for i in range(2):
            ...  # how do you sum(objects) ???

More information about the Tutor
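Not part of the archived thread, but one common answer to the "how do you sum(objects)?" question: rather than inheriting from int, give Card an __int__ method plus an __radd__ hook, and sum() just works. A minimal sketch, where this Card is a stripped-down stand-in for the one in the post:

    import math
    import random

    class Card(object):
        def __init__(self):
            # Same idea as the post: pick 1-52, reduce to a blackjack score.
            card = int(math.floor(random.uniform(1, 52)))
            rank = card % 13
            if rank in (0, 11, 12):      # face cards
                self.score = 10
            elif rank == 1:              # ace, counted as 11 here
                self.score = 11
            else:
                self.score = rank        # regular cards keep their value

        def __int__(self):
            # int(card) recovers the plain integer the object is holding.
            return self.score

        def __radd__(self, other):
            # sum() computes 0 + card first, which calls __radd__.
            return other + self.score

    hand = [Card() for _ in range(2)]
    print(sum(hand))                     # works via __radd__
    print(sum(int(c) for c in hand))     # generator-expression spelling

sum() starts from 0 and repeatedly evaluates total + card, so __radd__ is the only hook it needs; inheriting from int would also work, but it freezes the value at construction time, which fits a card object less well than an ordinary attribute.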
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863489.85/warc/CC-MAIN-20180620065936-20180620085936-00244.warc.gz
CC-MAIN-2018-26
4,523
67
https://community.articulate.com/discussions/articulate-storyline/installation-tiny-screen-font-size
code
Installation Tiny Screen & Font Size I downloaded Storyline 2 on a new laptop with Windows 10. The screen size is tiny -- maybe 4-6 pt font -- I could barely see the text as I entered my serial # to activate and launch the program. My Windows display is set to the recommended 3240 x 2160. The Articulate v2 system requirements show anything higher than 1,280 x 800 is recommended. (It does display correctly at 1280 x 800, but I'd rather not keep it there as I'm multi-tasking in other programs... and it's a new laptop, so I want to use the best display!) Thanks for the help
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00000.warc.gz
CC-MAIN-2022-33
571
4
http://lists.hamvoip.org/pipermail/arm-allstar/2016-April/003044.html
code
[arm-allstar] IMPORTANT README for Version 1.02 Beta RPi 2-3 image doug at crompton.com Fri Apr 22 00:14:47 EST 2016 I have added a README file for the 1.02 beta release for the RPi2-3 to the hamvoip.org web page. The link is just under the download link for the 1.02 image. The README also includes the information for adding WIFI on the RPi3. It is important to use the downloadable package for WIFI as described in this document, and it is important that you read this document if you are using the 1.02 code. This information had previously been published in this forum, but since many people apparently missed it I am including it on the web page. A direct link to the README is: -------------- next part -------------- An HTML attachment was scrubbed... More information about the arm-allstar
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494485.54/warc/CC-MAIN-20190220065052-20190220091052-00633.warc.gz
CC-MAIN-2019-09
797
10
http://sev17.com/2009/01/01/sqlservercentral-article-on-getting-data-out-of-sql-server-from-powershell
code
title: SQLServerCentral Article on Getting Data out of SQL Server from Powershell link: http://sev17.com/2009/01/01/sqlservercentral-article-on-getting-data-out-of-sql-server-from-powershell/ author: Chad Miller description: post_id: 9934 created: 2009/01/01 23:22:00 created_gmt: 2009/01/02 03:22:00 comment_status: open post_name: sqlservercentral-article-on-getting-data-out-of-sql-server-from-powershell status: publish post_type: post SQLServerCentral Article on Getting Data out of SQL Server from Powershell Using ADO.NET from Powershell has been covered by several articles, blog entries, and discussion group postings; however, alternative methods have not been as widely written about. The article, Getting Data out of SQL Server from Powershell, demonstrates three alternative methods to ADO.NET for querying SQL Server from Powershell. The approaches demonstrated are simpler than ADO.NET and include: - SMO ExecuteWithResults - SQL Server 2008 Powershell Cmdlet Invoke-SqlCmd - CodePlex SQL Server Powershell Extensions function Get-SqlData See the full article for details.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744561.78/warc/CC-MAIN-20181118180446-20181118202446-00355.warc.gz
CC-MAIN-2018-47
1,087
6
https://discourse.igniterealtime.org/t/smack-keep-alive-packet-delay-after-login-in/72262
code
I am developing an instant chat app on Android using the Smack API. I set the KeepAliveInterval to 2 seconds using the following code: However, I checked the log and found that the first keep-alive packet was not sent out until nearly 20 seconds after I logged in. I used Openfire as the server and set the idle disconnection threshold to 15 seconds. So my client was disconnected before the first keep-alive packet was sent out. The app I am developing requires real-time monitoring of user presence, so I cannot increase the idle disconnection threshold; the only option for me is to set the KeepAliveInterval to a small value. Can anyone help me with this? Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00109.warc.gz
CC-MAIN-2020-40
646
4
https://forums.macrumors.com/threads/itunes-10-library-has-suddenly-become-very-slow-and-buggy-corrupted.1541357/
code
Hi there, Yesterday I was importing a bunch of WAV files from Pro Tools into iTunes to add metadata and make compressed versions of the audio files. Somewhere in the process iTunes started acting extremely slow and unresponsive, and that's now how my iTunes acts all the time. If I just do very basic things like switch between playlists, or edit the metadata of a song, I'll get the spinning beach ball for 5-25 seconds, and it's extremely frustrating to use. I frequently have to force quit the program. Not sure how this happened, or how to fix it. My computer has plenty of RAM and processing power, and everything is solid state. I've checked the Activity Monitor and it says iTunes is using less than 5% of my CPU power, and around 280MB of RAM. I've tried restarting and soft resetting my computer, but no luck. Any ideas how to fix this? Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319992.22/warc/CC-MAIN-20170623031127-20170623051127-00305.warc.gz
CC-MAIN-2017-26
853
1
https://www.odoo.com/forum/help-1/question/unable-to-fetch-nightly-repository-46467
code
I am trying to do a clean install of OpenERP but I am not able to retrieve the repository. When running apt-get I am getting this error: "Failed to fetch http://nightly.openerp.com/7.0/nightly/deb/./Sources 404 Not Found". Will OpenERP install correctly, or is this a critical error, and if so how do I work around it?
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542217.43/warc/CC-MAIN-20161202170902-00016-ip-10-31-129-80.ec2.internal.warc.gz
CC-MAIN-2016-50
879
7
http://chem4823.usask.ca/nmr/pulse_program.html
code
The pulse program is the instruction set for the pulse programmer (obviously!). The appearance of the instructions will depend on the spectrometer manufacturer and each will have its own particular and unique capabilities. However, there are some basic things that the pulse program must be capable of doing ... it must be able to tell the spectrometer to issue an rf pulse on a particular channel, it must be able to tell the spectrometer to wait or delay for a specified period of time, etc. The examples below are written in the Brukerese pulse programming language but hopefully, if you are a Varian or JEOL user, you can glean some information from them. The first program is a simple pulse-acquire program:

1 ze
2 30m
  d1
  p1 ph1
  go=2 ph31
  30m mc #0 to 2 F0(zd)
exit

ph1=0 2 2 0 1 3 3 1
ph31=0 2 2 0 1 3 3 1

The first instruction, 'ze', instructs the pulse programmer to zero the data buffer's memory to prepare it for data acquisition and to set the scan counter to zero. Virtually every pulse program will begin with this instruction. The '1' in front of the 'ze' is a label (which isn't used in this program). A label can be used as a jump point for an instruction. The second instruction, '30m', is a delay instruction ... 30 milliseconds to be precise. I'm not quite sure why this is here ... it could probably be eliminated without any noticeable effect on the outcome of the experiment. The next instruction, 'd1', is the interpulse relaxation delay time. This is the time allowed for relaxation of the spin system back to equilibrium. Actually, the total relaxation time is the sum of 'AQ', the acquisition time, and 'd1', but 'd1' is generally referred to as the relaxation time. Following this is the instruction to issue a pulse on the transmitter, 'p1 ph1', using phase program 'ph1'. Pulses are usually issued with a set of phases associated with them in order to pick out the wanted signals from the acquired signals. You can see the explicit form of the phase program at the bottom of the program. The next instruction tells the system to acquire the data using the receiver phase program 'ph31'. 'go=2' instructs the pulse programmer to loop back to the label '2' (at the '30m' instruction in this case) 'ns' times. 'ns' is the number of scans that will be done.
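An aside that is not part of the original page: by the standard Bruker convention, the digits in a phase program are quadrature steps, i.e. multiples of 90 degrees (0 = 0, 1 = 90, 2 = 180, 3 = 270). Cycling the transmitter and receiver phases together lets the wanted signal add coherently over the scans while receiver artifacts alternate sign and cancel (a CYCLOPS-style scheme). A minimal Python sketch of how the two phase programs above pair up scan by scan:

    # Bruker phase-program digits are multiples of 90 degrees.
    ph1 = [0, 2, 2, 0, 1, 3, 3, 1]     # transmitter phase cycle from the program above
    ph31 = [0, 2, 2, 0, 1, 3, 3, 1]    # receiver phase cycle

    for scan, (tx, rx) in enumerate(zip(ph1, ph31), start=1):
        print("scan %d: pulse phase %3d deg, receiver phase %3d deg"
              % (scan, 90 * tx, 90 * rx))

Because the cycle is 8 steps long, 'ns' should normally be a multiple of 8 so that every acquisition completes the full cycle and gets the full artifact cancellation.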
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103270.12/warc/CC-MAIN-20170817111816-20170817131816-00266.warc.gz
CC-MAIN-2017-34
2,242
10
http://www.symantec.com/connect/forums/default-media-import-settings?page=0
code
Default Media Import Settings I'm looking for a bit of advice and I'm hoping someone here can help me? I've Googled and Googled to no end :) We have a recurring problem where our import media jobs do not complete in Backup Exec 2010, therefore queueing backup jobs. I'm just wondering if there's a way to change the default automatic cancellation settings for just media import jobs? The media imports aren't scheduled jobs and often happen ad-hoc, so setting the cancellation option for each job may not be practical. The backups can take over 24 hours, so using the option for all jobs will not work either. Ideally, I'd like the non-scheduled import jobs to cancel after an hour, but other jobs to continue as normal. Can anyone suggest a way I could achieve this? Any suggestions would be MUCH appreciated!
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682947.6/warc/CC-MAIN-20151001215802-00243-ip-10-137-6-227.ec2.internal.warc.gz
CC-MAIN-2015-40
812
5
https://jargonism.com/words/547
code
Quick And Dirty Definition: This term refers to an attribute for something that is done in a short amount of time without a lot of focus on accuracy or precision. Example: Our first implementation was a quick and dirty version. Usage of "Quick And Dirty" by Country Details About Quick And Dirty Page Last Updated: May 14, 2015
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988753.91/warc/CC-MAIN-20210506083716-20210506113716-00350.warc.gz
CC-MAIN-2021-21
327
6
https://motherlandbotanicalsanctuary.com/kinsta-headquarters-address/
code
What is Kinsta? Kinsta was founded in 2013 with a desire to change the status quo. We set out to develop the best WordPress hosting platform in the world, and that's our promise. At Kinsta, we take managed WordPress hosting and performance to the next level. Powered by Google Cloud Platform and its lightning-fast premium-tier network, WordPress users can choose from 28+ data centers around the world. We host all kinds of sites, from small blogs up to Fortune 500 clients. Kinsta is obsessed with performance: Imagine a car enthusiast building their dream ride. That's us with WordPress hosting. We love what we do and are obsessed with fine-tuning our servers to deliver maximum speeds. Kinsta is worldwide: We serve thousands of customers from 130 countries around the world via our 28+ data centers. The only continent we haven't reached yet is Antarctica. Kinsta is local: Wherever it is in the world, we like to be locals. That's why we hired a remote support team that covers all time zones. We also offer native-speaking support in 6 languages. Kinsta is diverse: Our team is remote-first with some local hubs. This allows us to hire top talent worldwide, without borders. Our diversity is also our strength. We all come from different walks of life, and this broadens our shared perspective. It gives us an understanding that we can draw on every day when communicating with each other or with customers. Kinsta loves WordPress: Just like you, we are all members of the WordPress community: users, developers, and enthusiasts. That's why we built our company around the best CMS in the world. We also try to give back whenever we can: supporting local communities, sponsoring WordCamps and meetups, and contributing to WordPress core development. Kinsta is independent: We are proud to be among the fastest-growing managed WordPress hosts in the industry. This means stability for our platform and, in turn, for your website. Kinsta is deeply rooted in hosting and is here to stay. We bootstrapped our business from the ground up so that we could be in complete control of it. This allows us to continually innovate and truly put our values, and those of our clients, first. What makes Kinsta different from the competition? In my opinion, it boils down to the following: Kinsta gives you the power of a huge hosting platform -- Google Cloud Platform -- but does so in a really simple way, so that you don't even need to be aware of everything that's happening under the hood. Kinsta WordPress hosting is performance-optimized and ready for any traffic spikes you might throw at it. It offers great security, while at the same time keeping everything backed up. Last but not least, they also seem to take a practical approach when it comes to their server parameters, guaranteed performance, and bandwidth metrics. One of the few weak points we've observed with Kinsta is their rather basic staging environment. Basically, the only option is to push everything to live. You can't, for instance, move just the files or the database. This isn't that practical if you're working with an existing site and just want to change a thing or two about it. For new websites, though, I think it's awesome. Kinsta Hosting Prices Information Kinsta Hosting Plans Kinsta has a great range of WordPress hosting plans.
Kinsta has Starter and Pro plans, along with four Business plans and four Enterprise plans. Below are the key details:
- Starter: From $25 a month for one WordPress install, 25,000 visits and 10 GB disk space.
- Pro: From $50 a month for two WordPress installs, 50,000 visits and 20 GB disk space.
- Business plans: Starting from $83 a month for five WordPress installs, 100,000 visits and 30 GB disk space.
- Enterprise plans: Starting from $500 a month for 60 WordPress installs, a million visits and 100 GB disk space.
The above rates are for those paying annually; paying month-to-month is slightly more expensive. There's a 30-day full money-back guarantee on all plans, so you can try Kinsta without financial risk. Each plan has access to the same infrastructure, powered by the Google Cloud Platform. This means upgrading increases the number of WordPress installations allowed, as well as the visit and storage allocations, rather than improving website performance, as is the case with some other hosts.
Pros of Kinsta
- It's blazing fast
- Cloud-based infrastructure built for speed
- Software to turbocharge you
- 99.9% uptime guaranteed
- It's (nearly) hands-off
- Incredible technical support
- Peace of mind
- Great for traffic spikes
- Optimized for ecommerce
Disadvantages of Kinsta
- Steep price point
- No email hosting
- Lacks phone support
Is Kinsta worth the cost? If you have the cash to spare, Kinsta's managed WordPress hosting plans are absolutely worth the price of admission. Few other hosts can match the power of the servers used to run Kinsta sites, and our performance tests were, well... They weren't "off" the charts, but they did produce the best-looking charts we have. Where are Kinsta's data centers located? Well, to be accurate, Kinsta does not have any data centers of its own. All hosting is handled by the Google Cloud Platform, which has data centers on every single continent except Africa. Bear in mind, this is Google, and a data center or ten in Africa is only a matter of time. In North America alone, there are currently 8 data centers, ranging from Los Angeles to Montreal. You're spoilt for choice, really. Is Kinsta good for WordPress? Kinsta's entire focus is on WordPress. It has servers built for running it, software for managing numerous WordPress sites, and a support team trained to handle it. The only better host for WordPress might be WordPress.com itself, and ironically, you don't get the same degree of control and site ownership with it. What are the best Kinsta alternatives? For raw power, we haven't found anything that will beat Kinsta. However, if you want to run something besides WordPress and still want the premium hosting experience, you should look at Liquid Web. For those of you on a strict budget, Hostinger and InterServer are probably your best choices. Of course, you can always take a look at our list of the top web hosting services in 2022 to find the right host for you.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00215.warc.gz
CC-MAIN-2022-49
6,799
51
http://www.audi-sport.net/vb/detailing/201520-sealant-autoglym-wet-wax.html
code
Sorry if this is a topic which has come up again and again... I have a pretty new black metallic Audi and have been using Autoglym wet wax, but I find it lasts about 2 days max down the side panels if they get muddy. I live down a country lane which is pretty terrible in winter, but it's only the side panels and the bottom section of the boot area that I'm having trouble with. Maybe I need a sealant or to change the wax. Anyone used a sealant and wet wax? Any suggestions? It's got to be easy to apply
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275463.39/warc/CC-MAIN-20140728011755-00090-ip-10-146-231-18.ec2.internal.warc.gz
CC-MAIN-2014-23
501
4
http://traxxas.com/forums/archive/index.php/t-267318.html
code
View Full Version : breaking in a new motor 02-03-2005, 01:06 PM How much run time does it take to break in a new motor? The guy at my LHS said "run at less than half throttle for 2 to 3 packs" but later I realized that he wouldn't know what packs I was using. I have 2 1300s, 2 1500s, 4 3000s and 2 3300s. So 2 to 3 smaller mAh packs would equal 1 higher mAh pack. What would you guys recommend for breaking in a new motor? 02-03-2005, 01:11 PM Geeze, I think it took about a half-hour or so to break in my Amber. I used two D-cell batts directly attached to the motor leads. So, I'd say, expect at least a half-hour. Probably 45 mins. Edit: because I can't spell. 02-03-2005, 01:12 PM Check this (http://www.misbehavin-rc.com/pit-lane/motor-maintenance/g-motor-maintenance5.asp) out for some help. 02-03-2005, 02:09 PM Cool thanks, I thought I had read everything on that site, but I must have missed that one. I don't have any 4 cell packs (except for one of my receiver packs, would that work?). But I liked the drill idea. I have an 18 volt DeWalt cordless. So I can use that in reverse to break in brushes? Does it matter how fast the drill is? It has 3 speeds and high is pretty fast. Would this be better for the motor since there is no direct current? 02-03-2005, 03:55 PM Well, its advantage is that it doesn't put voltage into the motor, and if you keep the motor leads away from each other, you won't induce any current either. I tried the drill idea, but it shook too much to be useful. I also tried the rotary tool route since it is faster, but that was even worse. I couldn't even hold onto it. That's why I used "D" cell batteries. I got a nice 3 volts that way and I didn't have the shaking issues that I did with the drill/rotary tool. It's still going to take some time; I'd guess the faster you spun it, the less time it would take. But it will still take time. Since my brushes were already the shape of the comm, I just ran it until all the serrations were gone off the brushes. 02-03-2005, 04:33 PM So would directly connecting my 4 cell receiver pack be useful? Or is this just a silly idea? 02-03-2005, 04:38 PM Should work just fine. Only thing is you may have to recharge it a couple of times. AA cells don't last terribly long. Even unloaded, motors are a fairly heavy draw. 02-03-2005, 05:02 PM Cool, thanks. :) Powered by vBulletin® Version 4.2.2 Copyright © 2013 vBulletin Solutions, Inc. All rights reserved.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762590/warc/CC-MAIN-20131218054922-00080-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
2,443
24
https://yiister.ru/tag/.net/6144360/how-to-debug-safe-handle-has-been-closed-error
code
[ACCEPTED]-How to debug "Safe handle has been closed" error-exception From the fact that System.IO.Ports.SerialStream.EventLoopRunner.WaitForCommEvent() and Microsoft.Win32.UnsafeNativeMethods are referred to, I would hazard that you have a COM component that has internal threads accessing a port, e.g. for serial or TCP/IP data. It would look like the thread throws an exception during its start sequence. Possibly it is trying to access an unavailable or nonexistent port. This fails and the exception is not handled and thus propagates back through the code. Try logging more information from the UnhandledException event in order to get an idea of where this may start from. More Related questions
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00340.warc.gz
CC-MAIN-2023-23
731
5
https://www.mercurial-scm.org/pipermail/mercurial-devel/2011-October/034813.html
code
largefiles: server storage? natosha at unity3d.com Fri Oct 14 10:42:30 CDT 2011 2011/10/14 Justin Holewinski <justin.holewinski at gmail.com> > On Fri, Oct 14, 2011 at 11:06 AM, Na'Tosha Bard <natosha at unity3d.com> wrote: >> 2011/10/14 Justin Holewinski <justin.holewinski at gmail.com> >>> On Wed, Oct 12, 2011 at 1:05 PM, Justin Holewinski < >>> justin.holewinski at gmail.com> wrote: >>>> I'm excited for the new largefiles extension, and I have been trying it >>>> out on some local test repositories. I realize the extension (in its >>>> "official" form) is very new and subject to change, but I have a question on >>>> how the large files are (or are going to be) stored on the server. >>>> Let's say I have two local repositories S and C. S represents the >>>> "server" repository, and C represents a client clone. If I add large files >>>> to C and push to S, the large files appear to be stored in >>>> $HOME/.largefiles, not in the .hg directory for S. >> I believe this makes sense, based on the current implementation. >>> It looks like S just contains hashes for the files, which makes sense. >> This is correct -- the hashes are sitting in S/.hglf -- correct? > They are stored in .hg/store/data/~2ehglf. There appear to be .i files > that store all past revision hashes, and the .i files are stored in a > structure that mirrors the repository structure. The client repo has the > .hglf directory. If I run "hg update" on the server repo, then I get the > .hglf directory, and "hg update null" removes it. The client repo seems correct. The stuff happening on the server sounds like garbage we need to fix. > Also, this may be a bug: in my test repository, I have all of the > largefiles in an assets/ directory. If I run "hg update" on the server, > this directory is created. But if I run "hg update null", then the contents > of assets/ are deleted, but the directory still remains, unlike other > directories that contain only normally-versioned files. That directory probably won't show up on the server unless you run hg update, because by default the server version has no working copy, right? Incidentally, will there be a config option for this, for users that wish >>>> to sandbox all hg-related files in a separate directory? >> Every large-files enabled repo will have its own set of standins >> maintained in repo/.hglf -- I don't see any reason why this should be able >> to be moved out of the repository because it is repo-specific. Also the >> standins are very small text files, so why do they need to be elsewhere? > I was actually referring to the opposite: will I be able to configure the > server to store all largefiles blobs in the .hg directory, or some other > user-configurable directory? I don't believe it is supported yet, but I believe we should add it.
> There is no .hglf directory, the largefiles appear to be stored in > .hg/largefiles (in the server repo): > $ ls -l .hg/largefiles > total 71584 > -rw------- 2 hg hg 71576 2011-10-12 12:49 > -rw------- 2 hg hg 24376104 2011-10-12 12:49 > -rw------- 2 hg hg 7821 2011-10-12 12:49 > -rw------- 2 hg hg 24376128 2011-10-12 12:49 > -rw------- 2 hg hg 7813 2011-10-12 12:49 > -rw------- 2 hg hg 24376116 2011-10-12 12:49 > -rw------- 2 hg hg 71567 2011-10-12 12:49 >>> I have not tested HTTP/HTTPS, but what is the expected behavior in this >>>> case? There may not be a writable home directory in this case. >>>> More specifically, what are the planned methods for storing large files >>>> on mercurial servers? >>> Ping? Any comment from the largefiles devs on the planned server storage >> I'm not really sure we have a concrete plan yet. This extension (at least >> in this form) is very new. > Is it still going to be released with Mercurial 2.0? > Some of us are expecting to use largefiles with Kiln, which just >> implements the server-side stuff already. Some people will be migrating >> from the old bfiles extension, which means they already have a central share >> set up somewhere (but I assume some conversion will be necessary). Greg is >> preparing a way for users to migrate from bfiles to largefiles, so he might >> have some idea on this. >> The built-in serving ability of largefiles was developed by the team at >> FogCreek, so hopefully one of them can reply with what their vision was. >> My initial thought is that: >> The $HOME/.largefiles cache should be configurable server-side, if it is >> not already >> Each repo should only contain the hashes in repo/.hglf -- when largefiles >> are uploaded, they should probably all go directly to the cache. > That makes sense to me, as long as the cache path is configurable. :) Let's wait for the other Largefiles devs to weigh in on the issue before we make a plan. Build & Infrastructure Developer | Unity Technologies *E-Mail:* natosha at unity3d.com -------------- next part -------------- An HTML attachment was scrubbed... More information about the Mercurial-devel
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00298.warc.gz
CC-MAIN-2019-43
5,405
90
https://stackshare.io/stackups/gerrit-code-review-vs-gitcop
code
Gerrit Code Review vs GitCop: What are the differences? Developers describe Gerrit Code Review as "OpenSource Git Code Review Tool". Gerrit is a self-hosted pre-commit code review tool. It serves as a Git hosting server with the option to comment on incoming changes. It is highly configurable and extensible with default guarding policies, webhooks, project access control and more. On the other hand, GitCop is detailed as "Automated Commit Message Validation for GitHub Pull Requests". Free for open source projects; any time a pull request is raised on your repository, each commit in the pull request is checked against the repository rules. If any commits do not follow the provided rules, a comment is left against the pull request. Gerrit Code Review and GitCop can be categorized as "Code Review" tools. Some of the features offered by Gerrit Code Review are: - git repository hosting - pre-commit code review - commenting on diffs On the other hand, GitCop provides the following key features: - Instant Pull Request Feedback - Easy to Get Going - Many Customization Options
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00054.warc.gz
CC-MAIN-2019-43
1,076
11
http://view.eecs.berkeley.edu/wiki/Sony/Toshiba/IBM_Cell_Processor
code
Sony/Toshiba/IBM Cell Processor Purpose and Target Market General Purpose? Special Purpose? For gaming or telecom? The Cell processor is designed to be the heart of Sony's Playstation3 game console, where it will perform the physics and serve a graphics list to the RSX GPU. If PS2 sales are any indication, millions of machines will be sold the first year. In addition, Los Alamos National Lab has announced it will acquire a 16,000 processor Cell and Opteron system. Whether this system's benefit is heterogeneity or a common form factor, power supply, and management functions is yet to be seen. Basic Processing Element(s) Cell is composed of nine processing elements: a standard PowerPC core, and eight SIMD cores. In addition, there is a dual XDR memory controller, and two I/O controllers. The processing cores, as of rev3, run at 3.2GHz. The PPE (Power Processing Element) is a 64b, dual issue, dual thread PowerPC core. It provides a full double precision FMA datapath (6.4GFlop/s), and a full Altivec datapath, which in single precision provides 25.6GFlop/s. Unlike many SMT cores, the threads alternate issue cycles. Each SPE (Synergistic Processing Element) includes one MFC (memory flow controller), and one SPU (Synergistic Processing Unit). Each SPU includes a 256KB local store (a memory disjoint from the DRAM address space), two in-order SIMD datapaths, and a 128x128b register file. Each SPU has its own program counter, and can only fetch instructions from its local store. It may issue up to two SIMD instructions per cycle if:
- they are correctly packed into a 128b quad word
- one is an integer, bitwise, or single precision floating point SIMD instruction
- the other is a load, store, permute, branch or channel instruction.
The single precision SIMD datapaths are fully pumped (four 32b FMAs per cycle) and can deliver up to 25.6GFlop/s. However, they do have a 6 cycle latency, necessitating significant unrolling. All loads and stores operate on quadword (128b) granularities and may only access the local store with a 32b local address space. All scalar loads and stores must be implemented in software via permute instructions. The double precision pipeline, at 13 cycles, is significantly longer than the single precision datapath. IBM appears to have chosen a rather straightforward approach to inserting a 13 cycle pipeline into a 7 cycle forwarding network: it stalls subsequent issues for 6 cycles. Thus only one SIMD double precision floating point instruction can be executed every 7 cycles - 1.83GFlop/s. As previously discussed, all loads, stores, and instruction fetches may only access the local store. It is the MFC's responsibility to move data in and out of the corresponding SPU's local store. Thus the primary difference between the PPE and the SPEs is not performance, but productivity via a conventional programming model. Interconnect and Topologies All elements on the Cell chip are connected via the EIB (Element Interconnect Bus), which is composed of four 128b rings running at 1.6GHz. Two rings run in one direction, two run in the other. There are restrictions as to which ring data may be inserted into, based on the source and destination of the data item. As such, the latency and bandwidth are dependent on the communication pattern. Memory Structure and Hierarchy Shared memory? Distributed memory? Caches? Scratch Spaces? Special Purpose Hardware Units Vector units? Crypto units? I/O and Peripherals Memory controller? DMA engines? Ethernet Controller? Hypertransport? Whose fab? which year?
Layers of metal? Fast or slow process? multi-Vt process? The Cell die size is about 220mm^2. Roughly speaking,
- the Power core is about 10%
- the L2 is about 10%
- the SPEs are about 7% each
IBM has been a bit cagey about releasing power consumption. Publicly they have stated that their dual chip blades consume about 315W, with an additional 15W per InfiniBand. Depending on temperature, SPE power is between 3 and 7W, and rough estimates have placed full chip power at about 100W. Two separate programs are written. First, the SPE program, with its explicitly software-controlled memory, is written. It may be necessary to use intrinsics in the inner loops to fully exploit the SPEs' computational capabilities. The SPE program is then embedded within a standard PowerPC program. The PowerPC program creates SPE threads, passing a pointer to the embedded SPE program to them. All 10 threads operate independently, and are explicitly synchronized by the programmer. Software Development Environment System design tool stack? Availability of layers in this tool stack?
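Not from the original page: a quick arithmetic check of the peak-rate figures quoted above, as a small Python sketch. The only inputs are the 3.2GHz clock, the SIMD widths, and the 7-cycle double precision issue restriction described in the text.

    CLOCK_HZ = 3.2e9                    # rev3 clock rate

    # Single precision: four 32b FMAs per cycle = 4 * 2 = 8 flops/cycle per SPE.
    sp_per_spe = CLOCK_HZ * 4 * 2
    print("SPE single precision: %.1f GFlop/s" % (sp_per_spe / 1e9))    # 25.6

    # Double precision: one 2-wide SIMD FMA (2 * 2 = 4 flops), issued only
    # once every 7 cycles because of the stall described above.
    dp_per_spe = CLOCK_HZ / 7 * 2 * 2
    print("SPE double precision: %.2f GFlop/s" % (dp_per_spe / 1e9))    # ~1.83

    # Eight SPEs plus the PPE's Altivec datapath (another 25.6 GFlop/s SP).
    print("chip single precision: %.1f GFlop/s" % ((8 + 1) * sp_per_spe / 1e9))

This reproduces the 25.6 GFlop/s single precision and 1.83 GFlop/s double precision numbers given in the text.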
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704664826/warc/CC-MAIN-20130516114424-00082-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
4,597
34
https://www.twine.net/projects/b7d8b0
code
Backflips (Official Music Video) My name is Davin Bruce aka Lil Wholesale and I'm a rapper from Canada. I'd like a music video done for this song: https://www.youtube.com/watch?v=loVzhtbgRkc preferably one of me floating through space with some trippy visuals Whenever it gets finished 2D is preferred but I'm willing to accept all types of animations In what capacity are you hiring? As an individual for a personal project Where are you in the hiring process? I am offering an unpaid collaboration What freelancer experience level is needed? (per hour) Amateur: Up to $10 What length of animation is required? What type(s) of animation are you open to? 2D Animation, Stop Motion, Motion Graphics, 3D Animation Do you need sound in addition to the animation itself? I want to discuss with the freelancer Do you have a script or storyboard for the animation? Not applicable for this project
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00610.warc.gz
CC-MAIN-2021-49
991
21
http://sharepoint.stackexchange.com/questions/67700/start-time-column-is-not-getting-updated-in-workflow-email
code
I have an approval reusable workflow for a calendar list. The workflow sends an email to the approver with the start and end time of the event. The end time is displayed correctly, but the start time is displayed as Monday, January 01, 0001. I am using [%Current Item:Start Time%] in the email to get the start time. When I googled for a solution, people suggested getting the start time from the "Current Item ID" instead of "Current Item". But how can I get the current item ID with the "Add or Change Lookup" button? All I get is the following, but no "ID"
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00488-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
546
5
http://www.ipadforums.net/ipad-general-discussions/7608-hotel-internet.html
code
I'm going into a hotel where the internet is wired. It's one of those that require you to open Internet Explorer, type in a code (given at the front desk), and then you're in. How can I use that for my iPad... I can't just plug my wifi router in there, since it requires typing in login info. Is there a way my laptop can become a wifi bridge for the iPad?
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052107/warc/CC-MAIN-20131204131732-00077-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
361
3
https://androidforums.com/threads/dialing-question.130072/
code
I frequently host conference calls while I'm on the road and I wanted to have some of my frequently used conference numbers auto-dial. I found the pause key, and that allows me to enter the phone number, pause, then enter the conference code, but I'm unable to enter the ending # key as that always sends me to Google Voice. For example, in my contact phone number I enter 888-889-xxxx,12345. That works partially: the correct number is dialed, the pause (,) works great, and then the conference code is entered, but I need to press # before I am joined to the correct conference call. If I try to program the # at the end, I am connected to Google Voice. For example, if my contact phone number is 888-889-xxxx,12345# I get connected to Google Voice. Or, if the contact number is 888-889-xxxx,12345#, I also get connected to Google Voice. How can I get the last # to dial without sending me to Google Voice?
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805894.15/warc/CC-MAIN-20171120013853-20171120033853-00054.warc.gz
CC-MAIN-2017-47
869
1
https://wn.com/Alternative_hunt?from=alternativehunt.com
code
- published: 16 Mar 2013 - views: 24305 We're trying out something a lil different that might save some money in the long run. It's simple, just buy a bag of the salt pellets for water and pour it out on a pile. After a few showers or a good rain, it'll dissolve into the ground and you'll have a salt lick, and what doesn't dissolve will harden up into a chunk of salt. Now the lil bricks run you around $6.00 and up and the larger blocks can go up to $20 or so. This 40lb bag is less than $5.00. We're running it right now and in a few weeks, we'll get back to y'all with the results. Thanks for watchn and stay tuned! Hunt's alternate death from The Final Destination (2009) Cleveland metallers Alternate Reality are proud to announce the release of their new music video for their song, "Witch Hunt." The infamous lawyer-fronted metal band is known for producing 2011's viral video sensation, "The King That Never Was," which has been declared one of the greatest metal videos of all time and also voted the No. 1 worst video of 2011 by Yahoo! Music, beating out such high-profile videos as Rebecca Black's "Friday." It also was named a No. 1 video in the recently published "The Merciless Book of Metal Lists." The band's latest video was once again directed, edited, and produced by Alternate Reality frontman and Cleveland area lawyer, Steve Delchin. "Halloween is coming, there's a chill in the air, and the leaves are starting to change color. Naturally, we though... Elvira and Robin have been avid toy collectors since they were kids! Joined by a group of their family and friends (who also share their passion for toys), they bring a family-friendly, unique and sometimes goofy approach to everything they do. Their informative videos include in-depth reviews, quirky shopping videos, toy hauls, an occasional random surprise, and more! They love Monster High, Ever After High, Funko POPS and Mystery Minis, Disney, My Little Pony, Bandai, Tokidoki, Hello Kitty, LEGO, and blind bags. We also adore Sailor Moon, Pokemon, Anime toys, comic collectibles, and so much more. Kids love us and adults do too – we're kids at heart! Check out more of our reviews! http://bit.ly/1q5vfGu Check out Toy Fair 2016! http://bit.ly/1PJAnNm More of our Toy Hunts http://bit.ly/Z... Got another alternative to the deer cane and other deer salts you buy for deer hunt'n. This is just cattle mineral salt for $3.99 for a 40lb bag. Just do the same like the rest: clear a spot, pour it out, and either let the rain put it in the ground or you can mix it with water and pour it out. Y'all take a look and thanks for watchn! What is green alternative energy? Deysi goes on a scavenger hunt to explain and to find examples. 「Alternative Universe 」 » enter a different dimension « "Monsters are real, and ghosts are real too. They live inside us, and sometimes, they win." ― Stephen King ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Theme: Horror Song: https://youtu.be/6f3j4okhb8o ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Thank you to everyone who participated and did their parts! We hope you like it! Thank you for watching. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Participants: RedCapesAreComing_, ItsATwinThing, mystx, xladyxdreamer, yosonskiddlesxx, HTwilightPheonix, SNProductions21, RebyLove, SunlightRomance, MariposaProductions, Kemss31
“Many Americans secretly love the Trump administration's media offensive, because for the longest time these organizations, they believe, have held a monopoly with their haughty attitude that they know better...” (RT News) https://www.youtube.com/watch?v=LLYEMzknc7Y&list=PLJlu7fj6hgFG5GVLZ8-fsOlKCeX6p8XNV&index=4 Thank You all for watching my Vlog! Please don't forget to Like and Subscribe:) My Online Boutique- ShopBriella - https://goo.gl/zdU0P9 For 50% off of your entire purchase use code: vlogmas50 Follow Me! My Blog- amberxo.com Instagram- http://goo.gl/NwBZ6F Facebook- http://goo.gl/arRl8X Twitter- http://goo.gl/eB4NM0 Email- [email protected] Bundt and Crueller, government agents from the Division of Magical and Mythical Creatures lead a ten year investigation to determine whether jackalope actually exist. The critter's got a $500,000 price on its head but even the crunchy granola types from SaveTheJackalope.org join the hunt as they aim to stop Big Mama from cashing in on all the gas beneath her property. My take on a launch trailer for CD Projekt Red's 'The Witcher 3 - Wild Hunt' Captured using nVidia Shadowplay 1080p/30 60MB/s Edited in Adobe Premiere CC 2015 Other ALTs (Alternative Launch Trailers): Bioshock Infinite - http://youtu.be/JZm2hNTwRZU Mass Effect 3 - http://youtu.be/BX1fs0_Rbbk Halo 4 - http://youtu.be/-_DPctzaz58 The Walking Dead - http://youtu.be/ZIq1bpzxCZw The Last of Us - http://youtu.be/E_rOA8Z38dI The Wolf Among Us - http://youtu.be/O9K8DHIKSfc Music: Audiomachine - God of Drow This one took a long time. Thanks for the ride, Geralt. http://www.lukeritson.com https://twitter.com/Luke_Ritson http://ask.fm/LukeRitson http://www.thirtysecondstomars.com Download the new album LOVE LUST FAITH + DREAMS on iTunes: http://smarturl.it/LLFD Music video by Thirty Seconds To Mars performing Night of the Hunter (VEVO Presents). (P) (C) 2013 Virgin Records America, Inc.. All rights reserved. Unauthorized reproduction is a violation of applicable laws. Manufactured by Virgin Records America, Inc., Capitol Records, LLC, 150 Fifth Avenue, New York, NY 10011. Love Hunt is a young new band from Corona, Ca.that is busting out onto the Local Southern California Music seen. Love Hunt's audio tracks are currently in the top 10 on Alternative-Sound Cloud (between Cold Play and U2) You can expect BIG Things from this Band in the Future!. Enjoy this Live Fan Track Video which includes appearances by Chanelle V. and KruGG. The Witcher 3 Wild Hunt has finally arrived. It's on the Xbox One. It's on the PS4. It's on the PC. So far it's received some pretty damn good reviews as well, some of which that go as far to say that it's like Red Dead Redemption meets Dragon Age: Inquisition. Which would make it one of the best games ever made. But this needs verifying. And it needs verifying now. So, our 'alternative review' team switched on The Witcher 3 on the Xbox One, played it and made their own verdict. Now you can find out what that is... Join The VideoGamer Community Club: http://www.patreon.com/videogamer. Like and share to help grow the channel. ►SUBSCRIBE ➜ http://bit.ly/1sr8VqL Check out http://VideoGamer.com for the latest news, reviews and features! @VideoGamerCom Hey dickheads! This is the first of many video's in hunt for, blue eyes alternative white dragon! I have one from the original mvp1 packs but alas they are hard to come by so gold edition packs it shall be.. plus im looking for duza for my cubic deck aswell as Krystal Dragon because he's classsssssss ! Enjoy. 
Anisette Astronomie presents a homemade live version of the song called "FIRST" by BIRDY HUNT. Recorded at Anisette Astronomie by BIRDY HUNT Performed by BIRDY HUNT Directed by BIRDY HUNT, Claire Frémond & Clara Mouton Edited by BIRDY HUNT & Clara Mouton Thanks to Axel Verbruggen, Romain Leblanc & Abdel-K Lacroix Love. www.birdyhunt.com www.facebook.com/birdyhuntofficial www.twitter.com/birdyhunt The Hunted Alternate Ending Alternate dimension is such a cool store!!! To my knowledge there are 3 endings to The Witcher; here is an alternative to the one we know from my walkthrough, made by me, without commentary. I haven't done the 3rd ending, which is, let's say, more "neutral" in a certain way. Facebook: https://www.facebook.com/Johnsonwlkr Twitter: https://twitter.com/JohnsonWlkr IG: https://www.instant-gaming.com/fr/?igr=JohnsonWalker Skyrim Secrets – DESTROY The Dark Brotherhood (Shrouded Armor & Blade of Woe Special Edition)! ➥ Please ➥ Like ➥ Comment ➥ Subscribe for Daily Guides! ---RELATED GUIDES--- ➥ Support the Channel and get Benefits: https://www.patreon.com/ESO - Should You Kill Cicero?: https://youtu.be/zH842x7zTg0 - Ancient Shrouded Armor: https://www.youtube.com/watch?v=NKq3IIScX4M&index=20&list=PLl_Xou7GtCi67CNAAIBchLnxqa6GULh83 - Top 10 Best Weapons: https://www.youtube.com/watch?v=9GEAnd3ZTCk&list=PLl_Xou7GtCi7xFWTV-btolnVfRExg3x8X - All Locations Walkthrough: https://www.youtube.com/watch?v=NiygbILgRtk&list=PLl_Xou7GtCi44tdVGfRtFPNurmCJLsSD9 ---SOCIAL LINKS--- ➥ Facebook: https://www.facebook.com/ESOSquad/ ➥ Twitter: https://twitter.com/ElderScrollsGui ➥ Instagram: https://www.instagram.com/eso_d... Injustice 2! Let's Play Injustice 2, shall we? In this stream I'll be playing Injustice 2 on PS4! Injustice 2 is a Mortal Kombat-esque fighting game based in the DC Universe! Remember if you're having fun and liking the content then fire some likes, drop some comments and share it with your buddies! Download Injustice 2 : Injustice 2 Playlist: Twitter : http://twitter.com/Steejo Donate: https://streamtip.com/y/steejo Patreon: http://www.patreon.com/Steejo What is Injustice 2? Injustice 2 is a fighting video game developed by NetherRealm Studios and published by Warner Bros. Interactive Entertainment. It is the sequel to 2013's Injustice: Gods Among Us. The game was released in May 2017 for the PlayStation 4 and Xbox One. Similar to the previous installment, a companion mobile app was releas... Witch House Mix #1 (2017) playlist: 00:00 satanwave – Nightmares 04:10 EMIKA – Flashbacks (Gnothi Seauton remix) 08:55 MONOMORTE – Erutufon 13:15 PEXØT – Everything Is Hoax Master 17:20 Skaen – Palace 22:45 Fraunhofer Diffraction – Somewhere But Not Here 27:00 Rivka – Drift 31:10 Rihanna – Umbrella (VSN7 RMX) 35:10 VIOLET7RIP – Ritual 39:20 VSN7 – Overcome 42:55 Grim Service – STVN 47:50 Skaen – Thylane 52:00 ∆IŖֆH4D3 – o u t s i d e 55:55 Mt Eden - Sierra Leone (VVITCH RMX) 58:20 ATB - Ecstasy (MYSTXRIVL x SOKOS FLIP) 01:01:30 (((О))) – Bong Killer genre: witch house, future garage, dark wave, trap, deep, ritual, experimental Subscribe! And on Twitter @KailyanaChannel and Facebook! Alternative place for hunting, not as much exp as Grimvale, you might get the same exp as when farming minos, 600k+, but it is harder and gives a bit less loot, the advantage is that it is usually empty for those reasons and also because it is harder to get there, you need to cross blob plains, and it requires a quest.
I used Koshei's amulet because Seacrest Serpents casts sudden death and it is nice to have some death protection, preferred Prismatic Armor because both, Seacrest Serpents and Sea serpents causes physical damage, and Seacrest Serpents can hit for up to 500 physical melee damage. The place is huge, in this hunt I show just one part of one of the hunting grounds, the lost mountains, which can be further divided in North and South. Distance 102+5 Shielding 89 Magic Level 26 Kyle Hunt speaks with Henrik Palmgren of Red Ice Creations about his experiences growing up in Sweden, the “Nordic model” of socialism, multiculturalism and the war against White people, a history of lies, disinformation and misinformation in the alternative media, the power of paganism, and much more! Straight from the horse's mouth: https://youtu.be/rA7Ymki71fM http://www.redicecreations.com/ http://www.renegadebroadcasting.com/ http://renegadetribune.com/ http://alternativesocial.com/ --- MUST WATCH ! --- JEWS: Celebrating Centuries of Control https://youtu.be/bcLWb_DyEpc --- MUST WATCH ! --- Rainbow Rabbi: Le Happy Merchant Celebrates Gay Pride https://youtu.be/8vNLbPkXB-0 💎 Minecraft Survival - In this Minecraft Survival video, we're upgrading our Excalibur sword & heading to the End for Shulkers & loot! ⭐️ Subscribe For More! - http://www.tinyurl.com/PythonGB ⭐️ Support Me On Patreon - http://www.patreon.com/PythonGB Follow Python... ● Subscribe on YouTube! - http://tinyurl.com/PythonGB ● Follow on Twitter! - http://twitter.com/PythonGB ● Check out my 2nd channel! - http://youtube.com/PythonGB2 ● Follow me on Mixer - http://mixer.com/PythonGB ● Check out my website! - http://pythongb.com ● Send in your fan art! - pythonfanart[at]gmail.com Welcome to my Let's Play Minecraft Survival series! I aim to just play Minecraft Vanilla and make Minecraft videos for the fun of it in this Let's Play! We'll do pretty much anything, adventuring, exploring, building an... These videos are exclusive and advertising free for Patreon Subscribers for 3 days. Subscribe here! - https://www.patreon.com/forgedbygeeks Ok, so our last hunt of a Screaming Antelope ended with a Level 2 White Lion and 4 dead survivors. Any bets on this one going better? Let's find out! Join us Live next week at 6pm PST on Saturday for Pathfinder and our Kingdom Death Monster People of the Sun Campaign.. If you want to watch us Live on Twitch, check out our stream here at 6pm PST every Saturday - https://www.twitch.tv/forgedbygeeks Please also check out my board game, Defense Grid the Board Game here - https://boardgamegeek.com/boardgame/197705/defense-grid-board-game Welcome to Official ABCLosT Youtube Leave a like and subscribe ----- Links ----- Where i recieved my Skins : http://farmskins.com/?utm_source=youtube&utm_medium=video&utm_campaign=respawn-plays Alternative : http://cases3x.com/23199923 ----- Games ------ Buy games at a better price : https://www.g2a.com/ Second link : https://www.kinguin.net/ Copyrights ----------------------- Award winning unofficial prequel short film dramatising Aragorn & Gandalf's long search for Gollum directed by British filmmaker Chris Bouchard. Based faithfully on the appendices of the books this is a non-profit, serious homage to the writing of J.R.R Tolkien and the films of Peter Jackson. It was shot on locations in England and Snowdonia with a team of over a hundred people working over the Internet. It took two years to make and was released as a non-profit Internet-only video by agreement with Tolkien Enterprizes. 
This Youtube version is slightly extended with 1 scene added back in. www.thehuntforgollum.com www.ioniafilms.com Kyle starts out with Andrew and Leifkin and picks up calls from Nick, Chris, and John Smith. They discuss topics related to real world and digital activism, the matter of religion and the importance of connecting with nature, the infuriating lack of resistance we have seen thus far to such egregious abuses, and how we need to change all that. THE TRUTH YOU NEED TO KNOW As it seems YT rules and censorship has been blocking my videos in most all European countries: Austria, Switzerland, Czech Republic, Germany, France, French Guiana, Guadeloupe, Israel, Italy, Martinique, New Caledonia, French Polynesia, Poland, Saint Pierre and Miquelon, Reunion, French Southern Territories, Wallis and Futuna, Mayotte. If you are having trouble watching any of my content, refer to the following: http://... These videos are exclusive to Patreon Subscribers for 3 days. Subscribe here! - https://www.patreon.com/forgedbygeeks Still missing that bow :( But heh, the White Lion fights have been going good for us. So trying not to look a gift horse in the mouth, we are going at it again. We are also looking for feedback on the Patreon Rewards and ideas for how to improve them. Let us know if you have any ideas. Join us Live next week at 6pm PST on Saturday April 29th as we kick off our People of the Sun Campaign. If you want to watch us Live on Twitch, check out our stream here at 6pm PST every Saturday - https://www.twitch.tv/forgedbygeeks Please also check out my board game, Defense Grid the Board Game here - https://boardgamegeek.com/boardgame/197705/defense-grid-board-game Seriously annoying platformer in which I had to resort to using infinite lives and endless save-states to beat it. Turned off the stupid grating sound effects and sped it up at 6:56. There is an ending of sorts, anyway. I can think of an alternative title rhyming with "Hunt", too. Also available for the BBC Micro. Family alternative to trick or treating: we did a family treasure hunt inside the house searching for candy. Hey Guys and Girls WELCOME to my youtube channel i hope you enjoy all the content i have to display now and in the near future i am a streamer on twitch so if you have the time and a twitch account follow me at www.twitch.tv/jrkilledyou174 also follow my Brother at www.twitch.tv/jonnyzoom174 and support our channel Rated M Gaming Please leave a sub or a follow to support an average gamer channel Also visit syphaxindustries.com and use promo code: RatedMJR for a 5% discount on your total purchase Sponsor: SyphaxIndustries Contact email: [email protected] Welcome! VIRTUAL HUGS FOR EVERYONE!!! My name is Mike, I'm a streamer from New Zealand, the land of the Hobbits! Just a little housekeeping, every now and then I WILL swear, not all of the games I play are not known for keeping the language safe for work. Here's a some rules: Don't be a mad guy. No politics. Don't beg for subscriptions. No sexism or racism. NEVER ask to be MOD, that's a privilege, not a right! ► Smash my Twitter: https://twitter.com/DrunknSolitaire ► Feel like donating to new equipment? DONATE here: http://bit.ly/2kM2FPB Check out my #YTGFam! Rubzy: https://www.youtube.com/user/Suqp BigBen: https://www.youtube.com/user/TheKindler1 KardPlays: https://www.youtube.com/channel/UCMliErZyG9rsx5prbj7G7Yw NebeowulfX: https://www.youtube.com/user/NeobeowulfX Cronodox... 
My official Twitter account: https://twitter.com/L0wr1d3rrr Grab my merch: https://shop.spreadshirt.de/MapoMerch/?noCache=true Donation: https://www.tipeeestream.com/xeratoxgaming/donation TS: Freegamertwo.teamspeak.de Discord: https://discord.gg/KRhEpD Skype: L0wr1d3rrr Stream rules [please follow them]: no insulting each other, no advertising for others, no asking "can I be a mod or PKM" unless I ask who wants to be a PKM or a mod and pick someone myself !!! ✐YouTube: https://www.youtube.com/channel/UClwvH3cAw42d3AC-svUIeTQ ♒♒♒♒♒♒♒♒♒♒♒♒♒♒ ✎Name: Marvin / Mapo ✎Age: 21 ✎Hobbies: gaming, football, meeting friends and so on FriendCode: 521549351735 ♒♒♒♒♒♒♒♒♒♒♒♒♒♒ This channel is mainly about gaming. If you enjoyed the live stream, ...
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806842.71/warc/CC-MAIN-20171123142513-20171123162513-00229.warc.gz
CC-MAIN-2017-47
18,529
38
https://poslovi.infostud.com/oglasi-za-posao-it-test-developer/novi-sad
code
Levi9 IT Services Translating functional or technical requirements to test scripts and test scenarios... Beograd, Novi Sad Estimating, prioritizing, planning and coordinating testing activities during development process... Shield Software Development d.o.o. Continental Automotive d.o.o. Relevant University Degree (i.e. Electronics, Telecommunications, Computer Science)... Punktum solution d.o.o. Analyzing and translating requirements into tasks based on agreed standards; provide feedback to the client and the team... We are looking for back end software developers with knowledge and experience in developing web applications... Synechron SRB d.o.o. Design, monitor and maintain systems for a high level performance, security and availability... Following the agreed team or project processes and procedures... Novi Sad, Zrenjanin CCBill SRB d.o.o. You will be expected to build multi-user systems supporting hundreds of users... Strong knowledge of HTML / CSS / JS; Experience with React / Redux frameworks... Ensure process conformance and quality of delivered code according to the defined coding guidelines... Develop SW modules mainly in C programming language for all major automotive manufactures... Establishes and maintains end-user system hardware and software test configurations... At least 3 years of professional experience in product development, preferably in embedded software and/or electronic... Developing Software Deployment Instructions and Rollbacks from Software Developer's Release Notes... QA Cube d.o.o. Manufacturing/Rework/Qualification/Testing/Debugging of PCB... Analyse and assess of security standards for automotive products... Junior C# Software Developer (Novi Sad, Zrenjanin) Medior Frontend Software Developer (Novi Sad) Senior Java Developer (Novi Sad) Test Developer Junior (Novi Sad) Experienced C# Software Developer (Novi Sad) Test Developer (Belgrade, Novi Sad) System Test Engineer (Novi Sad) Software Engineer for Body & Security (Novi Sad) Software Quality Engineer (Novi Sad) Software Security Specialist (Novi Sad) System Requirements Engineer (Novi Sad) Full Stack .Net Developer (Novi Sad) Front End Developer (Novi Sad) Junior Release Engineer (Novi Sad) Hardware Engineer (Novi Sad) Hardware Technician (Novi Sad)
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219495.97/warc/CC-MAIN-20180822045838-20180822065838-00540.warc.gz
CC-MAIN-2018-34
2,932
47
https://nextconf.eu/person/sebastian-pasewaldt/
code
I studied IT Systems Engineering at the Hasso Plattner Institute in Potsdam, where I received a Master's degree in 2010. Since October 2010 I have been working on my PhD. My research focuses on 2D rendering and visualization techniques as well as computer vision algorithms. During my thesis we developed multiple 2D image abstraction techniques, which we have also transferred to mobile devices. One of these promising technology transfers is ToonSnap. Since September 2013 we have been developing the ToonSnap prototype into an easy-to-use product.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.35/warc/CC-MAIN-20230923062631-20230923092631-00507.warc.gz
CC-MAIN-2023-40
533
1
http://www.hackthissite.org/user/view/wolfhack13
code
"If you think technology can solve your security problems, then you don't understand the problems and you don't understand the technology." -Bruce Schneier HTS costs up to $300 a month to operate. We need your help! Well, It Had Your Name On It Before It Got Deleted. What The Hell Was This Bug Report You Sent?
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444312.8/warc/CC-MAIN-20141017005724-00053-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
311
4
https://ez.analog.com/data_converters/high-speed_adcs/f/q-a/573883/getting-started-with-ad6688-3000ebz/505987
code
Hello everyone, I am new to this forum. I recently acquired the AD6688-3000EBZ, AD9082-FMCA-EBZ and ADS7-V2EBZ evaluation kits for the design of a VHF radio. I have since done my research and settled on a zero-IF digital VHF radio. Does anyone have information on how to get started using the AD6688-3000EBZ and the ADS7-V2EBZ FPGA board? Where can I download sample code to sample RF signals and transmit them back using the DAC? Also, can you recommend any book on digital radio design / SDR? I am learning as I build my digital radio.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948235171.95/warc/CC-MAIN-20240305124045-20240305154045-00672.warc.gz
CC-MAIN-2024-10
522
4
https://github.com/kimi-liver
code
- as3corelib 1 An ActionScript 3 library that contains a number of classes and utilities for working with ActionScript 3. These include classes for MD5 and SHA-1 hashing, image encoders, and JSON serialization …
- linux 1 Linux kernel source tree
- textmate 0 TextMate is a graphical text editor for OS X 10.7+
- ccv 0 C-based/Cached/Core Computer Vision Library, a modern computer vision library
- tornado 0 Tornado is an open source version of the scalable, non-blocking web server and tools that power FriendFeed.
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929003.41/warc/CC-MAIN-20150521113209-00221-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
680
8
https://community.tadabase.io/t/referential-integrity-and-cascading-deletes/1653
code
I’d like to connect a notes table to a jobs table, so that any job can have multiple notes attached. So far, so good. I have connected the two tables in a one-to-many relationship and all goes well, but when I delete a parent record that has one or more child records, the parent (job) gets deleted but not the connected child (note) records. Instead of being deleted as I had expected, the attached notes have the connecting field blanked and they remain. Is there a way to have the dependent child records automatically deleted when the parent records are deleted, as is standard practice with relational databases? Hey @MarkC! Welcome to the community! This comes up a bunch, especially with clients who are familiar with database architecture. At Tadabase we have a unique way of dealing with the relationship and therefore never cascade-delete records. What we recommend is to filter these notes and manually delete any records that are orphaned. Another option is to soft-delete the Job (a new field that gets marked as deleted), then create a record rule that will update the connected Notes and mark them as deletable. Let me know if I can clarify anything further. I agree with this request. At the very least it should be optional. What is the reason to keep records that have lost their relationship with the parent and cannot be reached except directly? In fact, if you want to comply with the GDPR in the EU and a user wants their own data deleted, then deleting the user should also delete all data related to that user. So this is necessary functionality for GDPR compliance in the EU. @moe, is there any way to delete all records connected to a user, other than manual deletion? If a user deletes their own access, I should have a log of it, and then after a user is deleted, manually delete all records related to that user… I don’t find that workable. I think I have read of someone who found a way, but I cannot find the related post.
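For readers coming from a relational-database background, the behaviour @MarkC expected is what a foreign key with ON DELETE CASCADE provides. A minimal sketch in plain Python/SQLite, outside Tadabase, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("""
    CREATE TABLE notes (
        id INTEGER PRIMARY KEY,
        job_id INTEGER REFERENCES jobs(id) ON DELETE CASCADE,
        body TEXT
    )
""")

conn.execute("INSERT INTO jobs VALUES (1, 'Repair boiler')")
conn.execute("INSERT INTO notes (job_id, body) VALUES (1, 'Customer prefers mornings')")
conn.execute("INSERT INTO notes (job_id, body) VALUES (1, 'Parts ordered')")

conn.execute("DELETE FROM jobs WHERE id = 1")  # the two notes are deleted with the job
print(conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0])  # prints 0
```

Per the reply above, Tadabase itself does not expose this, so the soft-delete plus record-rule approach is the in-platform workaround.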
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00342.warc.gz
CC-MAIN-2023-06
1,882
10
https://www.vcom.edu/research/services/statistical-consulting
code
The Research Biostatistician at VCOM provides statistical advice, analysis, and education to VCOM researchers through statistical consulting and collaboration. This includes help with study design, sample size calculation, data visualization and analysis, and interpretation of statistical concepts and analysis results. Services are offered to researchers through two types of assistance: Statistical Consulting and Collaboration Meetings and Statistical Consulting Sessions. Statistical Consulting Sessions are designed to address your quick statistical questions or to help with research projects requiring less than 30 minutes of assistance. Please sign up for assistance and provide brief information via the Statistical Consulting and Collaboration Request Form. A biostatistician will contact you by email/phone call to arrange either Zoom or in-person meetings. Statistical Consulting and Collaboration Meetings are developed for more in-depth questions or longer projects and offer the researchers more flexibility in timing and meeting approaches. If you wish to consult or collaborate with a biostatistician, please review the guidelines below and then complete this form: For detailed information and training on statistical analysis, data preparation, open-source software, and material focused specifically on biomedical research, please visit the in-depth Statistical Resources page. Statistical Consulting and Collaboration Guidelines
- Please understand that statistical consulting deals with simple questions that can be solved within a short period (approximately 1-1.5 hours) and does not need repeated scheduled meetings with the biostatistician, whereas statistical collaboration answers research questions and requires a long-term partnership with the biostatistician, preferably from the initial design of the study to the end. Statistical collaboration is the appropriate approach for extramural and intramural research, with the role of the biostatistician funded through grant submission.
- Please understand that the biostatistician primarily provides advice (and when possible, assistance) on cleaning or formatting data and data variable names. Please recognize that reformatting the data is often required and, when required, it will delay results.
- Please try to come prepared to the first meeting with research background, research goals and questions, research papers that may assist the biostatistician in understanding your research, and results of any analyses already performed, including plots or graphs.
- Please understand that many problems cannot be solved in just one meeting and be prepared to meet several times with the biostatistician if needed.
- Please understand that requests should be received well ahead of the proposed deadline (minimum 4 weeks) to receive proper attention. In rare exceptions, and subject to the availability of the biostatistician, limited assistance may be provided on short notice. However, because of the high volume of assistance provided, there can be no guarantee that every requested deadline can be honored.
- Please understand that projects are processed based on the request date (first come, first served) and hard deadlines. In addition, projects may be triaged based on VCOM priorities.
- Please understand that you cannot receive assistance on class projects or homework.
- We will provide assistance in a timely manner to the collaborators.
- We will provide advice on data formatting or cleaning if needed.
- We will advise the collaborators on appropriate study design or analyses for their data and assist in performing the analyses as time allows. - We will discuss with the collaborators statistical assumptions made and the potential drawbacks of different analyses. - We will attempt to explain the statistical methods used and results in a way that the collaborators understand. If you have any questions or concerns, please contact the research biostatisticians.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510219.5/warc/CC-MAIN-20230926175325-20230926205325-00886.warc.gz
CC-MAIN-2023-40
3,956
19
http://www.happybirthdayapple.com/terms_and_conditions.html
code
Happy Birthday Apple Terms and Conditions This Happy Birthday Apple terms and conditions of service policy (the "Agreement" or the "T&C") provides important information about your use of the Happy Birthday Apple service, so you should take the time to read and understand it. You may review, save or print any part of this Agreement. As used herein, “HBA“ means and refers to Happy Birthday Apple. IMPORTANT: IF YOU CHOOSE TO ACCEPT THIS AGREEMENT, YOU MUST DO SO AS IT IS PRESENTED TO YOU — NO CHANGES (ADDITIONS OR DELETIONS) WILL BE ACCEPTED BY HBA. HBA may change, add or remove any part of this Agreement, or any part of the HBA services and features, including price, at any time. If it does so, HBA will post such changes on the HBA site. Using HBA Services The HBA Message The HBA Message (herein called the “Message”) is submitted by the user as Content to be displayed on the HBA website and presented to Apple Computer Inc in a book format. All messages become the property of HBA. The message contains a name, country of origin, email and body text. Payment and Currency If you elect to use the HBA service, you agree to pay all fees for that service at the prices set forth on the HBA site and upon the terms set forth on that site and in this Agreement. Currency is always quoted in US dollars. All fees for HBA services are non-refundable. If your Content fails the HBA T&C then that service will be cancelled without a refund of the fee. Availability of the service While HBA makes reasonable efforts to ensure that HBA is available at all times, HBA does not guarantee, represent or warrant that the HBA website or services will be uninterrupted or error-free, and HBA does not guarantee that users will be able to access or use all the HBA features at all times. HBA also does not guarantee or warrant that any Content you may have submitted to HBA will not be subject to inadvertent damage, corruption or destruction. Acceptable use policy guidelines HBA encourages users to participate in the on-line world to express their views and to benefit from the interactive experience. However, it is important to remember that there are rules and standards that you must abide by when you use HBA services. These rules and standards are described in this Agreement. As an HBA user, you agree to comply with this Agreement, and you acknowledge that HBA has the right to enforce this Agreement in its sole discretion. This means that if you violate the terms of this Agreement, HBA may take any and all appropriate actions as HBA deems necessary or appropriate; this includes not including your message in the printed book to be presented to Apple Computer Inc. HBA is not required to provide notice prior to cancelling your message for violating these rules and standards, but it may choose to do so. Inappropriate conduct falls into a number of categories. The more commonly understood categories are discussed below, although this list is not exclusive. Illegal and prohibited content HBA messages may be submitted only with lawful and proper content. Submitting Content that could subject HBA to any legal liability, whether in tort or otherwise, or that is in violation of any applicable law or regulation, or otherwise contrary to commonly accepted community standards, is prohibited, including information and material protected by copyright, trademark, trade secret, nondisclosure or confidentiality agreements, or other intellectual property rights, and material that is obscene, defamatory or constitutes a threat.
If HBA elects to terminate your message as a result of any illegal or prohibited conduct, HBA may elect, in its sole discretion, not to refund any prepaid fees to you. Examples of prohibited conduct are: - Posting obscene content - Pretending to be anyone you are not—you may not impersonate or misrepresent yourself as another person (including celebrities), or a civic or government leader; HBA reserves the right to reject or block any name which could be deemed to be an impersonation or misrepresentation of your identity, or a misappropriation of another person's name or identity Objectionable conduct and content It is essential that all Content in an HBA message reflect the provisions of this Agreement. HBA reserves the right to remove Content if HBA becomes aware of any Content which, in HBA’s judgment, does not conform to this Agreement. HBA may send you a warning about the violation of this Agreement if your message contains objectionable Content, but we reserve the right not to do so. At all times, HBA reserves the right to terminate, with or without notice, the messages of HBA users who violate this Agreement. In such a case, you will not be entitled to any refund of your fees. Examples of objectionable conduct and Content that violate the HBA acceptable use policy are: - Content that is harmful, abusive, violent, racially or ethnically offensive, lewd, vulgar or (in a reasonable person's view) objectionable - Content that defames, abuses or threatens physical harm to others or yourself If you encounter something you find inappropriate, you may report it by submitting the message number at http://www.happybirthdayapple.com/reportmessage.htm Authorisation to remove content HBA reserves the right to remove any Content you may have submitted under the following circumstances: - If your payment via credit card or PayPal is not successful. If this is the case HBA will notify you via your supplied email address. Authorisation to contact user By submitting and paying for the HBA service you give HBA and its affiliates the authority to contact you via your nominated email or webmail address. Contact will be limited to: - updating you with information regarding the HBA campaign and activities - news from HBA and its affiliates - allowing Apple Computer Inc to reply to your message and send you an additional message of their choice Disclaimer of warranties; liability limitations YOU EXPRESSLY AGREE THAT YOUR USE OF, OR INABILITY TO USE, THE HBA SERVICES IS AT YOUR SOLE RISK. HBA SERVICES ARE PROVIDED "AS IS" AND "AS AVAILABLE" FOR YOUR USE, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED. IN NO CASE SHALL HBA, ITS DIRECTORS, OFFICERS, EMPLOYEES, AFFILIATES, AGENTS OR CONTRACTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OF THE HBA SERVICE OR FOR ANY OTHER CLAIM RELATED IN ANY WAY TO YOUR HBA USE. HBA may give notice to any HBA user by sending an e-mail message to the user's email address. HBA users may contact HBA by emailing [email protected]. This Agreement represents your entire agreement with HBA with respect to the HBA Service. You agree that this Agreement is not intended to confer and does not confer any rights or remedies upon any person other than you, as an HBA user, and HBA. Revised February 2006
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690376.61/warc/CC-MAIN-20170925074036-20170925094036-00586.warc.gz
CC-MAIN-2017-39
6,828
36
https://richardred.medium.com/followers?source=user_profile-------------------------------------
code
CTO at Galaxy Investment Partners Cryptocurrency enthusiast Blogger, tech writer, senior engineer in telecom and seeker of the truth of things Crypto enthusiast. Big on blockchain interoperability, DeFi legos, separation of money and state. Branding, narratives and strategic positioning. Full stack founder. https://angel.co/vladstan Writing about cryptocurrency/blockchain projects that are doing something interesting with regard to governance. Decred contributor.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473472.21/warc/CC-MAIN-20240221102433-20240221132433-00847.warc.gz
CC-MAIN-2024-10
488
7
https://community.oracle.com/message/10760142
code
I am invoking an EBS API through the Oracle Applications Adapter in my BPEL process. During the execution of my BPEL process, if my EBS API call takes more than 10 s, I want it to time out. Can you please let me know how I can achieve this? I tried using <property name="timeout">10</property> and <property name="optSoapShortcut">false</property> in the partner link definition, but it didn't help. Check this out... Another option is a setting on the partner link... Thanks Arik and Vijay for your responses. syncMaxWaitTime needs to be set at the domain level, so it would time out all the services that are deployed in my environment. The properties transaction-timeout and syncMaxWaitTime will not resolve the issue, because I need to achieve the timeout for a particular service only (sorry, I should have mentioned in the problem statement that I need to implement the timeout for a specific service). I tried the PartnerLink timeout property along with optSoapShortcut, but the timeout is not happening. Could you provide any other alternative solutions that would help me time out the call to EBS made through the Oracle Applications Adapter in a BPEL 10g Invoke activity? Wish you a belated happy Christmas and advance New Year wishes.
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678703018/warc/CC-MAIN-20140313024503-00062-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
1,280
10
http://avrilomics.blogspot.com/2013/02/using-exonerate-to-align-estscdnas-to.html
code
Aligning ESTs/cDNAs to a genome using exonerate
To align ESTs (or EST clusters) or cDNAs to a genome sequence, the program exonerate can be used. This can be run using the command:
% exonerate --model est2genome --bestn 10 est.fa genome.fa
where 'est.fa' is your fasta file of ESTs, and 'genome.fa' is your fasta file of scaffolds/chromosomes for your genome. The '--bestn 10' option finds the top 10 matches of each EST in the genome. Other useful options are: -t genome.fa : the fasta file 'genome.fa' of scaffolds/chromosomes for your genome, -q est.fa : the fasta file 'est.fa' of ESTs, --showtargetgff : give the output in GFF format, --showalignment no : do not show the alignment between the EST and genome in the output, --refine full : this performs a more accurate alignment between the EST and genome, but will be slower (I found this very slow), --model coding2genome : the exonerate man page says that this is similar to '--model est2genome', but that the query sequence is translated during comparison, allowing a more sensitive comparison. In practice, I found that it is fastest to run exonerate by putting each EST cluster into a separate fasta file, and then running exonerate for each EST cluster against the genome assembly. For example, to align Parastrongyloides trichosuri EST clusters to the genome assembly, I put the EST sequences in files seq1, seq2, seq3, etc. and ran the following shell script:
foreach file (seq*)
exonerate -t genome.fa -q $file --model coding2genome --bestn 1 --showtargetgff > $file.out
end
I found that this required quite a lot of memory (RAM), so I requested 500 Mbyte of RAM when I submitted it to the Sanger compute farm. This produced files seq1.out, seq2.out, seq3.out, etc. with the exonerate results for EST clusters seq1, seq2, seq3, etc. Each exonerate gene prediction is given a score. I find that gene predictions with scores of at least 1000 look pretty good.
Getting the predicted proteins in ESTs from exonerate
If you use the --model coding2genome option, exonerate will give you a protein alignment for the predicted gene and the EST that you align it to. If you wanted to get the predicted protein from the EST that is in that alignment, you could use this extra option: exonerate --ryo ">%qi (%qab - %qae)\n%qcs\n" (see here and here).
Aligning proteins to a genome using exonerate
To align proteins to a genome, you need to use '--model protein2genome' instead of '--model coding2genome', e.g.:
% exonerate -t genome.fa -q protein.fa --model protein2genome --bestn 1 --showtargetgff
Running BLAST first, and then running exonerate for the genome regions with BLAST hits:
Exonerate can be a bit slow to run, so if you want to speed things up (with some loss of sensitivity), you can run BLAST first to find the regions of a genome assembly with hits to your query EST or protein, and then run exonerate to align the query EST/protein to the region of the BLAST hit. You can use my script run_exonerate_after_blast.pl for this, e.g.
% run_exonerate_after_blast.pl assembly.fa proteins.fa exonerate_output . 0.05 25000 /software/pubseq/bin/ncbi_blast+/
where assembly.fa is your input assembly file, proteins.fa is the file of proteins you want to align using exonerate, exonerate_output is the exonerate output file, processed to just have some of the GFF lines, 0.05 is the BLAST e-value cutoff to use, 25000 means that we run exonerate on the BLAST hit region plus 25,000 bp on either side, and /software/pubseq/bin/ncbi_blast+/ is the path to the directory with BLAST executables.
[Note to self: you will probably need to submit this to the 'long' queue (48 hour time-limit) on the Sanger farm, with 1000 Mbyte of RAM.] If the job runs out of run-time on the Sanger farm before finishing, the output so far will be in a large file called tmp0.xxx... where xxx... are digits. You can convert this to the format that the output should be in by using my script convert_exonerate_gff_to_std_gff.pl.
Aligning intron regions of one genome to another genome
In this example, I had a file of intron sequences from one species, and wanted to align them to the full genome of a second species. I decided to use the 'ryo' (roll-your-own) output format, which allows you to specify your own output format:
% exonerate --model affine:local --revcomp --ryo "%qi %qab %qae %ti %tab %tae %s %pi" --showalignment no --bestn 1 introns.fa genome2.fa
where --model affine:local will give local alignments, --revcomp looks on both strands for matches, --ryo "%qi %qab %qae %ti %tab %tae %s %pi" prints out the query id., start and end of the alignment in the query, target id., start and end of the alignment in the target, alignment score, and percent identity of the alignment. --showalignment no : means the alignment isn't shown. --bestn 1 : just give the best hit for each query.
Other useful exonerate options
--model protein2genome:bestfit : this includes the entire protein in the alignment. --exhaustive yes : makes more accurate alignments (but is slower).
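If you would rather drive the same per-cluster loop from Python (for example, to add logging or retries), here is a minimal equivalent of the csh script above. It assumes exonerate is on your PATH and genome.fa is in the working directory; it is an illustration, not the author's pipeline:

```python
import glob
import subprocess

# One exonerate run per EST-cluster file (seq1, seq2, ...), writing seqN.out,
# mirroring the csh foreach loop in the post.
for est in sorted(glob.glob("seq*")):
    if est.endswith(".out"):
        continue  # skip output files left over from earlier runs
    with open(f"{est}.out", "w") as out:
        subprocess.run(
            ["exonerate", "-t", "genome.fa", "-q", est,
             "--model", "coding2genome", "--bestn", "1", "--showtargetgff"],
            stdout=out, check=True,
        )
```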
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151563.91/warc/CC-MAIN-20210725014052-20210725044052-00544.warc.gz
CC-MAIN-2021-31
5,008
47
https://www.theadminsguide.net/2023/08/22/solving-package-has-no-installation-candidate-issue-on-debian-based-servers/
code
Installing packages on Debian-based servers can sometimes be anything but smooth, and one of the common errors many users encounter is the "Package has no installation candidate" issue. This error generally suggests that the system doesn't recognize the package you are trying to install, or that it is unable to locate it in a repository. In this article we will dig into the causes behind this error and guide you through troubleshooting it. Note: the mv command is not applicable to this particular problem; mv is used for moving files and has no direct role in package management.
The Cause of the Error The "Package has no installation candidate" error in Debian-based systems essentially means that APT, the package handling utility, cannot find the package you wish to install in its repositories. The repositories are like a database of available packages from which APT can download and install them. If APT cannot find your desired package in its lists, it returns this error. We will assume that you are a superuser or have sudo access, as you will need these permissions to modify system-related files and update packages.
Step 1: Update Your Repository Just like you, the system needs refreshing and updating once in a while. Run the following command to make sure your package lists from the repositories are up to date: sudo apt-get update
Step 2: Check for Typographical Errors Double-check your spelling and syntax. Package names are case sensitive, and APT won't find the package if its name is spelled incorrectly.
Step 3: Verify Package Availability Also, make sure that the package you want is valid and available in the repositories. Some packages are not in the default repositories; you may need to add alternative repositories, or the package may not be available at all.
Step 4: Enable the Correct Repository The package you're looking for might belong to the universe, multiverse or other repositories that are not enabled by default. If this is the case, open your sources.list: sudo nano /etc/apt/sources.list And add or uncomment (remove the leading '#') the lines that include the repository you need: deb http://us.archive.ubuntu.com/ubuntu/ bionic universe deb-src http://us.archive.ubuntu.com/ubuntu/ bionic universe Then update your package list again: sudo apt-get update
Step 5: Try Reinstalling the Package Once all these steps are done, try installing the package again, calling it by its correct name, and see whether the error is resolved. Working with open-source systems like Debian may pose some challenges along the way. However, with a bit of knowledge about how these systems function and are structured, resolving these issues becomes easier. I hope this guide has helped you understand the "Package has no installation candidate" error better, and that you are now able to get past it without much trouble.
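For Step 3, a quick scriptable check is to ask apt-cache whether a candidate version exists. A small sketch in Python, assuming a Debian/Ubuntu box with apt-cache available (the package name "curl" is just an example):

```python
import subprocess

def has_candidate(package: str) -> bool:
    """Return True if APT reports an installable candidate for `package`."""
    result = subprocess.run(
        ["apt-cache", "policy", package],
        capture_output=True, text=True,
    )
    # An unknown package prints nothing to stdout; a known but currently
    # uninstallable one prints 'Candidate: (none)'.
    return bool(result.stdout.strip()) and "Candidate: (none)" not in result.stdout

print(has_candidate("curl"))  # expect True on a stock Ubuntu install
```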
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476205.65/warc/CC-MAIN-20240303043351-20240303073351-00677.warc.gz
CC-MAIN-2024-10
2,955
23
https://www.physicsforums.com/threads/please-help-me-with-simplifying-a-boolean-equation.537214/
code
1. The problem statement, all variables and given/known data Simplify the following xyz + x'(w+z') + yz(w+z') 2. Relevant equations 3. The attempt at a solution xyz + x'(w + z') + yz(w + z') xyz + x'w + x'z' + yz(w + z') xyz + x'w + x'z' + yzw + yzz' xyz + x'w + x'z' + yzw + 0 I got the answer xyz + x'w + x'z' + yzw However in the book it says the answer is xyz + x'w + x'z'. How could that be? I don't see any way of getting rid of yzw. Please tell me if this is correct or I need to fix something. Thank you.
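The book is right: yzw is redundant because it is the consensus term of xyz and x'w (consensus theorem: AB + A'C + BC = AB + A'C, here with A = x, B = yz, C = w), so it is already covered by the other two terms. A quick brute-force check over all 16 assignments, sketched in Python:

```python
from itertools import product

def original(w, x, y, z):
    # xyz + x'(w + z') + yz(w + z')
    return (x and y and z) or ((not x) and (w or not z)) or (y and z and (w or not z))

def book_answer(w, x, y, z):
    # xyz + x'w + x'z'
    return (x and y and z) or ((not x) and w) or ((not x) and (not z))

print(all(original(*v) == book_answer(*v)
          for v in product([False, True], repeat=4)))  # prints True
```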
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657557.2/warc/CC-MAIN-20190116175238-20190116201238-00353.warc.gz
CC-MAIN-2019-04
512
1
https://hackernoon.com/predict-customer-churn-with-machine-learning-data-science-and-survival-analysis
code
Predict Customer Churn With Machine Learning, Data Science and Survival Analysis Too Long; Didn't Read: Churn is the process of customers leaving their service provider for a competitor. It can happen for many reasons, such as financial constraints, poor customer experience, or dissatisfaction with the company. The Tesseract Academy recently worked on a customer churn prediction problem with a large insurance company based in London and San Francisco. In this article we will examine some of the methods that we used to make the project successful, and dive into the techniques we used to predict which customers will churn and to better understand their behavior.
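As a taste of the survival-analysis angle named in the title, here is a minimal Kaplan-Meier sketch using the lifelines library. This is not the article's actual code, and the toy tenure data is invented:

```python
# pip install lifelines pandas
import pandas as pd
from lifelines import KaplanMeierFitter

# Toy data: months as a customer, and whether churn was observed (1) or the
# customer is still active, i.e. censored (0). Real data would come from a CRM.
df = pd.DataFrame({
    "tenure_months": [3, 12, 7, 24, 18, 5, 30, 9],
    "churned":       [1,  0, 1,  0,  1, 1,  0, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["tenure_months"], event_observed=df["churned"])
print(kmf.survival_function_)  # estimated P(customer still active past month t)
```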
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474594.56/warc/CC-MAIN-20240225071740-20240225101740-00885.warc.gz
CC-MAIN-2024-10
685
2
https://cloud.google.com/apigee/resources/
code
Web API Design: The Missing Link
A comprehensive collection of the web API design best practices used by the world's leading API teams.
The Digital Transformation Journey
Gain insights into the common ingredients needed for successful digital transformations.
Mastering Full Lifecycle API Management with Analytics
How to scale your API program using API analytics.
Inside the API Product Mindset
Field-tested best practices and real-world use cases for enterprise API teams.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575484.57/warc/CC-MAIN-20190922094320-20190922120320-00290.warc.gz
CC-MAIN-2019-39
476
8
http://stackoverflow.com/questions/4006489/prevent-openssl-from-using-system-certificates?answertab=oldest
code
I've just run a few tests, and listing your selection of CAs in the ca_certs parameter is exactly what you need. The system I've tried it on is Linux with Python 2.6. If you don't specify ca_certs, it doesn't let you use certificate verification at all:
Traceback (most recent call last):
File "sockettest.py", line 18, in <module>
File "/usr/lib/python2.6/ssl.py", line 350, in wrap_socket
File "/usr/lib/python2.6/ssl.py", line 113, in __init__
cert_reqs, ssl_version, ca_certs)
ssl.SSLError: _ssl.c:317: No root certificates specified for verification of other-side certificates.
I've also tried to use a client to send a certificate that's not from a CA in the ca_certs parameter, and I get ssl_error_unknown_ca_alert (as expected). Note that either way, there's no client-certificate CA list sent (in the certificate_authorities list in the CertificateRequest TLS message), but that wouldn't be required. It's only useful to help the client choose the certificate.
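On current Python versions the same idea is expressed with an SSLContext: load only your chosen CA bundle, and the system store is never consulted. A sketch, where the bundle path and host are placeholders:

```python
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # implies CERT_REQUIRED and hostname checks
ctx.load_verify_locations("my_ca_bundle.pem")  # trust only these CAs, not the system's

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake only succeeds if the server's chain verifies
        # against the loaded bundle.
        print(tls.version())
```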
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246634331.38/warc/CC-MAIN-20150417045714-00063-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
933
16
https://www.sheffield.ac.uk/dcs/research/groups/testing/research
code
Themes investigated by the Testing Research Group include model-based testing, search-based testing, property-based testing, security verification and testing, model-driven engineering, reverse engineering, XML data processing, massively parallel multi-agent simulation and software engineering.
Our work on model-based testing using Stream X-Machines is internationally known. This formal testing method generates complete functional test suites from a model specification expressed as an extended finite state machine, and offers guarantees of correctness once testing is complete. The work has been used at Daimler for testing automotive systems designed using Harel Statecharts, applied to testing the gate logic on low-power ARM chips, used for unit testing Java, and also used to test software services for SAP and SingularLogic in the Cloud. Several software testing tools have grown out of this work, including StateTest, JWalk and Broker@Cloud. Leaders: Dr Simons, Dr Bogdanov, Prof Holcombe
Our work on search-based testing is internationally known. This evolutionary testing method creates genomes for tests, which evolve by mutation and inter-breeding until superior tests are expressed, capable of exercising parts of software systems that are difficult to reach. Fitness is judged by instrumenting the tested code to measure what is covered by the tests. The approach has been applied to large C programs and also to object-oriented programs in Java. Several software tools have grown out of this work, including EvoSuite (for Java), IGUANA (for C) and Code Defenders, the first crowd-sourced interactive mutation testing game. Leaders: Dr McMinn, Dr Fraser
Property-based testing provides a high-level approach to testing and is a complementary technique to the similar model-based testing. Rather than focusing on individual test cases to encapsulate the behaviour of a system, in property-based testing this behaviour is specified by properties, expressed in a logical form. The system is then tested by checking whether it has the required properties for randomly generated data, which may be inputs to functions, sequences of API calls, or other representations of test cases. This work has resulted in a code coverage tool (Smother) and a mutation testing tool (Mu2) for the Erlang language. Leaders: Prof Derrick, Dr Bogdanov, Dr Taylor
Security Verification and Testing
Security encompasses information security, software engineering, security engineering, and formal methods. Our research in this area investigates all security aspects of distributed and service-oriented systems. This includes applied security aspects, such as access control or business-process modelling, as well as fundamental aspects, such as novel static and dynamic approaches for ensuring the security of applications. We participate in the development of interactive theorem proving environments for Z (HOL-Z) and UML/OCL (HOL-OCL, which is integrated into a formal MDE tool-chain) and a model-based test-case generator (HOL-TestGen). Leaders: Dr Brucker
In the future, software systems will not be created by writing program code, but rather generated from high-level abstract models that are closer to end-user requirements. What initial models and languages should be chosen? How should models be checked? How should they be folded together to create more detailed system specifications? How should the transformation rules be verified?
Our work so far has investigated dependently-typed languages as a means of verifying model transformations, has generated simple information systems from requirements, and has also generated platform-specific test suites for SOAP- or REST-based software services in the Cloud. Leaders: Dr Simons, Dr Brucker, Prof Derrick
This novel approach recovers specifications from legacy software systems. The reverse engineering method collects traces of the system's execution and performs grammar inference on the traces to detect behavioural regularity. From this, finite state models are constructed, which allow further hypotheses about the specification to be generated and tested. The approach has been applied to recover both flat and nested state specifications, and used for the supervised re-modularisation of software systems. Software tools include StateChum (for reverse engineering) and SUMO (for supervised re-modularisation). Leaders: Dr Bogdanov, Dr Hall, Dr Taylor
XML Data Processing
The rise of distributed data processing in the Cloud has led to a resurgence of non-relational key-value and tree-structured data formats like XML. This has opened up new research areas in XML data compression, distribution and storage, with associated issues of data indexing and query processing. We have developed algorithms for fast searching in distributed XML databases using sparse binary matrix indexing, and for trust-based access control to XML data with dynamic learning. The technology has been applied in Botswana for the distributed mobile phone hosting of compressed XML databases. Leaders: Dr North
Massively Parallel Multi-Agent Simulation
This radical approach to simulation is able to model the interacting behaviour of massive populations of individual agents on Grid or HPC parallel architectures. The detail offered by individual-based modelling is fast outpacing traditional aggregating methods that use calculus. We have modelled the behaviour of social insect colonies, the transfer of proteins, the growth of epithelial skin cells, and even the performance of the European economy! We have discovered new facts about ants, and about the best way to recover from the financial crisis. The FLAME (Flexible Large-scale Agent-based Modelling Environment) tool developed by the VT-Group is now also used by our Computational Biology and Graphics research groups, among others. Leaders: Dr Richmond, Dr McMinn
Why do software development projects frequently fail, and what can be done about it? This strand of our research has investigated both traditional and agile software development methods, to see how they work in practice. We have published internationally famous critiques of the UML notation and associated development process, and also of the Agile method known as XP (eXtreme Programming). We have collaborated with work psychologists to monitor the behaviour and effectiveness of developer teams in our own software company Genesys Solutions, revealing how the constitution of teams affects how well they work. Leaders: Dr Cowling, Dr Simons
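The group's property-based testing tools target Erlang (Smother, Mu2), but the idea transfers directly to other languages. A minimal illustration in Python with the hypothesis library (my example, not the group's code): the behaviour is stated as a property that must hold for all inputs, and the framework generates the random test data.

```python
# pip install hypothesis
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_is_idempotent(xs):
    # The property: sorting an already-sorted list changes nothing.
    assert sorted(sorted(xs)) == sorted(xs)

# Normally collected by pytest; calling the wrapped function directly
# also runs the generated examples.
test_sort_is_idempotent()
```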
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00818.warc.gz
CC-MAIN-2017-43
6,512
22
http://ruby5.envylabs.com/episodes/366-episode-362-april-16th-2013/stories/3198-linkedlists-in-ruby
code
Feel the Motion of the Summer, Linking your Lists to module_functions. When a Hobbit caches your method, that's when you Work with Ruby Threads to the rescue. This episode is sponsored by Dead Man's Snitch. Know when your periodic tasks stop working. Unary Operators, Writing fast Ruby, each_with_object, ES6 Transpiler and HStore RailsRumble, Ruby Motion for Rails devs, how Ruby Hashes work, how to deal with data migrations, clean up your routes file, and get better logs. Rails data migrations, tools for optimization, Bundler::Updater, using UUID with Postgres, 20,000 Leagues Under Active Record, and Ruby 2.2.0 Greenscreen.io, rails-disco, onboarding your junior devs, being a better Rubyist, staging environments, and the anti-pattern of absolutes all in this episode of the Ruby5.
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443869.1/warc/CC-MAIN-20141017005723-00245-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
789
6
https://www.gamedev.net/blogs/entry/2256233-pipeline-state-monitoring-results/
code
The results were a little bit surprising to me. It turns out that for all of the samples which are GPU limited, there is a statistically insignificant difference between the two - so regardless of the number of API calls, the GPU was the one slowing things down. This makes sense, and should be another point of evidence that trying to optimize before you need to is not a good thing. However, for the CPU limited samples there is a different story to tell. In particular, the MirrorMirror sample stands out. For those of you who aren't familiar, the sample was designed to highlight the multi-threading capabilities of D3D11 by performing many simple rendering tasks. This is accomplished by building a scene with lots of boxes floating around three reflective spheres in the middle. The spheres perform dual paraboloid environment mapping, which effectively amplifies the amount of geometry to be rendered since they have to generate their paraboloid maps every frame. Here is a screenshot of the sample to illustrate the concept: This exercises the API call mechanism quite a bit, since the GPU isn't really that busy and there are many draw calls to perform (each box is drawn separately instead of using instancing for this reason). It had shown a nice performance delta between single and multi-threaded rendering, but it also serves as a nice example for the pipeline state monitoring too. The results really speak for themselves. The chart below shows two traces of the frame time for running the identical scene both with and without the state monitoring being used to prevent unneeded API calls. Here, less is more since it means it takes less time to render each frame. As you can see, the frame time is significantly lower for the trace using the state monitoring. So to interpret these results, we have to think about what is being done here. The sample is specifically designed to be an example of heavy CPU usage relative to the GPU usage. You can consider this the "CPU-Extreme" side of the spectrum. On the other hand, GPU bound samples show no difference in frame time - so we can call this the "GPU-Extreme" side of the spectrum. Most rendering situations will probably fall somewhere in between these two situations. So if you are very GPU heavy, this probably doesn't make too much difference. However, once the next generation of GPUs comes out, you can easily have a totally different situation and become CPU bound. I think Yogi Berra once said: "it isn't a problem until it's a problem." So overall, in my opinion it is worthwhile to spend the time and implement a state monitoring system. This also has other benefits, such as the fact that you will have a system that makes it easy to log all of your actual API calls vs. requested ones - which may become a handy thing if your favorite graphics API monitoring tools ever become unavailable... (yes, you know what I mean!). So get to it, get a copy of the Hieroglyph code base, grab the pipeline state monitoring classes and hack them up into your engine's format!
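The engine in question is C++/D3D11, but the mechanism is easy to sketch in a few lines of runnable Python: keep a shadow copy of each pipeline state slot and only forward a change when the value actually differs. All names here are invented for illustration:

```python
class PipelineStateCache:
    """Filter out redundant state-setting requests before they reach the API."""

    def __init__(self, device):
        self.device = device   # anything exposing set_state(slot, value)
        self.shadow = {}       # last value actually submitted, per state slot
        self.calls_saved = 0   # handy for logging requested vs. actual calls

    def set_state(self, slot, value):
        if self.shadow.get(slot) == value:
            self.calls_saved += 1  # redundant request: skip the API call
            return
        self.shadow[slot] = value
        self.device.set_state(slot, value)
```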
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823565.27/warc/CC-MAIN-20181211040413-20181211061913-00146.warc.gz
CC-MAIN-2018-51
3,038
6
http://www.crlfactory.com/sqlite-database-creation-insertion/
code
SQLite Database Creation and Insertion
In this post we will look at SQLite database creation and insertion. If we want to store a lot of data and need to do some searching or sorting, we should use an SQLite database. Android natively supports SQLite, so we don't need to add any other library to use it. To demonstrate this, I am creating a new application named SqliteExample with the default configuration. In this example, I am going to store a name and an age in the SQLite database. Open "activity_main.xml". Change the root layout to LinearLayout, add orientation as vertical, and remove the views inside the root layout. Inside, create one more LinearLayout with width "match_parent" and height "wrap_content". Add an EditText for the name with margin and padding of 10dp, assigning the id "etvName". Duplicate the EditText by selecting the code and pressing Ctrl+D on your keyboard. Add orientation as vertical for the LinearLayout and change the second EditText's id to "etvAge". Age must be a number, so set its inputType to number. Add a Button so that on click of it we can insert the entered data into the database. Set margin and padding of 10dp, text as "Insert", and the id "btn_insert". Create a new Kotlin file to define the structure of the user object: class User, adding id, name and age attributes, and creating a constructor with name and age as parameters, so that we can create a user object in a single line, setting the parameters as attribute values. To handle database operations we need a handler class, so create a Kotlin file. We need some values like the database name, table name, column names etc., so create those variables. Create the handler class with the same name as the file, with a context parameter. It extends the SQLiteOpenHelper class, to which we need to pass 4 parameters: context, database name, CursorFactory and database version. We are not going to use a CursorFactory, so we pass null there, and give the version as 1. We need to implement onCreate() and onUpgrade() from the base class. onCreate() will be executed when the device doesn't contain the database, so we need to create the required tables here. Here I am creating the table with id, name, and age as fields. To execute the query we use execSQL(), passing the query string. onUpgrade() will be executed when we have an older version of the database; you get the old version and the current version in this method, and using these we need to upgrade the table structure accordingly. Create a function insertData() to insert a User object. There are two types of SQLiteDatabase objects; here we are going to write data into the database, so we get the writable database. To insert into the database we need ContentValues, so create a ContentValues object and put our values in it: cv.put with "COL_NAME" should store user.name, and cv.put with "COL_AGE" should store user.age. Now we need to insert these ContentValues with db.insert(), to which we pass 3 parameters: TABLE_NAME, null and the ContentValues. The second parameter is a string column name; we don't need any null column, so we pass null as the second parameter. This insert() returns the row ID; if the result is -1 then some error occurred, so we show a toast saying "Failed", otherwise a toast saying "Success". Open "MainActivity.kt" and set the onclick listener for "btn_insert": if the name and age lengths are > 0 then we insert the data into the database.
Otherwise we show an error toast. Pass the entered values into the constructor and store the result in a variable called "user", create the helper class object, and using it call insertData(), which we created earlier. Let's run the application. Entering the data and clicking on the "Insert" button, we get a toast saying "Success", which means the data was inserted successfully. In the next tutorial we will see how to retrieve, update and delete records from the database. That's all for this tutorial, guys.
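Android's SQLiteOpenHelper wraps plain SQLite, so to see just the SQL underneath the Kotlin walkthrough, here is the same create-and-insert flow in Python's built-in sqlite3 module. Table and column names mirror the tutorial; this is an illustration, not Android code:

```python
import sqlite3

conn = sqlite3.connect("users.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS Users (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT,
        age  INTEGER
    )
""")

# Equivalent of filling ContentValues and calling db.insert(): values are
# bound via placeholders, and the cursor exposes the new row id.
cur = conn.execute("INSERT INTO Users (name, age) VALUES (?, ?)", ("Alice", 30))
conn.commit()
print("Success" if cur.lastrowid else "Failed")
```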
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578689448.90/warc/CC-MAIN-20190425054210-20190425080210-00198.warc.gz
CC-MAIN-2019-18
3,985
22
http://equipmentauto.ru/updating-links-in-indesign-applescript-14771.html
code
Again, I need a script that will update parts of my PowerPoint presentation from data in my Excel workbook. Ideally this would work with both Office 2004 and Office 2008 for Mac. Any help would be appreciated, as I am pretty unfamiliar with coding AppleScript. You may set the "catalog link data" property to either "nothing" (to remove/deactivate an existing link) or to a similar list of values: key name (string) may be an explicit key name or an empty string if no key is required for the specified key type. (This value is essentially ignored if key type resolves to a value other than key from link.) key type (catalog key type enumeration) may be omitted (or "default") if either the specified DD is active and the field's DD entry provides a value via the L"x" qualifier, or the key name value is a valid non-empty string. Is there an AppleScript that exists to do this automatically? I have searched for a long time and have yet to find one. Only meaningful if "is tagged" is true; specifies whether straight quotes in tagged text should be converted to typographer's quotes. If setting the catalog link data of a text element, note that the link will be applied to only that text element. It simply takes forever to complete when there are 50 fields... On top of that, this has to be done multiple times per week. In the latter case, the key type defaults to key from link.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510853.25/warc/CC-MAIN-20181016155643-20181016181143-00454.warc.gz
CC-MAIN-2018-43
1,399
12
http://initiateinternational.com/the-16-rules-of-information-technology/
code
As IT recruitment professionals based in South Africa we enjoy leisurely strolling around IT-related subreddits to get an idea of what the global IT community is thinking. Today we came across "The 16 Rules of Information Technology" and thought we couldn't deprive you of something so cool. South African IT community: can you identify? *E: I wrote this with help desk folks in mind, but I feel like the more hands-on (willingly or not) admins among us will be able to relate to most of these. *E2: Hex, because hex.
The 16 Rules of Information Technology
0x00: Users lie.
0x01: Turn it off and back on. Especially if the user insists they have already done so.
0x02: If it's worth having, it's worth having a backup.
0x03: Never disassemble anything you can't reassemble from memory.
0x04: A problem does not officially exist until a ticket has been submitted.
0x05: Not until the most experienced person in the room says "oh, shit," is the issue an official "oh, shit."
0x06: There is no such thing as "extra" screws.
0x07: A quiet ticket queue is not always a good sign.
0x08: Nothing is, has ever been, or will ever be "user proof."
0x09: You never, ever want to know what the mysterious fluid is.
0x0A: Mrs. UPS and Mr. Screwdriver are not friends.
0x0B: If you can smell the magic smoke, you already done goofed up.
0x0C: "Working just fine" and "too screwed to log an error" look an awful lot alike.
0x0D: Loose wires will attempt to mate. When wires mate, things get messy.
0x0E: The Principle of Least Privilege is not a suggestion.
0x0F: Respect your sysadmin; they're the one who fixes your fixes.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526359.16/warc/CC-MAIN-20190719202605-20190719224605-00150.warc.gz
CC-MAIN-2019-30
1,651
20
https://it.toolbox.com/blogs/eroch/open-source-esb-110507
code
Here is a replay of Paul Fremantle's presentation on Open Source ESBs where he discusses the benefits of OSS ESBs (Mule, ServiceMix, Apache Synapse, WSO2) versus commercial ESBs. I have done much more work with the commercial ESBs but Paul makes valid points and the OSS ESB offerings have made great progress since I reviewed them in a blog post over a year ago. In the talk Paul will look at the capabilities and approach of Open Source ESBs, and argue that the Open Source approach is the best route to creating a long-term, robust and cost-effective Service Oriented Architecture. Paul will look at Open Source ESBs including Mule, ServiceMix, and Synapse, and explore the strengths and weaknesses of each approach, and compare to the offerings from the established vendors.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864343.37/warc/CC-MAIN-20180622030142-20180622050142-00470.warc.gz
CC-MAIN-2018-26
778
6
https://www.metabolic.nl/vacancies/data-science-fall-2019/
code
Data Science Internship Metabolic’s overarching mission is to transition the global economy to a fundamentally sustainable state. We advise governments, businesses, and NGOs on how to adapt to a fast-changing global context, while creating disruptive solutions that can dramatically shift how the economy functions. As a company committed to building a sustainable economy, we want to accelerate systems change and maximize our impact. About the Position As a data science intern, you work within the consulting team and receive as much responsibility as you are interested in taking on. The role is primarily focused on good research and data analysis using Python, but can stretch to include many other parts of consulting, or even other parts of the organization, depending on your personal development goals. Experience with GIS analysis is considered a plus. In your role, you will be focusing primarily on the following activities: conducting in-depth research and data analysis using Python; writing out summaries of conclusions or briefings; collecting important data.
- Great research skills
- Great communication skills
- Experienced in programming in Python and using standard data science libraries (i.e. pandas, numpy, matplotlib)
- The ability to digest large amounts of information
- A foundation of knowledge in sustainability, impact analysis, and systems thinking
- Intrinsic passion for solving global challenges
- A background in Industrial Ecology, Environmental Sciences, Data Science, Urban Geography, or related is preferable
- Familiarity with GIS or SQL is considered a plus
How to apply: To apply, please send an email to [email protected] with your CV and letter of motivation. The deadline for applications is Friday, September 27th, 2019, 23:59. The position has a starting date of November 1st, 2019.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574409.16/warc/CC-MAIN-20190921104758-20190921130758-00141.warc.gz
CC-MAIN-2019-39
1,833
20
http://blog.evolvingbits.com/blog/2012/04/02/thanks-notconf-for-a-great-jsconf-2012-pre-conference/
code
Mon Apr 02 2012 Arriving at PHX was flawless: the Twilio-sponsored shuttle was there to pick me up from the airport. After checking in, it was sunny and mid-70s as a few of us waited for a friendly volunteer-sponsored shuttle to whisk us over to NotConf in fine style. This one-day event went off without a hitch and was a lot of fun. Documentation -- but not too much (such as having two sets of docs, which makes it unclear which are the definitive ones), and using doc generation tools to make docs part of the process rather than manual. Make it easy to contribute to the (library) project; a lot of dependencies for running tests could be one barrier. My favorite demo was deployd.com, who had wired up an iRobot and a maze, and attendees could create a program that navigated the robot through it, complete with a leaderboard. This demonstrated their cool "instant backend" for software developers and tinkerers to easily create a RESTful backend with no code -- which in this case was then downloaded to a robot. Their business card says it all with "Join the Backend Liberation Movement". The closing event was of course outside, where we all wound down with beer, nice conversations, and a local musician who was playing for us in the courtyard. My pale Seattle skin even gained some color and was happy to absorb all that natural Vitamin D. Thanks to the organizers of NotConf.com -- it was an impressive one-day conference and a great start to JSConf 2012.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720737.84/warc/CC-MAIN-20161020183840-00133-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
1,454
9
https://discourse.vvvv.org/t/nuitrack-realsense-d455-not-always-working/19753
code
I’m having a problem getting any output from the Nuitrack nuget with an Intel RealSense D455 camera. I followed the installation instructions [here](https://github.com/vvvv/VL.Devices.NuiTrack/blob/master/README.md) and got the example working with the 32-bit installer and a trial license. But when I use the Nuitrack node in vvvv gamma I don’t get any visual result. On a different machine with the same vvvv version (2021.3.3) it works, though! What am I missing here?
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058589.72/warc/CC-MAIN-20210928002254-20210928032254-00678.warc.gz
CC-MAIN-2021-39
448
4
http://theschoolofblog.blogspot.com/2009/04/precious-moments.html
code
Student 1: [burps loudly] Student 1: What, I'm not allowed to burp? Me: No, it's a natural bodily function. Student 1: Yeah, like when I heard you fart in your office the other day! Student 2: Awkward! Me: Everybody farts. Rest of class: [breaking into chatter] Me: Okay! Anyone who's talking in five seconds is going to have the teacher fart on them.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863684.0/warc/CC-MAIN-20180520190018-20180520210018-00629.warc.gz
CC-MAIN-2018-22
351
8
https://yofreebie.com/automate-your-life-with-python-free-download/
code
In this asset we are sharing with you the Automate Your Life With Python free download links. Yofreebie.com was made to help people like graphic designers, freelancers, video creators, web developers and filmmakers who can't afford high-cost courses and other things. On our website you will find lots of premium assets for free, like free courses, Photoshop mockups, Lightroom presets, Photoshop actions, brushes & gradients, Videohive After Effects templates, fonts, LUTs, sounds, 3D models, plugins, and much more.
File Name: Automate Your Life With Python
Genre / Category: Programming
File Size: 4GB
Updated and Published: June 06, 2022
Think of the most boring task you've ever done. Chances are, you can automate it with Python! Python is the best programming language you can learn for automation. It's a simple yet powerful language that can help you automate your life. Welcome to Automate Your Life With Python! This is the most complete and project-oriented course. In this course, we're going to learn how to automate boring and repetitive tasks with Python. We'll automate everyday tasks. To name a few:
- File and folder operations
- Your morning news
- Text processing: automate TXT and CSV files
- Google Sheets
- Excel reporting
The best thing is that you don't need to be an expert in Python to do all of this. If you're an absolute beginner, you can watch the Python crash course included in this course, and if you already know Python, I'll introduce you to all the Python libraries used for automation before writing code. What makes this course different from the others, and why should you enroll?
- This is the most updated and complete automation course in Python
- This is the most project-based course you will find. We will automate repetitive tasks that you'd otherwise do manually
- You will have an in-depth, step-by-step guide on how to automate stuff with Python
- You will learn all the Python libraries used for automation
- 30-day money-back guarantee by Udemy
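In the spirit of the course's "automate TXT and CSV files" topic, here is a tiny self-contained example of the kind of task meant. The folder and file names are made up; this is not material from the course itself:

```python
import csv
from pathlib import Path

# Merge every CSV in ./reports into one combined file,
# assuming the files share the same header row.
rows = []
for path in sorted(Path("reports").glob("*.csv")):
    with path.open(newline="") as f:
        rows.extend(csv.DictReader(f))

if rows:
    with open("combined.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```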
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.96/warc/CC-MAIN-20230924223409-20230925013409-00664.warc.gz
CC-MAIN-2023-40
2,015
21
https://psychology.stackexchange.com/questions/3739/what-is-the-effect-of-studying-logic-or-mathematics-on-general-thinking-skills
code
I frequently hear comments from people to the effect that "Studies have shown that students who take (intro) logic courses don't show any signs of improvement in logical/rational/critical thinking." Yet, as an undergrad, I got the opposite impression: I felt that studying logic (and mathematics) in more depth sharpened those skills. My initial Google search on the issue didn't really turn up much. So, I have some questions regarding the psychology of logic and mathematics. Question: What studies have been done regarding the effects/benefits to cognitive thinking skills from taking intro logic courses? What about studying probability theory, or even mathematics more generally? How conclusive are they? Any references would be extremely helpful. I now teach intro logic as a grad student, and if there's something that students consistently miss out on, I would like to know about it so that I can (hopefully) make improvements. Also, any resources on the relation between mathematics and psychology more generally would be beneficial. (I posted this on MathSE, but was told this would be a better place for this question. If this needs retagging, let me know. I see that this question is related, but I'm particularly interested in intro formal logic courses.)
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510358.68/warc/CC-MAIN-20230928031105-20230928061105-00467.warc.gz
CC-MAIN-2023-40
1,282
5
https://www.cabletiesandmore.com/kraft-tape
code
Selection of water-activated and pressure sensitive tape.

Tape Logic® Reinforced Water Activated Tape
Bonds instantly to all corrugated carton surfaces and generates a bond.
5 Sub-Categories | As Low as $76.90

Tape Logic® Non-Reinforced Kraft Tape
Aggressively bonds to corrugated cartons, even in dusty environments.
2 Sub-Categories | As Low as $125.44

Tape Logic® Pre-Printed Reinforced Water Activated Tape
Pre-printed tape brings attention to security and shipping instructions.
3 Options | As Low as $195.49

Central® Reinforced Water Activated Tape
Designed for fast, permanent adhesion and superior strength.
5 Sub-Categories | As Low as $117.96

Pressure Sensitive Flatback Tape
Selection of pressure sensitive tape in various tensile strengths.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949387.98/warc/CC-MAIN-20230330194843-20230330224843-00240.warc.gz
CC-MAIN-2023-14
741
15
https://www.reportingstandard.com/wiki/index.php/DTSContainer
code
- 1 Description
- 2 How to create an instance of a DTSContainer
- 3 Usage
- 4 What's inside a DTSContainer object
- 4.1 All concepts defined
- 4.2 All role types
- 4.3 All arcrole types
- 4.4 All relationships
- 4.5 All resources defined on any extended link container
- 4.6 The root URLs
- 4.7 All XBRL Documents added to the DTS
- 4.8 All languages
- 4.9 All Resources
- 4.10 All XBRL Documents loaded in the DTS
- 4.11 Registered processors
- 4.12 Etc...
- 5 Navigation

Description
A DTSContainer object is the placeholder for all other objects in the DTS.

How to create an instance of a DTSContainer
There are four ways:
- By a call to the DTSContainer.newEmptyContainer() method [javadoc link]. This is the most commonly used way.
- By a call to the DTSContainer.newEmptyContainer(java.util.Properties) method [javadoc link]. This is used in special cases where the processor requires special initialization. Two properties exist and are documented as constants in the DTSContainer object. Documentation in this wiki will come later.
- By a call to the DTSContainer.newCompatibleContainer(net.sf.saxon.s9api.Processor, java.util.Properties) method [javadoc link]. This is a special case that allows the creation of a DTSContainer reusing the Saxon processor from the application or from another DTSContainer object. The properties parameter may be null or may be the properties of another DTSContainer.
- By a call to the mergeDTSs(com.ihr.xbrl.om.DTSContainer) method [javadoc link]. This is another special case that allows the user to merge an array of DTSContainers into a single DTSContainer. Note: the resulting DTSContainer may not be valid if the DTSs to merge contain incompatible elements, such as duplicate definitions of the same role type.

Usage
Inside an application the user is free to create as many DTSContainer objects as needed. The content of each DTSContainer is completely isolated from the others. Changes in one DTSContainer will not affect other instances of a DTSContainer. Each DTSContainer object contains a new instance of the Saxonica processor. It may be interesting in some applications that all DTSContainers created inside the application be compatible with each other (this is, for example, the case when there are multiple different DTSs for which XhBtRmL templates exist inside the same Transformation Processor).

What's inside a DTSContainer object
The DTSContainer object contains useful information required to work with the DTS. All the information is collected during the DTS Discovery Process (implemented in any of the four overloaded load methods) or during the different validation processes. The API automatically collects the information for you. The following is a list of the information collected during the DTS discovery process:

All concepts defined
This is all XBRL concepts in all discovered XBRL Taxonomies. The user can access a concept using the concept QName by setting the QName namespace to the namespace of the XBRLTaxonomy and the QName local part to the element name as it is defined in the taxonomy, and then calling the [getConcept(QName)] method. The returned object is an XMLFragment that can be cast to an XBRLItem or an XBRLTuple. Concepts in the DTS can be accessed sequentially by retrieving a concepts iterator [getConcepts()].

All role types
This includes static role types defined in the XBRL 2.1 specification, which do not require any definition in a taxonomy, and all other role types defined in taxonomy schemas in the DTS. The roles can be accessed by the role URI or sequentially by retrieving a role type iterator.

All arcrole types
This includes static arcrole types defined in the XBRL 2.1 specification, which do not require any definition in a taxonomy, and all other arcrole types defined in taxonomy schemas in the DTS.

All relationships
They are organized by DTSBase.

All resources defined on any extended link container
Resources can be accessed sequentially or by providing the elements that make the resource unique.

The root URLs
They are the URLs used to start the DTS discovery process.

All XBRL Documents added to the DTS
This includes all taxonomy schemas, linkbases, extended links and external documents, in case there were generic linkbases pointing to any external documents.

All languages
During the process of reading labels in label extended link containers, the DTSContainer object updates a list of the different languages used.

All Resources
They are all resources defined in all XBRL Linkbases in the DTS. This includes but is not limited to Label resources and Reference resources. Starting with release 2.6.8, some Formula resources are recognized and transformed into specific formula-related resources. The implementation of the XBRL Formula specification is planned to be finished for the 2.7 release. All specific resource objects (Label resources, Reference resources, Formula resources, etc.) are derived classes of the XBRLResource class. This means that all properties of a generic resource, like attributes, value and children elements, are always available for a resource. Specific resource objects facilitate validation and integration with other objects in the API.

All XBRL Documents loaded in the DTS
They are all XBRL Taxonomies and XBRL Linkbases, indexed by the document's absolute URI. The user can access any of the documents by its key value, or he can retrieve an iterator over all objects of a specific type.

Registered processors
This is the set of registered processors that perform XBRL Validation. Once a processor is instantiated for a DTS it is registered for that DTS, so it is always reused and not created again. A processor can be obtained by its name. The processor name is a static property of all XBRLPlugInProcessors.

Etc...
There is much more information that can be obtained from the DTSContainer object. We will continue including more information about that content later.
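As a quick orientation, here is a hedged sketch of how these pieces fit together in code. Only the names mentioned on this page (newEmptyContainer, the overloaded load methods, getConcept, the com.ihr.xbrl.om package) come from the text above; the exact load signature, the QName class, and the namespace/element values are illustrative assumptions, so check the javadoc before copying:

// Illustrative sketch only -- verify signatures against the javadoc.
import javax.xml.namespace.QName;      // assumed to be the QName type used
import com.ihr.xbrl.om.DTSContainer;   // package name as shown above

public class DtsExample {
    public static void main(String[] args) throws Exception {
        // The most commonly used factory method, per the text above.
        DTSContainer dts = DTSContainer.newEmptyContainer();

        // DTS discovery starts from a root URL via one of the four
        // overloaded load methods (this exact signature is an assumption).
        dts.load("http://example.com/taxonomy/entry-point.xsd");

        // Access a concept: namespace of the taxonomy plus the element
        // name as defined in the schema (values here are hypothetical).
        QName assets = new QName("http://example.com/taxonomy", "Assets");
        Object concept = dts.getConcept(assets); // an XMLFragment, castable
                                                 // to XBRLItem or XBRLTuple
        System.out.println(concept);
    }
}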
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526517.67/warc/CC-MAIN-20190720132039-20190720154039-00504.warc.gz
CC-MAIN-2019-30
5,756
51
https://urbanjack.wordpress.com/2014/01/05/howto-use-wd-mybook-external-drive-with-raspberry-pi/
code
HowTo: Use WD myBook external Drive with Raspberry Pi
If you want to mount your WD MyBook HDD (external drive) on the Raspberry Pi long term, you first need to install ntfs-3g. After that you need to create a folder to mount the drive... so type:
sudo apt-get install ntfs-3g
sudo mkdir /media/myBook
OK, 50% is already done. Next you have to check what the device is called. So disconnect your drive and call "ls /dev/". Then connect the drive and run "ls /dev/" again. Compare both outputs. On the second call a device starting with "sd*" should appear. Mine is called "sda", but "sdb" could also be possible. Now we will add a line to /etc/fstab to automatically mount this drive.
sudo nano /etc/fstab
add this line (note the option is "defaults", not "default"):
/dev/sda1 /media/myBook/ ntfs-3g defaults 0 0
By pressing ctrl+o you can save the file. And to leave nano press ctrl+x. To refresh the system and mount the drive call:
sudo mount -a
now go to /media/myBook/ and check if it's working 🙂
Another example for FAT32:
/dev/sda1 /media/myBook/ vfat defaults 0 2
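A small aside that isn't in the original post: instead of diffing two "ls /dev/" runs, the standard blkid tool lists every attached partition with its device name and filesystem type in one command:

sudo blkid
# example output line: /dev/sda1: LABEL="MyBook" TYPE="ntfs"

That also tells you whether the ntfs-3g or the vfat fstab line above applies to your drive.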
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509690.35/warc/CC-MAIN-20181015184452-20181015205952-00263.warc.gz
CC-MAIN-2018-43
1,063
14
https://hashnuke.com/cidr
code
One of these days I found the need to create a VPC on AWS. "CIDR", says an input box. Somewhere on the page is this:

An internet search and 10 minutes later, it all feels like bread-butter-jam. So what does "CIDR" mean?

IPv4 is a 32-bit addressing system. 32 not because of any scientific reason. Just that a bunch of people thought that would be sufficient to assign unique IP addresses to every computer in the world. Maybe they foresaw that manufacturers would put a computer in everything to make it smarter. Phone, camera, lock, etc.

The entire IPv4 address can be represented in 32 bits, and it is grouped into four sets of 8 bits separated by dots. Caution: binary representation coming up. May cause nausea. 00000100.00000010.00000010.00000001 (yes, this is binary representation; it is 4.2.2.1 in decimal). Aptly called the "quad-dotted notation" (four groups of bits). Each group of 8 bits is an octet. So we have 4 octets. Dots are for readability.

In 8 bits, the minimum you can represent is 00000000, which is 0. To check the maximum we can represent in 8 bits, flip all the bits to 1. And you get 11111111, which is the binary representation for 255. No magic, I used WolframAlpha. If you want to go one number further to 256, you'll have to increase the number of bits from 8 to 9 bits. So 255 is the maximum that can be represented in 8 bits.

Going by what we learnt previously, the maximum IPv4 address we can have is 255.255.255.255. If x is the number of bits we use to represent our address, 2^x is the number of unique IP addresses we can have. For an IPv4 address, that is 2^32. That's the total number of IPv4 addresses that can exist. (The only use-case for knowing the result of the calculation is to satisfy curiosity. Moving on.)

We'll find ourselves having to represent ranges of IP addresses, like 4.2.2.0 to 4.2.2.255. These ranges are called Subnets. This kind of grouping is called "subnetting". CIDR notation makes it easier to represent such ranges. The above-mentioned range, for example, can be written as 4.2.2.0/24.

24 bits is the length of the Network Prefix. Meaning the first 24 bits should remain the same and everything else can change. 24 bits is three octets. In the above IP address, that is 4.2.2. Which means the last octet of bits can represent anything from 0 to 255. That is 256 IP addresses in the range.

16 is two octets. Which means the first 16 bits are the Network Prefix and everything else can change. For a prefix of, say, 4.2, the range specified includes 4.2.0.0 through 4.2.255.255. For the sake of saving space, characters and time, that can simply be written as 4.2.0.0/16. To calculate the number of IP addresses in this subnet use 2^x, where

x = totalAddressSizeInBits - size(NetworkPrefixInBits)
x = 32 - 16
x = 16

Number of IP addresses would be 2^16.

172.31.32.0/20 is an odd one. 20 is not a multiple of 8. You have some Sherlocky thoughts and figure out that the 20th bit is somewhere in the 3rd octet. We know that the first two octets, 172.31, will remain the same, but for the 3rd octet, we'll calculate in binary. 32 in binary is 100000. That is just 6 bits. We'll prefix two zeroes to represent it in 8 bits (the value still remains 32): 00100000. Everything until the 20th bit is prefix and everything else after may change. The minimum value that can be represented using the prefix is already given to us. It's 172.31.32.0. Let's see what maximum number can be represented by keeping the prefix the same. Flip the rest of the bits to the maximum value '1': 00101111. That is the binary value of 47 (again, the internet helped me). So our subnet's starting IP is 172.31.32.0 and its ending IP is 172.31.47.255. The total number of IP addresses in this subnet is 2^12. That is 4096 IPs.

That is CIDR notation.
- There are a number of CIDR calculators available online.
- To understand why it was called "CIDR", it'll serve us well if we went on a history tour and read about what it replaced - "Classful Networks".
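If you'd rather let code do the bit-flipping, Python's standard-library ipaddress module (my addition here, not something from the original post) reproduces the /20 worked example:

import ipaddress

# The 172.31.32.0/20 example worked through above.
net = ipaddress.ip_network("172.31.32.0/20")
print(net.network_address)    # 172.31.32.0   (starting IP)
print(net.broadcast_address)  # 172.31.47.255 (ending IP)
print(net.num_addresses)      # 4096 == 2**12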
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00017.warc.gz
CC-MAIN-2021-25
3,757
43
http://webapps.stackexchange.com/questions/tagged/amazon+osx
code
How do I delete documents from the Amazon cloud?
I used the Mac Amazon 'send to Kindle' application to send a few text files to my iPad. This works, except there's no way to delete files. The iPad's Kindle app's delete button only deletes files ...
Jun 1 '12 at 14:07
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869720.10/warc/CC-MAIN-20140722025749-00096-ip-10-33-131-23.ec2.internal.warc.gz
CC-MAIN-2014-23
2,141
53
http://stackoverflow.com/questions/1923986/websphere-6-1-jaas-logout/1924660
code
I have a web application on WAS 6.1 using JAAS, already working. It authenticates and authorizes in an orderly manner. But my logout page is not deauthorizing the principal. This application works correctly on JBoss and on Glassfish, but not on WAS. My logout page is just a simple JSP with this content:
<%
System.out.println("principal is not null:" + (null != request.getUserPrincipal()));
if (null != request.getSession(false))
    request.getSession(false).invalidate();
%><jsp:include page="/index.html" />
Am I missing something? I would prefer not to use any WebSphere-specific API, but if it is absolutely needed I will.
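For reference: invalidating the session does not clear WebSphere's LTPA single-sign-on cookie, which is why the principal survives on WAS while JBoss and Glassfish behave as expected. The commonly cited WAS-specific escape hatch is WSSecurityHelper.revokeSSOCookies; treat the following as a hedged sketch and confirm the class and method against your WAS 6.1 javadoc before relying on them:

<%@ page import="com.ibm.websphere.security.WSSecurityHelper" %>
<%
    // Drop the HTTP session first, as in the original page.
    if (request.getSession(false) != null) {
        request.getSession(false).invalidate();
    }
    // WebSphere-specific call (verify in your javadoc): expires the
    // LTPA SSO cookies so the principal is actually deauthorized.
    WSSecurityHelper.revokeSSOCookies(request, response);
%>
<jsp:include page="/index.html" />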
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131310006.38/warc/CC-MAIN-20150323172150-00220-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
619
4
https://www.microsoft.com/en-us/garage/blog/2022/01/forest-guard-microsoft-student-hackathon-winners-create-rapid-response-deforestation-sensor/
code
Microsoft Student Hackathon Winner, Forest Guard, is an IoT and machine learning tool for illegal logging and forest fire detection. The 2021 Microsoft Student Hackathon – hosted by The Microsoft Garage and run concurrently with the Microsoft Global Hackathon this year – offered Microsoft summer intern 2021 alumni and their friends a chance to hack real-world challenges in a structured environment, complete with coaching and access to Microsoft technologies to help them realize their ideas. Students from 22 countries came together virtually in October to hack solutions for Sustainability, Society, Education, and Ability. “Students and interns have a long history of enthusiastic participation in the Global Hackathon, using the opportunity to connect to Microsoft people, technology, and problems that they care about,” said Steve Scallen, Senior Director of University Engagement at the Microsoft Garage. “Keeping students engaged with our technology ensures that new waves of professionals are ready to build unique and impactful solutions for everyone. We find that students and schools value the opportunity to apply the fundamental skills acquired in their programs to important and relevant real-world problems.” This year’s Student Hackathon Grand Prize Winners completed their internships in September and chose to extend their Microsoft experience by forming a hackathon team while completing their final year of academic studies. “We wanted to create an opportunity after their Microsoft internship for the students to stay connected to Microsoft and use the skills and knowledge they acquired in a new project and challenge that reflected their own passion,” Scallen said. “We were also pleased to see how many Intern alumni took advantage of the opportunity to invite their friends and share their own Microsoft experience.” The winning team Gloria Keya, David Lutta, Christine Wanjau, Audrey Njenga Meet Gloria Keya, David Lutta, Christine Wanjau, and Audrey Njenga – the creative force behind Forest Guard, this year’s Grand Prize-Winning project. They chose the Hack for Earth challenge due to a shared passion for sustainability, and developed a new way to detect illegal logging powered by Microsoft Azure and IoT technologies. The team are all Computer Science majors at universities in East Africa – Gloria and Christine at the University of Nairobi in Kenya (UoN), Audrey at the African Leadership University based in Rwanda, and David at the Jomo Kenyatta University of Agriculture and Technology, also in Nairobi. Microsoft has maintained a large and growing footprint in Africa since the 1990s, and established one of two Africa Development Centers in Nairobi in 2019. “The Forest Guard team distinguished themselves with their passion for an important local sustainability challenge, their effectiveness in leveraging a portfolio of Microsoft technologies, clarity of how their solution could be impactful, and their commitment to utilizing their individual areas of expertise,” Scallen said. “They were all summer interns at Microsoft, and they all have offers to come back, which they’ve accepted. We are very excited they have chosen to start their professional careers at Microsoft.” David started hacking back in high school, winning 3rd place in an international hackathon held in Romania. 
Both Gloria and Audrey also started in high school, joining an all-girls’ hackathon called Technovation, which sparked Gloria’s enduring interest in computer science, and inspired Audrey to organize her school’s first hackathon, Hack4Climate. Christine discovered hacking just as the COVID pandemic hit, working on the Digital Matatus transit project, a collaboration between UoN, MIT, Columbia University, and Groupshot. They learned about the student hackathon opportunity during the internship program, and David suggested they form a team. “We all had some interest in the Hack for Earth Challenge as well as IoT,” Audrey said. “After brainstorming, we settled on creating a solution to illegal logging. Climate change is an unfortunate reality, and deforestation is one of the contributors to it. We wanted to solve a problem that was widespread and relevant here in Kenya.” Over the course of just a week, they conceived, built, and tested Forest Guard – a real-time on-site deforestation sensor and alert system that detects and reports dangerous or illegal activity in protected forests. Forest protection is an issue the whole team takes both seriously and personally. David’s family lives in an area of Kenya not only stricken by unregulated forestry, but also by animal poaching and climate change, and he hopes Forest Guard will be a part of the solution. “Illegal logging is just one of many problems facing Kenya’s forests,” he said. Gloria said she sees Hack for Earth as a “noble cause,” and noted the particular danger to indigenous forest cover, whose rapid disappearance from the landscape remains largely unmonitored. “It is alarming the rate at which we are losing tree cover here in Kenya. Over the next couple of years, we will be beyond the point of recovery.”

The solution and technology

In places where we can’t always have eyes, Forest Guard gives us ears – and thanks to Microsoft technologies like Azure Cognitive Services, IoT Hub, and Power BI, it even has a brain to make decisions based on what the unit overhears, create readable images called spectrograms, and alert appropriate human support. Environmentalists, engineers, and scientists have tried to solve this problem before, but many current methods rely on satellite data to remote-sense loss of forest cover over time. Forest Guard puts the sensor right on the ground and can be trained to recognize hallmarks of illegal activity. As Christine explains, “Forest Guard can be used where illegal logging and forest fires are common. Forest Guard will enhance the response time for such situations.”

Forest Guard uses sensors to generate a spectrogram of forest sounds, which is fed into an Azure machine learning model for detection

Sounds are translated into spectrograms, each with a readable signature that can detect the presence of chainsaws and heavy machinery, and even telltale signs of forest fire. When concerning noise signatures are detected, the Forest Guard unit sends a signal to appropriate authorities.
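The article doesn't include the team's code, but the spectrogram step just described is easy to illustrate. The short Python sketch below is purely illustrative (scipy and the synthetic audio are stand-ins, not Forest Guard's actual pipeline): it turns a signal into the frequency-by-time power matrix whose "signature" a classifier would inspect:

import numpy as np
from scipy import signal

fs = 16_000  # assumed microphone sample rate, in Hz
t = np.arange(0, 2.0, 1 / fs)
# Stand-in for a field recording: a 1 kHz tone plus background noise.
audio = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.random.randn(t.size)

# f: frequency bins, tt: time bins, Sxx: power at each (f, tt) cell --
# the "readable signature" where chainsaw-like energy bands would show up.
f, tt, Sxx = signal.spectrogram(audio, fs=fs, nperseg=1024)
print(Sxx.shape)  # (number of frequency bins, number of time bins)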
“We know in Kenya parks are using sensors for anti-poaching and camera traps, and it would be good to also see the Forest Guard included (to protect the natural resources as well as the wildlife),” Audrey said. “Many forests are understaffed, especially in developing countries. Forest Guard could help automate the monitoring of illegal logging in forests by using IoT and machine learning technology.”

Diagram of how sound and other sensor information like temperature is fed through the Forest Guard system using IoT and Azure Cognitive Services

David said access to tools and documentation allowed them to overcome the learning curve that comes with adopting any new technology. “Microsoft technologies played a huge role in our project and without them I highly doubt we would have been able to complete Forest Guard. We were able to get Azure credits to use during the hackathon which proved to be really helpful as we were able to use Azure Cognitive Services to detect illegal logging,” he said. “We used Visual Studio Code as our primary coding environment, and technologies such as Azure Cognitive Services, IoT Hub, and Power BI were really helpful and so simple to integrate into our work.” Audrey was gratified to see that with a little bit of research and education, they could produce real results in a short span of time – a necessary element of a successful hackathon project considering the team only had a week to take Forest Guard from concept to a working prototype. “It was fantastic to see that we could implement very complex technologies in a few steps.” This was the first time hacking virtually for most of the team, but they said they were well-prepared by the preceding internship, which due to COVID was also conducted virtually this year. “Hacking virtually was new but we were able to make use of collaboration software such as Microsoft Teams,” Christine said. “We organized for our meetings to be in the evening since we all had classes during the day.” Gloria added that the team dynamic and load-sharing were also key to their success. “Everyone played their part well and we were all willing to help one another whenever we got stuck. We determined roles based on everyone’s prior experience with various technologies. For the technologies that none of us was familiar with, we would all research them and present our findings during our daily standup meetings.” All four winners said they plan to stay in touch and to continue their work on Forest Guard after graduation and even into their careers at Microsoft. “We’d really like to see this become a real product,” Audrey said. “As we wrap up our final year of studies, we are conducting further research on illegal logging and refining Forest Guard, and then we will fully dive into it post-graduation.” They will have plenty of opportunity to continue hacking in their new roles. Soon after the Forest Guard team begins work as full-time software engineers at Microsoft Nairobi in 2022, The Garage will be opening a new location right on the Nairobi campus so they can continue to hack on what matters most to them using the very technologies they will be developing at Microsoft. The Garage – with 12 physical locations in major cities around the world and more coming online soon – is a central driver of hack culture at Microsoft, delivering programs and experiences to employees all year round in the form of talks, workshops, and coaching, all leading up to the annual Microsoft Global Hackathon – the largest private event of its kind in the world. Microsoft and The Garage thank the Forest Guard team for their ingenious use of Microsoft technologies to address a global issue that affects us all. We can’t wait to see what comes next for Forest Guard and from the creative minds of our four new software engineers.
“It’s inspiring and heartwarming to see how students so strongly relate to our culture of Diversity and Inclusion, Making a Difference, Customer Obsession, One Microsoft, and Growth Mindset,” Scallen said. “Their unique and fresh perspectives – using their own lived experiences at their own schools and in their own communities – fuel passion for making a difference in the lives of people around the world.”
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00029.warc.gz
CC-MAIN-2023-06
10,703
29
http://forum.xda-developers.com/showthread.php?s=24e88e4f1ca03747e6eeed8cefd40778&p=47173359
code
Originally Posted by krazierokz
Sounds like you're going to need to fix the port or get a battery pack charger... it's best to get it fixed, since without that port there's no Odin...
I tried to fix the port. Now it is charging the phone. However, the USB connection to the PC is not working. Also, for some reason, the navigation buttons at the bottom are not working. I have now enabled the soft navigation keys. I am trying to see how I could fix all of the issues.
Samsung Galaxy S - Vibrant
Kernel: Stock CM
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163992191/warc/CC-MAIN-20131204133312-00009-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
499
5
https://support.systemweaver.se/en/support/solutions/articles/31000148113-starting-and-stopping-the-systemweaver-monitor-service
code
Once you have finished installing the SystemWeaver Monitor Service, you are ready to start it.
Starting the Service
1. Go to the Windows Start menu and type 'Services'.
2. Select the Services Desktop app. The Services window will display.
3. Scroll down to find the SystemWeaver Monitor Service entry and select it.
4. Right-click on the service and select Start. The Service Control pop-up will display the progress of the start operation. When started, the Status will display 'Running'.
5. Confirm that the servers configured to run via the service are running by viewing them on the Task Manager Details tab.
Example of Monitor Service running the SystemWeaver server and the Notification server
Stopping the Service
To stop the monitor service, follow the same steps outlined above, except in Step 4, select Stop. When completely stopped, the Status will no longer display 'Running'.
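If you prefer a command line to the Services app, PowerShell's built-in service cmdlets do the same job. The service name below is an assumption -- copy the exact 'Service name' value from the service's Properties dialog before using it:

# Service name is assumed; substitute the real one from the Properties dialog.
Start-Service -Name "SystemWeaverMonitorService"
Get-Service  -Name "SystemWeaverMonitorService"   # Status should show 'Running'
Stop-Service -Name "SystemWeaverMonitorService"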
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300573.3/warc/CC-MAIN-20220129062503-20220129092503-00359.warc.gz
CC-MAIN-2022-05
877
11