url: string (lengths 13 to 4.35k)
tag: string (1 value)
text: string (lengths 109 to 628k)
file_path: string (lengths 109 to 155)
dump: string (96 values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
https://www.computerworld.com.au/article/100947/give_us_day_our_daily_tip/
code
SAN FRANCISCO (03/20/2000) - Remember when you first installed Microsoft Corp.'s Windows 95 and that annoying "Welcome to Windows 95" message appeared in the middle of the screen every time you rebooted? Remember how quickly you unchecked Show this Welcome Screen next time you start Windows? (Windows 95 users can relive the moment: Choose Start*Run, type welcome, and press Enter.) Fortunately, Microsoft has found a less obtrusive way to give you an occasional message. All you need is Windows 98 SE, Windows 2000, or Internet Explorer 5. (If you're a Windows 95 user who installed IE 5 without first installing IE 4's Desktop Update, the following will work only when browsing Web pages.) In any folder, browser, or Explorer window, choose View*Explorer Bar*Tip of the Day. A small pane opens at the bottom of the window containing a tip. Click the Next Tip link if you want to see more. Or drag the bar separating the tip from the rest of the window to adjust its size. The tip will stay visible until you close the window. If you display the tip pane in a browser window, it should reappear the next time you launch Internet Explorer (with Windows Explorer or folder windows, you have to manually display the tip pane each time). Unfortunately, many of the tips apply only to IE 5. But the possibilities for that space are endless. If you'd like to see a tip about some other aspect of Windows, or view a periodic reminder to back up your files, or even use the tip pane to send messages to other family members who use the computer, you can replace or add to the list of tips that cycle through. The trick lies in editing the HTML file.

Custom Tip Lists
Here's what to do. First, use Explorer to locate the Tip.htm file in the Web folder in your Windows folder. If extensions are hidden, the name appears simply as "Tip." Copy this file to another folder as a backup, so you can restore it to the original folder if anything goes wrong. Next, choose Start*Run, type notepad c:\windows\web\tip.htm (your path may differ), and press Enter. To add a tip or message, scroll almost to the bottom of the file until you see lines that begin <div ID="Tip73">, <div ID="Tip74">, and so forth. Click at the end of the last of these paragraphs (following the text </div>) and press Enter a couple of times to separate your addition from the existing text. To introduce your tip, type <div ID="Tip75" Style="display:none;"> (or simply copy this header line from any of the tips in the file and paste it where you added the carriage returns). Just remember to adjust the tip number so that it sequentially follows the last tip's ("Tip75" if your last tip was "Tip74" and so on). Type your message, or copy and paste a short tip from another source, such as PC World Online. It's best to limit text to one paragraph so it fits in the tip pane. (But if you want to enlarge the window, type <p> at the end of each paragraph, or <p><p> to double-space between paragraphs.) Complete the tip by typing </div> at the end of the last paragraph. Add as many tips as you like; just remember to number them consecutively, following the example of the original tips in the file. Finally, look for the line that begins 'var nTips=' near the bottom of the file (about 20 lines from the end). Edit the number to the right of the equal sign so that it represents the total number of tips, including Tip0. So, if you have tips numbered from 0 to 75, this line should read var nTips=76;. When you're done, save the file.
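Put together, the kind of addition described above might look like the following sketch; the tip number, the message text, and the total count are examples only and should be matched to whatever is already in your own copy of Tip.htm:

<!-- hypothetical new entry, added after the last built-in tip -->
<div ID="Tip75" Style="display:none;">
Back up the My Documents folder every Friday.<p>
Reminders like this one can be edited or replaced in Tip.htm.
</div>

<!-- about 20 lines from the end of the file; count every tip, including Tip0 -->
var nTips=76;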
Now click in the tips pane at the bottom of your browser or folder window and press F5 to refresh the content. You should see your additions, or at least be able to get to them by clicking Next tip. Not satisfied? You can also change the "Did you know..." text at the top of each tip--just reopen Tip.htm in Notepad and search for that phrase. Edit as you please, but be sure you don't change any of the surrounding code. To apply bold type, add <b> at the beginning and </b> at the end of the phrase you want to emphasize. To customize the graphic that appears next to the tips, replace the Tips.gif in the Windows\Web folder with any (preferably small) GIF image; just rename the original (to something like Tips.bak.gif), then name the new file Tips.gif (or just plain Tips if extensions are hidden). Windows will resize your picture to fit the allotted space. To change the display size of the picture, look for the phrases width="27" and height="36" in the Tip.htm file, and replace the values of the width and height in pixels.

Open One File Type in Many Applications
Windows recognizes a file's type by its file extension--usually three characters, which Windows 2000 hides by default--and will associate each type with one application. This makes it easy to open a file in a preferred application--simply double-click the file icon. But what if you want to open a file in multiple apps? At different times, for example, you may open a GIF file in an image-editing application like Adobe Photoshop, a Web-based animation application, or a quick-and-simple file viewer. In Windows 9x, this was doable but not very easy. See Windows Tips, July (www.pcworld.com/jul99/hh_windows), as well as this month's "Windows Toolbox" for a software solution. Fortunately, in Windows 2000, the capability has improved. Make your menu: In Windows 2000, right-click any file icon and choose Open With. The first time you do this for an unassociated file type, you'll be presented with an Open With dialog box listing several installed applications. Select one from the list or click Other. Select the application file from your hard disk, and click Open. Then click OK. The next time you go through this routine, you may still have to display the Open With dialog box and select another application. But once you've opened a file type in two or more applications, you'll get an Open With submenu that lists all the apps you've used to open this particular type of file so far. If the application you want is not on the Open With submenu, click Choose Program and go through the above routine again to add it to the list. The Open With submenu always shows the default program at the top; then items are sorted by date of use, with the most recent application first. Change default associations: If you want to change the default application for a file type--the one that launches when you double-click a file icon--you can use the same technique. Right-click a file and choose Open With*Choose Program (even if you see the application you want on the submenu). Select an application from the list, and this time check the box labeled Always use this program to open these files. Then click OK. Alternatively, you can right-click any file of that type and choose Properties (or select it and press Alt-Enter, or even Alt-double-click). In the Properties sheet, click the Change button next to the Open With line. Select the application, and click OK two times.
Now, the hard part: If you want to rename or remove an item from the Open With menu or Open With dialog box, you'll have to edit the Windows Registry. Since this is risky, you should take the following steps to back up the Registry first: Choose Start*Programs*Accessories*System Tools*Backup. In the Backup window, choose Tools*Create an Emergency Repair Disk. When prompted, insert a floppy disk and check Also backup the registry to the repair directory. Click OK. If you run into trouble, these tools may or may not help. If they don't, you might have to reinstall Windows--so proceed at your own risk. Remove a menu item: If you add an item to your Open With menu and later decide to remove it, try this: Choose Start*Run, type regedit and press Enter to start the Registry Editor. Navigate down the tree on the left until you come to the branch HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts. If necessary, double-click the FileExts folder (called a key) to expand the branches beneath. Find the file extension for the menu that you want to edit, and double-click it. Then select the OpenWithList key under the extension key. In the pane on the right, select the icon corresponding to the menu item that you want to remove. (You won't see the menu item by name, but rather the name of the file that the menu item launches.) Press Delete and click Yes to confirm. Undo an accidental dialog box addition: If you accidentally added a nonexecutable file (such as a data file, which can't open any other files) to your Open With menu, the previous tip will remove it from your Open With menu. To remove it from the Open With dialog box as well, go to the Registry Editor, and navigate to the key HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Applications. Double-click Applications, and underneath, locate the key that corresponds to your bogus application (the nonexecutable item you added to the dialog by mistake). Select the key and press Delete. Click Yes to confirm. Rename menu and dialog box items: If you navigate through the menu using keyboard shortcuts, renaming the items in your Open With menu and Open With dialog box can ease the process. For example, Shift-F10 displays the context (right-click) menu for selected icons; pressing H afterward displays the Open With menu. From there, just press the first letter of the menu item you want. But if more than one item starts with that letter, you may have to press it multiple times, followed by Enter. By renaming the menu items that begin with the same letter, you'll be able to access them faster. Here's what to do: In the Registry Editor, navigate to the HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Applications key. Double-click Applications, and then double-click the application name whose menu you want to modify. Under that key, select the Shell key. In the right pane, double-click the Friendly Cache icon. In the Edit String dialog box, edit the text in the Value data box--this is the text that will appear on the Open With menu and Open With dialog box. If you want to give the menu name a unique keyboard shortcut, put an ampersand (&) in front of one of the letters in the name. Be sure that no other menu items begin with the letter you pick. Click OK and exit the Registry. The next time you open this menu using the keyboard method, you'll see an underscore marking the letter you designated; press it to launch that application. This underscore doesn't appear if you use the mouse to display the menu. 
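For the renaming step, the same change can be captured in a Registry Editor (.reg) file along the following lines. The application key (notepad.exe), the caption, and the FriendlyCache value name follow the description above and common Windows 2000 setups, so treat them as assumptions and verify the exact key and value names in your own Registry before merging:

Windows Registry Editor Version 5.00

; Hypothetical example: caption Notepad's Open With entry as "&Notepad"
; so that pressing N selects it when the menu is opened from the keyboard.
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Applications\notepad.exe\shell]
"FriendlyCache"="&Notepad"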
Browse Your Folders
If you run Windows 95 and installed Internet Explorer 4.x without the Desktop Update feature, you're missing out on some of the embellishments IE 4.x gives to Windows 98 users. But you can still get an enhanced file-management tool by using IE to browse your hard disk; it includes a Favorites menu or panel, forward and back navigation buttons, and a Links toolbar. To make IE search your hard disk the way Explorer does, see "A Better Way to Explore Your Hard Drives" (www.pcworld.com/jun99/windowstips). But if you want to open individual folders in a browser on the fly, you need another approach. You can customize Windows so that when you right-click a folder, you have the option of opening it in an Internet Explorer window. Here's what to do: First, double-click My Computer. In the My Computer window, choose View*Options. Click the File Types tab, and in the list, select Folder (not File Folder). Click the Edit button. At the bottom of the Edit File Type dialog box, click New to create a new action for the right-click menu. In the New Action dialog box, type Explore in Browser (or whatever you'd like to name your menu command) in the top box. In the bottom box, type "C:\Program Files\Internet Explorer\iexplore.exe" -e (adjust the path as needed to reflect the location of your browser). Then click OK. Now, your right-click menu has a command to open a selected folder in a two-pane Internet Explorer window with the folder tree on the left. To add a command that opens a folder in a single-pane browser window, do this: Click New once again to create another action. In the New Action dialog box, type something like Open in Browser in the top box and "C:\Program Files\Internet Explorer\iexplore.exe" in the bottom box (again, your path may vary). Then click Close twice to close the remaining dialog boxes. The next time you right-click a folder, you'll have two new menu commands that let you open that folder in your Internet Explorer browser.

Print Selectively With Internet Explorer 5
In January's issue I explained how you could use Microsoft FrontPage or FrontPage Express to simulate a print preview for Web pages when surfing with IE 5 ("Save Paper When Printing From IE 5," www.pcworld.com/jan00/hh_windows). But reader Eli Winkler has pointed out a better way to save paper when printing a Web page. In IE 5, select the portion of the Web page you want to print (that is, drag the cursor over the area to highlight it). Then choose File*Print. In the 'Print range' section, choose Selection to print only the selected material. Then click OK. You'll get only the portion you selected.

Make Icons Disappear
In the April issue, I told you how to use a desktop toolbar to take the place of all the icons cluttering your screen. But I was wrong when I said that Windows 2000 didn't let you do this all on its own ("The Clean and Efficient Desktop," www.pcworld.com/apr00/hh_windows). It does, but like many things in Windows 2000, the controls have moved around a bit. In Windows 2000, right-click on the desktop and choose Active Desktop. Then, if there is a check mark by Show Web Content, choose Show Desktop Icons from the Active Desktop menu to uncheck it and hide your icons. If Show Web Content isn't checked, add that check mark first; then right-click on the desktop again and choose Active Desktop*Show Desktop Icons.

Get There Faster With Document Shortcuts
Windows file and folder shortcuts make it easy to open a document or folder fast.
But they don't make it easier to get to, say, page 31, paragraph C of a long document or a specific cell of that monster spreadsheet you've been nursing all year. Fortunately, there is another kind of shortcut to solve this problem--the document shortcut. The only catch is that the application you use to open these documents must support object linking and embedding (OLE). For the majority of people, this means using Microsoft Word or Excel. To create a document shortcut, open the document or spreadsheet you want a shortcut for. Navigate to the particular page, paragraph, sentence, or cell that you frequently consult, and select it. Then use your right-mouse button to drag the selection to a folder or the desktop. Release the button and choose Create Document Shortcut Here. Depending on the application, you may need to copy the selection, navigate to your folder or desktop, right-click, and choose Paste Shortcut. With many applications, you'll need to return to the original app (the one that the shortcut points to) and choose File*Save. If you're creating a great number of these document shortcuts, you'll find them easier to manage if you keep them in their own folder. To create this folder, right-click the Start button and choose Open. In the Start Menu folder, right-click an empty area and choose New*Folder. Type a name and press Enter. Find files in this article at www.fileworld.com/magazine. Send your questions and tips to [email protected]. We pay $50 for published tips and questions. Scott Dunn is a contributing editor for PC World and a principal author of The PC Bible, 2nd Edition (Peachpit Press, 1995).

Open One File Type in Many Apps With OpenExpert
Want to open GIFs, JPEGs, or HTML files from your browser one day and from an editing program the next? Opening a file and its application from Explorer is usually a one-file-type, one-application kind of procedure. It's possible to dig through the myriad Explorer dialog boxes to change your options, but who wants to do that for several file types? OpenExpert can help. This handy little utility (free to home users, $20 for businesses) adds an "Open With" submenu to your right-click menu and makes it a breeze to configure applications. Once you've set up a few apps, adding them to the menu for other file types is a simple drag-and-drop procedure. You can also customize the names of applications and the order in which they appear on the menu. Why didn't Microsoft think of this? Thank the people at BaxBex Software for such a great idea. You can download OpenExpert from the vendor's site (baxbex.com) or from FileWorld.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829115.83/warc/CC-MAIN-20181217183905-20181217205905-00254.warc.gz
CC-MAIN-2018-51
16,482
39
https://itch.io/jam/kenney-jam-2017
code
This jam is now over. It ran from 2017-08-25 11:00:00 to 2017-08-27 22:20:00. View 85 entries. Create a game using only game assets made by Kenney ("Asset Jesus"). You're free to use, edit and remix them in any way you please (see the rules below). There will be no judging or ranking. The Kenney Jam 2017 theme was: Gameplay theme: "It's a feature, not a bug!" Find a team! Want to find some teammates? See the Kenney Jam page on CrowdForge: https://crowdforge.io/jams/kenneyjam
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517485.8/warc/CC-MAIN-20220517130706-20220517160706-00678.warc.gz
CC-MAIN-2022-21
511
7
https://share.ez.no/layout/set/esi/include/content/box/113717/(order)/latest
code
Friday 26 August 2011 3:16:11 pm The new index_ajax.php has been copied to the root of the installation, but the problem remains as long as the rewrite rule is present. Setting priority in admin2 works when I remove it. It's an Nginx rewrite, so I'm going to debug it some more to see if there's a problem in it. The reason I found this strange is that previous upgrades from 2011.4 all the way to 2011.7, using the recommended upgrade steps, have not resulted in any problems due to the rewrite. Modified on Friday 26 August 2011 3:22:19 pm by Daniel A. Øien
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413901.34/warc/CC-MAIN-20200601005011-20200601035011-00253.warc.gz
CC-MAIN-2020-24
560
4
https://www.groundai.com/project/when-relation-networks-meet-gans-relation-gans-with-triplet-loss/
code
When Relation Networks meet GANs: Relation GANs with Triplet Loss
Though recent research has achieved remarkable progress in generating realistic images with generative adversarial networks (GANs), the lack of training stability is still a lingering concern of most GANs, especially on high-resolution inputs and complex datasets. Since the randomly generated distribution can hardly overlap with the real distribution, training GANs often suffers from the gradient vanishing problem. A number of approaches have been proposed to address this issue by constraining the discriminator's capabilities using empirical techniques, like weight clipping, gradient penalty, spectral normalization, etc. In this paper, we provide a more principled approach as an alternative solution to this issue. Instead of training the discriminator to distinguish real and fake input samples, we investigate the relationship between paired samples by training the discriminator to separate paired samples from the same distribution and those from different distributions. To this end, we explore a relation network architecture for the discriminator and design a triplet loss which provides better generalization and stability. Extensive experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvements on various vision tasks, including unconditional and conditional image generation and image translation. Our source code will be available on the website: https://github.com/JosephineRabbit/Relation-GAN

Since first proposed in , generative adversarial networks (GANs) have witnessed rapid development and found numerous applications in many computer vision tasks, such as image generation [8, 13, 44], person re-identification , image super-resolution , etc. They have also been extended to natural language processing , video sequence synthesis , and speech synthesis recently. Though tremendous success has been achieved in many fields, training GANs is still a very tricky process and suffers from many issues, including the instability between the generator and the discriminator as well as an extremely subtle sensitivity to network architecture and hyper-parameters. It has been proved that most of these issues are due to the fact that the supports of the target distribution and the generated distribution are often of low dimension relative to the ambient space, and therefore misaligned most of the time, causing the discriminator to collapse to a function that hardly provides gradients to the generator. To remedy this issue, recent works proposed to leverage the Integral Probability Metric (IPM), such as Gradient Penalty and Spectral Normalization . In IPM-based GANs, the discriminator is constrained to a specific class of functions so that it does not grow too quickly, which alleviates vanishing gradients. However, the existing IPM methods also have their limits. For instance, the hyperparameter tuning of the gradient penalty is mostly empirical, while spectral normalization imposes constraints on every conv layer, which hinders the learning capacity of the discriminator. In , the authors argue that non-IPM-based GANs are missing a relativistic discriminator, which IPM-based GANs already possess. The relativistic discriminator is necessary to make the training process analogous to divergence minimization and produce sensible predictions based on the prior knowledge that half of the samples in the mini-batch are fake.
Although they have shown the power of the relativistic discriminator, the potential of comparing the relation between the real and fake distributions still remains to be explored. In this paper, we explicitly study the effect of relation comparison in GANs by training the discriminator to determine whether the input paired samples are drawn from the same distribution (either real or fake). A relation network is presented, acting as the discriminator. A new triplet loss is also designed for training the GANs. In this way, the aforementioned problem of disjoint supports can be alleviated by projecting and merging the low-dimensional data distributions into a high-dimensional feature space. Mathematically, we prove that our new triplet loss is a divergence and can achieve the Nash equilibrium, leading to convergence of the generated data distribution to the real distribution. In addition, we analyze the oscillatory behavior that GANs exhibit on the Dirac-GAN, and we demonstrate that the proposed Relation GAN is locally convergent even without any regularization. Extensive experiments are conducted on conditional and unconditional image generation and image translation tasks. The promising performance demonstrates that the proposed Relation GAN has great potential in various applications of GANs. In summary, the contributions of this paper are twofold. We propose a new training strategy for GANs to better leverage the relation between samples. Instead of separating real samples from generated ones, the discriminator is trained to determine whether paired samples are from the same distribution. We propose a relation network architecture as the discriminator and a triplet loss for training GANs. We show both theoretically and empirically that the relation network together with the triplet loss gives rise to a generated density which can exactly match that of the real data. Extensive experiments on 2D grid , Stacked MNIST , CelebA , LSUN , and CelebA-HQ data sets confirm that our proposed method performs favourably against state-of-the-art methods such as the relativistic GAN , WGAN-GP , Least Squares GAN (LSGAN) , and the vanilla GAN .

2 Related Work
The vanilla GAN minimizes the JS divergence of two distributions, leading to the gradient vanishing problem when the two distributions are disjoint. Recent works try to address this issue by designing new objective functions [24, 32, 37, 1] or more sophisticated network architectures [14, 45, 5, 33]. Others investigate regularization and/or normalization to constrain the ability of the discriminator [28, 9, 16]. Recently, a new method was proposed to explore a relativistic discriminator. In the following, we will review recent works using different objective functions and a special case, relativistic GANs, which are closely related to our approach.

2.1 Different Objective Functions in GANs
Generally, there are two kinds of loss functions in GANs: the minimax GAN and the non-saturating (NS) GAN. In the former, the discriminator minimizes the negative log-likelihood for the binary classification task. In the latter, the generator maximizes the probability of generated samples being real. The non-saturating loss is known to outperform the minimax variant empirically. Among them, the loss-sensitive GAN tries to solve the problem of gradient vanishing by focusing on training samples with low authenticity. WGAN proposes the Wasserstein distance to replace the JS divergence, which can measure the distance between two distributions even when they are disjoint.
In addition, also proposes to add noise to both real and generated samples to further alleviate the impact of disjoint distributions. improves WGAN by replacing the weight clipping constraints with a gradient penalty, which enforces the Lipschitz constraint on the discriminator by punishing the norm of the gradient. DRAGAN combines the two parts of WGAN and LSGAN, and only improves the loss function to a certain extent. The stability of loss training is controlled by constantly updating the coefficient of the latter term.

2.2 Relativistic GANs
Instead of training discriminators to predict the absolute probabilities of the input samples being real, the relativistic GAN proposes to use a relativistic discriminator, which estimates the probability of the given real sample being more realistic than a randomly sampled fake sample. Although bears a similar spirit, our method differs from in that we adopt a relation network as the discriminator to estimate the relation score of a paired input. In comparison, the discriminator in treats input samples separately and relies on a ranking loss (e.g., hinge loss) to explore their relation. The idea of merging the features and comparing the relation between samples from two distributions has not been explored in the GAN literature. In addition, our method proposes a new triplet loss to leverage the power of paired relation comparison, allowing more stability and better diversity for GANs without applying any IPM methods.

3 The Relation GAN Framework
3.1 Relation Net Architecture
In traditional GANs, a discriminator is trained to distinguish real samples from fake ones and a generator is trained to confuse the discriminator by generating realistic samples. Consider a real data distribution , and the data distribution produced by the generator . Rather than training the discriminator on real and fake data independently, we propose to train a discriminator which predicts a relation score for a paired input, indicating whether the paired samples are from the same distribution (either or ). Inspired by the success of the relation net architecture in other computer vision areas , our discriminator consists of two modules, an embedding module and a relation module, as shown in Figure 1. For a pair of input samples, the embedding module first maps each sample into a high-dimensional feature space. Their corresponding features are then merged and fed into the relation module to produce the relation score for the input pair. For ease of description, we name paired inputs containing both real and fake samples asymmetric pairs, and those containing samples from the same distribution (either real or fake) symmetric pairs. The training process is then formulated as a min-max game (see Section 3.2), where the discriminator aims to maximize the relation scores of asymmetric sample pairs and minimize those of symmetric ones. Meanwhile, the generator is trained to confuse the discriminator by minimizing the relation scores of asymmetric sample pairs containing real and generated samples.

3.2 The Min-Max Game
The min-max game in training GANs is conducted by optimizing the losses of and iteratively. In non-IPM GANs, the generalized losses of and can be presented as follows: where and are scalar-to-scalar functions, is the distribution of real data, and denotes the generated data distribution. In our Relation GAN, the loss functions of and are formulated as follows: where and are also scalar-to-scalar functions.
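As a rough, hedged illustration of the paired relation discriminator described above (embedding module, feature merge, relation module emitting a scalar relation score), a PyTorch-style sketch might look like this; the layer sizes, the concatenation merge, and all names are assumptions rather than the authors' implementation:

import torch
import torch.nn as nn

class RelationDiscriminator(nn.Module):
    # Scores how strongly a pair of samples looks like it comes from different distributions.
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        # Embedding module: map each sample into a high-dimensional feature space.
        self.embed = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                   nn.Linear(256, feat_dim))
        # Relation module: consume the merged pair of features, output one relation score.
        self.relate = nn.Sequential(nn.Linear(2 * feat_dim, 128), nn.ReLU(),
                                    nn.Linear(128, 1))

    def forward(self, x1, x2):
        f1, f2 = self.embed(x1), self.embed(x2)
        merged = torch.cat([f1, f2], dim=1)   # the paper's exact merge operation may differ
        return self.relate(merged)            # relation score for the (x1, x2) pair

# Usage sketch: symmetric pairs (real, real) or (fake, fake) should receive low scores,
# asymmetric pairs (real, fake) high scores; the triplet loss discussed next builds on this.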
The goal of the relation discriminator is to learn a loss function parameterized by which separates symmetric and asymmetric sample pairs by a desired margin. Then the generator can be trained to minimize this margin by generating realistic samples. Inspired by the success of the triplet loss , we formulate a similar loss function in our Relation GAN as follows: where and are samples from the real data distribution, is a sample from the generated data distribution, and is a sample from the data generated by the generator in the last optimization step. We use a distance metric to replace the constant 'margin' in the original triplet loss. This variable constraint leads to a smaller difference in relation scores when the distance between the two compared samples is smaller, which is more flexible than the original fixed margin. Our experiments also show the superiority of our new triplet loss with this margin.

3.3 A Variant Loss
Since the training batch size is limited, the sampled distribution of each batch may deviate from the real data distribution. For an input batch of paired samples, the loss function in (3.2) can be written as follows: where . Our triplet loss is designed to reduce the relation scores of symmetric sample pairs and increase those of asymmetric ones. However, when the real sample distribution is fairly uniform with small variance, the original loss is strict and prone to be disturbed by outliers within a batch. For these cases, we design a variant of our new triplet loss as follows: where represents the index of samples in a batch. The variant loss is more relaxed and not easily disturbed by extreme samples in the same batch. It performs better on evenly distributed data sets. Thus, we suggest employing the variant triplet loss on uniformly distributed data, e.g., datasets containing only a single class. Our experimental results on single-class datasets such as CelebA and LSUN confirm this.

4 Theory Proof and Analysis
As discussed in the introduction, the optimal discriminator of most GANs is a divergence. In this section, we first prove that the proposed discriminator based on the relation net also has this property, and then show distributional consistency under our Lipschitz continuity assumption.

4.1 A New Divergence
A divergence is a function of two variables that satisfies the following definition: Definition 1 If is a function of two variables that satisfies the following properties, then is a divergence between and . Assumption 1 In the training process, before reaching the optimum , ought to be more realistic than , and ought to give a bigger relation score to the paired input than . That ought to be more realistic than also means that is bigger than . Under this assumption, we show in Supplementary 1 that the loss function of our relation discriminator is also a divergence.

4.2 Distributional Consistency
We use to denote the parameterized discriminator function and to denote the parameterized generator function. Based on , we use the following definition of the Lipschitz assumption on the data density: Definition 2 For any two samples and , the loss function is Lipschitz continuous with respect to a distance metric if with a bounded Lipschitz constant , i.e., Assumption 2 The data density is supported in a compact set and is Lipschitz continuous w.r.t. with a bounded constant , as in Definition 2. Then we show the existence of a Nash equilibrium such that both the function and the density of generated samples are Lipschitz.
Same as in , we have that both (, ) and (, ) are convex in and in . Then, according to Sion's theorem , with and being optimized, there exists a Nash equilibrium (, ). We also have the following lemma: under Assumption 2, there exists a Nash equilibrium (, ) such that both and are Lipschitz. Then we can prove that, when the Nash equilibrium is reached, the density distribution of the samples generated by converges to the real data distribution , which is Lemma 1: Lemma 1 Under Assumption 2, for a Nash equilibrium in Lemma 1, we have Thus, converges to . The proof of this lemma is given in Supplementary 2.

[Figure 2 panels: (a) Vanilla GAN, (b) WGAN, (c) WGAN-GP, (d) GAN-QP, (e) Relation GAN]

4.3 The Convergence
In the literature, GANs are often treated as dynamic systems to study their training convergence , , , . This idea can be dated back to the Dirac GAN , which describes a simple yet prototypical counterexample for understanding whether GAN training is locally or globally convergent. To further analyze the convergence rate of training the proposed Relation GAN, we also adopt the Dirac GAN theory. However, that work only discusses the situation where the data distributions are 1-D. We extend this theory to the 2-D case to gain a better understanding. Definition 3 The Dirac-GAN consists of a (univariate) generator distribution and a linear discriminator , where denotes the parameter of the generator, is a 2-D vector, and represents the parameter of the discriminator. The real data distribution is a Dirac-distribution concentrated at . Suppose the real sample point is a vector , and the fake sample is , which also represents a parameter of the generator. The discriminator uses the simplest linear model, i.e., , which also represents the parameters of the discriminator. The Dirac GAN considers whether, in such a minimalist model, the fake sample eventually converges to the true sample, in other words, whether finally converges to . Specifically, in Relation GAN, our Dirac discriminator can be simplified as: , where and denote the parameters of the embedding module and the relation module, respectively. Based on the dynamic analysis for GANs in Supplementary 3, we obtain the numerical solution of the GANs' dynamic equations with an initial point , as Figure 2 shows. In , the authors find that most unregularized GANs are not locally convergent. In our 2-D Dirac GANs, the numerical solutions of WGAN , WGAN-GP , GAN-QP , and the vanilla GAN also oscillate near the real sample point or struggle to converge to it, while our Relation GAN succeeds in converging. This indicates that our GAN has good local convergence.

We first evaluate the proposed Relation GAN on the 2D synthetic dataset and the Stacked MNIST dataset to demonstrate the diversity of the generated data and the stability of the generator. We then perform image generation tasks with our method to show its superiority in synthesizing natural images. Finally, an ablation study is conducted to verify the effects of the feature merging mechanism in relation nets and the proposed triplet loss.

[Figure 3 panels: (a) Vanilla GAN, (b) LSGAN, (c) WGAN-GP, (d) Relativistic GAN, (e) Relation GAN]

5.1 The Diversity of Generated Data
2D Datasets We compare the effect of our relation discriminator on the 2D 8-Gaussian distribution, the 2D 25-Gaussian distribution, and the 2D swissroll distribution. The experimental settings follow . The results generated by our method and four popular methods under the same setting are shown in Figure 3.
Compared with the other methods, ours can better fit these 2D distributions. Stacked MNIST For the Stacked MNIST experiments, we use the setting and code of . Each of the three channels in each sample is classified by a pre-trained MNIST classifier, and the resulting three digits determine which of the 1000 modes the sample belongs to. We measure the number of modes captured with the pre-trained classifier. We choose the Adam optimizer for all experiments. Our results are shown in Table 5.1. We find that our Relation GAN achieves the best mode coverage, reaching all 1,000 modes.

5.2 Unconditional Image Generation
Datasets We provide comparisons on four datasets, namely CIFAR-10 , CelebA , LSUN-BEDROOM , and CelebA-HQ . The LSUN-BEDROOM dataset contains 3M images, which are randomly partitioned into a test set of around 30k images and a training set containing the rest. We use the version of CelebA-HQ with 30k images. We only compare our method with the Relativistic GAN and WGAN-GP on CelebA-HQ due to limited computational resources. Settings For CIFAR-10, we use the ResNet architecture proposed in (with spectral normalization layers removed). For CelebA, LSUN, and CelebA-HQ, we use a DCGAN architecture as in . We apply the Adam optimizer in all experiments, as Table 5.2 shows. We use one discriminator update per generator update. The batch size is 64. Other details of our experimental settings are provided in the Supplementary. Evaluation To compare the sample quality of different models, we consider three different scores: IS , FID , and KID , which are based on an Inception network pre-trained on ImageNet . Results and Analysis Some randomly generated samples on the three data sets are shown in the corresponding figure. More generated images and evaluation scores are provided in Supplementary 6. From Table 3 we find that Relation GAN is also highly competitive on single-class data sets, i.e., CelebA and LSUN, while it achieves the best performance on CIFAR-10. As we discussed in Sec. 3.3, the variant loss is more relaxed and suitable for evenly distributed data sets, while the loss in Eq. (3.3) is stricter and performs better on multi-class or harder data sets (it also performs best on Stacked MNIST).

5.3 Conditional Image Generation
We compare with MSGAN , which is one of the best conditional GAN models, on the conditional CIFAR-10 dataset. The experiment is performed by simply replacing the MS-loss in with the relation loss. Table 4 reports the FID results.

5.4 Image Translation
In addition to image generation, GANs have also made promising progress in image translation. They have shown great success in a range of image translation tasks, including style transfer, image enhancement, image super-resolution, and image segmentation. We conduct three related experiments on image style transfer and image super-resolution, respectively. Image Style Transfer For the image style transfer task, we adopt CycleGAN as our baseline model to translate Monet's paintings into photographs. The FID score is used to evaluate the quality of the generated images. Table 5 shows the comparison of FID scores of the generated images. A lower FID represents a smaller perceptual difference between target-domain images and generated images. We find that both relation losses perform better than the original adversarial loss in CycleGAN, and the relation loss performs best. Image Super Resolution For the image super-resolution task, we employ SRGAN with the relativistic loss, a recently proposed GAN loss, as our baseline.
We denote our baseline as SRGAN. The training and validation datasets are sampled from VOC2012; the training set has 16,700 images and the validation set has 425 images. We compare PSNR and SSIM on three popular SR datasets: Set5 , Set14 , and Urban100 . Table 6 lists the PSNR and SSIM of the different approaches on these datasets. We can observe that the FID scores of the proposed algorithm are better than those of the original method on the photo-painting datasets.

5.5 Ablation Study
We conduct the ablation study on the image generation datasets. We first compare our triplet loss with the Siamese loss , whose results are shown in Table 7. The formulation of the Siamese loss function is shown in Supplementary 4. Second, we take a closer look at the impact of our embedding module and relation module. The "" in Table 8 represents different architectures of the discriminator, where the embedding module contains res-blocks and the relation module contains res-blocks. The "(0+3)" means the samples are concatenated together after the first conv layer and then put into the relation module (RM), which contains 3 res-blocks. The "no EM" means that the paired inputs are packed at the beginning of the discriminator as . All experiments are conducted on CIFAR-10. Results and Analysis From Table 7, we find that the results of the proposed triplet loss are much better than those of the Siamese loss. The "-" represents model collapse during the training process. The results in Table 8 show that a bigger EM enhances the performance, which also demonstrates the effectiveness of our embedding strategy.

In this paper we propose the Relation GAN. A relation network architecture is designed and used as the discriminator, which is trained to determine whether paired input samples are from the same distribution or not. The generator is jointly trained with the discriminator to confuse its decision using a triplet loss. Mathematically, we prove that the optimal discriminator based on the relation network is a divergence, indicating that the distance between the generated data distribution and the real data distribution becomes progressively smaller during the training process. We also prove that the generated data distribution will converge to the real data distribution when the Nash equilibrium is reached. In addition, we analyze our method and several other GANs as dynamical systems. We demonstrate that our GAN has excellent convergence by analyzing the dynamics of the Dirac GANs. The results of experiments on simple 2D distribution data and Stacked MNIST verify the effectiveness of Relation GAN, especially in addressing the mode collapse problem. Our Relation GAN not only achieves state-of-the-art performance on unconditional and conditional image generation tasks with a basic architecture and training settings, but also achieves promising results in image translation tasks compared with other GAN losses. - (2017) Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 214–223. Cited by: §2.1, §2, §4.3. - (2018) Advances in neural information processing systems 31: annual conference on neural information processing systems 2018, neurips 2018, 3-8 december 2018, montréal, canada. Cited by: §1. - (2018) Demystifying MMD gans. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, Cited by: §5.2. - (2010) Learning mid-level features for recognition.
In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, San Francisco, CA, USA, 13-18 June 2010, pp. 2559–2566. Cited by: §5.2. - (2018) Large scale GAN training for high fidelity natural image synthesis. CoRR abs/1809.11096. Cited by: §2. - (2018) Deep video generation, prediction and completion of human action sequences. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part II, pp. 374–390. Cited by: §1. - (2016) Learning local image descriptors with deep siamese and triplet convolutional networks by minimizing global loss functions. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 5385–5394. Cited by: §3.2, §5.5. - (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 2672–2680. Cited by: §1, §1, §2, §3.2, §4.3. - (2017) Improved training of wasserstein gans. In Conference and Workshop on Neural Information Processing Systems, pp. 5769–5779. Cited by: §1, §1, §2.1, §2, §4.3. - (2016) Deep residual learning for image recognition. See ?, pp. 770–778. External Links: Cited by: §5.2. - (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 6629–6640. Cited by: §4.3, §5.2. - (2015) Single image super-resolution from transformed self-exemplars. See ?, pp. 5197–5206. External Links: Cited by: §5.4. - (2018) The relativistic discriminator: a key element missing from standard GAN. CoRR abs/1807.00734. Cited by: §1, §1, §1, §2.2, §2. - (2018) Progressive growing of gans for improved quality, stability, and variation. See ?, External Links: Cited by: §2. - (2014) Adam: A method for stochastic optimization. CoRR abs/1412.6980. Cited by: §5.1. - (2018) On convergence and stability of gans. CoRR. Cited by: §2.1, §2. - (1989) Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems 2, [NIPS Conference, Denver, Colorado, USA, November 27-30, 1989], pp. 396–404. Cited by: §1, §5.1. - (2017) Photo-realistic single image super-resolution using a generative adversarial network. See ?, pp. 105–114. External Links: Cited by: §5.4. - (2001) New edge-directed interpolation. IEEE Trans. Image Processing 10 (10), pp. 1521–1527. External Links: Cited by: §5.4. - (2018) PacGAN: the power of two samples in generative adversarial networks. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada., pp. 1505–1514. Cited by: §5.5. - (2015) Deep learning face attributes in the wild. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pp. 3730–3738. Cited by: §1, §5.2. - (2018) Are gans created equal? a large-scale study. In Advances in Neural Information Processing Systems 31, pp. 700–709. Cited by: §1, §5.2. - (2019) Mode seeking generative adversarial networks for diverse image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 1429–1437. Cited by: §5.3. 
- (2017) Least squares generative adversarial networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pp. 2813–2821. Cited by: §1, §2. - (2018) Which training methods for gans do actually converge?. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 3478–3487. Cited by: §4.3, §4.3. - (2017) The numerics of gans. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 1823–1833. Cited by: §4.3. - (2017) Unrolled generative adversarial networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, Cited by: §1. - (2018) Spectral normalization for generative adversarial networks. CoRR abs/1802.05957. Cited by: §1, §2, §5.2. - (2017) Gradient descent GAN optimization is locally stable. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 5591–5600. Cited by: §4.3. - (2017) SEGAN: speech enhancement generative adversarial network. CoRR abs/1703.09452. Cited by: §1. - (2017) Photo-realistic single image super-resolution using a generative adversarial network. Cited by: §1. - (2017) Loss-sensitive generative adversarial networks on lipschitz densities. CoRR abs/1701.06264. Cited by: §2.1, §2, §4.2, §4.2. - (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, Cited by: §2. - (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. Cited by: §5.2. - (2016) Improved techniques for training gans. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 2226–2234. Cited by: §5.2. - (1958) On general minimax theorems.. Pacific J. Math. 8 (1), pp. 171–176. Cited by: §4.2. - (2018-11) GAN-qp: a novel gan framework without gradient vanishing and lipschitz constraint. pp. . Cited by: §2, §4.3. - (2017) Adversarial generation of natural language. In Proceedings of the 2nd Workshop on Representation Learning for NLP, Rep4NLP@ACL 2017, Vancouver, Canada, August 3, 2017, pp. 241–251. Cited by: §1. - (2018) Learning to compare: relation network for few-shot learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 1199–1208. Cited by: §3.1. - (2016) Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 2818–2826. Cited by: §5.2. - (2019) Improving generalization and stability of generative adversarial networks. In International Conference on Learning Representations, External Links: Cited by: §5.1, §5.1, §5.2. - (2015) LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. CoRR abs/1506.03365. Cited by: §1, §5.2. - (2010) On single image scale-up using sparse-representations. See ?, pp. 711–730. External Links: Cited by: §5.4. 
- (2018) Self-attention generative adversarial networks. CoRR abs/1805.08318. Cited by: §1. - (2018) Self-attention generative adversarial networks. CoRR abs/1805.08318. Cited by: §2.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439741154.98/warc/CC-MAIN-20200815184756-20200815214756-00241.warc.gz
CC-MAIN-2020-34
32,023
128
https://devhubby.com/thread/how-to-speed-up-file-reading-in-pascal
code
Additionally, you can use the following techniques to further speed up file reading in Pascal:
- Use file buffering: You can increase the buffer size used for file reading using the "SetTextBuf" procedure. This can reduce the number of disk read operations and improve performance.
- Optimize disk access: Try to store frequently accessed files on faster storage devices, such as SSDs instead of HDDs. Fragmentation of files can also slow down the file reading process, so defragmenting the disk can help improve performance.
- Reduce unnecessary file operations: Minimize the number of file seek operations or unnecessary file pointer movements. Sequentially reading a file can be faster than randomly accessing different parts of the file.
- Parallel processing: If you need to process multiple files at the same time, you can parallelize the file reading and processing operations using threads or parallel programming techniques. This can leverage multiple CPU cores and speed up the overall file reading process.
- Use efficient algorithms: If you need to perform specific operations on the data read from the file, choose algorithms that are optimized for efficiency. For example, using efficient sorting or searching algorithms can speed up data processing.
By applying these techniques, you can significantly improve the speed of file reading in Pascal. However, keep in mind that the actual performance gains may vary depending on your specific application and hardware configuration.
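As a rough illustration of the first point, a minimal Turbo Pascal/Free Pascal-style sketch using SetTextBuf might look like this; the 64 KB buffer size and the file name are arbitrary examples:

program FastRead;
var
  f: Text;
  buf: array[1..65536] of Char;    { a much larger buffer than the default 128 bytes }
  line: string;
begin
  Assign(f, 'data.txt');           { example file name }
  SetTextBuf(f, buf, SizeOf(buf)); { install the buffer before any read takes place }
  Reset(f);
  while not Eof(f) do
    ReadLn(f, line);               { sequential reads are now served from the big buffer }
  Close(f);
end.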
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816875.61/warc/CC-MAIN-20240414064633-20240414094633-00073.warc.gz
CC-MAIN-2024-18
1,493
7
https://www.techrepublic.com/forums/discussions/broken-display-laptop-hdmi-not-working/
code
Broken Display Laptop, HDMI Not Working. A friend of mine gave me a laptop with decent specs but no display; the screen was broken and had been taken apart. The system boots up and works completely fine, but the main problem is that there is no display to work with and no way to log in so that I can use the Win + P method. I have tried connecting it over HDMI, but the Fn + F10 display-output switch does not work. What could I do to make it output to an external monitor or TV? Would VGA be handled differently and show the login screen as well?
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476397.24/warc/CC-MAIN-20240303174631-20240303204631-00623.warc.gz
CC-MAIN-2024-10
528
4
https://www.my.freelancer.com/projects/php-asp/computer-repair-service-software/
code
I am looking for something like this [url removed, login to view], maybe a little better. 10 freelancers are bidding an average of $76 for this job. I'm an experienced programmer in VB.NET. This project is very similar to others that I have completed in the past. I'm sure you'll make the right decision. Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812880.33/warc/CC-MAIN-20180220050606-20180220070606-00741.warc.gz
CC-MAIN-2018-09
327
5
https://momentumadvisory.de/validating-input-in-javascript-9309.html
code
Clicking on the label will move the cursor (focus) to the input with the id specified in the for attribute. The name of each input (or textarea) will be passed to the server to identify the contents of the form. Most credit cards are 16 digits, and the 16th digit is a check digit that can be calculated from the first 15 numbers. By validating form responses before accepting them, we can alert users to their errors before they submit the form. In this way, client-side form validation can vastly improve the user experience. Finally, notice that instead of using "text" as the input type for the email and url fields, we use "email" and "url". This will buy us free validation from browsers that support HTML5, even if JavaScript is turned off. If a malicious user submits crafted input and you are using a MySQL database, this is called MySQL injection. To avoid this problem, you must also validate form submissions on your server. To get started, we'll create an HTML page that includes a form with the id "contact". It should contain a label, an input (or textarea), and a span that will contain the error message. The form in HTML will look like the sketch shown below; the label tag improves usability. When the submit button is pushed, jQuery will check whether all fields are valid. If any fields are not valid, the form will not be submitted, and the user will be informed with error messages for the fields that are causing problems.
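A minimal sketch of the kind of form and jQuery submit check the text describes might look like the following; the field names, ids, and messages are placeholders rather than the article's original example:

<form id="contact">
  <label for="email">Email</label>
  <input type="email" id="email" name="email">
  <span class="error" id="email-error"></span>
  <button type="submit">Send</button>
</form>

<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<script>
  // When the submit button is pushed, check the field and block submission if it is invalid.
  $('#contact').on('submit', function (e) {
    var email = $('#email').val();
    if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {  // simple illustrative pattern only
      $('#email-error').text('Please enter a valid email address.');
      e.preventDefault();                             // do not submit the form
    }
  });
</script>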
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255944.3/warc/CC-MAIN-20190520121941-20190520143941-00274.warc.gz
CC-MAIN-2019-22
1,344
3
https://aclanthology.org/2023.eacl-main.15/
code
Abstract: Despite the success of Transformer models in vision and language tasks, they often learn knowledge from enormous data implicitly and cannot utilize structured input data directly. On the other hand, structured learning approaches such as graph neural networks (GNNs) that integrate prior information can barely compete with Transformer models. In this work, we aim to benefit from both worlds and propose a novel Multimodal Graph Transformer for question answering tasks that require performing reasoning across multiple modalities. We introduce a graph-involved plug-and-play quasi-attention mechanism to incorporate multimodal graph information, acquired from text and visual data, into the vanilla self-attention as an effective prior. In particular, we construct the text graph, dense region graph, and semantic graph to generate adjacency matrices, and then compose them with input vision and language features to perform downstream reasoning. Such a way of regularizing self-attention with graph information significantly improves the inferring ability and helps align features from different modalities. We validate the effectiveness of Multimodal Graph Transformer over its Transformer baselines on the GQA, VQAv2, and MultiModalQA datasets.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506658.2/warc/CC-MAIN-20230924155422-20230924185422-00339.warc.gz
CC-MAIN-2023-40
1,249
1
https://www.mssqltips.com/sqlservertutorial/9091/biml-tutorial-biml-language-basics/
code
By: Koen Verbeeck

In this chapter we will take a look at the foundations of the Biml language.

It's all XML
As mentioned before, Biml is basically XML. This means it follows the hierarchical structure of XML. Let's illustrate with an example. This piece of Biml describes a simple package with one connection and one Execute SQL Task (see the sketch at the end of this section). The script starts with the Biml root node, and then child nodes are added. Simplified, the hierarchical structure of Biml can be represented with this schema: the Biml node can have connections, fileformats (used with flat files) and packages. Packages have tasks and containers, and containers can contain tasks. A data flow is a special task, which has its own set of child nodes. This schema just represents the basics of an SSIS package. With Biml you can specify every construct that you can also create manually in Visual Studio: event handlers, log providers, connection managers, package parameters, variables, expressions, script tasks and so on. The only notable exception is project parameters, which can be specified in Biml but will not actually be generated. You can find more info about the Biml language elements in the official documentation.

Since it's XML, Biml has to adhere to some rules. Each element has an opening and a closing tag, or it is an empty tag. An element can have zero or more attributes. Keep in mind that some characters are reserved characters in XML. They need to be replaced with escape codes: for example, a double quote (") becomes &quot; and a single quote (') becomes &apos;. You can also specify comments in Biml, just as in XML, using <!-- and -->.

Biml uses XML because SSIS packages are also XML behind the scenes. You can verify this by right-clicking a package and selecting View Code. However, the XML of Biml is much more simplified than the XML schema used for SSIS packages. It's certainly more readable; you can write it manually if you desire. This makes Biml an ideal starting point for automating the creation of your SSIS packages. It's also much easier to compare Biml files (through source control, for example) and to reuse code. In the next chapter, we will write a bit of Biml code to generate our first package.

- The tip Introduction to Business Intelligence Markup Language (BIML) for SSIS also gives a general introduction to the Biml language.
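The Biml listing that the chapter refers to is not reproduced above, so the following is only a hedged sketch of what a minimal package with one connection and one Execute SQL Task typically looks like; the names, connection string, and query are placeholders, not necessarily the chapter's exact example:

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Connections>
    <!-- placeholder connection; point the connection string at your own server -->
    <OleDbConnection Name="MyConn" ConnectionString="Data Source=.;Initial Catalog=Demo;Provider=SQLNCLI11;Integrated Security=SSPI;" />
  </Connections>
  <Packages>
    <Package Name="MyFirstPackage" ConstraintMode="Linear">
      <Tasks>
        <ExecuteSQL Name="SQL Do Something" ConnectionName="MyConn">
          <DirectInput>SELECT 1;</DirectInput>
        </ExecuteSQL>
      </Tasks>
    </Package>
  </Packages>
</Biml>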
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816977.38/warc/CC-MAIN-20240415111434-20240415141434-00879.warc.gz
CC-MAIN-2024-18
2,273
17
http://www.itontec.com/how-to-calibrate-the-touchscreen-for-ras-with-raspbian-jessie/
code
ONLY follow this instruction if your OS is Raspbian Jessie.
We will install xinput_calibrator and a script to load the calibration data each time X starts. The first time you run X, you will be presented with a calibration screen. This will only run once, and you will be asked to touch the touchscreen a few times. The calibration program will create a file which stores the calibration data (/etc/pointercal.xinput). To perform calibration again, just delete /etc/pointercal.xinput and restart X. You will be presented with the calibration program again once X starts.
1. Install all the prerequisites required for calibration:
sudo apt-get install libtool libx11-dev xinput autoconf libx11-dev libxi-dev x11proto-input-dev -y
2. Download and install xinput_calibrator:
git clone https://github.com/tias/xinput_calibrator
cd xinput_calibrator/
./autogen.sh
make
sudo make install
When copying and pasting ./autogen.sh, you will need to confirm that the dot is copied.
3. Download and set up the calibration script:
cd ~
wget http://s3.amazonaws.com/ttbox/xinput_calibrator_pointercal.sh
wget http://s3.amazonaws.com/ttbox/calibrator.desktop
sudo cp ~/xinput_calibrator_pointercal.sh /etc/X11/Xsession.d/xinput_calibrator_pointercal.sh
sudo cp ~/calibrator.desktop /etc/xdg/autostart/calibrator.desktop
When copying and pasting the last line above, you will need to confirm that all the quotes get copied correctly.
[Menu] – [Preference] – [Calibrate Touchscreen]
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814787.54/warc/CC-MAIN-20180223134825-20180223154825-00296.warc.gz
CC-MAIN-2018-09
1,459
14
https://som.yale.edu/faculty/paul-goldsmith-pinkham
code
Paul Goldsmith-Pinkham's research interests include consumer & corporate finance, econometrics, and social networks. His current work focuses on assessing the costs and benefits of debtor protection policies and understanding the role that consumer debt plays in the macroeconomy. Paul's research also studies machine learning techniques applied to questions in economics. Before joining Yale, Paul was a Research Economist at the Federal Reserve Bank of New York. He earned a bachelor's degree in economics from Swarthmore College and a PhD in economics from Harvard University. - PhD, Harvard University, 2015 - MA, Harvard University, 2012 - BA, Swarthmore College, 2007 - Best Empirical Finance Paper for "The Gender Gap in Housing Returns", WRDS (Wharton Research Data Services), 2020 - Best Empirical Finance Paper for "Predictably Unequal? The Effects of Machine Learning on Credit Markets", WRDS (Wharton Research Data Services), 2019
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991812.46/warc/CC-MAIN-20210515004936-20210515034936-00083.warc.gz
CC-MAIN-2021-21
952
6
http://rprogrammingassignmenthel28147.diowebhost.com/19056954/rumored-buzz-on-python-homework-help
code
Yes! You can use the Python programming language to create a method that produces the Fibonacci series as output on the basis of a given input. Select Open in current window - this will close the current project, but you can reopen it later. See the page Opening Multiple Projects for details. There is nothing to worry about. javaassignmenthelp is here to help with Python homework and your studies. We offer you better services and guidance than others, to make you more proficient in Python programming, which will carry you through a prosperous career. Large stores like Big Bazaar and restaurants can use this system to make sure that all goods are available throughout the year. Featured FREELANCER: Excellent work, super fast, super quality, and understood the brief flawlessly! If you're looking for a talented web developer, you will find people like Charchit to help you meet your requirements. That's it! We will do your Python assignment and deliver it to you within your deadline. Just relax, carry on with the other work that you have, and leave this assignment to us. It's our job to get it done for you. As soon as you start typing, you should see that PyCharm, like a pair programmer, looks over your shoulder and suggests how to complete your line. For example, say you want to create a Python class: as soon as you start typing the keyword, a suggestion list appears. You should receive an automatic response notifying you that we received your information. Someone from our company team will be reaching out to you shortly. Teachers will mark this project higher than others because it is one project solving many problems, unlike others where everyone is solving one problem at a time. Passion must be among the most mentioned words in everyday conversations in the world today. Teachers want students to be passionate about their subjects. Preachers want their congregations to be passionate about God. A system for one and all, using which people can book their spa appointment from home. It would make an excellent Python project for a final-year presentation. The easiest way to get Python homework help today is to hire an expert for the work, rather than going to your friends and family for help. This is mainly because you never know whether your friends' work is up to the mark, whether it is free of errors, and whether it is original. Learn the drivers, pain points, and contextual factors that make DevOps both necessary and difficult to achieve in larger businesses. You could review your work endlessly without finding a solution, which can be very disheartening. Fortunately, there is help for all of these situations. If you're struggling to complete your homework, don't let any more time slip by.
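The opening line above refers to a Python method that returns the Fibonacci series for a given input, but the page does not show any code. A minimal illustrative version follows (the function name and the choice to return a list are assumptions, not taken from the original page):

```python
def fibonacci_series(n):
    """Return the first n Fibonacci numbers as a list (illustrative only)."""
    series = []
    a, b = 0, 1
    for _ in range(n):
        series.append(a)
        a, b = b, a + b
    return series

print(fibonacci_series(7))  # [0, 1, 1, 2, 3, 5, 8]
```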
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999263.6/warc/CC-MAIN-20190620165805-20190620191805-00098.warc.gz
CC-MAIN-2019-26
3,108
14
https://vendorsupport.paragonrels.com/question/6664/birmingham-aka-bamls-bial-have-issues-with-find-deep-links/
code
(Birmingham aka BAMLS (BIAL)) have issues with Find deep links. Hi there, I am from Move Inc. One of our clients, the Birmingham aka BAMLS (BIAL) MLS, has a couple of deep links pointing to the Find application (Find is owned by Move Inc.). These deep links work based on an SSO handshake between Find and the MLS application. When the user clicks on the deep link, the MLS application is supposed to send us the MLS name in the URL path. In this case, what we are expecting is: https://find.Realtor.com/Birmingham?uid= but what we are actually receiving is: http://find.realtor.com/ENTER_MLS_CODE_HERE?uid=RcomPS&RedirectUrl=http%3a%2f%2ffind.realtor.com%2fBirmingham%2fPartner%2fPropertyDetails%2f834237 The issue is happening intermittently, meaning sometimes we receive the correct URL path and sometimes we receive "ENTER_MLS_CODE_HERE" in the URL.
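The report above shows that the broken URL still carries the intended MLS name inside its encoded RedirectUrl parameter. As a hedged sketch (not something either vendor actually uses; only the parameter names are taken from the example URL), a check like the following can flag the unsubstituted ENTER_MLS_CODE_HERE placeholder and recover the MLS name from the redirect:

```python
from urllib.parse import urlparse, parse_qs, unquote

def inspect_deep_link(url):
    """Return (is_broken, mls_from_path, mls_from_redirect) for a Find deep link."""
    parts = urlparse(url)
    mls_from_path = parts.path.strip("/").split("/")[0]
    is_broken = mls_from_path == "ENTER_MLS_CODE_HERE"   # placeholder never substituted
    redirect = parse_qs(parts.query).get("RedirectUrl", [""])[0]
    redirect_path = urlparse(unquote(redirect)).path.strip("/")
    mls_from_redirect = redirect_path.split("/")[0] if redirect_path else None
    return is_broken, mls_from_path, mls_from_redirect

url = ("http://find.realtor.com/ENTER_MLS_CODE_HERE?uid=RcomPS&RedirectUrl="
       "http%3a%2f%2ffind.realtor.com%2fBirmingham%2fPartner%2fPropertyDetails%2f834237")
print(inspect_deep_link(url))   # (True, 'ENTER_MLS_CODE_HERE', 'Birmingham')
```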
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654031.92/warc/CC-MAIN-20230608003500-20230608033500-00205.warc.gz
CC-MAIN-2023-23
855
4
http://greywolf.critter.net/weirdwars/houserules/archetype-traits.htm
code
Normally in Deadlands, the character generation system is fairly random. However, it might be that you'd prefer the Posse to start off on fairly equal footing. One way of doing this is to allow your players to choose from the character Archetypes presented in the rules, and letting them tweak the Aptitudes a bit to their liking. However, you may notice a bit of a pattern in most of the Archetypes insofar as their Traits go. (There are a few exceptions to this pattern, but they're not the rule.) Basically, the player would get the following results to distribute amongst his Traits, rather than drawing cards as per normal. From that point on, you calculate Wind, Pace, Strain, and the number of points available to spend on Aptitudes and Edges as usual. It is most certainly just fine for any player to stick the highest results below under whatever Trait that he'd benefit from. Just remember that each of these results can only be distributed to one Trait - You can't stick 2d12 or 3d10 in all of them! On the bright side, none of these came up as d4s...
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300616.11/warc/CC-MAIN-20220117182124-20220117212124-00328.warc.gz
CC-MAIN-2022-05
1,062
2
https://cve.circl.lu/capec/4
code
Name: Using Alternative IP Address Encodings
Summary: This attack relies on the attacker using unexpected formats for representing IP addresses. Networked applications may expect network location information in a specific format, such as fully qualified domain names, URLs, IP addresses, or IP address ranges. The issue that the attacker can exploit is that these design assumptions may not be validated against a variety of different possible encodings and network address location formats. Applications that use naming for creating policy namespaces for managing access control may be susceptible to being queried directly by IP addresses, which is ultimately a more generally authoritative way of communicating on a network. Alternative IP addresses can be used by the attacker to bypass application access control in order to gain access to data that is only protected by obscuring its location. In addition, this type of attack can be used as a reconnaissance mechanism to provide entry point information that the attacker gathers to penetrate deeper into the system.
Prerequisites: The target software must fail to anticipate all of the possible valid encodings of an IP/web address.
Solutions:
- Design: Default deny access control policies.
- Design: Input validation routines should check and enforce both input data types and content against a positive specification. With regard to IP addresses, this should include the authorized manner for the application to represent IP addresses, and not accept user-specified IP addresses and IP address formats (such as ranges).
- Implementation: Perform input validation for all remote content.
Related CWEs:
- CWE-41: Improper Resolution of Path Equivalence
- CWE-180: Incorrect Behavior Order: Validate Before Canonicalize
- CWE-291: Reliance on IP Address for Authentication
- CWE-345: Insufficient Verification of Data Authenticity
- CWE-697: Incorrect Comparison
- CWE-707: Improper Enforcement of Message or Data Structure
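As an illustration of the "positive specification" advice above (not part of the CAPEC entry itself; the allowlist and function names are made up), the sketch below uses Python's ipaddress module, which only accepts plain dotted-quad IPv4 or standard IPv6 strings, so hex, octal, integer, and ranged encodings are rejected before any comparison is made.

```python
import ipaddress

ALLOWED = {"203.0.113.10"}          # hypothetical positive specification

def canonical_ip(raw):
    """Return the canonical text form of an IP address, or None if the input
    is not a plain, fully specified IPv4/IPv6 address (hex, octal, integer,
    and range encodings all raise ValueError and are rejected)."""
    try:
        return str(ipaddress.ip_address(raw.strip()))
    except ValueError:
        return None

def is_permitted(raw):
    canon = canonical_ip(raw)
    return canon is not None and canon in ALLOWED

print(is_permitted("203.0.113.10"))    # True
print(is_permitted("0xCB.0.113.10"))   # False - alternative encoding rejected
print(is_permitted("203.0.113.0/24"))  # False - a range is not a single address
```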
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593937.27/warc/CC-MAIN-20200118193018-20200118221018-00340.warc.gz
CC-MAIN-2020-05
2,005
15
https://support.ppolive.com/hc/en-us/articles/209873003-How-can-I-make-the-Project-Manager-field-required-when-a-project-has-moved-into-Initiating-
code
PPO allows users to set up Business Rules from the front-end of PPO. These rules include actions such as preventing an update if certain criteria aren't met (validation rules), the sending of e-mails based on trigger events in PPO (send e-mail) and the ability to call a web service in order to add, update or extract data from outside of PPO (call web service). This article will explain in detail one specific application of the Business Rules: How to make a field required based on a predefined condition. This will be explained using the example of making the Project Manager field required once the project has moved beyond the justification stage. This falls under the validation rules section of the business rules. For detailed steps on setting up validation rules and other Business Rules, see the following knowledge base article. Step 1: Add a new Business Rule and provide the base information Access the Business Rules from the Administration menu item and fill in the information as per below: Step 2: Set up the condition The rule should be that no project should be updated if it is in a stage after justification EXCEPT if the Project Manager field is populated. The conditions of the validation rule should therefore specify that the update should be prevented when the New values on the project stage field is anything but Justification. To set this up select the "New values" condition: Then set up the filter as follows: Submit the Filter Item and the Filter. Step 3: Set up the exception So far the validation rule specifies that no project should be allowed to be updated if the project is in any stage other than Justification. However, the rule should specify that the project may be edited after Justification only if the Project Manager field is populated. The exception should therefore be applied if the Project Manager field is NOT empty. To set this up select the "New values" exception: Then set up the Filter as follows: Submit the Filter Item, Filter and submit the Business Rule. This rule will thus prevent a project from being submitted if the stage is later than Justification and no Project Manager has been specified. The principles explained in this FAQ can be applied to other scenarios as well to make fields required and non-required based on conditions and exceptions.
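The rule above is configured through PPO's condition and exception screens, which are not reproduced here. Purely as an illustration of the logic, and not of PPO's actual rule engine, the condition/exception pair boils down to something like this sketch (the stage and field names follow the example; everything else is assumed):

```python
def allow_project_update(new_stage, project_manager):
    """Sketch of the business rule: block the update once the project has
    moved past the Justification stage unless Project Manager is populated."""
    if new_stage == "Justification":
        return True                   # condition not met; the validation rule does not fire
    return bool(project_manager)      # exception: Project Manager field is filled in

print(allow_project_update("Initiating", None))        # False - update prevented
print(allow_project_update("Initiating", "A. Smith"))  # True  - exception applies
```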
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585381.88/warc/CC-MAIN-20211021040342-20211021070342-00252.warc.gz
CC-MAIN-2021-43
2,313
18
https://wiki.ubuntu.com/UDSProceedings/N/Other
code
This page is here to collect together proceedings from sessions as part of sessions that don't fit one of the pre-defined tracks at the Natty UDS in Orlando, Florida. Please add proceedings by doing the following: - Add a new section with the name of the session. - Add the outcomes of the session as a collection of bullet points. The goal of the proceedings is to really focus on decided outcomes, so please try to keep them crisp. Paper Cuts for Natty Vish has been doing a lot last time, but he shouldn't have to do all work. This cycle more people should help him with triaging paper cuts; - There will come an open Launchpad team, for everyone who is interested in fixing paper cuts. This team gets assigned to all paper cuts that are accepted, so the team's mailing list gets a mail and the team members know there is a new paper cut to fix; - Until other package description projects get going we should continue to accept descriptions paper cuts, but they could be moved to a separate milestone; There will be a separate 'Paper Cuts'-like project for Unity, so the One Hundred Paper Cuts Project should be about applications and upstreams; - Paper cuts in featured applications and in applications of upstreams willing to cooperate will be accepted during the Natty cycle as well; - We will cooperate more directly with upstreams and Debian with regard to the package descriptions, to make sure our improvements benefit other projects as well, and we get feedback from the people who know the applications like no other; Vish will select the milestones for the Natty cycle; - We will investigate the possibility of using Launchpad Translations and a custom version of US English to provide a temporary easy way to modify package descriptions in Ubuntu; Testing Ubuntu on different architectures - Only the testcases with subscribers in the ISO trackers will be built. - Distributions will choose what architectures and "special testcases" they commit to. - We will have regular names for architectures and distro names. - The release manifest will include only the images that were tested properly. - All this process should be automatic - This change of policy will be announce to ubuntu-devel-announce - We will pursue and extend the efforts started under the ALIP umbrella to cover a larger set of packages - Automation will be setup to test cross-compilation of the new package set in both in "from scratch" and "source by source" modes - Multiarch control fields will be used to classify packages in host versus build packages - Kexec needs lot of fixes and testing on ARM, subarch by subarch - Will extend kexec to cover device tree - Some demo / test features will be implemented with kexec (kexec-reboot, boot menu etc.) Kernel configuration management - Will create a dataset of config requirements as to allow tailoring the config for a distro, or for a product - Only machine specific configs could possibly be stored upstream - Needs policies for Ubuntu and for Linaro to define common configs such as networking, security etc. ARM gdbserver support - gdbserver will be included in developer images - New package for an armel cross-gdb - Developer experience will include a new tool to grab debug symbols from ddebs - Will develop new test frameworks and safety checks Reinvigorating the artwork team Session identified need to start a web presence project including following activities: 1.research - .existing questionaire - .Describe problem - .Conduct survey - .gather data about lapsed members 2. Define our objective 3. 
Sketch solutions 4. Test and iterate 5. Launch - design a questionaire - Ivanka - Set up fortnightly meetings - Iain/ Ivanka/ Docmo - move the questionaire to a central place- docmo - email the loco team leaders the survey- have them direct - publicize the survey- on blogs, lists - .design blog, u-artlist local, contacts list, Flickr group. Project broken up into four sprints (2 weeks each) 1. Review data in central place 2. Design questionnaire 1. Get survey out there and receive input after which steps 3 & 4 can be determined. Design in open source A first project to kick us off while we prepare the web project is Launchpad Edit Icon - Icon brief by wednesday - picpick for contributions - two week deadline - last submission - winner announced 3 days later Defining Gestures in Ubuntu via Configuration - Agreed that physically literal gestures don't need configuration options Using QEMU for demonstrations - many of Linaro's other outputs (kernel power management, performance, graphics improvements) are not demoable via qemu - we should get the Beagle XM model working with Linaro images (as a compromise between memory size/capability and being a platform Linaro supports in h/w) - investigate/use the paravirtualised OpenGL work done by Nokia for maemo/meego - good audio support is tricky -- low priority Using QEMU for development - for development users we need not be modelling real hardware : we should define a "virtual platform" which is simpler and less prone to fragility in the face of kernel changes, and has more memory, fast virtual devices, and so on - we are effectively "filling the gap" between now and when native ARM dev/build hardware is more prevalent/cheap - need to test/validate qemu so it is trustable as a dev platform - need to consolidate qemu trees to avoid the "which qemu should we use?" problem Ubuntu One file sharing, sync status, and UDF selection - Ubuntu One button in Unity launcher and notifications that something has happened (a share has arrived and needs accepting, etc) will go into the Messaging Menu - Need to decide how to handle shares between these options: - Expose the current "reject this share completely" u1sdtool feature in the GUI, in the Nautilus folder right-click menu, clicking it completely rejects that share from all your machines - Change syncdaemon to allow rejection of shares per-machine (like UDFs will be handled) - Only provide "reject this share completely" on the web and provide rejection of shares per-machine in N+1 - User Defined Folderes (UDFs) will not sync automatically by default - User will need to tell Ubuntu One which folders she wants to sync via the Ubuntu One control panel and at initial setup Ubuntu One visibility and integration with Unity - Ubuntu One in Unity launcher, launching the Ubuntu One control panel (ubuntuone-preferences) - Ubuntu One notifications will go in the messaging menu - Indication of syncing will likely go in the Ubuntu One launcher item Ubuntu One integration with Zeitgeist - Ubuntu One will log events in Zeitgeist and aggregate this data to display notifications to the user about important activity (content synced, etc.) 
- An ontology for Ubuntu One events will be created and documented so that other similar services can use the ontology - GNOME Activity Journal to support viewing events other thancreated, edited, deleted type events Security AppArmor packaging - get bindings in order (have perl, want python and ruby built, tested and packaged) - cleanup to use modern debian packaging better upstart integration. This does not necessarily mean AppArmor will use a job file, but we should make it easier for applications with job files and AppArmor profiles to load their profiles via the job file. This will likely be done via a helper - clean up /etc layout - work towards inclusion in Debian so we can get expert feedback on existing profiles as well as more high-quality profiles Security AppArmor upstream planning Preliminary TODO list for natty: - get the compatibility patches into natty - High - should be next week - level 1 btrfs for natty (patch sent up for review very soon and then iterated) - High - level 2 btrfs for natty (should be small after the first is sent up) - Medium - level 1 network mediation - Medium (we have a compat patch in the meanwhile) - send up his thoughts to the ml - Low - (we have a compat patch in the meanwhile) - discuss and spec out sysfs hiearchy in the wiki - sysfs introspection - fix kernel so that it will drop policy compiled on a newer kernel for which it doesn't understand(currently network, capabilities, rlimits) - make sure upstream initscripts or ok - clean out super-crufty, old log parsing stuff - Current testing resources and methods: - abrek - automatically adds support for a large number of test suites - checkbox - some automated tests and implementation of manual tests - firmware test suite - linux standard base - Discussed methods of remote reset and power cycling features - Security testing may be a future requirement - Kernel tests to be implemented: NEON, Suspend/Resume, Kexec - A method of automated testing needs to be implemented - OEM has some ideas - Hardware needs to be consolidated to single test site Eclipse CDT Support CDT was accepted into Natty during the session. 
The blueprint will be massaged to encompass: - Ensure seamless C/C++ development experience using eclipse - Including external hardware - Including debug of that hardware - Including cross compilation targets - Including qemu Linaro Kernel Packaging - Investigate script for producing per-flavour kernel source packages to allow parallel builds - OMAP3 and OMAP4 should be able to coexist with some future upstream kernel - udebs are not needed for Linaro kernel Continuous Integration for Linaro - Hudson is the tool we're going to use for this - One instance will be shared by all linaro teams - Try to get Hudson deployed on a separate DMZ in the DC - If not acceptable, investigate and do it on ec2 as an interim solution - Need to figure out how it's going to be paid for - Can use Michael Hope's network for arm slaves as an interim solution Package Development Tools - Target audience: Linaro engineers not familiar with Debian packaging - Implemented as a single command with multiple subcommands - Use sane defaults to make things simpler but still allow flexibility for experienced users - Focus on package fetching, building and uploading this cycle - Will use chroots for cleanliness and to make it possible to target releases other than the one running on the host Kubuntu Natty Docs - stick with docbook for natty, reconsider in future if upstream KDE get the wiki process working well - On kubuntu-users mailing list ask for howtos - videos on new kubuntu.org site, basic setup topics. needs hosted agreed with IS and format sorted (ogg/HTML5 with youtube fallback?). not targetting translations for this cycle but consider in future. - wiki cleaning would be lovely. - Setup a request to the Kubuntu users ML and get them involved. - review welcome and about docs, consider combining them, add back link from live CD - ask sysadmin for help.kubuntu.org, export docs to HTML and put there - build an area of the wiki for documentation writing knowledge - update docs for 11.04/4.6 - backport docs for 10.10/4.5 - Get packaging translations scripted as much as possible - hold kubuntu docs days to list what needs to be fixed, updated, changed for cycle. at feature freeze - Agree hosting of videos for kubuntu.org with IS - Create tutorial videos for new kubuntu.org website Internationalization of Launchpad Answers - We discussed some of the internationalization issues with the current Launchpad code: - Launchpad was designed to have internationalization support, but it was never implemented end to end - Some parts of the code are in some way prepared for localization (e.g. python code with strings marked for translation), but they'd need more work - The way the Launchpad code is structured, internationalization could be implemented globally, but it would be interesting to do the work progressively: we are proposing starting with Answers and see how the process would work. - We came up with a set of high-level actions that would enable any community member interested in leading the effort of internationalizing Launchpad Answers to get started on this project and follow it through to completion: - Mark strings as translatable from Zope templates - Check the structure of sentences to be translator-friendly - Make sure POT template generation works with i18n_extract - Make sure localization works in Launchpad itself - Review directionality of Launchpad pages - Figure out a way to select a language for the user (get SSO to provide it?) 
- Set up translations in Launchpad as a project and get translations exposed Provide ARM cross-compiler packages for Ubuntu Natty We discussed state of current toolchain and what will be done/investigated for Natty. - need to check support for multilibs so we can get armv7 hardfp/softfp vfp/neon libs in one run - need to prepare patches to have PPA with backports for Lucid and Maverick - need to create a way to ship toolchain tarballs - relocatable toolchains? Supported releases will be: - current development (with current packages) - last stable release (with packages from release + current ones) - last LTS release (with current packages) - Apt maintainers (Debian/Ubuntu) will address the apt resolver issue that prevents packages that are installed in a pinned repository from pulling dependencies from the pinned repository. This will benefit Ubuntu Backports, Debian Backports, Debian Volatile, and Debian Experimental. - The Ubuntu Backports pockets will be set up so that packages from backports will not be automatically installed (take from backports only when the user requests it), but upgrades to newer packages in backports will be automatic. - Ubuntu Software Center, Kpackagekit (and/or Muon), and update-manager will be adapted to appropriately expose this new option for an alternative (newer, but less tested) version. Patch Tracking for Linaro - The Linaro landing teams would be interested in something like patchwork - The toolchain working group has some different needs for tracking patches carried, as well as changesets against their tree and upstream that are not in the other tree - They have a tool that they developed for tracking this that could serve as a prototype for what they are looking for - We should look at extending patchwork to cover the needs of both these teams - Another group has a completely different need that doesn't fit well within this model - tracking a small number of patches against a large, unknown number of packages and upstreams - Recommended approach is to use LP bugs against a tracking project, with upstream tasks to track upstream status - We need to get more users testing connman and we should do that from ubuntu-dev, blogs and tweets - The automatic connman hardware test script, ubuntu-connman-test, will be ported to use launch control backend. - Also we need to start looking at automated test suites for connman. * Python script for testing Connman: * http://launchpad.net/ubuntu-connman-test * automated test suite * possibly send results to launch control * http://dashboard.linaro.org/data/dashboard_app/testrun/objects/2 * Desktop Testing team integration? * getting more users to test connman * how to get testers * call for help on ubuntu-devel * blogging/twitter/etc. * automatic testing through code test suites ACTIONS: [kvalo] update ubuntu-connman-test scripts for launch control [kvalo] issue call for testing for connnman [mathieu-tl] upload newer connman - We discussed plans for ongoing maintenance of Linaro kernel and U-Boot trees: - The packaged kernel tree will continue to track the upstream Ubuntu Maverick tree - No new features will go into the packaged kernel tree or the Linaro stable tree - For the Natty cycle the Linaro next tree will include the upstream PM tree at the request of the PM team. - Linaro U-Boot next tree will track upstream mainline closely with appropriate delay when upstream is particularly unstable. - Linaro packaged U-Boot tree will track upstream releases. 
- A wiki page will be created to document this - Not trying to take over Android development from Google!!! (Or even do much Android development.) - Not trying to produce a Linaro-branded Android!!! - Will do: - Upstreaming: help make case for upstreaming of Android code, help with upstreaming. - Common tree: internal builds for Q/A of Linaro kernel and toolchain features. - Demo non-smartphone use cases for Android features Handle core boot files update on ARM - Coordinate with the Design Team to identify the best method to warn the user about the update - Design a tool that is able to update the core boot files, like x-loader and u-boot - Share the subarch detection library provided by other-arm-n-userland-subarch-detection - Create proper documentation describing the update procedure and what to do in case of problems
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00562.warc.gz
CC-MAIN-2021-43
16,640
208
http://foodtidings.blogspot.com/2009/02/are-you-using-yahoo-or-google-group-to.html
code
A lot of people do. It's a great way to get the word out, especially if there is already a large community of people using the group. When using these services, you should be aware of the way they handle links in order to make sure everyone in the group can get to the schedule easily. Make sure you use the built-in text editors when emailing/posting to the group. Sometimes these services are not set up to allow linking text in the post/email. You will need to follow the instructions at whatever site you are on that tell you how to insert a link in your post or email. Usually, this involves highlighting the pasted link from your foodtidings.com schedule and clicking the "Link" icon (also called a URL). If you have questions on how to insert links when posting to these types of services (or even in your email), consult their websites' help pages.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864022.18/warc/CC-MAIN-20180621040124-20180621060124-00585.warc.gz
CC-MAIN-2018-26
862
3
https://garygrady.com/2018/12/03/day-3/
code
Since I last posted: Day 3 of Advent of Code. I have decided to try my hand at Advent of Code. This is a programming challenge that has been running for a few years. It has a daily challenge in two parts. You can use any language you want because it just checks your final answer. The last couple of days the first part of the challenge was straightforward, while the second part took more thinking and code. Today, however, because of the way I solved the first part, I was able to solve the second part fairly quickly. In today's challenge you have rectangles that can overlap, and you need to sum up the area that is used by more than one rectangle. I like to be able to visualize the problem, so I found a Python library that will output a PNG file given an array. I solved the problem by using a matrix and keeping track of the overlaps by adding up the usage. Here is what the overlap looks like when output to a graphics file. The data involved over 1300 rectangles on a 1000×1000 grid. The second part of the challenge wanted to know which of the rectangles did not overlap. Since I had my matrix, I just needed to loop through the rectangle IDs and find the one that only had values of 1. I modified the PNG output to highlight the target ID. The bright rectangle in the bottom half near the right side was the one. Generating the graphics was not required; I just wanted some way to see if I was on track. I did some more work on the sculpture. I started to re-topologize my Elf model I did last year. I want to do this so I can get more fluid movement when animating. I was limited in movement last year because the model would bunch up and fold in weird ways. I had sculpted the Elf and then used a remesh tool to reduce the number of polygons. This resulted in all triangles, which do not bend properly and are difficult to paint, because there were spots that did not quite fit using the texture map I painted. Here is the old version. Here is the new version. I am trying to convert it to be quads (four-sided polygons). I have started to isolate the different body parts. The end result, I hope, will be a model that does not look like crap when being deformed by the armature. I completed an Inkscape course from Udemy.
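The post does not include its actual code, so the following is only a sketch of the matrix approach it describes (the claim format follows Advent of Code 2018 day 3; the grid size and library choice are assumptions): every rectangle increments the cells it covers, part 1 counts cells with a value above 1, and part 2 finds the claim whose cells are all exactly 1. The resulting grid is also exactly the kind of array the post mentions writing out to a PNG.

```python
import re
import numpy as np

CLAIM_RE = re.compile(r"#(\d+) @ (\d+),(\d+): (\d+)x(\d+)")  # e.g. "#123 @ 3,2: 5x4"

def solve(lines):
    grid = np.zeros((1000, 1000), dtype=int)   # assumed 1000x1000 fabric
    claims = []
    for line in lines:
        cid, left, top, w, h = map(int, CLAIM_RE.match(line).groups())
        claims.append((cid, left, top, w, h))
        grid[top:top + h, left:left + w] += 1  # add up the usage per cell
    overlap_area = int((grid > 1).sum())       # part 1: area covered by 2+ claims
    intact = next(cid for cid, left, top, w, h in claims
                  if (grid[top:top + h, left:left + w] == 1).all())  # part 2
    return overlap_area, intact

print(solve(["#1 @ 1,3: 4x4", "#2 @ 3,1: 4x4", "#3 @ 5,5: 2x2"]))  # (4, 3)
```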
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00691.warc.gz
CC-MAIN-2021-39
2,233
11
https://www.emqx.com/en/careers/devops-lead
code
Join As: Employee (Sweden) or Consultant
At EMQ, we are building the future of IoT data infrastructure. As the provider of the most popular open-source MQTT broker, we are constantly challenging ourselves to go further and do better in supporting the open-source community by sharing our work, as well as supporting enterprise customers by supplying massive-scale, rock-solid solutions. We are looking for a DevOps lead to join us as the first person in this role, who will help us build up an Ops team in Europe.
Who you are
- At least 2 years of proven working experience in DevOps or similar work
- A great team player and a quick learner
- Solid understanding of Linux systems and container technologies
- Proficient in bash and Python scripting
- Experienced in CI/CD tools such as Terraform, Ansible and the like
- Swedish resident, if joining as an employee at the Stockholm site
What you will be doing
- In general: help build cloud-native products
- Advocate ops-friendly development practices
- Lead the thinking in cloud-native solutions
- Develop Kubernetes operators
- Develop Terraform modules
- Develop CI pipelines
- Support users (community users and paid customers) with their deployments
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00242.warc.gz
CC-MAIN-2023-14
1,187
16
https://jira.mongodb.org/browse/SERVER-34897
code
Currently MongoS unconditionally retries find commands on network and replica set errors. While this behaviour helps increase availability, there is a risk that a storm of poorly constructed, expensive-to-execute find commands that get retried could bring down a node of a replica set. This ticket is to consider adding a MongoS boolean parameter that allows customers to opt out of the retry behaviour. - is duplicated by SERVER-43042: Add option to control (or disable) the number of query retries in mongos
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655092.36/warc/CC-MAIN-20230608172023-20230608202023-00089.warc.gz
CC-MAIN-2023-23
510
4
http://www.faculty.rsu.edu/users/f/felwell/www/Theorists/Mills/IndQuote/WhiteCollar/78.html
code
"Bureaucratization in the United States is by no means total; its spread is partial and segmental, and the individual is caught up in several structures at once. Yet, over-all, the loose-jointed integration of liberal society is being replaced, especially in its war phases, by the more managed integration of a corporate-like society" (White Collar: The American Middle Classes, 1951, pp. 77-78).
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889736.54/warc/CC-MAIN-20180120221621-20180121001621-00040.warc.gz
CC-MAIN-2018-05
397
7
https://www.lynda.com/IT-Infrastructure-tutorials/Compare-keystroke-loggers/476620/511447-4.html
code
Keystroke Loggers record every keystroke made on the system. Lisa Bock compares software and hardware Keyloggers. Software Keyloggers work silently in the background, however can be detected by antimalware protection. Hardware Keyloggers have their own processor and on-board memory, however, you must have physical access to the device. - [Voiceover] Keystroke Loggers record every keystroke made on the system. They come in two different flavors, software and hardware. A software Keylogger will run in the background and record every keystroke. And then stores the results on a hard drive on the system. Work can be later copied or removed by the attacker. Software Keyloggers can be thought of in two different classes. One which is observable in the Task Manager and can be seen. If a Keystroke Logger can be seen in the Task Manager it can be disabled by simply right clicking and selecting end process. Then there are Stealth Keyloggers. These are not easily visible. They're a bit harder to detect. However, they can be found and disabled if the user has administrative privileges. Spyware and malware tools will also most of the time pick up Keyloggers as they are a form of spyware. When setting up a Keylogger some Keyloggers send an email after gathering a predetermined amount of activity. Although this might be a handy feature if we send anything such as an email, or information over to another network device, or an FTP Server. This creates noise. This might alert your anti-malware protection. Other Keyloggers can also monitor online activity. But like most aggressive software, this might slow down a system. And in addition, some Keyloggers can grab screen captures. But again, this is an image, and this activity may fill the hard drive and cause stability problems. Hardware Keyloggers are a little bit different, in that they must be physically attached to the system. Once on, they record each keystroke and save it to their own onboard memory. Installing a Hardware Keylogger is easy and can be done with little or no experience. However, installation requires physical access to the device. A Hardware Keylogger can be installed inside a keyboard. In addition, it can also be plugged into a USB. A user might not notice it's there. If you can imagine a retail environment where the device is out in the open, however no one really notices that USB device placed in the side of the device. With Hardware Keyloggers, no software is required. In general, it's undetected by anti-malware protection. In addition, it has it's own onboard processor. Now the benefit of this is it's going to work outside of the operating system and it won't interfere with the processing that happens inside of the system. Also, it's gonna maintain the data if the power is lost. And the contents inside can be encrypted which makes it difficult for anyone to access the data if they were to find the device. So as you can see there are different Keystroke Loggers. Software runs quietly in the background, but can be picked up by anti-malware protection. But Hardware might be a better option, however you have to access to the system. These tutorials, along with the other courses featured in the Ethical Hacking series, will prepare students to pass the Certified Ethical Hacker exam and start a career in this in-demand field. Find out more about the exam at https://www.eccouncil.org/programs/certified-ethical-hacker-ceh/. 
- Acquiring passwords - Generating rainbow tables - Understanding where passwords are stored - Defending against privilege escalation - Understanding spyware - Protecting against keylogging - Detecting steganography - How hackers cover their tracks
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685993.12/warc/CC-MAIN-20170919183419-20170919203419-00003.warc.gz
CC-MAIN-2017-39
3,681
16
http://notoverthehill.com/forums/display_topic/id_505/YouTube-Video-Problem/
code
Going nuts again. I am trying to get a video from YouTube, and no matter what I try, I keep getting the error that the video must be from YouTube. Well, it IS! Here is the code: What the heck do I do? I see it embeds just fine here, but I cannot get it into my Video section. Also, please excuse the typos, but there is no edit button! I see that it is on the page and will play, so I still don't know why I got the error messages, but it seems alright!
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704658856/warc/CC-MAIN-20130516114418-00091-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
448
7
http://www.shopping.com/exo-terra-exo-terra-the-congo-rainforest-terrarium-kit/info?sb=1
code
Exo Terra The Congo Rainforest Terrarium Kit Nurture your herpetological interests with the Exo Terra reptile habitat kit. With a compact top and canopy, this Exo Terra terrarium safely holds your little pets. European herpetologists have designed this Congo rainforest terrarium, which includes a jungle plant, spider orchid, water dish, and more. Along with a thermometer and hygrometer to track conditions, this terrarium is perfect for small species of frogs, lizards, geckos, and snakes.
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095775.68/warc/CC-MAIN-20150627031815-00086-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
492
2
https://www.dk.freelancer.com/work/page-valign-middle-css/
code
...encouraged, the only needed expectations for this logo are as follows : 1. Include name Easymoneyline the image is given for reference. The idea of the Badge in the middle is very attractive. I like the aggressiveness and aesthetic of how much is going on in the example image. If my project can include the same type of feel that this logo gives I am looking for a designer to desig...the sketch with the Parts. All measurements are in mm, let me know if you need me to explain anything. Ignore the hole in the middle of the Part G, that is because i used a drill to create the depth, that hold in the middle is not needed. Let me know what you think if you don't understand anything. Regards I need a 2016 solidworks file redesigned for injection moulding. I need tabs put on it to screw and i need it split down the middle and the wall thickness cleaned up to a moulding standard. I have a step file for viewing and i need this done for chinese manufacturing This bot must provide an intermediate platform for users to work indirectly at freela...employee NOTE: all transactions that happen in our telegram bot is hidden to employee at freelancer.com and [log ind for at se URL] and they don't know that any telegram bot is in the middle NOTE: all actions can be done by APIs that freelancer.com and [log ind for at se URL] has provided I need a person who is expert in Advance HTML and CSS it will be a regular work. You have to login to my system and do the work .. actually we are using a WordPress and PHP plugin to stream our content to clients website. your job is to put header and footer in our system, so our content looks like same as clients website part. You have to work 1-2 ...interested to apply, please know that you'd need to introduce us to your developers since beginning. We’re NOT interested to work through managers / business developers / other middle men. Do not apply if you can't satisfy this condition.... ...accessibility. This is phase 1 of a bigger project. This phase is purely to do a spike to find a CSS / UX designer that has the right magic. The intention is, once we have found the correct candidate, to initiate a second project to complete all the CSS styling and UX improvements for the whole system. ( The second project will not be a contest. The ...components should contain the following details: 1. Input field components: this is simple HTML form field that can be captured with any HTML form capture process and ported on any page. The fields posting should be simple JSON based posting via orchestration layer to the back-office application. The field population will be configured in the back office and ...content and therefore there we are looking for a longer term freelance relationship. At the moment, we are trying to find the right fit. The topic for this contest can be either: Middle Eastern interior design or Unconventional Homes. Inspire us - you have the complete freedom to write about what you want. Content has to be compelling, attract readers to our We have to design about 15-20 pages like about us, our team, donations, current and upcoming projects etc. We are ...about us, our team, donations, current and upcoming projects etc. We are currently using Moodle LMS opensource. One should design pages in atto html editor and can add custom css if required. SEO and article writing will be plus point. I am the project lead. Our project is almost complete. 
We are looking for a UI developer who could build additional pages/sections, improve current UI (make pages responsive, fast) and also help in troubleshooting JS errors. We are flexible about fixing the number of hours of work every week or a monthly fixed salary. I am looking for a mentor who is presently working, or has the experience of having worked in recent past, as a human resources manager at middle/senior management level in a medium to large organisation. I have had a couple of years’ gap in my employment as a human resources professional due to family circumstances. I require the person to help me I have a website that is done and fulling working. Except I need to make a few of the graphic (Umbrella) spin clockwise direction. Looking for someone there is very experienced in working with doing website JS animation. Non complex animation. To start and complete by this weekend. Hi, We are using a theme for our SaaS project. However, we would like to enhance its CSS (& if required little bit of HTML). This way, we can move towards much cleaner SaaS product. Please bid only if you have done SaaS design work before. Please share your work / mockup design in the chat. First we will make changes on the small part of the website
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590559.95/warc/CC-MAIN-20180719051224-20180719071224-00063.warc.gz
CC-MAIN-2018-30
4,684
14
http://psx-scene.com/forums/f292/ps2psxe-public-preview-feedback-discussions-64878-print/index25.html
code
I've already added the framelimiter, so it works quite nice (a lot of tearing when playing ntsc games in pal mode or pal games in ntsc mode, but it cannot be avoided, I guess). The major drawback is the lack of framerate limiter, which makes the game run at ~250% max, making the characters run like hell :lol: Quite normal (well, but if you delete it, it should be remembered... If it's not, then it means there is some problem with the file attributes? I guess). Bad SIO timing and some things got screwed... Bios mc browser being one of them. It should work in most games though (just use empty memory card image)... Anyway, I know you're not expecting any feedback so this is just for the pleasure of talking, but I really enjoyed playing with your program. One question though (you don't have to answer if it bothers you) : is it normal that when loading only the bios, I can't see the contents of memory card 1, only memory card 2 (which contained some FF9 savegames which I assume are yours), and any attempt to remove them doesn't work upon reboot ?
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164019989/warc/CC-MAIN-20131204133339-00045-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,057
4
https://coderanch.com/u/141509/Arad-Chear
code
I have implemented a client-server program: the client asks the server to download a file, then the server sends that file to the client if it exists. I created a file in the server project/folder and typed data into it. I can see the data appear correctly in the text editor, but when the data is sent to the client, many unintelligible characters appear, followed by my original data, something like this: \f0\fs24 \cf0 Hi\ iam on the server} TCP connection closed ,, Bye , Bye .. I used BufferedReader and PrintWriter on both sides, client & server. I am using Mac Leopard 10.5 with the TextEdit program. Any suggestions?
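Not part of the original question, but a likely explanation worth checking: \f0\fs24 and \cf0 are RTF control words, and TextEdit saves documents as RTF by default, so the "extra" characters are probably RTF markup stored in the file itself rather than a socket problem. A quick way to confirm, sketched in Python here rather than the poster's Java (the file name is a placeholder):

```python
# If the file begins with "{\rtf", it is an RTF document, not plain text;
# re-saving it from TextEdit as plain text (Format > Make Plain Text) avoids the markup.
with open("server_file.txt", "rb") as f:      # placeholder path for the file on the server
    header = f.read(5)
print("RTF document" if header == b"{\\rtf" else "looks like plain text")
```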
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679516047.98/warc/CC-MAIN-20231211174901-20231211204901-00148.warc.gz
CC-MAIN-2023-50
589
14
https://ijocta.org/index.php/files/article/view/945
code
A fix-and-optimize heuristic for the capacitated multi-item stochastic lot-sizing problem Keywords:Capacitated Lot-Sizing, Random Demand, Inventory, Mixed Integer Programming, Fix and Optimize This study addresses the stochastic multi-item capacitated lot-sizing problem. Here, it is assumed that all items are produced on a single production resource and unmet demands are backlogged. The literature shows that the deterministic version of this problem is NP-Hard. We consider the case where period demands are time-varying random variables. The objective is to determine the minimum expected cost production plan so as to meet stochastic period demands over the planning horizon. We extend the mixed integer programming formulation introduced in the literature to capture the problem under consideration. Further, we propose a fix-and-optimize heuristic building on an item-period oriented decomposition scheme. We then conduct a numerical study to evaluate the performance of the proposed heuristic as compared to the heuristic introduced by Tempelmeier and Hilger . The results clearly show that the proposed fix-and-optimize heuristic arises as both cost-efficient and time-efficient solution approach as compared to the benchmark heuristic. Axsäter, S. (2015). Inventory control. Vol. 225. Springer. Ramya, R., Rajendran, C., Ziegler, H., Mohapatra, S., Ganesh, K. (2019). Capacitated Lot Sizing Problems in Process Industries. Springer. Wagner, H. M., Whitin, T. M. (1958). Dynamic version of the economic lot size model. Management science, 5(1), 89-96. Florian, M., Lenstra, J. K., Rinnooy Kan, A. H. G. (1980). Deterministic production planning: Algorithms and complexity. Management science, 26(7), 669-679. Bitran, G. R., Yanasse, H. H. (1982). Computational complexity of the capacitated lot size problem. Management Science, 28(10), 1174-1186. Karimi, B., Ghomi, S. F., Wilson, J. M. (2003). The capacitated lot sizing problem: a review of models and algorithms. Omega, 31(5), 365-378. Robinson, P., Narayanan, A., Sahin, F. (2009). Coordinated deterministic dynamic demand lot-sizing problem: A review of models and algorithms. Omega, 37(1), 3-15. Buschkühl, L., Sahling, F., Helber, S., Tempelmeier, H. (2010). Dynamic capacitated lot-sizing problems: a classification and review of solution approaches. Or Spectrum, 32(2), 231-261. Brahimi, N., Absi, N., Dauzère-Pérès, S., Nordli, A. (2017). Single-item dynamic lot-sizing problems: An updated survey. European Journal of Operational Research, 263(3), 838-863. Hu, Z., Hu, G. (2016). A two-stage stochastic programming model for lot-sizing and scheduling under uncertainty. International Journal of Production Economics, 180, 198-207. Bookbinder, J. H., Tan, J. Y. (1988). Strategies for the probabilistic lot-sizing problem with service-level constraints. Management Science, 34(9), 1096-1108. Tempelmeier, H. (2011). A column generation heuristic for dynamic capacitated lot sizing with random demand under a fill rate constraint. Omega, 39(6), 627-633. Tempelmeier, H., Herpers, S. (2010). ABC $beta$--a heuristic for dynamic capacitated lot sizing with random demand under a fill rate constraint. International Journal of Production Research, 48(17), 5181-5193. Helber, S., Sahling, F., Schimmelpfeng, K. (2013). Dynamic capacitated lot sizing with random demand and dynamic safety stocks. OR Spectrum, 35(1), 75-105. Helber, S., Sahling, F. (2010). A fix-and-optimize approach for the multi-level capacitated lot sizing problem. 
International Journal of Production Economics, 123(2), 247-256. Tempelmeier, H., Hilger, T. (2015). Linear programming models for a stochastic dynamic capacitated lot sizing problem. Computers & Operations Research, 59, 119-125. De Smet, N., Minner, S., Aghezzaf, E. H., Desmet, B. (2020). A linearisation approach to the stochastic dynamic capacitated lot sizing problem with sequence-dependent changeovers. International Journal of Production Research, 58(16), 4980-5005. Choudhary, D., Shankar, R., Tiwari, M. K., Purohit, A. K. (2016). VMI versus information sharing: an analysis under static uncertainty strategy with fill rate constraints. International Journal of Production Research, 54(13), 3978-3993. Hilger, T., Sahling, F., Tempelmeier, H. (2016). Capacitated dynamic production and remanufacturing planning under demand and return uncertainty. OR Spectrum, 38(4), 849-876. Koca, E., Yaman, H., Aktürk, M. S. (2015). Stochastic lot sizing problem with controllable processing times. Omega, 53, 1-10. Liu, K., Zhang, Z. H. (2018). Capacitated disassembly scheduling under stochastic yield and demand. European Journal of Operational Research, 269(1), 244-257. Pauls-Worm, K. G., Hendrix, E. M., Alcoba, A. G., Haijema, R. (2016). Order quantities for perishable inventory control with non-stationary demand and a fill rate constraint. International Journal of Production Economics, 181, 238-246. Tunc, H (2019). The capacitated stochastic lot sizing problem with convex processing time compression cost. Working paper. Tunc, H., Kilic, O. A., Tarim, S. A., Eksioglu, B. (2013). A simple approach for assessing the cost of system nervousness. International Journal of Production Economics, 141(2), 619-625. Li, L., Song, S., Wu, C., Wang, R. (2017). Fix-and-optimize and variable neighborhood search approaches for stochastic multi-item capacitated lot-sizing problems. Mathematical Problems in Engineering, 2017. Liang, J., Wang, Y., Zhang, Z. H., Sun, Y. (2019). Energy efficient production planning and scheduling problem with processing technology selection. Computers & Industrial Engineering, 132, 260-270. Meistering, M., Stadtler, H. (2017). Stabilized?Cycle Strategy for Capacitated Lot Sizing with Multiple Products: Fill?Rate Constraints in Rolling Schedules. Production and Operations Management, 26(12), 2247-2265. Tavaghof-Gigloo, D., Minner, S. (2020). Planning approaches for stochastic capacitated lot-sizing with service level constraints. International Journal of Production Research. Tunc, H., Kilic, O. A., Tarim, S. A., Eksioglu, B. (2014). A reformulation for the stochastic lot sizing problem with service-level constraints. Operations Research Letters, 42(2), 161-165. Porteus, E. L. (2002). Foundations of stochastic inventory theory. Stanford University Press. Silver, E. A., Pyke, D. F., Peterson, R. (1998). Inventory management and production planning and scheduling. Vol. 3. New York: Wiley. Tarim, S. A., Kingsman, B. G. (2006). Modelling and computing $(R_n, S_n)$ policies for inventory systems with non-stationary stochastic demand. European Journal of Operational Research, 174(1), 581-599. Tunc, H., Kilic, O. A., Tarim, S. A., Eksioglu, B. (2016). The stochastic lot sizing problem with piecewise linear concave ordering costs. Computers & Operations Research, 65, 104-110. Tunc, H., Kilic, O. A., Tarim, S. A., Rossi, R. (2018). An extended mixed-integer programming formulation and dynamic cut generation approach for the stochastic lot-sizing problem. INFORMS Journal on Computing, 30(3), 492-506. 
Frenzen, C. L., Sasao, T., Butler, J. T. (2010). On the number of segments needed in a piecewise linear approximation. Journal of Computational and Applied Mathematics, 234(2), 437-446. Gavrilovic, M. M. (1975). Optimal approximation of convex curves by functions which are piecewise linear. Journal of Mathematical Analysis and Applications, 52(2), 260-282. Rossi, R., Tarim, S. A., Prestwich, S., Hnich, B. (2014). Piecewise linear lower and upper bounds for the standard normal first order loss function. Applied Mathematics and Computation, 231, 489-502. Sahling, F., Buschkühl, L., Tempelmeier, H., Helber, S. (2009). Solving a multi-level capacitated lot sizing problem with multi-period setup carry-over via a fix-and-optimize heuristic. Computers & Operations Research, 36(9), 2546-2553. Pochet, Y., Wolsey, L. A. (2006). Production planning by mixed integer programming. Springer Science & Business Media. Copyright (c) 2020 Huseyin Tunc, M. Edib Gurkan. This work is licensed under a Creative Commons Attribution 4.0 International License.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100568.68/warc/CC-MAIN-20231205204654-20231205234654-00499.warc.gz
CC-MAIN-2023-50
9,707
56
https://www.notjustnumbers.co.uk/2012/02/99-of-excel-users-get-this-wrong-how-do.html
code
When someone comes to me with a problem in an existing spreadsheet, the problem is invariably in the layout of the data. The spreadsheet is built for one purpose and works OK for that until something slightly different is required and it proves almost impossible to get the report that's needed. If a few simple rules are followed when laying out your data, then producing additional reports from that data, and using it for different purposes, becomes simple, instead of the nightmare it is for many users. These rules apply to any lists of data, be it monthly financial information, transactional data (such as lists of sales, purchases, payments or receipts), customer or supplier lists. If you are going to store data in your spreadsheet to produce reports from, you need to follow these rules. At the heart of these rules is the approach - you are not laying out your final report here, you are laying out the data in a format that can be reported from! These are two very different things (see my OAP approach to reporting in Excel). The rules to follow: - Columns with headings and no gaps - Every column should have its own UNIQUE heading, in the first row; - There should be no empty columns; - These columns represent the fields of a database, e.g. Customer Code, Customer Name, Telephone Number, Email Address, etc. - One row per record and no gaps - Every record should have all of its data on one row. E.g. in the above example, one row per customer; - There should be no empty rows; - Don't group data by putting it in different columns (THIS IS THE ONE THAT ALMOST EVERYONE GETS WRONG) - Don't split out financial or numerical data into separate columns to categorise the data into months, expense categories, customers, agents, etc. - Do have one column for the financial or numerical data and create a column for month, expense category, customer or agent, to categorise each row; - You can use data validation drop-down lists to select the appropriate category for each row; - This one is counter-intuitive because in any report, you will almost certainly want a column (or row) for each of these categories - but if you do this in the data you will massively restrict what you can do with it. - Data following the rules above is perfectly prepared to be analysed using countless tools within Excel, for example: pivot tables, autofilter, SUMIF, COUNTIF, etc. - Most changes to the data don't require a change to the data layout. New categories, e.g. expense categories, customers, agents, etc. can just be added to the drop-down lists. Any new entries in these columns will be automatically picked up by pivot tables, autofilter, etc. with no work involved. If you had to create a new column each time, you would also need to edit every report that used the data. - You can choose to analyse the data by any category you want. It takes seconds to edit a pivot table that has a column for each month and change it to a column for each expense category. This is almost impossible if the data was laid out in those columns. - You can add additional category columns to the data if needed and these can even be calculated from the data. You might, for example, introduce departments - simply add a department column to the raw data, and your pivot tables can analyse the data by this category as well, or instead of existing categories. As you can see, if you lay out your data according to these rules, you can do pretty much anything you want with it.
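To make rule 3 concrete, here is a minimal sketch using Python and the pandas library (an illustration added for clarity, not part of the original post; the column names are invented): the data keeps a single Amount column plus label columns for Month and Category, and the "column per month" or "column per category" view is generated as a report rather than stored in the data.

import pandas as pd

# Data laid out one row per record, with categories stored as label columns
records = pd.DataFrame({
    "Month":    ["Jan", "Jan", "Feb", "Feb", "Feb"],
    "Category": ["Rent", "Travel", "Rent", "Travel", "Stationery"],
    "Amount":   [1200.0, 340.0, 1200.0, 180.0, 45.0],
})

# Report 1: one column per month, built from the data rather than stored in it
by_month = records.pivot_table(index="Category", columns="Month",
                               values="Amount", aggfunc="sum", fill_value=0)

# Report 2: switching the analysis to one total per category is a one-liner,
# because the layout of the underlying data never changes
by_category = records.groupby("Category")["Amount"].sum()

print(by_month)
print(by_category)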
The spreadsheet can grow with your business, and with any additional reporting requirements you want to add. It can take a little bit of time to get your head around point 3, but believe me, you'll be pleased you decided to be among the 1% that get this right. If you'd prefer me to redesign your spreadsheet for you, just visit www.needaspreadsheet.com and let me know what you need and I will send you a fixed price quote.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817249.26/warc/CC-MAIN-20240418222029-20240419012029-00561.warc.gz
CC-MAIN-2024-18
4,189
26
https://www.pakwheels.com/forums/t/toyota-vitz-and-toyota-aygo-spare-parts-compatibility/252418
code
Does anyone know if 2nd generation 1000cc Toyota Vitz spare parts are usable in the 1000cc Toyota Aygo? Apparently they have the same 1.0L engine, the 1KR-FE. I am specifically talking about under-the-hood parts and not body parts of course... need expert opinions please, not generic ones! Thank you guys!
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820466.2/warc/CC-MAIN-20171016214209-20171016234209-00190.warc.gz
CC-MAIN-2017-43
299
2
https://www.unidata.ucar.edu/mailing_lists/archives/visad/1998/msg00199.html
code
> I have a couple of questions, hopefully you guys can help out. > I used Display.Shape to create a plate-object > The size/scale of the platess correspond to my sample data set. > And the "display" consists of multiple plates. > (1) Is there a way to select an individual plates? > I hope to trigger visualization-relative to selected plates. > As far as I know - the mouse interaction allows users to > rotate/zoom/translate/move cursor around the whole display Probably the best way to do this is to create a custom cursor as a RealTuple data object (whose RealTupleType consists of those RealTypes that determine plates' display locations). This RealTuple would be displayed using DirectManipulationRendererJ3D (or DirectManipulationRendererJ2D) and would trigger a CellImpl that computed the closest plate. > (2) Is there a way to specify multiple colors in different > regions on ONE plate? (if possible without having to > create the plate as a composite object of different shapes) > If there isn't - can you direct me if I want to implement that > feature somewhat efficiently You can control the color of each vertex in your VisADGeometryArray by setting values in its colors array - this float array has three elements per vertex for red, green and blue components, all in the range 0.0f to 1.0f. So colors[0] is the red component of vertex 0, colors[1] is its green component, colors[2] is its blue component, then colors[3] is the red component of vertex 1, etc. Of course, if the plate Shape occurs more than once in the display all occurrences will have the same colors. In that case you might have two (or more) plate Shapes and change values in your data Fields (probably FlatFields?) to control which data point is mapped to the plate Shape with the special color. If this happens as a result of plate selection in question (1), the switch of Field values could be done in the CellImpl that finds the closest plate. > (3) Thank you very much for the software. It's wonderful !! > But I must say that it's hard for one to learn to use the > many powerful features of visad. I have been reading the > documentation and experimenting with DisplayTest.java. > I find that it's difficult to know what is happening in > I have request - is it possible that you include some > samples that is less abstract - and somewhat more contained. > Contained => no fortran, no images or other data format, no > Basic ideas like - creating Real Data Objects that has numerical > values, mapping them to some display, Creating DataReference, > illustrating how to use the reference to change data values, > and have the display updated accordingly. > How to implement direct manipulation. I'm happy you like VisAD and also well aware of the shortcomings of the documentation. I revised the Developers Guide several times based on Tom Whittaker's suggestions, including adding Simple.java and several other application examples. Part of the problem is that VisAD is different from any other system so its basic ideas are unfamiliar ("too many concepts", as Tom Yoksas put it). Part of the problem is my limitations as a writer, plus I am very familiar with the system so sometimes assume too much in my explanations. One cure for this would be for someone else to write tutorial(s). I will add links from the VisAD web page to any tutorials that anyone cares to write. > (4) BTW the javadoc link in the homepage is a broken link again > Can someone fix it? Oops, we'll see what we can do about this. p.s., still working late I see ;) Bill Hibbard, SSEC, 1225 W. 
Dayton St., Madison, WI 53706 whibbard@xxxxxxxxxxxxx 608-263-4427 fax: 608-263-6738 "kill cross-platform Java by growing the polluted Java market" - from an internal Microsoft planning document
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573759.32/warc/CC-MAIN-20190919224954-20190920010954-00144.warc.gz
CC-MAIN-2019-39
3,731
64
https://www.statology.org/multiple-linear-regression-in-sas/
code
This tutorial explains how to perform multiple linear regression in SAS. Step 1: Create the Data Suppose we want to fit a multiple linear regression model that uses number of hours spent studying and number of prep exams taken to predict the final exam score of students: Exam Score = β0 + β1(hours) + β2(prep exams) First, we’ll use the following code to create a dataset that contains this information for 20 students:
/*create dataset*/
data exam_data;
input hours prep_exams score;
datalines;
1 1 76
2 3 78
2 3 85
4 5 88
2 2 72
1 2 69
5 1 94
4 1 94
2 0 88
4 3 92
4 4 90
3 3 75
6 2 96
5 4 90
3 4 82
4 4 85
6 5 99
2 1 83
1 0 62
2 1 76
;
run;
Step 2: Perform Multiple Linear Regression Next, we’ll use proc reg to fit a multiple linear regression model to the data:
/*fit multiple linear regression model*/
proc reg data=exam_data;
model score = hours prep_exams;
run;
Here is how to interpret the most relevant numbers in each table: Analysis of Variance Table: The overall F-value of the regression model is 23.46 and the corresponding p-value is <.0001. Since this p-value is less than .05, we conclude that the regression model as a whole is statistically significant. Model Fit Table: The R-Square value tells us the percentage of variation in the exam scores that can be explained by the number of hours studied and the number of prep exams taken. In general, the larger the R-squared value of a regression model the better the predictor variables are able to predict the value of the response variable. In this case, 73.4% of the variation in exam scores can be explained by the number of hours studied and number of prep exams taken. The Root MSE value is also useful to know. This represents the average distance that the observed values fall from the regression line. In this regression model, the observed values fall an average of 5.3657 units from the regression line. Parameter Estimates Table: We can use the parameter estimate values in this table to write the fitted regression equation: Exam score = 67.674 + 5.556*(hours) – .602*(prep_exams) We can use this equation to find the estimated exam score for a student, based on the number of hours they studied and the number of prep exams they took. For example, a student that studies for 3 hours and takes 2 prep exams is expected to receive an exam score of 83.1: Estimated exam score = 67.674 + 5.556*(3) – .602*(2) = 83.1 The p-value for hours (<.0001) is less than .05, which means that it has a statistically significant association with exam score. However, the p-value for prep exams (.5193) is not less than .05, which means it does not have a statistically significant association with exam score. We may decide to remove prep exams from the model since it isn’t statistically significant and instead perform simple linear regression using hours studied as the only predictor variable. The following tutorials explain how to perform other common tasks in SAS:
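As a sanity check, the same coefficients can be reproduced outside SAS. The short Python sketch below (an addition for illustration, assuming numpy is available) fits the same model by ordinary least squares on the 20 rows above and should give approximately intercept 67.67, hours 5.56 and prep_exams -0.60, and a predicted score of about 83.1 for 3 hours of studying and 2 prep exams.

import numpy as np

# hours, prep_exams, score for the 20 students listed in the SAS data step
data = np.array([
    [1, 1, 76], [2, 3, 78], [2, 3, 85], [4, 5, 88], [2, 2, 72],
    [1, 2, 69], [5, 1, 94], [4, 1, 94], [2, 0, 88], [4, 3, 92],
    [4, 4, 90], [3, 3, 75], [6, 2, 96], [5, 4, 90], [3, 4, 82],
    [4, 4, 85], [6, 5, 99], [2, 1, 83], [1, 0, 62], [2, 1, 76],
], dtype=float)

X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # intercept, hours, prep_exams
y = data[:, 2]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least-squares estimates
print("intercept, hours, prep_exams:", beta)

# Predicted exam score for 3 hours of studying and 2 prep exams
print("prediction:", beta @ np.array([1.0, 3.0, 2.0]))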
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510781.66/warc/CC-MAIN-20231001041719-20231001071719-00841.warc.gz
CC-MAIN-2023-40
2,949
29
https://support.mozilla.org/vi/questions/firefox?tagged=firefox-400&show=done&order=views&filter=solved&owner=all&page=1
code
For the past several months, Firefox has been eating memory like crazy. Once I get more than 5 tabs open or have Firefox running for more than an hour or so, memory usage hits around one GB. If I keep letting it go beyond that, it'll hit 1.5-2 GB and it makes my whole desktop freeze up. I've taken to keeping Task Manager and every hour or so just kill Firefox and restart. That keeps it from crashing my whole desktop. It's definitely worse when I open Google products, especially Maps and Docs. GMail usually is OK. I have tried disabling literally every Add-On and Extension (including Flash and Adblock). I'm on Windows 7, I have 4 GB of RAM installed. I am on version 40.0 of Firefox, but this has been going on through a couple of updates now (I think the problem started around March or April). I also tried to back-rev Firefox and go to the last version I had before the problems started, but that didn't do anything either. Chrome works fine on this machine without any similar problems -- but I prefer Firefox!
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00356.warc.gz
CC-MAIN-2021-39
1,207
5
https://www.enotes.com/homework-help/f-x-xcos-x-sinx-0-x-3-323048
code
`f(x)=xcos(x-sinx)` , `0<=x<=3`The graph of `f(x)` intersects the `x`-axis when `x=a`, `a!=0`.What is the value of `a`? Also, if the graph of f is revolved 360° about the x-axis from `x=0`... `f(x)=xcos(x-sinx)` , `0<=x<=3` The graph of `f(x)` intersects the `x`-axis when `x=a`, `a!=0`.What is the value of `a`? Also, if the graph of f is revolved 360° about the x-axis from `x=0` to `x=a`. What is the volume of the solid formed? To solve this problem, we'll have to be a bit clever. Let's start by finding where the graph intersects zero: `0 = acos(a-sina)` Because we're considering values of `a` that are not zero, we can simplify by dividing both sides by `a`: `0 = cos(a-sina)` Now, we simply look at possibilities where a cosine is zero: `(2k+1)/2 pi`, where `k in ZZ`. `(2k+1)/2 pi = a - sina` Unfortunately, we cannot solve for `a` algebraically at this point. So, we'll need to solve for it numerically for each possible value of `(2k+1)/2 pi`. We'll need to use the Taylor series to get the following result: `(2k+1)/2 pi = a - (a - a^3/(3!) + a^5/(5!) - ...)` Eliminating the parenthesis, we end up with the following series result: `(2k+1)/2 pi = a^3/(3!) - a^5/(5!) + a^7/(7!) - ... = sum_(n=1)^oo (-1)^(n+1)/((2n+1)!) a^(2n+1)` To approximate a solution, we can just stick with the first 3 terms in the series, which will give us an answer within one hundredth. In other words, we'll solve the following equation numerically: `(2k+1)/2 pi ~~ a^3/(3!) - a^5/(5!) + a^7/(7!)` Using a spreadsheet, you can find the following few solutions: If `k = 0` , a = 2.31 If `k = 1` , a = 3.97 If `k = -1` , `a= -2.31` Based on these answers and our given domain (`0<=x<=3`), it's clear that the only possibility we must be concerned about is when `k = 0`. Therefore, our final solution is `a = 2.31`. Now, in order to find the area of a solid of rotation around the x-axis between `x = 0` and `x = a`, we'll need to solve numerically again. Recall the formula to solve for a solid of revolution by the disc method: `V = int_0^a pi f^2(x) dx` In our case, we would need to find the following integral: `V = pi int_0^2.31 x^2cos^2(x-sinx) dx` We cannot solve this integral algebraically, so let's proceed numerically. Let's solve for values of `f^2(x)` numerically by Newton's method and then use the trapezoid rule to find the integral. To use Newton's method, we use the following approximation for `n in NN`: `f^2(nDeltax) ~~ f_n ^2 = (f_(n-1) + Deltax*d/dx f((n-1)Deltax))^2` Basically, we are adding an amount equivalent to the rise over run to the function to find the next point. To use this approximation, we must arbitrarily determine a reasonable step size, let's say `Deltax = 0.01` because our `a` is approximated to hundredths. We must also find the derivative of our squared function, so that we may approximate our function values: `(df(x))/(dx) = d/dx (xcos(x-sinx))` `= cos(x-sinx) - x(1-cosx)sin(x-sinx)` Now that we've established how we're finding our function, let's use the following sum which will give us the trapezoid rule approximation of our function's integral. 
Here, N will be the number of steps, which will be 231 in our case (because `a = 2.31` and `Deltax = 0.01`) `V = pi int_0^a f^2(x)dx ~~ pi (Deltax)/2(f^2(0) + f^2(a) + sum_(n=1)^(N-1) 2f^2(nDeltax))` Now, f(0) and f(a) are zero, so we can simplify the above summation: `V~~pi Deltax sum_(n=1)^(N-1) f^2(nDeltax)` Putting this information in a spreadsheet using the starting point of f(0) = 0, we will find the following result for our volume: More terms, smaller steps, and better approximations for `a` will yield a more accurate result. If you use a calculator integrator, you will get 5.89, which is not that far off from our result! However, using that tool is much less instructive! I hope this helps!
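For readers who want to verify the two results without a spreadsheet, here is a short numerical sketch in Python (added for illustration; it assumes numpy and scipy are installed, and it uses a root finder plus numerical integration rather than the series and trapezoid approach described above).

import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# f(x) = x*cos(x - sin(x)) is zero (for x != 0) where cos(x - sin(x)) = 0,
# i.e. where x - sin(x) = pi/2 on the interval 0 <= x <= 3
a = brentq(lambda x: x - np.sin(x) - np.pi / 2, 0.1, 3.0)
print("a =", round(a, 2))            # approximately 2.31

# Volume of the solid of revolution about the x-axis: V = pi * integral of f(x)^2
integrand = lambda x: (x * np.cos(x - np.sin(x))) ** 2
V, _ = quad(integrand, 0.0, a)
print("V =", round(np.pi * V, 2))    # approximately 5.89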
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890514.66/warc/CC-MAIN-20180121100252-20180121120252-00012.warc.gz
CC-MAIN-2018-05
3,800
38
https://forum.alpaca.markets/t/login-button-does-not-check-cookie/8027
code
Alpaca is a good algo-trading platform that provides easy and cheap access to trading APIs. Although not frequently used, the web UI plays a key role in customer satisfaction. One critical issue I observe is that the “Login” button on the homepage does not check my cookie, and hence I have to log in every time I reopen the webpage. The two-step login verification makes this process even more annoying. One workaround is using the “Sign up” button instead of the “Login” button. The “Sign up” button does remember my cookie. I’m not sure if this is an intended behavior or a bug that needs to be fixed.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00482.warc.gz
CC-MAIN-2022-21
618
2
https://blogs.sap.com/2013/12/14/generating-reporting-infoobjects-based-on-business-content-part-1-introduction/
code
Generating Reporting InfoObjects based on Business Content – Part 1: Introduction In an Enterprise Data Warehousing context, InfoObjects often play an arbitrary double role: they are used for modeling the Data Warehouse Layer and multi-dimensional modeling the Reporting Layer. In my blog Introducing Data Warehouse InfoObjects – Part 1: Conceptual Overview I advised segregation of duties by introducing a dedicated, independent set of InfoObjects: Data Warehouse InfoObjects. But how about those Reporting InfoObjects? Should we simply activate all the Business Content InfoObjects we need? Or do we have to introduce our own set of InfoObjects, customized and fit to the Business Users’ requirements? Or a combination of both? In this blog I would to like to present an alternative approach. I created an ABAP program to generate Reporting InfoObjects based on Business Content. This blog series explains how to use the program. In Part 1 we will have a look at the rationale, the program, the application log, the generated InfoObjects and Template InfoProvider. The blog series consists of the following blogs in addition to this blog: - Generating Reporting InfoObjects based on Business Content – Part 2: Metadata Repository & Customizing; - Generating Reporting InfoObjects based on Business Content – Part 3: Optimizing Results. The document Implementing Reporting InfoObjects based on Business Content provides detailed technical instructions on how to create the ABAP program and all related ABAP Workbench objects. Since the earliest SAP NetWeaver BW releases SAP delivers so-called Business Content (a.k.a. BI Content). It’s a multitude of BW data modeling objects, amongst others InfoObjects. Strong advantages can be materialized in pure SAP implementations. The Business Content is developed in synch with the SAP source system and perfectly complements standard business processes with analytical scenarios. However, there are in my opinion some drawbacks to take into account. Activation of Business Content can lead to a massive number of new InfoObjects. All dependencies are considered and can go many levels deep. This can lead to an extensive data model which might also include unused SAP modules, business processes and even Industry solutions. Such a data model will become increasingly difficult to understand and won’t make any sense from a Business User’s perspective. The installation of Business Content in a productive system can even be dangerous. There are many cases where previously activated Business Content is enhanced. These enhancements can be overwritten by an inappropriate activation. No matter how experienced you are, one day it can happen to all of us. I would like to propose an alternative approach: generating Reporting InfoObjects in the customer namespace based on Business Content InfoObjects using a program. All mandatory dependencies will be respected (i.e. compounding InfoObjects and reference InfoObjects). For Characteristics however, generation of attributes will be restricted to the highest level. This will prevent an uncontrolled expansion of the data model as we can observe with the Business Content activation. Starting the Program You can start the program by using t/code YRIOBJ. Figure 1: Selection Screen There are 3 ways to run the program: - For one or more single Business Content InfoObjects; - For one single Business Content InfoCube; - For one single Business Content DataStore Object. Make the appropriate selection on the selection screen. 
You can use the F4 search help functionality. The program will check the input afterwards and gives an error message in case of any incorrect input. Press the Execute push button to start processing. Note that the program will check on authorization object YBWREPIOBJ. Please make sure that an appropriate authorization role is assigned to your user-id. This will be explained in Part 2 of the blog series. Analyzing the Application Log As the last processing step the program will display an application log. Figure 2: Application Log The program collects all messages issued during processing and adds them to the application log. Here you can obtain an overview of all InfoObjects that have been generated as well as the Template InfoProvider. If applicable any error messages can be found here. The various processing blocks can be identified by the “Start of processing” and “End of processing” messages. Note that you can always review previous application logs retrospectively via t/code SLG1. Make sure to fill in appropriate selection criteria such as Object YBW , Sub Object YBWREPIOBJ, date/time and user-id to narrow down the search results. Figure 3: Analyze Application Log The program checks the Metadata Repository if an InfoObject already exists for the respective Business Content InfoObject. If yes, then it will proceed by skipping this InfoObject. Otherwise it will generate a new InfoObject that will be appended to the central Metadata Repository table and adds it to the appropriate InfoObject Catalog. Figure 4: InfoObject Catalogs for Generated InfoObjects Please note that there is a restricted set of “special” InfoObjects which is excluded from the generation process. It concerns InfoObjects in table RSDIOBJFIX which have a special purpose in the system. One can think of Time Characteristics but also Characteristics like 0LANGU, 0UNIT and 0CURRENCY. Generated Template InfoProvider After the processing of the InfoObjects the program generates a so-called Template InfoProvider depending on the processing mode. In my example the processing mode was InfoCube and the program generated a Template InfoCube. Figure 5: Generated Template InfoProviders Such a Template InfoProvider acts as a container for all InfoObjects and can be used as a starting point for creating your own DataMart. In this blog we discussed the rationale of generating Reporting InfoObjects based on Business Content, the program, the application log, the generated Reporting InfoObjects and the Template InfoProvider. In Generating Reporting InfoObjects based on Business Content – Part 2: Metadata Repository & Customizing we will have a look at the Metadata Repository and Customizing. In Generating Reporting InfoObjects based on Business Content – Part 3: Optimizing Results we will discuss several ways to optimize the results.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00703.warc.gz
CC-MAIN-2023-50
6,390
36
https://sourceforge.net/directory/developmentstatus:inactive/language:objectivec/language:c/license:lgpl/
code
A C/C++ library to share digital audio between computers on a network, reverse engineered from and compatible with Apple's iTunes 4.0 implementation. Graphite is an LGPL implementation of Apple's Carbon API. It aims to be as close as possible to a source-code level portable implementation. It will be a wrapper around existing APIs to allow fast development, the main one being GNUstep. Objective-C bindings for GTK+ library
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609613.73/warc/CC-MAIN-20170528101305-20170528121305-00197.warc.gz
CC-MAIN-2017-22
1,333
11
http://stackoverflow.com/questions/4121470/how-to-kill-a-java-thread
code
I google the solution for killing a java thread. And there exists two solutions: - set a flag - using Thread.interrupt But both of them are not suitable for me. In my thread, I call a third-party api which takes a long time to complete. and I want to let the user to cancel this thread if it takes too much time. So how can I kill this thread ? Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257822598.11/warc/CC-MAIN-20160723071022-00144-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
363
5
https://adobebridge.uservoice.com/forums/905377-report-bugs/suggestions/40196563-can-t-name-new-folder-right-after-creating-it
code
Can't name new folder right after creating it. In "Thumbnail only" mode I can't name new folder right after creating it in FOLDER PANEL. New folder appears with name "New Folder", then I have to right click (Win) on it and select "rename" option. CONTENT PANEL: In addition: I can't rename folders in "Content" panel by "right mouse click" at all, - in dropdown menu there no "rename" option, guys!!! Dmytro Mykhailov shared this idea When 'Show Thumbnail Only' is activated you can rename a folder selected in Content panel pressing F2. It highlights the name of folder but what's bad also deactivates 'Thumbnail Only' mode.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652959.43/warc/CC-MAIN-20230606150510-20230606180510-00695.warc.gz
CC-MAIN-2023-23
625
5
http://developer.marklogic.com/adventure/developer/rest/start
code
Get productive quickly by using your existing REST knowledge to interface with MarkLogic. This MarkLogic University On Demand Course gives a high level overview of some of MarkLogic's key features. Duration: 24 mins Jump-start your technical team’s broad understanding of MarkLogic; how to setup, ingest and get developing! Duration: 1-8 hours An on demand series from MarkLogic University covering the basics of data modeling. Some basic concepts and terms. Before you can use it you have to install it. A quick introduction to the REST API. Another tutorial for using the REST API. The official user's guide for the REST API. How to update parts of a document with the REST API. This document describes how to load, query, and work with semantic data in MarkLogic Server. The official guide to MarkLogic Security. The official guide to MarkLogic BiTemporal feature. This project is a Community-driven MarkLogic REST API wrapper for Python developers. See the CONTRIBUTING.md file to see how you can help advance this project. MLDotNet provides a convenient C# wrapper for common uses of the MarkLogic REST API. It abstracts authentication and common search settings to make it intuitive for new MarkLogic developers to get going quickly. MLPHP is a set of PHP classes that provide connectivity to MarkLogic via PHP. Go MarkLogic Go is a Go library for interacting with MarkLogic's REST APIs. MarkMapper is a community-driven Ruby Object Mapper for MarkLogic.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123632.58/warc/CC-MAIN-20170423031203-00115-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,462
20
https://www.postgresql.org/message-id/[email protected]
code
|From:||Heikki Linnakangas <hlinnaka(at)iki(dot)fi>| |To:||Claudio Freire <klaussfreire(at)gmail(dot)com>| |Cc:||PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>| |Subject:||Re: Vacuum: allow usage of more than 1GB of work mem| |Views:||Raw Message | Whole Thread | Download mbox | Resend email| On 06/04/18 01:59, Claudio Freire wrote: > The iteration interface, however, seems quite specific for the use > case of vacuumlazy, so it's not really a good abstraction. Can you elaborate? It does return the items one block at a time. Is that what you mean by being specific for vacuumlazy? I guess that's a bit special, but if you imagine some other users for this abstraction, it's probably not that unusual. For example, if we started using it in bitmap heap scans, a bitmap heap scan would also want to get the TIDs one block number at a time. > It also copies stuff a lot, so it's quite heavyweight. I'd suggest > trying to go for a lighter weight interface with less overhead that > is more general at the same time. Note that there was similar copying, to construct an array of OffsetNumbers, happening in lazy_vacuum_page() before this patch. So the net amount of copying is the same. I'm envisioning that this data structure will sooner or later be optimized further, so that when you have a lot of TIDs pointing to the same block, we would pack them more tightly, storing the block number just once, with an array of offset numbers. This interface that returns an array of offset numbers matches that future well, as the iterator could just return a pointer to the array of offset numbers, with no copying. (If we end up doing something even more dense, like a bitmap, then it doesn't help, but that's ok too.) > About the B-tree, however, I don't think a B-tree is a good idea. > Trees' main benefit is that they can be inserted to efficiently. When > all your data is loaded sequentially, in-order, in-memory and > immutable; the tree is pointless, more costly to build, and harder to > maintain - in terms of code complexity. > In this use case, the only benefit of B-trees would be that they're > optimized for disk access. Those are not the reasons for which I'd prefer a B-tree. A B-tree has good cache locality, and when you don't need to worry about random insertions, page splits, deletions etc., it's also very simple to implement. This patch is not much longer than the segmented multi-array. > On the other side, using B-trees incurs memory overhead due to the > need for internal nodes, can fragment memory because internal nodes > aren't the same size as leaf nodes, is easier to get wrong and > introduce bugs... I don't see a gain. The memory overhead incurred by the internal nodes is quite minimal, and can be adjusted by changing the node sizes. After some experimentation, I settled on 2048 items per leaf node, and 64 items per internal node. With those values, the overhead caused by the internal nodes is minimal, below 0.5%. That seems fine, but we could increase the node sizes to bring it further down, if we'd prefer that tradeoff. I don't understand what memory fragmentation problems you're worried about. The tree grows one node at a time, as new TIDs are added, until it's all released at the end. I don't see how the size of internal vs leaf nodes matters. > If you propose its use, at least benchmark it to show some gain. Sure. I used the attached script to test this. It's inspired by the test script you posted. 
It creates a pgbench database with scale factor 100, deletes 80% of the rows, and runs vacuum. To stress lazy_tid_reaped() more heavily, the test script creates a number of extra indexes. Half of them are on the primary key, just to get more repetitions without having to re-initialize in between, and the rest are like this: create index random_1 on pgbench_accounts((hashint4(aid))) to stress lazy_vacuum_tid_reaped() with a random access pattern, rather than the sequential one that you get with the primary key index. I ran the test script on my laptop, with unpatched master, with your latest multi-array patch, and with the attached version of the b-tree patch. The results are quite noisy, unfortunately, so I wouldn't draw very strong conclusions from it, but it seems that the performance of all three versions is roughly the same. I looked in particular at the CPU time spent in the index vacuums, as reported by VACUUM VERBOSE. > Furthermore, among the 200-ish messages this thread has accumulated, > better ideas have been proposed, better because they do use less > memory and are faster (like using bitmaps when possible), but if we > can't push a simple refactoring first, there's no chance a bigger > rewrite will fare better. Remember, in this use case, using less > memory far outweighs any other consideration. Less memory directly > translates to less iterations over the indexes, because more can be > crammed into m_w_m, which is a huge time saving. Far more than any > About 2 years ago, I chose to try to push this simple algorithm first, > then try to improve on it with better data structures. Nobody > complained at the time (I think, IIRC), and I don't think it fair to > go and revisit that now. It just delays getting a solution for this > issue for the pursuit of "the perfect implementation" that might never > arrive. Or even if it does, there's nothing stopping us from pushing > another patch in the future with that better implementation if we > wish. Lets get something simple and proven first. True all that. My point is that the multi-segmented array isn't all that simple and proven, compared to an also straightforward B-tree. It's pretty similar to a B-tree, actually, except that it has exactly two levels, and the node (= segment) sizes grow exponentially. I'd rather go with a true B-tree, than something homegrown that resembles a B-tree, but not quite. > I'm attaching again one version of them (I've been modifying it to > suit my purposes at each review round), you'll probably want to tweak > it to build test cases good for your purpose here. Attached is a new version of my b-tree version. Compared to yesterday's version, I fixed a bunch of bugs that turned up in testing. Looking at the changes to the regression test in this, I don't quite understand what it's all about. What are the "wait_barriers" for? If I understand correctly, they're added so that the VACUUMs can remove the tuples that are deleted in the test. But why are they needed now? Was that an orthogonal change we should've done anyway? Rather than add those wait_barriers, should we stop running the 'vacuum' test in parallel with the other tests? Or maybe it's a good thing to run it in parallel, to test some other things? What are the new tests supposed to cover? The test comment says "large mwm vacuum runs", and it sets maintenance_work_mem to 1 MB, which isn't
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652149.61/warc/CC-MAIN-20230605153700-20230605183700-00134.warc.gz
CC-MAIN-2023-23
7,101
108
https://www.sgbox.eu/en/knowledge-base/playbooks-generic-api/
code
Playbooks – Generic API Generic API request This node can be configured with url, headers and parameters to get the output from any HTTP API. URL – if the complete url is already known, insert it into the Value field and select Fixed as type. Otherwise, the url can be composed of several concatenated url parts. Each value has a type: - Fixed: for values already known - Fixed JSON: same as fixed, with the value in JSON format - Extract from previous output: if the value has to be extracted from the output of another node. E.g.: a token obtained as a response from a previous API call - Extract from previous response headers: as the previous one, but the value is in the headers of the response, not in the output. - Start Timestamp or End Timestamp: for requests with parameters that are time ranges and have to be updated at every request. Given the timestamp format and, if needed, a timezone and a number of seconds to add/subtract, the node will automatically compute the values, based on the last request. Built to be used in periodic requests to retrieve logs from an API. When the value has to be extracted, a list of the other nodes is displayed. Choose one and you will get its output, from which to select the value. If no value is selected, the whole output of the node will be used, if possible. The composition of headers and parameters is the same as for the url parts. All values can be preceded by a prefix or followed by a suffix. Generic API request Set url and parameters and make the request. Generic API request with a parameter extracted from a previous request. In this example, the previous request returned 5 SGBox events. We selected type = “Extract from previous output”, selected the previous node and got its output on the right. Then from the JSON we clicked on the value for the “event_id” parameter of our request, i.e. the first patternid from the previous request, and the value field was filled with 0.patternid. Generic API request with Start and End timestamp to extract logs from an API. In this example, the API request has two parameters, start_ts and end_ts, that are re-calculated every time the node is executed or tested. In the dump section, you can see the values. In the second execution, the value of start_ts is the end_ts of the previous execution, while the value of end_ts is the current time. Download the PB module – samples package for examples.
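To make the node's behaviour concrete, here is a rough sketch in Python of what it automates: a URL composed from parts, a token extracted from a previous call's JSON output, and a start/end timestamp window that advances on every run. This is an illustration only; the endpoint paths, field names and header are invented and are not SGBox's actual API.

import time
import requests

BASE_URL = "https://api.example.com"             # hypothetical base url part
state = {"last_end_ts": int(time.time()) - 300}  # remembered between runs

# "Extract from previous output": take a value from an earlier node's JSON output
auth = requests.post(BASE_URL + "/login", json={"user": "u", "password": "p"})
token = auth.json()["token"]                     # hypothetical field name

# "Start Timestamp" / "End Timestamp": start is the previous run's end,
# end is the current time, so periodic executions retrieve logs without gaps
start_ts = state["last_end_ts"]
end_ts = int(time.time())

resp = requests.get(
    BASE_URL + "/logs",                          # hypothetical endpoint
    headers={"Authorization": "Bearer " + token},
    params={"start_ts": start_ts, "end_ts": end_ts},
)
state["last_end_ts"] = end_ts                    # becomes start_ts of the next execution
print(resp.json())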
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948217723.97/warc/CC-MAIN-20240305024700-20240305054700-00428.warc.gz
CC-MAIN-2024-10
2,451
20
https://social.technet.microsoft.com/Forums/en-US/23aefba5-37b5-4ae4-8d15-b648edb9128d/a-failure-occurred-while-validating-the-map-toolkit-configuration-database?forum=map
code
I'm trying to share a Windows 7 workstation which has the MAP toolkit v8 installed with another user. I've successfully installed and used the MAP toolkit as "DOMAIN\userA". However, when user "DOMAIN\userB" logs onto the workstation and launches the tool, elevated, they get this error: I'm not expecting "userB" to have access to the localDB databases created by "userA", since they will make their own databases to use. Following the advice in the error message I asked "userB" to run the installer and perform the repair as suggested. This appeared to work and allowed both userA and userB to launch the application. I'm expecting the same problem to re-occur when userC wants to use the application, I guess each different user will be required to perform a repair install before they can launch the application. Your suspicion would be correct. By default, the MAP Toolkit will install SQL Server 2012 Express LocalDB during setup. As LocalDB is specific to the user account that installed it, You will only be able to run it from the initial account that did the installation. The repair process is actually placing LocalDB on the second user accuont and creating a new instance for MAP to reference. You may use an existing (or new) installation of SQL Server 2008, SQL Server 2008 R2, or SQL Server 2012 if you create an instance named "MAPS" before running the MAP Toolkit installer. The MAP Toolkit requires the collation order of the database engine to be set to "SQL_Latin1_General_CP1_CI_AS". Doing this will allow multiple user accounts to use MAP and share databases. Thanks Michael, I used your advice and solved the problem as follows: 1. exported all the databases we wanted to keep. (this creates SQL backup .bak files) 2. uninstalled the MAP tool from the PC. 3. uninstalled SQL 2012 localDB. 4. installed SQL 2012 with SP1 express with tools, then CU2, being careful to name the instance "MAPS" as per your tip. 5. installed the MAP tool, which curiously didn't ask which database we wanted to use and used the MAPS instance from the previous step. 6. launched MAP and imported the databases we wanted to use. (this performed a SQL restore of the .bak files). 7. used SQL studio full edition (feature of SP1 express) and tweaked each database by setting auto-close to false, which makes the MAP tool start and operate faster. I have the same issue with MAP 9.1. Map Toolkit version 9.1 UserA- Domain user and a Local administrator. User B- Domain user and a Local administrator. User A configured the Map with SQL Server 2012 Standard DB with the collation order of the database engine set to "SQL_Latin1_General_CP1_CI_AS". Each DB setting is auto-close - False. Still User B/C/D get the same error. appreciate pointers Replying to an old thread, but might be useful for someone.. Had this issue when changing domain on the Windows client where Map Toolkit was installed. The domain was changed so I could scan the Active Directory from a different domain in the end customer's environment. I solved it by: 1. Logged on the Windows client with an administrator account from the new domain 2. Pressed Shift+right click and then selected "Run as different user" on MapToolkit.exe 3. Used an administrator account from the previous domain in the "Run as different user" logon window (the domain where Map Toolkit was initially installed) 4. Started a Map Toolkit scan towards the new domain with an administrator account from the new domain No repair needed
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171900.13/warc/CC-MAIN-20170219104611-00158-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
3,477
31
https://github.com/frankradocaj
code
An alternative RavenDB database viewer Forked from thecodejunkie/github.expandinizr Chrome extension that improves the GitHub experience Forked from NancyFx/Nancy Lightweight, low-ceremony, framework for building HTTP based services on .Net and Mono Forked from PureKrome/WorldDomination.RavenDb Totally titty sparkle awesome kewl extensions and helpers for RavenDb.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710936.10/warc/CC-MAIN-20221203175958-20221203205958-00089.warc.gz
CC-MAIN-2022-49
683
14
https://community.atlassian.com/t5/Bitbucket-questions/Committing-one-file-vs-several/qaq-p/104374
code
In SourceTree when I try to select and commit a single file - I have to manually move the file to staging. When I have more than one file selected I can commit straight away. In the second case I assume the move to staging/index happens behind the scenes. Is there method to this madness or is it just a bug? The toolbar Commit button tries to be smart about what you intended - because browsing the diffs is done with a single-file selection (also the default), it can't really assume that you meant to jump directly to committing that one file. Since multi-selection is a much more explicit action, it assumes you did in fact mean just the current selection. If you want SourceTree to always jump directly to committing even single-file selections, use the Commit Selected option instead. This is on the menu, and you can also customise the toolbar to use the alternative Commit Selected button too instead of the assumption-making Commit button. Also, right-clicking the file and selecting Commit always works in the context of that one file. Lastly, if you don't like using staging, you can turn it off completely if you prefer. In Preferences > Git, uncheck the 'Use the staging area' checkbox, and all commits will be direct from your working copy in future. Perfect. I appreciate the answer. I suspect I'm not the only one confused by this little detail. However I've always found GIT a bit confusing. In this case I just decided for my needs that staging is overkill - a few people running a small business, we almost always have access to our server. From this point of view staging is just a complication.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829997.74/warc/CC-MAIN-20181218225003-20181219011003-00196.warc.gz
CC-MAIN-2018-51
2,132
11
https://scratchhouse.co/social/rsat-windows-2012-r2-download.php
code
Installation requires a few minutes to finish. Clear the check boxes for any tools that you want to turn off. Note that if you turn off Server Manager, the computer must be restarted, and tools that were accessible from the Tools menu of Server Manager must be opened from the Administrative Tools folder. When you are finished turning off tools that you do not want to use, click OK. Under Programs, click Uninstall a program. Click View installed updates. When you are asked if you are sure you want to uninstall the update, click Yes. IMPORTANT: Starting with the Windows 10 October Update, RSAT is included as a set of "Features on Demand" in Windows 10 itself. See "Install Instructions" below for details, and "Additional Information" for recommendations and troubleshooting. RSAT lets IT admins manage Windows Server roles and features from a Windows 10 PC. Start the Add Features Wizard in Windows Server or Windows Server R2, or the Add Roles and Features Wizard in Windows Server and later versions. Then, on the Select Features page, select Remote Server Administration Tools, and Group Policy Management. To install the Remote Server Administration Tools (RSAT) on Windows Server please follow these instructions. On the Windows Server open Server Manager. If Server Manager does not start by default, press the "Windows + R" keys, type "ServerManager" in the "Open" field and press "Enter" or click the OK button. For RSAT that runs on Windows Vista and Windows 7, after running the downloaded RSAT package, you must enable the tools for the roles and features that you want to manage, as shown in the figure. If you need to install management tools in Windows Server, Windows Server R2, or Windows Server Technical Preview for specific roles or features running on remote servers, there's no need to install additional software. Complete the wizard to install your management tools. See the following figure. Only PowerShell tools work on Windows Server. Group Policy has some new features in Windows Server Technical Preview which are not available on older operating systems. Failover Cluster Manager runs only on Windows Server. MSClus and Cluster. Download Remote Server Administration Tools for Windows from the Official Microsoft Download Center.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00061.warc.gz
CC-MAIN-2021-49
2,921
11
http://www.thesaurus.com/browse/independency
code
He has fought for that independency, for which Mr. Jefferson only wrote. In him the transition from Independency to Individualism is completed. None are in a state of independency on their fellow-creatures. Such was Independency when it flourished all over East Anglia. I don't see how you could lay out part of your independency to more advantage. Catholicity is the ...
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870497.66/warc/CC-MAIN-20180527225404-20180528005404-00427.warc.gz
CC-MAIN-2018-22
371
6
http://angela-kim.blogspot.com/2011/11/i-came-home-over-thanksgiving-break-and.html
code
I came home over Thanksgiving break, and one of my hw assignments was to make something with Sculpey. It was interesting because I've never touched this stuff before. It's very squishy, and when I went to bake it I thought it was supposed to harden from the heat, but I left it in too long because it wasn't hardening. Areas like his hat and shoe were burnt. Cool experience overall.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593010.88/warc/CC-MAIN-20180722041752-20180722061752-00291.warc.gz
CC-MAIN-2018-30
393
3
https://summit.riot-os.org/2021/
code
Let's meet remotely! RIOT is the friendly operating system for the Internet of Things. If you cannot run Linux on your device due to constrained hardware, use RIOT! RIOT explicitly implements the idea of an open Internet. It supports all relevant standards and is distributed under an open source license. You can find more details at www.riot-os.org. The RIOT community consists of companies, academia, and hobbyists, distributed all around the world. RIOT aims to implement all relevant open standards supporting an Internet of Things that is connected, secure, durable & privacy-friendly. About the RIOT Summit Over the last four years, RIOT has emerged as one of the agile, state-of-the-art operating systems for the IoT. The previous Summits were a big success. It's time to meet again! The RIOT Summit aims to bring together RIOTers, beginners and experts, as well as people interested in the IoT in general and decision makers who plan to deploy RIOT in the future. The event combines plenary talks, hands-on tutorials, and break-out sessions. The Summit will not only inform about the latest developments but will also help to gather feedback from the community to shape the RIOT future. What can you expect? This is the fourth summit of the RIOT community. We will put a hell of a lot of energy into making this a special event. - Great talks - Lively demos and tutorials - Social networking - No registration fees, but reservation is needed Why should you attend? It's like vacation. Once a year you should come together with the members of the community to reflect on the past and push the future. - Contact with senior and junior RIOT developers - Latest community news - Participation is free Why should you sponsor? RIOT is a community product and we want to involve as many people as possible. We don't want to introduce fees and need your help! - Explicit support of the RIOT community - High visibility - Different sponsor levels - Connect with RIOT developers and users Speakers (more to come) Thursday, September 9 Friday, September 10 This year, due to Covid-19, the RIOT Summit will happen online, via common in-browser tools for video-conferencing and chat. For details, stay tuned. Frequently Asked Questions Why an online event? The health of attendees is of utmost importance to us. Due to the ongoing pandemic, we decided on an online event. If the situation improves for specific countries, we will support you in organizing local hubs. Contact the organizers via [email protected] When does registration start? Registration will open around August. Who organizes the RIOT Summit? The RIOT Summit is organized by the Internet Technologies group at Freie Universität Berlin and the Internet Technologies group at HAW Hamburg. You can contact the organizers via [email protected]. How much does registration at the RIOT Summit cost? Participating in the RIOT Summit 2021 will be free of charge. However, explicit registration is mandatory for planning purposes.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00365.warc.gz
CC-MAIN-2021-31
3,019
37
https://www.jwz.org/blog/2010/08/11/
code
Dear Lazyweb (Boondoggleweb), I will certainly be attending SXSW music in 2011, but I'm probably going to skip interactive this time. Unless, that is, someone's got a panel they want me to be on, because that would A) give me something to do and B) get me a free ticket. Yeah, that's a little crass, but it's just not worth it to me otherwise. I was pretty bored at interactive last year, and the film festival was organized so poorly that I didn't actually see any films.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00704.warc.gz
CC-MAIN-2023-14
472
3
https://www.rfc-editor.org/errata/eid3631
code
RFC 6874, "Representing IPv6 Zone Identifiers in Address Literals and Uniform Resource Identifiers", February 2013Source of RFC: 6man (int) Errata ID: 3631 Publication Format(s) : TEXT Reported By: Michael Sweet Date Reported: 2013-05-22 Rejected by: Brian Haberman Date Rejected: 2013-05-23 Section 4 says: An HTTP client, proxy, or other intermediary MUST remove any ZoneID attached to an outgoing URI, as it has only local significance at the sending host. It should say: An HTTP client, proxy, or other intermediary MUST retain any ZoneID attached to an outgoing URI, as it will be the only way for an HTTP server to return a URI containing a link-local address that can subsequently be used by the HTTP client. The original advice ignores a very real issue: HTTP Servers that generate URIs from the client's Host: need to include the Client's zoneid in order for the link local address to be usable/routable. The zoneid is a strictly internal value that is not shared between devices.
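For readers unfamiliar with the syntax under discussion, RFC 6874 attaches the zone identifier to a link-local address literal with a percent-encoded "%" ("%25"). The short Python sketch below (added for illustration; the address, zone name and port are made up) simply shows what such a URI looks like; whether a client strips or retains the ZoneID is exactly what this erratum disputes.

# Constructing a ZoneID-bearing URI per RFC 6874: the "%" that separates the
# link-local address from the zone identifier is itself percent-encoded as "%25".
zone_id = "eth0"        # only meaningful on the host that sends the request
address = "fe80::a"     # link-local IPv6 address
uri = f"http://[{address}%25{zone_id}]:8080/index.html"
print(uri)              # http://[fe80::a%25eth0]:8080/index.html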
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473735.7/warc/CC-MAIN-20240222061937-20240222091937-00502.warc.gz
CC-MAIN-2024-10
989
13
https://help.simplesat.io/en/articles/4218331-making-sense-of-your-feedback-data-with-reports
code
Reports are brand new but ready to use! The goal of reports is to give you even more insight into your satisfaction data. Reports are presented as a pivot table and let you see aggregated stats based on segments. With reports, you can answer questions such as: Which companies have 75% or lower CSAT score? Which of our team members has the most 5-score ratings? What does my NPS score look like quarter over quarter? What's the positive/neutral/negative breakdown of my feedback based on tags? Here's how to use it Date series - The start and end dates to report on Filters - Filter down your data. For example, you could choose to only see stats from a specific survey or group. Primary and secondary segments - In pivot table terms these are "rows" or "dimensions". Segments are the grouping of data that you'd like to show in each row. For example, customers, companies, team members etc. Secondary segments are grouped by their primary parent. Survey metric - This toggle is required because Simplesat allows you to report on both NPS and CSAT surveys. This presents a challenge because CSAT and NPS calculations and rating scales are different. If you choose CSAT, you'll see a new column that calculates the CSAT score and 5 new columns for 1-5 ratings. If you choose NPS, you'll see an NPS column and 11 new columns for 0-10 ratings. If you choose All you won't see the rating scale columns, because a "4" score for CSAT means something very different than a "4" score for NPS. Download to CSV Download the report you generated to CSV to get everything in Excel or Google Sheets. Let us know your feedback! Reports are still in beta. We would love for you to start using it and let us know what you like, or what's missing.
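If you want to re-derive the headline numbers yourself from the CSV download, a short Python/pandas sketch is below. It is only an illustration: the column names are invented, and the formulas are the usual definitions (CSAT as the share of 4-5 ratings on the 1-5 scale, NPS as the percentage of 9-10 promoters minus the percentage of 0-6 detractors), which may differ in detail from Simplesat's own calculation.

import pandas as pd

df = pd.read_csv("simplesat_export.csv")   # hypothetical export file

csat_ratings = df.loc[df["survey_type"] == "CSAT", "rating"]
csat_score = 100 * (csat_ratings >= 4).mean()        # share of 4-5 ratings

nps_ratings = df.loc[df["survey_type"] == "NPS", "rating"]
nps_score = 100 * ((nps_ratings >= 9).mean() - (nps_ratings <= 6).mean())

print(f"CSAT: {csat_score:.1f}%  NPS: {nps_score:.0f}")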
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101779.95/warc/CC-MAIN-20231210092457-20231210122457-00149.warc.gz
CC-MAIN-2023-50
1,731
16
http://www.linuxquestions.org/questions/linux-newbie-8/change-language-171956/
code
As far as I can remember, this would be done with the window manager, i.e. if you installed the default KDE, then you have to check in the KDE "control centre", in the accessibility section under language & region. It may be easier to do a re-install, but follow the instructions very carefully and select Finnish language support (presuming it's available - I think it may be), and then you get to complete the install in Finnish. Converting your system from English to Finnish may require additional packages, i.e. Finnish help guides and stuff like that, whereas I'd presume that if you re-install and select Finnish as the default language, then the system would select the packages automatically. hope this helps a little
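As a quick way to check whether the Finnish locale data is actually on the system (the extra-packages point above), here is a small illustrative Python snippet; it is not from the original thread and assumes a Linux system where the fi_FI.UTF-8 locale may or may not have been generated:

import locale

try:
    # Succeeds only if Finnish language support (the fi_FI.UTF-8 locale) is installed.
    locale.setlocale(locale.LC_ALL, "fi_FI.UTF-8")
    print("Finnish locale available, Monday is:", locale.nl_langinfo(locale.DAY_2))  # "maanantai"
except locale.Error:
    print("Finnish locale not installed - add Finnish language support packages first")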
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00465-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
735
4
https://www.programmableweb.com/sdk/strava-akka-scala-sdk-blair-garrett
code
Seven APIs have been added to the ProgrammableWeb directory in categories including Mapping, Text, Mapping, and Games. Highlights include an API that offers data about gaming giveaways and an API for adding polls to applications or websites. Here's a rundown of the latest additions. Just how much are people talking about Justin Bieber? How ga-ga are they for Lady Gaga? Viralheat has a platform for tracking these sorts of social trends and now the data in over 4,000 searches is available for free. Today the company announced its new Social Trends endpoint for the Viralheat API. Life Fitness stands as the first fitness equipment manufacturer to open its products to the developer community via APIs. Life Fitness created LFopen to encourage app development around Life Fitness equipment. Currently, APIs exist to enable web based applications or mobile applications that pull data from machines.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711394.73/warc/CC-MAIN-20221209080025-20221209110025-00306.warc.gz
CC-MAIN-2022-49
991
7
https://j-dm.org/archives/2607
code
This post looks at Parameter P, a specific construct of the PCS-DM model, as elaborated in “What is adaptive about adaptive decision making? A parallel constraint satisfaction account,” by Andreas Glöckner, Benjamin E. Hilbig, and Marc Jekel (Cognition 133 (2014) 641–666). (See post Revisiting Swiss Army Knife or Adaptive Tool Box.) Glöckner et al. state that the transformations in Eqs. (3)–(5) (see figure at top of post) are commonplace, and sensitivity analyses have shown that the selection of specific values has little influence on predictions as long as inhibitory connections are relatively strong compared to excitatory connections. PCS-DM predictions, however, strongly depend on Eq. (2). In this equation for calculating connection weights, validities are corrected for chance level (.50) so that irrelevant cues receive no weight. Parameter P allows PCS-DM to capture individual differences in the subjective sensitivity to differences in cue validities. Low sensitivity is captured by low P. By contrast, high sensitivity to cue validities is captured by large values of P, with very high values as special cases in which less valid cues cannot overrule more valid ones. P captures sensitivity at the level of individuals; that is, it determines how an individual transforms explicitly provided or learned information about a cue’s predictive power (i.e., cue validity) into a weight. Glöckner et al. suggest that P describes a core property of a psychological transformation process that precedes decision making. To find the value of P that maximizes the overlap between PCS choice predictions and the rational Naïve Bayesian solution, the authors used Monte Carlo simulations. They found that in randomly generated tasks and sets of validities in a four-cue environment this is the case for P = 1.9. My understanding of this is questionable. I would assume that this particular P value is not valid across many environments, but I really do not know. I plugged numbers into Equation 2 to do a mini sensitivity analysis. With a cue validity of .6, subtracting the chance level of .5 and raising the result to the 1.9 power, I got a weight of .012, compared to a weight of .175 for a cue validity of .9. Thus the weight for the more valid cue is about 15 times that of the less valid cue. For P = 1.2 the weight for the more valid cue is only about 5 times that of the less valid cue. Since in this situation 1.9 is optimal, the calculations show that an individual with a parameter P of 1.9 would rely much more heavily on the more valid cue. Additionally, they implemented a second, fitted version of PCS-DM, PCS fitted, which estimates one individual P parameter per participant, representing participants’ sensitivity to differences in cue validities. They found that participants were insufficiently sensitive to differences in cue validities, although cue validities were explicitly provided. The authors state that by taking into account individual differences in sensitivity to cues in the parameter P, PCS-DM can describe and predict choice behavior better than other models even if cue weighting is suboptimal from a rational point of view. For environments with stable cue validities, the findings hint that adaptivity is achieved through adapting weights as suggested by PCS-DM. Glöckner et al. note that environments are often unstable, and it remains unclear whether PCS-DM can also capture individual adaptation following a change in cue validities.
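The mini sensitivity analysis above is just arithmetic on the weight transformation w = (validity - chance)^P, so it is easy to reproduce. A short Python sketch of the calculation described above (illustrative only, not the authors' code):

def connection_weight(validity: float, p: float, chance: float = 0.5) -> float:
    # Equation 2's core idea: correct the validity for chance level, then raise to the power P.
    return (validity - chance) ** p

for p in (1.9, 1.2):
    w_low = connection_weight(0.6, p)
    w_high = connection_weight(0.9, p)
    print(f"P={p}: w(.6)={w_low:.4f}  w(.9)={w_high:.4f}  ratio={w_high / w_low:.1f}")

# P=1.9: w(.6)=0.0126  w(.9)=0.1754  ratio=13.9  (the "about 15 times" above)
# P=1.2: w(.6)=0.0631  w(.9)=0.3330  ratio=5.3   (the "about 5 times" above)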
Research indicates that people stick to previously learned strategies, and this stickiness is particularly strong if there is a shift from compensatory to non-compensatory environments, indicating that individuals have a hard time learning to ignore less valid evidence. According to PCS-DM, such stickiness would be reflected in suboptimal adaptation (and thus insufficient differences) in the P parameter. One can expect to find lower P parameters after switching from compensatory to non-compensatory environments, indicating insufficiently adapted sensitivity to differences in cue validities. Participants seemed to be insufficiently sensitive to differences in cue validities, as the P parameter for PCS fitted was significantly below 1.9. They found insufficient adjustment of cue weights once the environmental structure changed. Specifically, individuals may differ in how they translate information about the world into their mental representation of the decision task. Parameter P is difficult for me to characterize. Glöckner et al. state that: “According to the PCS model for decision making, participants translate cue validities into subjective weights in a mental representation corresponding to their individual sensitivity captured in the parameter P.” Recent posts that looked at prediction error minimization propose that the brain tries to slow down the onslaught of sensory information. Parameter P might capture some individual differences in this. It seems to me, and I am getting way over my skis here, that it might be partly a “slowness factor”. People with a lower Parameter P might be “slower” to respond to new information from the environment, or maybe to trust information from the environment less. The particular experiments probably do not translate well to many real-world situations. It is probably not typical or adaptive to quickly trust that one cue has a validity of .6 and one of .9. This slowness might lead to large differences in how we respond to the world and thus who we are. Slowness might change over time within individuals, so that it is also a developmental factor. Clearly the stability of the environment would help to determine the adaptivity of a particular Parameter P. Parameter P might respond to the blend of analytical and intuitive activities. The idea that a significant amount of our personality or cognitive style or perceived intelligence might be based on our differences in a single parameter may seem crazy, but aggregation modeling shows us the complexity that can be generated by a simple rule. If individuals do have stable Parameter P values, at least over certain conditions, it might be possible to improve our individual decision making based on them. Parameter P might vary with expertise. (I should note that Glöckner et al. introduce lambda in the appendix as a function representing the steepness of the choice function. That was beyond me, at least as presented.)
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00428.warc.gz
CC-MAIN-2022-49
6,428
8
https://forums.socialpointgames.com/topic/3408/game-crashing-on-pvp
code
Every time I try to battle on PvP it instantly closes the app, and when I get back on it records the battle as a loss. What do I do? Hi and welcome to the forum. I moved your topic to the ML section as it looks like an ML problem. And back to the topic: what is your game version? Are the 3 monsters in your attack team used in wars? Did you try clearing the game cache? If your game is connected, did you try uninstalling and reinstalling? Again, your game MUST be connected to FB so you don't lose your progress. Try what Haka suggested. I had the same problem & they have suggestions on how to fix it. As for the losses, you're screwed. SP won't acknowledge it or return any trophies. Same thing happened to me & I lost over 300 trophies. If it continues, my advice would be to stop playing PvP until next season.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00135.warc.gz
CC-MAIN-2022-21
929
7
https://atff.nl/00337-how-to-make-my-own-bitcoin-wallet.html
code
Please check the Transaction ID in a block explorer. Please note the wallet cannot check if your password is correct or not. I could easily get it from the arguments, like this: var amountToSend new Money(GetAmountToSend(args C But I want to do better and let the user specify a special amount that sends all the funds from the wallet. Security and privacy since the network is publicly visible, it creates a number of concerns for governments and corporations that want to protect their critical data. In the original version I was hiding every NBitcoin reference from the users of my Safe class, so they don't get overwhelmed by the details, in this article my audience is more advanced. Fee:.00025btc The transaction fee is 2 of your transaction amount. HdPathType null) if (hdPathType null) Dictionary BitcoinAddress, operationsPerReceiveAddresses 7, ceive Dictionary BitcoinAddress, operationsPerChangeAddresses 7, ange var operationsPerAllAddresses new Dictionary BitcoinAddress, foreach (var elem in operationsPerReceiveAddresses) y, lue foreach (var elem in operationsPerChangeAddresses) y, lue return operationsPerAllAddresses; var addresses tValueOrDefault /var addresses var operationsPerAddresses. I would strongly recommend you to use this class, unless you know what you are doing. How to make own bitcoin wallet? Bitcoins can be lost only when someone physically steals your paper wallet. Show-history Output example Type your password: Wallets/Wallet. (y/n) y Selecting coins. There are a few wallets that take up less space on your hard drive. var startIndex minUnusedKeys; while (unusedKeyCount minUnusedKeys) addresses new for (int i startIndex; i startIndex minUnusedKeys; i) tAddress(i, tValueOrDefault tAddress(i foreach (var elem in y, lue if (unt 0) unusedKeyCount; WriteLine unt hdPathType keys are processed. Step 11: If you wish to get only one paper wallet, change the. How to, create an Online, bitcoin, wallet - wikiHow There are many different hardware wallets that range in price range and quality. I will use an http API to query what fee should be used and handle properly if there is something wrong with the API. Dat, now go ahead: create a new.NET Core CLI Application and implement the command how to make my own bitcoin wallet line argument parsing with your favorite method, or just check out my code. Virtual currencies are not issued by a central bank or other authorities, meaning that they dont have a single control center. Even if you didn't understand too much, you will face the same design decisions I faced and probably tackle them much better. Zero) var secret y d(secret, lue Next figure out where to send our change. Most virtual currencies have their own independent wallets, but some of them use adopted programs. Also these commands need to access the a Safe: var walletFilePath GetWalletFilePath(args Safe safe if (nnectionType tp) / From now on we'll only work here else if (nnectionType ConnectionType. The last step before building our transactions is selecting coins to spend. The truth is simple dynamic fee calculation for confirmed, not exotic transactions works 99 of the time. 14 Receive keys are processed. If you dont have a software wallet, do read my previous article on making. Method 2 Setting up a Web Wallet 1, understand web wallets. Bitcoin, cold Storage, bitcoin.com Setting up a paper wallet Equals(amountString, "all amountToSend availableAmount; amountToSend - fee; else amountToSend ParseBtcString(amountString Then do some checks: /. 
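The scrambled C# fragments above are hard to follow, but the rule being described for the special "all" amount is simple: spend the entire available balance minus the fee. A plain Python sketch of just that rule (illustrative only; names and numbers are made up, and this is not the NBitcoin-based code from the original tutorial):

from decimal import Decimal

def resolve_amount(amount_arg: str, available: Decimal, fee: Decimal) -> Decimal:
    # "all" means: send everything the wallet holds, minus the transaction fee.
    if amount_arg.strip().lower() == "all":
        return available - fee
    return Decimal(amount_arg)

print(resolve_amount("all", Decimal("0.015"), Decimal("0.00025")))  # 0.01475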
As a result, you are not involved in the exchange rate fluctuations while extending payment options for your clients. We also want to access these settings easily, so I created a Config class: public static class Config / Initialized with default attributes public static string DefaultWalletFileName Wallet. This function will retrieve the unspent balance and the unspent confirmed balance. If wallet-file is not specified the app will use the default one, specified in the config file. I would really recommend you to use paper wallets if you have bitcoins in significant amount and have no intention in near future to spend ey are safe and cheap than software or hardware wallets in aspects like. The first thing we'll always will do is to query a bunch of data with the help of this QBitNinja jutsu: Dictionary BitcoinAddress, operationsPerReceiveAddresses 7, ceive The above syntax might need some mental effort to understand. These devices protect your data and use similar micro-processor chips that credit cards use. FullNode) throw new NotImplementedException else Exit Invalid connection type. HiddenWallet, the successor of this wallet. (I'll explain later why I omit the implementation of the full node for now.) The rest of the commands need to communicate with The Blockchain and will have now two ways to do it, those have to be implemented separately. Debit or credit cards are options for users in other countries. The website is in charge of your keys and can take your bitcoins out of your control. No intermediaries are needed for the blockchain functioning. WriteLine Transaction Id: tHash var qBitClient new QBitNinjaClient(twork / QBit's success response is buggy so let's check manually, too BroadcastResponse broadcastResponse; var success false; var tried 0; var maxTry 7; do tried; WriteLine Try broadcasting transaction. DIY Tutorial: How To Create A Bitcoin Paper Wallet. Community Q A Search Add New Question Question Do I get interest after creating my own bitcoin account? Coinbase a cross-platform library; supports Android and iOS platforms; works with Java, Ruby, Python, etc.; allows all major operations with cryptocurrencies through one API Stages of the cryptocurrency wallet app development: Installation download an API from the appropriate website. If the bitcoin price changes). (1) Transaction is successfully propagated on the network. There may be four types of bitcoin wallets. This value will be to the miner who will process your transfer. However there are some edge cases, for example when you have many small inputs, I handled them here, but I will not include it in this tutorial, because it would complicate the fee estimation a lot. A wallet is just for storing your Bitcoin, and there is no way to get interest.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608295.52/warc/CC-MAIN-20200123041345-20200123070345-00511.warc.gz
CC-MAIN-2020-05
6,283
9
https://www.opensourceagenda.com/projects/gaia
code
.. raw:: html <img src="https://gist.githubusercontent.com/michelvocks/ef3894f63c3bb004bca1a2fd5f7eb644/raw/40c5799d74a6f28af1874e726083a50a3ebd877d/gaia-logo-text.png" width="650px"> |build-status| |go-report| |go-doc| |apache2| |chat| |codecov| Gaia is an open source automation platform which makes it easy and fun to build powerful pipelines in any programming language. Based on HashiCorp's go-plugin_ and gRPC_, gaia is efficient, fast, lightweight, and developer friendly. pipelines <What is a pipeline?_>_ with the help of SDKs <Why do I need an SDK?_>_ and simply check-in your code into a git repository. Gaia automatically clones your code repository, compiles your code to a binary, and executes it on-demand. All results are streamed back and formatted as a user-friendly graphical output. gaia-pipeline.io_ to learn more. Automation Engineer, DevOps Engineer, SRE, Cloud Engineer, Platform Engineer - they all have one in common: The majority of tech people are not motivated to take up this work and they are hard to recruit. One of the main reasons for this is the abstraction and poor execution of many automation tools. They come with their own configuration ( YAML_ syntax) specification or limit the user to one specific programming language. Testing is nearly impossible because most automation tools lack the ability to mock services and subsystems. Even tiny things, for example parsing a JSON file, are sometimes really painful because external, outdated libraries were used and not included in the standard framework. We believe it's time to remove all those abstractions and come back to our roots. Are you tired of writing endless lines of YAML-code? Are you sick of spending days forced to write in a language that does not suit you and is not fun at all? Do you enjoy programming in a language you like? Then Gaia is for you. Gaia is based on HashiCorp's go-plugin. It's a plugin system that uses gRPC_ to communicate over HTTP/2. Initially, HashiCorp developed this tool for Packer but now it's heavily used by Plugins, also called pipelines <What is a pipeline?_>, are applications which can be written in any programming language, as long as gRPC is supported. All functions, also called jobs <What is a job?>_, are exposed to Gaia and can form up a dependency graph that describes the order of execution. Pipelines can be compiled locally or simply over the integrated build system. Gaia clones the git repository and automatically builds the included pipeline. If a change ( git push_) happened, Gaia will automatically rebuild the pipeline for you*. After a pipeline has been started, all log output is returned back to Gaia and displayed in a detailed overview with their final result status. boltDB for storage. This makes the installation process super easy. No external database is currently required. * This requires polling or webhook to be activated. |sh-login| |sh-overview| |sh-create-pipeline| |sh-pipeline-detailed| |sh-pipeline-logs| |sh-vault| |sh-settings| The installation of gaia is simple and often takes a few minutes. Literally every tool that was designed for automation, continuous integration (CI), and continuous deployment (CD) like Spinnaker, Jenkins, Gitlab CI/CD, TravisCI, CircleCI, Codeship, Bamboo and many more, introduced their own configuration format. Some of them don't even support configuration/automation as code. This works well for simple tasks like running a go install or mvn clean install but in the real world there is more to do. 
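The job/dependency-graph idea above is easy to picture with plain Python. To be clear, the sketch below is not the Gaia SDK (whose real interface lives in the language SDKs mentioned later); it only illustrates how a set of job functions plus a dependency graph determines the execution order:

# Not the Gaia SDK - just an illustration of jobs and a dependency graph.
from graphlib import TopologicalSorter  # Python 3.9+

def clone_repo():
    print("cloning repository")

def run_tests():
    print("running tests")

def build_binary():
    print("building binary")

def deploy():
    print("deploying")

# job -> the jobs it depends on
graph = {
    run_tests: {clone_repo},
    build_binary: {clone_repo},
    deploy: {run_tests, build_binary},
}

for job in TopologicalSorter(graph).static_order():
    job()  # clone_repo runs first, deploy runs last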
Gaia is the first platform that does not limit the user and provides full support for almost all common programming languages without losing the features offered by todays CI/CD tools. What is a pipeline? A pipeline is a real application with at least one function (we call it a Job). Every programming language can be used as long as gRPC is supported. We offer SDKs to support the development. What is a **job**? ~~~~~~~~~~~~~~~~~~ A job is a function, usually globally exposed to Gaia. Dependent on the dependency graph, Gaia will execute this function in a specific order. Why do I need an **SDK**? The SDK implements the Gaia plugin gRPC interface and offers helper functions like serving the gRPC-Server. This helps you to focus on the real problem instead of doing the boring stuff. Which programming languages are supported? We currently fully support Go, Java, Python, C++, Ruby and Node.JS. When do you support programming language **XYZ**? We are working hard to support as much programming languages as possible but our resources are limited and we are also mostly no experts in all programming languages. If you are willing to contribute, feel free to open an issue and start working. Gaia is currently available as beta version. Feel free to open a new GitHub issue to request a new feature. Gaia can only evolve and become a great product with the help of contributors. If you like to contribute, please have a look at our issues section_. We do our best to mark issues for new contributors with the label good first issue. If you think you found a good first issue, please consider this list as a short guide: Go installed_ on your machine and also nodeJS_ for the frontend. Clone this repository and run the make command inside the cloned folder. This will start the backend. To start the frontend you have to open a new terminal window and go into the frontend folder. There you run npm install and then npm run serve. This should automatically open a new browser window. If you have any questions feel free to contact us on HashiCorp's go-plugin: https://github.com/hashicorp/go-plugin Do not use it for mission critical jobs yet!: https://tenor.com/view/enter-at-your-own-risk-gif-8912210 releases page: https://github.com/gaia-pipeline/gaia/releases Unix nice level: https://en.wikipedia.org/wiki/Nice_(Unix) issues section: https://github.com/gaia-pipeline/gaia/issues Go installed: https://golang.org/doc/install go-example repo: https://github.com/gaia-pipeline/go-example Kubernetes deployment with vault integration: https://docs.gaia-pipeline.io/tutorials/kube-vault-deploy/ git push: https://git-scm.com/docs/git-push plugin system: https://en.wikipedia.org/wiki/Plug-in_(computing) available docker image tags: https://hub.docker.com/r/gaiapipeline/gaia/tags/ how to develop a pipeline: https://docs.gaia-pipeline.io/develop-pipelines/ .. |build-status| image:: https://circleci.com/gh/gaia-pipeline/gaia/tree/master.svg?style=shield&circle-token=c0e15edfb08f8076076cbbb55558af6cfecb89b8 :alt: Build Status :scale: 100% :target: https://circleci.com/gh/gaia-pipeline/gaia/tree/master .. |go-report| image:: https://goreportcard.com/badge/github.com/gaia-pipeline/gaia :alt: Go Report Card :target: https://goreportcard.com/report/github.com/gaia-pipeline/gaia .. |go-doc| image:: https://godoc.org/github.com/gaia-pipeline/gaia?status.svg :alt: GoDoc :target: https://godoc.org/github.com/gaia-pipeline/gaia .. 
|apache2| image:: https://img.shields.io/badge/license-Apache-blue.svg :alt: Apache licensed :target: https://github.com/gaia-pipeline/gaia/blob/master/LICENSE .. |codecov| image:: https://codecov.io/gh/gaia-pipeline/gaia/branch/master/graph/badge.svg :target: https://codecov.io/gh/gaia-pipeline/gaia .. |sh-login| image:: https://github.com/gaia-pipeline/gaia/blob/master/screenshots/login.png :alt: gaia login screenshot :width: 650px .. |sh-overview| image:: https://github.com/gaia-pipeline/gaia/blob/master/screenshots/overview.png :alt: gaia overview screenshot :width: 650px .. |sh-create-pipeline| image:: https://github.com/gaia-pipeline/gaia/blob/master/screenshots/create-pipeline.png :alt: gaia create pipeline screenshot :width: 650px .. |sh-vault| image:: https://github.com/gaia-pipeline/gaia/blob/master/screenshots/vault.png :alt: gaia Vault screenshot :width: 650px .. |sh-pipeline-detailed| image:: https://github.com/gaia-pipeline/gaia/blob/master/screenshots/detail-pipeline.png :alt: gaia pipeline detailed screenshot :width: 650px .. |sh-pipeline-logs| image:: https://github.com/gaia-pipeline/gaia/blob/master/screenshots/logs-pipeline.png :alt: gaia pipeline logs screenshot :width: 650px .. |sh-settings| image:: https://github.com/gaia-pipeline/gaia/blob/master/screenshots/settings.png :alt: gaia settings screenshot :width: 650px
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00755.warc.gz
CC-MAIN-2023-06
8,309
72
https://forums.autodesk.com/t5/Installation-Licensing/cad-2014-terminal-services/td-p/3917128
code
When users log in to the TS server, which is Windows 2003 32-bit, and try to open AutoCAD 2014 as a user, it goes through a setup for about 2-3 seconds, then disappears and won't open. Users are limited user accounts, and under the Administrator account it works fine. User accounts have folder redirection. If I log in as the user locally on the server, after a setup process I get this error. Any help would be great. >> Users are limited user accounts and under Administrator account it works fine Isn't that the answer? Your user accounts are restricted too much. The other problem I see is: why does the user (non-admin) need to search in the users folder of the administrator (in the screenshot)? What you can try is to reset the profile for that user (from the Windows start menu) so that this user can start AutoCAD from scratch. - alfred - Yes and no. Yes, I know it's from the restricted account, however it works on the workstations with the same accounts, and we can raise user rights. Does anyone know what read/write access they need on the drive and in the registry? On Server 2008 R2, AutoCAD opens but has a one-time font registry issue. I will have to reread the 2014 EULA, but I am pretty certain this violates the license agreement. DarrenP, any comments?
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678694248/warc/CC-MAIN-20140313024454-00010-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
1,321
16
http://www.vlsi-expert.com/p/vlsi-basic.html
code
Here we are targeting the different basics of VLSI, from the very starting point (digital background) until we understand the meaning of "What is VLSI". I have divided all the posts into different chapters and then subsections (as per the index below). If you think I have missed any topic, please let me know. I will try to cover it in this section (if possible); otherwise I will let you know by when and where it will be included. Note: The index below can change because of more detailed sections or new topics, but broadly the following topics will be covered. Chapter 1: Digital Background - 1.1 Number System - 1.2 Digital Arithmetic - 1.3.a Logic Gates - 1.3.b Logic Gates - 1.4 Combinational Circuits - 1.5 Multiplexer (MUX) - 1.6a Sequential Circuits (Introduction) - 1.6b Sequential Circuit Components - 1.7 Basic Flip Flops Chapter 3: CMOS Processing - 3.1 Fabrication Steps - 3.2 Create N-well And Field Oxide - 3.3 Creating Gate Oxide and Poly Layers - 3.4 Implant N+ Impurities - 3.5 Implant P+ Impurities - 3.6 Create Contact and Metal-M - MOS design (types of MOS: PMOS/NMOS/CMOS) - CMOS Gates - CMOS Basic Working Principle - Different Properties - SPICE Models - Simulation with MicroWind Chapter 7: Some VLSI Terminology - Symbol, Schematic, CELL, LIBRARY - Timing, Power, Area - Driving Strength, Slew, Transition Time, Propagation Delay - Nets, PINS, Clock Circuit, DATA Paths - Clock Paths, Layout, Standard Cells - Library Cell Design Chapter 8: VHDL-Verilog Basics 1) Twin-Tub (Twin-Well) CMOS Process 2) Silicon On Insulator (SOI) CMOS Process
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607702.58/warc/CC-MAIN-20170523202350-20170523222350-00005.warc.gz
CC-MAIN-2017-22
1,559
37
https://booksdrive.org/linux-kernel-module-programming-by-peter-jay-salzman/
code
Linux Kernel Module Programming by Peter Jay Salzman pdf free download. So, you want to write a kernel module. You know C, you’ve written a number of normal programs to run as processes, and now you want to get to where the real action is, to where a single wild pointer can wipe out your file system and a core dump means a reboot. Well, welcome to the club. I once had a wild pointer wipe an important directory under DOS (thankfully, now it stands for the Dead Operating System), and I don’t see why living. Linux Kernel Module Programming by Peter Jay Salzman Please make a comment if the link is not working for you. I appreciate your valuable comments and suggestions. For more books please visit our site.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474690.22/warc/CC-MAIN-20240228012542-20240228042542-00180.warc.gz
CC-MAIN-2024-10
716
6
https://www.pdfchm.net/tag/ancient/
code
Dive Into Algorithms: A Pythonic Adventure for the Intrepid Beginner Dive Into Algorithms is a broad introduction to algorithms using the Python Programming Language. Dive Into Algorithms is a wide-ranging, Pythonic tour of many of the world's most interesting algorithms. With little more than a bit of computer programming experience and basic high-school math,... Show Me You Care - The Power of Silence in Selling The Power of Silence is immense. Nothing establishes a closer connection or works better than silence and active listening. There is raw power to listening, hearing, and getting out of the way. When we let a buyer tell us what they want, vs. what we think they may need, long-lasting relationships are built. Silence and... Book of the Moon: A Guide to Our Closest Neighbor Have you ever wondered if there are seasons on the moon or if space tourism will ever become commonplace? So has Dr. Maggie Aderin-Pocock. In fact, she earned her nickname “Lunatic” because of her deep fascination for all things lunar. In her lucidly written, comprehensive guide to the moon, Aderin-Pocock takes readers...
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00463.warc.gz
CC-MAIN-2021-43
1,165
10
https://mein-coon.ru/validating-the-email-86.html
code
Validating the email The field under validation must end with one of the given values. Not only will we use regexes to do the validation but we’ll cover some other strategies. It is incredibly difficult to build a good regex to handle all the validation scenarios. For example, these are all valid email addresses: So you should decide how restrictive you need to be in your matching. If this is for a user signing up on your website and if you’re going to email them a validation code then you might not need to be too strict. The @ sign is a super simple way to do some easy validation. But you could also throw in some length validation as well which is discussed below. This regex ensures the user typed at least one character before the @ and one after. @won’t match it but it will match most email addresses. If someone types an @ symbol sometimes that’s good enough. This will match every one of the examples up above and is fast. You could use it but why risk having a valid email address be rejected. Here is some data on what we’ve seen for email address lengths for real people when parsing emails: If you built your email validation rule to validate that an email address is at least 7 characters, that would be a pretty good rule. If you don't have the proper application template, you could be hindering your ability to get tasks done or collect the information you need.
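The article's own example patterns were lost when the page was extracted, so here is a hedged Python sketch in the same permissive spirit: at least one non-whitespace character before the @, at least one after, plus the minimum-length check mentioned above (the exact pattern and the 7-character threshold are illustrative choices, not the article's):

import re

# Deliberately permissive: something@something, no whitespace, no second "@".
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+$")

def looks_like_email(value: str, min_length: int = 7) -> bool:
    return len(value) >= min_length and EMAIL_RE.match(value) is not None

print(looks_like_email("a@b.com"))       # True
print(looks_like_email("not-an-email"))  # False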
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038073437.35/warc/CC-MAIN-20210413152520-20210413182520-00125.warc.gz
CC-MAIN-2021-17
1,395
8
https://wiki.eclipse.org/index.php?title=EMF_Facet&direction=prev&oldid=295606
code
- 1 Overview - 2 Install - 3 Documentation - 3.1 User documentation - 3.2 Screencasts & Slides - 3.3 Project documents - 3.4 Committer documentation - 4 Support - 5 Getting Involved Using Eclipse Release Update Site (Recommended) To install the latest EMF Facet release, just point your Install Manager to the pre-defined Eclipse simultaneous release update site: http://download.eclipse.org/releases/__release_name__. For instance: - in an Indigo 3.7 installation the update site will be : Then, you can select the "EMF Facet SDK (Incubation)" feature under the "Modeling" category. Update site locations - Update sites - Alternative update sites - milestones 0.1.x: http://download.eclipse.org/facet/updates/milestones/0.1 - milestones 0.2.x: http://download.eclipse.org/facet/updates/milestones/0.2 - nightly (trunk, 0.2.0, Juno): http://download.eclipse.org/facet/updates/nightly/ - nightly (branches/0_1, 0.1.1, Indigo): http://download.eclipse.org/facet/updates/nightly-maintenance/ Update site uses The releases update site : - contains the release (GA) and the service releases (SR1, SR2, etc.) - should be used by all regular users The milestones update sites: - contain the milestones and release candidates: M1, M2, M3, M4, M5, M6, M7, RC1, RC2, RC3, RC4 (=GA), SR1 RC1, SR1 RC2, SR1 RC3, SR1 RC4 (=SR1), SR2 RC1, SR2 RC2, SR2 RC3, SR2 RC4 (=SR2) - must be used by the builds of other Eclipse projects - are referenced by Indigo b3aggrcon file and Juno b3aggrcon file. The nightly update sites: - contain the build of the SVN head (latest SVN revision) - can be used to test a not-yet-released feature or bug fix - must not be used to build any product release - should be used by integration builds of the other release train members a few days before the milestones and release candidates to detect bugs or regressions. Using an archived update Site (Not Recommended) You can download the archive of the EMF Facet updates sites from the EMF Facet download page but you will have to resolve the dependencies and find the corresponding archived update sites manually. The EMF Facet team does not provide the list of the archived update sites needed to satisfy the dependencies, because it is too complicated to maintain. That's why this kind of installation is not recommended. - 0.1.0 documentation (the documentation of the components copied from MoDisco is missing, please have a look at:Facet Manger, Query Manger, Customization) - New and Noteworthy - Head documentation: documentation in progress for the next version (0.2.0) Screencasts & Slides - EMF Facet 0.1.0, Eclipse DemoCamp Indigo in Nantes, 2011 - (frensh) MDT : Papyrus : état actuel et perspectives: Les journées NEPTUNE, May 2011. - A presentation of Papyrus and of the use of the EMF Facet table by Papyrus. - EMF Facet - A Non-Intrusive Tooling to Extend Metamodels, EMF Facet EclipseCon 2011 Audition, December, 2010. - EMF Facet - A Non-Intrusive Tooling to Extend Metamodels: Eclipse Summit Europe 2010, November, 2010. 
Release Train Required Documents - IP Log (Indigo) - 0.1.0 Release review docuware - Indigo Release Train Requirement Conformance Summary - Ramp down policy Useful release train's documentations Indigo_Simultaneous_Release (contains the calendar) Project Creation Documents - Web documentation update - Test scenarios - Releng : How to Use - Non enhancement opened bugs sort by importance (P1=planed for the next milestone, P2=planed for the next release, P3=not planned yet, P4=planed for the next "non service" release, P5=delayed) - Enhancement opened bugs sort by importance (P1=planed for the next milestone, P2=planed for the next release, P3=not planned yet, P4=planed for the next "non service" release, P5=delayed). - Unit Test Failures - Bugs not flaged indigo+ (must be empty) EMF Facet uses the MoDisco Developer Guide. Developers mailing list: https://dev.eclipse.org/mailman/listinfo/emft-dev
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194982.45/warc/CC-MAIN-20201128011115-20201128041115-00374.warc.gz
CC-MAIN-2020-50
3,913
61
https://supportforums.cisco.com/discussion/10219396/terminal
code
anyone who can give the best approach for the attached diagram. we have production and test environment network. The same workstation will access both networks. Both networks are on different subnet. We are planning to have a terminal server to be setup that will serve as our jump host to access the test environment network. Any advice is highly appreciated. thanks
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609598.5/warc/CC-MAIN-20170528043426-20170528063426-00335.warc.gz
CC-MAIN-2017-22
367
2
https://community.sdl.com/ideas/translation-productivity-ideas/i/trados-studio-ideas/processing-of-key-keynote-files
code
We have seen a small increase in the number of requests for .key files. Especially with customers who are predominantly Mac based. Given that we have an OpenOffice set of filters, I am wondering if perhaps we can look at a .key filter as well. There is a workaround for .key files which involves conversion to PowerPoint. There is a slight issue in that the conversion can cause formatting changes, and font info can get lost - which will increase post translation production time. So, if we can process this content natively - that would be a time saving device in itself.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347423915.42/warc/CC-MAIN-20200602064854-20200602094854-00304.warc.gz
CC-MAIN-2020-24
573
4
https://www.informit.com/articles/article.aspx?p=1389137&seqNum=2
code
- Linking to a YouTube Video - Embedding a YouTube Video in Your Website - Customizing an Embedded Video Embedding a YouTube Video in Your Website Linking to YouTube videos from your web page is one thing; embedding an actual video into your web page or blog is quite another. That's right—YouTube lets you insert any of its public videos into your own web page, complete with a video player window. And it's easy to do. YouTube automatically creates the embed code for every public video on its site and lists this code on the video page itself. The code is in the information box beside the video, as you saw in Figure 9.1; you'll need to copy this entire code (it's longer than the Embed box itself) and then paste it into the HTML code on your own web page. Just follow these steps: - Go to the page for the video you want to link to. - In the information box to the left of the video is an Embed box. Highlight and copy the HTML code in this box. - Paste that HTML code into your web page's underlying HTML code where you want the embedded video to appear. The result of inserting this code into your page's HTML is that your web page now displays a special click-to-play YouTube video player window, like the one shown in Figure 9.2. The video itself remains stored on and served from YouTube's servers; only the code resides on your website. When a site visitor clicks the video, it's served from YouTube's servers to your viewer's web browser, just as if it were served from your own server. (This means you don't waste any of your own storage space or bandwidth on the video.) Figure 9.2 A YouTube video embedded in a web page. By the way, the code in the Embed box is squished together onto a single line to make it easier to copy. If you were to properly format the code, it would look something like this: <object width="425" height="350"> <param name="movie" value="http://www.youtube.com/v/12345"></param> <param name="wmode" value="transparent"></param> <embed src="https://www.youtube.com/v/12345" type="application/x-shockwave-flash" wmode="transparent" width="425" height="350"> </embed> </object>
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00313.warc.gz
CC-MAIN-2023-14
2,117
13
https://community.ptc.com/t5/PTC-Education-Forum/Creo-Student-Download-links-broken/td-p/3673
code
I am trying to install Creo on my PC, but whenever I click on any of the download links it comes up with an error 404 message saying the page was not found. Can anyone help me with this? I already reported the problem in "Creo download links appear to be broken". Thanks, Martin! Is there any way to get an email notification as soon as the link starts working? I really need to install it as soon as possible!
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648850.88/warc/CC-MAIN-20230602172755-20230602202755-00593.warc.gz
CC-MAIN-2023-23
400
4
https://www.raspberrypi.com/news/get-started-ardunio/
code
Get started with… Arduino? Yes, you read that title right, and no, you haven’t accidentally stumbled upon the Arduino Foundation’s website. Today, we’re pleased to announce a new addition to the Raspberry Pi Press family: Get Started with Arduino, a complete how-to guide to help you get hands on with the other pocket-sized board. Why not? Our mission is to put the power of computing and digital making into the hands of people all over the world. Whether you’re using a Raspberry Pi, an Arduino, or any other piece of digital making kit, if you’re creating with tech, we’re happy. And Raspberry Pi and Arduino make wonderful project partners for all kinds of build. What’s in the book? Get Started with Arduino is packed full of how-tos and project tutorials to help you get better acquainted with the little blue microcontroller. Whether you’re brand new to digital making, a die-hard Raspberry Pi fan looking to expand your maker skillset, or simply a bit of a bookworm, Get Started with Arduino is a super addition to your bookshelves. Aren’t Raspberry Pi and Arduino the same kind of thing? Arduino is a microcontroller, while Raspberry Pi is a full computer. Microcontrollers don’t usually run a mainstream operating system, but they’re extremely power-efficient, so they can be great for projects that can’t stay plugged into the mains. You need to use a separate computer to set up your Arduino, but you can do everything on a Raspberry Pi itself… including setting up an Arduino. As we said, the two work really well together in some projects: for example, you might build a robot where the Raspberry Pi handles intensive processing tasks and provides you with a friendly environment for developing your code, while the Arduino handles precise real-time control of the motors. Buy Get Started with Arduino today Get Started with Arduino is out now! It’s available from the Raspberry Pi Press website with free international shipping, from the Raspberry Pi Store in Cambridge, and from WHSmith in the UK; it’ll reach Barnes & Noble stores in the US in a week or so. Also out today… HackSpace magazine issue #25 is also out today, available from the Raspberry Pi Press website, the Raspberry Pi Store in Cambridge, and every newsagent that’s worth its salt. And, if that’s not enough, Wireframe magazine issue 27 is also out today, and it too is available from Raspberry Pi Press, the Raspberry Pi Store, and newsagents across the UK. But wait, there’s more! In case you missed it, on Monday we released Retro Gaming with Raspberry Pi, your one-stop guide to creating and playing classic retro games on your Raspberry Pi. Did someone say free? For getting this far in today’s blog, here’s your reward: Get Started with Arduino, HackSpace magazine, Wireframe magazine and Retro Gaming with Raspberry Pi are all available as free PDF downloads. However, when you buy our publications, you’re supporting the work of the Raspberry Pi Foundation to bring computing to everyone, as well as the continued production of even more great magazines and special edition books. So, you know what to do. Something that must not be missing here is a link to Alex Eames’ RaspiDuino project! I agree. 100%. ummm…piDuino? Ardupi? raspiDuino? BTW I’ve met both the Arduino guy and the raspberry pi guy. Nice people. Eben was very humble, great guy. Not as big as he is on video though. For some reason he looks like a giant in his interviews. >the Arduino guy You mean Hernando Barragán? 
Just been looking at Arduino history and slightly confused who you meant Oh, I see. I saw an older man with grey hair. Not tall. A little heavy. bald spot. Love, peace & unity :) Arduino did have the 101 and the Yun which could be seen as Pi competitors, but it seems that they have been discontinued* and Arduino has decided to go back to just making boards with microcontrollers. * In the case of the 101 it seems this was Intel’s fault and not Arduino’s, i’m not sure about the Yun, but even if it was a supplier’s doing, ardunino don’t seem to have made another go at entering the Linux boards world. Yun2 had MIPs processor and a small amount of RAM plus it is (was) quite expensive compared to Pi. Great decision! I’m glad that this topic was issued under Hacker’s Magazine. This series is a bit more ambitious, and therefore such book was desired by me. I want not another one book with hello world and work with basic sensors/devices. Interrupts, stocks and other stuff is very important to understand computing devices and serious programming. Both topics and more are covered in here! Dear Raspberry Pi Press, Dear Raspberry Pi Foundation, if you ever plan to continue Android topics as a book or magazine I would like to read about: – PCM stuff – how to make good samples with most efficient results. – sound compression. I remember when Mortal Kombat II came to arcades. DCM (sound compression system) made impressive difference, It was actually in pinball machines first. If sound compression on Arduino is possible please do not miss this topic! – work with EEPROMs and Flash (including build-in one). – writing test sequence which should check all components with every start of Arduino, audits (arcade machines and pinballs had this feature, it is today forgotten but it is capable to discover broken components before actual run of the Arduino based device) – tricks which helps to fit and run quite long program into Arduino – Asynchronous data management I bought Arduino hardcopy with hope in mind that second part will ever be published. If not maybe advanced Arduino topics will be published in Hacker Space Magazine. Keep up good work please. I have been using Arduino for long time as analog and digital input and output. My current problem is, I have to read 34 temperature sensors, 34 voltage sensors, 34 current sensors, and turn on/off 48 relays based on the readings from the sensors. I found that Raspberry Pi and Arduino combination is the most cost effective and portable solution.
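As a concrete illustration of the division of labour described in the post (and of the sensor-and-relay setup the last commenter mentions), here is a rough Python sketch for the Raspberry Pi side using the third-party pyserial package. The serial port name and the line protocol ("T:<id>:<value>" in, "RELAY:<id>:OFF" out) are invented for the example; the Arduino firmware that speaks it is left to the reader:

import serial  # pyserial: pip install pyserial

# Assumed port name; on a Pi an Arduino usually shows up as /dev/ttyACM0 or /dev/ttyUSB0.
ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line.startswith("T:"):          # ignore anything that isn't a temperature report
        continue
    _, sensor_id, value = line.split(":")
    if float(value) > 60.0:                # Pi-side decision logic: over-temperature cut-off
        ser.write(f"RELAY:{sensor_id}:OFF\n".encode("ascii"))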
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00188.warc.gz
CC-MAIN-2023-14
6,072
39
https://original.newsbreak.com/@muhammad-abubakar-shoaib-1584014/2315533660202-my-finals-are-within-a-week-this-is-how-i-am-preparing-for-it
code
My finals are within a week. I haven’t studied the whole year & now I am pretty much stuck on how to start everything. Well, I started by organizing all the notes 📝 🗒 and shortlisting them just to learn the essence. Then, I looked up my finals schedule & which subjects I need to study the most. For me, the weakest of all is HISTORY, and I find it pretty exhausting 🥲😖😣😩. So, I am going to prepare for it first. I make my notes 📝 on the phone 📱 so that I don’t have to carry my entire book 📚 📕 series everywhere & it also lets me study wherever I want, whether in a library or a cafeteria. Even in exams, I don’t study the entire day, I study for like 6 hours, but I make sure whatever I study must stay in my mind. So, I WISH ME LUCK FOR MY FINALS!!! 🤞🏻🤘🏻 As after this, I am going to college & it would have a major impact on that. I wish to get into Ivy League. Comments / 0
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00397.warc.gz
CC-MAIN-2023-14
926
7
https://madd74.livejournal.com/216829.html
code
I was not familiar with the movie. Master Movie Myles told me he wanted to see this because Terry Gilliam directed and wrote the movie. For those not familiar with Gilliam (where the hell have you been?), look no further than Monty Python, Adventures of Baron Munchausen, Brazil, and Time Bandits. Maybe if I had watched all of these films the prior week, and did nothing else, I would have been more prepared for what I saw. As many know, this was Heath Ledger's last film. They were actually able to use elements of the story to allow the film to complete, despite the fact he died prior to the end of the film. This was no "Crow" attempt. They simply took the, well, brilliance that is Terry, did a few minor re-writes, and away you go with what can only be called a most wonderful movie experience... or a nice flashback of the last time you did acid and PCP while naked in January in the arctic. So, all I can say, is if any of beforehand mentioned movies/works were enjoyed by you, then go watch this movie.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155925.8/warc/CC-MAIN-20210805130514-20210805160514-00707.warc.gz
CC-MAIN-2021-31
1,013
3
https://dasklub.com/forum/politics-news-and-world-issues/techno-song-about-putin
code
I heard about this song tonight due to the John Oliver show. It's a song made for Putin and is played in Russia. Такого как Путин / One Like Putin, English Subs Gotta hand it to him that's some catchy tunes :D (hard to do as karaoke but still)
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824226.31/warc/CC-MAIN-20171020154441-20171020174441-00354.warc.gz
CC-MAIN-2017-43
257
4
https://eclipse-openj9.github.io/openj9-docs/version0.19/
code
What's new in version 0.19.0 The following new features and notable changes since version 0.18.0 are included in this release: - New binaries and changes to supported environments - Option to print code cache usage to stderr at VM shutdown - StringBuilder above 1 G grows to the maximum size - jpackage packaging tool platform support - Extended messages for NullPointerException not yet implemented - Compiler changes for Linux - New JDK 14 features Features and changes Binaries and supported environments Eclipse OpenJ9™ release 0.19.0 supports OpenJDK 14, which is available from the AdoptOpenJDK community at the following link: OpenJDK 14 with Eclipse OpenJ9 is not a long term support (LTS) release. The latest builds of OpenJDK with OpenJ9 for Java 8 and 11 at the AdoptOpenJDK community are for Eclipse OpenJ9 release 0.18.0. Features mentioned in these release notes are not available in these builds. Although it might be possible to build an OpenJDK 8 or OpenJDK 11 with OpenJ9 0.19.0, testing at the project is not complete and therefore support for any of these features is not available. To learn more about support for OpenJ9 releases, including OpenJDK levels and platform support, see Supported environments. Option to print code cache usage to stderr at VM shutdown A new command line option -XX:+PrintCodeCache allows you to print the code cache memory usage to stderr when the VM shuts down. StringBuilder above 1 G grows to the maximum size A 1 G char or larger StringBuilder now immediately grows to the maximum possible size for all current versions of Java, including Java 8. For Java 8 only, you can revert to the previous behavior of growing only as much as necessary to accommodate the String being added, by using the option, jpackage packaging tool platform support The jpackage utility is described in JEP 343 as a tool that "packages a Java application into a platform-specific package that includes all of the necessary dependencies." Full details of the tool are available at JEP 343: Packaging Tool. Be aware that jpackage is supported on only the following OpenJ9 platforms: Linux®, macOS®, and Windows™. It is not supported on AIX® or z/OS® platforms. Extended messages for NullPointerException not yet implemented JEP 358: Helpful NullPointerExceptions provides extended messages when a NullPointerException is generated by the Java 14 VM and you have enabled the feature. However, be aware that this is not implemented in OpenJ9 at this time. Compiler changes for Linux Linux x86 64-bit, Linux on POWER® LE 64-bit, and Linux on IBM Z® 64-bit have all moved to the gcc 7.5 compiler. See Supported environments. New JDK 14 features The following features are supported by OpenJ9: - JEP 343: Packaging Tool (Incubator) jpackage is supported on only the following OpenJ9 platforms: Linux®, macOS®, and Windows™. It is not supported on AIX® or z/OS® platforms. - JEP 352: Non-Volatile Mapped Byte Buffers - JEP 358: Helpful NullPointerExceptions - JEP 359: Records (Preview) The following features are implemented in OpenJDK and available in any builds of OpenJDK 14 with OpenJ9: - JEP 305: Pattern Matching for instanceof (Preview) - JEP 361: Switch Expressions (Standard) - JEP 367: Remove the Pack200 Tools and API - JEP 368: Text Blocks (Second Preview) You can find the full list of features for JDK 14 at the OpenJDK project. Any remaining features that are listed do not apply to OpenJ9.
Full release information To see a complete list of changes between Eclipse OpenJ9 version 0.18.0 and version 0.19.0 releases, see the Release notes.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00470.warc.gz
CC-MAIN-2024-18
3,581
50
https://bsa.classroomparent.com/helps?page=9
code
Can't find an answer to your question, please contact us I completed the signup form and it said it sent a confirmation email, but I never received it. What should I do? I would like to create a group but don't want everyone to know who is in the group, and see our messages. Can I use ClassroomParent to track membership in my Parent Organization? How can I send a message to just the parents in my class or grade without including teachers? I want to arrange a class gift without her knowing. How can I sign up a volunteer that is not a parent, teacher or staff member at the school? Once our school has agreed to use ClassroomParent, what are the next steps to getting up and running? Can more than one Class Parent be assigned to a classroom? How can I get statistics on the number of emails we have sent and their open rates? How do I add a Classroom/Homeroom I was just told that I was added to the directory, but when I try to register, the system says it can't find me. Why is this happening?
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608702.10/warc/CC-MAIN-20210613100830-20210613130830-00133.warc.gz
CC-MAIN-2021-25
1,000
11
https://community.graylog.org/t/pipeline-rule-with-multiple-conditions/26089
code
I’m trying to create a pipeline that would extract data from ssh logs and set the username, ip, login_result and similar fields. I can do it in this fashion and it works - for a single rule at a time:
rule "SSH Cert OK"
when
  to_string($message.application_name) == "sshd" && starts_with(to_string($message.message), "Accepted publickey for ", true)
then
  let grep = regex("^Accepted publickey for (.*[^\s]) from (.*[^\s]) port (.*)", to_string($message.message));
  set_field("ssh_result", "Login success");
  set_field("ssh_login_type", "Pubkey");
  set_field("username", grep["0"]);
  set_field("src_ip", grep["1"]);
end
The problem is, I’d like to have multiple rules for different messages (login fail, bad cert, etc.). Can I do it in a single rule (like multiple when ... then, but that would be rather inefficient); should I cascade rules, or is there a more efficient way? I don’t think there is an if...elseif or case construct here, right?
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647614.56/warc/CC-MAIN-20230601042457-20230601072457-00410.warc.gz
CC-MAIN-2023-23
943
9
https://wiki.4netplayers.com/en/How_to_install_mods_on_my_Minecraft_server
code
How to install mods on my Minecraft server Rent your own Minecraft server at 4Netplayers.com You need to create a new configuration, either for Forge, CraftBukkit, or Spigot. It depends on which system the plugins/mods you want to use are made for. The server must be completely stopped before uploading via FTP. The server must not be running, starting, or stopping. The plugins are then uploaded to the Saves FTP using an FTP program (e.g. FileZilla). You can find the login data in the FTP overview by clicking on FTP. Depending on the mod framework, the mods belong in either the "plugins" or the "mods" folder. To do this, create the folder "plugins" or "mods" (without the quotes) in the game server's configuration if it is not already present, and upload the plugin's Jar file there. - The folder must be plugins. Not Plugins! - Do not unpack the Jar file! After the next start and stop, the config files (.xml / .yml) of the respective plugins/mods are also present on the Saves FTP.
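The article recommends a graphical FTP client such as FileZilla; for completeness, the same upload can also be scripted. A small Python sketch using the standard library's ftplib (host name, credentials, folder layout and file name below are placeholders, not real 4Netplayers values, and remember the server must be fully stopped first):

from ftplib import FTP

# Placeholders - take the real host and login data from the FTP overview in the web interface.
with FTP("ftp.example.com") as ftp:
    ftp.login(user="your_ftp_user", passwd="your_ftp_password")
    ftp.cwd("plugins")                      # or "mods" for Forge - lower case matters
    with open("ExamplePlugin.jar", "rb") as jar:
        ftp.storbinary("STOR ExamplePlugin.jar", jar)   # upload the .jar as-is, do not unpack it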
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00418.warc.gz
CC-MAIN-2022-27
934
12
https://petri.com/enable-powershell-remoting-in-windows-8/
code
In this Ask the Admin, I'll show you how to enable PowerShell Remoting in Windows 8. PowerShell Remoting is not enabled by default in Windows 8, but can be easily configured from the command line or by using Group Policy. Windows Remote Management (WinRM) is the technology behind PowerShell Remoting. When enabled, the WS-Management service is set to start automatically, an HTTP listener is configured, the default WinRM Windows Firewall rules are turned on, and permissions are set to allow local administrators to establish remote connections.
Enable PowerShell Remoting from the Command Line
If you want to enable PowerShell Remoting on PCs that are not joined to a domain, or just on a handful of devices, then use the command line as shown below.
- Log in to Windows 8.
- Switch to the Start menu by pressing the WINDOWS key.
- On the Start screen, type powershell. Make sure that Windows PowerShell is selected in the search results and press CTRL+SHIFT+ENTER. Give consent or enter administrative credentials if prompted.
- In the PowerShell prompt, type enable-psremoting and press ENTER.
- You will then be prompted to confirm whether you want to continue with the configuration. You can either confirm each step individually by typing [Y], or collectively by typing [A] and pressing ENTER.
Enable PowerShell Remoting from the command line (Image: Russell Smith)
Alternatively, you can add the -Force parameter to avoid having to confirm the configuration. Once the command has completed, the WinRM listener will be ready to accept remote connections from any IP address.
Enable PowerShell Remoting using Group Policy
To enable PowerShell Remoting on PCs joined to an Active Directory domain, log on to a server or management PC that has the Remote Server Administration Tools (RSAT), using a domain account with permission to create new Group Policy Objects (GPOs) and link them to Organizational Units (OUs).
Configure a WinRM Listener
Let's start by configuring a WinRM listener on HTTP:
- To start GPMC, open Server Manager using the icon on the desktop taskbar. Alternatively, you can use the icon on the Start screen.
- In Server Manager, select Group Policy Management from the Tools menu.
- In GPMC, expand your Active Directory (AD) forest and domain in the left pane.
- In the left pane of GPMC, right-click Group Policy Objects and click New.
- In the New GPO dialog, give the new GPO a name and click OK.
- Expand Group Policy Objects in the left pane of GPMC, right-click the GPO you just created and select Edit… from the menu.
- In the left pane of the Group Policy Management Editor window, expand Computer Configuration > Policies > Administrative Templates > Windows Components > Windows Remote Management (WinRM) and click WinRM Service.
- In the right pane, double-click Allow remote server management through WinRM.
- In the Allow remote server management through WinRM dialog, check Enabled.
- In the IPv4 filter and IPv6 filter fields under Options, type * in both boxes to allow connections from any IP address, and then click OK.
Enable PowerShell Remoting using Group Policy (Image: Russell Smith)
Set the WS-Management Service to Start Automatically
Now let's configure the Windows Remote Management service to start automatically:
- In the left pane of the Group Policy Management Editor window, expand Computer Configuration > Policies > Windows Settings > Security Settings and click System Services.
- In the right pane, scroll down the list of services and double-click Windows Remote Management (WS-Management).
- In the Windows Remote Management dialog, check Define this policy setting, and then check Automatic under Select service startup mode. Click OK.
Enable Windows Firewall Rules for WinRM
Finally, we need to enable the default Windows Firewall rules for Windows Remote Management.
- In the left pane under Security Settings, expand Windows Firewall with Advanced Security and click Inbound Rules.
- Right-click Inbound Rules, and select New Rule from the menu.
- In the New Inbound Rule Wizard window, check Predefined and select Windows Remote Management from the menu. Click Next.
- Click Next on the Which rules would you like to create? screen to create the two default Windows Remote Management rules. Note that you should consider deselecting the default rule for the Public firewall profile to ensure that Windows Remote Management isn't exposed on unknown networks. Additionally, you might want to tighten the default Windows Remote Management firewall rule for Domain and Private networks.
- On the final screen of the wizard, make sure that Allow the connection is selected and then click Finish.
- Close the Group Policy Management Editor window.
Now you can link the new Group Policy Object (GPO) to an OU that contains your Windows 8 computer accounts. When Group Policy is next updated on devices within scope, Windows Remote Management will be enabled. For more information on linking GPOs and working with the Group Policy Management Console, see Working with Group Policy on Petri.
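Once the policy has applied (or after running Enable-PSRemoting locally), it is worth confirming from another machine that the listener actually answers. A quick check could look like the following; PC01 is a placeholder for one of your Windows 8 computer names:
# Confirm the WinRM listener responds on the remote PC
Test-WSMan -ComputerName PC01

# Run a simple command remotely to prove PowerShell Remoting works end to end
Invoke-Command -ComputerName PC01 -ScriptBlock { $env:COMPUTERNAME; Get-Service WinRM }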
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510284.49/warc/CC-MAIN-20230927071345-20230927101345-00801.warc.gz
CC-MAIN-2023-40
5,056
42
http://conferences.academicjournals.org/cat/arts-education/18th-international-morphology-meeting
code
The 18th International Morphology Meeting is organized by the Research Institute for Linguistics, Hungarian Academy of Sciences. The meeting will be held in Budapest, Hungary, May 10-13, 2018. We invite papers on subjects including but not limited to the main theme of the conference: Paradigms in inflection and word formation, synchronically and diachronically.
Olivier Bonami (Université Paris Diderot)
Farrell Ackerman (UCSD)
Marilyn Vihman (York University)
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866733.77/warc/CC-MAIN-20180524170605-20180524190605-00510.warc.gz
CC-MAIN-2018-22
461
6
https://forums.hololens.com/discussion/12184/windows-mixed-reality-on-surface-tablets
code
The Mixed Reality Forums here are no longer being used or maintained. There are a few other places we would like to direct you to for support, both from Microsoft and from the community. The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR. For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality. If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality. And always feel free to hit us up on Twitter @MxdRealityDev.
Windows Mixed Reality on Surface Tablets
I am quite confused about the situation of Windows Mixed Reality, or more specifically the Windows.Perception.Spatial API, on Microsoft tablets. Since Paint 3D features mixed reality, I thought I would be able to write code against the API, but as far as my tests go I was not able to initialize the spatial locator. Now I am confused: am I doing something wrong, or is it just not possible to use the spatial mapping features on Microsoft Surfaces?
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476396.49/warc/CC-MAIN-20240303142747-20240303172747-00022.warc.gz
CC-MAIN-2024-10
1,259
10
https://lovehoneyforum.com/t/safex-delay-condoms/229224
code
ARE THESE ANY GOOD or are they a waste of time, what are the best type of condoms? My old boyfriend hated them coz he felt he was too numb, but that could just be him! There's no best type. The type that works best for each person is different. ok thanks guys delay are disastrous, if they are anything like the durex performa! numbness is definitely not good because why would a guy want sex without feeling? he loses interest and of course the erection follows. but they must have been made for some good reason, i think the idea was for men who suffer from premature ejaculation or something, but my friend says the same thing. numb is not great. thanks guys its so interesting to see what you all think lol i can say i can't tell one condom from another lol
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00274.warc.gz
CC-MAIN-2020-45
764
6
http://www.zhaoliming.net/research
code
As an important and challenging problem in computer vision and graphics, keypoint-based object tracking is typically formulated in a spatio-temporal statistical learning framework. However, most existing keypoint trackers are incapable of effectively modeling and balancing the following three aspects in a simultaneous manner: temporal model coherence across frames, spatial model consistency within frames, and discriminative feature construction. To address this issue, we propose a robust keypoint tracker based on spatio-temporal multi-task structured output optimization driven by discriminative metric learning. Consequently, temporal model coherence is characterized by multi-task structured keypoint model learning over several adjacent frames, while spatial model consistency is modeled by solving a geometric verification based structured learning problem. Discriminative feature construction is enabled by metric learning to ensure the intra-class compactness and inter-class separability. Finally, the above three modules are simultaneously optimized in a joint learning scheme. Experimental results have demonstrated the effectiveness of our tracker. In this paper, we propose an end-to-end deep correspondence structure learning (DCSL) approach to address the cross-camera person-matching problem in the person re-identification task. The proposed DCSL approach captures the intrinsic structural information on persons by learning a semantics aware image representation based on convolutional neural networks, which adaptively learns discriminative features for person identification. Furthermore, the proposed DCSL approach seeks to adaptively learn a hierarchical data-driven feature matching function which outputs the matching correspondence results between the learned semantics-aware image representations for a person pair. Finally, we set up a unified end-to-end deep learning scheme to jointly optimize the processes of semantics-aware image representation learning and cross-person correspondence structure learning, leading to more reliable and robust person re-identification results in complicated scenarios. Experimental results on several benchmark datasets demonstrate the effectiveness of our approach against the state-of-the-art approaches.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118713.1/warc/CC-MAIN-20170423031158-00397-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,274
2
https://www.camptocamp.com/de/actualite/puppet-acceptance-testing-travis-ci/
code
Camptocamp is developing lots of Puppet modules and is increasingly using acceptance tests with Beaker to prevent regressions. Beaker is a powerful framework for adding acceptance tests to Puppet modules. It allows you to use all sorts of hypervisor backends (Docker, Vagrant, AWS, OpenStack, etc.). Unfortunately, it is not (yet) possible to launch specific Docker containers or Vagrant boxes on Travis CI. Since Camptocamp has its own OpenStack private cloud, we had the idea to use Beaker's OpenStack hypervisor to spawn VMs on our own OpenStack infrastructure from Travis CI, in order to automate acceptance test runs. And this is how we are doing it.
First, let's write a very simple .travis.yml file:
Then a simple nodeset:
This should work locally, but not from Travis CI, because it (hopefully) doesn't have access to your private key, so you have to generate a new public key and add its keypair to OpenStack. You'll need this patch: https://github.com/puppetlabs/beaker/pull/647
You then need to remove the openstack_keyname from your nodeset.
Now it should work fine, but you certainly don't want to publish your credentials with your code, so let's secure it a little…
Encrypt your credentials using travis encrypt. You'll need this patch: https://github.com/puppetlabs/beaker/pull/643
Use the travis gem to encrypt and add your credentials to your .travis.yml. You can then remove them from your nodeset. Finally, you should have something like this: .travis.yml
While waiting for Travis CI to allow spawning specific Docker containers or VirtualBox boxes, we now have acceptance tests automatically launched, with their history kept. Here is an example of Travis CI output using this method on our puppet-openldap module:
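The referenced .travis.yml and nodeset are not reproduced here; as a rough idea of the shape such a setup takes, a sketch follows. The rake task name, nodeset name and Ruby version are assumptions, not Camptocamp's actual files, and the secure entry stands for OpenStack credentials (OS_USERNAME, OS_PASSWORD, OS_AUTH_URL) encrypted with travis encrypt:
# .travis.yml (sketch)
language: ruby
rvm: 2.1
script: bundle exec rake acceptance        # hypothetical rake task that runs the Beaker suite
env:
  global:
    - BEAKER_set=openstack-debian8         # hypothetical nodeset name
    - secure: "ENCRYPTED_OPENSTACK_CREDENTIALS"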
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347415315.43/warc/CC-MAIN-20200601071242-20200601101242-00100.warc.gz
CC-MAIN-2020-24
1,733
9
https://news.ycombinator.com/item?id=4741899
code
I disagree with a few tips here. Rather than code to exact resolutions (eg: `if ([CCDirector sharedDirector].winSize.width == 568)`) build your games to be adaptive to the screen. That way when a new device is released you don't end up going through and making your code swiss cheese. A better idea is to maybe make decisions when a size is greater than a certain amount, but still lay out relative to the screen. This will make it easier to support landscape and portrait on 4/3 devices as well, and not just 3/2 and 16/9 screens. Perhaps add additional HUD when you have the space. To be fair, the author does suggest adaptive strategies all over the article. In practice, it's not uncommon to mix adaptive for the general case together with special cases optimized for a few specific, common and popular resolutions. Sure, I don't mind mixing, but I don't vote for checking for exact resolutions, but rather ranges of resolution ratios. Instead of checking for "width == 568", check width/height and see if it's close to 16/9 or 3/2 or 4/3, and then use specific UI for those cases, but still keep the layout relative.
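In cocos2d-iphone terms (which the quoted snippet appears to use), the ratio check described above might look roughly like this fragment, dropped into a layout method; the tolerance value is an arbitrary assumption:
// Sketch: branch on aspect-ratio ranges rather than exact pixel widths
CGSize win = [CCDirector sharedDirector].winSize;
CGFloat aspect = win.width / win.height;

if (fabs(aspect - 16.0/9.0) < 0.05) {
    // roomier 16:9 layout, e.g. extra HUD elements
} else if (fabs(aspect - 3.0/2.0) < 0.05) {
    // 3:2 layout (older iPhones)
} else {
    // 4:3 and anything else
}
// Actual positions should still be expressed relative to win.width and win.height.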
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093899.18/warc/CC-MAIN-20150627031813-00122-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
1,149
4
https://cwiki.apache.org/confluence/display/TUSCANY/BusinessExceptions
code
Supporting Business Exceptions in Tuscany
Business exception related concerns
1. How to declare business exceptions? What defines a business exception on each interface type (i.e. Java, WSDL)?
In a Java interface, business exceptions are the declared checked exceptions specified on the operations of the service interface (see the sketch at the end of this page). Note that declared unchecked (runtime) exceptions are not considered business exceptions. For WSDL, we need to determine whether all WSDL-defined faults are business exceptions, and to investigate the JAX-WS mapping of exceptions with the JAX-B databinding. SDO has no similar specification; we need to determine a comparable means for SDO to provide reliable transforms between JAX-B and SDO.
2. How to represent business exceptions in Java?
In Java, business exceptions are represented as non-runtime exceptions. But not every exception delivered in a message may be a business exception. If a non-runtime exception is delivered to a component that does not declare the exception, the exception will be wrapped in a specified runtime exception. For JAX-B, we need to further investigate the JAX-B specification and follow how it models business exceptions as objects. For SDO, there are no specific mappings provided; we may need to wrap complex parts of an exception as SDO objects inside a specific Java exception.
JAX-WS WSDL 1.1 to Java mapping for faults: the JAX-WS 2.0 spec defines the mapping rule for web service faults in section 2.5, using a Java exception to represent the web service fault.
3. How to transform business exceptions across databindings?
Provide, in the databinding framework, transformations between Axiom OMElement, SDO-represented faults and JAX-B exceptions, and decide how to identify business exceptions during the transforms.
4. How to propagate business exceptions?
Determine how to propagate exceptions in the Tuscany runtime message in local interactions, make sure TargetInvocation exceptions become unwrapped, and determine how, in the case of the web service binding, to propagate the message through that binding.
A scenario to test the business exception handling
Integration Tests (iTests)
Several integration tests have been created, beginning with the prefix "exception".
Work items / actions
- Either investigate using existing complex type conversions or add additional interfaces to the databinding framework for the transformation of exceptions.
- Make SDO-generated exceptions match JAX-B as closely as possible; decide what annotations could be added to SDOs.
- Augment the Java introspector to capture JAX-B annotations in logical types, to help provide hints for determining matching exceptions.
- Use, when provided, the WSDL qname of the exception's message name to match exceptions. If not provided, fall back to name matching and possibly to package-name annotations to resolve the matching type.
- Attempt to do the actual conversion through an Axiom transform. If conversion fails, try a simple conversion by copying the respective field members.
- Check that only declared checked exceptions are thrown. Wrap all other checked exceptions.
- If the originating exception is a business exception and conversion fails, should we have a Tuscany standard runtime exception that carries the basic message from the originating exception? Should we just pick one of the business exceptions on the receiving operation? This might be more robust than throwing a runtime exception.
- What runtime exception should undeclared checked exceptions be wrapped in? Tuscany-defined? Just RuntimeException? java.lang.reflect.UndeclaredThrowableException? I can see an SCA client still wanting to be "robust", capturing this and acting on it.
- Will not directly validate web services exceptions until the Axis binding is at the incubator-snapshot (kernel trunk) level.
- How are we currently mapping operations during wiring in Tuscany with respect to exceptions? Need to see if exceptions are part of the operation signature.
- Currently we are unwrapping all exceptions at the TargetInvokerExtension and passing them on the message path.
- System exceptions happening during the processing of a message are thrown up the stack.
- We have made some decisions on the support of SDO exception wrappers; an example can be seen in the exceptionXbindingTest iTest. This closely resembles the JAX-B pattern for dealing with faults. One exception is that we currently have a FAULT_ELEMENT field of type QName on the exception to help tie it back to the original WSDL element it is associated with.
The generated exception has the getter and setter for the fault, setFaultMessage(...) and getFaultMessage(), and that's the pattern Axis2 adopts for mapping a web service fault to a Java exception. The InvalidSymbolFault is a generated class to represent the fault. It can be generated using different databindings such as ADB, JAXB (and hopefully SDO in the future).
JAX-WS RI 2.1 WSDL2Java
Rick provided the WSDL and generated code from JAX-WS RI 2.1.
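To make the Java case in point 1 concrete, a hypothetical service interface follows; the names are illustrative only and are not Tuscany or SCA API:
// A checked exception declared on a service operation is treated as a business exception;
// a RuntimeException thrown by the implementation would not be.
class InsufficientFundsException extends Exception {
    public InsufficientFundsException(String message) { super(message); }
}

public interface AccountService {
    // Declared checked exception => business exception on this operation
    void withdraw(String accountId, double amount) throws InsufficientFundsException;
}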
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297295329.99/warc/CC-MAIN-20240425130216-20240425160216-00572.warc.gz
CC-MAIN-2024-18
4,866
44
https://forums.unrealengine.com/t/multiple-uv-sets-vs-multiple-textures/77656
code
So does Unreal consider multiple UV sets that are present in a single texture as a single texture? What are the pros and cons vs. having multiple textures without UV sets in the same material? Is this better for memory and performance in general, or is it the same thing under the hood? As an example: one 2K texture with 2 UV sets in the same material vs. two separate 2K textures with no UV sets in the same material. Of course this is multiplied for all maps - roughness, normal, etc… Also, I haven't tested normal maps in multiple UV sets before - should I expect any problems with having normal maps arranged like that in Unreal, or should it work as expected like the other maps? The last question is unrelated, but I wanted to ask: does anyone know how many draw calls Paragon characters have per character? Notably, how many separate materials do they have on average?
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00104.warc.gz
CC-MAIN-2021-21
885
4
http://www.nerfcenter.com/reviews.htm
code
NerfCenter has a comprehensive database of Nerf and Nerf-like blaster reviews. All blasters which NerfCenter has reviewed are listed alphabetically below (starting in the first column, going down, and continuing onto the next columns). Any e-mail regarding the reviews should be sent to [email protected]. Click on a letter below to see all of the blasters whose names start with that letter. Due to space constraints, some blaster names have been abbreviated. The full name of each blaster appears on its respective review page. Blasters marked with the (NN) (Non-Nerf®) symbol are not part of the Nerf product line.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00298.warc.gz
CC-MAIN-2022-49
599
10
http://forums.parallax.com/archive/index.php/t-93459.html
code
View Full Version : Measuring FAN Rpm
04-05-2007, 01:15 PM
Have got a 12VDC, 0.27A fan. How can I measure the RPM of the fan with a BS2?
04-05-2007, 03:10 PM
One idea is to mount a magnet on the fan somewhere, then mount a Hall sensor in a fixed position so that the magnet comes close enough to it to trigger it; the sensor produces an output when it is near a magnetic field. I think there are Hall sensors on the main Parallax site. The sensor would output a pulse every time the magnet passed near it, so the Stamp would take the pulses and measure the time between each pulse trigger, and calculate the RPM based on the period between the pulses.
04-05-2007, 04:15 PM
A couple of ideas. Is this a three-lead fan? If so, it may already have a tachometer output. Search for the model number of the fan to see how to use the wire. Then, you'll need to condition the input for the BS2. The fan runs at 12V, so I would expect the tachometer output to be a 12V square wave. You'll need to reduce the voltage before it reaches the Stamp. The other piece of the puzzle is the number of pulses per revolution, which the same website should also tell you. Then use the PBASIC "COUNT" command to count the number of pulses for a short period of time. Then divide by the number of pulses per revolution and finally multiply up using your time period to get one minute. E.g., 200 pulses in 1 sec = 200/2 (example fan from the link below) = 100. Then, because I used one second, multiply by 60 to get one minute: 100*60 = 6,000 RPM. One other thing to keep in mind is that while the Stamp is counting, it can't do anything else, so you have to balance accuracy vs. speed vs. other tasks.
An example for a Nidec three-wire fan is here: http://www.nidec.com/designoptions/options.htm - click on the link just below the diagram to see the most common option/use.
Another way to do this is to use an IR sensor/receiver combo as described in the Process Control text. It uses an IR transmitter/receiver and an encoder wheel on the fan. The text describes the exact hookup and some programming examples too. It's a free download, so take a look. Link: http://www.parallax.com/html_pages/downloads/siccurriculum/documentation_sic_curriculum.asp - look for Activity 6 beginning on page 114. Direct to PDF: http://www.parallax.com/dl/docs/prod/sic/Web-PC-v1.0.pdf
A similar idea would be to use a separate IR transmitter and a separate receiver, but set it up so that the transmitter is on one side of the blades and the receiver is on the other. The blades will cut the beam, allowing you to count each blade as it passes. Sunlight will affect both these ideas though. Similar programming as above: balance accuracy, speed and your other tasks.
If the fan is in a light-tight box, you could also try a regular LED for a light source and a CdS cell or phototransistor to detect the light. Use shrinkwrap with a straw to narrow the focus. Set the LED so the blades break the beam. You'll need to use the "RCTIME" command and add a small capacitor as described in the text above. I'm not sure how precise this would be at measuring RPM, but it may be worth a shot. Also, it may not measure high RPMs. But it might be worth playing with, just to see what you can do with it.
I hope this helped!
Post Edited (Desy2820) : 4/5/2007 9:31:10 AM GMT
04-05-2007, 04:57 PM
Thanks for the reply. Yes, it's a 3-wire fan; the green wire, I assume, is a signal wire. I can't get any information about this fan.
It's an Intel C33224-002 fan. Does anyone have any information about this fan - the revs per minute and other details?
04-05-2007, 05:02 PM
And please could you give me an example using just the COUNT statement?
04-05-2007, 10:20 PM
Measuring fan speed is discussed on page 115 of the Parallax "Process Control" text; see the link above to download it.
04-06-2007, 04:44 AM
The only problem I can see with using the magnet/Hall-effect solution is one of safety: the magnet may be a problem to attach securely so that it doesn't fly off at high speed and hit someone in the eye! Would using an optical transmitter/receiver option be better, like the IR buddies sensor used in a break-beam arrangement, using the fan blades to cut the beam and dividing by the number of blades to get a revolution?
04-09-2007, 12:13 AM
The 3-lead fans have an open-collector output. You must pull it HIGH using a 10K resistor. Now, bear in mind that since the fan is usually running @ 12V and the BASIC Stamp @ 5V, you will want the pull-up to be to the 5V supply (usually VDD on the BASIC Stamp boards). You can use the COUNT command to get the RPM at that point. I hope this helps. Take care.
Parallax Tech Support
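Putting the advice in this thread together, a minimal PBASIC 2.5 sketch follows. It assumes the tach line is pulled up to 5 V through a 10K resistor and connected to P0, and that the fan outputs 2 pulses per revolution; check the datasheet for your particular fan:
' {$STAMP BS2}
' {$PBASIC 2.5}
pulses VAR Word
rpm    VAR Word

DO
  COUNT 0, 1000, pulses       ' count tach pulses on P0 for 1000 ms
  rpm = pulses / 2 * 60       ' 2 pulses per rev (assumed), scaled to one minute
  DEBUG "RPM: ", DEC rpm, CR
LOOP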
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036920.5/warc/CC-MAIN-20150601214356-00014-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
4,676
28
http://www.busuu.com/p/27917012
code
How are you right now? How do you feel? Well, but the question was about your feelings and not about these pictures :) Please describe how you feel! Thank you :) This exercise is about how you are :) Read the tasks more attentively, please :) And as for your text, it is very good!! Good luck with your studying :)
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120974.20/warc/CC-MAIN-20140914011200-00123-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
CC-MAIN-2014-41
305
6
https://www.daniweb.com/programming/software-development/threads/82822/sed-leaves-file-empty
code
Hello all, new to this site and to scripting. I am trying to search for the line with "CALIBRATE" in a file and remove the leading # from that line with the following lines:
cat $F1 | sed -e '/CALIBRATE/s/^#//' > $F1
cat $F2 | sed -e '/CALIBRATE/s/^#//' > $F2
cat $F3 | sed -e '/CALIBRATE/s/^#//' > $F3
Another part of my script does basically the same thing in reverse, that is, adds the leading # to the line. However, when the script is run several times in a row with the different options, one of the output files is left either empty or scrambled (seen as binary). My guess is that something happens when it is run again and there is no # to remove at the beginning of the line. Any help would be much appreciated.
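For what it's worth, empty or scrambled files are the classic symptom of the shell truncating a file (via >) while it is still being read in the same pipeline. A common workaround, sketched here with the same variable names, is to write to a temporary file first:
# write to a temp file, then replace the original only if sed succeeded
sed -e '/CALIBRATE/s/^#//' "$F1" > "$F1.tmp" && mv "$F1.tmp" "$F1"

# or, with GNU sed, edit the file in place:
sed -i '/CALIBRATE/s/^#//' "$F1"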
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00542.warc.gz
CC-MAIN-2022-40
714
8
https://connect.unity.com/p/pyramid-vr
code
Remember those days long ago when the idea of gaming was only text? Those adventure games that drove you nuts, typing in 2 words "go north", "Take Torch" etc... Frustrating you to no end trying to think of the right combination to just open the freakin door! Well here it is. The chance to actually FINISH one of those games, in true graphical style! No more 2 word commands, just point and click your commands! (OK, this was for the PC version, which is now in VR so reach out, grab that item and use it!) This is a graphical remake of the 1978 Tandy game: Pyramid 2000. (A tribute to the original style games that started it all) We tried to stay as true to the text as possible, but due to not having a graphic artist we had to make do with what we could and it is progressing nicely! Hope to see you at the end!
Team/GDD Creator (This team was great, it all went smooth! Thanks guys!)
Scene decorator... Does this sarcophagus come in pink??
Prop Master/funding of the little things required to finish
IP rights procurement (license to use music, etc...)
Directed and performed VR playtesting.
Coded extended gameplay features and VR integration.
Textured, modeled, and structured levels in a coherent art style.
Customized, modeled, and polished scene item game objects.
Customized, modeled, and created icons for inventory items.
Directed and staged camera and lighting effects.
Modified/remodeled logic and UI for VR.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573570.6/warc/CC-MAIN-20190919183843-20190919205843-00531.warc.gz
CC-MAIN-2019-39
1,422
20
http://www.docjar.org/docs/api/javax/print/attribute/standard/JobState.html
code
java.lang.Object
  javax.print.attribute.EnumSyntax
    javax.print.attribute.standard.JobState
All Implemented Interfaces: PrintJobAttribute, Cloneable, Serializable
IPP Compatibility: The category name returned by getName() is the IPP attribute name. The enumeration's integer value is the IPP enum value. The toString() method returns the IPP string representation of the attribute value.
Fields:
- public static final JobState UNKNOWN - The job state is unknown.
- public static final JobState PENDING - The job is a candidate to start processing, but is not yet processing.
- public static final JobState PENDING_HELD - The job is not a candidate for processing for any number of reasons but will return to the PENDING state as soon as the reasons are no longer present. The job's JobStateReasons attribute must indicate why the job is no longer a candidate for processing.
- public static final JobState PROCESSING - The job is processing. When the job is in the PROCESSING state, the entire job state includes the detailed status represented in the printer's PrinterState and PrinterStateReasons attributes. Implementations may, though they need not, include additional values in the job's JobStateReasons attribute to indicate the progress of the job, such as adding the JOB_PRINTING value to indicate when the output device is actually making marks on paper and/or the PROCESSING_TO_STOP_POINT value to indicate that the printer is in the process of canceling or aborting the job.
- public static final JobState PROCESSING_STOPPED - The job has stopped while processing for any number of reasons and will return to the PROCESSING state as soon as the reasons are no longer present. The job's JobStateReasons attribute may indicate why the job has stopped processing. For example, if the output device is stopped, the PRINTER_STOPPED value may be included in the job's JobStateReasons attribute. Note: When an output device is stopped, the device usually indicates its condition in human readable form locally at the device. A client can obtain more complete device status remotely by querying the printer's PrinterState and PrinterStateReasons attributes.
- public static final JobState CANCELED - The job has been canceled by some human agency, the printer has completed canceling the job, and all job status attributes have reached their final values for the job. While the printer is canceling the job, the job remains in its current state, but the job's JobStateReasons attribute should contain the PROCESSING_TO_STOP_POINT value and one of the CANCELED_BY_USER, CANCELED_BY_OPERATOR, or CANCELED_AT_DEVICE values. When the job moves to the CANCELED state, the PROCESSING_TO_STOP_POINT value, if present, must be removed, but the CANCELED_BY_xxx value, if present, must remain.
- public static final JobState ABORTED - The job has been aborted by the system (usually while the job was in the PROCESSING or PROCESSING_STOPPED state), the printer has completed aborting the job, and all job status attributes have reached their final values for the job. While the printer is aborting the job, the job remains in its current state, but the job's JobStateReasons attribute should contain the PROCESSING_TO_STOP_POINT and ABORTED_BY_SYSTEM values. When the job moves to the ABORTED state, the PROCESSING_TO_STOP_POINT value, if present, must be removed, but the ABORTED_BY_SYSTEM value, if present, must remain.
- public static final JobState COMPLETED - The job has completed successfully or with warnings or errors after processing, all of the job media sheets have been successfully stacked in the appropriate output bin(s), and all job status attributes have reached their final values for the job. The job's JobStateReasons attribute should contain one of these values: COMPLETED_SUCCESSFULLY, COMPLETED_WITH_WARNINGS, or COMPLETED_WITH_ERRORS.
Constructor: protected JobState(int value)
Methods from javax.print.attribute.standard.JobState: getCategory, getEnumValueTable, getName, getStringTable
Methods from javax.print.attribute.EnumSyntax: clone, getEnumValueTable, getOffset, getStringTable, getValue, hashCode, readResolve, toString
Methods from java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Method detail (javax.print.attribute.standard.JobState):
- public final Class<Attribute> getCategory() - For class JobState and any vendor-defined subclasses, the category is class JobState itself.
- protected EnumSyntax[] getEnumValueTable()
- public final String getName() - For class JobState and any vendor-defined subclasses, the category name is "job-state".
- protected String[] getStringTable()
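A small, hypothetical usage sketch (not part of the Javadoc above) showing how these constants are typically read back from a print job's attribute set; the helper class name is made up:
import javax.print.DocPrintJob;
import javax.print.attribute.standard.JobState;

final class JobStateHelper {
    // Returns true once the job has reached a terminal state.
    // Note: getAttributes() may not include JobState if the print service does not report it.
    static boolean isFinished(DocPrintJob job) {
        JobState state = (JobState) job.getAttributes().get(JobState.class);
        return JobState.COMPLETED.equals(state)
            || JobState.CANCELED.equals(state)
            || JobState.ABORTED.equals(state);
    }
}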
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00220.warc.gz
CC-MAIN-2023-14
4,651
34
https://talk.peercoin.net/t/disappearing-client/15440
code
I have v0.10.01 downloaded. Yesterday, the shortcut on my menu bar did not work. It said: "The item peercoin-qt.exe that this shortcut refers to has been changed or moved [etc.]" I went to the executable file in Program Files and tried to open peercoin-wallet.exe; however, I just briefly got a small black screen, like a command prompt, which instantly disappeared. The client did not open or run. I downloaded the client afresh from the website and had it open for the rest of the day - it even minted a coin. This morning, however, the shortcut is again not working - any ideas? I note that the error message refers to "peercoin-qt.exe", which I think must be wrong. I don't think this problem relates directly to v0.10.01, as I was running it last week, as normal.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00048.warc.gz
CC-MAIN-2021-39
776
7
https://wiki.genexus.com/commwiki/servlet/wiki?19478,iOS+Requirements,
code
Requirements vary depending on whether you want to prototype using Knowledge Base Navigator or compile your app. No special requirements are needed when using an iOS 9.0 (or higher) device with the Knowledge Base Navigator to prototype. If you want to use F5 (Run) from the GeneXus IDE, you will have to register your devices using the GeneXus Account (Associated Smart Devices tab in the account configuration) from the device. The necessary components are available on the Apple Developer website.
"Starting July 2018, all new iOS apps and updates submitted to the App Store must be built with the iOS 11 SDK. All new iOS apps and updates for iPhone, including universal apps, must support the Super Retina display of iPhone X." Ref.: https://developer.apple.com/ios/submit/
This implies that you must use GeneXus 15 Upgrade 8 or higher to be able to deploy to the App Store. Note: It is highly recommended to delete the '/mobile/iOS' directory from the generated project and Rebuild All.
Requirements for GeneXus 15 users:
- OS X El Capitan (10.11.x or later). If you use a previous OS version, your emulator won't launch automatically when running your app.
- Xcode 7.3.1 or higher, which includes Swift 2.2.
- The iOS SDK, included in Xcode 7.3.x.
- Enable SSH access on your Mac computer.
As of GeneXus 15 Upgrade 7 (build 117818), iOS applications are generated for Swift 4. Swift 4 changes the way it integrates with Objective-C by not exposing every Swift method to Objective-C. This change in the compiler may bring some compatibility issues that can only be discovered at runtime. The Swift generator in GeneXus 15 Upgrade 7 up to Upgrade 8 uses a compatibility mode to detect possible misconfigurations, which need to be checked when the application is executed. As of Upgrade 9, this flag is disabled (no message will be displayed) and should not produce any runtime error. Please test your application meticulously and check the Xcode project for runtime warnings (shown with a purple icon) as shown below. When you click on that icon it will display detailed information. It is very important to report an issue to the GeneXus Support Team with that detailed information. An example of this information is:
implicit Objective-C entrypoint -[sdsvc_workwithdevicescity_section_general dyn_CountryGuid:] is deprecated and will be removed in Swift 4; add explicit '@objc' to the declaration to emit the Objective-C entrypoint in Swift 4 and suppress this message
- To run compiled applications, devices with iOS 8 or higher are required.
- iOS 8 is not supported as of GeneXus 15 Upgrade 8 (iOS 9 or higher is supported).
- In requirement tables, the "-" symbol means there is no recommendation or requirement in particular for such component.
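To illustrate the kind of change the warning asks for (a generic Swift 4 sketch, not GeneXus-generated code; the class and method names merely mirror the example message above), adding an explicit @objc attribute keeps the method visible to the Objective-C runtime:
import Foundation

class CityListSection: NSObject {
    // Without @objc, Swift 4 no longer exposes this method to Objective-C,
    // which is what the "implicit Objective-C entrypoint ... is deprecated" warning points at.
    @objc func dyn_CountryGuid(_ value: String) {
        // ...
    }
}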
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868316.29/warc/CC-MAIN-20180527131037-20180527151037-00068.warc.gz
CC-MAIN-2018-22
2,683
23
http://play-game.ga/capif/openvpn-source-code-jedo.php
code
OpenVPN source code
This documentation describes the internal structure of OpenVPN. It was automatically generated from specially formatted comments.
Installing OpenVPN is easy and platform independent. One of the most popular and well-received implementations of VPN technology, OpenVPN is an open source solution for creating a Virtual Private Network (VPN). We use GitHub as the primary official SoftEther VPN repository. This means that, by default, the VPN connection supports 100.
Just like the rest of FEAT VPN, the OpenVPN client does not run with root privileges. The source code of our versions of OpenVPN is available at the following URLs.
The repository includes the OpenVPN source code as well as a few other open source projects it makes use of, such as the popular… OpenVPN is an open source virtual private network product that offers a simplified security framework, a modular network design and cross-platform portability.
The problem turned out to be buggy bridging code in the NIC driver. There are now builds available, localized into several languages. This is the modified OpenVPN 2.1.0 source code.
I spent a considerable amount of time getting OpenVPN working in bridged mode on my FreeBSD system. I downloaded openvpn-2.3.7.zip. Need help installing openvpn-2.3. Are you telling me that whoever wrote the programming code for openvpn-2.3.7.zip had omitted… (The forum thread is here.)
Introduction: This guide describes how to set up a bridge-mode OpenVPN server. OpenVPN enables you to create an SSL-based VPN (virtual private network) that supports both site-to-site and client-to-site tunnels.
OpenVPN client configuration for Windows, Linux, Mac OS X systems and Windows Mobile for Pocket PC. Pick your favorite and FlashRouters will upgrade the wireless router prior to… Being open source also allows the world community to contribute to the code base, often increasing innovation.
How to generate the security certificate for the client and server of OpenVPN. OpenVPN for Android is an open source client based on the open source OpenVPN project.
How to compile OpenVPN for Windows from source code: OpenVPN suggests cross-compilation, that is, compiling Windows executables with a Unix build toolchain. OpenVPN is based on the product OpenSSL, the main open source implementation of the SSL protocol.
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886133449.19/warc/CC-MAIN-20170824101532-20170824121532-00690.warc.gz
CC-MAIN-2017-34
2,904
28
https://www.oreilly.com/library/view/enterprise-j2metm-developing/0131405306/ch10.html
code
The Case for MOM
Introducing the JMS
Mobile JMS from iBus//Mobile
The IBM WebSphere MQ Everyplace
Mobile messaging applications have proven extremely successful in the consumer world to support flexible person-to-person communications. However, messaging is much more than interpersonal communications. The wide use of message-oriented middleware (MOM), such as the Java Messaging Service (JMS), has made messaging one of the most important person-to-machine or machine-to-machine integration schemes in modern enterprise applications. Compared with tightly integrated applications, messaging-based solutions are more reliable, flexible, and scalable. Those are critical advantages in the mobile enterprise world. ...
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524685.42/warc/CC-MAIN-20190716180842-20190716202842-00018.warc.gz
CC-MAIN-2019-30
717
5
https://golangrepo.com/repo/axelspringer-generator-go-lang-go-miscellaneous
code
A Yeoman Golang Generator
We are very sorry, Gophers, but other names for the generator were taken, so we chose go-lang. But we have gocreate as an alias.
We highly recommend using nvm (NVM) to manage your Node versions, and using the most recent versions. If you have dep installed for Go package management, the generator provides you with an option to initialize it.
First, you have to install yo (npm i -g yo), as for any Yeoman generator, and then the generator itself:
npm i -g generator-go-lang
Create your project directory:
mkdir $GOPATH/src/<username>/your-new_app && cd $_
We can highly recommend consulting the Yeoman Guide to write your own Yeoman generator. Most importantly, to use the generator locally, you have to npm link the generator.
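Putting the steps together, the whole flow looks roughly like this; it is assumed the generator is invoked as yo go-lang (with yo gocreate as the alias mentioned above), and <username> is a placeholder:
npm i -g yo generator-go-lang
mkdir $GOPATH/src/<username>/your-new_app && cd $_
yo go-lang        # or use the alias: yo gocreate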
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00745.warc.gz
CC-MAIN-2022-49
748
16
https://www.meetup.com/CassandraSF/events/134763522/
code
At this upcoming meetup, we're excited to have Alan Coleman, VP of Engineering at VigLink, presenting on his real-world Apache Cassandra use case.
At This Meetup You Will Learn:
• The VigLink + Cassandra real-world use case
• Using Cassandra's built-in multiple-datacenter replication to provide a high-volume, low-latency service
• Schema and process design, to ensure a reliable system at scale
• Approaches with Cassandra for state tracking, large-scale content, and high-volume/low-latency use cases
About VigLink + Cassandra
VigLink infers context from pages using NLP (Natural Language Processing) techniques, and matches commercial content to relevant products from a catalog of ~100M product offers on the web. We also model and estimate earnings to ensure that publishers are matched to the most lucrative offers for them.
VigLink uses Cassandra:
• As a data store for canonicalized content and state tracking
• As a store for high-volume, low-latency, replicated storage of page-relevant offers
Alan Coleman is VP of Engineering at VigLink, and has spent the last 25 years developing various types of software and leading engineering organizations. His last several companies (VigLink, Adchemy, and DemandTec) have all focused on the application of machine learning and statistics to large data sets, in industries ranging from retail price optimization to internet advertising. Other prior companies include Sybase, CrossWorlds, and Convergent Technologies. Alan has a Bachelor's Degree in Cultural Anthropology from Columbia University in New York.
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00455-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
1,571
11
https://www.nonarchitecture.eu/product/group-discount/
code
We will ask for more information during the submission process. The ORDER number you will receive with the first email is the code you must use to submit your project. The code is a five-digit order number, looking like #12345 or #98765. For group tickets, use the same code for all submissions. Please check the competition calendar. Once the submissions are open, each team of your group will have to fill out the submission form individually. Do not hesitate to join our Discord server if you have more questions! *PRICE DOES NOT INCLUDE VAT.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00598.warc.gz
CC-MAIN-2023-14
545
5