https://www.fx141.com/4541.html
code
Useful for those who use the DeMark TD lines in trading. It draws TD points, plots TD lines, calculates the current values of the TD lines, and calculates the targets. Inputs: Commen - display comments in the top left corner; TD - display the TD points; TD_Line - display the TD lines; Horiz_Line - display the current value of the TD lines as a horizontal line; TakeProf - display targets.
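The indicator's source isn't included, but the core step - finding TD points - is easy to sketch. A minimal Python sketch, assuming order-1 TD points (a bar whose high or low exceeds those of its immediate neighbours); the function name and the point order used by the actual indicator are assumptions:

```python
def td_points(highs, lows):
    """Find order-1 DeMark-style TD points: a supply point is a bar whose
    high exceeds the highs of its immediate neighbours; a demand point is
    a bar whose low is below the lows of its neighbours."""
    supply, demand = [], []
    for i in range(1, len(highs) - 1):
        if highs[i] > highs[i - 1] and highs[i] > highs[i + 1]:
            supply.append(i)
        if lows[i] < lows[i - 1] and lows[i] < lows[i + 1]:
            demand.append(i)
    return supply, demand
```

TD lines are then drawn through the two most recent supply (or demand) points.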
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203168.70/warc/CC-MAIN-20190324022143-20190324044143-00036.warc.gz
CC-MAIN-2019-13
374
6
https://hexdocs.pm/google_api_android_enterprise/GoogleApi.AndroidEnterprise.V1.Model.Device.html
code
GoogleApi.AndroidEnterprise.V1.Model.Device (google_api_android_enterprise v0.25.0). A Devices resource represents a mobile device managed by the EMM and belonging to a specific enterprise user. Attributes: androidId (default: nil) - The Google Play Services Android ID for the device, encoded as a lowercase hex string. For example, "123456789abcdef0". managementType (default: nil) - Identifies the extent to which the device is controlled by a managed Google Play EMM in various deployment configurations. Possible values include: "managedDevice", a device that has the EMM's device policy controller (DPC) as the device owner; "managedProfile", a device that has a profile managed by the DPC (the DPC is profile owner) in addition to a separate, personal profile that is unavailable to the DPC; "containerApp", no longer used (deprecated); "unmanagedProfile", a device that has been allowed (by the domain's admin, using the Admin Console to enable the privilege) to use managed Google Play, but the profile is itself not owned by a DPC. policy (default: nil) - The policy enforced on the device. report (default: nil) - The device report updated with the latest app states.
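As a rough illustration, a Device resource might look like this; the field names are assumed from the Android Enterprise Device model, and the values are invented for the example:

```python
# Sketch of a Device resource as a plain dict. Field names (androidId,
# managementType, policy, report) are assumed from the Android Enterprise
# model; the values are made up for illustration.
device = {
    "androidId": "123456789abcdef0",    # lowercase hex string
    "managementType": "managedDevice",  # the DPC is the device owner
    "policy": None,                     # policy enforced on the device
    "report": None,                     # latest app-state report
}

VALID_MANAGEMENT_TYPES = (
    "managedDevice", "managedProfile", "containerApp", "unmanagedProfile")
```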
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989914.60/warc/CC-MAIN-20210516201947-20210516231947-00435.warc.gz
CC-MAIN-2021-21
1,101
6
https://www.mail-archive.com/[email protected]/msg75512.html
code
First of all, I simply didn't realize that mouse drag'n'drop did not work in Lyx and thought this could maybe be a platform-specific issue, not a general one. On Fri, 2009-07-31 at 11:43 -0400, Michael Joyner: "Drag and drop" text is one of the most annoying things I have found. It is too easy to do it by accident, not notice, and output garbage for people to read. I can't imagine how one could mark some text with the mouse, hold-and-click it, and drag it around by accident. I think the discussion going on here is somewhat dogmatic. Many people use Ctrl+X and then Ctrl+V to shift text parts around. That's o.k. and that's what I do, as I have no other choice in Lyx to do this. My point is: Drag and drop is something many Mac users (I am one) and Windows users may be very used to in everyday work. So implementing this feature would make Lyx a little easier to work with when you are new to Lyx. Those who don't like drag and drop are not forced to use it; it would be just another editing option (maybe to be activated or deactivated via the GUI). Still, I don't see the problem in having this feature – if you want to use it, use it, if not, don't. I don't think this can create any trouble by accident. It certainly would be a problem if implementing this feature really costs a lot of time. I simply don't know this as I'm not a programmer. Another thing is that this is maybe not the most urgent feature on the Lyx-feature-wish-list. That's o.k. It was only a question. Still, voting for it is legitimate.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00184.warc.gz
CC-MAIN-2021-21
1,507
26
https://www.erdosinstitute.org/project-database/spring-2024/data-science-boot-camp/bodybuilding-contest-ranking
code
Bodybuilding contest ranking Jessica De Silva, Andrei Prokhorov Bodybuilding contest data is published on NPC News Online. The project goal is to use the Elo Rating System (which uses logistic regression) to rank competitors against one another. We can then see how well our rating system works in predicting the outcomes of each Olympia (like the Super Bowl for bodybuilding). I've already built a web scraper, so we only need to clean the data and apply the model. My web scraper also scrapes images of the scorecards, but I have found that the OCR methods aren't accurate enough to get the scorecard data. There's a lot more we can do with this data beyond the rating system, but these are some initial ideas.
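The project description doesn't show the model itself, but an Elo update step is short. A minimal Python sketch, assuming the standard chess parameters (a 400-point logistic scale and K=32); applying it to contest data would treat each pair of competitors' relative placings as a game:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update. `score_a` is 1.0 if A placed ahead of B, 0.0 if
    behind. The expected score is the logistic function of the rating gap."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new
```

Iterating this over every pairwise placement in every scorecard yields ratings that can then be scored against actual Olympia results.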
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818464.67/warc/CC-MAIN-20240423033153-20240423063153-00158.warc.gz
CC-MAIN-2024-18
712
3
http://slashdot.org/~hendrips
code
The risk-free (i.e. government-guaranteed) inflation-adjusted 30-year interest rate in the U.S. is about 1% at the moment. On the one hand, that seems depressingly low, and compared to historical rates it is. On the other hand, periods of low long-term real interest rates tend to be highly correlated with periods of social and political stability, so perhaps today's low interest rates are a price worth paying. If you are willing to accept a reasonable, but non-trivial, amount of risk, you could invest in the stock market. A 3.5% inflation-adjusted rate of return is actually a very solid guess about future long-term stock market returns. Of course, there is definitely a risk that your returns will end up lower - that's why the stock market is a higher-risk, higher-reward investment. Here's a useful rule of thumb for estimating inflation-adjusted stock market returns in developed nations over long periods of time (at least 20 years): Rate of return = Real economic growth rate + Dividend rate - Expense ratio - Dilution rate. The "dilution rate" is the rate at which your shares of stock are diluted by companies issuing new shares of stock, and the "expense ratio" is the proportion of your assets that are consumed by investment costs, usually in the form of transaction and recordkeeping costs incurred by your mutual fund. The dividend rate of the domestic stock market is currently about 2% per year, and the market's dilution rate seems to be around .2%. Putting that all together: 2%+2%-.5%-.2% = 3.3% estimated rate of return, which is almost exactly the rate cited by the grandparent. Keep in mind, though, that this is a very long term estimate; the stock market might go up or down 50% in a single year. Also keep in mind that there's a good chance that this estimate will be wrong - after all, the reason that the expected return is relatively high is that there's a reasonable but non-trivial risk that your rate of return will be much lower than expected, even in the long run.
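The rule of thumb above is simple enough to check numerically; a small Python sketch using the figures quoted in the text (all in percent per year):

```python
def expected_real_return(growth, dividend, expense, dilution):
    """Rule-of-thumb long-run real stock return, all inputs in percent:
    real growth + dividend yield - expense ratio - dilution rate."""
    return growth + dividend - expense - dilution

# Figures from the text: 2% real growth, 2% dividends, .5% and .2% drags.
estimate = expected_real_return(2.0, 2.0, 0.5, 0.2)  # about 3.3%
```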
Anyone who wants to become a finance nerd would do well to read William Bernstein's book The Intelligent Asset Allocator. The book explains the rationale behind this rule of thumb, and everything else you might ever want to know about estimating financial returns.
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936460472.17/warc/CC-MAIN-20150226074100-00167-ip-10-28-5-156.ec2.internal.warc.gz
CC-MAIN-2015-11
2,258
6
https://www.gamedeveloper.com/audio/opinion-specializations-still-matter
code
"I understand that Scrum has been applied mainly to software products and that the elimination of "specialties" means that the database programmer, UI programmer, and QA engineer should all be able to perform each other's roles equally. This is valid." Now I'm concerning myself with only the technical side of an agile team, but I've seen this raised in a number of different agile circles. In those cases, there seems to be the impression that swapping a database, physics, or audio developer with any other specialization like UI, animation, or graphics is valid, and that agile team members should be able to roll up their sleeves and perform the different roles with the same level of outcome. To me, this is emphasized in how the product backlog is often used: a priority- and risk-ordered document that doesn't take into account the skillset of the team that'll be working on the final product. Processes such as pair programming, constant refactoring, and code reviews (to name but a few) seem to be seen as ways to communicate not only intent and project information but also skillset and ability across an entire discipline. So What Do Specialists Bring? But we have specialist developers for a reason. They are great at what they do, they understand the area in which they work, and they know how to get the best results in the shortest amount of time. They have a passion for the area they are focusing on, which usually means they'll go a step further to research their area and keep up with developments which other developers may not have the time or the understanding to do.
Spreading your talent thin and assuming that people can fill each other's shoes leads to the following issues:
- You are not respecting the knowledge, skill, experience, and passion that a specialist can bring to their work, and as a result you are not respecting the developer themselves.
- You're reducing the impact these people can have on a team, and it's often the experienced specialists that inspire younger members of the team into an area they are interested in.
- The ability of those specialists to learn more about their area and pass that on to others is drastically reduced.
- The ability of the team to push its development boundaries will be indirectly reduced as everyone on the team aims for the 'generalist' role to fit in.
What About Pair Programming? Now I'm a massive fan of the various agile techniques out there. Pair programming is an excellent mentoring, development, and training tool, but it won't allow one developer to fit into the shoes of another. True, they will have a better understanding of the tools, pipeline, and systems being developed, which will allow them to fill in, but it won't transfer the amount of background experience the specialist has. The same goes for code reviews, constant refactoring, and feature discussions. It spreads the knowledge, which reduces the risk to the project should the specialist not be around when needed, but the core experience and drive that made the specialist who they are simply cannot be replaced by dropping in a new developer. But Everyone Does A Bit Of Something Every Once In A While? Of course, sometimes people do need to jump into another developer's shoes (illness, staff turnover, being hit by a bus, etc.), but this is not the same as expecting people to be able to fulfill each other's roles equally. We can take steps to decrease the impact this will have on the team using the processes mentioned above, but it will not allow those specialists to be interchanged as the project continues development.
We need specialists in any development field because it's these people that can push their respective fields in directions we might not even be able to imagine. By treating them as interchangeable, we might be gaining flexibility to schedule our staff, but we're losing something far more important and vital to a development team and the products they are creating. As I said to someone (in 140 characters or less, of course) when it was pointed out that people have done this, and even the author of the original post has done it (see the comments): "I'm sure he has done it, I've done similar, but it doesn't mean we did both with the skill of an expert of either." [This piece was reprinted from #AltDevBlogADay, a shared blog initiative started by @mike_acton devoted to giving game developers of all disciplines a place to motivate each other to write regularly about their personal game development passions.]
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100523.4/warc/CC-MAIN-20231204020432-20231204050432-00294.warc.gz
CC-MAIN-2023-50
4,503
23
http://powerhouse-band.com/2016/09/21/data-cabling-mistakes-you-are-likely-to-make-as-a-diy-installer/
code
Posted on: 21 September 2016 If you are planning to upgrade your data cables, one of the decisions you have to make is whether to handle it as a DIY project or hire a professional electrician. This is, of course, assuming that you have some data cabling skills. Before you go ahead with a DIY installation, however, make sure you won't be making these three mistakes that novices tend to make: Running Cables Near Noisy Fixtures. Any electrical device that emits electromagnetic signals can interfere with the signals along your data lines. Therefore, you need to ensure you do not run your data cables near such devices. Examples of devices that emit electromagnetic signals include telephone receivers, anything with an electric motor, network routers, electrical wires, and television receivers. This is not an exhaustive list of devices that may interfere with your data cables; if it uses electricity, it's good to assume that it may cause interference. Either run your data cables elsewhere or relocate the device causing the interference. The recommended distance between an electromagnetic device and the data cable depends on numerous factors, including the strength of the signals. For example, the minimum distance (that will prevent interference) from fluorescent lighting should be 5 inches. Alternatively, you can invest in a shielded data cable that is designed to block the interfering signals. Using the Wrong Data Cables. Just because a cable can carry data doesn't mean it is the right one for your needs. Data cables differ in which speeds (in megabits per second) they can handle reliably. For example, Cat6 Ethernet cable is more reliable for high-speed internet than Cat5 Ethernet cable, though the two look much the same and can easily be confused. Ignoring Local Cabling Codes. Both the national government and your local authority probably have some codes or ordinances on data cable installation.
Most people don't know this because a data cable doesn't look like a dangerous item that needs to be regulated. However, it is not the data but the cables and their accessories that need regulation. For example, since 2014 the National Electrical Code (NEC) requires that nonmetallic cable accessories (such as cable ties) in plenum spaces have low-smoke and low-heat-release properties. Plenum spaces are parts of a building that facilitate air circulation, such as the open space above the ceiling. DIY projects save money, but they can also waste resources. The trick is to identify what you can handle and leave the rest to the professionals (like those at Dolce Electric).
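As an illustration of the "right cable for the speed" point, here is a small Python sketch with approximate rated speeds for common categories; the figures are my assumption, not from the article, and Cat6 reaches 10 Gbit/s only over shorter runs of roughly 55 m:

```python
# Approximate rated speeds (Mbit/s) over a full 100 m run.
RATED_MBPS = {"Cat5": 100, "Cat5e": 1000, "Cat6": 1000, "Cat6a": 10000}

def cheapest_category(required_mbps):
    """Pick the lowest cable category that handles the required speed
    reliably over 100 m, or None if nothing in the table is fast enough."""
    for cat in ("Cat5", "Cat5e", "Cat6", "Cat6a"):
        if RATED_MBPS[cat] >= required_mbps:
            return cat
    return None
```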
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247508363.74/warc/CC-MAIN-20190221193026-20190221215026-00554.warc.gz
CC-MAIN-2019-09
2,614
10
https://sway.uservoice.com/forums/264674-sway-suggestion-box/suggestions/35331346-connect-sway-analytics-in-power-bi
code
Connect Sway Analytics in Power BI. I would like to create Power BI reports and dashboards using Sway Analytics as a data source. Ideally, I could create reports using analytics from Sways I have created and Sways for which I have author access. +3 PLEASE! Also, having the ability to extract the name of the Sway with the analytics connector would be greatly helpful! Currently, with Forms, you cannot extract the Form name very easily... Chris Mathias commented: Yes, this would be really helpful. Please could we have a custom connector for Sway to get the analytics data in Power BI? +2 Sway is a wonderful service, thank you! Following on the topic above, even just the possibility to export all the Sway data (Name, Authors, Analytics) to Excel would be highly appreciated. +1 for Power BI; we use Sway for a weekly company newsletter and it would be great to consolidate with the rest of our company dashboards.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362741.28/warc/CC-MAIN-20210301151825-20210301181825-00329.warc.gz
CC-MAIN-2021-10
897
8
https://forum.hawkscast.com/topic/51/aliens-ufos
code
In this topic.. Recently I was reading something interesting. To me at least. In the 1950s and 1960s aliens were thought of as roughly humanoid. After that they still were in sci-fi, but the educated talked knowledgeably about how unlikely that would be - that aliens would be utterly, well, alien to us. There would be no reason for them to be anything like us at all. Recently, though, there is a counter-argument. It rests on the well-known phenomenon of "convergent evolution".. that is, similar environmental pressures create similar results. For example, dolphins and sharks are far apart on the evolutionary tree.. but physically are quite similar. The eye has evolved independently many times. As has flight.. and each time in a similar way. It may be that intelligent life, following the same rules of evolution (some undiscovered), may lead to humanoid-type life.. maybe 4 limbs and a large brain plus walking erect are the natural results of certain evolutionary threads. I think Stephen J. Gould would disagree and I haven't decided my own opinion, but it's fun to think that if there is intelligent life out there it might be more similar to us than we might otherwise expect!
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203462.50/warc/CC-MAIN-20190324145706-20190324171706-00103.warc.gz
CC-MAIN-2019-13
1,179
4
https://www.stringfellow.com/2024/02/miami-managed-it-services/
code
Alexis in Florida gave us great feedback on our Miami Managed IT Service! Glad we could help Alexis! If you don’t feel this good about your IT support, reach out: we need to talk. #MiamiManagedITServices #HealthSafeIT #healthcareit #healthcare We support clients all over the US, from sea to shining sea, and we hope to hear from you soon about your IT problems so we can start to help!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.79/warc/CC-MAIN-20240227153053-20240227183053-00524.warc.gz
CC-MAIN-2024-10
388
4
https://www.experts-exchange.com/questions/26442145/Restict-USB-devices-and-Windows-7.html
code
We have a 2003 Active Directory domain. About 2/3 of our computers are XP and about 1/3 are Windows 7 machines. We have a GPO that restricted USB devices by denying the System and Users groups permissions to USBSTOR.INF and the .PNF. This worked great on XP, but we are finding it doesn't work in Windows 7. The permissions that were set via the GPO don't even replicate to the Windows 7 machines. For a time there I couldn't view the permissions on the INF folder. From my research I found the Administrators group has to be the owner of the file instead of TrustedInstaller. Using the takeown /f /r /a command I've taken ownership of the WINDOWS\INF folder. My permission changes from the GPO are now set on the machine, but I can still plug in USB devices. I make sure they are removed using devmgmt.msc, but when I log on as a regular user I can still install a USB device. I've tried adding and denying permissions for the Authenticated Users and Domain Users groups, but this doesn't make any difference.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863489.85/warc/CC-MAIN-20180620065936-20180620085936-00253.warc.gz
CC-MAIN-2018-26
986
3
https://cigarboxnation.com/profiles/profile/show?id=marshallstapleton
code
I Heard that Little Thing on the link you sent... I have a TinJo I named ... "Simplicity" A 2 string the Tins I think are my FAV ... I think they are at least easier to construct but I digress Here on CBG Nation. If Ya Wanna let her go Figure the shipping and all that to Georgia. USA. I'll try to Accommodate.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144722.77/warc/CC-MAIN-20200220100914-20200220130914-00381.warc.gz
CC-MAIN-2020-10
310
7
https://sfb1102.uni-saarland.de/publication/the-frequency-of-rapid-pupil-dilations-as-a-measure-of-linguistic-processing-difficulty/
code
Demberg, Vera; Sayeed, Asad: The Frequency of Rapid Pupil Dilations as a Measure of Linguistic Processing Difficulty. In: Stamatakis, Emmanuel Andreas (Ed.): PLOS ONE, 11, 2016. While it has long been known that the pupil reacts to cognitive load, pupil size has received little attention in cognitive research because of its long latency and the difficulty of separating effects of cognitive load from the light reflex or effects due to eye movements. A novel measure, the Index of Cognitive Activity (ICA), relates cognitive effort to the frequency of small rapid dilations of the pupil. We report here on a total of seven experiments which test whether the ICA reliably indexes linguistically induced cognitive load: three experiments in reading (a manipulation of grammatical gender match / mismatch, an experiment of semantic fit, and an experiment comparing locally ambiguous subject versus object relative clauses, all in German), three dual-task experiments with simultaneous driving and spoken language comprehension (using the same manipulations as in the single-task reading experiments), and a visual world experiment comparing the processing of causal versus concessive discourse markers. These experiments are the first to investigate the effect and time course of the ICA in language processing. All of our experiments support the idea that the ICA indexes linguistic processing difficulty. The effects of our linguistic manipulations on the ICA are consistent for reading and auditory presentation. Furthermore, our experiments show that the ICA allows for usage within a multi-task paradigm. Its robustness with respect to eye movements means that it is a valid measure of processing difficulty for usage within the visual world paradigm, which will allow researchers to assess both visual attention and processing difficulty at the same time, using an eye-tracker.
We argue that the ICA is indicative of activity in the locus caeruleus area of the brain stem, which has recently also been linked to P600 effects observed in psycholinguistic EEG experiments.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816832.57/warc/CC-MAIN-20240413180040-20240413210040-00620.warc.gz
CC-MAIN-2024-18
2,069
4
https://www.theonlyanimal.com/news
code
The Only Animal is developing SLIME, a new play from Bryony Lavery. Directed by Kendra Fanconi, SLIME is set just slightly in the future, at a climate change conference where student interns act as translators for the marine animals who are participating in the conference. Seven non-equity roles for actors 19-25, or who can play those ages. Please do not apply if you are a member of a union, Equity or UBCP. We are interested in hiring from underrepresented communities and casting for cultural diversity. The Only Animal invites applications from trans or gender non-binary actors for all roles.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738573.99/warc/CC-MAIN-20200809192123-20200809222123-00213.warc.gz
CC-MAIN-2020-34
588
3
https://life-photostudio.ru/updating-system-specification-24082.html
code
Updating system specification The next step will be to get the drafts adopted by an IETF Working Group. In the meantime, publication of Internet-Draft documents can be tracked through the IETF: Internet-Drafts expire after six months, so our goal is to publish often enough to always have a set of unexpired drafts available. This specification uses the underscore as a prefix to disambiguate reserved names from other names; for example, it is used as a prefix to operation names that are RPC-like additions to the base API defined either by this specification or by implementers. The Service Base URL is the address where all of the resources defined by this interface are found. Applications claiming conformance to this framework claim to be conformant to "RESTful FHIR" (see Conformance). FHIR is described as a 'RESTful' specification based on common industry-level use of the term REST. This may be considered a violation of REST principles but is key to ensuring consistent interoperability across diverse systems. Each "resource type" has the same set of interactions defined that can be used to manage the resources in a highly granular fashion. There may be brief gaps as we wrap up each draft and finalize the text. The intention, particularly for vocabularies such as validation which have been widely implemented, is to remain as compatible as possible from draft to draft. However, these are still drafts, and given a clear enough need validated with the user community, major changes can occur.
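As a small illustration of the Service Base URL idea: in RESTful FHIR, an individual resource is read with GET [base]/[type]/[id]. A Python sketch; the base URL and resource id below are invented for the example:

```python
def read_url(service_base, resource_type, resource_id):
    """Build the URL for a FHIR 'read' interaction: GET [base]/[type]/[id].
    Every resource type supports the same set of interactions at the same
    place relative to the Service Base URL."""
    return f"{service_base}/{resource_type}/{resource_id}"

url = read_url("https://example.org/fhir", "Patient", "123")
```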
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00095.warc.gz
CC-MAIN-2020-29
1,582
12
https://answers.yahoo.com/question/index?qid=20130124081805AAtj0jh
code
I think there might be something wrong with my laptop. A while after I got my laptop, my Internet Explorer started to get slower and slower, and I tried to find things that would help with that, and all I found was stuff you had to register for, which then comes with a cost, and I found out paying for stuff to try and get rid of malicious software is a scam. My PC already came with Windows Defender, but it's not getting rid of everything. Pretty soon I got Google Chrome, but soon after that I got a system message saying that "Internet Explorer has gone corrupted". When I first got my laptop I got this update called "HP Quick Web configuration tool", which is what I'm using right now. So I use Google Chrome, but then it started to get slower, and I found "SpeedItUp Free" and I still really don't see a speed difference. Now every time I open Google Chrome, at the top where the tab is, it will say "Untitled" unless I'm hooked up to the DSL we have. I'm not sure how I'm going to be able to fix this, because if they're down you can't save a thing on the HP Quick Web configuration tool unless you've got a device to connect to it, which, I have two MP3s. I just don't know, can you help me out?
Sorry about the big block of words. Let me try to make this more understandable. Anybody who has the "HP Quick Web Configuration Tool" would know you can't save anything from the internet, and that includes downloading and installing. What I was trying to say is: my Google Chrome was acting funny. I mean, just seconds after I got done using it (no, I did not close anything important), I went to get back on and it was having issues. I do not know if I have Malwarebytes. I'm using a Windows 7 Starter edition laptop. I just updated my HP Assistant for 2013 about 3 days ago, thinking that maybe that's why it wouldn't go online. Wrong-o, still not going through. The best way I was able to use Google Chrome was on DSL, and I can't download anything on there if it won't even go through, but that does not mean this problem is un-fixable. Got any suggestions? Internet connection is not a problem, I can find good spots for that. - Sara (Lv 7), 8 years ago: Aaron, I read all of that big block of text and find that I really don't understand. It isn't structured very well and goes in circles. So I have questions. When you ask questions here on Yahoo about computers it is very helpful to tell us what kind of computer you have, along with the processor type, how much RAM you have, and the operating system - such as Windows XP, or 7, or whatever. That is very important. So... answer all of that. And - how many windows do you open at one time? How many programs are you running at the same time? If you hit Ctrl, Alt, and Delete at the same time and open the Task Manager, it will show you how many programs are running. When you install programs like QuickTime or DivX or BitComet or whatever, they will automatically start up when you start your computer and run in the background to be 'ready' when you want to use them. Shut them down. Let me be clear about something, though - do NOT end the task on something if you don't know what it is.
When you end the task on unnecessary stuff - does the computer speed up? Do you have Malwarebytes? If not - download it and run a scan. Get the free version. And finally - big blocks of text are bad. Use paragraphs for different subjects or points, OK? - tsistinas (Lv 4), 4 years ago: Well, I don't know what to tell you. First of all, NEVER clean a laptop's screen with anything other than a wet paper towel! The part of the screen that is messed up has had some of the coating burned off by the harsh chemicals in the Windex.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902745.75/warc/CC-MAIN-20201029040021-20201029070021-00031.warc.gz
CC-MAIN-2020-45
3,785
12
http://fsharpactuary.blogspot.com/2014/03/smith-wilson-and-deedle.html
code
A Diversion
In this post, I am taking a brief detour into the asset side of actuarial modelling. This is definitely not my area of expertise, but fortunately I know some experts who are happy to give me a hand. For this post I am going to look at the Smith-Wilson method of curve fitting. This was the method preferred under QIS5 for interpolating and extrapolating interest rate curves - see http://eiopa.europa.eu/fileadmin/tx_dam/files/consultations/QIS/QIS5/ceiops-paper-extrapolation-risk-free-rates_en-20100802.pdf. (This method was proposed in one of the earlier publications jointly authored by the amazing Andrew Smith - http://www.theactuary.com/features/2012/12/dare-to-be-different/). I have had a lot of help from an ex-colleague and expert in this field, Phil Joubert. Phil writes a very interesting blog at http://www.not-normal-consulting.co.uk/. He has written a CRAN R package to carry out this curve fitting, which is documented in two of his blogs.
The R Package
The R package is called "SmithWilsonYieldCurve". Once installed, you have access to a number of functions. The most straightforward to use is fFitSmithWilsonYieldCurveToInstruments. This is described as "A convenience function that takes a dataframe containing market instrument data as type, tenor, frequency and rate. It extracts the required vectors and matrices and then calls fFitSmithWilsonYieldCurve." To illustrate usage, I will use code that replicates results from a QIS5 example - see page 24 of the paper mentioned above. It is very straightforward to do this in R Studio, using the R code shown below:

library( "SmithWilsonYieldCurve" )
InstrumentSet1 <- read.csv("InstrumentSet1.csv")
ufr <- log( 1 + 0.042 )
alpha <- 0.1
Curve <- fFitSmithWilsonYieldCurveToInstruments(InstrumentSet1, ufr, alpha )

This displays this curve:
F# using Deedle
In this post, I will limit myself to retaining the use of R for the calculations and plotting (I will revisit this in a later post).
I will however use F# Interactive to host the code and use F# to call into R using the R Type Provider. I will also use Deedle, which provides a dataframe capability that is directly usable with R. To access the supporting libraries it is easiest to use NuGet. You should select the Deedle package with the R plugin. (Note that there seems to be an issue with the NuGet package - so you might need to make some manual changes - see this StackOverflow question). Once you have this installed, you can reference the relevant libraries and very simply replicate the R calling code:

1: #I "../packages/Deedle.0.9.12/"
2: #I "../packages/RProvider.1.0.5/"
3: #load "RProvider.fsx"
4: #load "Deedle.fsx"
5: open RProvider
6: open RDotNet
7: open Deedle
8: open RProvider.graphics
9: open RProvider.SmithWilsonYieldCurve
10: open RProvider.``base``
11:
12:
13: let InstrumentSet1 = Frame.ReadCsv("c:/temp/InstrumentSet1.csv")
14: let ufr = log( 1.0 + 0.042 )
15: let alpha = 0.1
16: let Curve = R.fFitSmithWilsonYieldCurveToInstruments(InstrumentSet1, ufr, alpha )
17: R.plot(Curve)
18: let Pfn = Curve.AsList().[0].AsFunction()
19: let ans = R.sapply(R.c(4), Pfn)

(A copy of the InstrumentSet1.csv file can be downloaded from bitbucket.) Lines 13 to 17 are virtually identical to the R code and generate exactly the same effect - R is called and displays the identical curve. When you run this in F# Interactive it also displays the following output:

val InstrumentSet1 : Deedle.Frame<int,string> =
     Type Tenor Rate  Frequency
0 -> SWAP 1     0.01  1
1 -> SWAP 2     0.02  1
2 -> SWAP 3     0.026 1
3 -> SWAP 5     0.034 1
val ufr : float = 0.04114194333
val alpha : float = 0.1
val Curve : RDotNet.SymbolicExpression =
  fBase(t) + t(KernelWeights) %*% fCompoundKernel(t)
  CashflowMatrix %*% fKernel(t, TimesVector)
  "SmithWilsonYieldCurve" "YieldCurve"

Notice that the dataframe InstrumentSet1 is displayed in a friendly format, followed by the other function parameters: ufr and alpha.
This is followed by a full display of the Curve object. This is a rather complex object, which is documented as "Objects of class SmithWilsonYieldCurve are a list, the first element of which is a function P(t), which returns the zero coupon bond price of the fitted curve at time t." To obtain the test value for P in F#, we therefore, in line 18, convert the R SymbolicExpression to a List, take the first element and then convert this to a Function. We can then call this function, in line 19, using the R high-level function sapply. In F# Interactive we then get:

val ans : SymbolicExpression = 0.8850041

In the next post, I will explore converting the underlying R code to F#.
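As a postscript, the maths behind P(t) is compact enough to sketch outside R. The following Python sketch is my own illustration, not part of the package: it fits zero-coupon bond prices directly (the package also handles swap cashflows, which this ignores), and the maturities and prices used below are made up. It builds the symmetric Wilson kernel, solves for the kernel weights, and returns the discount function P(t):

```python
import numpy as np

UFR, ALPHA = np.log(1 + 0.042), 0.1  # parameters from the QIS5 example above

def wilson(t, u):
    """Symmetric Wilson kernel W(t, u)."""
    tmin, tmax = np.minimum(t, u), np.maximum(t, u)
    return np.exp(-UFR * (t + u)) * (
        ALPHA * tmin
        - 0.5 * np.exp(-ALPHA * tmax) * (np.exp(ALPHA * tmin) - np.exp(-ALPHA * tmin))
    )

def fit_smith_wilson(times, prices):
    """Solve W zeta = m - mu for the kernel weights zeta, then return
    the discount function P(t) = exp(-ufr*t) + sum_i zeta_i * W(t, u_i)."""
    u = np.asarray(times, dtype=float)
    m = np.asarray(prices, dtype=float)
    W = wilson(u[:, None], u[None, :])              # kernel matrix on the input grid
    zeta = np.linalg.solve(W, m - np.exp(-UFR * u))  # kernel weights
    return lambda t: np.exp(-UFR * t) + wilson(np.asarray(t, dtype=float)[..., None], u) @ zeta
```

By construction the fitted P reprices the inputs exactly and P(0) = 1, while extrapolated prices tend towards the exp(-ufr*t) implied by the ultimate forward rate.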
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662893.38/warc/CC-MAIN-20190119095153-20190119121153-00631.warc.gz
CC-MAIN-2019-04
4,598
38
http://nadvi.blogspot.com/2013/03/
code
To optimize the I/O performance of a database, it's always a good idea to keep the LOG file and DATA file on separate physical drives. The transaction log file records every data change and DML transaction executed in the database. Writing to the transaction log file is sequential in nature, as compared to the database files, which are typically random I/O. As such, placing the log file on a separate physical disk from the database will allow the disk to work sequentially and perform optimally. I'm going to show a demonstration of moving LOG files to another drive.

1: Capture database and transaction log file information

2: Set database to single user mode and detach database

Now the database is detached. Once the detach process is completed, you can copy and paste the transaction log file to its new location and then delete the old transaction log file via Windows Explorer. Once this is completed, we can attach the database with the SQL Server log file at its new location with the following script:

3: Attach database with log file at new location

4: Validate the log move

After the final attach command, the transaction log file has been moved to the new location and the database is operational with the log file at that location. Verifying the new database transaction log location can be accomplished by re-running the file information query from step 1.
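The T-SQL for the four steps did not survive in this copy of the post. The following sketch shows the typical commands for this procedure; the database name MyDB and the drive letters are illustrative, not from the original article:

```sql
-- 1: Capture database and transaction log file information
USE MyDB;
EXEC sp_helpfile;

-- 2: Set database to single-user mode and detach it
USE master;
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db @dbname = N'MyDB';

-- (move MyDB_log.ldf to the new drive via Windows Explorer)

-- 3: Attach the database with the log file at its new location
CREATE DATABASE MyDB ON
    (FILENAME = N'C:\Data\MyDB.mdf'),
    (FILENAME = N'E:\Logs\MyDB_log.ldf')
FOR ATTACH;

-- 4: Validate the log move by re-checking the file locations
USE MyDB;
EXEC sp_helpfile;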
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141716970.77/warc/CC-MAIN-20201202205758-20201202235758-00426.warc.gz
CC-MAIN-2020-50
1,298
9
https://bowtieddevil.com/post/take-control-of-passwords-bitwarden/
code
Do you have a password you use everywhere? Be honest, we’re friends here. Good password security is critical, but few take it seriously. I understand why: life has become too complex to keep track of everything inside your head.

Security / Convenience — Pick One!

Consider the Github list of the top 100 most commonly used passwords, sorted by frequency. Here are the top 10:

“Dragon” appearing as #10 surprised me, but the rest are sadly predictable and fall into an obvious pattern — quick & easy keyboard entry. The problem is clear: people with weak passwords are likely to maintain that habit forever. Most online accounts take an email address as a username identifier, so any security breach that reveals an email address + weak password pair is ripe for future abuse. I won’t mince words: if you’re using any of these passwords, you should feel terrible about your choices and immediately take steps to fix it. Luckily for us, modern cryptography has been developed to be extremely usable and secure with some minor up-front work. Enter the password manager. Many browsers offer a password vault feature, but I recommend disconnecting your password store from your daily browser, simply because the separation makes it easier to switch either your browser or your password manager without significant heartburn. To pay you back for the extra effort of setting up a password manager, it will make it very convenient to keep your passwords synced across multiple devices, automatically fill login and other information on websites, and check your passwords for weakness or prior exposure and data leaks.

Bitwarden — The Best 3rd Party Solution

I use Bitwarden (sort of, more on this later). If you have no desire to host your own setup, I recommend creating an account with Bitwarden and using their plugin. Then simply create a strong master password and generate random passwords or pass phrases for each website you visit. Easy!
Bitwarden provides desktop clients, mobile clients and browser plugins. How Does Bitwarden Work? If you want to read the technical specifications, please visit Bitwarden’s own Encryption page. All password data is encrypted prior to storage, and all encryption/decryption of that data occurs on the local client. Why Use Bitwarden? Bitwarden offers some useful features that make my life easy. The primary feature is secure storage and random generation of passwords, along with URL detection to keep me from accidentally exposing a password to a look-alike phishing website. In addition to storing secure passwords, it can also store files and notes securely, and can manage shared items between multiple users. My wife and I use Bitwarden and maintain login credentials for important accounts. Password Manager Best Practices First, the point of a password manager is to type in a secure “master” password, and then to have that unlock all of your other passwords. Since you only have to remember one password, I recommend making it a good one. Visit the GRC Password Haystack Checker to experiment with various master passwords and their associated brute force timing requirements. The best password is (in general) a long one, so make it count. A shorter password can be more secure if it contains mixed alphanumeric types, but the general rule of thumb is longer is better. Once you have a secure master password, start replacing all of your short, weak passwords with randomly generated long passwords that are stored in your manager. I have roughly 500 passwords stored, and I don’t know any of them. They are all randomly generated with as many characters as the particular website will allow. Bitwarden has a 128 character password limit, so I always try that first. I also rotate my master password often to mitigate keylogger attacks. I also have a separate account with a separate password for any work computers, since I don’t trust that a keylogger is not installed. 
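The "longer is better" rule of thumb is easy to make concrete. Here is a rough Python sketch of a haystack-style search-space calculation in the spirit of the GRC checker linked above (my own simplification, not GRC's actual tool): assume an attacker must try every string over the character classes you used, up to your password's length.

```python
import string

# Character classes an attacker would have to include in a brute-force run.
CLASSES = [string.ascii_lowercase, string.ascii_uppercase,
           string.digits, string.punctuation]

def search_space(password):
    """Number of candidate strings up to len(password) over the
    alphabet formed by the character classes the password uses."""
    alphabet = sum(len(c) for c in CLASSES
                   if any(ch in c for ch in password))
    return sum(alphabet ** n for n in range(1, len(password) + 1))
```

A long lowercase-only passphrase easily beats a short mixed-class password here: `search_space("correcthorse")` dwarfs `search_space("P@ss1")`, even though the latter draws on four character classes.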
Self Hosted Solutions

If you’d rather not involve Bitwarden’s servers at all, you can self-host it. They publish a set of Docker images for the whole Bitwarden stack HERE, though I think their installation stack is a bit overblown for a single user or small team. An enterprising developer has started a project known as Vaultwarden, rewriting a Bitwarden API-compatible server in Rust. The advantage of this approach is that the official Bitwarden clients are still used; only the backend is changed. Rust is known for being quite efficient and fast, so the resulting speed and memory usage is very compelling (30MB vs 1+ GB for the official install). Using the Vaultwarden Docker page, we can formulate a docker-compose.yml file to bring up our service.

version: "2"

services:
  bitwarden:
    image: vaultwarden/server:alpine
    container_name: bitwarden
    restart: unless-stopped
    environment:
      - WEBSOCKET_ENABLED=true
    volumes:
      - data:/data
    labels:
      - traefik.enable=true
      - traefik.http.routers.bitwarden-ui.entrypoints=websecure
      - traefik.http.routers.bitwarden-ui.rule=Host(`bw.example.com`)
      - traefik.http.routers.bitwarden-ui.service=bitwarden-ui
      - traefik.http.services.bitwarden-ui.loadbalancer.server.port=80
      - traefik.http.routers.bitwarden-websocket.entrypoints=websecure
      - traefik.http.routers.bitwarden-websocket.service=bitwarden-websocket
      - traefik.http.routers.bitwarden-websocket.rule=Host(`bw.example.com`) && Path(`/notifications/hub`)
      - traefik.http.services.bitwarden-websocket.loadbalancer.server.port=3012

volumes:
  data:

networks:
  default:
    name: bitwarden

A few notes on this one:
- The docker-compose.yml assumes that you have a Traefik reverse proxy running. If so, make sure to add that reverse proxy to the bitwarden network.
- The Vaultwarden container exposes two services. One is the web UI, and the other is a WebSocket that desktop clients and browser plugins read to be automatically notified of updated passwords. The Traefik labels above will split the incoming requests appropriately.
- The default setup uses SQLite for a database. If you want to use a dedicated database server instead, read the wiki entries for MariaDB/MySQL and Postgres.

Bring the stack up with docker-compose up -d and visit the appropriate URL to create your account login.

Browser Plugin Setup

Once you have the service running, it’s simple to configure the browser plugin or desktop client to use your personal instance instead of the official service. Simply click the gear icon before inputting your login information and fill in the URL for your Vaultwarden instance.

Thoughts for Security

If you’d like to take an extra security step, you could connect your server and clients together using an overlay network like Tailscale, Nebula, or ZeroTier and communicate using an internal IP. This way, your password manager would be even more secure against brute force and keylogger attacks. Now get out there and stop using terrible passwords!
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.40/warc/CC-MAIN-20221202183117-20221202213117-00335.warc.gz
CC-MAIN-2022-49
6,856
43
https://docs.digit.org/v/v2.3/digit-2.3-release-notes
code
The DIGIT 2.3 release offers new modules, a few functional changes, and non-functional changes.
Functional: Faecal Sludge Management module, Bill Amendment module, and enhancements in HRMS.
Non-functional: security fixes.
- Faecal Sludge Management (FSM)
- Multi-tenancy support while creating an Employee
- PGR reports and enhancements
A few of the security fixes:
- Building plan approval system
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057403.84/warc/CC-MAIN-20210922223752-20210923013752-00079.warc.gz
CC-MAIN-2021-39
498
9
https://grady.dev/project/website_v2/
code
I took an architecture class in my senior year at Brandeis that awoke an aesthetic sense I had not previously identified or listened to. In particular, the De Stijl primitive of intersecting planes struck a resonant chord in my sense of beauty. That new aesthetic, in combination with “draw me CSS”, was the inspiration for V2 of my website - meant to resemble something in between a circuit board, the Marauder’s Map, and the De Stijl classic “Composition in black, red, yellow and blue”. Notable: the code that generates the site went through a series of refactors to try to make it more consistent across browsers and screen sizes. Underlying these problems was the fact that draw-me-css is too generalizable to be able to produce elegant code.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711150.61/warc/CC-MAIN-20221207053157-20221207083157-00060.warc.gz
CC-MAIN-2022-49
753
3
http://www.ask.com/answers/288366021/i-kill-only-when-absolutely-necessary-how-can-i-move-an-ant-hill-without-killing-the-ants?qsrc=3111
code
I kill only when absolutely necessary. How can I move an ant hill without killing the ants?

One way: Oh wait, you can't...

You can't, short of digging it up unbroken. Ants will defend their own to their deaths.

Put it in a bucket. The anthill is ruined, but the ants are fine & rebuild it fast.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163040059/warc/CC-MAIN-20131204131720-00067-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
293
4
https://onlinecoursedownload.com/100-python-exercises-evaluate-and-improve-your-skills/
code
What you'll learn:
- Solve 100 scored Python assignments ranging from beginner to expert levels.
- Know your Python skill level via the collected points.
- Solve assignments in many areas: data analysis, image processing, visualizations, web apps, and much more.
- Compare your solutions to the correct Python solutions for every exercise. You will have the Teacher’s Edition!
- You will know the category level of your Python programming skills by the end of the course.
- Have unlimited access to your instructor: Ardit Sulce

Requirements:
- A working computer (Windows, Mac, or Linux).
- Basic knowledge of Python.

Unlike other online-video courses that guide you through the process of how to do something, this course will ask you to solve 100 different Python assignments on your own. This practice will improve and solidify your Python-coding skills, and you will be the one to teach yourself how to write Python code the hard way. The course works best for those people who already know Python basics such as variables, functions, and loops. If you don’t know Python basics, please take a Python beginners course first. This course is also suitable for intermediate Python programmers because the exercises range progressively from easy to difficult. As you advance in the course, you will solve 100 Python assignments. After each assignment, you can see the assignment solution and its explanation. This “answer key” helps you test your solution and learn new skills by examining the instructor’s solution. Each exercise is scored, so at the course’s end you will have a “total points” number that reports which Python Skills Category Level you are at. The “100 Exercises” challenge you: to build specific programs for particular actions; to fix bugs in existing programs; and to make improvements to existing code. The variety of exercises ensures you can comfortably manage different real-world programming scenarios. This course will also exponentially increase your confidence when applying for jobs.
The skills you learn in this class are common questions in programming job interviews. You will be prepared!

Who this course is for:
People who know Python basics, but lack the confidence to solve coding problems on their own.

Course Size Details:
- 2 hours on-demand video
- 48 downloadable resources
- Full lifetime access
- Access on mobile and TV
- Certificate of completion
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817463.60/warc/CC-MAIN-20240419234422-20240420024422-00793.warc.gz
CC-MAIN-2024-18
2,685
32
https://techgenix.com/managing-certificates-exchange-server-2013-part6/
code
If you would like to read the other parts in this article series please go to:
- Managing Certificates in Exchange Server 2013 (Part 1)
- Managing Certificates in Exchange Server 2013 (Part 2)
- Managing Certificates in Exchange Server 2013 (Part 3)
- Managing Certificates in Exchange Server 2013 (Part 4)
- Managing Certificates in Exchange Server 2013 (Part 5)

In our series we went over the creation process of the certificate, and how to configure Exchange Server to take full advantage of the new certificate using just a couple of names. In this article, we are going to increase the complexity of the existing environment by adding a new Exchange Server to the mix and configuring a DAG between those two servers. All changes required to add this new server and configure fault tolerance on the certificate side will be covered in this article. In Figure 01, we can see the proposed changes with the new server (BsAEX02) and the names that are already in the certificate that will be shared between those two servers.

What does this change for the public certificate? Well, it is not a matter of a completely new design, but we need to prepare well for such a change. For any new server we need to perform the following main tasks:
- Configure Autodiscover on the new server
- Plan the DNS changes
- Export the certificate from an existing server
- Import the certificate on the new Exchange Server
- Configure Exchange services:
  - Outlook Anywhere
  - Outlook External URLs
  - Outlook Internal URLs
- Change DNS
- Test the new solution

We had this issue at the beginning of this series when we changed the certificate, and any new Exchange Server 2013 will provide its default address for the SCP, which is similar to this: https://<Server-FQDN>/Autodiscover/Autodiscover.xml. That may create certificate warnings on your clients if a new client gets that information from the new server.
To reduce the chances of that happening, after finishing the installation of a new server, just run the following cmdlet:

Set-ClientAccessServer –Identity <New-Server-Name> -AutoDiscoverServiceInternalUri https://Autodiscover.AndersonPatricio.info/Autodiscover/Autodiscover.xml

where AndersonPatricio.info should be replaced by your domain. When running Get-ClientAccessServer | ft Identity,*Uri –AutoSize the output should be similar to the one shown in Figure 02. There is no requirement to have the certificate installed and configured to run the cmdlet above. The cmdlet just updates the SCP object for this server to send the same information that has been configured on all production servers. When this new server is moved to production by changing the DNS settings, the certificate will become a requirement.

Planning the DNS Changes…

It all boils down to what kind of solution our company will use to load balance the traffic among the servers. Since we have only two servers, we have a couple of options, as follows:
- Add a load balancer in front of this new DAG; that device will be responsible for balancing the traffic among those servers
- Use DNS round-robin; when a server goes down the client will have a brief disconnection and after a few seconds the connection will be re-established

The recommended and neat solution is the load balancer, but some companies can live with a failure lasting a few seconds, and that is the approach we will be working with in this article. If your company decided to go for a load balancer, then in theory, after going through steps 2 to 5 from the previous list, you just need to point your two names (Autodiscover and webmail) to the VIP (Virtual IP) of the load balancer and all your clients will be taking advantage of the load balancer.
If you decided to use DNS, the process is simple: just create a second A record for both Autodiscover.AndersonPatricio.info and webmail.AndersonPatricio.info on your internal DNS pointing to the new Exchange Server and you will have the fault tolerance in place; however, we will go over more details a little further on.

Managing a Public Certificate among Exchange Servers

In this section we are going over the process to export and import certificates in Exchange Server 2013. First, log on to the Exchange Admin Center, click on servers and then certificates. Select the first server and a list with all certificates will be displayed; click on the public certificate that is in use, then click … (more options) and Export Exchange Certificate (Figure 03). On the new page, we are going to use the same shared folder that we created in the third article of this series (ExchUtil$), and in the same field we are going to specify a name for the exported certificate and a password, as shown in Figure 04. The result of this process is a new file created in the shared folder, and we can use the file to import the public certificate on different servers. Time to import the certificate to the new server (BsAEX02.apatricio.local): click … and then Import Exchange Certificate, as shown in Figure 05. On the new page we need to specify the location of the shared folder holding that initial exported certificate and the same password used in that process; after that click Next (Figure 06). On the following page, we can click Add (+ icon) and add one or more servers that will have this certificate imported and installed locally. In our scenario, we are going to add just the one new server and then click finish (Figure 07).

Configuring the Certificate on the new server…

Now that we have the certificate up and running on the new server, we need to perform some of the tasks that we have already done in the previous articles.
In order to refresh your memory, we listed the steps and the article in which we covered each topic, as follows:
- Configure Outlook Anywhere (we covered this topic in the fourth article of this series)
- Configure External URLs (we covered this topic in the fifth article of this series)
- Configure Internal URLs (we covered this topic in the fifth article of this series)

This is the easy part… initially we had BsAEX01 (IP 10.60.99.225) in the environment and all clients were using it just fine. After introducing BsAEX02 (IP 10.60.99.227) we went over the previous steps and we configured the certificate and URLs for that new server, which means that the new server is ready to move into production. In order to move this new server into production we need to create another set of autodiscover and webmail entries in the internal DNS pointing to the new server. The result of this operation will be similar to Figure 08.

Testing the solution…

We have an Outlook client connected to the system just fine. Since we are sharing the same name with both servers, we don’t know for certain which server the workstation is communicating with; to identify it we can run netstat –an | find ":443". The results can be seen in Figure 09, and it is clear that the workstation is connected to 10.60.99.227, which means the BsAEX02 server at this moment. In order to test it, we are going to bring down the server BsAEX02 to test our fault-tolerant solution (logged on to the server, just type Stop-Computer in PowerShell to shut it down). Our server went down and with it the connection of the Outlook client, as shown in Figure 10. However, that status will stay like that until the client times out; after that a new connection will be established with the remaining operational server. We can see the SYN_SENT when the connection was lost, and then after a few seconds I ran the same command and the connection was established with the remaining operational server.
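This manual check can also be scripted. The helper below is my own illustration, not from the article: it resolves every A record behind a round-robin name and reports the first address that still accepts TCP connections on the chosen port, which is essentially what we verified by hand with netstat.

```python
import socket

def active_endpoint(host, port=443, timeout=5.0):
    """Return the first resolved address for `host` that accepts a TCP
    connection on `port`, or None if none of them do."""
    addrs = sorted({info[4][0] for info in
                    socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)})
    for addr in addrs:
        try:
            # create_connection completes the TCP handshake, then we close.
            with socket.create_connection((addr, port), timeout=timeout):
                return addr
        except OSError:
            continue  # this server is down; try the next record
    return None
```

Against a round-robin name like webmail.AndersonPatricio.info, this would report whichever of the two server IPs is currently reachable.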
Then the Outlook client just went back online, as shown in Figure 11. In this final article of our series, we covered the process to manage the certificate among Exchange Servers to support high availability and fault tolerance scenarios. If we look back at our 6 (six) articles, we started with a simple environment using default settings and designed public certificates; from that point on we installed the certificate, configured all the main Exchange services for a single server, and afterwards we stretched the same configuration to a highly available scenario. Is that it for certificates in Exchange Server 2013? Definitely not; we only went over the main features using public certificates, but we also have POP/IMAP/SMTP/ADFS services that can take advantage of public certificates, and that is going to be a topic for another series, or perhaps an extension of this one.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817576.41/warc/CC-MAIN-20240420091126-20240420121126-00796.warc.gz
CC-MAIN-2024-18
8,641
54
https://www.indiehackers.com/product/busy-radicals/first-client-signed-up--MBd8HpmkF-qSwCY5GOH
code
We've been working together for some time through Upwork, and after several successful months of work, we decided to go further and continue working without third parties. This is our first client outside of Upwork, which is excellent! In the near future, we plan to focus on long-term projects without third-party services and on providing a boutique experience for our customers. We're always happy to discuss a new project or collaboration. If you have something in mind or are interested in working with us, drop us a line!
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738878.11/warc/CC-MAIN-20200812053726-20200812083726-00020.warc.gz
CC-MAIN-2020-34
528
4
https://www.gungoddess.com/pages/customizable-holster-colors
code
These colors are only for the Customizable Line of Holsters: Ulticlip Holster, Fabriclip Holster, IWB Holster, OWB Holster, Competition/Training Holster, Two-Clip Holster, Purse/Backpack Holster, Tuckable Holster, Two-in-One Holster, Flat-Back Holster

SOLID COLORS & PATTERNS

For best color results, patterns are printed on a light base such as tan, white or grey (the inside & edges of the holster will be the base kydex color). The pictured color swatches are all available, but if you have your own design you like, let's make your holster one-of-a-kind!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474641.34/warc/CC-MAIN-20240225171204-20240225201204-00664.warc.gz
CC-MAIN-2024-10
653
4
http://forums.nexusmods.com/index.php?/topic/920139-stupid-windows-8/
code
I just bought a new computer and installed Oblivion GotY; it worked perfectly. But once I added my standard mods it crashes. I never had this problem on my Windows 7 machine. I always install to C:\Games. I have tried deleting the data folder files, editing them, and so forth. If anyone has a tutorial or advice that will help, it would be great.
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00479-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
339
4
https://nick-p-doyle.medium.com/flaws-cloud-a-fun-interactive-way-to-learn-the-basics-of-aws-security-19463c2b6ece?source=post_internal_links---------7----------------------------
code
flAWS.cloud — A fun interactive way to learn the basics of AWS Security — Part 1 — S3 Shenanigans

A couple months back I got stuck into another one of Scott Piper's projects, Parliament, an IAM policy linter for CloudFormation. I wrote a wrapper that uses it to lint IAM policies in Terraform. It was a pretty fun exercise, and the code is possibly going to get merged into Checkov, a great tool which I'm also using in our GitHub Actions CI pipeline for our AWS infra code. It's working great and I also recommend both of them. But I digress. This post is about flaws.cloud. This is a review and walkthrough. If you're the sort of person who enjoys learning by doing (as do I) then I encourage you to just go do flaws.cloud yourself right now — it's fun, rewarding and educational.
- AWS CLI Intro — Credential profiles, listing S3 buckets
- S3 bucket configuration
- IAM credential usage intro
- Git secret retrieval (and cleanup)
- EBS snapshot inspection & forensics
- EC2 Instance Profile Metadata Attack (the Capital One incident)
- Recon of API Gateway and Lambda

However, if you're short on time, non-technical, or simply lazy, then welcome to my walkthrough.

What can we find out about this site? Let's check the DNS. Possibly the owner has set up a reverse DNS / PTR record. And, sure enough, visiting the URL tells us it's S3. So it's an S3-hosted website. It's pretty reasonable to guess the bucket is called something like "flaws.cloud". Let's try:

Let's have a look inside that secret file:

And there we have the URL to Level 2.

My solution here wasn't actually the intended one; we were meant to visit http://flaws.cloud.s3.amazonaws.com/ to get a directory listing via the web instead of the CLI. Both methods rely on the same fundamental misconfiguration: the bucket ACL allowed Everyone to Read. There's one main technical difference: logging.
Logging for the website access requires the bucket to have "Server access logging" enabled, which writes access logs to another S3 bucket. The CLI access wouldn't show up there; that would require "Object-level logging" (using CloudTrail) to be configured for the bucket. Which one of these is more likely to be enabled depends on who set the bucket up and why; I'd guess that if the owner is using S3 to host the site, they're more likely to have just Server access logging set up. However, if the bucket contains more important material (and likely, the web hosting was unintentional), object-level logging is more likely to be configured. Either way, it is very likely your IP and access to these items would be recorded, and in the real world this would be recommended to be done via TOR (where only the US Navy know what you've done :) or a VPN.

There's a good little writeup on what we learnt from the previous level:

Turns out Level 2 is pretty much the same as Level 1 (actually exactly the same, if you use the CLI again):

OK, standard drill, let's try listing the bucket:

Looks pretty similar to before, right? But there's one important entry there that should make your nerdo-senses tingle: .git

We can pull those files down and inspect that git repo:

Sure enough, in the commit history, we can see that Scott has accidentally committed AWS keys to his repo, before removing them in a later commit:

This is actually a very common mistake, and there are now many good tools to both 1. prevent and 2. remediate this situation. Indeed, GitHub now by default scans commits for such mistakes. Let's do a quick demo of remediating such a mistake.

Bonus Digression: Cleanup with BFG

For remediation — my go-to (and recommended by GitHub) is BFG. We put the strings we want to remove (our secrets) in a text file, e.g.
“my-soon-to-be-rotated-secrets.txt”:

And run BFG:

Then we just copy and paste that reflog command, and sure enough our secrets are removed from the repo:

So, let's use those keys to list other S3 buckets:

Juicy. It looks like we can skip to the final level — let's try it:

I'll put these creds in my ~/.aws/credentials for future use, like so:

Well, the level 4 site we can see in the listing does work. Continued in my next post, flAWS.cloud Walkthrough — Level 4 …

This is part of my 4-part Walkthrough of flaws.cloud:
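Incidentally, the class of leak Scott demonstrates can be caught before it ever lands in git history. Here is a minimal pre-commit-style scan in Python (my own sketch, not part of the walkthrough); the regex follows AWS's documented 20-character access key ID format, a 4-character prefix plus 16 uppercase alphanumerics:

```python
import re

# Long-term user keys start with AKIA, temporary STS keys with ASIA,
# followed by 16 characters from [0-9A-Z].
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_aws_keys(text):
    """Return every substring of `text` that looks like an AWS access key ID."""
    return AWS_KEY_RE.findall(text)
```

Wiring something like this into a pre-commit hook over the staged diff would flag the key before it ever needed BFG (and before it needed rotating).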
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00368.warc.gz
CC-MAIN-2022-40
4,369
44
https://www.londonenvironment.net/organizational_reports
code
Explore our 2017 Organizational Report! In this report you will find: - Who we are, what we do, and how we do it - Who our amazing 45 member organizations are and what they do! - What 2017 looked like for LEN, our members, and our community - Member highlights and successes You can download and view a PDF version of the report here
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256763.42/warc/CC-MAIN-20190522043027-20190522065027-00442.warc.gz
CC-MAIN-2019-22
333
7
https://archello.com/brand/gonzalo-mardones-arquitecto
code
He received his degree as an architect from the Universidad Católica de Chile, graduating with Maximum Honors. He received the First Prize at the Architecture Biennale for the best degree project among all the architecture schools in Chile, for his project for urban renewal of the South-West Center of Santiago. He has been a professor of architectural design workshops and has directed degree projects in the Faculties of Architecture of the Universidad Católica, Universidad de Chile, Universidad Central, Universidad Andrés Bello and Universidad Finis Terrae, in addition to having been a guest professor and lecturer at various universities in Chile and abroad. His work has been published by the main architectural magazines and honored at Biennales. He has been a member of the National Commission of Competitions of the Architects Association in Chile and a founding member of the Association of Architectural Practices (AOA). In 2008 he received the ‘Degree Distinction’ from the ‘UMSA Universidad Mayor de San Andrés’, Bolivia.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00038.warc.gz
CC-MAIN-2023-50
1,048
1
http://www.mp3car.com/mp3car-gatherings/44676-north-jersey-anyone-20.html
code
I spent some time looking at this and then put it on the "back burner" to deal with other stuff (like my screen, which I cannot seem to finish). Originally Posted by DK888: here are some links I found. http://www.automobile-security.com/default.htm <---- They sell this sort of thing minus the computer Replacing key ignition I have to look back over it, but the biggest problem I saw (and others) was that if the system is dependent on the computer, you need A) power to the computer before the car is started and B) if something goes wrong with the computer then you cannot start the car
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163438.83/warc/CC-MAIN-20160205193923-00260-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
579
5
https://math.ucr.edu/home/baez/week119.html
code
I've been slacking off on This Week's Finds lately because I was busy getting stuff done at Riverside so that I could visit the Center for Gravitational Physics and Geometry here at Penn State with a fairly clean slate. Indeed, sometimes my whole life seems like an endless series of distractions designed to prevent me from writing This Week's Finds. However, now I'm here and ready to have some fun.... Recently I've been trying to learn about grand unified theories, or "GUTs". These were popular in the late 1970s and early 1980s, when the Standard Model of particle interactions had fully come into its own and people were looking around for a better theory that would unify all the forces and particles present in that model - in short, everything except gravity. The Standard Model works well but it's fairly baroque, so it's natural to hope for some more elegant theory underlying it. Remember how it goes:

GAUGE BOSONS

   ELECTROMAGNETIC FORCE    WEAK FORCE    STRONG FORCE
   photon                   W+            8 gluons
                            W-
                            Z

FERMIONS

   LEPTONS                           QUARKS
   electron    electron neutrino     down quark      up quark
   muon        muon neutrino         strange quark   charm quark
   tauon       tauon neutrino        bottom quark    top quark

HIGGS BOSON (not yet seen)

The strong, electromagnetic and weak forces are all described by Yang-Mills fields, with the gauge group SU(3) x SU(2) x U(1). In what follows I'll assume you know the rudiments of gauge theory, or at least that you can fake it. SU(3) is the gauge group of the strong force, and its 8 generators correspond to the gluons. SU(2) x U(1) is the gauge group of the electroweak force, which unifies electromagnetism and the weak force. It's not true that the generators of SU(2) correspond to the W+, W- and Z while the generator of U(1) corresponds to the photon. Instead, the photon corresponds to the generator of a sneakier U(1) subgroup sitting slantwise inside SU(2) x U(1); the basic formula to remember here is: Q = I3 + Y/2 where Q is ordinary electric charge, I3 is the 3rd component of "weak isospin", i.e.
the generator of SU(2) corresponding to the matrix

   ( 1/2    0  )
   (  0   -1/2 )

and Y, "hypercharge", is the generator of the U(1) factor. The role of the Higgs particle is to spontaneously break the SU(2) x U(1) symmetry, and also to give all the massive particles their mass. However, I don't want to talk about that here; I want to focus on the fermions and how they form representations of the gauge group SU(3) x SU(2) x U(1), because I want to talk about how grand unified theories attempt to simplify this picture - at the expense of postulating more Higgs bosons. The fermions come in 3 generations, as indicated in the chart above. I want to explain how the fermions in a given generation are grouped into irreducible representations of SU(3) x SU(2) x U(1). All the generations work the same way, so I'll just talk about the first generation. Also, every fermion has a corresponding antiparticle, but this just transforms according to the dual representation, so I will ignore the antiparticles here. Before I tell you how it works, I should remind you that all the fermions are, in addition to being representations of SU(3) x SU(2) x U(1), also spin-1/2 particles. The massive fermions - the quarks and the electron, muon and tauon - are Dirac spinors, meaning that they can spin either way along any axis. The massless fermions - the neutrinos - are Weyl spinors, meaning that they always spin counterclockwise along their axis of motion. This makes sense because, being massless, they move at the speed of light, so everyone can agree on their axis of motion! So the massive fermions have two helicity states, which we'll refer to as "left-handed" and "right-handed", while the neutrinos only come in a "left-handed" form. (Here I am discussing the Standard Model in its classic form. I'm ignoring any modifications needed to deal with a possible nonzero neutrino mass. For more on Standard Model, neutrino mass and different kinds of spinors, see "week93".) Okay.
The Standard Model lumps the left-handed neutrino and the left-handed electron into a single irreducible representation of SU(3) x SU(2) x U(1):

   (νL, eL)                    (1,2,-1)

This 2-dimensional representation is called (1,2,-1), meaning that it's the tensor product of the 1-dimensional trivial rep of SU(3), the 2-dimensional fundamental rep of SU(2), and the 1-dimensional rep of U(1) with hypercharge -1. Similarly, the left-handed up and down quarks fit together as:

   (uL, uL, uL, dL, dL, dL)    (3,2,1/3)

Here I'm writing both quarks 3 times since they also come in 3 color states. In other words, this 6-dimensional representation is the tensor product of the 3-dimensional fundamental rep of SU(3), the 2-dimensional fundamental rep of SU(2), and the 1-dimensional rep of U(1) with hypercharge 1/3. That's why we call this rep (3,2,1/3). (If you are familiar with the irreducible representations of U(1) you will know that they are usually parametrized by integers. Here we are using integers divided by 3. The reason is that people defined the charge of the electron to be -1 before quarks were discovered, at which point it turned out that the smallest unit of charge was 1/3 as big as had been previously believed.) The right-handed electron stands alone in a 1-dimensional rep, since there is no right-handed neutrino:

   eR                          (1,1,-2)

Similarly, the right-handed up quark stands alone in a 3-dimensional rep, as does the right-handed down quark:

   (uR, uR, uR)                (3,1,4/3)
   (dR, dR, dR)                (3,1,-2/3)

That's it. If you want to study this stuff, try using the formula Q = I3 + Y/2 to figure out the charges of all these particles. For example, since the right-handed electron transforms in the trivial rep of SU(2), it has I3 = 0, and if you look up there you'll see that it has Y = -2. This means that its electric charge is Q = -1, as we already knew. Anyway, we obviously have a bit of a mess on our hands! The Standard Model is full of tantalizing patterns, but annoyingly complicated.
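As a sanity check on the reps above, the formula Q = I3 + Y/2 can be evaluated with exact fractions. This is a minimal Python sketch of my own (not part of the original article); the particle assignments follow the text:

```python
from fractions import Fraction as F

def charge(i3, y):
    """Electric charge from weak isospin I3 and hypercharge Y: Q = I3 + Y/2."""
    return i3 + y / 2

half = F(1, 2)

# (nuL, eL) sit in (1,2,-1): I3 = +1/2 and -1/2, hypercharge Y = -1
assert charge(+half, F(-1)) == 0           # neutrino
assert charge(-half, F(-1)) == -1          # electron

# (uL, dL) sit in (3,2,1/3): Y = 1/3
assert charge(+half, F(1, 3)) == F(2, 3)   # up quark
assert charge(-half, F(1, 3)) == F(-1, 3)  # down quark

# SU(2) singlets have I3 = 0, so Q = Y/2
assert charge(0, F(-2)) == -1              # eR in (1,1,-2)
assert charge(0, F(4, 3)) == F(2, 3)       # uR in (3,1,4/3)
assert charge(0, F(-2, 3)) == F(-1, 3)     # dR in (3,1,-2/3)
print("all charges check out")
```

Every charge comes out as the familiar value, which is exactly the exercise suggested above.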
The idea of grand unified theories is to find a pattern lurking in all this data by fitting the group SU(3) x SU(2) x U(1) into a larger group. The smallest-dimensional "simple" Lie group that works is SU(5). Here "simple" is a technical term that eliminates, for example, groups that are products of other groups - these aren't very "unified". Georgi and Glashow came up with their "minimal" SU(5) grand unified theory in 1975. The idea is to stick SU(3) x SU(2) into SU(5) in the obvious diagonal way, leaving just enough room to cram in the U(1) if you are clever. Now if you add up the dimensions of all the representations above you get 2 + 6 + 1 + 3 + 3 = 15. This means we need to find a 15-dimensional representation of SU(5) to fit all these particles. There are various choices, but only one that really works when you take all the physics into account. For a nice simple account of the detective work needed to figure this out, see: 1) Edward Witten, Grand unification with and without supersymmetry, Introduction to supersymmetry in particle and nuclear physics, edited by O. Castanos, A. Frank, L. Urrutia, Plenum Press, 1984. I'll just give the answer. First we take the 5-dimensional fundamental representation of SU(5) and pack fermions in as follows:

   (dR, dR, dR, e+R, nubarR)    5 = (3,1,-2/3) + (1,2,-1)

Here e+R is the right-handed positron and nubarR is the right-handed antineutrino - curiously, we need to pack some antiparticles in with particles to get things to work out right. Note that the first 3 particles in the above list, the 3 states of the right-handed down quark, transform according to the fundamental rep of SU(3) and the trivial rep of SU(2), while the remaining two transform according to the trivial rep of SU(3) and the fundamental rep of SU(2). That's how it has to be, given how we stuffed SU(3) x SU(2) into SU(5). Note also that the charges of the 5 particles on this list add up to zero.
That's also how it has to be, since the generators of SU(5) are traceless. Note that the down quark must have charge -1/3 for this to work! In a sense, the SU(5) model says that quarks must have charges in units of 1/3, because they come in 3 different colors! This is pretty cool. Then we take the 10-dimensional representation of SU(5) given by the 2nd exterior power of the fundamental representation - i.e., antisymmetric 5x5 matrices - and pack the rest of the fermions in like this:

   (    0     ubarL   ubarL    uL     dL  )      10 = (3,2,1/3) +
   ( -ubarL     0     ubarL    uL     dL  )           (1,1,2) +
   ( -ubarL  -ubarL     0      uL     dL  )           (3,1,-4/3)
   (   -uL     -uL     -uL      0    e+L  )
   (   -dL     -dL     -dL    -e+L     0  )

Here the u-bar is the antiparticle of the up quark - again we've needed to use some antiparticles. However, you can easily check that these two representations of SU(5) together with their duals account for all the fermions and their antiparticles. The SU(5) theory has lots of nice features. As I already noted, it explains why the up and down quarks have charges 2/3 and -1/3, respectively. It also gives a pretty good prediction of something called the Weinberg angle, which is related to the ratio of the masses of the W and Z bosons. It also makes testable new predictions! Most notably, since it allows quarks to turn into leptons, it predicts that protons can decay - with a half-life of somewhere around 10^29 or 10^30 years. So people set off to look for proton decay.... However, even when the SU(5) model was first proposed, it was regarded as slightly inelegant, because it didn't unify all the fermions of a given generation in a single irreducible representation (together with its dual, for antiparticles). This is one reason why people began exploring still larger gauge groups. In 1975 Georgi, and independently Fritzsch and Minkowski, proposed a model with gauge group SO(10).
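The tracelessness constraint is easy to verify by brute force: the electric charges summed over the 5 and over the 10 must each vanish, since Q is a generator of SU(5). A quick check of my own (the charge values follow from Q = I3 + Y/2 as derived above):

```python
from fractions import Fraction as F

# Charges of the states in the 5: (dR, dR, dR, e+R, nubarR)
five = 3 * [F(-1, 3)] + [F(1), F(0)]

# Charges of the states in the 10, per color where relevant:
# (uL, dL) from (3,2,1/3), ubarL from (3,1,-4/3) with charge -2/3, and e+L
ten = 3 * [F(2, 3)] + 3 * [F(-1, 3)] + 3 * [F(-2, 3)] + [F(1)]

assert len(five) == 5 and len(ten) == 10
assert sum(five) == 0  # trace of Q vanishes in the 5
assert sum(ten) == 0   # ... and in the 10
print("charges sum to zero in both reps")
```

Forcing the down quark charge to anything other than -1/3 breaks the first assertion, which is the "charges in units of 1/3" argument in miniature.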
You can stuff SU(5) into SO(10) as a subgroup in such a way that the 5- and 10-dimensional representations of SU(5) listed above both fit into a single 16-dimensional rep of SO(10), namely the chiral spinor rep. Yes, 16, not 15 - that wasn't a typo! The SO(10) theory predicts that in addition to the 15 states listed above there is a 16th, corresponding to a right-handed neutrino! I'm not sure yet how the recent experiments indicating a nonzero neutrino mass fit into this business, but it's interesting. Somewhere around this time, people noticed something interesting about these groups we've been playing with. They all fit into the "E series"! I don't have the energy to explain Dynkin diagrams and the ABCDEFG classification of simple Lie groups here, but luckily I've already done that, so you can just look at "week62" - "week65" to learn about that. The point is, there is an infinite series of simple Lie groups associated to rotations in real vector spaces - the SO(n) groups, also called the B and D series. There is an infinite series of them associated to rotations in complex vector spaces - the SU(n) groups, also called the A series. And there is an infinite series of them associated to rotations in quaternionic vector spaces - the Sp(n) groups, also called the C series. And there is a ragged band of 5 exceptions which are related to the octonions, called G2, F4, E6, E7, and E8. I'm sort of fascinated by these - see "week90", "week91", and "week106" for more - so I was extremely delighted to find that the E series plays a special role in grand unified theories. Now, people usually only talk about E6, E7, and E8, but one can work backwards using Dynkin diagrams to define E5, E4, E3, E2, and E1. Let's do it! Thanks go to Allan Adler and Robin Chapman for helping me understand how this works....
E8 is a big fat Lie group whose Dynkin diagram looks like this:

         o
         |
   o--o--o--o--o--o---o

If we remove the rightmost root, we obtain the Dynkin diagram of a subgroup called E7:

         o
         |
   o--o--o--o--o--o

If we again remove the rightmost root, we obtain the Dynkin diagram of a subgroup of E7, namely E6:

         o
         |
   o--o--o--o--o

This was popular as a gauge group for grand unified models, and the reason why becomes clear if we again remove the rightmost root, obtaining the Dynkin diagram of a subgroup we could call E5:

         o
         |
   o--o--o--o

But this is really just good old SO(10), which we were just discussing! And if we yet again remove the rightmost root, we get the Dynkin diagram of a subgroup we could call E4:

         o
         |
   o--o--o

This is just SU(5)! Let's again remove the rightmost root, obtaining the Dynkin diagram for E3. Well, it may not be clear what counts as the rightmost root, but here's what I want to get when I remove it:

   o

   o--o

This is just SU(3) x SU(2), sitting inside SU(5) in the way we just discussed! So for some mysterious reason, the Standard Model and grand unified theories seem to be related to the E series! We could march on and define E2:

   o    o

which is just SU(2) x SU(2), and E1:

   o

which is just SU(2)... but I'm not sure what's so great about these groups. By the way, you might wonder what's the real reason for removing the roots in the order I did - apart from getting the answers I wanted to get - and the answer is, I don't really know! If anyone knows, please tell me. This could be an important clue. Now, this stuff about grand unified theories and the E series is one of the reasons why people like string theory, because heterotic string theory is closely related to E8 (see "week95"). However, I must now tell you the bad news about grand unified theories. And it is very bad. The bad news is that those people who went off to detect proton decay never found it!
It became clear in the mid-1980s that the proton lifetime was at least 10^32 years or so, much larger than what the SU(5) theory most naturally predicts. Of course, if one is desperate to save a beautiful theory from an ugly fact, one can resort to desperate measures. For example, one can get the SU(5) model to predict very slow proton decay by making the grand unification mass scale large. Unfortunately, then the coupling constants of the strong and electroweak forces don't match at the grand unification mass scale. This became painfully clear as better measurements of the strong coupling constant came in. Theoretical particle physics never really recovered from this crushing blow. In a sense, particle physics gradually retreated from the goal of making testable predictions, drifting into the wonderland of pure mathematics... first supersymmetry, then supergravity, and then superstrings... ever more elegant theories, but never yet a verified experimental prediction. Perhaps we should be doing something different, something better? Easy to say, hard to do! If we see a superpartner at CERN, a lot of this "superthinking" will be vindicated - so I guess most particle physicists are crossing their fingers and praying for this to happen. The following textbook on grand unified theories is very nice, especially since it begins with a review of the Standard Model: 2) Graham G. Ross, Grand Unified Theories, Benjamin-Cummings, 1984. This one is a bit more idiosyncratic, but also good - Mohapatra is especially interested in theories where CP violation arises via spontaneous symmetry breaking: 3) Rabindra N. Mohapatra, Unification and Supersymmetry: The Frontiers of Quark-Lepton Physics, Springer-Verlag, 1992. I also found the following articles interesting: 4) D. V. Nanopoulos, Tales of the GUT age, in Grand Unified Theories and Related Topics, proceedings of the 4th Kyoto Summer Institute, World Scientific, Singapore, 1981. 5) P.
Ramond, Grand unification, in Grand Unified Theories and Related Topics, proceedings of the 4th Kyoto Summer Institute, World Scientific, Singapore, 1981. Okay, now for some homotopy theory! I don't think I'm ever gonna get to the really cool stuff... in my attempt to explain everything systematically, I'm getting worn out doing the preliminaries. Oh well, on with it... now it's time to start talking about loop spaces! These are really important, because they tie everything together. However, it takes a while to deeply understand their importance. O. The loop space of a topological space. Suppose we have a "pointed space" X, that is, a topological space with a distinguished point called the "basepoint". Then we can form the space LX of all "based loops" in X - loops that start and end at the basepoint. One reason why LX is so nice is that its homotopy groups are the same as those of X, but shifted:

   π_i(LX) = π_{i+1}(X)

Another reason LX is nice is that it's almost a topological group, since one can compose based loops, and every loop has an "inverse". However, one must be careful here! Unless one takes special care, composition will only be associative up to homotopy, and the "inverse" of a loop will only be the inverse up to homotopy. Actually we can make composition strictly associative if we work with "Moore paths". A Moore path in X is a continuous map

   f: [0,T] → X

where T is an arbitrary nonnegative real number. Given a Moore path f as above and another Moore path g: [0,S] → X which starts where f ends, we can compose them in an obvious way to get a Moore path

   fg: [0,T+S] → X

Note that this operation is associative "on the nose", not just up to homotopy. If we define LX using Moore paths that start and end at the basepoint, we can easily make LX into a topological monoid - that is, a topological space with a continuous associative product and a unit element. (If you've read section L, you'll know this is just a monoid object in Top!)
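A toy model makes the "associative on the nose" claim concrete. Here is a minimal sketch of my own, with X taken to be the real line and basepoint 0, representing a Moore path as a (length, function) pair:

```python
class MoorePath:
    """A Moore path in X = R: a map f defined on [0, T], with T >= 0."""
    def __init__(self, T, f):
        self.T, self.f = T, f

    def __call__(self, t):
        return self.f(t)

def compose(p, q):
    """Traverse p on [0, p.T], then q on [p.T, p.T + q.T]."""
    return MoorePath(p.T + q.T,
                     lambda t: p.f(t) if t <= p.T else q.f(t - p.T))

# Two loops based at 0, of different lengths:
p = MoorePath(1.0, lambda t: t * (1 - t))
q = MoorePath(2.0, lambda t: 0.0)

# Composition is associative on the nose: same domain AND same values,
# not merely the same up to reparametrization.
a = compose(compose(p, q), p)
b = compose(p, compose(q, p))
assert a.T == b.T == 4.0
assert all(abs(a(t) - b(t)) < 1e-12 for t in (0.0, 0.5, 1.0, 2.5, 3.3, 4.0))

# The unit is the length-zero path sitting at the basepoint:
i = MoorePath(0.0, lambda t: 0.0)
assert compose(i, p).T == p.T
print("strictly associative, with unit")
```

Contrast this with ordinary loops on [0, 1], where composing forces a reparametrization and (fg)h only agrees with f(gh) up to homotopy.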
In particular, the unit element of LX is the path i: [0,0] → X that just sits there at the basepoint of X. LX is not a topological group, because even Moore paths don't have strict inverses. But LX is close to being a group. We can make this fact precise in various ways, some more detailed than others. I'm pretty sure one way to say it is this: the natural map from LX to its "group completion" is a homotopy equivalence. P. The group completion of a topological monoid. Let TopMon be the category of topological monoids and let TopGp be the category of topological groups. There is a forgetful functor F: TopGp → TopMon and this has a left adjoint G: TopMon → TopGp which takes a topological monoid and converts it into a topological group by throwing in formal inverses of all the elements and giving the resulting group a nice topology. This functor G is called "group completion" and was first discussed by Quillen (in the simplicial context, in an unpublished paper), and independently by Barratt and Priddy: 6) M. G. Barratt and S. Priddy, On the homology of non-connected monoids and their associated groups, Comm. Math. Helv. 47 (1972), 1-14. For any topological monoid M, there is a natural map from M to F(G(M)), thanks to the miracle of adjoint functors. This is the natural map I'm talking about in the previous section! © 1998 John Baez
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500304.90/warc/CC-MAIN-20230206051215-20230206081215-00312.warc.gz
CC-MAIN-2023-06
18,691
79
https://lists.nongnu.org/archive/html/tramp-devel/2023-03/msg00027.html
code
|Copying works normally, performance is alright as I’m copying through a lot of servers and all; the problem is executing commands with eshell or shell, or find-file: saving a file mostly takes 7 seconds or more.| Right now I’m connecting through a shell and using the ssh command, and even through all the multihops it works fast. On 12/3/23 22:52, Neal Becker <[email protected]> wrote: I find that using dired-rsync over tramp is a lot faster than dired copy, although this is just my impression. I usually use scp:// with 1 hop on an otherwise pretty fast connection. I use multihops, but the performance isn’t great even with one simple connection without it. In my config, I have: > On 12/3/23 18:21, Michael Albinus <[email protected]> wrote: > [email protected] writes: >> Hello Tramp Community, >> I tend to connect to several servers through a middleware one, the problem >> being that the performance of tramp over implementations like eshell, >> dired and even find-file is slow, even though the connection is pretty >> fast via shell’s SSH, even through the middleware. >> I’m using Emacs 28.2 with the Doom framework, > Could you pls give some more details? Do you use multi-hops? Do you have > some special configuration in your ~/.ssh/config? >> Thank you for your advice on this topic! > Best regards, Michael. Those who don't understand recursion are doomed to repeat it
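For readers hitting the same slowness: the usual first remedy is SSH connection sharing, so each Tramp operation reuses one master connection instead of renegotiating SSH per command. A typical ~/.ssh/config stanza looks like this (a generic suggestion, not the poster's actual configuration; recent Tramp versions can manage ControlMaster options themselves, so check the Tramp manual before combining both):

```
# ~/.ssh/config: enable connection multiplexing for all hosts
Host *
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m
```

With multiplexing in place, the per-operation cost of Tramp over multi-hop connections usually drops sharply, since only the first connection pays the full handshake.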
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653764.55/warc/CC-MAIN-20230607111017-20230607141017-00037.warc.gz
CC-MAIN-2023-23
1,420
19
https://www.icredd.hokudai.ac.jp/stecker-collin-2
code
About the Research Public Relations and Outreach At ICReDD I work on public relations and outreach, communicating the latest scientific advances coming out of ICReDD to a variety of scientific and non-scientific audiences and organizing outreach activities. My research background is in solar energy materials and surface science. I examined lead halide perovskite surfaces and interfaces relevant to next generation solar cell devices. I used scanning tunneling microscopy and photoelectron spectroscopy to characterize the perovskite surface, defect dynamics, and the interface between perovskites and copper phthalocyanine (CuPc), a hole transport material. I worked closely with computational collaborators using density functional theory to be able to gain greater insight into the structures and phenomena seen experimentally. This is similar to how experimental chemists at ICReDD collaborate with computational chemists and information scientists to further accelerate the development of new chemical reactions. Representative Research Achievements Atomic Scale Investigation of the CuPc–MAPbX3 Interface and the Effect of Non-Stoichiometric Perovskite Films on Interfacial Structures. C. Stecker, Z. Liu, J. Hieulle, S. Zhang, L. K. Ono, G. Wang, Y.B. Qi. ACS Nano, 2021, 15, 14813-14821. Imaging of the Atomic Structure of All-Inorganic Halide Perovskites. J. Hieulle, S. Luo, D.Y. Son, A. Jamshaid, C. Stecker, Z. Liu, G. Na, D. Yang, R. Ohmann, L. K. Ono, L. Zhang, Y.B. Qi. J. Phys. Chem. Lett., 2020, 11, 818-823. Surface Defect Dynamics in Organic–Inorganic Hybrid Perovskites: From Mechanism to Interfacial Properties. C. Stecker, K. Liu, J. Hieulle, R. Ohmann, Z. Liu, L.K. Ono, G. Wang, Y.B. Qi. ACS Nano, 2019, 13, 12127-12136. Unraveling the Impact of Halide Mixing on Perovskite Stability. J. Hieulle, X. Wang, C. Stecker, D.Y. Son, L. Qiu, R. Ohmann, L. K. Ono, A. Mugarza, Y. Yan, Y.B. Qi. J. Am. Chem. Soc., 2019, 141, 3515-3523. 
Scanning Probe Microscopy Applied to Organic–Inorganic Halide Perovskite Materials and Solar Cells. J. Hieulle, C. Stecker, R. Ohmann, L.K. Ono, Y.B. Qi. Small Methods, 2018, 2, 1700295.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00372.warc.gz
CC-MAIN-2023-14
2,146
10
https://www.jobeloo.com/job/sap-commerce-cloud-developer-back-end-freelance-permanent/
code
Offered Salary 0 Experience 3 Years Permanent or consulting positions available with work from home or relocation opportunities. Who are we? We specialize in helping organizations achieve their goals with regards to Customer Experience. As a certified SAP Gold Partner, we focus on the SAP CX portfolio and deliver e-Commerce, Marketing automation and CRM projects for our customers. Our evolution represents our passion for delivering consultancy and fully integrated end-to-end digital solutions to our clients for e-commerce, product & content management, mobile development, marketing automation, integration expertise and hosting & managed services. With our colleagues, we work for large (international) organizations like The Straumann Group or Jan Linders Supermarkets, but we are also more than happy to work for smaller locally oriented companies. As an SAP Commerce Cloud Developer within our company, you are part of a multi-skilled scrum team that is responsible for the implementation and improvement of SAP Commerce Cloud (e-Commerce) projects. Together with your team, you will support the customer in building a new e-Commerce application or enhance the existing setup. You will pick up the back-end user stories in the project, creating best-in-class web applications for our Business-to-Business (B2B) and Business-to-Consumer (B2C) clients. Customer Experience requires continuous attention from our back-end developers, who change and optimize the web application on a daily basis. What do you bring?
• You have a unique combination of skills that allow you to build software that is as elegant as it is powerful
• You thrive working within a team of professionals
• You are passionate about technology
• You love learning and solving problems
• You approach challenges with a positive attitude
• You recognize the value of code readability and the importance of developing to established guidelines so that your work looks as professional as it performs
• You possess a laser focus on quality, including success with design reviews, code reviews, testing, etc.

Desired Skills and Experience
• 3+ years’ experience in systems integration and development
• 3+ years’ experience working in a variety of programming languages and frameworks: Java / J2EE, Spring, Spring Boot, Spring MVC, HTML, DHTML, SOAP, Ajax, etc.
• Excellent understanding of J2EE, Spring, Struts, Hibernate, SQL, MySQL, Tomcat and Apache
• Application development experience
• Appetite for learning and self-development
• Strong written and verbal communication skills
• Understanding of enterprise deployments & code management systems
• Knowledge of agile development processes
• Eager to learn & demonstrate initiative
• Able to work independently
• Very reliable
• Able to work to deadlines

What we offer:
• A job with broad responsibilities in a dynamic and growing company;
• An environment that celebrates innovation and helps you to achieve a good balance between your professional and personal life;
• A great working atmosphere with a team of passionate colleagues;
• A working environment where there is room to learn and explore new opportunities.

For more information please send your CV to [email protected]
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356140.5/warc/CC-MAIN-20210226030728-20210226060728-00102.warc.gz
CC-MAIN-2021-10
3,261
36
https://www.citrix.com/blogs/2015/10/22/top-questions-on-citrix-workspace-cloud-technical-webinar/
code
Efficiency and productivity are achieved when deployment and management of IT services are fast, simple and elegant. Attendees that joined the October Citrix Workspace Cloud webinar showed a high level of engagement and asked great questions while embarking on a technical tour of the new platform and services. Excited about the journey to cloud, attendees asked questions themed around provisioning technologies, secure credential management, new feature deployment and more. Let’s review the top 8 questions that were asked during the webinar. 1. Is Citrix Provisioning Services (PVS) integrated with the App and Desktop Service in Citrix Workspace Cloud? Today, integrated provisioning in the App and Desktop service is supported by Machine Creation Services (MCS). This technology allows administrators to scale app and desktop capacity on compute resources (hypervisors) located in customer datacenters (resource locations) from the cloud-based management control plane (XenDesktop Delivery Site). This simplifies capacity management for app and desktop workloads, saving time and easing administration. A similar integration with PVS is currently under development. Workspace Cloud customers can use machines provisioned by PVS in their App and Desktop service resource location, or use them as part of blueprint automation powered by Citrix Lifecycle Management. However, at this time, provisioning must be initiated using PVS. In the future the App and Desktop service will support integrated PVS provisioning from Studio. Integrated PVS support was a popular request at Citrix Synergy 2015 as partners and customers looked to take advantage of provisioning features in PVS. We listened, and development is underway. 2. Will it be possible to host the Workspace Cloud UI and Control Center by ourselves? No, there are no plans to make the Workspace Cloud interface available for on-premises deployment.
Our goal is to provide a cloud service that can evolve quickly and regularly with no effort or upgrade required by customers. To deliver on this vision and continuously add value we must host and operate the control plane and interface as a Citrix-operated cloud service. 3. Can we host the App and Desktop Service Controllers in our datacenter? No. To take full advantage of the hybrid delivery model provided by the App and Desktop service, the Delivery Controllers must be hosted and operated in the Citrix cloud. If your goal is to host both controller and worker components of the Citrix XenDesktop infrastructure, look into how the Lifecycle Management Service in Workspace Cloud can help automate deployment and monitoring of your environment. Hybrid delivery isn’t right for everyone and all use cases, which is why Citrix will continue to invest heavily in our core products and the traditional on-premises use case. 4. Where does end-user authentication take place in the hybrid delivery model? Do my end users need a new username or can they continue to use their Microsoft Active Directory identity? End users or “subscribers” continue to use their AD-based identity and do not need to be issued a new login to access apps and desktops delivered by the App and Desktop service in Workspace Cloud. Authentication flows are secure, and the routing and “cloud exposure” of credentials depends on where the StoreFront component (where end users authenticate to access apps and desktops) is located. In the scenario where StoreFront is hosted in a Resource Location, user credentials are encrypted before leaving the customer datacenter or cloud. The encryption key is never sent to the cloud, ensuring that credential information is handled securely and cannot be compromised even in the event that an attacker was able to intercept the data stream.
In the scenario where a customer is taking advantage of the cloud-hosted StoreFront provided by the App and Desktop service (one less piece of Citrix infrastructure for them to deploy, manage and update), credentials will be securely encrypted and passed from the cloud-based StoreFront to the Workspace Cloud Connector deployed in the customer’s Active Directory domain. In no scenario are end user credentials ever stored in Workspace Cloud. 5. Is all communication from the Citrix cloud to the Cloud Connector done over port 443? Are there other port or networking requirements? All communication between Workspace Cloud and the Connector occurs using SSL over port 443. There are no other networking or port requirements. Using only 443 outbound access, administrators can quickly connect existing enterprise directories, domains and compute resources with the Workspace Cloud platform and services. This simple requirement makes it easy to get started with Workspace Cloud. It takes just minutes to download, install and register a Workspace Cloud Connector. This design allows customers to avoid the cost and complexity of setting up direct connections or VPNs to connect their resource locations (datacenters or cloud environments) with Workspace Cloud. 6. Can updates to the control plane be rolled back if they introduce a problem? How’s that going to work? Of course, but Citrix manages that – not customers. Citrix is taking the responsibility of service availability and functionality very seriously. That includes updates and availability of Workspace Cloud platform elements in addition to each service. First and foremost, we test updates and rollouts extensively. Next, we don’t roll a new production update out to every customer at once; we pick an appropriate sample size and only update those customers and control planes. With that additional “in-production confidence”, we’ll continue with rolling out the update to all customers.
In the event that a failure, outage or bug does occur (we're not perfect), our 24×7 support and operations team is on the case. There are a number of possible resolution paths, rollback or restore being one of them. In other scenarios we may just need to cycle out a problematic server or component. Often these types of resolutions can occur without a service outage. Resolution varies depending on the issue and whether it resides in a platform element or a specific service. If you suspect an outage has occurred, check our operations dashboard to see if the issue is known and what we're doing about it. 7. What are the basic requirements for getting started with the App and Desktop service in Workspace Cloud? Check out our official documentation on minimum requirements located here. To take a quick test drive, check out this documentation. To get started with the App and Desktop service, this guide will help you get your first resource location created and workspace delivered. 8. Is Citrix partnering with Microsoft to deliver desktops/apps from Azure? While Workspace Cloud is committed to supporting any cloud, we certainly have our sights set on a robust solution for delivering apps and desktops using Windows Azure. Our roadmap and vision include integrated Azure provisioning in the App and Desktop service. That integration will enable customers and partners to quickly and easily scale capacity in resource locations, saving time and further simplifying administration of Citrix environments. We're aligned and partnered with Microsoft and look forward to strengthening the Citrix + Microsoft story with a deeper integration between Azure and Workspace Cloud. Next Steps: Explore Workspace Cloud
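As a quick sanity check for the single outbound requirement described in question 5 (SSL over port 443), a short script can test TCP reachability from a prospective Connector host. This is an illustrative helper, not part of any Citrix tooling, and the endpoint shown is a placeholder:

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        # create_connection handles DNS resolution and IPv4/IPv6 fallback
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder endpoint -- substitute the actual service endpoint you need.
    print(can_reach("example.com", 443, timeout=2.0))
```

A `False` result from the host that will run the Connector suggests a firewall or proxy is blocking outbound 443.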
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645830.10/warc/CC-MAIN-20180318165408-20180318185408-00469.warc.gz
CC-MAIN-2018-13
7,470
27
https://dice-list.com/syndicate
code
Syndicate.Casino is an online gambling service, where hitting winning combinations is not only fun and exciting but also profitable. On the official Syndicate.Casino website you can find a few hundred great video slots and table games. The Syndicate.Casino online platform has a convenient navigation system. On the left side of the homepage, you can see a menu with a few functional tabs: "LOG IN", "SIGN UP", "LOBBY", "PROMOTIONS", "LOTTERIES", "VIP PROGRAM", and "SLOT FIGHTS". In the middle of the webpage, there are also several inlays: "NEW GAMES", "TOP GAMES", "VIDEO SLOTS", "TABLE GAMES", "BITCOIN GAMES", "OTHER GAMES", "JACKPOTS", and "ALL GAMES". If you are looking for a particular game, you can use a dedicated search bar. Syndicate.Casino also offers you the ability to sort games by provider: Thunderkick, Tripleedgestudios, Vivogaming, etc. To learn more about Syndicate.Casino's features, go to the tab that interests you and explore it. For example, the gambling service provides you with great lotteries (go to the "LOTTERIES" inlay). Lottery tickets are available for players from all over the world. "Familia" is Syndicate.Casino's VIP program. Start participating in it, reach new statuses and get rewarded for each of them. The more hours you spend at Syndicate.Casino, the more points you will collect. The support team of the website monitors players and analyzes their behavior. Once you collect a certain amount of points by playing your favorite video slots, card, and dice games, you will be able to check how many points you have and how you can use them. Go to "VIP PROGRAM" to learn more. One of the most valuable features of Syndicate.Casino is a friendly customer support team. You can contact them 24 hours a day, 7 days a week via the [email protected] email address. You can also subscribe to Syndicate.Casino's email list in order to get Exclusive No Deposit Bonuses, a Generous Welcome Offer, Free Spins and much more.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00244.warc.gz
CC-MAIN-2021-39
1,994
5
https://abra-electronics.com/sensors/sensors-temperature-en/4089-ada-adt7410-high-accuracy-i2c-temperature-sensor-breakout-board.html
code
ADT7410 High Accuracy I2C Temperature Sensor Breakout Board Analog Devices, known for their reliable and well-documented sensor chips, has a high-precision and high-resolution temperature sensor on the market, and we've got a breakout to make it easy to use! The Analog Devices ADT7410 gets straight to the point - it's an I2C temperature sensor, with 16-bit 0.0078°C temperature resolution and 0.5°C temperature tolerance. Wire it up to your microcontroller or single-board computer to get reliable temperature readings with ease. The ADT7410 has 2 address pins, so you can have up to 4 sensors on one I2C bus. There are also interrupt and critical-temperature alert pins. The sensor is good from 2.7V to 5.5V power and logic, for easy integration. We've got both Arduino (C/C++) and CircuitPython (Python 3) libraries available so you can use it with any microcontroller like Arduino, ESP8266, Metro, etc., or with Raspberry Pi or other Linux computers thanks to Blinka (our CircuitPython library support helper). Each order comes with a fully tested and assembled breakout and some header for soldering to a PCB or breadboard. You'll be up and running in under 5 minutes!
- Wide input-voltage range: 2.7 V to 5.5 V
- Up to 16-bit temperature resolution (0.0078°C per lsb), default is 13 bits (0.0625°C per lsb).
- Highly accurate temperature tolerances:
- ±0.5°C from −40°C to +105°C (2.7 V to 3.6 V)
- ±0.4°C from −40°C to +105°C (3.0 V)
- Configurable I2C address allows up to four sensors on the I2C bus
- Operates over I2C, so only two shared lines required
Product Dimensions: 23.3mm x 16.5mm x 3.2mm / 0.9" x 0.6" x 0.1" Product Weight: 1.4g / 0.0oz
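The resolution figures above translate directly into scale factors: 1/16 °C per lsb in the default 13-bit mode and 1/128 °C per lsb in 16-bit mode. As a plain-Python illustration (not the Adafruit driver itself), converting a raw temperature register reading to °C might look like this; the register-layout assumption (status flags in the low 3 bits of the 13-bit-mode word, two's-complement values) follows the ADT7410 datasheet:

```python
def adt7410_raw_to_celsius(raw: int, bits: int = 13) -> float:
    """Convert a 16-bit ADT7410 temperature register value to degrees C.

    In the default 13-bit mode the low 3 bits are status flags and the
    resolution is 0.0625 C per lsb; in 16-bit mode all bits are data and
    the resolution is 0.0078125 C per lsb. Values are two's complement.
    """
    if bits == 13:
        code = raw >> 3              # drop the three flag bits
        if code & 0x1000:            # sign bit of the 13-bit value
            code -= 0x2000
        return code * 0.0625
    else:                            # 16-bit mode
        if raw & 0x8000:
            raw -= 0x10000
        return raw * 0.0078125

# 25 C encoded in 13-bit mode: code 400, shifted past the flag bits
print(adt7410_raw_to_celsius(400 << 3))   # 25.0
```

The Arduino and CircuitPython libraries mentioned above do this conversion for you; the sketch just shows where the per-lsb numbers come from.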
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655027.51/warc/CC-MAIN-20230608135911-20230608165911-00257.warc.gz
CC-MAIN-2023-23
1,670
14
https://www.libhunt.com/compare-Reddit-Enhancement-Suite-vs-RedReader
code
| Reddit Enhancement Suite | RedReader |
|---|---|
| 27 days ago | 4 days ago |
| GNU General Public License v3.0 only | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars. Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking. What is your opinion on mandatory covid vaccination in Romania? 1 project | reddit.com/r/Romania | 3 Dec 2021 To Be This Smooth... 1 project | reddit.com/r/mylittlepony | 2 Dec 2021 Reddit Enhancement Suite is a plug-in that offers this, amongst other features. Why do all of my contractors prefer to float walls instead of backer and membrane? 1 project | reddit.com/r/HomeImprovement | 2 Dec 2021 /u/distantreplay I have you marked in RES as "knows a shit ton about tile" so I'm calling you for your expertise! 1 project | reddit.com/r/bradford | 2 Dec 2021 1 project | reddit.com/r/reddevils | 2 Dec 2021 Women aren't that complicated… 1 project | reddit.com/r/Unexpected | 2 Dec 2021 Install Reddit Enhancement Suite; it lets you use old reddit (less ads) and customize a bunch of things, including excluding default subs from /r/all. Fuslie knows video games 1 project | reddit.com/r/LivestreamFail | 1 Dec 2021 MFA Theme WAYWT Challenge Announcement: Corduroy // Knits! 1 project | reddit.com/r/malefashionadvice | 1 Dec 2021 Reddit Enhancement Suite makes it very easy to view pictures in a thread. This trend has got to stop 1 project | reddit.com/r/Songwriting | 1 Dec 2021 The reddit enhancement suite has an auto-hide for flairs, words etc. It only works on computer though. What would make you quit Reddit? 13 projects | reddit.com/r/AskReddit | 1 Dec 2021 If the official Reddit app is giving you nerve-cell cancer, you can get comfortable with an unofficial Reddit app like Infinity for Reddit. 
3 projects | reddit.com/r/KGBTR | 26 Oct 2021 On Android: RedReader*, Slide*, Reddit is Fun; on iOS: Slide*, Apollo; on desktop*: there are apps like RES (extension), libreddit and teddit. Which Android apps are perfect (for you)? 2 projects | reddit.com/r/androidapps | 21 Sep 2021 A good APK for reddit? 1 project | reddit.com/r/ApksApps | 24 Aug 2021 Try these reddit clients: Boost and RedReader. I'm getting quite high battery drain through RedReader 1 project | reddit.com/r/RedReader | 26 Jul 2021 A better Best, Reddit in new languages, and more 2 projects | reddit.com/r/blog | 13 Jul 2021 Reddit Orders 'SaveVideo' Bot to Shut Down or Face Lawsuit 17 projects | reddit.com/r/technology | 13 Jul 2021 It's Friday. Let's discuss entertainment apps. Name three favorite entertainment android apps. If it's not too much trouble, include briefly why you like them so much. 8 projects | reddit.com/r/androidapps | 1 Jul 2021 Name: RedReader Link: https://github.com/QuantumBadger/RedReader Info: A Reddit app which again works best with Reddit's RSS links and an RSS to Email provider feeding them into FairEmail Pro. Read in the email client where I have filters to curate and organise posts, interact in RedReader where there's a quick, no bullshit interface that allows you to do what you want to do without ever getting in your way. [Request] Option to hide stickied posts 1 project | reddit.com/r/RedReader | 28 Jun 2021 Please do consider opening an issue and describe your success criteria for such a feature: What the fuck, Reddit? 2 projects | reddit.com/r/de | 27 Jun 2021 For Android: Red Reader (GitHub) Version 1.17 released 3 projects | reddit.com/r/RedReader | 25 Jun 2021 Download the APK immediately on GitHub What are some alternatives? libreddit - Private front-end for Reddit darkreader - Dark Reader Chrome and Firefox extension Hentoid - Doujinshi Android App Infinity-For-Reddit - A Reddit client for Android iina - The modern video player for macOS. 
web-search-navigator - Chrome/Firefox extension that adds keyboard shortcuts to Google, YouTube, Github, Amazon, and others Slide - Slide is an open sourced, ad free Reddit browser for Android vscodium - binary releases of VS Code without MS branding/telemetry/licensing Lemmy - 🐀 Building a federated link aggregator in rust SponsorBlock - Skip YouTube video sponsors (browser extension) VideoLAN Client (VLC) - VLC media player - All pull requests are ignored, please follow https://wiki.videolan.org/Sending_Patches_VLC/
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362619.23/warc/CC-MAIN-20211203091120-20211203121120-00342.warc.gz
CC-MAIN-2021-49
4,488
66
https://chromium.googlesource.com/chromium/tools/build/+/f555359d154466d9aa3c148db3d92ebeb6e028e3
code
author: Alexander Thomas <[email protected]>, Fri Mar 15 11:44:33 2019
committer: Commit Bot <[email protected]>, Fri Mar 15 11:44:33 2019
[dart] Timeout observatory tests after 5 minutes on 3xHEAD recipe These tests usually finish in seconds, but are prone to hang forever. Change-Id: If3d11f07ea52181374b7b6b18dc83b3f0c96ccec Reviewed-on: https://chromium-review.googlesource.com/c/chromium/tools/build/+/1525941 Commit-Queue: Alexander Thomas <[email protected]> Auto-Submit: Alexander Thomas <[email protected]> Reviewed-by: Martin Kustermann <[email protected]> Hi build contributor! If you make any change in scripts/master/ or touch any master's html/ directories, you must restart master.chromium.fyi first and ensure that it still works before restarting other masters. If you're here to make a change to 'recipes' (the code located in scripts/slave/recipes*), please take a look at the README for more information pertaining to recipes.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00109.warc.gz
CC-MAIN-2020-24
981
8
https://discuss.elastic.co/t/mail-servers-logs/180553
code
I am getting mail server logs in ELK. Now I want to visualize per-day mail counts in a graph, but I am confused about which field I should choose for counting incoming and outgoing mails per day. Kindly help me with this problem.

count is a top-level aggregation that does not require any additional field parameters. It just counts the documents within a time-frame, e.g. this just counts the number of documents (in your case, these correspond to emails) by day.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.
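The per-day count the answer describes is typically expressed as a `date_histogram` aggregation, where each returned bucket's `doc_count` is the number of mail documents for that day. A sketch of the request body follows (the `@timestamp` field name is an assumption; adjust it to whatever your mail-log mapping uses, and note older Elasticsearch versions use `interval` instead of `calendar_interval`):

```python
# Request body for an Elasticsearch search; each returned bucket's
# "doc_count" is the number of log documents (mails) for that day.
mails_per_day_query = {
    "size": 0,  # skip the raw hits; we only want the aggregation buckets
    "aggs": {
        "mails_per_day": {
            "date_histogram": {
                "field": "@timestamp",        # assumed timestamp field
                "calendar_interval": "day",
            }
        }
    },
}
print(mails_per_day_query["aggs"]["mails_per_day"]["date_histogram"]["field"])
```

In Kibana the equivalent is a bar chart with a Count metric and a Date Histogram bucket on the timestamp field, so no extra field is needed for the count itself.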
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662556725.76/warc/CC-MAIN-20220523071517-20220523101517-00574.warc.gz
CC-MAIN-2022-21
573
5
https://www.702jump.com/items/thanksgiving_large_feast/
code
Thanksgiving Large Feast - Setup Area: PLEASE READ INSTRUCTIONS Do not select 3 days; select Wednesday and hit overnight, and we will upgrade you to the 3 days. Thanksgiving is here! It's time to meet up with the family again! Covid-19 didn't allow us to have Thanksgiving in 2020, so now it's time to go ALL OUT for 2021! DROP OFF: WILL BE WEDNESDAY 24th PICK UP: WILL BE FRIDAY 26th 3 DAY RENTAL FOR ONE GREAT PRICE!
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00525.warc.gz
CC-MAIN-2022-40
410
10
https://reactjobsboard.com/job/208220-front-end-developer-at-clearmacro
code
Based in London, we are an early-stage technology company in the asset management space. We are building cutting-edge tools and data sets designed to improve the way that active professional investors go about making investment decisions. Having just closed a major funding round, we are now starting to scale our operations, requiring several new hires, among them a Front-end Developer. The role is London-based and reports directly to the CTO. We are looking for someone who will be responsible for building the 'client side' of our applications. Your duties will include translating our company and customer needs into a functional and appealing digital environment, ensuring a great user experience. We expect you to be a tech-savvy professional who is curious about new digital technologies and aspires to combine usability with visual design.
Essential Tasks, Duties, and Responsibilities:
- Prototype and develop visualisation tools for the products
- Build client-facing dashboards to display business intelligence
- Develop a user-friendly website and web app (SaaS), using markup languages such as HTML
- Review, maintain and improve the company's website and web app (SaaS)
- Maintain high-quality graphic standards and brand consistency
- Work together with back-end developers to improve usability
- Assist back-end developers with coding and troubleshooting
- Discuss designs and key milestone deliverables with peers and executive-level stakeholders
- Analyse and meet product specifications and user expectations
- AWS (Solutions Architect certification or greater).
- Understanding of serverless architecture, namely AWS Lambda. 
- Solid understanding of GraphQL
- Solid understanding of React
- Good grasp of GitHub Actions
- Essential experience with React and/or VueJS
- 5+ years' experience developing front-end applications, including hands-on coding, code analysis, and performance tuning
- Minimum 5 years' experience working as a Front-End Developer
- Excellent problem-solving, troubleshooting and analytical skills
- Having a user-centric mindset and caring about UI/UX design
- Preferable to have working knowledge of C#/Java
- Strong communication skills
- BSc degree or above in Computer Science, Engineering, Human-Computer Interaction, Interaction Design, or another relevant area
What can you expect from us?
- Being a company transitioning from start-up to the first phase of growth – there will be many opportunities to both use and expand your skills and range.
- An open culture that encourages sharing of ideas.
- Goals and deliverables focused - flexibility around how this is achieved.
- Competitive remuneration package.
Please apply via the ATS link below only; other applications will not be accepted. Post date: 1st July, 2020
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988953.13/warc/CC-MAIN-20210509002206-20210509032206-00054.warc.gz
CC-MAIN-2021-21
2,765
33
https://crowdfunding.airtripp.com/project/12363
code
This project has finished. My new business idea is to allow the public to write their name on a plane. I have approached an airline for approval and I have also created a website for this (www.nameonplane.com). The funds collected here will be used for marketing purposes to spread the idea and get people to participate. I would be happy if you could use the [Help by sharing] button to share my project!
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572896.15/warc/CC-MAIN-20190924083200-20190924105200-00428.warc.gz
CC-MAIN-2019-39
404
5
https://cache.kzoo.edu/items/c8fa29ba-a041-4c53-b134-6a63de3a6b99
code
Cell Assemblies for Photovoltaic Concentrators As an intern with SNL's PV Technology division, I was assigned the task of assisting with the design of a solar cell assembly for the SBM III. Research at SNL is largely a team effort, and it was necessary for me throughout my appointment to both substantially rely on and contribute to the work of others. Nevertheless, given the specific problem of improving the existing cell assembly, my approach was to evaluate it thoroughly through literature review and experiments, and then to design and test my own ideas for improvement. This report documents research I conducted on solar cell assemblies for the Baseline III module. Due to the nature of our approach, my work involved contributions to several aspects of the assembly's design, as opposed to complete responsibility for one part of it. I first present an overview of SBM III, and then discuss the cell assembly and my involvement with its individual parts. Conclusions are presented in the final section, and an appendix is included which discusses testing methods. iv, 30 p. U.S. copyright laws protect this material. Commercial use or distribution of this material is not permitted without prior written permission of the copyright holder. All rights reserved.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101282.74/warc/CC-MAIN-20231210060949-20231210090949-00281.warc.gz
CC-MAIN-2023-50
1,271
4
https://app2container.workshop.aws/en/net-containerize-your-app/net-extract-and-containerize.html
code
In this module, you will be transforming your application with app2container. The transform phase activities depend on whether you are running all steps on the application server, or are using the application server for the analysis and a worker machine for containerization and deployment. In this scenario (Containerize your .NET app), you will be running all the steps below on the worker machine. Separating the containerization environment using a worker machine is a best practice: it provides security and functional benefits and allows you to standardize your containerization process to help your larger-scale containerization efforts. app2container remote extract --target <private-IP of your source server> --application-id <net-app-id> At this point, you can also update the artifacts inside the zip file and continue the containerization with the updated zip file. You can copy the suggested command from the CLI output. app2container containerize --input-archive C:\Users\Administrator\AppData\Local\app2container\remote\<PrivateIPAddress>\<net-app-id>\<net-app-id>.zip The process will take a few minutes, as the worker machine needs to download the Windows 2019 base image for your container. Once the process finishes, you should see the output below. Notice that you've updated your operating system at this step. This could be especially useful where your applications are running on EOL (End Of Life) OS versions. 4. Open the file and review the configurations. Ensure "createEcsArtifacts" is set to "true". Enter your target vpc-id to deploy your application into your target network. Go to the AWS console and navigate to the VPC service. In your VPCs, find "TargetVPC" and copy the vpc-id as shown below. In the deployment.json file, find the "reuseResources" part and paste the copied vpc-id as the "vpcId" value, as shown in the example above. If you don't update your VPC-ID in the deployment.json file, app2container will deploy your application into the Default VPC. Congratulations! 
You have defined all your target AWS configuration settings and are ready to deploy your application. In the next section you will create deployment artifacts for your deployment.
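For reference, the two deployment.json settings called out above sit roughly like this (heavily abridged — the real generated file contains many more fields, and the vpc-id shown is a placeholder):

```json
{
  "createEcsArtifacts": true,
  "reuseResources": {
    "vpcId": "vpc-0123456789abcdef0"
  }
}
```

Leaving `vpcId` empty is what causes app2container to fall back to the Default VPC, as noted above.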
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00059.warc.gz
CC-MAIN-2021-39
2,171
17
http://iwatching.info/watch/Pqh_UdF6NOA
code
6 Like 0 Dislike My Top 25 Goals in Chicago Blackhawks history. Go read my different stories behind each goal at www.chihawktalk.wordpres.com and follow us on Twitter at @HawkTalkHockey I do not own any of the footage or audio; all credit goes to NBC, CBC, CSN, SportsNet, and others! PUCK DAILY: THE HOCKEY FANATICS Second Channel For Prospect Highlights: https://www.youtube.com/c/PuckDaily2 Wayne Gretzky loves hockey. But he calls the modern game more "robotic" and too expensive for kids. »»» Subscribe to The National to watch more videos here: https://www.youtube.com/user/CBCTheNational?sub_confirmation=1 Voice Your Opinion & Connect With Us Online: The National Updates on Facebook: https://www.facebook.com/thenational The National Updates on Twitter: https://twitter.com/CBCTheNational The National Updates on Google+: https://plus.google.com/+CBCTheNational »»» »»» »»» »»» »»» The National is CBC Television's flagship news program. Airing seven days a week, the show delivers news, feature documentaries and analysis from some of Canada's leading journalists. Patrick Marleau and Joe Thornton were drafted 1st and 2nd overall in 1997, but both their careers have been defined by an inability to win the ultimate prize. CHECK IT OUT: BRAD MARCHAND 17-18 HIGHLIGHTS ARE HERE! #HIGHLIGHTS #MARCHAND #BRUINS #BOSTON SONG: IMA BOSS (REMIX) BY MEEK MILL All footage belongs to the NHL and its providers. None of the footage used in this video belongs to us. THIS IS THE ULTIMATE HOCKEY CHANNEL THIS IS HOCKEY UNLEASHED SUB FOR DAILY HIGHLIGHTS AND GAME RECAPS CHECK OUT MY SECOND CHANNEL: Matt Dubinski CHECK IT OUT: PATRICK MARLEAU 17-18 LIGHTS ARE HERE! #HIGHLIGHTS #MARLEAU #MAPLELEAFS #TORONTO #MARLEAULIGHTS SONG: ON & ON BY FAME ON FIRE All footage belongs to the NHL and its providers. None of the footage used in this video belongs to us. 
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211126.22/warc/CC-MAIN-20180816152341-20180816172341-00363.warc.gz
CC-MAIN-2018-34
2,020
13
https://thefirstmagazine.com/2022/06/27/geeky-tech-updates-info-about-its-it-marketing-agency-service/
code
IT marketing agency known by the name of Geeky Tech has today spoken out about how, in the era of social media and YouTube, everyone has access to an abundance of knowledge. People no longer recognise the expertise and knowledge necessary for SEO, which is one of the bad side effects of this trend. While it is most likely possible to learn the fundamentals of SEO in a short period of time, the ability to put these concepts into practice in order to rank a website is an entirely different matter. Companies pay IT marketing organizations mostly for the experience they bring. A team of five individuals with a combined 25 years of expertise will certainly yield greater results than an individual who has been studying SEO on YouTube for the last year. IT marketing organizations offer more than their skills, though. Geeky Tech uses search engine optimization to generate more leads, convert more consumers, and draw attention to the websites of businesses. Geeky Tech is a knowledgeable IT marketing agency. Geeky Tech's Geeks are dispersed across the world, which is contrary to the norm. And because of the power of online workspaces and collaboration tools, the team is not restricted to the standard 9-to-5 workday. The organization argues that this concept of work fosters creativity and productivity much better than chaining its employees to an office routine. Geeky Tech, an IT marketing firm, features a list of renowned professionals in their respective disciplines. The whole team has spent the greater part of the last eight years combing the internet for the most talented digital marketers. Geeky Tech encourages prospective clients to hang up the phone, cease cold-calling, and rely on the Geeks to lead prospective clients to their website. Refer to https://www.geekytech.co.uk/ for further information or a complimentary review. 32 London Road
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474573.20/warc/CC-MAIN-20240225003942-20240225033942-00484.warc.gz
CC-MAIN-2024-10
1,859
7
https://www.eclipse.org/lists/ide-dev/msg01103.html
code
I posted a while back to epp-dev about a new look for the Eclipse Welcome/Intro based on the Eclipse Solstice theme used on the website. This new theme also allows showing useful actions on the opening page, called 'quicklinks', to help newcomers get started. That theme is now available in 4.6M6 and I've started putting together change requests to adopt it for the various EPP packages. Gunnar asked some good questions about the work required from package maintainers. This new theme is entirely opt-in: you can refuse these changes and continue with the slate theme. And you can also choose to only partially adopt the theme and decline to use quicklinks. Let me outline the changes and the expectations:
- Adopt the Solstice theme:
  - Add a new black-and-white Eclipse logo as 'intro-eclipse-bw.png'; the packages already include a purple-and-orange 'intro-eclipse.png'.
  - Change the default intro theme from 'org.eclipse.ui.intro.universal.slate' to 'org.eclipse.ui.intro.universal.solstice'
- Adopt Quicklinks [Optional]:
  - Change the intro start page and home page to a new 'qroot' page that includes a new 'quicklinks' section.
  - ❗️Review the set of quicklinks for your package. I'll include a set that I hope makes sense for your package. But the language used here is very important.
- Review the organization of current pages - I noticed a number of packages haven't changed the default organization of the other intro pages, and so PDE and JDT are given higher priority over other plugins. I may include some changes related to this.
Q: Why do the quicklinks include more text? The quicklinks leverage the Eclipse Commands framework (i.e., the org.eclipse.ui.commands extension point). Although these commands include labels and descriptions, and sometimes icons, the labels and descriptions are rarely couched in newcomer-friendly terms, and the icons are generally 16x16 and aren't in keeping with the Solstice theme. 
For example, any reference to the New wizard (org.eclipse.ui.newWizard) shows label 'New' and description 'New'; the Import wizard (org.eclipse.ui.file.import) shows label 'Import' and description 'Import'. As the Welcome screen is the first port-of-call for almost all newcomers to Eclipse, using these default labels and descriptions presents a poor experience. As both the New and Import wizards have lots of different wizards, it's best if we can drive users to the right wizard. The Solstice theme also provides some icons based on Font Awesome that should be suitable for most uses, and we can generate others if needed.
Q: Can't you just define the quicklink texts in one place? Our packages cover a lot of different areas, including non-developers and non-Java developers, and it's unlikely we'll ever be able to come up with language that is universally understandable. We want the text to be tailored for each domain.
Q: Why are we including a new image in each package? Each package already includes the purple-and-orange Eclipse logo as intro-eclipse.png. We're adding the black-and-white version. The Intro component doesn't support specifying inter-bundle image references. The existing image may be referenced by other pages that expect an image that looks suitable on white backgrounds.
Q: Why do we have to specify a new start and home page? As the Intro/Welcome component is used in hundreds of products far beyond just Eclipse, we can't change the default root pages.
Q: Do I have to provide quicklinks? No. The Solstice theme works against the standard root page too. If you want to just use the Solstice theme, you can do a one-line change:
> org.eclipse.ui.intro/INTRO_THEME = org.eclipse.ui.intro.universal.solstice
and specify the updated introBrandingImage (intro-eclipse-bw.png). I'm a committer on the Platform and was asked to work on the reimagined Welcome/Intro described in bug 466370. 
The work is in two parts:
- It introduces a new Solstice-based theme for the Welcome/Intro pages, similar to what's used in the eclipse.org pages.
- It also re-imagines the 'home' start page to provide a set of useful starting actions for new (and possibly grizzled) developers; I've called these "quicklinks". I've included a small snapshot of it below.
Three things to notice:
- We've moved the page references (Overview, Tutorials, Samples, What's New) to the right side
- We've added a set of 'quicklinks' for commonly-used commands (left side)
- We've brought in the always-show component (lower right), which defaults to "always show on restart"
The work is currently pending on CQ 10824 against platform.ui (gerrit) and we'll need a similar CQ for use of the icons for EPP. This needs PMC sign-off before it can proceed further. This work is opt-in. The goal is for the EPP packages to use it, once approved. I've put up a changeset on Gerrit to implement the change above for the Java package as an example. What you need to do:
- Include the org.eclipse.intro.theme.solstice feature: it includes two bundles, org.eclipse.ui.intro.solstice and org.eclipse.ui.intro.quicklinks, that implement the theme and the quicklinks component, respectively.
- Change your plugin_customization.ini to point org.eclipse.ui.intro/INTRO_THEME = org.eclipse.ui.intro.universal.solstice
- Configure your desired quicklinks using the org.eclipse.ui.intro.quicklinks extension point. These leverage the existing Eclipse Core Commands (e.g., the org.eclipse.ui.newWizard command).
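For package maintainers, the theme switch described above is a single preference line; a minimal plugin_customization.ini fragment (comment added for context) would look like:

```ini
# plugin_customization.ini -- opt the package into the Solstice Welcome theme
org.eclipse.ui.intro/INTRO_THEME = org.eclipse.ui.intro.universal.solstice
```

The quicklinks configuration is separate and goes through the org.eclipse.ui.intro.quicklinks extension point in your plugin.xml.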
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00599.warc.gz
CC-MAIN-2023-14
5,532
35
https://www.shrikrishnatechnologies.com/new-websites/news-website
code
In this fast world where people run for work and don't have much time to do most of the things, they want everything on the go, be it internet or even food. So even news is needed on the go, and hence they prefer reading it on mobiles, smartphones or laptops. To cater to this need, one of our clients asked us to create a news site (http://www.vindianz.com/) which brings together news from all over the world in one place and has several RSS feeds which open easily on any smartphone. The website is already a success with more than 1000 visitors daily and caters to audiences in India, Australia, Germany and Malaysia, among others. The website: http://www.vindianz.com/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.3/warc/CC-MAIN-20230923194908-20230923224908-00524.warc.gz
CC-MAIN-2023-40
691
4
http://www.trilug.org/pipermail/trilug/Week-of-Mon-20080107/052807.html
code
[TriLUG] How to get laptop earphone jack to work? motley.crue.fan at gmail.com Fri Jan 11 13:50:09 EST 2008
Steve Litt wrote:
> When I run aumix, there are three sliders: Volume, which works perfectly,
> Line, which will not let me set it at anything but 100%, and Mic, which again
> will not let me set it at anything but 100%.
> What can I do to drive powered external speakers from this laptop?
Do you happen to know what audio chipset this is? The Intel HDA stuff seems to frequently have this problem with Linux. I had some trouble getting the jack to work with my new Toshiba; it ultimately took a kernel upgrade to make it work in my case, but other people have had success fiddling with different options to the kernel module.
Chief Architect - OpenQabal
More information about the TriLUG
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00014-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
778
16
https://lists.debian.org/debian-powerpc/2002/08/msg00304.html
code
model 150 almost there Thanks to the help of Hollis Blanchard, I almost have Debian Linux installed on my rs6000 model 150. However I still need help. Here's my situation: I boot Linux from netboot and then I mount my root filesystem from floppy. I have built a custom kernel from the 2.4 devel tree using a cross compiler and I have dd'd a root floppy using a Debian chrp root.bin image. When I boot the Linux kernel, it asks me to insert the root floppy and hit enter. I do so and it soon after displays a nice blue screen with a centered grey field and black text, which says Debian GNU Linux 2.2 boot floppy 2.2.23. I hit enter, and the screen quickly flashes up a dialog which says something like, "The installation program is determining the state of your system and the next step of the installation procedure to be performed." Then the screen returns to the opening screen. If I hit enter, it will repeat this sequence ad nauseam. I can get to virtual terminal 2 and cd/ls about the filesystem. I tried to mount /dev/scd0 (my CD-ROM?) but it says the filesystem is read only and it won't let me mount anything. Which is weird, because I can make a directory to mount the cdrom onto, and mount says the root filesystem is mounted rw. Any idea what to do next?
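(Not part of the original post.) One classic cause of this symptom on installer floppies is that mount fails with a "read-only filesystem" error when it cannot write to /etc/mtab, even though the mount itself would succeed; the device and mount point below are the ones from the post, everything else is an assumption worth trying from virtual terminal 2:

```shell
# -n tells mount to skip updating /etc/mtab, which may be unwritable:
mkdir -p /mnt/cdrom
mount -n -t iso9660 -o ro /dev/scd0 /mnt/cdrom

# If the root filesystem really is read-only despite what mount reports:
mount -n -o remount,rw /
```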
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.57/warc/CC-MAIN-20180422002521-20180422022521-00183.warc.gz
CC-MAIN-2018-17
1,253
22
https://www.opencart.com/index.php?route=marketplace/extension/info&extension_id=42471
code
Default Theme Modifier PRO is an opinionated OpenCart extension which lets you quite extensively modify the default OpenCart v3.x theme. You can modify lots of theme colors, hide elements, add additional text to the footer, and even add your own CSS code if needed. The extension is built to make significant changes to the default OpenCart theme with a low risk of third-party extension compatibility issues. It's both a good thing and a not-so-good one. Good - you probably won't have issues after installing other extensions to your OpenCart store. Not so good - it's not magic. You'll be able to do lots of tweaks, but it's still in the range of what the default OpenCart theme can do. Now let's get to features. It is an OCMOD extension - no core files are overwritten. Hide REFINE SEARCH Hide PHONE NUMBER Show sidebar on mobile Show left sidebar after content on mobile Disable rounded corners on the whole site Hide product thumbnail borders Simplify breadcrumbs styling Remove < hr > from footer Add CUSTOM CSS (basic textarea field for adding any css if needed) Hide default Social Share buttons Hide Stock Status Hide Reward points Remove < hr > from product page Improve Specifications tab styling (automatically customizes table styling) Improve Reviews tab styling (automatically customizes table styling) Background and colors: Change body background color Add body background image (with image position / cover size / repeat options) Add different content and product thumbnail background color if needed (or leave transparent) Add custom theme accent color, change primary button colors, change footer background and font colors Body and heading fonts: Change main theme font - select from 40+ Google Fonts Change main font color Change main font size Change headings font - select from 40+ Google Fonts Change headings color Add label to products with SPECIALS (including bestsellers, featured products and other default product modules) You may enter any multilanguage text (for example SALE!) 
Instead of multilanguage text, the system can calculate and show the discount percent (for example -40%). Change label background color, text color and font size Add custom footer text (4 areas) Remove most of the default footer links * Fixed bug where the sidebar shows up on the right in category pages on mobile. * Added a way to show the product price and options column before the images and description column on mobile. ALREADY HAVE THIS EXTENSION? Don't forget to rate it!
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644855.6/warc/CC-MAIN-20230529105815-20230529135815-00016.warc.gz
CC-MAIN-2023-23
2,595
42
http://www.phoronix.com/forums/showthread.php?54805-How-Unity-Compiz-GNOME-Shell-amp-KWin-Affect-Performance&mode=hybrid
code
How Unity, Compiz, GNOME Shell & KWin Affect Performance Phoronix: How Unity, Compiz, GNOME Shell & KWin Affect Performance Those that follow my Twitter feed know that over the weekend I began running some benchmarks of the various open-source and closed-source graphics drivers. But it was not like the usual Phoronix benchmarks simply comparing the driver performance. Instead it was to see how each driver performed under the various desktops / window managers now being used by modern Linux installations. In this article are the first results of this testing of Unity with Compiz, the classic GNOME desktop with Metacity, the classic GNOME desktop with Compiz, the GNOME Shell with Mutter, and the KDE desktop with KWin. These configurations were tested with both the open and closed-source NVIDIA and ATI/AMD Linux drivers. Nice benchmark, and interesting numbers. I am using KDE 4.6.3, with a GTX 460 and although I use compositing most of the time I have to turn it off even when playing games like gnujump in windowed mode since it causes horrible performance. The problem is of course even worse in more demanding 3D games. I take it the performance in your benchmark is due to using the (by default in most distros) "undirect windows" option and fullscreen benchmarks, which isn't much different from benchmarking the desktop without compositing. It would be interesting to see compositing benchmarks in KDE without using this option, or running the benchmarks in windowed mode, and then comparing it to suspended compositing (Alt+Shift+F12). Have you tried benchmarking Gnome Shell & Mutter on Fedora 15? Do people actually use GNOME Shell with the Unity desktop? Nice. Any chance you can run a similar benchmark looking at idle CPU usage and power usage? I remember Martin Gräßlin saying a little while back that kwin's power usage with desktop effects on shouldn't be any higher than with them off, but that's not been my experience (nvidia driver). 
Anyway, would be interesting to see how all these compare on power management. Would it be possible to include openbox in the test? As Lubuntu is known as a fast desktop system, it would be nice to know if this is true regarding graphics performance as well. What kind of compositing is in use by kwin? I only use xrender.
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246640001.64/warc/CC-MAIN-20150417045720-00300-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
2,288
15
http://stackoverflow.com/questions/1984704/how-to-delete-a-data-from-a-file-through-java
code
I want to read a file in Java. And then, I want to delete a line from that file without the whole file being re-written. How can I do this? Someone suggested that I read/write to a file without the file being re-written with the help of RandomAccessFile. http://stackoverflow.com/questions/1984534/how-to-write-data-to-a-file-through-java Specifically, that file contains lines. One line contains three fields - id, name and profession - separated by \t. I want to read that file through a Reader or InputStream or any other way, then search for a line that has the specified keyword (say 121), and then delete that whole line. This operation needs to be performed without the whole file being re-written.
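Not from the original question, but here is one way this is commonly done with RandomAccessFile: find the line, copy the bytes that follow it back over it, and truncate. Note that the tail of the file is still shifted in place - on ordinary filesystems there is no way to remove bytes from the middle of a file without moving what comes after them. All names below are illustrative.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteLineInPlace {
    // Removes the first line whose first tab-separated field equals `id`,
    // by shifting the remaining bytes left and truncating the file.
    public static void deleteLineById(Path file, String id) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            long lineStart = raf.getFilePointer();
            String line;
            while ((line = raf.readLine()) != null) {
                long lineEnd = raf.getFilePointer(); // byte just past the newline
                if (line.split("\t")[0].equals(id)) {
                    // Copy everything after this line over it, then truncate.
                    byte[] rest = new byte[(int) (raf.length() - lineEnd)];
                    raf.seek(lineEnd);
                    raf.readFully(rest);
                    raf.seek(lineStart);
                    raf.write(rest);
                    raf.setLength(lineStart + rest.length);
                    return;
                }
                lineStart = lineEnd;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("records", ".txt");
        Files.write(f, "120\tAlice\tdoctor\n121\tBob\tlawyer\n122\tCarol\tengineer\n"
                .getBytes(StandardCharsets.UTF_8));
        deleteLineById(f, "121");
        System.out.print(new String(Files.readAllBytes(f), StandardCharsets.UTF_8));
    }
}
```

Caveat: RandomAccessFile.readLine treats bytes as ISO-8859-1, which is fine for ASCII ids but not for arbitrary UTF-8 content.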
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00073-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
708
5
http://www.meetup.com/DC-Great-Books-Reading-Group/events/70432902/
code
We will discuss selected readings from Karl Marx, German philosopher, economist, sociologist, historian, journalist and revolutionary socialist. The schedule is below. All the reading selections are included in The Marx-Engels Reader (Second Edition, 1978), edited by Robert C. Tucker. It is available on Amazon or used on www.abe.com. Page numbers cited below are from The Marx-Engels Reader. If you wish to buy individual texts or get texts from the library, the detailed reading list is below: July 18 Marx's essay, On the Jewish Question, pp. 26-52; The essay can be found at the following link July 25 Economic & Philosophical Manuscripts: Estranged Labor, Private Property and Communism, The Meaning of Human Requirements, The Power of Money in Bourgeois Society, pp. 70-105. August 1 The Communist Manifesto, pp. 469-500 August 8 Capital, Volume One, Excerpts, Chapter 1 (Commodities) through Chapter IV, pp. 302-336 August 15 Capital Chapter V (Contradictions in the General Formula of Capital). This chapter is not in The Marx-Engels Reader but can be found at the following link. through Chapter VII (The Labour Process and the Production of Surplus Value), pp. 336-361.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762908/warc/CC-MAIN-20131218054922-00080-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,180
8
https://community.esri.com/t5/arcgis-enterprise-questions/do-i-need-to-restart-arcgis-server-after-applying/td-p/1107483
code
We have a standalone ArcGIS Server 10.8.1 instance with a license that will expire soon. I've used the Software Authorization file to authorize with a new authorization to extend the license to next year. I can confirm by looking at the keycodes file that the new license has been applied. However, I can see that the old license is also present in the keycodes file with the upcoming expiry date. Do I need to restart ArcGIS Server to get it to discover and use the new license? Or, will it continue to work past the upcoming expiry date using the new licenses that I've applied without needing a restart? Thanks for your answer, @ReeseFacendini. Esri Canada technical support gave me the opposite answer (that I would need to restart), so now I'm wondering which is correct. The options that I have, in order of preference, are:
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476397.24/warc/CC-MAIN-20240303174631-20240303204631-00079.warc.gz
CC-MAIN-2024-10
830
4
https://blog.cloud66.com/two-new-awesome-features
code
Today I am happy to announce two new features: Individual Linux Users and the RabbitMQ AddOn. Individual Linux Users Until now, a single shared user (the one used by the cloud provider) and SSH key was used both for deployments and for all SSH accesses by the members of your team. Starting from today, all new stacks will have an individual and unique user for every member of your team. This means if someone leaves your team, their access to servers will be revoked automatically without your intervention. And any new addition to your team will automatically be commissioned on all of the applicable servers based on the privileges you assign them. This feature is transparent to all users and will be automatically used by the toolbelt without any change. If you want to use it directly with your own SSH terminal, downloading the SSH key from the server page will download your own individual SSH key and not the shared one. We just added RabbitMQ to the list of our addons. You can now install a managed instance of RabbitMQ on your stack with a click of a button. Until now, RabbitMQ was installed on stacks only if it was used in your Rails, Sinatra or Padrino stacks. Now you can add it as a standalone part of your infrastructure after the stack has been created.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100705.19/warc/CC-MAIN-20231207221604-20231208011604-00188.warc.gz
CC-MAIN-2023-50
1,269
8
http://clofix.com/vista-pet-supply.html
code
3:00pm-3:30pm Demystifying Chameleons - Loren Leigh, Vista Pet Supply. Vista Pet Supply offers wholesale reptiles and reptile supplies to pet stores and zoos. Marquis at Ladera Vista offers pet-friendly apartments for rent in Austin, Texas. We accept both cats and dogs and are excited to welcome you and your extended family. For your convenience, we offer an where your pup can run and play. Grooming salons, pet supply stores, and off-leash dog parks are also nearby. We guarantee our animals to arrive alive, healthy and to your satisfaction, and for 24 hours after arrival. Amphibians, Invertebrates and Green Tree Pythons have a live, healthy arrival guarantee only. We must be notified of any issues at the time of arrival. If the package is not signed for on the first delivery attempt, or your temperature at the time of arrival is above 90 or below 35 degrees, animals are not guaranteed for any reason. We do not cover replacement shipping for any reason, and there are no cash refunds of any kind. Replacements or store credit only. Any turtle or tortoise under 4" is for scientific / educational purposes only. We ship animals UPS overnight to your location. Although you can request the sex of animals, if they are not available, the other sex will be sent unless you specify otherwise. You MUST be a retail pet store or public zoo open to the public to purchase from Vista Pet Supply. You are responsible for knowing your state and local laws regarding animals you purchase and sell. Although we attempt to fill every order completely, we will not notify you of shortages unless the shortage exceeds 10% of your order. Heat packs and cold packs will be added at our discretion at $1.75 per pack, depending on the size of your box and where it is going. VICE CHAIRMAN Jeff Boyd, Lee Mar Aquarium & Pet Supply, Vista, CA. Vista Pet Supply Price Guarantee
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866358.52/warc/CC-MAIN-20180624044127-20180624064127-00320.warc.gz
CC-MAIN-2018-26
1,858
6
https://www.empsn.org.uk/knowledge-base/network-checking/
code
To run Network Checks connected directly into the router you will need to assign a fixed IP address on your workstation. The IP address needs to be within the range assigned to your school. Once configured you should be able to connect and test your services. Let's assume your IP range is: Range 10.15.0.0 /24 Subnet Mask 255.255.255.0 DNS 220.127.116.11, 18.104.22.168 (standard emPSN dns)
- Configure your workstation with a fixed IP address; it does not matter if this is in use on your network, as this is only for testing purposes. Let's use 10.15.0.100
- Enter the correct subnet mask, gateway and DNS and save the settings
- Connect your workstation directly into the router, on the same port/interface your network would normally connect.
- From a terminal or command prompt, ping the gateway address; you should get a reply.
- Again from a terminal, PING 22.214.171.124 Open a web browser and go to http://126.96.36.199 - this should give you a speedtest page; run the test
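On a Linux workstation, the first four steps above can be done from a terminal. The interface name (eth0) and gateway address (10.15.0.1, i.e. the first host in the example range) are assumptions - substitute the values actually assigned to your school.

```shell
# Assign the fixed test address from the example range (assumed eth0)
sudo ip addr add 10.15.0.100/24 dev eth0
# Assumed gateway address - use the one assigned to your school
sudo ip route add default via 10.15.0.1
# Step 4: the gateway should reply
ping -c 4 10.15.0.1
```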
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00676.warc.gz
CC-MAIN-2022-27
976
11
http://tourdegrace.blogspot.com/2008/08/if-blog-is-posted-and-is-not-read-by.html
code
I admire those who blog on a daily basis; quite frankly, doing it every three days is enough of a challenge for me. Since I began blogging a few months ago, I have struggled with the basic question "Is anyone really interested in what I have to say?" Actually, I asked that question even before I posted my first blog. My struggle: combating the fear of being irrelevant, of wasting other people's time, besides wasting my own. I can make arguments that a blog should be informative; telling others something about me or my perspective on some slice of the world that I feel may be important to share. I might reach one person with one idea and start them off in a slightly altered direction for their own journey; a blog of benefit to someone on occasion. I can make the argument that a blog can be therapeutic; letting me give vent to some expressive thought, that, if bottled up with other unvented expressive thoughts, would cause me to burst open one day like an overripe melon. I would scatter a few seeds; in spite of the sound of it, a blog that might occasionally benefit me. I can make an argument that a blog can be narcissistic; allowing me to revel in, well, to revel in me. Oh boy, can I see that one happening; a blog of use to no one. I would ask those who blog who bother to take the time to read my blog: - Do you think anyone is really interested in what you have to say? - Do you really care either way? - If no one was reading it, would you blog anyway? I guess I will continue to seek the good things, and seek to avoid the bad things about blogging.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592387.80/warc/CC-MAIN-20180721051500-20180721071500-00374.warc.gz
CC-MAIN-2018-30
1,572
9
http://www.sparkpeople.com/mypage_public_journal_individual.asp?blog_id=5279373
code
Wrap up blog! Friday, March 08, 2013 I have to say I enjoyed blogging this week; it was nice having the exchange with teammates, the ideas and motivation! I really felt like I was part of a team, and I also read a lot of blogs, since I wanted my points for my team. I learned a lot. I see we all have the same challenges, but I believe we all want the same thing: health. Come on everyone, let's do this once and for all. I believe we can, what about you? What do you believe??
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806388.64/warc/CC-MAIN-20171121132158-20171121152158-00039.warc.gz
CC-MAIN-2017-47
475
7
https://admhelp.microfocus.com/da/en/6.4/online_help/Content/PluginHelp/sra_plug_crte_ckbk.html
code
Create Cookbook Step This step creates a new cookbook on the Chef server.
- Cookbook Name: Enter the name of the cookbook you want to create.
- Copyright Holder: Enter the name of the copyright holder for this cookbook.
- Type of License: Select the type of license under which the cookbook is distributed. Values include: Apache v2.0, GPL v2, GPL v3, MIT, and Proprietary - All Rights Reserved.
- Owner Email: Enter the email ID of the person who maintains this cookbook.
- Cookbook Path: Enter the directory where the cookbook is to be created.
- Document Format: Select the format for the Release Notes file.
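These fields line up closely with the options of Chef's `knife cookbook create` command, which this step presumably wraps. A hedged sketch with illustrative values (flag spellings follow older knife releases; newer Chef workflows use `chef generate cookbook` instead):

```shell
# Illustrative values only - substitute your own cookbook name, copyright
# holder, email, license and path; "md" selects Markdown release notes.
knife cookbook create my_cookbook \
  --copyright "Example Corp" \
  --email "[email protected]" \
  --license apachev2 \
  --readme-format md \
  --cookbook-path /var/chef/cookbooks
```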
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100489.16/warc/CC-MAIN-20231203062445-20231203092445-00270.warc.gz
CC-MAIN-2023-50
610
10
http://boolesrings.org/krautzberger/2012/03/
code
I haven’t posted anything in almost two months — life happened. It’s still a bit chaotic and maybe I’ll write about it when things calm down. For now, I’m back from a productive trip to Toronto where, among other things, I had the pleasure to reconnect with Sam, Assaf and Mike. Meanwhile back at the ranch In the meantime, Felix Breuer scooped me and wrote a great piece (better than anything I would have written) entitled “Not only beyond Journals, not only beyond papers, but beyond Theorems”. You absolutely must read it. He takes a point I made in several discussions with him and just nails it. So go and read his piece — don’t worry, I’ll wait. You’re back? Excellent! Reading Felix’s piece, I thought I should try something I’ve been thinking about for a while. Though I’m arguing (as does Felix) that mathematicians need to move beyond “new result”-papers, I’m not advocating the end of new results. (Or the end of review by peers.) The bane is rather that we write too many papers. I think, paradoxically enough, that we can only overcome this inflation of papers (and its damaging monoculture) by reducing the “least publishable unit” further. We must develop new ways of sharing mathematics that are better adapted to the effective dissemination of research — while allowing researchers to build a track record. Science Online 2012 revisited Figshare is a platform to make all kinds of research results public and, importantly, citeable. They started with scientific figures (duh) but since its official launch earlier this year have gone on to more general data (they have a few of the big citizen science projects onboard now), and are really open to everything — from grant proposals to short research notes to anything. (Figshare also has very interesting financial backing.) Getting to know figshare made me wonder what mathematical content could be suitable for it. 
There’s obviously stuff that works: data in applied mathematics and also visualizations and mathematical software packages fit the bill. But for logicians and set theorists none of these are usual. What would fit? The first thing that came to mind was the mathematical analogue of negative experimental data (which figshare is eager to host) — in other words, counterexamples. What else? For my second idea, I need to return to Science Online where I had a wonderful long conversation with Jason about the invisible college, new measures for research and other ideas (which I will try to write about in the future). In that conversation, we also discussed the idea of micro-contributions, contributions much smaller than a small paper. At first, this seems more important to the sciences — publishing your data in real time seems a natural progression there and open notebook science is already a development in that direction. I think this is also a natural step for mathematicians — share your research as soon as it’s done — don’t worry about the great result but help by making things public. As a mathematical example of open notebook science, you might consider Polymath, but Dror Bar-Natan’s pensieve is probably a better comparison. You might be afraid to do this, worrying about being scooped or not being able to publish afterwards. But we need to experiment and create more examples that work. Polymath was wonderful, but has only worked twice so far, once led by Tim Gowers, once led by Terry Tao. It might turn out that Polymath simply does not scale to the “average” researcher. But in any case, there’s every reason to continue experimenting on the web. I read a great comparison recently: the state of the 20-year-old web is roughly that of the printing press 100 years after its invention — that’s 1540, mind you, when almost all prints were illegal copies of short pamphlets. Which reminds me of this: So what’s this about then? 
Well, this post is supposed to be the prelude to an upcoming double-post which is precisely this — a micro-contribution, a small result, far smaller than the least publishable unit, nothing big, but at the same time a curious observation which is, I believe, worth recording (if only because it made me question a certain intuition of mine). This micro-result has been lying around in my notes for almost two years now. At first I thought it might be incorporated into something else, but as it turns out, this never happened. And yet, I’m sitting on it. Sure, the people involved in it know about it, but since it’s so small, it would never be published as a paper and thus never appear. I think that’s really unnecessary — I want to make my work public, that’s kind of the point, isn’t it? And I don’t want to be pretentious and waste time finding a way to blow this up into yet another paper that nobody reads. Thus, this experiment. A question on the side. Could we have a “journal” for micro-contributions? What would that even look like? Would we need peer-review? What would peer-review look like? Could it simply be done in the comments of a blog or by short “replies” on (a common or different) platforms such as arXiv or figshare? Besides making this micro-contribution public I would like to give you the story behind the result. You see, the result is really “micro”. So if I only gave you the proof, we’d be done in a minute. And then you’d have 1-2 pages tops, in the usual brutally short mathematical paper-style writing (only expanded by my silly habit of writing proofs as lists). I mean, don’t worry, that version will be there, too, in the end, much like my dream of one day having papers with computer checkable proofs in the appendix. But while I’m trying something new why not try some mathematical storytelling simultaneously? So before I reach those bare bones of proof, I will take the time to tell you the story of how the result came to be. 
Not because it is an especially important or impressive story — neither is the case. In fact, it’s rather ordinary and I’m sure every mathematician will have experienced this, probably on a much more significant level than I did. Yet I’d like to try writing about it and it will take me a double post to do so. Conversely, I hope you, my two readers, will be kind enough to provide some feedback. I could imagine this happening in three ways: - the mathematical side: is the result correct? - the lyrical side: is the result well written? - the experimental side: is this a concept you could see yourself employing? Finally, I do plan to post the result of all this properly somewhere; you know, as a research note of sorts, with a proper bibliography and so forth. Maybe on figshare, maybe on the arXiv, maybe on github, not sure yet (any thoughts?). In any case, stay tuned for the first post — the rough drafts are finished, but some fine tuning is still missing.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719566.74/warc/CC-MAIN-20161020183839-00014-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
6,875
20
http://bitshift.bplaced.net/en/dslr-remote.htm?lang=EN&page=print
code
DSLR Remote is an app for Android smartphones and tablets, enabling you to remotely control your digital reflex camera. But it's not just an ordinary remote control: it provides you with the possibility to take timer-controlled series of shots, e.g. for time lapse, long time exposures and sequences of shots in the context of High Dynamic Range (HDR) photography. Using DSLR Remote, your camera can, depending on its technical capabilities, be controlled in two different ways. Either the smartphone in conjunction with DSLR Remote operates as an infrared remote control, or as a cable release. In either case you will need a small, inexpensive and easy-to-build piece of hardware, which is to be connected to the smartphone's/tablet's audio output. As easy as the layout of the hardware and the operation of DSLR Remote is, please be aware that the construction of the hardware and its use in combination with a smartphone/tablet and DSLR Remote is at your own risk. The author of DSLR Remote assumes no liability whatsoever for any resulting damage. Furthermore, no guarantee is given that DSLR Remote will work properly in conjunction with your smartphone and/or camera. Regarding the smartphone/tablet, proper function mainly depends on the maximum earphone output volume. Also, not all camera types of a particular brand seem to behave in the same way regarding the infrared signal. For the time being there seem to be problems particularly with some camera types from Canon. Please take into account that DSLR Remote is a hobbyist project, developed and maintained by the author in his spare time. It is still a work in progress and the author is willing to check all error reports and to answer all questions. But please understand that this might perhaps take some time. You will find a discussion thread about DSLR Remote and the hardware in particular in the DSLR-Forum (German). Maybe there are already answers to some of your questions. 
If you want to know more about the hardware and its construction, you will find the necessary information and how-tos in the section Hardware. If you already have the necessary dongle or cable, or would like to know more in advance about the range of functions of DSLR Remote, you will find information about the settings and operation of DSLR Remote in the section Manual. DSLR Remote is absolutely free of charge and free from ads. But the development and maintenance take a lot of time and sometimes a bit of money, too. So, if you like DSLR Remote and are thinking about supporting further development, you might consider to: Many thanks in advance
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122955.76/warc/CC-MAIN-20170423031202-00186-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,593
8
http://piwigo.org/forum/viewtopic.php?id=19800
code
I am testing 2.4 and am using the Elegant theme. When I click the up arrow or at the top of the image, I get linked to http://www.geoffschultz.org/photos/inde … 3/start-15 which generates the following: You don't have permission to access /photos/index.php on this server. If I delete the "/start-15" from the URL, there's no problem. Last edited by geoffschultz (2012-07-03 15:06:09) Out of interest (I'm not a developer) I tried your link and had the error, so I went into your gallery and found a problem here http://www.geoffschultz.org/piwigo/pict … category/3 If you look on this page, there is no thumbnail for the next image, and if you go to that image it is that page where your problem link is found. Every other page is OK, so I suspect it may be a problem with that missing thumbnail. To regenerate the thumbnail (if you're not sure how - if you are, ignore this): Go into the 'admin/photos/batch manager', select album in 'add filter', select that album then tick the box on the culprit image. Now in the 'choose an action' drop-down box below, select 'generate multiple size images' and tick the box for thumbnail. See if that solves the problem. Thanks, but that isn't the issue. I think that you just happened to look at the gallery when I was deleting an image and replacing it with a new version and I hadn't generated the thumbnail yet. I also noted that I have the same issue with other themes when they specify a value for the /start parameter. Last edited by geoffschultz (2012-07-01 17:51:54) The error also appears when you have more than one page of thumbnails and try to change the page - I assume it is using the same code for the link. I have just been experimenting with Piwigo so have not uploaded a large number of photos so only have one page - I'll try and find the time to upload more and see what happens. I wonder if others are experiencing the same issue - would be nice if they dropped by to confirm one way or the other. 
Don't know if it contains further info, but you might check the server log. Do you have a security mod enabled in Apache? I wasn't sure what the links meant so I delved further and this is what I found - not sure if it'll help. The end of the link is pointing at the image number to start the display of thumbnails on a page. The -15 in yours is the default image number for the start of the second page of thumbnails - because the default number of thumbnails per page is set at 15 (at least for the elegant theme - others may be different). I set my number of thumbnails to 3 for guests, then tried the links (page number and the up arrow) and it works perfectly using -3 - so I assume it would for the default of 15 if I had more images. As a suggestion, try setting the number of thumbnails to a different value (it is set at the bottom of the Users/Manage page) and see if that makes any difference. If not, then as 'flop' suggests, maybe it is a server setting somewhere - but I wouldn't have a clue as to what it might be - although if it is a server setting problem I'm sure others would have reported it in the past, as setting the number of thumbnails and the link formatting has been around for some time in previous versions - unless of course there has been a significant change to the coding for this in v2.4. It has not been changed for a long time, and you're the first to report it, so that's why I'm thinking of a server particularity. What I don't understand is that I can access http://www.geoffschultz.org/photos/inde … ry/3/start but if you specify a value (i.e. "-15"), I get the 403 error. What's also strange is that I can't find any errors in the server log. Last edited by geoffschultz (2012-07-03 11:16:31) This was resolved on my end. It ended up being an error in the rewrite rules within my .htaccess. There was a problem handling "-" symbols in parameters. Last edited by geoffschultz (2012-07-03 15:43:57) This was resolved on my end. 
It ended up being an error in the rewrite rules within my .htaccess. There was a problem handling "-" signs in parameters. great news ! the code was from Piwigo 2.4 or ... ?
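The fix described above is a common one: if a rewrite rule's character class doesn't include the hyphen, a URL segment like start-15 never matches the rewrite and can fall through to a rule that denies access. The thread doesn't show the actual .htaccess, so the pattern below is purely hypothetical, but the regex behavior it demonstrates (shown in Python for easy testing) is the same one Apache's mod_rewrite would exhibit:

```python
import re

# Hypothetical rewrite pattern of the kind that could cause the 403: the
# character class omits "-", so a path segment such as "start-15" fails to
# match and the request falls through to whatever rule denies access.
broken = re.compile(r"^photos/([A-Za-z0-9_/]+)$")

# Adding "-" to the character class lets hyphenated parameters through.
# (A trailing "-" inside [...] is treated literally, not as a range.)
fixed = re.compile(r"^photos/([A-Za-z0-9_/-]+)$")

url = "photos/category/3/start-15"
print(bool(broken.match(url)))  # False - the rewrite never fires
print(bool(fixed.match(url)))   # True  - the rewrite applies as intended
```

In a real .htaccess the equivalent fix is a single character: include - inside the RewriteRule's character class (or use a class such as [^/]+ that already permits it).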
https://www.npmjs.com/search?q=keywords:mru
A cache object that deletes the least-recently-used items.
Implementation of the 2Q cache algorithm.
In-memory cache implementations with an ES6 Map-like API and different eviction strategies.
A tiny (215B) and fast Least Recently Used (LRU) cache.
A LRU (least-recently-used) in-memory cache for Node.js.
Tiny & fast LRU implementation.
Most-recently-used fixed-size overflowing array.
hyperlru implementation backed by a Map.
hyperlru implementation backed by an Object.
A keyed pool that recycles the least-recently-used objects.
In-memory object cache written in TypeScript for Node that supports multiple eviction strategies.
A least-recently-used cache manager in 35 lines of code.
A lightning-fast cache manager for Node with a least-recently-used policy.
A package filled with data structures, sort algorithms, selections, and more to come.
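Most of the packages listed above implement the same core idea: a bounded map that evicts whichever entry was touched least recently. As a rough illustration (not the code of any particular package), the structure can be sketched in a few lines with Python's OrderedDict:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache in the spirit of the packages above."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key, default=None):
        if key not in self._items:
            return default
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

The Node packages typically build the same structure on top of a Map, which also preserves insertion order, or on a doubly linked list for O(1) reordering.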
https://www.java.net/node/689970
wsmonitor help & documentation

Hi, can someone guide me to a location where I can find the user/admin guides and further documentation for wsmonitor? We are trying to benchmark our transactions (web service calls). We are using GlassFish ESB, and this tool seems to be an appropriate one for the purpose. We could configure and set it up fine, but we are not able to understand how to benchmark and get transaction-level timings. Any help would be highly appreciated. Here is the screenshot of wsmonitor and the values that we are seeing.
https://www.name.com/blog/name/2010/11/podcast-3-dnssec/
This week our CTO, Sean Leach, joins the podcast to talk about a little thing called DNSSEC. The most basic explanation of DNSSEC is that it provides security for your DNS, but, as you will hear, there is oh so much more involved. Non-tech folks, not to worry, Sean does a really good job of keeping the technobabble to a minimum. Even as I was politely smiling and nodding during recording, I was actually comprehending (most) of what was being said. 🙂
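For readers who want a concrete picture of what "security for your DNS" means: DNSSEC lets a resolver verify that an answer was signed by the zone's owner, so a forged answer is detectable. The toy sketch below is not the real protocol (DNSSEC uses public-key RRSIG records and a chain of trust up to the root, not a shared secret), but it illustrates the tamper-detection idea in miniature; the key and records here are made up:

```python
import hashlib
import hmac

# Conceptual sketch only: real DNSSEC signs record sets with public-key
# cryptography (RRSIG/DNSKEY records). This shared-secret stand-in just
# shows the core idea -- a spoofed DNS answer no longer matches the
# signature that accompanies it.
zone_key = b"example-zone-signing-key"  # hypothetical key material

def sign_rrset(name, rtype, rdata):
    """Produce a signature over one DNS record."""
    msg = f"{name}|{rtype}|{rdata}".encode()
    return hmac.new(zone_key, msg, hashlib.sha256).hexdigest()

def verify(name, rtype, rdata, sig):
    """A resolver's check: does the answer still match its signature?"""
    return hmac.compare_digest(sign_rrset(name, rtype, rdata), sig)

sig = sign_rrset("name.com.", "A", "192.0.2.10")
print(verify("name.com.", "A", "192.0.2.10", sig))   # True: answer intact
print(verify("name.com.", "A", "203.0.113.9", sig))  # False: spoofed answer
```

The real protocol adds the hard parts the podcast gets into: key distribution via DS records, signing the zone's delegations, and validating the whole chain from the root down.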
https://www.animalspot.net/african-rock-python.html
The African Rock Python is native to Sub-Saharan Africa. The python genus has seven species, and the African rock python is one of them. They are non-venomous snakes and are the largest snakes in Africa. The name Python sebae was derived from Greek mythology, in which Python refers to a huge serpent.

The African Rock Python is among the seven species of python. There are two subspecies of African Rock Python: one found in southern Africa, also called Python sebae natalis, and the other found in the western and central parts of Africa. The African Rock Python found in central and western Africa was identified by the German naturalist Johann Friedrich Gmelin in 1788. The Python sebae natalis found in southern Africa was identified by the father of South African zoology, Sir Andrew Smith, in 1833.

The largest snake in Africa and the third largest snake in the world, it is enormous and bulky. They can weigh up to 135 kg. An average male African Rock Python measures around 16 feet; however, the largest that have been confirmed are around 20 feet long. The females are larger than the males. Sizes also vary by place: more heavily populated areas have recorded smaller African Rock Pythons than those with less population.

Picture 1 – African Rock Python

It has a very thick body with blotches which are joined like irregular stripes and ultimately fade to white underneath. Their body color varies between brown, chestnut, and olive. They have a dark, triangular, arrowhead-shaped head and a triangular mark below their eyes. Their scales are small, smooth, and dry to the touch. The African Rock Python has heat-sensitive pits around its lips which help it detect warm-blooded prey. The African Rock Python is usually found in open savanna, grassland, rocky areas, forest, and semi-desert habitats.
They are dependent on water and are often found near water bodies like lakes, swamps, and marshy areas; they become dormant during the dry season. They occupy abandoned ant bear burrows or shelter under dense piles of driftwood. The African Rock Python is found throughout Sub-Saharan Africa, from Guinea and Senegal on the western coast, spreading across central Africa and towards the east coast in Ethiopia, southern Somalia, Kenya, and northern Tanzania. African Rock Pythons were also found in the Florida Everglades in 2009.

The African Rock Python becomes sexually active at 3 to 5 years old. They reproduce during the spring and lay about 20 to 100 eggs. The incubation period lasts around 2 to 3 months, during which the female guards her eggs aggressively against any predators. The length of the hatchlings is around 18 to 24 inches. African Rock Pythons can live up to 12 years in the wild; however, they can live up to 30 years in captivity.

African Rock Pythons are carnivorous and non-venomous; they coil around their prey and constrict it. The prey ultimately dies of cardiac arrest. They hold a tight grip and tighten it every time the prey breathes out. They swallow the entire prey, and if the prey is big enough they can go without eating for almost a year. They can swallow up to 60 kilograms of lifeless prey. Since their jaws are attached by stretchable ligaments, they have the ability to swallow prey bigger than themselves. They have strong acids inside their stomach which help them digest their food. The African Rock Python feeds on rodents, small and medium antelopes, monkeys, domestic pigs, lizards, dogs, goats, crocodiles, and at times even fish.

The African Rock Python does not have many predators. Humans are their main predators, and in some cases they might fall prey to hyenas or African wild dogs during their digestion period.
Picture 2 – African Rock Python Photo

African Rock Pythons are available at exotic pet shops. Their prices vary according to the color of their skin and their temperament. They can be bred in captivity; however, they are not meant for beginners. They are large and aggressive, especially when they are hungry or when they are guarding their eggs. Their species and taxonomy have been described differently by various authors.

African Rock Pythons are not an endangered species, but they are listed under CITES (Convention on International Trade in Endangered Species) Appendix II, as their skins are in demand for making leather, belts, and bags. Exporting them is restricted.

During the breeding season both sexes fast, and the female African Rock Python continues the fast till the eggs are hatched. The hatchlings have to fend for themselves. African Rock Pythons feed only once or twice a month, and if their prey is big enough they can go without food for almost a year.

The Burmese Python has been thriving in Florida, where it faces few natural checks. Since 2002, six African Rock Pythons have been located on the loose in Florida. This was a matter of great concern, as some scientists feared that the African Rock Python would breed with the Burmese Python and the offspring would be a more aggressive species of super snake. It would not only hamper the ecosystem but would also be dangerous for families with small children.

African Rock Pythons have been known to attack livestock and the pets of human beings. They feed on dogs, goats, and cattle, which are important sources of livelihood for the local residents. There are reports of the African Rock Python attacking human beings too, but they usually do not attack unless they are provoked. They can pose a threat to families with small children.

African Rock Python conservation is not a matter of very big concern; however, they are no longer as widespread as in earlier times. The reason for their decline is mainly hunting for their skin or meat.
They are now mainly restricted to secluded areas, hunting reserves, and parks. They are listed in Appendix II of CITES (Convention on International Trade in Endangered Species) and hence are legally protected, especially in areas where their species are vulnerable and declining. Here are some amazing images of the largest snakes in Africa.
https://support.kinetica.com/hc/en-us/articles/360051585093-KAgent-Unable-to-authenticate-user-admin-against-existing-cluster-Please-verify-and-update-your-password-
KAgent: “Unable to authenticate user admin against existing cluster. Please verify and update your password”

The user is performing Add Cluster on a new or existing cluster. The Verify step fails with the following message:

Unable to authenticate user admin against existing cluster. Please verify and update your password

1. UERR messages in gpudb.log alerting to an "Invalid password for admin" error:
2020-07-07 16:47:05.244 UERR (4138,5096) <hostname> Security/SecurityManager.cpp:926 - Invalid password for admin
2020-07-07 16:47:05.244 UERR (4138,5096) <hostname> HttpServer/GaiaHttpRequestHandlerFactory.cpp:113 - invalid credentials for endpoint: /show/system/status

2. KAgent is an older version, e.g. kagent-188.8.131.52.20191113144856.ga-0.x86_64.rpm

3. It's possible the administrator is trying to add a cluster that shares the same name as a cluster that was previously added and removed.

Kinetica On-prem 7.x

An invalid-password error is encountered during Add Cluster at the Verify step when performing a cluster upgrade. Support recommends following the proper upgrade path when upgrading a cluster.

1. Try clearing the browser's cookies, or try using a different browser.

2. Upgrade your KAgent version to the most recent release available in the public repository and try Add Cluster again.

3. If the cluster is not currently managed by KAgent and you are attempting to add it, and you don't have any other cluster in this KAgent, you can delete kagent.db and start over:

/etc/init.d/kagent_ui stop && rm -rf /opt/gpudb/kagent/resources/var/kagent.db && /etc/init.d/kagent_ui start

Special Considerations:
Kinetica public repository URL:
Upgrade guidelines document:
http://uk-mobile-broadband-deals.com/business/directx-sdk-vista.php
Directx sdk vista

Name: Directx sdk vista
File size: 528mb

Download the complete DirectX SDK, which contains the DirectX Runtime and all DirectX software required to create DirectX compliant. Windows Vista, Windows 7, Windows 8, English. DirectX SDK (Software Development Kit) is a powerful programming tool designed by.

Complete DX SDK, which contains the DirectX Runtime and all DirectX. Windows Server, Windows Vista, Windows 7, Windows Server.

DirectX SDK: This DirectX SDK contains the runtime and all the software required to create DirectX compliant applications in C/C++. Windows Vista.

The DirectX 9 SDK (Software Development Kit) features updates to the Windows and Windows Vista, Windows 7 bit and Windows 7.

This repo contains C++ samples from the DirectX SDK updated to build using. These are all Windows desktop applications for Windows Vista Service Pack 2.

Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks. SDK samples. Starting with the release of Windows 8 Developer Preview, DirectX SDK has been integrated into Windows SDK. Direct3D 9Ex (known internally during Windows Vista development as L or 9.L): allows full.

2) There are plenty of great resources for learning DirectX SDK. You can preserve your DirectX 10 code for Vista with no SP targets. And to.

But I can't get DirectX SDK to install. Is there? Windows 7, Windows Server, Windows Server, Windows Vista, Windows XP not any 8.
https://www.microsoft.com/en-us/itshowcase/migrating-yammer-to-native-mode-unlocks-the-full-benefits-of-microsoft-365-integration
For the past eight years, cultural change at Microsoft has accelerated based on open and transparent conversation, addressing tough problems from a diverse set of perspectives, and clear articulation of values applied to technical, social, and ethical questions. Essential to the communication fabric that enables this growth is cross-company participation across vibrant and diverse communities, powered by Yammer. Recently, Microsoft positioned its employees and staff for the next wave of collaboration by shifting Yammer to Native Mode, strengthening the ties across Microsoft 365 and the social network.

Migrating to Native Mode transforms the community model to be aligned with Microsoft 365 groups and the collaboration and communication capabilities of Microsoft Outlook, Microsoft Teams, and Microsoft SharePoint. A network in Native Mode empowers IT professionals with the processes and management found in the tools established for Microsoft 365. All of these improved functions and compliance features led Core Services Engineering and Operations (CSEO), the engineering organization at Microsoft that builds and manages the products, processes, and services that Microsoft runs on, to shift the Yammer network entirely into the integrated infrastructure.

Why it was time to implement Native Mode

Yammer joined the Microsoft family of products in 2012. Since then, the internal social networking service has experienced significant and organic growth, becoming a wellspring for knowledge-sharing across the company. Microsoft's employees and partners have long relied on Yammer to communicate. During Microsoft's recent work-from-home period, 110,000 users turned to Yammer to connect and collaborate with colleagues. Even prior to that, Microsoft employees and third-party guests leaned heavily on the platform to communicate initiatives and changes, and to host stimulating, asynchronous conversations on critical topics.

As the social network grew, it became clear that removing some outdated or irrelevant material was necessary for the health and future development of the network. Migrating to Native Mode is the only way to fully align the back end of Yammer with the rest of the Microsoft suite, including Microsoft Azure Active Directory. Beyond integration, Native Mode permits CSEO to manage all Yammer communities with the same policies and practices implemented across Microsoft 365. The integrated infrastructure of Native Mode enables all files and conversations to be searchable and shareable through common methods. Additionally, moving to Native Mode allowed CSEO to leverage Microsoft Azure B2B guest models, adopting unified and familiar document, group, and app sharing models for partners and customers. The strong identity management introduced in Native Mode simplifies oversight, adding controls to manage guest identities and appropriate guest access to the service. In addition to the Microsoft Azure B2B guest capabilities, Microsoft 365 rules and policies can now be applied to Yammer communities. With new policies promoting safe practices and rules removing unused content, Yammer in Native Mode empowers a vibrant network to continue supporting Microsoft's culture of innovation and inclusion.

Planning and preparing for migration

For small and mid-sized organizations, moving from classic Yammer to Native Mode can be as simple as flipping a few switches and adjusting some settings. To make the transition smooth, the Yammer engineering team has developed an automated wizard, the Native Mode Alignment Tool, to help migrate communities of fewer than 50,000 users. For larger enterprises, including Microsoft, migrating to Native Mode requires a closely monitored approach guided by an engineer.

Native Mode alignment

Native Mode alignment guides the process of migrating Yammer to Native Mode.
This process inventories existing members, communities, and files, enabling decisions about how to handle changes that might occur during transition. The process also monitors the progress of proactive changes, limiting the number of surprises encountered during the migration to Native Mode. The Native Mode Alignment Tool generates the reports an administrator needs to initiate the Native Mode process when ready. The switch to Native Mode is either automated, with the Native Mode Alignment Tool, or engineer driven. The volume of information in Microsoft's Yammer network is significantly higher than that of most other customers. As such, CSEO worked with the Yammer engineering team to successfully migrate to Native Mode with minimal disruption. CSEO's approach included several important phases.

Readying for Native Mode revealed that Microsoft had several thousand unused groups and Yammer accounts that had accrued since the service's introduction. As CSEO worked through their migration strategy and introduced automated lifecycle controls, abandoned groups were culled from the network and safekeeping strategies were implemented to help owners prepare for Native Mode. Cleaning up Yammer occurred throughout the entire migration, helping to prune the environment of unnecessary material and easing the transition.

- Eliminate groups that are no longer needed. Previously, duplicate, unused, and old Yammer groups stayed around in perpetuity. Removing superfluous items reduced the number of groups CSEO had to monitor during migration.
- Address groups that fail governance. To comply with Microsoft's standards, administrators and owners of noncompliant groups were required to take action lest their groups be deleted. This effort helped remove groups that weren't needed.
- Ensure clear ownership. Dubbed the "FTE + 1" rule, groups that fell short of the two-owner, one-FTE threshold were given the opportunity to promote a new owner or agree that the community would be deleted. If no action was taken, the group would be deleted through automation. In some circumstances, users had been appointed group owner due to team changes and may have been unaware of their ownership status. When ownership was unclear, CSEO consulted with group members on the steps needed to bring the group into compliance.

Cleaning house properly decreases the number of unused and noncompliant groups within social networks, reducing the number of groups that must be migrated. Furthermore, as old and outdated groups are removed from the ranks, users can better discover current resources and information.

Assess the current state of the Yammer network

Before moving into Native Mode, CSEO carefully examined the existing network. This meant digging into specifics about how classic Yammer communities were being used.

- Connected vs. classic. How many communities were currently connected to Microsoft 365 prior to moving to Native Mode? The number of classic Yammer groups within a network determined the scale of migration required.
- Members and guests. Collaboration in Yammer is at its best when internal and guest users can work together. Understanding the volume of guest access and identifying groups with guests in classic Yammer is a necessary step for transitioning those guests into new identities provided by Microsoft Azure B2B. In Native Mode, guests rely on Microsoft Azure B2B to access the network.
- Owners. To meet Microsoft's internal compliance standards, all Microsoft 365 groups require two owners, one of which must be an FTE. Being able to identify owners and noncompliant groups allows service engineers to identify which groups require changes to be compliant in Native Mode. In addition to fulfilling compliance requirements, providing early information to group owners allowed CSEO to prepare members for changes related to migration to Microsoft 365 and Native Mode.
- Unlisted and secret groups. Classic Yammer enabled owners to indicate that private groups be unlisted in the groups directory and undiscoverable in Yammer search. At Microsoft, most of these groups were created for testing, but some contained confidential information. In the transition to Native Mode, these groups needed to be converted to private groups, which still restricts access to members of the community, but enables the group name, description, and avatar image to appear in the community directory and Microsoft 365 search indexes. Identifying unlisted and secret groups allowed CSEO to develop an appropriate masking strategy and communicate with owners early.

Pre-work related to evaluating and understanding the Yammer environment is critical to a successful migration. The preceding elements will all be affected during the transition into Native Mode, and knowing what will be affected reduces the potential for surprises or disruption.

Proactively address gap cases

Although the majority of classic Yammer groups connect to Microsoft 365 without complication, several gap cases exist that require early and proactive involvement. To avoid disruption, CSEO developed specific strategies to address these gap cases.

- Safely mask unlisted and secret groups. Native Mode still supports private groups, but previously unlisted groups will show up in search results. Unlisted groups from classic Yammer become private groups that present content only to members, but the name, avatar image, and description may be visible to anyone in searches. To protect confidentiality, unlisted or secret groups were re-titled, assigned a generic avatar image, and given new descriptions to avoid sensitive information appearing in public searches. After migrating, owners of affected groups were prompted to edit the group title and meet classification requirements. If these groups were not re-titled after migration, CSEO assumed the group was abandoned and removed it.
- Encourage users to back up file attachments stored in Yammer private messages. Private messages exist in both classic Yammer and Native Mode networks. Although you can store files in groups in Microsoft SharePoint, files in private messages don't have an associated group. As a result, any necessary files must be downloaded prior to the transition to Native Mode. To avoid users losing files saved in private messages, CSEO developed communications to encourage users to download their files and back them up elsewhere. Users were asked to acknowledge completion of this task so that CSEO could gauge the proportion of users who may not have acted. If necessary, CSEO planned to engage directly with users having a large volume of files stored in private messages, a step that proved unnecessary due to high response rates.
- Prepare guests in external groups to adopt the Microsoft Azure B2B guest model. Because the guest model was changing and guest users might not have access to communities for up to one day, communications were provided for group owners and members to align expectations for Native Mode. This included detailed information as to what would change, confirmation that a Yammer group was still active and needed, and prepared communications to share with guests. Final announcements went to all members, including guests, with supporting information that would be available even if access to a community was interrupted.

These proactive steps exist to reduce a disruption of service for Yammer users. By predicting and responding to gaps, CSEO was able to engage with groups and owners early, preventing a loss of access.

Ease the shift for critical and large communities

CSEO was always aware of the potential impact a migration might have on large and critical Yammer groups. To avoid confusion related to the changes, additional communication was established with groups and owners of key communities.

- Shift communities that need Microsoft 365 features. Several features, including the ability to host live events, host files in Microsoft SharePoint, and enable a consistent Microsoft 365 user experience, were only available in connected communities. Ensuring key community owners understood the change and would experience no loss in functionality required direct communication by CSEO.
- Shift communities with large membership numbers. Microsoft's larger Yammer groups, those with 5,000 members or more, also required direct engagement and communication in order to avoid disruption. Direct one-on-one interaction with community leaders also proved to be helpful for readying Microsoft's bigger communities.

By engaging early and often, CSEO was able to consult with groups and owners regarding the new features and benefits of Native Mode, discuss action steps required to ease the migration, and convey milestones.

Early on, CSEO's communications professionals scheduled a series of messages to go out to users informing them of the changes. A major part of CSEO's migration plan was to provide high-level information to everyone, then rely heavily on regular communication with group owners for targeted actions. Project managers directly engaged with group owners from larger and critical communities to prepare them for the change. This steady stream of communication not only helped group owners get ready for migration, it also amplified messages across Yammer. CSEO communicated directly through emails, via messages posted into groups, and through generalized announcements posted in all-company environments, including Yammer and Microsoft 365 groups. Emails were used when direct action was called for, whereas general messages were posted throughout Yammer communities. In using this strategy, CSEO was able to sequence and map out a successful messaging campaign.

- Provide context. Yammer capabilities have evolved over time, but to enable fundamental changes that take advantage of Microsoft 365 capabilities, a transition into Native Mode became a necessity. Moving to Native Mode is a one-time event, after which every new community is immediately connected in Microsoft 365. Giving users this context helps them understand the value of migration. At a user and group owner level, it was important for CSEO to convey action and consequence. Giving this kind of context allowed CSEO to prevent any loss of user files, messages, or groups. This also meant informing Yammer users of groups and files they owned in the network so that they might take action.
- Seize the opportunity. Conveying the benefits of Native Mode to members of Microsoft's Yammer communities was a top priority. Integration with Microsoft 365 did more than strengthen compliance controls; it also unlocked several features, including live events, guest controls, and eDiscovery, that users had long been asking for. These changes closely aligned with Microsoft's technology and cultural goals. Communicating what was possible in Native Mode got users excited for the change, creating buy-in.
- Account for overlapping audiences in your communications plan. While mapping out a communications calendar, CSEO recognized that individuals might be both a member and an owner of several categories of Yammer groups. To avoid bombarding users with repetitive or unnecessary information, comprehensive communications were delivered broadly to reduce the number of messages and potential confusion.
- Record audience actions and acknowledgements. Several communications included calls to update Yammer communities and acknowledge completed actions. If response rates were low, CSEO was able to prepare further mitigating steps to avoid disruption during the transition.

Given the complexity and scale of the project, setting out a strong pattern of communication eased the transition to Native Mode.
After the migration was underway, CSEO published web pages and blogs to further facilitate communication with users, group owners, and stakeholders during the change.

Managing the shift to Native Mode

Due to the size of Microsoft's Yammer environment, initiating Native Mode required coordination to manage the volume of accounts, thousands of Microsoft 365 groups, and millions of files. Additionally, the age of the network became a factor, because variations found in older metadata made it more complex to migrate from classic into Native Mode. To initiate Native Mode and minimize the impact on users, CSEO developed a coordinated plan for managing the migration. This multi-day process sequenced priorities and team involvement to create a clear approach with engineers, project management, communications, tenant administrators, and support teams. Several processes took an extended period of time to run, but in the end CSEO was able to connect all users, files, and groups to Native Mode.

- Coordinate a game plan. Map out timing between all stakeholders, including engineers and network administrators who will be logging and resolving any snags encountered during the migration. For a Yammer network the size of Microsoft's, initiating Native Mode required several weeks of work, with guest users experiencing a day of disrupted service. Having a coordinated plan reduced the risk of extended outages.
- Share with dependent teams. Engaging with dependent teams, like support and listening teams, including tenant administrators, sets expectations during the migration. These updates should include a clear schedule of events and progress as migration processes run. Aligning expectations and sharing timing and progress updates reduces frustrations, improves communications, and leverages the support of dependent teams.
- Manage from a scripted set of actions. The Native Mode Alignment Tool can't service a network of Microsoft's size and scope, but it can inform management of the migration. Adhering to an order of operations consistent with the Native Mode alignment process optimizes the transition and minimizes the chance of downtime. This step ensured that the right players were involved, the right information was available, timing could be estimated, and contingency plans could be put into place.

Anticipating elevated support

Having an escalation plan in place was critical for CSEO. During Microsoft's migration to Native Mode, CSEO recognized several potential drivers for increased support related to the new Yammer environment.

- With change comes learning and uncertainty. CSEO understood that users and group owners would need time to acclimate to community governance for Yammer. They also recognized a need for support and administrative teams to familiarize themselves with new processes as well. By planning for a certain degree of uncertainty and learning curve, CSEO was able to set expectations.
- Expect new types of issues. New workflows uncover new issues. Actively logging and responding to errors allowed CSEO to efficiently focus engineer and support energies.
- Guests may lack context and support. External users within Microsoft's Yammer network may not have access to support staff familiar with the new Microsoft Azure B2B guest model inside their own organization. CSEO expected new support requests to come directly from guest users as they got acquainted with Native Mode. At times, these requests surfaced through other Microsoft contacts who may not have received the detailed information that group owners did.
- Cleanup might carry unexpected consequences. During the cleanup phase, CSEO saw the number of groups within Microsoft move from 40,000 to around 21,000. Although this is a significant decrease in volume, it actually represents a healthier environment consisting of active communities. Though aggregate metrics may change, the groups being removed were abandoned or noncompliant. A similar trend was visible within group membership numbers. Users with a "pending" status, often email addresses mentioned in a community but not representing users who visited, are not preserved during migration. As such, some communities witnessed an apparent reduction in membership numbers when pending users disappeared.

Delivering new experiences in Native Mode

Initiating Native Mode for Yammer was a big step. CSEO understood the impact it would have on users, both in terms of benefits and possible disruptions. Now that a Native Mode Yammer environment is established, Microsoft's company-wide social network can reap the benefits of full Microsoft 365 integration.

- Group management. New compliance features unlocked in Yammer through Microsoft 365 mean that governance can be established consistently across all groups, instead of individually.
- Content search and eDiscovery. User files are now saved in Microsoft SharePoint instead of Yammer. This makes it easier for users to quickly find items using enterprise search and Microsoft Graph signals. This allows compliance measures to scale with the network. Additionally, Microsoft 365's eDiscovery features now span Yammer content.
- Enhanced capabilities. In addition to being able to host live events, Yammer's guest access is managed through Microsoft Azure B2B. This new approach means guest users access Microsoft Teams and Yammer with the same identity.
- Consistent experience. In Native Mode, Yammer employs the predictable Microsoft 365 features users expect. Instead of unique workflows, Yammer enables file viewing, editing, and sharing consistent with other Microsoft environments.

© 2021 Microsoft Corporation. This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153814.37/warc/CC-MAIN-20210729011903-20210729041903-00297.warc.gz
CC-MAIN-2021-31
21,463
70
https://gitlab.haskell.org/ghc/ghc/-/wikis/ghc-users-guide
code
GHC User's Documentation

This page has links to the user documentation for both current and past versions of GHC.

Master (HEAD) branch version
- GHC User's Guide (May fail build)

Latest release version

Previous release and candidate versions (Follow the links in the order,
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00508.warc.gz
CC-MAIN-2021-49
273
7
http://gautiersblog.blogspot.com/2014/04/excel-writer-v13.html
code
- Ada.Calendar.Time Put/Write and date built-in formats
- wrap_text format option
- Next and Next_Row
- Text_IO's New_Line(lines), Line, Col now available

Excel Writer (Excel_Out) is a free, standalone, portable, open-source package for producing Excel spreadsheets with basic formatting and page layout. It can be used in an "Ada.Text_IO" fashion, with Put, Put_Line and New_Line.
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172050.87/warc/CC-MAIN-20170219104612-00133-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
374
5
https://community.progress.com/community_groups/openedge_development/f/19/p/37186/115058
code
I've created a REST service in my project and am using ProDataSets and data-sources to populate the data before it is returned to the client sending the request. The issue I am having is with sorting the data in the dataset to be passed back to the client. I've defined a query on the data-source and am using a BY clause on the query I prepare, but it's not "fully" sorted by that clause. It seems that the index defined on the DB table (the temp-table has been defined LIKE the db table) is always applied. Here is the pertinent source code for all that I'm doing. Any help or advice is appreciated.

DEFINE TEMP-TABLE tttrkChk NO-UNDO LIKE trk-chk BEFORE-TABLE bttrkChk.
DEFINE DATASET dstrkChkBE FOR tttrkChk.
DEFINE QUERY trkChkQuery FOR trk-chk SCROLLING.
DEFINE DATA-SOURCE trkChkSource FOR QUERY trkChkQuery.

DATASET dstrkChkBE:EMPTY-DATASET ().
BUFFER tttrkChk:ATTACH-DATA-SOURCE (DATA-SOURCE trkChkSource:HANDLE).
QUERY trkChkQuery:QUERY-PREPARE ("FOR EACH trk-chk WHERE Whs-code = '01' AND check-in-date = '2018-02-12' BY check-in-time").

My solution was to not define the temp-table LIKE the db table and instead define each field individually LIKE the db table's field, without any indexes defined. The BY clause now behaves as expected, and the SAVE-ROW-CHANGES on the temp-table attached to the data-source has not been affected.

Add the index you need for sorting to the temp-table definition (documentation.progress.com/.../index.html):

DEFINE TEMP-TABLE tttrkChk NO-UNDO LIKE trk-chk BEFORE-TABLE bttrkChk
    INDEX ttix check-in-time.

Defined as above, your temp-table will have only this index.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583684033.26/warc/CC-MAIN-20190119221320-20190120003320-00286.warc.gz
CC-MAIN-2019-04
1,868
13
https://www.talkbass.com/threads/new-song-again.95862/
code
I finished another song a couple of days ago, and I was again hoping the fine folks at talkbass.com could give me some feedback. The song is a lot less rock-influenced than my recent output. It touches on funk and disco (!) a little bit, and all of the percussion was done by sampling bass slapping. The melody is a little repetitive, but that's never been one of my strong points. Alright, so without further ado... Steve Swyers - Phrygian Funk P.S. - I won't bump this too much, Robot, I swear!
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710690.85/warc/CC-MAIN-20221129064123-20221129094123-00448.warc.gz
CC-MAIN-2022-49
496
1
https://blogs.kde.org/2008/02/21/packaging-tutorial-fosdem-qt-44
code
It's FOSDEM this weekend, a huge gathering of free software enthusiasts all in one place with dozens of talk tracks. I'm giving a packaging tutorial on the Sunday at 16:00 in the cross desktop devroom (note different time than advertised). If you make software and want to get it into Ubuntu (or Debian) come along and learn how to make .debs and get them out for the world. Bring your laptop to follow the tutorial yourself. I've put packages of Qt 4.4 in my PPA for those working with KDE 4.1. They have a binary incompatible change compared to current Qt 4.3 so it'll break existing plasma and other bits.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348493151.92/warc/CC-MAIN-20200605045722-20200605075722-00234.warc.gz
CC-MAIN-2020-24
608
4
http://pcworld.com/article/191912/xobox_360_may_support_usb_storage_with_update.html
code
Joystiq reports it has a document confirming an Xbox 360 update that will allow the device to use USB storage units. There is a 16GB cap on the USB mass storage, but you could conceivably copy an entire disc-based game onto it. It could give Xbox users an easy way to add the storage they've sought. The document, authored by a senior software development engineer at Microsoft, states that due to "increased market penetration of high-capacity, high throughput USB mass storage devices, a 2010 Xbox 360 system update" will allow consumers to save and load game data from USB devices. The update is purportedly coming in Spring 2010. We've contacted representatives at Microsoft to confirm the information. Joystiq claims two unidentified sources vetted the document, and we have to agree with them that the logic of allowing USB storage options is sound -- especially with all these rumors of an Xbox 360 Slim model floating around. This story, "Xbox 360 May Support USB Storage with Update" was originally published by GamePro.
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107744.5/warc/CC-MAIN-20170821080132-20170821100132-00548.warc.gz
CC-MAIN-2017-34
1,029
4
https://hackmd.io/NfSTk2E-TwOvU6sc5nS6wA
code
Photo by [jonathan Ford](https://unsplash.com/photos/BfI7PYfJ0hc) on Unsplash

A less tumultuous week in crypto. The charts remain firmly bearish but unless you are a day trader that should be of little concern. Significant developments for EOS with the SEC issuing a $24 million dollar fine but seemingly clearing the way for EOS to continue unimpeded development in the US (see below). Bakkt continues to underwhelm with pitiful contract numbers. Libra lost yet more steam (likely shedding Paypal) and once again highlighted the superiority of decentralized blockchains. Global markets look very soft - BTC's claim to 'hedge status' looks set to be tested sooner rather than later.

## Picks of the Week

These Twitter threads by [Marco Santori](https://twitter.com/msantoriESQ/status/1178811671621591040) and [Jake Chervinsky](https://twitter.com/jchervinsky/status/1179162527541993472), which delve into the recent SEC ruling on EOS, are very insightful.

- Flight of (BTC) fancy?
- Long/short BTC - who really made better returns?
- Bitcoin Twitter mentions fall to a new low (a problem, an opportunity, or just a change in tagging?)
- Cryptos ranked by code activity over a 12 month period (note focusing on this metric alone can be misleading)
- Digesting SEC settlement with EOS (highly recommended)
- More on SEC settlement with EOS and Sia (highly recommended)
- Time to reward all EOS voters?
- BTC is unstable - let's dig into that assumption (recommended)
- On trust, BTC, and the way ahead (recommended)
- How to achieve consensus - a tale of Ethereum devs
- Basic overview of some of the pros and cons of DEXs
- A developer takes us through their experience of using EOS to code a game (highly recommended)
- An argument against vote rewards for DPoS (recommended)
- A model of the benefits of rationality (non-crypto specific but certainly of use to an investor/trader)
- Exploring whether rate cuts matter for Bitcoin
- Assessing the downside for BTC (good analysis if you ignore the self-congratulatory intro)
- Some folks have seen this steep drop as an opportunity
- Colin Talks Crypto - does just that
- Coding tips from an expert
- In general, BTC takes longer to reach each new all time high - we are only 300 days into this cycle
- Worth your while to be aware of the (significant) variations in exchange fees

# Website / Utility

A simple tool to track current rates of segwit adoption (note recent uptick to over 60%).

As usual another fascinating week in crypto. Until next we meet down the crypto rabbit hole!

**Note on Sources**: *Twitter & Reddit (cryptos current meta-brains) / Medium / Trybe / Hackernoon / Whaleshares / TIMM and so on / YouTube / various podcasts and whatever else I stumble upon. The aim is a useful weekly aggregator of ideas rather than news. Though I try to keep the sources current – I'll reference these articles and podcasts etc. as I encounter them – they may have been published just a couple of days ago or in some cases quite a bit earlier.*

Also published on [TIMM](https://mentormarket.io/category/cryptocurrencies/).
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651815.80/warc/CC-MAIN-20230605085657-20230605115657-00428.warc.gz
CC-MAIN-2023-23
3,482
41
https://muppetcentral.com/forum/threads/now-whats-going-on-with-youtube.41457/page-186
code
I'm honestly reaching the point that the more I keep seeing Fox News being shoved all over YouTube, the more I want to smash my computer through the window. They have to be paying YouTube to push their content all over the site, there seriously is no other explanation for why their videos are not only all over the homepage, but also being recommended on other non-political junk like pet videos, cartoon shows, music recordings, ASMR, even fricken SESAME STREET! If I had ever actually watched any of their junk, it'd make sense because the recommended would be based on search/watch history, but I've never gone out of my way to watch any of their biased, unfactual, falsified, sensationalized propaganda.

Ah, YouTube has finally introduced a new feature that's actually pretty useful: the ability to break up your video's timeline into separate chapters, so that way viewers can skip ahead to a specific part of a video if they want to. I've already applied it to some of my longer videos, and I have to say, it's really neat.

I see YouTube's done some redesigning again, this time with its various different icons and such, and man, they couldn't possibly be any colder and impersonal - like literally, they're just vector outlines, and that's it. Really ugly and simplistic.

Why does Facebook just keep getting slower and slower? Like seriously, it's taking the words you type like five times longer to actually appear on screen than being typed on the keyboard. Even a simple, one-sentence comment takes up to a minute or longer to actually appear on screen when typing it only takes a few seconds.

EDIT: Okay, this appears to be a problem exclusive to Firefox. Strange, because for the longest time, Facebook worked better in Firefox than Chrome; now it's the other way around.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.87/warc/CC-MAIN-20240421071342-20240421101342-00345.warc.gz
CC-MAIN-2024-18
1,775
5
https://ws-resize.ru/what-is-sftp-client-for-mac/
code
- Bitvise SSH Client: SSH tunneling, SSH terminal emulation and SFTP client.
- JSCAPE AnyClient: Web-based SFTP/FTP/FTPS/WebDAV/S3 client. Commercial with free version.
- Tectia SSH Client: SFTP/SSH client from the creators of the SSH protocol.
- VanDyke SecureFX: SFTP/FTP/SCP client for Windows, Mac and Linux.
- Using the built-in SSH client in Mac OS X: Mac OS X includes a command-line SSH client as part of the operating system. To use it, go to Finder and select Go - Utilities from the top menu. Then look for Terminal. Terminal can be used to get a local terminal window, and also supports SSH.

Windows SFTP client apps

This FTP manager and SFTP client for macOS offers all convenient options that one may need to work with files on Mac — view, copy from server to server, delete, create, and more. Now, long ago we'd call Transmit an "FTP client", but today, with Transmit 5, we connect to.

Download FileZilla Client 3.50.0 for Mac OS X. The latest stable version of FileZilla Client is 3.50.0. Please select the file appropriate for your platform below.

Specialized applications for connecting to SFTP:

| Client | Notes |
| --- | --- |
| WinSCP | Free and open source SFTP GUI client. Despite its name it's not limited to SCP, but works with SFTP and FTP/SSL too. |
| Filezilla Client | Free and open source FTP, FTP/SSL and SFTP GUI client (beware of adware). |
| Bitvise SSH Client | SSH tunneling, SSH terminal emulation and SFTP client. Commercial. |
| JSCAPE AnyClient | Web-based SFTP/FTP/FTPS/WebDAV/S3 client. Commercial with free version. |
| Tectia SSH Client | SFTP/SSH client from the creators of the SSH protocol. Commercial. |
| VanDyke SecureFX | SFTP/FTP/SCP client for Windows, Mac and Linux. Commercial. |
| FlashFXP | SFTP/FTP client for Windows. Commercial. |
| FTP Voyager | SFTP/FTP client for Windows. Free. |
| WS_FTP Professional Client | SFTP/FTP client for Windows. Commercial. |
| Axway Secure Client | SFTP/FTP client for Windows. Commercial. |
| SmartFTP | FTP (File Transfer Protocol), FTPS, SFTP, WebDAV, S3, Google Drive, OneDrive, SSH, Terminal client. Commercial. |
| GoAnywhere SFTP client for MFT | SFTP client from creators of GoAnywhere MFT server. Commercial. |

SFTP plugins for popular apps

| Plugin | Notes |
| --- | --- |
| Swish (for Windows Explorer) | Shows SFTP server in Windows Explorer. It's not a filesystem driver, so this sftp drive cannot be used from command line or from inside another program. Free and open source. |
| SFTP plugin for Total Commander | Official plugin from the creators of Total Commander. Free. |
| Chrome sFTP Client | sFTP Client for Google Chrome / Chrome OS. |

Command-line SFTP clients

Use those if you want to access SFTP from a script or if you simply prefer command line over GUI.

| Client | Notes |
| --- | --- |
| PuTTY PSFTP | PuTTY SFTP tool for those who are not afraid of command line. Available for Windows and Un*x-like systems. Free and open source. |
| OpenSSH | OpenSSH's 'ssh' command is available on most Un*x systems. Free and open source. Windows port is included in CygWin. |
| Bitvise Command-Line SFTP Client | Advanced command-line SFTP client for Windows. Commercial. |

Map SFTP server as a network drive

Do you want to use a SFTP connection as a Windows mapped drive? Assign it a drive letter and use it from any application? Try one of those:

| Tool | Notes |
| --- | --- |
| NetDrive | SFTP, FTP, DropBox, GoogleDrive, OneDrive and a few others. Commercial. Reverts to a limited free version when the trial is over. Windows. |
| Web Drive | SFTP, FTP, DropBox, GoogleDrive, OneDrive and a few others. File system level locking semantics. Synchronization mode and network drive mode. Commercial. Windows, Mac, iOS and Android. |
| ExpandDrive | SFTP, FTP, DropBox, GoogleDrive, OneDrive and a few others. Commercial. Windows and Mac. |
| SFTP NET Drive | SFTP. Commercial. Free for personal use. Windows. |
| win-sshfs | Maps a remote SFTP drive and makes it available to all applications. Open source, last updated in 2012. Works on Windows 7; newer OS versions are not supported. Several forks exist. |
| WinSshFS 4every1 edition | Fork of win-sshfs which works on Win10. Free and open source. |
| WinSshFS FiSSH edition | Fork of win-sshfs focused on UI changes. Free and open source. |
| SSHFS for Linux | Enables you to mount a remote folder on Linux over SSH. FUSE-based, free and open source. Part of most Linux distros. |
| SSHFS for OS X | SSH File System for Mac OS X based on FUSE for OS X. Free and open source. |
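The command-line clients above lend themselves to scripting. A minimal sketch using OpenSSH's sftp in batch mode (the host name, user, and paths are hypothetical placeholders, not from this page):

```shell
# Minimal sketch: non-interactive transfer with OpenSSH's sftp.
# Host, user, and paths below are placeholders -- substitute your own.

# Write the commands sftp should run into a batch file.
cat > upload.batch <<'EOF'
cd /remote/uploads
put report.csv
EOF

# Run the batch against the server; with -b, sftp aborts and exits
# non-zero if any command in the file fails.
# sftp -b upload.batch [email protected]
```

PuTTY's psftp accepts a similar batch file via its own `-b` option, so the same approach works on Windows.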
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358189.36/warc/CC-MAIN-20211127133237-20211127163237-00341.warc.gz
CC-MAIN-2021-49
4,353
39
https://offtopic.com/threads/creating-a-new-partition.1937853/
code
I have one partition on my harddrive and that has Windows XP installed on it. I want to install Vista on the same drive. Since the partition has used up all the space on my harddrive, is there a way I can take the extra space off that partition and use it to create a new one? If so, how? Also, if this is possible, will I be able to delete the new partition and join it back to the original one if I decide to later?
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806768.19/warc/CC-MAIN-20171123085338-20171123105338-00292.warc.gz
CC-MAIN-2017-47
417
1
https://security.my.salesforce-sites.com/security/tools/webapp/burpabout
code
Burp Suite is a set of tools for assessing web application security. It's available in free and commercial versions. We recommend its use when developing or assessing any web applications.

The Burp tool must only be used to evaluate the security of your web application that resides outside of Force.com (e.g. www.partnersite.com). For applications residing completely on Force.com (e.g. partner-visual.force.com, appxpartner.force.com, etc.), please use the Force.com Source Scanner.

A 15-minute training video on using the Burp Suite Professional tool can be found here.

By launching the tool and setting a web browser to use this as its proxy server, all web traffic can be intercepted, inspected, modified and analyzed to identify a range of security vulnerabilities. Burp Suite Professional contains the following tools:

- Proxy - an intercepting HTTP/S proxy server which operates as a man-in-the-middle between the end browser and the target web application, allowing you to intercept, inspect and modify the raw traffic passing in both directions.
- Spider - an intelligent application-aware web spider which allows complete enumeration of an application's content and functionality.
- Scanner - an advanced tool for performing automated discovery of security vulnerabilities in web applications.
- Intruder - a highly configurable tool for automating customized attacks against web applications, such as enumerating identifiers, harvesting useful data, and fuzzing for common vulnerabilities.
- Repeater - a tool for manually manipulating and re-issuing individual HTTP requests, and analyzing the application's responses.
- Sequencer - a tool for analyzing the quality of randomness in an application's session tokens or other important data items which are intended to be unpredictable.

Use the above links to read the detailed help specific to each of the individual Burp Suite tools.
For additional help and details, please visit the Burp Suite Professional website.

Effectively Scanning Applications Using Burp

In order to obtain effective results from the Burp Scanner, it is recommended that you do the following:

- Turn "Intercept" (Proxy->Intercept) off within Burp. Do not change other default configurations.
- Configure your browser to use Burp as a proxy (the default port is 8080).
- Log in to your web application with the highest-privileged account to ensure no features are hidden, and run through typical use cases (simulate customer usage). Your goal is to access all application pages.
- Right-click on the target URL (Target->Site map) and click on "Spider this host".
- Once spidering completes, right-click on the target URL and click on "Actively scan this host". The scan progress can be monitored under the "Scanner" tab.

Accuracy of Results

While black-box testing tools can be of great assistance in uncovering major security vulnerabilities, it is important to understand that no tool can identify all vulnerabilities. Additionally, since these tools lack insight into the context of the application, false positives can be produced. The output of this tool should not be considered a comprehensive security assessment of your application; rather, it should complement a thorough manual review. The OWASP testing guide can be a valuable asset in determining your application's security testing plan.

A false negative occurs when a tool is not able to identify an existing bug. Some vulnerabilities that Burp Suite may not identify are:

- Stored Cross-Site Scripting
- Cross-Site Request Forgery
- Session Hijacking/Fixation
- Weak Access Control Policy

A false positive occurs when a tool flags something as a legitimate bug that is not actually an issue. This can occur for multiple reasons, but often it occurs because the tool does not understand the full context of the application.
Here are two of the common places where you will see false positives in the output from Burp:

- SQL Injection - SQL injection consists of insertion of a SQL query via the input data from a user to the application. Burp looks for database error messages in the HTTP response, and may incorrectly classify an unrelated error message as being output from the database.
- XML Injection - XML injection is an attack technique used to manipulate or compromise the logic of an XML application or service. Burp looks for exceptions thrown during XML parsing. However, at times a response containing the term "XML" could get flagged as an exception.
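The proxy setup described above applies to scripted clients as well as browsers. A minimal sketch using Python's standard library, assuming Burp's default listener address of 127.0.0.1:8080 (adjust to your own configuration):

```python
# Minimal sketch: route a scripted HTTP client through a local
# intercepting proxy such as Burp. The address below is Burp's
# default listener (127.0.0.1:8080) -- an assumption, not a given.
import urllib.request

BURP_PROXY = "http://127.0.0.1:8080"

def proxied_opener(proxy_url: str = BURP_PROXY) -> urllib.request.OpenerDirector:
    """Build an opener whose HTTP and HTTPS requests pass through the proxy."""
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)

if __name__ == "__main__":
    opener = proxied_opener()
    # opener.open("http://www.example.com/")  # request appears in Burp's Proxy tab
```

For HTTPS targets, the client must also trust Burp's CA certificate, since Burp re-signs intercepted TLS traffic; that step is omitted here.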
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100081.47/warc/CC-MAIN-20231129105306-20231129135306-00247.warc.gz
CC-MAIN-2023-50
4,445
31
http://news.sys-con.com/node/2990526
code
By Business Wire | February 27, 2014 01:00 PM EST

NetJapan, Inc., software publisher of backup and disaster recovery solutions, releases ActiveImage Protector 3.5 Service Pack 4. ActiveImage Protector 3.5 SP4 now supports Windows Server 2012 R2, Windows 8.1, and Microsoft Surface Pro. New features include incremental backup of ReFS volumes and conversion of ActiveImage Protector image files into the VHDX VM file format. ActiveImage Protector 3.5 SP4 will be made available in six editions, i.e., Virtual Edition and Hyper-V Edition for virtual environments, Server Edition and Desktop Edition for physical environments, and IT Pro Edition for system administrators.

- Now supporting Windows Server 2012 R2, Hyper-V Server 2008 R2 (a free standalone hypervisor offering from Microsoft) and Windows 8.1 Client Hyper-V. ReZoom and Seamless Hot Restore (SHR) for live VM recovery or live migration are included for both Hyper-V server and Client Hyper-V.
- Enhanced ActiveImage Protector Boot Environment (AIPBE) builder.
- New Pre-Boot AIPBE feature allows booting AIPBE from hard disk without requiring optical media, enabling the restoration of backup images from a Surface Pro tablet PC.
- New Hyper-V VHDX support for Physical to Virtual conversion.
- Incremental backup of ReFS volumes.
- Support for the latest uEFI motherboards. Pre-Boot AIPBE (boot to Linux) support for uEFI native mode.

Standard features include hot and cold backup, Inline Data Deduplication Compression (IDDC), Physical to Virtual machine conversion, smart sector backup, scheduled backups, fast incremental backups, flexible scripting, boot recovery environment, individual folder and file recovery, Architecture Intelligent Restore (AIR) technology for restoring to virtual or physical machines with dissimilar hardware, flexible storage configuration, bare metal recovery, and U.S.-based technical support.
For more detailed product information and system requirements, please visit the ActiveImage Protector website at: http://activeimage.net. ActiveImage Protector software and support are available in Japanese and U.S. English. NetJapan, Inc. distributes ActiveImage Protector through authorized system integrators, business partners, distributors, online shops and direct web sales. For more information please visit http://www.activeimage.net/products/.
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982298875.42/warc/CC-MAIN-20160823195818-00228-ip-10-153-172-175.ec2.internal.warc.gz
CC-MAIN-2016-36
10,320
44
https://amalgaminsights.com/2019/12/02/developing-an-practical-model-for-ethical-ai-in-the-business-world-stage-2-technical-development/
code
To read the introduction, click here. To read about Stage 1: Executive Design, click here.

This blog focuses on Technical Development, the second of the Three Keys to Ethical AI described in the introduction.

Figure 1: The Three Keys to Ethical AI

Stage 2: Technical Development

Technical Development is the area of AI that gets the most attention as machine learning and data science start to mature. Understandably, the current focus in this Early Adopter era (which is just starting to move into Early Majority status in 2020) is simply on how to conduct machine learning, data science efforts, and potentially deep learning projects in a rapid, accurate, and potentially repeatable manner. However, as companies conduct their initial proofs of concept and build out AI services and portfolios, the following four questions are important to take into account.

- Where does the data come from?
- Who is conducting the analysis?
- What aspects of bias are being taken into account?
- What algorithms and toolkits are being used to analyze and optimize?

Figure 2: Technical Development

Where does the data come from?

Garbage In, Garbage Out has been a truism for IT and data projects for many decades. However, the irony is that much of the data used for AI projects was literally considered “garbage” and archival exhaust up until the practical emergence of the “Big Data” era at the beginning of this decade. As companies use these massive new data sources as a starting point for AI, they must check the quality, availability, timeliness, and context of the data. It is no longer good enough to pour all data into a “data lake” and hope that this creates a quality training data sample.

The quality of the data is determined by the completeness, accuracy, and consistency of the data. If the data have a lot of gaps, errors, or significant formatting issues, the AI will need to account for these issues in a way that maintains trust.
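These completeness and consistency checks can be automated as a gate before any training run. A minimal Python sketch (the record fields and the 90% completeness threshold are illustrative assumptions, not a prescribed standard):

```python
# Minimal data-quality audit: report per-field completeness so that
# null-heavy fields are flagged before they silently shape a model.
# The field names and the 0.9 threshold are illustrative assumptions.

def completeness_report(records, threshold=0.9):
    """Return ({field: share of non-null values}, [fields below threshold])."""
    fields = {f for r in records for f in r}
    report = {}
    for f in fields:
        non_null = sum(1 for r in records if r.get(f) is not None)
        report[f] = non_null / len(records)
    flagged = [f for f, share in report.items() if share < threshold]
    return report, flagged

records = [
    {"age": 34, "gender": "F", "income": 52000},
    {"age": 41, "gender": None, "income": 61000},  # legacy row: gender not collected
    {"age": 29, "gender": "X", "income": None},
    {"age": 50, "gender": None, "income": 48000},
]

report, flagged = completeness_report(records)
print(report)           # "gender" is only 50% complete, "income" 75%
print(sorted(flagged))  # fields needing an explicit missing-data strategy
```

A flagged field like gender here is exactly the kind of systemic gap described above: the honest response is usually to document and model the missingness, not to silently drop or impute it.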
For instance, a long-standing historical database may be full of null values as the data source has been augmented over time and data collection practices have improved. If those null values are incorrectly accounted for, AI can end up defining or ignoring a “best practice” or recommendation. From a practical perspective, consider as an example how Western culture has recently started to formalize non-binary gender or transgender identity. Just because data may not show these identities prior to this decade does not mean that these identities didn't exist. Amalgam Insights would consider a gap like this to be a systemic data gap that needs to be taken into account to avoid unexpected bias, perhaps through the use of adversarial de-biasing that actively takes the bias into account.

The Availability and Timeliness of the data refer to the accessibility, uptime, and update frequency of the data source. Data sources that are transient or migratory pose a risk for making consistent assumptions from an AI perspective. If an AI project depends on a data source that is hand-curated, bespoke in nature, or inconsistently hosted and updated, this variability needs to be taken into account in determining the relative accuracy of the AI project and its ability to consistently meet ethical and compliance standards.

Data context refers to the relevance of the data both for solving the problem and for providing guidance to downstream users. Correlation is not causation, as the hilarious website “Spurious Correlations” run by Tyler Vigen shows us. One of my favorite examples shows how famed actor Nicolas Cage's movies are “obviously” tied to the number of people who drown in swimming pools.

Figure 3: Drownings as a Function of Nicolas Cage Movies (Thanks to Spurious Correlations! Buy the book!)

But beyond the humor is a serious issue: what happens if AI assumptions are built on faulty and irrelevant data?
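The spurious-correlation risk is easy to demonstrate: any two quantities that merely trend in the same direction over time will show a strong Pearson correlation. A small Python sketch with made-up numbers (not Vigen's actual data):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two unrelated quantities that both drifted upward over the same years
# (hypothetical numbers, invented purely for illustration).
films_per_year = [1, 2, 2, 3, 3, 4, 4, 5]
drownings_per_year = [98, 102, 104, 109, 110, 115, 118, 120]

r = pearson(films_per_year, drownings_per_year)
print(round(r, 3))  # above 0.95, yet no causal link exists
```

A correlation this strong would sail through a naive feature-selection step, which is why relevance review by someone who knows the domain matters.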
And who is checking the hyperparameter settings and the contributors to parameter definitions? Data assumptions need to go through some level of line-of-business review. This isn't to say that every business manager is going to suddenly have a Ph.D. level of data science understanding, but business managers will be able to either confirm that data is relevant or provide relevant feedback on why a data source may or may not be relevant.

Who is conducting the analysis?

In this related question, the deification of the unicorn data scientist has been well-documented over the last few years. But just as business intelligence and analytics evolved from the realm of the database master and report builder to a combination of IT management and self-service conducted by data-savvy analysts, data science and AI must also be conducted by a team of roles that includes the data analyst, data scientist, business analyst, and business manager. In small companies, an individual may end up holding multiple roles on this team. But if AI is being developed by a single “unicorn” focused on the technical and mathematical aspects of AI development, companies need to make sure that the data scientist or AI developer is taking sufficient business context into account and fully considering the fundamental biases and assumptions that were made during the Executive Design phase.

What aspects of bias are being taken into account?

Any data scientist with basic statistical training will be familiar with Type I (false positive) and Type II (false negative) errors as a starting point for identifying bias. However, this statistical bias should not be considered the end-all and be-all of defining AI bias. As parameters and outputs become defined, data scientists must also consider organizational bias, cultural bias, and contextual bias.
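One concrete way to act on this is to compute Type I and Type II error rates per subgroup rather than only in aggregate; a large gap between groups is a warning sign even when overall accuracy looks fine. A hedged sketch with synthetic labels and groups:

```python
def group_error_rates(y_true, y_pred, groups):
    """Per-group false positive rate (Type I) and false negative rate (Type II)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        negatives = sum(1 for i in idx if y_true[i] == 0)
        positives = sum(1 for i in idx if y_true[i] == 1)
        rates[g] = {
            "fpr": fp / negatives if negatives else 0.0,
            "fnr": fn / positives if positives else 0.0,
        }
    return rates

# Synthetic example: the model is perfect on group "A" and wrong on group "B",
# even though overall accuracy is a respectable-looking 50%.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = group_error_rates(y_true, y_pred, groups)
print(rates)
```

Reporting these rates per group, alongside the documented organizational and cultural bias assumptions, is a practical form of the bias documentation argued for below.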
Simply stating that “the data will speak for itself” does not mean that the AI lacks bias; it only means that the AI project is actively ignoring any bias that may be in place. As I said before, the most honest approach to AI is to acknowledge and document bias rather than to simply try to “eliminate” bias. Bias documentation is a sign of understanding both the problem and the methods, not a weakness.

An extreme example is Microsoft's “Tay” chatbot released in 2016. This bot was released “without bias” to support conversational understanding. The practical aspect of this lack of bias was that the bot lacked the context to filter racist messages and to differentiate between strongly emotional terms and culturally appropriate conversation. In this case, the lack of bias led to the AI's inability to be practically useful. In a vacuum, the most prevalent signals and inputs will take precedence over the most relevant or appropriate signals. Unless the goal of the AI is to reflect the data that is most commonly entered, an “unbiased” AI approach is generally going to reflect the “GIGO” aspect of programming that has been understood for decades. This challenge reflects the foundational need to understand the training and distribution of data associated with the building of AI.

What algorithms and toolkits are being used to analyze and optimize?

The good news about AI is that it is easier to access than ever before. Python resources and a plethora of machine learning libraries, including PyTorch, scikit-learn, Keras, and, of course, TensorFlow, make machine learning relatively easy to access for developers and quantitatively trained analysts. The bad news is that it becomes easy for someone to implement an algorithm without fully understanding the consequences.
For instance, a current darling in the data science world is XGBoost (Extreme Gradient Boosting), which has been a winning algorithmic approach in recent data science contests because it reaches an efficient minimum more quickly than standard gradient boosting. But it also requires expertise in starting with appropriate features, stopping the model training before the algorithm overfits, and appropriately fine-tuning the model for production. So it is not enough to simply use the right tools or the most “efficient” algorithms; models must also be effectively fitted, stopped, and tuned so that they are appropriate for the real world and so that AI bias is kept from propagating and gaining outsized influence.

In our next blog, we will explore Operational Deployment with a focus on the line-of-business concerns that business analysts and managers consider as they actually use the AI application or service, and the challenges that occur as the AI logic becomes obsolete or flawed over time.
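The early-stopping concern above generalizes beyond XGBoost: most gradient-boosting tools accept a rule that halts training once a held-out metric stops improving for a set number of rounds. A library-free Python sketch of that rule, with hypothetical per-round validation losses:

```python
def early_stopping_round(val_losses, patience=3):
    """Return (best round, best loss): training stops once `patience`
    consecutive rounds pass with no improvement on the best loss seen."""
    best_loss = float("inf")
    best_round = 0
    for rnd, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_round = loss, rnd
        elif rnd - best_round >= patience:
            break  # no improvement for `patience` rounds: stop boosting
    return best_round, best_loss

# Hypothetical validation loss per boosting round: improves, then overfits.
val_losses = [0.90, 0.72, 0.61, 0.55, 0.53, 0.54, 0.56, 0.58, 0.60]
stop_round, best = early_stopping_round(val_losses)
print(stop_round, best)  # stops at round 4, loss 0.53
```

In a real XGBoost run the same effect comes from supplying a validation set and an early-stopping rule to the training call; the point is that the stopping criterion is a modeling decision the team must own, not a default to leave untouched.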
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474715.58/warc/CC-MAIN-20240228112121-20240228142121-00565.warc.gz
CC-MAIN-2024-10
8,624
33
https://stackoverflow.com/questions/61966153/mysql-connect-server-sent-charset-255-unknown-to-the-client
code
I'm using PHP 5.6.40 and MySQL 5.7, but when I try to connect to a remote database (MySQL 8), a problem occurs:

Warning: mysql_connect(): Server sent charset (255) unknown to the client.

I tried a lot of things, like here, but nothing has changed.

My connection code:

$connect = mysql_connect("XXX:25060","XXX","XXX") or die ();
mysql_select_db("defaultdb", $connect) or die ( mysql_error() );
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988724.75/warc/CC-MAIN-20210505234449-20210506024449-00231.warc.gz
CC-MAIN-2021-21
392
5
http://ssanitea.blogspot.com/2013/04/fuel-lemon-and-ginger-tea.html
code
Put together two rough videos of the 2 art projects I'm doing for my major project. We had 4 projects' worth of time to fill, and each project is meant to take roughly 4 weeks. All of my projects were involved in the Echoes project. I only have 2 art projects as my other two projects got consumed by directing and game engine stuff.

Project Echoes - Dragon from Ruth Beresford (C) 2013 on Vimeo.

Project Echoes - Props from Ruth Beresford (C) 2013 on Vimeo.

As you can see from the videos, I need to quickly wrap up all the props I've made, while also wrapping up the game engine and directing. About 3 weeks left, so you can really feel the tension rising on the course, but the excitement is quite a good feeling to be around. Everyone is really motivated and positive atm, so let's just hope that continues ;)
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593010.88/warc/CC-MAIN-20180722041752-20180722061752-00306.warc.gz
CC-MAIN-2018-30
814
5
https://www.audiobombs.com/items/1436/free-drumsamples-from-murmux-v2-for-analog-rytm-%28and-wav%29
code
This FREE analog soundpack for the Analog Rytm was created with the awesome Dreadbox murmux v2. Please subscribe to my YouTube channel (https://www.youtube.com/channel/UCst43Ia_CGs8t4oI7_prKag) for more free soundpacks. And I will be very happy if you like or comment on this. And if you use this soundpack to make audio or a video, please also post the link in the comments. Thanks and peace!

First you have to create a new project, load the samples into the +Drive, and then load the SysEx. Please use the freeware C6 (https://www.elektron.se/support/?conn...) for dumping the files.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816893.19/warc/CC-MAIN-20240414161724-20240414191724-00707.warc.gz
CC-MAIN-2024-18
587
3
https://headhonchos.quezx.com/jobs/Software-Developer-IT-Information-Technology-IT-Technology-Software-Services-Bangalore-1147779.html
code
(a) Design and development of BI systems/applications as part of monthly releases and data acquisition process needs.
(b) Communicate design, requirements, feature set, functionality, usability and limitations of the subsystem to the team and/or development lead or manager.
(c) Participate in Business Requirements and Functional Requirements meetings, identify gaps in requirements and drive discussion around appropriate solutions.
(d) Design and code high quality database solutions within a fast-paced monthly release cycle.
(e) Manage errors gracefully. Document code and work completed.
(f) Conduct thorough unit testing of code and document the unit test cases.
(g) Conduct appropriate performance testing to ensure all solutions will meet SLAs and performance criteria.
(h) Provide support as needed throughout Test and User Acceptance Testing phases.
(i) Create Technical Design Specification documentation that clearly articulates the design and code being implemented.
(j) Provide client communication as appropriate to the project.
(k) Develop new reports and provide technical support for the applications.
(l) Be able to translate technical specifications into finished programs and systems, and have a thorough understanding of developing solutions to handle the large-volume data sets that are typical of BI solutions.

SQL Database Administrator/Analyst experience. Experience with TSQL, SSIS packages, SSAS or OLAP. Knowledge of MDX and DAX cube customization is a must. Analysis Services/OLAP experience. Understands how to create a proper DB schema, ETL package and Analysis Services cube. Strong data warehouse design and physical data modeling skills. Ability to quickly troubleshoot issues and work on multiple tasks in parallel. Excellent coding and debugging skills.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257497.4/warc/CC-MAIN-20190524024253-20190524050253-00386.warc.gz
CC-MAIN-2019-22
1,777
20
https://www.freelancer.com/projects/php-website-design/developing-website/
code
I want to develop a social networking website. I will provide you the web design in a .psd file. You have to develop the site in phpFox. Please show your previous work. A person from Mumbai, India is preferred. After hiring the right candidate, the files and further details will be given. My budget will be $555.

16 freelancers are bidding on average $570 for this job

Dear mam/ sir, Recently completed a very good social networking site with Facebook features, which is exactly what you are looking for. Kindly please check your PMB. Many Thanks Uzma

I have the skills needed to get this job done quickly and effectively. I plan to build a working site for your needs and continue expanding as needed and as you desire.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948543611.44/warc/CC-MAIN-20171214093947-20171214113947-00565.warc.gz
CC-MAIN-2017-51
701
10
http://python.sys-con.com/node/2537926
code
By PR Newswire | February 13, 2013 08:00 AM EST

TYSONS CORNER, Va., Feb. 13, 2013 /PRNewswire/ -- MicroStrategy® Incorporated (Nasdaq: MSTR), a leading worldwide provider of business intelligence (BI) and mobile software, today announced that it has been positioned by Gartner, Inc. in the "Leaders" quadrant in the 2013 "Magic Quadrant for Business Intelligence and Analytics Platforms" report.(1) According to Gartner, the dominant theme of the market in 2012 was that data discovery became a mainstream BI and analytic architecture. Increasingly, Gartner sees more organizations building diagnostic analytics that leverage critical capabilities, such as interactive visualization, to enable users to drill more easily into the data to discover new insights. Furthermore, Gartner believes "this emphasis on data discovery from most of the leaders in the market — which are now promoting tools with business-user-friendly data integration, coupled with embedded storage and computing layers (typically in-memory/columnar) and unfettered drilling — accelerates the trend toward decentralization and user empowerment of BI and analytics, and greatly enables organizations' ability to perform diagnostic analytics." A copy of the Gartner report is available, compliments of MicroStrategy, at http://www.microstrategy.com/gartner.

In 2012, MicroStrategy delivered a bold set of innovative solutions and critical capabilities for its Business Analytics, Mobile App Platform, and Cloud offerings.

MicroStrategy Reinforces its Enterprise-Scale Pedigree With Innovations in Visual Exploration, Dashboards, Big Data and Business Analytics

In July 2012, MicroStrategy unveiled MicroStrategy 9.3™, the latest version of its core business intelligence platform.
MicroStrategy 9.3 includes dozens of powerful new capabilities and improvements to Visual Insight, MicroStrategy's data discovery software, increased support for advanced analytics from 'R' — an open source language for predictive analysis — and improved connectivity to Hadoop®. In addition, MicroStrategy 9.3 delivers a high speed "Google-like" search experience and introduces an innovative administration product, MicroStrategy System Manager™, for automating manual, multi-step processes. Taken together, these enhancements help ensure that business people can receive critical business information at the right time for making better business decisions. In November 2012, MicroStrategy certified its integration with the Amazon Redshift data warehouse service. Launched by Amazon Web Services (AWS), Amazon Redshift is a cloud-based analytical data warehouse service designed to deliver high performance analytics for data warehouse and Big Data applications. MicroStrategy Offers Advanced Mobile Capabilities In January 2012, MicroStrategy announced the general availability of MicroStrategy 9.2.1m for customers to build information-driven mobile apps for iPhone®, iPad®, and Android™ devices. MicroStrategy 9.2.1m contains over a dozen new capabilities that further enhance MicroStrategy-powered mobile apps. The new capabilities include support for offline mobile transactions, intelligent offline caching, and a secure mobile content management system — all resulting in a richer user-experience. In April 2012, MicroStrategy unveiled a new version of MicroStrategy Mobile™ that delivers enhanced integration with Apple's AirPlay® feature. This capability extends MicroStrategy Mobile wirelessly onto conference room screens, encouraging spontaneous conversations rather than static slide presentations. 
When MicroStrategy Mobile users "mirror" their business apps onto screens, the device becomes a data remote-control, letting the user discuss the data on the screen while controlling it wirelessly from an Apple device. For more information, visit: https://www.microstrategy.com/mobile/. MicroStrategy Extends Leadership in Cloud BI Offering MicroStrategy continued to establish itself as a leader in Cloud-based business analytics. During 2012, customers on its MicroStrategy Cloud Platform ran over 2 million reports per week through the Company's innovative BI in the cloud service. MicroStrategy Cloud™ builds on the analytic, visualization, and mobile capabilities of MicroStrategy 9.3 and offers customers a turnkey business analytics solution, including infrastructure, data hosting, ETL, and BI technologies. The MicroStrategy Cloud service also includes expert resources to manage and monitor the systems. Customers of all sizes have found value in the MicroStrategy Cloud service through improved business agility, lower cost, and lower financial and operational risks. MicroStrategy Unveils Express, an Innovative Software-as-a-Service (SaaS) Offering In October 2012, MicroStrategy announced the general availability of MicroStrategy Express™, enabling any businessperson — regardless of technical skill — to access and analyze data on his own, and deploy powerful data-driven web and mobile intelligence apps to thousands of users within days. Express combines the simplicity and flexibility of a cloud-based solution with the analytical depth, performance and scalability of world-class business intelligence. Business people can access on-premises and cloud-based data rapidly and securely, and explore it using powerful and intuitive data visualizations. They can design and share mobile apps without writing a line of code. 
They can build boardroom-quality dashboards using pixel-perfect editing capabilities, and automatically publish personalized documents to any number of recipients. To try the service, visit https://www.microstrategy.com/express/. In 2012, MicroStrategy delivered numerous technology innovations for its Wisdom, Alert, and Usher product offerings. Wisdom: In July 2012, MicroStrategy announced the availability of MicroStrategy Wisdom Professional™, a market intelligence application that can explore the demographics, interests, activities, and preferences of nearly 20 million Facebook users in the Wisdom dataset. MicroStrategy Wisdom Professional provides businesses with intelligence to improve a wide range of marketing activities including brand management, consumer promotions, media buying, location planning, competitive analysis, business development, and social media marketing. To sign up for a free trial of MicroStrategy Wisdom Professional, visit: http://www.wisdom.com/professional/. Alert: MicroStrategy Alert is a cloud-based platform that lets retailers deploy a branded mobile commerce app in a matter of weeks, without the high cost and delay of developing an application from scratch and maintaining it on their own. By integrating with a retailer's customer data system, marketing assets and social channels, Alert-powered apps can include a broad range of functionality, including a native storefront, store and product locators, digital receipts, and loyalty features. In addition, Alert offers other innovative capabilities, such as targeted promotions, peer-to-peer gifting, and detailed analytics on commercial activity and customer usage. In January 2013, West-Coast based specialty retailer Tilly's launched a state-of-the-art mobile commerce app, powered by MicroStrategy's Alert Mobile Commerce Platform, providing its customers with an enhanced experience of the Tilly's brand. 
Features include access to exclusive Tilly's content, a streamlined mobile shopping experience, and wallet functionality that allows personalized promotions and receipts. To see what Alert can do for other brands in only a matter of weeks, visit: http://www.alert.com/. Usher: MicroStrategy Usher™ is a mobile identity network that offers businesses a more convenient and secure alternative to physical IDs, keys, and cards. It also provides a more effective way to manage the workforce, improve customer service, and reduce the threat of cyber-attacks. To download Usher, visit: http://www.usher.com/. "We believe our consistent position in the Leaders quadrant over the past five years underscores our commitment to quality, performance and innovation," said MicroStrategy President Paul Zolfaghari. "As significant demand for visual data discovery functionality has developed over the past few years, MicroStrategy has not only been keenly focused on perfecting our visual data discovery product, MicroStrategy Visual Insight, but we have also renewed our dedication to increasing the usability and ease-of-use of the entire platform overall." (1)Gartner "Magic Quadrant for Business Intelligence and Analytics Platforms" by Kurt Schlegel, Rita L. Sallam, Daniel Yuen, Joao Tapadinhas, February 5, 2013. About the Magic Quadrant Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 
Founded in 1989, MicroStrategy (Nasdaq: MSTR) is a leading worldwide provider of enterprise software, including the MicroStrategy Business Intelligence (BI) Platform™, the MicroStrategy Mobile Platform™, and MicroStrategy Applications™. The Company offers its technologies for deployment in customer data centers and as proprietary cloud services. The MicroStrategy BI Platform enables leading organizations to analyze vast amounts of data and distribute business insight throughout the enterprise. The MicroStrategy Mobile Platform lets organizations rapidly build enterprise-caliber mobile applications needed to mobilize business processes and information. MicroStrategy Applications are a set of application services designed to help enterprises deploy mobile commerce and loyalty services, build mobile identity and cyber security services, as well as generate real-time insights into consumer preferences. MicroStrategy Cloud™ allows enterprises to deploy MicroStrategy BI apps and mobile apps more quickly and with lower financial risk than with traditional on-premises solutions. To learn more about MicroStrategy, visit www.microstrategy.com and follow us on Facebook (http://www.facebook.com/microstrategy) and Twitter (http://www.twitter.com/microstrategy). MicroStrategy, MicroStrategy Business Intelligence Platform, MicroStrategy Mobile, MicroStrategy Mobile Platform, MicroStrategy Applications, MicroStrategy Cloud, MicroStrategy 9.3, MicroStrategy System Manager, MicroStrategy Express, MicroStrategy Wisdom Professional, and MicroStrategy Usher are either trademarks or registered trademarks of MicroStrategy Incorporated in the United States and certain other countries. Other product and company names mentioned herein may be the trademarks of their respective owners. 
SOURCE MicroStrategy Incorporated With an estimated 50 billion devices connected to the Internet by 2020, several industries will begin to expand their capabilities for retaining end point data at the edge to better utilize the range of data types and sheer volume of M2M data generated by the Internet of Things. In his session at @ThingsExpo, Don DeLoach, CEO and President of Infobright, will discuss the infrastructures businesses will need to implement to handle this explosion of data by providing specific use cases for filte... Feb. 8, 2016 08:00 PM EST Reads: 131 Fortunately, meaningful and tangible business cases for IoT are plentiful in a broad array of industries and vertical markets. These range from simple warranty cost reduction for capital intensive assets, to minimizing downtime for vital business tools, to creating feedback loops improving product design, to improving and enhancing enterprise customer experiences. All of these business cases, which will be briefly explored in this session, hinge on cost effectively extracting relevant data from ... Feb. 8, 2016 03:00 PM EST SYS-CON Events announced today that VAI, a leading ERP software provider, will exhibit at SYS-CON's 18th International Cloud Expo®, which will take place on June 7-9, 2016, at the Javits Center in New York City, NY. VAI (Vormittag Associates, Inc.) is a leading independent mid-market ERP software developer renowned for its flexible solutions and ability to automate critical business functions for the distribution, manufacturing, specialty retail and service sectors. An IBM Premier Business Part... Feb. 8, 2016 03:00 PM EST Reads: 574 SYS-CON Events announced today that Alert Logic, Inc., the leading provider of Security-as-a-Service solutions for the cloud, will exhibit at SYS-CON's 18th International Cloud Expo®, which will take place on June 7-9, 2016, at the Javits Center in New York City, NY. 
Alert Logic, Inc., provides Security-as-a-Service for on-premises, cloud, and hybrid infrastructures, delivering deep security insight and continuous protection for customers at a lower cost than traditional security solutions. Ful... Feb. 8, 2016 02:00 PM EST Reads: 378 SYS-CON Events announced today that Interoute, owner-operator of one of Europe's largest networks and a global cloud services platform, has been named “Bronze Sponsor” of SYS-CON's 18th Cloud Expo, which will take place on June 7-9, 2015 at the Javits Center in New York, New York. Interoute is the owner-operator of one of Europe's largest networks and a global cloud services platform which encompasses 12 data centers, 14 virtual data centers and 31 colocation centers, with connections to 195 ad... Feb. 8, 2016 12:45 PM EST Reads: 357 As enterprises work to take advantage of Big Data technologies, they frequently become distracted by product-level decisions. In most new Big Data builds this approach is completely counter-productive: it presupposes tools that may not be a fit for development teams, forces IT to take on the burden of evaluating and maintaining unfamiliar technology, and represents a major up-front expense. In his session at @BigDataExpo at @ThingsExpo, Andrew Warfield, CTO and Co-Founder of Coho Data, will dis... Feb. 8, 2016 12:30 PM EST Reads: 145 SYS-CON Events announced today that Commvault, a global leader in enterprise data protection and information management, has been named “Bronze Sponsor” of SYS-CON's 18th International Cloud Expo, which will take place on June 7–9, 2016, at the Javits Center in New York City, NY, and the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA. Commvault is a leading provider of data protection and information management... Feb. 8, 2016 10:45 AM EST Reads: 384 The cloud promises new levels of agility and cost-savings for Big Data, data warehousing and analytics. 
But it’s challenging to understand all the options – from IaaS and PaaS to newer services like HaaS (Hadoop as a Service) and BDaaS (Big Data as a Service). In her session at @BigDataExpo at @ThingsExpo, Hannah Smalltree, a director at Cazena, will provide an educational overview of emerging “as-a-service” options for Big Data in the cloud. This is critical background for IT and data profes... Feb. 8, 2016 09:30 AM EST Reads: 159 With the Apple Watch making its way onto wrists all over the world, it’s only a matter of time before it becomes a staple in the workplace. In fact, Forrester reported that 68 percent of technology and business decision-makers characterize wearables as a top priority for 2015. Recognizing their business value early on, FinancialForce.com was the first to bring ERP to wearables, helping streamline communication across front and back office functions. In his session at @ThingsExpo, Kevin Roberts... Feb. 7, 2016 12:00 PM EST Reads: 358 SYS-CON Events announced today that Fusion, a leading provider of cloud services, will exhibit at SYS-CON's 18th International Cloud Expo®, which will take place on June 7-9, 2016, at the Javits Center in New York City, NY. Fusion, a leading provider of integrated cloud solutions to small, medium and large businesses, is the industry's single source for the cloud. Fusion's advanced, proprietary cloud service platform enables the integration of leading edge solutions in the cloud, including clou... Feb. 6, 2016 03:30 PM EST Reads: 739 Most people haven’t heard the word, “gamification,” even though they probably, and perhaps unwittingly, participate in it every day. Gamification is “the process of adding games or game-like elements to something (as a task) so as to encourage participation.” Further, gamification is about bringing game mechanics – rules, constructs, processes, and methods – into the real world in an effort to engage people. 
In his session at @ThingsExpo, Robert Endo, owner and engagement manager of Intrepid D... Feb. 5, 2016 09:00 PM EST Reads: 799 Eighty percent of a data scientist’s time is spent gathering and cleaning up data, and 80% of all data is unstructured and almost never analyzed. Cognitive computing, in combination with Big Data, is changing the equation by creating data reservoirs and using natural language processing to enable analysis of unstructured data sources. This is impacting every aspect of the analytics profession from how data is mined (and by whom) to how it is delivered. This is not some futuristic vision: it's ha... Feb. 2, 2016 02:00 PM EST Reads: 417 WebRTC has had a real tough three or four years, and so have those working with it. Only a few short years ago, the development world were excited about WebRTC and proclaiming how awesome it was. You might have played with the technology a couple of years ago, only to find the extra infrastructure requirements were painful to implement and poorly documented. This probably left a bitter taste in your mouth, especially when things went wrong. Feb. 2, 2016 04:30 AM EST Reads: 862 Learn how IoT, cloud, social networks and last but not least, humans, can be integrated into a seamless integration of cooperative organisms both cybernetic and biological. This has been enabled by recent advances in IoT device capabilities, messaging frameworks, presence and collaboration services, where devices can share information and make independent and human assisted decisions based upon social status from other entities. In his session at @ThingsExpo, Michael Heydt, founder of Seamless... Feb. 1, 2016 05:00 AM EST Reads: 952 The IoT's basic concept of collecting data from as many sources possible to drive better decision making, create process innovation and realize additional revenue has been in use at large enterprises with deep pockets for decades. So what has changed? 
In his session at @ThingsExpo, Prasanna Sivaramakrishnan, Solutions Architect at Red Hat, discussed the impact commodity hardware, ubiquitous connectivity, and innovations in open source software are having on the connected universe of people, thi... Jan. 31, 2016 09:00 PM EST Reads: 738 WebRTC: together these advances have created a perfect storm of technologies that are disrupting and transforming classic communications models and ecosystems. In his session at WebRTC Summit, Cary Bran, VP of Innovation and New Ventures at Plantronics and PLT Labs, provided an overview of this technological shift, including associated business and consumer communications impacts, and opportunities it may enable, complement or entirely transform. Jan. 31, 2016 07:15 PM EST Reads: 1,158 There are so many tools and techniques for data analytics that even for a data scientist the choices, possible systems, and even the types of data can be daunting. In his session at @ThingsExpo, Chris Harrold, Global CTO for Big Data Solutions for EMC Corporation, showed how to perform a simple, but meaningful analysis of social sentiment data using freely available tools that take only minutes to download and install. Participants received the download information, scripts, and complete end-t... Jan. 31, 2016 10:00 AM EST Reads: 1,231 For manufacturers, the Internet of Things (IoT) represents a jumping-off point for innovation, jobs, and revenue creation. But to adequately seize the opportunity, manufacturers must design devices that are interconnected, can continually sense their environment and process huge amounts of data. As a first step, manufacturers must embrace a new product development ecosystem in order to support these products. Jan. 31, 2016 10:00 AM EST Reads: 825 Manufacturing connected IoT versions of traditional products requires more than multiple deep technology skills. 
It also requires a shift in mindset, to realize that connected, sensor-enabled “things” act more like services than what we usually think of as products. In his session at @ThingsExpo, David Friedman, CEO and co-founder of Ayla Networks, discussed how when sensors start generating detailed real-world data about products and how they’re being used, smart manufacturers can use the dat... Jan. 30, 2016 07:45 PM EST Reads: 799 When it comes to IoT in the enterprise, namely the commercial building and hospitality markets, a benefit not getting the attention it deserves is energy efficiency, and IoT’s direct impact on a cleaner, greener environment when installed in smart buildings. Until now clean technology was offered piecemeal and led with point solutions that require significant systems integration to orchestrate and deploy. There didn't exist a 'top down' approach that can manage and monitor the way a Smart Buildi... Jan. 30, 2016 03:45 PM EST Reads: 1,284
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701156627.12/warc/CC-MAIN-20160205193916-00124-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
21,659
68
https://smartauthorsitesmain.com/renewing-bluehost-cheaper/
code
Renewing Bluehost Cheaper Finding a high-quality cheap web hosting provider isn't easy. Every website has different demands of a host, and you need to compare all the features of a hosting company while hunting for the best deal possible. That can be a lot to sort through, especially if this is your first time buying hosting or building a website. Many hosts advertise very low introductory rates, only to raise those rates two or three times higher once your initial term is up. Some hosts offer free perks when you sign up, such as a free domain name or a free SSL certificate, while others deliver better performance and higher levels of security. Below we dive deep into the best cheap web hosting plans out there. You'll learn which core hosting features matter in a host and how to assess your own hosting needs, so that you can choose from among the best cheap hosting providers below. Disclosure: when you purchase a hosting plan through links on this page, we earn a commission. This helps us keep this site running. There are no additional costs to you at all from using our links. The list below covers the best cheap web hosting packages that I've personally used and tested. What We Consider To Be Cheap Hosting: when we describe a web hosting package as "cheap" or "budget", we mean hosting that falls in the price bracket between $0.80 and $4 per month. While researching cheap hosting providers for this guide, we looked at over 100 different hosts that fell into that price range. We then assessed the quality of their cheapest hosting plan, value for money, and customer service.
In this article, I'll be discussing this world-class site hosting company and packing in as much relevant information as possible. I'll go over the features, the pricing options, and anything else I can think of that might be of benefit if you're deciding to sign up with Bluehost and get your websites up and running. So without further ado, let's check it out. Bluehost is one of the largest web hosting companies in the world, backed both by heavy marketing from the company itself and by the affiliate marketers who promote it. It really is a huge company that has been around for a very long time, has a big reputation, and is certainly among the top choices for web hosting (definitely within the top three, at least in my book). But what is it exactly, and should you use its services? Today I'll answer everything you need to know, assuming you're a blogger or an entrepreneur who is looking for web hosting and doesn't know where to start, because it's a great service for that audience as a whole. Let's imagine you want to host your websites and make them visible. Okay? You already have your domain name (which is your website's address, or URL), and now you want to "turn the lights on". Renewing Bluehost Cheaper: you need some hosting. To accomplish all of this and make your site visible, you need what is called a "server". A server is a black box, or machine, that stores all your website data (files such as images, text, videos, links, plugins, and other information). Now, this server has to be on all the time, and it needs to be connected to the internet 100% of the time (I'll mention something called "downtime" later on).
Furthermore, it also requires (without getting too fancy or into detail) a file transfer protocol, commonly called FTP, so it can show browsers your site in its intended form. All these things are either expensive or require a high level of technical skill (or both) to set up and maintain. You could go out there, learn all of this yourself, and set it all up... but instead of buying and maintaining your own server, why not just rent hosting instead? This is where Bluehost comes in. You rent their servers (called shared hosting) and launch a website on them. Since Bluehost keeps all your files, the company also lets you set up your content management system (CMS, for short), such as WordPress, for you. WordPress is an extremely popular CMS... so it just makes sense to have that option available (almost every hosting company now offers it too). In short, you no longer need to set up a server and then separately integrate the software you create your content with; it's all rolled into one package. Well... imagine if your server were in your home. If anything happened to it at all, all your data would be gone. If something went wrong with its internal processes, you'd need a specialist to fix it. If something overheated, broke down, or got corrupted... that's no good! Bluehost takes all these troubles away and handles everything technical: pay your server "rent", and they will take care of everything. And once you purchase the service, you can start concentrating on adding content to your website, or put your effort into your marketing campaigns. What Services Do You Get From Bluehost?
Bluehost offers a myriad of different services, but the key one is hosting, of course. The hosting itself comes in different types: you can rent a shared server, get a dedicated server, or use a virtual private server. For the purposes of this Bluehost review, we'll focus on the hosting services and other services that a blogger or an online entrepreneur would need, rather than going too deep down the rabbit hole into the other services, which are aimed at more experienced users. - WordPress, WordPress PRO, and e-commerce – these hosting services are the packages that let you host a site using WordPress and WooCommerce (the latter of which lets you do e-commerce). After purchasing any of these packages, you can start building your website with WordPress as your CMS. - Domain marketplace – you can also buy your domain name from Bluehost instead of from other domain registrars. Doing so makes it easier to point your domain to your host's name servers, since you're using the same marketplace. - Email – once you've bought your domain name, it makes sense to also get an email address linked to it. As a blogger or online business owner, you should pretty much never use a free email service like Yahoo! or Gmail; an email like that makes you look amateurish. Luckily, Bluehost gives you one for free with your domain. Bluehost also offers dedicated servers. And you may be asking... "What is a dedicated server, anyway?" Well, the thing is, the basic web hosting plans from Bluehost can only handle so much traffic to your website, after which you'll need to upgrade your hosting. The reason is that the usual servers are shared: one server can be serving two or more websites at the same time, one of which can be yours.
What does this mean for you? It means that the single server's resources are shared, and it is doing multiple jobs at any given time. Once your website starts to hit around 100,000 visits per month, you are going to need a dedicated server, which you can also get from Bluehost for a minimum of $79.99 per month. This isn't something you need to worry about when you're starting out, but you should keep it in mind for sure. Bluehost Pricing: How Much Does It Cost? In this Bluehost review, I'll focus mainly on the Bluehost WordPress hosting packages, since they're the most popular and likely what you're looking for and what will suit you best (unless you're a huge brand, corporation, or site). The three available plans are as follows: - Basic Plan – $2.95 per month / $7.99 regular price - Plus Plan – $5.45 per month / $10.99 regular price - Choice Plus Plan – $5.45 per month / $14.99 regular price. The first price is what you pay when you sign up, and the second is the regular price after your first year with the company. So essentially, Bluehost bills you annually, and you can also choose how many years of hosting to pay for up front. Renewing Bluehost Cheaper: if you pick the Basic plan, you will pay $2.95 x 12 = $35.40 starting today, and by the time you enter your 13th month you will be paying $7.99 per month, also billed annually. If you are serious about your website, you should absolutely get the three-year option. That means that for the Basic plan you will pay $2.95 x 36 = $106.20, and only when you hit your fourth year will you start paying $7.99 per month. If you think about it, this plan saves you about $120 over the course of three years. It's not much, but it's still something.
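The pricing arithmetic above can be checked with a short script. This is a hypothetical sketch using only the rates quoted in this review ($2.95/month introductory and $7.99/month renewal for the Basic plan); actual Bluehost prices may differ.

```python
# Cost comparison for the Basic plan, using the rates quoted in this
# review (assumptions: $2.95/mo intro rate, $7.99/mo renewal rate).
INTRO_RATE = 2.95    # $/month, introductory rate
RENEWAL_RATE = 7.99  # $/month, regular rate after the intro term

def total_cost(months, intro_months):
    """Total cost when only the first `intro_months` are billed at the intro rate."""
    intro = min(months, intro_months) * INTRO_RATE
    renewal = max(0, months - intro_months) * RENEWAL_RATE
    return round(intro + renewal, 2)

# Paying year by year: only the first 12 months get the intro rate.
yearly = total_cost(36, 12)    # 227.16
# Paying three years up front: all 36 months at the intro rate.
upfront = total_cost(36, 36)   # 106.2
savings = round(yearly - upfront, 2)  # 120.96, i.e. roughly the $120 cited
```

This confirms the roughly $120 savings figure over the first three years.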
If you intend to run more than one website (which I highly recommend, and if you're serious, you'll probably add more at some point), you'll want to use the Choice Plus plan. It lets you host unlimited websites. What Does Each Plan Offer? In the case of the WordPress hosting plans (which are similar to the shared hosting plans but more tailored toward WordPress, and which are what we'll focus on), the features are as follows. For the Basic plan, you get: - one website only - a secured website via SSL certificate - a maximum of 50GB of storage - a free domain name for a year - $200 of marketing credit. Keep in mind that domain names are purchased separately from the hosting, though you can get a free domain name with Bluehost here. For both the Bluehost Plus and Choice Plus plans, you get the following: - an unlimited number of websites - a free SSL certificate - no storage or bandwidth limit - a free domain name for one year - $200 of marketing credit - one Office 365 mailbox that is free for 30 days. The Choice Plus plan has the added advantage of CodeGuard Basic Backup, a backup system where your files are saved and replicated. If any accident happens and your website data disappears, you can restore it to its original form with this feature. Note that even though both plans cost the same up front, the Choice Plus plan then defaults to $14.99 per month, regular price, after the number of years you've chosen. What Are The Benefits Of Using Bluehost? So, why choose Bluehost over other hosting services? There are hundreds of hosts, many of which are resellers, but Bluehost is one of a select few that have stood the test of time, and it's probably the best known out there (and for good reason).
Here are the three major benefits of choosing Bluehost as your hosting provider: - Server uptime – your website won't be visible if your host is down, and Bluehost has more than 99% uptime. This is extremely important when it comes to Google SEO and rankings: the higher, the better. - Speed – how fast your server responds determines how quickly your website renders in a browser, and Bluehost is lightning fast, which means you will reduce your bounce rate. Although it's not the very best when it comes to loading speed, a fast site is still very important for improving user experience and your rankings. - Unlimited storage – if you get the Plus plan, you don't need to worry about how many files you store, such as videos; your storage capacity is unlimited. This really matters, because you'll most likely run into storage problems down the track, and you don't want that to ever become an issue. Finally, customer support is 24/7, which means that no matter where you are in the world, you can contact the support team to fix your website issues. That's pretty standard nowadays, but we shouldn't take it for granted... it's also very important. Renewing Bluehost Cheaper: also, if you got a free domain with them, a $15.99 fee will be deducted from the amount you originally paid (I imagine this is because it sort of takes the domain "off the market"; I'm not sure about this, but there is probably a hard cost for registering it). Finally, any refund requests after 30 days are void (although in all honesty... they probably should be strict here).
So as you can see, this isn't necessarily a "no questions asked" policy, like with some of the other hosting options out there, so make sure you're okay with the terms before proceeding with the hosting.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585737.45/warc/CC-MAIN-20211023162040-20211023192040-00385.warc.gz
CC-MAIN-2021-43
13,836
82
https://blender.stackexchange.com/questions/235744/how-to-properly-shrink-wrap-a-torus-looking-thing
code
I'm trying to shrink wrap "cloth" onto a torus-looking object. It was going fine until I reached the quarter mark of the torus, where the shrink wrap isn't working as I expected. Here's an image of my current model. The wrap is overlapping on itself right now, but that's not currently my biggest concern. As you can see from the images, the wrap twists at the quarter-way point, and rotating the curve vertices doesn't really fix it. What configuration of the shrink wrap should I use? I think this could be a shrink wrap issue. I'll upload my blend file so you guys can check it out.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679516047.98/warc/CC-MAIN-20231211174901-20231211204901-00201.warc.gz
CC-MAIN-2023-50
566
3
https://msdn.microsoft.com/en-us/library/ms242904(v=vs.100)
code
Collaborating within a Team Using Team Project Resources As a member of a team project, you can collaborate effectively with other team members by using the tools and processes that Visual Studio Application Lifecycle Management (ALM) provides. Visual Studio ALM enables collaboration by enhancing communication, tracking work status, enforcing team processes, and integrating tools. The purpose of enhancing communication on a team is to make sure that no information or work is lost when tasks are reassigned from one team member to another. Your team project provides a central location for you and your team members to coordinate work. You can add several collaborative menu options to Team Explorer if you install the Team Members power tool for Visual Studio. By using this tool, you can organize your team into sub-teams, and team members can gain quick access to instant messaging and e-mail, share queries and links, and download and install custom components of Team Foundation. This power tool is not supported. For more information, see the following page on the Microsoft Web site: Team Foundation Server Power Tools. In this topic Share documents by uploading them to the project portal. You can share files that you want to make available to all team members by uploading them to the project portal. The Documents node of your team project displays all the project portal document libraries as its child nodes. These nodes are the same names that you see when you click Documents and Lists in the project portal. The Documents node is another view of the document libraries on the project portal. The Documents node is present only when your team project has a project portal enabled and the project is associated with a SharePoint site. For more information, see Access a Team Project Portal and Process Guidance. You can view documents by double-clicking them. You can also upload, delete, move, and perform other tasks on the documents and folders. 
Share information that is specific to one or more work items by using attachments. You can share information that is specific to a task, bug, or other type of work item by attaching it to the work item. For example, you can attach an e-mail thread, a document, an image, a log file, or another type of file. You add files to work items from the Attachments tab on the work item form. Augment descriptions and history of work items. As new information becomes available, you can add it to a work item, either in the Description or History fields. In the History field, you can format text to provide emphasis or capture a bulleted list. Link your changesets to user stories, requirements, tasks, or bugs. If your team is using Team Foundation version control, when you check in the files that you have changed to complete a work item, you might want to add a link to the changeset for that work. This allows you and the team to track what files were involved in completing the work item. Link versioned items to user stories, requirements, tasks, or bugs. If your team is using Team Foundation version control, you can associate version control changes to a particular work item. This allows other members of your team to see what changes have been made in the source code to address a work item. Add nonsource code files to version control. Team members can add important project documentation, artifacts, and other nonsource code files to Team Foundation version control. This enables the team to manage all source files for a project in one location. Track dependencies by linking work items. You can better manage risks and dependencies if you create relationships between work items. Your team can more easily evaluate the following situations when work items are linked: Monitor progress on feature development by linking work items. The reports that are listed in the next column require that you create links from user stories or requirements to tasks and test cases. 
Manage and share your work item queries. You can create, save, copy, and rename work item queries. You can maintain a private set of queries or share them with other team members. Share work items with other team members. You can share work items, work item queries, and query results with other team members by using e-mail, using query folders, or posting a hyperlink. Get notified when changes occur in your team project. You can determine whether you want to receive e-mail when one of the following events occurs: Alert subscriptions are defined for each team project. You can add different alerts for each team project that you have permission to access. Send notifications to your team. You can create alerts that are sent to an e-mail distribution group. Support the flow of work by updating work items. Changing the State and the Assigned To fields in work items are the primary ways by which work gets handed off to team members and work is effectively tracked. Assign work to specific product areas and iterations. By assigning work items to specific product areas, you help organize the work and support team members to understand what work is associated with specific product features and functions. By assigning work items to the iteration in which they will be addressed, you help keep all members of the team informed about what work is current. The project administrator for each team project defines area and iteration paths for that project so that the team can track progress by those designations. Review process guidance for your team. Process guidance provides information about how to coordinate work on a team project, and how to use a type of work item in the overall project life cycle. Process guidance can provide details about a team project. Details can include information about how to complete work item fields, examples of healthy and unhealthy reports, query descriptions, roles to assume, activities to complete, and other information.
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00020-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
5,913
27
https://www.toogit.com/jobs/skill/press-release-writing
code
Fixed - Posted at 3 days ago If you are a seasoned publicist with a rockstar track record who just branched off to do your own thing and you have public relations agency experience - we're looking for you! Who we are: We are a new company who just launched - and we're looking for some help with getting press releases written and put out on PR distribution s... Create a free profile to broadcast your skills, experience, and desired pay rate to clients. You choose the payment method that's best for you to easily get paid for your work.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00428.warc.gz
CC-MAIN-2022-40
558
4
https://guidelines.openminted.eu/guidelines_for_providers_of_corpora/
code
Providers of corpora Content may be imported in OpenMinTeD in the form of single documents or already packaged in the form of corpora, i.e. collections of single documents. Corpora may come (upon bilateral agreements) from repositories of language resources, or discipline-specific repositories, or uploaded by users for processing with TDM applications. If you wish to share corpora through OpenMinTeD, you will find more information here. What types of corpora Corpora in the OpenMinTeD framework refer mainly to collections of documents that will be used as mining source in the TDM process. If they are uploaded in OpenMinTeD, they may not necessarily be composed of scholarly works. Examples include reference corpora (i.e. corpora deemed representative of general language or a sublanguage usage), news corpora, collections of domain-specific texts, such as manuals, technical reports, etc., as well as annotated corpora, such as treebanks, morphologically tagged golden corpora etc. Nevertheless, in order to be mined they must follow the technical requirements that have been defined for corpora built through the OpenMinTeD mechanism1. Otherwise, they can be used (upon availability of the respective components/applications) for other objectives, such as training Machine Learning models, evaluating the performance of applications, etc. Minimum requirements for corpora If you want to share your corpus through OpenMinTeD, you must - ensure that the single documents comprising the corpus adhere to the minimal level of the OpenMinTeD Interoperability specifications, - describe the corpus with a metadata record compliant with the OMTD-SHARE schema, at least at the minimal level, - prepare, package and register a zipped file with the contents (texts) of the corpus according to the instructions for uploading corpora. 1. 
In the case of single documents (publications) uploaded in the registry, the OpenMinTeD platform includes a mechanism for automatically generating corpora based on user criteria selected from a faceted view - more details are included in the Building corpora of scholarly content offered in OpenMinTeD.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529472.24/warc/CC-MAIN-20190420080927-20190420102927-00475.warc.gz
CC-MAIN-2019-18
2,141
12
http://www.seomastering.com/wiki/CodeBeamer_(software)
code
codeBeamer is a web-based Collaborative Application Lifecycle Management tool for distributed software development, written in Java. It is developed and marketed by Intland Software. Its license is proprietary, but free versions and free hosting options are available. It won the Jolt Productivity Award in 2005 and 2008; the 2008 DDJ reviewer, Gary Pollice, noted that codeBeamer "stands out among the competition because of its ability to play nicely with other products", based on the availability of plug-ins for IDEs like Eclipse and NetBeans and integration with Microsoft Word, and praised the browser-based user interface for its ease of use. Notable codeBeamer users include Asus, Continental AG, Bayer, Sun Microsystems and Los Alamos National Laboratory. It is also used by non-profit organizations, including SIPAM (Amazonian Protection System). codeBeamer features the following major functional components: - Document Manager - Issue Tracker - Version Control: codeBeamer integrates with Subversion, Git, Mercurial, CVS and some other version control software. See also: - Comparison of free software hosting facilities - Comparison of issue tracking systems - Comparison of wiki software
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00066-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
1,263
10
https://github.com/lmorchard/badg.us/blob/80f27766c864c80a05982548ef5378e0c3adb091/docs/index.rst
code
Welcome to this project's documentation! This is a documentation template for a web application based on Playdoh. Feel free to change this to your liking. This project is based on playdoh. Mozilla's Playdoh is an open source web application template based on Django. To learn more about it, stop by the playdoh project page.
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096301.47/warc/CC-MAIN-20150627031816-00141-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
324
4
https://www.tothenew.com/blog/context-aware-configuration-in-aem/
code
Imagine you have a special part on your website that shows different information based on where the website visitor is from. Let’s call it the “Location Info” part. This part is used in many places on the website, like different pages. Now, some places on the website have different needs – they want to show specific information based on where the visitor is located. For example, they might want to show different offers or messages depending on whether the visitor is in New York or Los Angeles. However, a unique challenge arises – the content of this component must adapt based on the geographical location of the user. This is where the power of context-aware configuration in AEM comes into play, facilitating the display of distinct configuration values for the same component on pages catering to different locations. What is Context-aware configuration When building a multi-national website with AEM, you may need to use different configurations for different country-specific pages. However, creating a separate configuration file for each country can be complex and error-prone. Instead, you can use the context-aware configuration feature in AEM to define one configuration class file with different configurations for multiple contexts. This approach simplifies your development process and reduces the risk of errors. Steps for Implementation of Context-Aware Configuration 1. Context-aware configuration: To manage site-specific configurations, we can create a configuration class that defines configurations. The @Configuration annotation is mandatory. In the configuration class, we can define a set of properties that specify the default configuration values. 2. Using a model class to retrieve configurations To use the Configuration class in our components, we need to create a model class that adapts to the configuration. Inside the Model class, we create a function that takes currentPage and ResourceResolver as Input. 
Context-aware configurations are built on top of context-aware resources. The same concept is used: configurations are named, and the service that retrieves them is the ConfigurationResolver. You can get a reference to the OSGi service org.apache.sling.caconfig.ConfigurationResolver – it has a single method that returns a ConfigurationBuilder. Alternatively, you can adapt your content resource directly to the ConfigurationBuilder interface and get the configuration. The ConfigurationBuilder also supports getting the configurations as a ValueMap, or by adapting the configuration resources, e.g., to a Sling Model; in that case, we have to specify a configuration name, which is otherwise derived automatically from the annotation class. Internally, the ConfigurationResolver uses the ConfigurationResourceResolver to get the configuration resources. It always uses the bucket name sling:configs. Annotate the retrieval function with @PostConstruct so that all variables are populated with values from the configuration. 3. Create the component. Inside the component, the model is referenced via data-sly-use, and all the values it returns are rendered on the page. 4. Create folders for every content resource inside /conf/mysite. These folders have jcr:primaryType = sling:Folder. Inside /conf/mysite, ca, ca_b1 and ca_bk are folders of jcr:primaryType = sling:Folder, and inside each of these folders create a sling:configs folder of the same type. 5. Create a configuration node. Under every sling:configs folder, create a configuration node of jcr:primaryType = nt:unstructured, and define values on every node according to the context. 6. Inside /content/mysite, under the specific site hierarchy, add the property sling:configRef = [path of the folder that has the configuration for that specific site] on jcr:content. When we use a component on any page, it picks up its configuration according to the sling:configRef path.
If configuration is not defined for some part of the hierarchy, it is inherited from the parent; that is, in our case, if we do not define sling:configRef on ca/b1/jcr:content, the component picks up the configuration values from ca, i.e., from the parent. (Screenshots showing the resolved values for CA, CA_b1 and CA_bk appeared here.) If we remove the sling:configRef property from /content/mysite/ca/b1/jcr:content, then it inherits the values from /content/mysite/ca/jcr:content, i.e., the parent.
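The sling:configRef inheritance described above is essentially a walk up the content tree until a configuration reference is found. Here is a minimal, language-agnostic sketch of that lookup in Python; the paths and the mapping are illustrative, not real AEM API calls.

```python
# Illustrative model of sling:configRef resolution: a page without its own
# configuration reference inherits the one defined on its nearest ancestor.
config_refs = {
    "/content/mysite/ca": "/conf/mysite/ca",
    "/content/mysite/ca/bk": "/conf/mysite/ca_bk",
    # /content/mysite/ca/b1 deliberately defines no sling:configRef
}

def resolve_config_ref(page_path):
    """Walk up the content tree until a sling:configRef is found."""
    path = page_path
    while path:
        if path in config_refs:
            return config_refs[path]
        path = path.rsplit("/", 1)[0]  # move to the parent page
    return None  # no configuration anywhere in the hierarchy

# ca/b1 has no reference of its own, so it inherits /conf/mysite/ca
# from its parent, exactly as in the example above.
```

In real AEM, the ConfigurationResourceResolver performs this fallback for you; the sketch only illustrates the inheritance rule.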
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475238.84/warc/CC-MAIN-20240301093751-20240301123751-00356.warc.gz
CC-MAIN-2024-10
4,229
31