|url stringlengths 13–4.35k|tag stringclasses 1 value|text stringlengths 109–628k|file_path stringlengths 109–155|dump stringclasses 96 values|file_size_in_byte int64 112–630k|line_count int64 1–3.76k|
---|---|---|---|---|---|---|
https://www.unrealengine.com/marketplace/en-US/product/slice-meshes-skeletons-projectiles
|
code
|
Real physical blade slices through meshes, skeletons, projectiles, cannons. Matrix-style bullet trails in slow motion. 100% blueprints.
TO KNOW BEFORE YOU BUY: This package does not perform ragdoll skeleton slicing. The instant you slice a skeleton, it is converted into a static mesh (at a hit position you can choose), then sliced into two parts.
IMPORTANT: to destroy cannons with your blade:
- for UE5, the Chaos Destruction plugin must be enabled.
- for UE4, the Apex Destruction plugin must be enabled.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.30/warc/CC-MAIN-20230924191454-20230924221454-00289.warc.gz
|
CC-MAIN-2023-40
| 490 | 5 |
http://discussions.nokia.com/t5/Lumia/Lumia-920-What-is-NFC/m-p/1644928
|
code
|
So what is it? What can it be used for? As a user in London.
Wasn't even aware of this until I upgraded to the Lumia 920.
I'm curious as to how it can benefit me...
Solved! Go to Solution.
NFC is short for Near Field Communication. It allows devices with NFC to establish communication by just touching them together. Examples of use are as follows.
Pairing devices by just touching. You can already buy NFC speakers which will pair with your phone by just touching. Soon, you should be able to pay for your purchases by touching a transponder pad with your NFC electronic wallet. You can transfer information between devices by touching. Devices can read data from special NFC tags by just touching.
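For developers wondering what tag reading looks like in practice, here is a minimal Python sketch using the third-party nfcpy library (the library, the USB reader, and all names here are my illustration, not something mentioned in this thread):
import nfc

def on_connect(tag):
    # Called when a tag touches the reader; print any NDEF records it carries.
    if tag.ndef:
        for record in tag.ndef.records:
            print(record)
    return True  # hold the connection until the tag is removed

clf = nfc.ContactlessFrontend('usb')  # open the first USB NFC reader
try:
    clf.connect(rdwr={'on-connect': on_connect})
finally:
    clf.close()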
12-12-2012 13:31 - edited 12-12-2012 13:34
You can find out a lot of information about NFC via a quick Google search. As for when you'll be able to pay for things, that's down to card providers and network providers.
Orange supports it, but only on a limited set of phones at the moment.
Also more info from Nokia about NFC here:
And as you live in London......
Having a 920 with an NFC chip and Wallet is the basic requirement. You may still need a different SIM card from your network provider and/or specific apps to make these sort of transactions work.
It's early days. But I'm hopeful!
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928586.49/warc/CC-MAIN-20150521113208-00114-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 1,308 | 13 |
https://www.fortemont.com/about/
|
code
|
One Fortemont ID for all services
Fortemont provides cloud-based services for businesses. We are developing operational applications to help millions of businesses reach their objectives.
Fortemont is a full-service consultancy started as a branding, media production and digital agency in 2016.
Founder Steven Cheng wanted to build a system that helps millions of people through his multidisciplinary approach, which involves strong branding strategy, simplicity in design, memorable narrative and advanced technology.
Since 2005 he has been running online communities, working for multiple creative agencies, managing production of a global animation production studio, running a branding project of a country, and leading a startup to develop award-winning enterprise solutions.
Aiming for the mountaintop
We believe in a pragmatic, logical and consistent approach, and we always strive for the best possible solutions. Because only the best is enough.
We are developing products
We are developing software products for both the real world and the digital medium, through spatial computing, Augmented Reality and Mixed Reality technology.
Get in touch with us if you want to be a close member of our development team.
Get started today
Send us a message to schedule a time for a call.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476137.72/warc/CC-MAIN-20240302215752-20240303005752-00403.warc.gz
|
CC-MAIN-2024-10
| 1,275 | 12 |
https://www.indiedb.com/news/gif-devlog-2
|
code
|
GIF Devlog #2 (Bouncing & New Level)
Current progress on first level and new mechanics for bear.
Img 1: Bear chasing after hearts, but as bear learns, hearts aren't so easy to get..
Img 2: A pan around one of the early alpha levels, with custom ambient occlusion added.
Img 3: Bouncing and freefalling off a castle and trying to hit the stop button for record at the end.
Hopefully you enjoyed the update. I'm mostly trying to add mechanics that make the game more fun at this stage, as well as graphical improvements. Next devlog I'm mostly going to share additions to dog, as this update has mostly focused on bear. In case you were wondering, I'm hopeful to have some sort of playable demo done by early next year, if not the whole game. If you have any suggestions or comments feel free to drop a post!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00103.warc.gz
|
CC-MAIN-2023-50
| 804 | 6 |
https://au.news.yahoo.com/postman-left-spooked-chilling-find-draped-mailbox-073706082.html
|
code
|
A postman was stopped in his tracks when he encountered an unexpected and frightening hurdle on his mail route on Friday.
The worker approached a home in Overland Park in the US state of Kansas, but noticed they weren’t the only one interested in the mail delivery.
A large red-tailed boa constrictor was draped over the railing and letterbox.
“The poor resident did not receive their mail today (obviously),” the Overland Park Police Department tweeted.
Local police said the postal worker called animal control about the slippery situation, but the first responding officer needed backup.
“It was too big for her to get and put in a box by herself so called in her boss, the police came,” neighbour Holly Gibson told 41 Action News.
- Thirsty emus flock to outback mining town as drought deepens
- Why the internet has fallen in love with the cat ‘too cool to be homeless’
- Massive 8.2 magnitude earthquake strikes in the Pacific
Ms Gibson said the serpent was the biggest snake she had ever seen and “looked like it had just had lunch.”
A reptile breeder named Dr. Larry Holtfrerich told the news outlet that this particular reptile was very docile, well cared for and about six or seven years old.
“This one here is about six to seven feet, that’s about full grown they can get up to 10 feet,” Dr. Holtfrerich said.
Police believe the snake is someone’s pet, and are hoping its owner comes forward.
A Ball Python was found my a mailman and called Animal Control. The poor resident did not receive their mail today (obviously). The @OPPD_PIO is trying to talk the Animal Control Officers to put the snake under the @OPPD_Chief desk, but they won't. pic.twitter.com/YqahoO2pVn
— Overland Park Police (@OverlandPark_PD) August 17, 2018
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00063.warc.gz
|
CC-MAIN-2019-26
| 1,765 | 15 |
https://www.jsunilrao.com/about
|
code
|
As of Jan 1, 2023, I will be Professor in the Division of Biostatistics at the University of Minnesota and Director of Biostatistics at the University of Minnesota Masonic Comprehensive Cancer Center. From 2010-2022, I was the Director of the Division of Biostatistics in the Department of Public Health Sciences at the University of Miami, Miller School of Medicine. From 2016 (June) - 2019 (December), I was the Interim Chair of the Department of Public Health Sciences. More on my time in that role can be found here.
From 1998-2010 I was in the Department of Epidemiology and Biostatistics at Case Western Reserve University School of Medicine where I rose to Full Professor. For the last 5 of those years, I was Director of the Division of Biostatistics. From 1994-1998 I was on faculty in the Department of Biostatistics at the Cleveland Clinic Foundation.
I graduated from the University of Toronto in 1994 with my Ph.D. in Biostatistics under the guidance of Rob Tibshirani. In 1991, I received my M.S. degree in Biostatistics from the University of Minnesota and in 1989, I received my BSc. from the University of Ottawa with a double major in Biology and Biochemistry.
My work is mostly motivated by real problems I encounter from medicine and biology. Much of my time has been in the area of cancer genomics, where I co-developed Bayesian ANOVA for microarrays (BAMarray) and bump hunting for discovery of novel subgroups of colon cancer. I was also part of the research team that conducted the original work that became Cologuard - the at-home, early detection colon cancer screening tool based on detection of aberrant DNA methylation in stool. I am also interested in modeling pharmacogenomic data, looking for strategies for identifying candidates for drug repurposing and drug synergies. I am now working on developing new statistical methods for health disparity estimation. I recently co-developed PRISM for multilevel tree-based modeling of individual-level and social determinants of health, and disparity driver analysis (DDA). Most recently, I've also gotten interested in developing new models to estimate contextual vulnerability for better predicting and risk-stratifying patients for opioid relapse. I am also researching various aspects of COVID-19 modeling, including correcting prevalence estimation for biased testing sampling.
Other areas of statistical research include a more continuous form of spike and slab regression for high-dimensional L2 shrinkage with hard thresholding, model selection for complex data including fence methods for linear and generalized linear mixed models and non-parametric small area estimation, and the E-MS algorithm for model selection with missing or incomplete data. I have also done work on mixed model prediction (MMP), which includes the development of the observed best predictor (OBP) and best predictive estimator (BPE) for misspecified mixed models, and classified mixed model prediction (CMMP) for more accurate subject-level prediction. These have all been recognized as important contributions to statistics.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818105.48/warc/CC-MAIN-20240422082202-20240422112202-00209.warc.gz
|
CC-MAIN-2024-18
| 3,082 | 5 |
https://dribbble.com/ilovegraphics
|
code
|
Time to show off. As a placement image (grey background), I used a fantastic image from Anthony Dart (http://bit.ly/9UEP1o)
New game The current draft for the next webgame we're going to get on tracks. It combines a cute robot, some 3D, and a level builder... Nothing too fancy bu...
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866107.79/warc/CC-MAIN-20180524073324-20180524093324-00282.warc.gz
|
CC-MAIN-2018-22
| 283 | 2 |
https://staging.stemaway.com/t/solomon-nwante-full-stack-self-assessment/8140
|
code
|
Week of June 7th: Getting Started
Overview of Things Learned:
- How to contribute to open-source software
- Command line basics
- Git and GitHub basics
- Visual Studio Code
- Time Management
Achievement Highlights and Tasks Completed
- Setting up of my Discord account for team communication
- Setting up my Trello account for team project management
- Completed the required readings for the week.
Goals for Upcoming Week:
- Check my locally installed Discourse and make sure that things are still working properly
- Familiarize myself with the Discourse code base again.
- Start Ember and SCSS guide
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00267.warc.gz
|
CC-MAIN-2023-06
| 601 | 15 |
http://www.tomshardware.com/forum/52389-63-start-windows-anymore-boots-slow
|
code
|
I hope you can help me with this, I have desperately tried many things but I couldn't find a solution.
So I am hoping you can help me.
A few weeks ago my computer started to get some random freezes. First I thought it was bad memory, so I ordered some new RAM. But that didn't seem to be the case.
In the meanwhile I was also starting to get BSOD and windows recovery on boot.
With this knowledge I started to experiment with my hardware, and I found out that if I removed one (of the 2) HDD (in this case, the oldest/slowest one with all the uninstalled files and downloads on it) it would boot smoothly like it used to.
But now it is getting even worse, I can't boot up my system without waiting a minimum of 15 min, while I used to be able to browse, and open my mail within 2 min. It also doesn't matter whether I remove the old HDD or not.
It is also not possible to boot into safe mode because it will hang at "\windows\system32\drivers\classpnp.sys"
So now it doesn't seem to be the old HDD anymore.
I also tried reinstalling the OS and replacing the CMOS battery, but without any result.
Do you "guys" have any idea?
Sorry for my English, I am not a native speaker. But I hope I could make myself (crystal)clear.
do you have the original install disc ( the operating system )?.... the first thing you could try is to do a windows repair. Do not hit the (R) repair the first time you see it but rather keep going until it looks like you will be totally reformatting your drive...... then there will be an option to repair leaving the files intact. This may clear up the Windows error you listed above.
sounds a lot like you have a ton of malware/viruses and unwanted programs running, eating into your resources. if the repair works and you can boot into safe mode you need to start working on killing that stuff.
an entire reformat may be in your future............ or there is another hardware problem yet to be discovered.
I believe I already got prompted once for a repair. If we are talking about the same screen, it is saying something like "windows is repairing, this may take up more than an hour". Eventually it ended with a BSOD....
But still I will try what you told me, so we are sharing the same level of knowledge.
About the malware, I have ESET Smart Security running since I built my pc (somewhere in December).
I also ran Malwarebytes to check for any malware, but it didn't detect any.
And I used CCleaner to clean and repair every now and then. I also removed all the optional programs from starting up on boot (like Skype and Steam etc.)
windows7 is great in the respect it tries to cure its own ills. unfortunately it sounds as if you have a hardware problem.
all heat sinks are free of dust/remove-clean-reapply thermal paste to processor/make sure all connections are tight ( wires )/ try putting SATA cable on different spot on board. ( HD ) only main drive installed.
the repair thing.............. no, not the same. i was thinking of using the opsys install disc not the auto recovery. ( boot to cd on startup ) ( BIOS ).
have you tried running a HD disc repair at all ? ...( on another machine ) perhaps a bad sector that can be recovered ?
I managed to boot it on a different pc.
Now making sure I have my precious files backed up.
Also ran HDtune after I finished backing up.
Till now it has found 2 bad units with the "error scan" (3055&3175 are the bad ones)
Will this program auto fix them?
i never ran that program. i run what windows has to offer. if it's finding bad sectors there's a good chance the drive is toast. it may help for a while but i would never depend on it for anything again. you should also run the disc scan on the other drive. glad we're getting somewhere.
What program would windows offer me?
I will run that after I finished this one.
I will try to make it possible to also scan the other hdd, but it doesn't have a os on it, so I might do it when I have this HDD back in my old pc.
I am not sure if I understand what you're trying to tell me.
From what I get: I should throw away the disk I am scanning, cause the errors will temporarily disappear, but they will come back. (anyway, just throw it away)??
what I'm saying is once a disc starts to go bad you should keep your eye on it. I would not put too much trust in it from here on out. could work fine for a while don't toss out.
start/computer/right click local disc C:/properties/tools/error checking/...put check mark in both boxes... reboot and let it happen.
from what you said early on this could not be done on the damaged machine. once the drive has finished and you put it back in the machine it was in you can try it there if you want. .... if the drive will stay running and you get no more BSOD's.
Sorry for the inactivity, but now I am back, and I have checked both of my HDDs.
My (faster) current boot drive shows some broken units in the HDtune error scan.
My (slower) backup and download drive has a 0 broken units.
I also tested a different HDD with W7 on it, with the same cables and SATA ports as the (broken) boot drive, and it boots fine...
So I guess this would indeed mean it's time for a new hard drive. But I have made this mistake before. By this I mean I just bought something (new RAM) without double checking whether it was the problem. Because after installing the RAM the freezing and slow boot time still appeared.
So are there any other options that may cause my system to slow down?
Or any other ways to (double)check whether it is really my HDD that's broken?
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710196013/warc/CC-MAIN-20130516131636-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 5,525 | 44 |
https://sourceforge.net/directory/?q=chess
|
code
|
Java Schach Turnierverwaltung / Java Chess Tournament Management
Manage chess round robin tournaments with JKlubTV Version 3.0.0-beta This application stores the data belonging to round robin into a SQLite database. It is intended for webmasters who want to easily manage a club tournament in their own chess club. The HTML tables that are to be published for the website will be created easily by the application. Automatically calculate the total points, Sonneborn Berger points of each player, as well as their sequence DWZ, and sorts the HTML table...
Java Open Chess is a project written in Java in the NetBeans IDE. There is the possibility to play for 2 players on local computers and via a network connection. There is also an option to play versus a quite weak computer opponent. Stronger computer opponents will be implemented soon. ATTENTION: Requires Java 1.8 or higher!
SJCE - free portable cross-platform graphical chess game, 100% Java.
SJCE - Strong Java Chess Engines, free portable cross-platform graphical chess game, 100%-pure Java. Support with including many best free/open-source java xboard/uci chess engines. It is possible to play both White and Black. It is possible to play Human to Human, Human vs Engine, Engine vs Engine. Simple and intuitive GUI - Graphical User Interface. Tested on Windows/Linux. Created in NetBeans IDE. Need jre1.8 - http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155...
FPC/Lazarus features for chess game development
A package of FPC/Lazarus features for use in developing chess games.
World largest chess database
Chess database with 15.9 million games. The database is categorized into 6 parts; to learn which one you need, please browse all files and read the descriptions. In addition, a huge collection of opening books for chess engines and Arena, as well as other chess resources. To download separate packages please browse "all files".
Blanker is an editor for creating blanks to write a lot of chess.
Blanker is an editor for creating blanks for recording chess games. The program interface is available in two language versions, Polish and English. The user gets seven kinds of blanks, which can be changed: - type, color and font size, - the thickness and color of the lines of a form, - subtitle language (Polish and English are available), - the color of the recording field for the black pieces. You can also insert your own logo and the name of the tournament. Blanker is free software...
The calculator Polish chess rankings.
The Ranker program is a kind of calculator designed to allow players to easily check the conditions that must be met in order to obtain a desired chess category. With the help of this program, the player may consider different variants leading to a chess category. Ranker is free software for both private and commercial purposes. Warning!!! Please do not install the program in "Program Files" or "Program Files (x86)". --- The Ranker program is a kind of calculator...
A collection of LaTeX-styles used for generating diagrams of chess problems.
Edit diagram.sty LaTeX diagrams
A Python application to edit LaTeX files containing chess problems - using the chess-problem-diagrams LaTeX-package.
Program for chess coaches
The "Arion Expert" program is designed to support chess coaches in their work on developing their students' tactical skills. The current version of "Arion Expert" has been equipped with three new tools enabling: 1. Support for the "Stockfish" chess engine. When creating a chess problem you can use the chess engine to search for the correct moves, 2. Backup support. You can easily secure and...
Chess game on 6x5 board
Capa chess is a chess program. Features built-in simple engine, interface with external XBoard/WinBoard and UCI engines, basic plug-in capabilities, import/export, 2D/3D boards with customization. Java JRE 1.7 or above required.
Phalanx is a chess engine which understands the xboard protocol. It's suitable for beginner and intermediate players (I'm counting on your help to make it suitable for strong players!)
Scid is a chess database application (cross-platform, for Unix/Linux and Windows) with many search and database maintenance features.
Chess Database and Toolkit program
"Shane's Chess Information Database" is a huge chess toolkit with extensive database, analysis and chess-playing features. Scid vs. PC is a usability and bug-fix fork of Scid. It has extensive interface fixes and improvements, and is fully compatible with Scid's .si4 databases. It's new features include a rewitten Gamelist, a Computer Tournament, and FICS, Tree and Book improvements.
Chess Database and PGN viewer
A free and open source chess database application for Linux, Mac OS X and Windows.
yet another UCI chess engine
A chess-engine based on Java.
Chess for the Blind for the JAWS or NVDA Screen Readers
Winboard 4.5 32-bit is a free Windows accessible Chess program that works automatically with the JAWS or the free NVDA screen reader. It is for the blind, low sighted or those who can not use a mouse. It provides vocal announcements of position changes and other selectable board conditions. Blind players also use a separate "tactile chess board". Winboard 4.5 has full keyboard access to move pieces and run menu items. Partial sighted players use high contrast mode and adjust board, piece, most...
Chess Database Application
Scidb is an open-source chess database application for Windows, Unix/Linux. It is a new development inspired by Scid.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862929.10/warc/CC-MAIN-20180619115101-20180619135101-00603.warc.gz
|
CC-MAIN-2018-26
| 5,730 | 32 |
https://sakaridixon.com/my-music/large-ensemble/
|
code
|
Below are links to works available in my catalog for purchase or perusal. For a complete list of works, contact me. To search for a specific instrument, use Control+F (Command+F on a Mac) and type in a keyword to search for it.
|Citronella Sunsets, a pedagogical work for Grade 3 string orchestra (2020)|
|Of Tattered Threads and Recollections, for chamber orchestra. (2020)|
|The Castle Upon Crystal Shores, for full orchestra (2019)|
|The Enigma of the Twilight Stallion, a pedagogical work for multi-level string orchestra (2019)|
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649348.41/warc/CC-MAIN-20230603233121-20230604023121-00606.warc.gz
|
CC-MAIN-2023-23
| 533 | 5 |
https://www.eventbrite.ca/e/east-of-toronto-user-group-windows-8-camp-tickets-3297261197
|
code
|
Be part of the Windows 8 Camp brought to you by the East Toronto .NET User Group!
- Bring your notebook computers! You’ll need them to do the hands-on activities.
- Hardware - Any device capable of running Windows 8 and Visual Studio Express.
We recommend that you install the necessary software prior to the event. If the software is not already installed, you will need a partition with ~30 GB of free space to install the bits.
- Windows 8 Consumer Preview
- Visual Studio 11 Beta
- Download the Windows 8 SDK samples
Find out How to Install Windows 8 with Native Boot to VHD from this great blog post from Eric D. Boyd
Ps. If you have any questions, please reach out to us at [email protected]. If you’re building a Windows 8 application, we would love to hear all about it!
|8:30 - 9:00||Registration, socialize and refreshments|
|9:00 - 9:45||The Windows 8 Platform for Metro style apps|
|9:45 - 10:30||Designing Apps with Metro Principles and the Windows Personality|
|10:30 - 10:45||Break|
|10:45 - 11:30||Everything Web Developers Must Know to Build Metro Style Apps|
|11:30 - 12:15||The Developer Opportunity: Introducing the Windows Store|
|12:15 - 13:00||Lunch|
|13:00 - 17:00||Hands-On Lab|
Quality Hotel & Conference Centre
Room: Harmony Hall
1011 Bloor Street East
Oshawa, ON L1H 7K6
Phone: (905) 576-5101
When & Where
East of Toronto .NET User Group
The mission of the East of Toronto .NET Users Group is to provide advanced, interesting information about the Microsoft .NET Framework. We serve the need of developers to receive the best .NET programming information, and fill their desire to be informed about developments of revolutionary importance as early as possible. We are committed to increasing visibility for our members, and to increasing the visibility of .NET programming in general. We serve the needs of programmers who are currently working with or interested in the .NET platform.
Meetings are free and generally include pizza and giveaways. Come join us at our meetings to learn about .NET technologies, network with your peers and have some fun. We'd love to meet you!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123097.48/warc/CC-MAIN-20170423031203-00382-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 2,112 | 26 |
https://highlights.sawyerh.com/highlights/NNS5Z1RGNBo8azS1AMLD
|
code
|
This connection between communication bandwidth and systems architecture was first discussed by Melvin Conway, who said, “organizations which design systems . . . are constrained to produce designs which are copies of the communication structures of these organizations” (Conway 1968). Our research lends support to what is sometimes called the “inverse Conway Maneuver,” which states that organizations should evolve their team and organizational structure to achieve the desired architecture. The goal is for your architecture to support the ability of teams to get their work done—from design through to deployment—without requiring high-bandwidth communication between teams. Architectural approaches that enable this strategy include the use of bounded contexts and APIs as a way to decouple large domains into smaller, more loosely coupled units, and the use of test doubles and virtualization as a way to test services or components in isolation.
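To make the test-doubles point concrete, here is a minimal Python sketch of testing a component in isolation; the PaymentGateway/OrderService names and API are hypothetical illustrations, not taken from the book:
from unittest import TestCase, main
from unittest.mock import Mock

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway  # injected dependency, easy to swap for a double

    def place_order(self, amount):
        return "ok" if self.gateway.charge(amount) else "declined"

class OrderServiceTest(TestCase):
    def test_declined_charge(self):
        gateway = Mock()                     # test double standing in for the real service
        gateway.charge.return_value = False  # simulate a declined payment
        self.assertEqual(OrderService(gateway).place_order(42), "declined")
        gateway.charge.assert_called_once_with(42)

if __name__ == "__main__":
    main()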
Link · 1062
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664808.68/warc/CC-MAIN-20191112074214-20191112102214-00166.warc.gz
|
CC-MAIN-2019-47
| 980 | 2 |
http://www.gmc4x4.com/topic/2283-how-to-change-add-banner-photo-or-gif-to-your-signature/
|
code
|
Posted 23 November 2013 - 05:51 PM
If you are adding just a banner photo reference this video ~ http://www.gmc4x4.co...e-forums-video/ or this one by DonYukon ~ http://www.gmc4x4.co...posting +photos
regardless of choice always use the "IMG CODE"
if you are going to create an animated GIF you need your chosen video. For the allowed size here I chose 512x330 in the AVS program
this allows for a file size that doesn't exceed the forum's limits, which allows for fast viewing when going through forum posts etc
to create a GIF you can go the long way at it in Photoshop, or download AnyVideoConverter here ~ & take the easy route to get the job done for starters
once you have your video converted from AVI (or whatever) to a GIF, please watch this video
2004 GMC Yukon, 4X4 SLT, 5.3L Vortec, ~ Yukon Build Thread
Trail Master 2.5" leveling keys, SkyJacker Hydro 7000's & MOOG 81069 rear coils with 1.5" spacers,
Hella 700FF's, Running on 33's BFG TA KO & XRC 12K Comp Winch
1966 Stevens M416 trailer too ~ M416 Build thread
"Now she is gone I got my Yukon"
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862248.4/warc/CC-MAIN-20180619095641-20180619115641-00571.warc.gz
|
CC-MAIN-2018-26
| 1,058 | 12 |
https://publikationen.bibliothek.kit.edu/1000127955
|
code
|
With the share of renewable energy sources in the energy system increasing, accurate wind power forecasts are required to ensure a balanced supply and demand. Wind power is, however, highly dependent on the chaotic weather system and other stochastic features. Therefore, probabilistic wind power forecasts are essential to capture uncertainty in the model parameters and input features. The weather and wind power forecasts are generally post-processed to eliminate some of the systematic biases in the model and calibrate it to past observations. While this is successfully done for wind power forecasts, the approaches used often ignore the inherent correlations among the weather variables. The present paper, therefore, extends the previous post-processing strategies by including Ensemble Copula Coupling (ECC) to restore the dependency structures between variables and investigates whether including the dependency structures changes the optimal post-processing strategy. We find that the optimal post-processing strategy does not change when including ECC and ECC does not improve the forecast accuracy when the dependency structures are weak. We, therefore, suggest investigating the dependency structures before choosing a post-processing strategy.
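For readers unfamiliar with ECC, the core reordering step is easy to sketch. The following minimal Python example is my illustration, not the paper's code; it assumes one calibrated sample per raw ensemble member and implements the standard ECC-Q rank-reordering scheme:
import numpy as np

def ecc_reorder(raw_ensemble, calibrated_samples):
    # Both arrays have shape (n_members, n_variables). For each variable,
    # reorder the sorted calibrated samples by the raw ensemble's ranks,
    # restoring the raw ensemble's dependency structure across variables.
    out = np.empty_like(calibrated_samples)
    for j in range(raw_ensemble.shape[1]):
        ranks = np.argsort(np.argsort(raw_ensemble[:, j]))  # rank of each member
        out[:, j] = np.sort(calibrated_samples[:, j])[ranks]
    return out

# Tiny usage example: a 3-member ensemble of two variables.
raw = np.array([[1.0, 5.0], [3.0, 4.0], [2.0, 6.0]])
cal = np.array([[0.9, 4.8], [2.9, 4.1], [2.1, 5.7]])
print(ecc_reorder(raw, cal))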
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305420.54/warc/CC-MAIN-20220128043801-20220128073801-00207.warc.gz
|
CC-MAIN-2022-05
| 1,255 | 1 |
https://lifehacker.com/mental-models-solve-problems-by-approaching-them-from-1682835620
|
code
|
As the saying goes, if the only tool you have is a hammer, every problem will look like a nail. The same logic applies when you're approaching more abstract problems. A "mental model" is a way of looking at the world, and sometimes you need to expand your perspective beyond your usual mental toolbox by learning things outside your norm.
This post originally appeared on James Clear's blog.
Richard Feynman won the Nobel Prize in Physics in 1965. He is widely regarded as one of the greatest physicists of all time. (He was a pretty solid bongo player as well.)
Feynman received his undergraduate degree from MIT and his Ph.D. from Princeton. During those years, he became known for waltzing into the math department at each school and solving problems that the brilliant math Ph.D. students couldn't solve.
Feynman describes why he was able to do this in his fantastic book, Surely You're Joking Mr. Feynman! (one of my favorite books that I read last year). How did he do it? He just had a different perspective after a high school teacher had given him a unique calculus book, years earlier:
That book also showed how to differentiate parameters under the integral sign–it's a certain operation. It turns out that's not taught very much in the universities; they don't emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. So because I was self-taught using that book, I had peculiar methods of doing integrals.
The result was, when the guys at MIT or Princeton had trouble doing a certain integral, it was because they couldn't do it with the standard methods they had learned in school. [...] So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me.
A mental model is a way of looking at the world. Put simply, mental models are the set of tools that you use to think. Each mental model offers a different framework that you can use to look at life (or at an individual problem). Feynman's strategy of differentiating under the integral sign was a unique mental model that he could pull out of his intellectual toolbox and use to solve difficult problems that eluded his peers. Feynman wasn't necessarily smarter than the math Ph.D. students, he just saw the problem from a different perspective.
Where mental models really shine, however, is when you develop multiple ways of looking at the same problem. For example, let's say that you'd like to avoid procrastination and have a productive day. If you understand the 2-Minute Rule, the Eisenhower Box and his other methods, and Warren Buffett's 25-5 Rule, then you have a range of options for determining your priorities and getting something important done.
There is no one best way to manage your schedule and get something done. When you have a variety of mental models at your disposal, you can pick the one that works best for your current situation.
In Abraham Kaplan's book, The Conduct of Inquiry, he explains a concept called The Law of the Instrument.
Kaplan says, "I call it the law of the instrument, and it may be formulated as follows: Give a small boy a hammer, and he will find that everything he encounters needs pounding."
Kaplan's law is similar to a common proverb you have likely heard before: "If all you have is a hammer, everything looks like a nail." If you only have one framework for thinking about the world, then you'll try to fit every problem you face into that framework. When your set of mental models is limited, so is your potential for finding a solution.
Interestingly, this problem can become more pronounced as your expertise in a particular area grows. If you're quite smart and talented in one area, you have a tendency to believe that your skill set is the answer to most problems you face. The more you master a single mental model, the more likely it becomes that this mental model will be your downfall because you'll start applying it indiscriminately to every problem. Smart people can easily develop a confirmation bias that leaves them stumped in difficult situations.
However, if you develop a bigger toolbox of mental models, you'll improve your ability to solve problems because you'll have more options for getting to the right answer. This is one of the primary ways that truly brilliant people separate themselves from the masses of smart individuals out there. Brilliant people like Richard Feynman have more mental models at their disposal.
This is why having a wide range of mental models is important. You can only choose the best tool for the situation if you have a full toolbox.
In my experience, there are two good ways to build new mental models.
1. Read books outside the norm. If you read the same material as everyone else, then you'll think in the same way as everyone else. You can't expect to see problems in a new way if you're reading all the same things as your classmates, co-workers, or peers. So, either read books that are seldom read by the rest of your group (like Feynman did with his Calculus book) or read books that are outside your area of interest, but can overlap with it in some way. In other words, look for answers in unexpected places.
2. Create a web of ideas that shows how seemingly unrelated ideas connect. Whenever you are reading a new book or listening to someone lecture, write down the various ways that this new information connects to information you already understand. We tend to view knowledge as separated into different silos. We think that a certain set of ideas have to do with economics and another set have to do with medicine and a third set have to do with art history. This is mostly a product of how schools teach subjects, but in the real world information is not separated like this.
For example, I was watching a documentary the other day that connected the design of the Great Pyramids in Egypt with the fighting rituals of animals. According to the historians on the show, when animals are battling one another they will often rise up on their back feet to increase their height and show their dominance. Similarly, when a new Pharaoh took power in Egypt, he wanted to assert his dominance over the culture and so he built very tall structures as a symbol of power. This explanation links seemingly unrelated areas (architecture, ancient history, and animal behavior) in a way that results in a deeper understanding of the topic.
In a similar way, mental models from outside areas can reveal a deeper level of understanding about issues in your primary field of interest.
Don't try to tighten a screw with a hammer. The problems of life and work are much easier to solve when you have the right tools.
James Clear writes about science-based ideas for living a better life and building habits that stick. If you enjoyed this article, then join his free newsletter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510208.72/warc/CC-MAIN-20230926111439-20230926141439-00450.warc.gz
|
CC-MAIN-2023-40
| 6,921 | 23 |
https://help.flipcause.com/help/terms-nov-2019
|
code
|
Full summary of changes and additions to Terms of Service
- Section 2 - Summary of Service
Addition of Terms (2.11): Fiscal Sponsors and their projects. This section primarily governs the management of Project Sub Accounts by Project Sponsors.
- Section 8 - Flipcause Payment Gateway
- Clarification of language (8.1): Clarity of language around payment satisfaction between Supporters and Account Holders when payments are made through Flipcause.
- Change of Limit (8.8): Maximum Charge amount changed from $999,999 to $50,000 plus applicable fees.
- Section 12 - Interaction with Campaigns and Fundraising
- Addition of Terms (12.18): Clarification of relationship of Flipcause and Account Holders, specifically acknowledged by Supporters.
- Section 16 - Refunds
- Clarification of language (16.3): Clarifying that Supporters can reach out directly to Flipcause to inquire about a charge and initiate a refund.
Creation of new Terms: "Supporter Terms"
- The new Supporter Terms can be found at www.flipcause.com/supporter_terms
- Corrected Typo in linked URL.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655933254.67/warc/CC-MAIN-20200711130351-20200711160351-00021.warc.gz
|
CC-MAIN-2020-29
| 1,061 | 13 |
https://uy.jobrapido.com/Ofertas-de-trabajo-para-Quality-Engineer
|
code
|
Job offers for Quality Engineer
× Expired offer: This job posting has expired; use our search engine to find more recent offers. Quality Engineer Published 4 weeks ago Report Details...
Pro QC International 29 May
Quality Engineer / Certified Auditors
Quality engineer / certified auditors All states Montevideo Experience: 1 year Engineering Temporary Full-time Salary: 18 USD/hour Published more than 60 days ago Third party...
Software Engineer Python / AWS Cloud / Big Data
Software Engineer Python / AWS Cloud / Big Data Published 4 weeks ago Report General details Company Vacancies: 1 Location: Uruguay (Cerro Largo) Sector: Administration...
We need a professional MEP engineer with a good background in water treatment project
We need a professional MEP engineer with a good background in water treatment projects. The first stage is writing a professional proposal to let us know how you want to do this...
Mix and master our song
Mix and master our song 6 days left VERIFIED We have some home-recorded tracks by a USB interface. We would like to have a high quality mixed and mastered, professional song. We...
Arduino hydroponic system automation and instructions to replicate the system
I am looking for an electronics engineer to help with the automation of a simple hydroponics system that will be used for educational purposes for high school students. The system...
Oracle DIRECT EMPLOYER 05 Aug
QA Analyst 3-ProdDev
...passionate about tools and technologies used in quality assurance area and enthusiastic about delivering bug free software.As a Test Engineer at Oracle, you will be a key member...
IMCV Costa Rica, S.A. 01 Jun
QA/QC Manager 12yr + - Fort McMurray, Alberta, Canada
...opportunity within the oil & gas industry. Please feel free to share this information. The position is: QA/QC Manager 12yr + – Engineer with experience in Quality QA QC in projects of...
Jones Lang LaSalle 29 May
...the country. • Evaluate service response time and analyze occupants’ service Requirements •College graduate with technical background Electrical or Mechanical Engineer , Architect...
CPA Ferrere 22 Jun
Sr Project Engineer
Description: We are looking for a Sr. Project Engineer to join our Engineering team at PepsiCo's Concentrate Manufacturing Plant located in Colonia, Uruguay. This...
You will be expected to develop robust code
Job Description: An exciting opportunity has become available within our team for a new full stack software engineer . In your role at KMS Digital, you’ll be responsible for...
QA Analyst with excellent English communication skills
Are you a success driven Quality Assurance Engineer with great communication skills? Would you like to work for a flexible and dynamic company? Define your own working hours and...
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219692.98/warc/CC-MAIN-20180822065454-20180822085454-00553.warc.gz
|
CC-MAIN-2018-34
| 2,833 | 28 |
https://www.suzs-space.com/potential-attacks-on-my-blog/
|
code
|
This is an article I wrote some weeks ago. I’ve no idea why it didn’t publish. It was set to publish on the 20th May 2019 and didn’t publish. I feel it’s important enough to publish now. To answer my thought in the last paragraph, yes it has reduced…a lot.
There are hazards to everything. Yesterday I downloaded my email only to discover 3,600 emails alerting me to a lot of potential attacks on my blog. I often get potential attacks and apart from checking that they were potential and nothing to worry about I’ve given it no more thought. I normally only get a handful of attacks, if I’ve been away I might find two or three hundred emails waiting for me. This time was rather more than that and I took exception.
I’ve recently upgraded all the software behind my website so was confident that I would be okay. With so many emails I wanted to take a little more time to ensure a lot more security. I googled ‘how to block IP addresses’ and proceeded to block all the IP addresses listed in the emails.
Internet Protocol Address. It’s the numeric label assigned to a particular network. In essence it establishes a path to that particular network through the entire internet. Every network has an IP address, even your phone, although that will change depending on whether you’re hooked up to a wifi or not. I know, I checked.
A lot of work.
Having been through all 3,600 emails and blocked all the IP addresses listed in each email I proceeded to check my emails again. Another 3,000 emails came through. Bear in mind there were several hours between downloading the emails and blocking them so the IP addresses were likely to be the same or similar. While checking the first set of emails I put a video on YouTube, used it as good thinking time for my assignment – occasionally switching programmes and adding a few more pieces of dialogue – and also figured out a much more time efficient method of finding the IP addresses to block them. I used this second method to get through the second set of emails in only a few minutes, blocked all the IP addresses and decided I should tell you all about it.
I couldn’t log in.
At some stage my own IP address came up in my travels last night and I managed to insert that into my list of blocked IP addresses. Only took me a few seconds to realise this so it was an easy fix to find my own IP address in the list and delete it. I felt a little foolish but I’d done it before when I was fiddling around blocking IP addresses so I only wasted seconds on it.
Anyway, all is well. I’ll find out when I check my email later today how that’s worked and if there are any more problems that I need to solve. I really hope I’ve managed to block enough IP addresses to bring my potential attack emails back down to manageable levels. The moral of the story is to check your website and make sure it’s secure.
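As an aside, the "more time efficient method" of pulling IP addresses out of thousands of alert emails is easy to script. A minimal Python sketch, assuming the alerts are saved as plain-text files in a folder (the folder name and email format are my assumptions):
import re
from pathlib import Path

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")  # rough IPv4 matcher

blocklist = set()
for email_file in Path("alert_emails").glob("*.txt"):  # assumed folder of saved alerts
    blocklist.update(IP_PATTERN.findall(email_file.read_text(errors="ignore")))

print("\n".join(sorted(blocklist)))  # one address per line, ready for a blocking tool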
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816070.70/warc/CC-MAIN-20240412194614-20240412224614-00796.warc.gz
|
CC-MAIN-2024-18
| 2,886 | 9 |
https://www.skillsugar.com/how-to-execute-shell-commands-in-python
|
code
|
How to Execute Shell Commands in Python
Sometimes one needs to run a shell command from Python. There are many reasons for this; usually it is to access the functionality of a command-line based third-party program, though it can be for other useful tasks such as running maintenance on the server.
In this tutorial, we will learn how to execute shell commands in Python.
Import the os package
The easiest way to run a shell command is with the os native Python package.
os provides a collection of methods for working with the file system, including running shell commands. Put the following at the top of your program to import the os package:
import os
Running a Command using os.system()
Once the os package has been imported, we can use the system() method to execute a command. We will store the response from the command in a variable and print it. The example below executes an ls command to list files and directories in the current working directory.
import os
output = os.system("ls")
print(output)
If you are running your program inside a Jupyter notebook, the ls command will return 0 for success or -1 for an error; the actual output of the command will be shown in the Jupyter terminal window.
The system() method works great if we just need to run a command and know if it was successful – to get output data we will need to open a pipe.
Open a Pipe to get the Output from a Terminal Command
To get the output from a terminal command in Python using os, we can open a pipe using the popen() method. This will create a stream that can be read with the read() method. Let's run the same command (ls) as we did in the first example and get the output inside the Python program.
stream = os.popen("ls")
output = stream.read()
print(output)
enumerate.ipynb example.json fruit.json new_dir sentence.txt shell commands.ipynb
Get the Output of a Terminal Command as an Array
The read() method will collect the whole output and return it as a string. To return each line as an element of an array, use the readlines() method instead.
stream = os.popen("ls")
output = stream.readlines()
print(output)
['enumerate.ipynb\n', 'example.json\n', 'fruit.json\n', 'new_dir\n', 'sentence.txt\n', 'shell commands.ipynb\n']
To remove the \n (newlines) and extra empty spaces, iterate through the lines using a for loop and use the .strip() method before appending the line to a new array.
stream = os.popen("ls")
temp = stream.readlines()
output = []
for l in temp:
    output.append(l.strip())
print(output)
['enumerate.ipynb', 'example.json', 'fruit.json', 'new_dir', 'sentence.txt', 'shell commands.ipynb']
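Equivalently (my addition, not part of the original tutorial), the loop above collapses into a single list comprehension:
stream = os.popen("ls")
output = [line.strip() for line in stream.readlines()]
print(output)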
Using the subprocess Package
The most flexible way of running commands in Python is by using the subprocess package, which can be used to chain inputs/outputs among various other functionalities. To use it, import it at the top of your program:
import subprocess
Run a Command with subprocess
The easiest way to use subprocess is with the run() method. It will return an object containing the command that was run and the return code.
success = subprocess.run(["ls", "-l"])
print(success)
print(success.returncode)
CompletedProcess(args=['ls', '-l'], returncode=0)
0
note - to pass command arguments, the command and each of its arguments must be given to subprocess as an array of strings.
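A detail worth adding (not covered in the original tutorial): run() can also raise on failure and capture output in one call. The sketch below assumes Python 3.7+ for capture_output.
import subprocess

try:
    # check=True raises CalledProcessError on a non-zero exit code;
    # capture_output=True fills .stdout and .stderr.
    result = subprocess.run(["ls", "no_such_dir"], check=True, capture_output=True, text=True)
    print(result.stdout)
except subprocess.CalledProcessError as err:
    print("command failed:", err.returncode, err.stderr)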
Get the Output of a Command from subprocess
To get the output of a command with subprocess, pass stdout=subprocess.PIPE as an argument of subprocess.run(). This essentially creates a property on the output object containing the data returned from the terminal command. We can read this property to get the output of the command.
success = subprocess.run(["ls"], stdout=subprocess.PIPE)
print(success.stdout)
Plain Text Output from subprocess
By default, stdout is returned in a bytes format, which in most cases isn't very useful. To get a formatted plain-text output, pass text=True as an argument of subprocess.run():
success = subprocess.run(["ls"], stdout=subprocess.PIPE, text=True)
print(success.stdout)
enumerate.ipynb example.json fruit.json new_dir shell commands.ipynb
Provide an Input to the Command with subprocess
It is possible to send data for the terminal command to use by passing input="" as an argument of subprocess.run(). Note that a string input requires text mode (text=True); otherwise the input must be bytes.
success = subprocess.run(["ls"], input="some data", text=True)
Don't Display Any Output in the Console
To not display any output in the console, pass stdout=subprocess.DEVNULL as an argument of subprocess.run().
response = subprocess.run(["ls", "-l"], stdout=subprocess.DEVNULL)
Using Popen in subprocess
Let's try the Popen() method to get the output from a command; it provides more shell interaction options. To open a pipe and get output in Python, we pass subprocess.PIPE as the stdout and stderr arguments.
process = subprocess.Popen("ls", stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output = process.communicate()
print(output)
(b'enumerate.ipynb\nexample.json\nfruit.json\nnew_dir\nsentence.txt\nshell commands.ipynb\n', b'')
To get the output from the pipe opened in Popen(), the communicate() method must be called.
You will see that the output is in the bytes format. To fix that, pass universal_newlines=True as an argument in Popen():
process = subprocess.Popen("ls", stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
Read more about the Popen constructor for more details about the extra functionality it has.
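One more option not shown above (my addition): for long-running commands you can read output as it is produced instead of waiting for communicate().
import subprocess

# Stream a command's output line by line as it arrives.
with subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, text=True) as process:
    for line in process.stdout:
        print(line.rstrip())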
You now know how to run shell commands in Python using two different packages and get the output from the command if needed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102612.80/warc/CC-MAIN-20231210155147-20231210185147-00465.warc.gz
|
CC-MAIN-2023-50
| 5,361 | 74 |
https://www.wiley.com/en-us/MDX+Solutions%3A+With+Microsoft+SQL+Server+Analysis+Services+2005+and+Hyperion+Essbase%2C+2nd+Edition-p-9780471748083
|
code
|
MDX Solutions: With Microsoft SQL Server Analysis Services 2005 and Hyperion Essbase, 2nd Edition
- Serving as both a tutorial and a reference guide to the MDX (Multidimensional Expressions) query language, this book shows data warehouse developers what they need to know to build effective multidimensional data warehouses
- After a brief overview of the MDX language and a look at how it is used to access data in sophisticated, multidimensional databases and data warehousing, the authors move directly to providing practical examples of MDX in use
- New material covers changes in the MDX language itself as well as major changes in its implementation with the latest software releases of Microsoft SQL Server Analysis Services 2005 and Hyperion Essbase
- Also covers more advanced techniques, like aggregation, query templates, and MDX optimization, and shows users what they need to know to access and analyze data to make better business decisions
Note: CD-ROM/DVD and other supplementary materials are not included as part of eBook file.
Chapter 1: A First Introduction to MDX.
Chapter 2: Introduction to MDX Calculated Members and Named Sets.
Chapter 3: Common Calculations and Selections in MDX.
Chapter 4: MDX Query Context and Execution.
Chapter 5: Named Sets and Set Aliases.
Chapter 6: Sorting and Ranking in MDX.
Chapter 7: Advanced MDX Application Topics.
Chapter 8: Using the Attribute Data Model of Microsoft Analysis Services.
Chapter 9: Using Attribute Dimensions and Member Properties in Hyperion Essbase.
Chapter 10: Extending MDX through External Functions.
Chapter 11: Changing the Cube and Dimension Environment through MDX.
Chapter 12: The Many Ways to Calculate in Microsoft Analysis Services.
Chapter 13: MDX Scripting in Analysis Services 2005.
Chapter 14: Enriching the Client Interaction.
Chapter 15: Client Programming Basics.
Chapter 16: Optimizing MDX.
Chapter 17: Working with Local Cubes.
Appendix A: MDX Function and Operator Reference.
Appendix B: Connection Parameters That Affect MDX.
Appendix C: Intrinsic Cell and Member Properties.
Appendix D: Format String Codes.
|SQL: Databases for Chapter 13|
Download the databases for Chapter 13 (MDX Scripting).
|SQL: Waremart 2005 database|
Download the Waremart 2005 database used as an example in much of the book.
|System 9: WM2005 Database||Download|
|System 9: Interactive MDX Query Tool|
Download an interactive MDX query tool for System 9 databases. THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTIES OF ANY KIND. HYPERION DOES NOT WARRANT THAT THE OPERATION OF THE SOFTWARE WILL BE UNINTERRUPTED OR ERROR-FREE. HYPERION DISCLAIMS ALL OTHER WARRANTIES OR CONDITIONS, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO, WARRANTIES OR CONDITIONS OF MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. HYPERION SHALL NOT BE LIABLE FOR ANY DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE HEREOF.
|Web Updates - Chapter 10|
These updates are in a .zip format archive. You might need a copy of WinZip to open them.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863830.1/warc/CC-MAIN-20180620163310-20180620183310-00622.warc.gz
|
CC-MAIN-2018-26
| 3,025 | 36 |
http://tech.unruly.co/y-4153.html
|
code
|
The technique yielded wines with small, fresh bubbles, known as a mousse, and no sediment. This second fermentation is induced by adding several grams of yeast and rock sugar to the bottle - although each brand has its own secret recipe. In 1844 Adolphe Jaquesson invented the muselet to prevent the corks from blowing out.
It also contained higher concentrations of minerals such as iron, copper, and table salt than modern-day Champagne does.
Retrieved 27 September 2020. Chardonnay gives the wine its acidity and biscuit flavour. Retrieved 10 March 2013.Next
The different types of Champagne can make your selection process tricky, so let's review the four types.
The disturbance caused by one bottle exploding could cause a chain reaction, and it was routine for cellars to lose 20–90% of their bottles this way. Champagne is typically drunk during celebrations. After aging, the bottle is manipulated, either manually or mechanically, in a process called remuage (or "riddling" in English), so that the sediment settles in the neck of the bottle.
Uncorked: The Science of Champagne.
This bottle was officially recognised as the oldest bottle of Champagne in the world. This etching is typically done with acid, a laser, or a glass etching tool from a craft shop to provide nucleation sites for continuous bubble formation (note that not all glasses are etched in this way).
Still wines produced from varying grapes and vintages are blended together in a process called assemblage.
The most notable example is perhaps the 20 fluid oz.
Other well-known recipes using Champagne are huîtres au champagne "oysters with Champagne" and Champagne zabaglione.
Though Reims is the city of the Kings of France, Troyes is one of the most historic cities of Champagne.
In addition, négociants successfully marketed champagne to broader consumers by introducing different qualities of sparkling wine, associating champagne brands with royalty and nobility, and selling off-brands under the name of the importer from France at a lower cost.
Often the bottle is chilled in a bucket of ice and water, half an hour before opening, which also ensures the Champagne is less gassy and can be opened without spillage.Next
Some wine from previous vintages and additional sugar le dosage is added to maintain the level within the bottle and adjust the sweetness of the finished wine.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00709.warc.gz
|
CC-MAIN-2022-21
| 2,354 | 14 |
https://gradle.com/training/devprodeng-showdown-s2e2-showdown-on-wall-street/?time=1661299200
|
code
|
Showdown on Wall Street
Large banks have some of the largest software projects and code bases in the world. What’s it like working on developer productivity engineering and developer experience for 5K+ developers in a heavily regulated financial entity with legacy code? In this episode of DevProdEng Showdown we feature productivity engineering, developer tooling, and developer experience experts who have been working on the challenges of shipping software at scale at large heavily-regulated financial institutions. The importance of digital transformation in the financial services industry has accelerated the need for and focus on Developer Productivity Engineering. Join us and learn from our expert panelists who will answer questions in an entertaining, rapid-fire, game-show-like format on the best ways to achieve developer productivity and experience excellence. You vote on the best answers and determine the winner.
This episode’s all-star panelists debated rapid-fire topics including:
- What to do when you’re blocked by “process”
- Best hacks to speedup the onboarding time of new developers
- The best way to find and fix Log4J vulnerabilities firm wide
- How to address dependency management in the software supply chain
- Best practices for dealing with flaky tests at a large firm
- How best to manage and monitor distributed applications
- The new role of enterprise architecture
- How agile is too agile?
Not familiar with DevProdEng Showdown!? It’s a series of live-streamed 30-minute episodes where a panel of distinguished experts debate hot topics related to Developer Productivity Engineering in a rapid-fire game show-like format. Check out our pilot episode below to get a better idea of what you’re in for and don’t forget to invite your colleagues that care about DevX:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.31/warc/CC-MAIN-20231206063543-20231206093543-00076.warc.gz
|
CC-MAIN-2023-50
| 1,782 | 12 |
http://www.linuxforums.org/forum/applications/sethdlc-configuration-error-hdlc-interface-print-168351.html
|
code
|
Sethdlc configuration error for HDLC interface
Can anyone help me with HDLC on Linux?
I am configuring a patched HDLC stack on Linux kernel 2.4.20.
The patch was needed because kernel 2.4.20 ships generic HDLC version 1.02, which was upgraded to 1.14 to support HDLC-to-Ethernet bridging.
From the patch, it looks like the ioctl interface changed to support both physical and logical (protocol) configuration for HDLC.
The driver does not currently support physical parameter setting; the required parameters are preset.
On driver module insertion the interface shows frame relay as follows:
hdlc0 Link encap:Frame Relay Access Device
POINTOPOINT NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Interrupt:25 Base address:0x2000
Now, when trying to configure the interface for raw HDLC as follows:
# sethdlc hdlc0 hdlc nrz no-parity
following error is thrown
hdlc0: Unable to set HDLC protocol information: Operation not supported
I also tried configuring the HDLC-ETH type using sethdlc.
Appreciate any support on this.
Thanks in Advance.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542938.92/warc/CC-MAIN-20161202170902-00262-ip-10-31-129-80.ec2.internal.warc.gz
|
CC-MAIN-2016-50
| 1,112 | 20 |
https://raku.land/zef:atroxaper/App::RaCoCo/changes?v=1.5.0
|
code
|
1.5.0 2021-12-27 'Custom Reporters and Configurations'
- Add possibility to implement a custom reporter
- Add configuration file racoco.ini
- Add -l option as a shortcut for --exec='prove6 -l'
- Remove all logic related to the --fix-compunit option. The correct folder is now determined using the --raku-bin-dir option
- Now the --append option works from the previous report.txt file instead of the previous coverage.log file. The coverage.log file is deleted after each run because it can be very large
- Improve calculation of coverage level. Most likely now the level will be lower than before
1.4.5 2021-11-03 'Bugfix release'
- Fix fail when run on library with not existed .precomp directory (#10)
1.4.4 2021-10-23 'Bugfix release'
- Fix #6 Warn in CoveredLinesCollector.rakumod
1.4.3 2021-10-23 'Bugfix release'
- Fix project name getter for html page
- Fix JS of report.html
- Make .precomp directory check as ambiguous be before run tests
- Add tags to META6.json
- Add Roadmap.md file
1.4.2 2021-10-22 'Bugfix release'
- fix rare issue in covered lines collector
- add code coverage badges
1.4.1 2021-10-21 'Second public release'
- add --fix-compunit flag
- fix tests for all three platforms
- improve CI
- Update README
1.3.1 2021-03-14 'The first public release'
- Production ready code coverage tool
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271763.15/warc/CC-MAIN-20220626161834-20220626191834-00743.warc.gz
|
CC-MAIN-2022-27
| 1,299 | 27 |
http://softlanding.ca/blog/Re-Link-Workflow-History-To-List-Item
|
code
|
By default SharePoint is configured to remove the workflow history from an
item 60 days after the completion date of a given workflow. This might not
be an issue under normal circumstances; however, we started to receive
reports from some business groups indicating that they needed the workflow
history for legal auditing purposes.
I was pleased to discover that SharePoint doesn't actually delete the
workflow history for a given item; it just unlinks the item from the
workflow history. At this point I began searching for options that might
enable me to reconnect each item to its respective workflow history. The
solution came in the form of a calculated column.
The strategy would be to create a new column that would contain a link to
the relevant workflow history for each item.
The workflow history list provided me with everything that I needed to
uniquely identify each item and its workflow history. Here is the query
that I ended up using to create a link back to each item's workflow history:
0ca99fcb-3e3b-4ce4-ae21-44e627959fee}')>Internal Approval Workflow
This query string filters for the item ID, List ID and Workflow
Association ID. This results in a clean display of the workflow history
for each item.
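To make the idea concrete, here is a rough sketch (in Python, with made-up field names, GUIDs, and URL, since the real query string above is truncated) of how such a filtered link can be assembled:

import urllib.parse

# All names below are placeholders, not the actual list's values.
base = "https://intranet.example.com/Lists/WorkflowHistory/AuditView.aspx"
params = {
    "FilterField1": "Item", "FilterValue1": "42",                          # item ID
    "FilterField2": "List", "FilterValue2": "{list-guid}",                 # List ID
    "FilterField3": "WorkflowAssociation", "FilterValue3": "{assoc-guid}", # association ID
}
link = base + "?" + urllib.parse.urlencode(params)
print(link)

The calculated column then only has to emit this URL wrapped in an anchor tag for each item.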
I created a new view in the workflow history list called 'Audit View',
which contains the relevant fields that would be useful for auditing
purposes. The query references the List ID as well as the workflow
association ID. In this case we had several workflows associated with the
list, so it was important to distinguish them.
The end result is a column with hyperlinks to the workflow history for each item.
*When you create the 'Audit History' calculated column using a
'Date/Time' data type, this will justify the link text to the left side of the
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828356.82/warc/CC-MAIN-20171024090757-20171024110757-00080.warc.gz
|
CC-MAIN-2017-43
| 1,755 | 27 |
https://mechanics.stackexchange.com/questions/14934/can-i-place-a-mechanical-water-temp-sensor-within-the-radiator-hose-pipes
|
code
|
I have a Mk1 Golf. The previous owner put in the 3 GTi console gauges but didn't connect them.
I want to connect up the water temp gauge (which is the mechanical type with a heat sensing bulb) but still keep the original electric gauge working as well.
I was wondering if I could simply make a hole in the radiator hose and insert the bulb and seal it in.
Would having the bulb within the hose impede the coolant flow at all? Does anyone know another way of doing this?
(PS: the hose diameter is 30mm; the bulb is about 25mm long and 8mm thick.)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00883.warc.gz
|
CC-MAIN-2023-50
| 551 | 5 |
https://www.geekzone.co.nz/forums.asp?forumid=46&topicid=197897&page_no=2
|
code
|
I found that my rpi 2 was not able to play an actual bluray rip properly over ethernet. It was actually slightly better over an AC wifi stick, but still was not able to keep up and seeking was dog slow.
For less than the total cost of a Raspberry Pi, SD card, wireless mouse/keyboard, case, and power supply, you can get an Android-based set-top box with Kodi already installed, complete with a remote and power brick. Sure, it's not a nice fast 4K-supporting one, but it plays stuff a hell of a lot better than the Pi does.
Never tried Blu-ray, and I think it would be a very sad experience on my v1 Pi, but my setup works fine as it is, so it keeps me and the family happy.
Like most people, I purchased an RPi without a clue what I was going to use it for and it consequently had several lives before settling down as a Kodi media center. I think the main advantage I see using the Pi with Kodi is that I can use my TV remote.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885059.50/warc/CC-MAIN-20201024223210-20201025013210-00136.warc.gz
|
CC-MAIN-2020-45
| 904 | 4 |
https://alexanderobenauer.com/labnotes/029/
|
code
|
These Lab Notes document my research in progress. My research area is in the future of personal computing.
Notes on time
Gestural view construction
Free and easy organizations and associations
The Messy Desktop
Live items & Contextual notifications
Swappable reference views
Experimenting with the item as the core primitive
Designing systems for computer literacy and evolvability
Personal Computing Network & Devices
Mutations & Item change logs
Services & Item Drives
Today & Daily summary
Cross-reference Navigation in Obsidian
Cross-references & References cloud
The Graph OS
Why is our thinking on computers so restrained?
References box & Topics
General purpose personal computing software
User-created application and system views
User-created item views
Browsing contexts & recent paths
Universal reference containers
Universal data portability
Composing application interfaces
The Lab Notes
Today, we’ll start with a summary, on three main pieces. If you’ve been following along, stay with me; we’re headed somewhere. If you’re new, click through to the other, linked lab notes to see demos that illustrate any new concepts.
First, in an OS of the future, everything is an item: emails, notes, webpages, todo lists, podcast episodes, receipts… everything in our personal computing domain is an item.
In our itemized OS, we can use items together, regardless of their type or source (LN 002). So we might have a workspace into which we gather a few things related to one train of thought: for example, an email, a webpage, and a note (LN 004, LN 005).
This lets us bring items together in fairly infinite ways. We could drop the email thread about an event into the event item itself. Or if we need to log an expense tomorrow, we could put the receipt item directly into a todo item we added to tomorrow’s list.
In many cases, items are made up of other items: a todo list is made up of todos, and a podcast is made of up episodes.
We can also use these bi-directional references to create a graph that reflects our thinking in higher fidelity (LN 014). A project might be referenced from a contact item representing a client, and it might reference various assets, tasks, conversations, and so forth.
Third, items are rendered by item views. An item view renders certain types of items, and you can switch which item view you’re using at any time, or create a new one entirely (LN 006, LN 009). You might have a new, preferred way of viewing your todo list or email inbox, or you might toggle between different views when visualizing an emerging thought.
To summarize, the itemized OS’ useful functionality comes from some foundational features: items that you can co-mingle, remix, and transclude; references that are bi-directional and can be many-to-many; and item views that are modifiable, swappable, and user-creatable. Plus, there’s a few other things we haven’t recapped here, like automations (LN 021).
This supreme flexibility requires a thoughtful set of primitives to give power to the user over the system, and a thoughtful set of features built with those primitives for users to start off with.
What are those primitives? And what should today’s ready-made features be? It will take quite a bit of experimentation to find out. But here’s one take that these lab notes, and the software demoed within them, has implicitly experimented with.
In an itemized OS, even the fundamental concepts like item views and references are, themselves, items. You can do everything with these items that you can do with others. This streamlines the mental model of how the system works, and it makes the system even more evolvable. How? It means that better ideas can replace even the “lower-level” concepts of references and item views in such an itemized system of the future. Plus, since these items work like all others, you can open them in an item view: for example, an item view for an item view might let you modify that item view, as seen in LN 009. If you want to really track this out, this lets you open the item view for item views, and modify how you can modify item views!
This makes the “item” the core primitive.
Building up: The system stores items. We can define new kinds of items, such as views that render items, and references that relate items to one another. We can keep building up this way, into larger structures that we might describe today as “apps”: the interfaces we expect over the items in our domain.
Breaking down: Using our itemized system, we can see how our “app”-like software is built: our inbox is an item rendered by an item view with references to message items rendered by another item view. We can see how these views and references are themselves items, which we can open to adjust their look or behavior, to make our inbox work a little differently, or to build a new kind of inbox in a similar way – such as an inbox for the things we want to read later.
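As a toy sketch of this idea (the class and field names here are mine, purely for illustration, not from any demoed system), even views and references can be modeled as items:

class Item:
    """Everything in the system is an item, including views and references."""
    def __init__(self, kind, data):
        self.kind = kind
        self.data = data

# ordinary items
email = Item("email", {"subject": "Q3 invoice"})
todo = Item("todo", {"text": "log the expense", "done": False})

# a reference is itself an item, so it can be opened and inspected like any other
ref = Item("reference", {"from": todo, "to": email, "bidirectional": True})

# an item view is also an item; opening it in a view-of-views would let you edit it
checklist_view = Item("item-view", {"renders": "todo", "layout": "checklist"})

Because the reference and the view are plain items, the same machinery that opens an email can open them too, which is what makes the "breaking down" path possible.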
In this way, we might put using a personal computer and developing for a personal computer onto one, single trajectory. Making complex software is a harder version of making simple software. Making simple software is a harder version of using complex software. Using complex software is a harder version of using simple software. This single trajectory allows users to become computer literate by either breaking down: peeking into the inner-workings of the system they’re using; or by building up: learning the fundamentals with which we “do” personal computing (e.g., items) much as we learn the fundamentals of arithmetic (e.g., addition) to build up to the more advanced stuff (e.g., items → references; addition → multiplication).
Designing computing systems meant to improve lives and improve society means designing systems that promote computer literacy and evolvability. The previous lab note, LN 028, sums up with a discussion on this.
In the next few lab notes, I’ll share with you my project that I’m currently working on (here’s a little preview), and lots of demos that take this experiment — of the item as the core primitive — as far as we can.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.72/warc/CC-MAIN-20240414130604-20240414160604-00822.warc.gz
|
CC-MAIN-2024-18
| 6,135 | 43 |
https://groups.yahoo.com/neo/groups/linux-dell-laptops/conversations/topics/25728?o=1&d=-1
|
code
|
Re: [linux-dell-laptops] Digest Number 1957 Live CD on Dell Laptops
- On Thu, 2004-12-30 at 17:13 +0000, [email protected]
> 6. Inspiron 7000 easiest Linux to install
> From: "Nick Braybrooke" <nickbraybrooke@...>
> Message: 6
> Date: Thu, 30 Dec 2004 13:46:39 -0000
> Subject: Inspiron 7000 easiest Linux to install
> Hi, I am new and wanted to ask some questions. 1. What is the easiest
> Linux to install in an I7000 so I can learn Linux? 2. I am installing
> a wireless LAN at home; does Linux support this capability?
Based on my recent limited experiences with Live CD's on my Dell
Inspiron 2650 laptop, I would say, though a long time Mandrake user
since v. 6.5, that for a true beginner, anything but a Live CD for first
use and learning is now totally obsolete, and especially so if the
beginner has fast Internet and can download the iso's, then burn the
image into the CD.
Live CD's use Ram, assuming you have enough, not HD, to load and run
software, so one need not install the Distro to use it, Windows need not
be affected, though some Live CD's can easily be installed if you like
it. Allegedly if you find a good Live distro for your machine, it is
possible to connect to the Web, even print and run wireless without
installing. Some do offer the capacity to store your configs on floppy
or even the HD, so you need not re-type the entire cheat code.
After downloading the iso image, also get the md5sum from the same
source, though you might have to look for it a bit. Md5sum is an
algorithm, a sort of fancy checksum, used to verify that your file downloaded correctly.
If you are using a Linux machine, md5sum path filename.iso <enter> for
your downloaded file gives you a lengthy number. IF it is correct, your
download is good.
On Windows, google for md5summer.exe and download it; it produces a
mini-window for handling MD5 sums.
I assume most know how to burn a CD, do be sure to burn as an image, or
you end up with a big file on a CD that will not boot. This is a common
error that smart people make.
After you burn the CD, I do not know how to md5sum an entire CD in Win,
but in Linux, it is md5sum /dev/cdrom and the result should be the same
as the file itself. If not, don't use it; burn again.
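If you prefer scripting it, the same check can be done from Python; a small sketch (the .iso filename is just a placeholder):

import hashlib

def md5_of(path, chunk=1 << 20):
    # Stream the file in chunks so even a 700 MB ISO fits in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

print(md5_of("kanotix.iso"))     # compare against the published .md5 value
# print(md5_of("/dev/cdrom"))    # after burning, this should match the .iso sum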
To run a Live CD, assuming you can boot from your CDROM, install your
Live CD in the CD drive, and boot. If you have several live distros,
one or more should boot for your machine and find most hardware, except
dial-up winmodems. Well, some will even find the lucent chips, but many
others will not be set up. And, some claim to find external modems, I
don't know if this is true or not.
The more Live CD's you download, the better your chances of finding one
which will handle the particular combo of hardware in your machine.
When you boot, you get a boot: prompt, and depending upon your machine,
you need to type different "cheat codes". Example, on some distros, it
will hang probing SCSI devices, so reboot with boot: linux noscsi
Other cheat codes allow you to set monitor freqs, and install certain
modules, if the distro cannot figure it out.
First on my list of recommends is Kanotix. On his forum, Kano himself is
likely to answer your questions if he has time. What a great guy. Many
of the multiple-distro geeks have said Kanotix is the easiest to install.
When I booted it on my Dell Inspiron 2650, it came up nicely, with a
default selection of boot cheats supplied for select and enter, and
voila, I had solid connection via the NIC to my daughter's Cable
Internet with the included browser. It identified and claimed to set up
my HSF modem with Smartlink-Softmodem package, but it did not really.
I was impressed with the default nv video, Mandrake 9.1 took me two days
to get video working on this Dell, so automatic selection in Kanotix was
delightful. I could not get Kanotix to run video on an old Compaq with
Knoppix 3.7 (Knoppix was apparently the pioneer of Live CD's though we
had live floppies some years ago. I don't think I have tried 3.7 but 3.4
did work on the 2650.)
LiveSlax 4.2.0 Worked on my machine.
Linuxpcos Xorg version -- did not work on my video.
Linuxpcos Nvidia version -- did not work on my GEforce video.
I did not take time to see if I could make them work, which is highly
probable; it is just too easy to toss a 20-cent CD aside and use one that does work.
LinEx: This one is from Spain, is in Spanish, and the standard Linux
applications have unusual and original names. The province of Extremadura
several years ago did a study on how to keep bright young folks at home.
The decision was made to develop a special distro to be used by all in
the province. Money was budgeted, and local programmers were hired to
develop and maintain the distro. They are also paid to develop special
apps not available, such as medical and agricultural progams. I think
recently the provincial government has announced that there will be no
more funds for MS software, unless a special case is made for some
special need. (e.g.-- AutoCad or Inventor 8 could be examples of special
needs that cannot be met in Linux.)
I kept trying and since they have no mirrors, I kept getting a 20 hour
download prediction. Finally, at a family memeber's house, I got a 10
hour download, and got a good burn on LinEx.
I think I had to cheat noscsi and did not have access to the Internet,
but a lot of the stuff worked. I did note the desktop icons did not want
to work, but the Gnome menu did activate icons.
I also downloaded SimplyMepis; Ubuntu Warty, and Knoppix-STD (security
and hacker stuff as well as a distro). Alas, the Flexwriter CD writer
with Nero 5 on her machine made good copy of LinEx, but then the next 8
attempts produced bad CD's, thus it is important to verify your
downloaded files and CDs, there are a lot of begs for help with CD's
that don't work. So tomorrow I will visit another daughter, and download
those three again, directly to my Linux 2650 which with K3B has never
written a bad CD yet.
There is a Mandake Live CD as well, though I have not downloaded it yet.
The big advantage of LIVE is that you can find one which works on your
machine, without all the work of installation just to see if it will
work on your machine. It is very frustrating to spend some considerable
time installing a new distro and then find it is going to take many
hours or days to make it work because of some quirk in your machine and
differences in distros.
I don't know if the Development stuff is included to let you add
software from tarballs, I will check later when I have time, but most
distros are based on a common distro, and you can download their
packages if you decide to install the LIVE CD. By memory, I think
Kanotix and Knoppix are Debian based so you can download almost any free
debian app; fear not, if this is wrong, someone will let us know.
There are instructions available from Knoppix to let you mount the .iso
file as a loop back file, and access it to add or delete software to
customize your own CD. I made a Live floppy several years ago using
Martin's Mandrake, which was a text mode only distro on two floppies. I
wanted dc, the hundreds-of-digit Reverse Polish calculator program for
linux. LiveSlax and LinEx both include it. I recently did a calculation
for a forum which involved an answer with 965 digits to the left of the
decimal point, and 5000 to the right. Anyway, on the floppies, I got all
the information from LINUX FROM SCRATCH, and I suspect that still works
on CD .iso's as well. I do not yet know how to compress the files to
fit 2GB into one CD. One thing at a time.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581033.57/warc/CC-MAIN-20171216010725-20171216032725-00710.warc.gz
|
CC-MAIN-2017-51
| 7,556 | 118 |
https://github.com/encode/django-rest-framework/pull/6361
|
code
|
Permissions: Add ~ operator to permissions composition #6361
This PR adds support for the ~ (NOT) operator in permissions composition.
We could now do something like
I updated only a very tiny bit of the documentation.
If we keep going further with this approach, I don't know if the mentions of "Composed Permissions" and "Rest Condition" will stay relevant. More documentation on how to compose permissions could be a good thing; it is a bit sparse right now.
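For illustration, composition with the new operator might look like the sketch below (IsBlocklisted is a made-up permission class, used only to show the composition):

from rest_framework.permissions import BasePermission, IsAuthenticated
from rest_framework.views import APIView

class IsBlocklisted(BasePermission):
    # Hypothetical permission, defined here only to demonstrate ~.
    def has_permission(self, request, view):
        return getattr(request.user, "is_blocklisted", False)

class ExampleView(APIView):
    # Allow users who are authenticated AND NOT blocklisted.
    permission_classes = [IsAuthenticated & ~IsBlocklisted]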
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145859.65/warc/CC-MAIN-20200223215635-20200224005635-00154.warc.gz
|
CC-MAIN-2020-10
| 571 | 7 |
http://www.webassist.com/forums/post.php?pid=148232
|
code
|
You don't need to rebuild the pages, but you will need to inspect each of the server behaviors to have the code regenerated for data bridge.
I would make a site backup first,
then open the first page; for example's sake, let's say it is an insert page. On the Server Behaviors panel, double-click the Insert Record server behavior, then click OK to have the code regenerated.
Re-inspecting the server behavior may leave behind a reference to the older DataAssist helper pages:
<?php require_once("WA_DataAssist/WA_AppBuilder_PHP.php"); ?>
that you will need to delete manually.
You will need to repeat this for each of the pages.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499758.83/warc/CC-MAIN-20230129180008-20230129210008-00765.warc.gz
|
CC-MAIN-2023-06
| 625 | 7 |
https://support.seagullscientific.com/hc/zh-tw/community/posts/115000972187-Graphics-size-and-real-resolution
|
code
|
Graphics size and real resolution
Printer TSC TTP-345 Seagull drivers ver. 7.1
Template width 4"
Printer resolution is set 300 dpi
Correct me if I am wrong, but 1200 pixels should precisely fit the template (1200/300 = 4).
Unfortunately, the 1200 px picture is way bigger and needs 50% rescaling. It looks like Designer itself interprets the picture resolution as 150 DPI.
When you need the highest quality greyscale print, the best way is to convert it in Photoshop to Black&White with error diffusion. And then print it as a black&white image / no dither / no diffusion / no scaling -- just pixel-to-pixel.
So the problem is that something performs an internal rescaling, producing a mess on the printed label. Is it possible to set the label resolution EQUAL to the printer resolution, or otherwise outsmart Designer?
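A quick sanity check of the arithmetic (a sketch; the 150 DPI value is my reading of the observed behaviour, not a documented setting):

template_in = 4.0        # template width in inches
printer_dpi = 300        # printer resolution
px_needed = template_in * printer_dpi    # 1200 px should fit exactly
assumed_dpi = 150                        # what Designer appears to assume
apparent_in = px_needed / assumed_dpi    # 8.0 in -> image looks twice as wide
scale = template_in / apparent_in        # 0.5 -> the 50% rescale observed
print(px_needed, apparent_in, scale)     # 1200.0 8.0 0.5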
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487655418.58/warc/CC-MAIN-20210620024206-20210620054206-00067.warc.gz
|
CC-MAIN-2021-25
| 828 | 8 |
https://community.virginmedia.com/t5/Security-matters/Password-security-is-poor/td-p/4524224
|
code
|
If the password is less than 8 characters, I get a "too short" warning. This is fair and is good security practice. But anything over 10 characters is "too long", and if I put special characters in, it would not accept the password either.
I ended up getting a valid password by trial and error after about 15 minutes, but the UI was terrible at telling me what was wrong.
I can see this requirement now I have set up an account, but it was nowhere to be seen when I was registering: "8-10 characters long, letters and numbers only, no spaces. First character must be a letter."
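Pieced together, that policy can be expressed as a single regex; a sketch of a checker (my reconstruction of the stated rule, not Virgin Media's actual validation code):

import re

# 8-10 characters, letters and digits only, first character a letter
POLICY = re.compile(r"^[A-Za-z][A-Za-z0-9]{7,9}$")

for pw in ["abc12345", "1abcdefg", "elevenchars1", "pass word1"]:
    print(pw, "->", bool(POLICY.match(pw)))
# abc12345 -> True; the rest fail on first character, length, or the space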
Can anyone explain why, in 2020, an 11-character password is too long? You are limiting your customers' passwords to be pretty insecure; for what reason? And why does the first character have to be a letter? Mandating this makes passwords even more insecure.
Five months on, and the password requirements have still not changed. Virgin Media is the largest ISP in the UK, yet customer security falls behind even this forum, which allows up to 20 characters, including special characters. Unlike this forum, my account holds personal information and is far more easily accessible.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00512.warc.gz
|
CC-MAIN-2022-49
| 1,157 | 5 |
http://mathhelpforum.com/advanced-algebra/187764-find-inverse-following-matrix-progess.html
|
code
|
I have uploaded the problem; clicking on p1 shows part 1 of the problem and my progress.
I am stuck on p3. The task is to find the inverse.
You just need to apply the following operations to the matrix on p3: add 7*R3 to R1, add 5*R3 to R2, then multiply R2 by -1.
Afterwards, the right part of the augmented matrix is the inverse of the original matrix.
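In NumPy, the suggested row operations look like this (the matrix below is a stand-in with made-up values, since the actual p3 matrix is only in the attachment):

import numpy as np

# Augmented matrix [L | R] at the p3 stage -- placeholder values.
M = np.array([[1., 0., -7., 1., 0., 0.],
              [0., -1., -5., 0., 1., 0.],
              [0., 0., 1., 2., 1., 1.]])

M[0] += 7 * M[2]    # add 7*R3 to R1
M[1] += 5 * M[2]    # add 5*R3 to R2
M[1] *= -1          # multiply R2 by -1

print(M[:, :3])     # left half is now the identity
print(M[:, 3:])     # right half is the inverse of the original matrix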
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122996.52/warc/CC-MAIN-20170423031202-00024-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 408 | 6 |
https://anastako.com/about
|
code
|
Specialized in blending user research, design thinking, and brilliant UX/UI with business strategy to create elegant and effective products.
Getting things done and having fun along the way
Across borders, but still on the same page
Hi! I'm Konstantinos, with 7+ years of experience in mobile and web design. POPULAR projects, RETURNING clients, and shining REVIEWS speak for themselves. I started back in 2015 as a junior designer, shifting my career after almost 20 years in the finance sector. Fast forward to today: I am an expert in UX/UI design, helping companies deliver digital products across the globe. Working from home long before WFH was even a thing, I criss-cross timezones and collaborate across borders - mixing up work and play, every day.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476137.72/warc/CC-MAIN-20240302215752-20240303005752-00697.warc.gz
|
CC-MAIN-2024-10
| 763 | 4 |
https://www.information-management.com/news/data-modeling-a-systematic-approach-to-build-it-right-the-first-time
|
code
|
The data model is a valuable enterprise tool. In addition to helping enterprises understand information assets and analyze data requirements, data models support decision-making, enable information sharing, guide data quality initiatives and form the cornerstones of an enterprise's data architecture. In fact, data models are the Holy Grail for the business and technical community.
A quality data architecture is the soul of an organization and must clearly depict how information flows through the enterprise and how it is used. The key component of a data architecture is the data model, which is typically a graphical representation of data structure.
Thousands of data models are being built these days. But for every good data model, there are a dozen bad ones. Existing efforts focus on completing the task at hand (building an adequate model) rather than spending time validating the effort. Imagine a mission-critical system deployed without testing. If we agree that a robust data architecture, developed and maintained through data modeling efforts, is the backbone of the information enterprise, then why aren't we spending 25 to 30 percent of our effort on a quality assurance review? The impact of forgoing this step is grave.
What Constitutes Data Model Quality?
Generally, quality is addressed relative to fitness for purpose. For quality data models, we talk about two concepts, completeness and correctness, from two perspectives, conceptual and technical. We document data models using what is essentially a language. Using its syntax correctly and making sure the model is an accurate and complete representation of the concept we are representing yields a quality result. These concepts and perspectives shape our quality framework:
- The conceptual perspective pertains to context or the meaning of the representation of the architecture relative to the organization.
- The technical perspective pertains to the adherence to syntax or internal integrity of the representation.
- The completeness concept is one of wholeness or comprehensive coverage.
- The correctness concept is one of accuracy or correctness.
To ensure a data model is built right the first time, we have taken basic ideas about languages, syntax, semantics, models and quality and have joined them to form the five dimensions of data model quality that are depicted in Figure 1. Together these provide a framework for a comprehensive modeling approach.
Figure 1: The Five Dimensions
The Five Dimensions Defined
Conceptual correctness implies that the data architecture accurately reflects the business objects of interest for the enterprise and requires that the data structure needed to support all business processing is in place. Achieving conceptual correctness depends on the ability to translate information of interest in the business environment into a structured representation using a semantic language that forms a meaningful and accurate representation of the real world. Determining conceptual correctness is one of the most difficult aspects of assessing overall quality and the most challenging aspect of building a data model in the first place.
Conceptual completeness implies that the data model contains objects adequate to describe the full scope of the business domain that the model purports to represent. Our ability to judge the quality of a data model is closely tied to outside factors such as government and legal mandates, financial constraints and stakeholder requirements. You cant build representative data models by focusing exclusively on business data. You must take into account all of these factors to understand data and its interrelationships, or you will never build good data models.
Technical correctness implies that the objects contained in the data model do not violate any of the established syntax rules of the given language. Syntactic correctness means data model boxes, lines and symbols are used for their intended purposes and that the model adheres to generally accepted practices of the chosen methodology. Once rules are established they become part of the syntax and the models must be judged against those rules. Technical completeness implies that all the requisite data model objects, components, elements and details are captured at appropriate levels of detail for the purpose of the data architecture. As an example, assume we have adopted IDEF1X, a modeling notation used extensively by the federal government and by some commercial organizations as our modeling technique. In IDEF1X, data models are supposed to be built in three distinct phases: an entity relationship modeling phase, followed by a key based modeling phase and finished with a fully attributed modeling phase (NIST 1992). As the name implies, non-key attributes are not defined until late in the modeling process. Therefore, an IDEF1X model may be considered technically complete without any non-key attributes through the first two phases of modeling. At the end of the third phase, however, the model must contain non-key attributes while remaining technically sound.
Enterprise integration implies the data model is balanced with the other elements of an enterprise architecture effort. It is linked and synchronized to the performance, business, service and technical components of the enterprise architecture.
By understanding each dimension and planning your modeling approach to address each one, you can significantly increase the likelihood that your data models will provide added value and a solid foundation for business analysis, information systems design and business operations across the enterprise. Each dimension contributes uniquely to the overall quality of the architecture. The sum of the parts makes the whole stronger; addressing all five dimensions ensures fitness for purpose of the data architecture.
These five dimensions form a framework for assessing a data models utility. This framework may also be applied to each component of the architecture independently to ensure seamless integration.
Figure 2: Data Architecture
Data Quality Review Process
Reviewing data models to determine quality can be challenging. Typically, data dictionaries and matrices are structured or presented in an alphabetical sequence. Unfortunately, this organizational paradigm has nothing to do with the context of the organization and, therefore, does not facilitate review from any of the five quality dimensions. Graphical data models are typically laid out to be aesthetically pleasing. While minimizing line crossing and grouping some objects that are closely associated can help with comprehension, it still does not provide a repeatable structure for reviewing a large, complex model.
As part of the systemic evaluation process, it is extremely important to review the model in an orderly progression. Random selection of a starting point leads to difficulties in making correct assessments, and often leads back through the same parts of the model over and over again in an effort to trace primary key migrations and identify circular references. To avoid this, break the model into logically cohesive subsets prior to starting the review. Each subject area should be laid out in data dependency sequence, in other words, based on the dependencies of each entity upon each other.
Figure 3: Systemic Evaluation Process
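One way to compute that review order is a plain topological sort over the parent-to-child (foreign key) dependencies; a minimal sketch with invented entity names:

from graphlib import TopologicalSorter  # Python 3.9+

# Each entity maps to the set of parents it depends on; names are illustrative.
deps = {
    "Customer":  set(),
    "Product":   set(),
    "Order":     {"Customer"},
    "OrderLine": {"Order", "Product"},
}

print(list(TopologicalSorter(deps).static_order()))
# e.g. ['Customer', 'Product', 'Order', 'OrderLine'] -- independents first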
Review Technical Quality Dimensions
To confirm the technical completeness and correctness of the data model, conduct a review of each entity, including all of its elements, attributes, relationships and links, as documented in any components of the data architecture.
Independent entities. Begin by reviewing independent entities, those which do not inherit any foreign key. By starting the review at the opposite end of the family tree from cluster endpoints, you can more easily track (and evaluate) the migration of keys through the model and follow a general-to-specific path through the model, because dependent children inherit characteristics of the parent entities. This helps the reviewer understand the model syntax use and grasp the business concepts implicitly in the model construct.
Dependent entities. Next, shift focus to dependent entities, those with foreign keys (serving as either primary keys or non-primary foreign keys). Subtype, associative and attributive entity types are dependent entities. Trace the relationships down from independent entities, and then follow parent-child relationship paths through successive dependent entities until you reach the endpoint. You will be following multiple dependency paths, each originating with an independent entity and merging with other dependency paths. This progression allows you to continue tracking key migration and follow the natural flow of the model.
Backward pass. Retrace the same paths in the opposite direction, beginning with the endpoints and moving upstream to the independent entities. This progresses relatively quickly, because most of the issues related to the syntax and business concepts have already been captured. The objective is to ensure that nothing has been overlooked due to a single perspective review. To prevent reviewing the same objects more than once, mark each entity and attribute as it is reviewed on each pass. Be sure to record issues and make annotations as necessary as you move through the model.
Review Conceptual Quality Dimensions
The next step is to conduct a conceptual quality review of the data model to determine the conceptual correctness and completeness. This can be done from the perspective of either the performance architecture or the business architecture.
Such reviews are significantly more difficult to perform than checking syntax and modeling standards compliance in a technical quality review because they center on the successful transformation of business plans and rules into data architecture elements and relationships. The business concept review is used to evaluate the precision and accuracy with which the requirements within the model's scope have been addressed. This is where the majority of time is spent. Unlike syntax checks, it is impossible to automate the review of business concepts because these checks rely on human faculties of interpretation, comparison, translation, knowledge and judgment. A few questions may guide the review process.
1. Verify the data model supports all business statements or stated requirements from other sources. Ensure the data model provides traceability from the data back to the requirements that this data will support or represent.
2. Determine if the model accurately reflects the context of the business plans and rules. Relationships are generally a good place to start when looking for inconsistencies between plans and models.
- Does the relationship capture, support and implement a stated strategy or policy correctly?
- Does the business rule make sense in the context of the functional area or segment being represented, as well as in the full model (i.e., reversed relationship, incorrect cardinality or optionality)?
- Does the business rule supported by the relationships make sense when assessed in the context of other relationships involving each of the two entities connected by the relationship?
This process has been proven to determine data model quality quickly and effectively. The objective, structured, repeatable approach is reusable, which will further lead to consistency across an entire enterprise architecture. A high quality data model has the following characteristics:
- Understandable at all levels: A fellow data architect should have no difficulty understanding the data model you have created; however, a person with limited modeling knowledge should also be able to comprehend the represented model and processes.
- Flexible: The data model should be accurate and stable for the immediate purpose for which it is designed and also be robust enough to accommodate future data requirement additions and changes easily and without major modifications.
- Adherent to standards: Ensure all applicable standards have been followed, regardless of how the standards came into being; i.e. corporate mandate, government directive, etc.
- Reusable: A data model that is complete and comprehensive can be leveraged by other modelers and utilized for constructing a model of similar scale and functionality.
Providing a quality review process to examine data architectures will help your organization answer the basic questions, Am I done? and Is it any good? It will help data architects and modelers achieve quality with their initiatives, which in turn will help information providers respond to changing needs and, ultimately, enable organizations to function more effectively and efficiently.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864256.26/warc/CC-MAIN-20180621172638-20180621192638-00484.warc.gz
|
CC-MAIN-2018-26
| 13,152 | 51 |
https://community.amd.com/t5/archives-discussions/glpushclientattrib-glpopclientattrib-do-not-work-with-firepro/m-p/373522/highlight/true
|
code
|
I am using Qt Quick as software GUI.
Qt Quick over 5 requires unpack alignment to be 4 for correct GUI rendering.
With the current (and previous) drivers for the FirePro W5100 / Barco MXRT-5600, glPushClientAttrib / glPopClientAttrib are not working (I do not have this problem with other graphics cards, such as NVIDIA's).
So once I change the unpack alignment between glPushClientAttrib / glPopClientAttrib, the Qt Quick GUI can no longer be rendered correctly; I have to manually change the alignment back to 4.
I wonder if there is some settings I did not do to make glPushClientAttrib / glPopClientAttrib work with these drivers.
OS: Windows 7
When I was using Qt 4.8, it worked properly, since Qt 4.8 does not have the alignment requirement.
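For reference, the manual save/restore I do can be wrapped so it at least behaves like push/pop; a sketch of the idea (shown with PyOpenGL for brevity, though my real code is C++/Qt, and upload_texture is a made-up placeholder):

from contextlib import contextmanager
from OpenGL.GL import glGetIntegerv, glPixelStorei, GL_UNPACK_ALIGNMENT

@contextmanager
def unpack_alignment(value):
    # Save the current alignment by hand instead of glPushClientAttrib.
    saved = int(glGetIntegerv(GL_UNPACK_ALIGNMENT))
    glPixelStorei(GL_UNPACK_ALIGNMENT, value)
    try:
        yield
    finally:
        # Restore the saved value (4) so the Qt Quick GUI keeps rendering.
        glPixelStorei(GL_UNPACK_ALIGNMENT, saved)

# with unpack_alignment(1):
#     upload_texture()  # hypothetical texture upload needing 1-byte alignment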
Thank you for your time!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00046.warc.gz
|
CC-MAIN-2023-50
| 760 | 8 |
http://www.minecraftmodel.com/orteil-dashnet-cookie-clicker.html
|
code
|
Cookie clicker | know your meme - internet meme database, About cookie clicker is a click-based online browser game in which the player must create as many cookies as possible by repeatedly clicking the oversized cookie.Dashnet, The forum and IRC are now open. forums: http://forum.dashnet.org/ IRC info: server: irc.gamesurge.net channel: #dashnet port: 6667. you can use your favourite IRC. Dungeons - cookie clicker wiki, As of the latest release, dungeons can only be accessed in the beta release of cookie clicker. an early version of dungeons can be experimented with in the live game.
Cookie clicker - ultimate victory! - youtube, "cookie clicker classic": http://orteil.dashnet.org/experiments "cookie clicker modern": http://orteil.dashnet.org/cookieclicker/ website: http://www.Cookie clicker - #07 - unlimited cookies! ☆ let's play, Cookie Clicker info is further down. Subscribe http://glp.tv • twitter http://glp.tv/twitter google+ http://glp.tv/google+ • facebook http://glp.tv/fb. Cookie clicker on scratch - scratch - imagine, program, share, Cookie clicker on scratch by j3or cheat mode now for everyone! record grandma apocalypses by jermy-x with 211.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011126350/warc/CC-MAIN-20140305091846-00085-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 1,163 | 2 |
https://github.com/peter-conalgo
|
code
|
- Conalgo Inc.
- Toronto, Ontario, Canada
Library for working with BDF font files.
This is a test repo
Forked from sipa/bitcoin
Bitcoin integration/staging tree
Forked from django-inplaceedit/django-inplaceedit
Django application that allows you to inline edition of some data from the database.
Forked from richardkiss/pycoin
Python-based Bitcoin utility library.
Forked from MongoEngine/flask-mongoengine
MongoEngine flask extension with WTF model forms support
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124297.82/warc/CC-MAIN-20170423031204-00214-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 549 | 14 |
https://cute-overload.blogspot.com/2011/03/monty-library-dog.html
|
code
|
You should get good grades because then you can go to Yale University Law School. And then you can go to its library.
And THERE you can “check out” the library’s dog, Monty!
Yep, according to this story, you can check out Monty for a half-hour at a time. And you can pet him, feed him a biscuit, and admire his cuteness!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00324.warc.gz
|
CC-MAIN-2023-14
| 434 | 5 |
https://offtopic.com/threads/help-me-pick-a-new-notebook-laptop.4057183/
|
code
|
So I need to buy a new notebook/laptop in the next 5 days. Because I currently use Windows XP and Office 2003 on my home desktop and at the office, I wanted to stick with that, but my options for easily ordering a notebook from Dell or HP are kinda limited if I want it with Windows XP, since everything comes with Vista for the most part.
It must have a 10-key (NumPad) on the right-hand side, because I do finance and accounting, so that means 17". It looks like I am going to get a new notebook with Vista and install my legit copy of Office 2003 Professional on it. Please advise me if you think this is a bad idea. My understanding is that my license should be good for two installs, one on my desktop and one on my notebook, and that Office 2003 should work fine with Vista as the OS.
Will I have any problems getting both my Vista notebook and XP desktop to work with a new wireless all-in-one WiFi HP printer at home?
I am looking at the Dell Studio 17 and the Hewlett Packard DV7. Please advise if there are any other models I should be looking at, if I should shop anywhere other than the HP and Dell websites and Best Buy, and if there are any good deals going on right now that I should know about.
I will be using this for a bunch of finance and accounting work, and some financial modeling software that can be fairly intensive (it sometimes takes 4-7 minutes to run through all the calculations/iterations), plus the way everyone else uses a normal notebook: a little PowerPoint, internet surfing, email, and maybe watching a movie on a trip.
Best Buy has both the HP DV7 and a red Dell Studio 17 for $699. I liked them both in person and was leaning toward those. Your comments are appreciated, thanks!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825174.90/warc/CC-MAIN-20171022094207-20171022114207-00697.warc.gz
|
CC-MAIN-2017-43
| 1,778 | 1 |
https://neeness.com/why-i-feed-cats-to-snakes/
|
code
|
Table of Contents
Why I Feed Cats To Snakes? Scaly snakes: why are there differences in the scales of snakes? Snakes are entirely covered in scales. Those scales protect the body, aid in camouflaging and help with their locomotion. Other functions of scales are keeping water and dust out of the eyes, similar to our eyebrows.
What is the purpose of scaly skin in the snake? Snake bodies are covered with plates and scales, which help them move over hot surfaces like tree bark, rocks, and desert sand. Rough belly scales help the snake keep its grip on rough branches and push off of surfaces when it needs to move. The scales are also waterproof, helping keep water away from the snake’s body.
How does scaly skin help reptiles? Scaly Skin Is In!
Their special covering actually helps them hold in moisture and lets them live in dry places. Reptile scales are not separate, detachable structures — like fish scales. You may not think of their shells as being scaly, but they are!
How does a snake's skin help it survive? Snakes' bodies are covered with scales. Without this protective armor, snakes could not move over rough or hot surfaces like tree bark, rocks, and hot desert sand. Their scales are also nearly waterproof and help to keep water out. A few times every year a snake will shed a layer of dead skin.
Why I Feed Cats To Snakes – Related Questions
What do you do if you find snake skin in your yard?
If you encounter a snake outside your property, the best thing is to leave it be. You should also try to identify the snake species and then leave the snake alone unless it is inside the building or it is venomous.
What is the disadvantage of reptilian scaly skin?
What is the disadvantage of reptilian scaly skin? It does not grow with the rest of the reptile, so it must shed. Evolution of reptiles: they evolved rapidly in the warm, humid climate of the Carboniferous period.
What does reptile skin look like?
Reptiles have a reputation that they are “slimy” when we touch and hold them; however, they have dry skin, which has even fewer glands than mammals or amphibians. The main special feature of their skin is that the epidermis is heavily keratinized with a layer, which also prevents water loss.
How long does it take for snake skin to decompose?
In the wild, shed snake skins disintegrate in about a week, although if you collect one and put it in a plastic bag, they can last decades.
How can you tell a snake from its skin?
A more easily recognizable difference between venomous and nonvenomous snakes is the shape of the head. If the head on the shed is intact and distinctly arrow-shaped, or you can make out a small pit between the eye and nostril, you’re likely to have a venomous snake.
What happens after a snake sheds its skin?
First, while the snake’s body continues to grow, its skin does not. Kind of like when humans grow out of their clothes. A roomier skin layer is generated, and the old layer is discarded. Secondly, shedding, or sloughing of the skin, removes harmful parasites.
How do scales help an animals?
A reptile’s scales also protect the animals from abrasions as they scurry across the ground, climb trees, or briefly dive beneath the surface of the water. The scales also help protect reptiles from the loss of body moisture which helps them stay healthy.
What is the function of reptile skin?
The skin of reptiles has numerous functions including display, protection, camouflage, thermoregulation and fluid homeostasis. The skin is dry, with few glands compared with mammals and amphibians.
What has dry scaly skin?
Atopic dermatitis is also known as eczema. It’s a chronic skin condition that causes dry scaly patches to appear on your skin. It’s common among young children. Other conditions, such as psoriasis and type 2 diabetes, can also cause your skin to dry out.
What smell do snakes hate?
Ammonia: Snakes dislike the odor of ammonia so one option is to spray it around any affected areas. Another option is to soak a rug in ammonia and place it in an unsealed bag near any areas inhabited by snakes to deter them away.
Is it OK to pick up snake skin?
Handle with Care
You should never pick up a snakeskin with your bare hands. This is because about 15 to 90 percent of snakes carry some Salmonella bacteria on their shed skins. Consequently, touching it with your bare skin places you at risk of a bacterial infection.
What does seeing snake skin mean?
The presence of a shed skin indicates that a snake has been living within the vicinity for a while. Most snakes will often get cranky prior to their shedding period, and even if they try to shed their skins in one piece, many snakes will become snappy and they may have patchy sheds.
What attracts snakes to your house?
A snake may be attracted to houses or yards if there is shelter and food that are unknowingly being provided by humans. Taipans and brown snakes eat rodents and they are attracted to farm sheds or gardens where they can hunt mice or rats. The python may eat chickens or other birds.
Can you smell a snake in your house?
Spotting a snake
The only way people will know whether there is a snake in their house is by seeing it, Sollenberger said. Snakes don’t really have an odor and don’t really make sounds so it would be impossible to smell them or hear them.
Can you identify snake by shed skin?
Yes, you can tell the species of snake from its shed skin. By examining the scale pattern, along with other clues such as location found, size, diameter, remnants of color pattern, skin thickness, and how intact or shredded it is, I can nearly always determine the species, or at least the genus of the snake.
What was the first reptile on earth?
The earliest known reptiles, Hylonomus and Paleothyris, date from Late Carboniferous deposits of North America. These reptiles were small lizardlike animals that apparently lived in forested habitats.
Which is a characteristics of reptiles like snakes?
All reptiles have backbones, lay hard or leathery-shelled eggs, have scales or scutes, and they are all ectothermic. We usually think of snakes as reptiles, which they are, but there are more reptiles than just snakes.
How do reptile reproduce?
Most reptiles reproduce sexually and have internal fertilization. Males have one or two penises that pass sperm from their cloaca to the cloaca of a female. Fertilization occurs within the cloaca, and fertilized eggs leave the female’s body through the opening in the cloaca.
What animals have moist skin?
There are more than 6,000 species of amphibians living today. This animal class includes toads and frogs, salamanders and newts, and caecilians. Almost all amphibians have thin, moist skin that helps them breathe.
What causes scaly patches on skin?
Scaling skin is a symptom of many medical conditions, including psoriasis, contact dermatitis, eczema, and fungal skin infections. Some causes can lead to health complications if left untreated. Commonly affected areas include the face, legs, and hands.
Which animal would most likely have moist skin without any scales?
Amphibians live part of their lives on land and part of their lives in the water. They have smooth, moist skin with no scales, feathers, or hair.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00155.warc.gz
|
CC-MAIN-2021-49
| 7,226 | 51 |
https://bugsfixing.com/solved-gaussian-summation-for-2d-scatter-plots-using-python/
|
code
|
I am trying to establish what people would loosely refer to as a homemade KDE, I suppose. I am trying to evaluate the density of a rather large set of data points. In particular, having many data points for a scatter, I want to indicate the density using a color gradient (see link below).
For exemplification, I provide a random pair of (x,y) data below. The real data will be spread on different scales, hence the difference in X and Y grid point spacing.
import numpy as np
from matplotlib import pyplot as plt

def homemadeKDE(x, xgrid, y, ygrid, sigmaX = 1, sigmaY = 1):
    a = np.exp( -((xgrid[:,None]-x)/(2*sigmaX))**2 )
    b = np.exp( -((ygrid[:,None]-y)/(2*sigmaY))**2 )
    xweights = np.dot(a, x.T)/np.sum(a)
    yweights = np.dot(b, y.T)/np.sum(b)
    return xweights, yweights

x = np.random.rand(10000)
x.sort()
y = np.random.rand(10000)

xGrid = np.linspace(0, 500, 501)
yGrid = np.linspace(0, 10, 11)

newX, newY = homemadeKDE(x, xGrid, y, yGrid)
What I am stuck with is, how to project these values back to the original x and y vector so I can use it for plotting a 2D scatter plot (x,y) with a z value for the density colored by a given color map like so:
plt.scatter(x, y, c = z, cmap = "jet")
Plotting and KDE approach is in fact inspired by this great answer
To smooth out some confusion, the idea is to do a gaussian KDE, which would be on a much coarser grid. SigmaX and sigmaY reflect the bandwidth of the kernel in x and y directions, respectively.
I was actually - with a little bit of thinking - able to solve the problem on my own. Also, thanks for the help and insightful comments.
import numpy as np
from matplotlib import pyplot as plt

def gaussianSum1D(gridpoints, datapoints, sigma=1):
    a = np.exp( -((gridpoints[:,None]-datapoints)/sigma)**2 )
    return a

# some test data
x = np.random.rand(10000)
y = np.random.rand(10000)

# create grids
gridSize = 100
xedges = np.linspace(np.min(x), np.max(x), gridSize)
yedges = np.linspace(np.min(y), np.max(y), gridSize)

# calculate weights for both dimensions separately
a = gaussianSum1D(xedges, x, sigma=2)
b = gaussianSum1D(yedges, y, sigma=0.1)
Z = np.dot(a, b.T).T

# plot original data
fig, ax = plt.subplots()
ax.scatter(x, y, s = 1)

# overlay data with contours
ax.contour(xedges, yedges, Z, cmap = "jet")
Answered By – Fourier
Answer Checked By – Senaida (BugsFixing Volunteer)
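If you want the per-point z values used in plt.scatter(x, y, c=z) from the question rather than contours, one library route (an addition here, not part of the answer above) is scipy's gaussian_kde:

# A hedged sketch: evaluate a 2D KDE at the data points themselves, so each
# point gets its own density value z, usable directly as its scatter color.
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import gaussian_kde

x = np.random.rand(10000)
y = np.random.rand(10000)

xy = np.vstack([x, y])      # gaussian_kde expects shape (n_dims, n_points)
z = gaussian_kde(xy)(xy)    # density estimate at each (x, y) pair

plt.scatter(x, y, c=z, cmap="jet", s=1)
plt.show()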
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00477.warc.gz
|
CC-MAIN-2023-06
| 2,328 | 11 |
https://www.hirist.com/j/human-robot-interactions-engineer-c-c-deep-learning-15-20-yrs-490131.html?ref=kp
|
code
|
HR Recruiter at DataGroup Geospatial Technologies Pvt Ltd
Human-Robot Interactions Engineer - C/C++/Deep Learning (15-20 yrs)
Are you an experienced Human-Robot Interactions (HRI) engineer capable of building new AI-Based technology at the intersection of software, systems, sensing, machine learning, Deep Learning, Natural Language Processing and physical deployment at scale?
We are looking for an experienced Human-Robot Interactions engineer capable of building and delivering functioning robotics software components for deployment on a global scale.
Job Responsibilities :
- Add new capabilities to our robots and make them more robust against real-world challenges.
- Participate in all phases of new development including concept, design, prototyping, and production
- Influence the full-stack architectural roadmap for Human-Robot Interface Software Components.
- Design and develop AI, Machine Learning, or Deep Learning-based models for the components that need them.
- Work closely with hardware and other firmware teams to design and optimize the system
- Performance tuning and maintenance of on-device software
- Mentor junior engineers
- Integration: make all of the robot capabilities work as part of a system that also behaves as a coherent character. This can touch many parts of the system, including image processing, state machines, embedded development, robot localization and mapping, and voice recognition
- Collaborate daily with your fellow Robotics Engineers, QA, Product, and Hardware to get stuff done
- Design, implement and validate applications, Machine Learning Models and capabilities.
Basic Qualifications :
- Ph.D. in Computer Science, Electrical Engineering or Machine Learning or AI or Human-Robot Interaction or related field
- 5+ years of experience as a software developer for Human-Robot Interaction
- Experience working in C, C++ or other Object Oriented languages on a Linux platform with ROS or equivalent toolkit
- Maintaining a high level of communications with a cross-functional team, and partners
- Balance day-to-day delivery work with research.
- Experience with multithreading and concurrency
- Proficiency in at least one scripting language: Python, Perl, etc.
- Debugging/troubleshooting skills of embedded processes and systems
- Knowledge of computer architecture and OS fundamentals
- Experience with designing, building and deploying scalable and highly available systems
Preferred Qualifications :
- Ph.D. in Computer Science, Electrical Engineering or Machine Learning or AI or related field
- Experience and knowledge in controlling and integrating Software Components for robotics
- Experience taking a leading role in building complex software systems that have been successfully delivered to customers
- Hands-on expertise in many disparate technologies, typically ranging from front-end user interfaces through to back-end systems and all points in between
- Experience with a Linux development environment (e.g. Makefiles, GDB, Git, Ubuntu)
- Experience and knowledge in building software for large scale industrial systems
- Knowledge of professional software engineering practices for full software development life cycle, including coding standards, code reviews, source control management, agile development, build processes, and testing
- Experience with formal Integration, Validation and Verification (IV&V) techniques.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572491.38/warc/CC-MAIN-20190916060046-20190916082046-00234.warc.gz
|
CC-MAIN-2019-39
| 3,472 | 36 |
http://www.vm-education.com/zioxi-power-up-computer-tables/
|
code
|
22 Aug zioxi Power Up Computer Tables
Large collaborative tables with concealed ICT hardware
Press a button and the central mechanism rises to reveal a computer, keyboard and mouse for each seat. Can be specified for All-in-One PCs, or for PCs with standard TFT screens and the CPU units concealed behind the central plinth. Available in four, six, eight or ten seat versions.
For open plan learning zones, ICT suites, libraries, meeting rooms and conference facilities.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529179.46/warc/CC-MAIN-20210122082356-20210122112356-00548.warc.gz
|
CC-MAIN-2021-04
| 453 | 4 |
https://libgentech.wordpress.com/technology/
|
code
|
Operating Systems: Windows desktops to 10, Mac OS 10.10 Yosemite, + legacy systems
Mobile Computing: Understanding of any Apple or Android device
Webpage Construction: Hand-coding HTML & CSS
Graphic Design and Layout: Adobe Photoshop, Pixlr.com
Cloud Storage: Dropbox
Blogging Platforms: WordPress, Blogspot
Integrated Library Systems: OCLC WorldShare, SirsiDynix Workflows, Polaris, Voyager
Learning Management Systems: Canvas, Blackboard, Moodle, Desire2Learn, Angel
Financial Management: Banner, Financial 2000
Social Networks / Web 2.0: Familiarity with Facebook, Twitter, Instagram, Snapchat + more
Programs I am working with July 2017
Having previously taken the @ONE course Introduction to Teaching with Canvas, I am now hand-editing the HTML of the library presence for Barstow Community College.
Finished an Introduction to XML course through Library Juice Academy, earning a certificate in Basic XML.
This time last year – 2016
Worked with the sandbox version of Archivists' Toolkit.
Worked with the trial version of LibGuides.
I have started working with OpenRefine.org, also known as Google Refine, for working with messy data.
Stanford list of tools I have yet to work with:
(Thanks to Dr. Robert Chavez for this list)
Good for doing well-formedness checks. Some of these will also work for URL-based DTD and Schema validation.
- XML Editor and Validator: http://xmlgrid.net
- Validome XML Validator: http://www.validome.org/xml/validate/
- XML Validator: http://xml.online-toolz.com/tools/xml-validator.php
- XML Schema Validator: http://www.xpathtester.com/validate
Good for testing and experimenting with XPaths (a short offline example follows this list)
- Online-Toolz Tester: http://www.online-toolz.com/tools/xpath-editor.php
- Free Formatter Path Tester: http://www.freeformatter.com/xpath-tester.html
- XPath Tester: http://www.xpathtester.com/xpath
- Another XPath Tester: http://codebeautify.org/Xpath-Tester
- W3C XPath Evaluator: http://www.utilities-online.info/xpath/#.VNQM9kKe0hI
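If you would rather test an XPath expression offline, a small Python sketch with lxml (my own addition; none of the sites above are needed for this) does the same job:

# A hedged sketch: evaluate an XPath expression locally with lxml.
from lxml import etree

doc = etree.fromstring(
    "<catalog>"
    "<book><title>XML Basics</title></book>"
    "<book><title>XPath in Practice</title></book>"
    "</catalog>"
)

# Select the text of every title element under any book element.
titles = doc.xpath("//book/title/text()")
print(titles)  # ['XML Basics', 'XPath in Practice']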
Simple tools for formatting (pretty printing) your XML documents.
- XML Formatter: http://xml.online-toolz.com/tools/xml-formatter.php
- Another XML Formatter: http://www.freeformatter.com/xml-formatter.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530841.24/warc/CC-MAIN-20171213201654-20171213221654-00163.warc.gz
|
CC-MAIN-2017-51
| 2,152 | 33 |
https://forum.vassalengine.org/t/copying-pieces-from-one-game-to-another/5262
|
code
|
I’m trying to learn how to Copy one piece from one Module to another Module.
Specifically, I am attempting to Modify The Gamers Regimental ACW Modules like This Terrible Sound to include the Basic Pieces from the LOB None But Heroes Module. Things like Skeddatle, ShakeyLegs, etc markers that are new to the new Line of Battle rules.
Basically all I need to do is use the Basic Markers from NBH instead of those currently in TTS.
You cannot just copy/paste from one module to another, you have to retype everything.
But you can edit 2 modules at the same time, so if you just arrange the two modules next to each other, that will make things easier.
And a vmod file is actually just like an archive file, start your favourite archive program, open a vmod and you can extract the image files you want to reuse.
And when you are extracting images, also extract the buildfile and copy the XML for the pieces you want to re-use.
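Since a vmod is just a zip archive, you can also script the extraction; here is a rough Python sketch (the module name and folder layout are hypothetical, so adjust to what you see inside your own vmod):

# A hedged sketch: pull reusable images and the buildFile out of a .vmod,
# which is an ordinary zip archive (file names here are hypothetical).
import zipfile

with zipfile.ZipFile("NoneButHeroes.vmod") as vmod:
    for name in vmod.namelist():
        # Artwork usually lives under images/, with the piece XML in the buildFile.
        if name.startswith("images/") or name.lower().startswith("buildfile"):
            vmod.extract(name, "extracted_nbh")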
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00125.warc.gz
|
CC-MAIN-2022-49
| 926 | 7 |
https://community.netapp.com/t5/Simulator-Discussions/can-netapp-simulator-run-on-ESXi-host-running-vmware/td-p/171967
|
code
|
The instructions are for Windows/Mac machines running VMware Workstation. I want to see if the simulator can run on our VMware infrastructure.
Yes, it will work. Upgrade the RAM to 12 GB.
I run a 9.8 simulator in my VMware environment. I use it for testing. There is an OVA you can run in vSphere.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510734.55/warc/CC-MAIN-20231001005750-20231001035750-00408.warc.gz
|
CC-MAIN-2023-40
| 423 | 7 |
https://maniacdev.com/2016/10/open-source-swift-based-segmented-control-component-with-fluid-animations
|
code
|
SJFluidSegmentedControl is an open source Swift 3.0 based component from Sasho Jadrovski providing a segmented control with neat “fluid” animations moving between selections.
SJFluidSegmentedControl can be implemented in code or in Interface Builder, with customizable colors, corner radius, and shadows.
This animation from the readme shows SJFluidSegmentedControl in action:
You can find SJFluidSegmentedControl on Github here.
A great UISegmentedControl replacement with great animations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189130.57/warc/CC-MAIN-20170322212949-00170-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 807 | 9 |
https://blog.zoho.com/general/riya.html
|
code
|
Many people have a common memory problem: associating the name and the face. It is not rare that one fine day you meet someone you know and are left wondering her name. Or a long lost friend sends you an e-mail and you try your best, to no avail, to remember the face. So when Riya introduced something like Face Recognition, which (to the best of my knowledge) no one else has offered before, many people quickly caught on to it. It's a great service, a bit heavy, and I've never been so glad to drag and drop :)
It was great to learn that Riya could identify Michelangelo's David. It would be even nicer if there were Riya communities or public forums, like the Keyhole community, where users can upload such photos, recognize them and have some fun like what people do in the "Where in the World" contest.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818464.67/warc/CC-MAIN-20240423033153-20240423063153-00831.warc.gz
|
CC-MAIN-2024-18
| 795 | 2 |
http://stackoverflow.com/questions/7586669/where-do-i-get-the-device-token-that-urban-airship-requires-for-ios-registration
|
code
|
To get the device token, you have a few options:
You can find it as one of the arguments sent to your app delegate's application:didRegisterForRemoteNotificationsWithDeviceToken: callback.
You can get it as an NSString by calling [[UAPush shared] deviceToken] after your device has successfully registered for remote notifications.
If you don't have access to the code, you can find it by watching your app's calls to Urban Airship. You can do this with Charles proxy. Full instructions at this link. To sum it up:
- Install the Charles certificate on your iOS device by going to http://charlesproxy.com/charles.crt in Safari on your device.
- Proxy your device's wireless connection through Charles
- Enable SSL Proxying in Charles for *.urbanairship.com on port 443.
- Run your app and look for calls to URLs that mention "urbanairship" recorded in Charles. They should be decrypted, and some will include info about your device token.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398448227.60/warc/CC-MAIN-20151124205408-00085-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 878 | 10 |
https://www.saashub.com/compare-goldfish-vs-blocs?ref=list
|
code
|
Website X5 - Create your own website! It's Easy, it's Complete and Professional
Webflow - Build dynamic, responsive websites in your browser. Launch with a click. Or export your squeaky-clean code to host wherever you'd like. Discover the professional website builder made for designers.
Sublime Text - Sublime Text is a sophisticated text editor for code, html and prose - any kind of text file. You'll love the slick user interface and extraordinary features. Fully customizable with macros, and syntax highlighting for most major languages.
Adobe Dreamweaver - Adobe Dreamweaver is a proprietary web development tool developed by Adobe Systems.
IntelliJ IDEA - Capable and Ergonomic IDE for JVM
Pinegrow Web Editor - A professional visual editor for Bootstrap 4 and 3, Foundation, responsive design, HTML, and CSS. Convert HTML to WordPress themes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00483.warc.gz
|
CC-MAIN-2021-43
| 930 | 7 |
http://www.coderanch.com/t/454364/sr/certification/Finally-SCJPs
|
code
|
After two months of reading the K&B book 4 times (about 2 hours on weekdays and 6 hours on weekends) and lurking and participating in this forum, I cleared the exam.
I used the K&B book and its master exam, plus Devaka's ExamLab.
I never passed the quiz or the master exam from the K&B CD-ROM, and I never passed any exam in Devaka's ExamLab; the best score I got on either was just 50%.
I was so afraid to take the exam since I wasn't passing any mocks, but I took my chances and passed it. My grade is the double axe; well, a pass is a pass, isn't it?
Thanks to all Ranchers, the authors of K&B, Devaka, and everyone here..
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398462665.97/warc/CC-MAIN-20151124205422-00330-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 629 | 7 |
https://pypi.org/project/motley/
|
code
|
motley code catalog
Motley is a planned catalog server for algorithm storage and evaluation. Motley aims to serve as an algorithm catalog for nemoa to allow the client-side automatic usage of current state-of-the-art (STOA) algorithms. Thereby any respectively used STOA algorithm is determined server-side by its category and a chosen metric. An example of such a metric would be the average prediction accuracy within a fixed set of gold standard samples of the respective domain of application (e.g. Latin handwriting samples, spoken word samples, TCGA gene expression data, etc.). Nevertheless the metric itself can also be a STOA algorithm.
Due to this approach motley allows the implementation of enterprise analytics projects that are automatically kept up to date with a minimum of maintenance cost. Motley also supports scientific applications by facilitating local (workgroup, lab, institution) or global publication, application and evaluation of algorithms, e.g. those developed within a PhD position or program.
Install the latest version of motley:
$ pip install motley
Contributors are very welcome! Feel free to report bugs and feature requests to the issue tracker provided by GitHub.
Motley is available free for any use under the GPLv3 license:
Copyright (C) 2019 Frootlab Developers Patrick Michl <[email protected]>
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652161.52/warc/CC-MAIN-20230605185809-20230605215809-00228.warc.gz
|
CC-MAIN-2023-23
| 1,500 | 10 |
https://www.concrete5.org/community/forums/installation/manually-deleted-add-on-package-now-add-block-doesnt-work/
|
code
|
Manually deleted add-on package; now "Add Block" doesn't work
So now when I try to edit my site pages by adding a block, I get:
Warning: require_once(/home2/summite4/public_html/packages/gnt_mathjax/blocks/gnt_mathjax/controller.php): failed to open stream: No such file or directory in /home2/summite4/public_html/updates/concrete220.127.116.11_updater/concrete/core/libraries/loader.php on line 217
Fatal error: require_once(): Failed opening required '/home2/summite4/public_html/packages/gnt_mathjax/blocks/gnt_mathjax/controller.php' (include_path='/home2/summite4/public_html/libraries/3rdparty:/home2/summite4/public_html/updates/concrete18.104.22.168_updater/concrete/libraries/3rdparty:.:/usr/php/54/usr/lib64:/usr/php/54/usr/share/pear') in /home2/summite4/public_html/updates/concrete22.214.171.124_updater/concrete/core/libraries/loader.php on line 217
Clearly the Concrete5 installation I have still thinks the package is there, tries to look for it, can't find it, and gives up.
What to do?
As can be seen from the above snippet, I am currently running 126.96.36.199 - if I update to 188.8.131.52, would that fix my issue? I am wary of getting my hands any further involved and really mucking things up, so I am hoping for a relatively easy solution that is not too complicated to implement.
Any tips or advice appreciated ~
After putting it back - you 'should' be able to uninstall the package - failing that you can track the blocks down and remove them perhaps...
Is there any way to clean things up manually? What do I look for in trying to eliminate whatever is necessary so Concrete5 will quit looking for components that no longer exist?
the add on is this one by the looks of it:
Grab a copy and put it in your /packages directory. This should get you to the point where you can add blocks normally.
If, at that point there are issues with uninstalling that add-on then that will need looking into - but at least your site will be working.
You could manually try and delete references to this package and blocks etc from the db but you might end up in a worse state.
If the db actually had referential integrity in it (foreign keys), that would be a breeze, but it doesn't... which makes me curious why they decided to keep the data in a totally unrelated state. Was it for speed or some other 'good' reason? It would have to be a pretty good reason to disregard the integrity of the data. Can anyone shed light on this?
As of now, I have no idea where to look for clues as to why it will not uninstall. Nothing in the server or db logs, no error messages. Especially with package installations and upgrades, many fatal errors are not reported, so I'm kind of stuck on how to debug this. Any suggestions?
I ended up poking around in my c5 database and locating the tables related to the manually deleted package. This made it so the "add block" script quit looking for the non-existent package, and now I have no further problems adding blocks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.75/warc/CC-MAIN-20180526131802-20180526151802-00092.warc.gz
|
CC-MAIN-2018-22
| 2,966 | 16 |
http://allisonknight.blogspot.com/2007/03/getting-organized.html
|
code
|
Ever look in a file cabinet drawer, the one you don't open often, and discover there's stuff in there you'll never use? I have one cabinet strictly for research. This is the two-drawer cabinet I opened, of course, looking for something.
Now I have the time-consuming task of organizing my research. Why, you ask? Simple ... You guessed it. I couldn't find what I was looking for.
Now, I get to spend the rest of this day and probably part of tomorrow trying to figure out why I saved information on something about which I'll never write, at least not in this lifetime.
Lesson one: think seriously before you label a folder and stuff that bit of information into it. Much of what I saved I won't ever need.
Lesson two: search engines are wonderful. It's absolutely amazing what you can find on the internet, if you use the advanced search options. There are maps available (I have three folders full of old maps, all of which I found on the net), old newspapers, not to mention all the common bits and pieces a historical author might need. (The peerage charts of England, for example; I found five of those, all duplicates.)
So, I'm off to organize a file drawer. With any luck I'll find what I started looking for when I discovered more than I ever wanted to know.
It's all about romance!
"Heal My Hurting Heart", a western romance set in Colorado during the year of the "great die-up". (That's also on the net.)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591831.57/warc/CC-MAIN-20180720193850-20180720213850-00531.warc.gz
|
CC-MAIN-2018-30
| 1,413 | 32 |
https://www.mql5.com/ja/market/product/8885?source=Site+Market+Product+Similar
|
code
|
- Comments (6)
The indicator is intended for cross-group analysis of currency pairs, and it determines the strength of each currency in a given period of time. The results are displayed in a table. It also displays graphs of the growth of a selected currency compared to the other pairs.
In the external indicator parameters, you can specify up to 11 currencies for analysis. The indicator will determine which of the currency pairs are supported by the broker. If you need a smaller number of currencies, you can leave the remaining values empty.
Attach the indicator to the chart, and then click "Reset". Two vertical lines will appear in the center of the chart. Set the time interval for analysis by moving the lines.
The table displays the change of each currency pair's Close price within the given time period, in percentage terms. The value is taken relative to the currency specified in the left column (that is, if the EURUSD price fell by 5% over the specified period of time, then -5% is taken for EUR and 5% for USD). In the two right-hand columns you can see the sum and the average value for each currency. If the value is below zero, the cell is highlighted in red; if it is above zero, the cell is highlighted in blue.
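To make the table logic concrete, here is a small illustration in Python (my own sketch with made-up numbers, not part of the indicator itself):

# A hedged sketch of the table logic described above: each pair's % Close
# change is credited to the base currency and, with the sign flipped, to
# the quote currency; values are then summed and averaged per currency.
changes = {"EURUSD": -5.0, "GBPUSD": 1.2, "EURGBP": -0.8}  # made-up % changes

totals = {}
for pair, pct in changes.items():
    base, quote = pair[:3], pair[3:]
    totals.setdefault(base, []).append(pct)    # EURUSD falling 5% gives EUR -5%
    totals.setdefault(quote, []).append(-pct)  # ...and USD +5%

for currency, values in totals.items():
    avg = sum(values) / len(values)
    print(currency, "sum:", round(sum(values), 2), "avg:", round(avg, 2))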
A separate window shows a chart of the price change over time for the selected currency against the other currencies. You can select a currency by clicking on its name in the left column of the table.
We recommend keeping the charts of the selected currency pairs open, to load history and calculate the data correctly. If the indicator cannot get the data on a currency pair for a specified date, the table will display "R" and this value will not be taken into account when calculating the sum and average value. You must open a chart of the relevant currency pair and update the history on the required time frame.
If a currency pair is not in the terminal, it will display an empty cell, and its value will not be counted in the statistics.
To attach several copies of the indicator to one chart, specify different (X, Y) coordinates in the settings.
The indicator works with currency pairs only, so the symbol name must consist of two currency codes. That is why an instrument such as GOLD is not supported yet. But if your broker's gold is named XAUUSD, it will work without any problems.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991641.5/warc/CC-MAIN-20210511025739-20210511055739-00449.warc.gz
|
CC-MAIN-2021-21
| 2,571 | 10 |
https://brendmlm.com/cbmiqwh0dhbzoi8vd3d3lnrozxjlz2lzdgvylmnvbs8ymdiylzeylziwl2tlcm5lbf82ml9mc19pbxbyb3zlbwvudhmv0geaoc5/
|
code
|
The upcoming Linux kernel 6.2 should see improved file system handling, including performance gains for SD cards and USB keys, as well as FUSE. As for next-generation storage subsystems…not so much.
For a mature OS kernel, there are still significant improvements being made in Linux's handling of current disk formats, and these should land when kernel 6.2 comes out sometime in early 2023.
A patch from Sony engineer Yuezhang Mo speeds up the creation of new files or directories on an exFAT disk with a lot of files – the more files, the greater the optimization. This follows the same programmer’s previous patch to improve exFAT handling, back in March.
After Microsoft published the exFAT specification in 2019 and the driver entered the Linux kernel in 2020, its support has steadily improved. Linux recently gained the ability to repair exFAT volumes, thanks to a patch from Samsung developer Namjae Jeon, who also maintains the out-of-tree exFAT driver for older kernels, such as those used in Android; its commit history shows a lot of contributions from the Sony programmer. Another Samsung engineer, Jaegeuk Kim, contributed a patch to improve F2FS, the flash-friendly file system.
Former Ubuntu and now Microsoft engineer Christian Brauner has also been at work, posting a detailed patch to add a custom VFS API for POSIX ACLs. These have been supported for a long time, as you can see from this description of how they work from 2002, but the new version should clean up and simplify their handling. Brauner has also introduced a patch to support ID-mapped mounts for SquashFS volumes. This complements his previous patch which introduced ID-mapped mounts, and which also contains an explanation of how they work and what they are for.
There are improvements to some of the more established file systems, too. First is a list of fixes and improvements for XFS, aimed at the important new feature of online repair. Another patch provides performance improvements for volumes mounted via FUSE; in other words, where the file system code runs in a userspace program, not as part of the kernel. There are even some bug fixes for ext4, which is now worthy of respect.
There are also some improvements in Btrfs, particularly in its handling of RAID 5 and 6. Notably, one patch addresses the "read-modify-write destruction" issue for Btrfs RAID5 (but not RAID6) arrays. This is good stuff, but these disk layouts are still not recommended. In the words of the official product documentation:
The feature should not be used in production, only for evaluation or testing.
Since this is the FOSS file system of the FOSS operating system, there are of course workarounds for this, such as using the Linux kernel's built-in mdraid support, via the mdadm command, to create a RAID-6 volume and then format it with Btrfs. Our story about upgrading the oldest Debian installation mentioned that it was running LVM, on RAID, on LVM. This kind of thing is possible and people do it, but that doesn't mean it's a good idea if you're not an old Linux nerd.
Simplifying and consolidating this kind of complex disk configuration is exactly what next-generation file systems were meant to deliver, but while code changes continue to emerge, two of the most significant have gone relatively quiet.
While Ubuntu continues to maintain its ZSys code, there isn’t much activity. It’s worth noting that there hasn’t been a new post on ZSys’ blog since the Ubuntu 20.04 timeframe, and some users are starting to wonder what’s going on, as well as whether it should be removed.
This is certainly not due to a lack of development at OpenZFS, which released version 2.1.7 earlier this month, with plenty of updates and support for up to Linux 6.0. There are also active and innovative third-party tools that use it, such as ZFSBootMenu. It is also supported in NixOS, the innovative distro we talked about last week.
On the Red Hat side of things, the Stratis team released version 3.4 in November, and three minor bullet releases since then. However, the changelog doesn’t show any very significant work. It still targets a Fedora version before the current one, for example. You can use it in RHEL 9, but it remains an unsupported tech preview, as happened in RHEL 8 in 2019.
We could be wrong, but it seems to us as if both Canonical and Red Hat have lost interest in moving this innovative technology forward. We’d like to see someone else select, merge, and optimize projects, as some Android vendors do with exFAT. It appears to be a huge opportunity for companies creating their own Ubuntu-based distributions, such as Linux Mint and ZorinOS, that address perceived weaknesses in their flagship desktop product. ®
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00870.warc.gz
|
CC-MAIN-2023-06
| 4,758 | 17 |
http://banfftrader.com/exploring-a-cave-banff-vlog/
|
code
|
Follow me on social media:
My personal makeup blogsale: http://instagram.com/rebellefleurxox
° Hi, my name is Leila. I'm a Filipina Canadian vlogger based in Calgary, Canada. My channel is a mix of everything I'm interested in, from makeup and beauty to fashion, hauls, lifestyle vlogs, couple videos & challenges. I hope you will support me in my YouTube journey and subscribe to my channel :)
Canon t3i|imovie| MUSIC BY FLIPTUNESMUSIC
Intro template by Taylor, Edit by me.
EXPLORING A CAVE | BANFF VLOG
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583879117.74/warc/CC-MAIN-20190123003356-20190123025356-00604.warc.gz
|
CC-MAIN-2019-04
| 530 | 7 |
http://asleepingbird.asleepingbird.com/2020/05/or-more-esoteric-structure.html
|
code
|
You alone have nothing to do,
And unlike me you don’t like it.
I am described in general terms.
You can’t tell from this that I was
A unique individual.
Let your wish grow greater, until
You can stretch your arm to the sky,
Pinch the sunlight and snap it off
Like a simple switch on a wall.
You’re so much more than a number,
A more esoteric structure,
And I’m a minor composer
Of cobbler’s patches for your thoughts.
So now you know. You can darken
The universe. I can watch you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00323.warc.gz
|
CC-MAIN-2023-50
| 490 | 15 |
https://www.softexia.com/windows/miscellaneous/advanced-installer
|
code
|
Advanced Installer is a powerful and easy to use Windows Installer authoring tool, enabling developers to create reliable MSI packages that meet the latest Microsoft Windows logo certification guidelines. Extremely easy to use, powerful, fast and lightweight.
Advanced Installer simplifies the process of building Windows Installer packages by providing a very easy to use, high level interface to the underlying technology. The program implements all Windows Installer rules and follows all the advised best practices.
With this simple, intuitive interface, building a Windows Installer package will take just a few minutes. Start the program, add a few files, change the name, hit the Build button and you are done. No scripts to learn, no seminars to attend.
Advanced Installer project files are stored in XML format. This way, they can be easily checked into a version control system. The software installer also operates from the command line, so you can build your release packages in a completely automated script, using tools like Make, Ant or NAnt. Furthermore, the most common operations are also implemented as command line actions, so you can modify your project in an automated fashion.
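As a rough illustration of that automation (the install path and project file below are hypothetical examples, though /build is the documented command line verb), a build script can simply shell out to the tool:

# A hedged sketch: invoking the Advanced Installer command line from a
# Python build script; both paths below are hypothetical.
import subprocess

ADVINST = r"C:\Program Files (x86)\Caphyon\Advanced Installer\bin\x86\AdvancedInstaller.com"

subprocess.run([ADVINST, "/build", r"C:\projects\MyApp.aip"], check=True)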
Advanced Installer will organize your application in Features and Components as per Windows Installer guidelines. This way you can take advantage of advanced software installer features like partial install and repair without having to do anything. Further customizing the organization is easy and intuitive.
Packed as native EXEs, DLLs or script files and written in C, C++, VBS or JS, Custom Actions give you the full power to add anything you want, anywhere you want to your software installer.
Advanced Installer Top Freeware Features:
- MSI. Create valid MSI setups for your applications respecting all written and unwritten Windows Installer rules.
- UAC. Build installers that run flawlessly on Windows 10/8.1/8/7/Vista supporting their security model.
- Side-by-Side. Create packages for different versions of your application.
- Imports. Import from Visual Studio, InstallShield LE, WiX, Eclipse, Inno Setup, NSIS and regular MSI/MSM packages.
- Fonts. Register fonts in the Windows OS. Specify registration names for non-TrueType fonts.
- Environment Variables. Create, append or prepend user or system environment variables.
- Autoregister. Auto registration for files that support it. You can schedule at install time.
- Files and Folders. Install and uninstall files and folders. Set attributes, create shortcuts.
- Registry. Install and uninstall registry keys and entries.
- Template projects. Create templates based on your current project and ready-to-use for your future projects.
- Add/Remove. Customize your application’s listing in the “Add/Remove Programs” page of Control Panel.
- XML projects. Easy to check into version control systems and share between multiple developers.
- Command Line. Build your release packages in completely automated scripts, like Make, Ant, NAnt, TeamCity, Jenkins or MSBuild.
- Run and Log. Launch your MSI package while pretty printing the full Windows Installer log.
- Launch Conditions. Visually specify conditions necessary (applications, frameworks, etc.) for your package to run.
- Smart Formatted Editing. Editing (MSI)Formatted fields offers reference auto-completion, syntax and error highlights and resolved value hints.
- Per-User/Per-machine. Create installers that can be install as required: per-user or per-machine if the user is Administrator.
- Include Merge Modules. Include frameworks, libraries, and other prepackaged dependencies into your installers with just a few mouse clicks.
- .NET Core deployment via our Visual Studio Extension
- Predefined launch conditions for Windows 10 version 21H1 (May 2021 Update)
- Over 16 enhancements and bug fixes
Changes in Advanced Installer 18.2 (April 23, 2021):
- IIS: certificate per SSL binding support
- Over 19 enhancements and bug fixes
Changes in Advanced Installer 18.1.1 (March 25, 2021):
- Fixed build error: failed to digitally sign aipackagechainer.exe module
Changes in Advanced Installer 18.1 (March 23, 2021):
- Support for repackaging in remote machines
- Predefined prerequisites for OpenJDK Java
- Over 27 enhancements and bug fixes
Changes in Advanced Installer 18.0 (February 22, 2021):
- Support for PowerShell v7: Predefined Launch Conditions, Prerequisites, Custom Actions Engine Update
- WiX import support for Visual Studio Extension
- Over 30 enhancements and bug fixes
Changes in Advanced Installer 17.9 (January 25, 2021):
- Importer and Editor for MSIXUpload/AppXUpload
- New MSIX declaration: FileExplorerContextMenus
- Over 20 enhancements and bug fixes
Changes in Advanced Installer 17.8 (December 21, 2020):
- Smart PSF
- Environment variable support for MSIX
- Migrate MSIX ProgramData
- New Vivid theme for installers
- Predefined prerequisites and launch conditions for “PowerShell 7”
- Over 22 enhancements and bug fixes
Changes in Advanced Installer 17.7 (November 25, 2020):
- Support for digital signing using Device Guard Signing Service (DGSS) v2
- Import and Update support for handling JSON files
- New MSIX supported extensions: “Mutable Package Directories”, “Installed Location Virtualization”, “App Extension”, “Host Runtime”, “App Uri Handler”
- Virtual Machine deployment image customization
- Predefined launch conditions for Windows 10 version 20H2 (October 2020 Update)
- Predefined launch condition for “.NET Runtime 5.0”
- Predefined prerequisites for “.NET 5.0”
- Over 30 enhancements and bug fixes
Changes in Advanced Installer 17.6 ( October 22, 2020):
- Over 38 enhancements and bug fixes
Changes in Advanced Installer 17.5 (September 22, 2020):
- Build MSIX Optional Packages
- MSIX Editor: Import & Edit Optional Packages
- Repackager Templates
- Over 27 enhancements and bug fixes
Changes in Advanced Installer 17.4.1 (September 7, 2020):
- Advanced Installer crashed for trial users
- Advanced Installer (MSI Quick-Edit mode) failed to edit an MSI without files
Changes in Advanced Installer 17.4 (August 26, 2020):
- Azure Key Vault digital signing support
- Enabling integration for third-party digital signing tools
- “Quick MSI to MSIX” conversion workflow
- Predefined prerequisites for “SQL Server ODBC Driver 17.6”
- Over 24 enhancements and bug fixes
Changes in Advanced Installer 17.3 (July 29, 2020):
- Support for signing using Microsoft Device Guard
- Over 49 enhancements and bug fixes
Changes in Advanced Installer 17.2 (June 30, 2020):
- Import and Edit support for MSIX bundle
- Driver Dependencies support for MSIX
- Add wizard for the “Modification Package” project
- Predefined launch conditions for “Windows 10 version 2004”
- Over 37 enhancements and bug fixes
Changes in Advanced Installer 17.1.2 (June 17, 2020):
- Installation failed when passing a null property to a deferred inline PowerShell custom action
- The “Platform” option for an Inline PowerShell custom action was not correctly saved
- “Import Repackaging Results” wizard failed for certain projects
- Visiting the Visual Assets or Package Information views incorrectly marked the project as modified
Homepage – https://www.advancedinstaller.com
Supported Operating Systems: Windows 7, 8, 8.1, 10 (32-bit, 64-bit).
Size: 147 MB
This is a unified package containing the complete Advanced Installer application, which includes Freeware, Professional, For Java, Enterprise and Architect features. Freeware features can be accessed at any time by creating a project of type “Simple”.
Creating any other type of projects requires non-freeware features. These are fully enabled during the trial period, after which you must purchase a license to continue using them.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00490.warc.gz
|
CC-MAIN-2021-25
| 7,768 | 96 |
https://wasasando.com/blog/2017/02/20/current_user-in-michael-hartls-ruby-on-rails-tutorial/
|
code
|
I am reading the RoR Tutorial by Michael Hartl and trying to make my very first web application. Following the book, I have now arrived at chapter 8. In this chapter, one defines a couple of methods in the sessions helper:
log_in(user), current_user, and logged_in?
With these defined, we put an if logged_in? condition in the site header, to decide which links are displayed in the header (e.g. "login" or "logout" depending on the logged_in status).
Now with the new header, I noticed the following behaviour in the application. Whatever link I click now (even home or help page), the application tries to retrieve a user from the database (even if no user is logged in, the application still hits the database asking for user_id: NULL).
> User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."id" IS NULL LIMIT $1 [["LIMIT", 1]]
I guess that this means that current_user doesn't persist from one request to the other, which means it has to be reloaded for each new request. So my question is:
a) is that really the case?
b) isn't that negatively affecting the performance of the application if it is on a production website?
c) If the answer to b) is yes, is there a way to avoid this behaviour?
4 thoughts on “current_user in Michael Hartl’s Ruby on Rails Tutorial”
it is supposed to persist, perhaps there is some error in the code
I copied the code from the Book, so I assume it’s ok.
I'd highly recommend the 'devise' gem.
I don't know about your tutorial, but usually the user_id is stored in the session.
The current_user method usually looks something like:
@current_user ||= User.find_by_id(session[:user_id]) if session[:user_id]
Basic idea: after auth we store the user_id in the session.
Your Active Record call is looking for a user id in the users table… more than likely you have made a typo somewhere, given that Rails convention would look for user.id in the users table… notice singular and plural.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657169.98/warc/CC-MAIN-20230610095459-20230610125459-00684.warc.gz
|
CC-MAIN-2023-23
| 1,937 | 19 |
https://community.mimosa.co/t/network-design-tool-2-5-0-update/5642
|
code
|
Network Design Tool (NDT) Release Notes: http://cloud.help.mimosa.co/design-release-notes
As I do with physical products, I will give my summary of what I think is important. You should read the release notes if you want to know all the details.
Merged Viewshed: This feature merges the intersecting viewsheds of multiple APs and generates a single viewshed to represent the overall coverage. (When you are laying out sectors this will make coverage much more intelligible. Especially if you want to show your tower coverage to people who don’t do this stuff all day long)
KML and KMZ export for Google Earth (Handy dandy, especially when you are going from design to trying to get a licensed link.)
Enhanced CPE Management (Understatement of the week here. If you want to, you can make a spreadsheet of all your customers with Mimosa equipment and import it into the design tool. From there, link planning, future customer planning, all that good stuff is doable. I'm not sure how good the geocoding in the design tool is, if any, so you may need a way to get your customers' GPS coordinates, but Google Earth works well enough in my experience…)
The PTP planner is updated to display Total RSSI and Total SNR instead of the old per-chain RSSI and per-chain SNR. Also, more realistic PHY and TCP rates have been introduced. (I felt some of the PTP links were a bit off from reality, but I always expect that and plan for at least 5 dB lower than expected.)
Numerous UI enhancements. (As someone who can spend all day doing link planning for our company and others we are quoting for, this is big to me. The design tool does feel a lot better now, though I can't point at any specific changes.)
Speaking of Form 477, Mimosa has a whole deal for generating it. Check out this link https://design.mimosa.co/designtool/ndt/#/fcc/form477/upload/ (Note: I don’t know how Mimosa goes about generating the 477, but to do it the FCC’s way is a PITA, so this might help some people who don’t already have a system.)
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00095.warc.gz
|
CC-MAIN-2021-04
| 2,020 | 8 |
http://kashifalihabib.blogspot.com/2010/05/
|
code
|
Gluttony means fixing all dimensions of the project (scope, resources, cost and quality) at the start. Such situations arise when we want to achieve more than the expected goal, and as a result quality suffers. It leads to impossible schedules and death marches in the end. As we know from the project iron triangle between cost, resources and schedule, we need to let one of them vary. You can read more from here.
Time boxing is quite a good technique to avoid gluttony. In agile mode, each iteration is time boxed and we always focus on project velocity and what we can achieve next. After a few iterations, we can estimate the project velocity and bug rhythm; knowing this prevents the temptation to over-commit.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120349.46/warc/CC-MAIN-20170423031200-00634-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 711 | 2 |
https://stateofsecurity.com/tag/tomcat-exploit/
|
code
|
An exploit has been released into the wild for Tomcat Connector version jk2-2.0.2. The exploited vulnerability exists in the handling of the Host header field in the Apache jk2 module. At this point it's known to work on Fedora Core versions 6, 7, and 8. Other distros will likely also be affected by the exploit. If you are using the legacy 2.0.x tree of the Apache Tomcat Connector, upgrade to version 2.0.4, or use the newest version of mod_jk.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817033.56/warc/CC-MAIN-20240415205332-20240415235332-00436.warc.gz
|
CC-MAIN-2024-18
| 432 | 1 |
http://www.linuxforums.org/forum/kernel/disable-ip-route-caching-print-185228.html
|
code
|
Disable ip route caching
I use Debian Linux as a router for a school project with Quagga/Zebra. I need load balancing between routes of the same cost.
It seems to work, but the Linux route caching mechanism prevents load balancing from working properly. I can flush the route cache manually with a while-1 bash script calling ip route flush cache, but I'm looking for the right way to do this.
Is there any option for this, in the kernel configuration I hope?
If not, what .c file is concerned? Maybe I could try to disable route caching by deleting some code?
Thanks in advance for your help,
deleting code is probably not the best route (hehe get it?).
Thanks for your answer.
These parameters improve the route caching timers, but it's not perfect. I need to completely disable route caching.
How about this then:
Linux Advanced Routing & Traffic Control HOWTO
see section 4.2.2, has link to a patch for loadbalancing.
Thanks a lot, I will try this solution ;)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187225.79/warc/CC-MAIN-20170322212947-00512-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 947 | 13 |
http://firstclassramblings.blogspot.com/2009/09/100-eco-friendly-100-mini.html
|
code
|
The Mini E and its entirely electric engine:
There are some issues...a lack of back seats and the fact they'll only manage 100-120 miles per full charge...though this won't be a problem for commuters. There's also the fact there'll only be 40 in the UK for testing...It's pretty amazing though that they've designed an eco-friendly car that's still 100% Mini. In other words, great fun to drive. (Yes...despite not having passed my driving test I have driven a Mini!) I'd get applying soon if you fit the criteria. Check out the details on the Mini E Field Test website.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591578.1/warc/CC-MAIN-20180720100114-20180720120114-00196.warc.gz
|
CC-MAIN-2018-30
| 564 | 2 |
http://techie-buzz.com/foss/ubuntu-packages-are-now-compatible-with-puppy-linux.html
|
code
|
Puppy Linux is a very lightweight Linux distribution. With the latest release, Puppy Linux 5.1, codenamed Lucid Puppy, it has become binary compatible with Ubuntu 10.04 Lucid Lynx. This means that Puppy Linux 5.1 can use packages (applications) meant for Ubuntu Lucid Lynx as they are.
The binary compatibility with Ubuntu should come as good news to Puppy Linux users. They can now use Ubuntu's packages without much trouble.
Puppy Linux 5.1 itself has been built using packages from Ubuntu Lucid Lynx. Because of this, the development time was extremely short. It is also the first Puppy Linux release in which the entire community did the development work, not just Barry Kauler.
In this release, a new application called Quickpet has been added. It allows users to easily install other applications with a single click. Lucid Puppy offers the users a choice of four browsers – Firefox, Chromium, Opera or Seamonkey.
Other features include:
- Straight to desktop – auto setup
- IceWM window manager included along with Joe's Window Manager (JWM)
- Developer access to the Ubuntu software repositories
- Enhanced graphics driver support
Puppy Linux 5.1 comes in a 130MB ISO image which is available for direct download. If you want to try it, the download link is given below.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00109.warc.gz
|
CC-MAIN-2020-16
| 1,290 | 10 |
http://loiter.co/v/every-time-a-news-site-appears-in-my-facebook-feed/
|
code
|
Every time a news site appears in my Facebook feed asking me "what do you think?" about a story, I think of this:
Clarkson island sketch, still as relevant and funny as ever
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00278-ip-10-171-10-70.ec2.internal.warc.gz
|
CC-MAIN-2017-04
| 328 | 6 |
https://stackoverflow.com/questions/11334981/windows-8-listview-with-horizontal-item-flow/22455659
|
code
|
You can use a ListView this way:

<!-- a horizontal items panel, with the ScrollViewer set up for horizontal scrolling -->
<ListView
    ScrollViewer.HorizontalScrollBarVisibility="Auto"
    ScrollViewer.VerticalScrollBarVisibility="Disabled"
    ScrollViewer.HorizontalScrollMode="Enabled"
    ScrollViewer.VerticalScrollMode="Disabled">
    <ListView.ItemsPanel>
        <ItemsPanelTemplate>
            <ItemsStackPanel Orientation="Horizontal" />
        </ItemsPanelTemplate>
    </ListView.ItemsPanel>
</ListView>

-- that gives it a horizontal panel and the right ScrollBars for horizontal scrolling.
Both ListView and GridView can cause problems when you get larger items. I think by default the items might be sized based on the size of the first item added. You can set that size manually though:
<Style TargetType="ListViewItem"> <!-- note - for GridView you should specify GridViewItem, for ListBox - ListBoxItem, etc. -->
    <Setter Property="Width" Value="200" /> <!-- this is where you can specify the size of your ListView items -->
</Style>

-- note that all items need to be the same size.
-- also note - I have changed this answer to replace a StackPanel with an ItemsStackPanel, which is virtualized, so it should get you better performance and lower memory usage for large data sets, but PLEASE - don't create layouts with large, horizontally scrollable lists. They might be a good solution in some very limited scenarios, but in most cases they will break many good UI patterns and make your app harder to use.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371803248.90/warc/CC-MAIN-20200407152449-20200407182949-00093.warc.gz
|
CC-MAIN-2020-16
| 997 | 9 |
https://knowledge.shakingthehabitual.com/article/172-moreish-stack-settings
|
code
|
The background color set here is used to mask out text behind the button and/or to fade the bottom of the content out. You will generally want to match the background color in use on your page. The default value (white) is simply a named colour value and will work for most sites. You can use any approach to define the desired colour though: a HEX or RGB colour value, or a CSS variable if you want to dynamically tie in to your framework's colour scheme. For instance, if using Source as your framework you can view the associated colour variables here, and to use the accent colour (for example) you would add its variable as the value.
The button acts as the toggle that will show and hide your content.
Start text: Add the initial text value that you want the button to display
Re-open text: You can add the same (or alternative) text for when the button returns to its minimised state
Close text: The text that should be used to indicate that the button will minimise the content
Tip: You can add symbols (e.g. HTML entities) as text. See the demo project for several examples of this.
Button type: There are numerous options for styling the button. The default simply lets you select values to do this (as in image above). If you are using a framework though then there are various pre-sets that allow you to use the framework's styling.
If you use a different framework or want to use your own CSS to style the button then you can select 'Custom classes' and attach the relevant class names to the button. A final option is the 'Link style' which makes the button look like a text link instead.
Position: The toggle button can be placed either at the top (default) or bottom of the stack.
Alignment: Whether at the top or bottom, the button can be aligned to the left, center or bottom.
Apply background color: This option will use the background color (specified in the initial settings field) behind the button. If you are positioning the button over text content then you will almost certainly want this option checked. Note: If your button is positioned at the top (and you are using text content) then you will probably want to add a top margin to your text content so that it can be seen.
Start minimised: You can opt for the content of the stack to be initially minimised (default) or have it open initially.
Minimised height: The desired height (in pixels) of the collapsed section
Fade out content: This option is only available for when the button is positioned at the bottom of the content and it makes use of the background colour specified initially (in the top setting) to fade out the text before the button to give a visual indication that there is more to see.
CSS classes: If you want to do additional custom styling to any aspect of Moreish and/or its content then you can add a class name in here to target those styles. Multiple class names should be separated by spaces.
Aria label: An aria-label can optionally be added into the button HTML. This provides some guidance around the purpose of the button (if the button text is not sufficient) and is used for accessibility purposes (though note that the actual content of the stack is still in the HTML even when Moreish is minimised so is always available to screen readers anyway).
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00208.warc.gz
|
CC-MAIN-2023-40
| 3,250 | 16 |
https://succeedwithsalesforce.com/automatically-removing-contacts-from-campaigns-using-process-builder-and-flow/
|
code
|
I have lately been seeing a lot of questions on the Answers Community related to automatically adding or removing a contact/lead from a Campaign when a field is changed on either one of them. Being an avid fanboy of Process Builder and Flows, I love responding to any question remotely concerned with these automation tools. The objective of this post is to demonstrate how straightforward it is to manipulate Campaign data via changes made on a Contact or Lead. The ulterior motive, however, is to link community folks to this article if a similar question pops up in the future. I will tackle a specific requirement here which can easily be tweaked to meet any related requirement.
Requirement: The system administrator needs to create an automated process which automatically removes a Contact from all Campaigns (that he/she is a part of) as soon as the contact is flagged as Inactive.
Design Approach (questions to ask yourself before you begin the implementation):
- What are we really trying to do here?
Good question! So the primary part of this implementation is first finding the Campaign/(s) which the Contact is a part of and then removing him/her from the Campaign/(s). Which means that we are essentially deleting a Campaign member. Helps put things in perspective, right? Let’s proceed with the rest of the questions.
- What should be the trigger for the automation?
A contact is created or edited and is flagged as Inactive – This becomes the criteria
- Should the automation fire every time the Contact is edited?
Nah, just firing it once when the contact is marked inactive should suffice – This means that we are going to enforce the ‘subsequently meets criteria’ evaluation
- And now the million-dollar question: which automation tool(s) are we using to implement this requirement?
Well we are going to perform some deletion here which is only possible either via Visual Workflows (commonly known as Flows) or Apex. Let’s keep Apex out of the picture since we are trying to design this declaratively.
For the deletion we need to find all Campaign Members that have their ContactId field equal to the ID of the Contact being flagged Inactive – This can be done within the flow
The Flow needs to fire when the Contact is flagged – This can be easily done via the Process Builder
So there we go! We are going to use the Process Builder to create a Process that is triggered when a Contact is created or edited to be flagged Inactive which launches a Flow. This Flow performs the heavy lifting of finding all related Campaign Member records and then purging them from the system. This is what the complete automation flow will look like:
Now when we have got our bases covered, let’s get down to business:
CREATING THE FLOW:
We need two variables in this Flow: one to receive the ID of the Contact being flagged inactive (vContactId) and another to store the Campaign members (vColCampMem).
Using the Fast Lookup element: There can be multiple Campaign members found for a single Contact which means we have to store the result in a Collection variable which demands usage of a Fast Lookup element.
Using the Decision element: We need to factor in the possibility of our Contact not being a part of any Campaign which means no Campaign members are found. The deletion step should only be performed if a related Campaign member is found which is why we need the Decision element.
Using the Fast Delete element: Well you need a Fast Delete element to delete the records inside a collection variable.
Do not forget to activate the Flow otherwise it won’t be accessible within the following Process.
CREATING OUR PROCESS THAT LAUNCHES THE FLOW:
Activate the Process and you are ready to test!
AFTER the magic happens:
See how easy it is to manipulate records via Process Builder and Flows? We were able to query records in an object, store them in a collection variable and also delete them, all of it without using Apex! This is the beauty of Visual workflows and that is exactly why I am so fond of them.
Feel free to share in the comments below any similar scenario involving Contact/Lead and related Campaign data manipulation that you tackled using Process Builder and Flows.
Related Answer Community questions:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817650.14/warc/CC-MAIN-20240420122043-20240420152043-00790.warc.gz
|
CC-MAIN-2024-18
| 4,258 | 27 |
https://blog.skrots.com/creating-apache-synapse-analytics-workspace/
|
code
|
Creating Apache Synapse Analytics Workspace
In continuation of our earlier article, we are going to look at how to create our first Synapse workspace. I strongly recommend you check out my previous article, where we discussed the basics of Azure Synapse Analytics and what can be done with it. To get started with Azure Synapse you must first create a workspace. The workspace is the top-level object which holds all the data prep, management, and studio for your Synapse workload.
We are going to create a new Azure Synapse Analytics workspace under a new resource group which I created for this demo.
Open the Azure portal and type "azure synapse"; you can click the link below to get started.
You will see the screen below once you select the options I mentioned in the previous step.
The workspace asks for an ADLS storage account to be mapped, and we must create a new Gen2 storage account if one is not already available, as it is mandatory. Another field I have marked in the image below is the 'File system name', which acts as a container that holds all the files that need to be stored in the Data Lake storage. Hierarchically it comes under the data lake storage - workspace name - container name.
The 'Managed resource group' in the above image (arrowed) is a container that holds the resources that support our workspace. In the background, Azure Synapse Analytics creates many processes and temporary objects that are placed under the managed resource group. Naming it is optional; if you choose to leave it blank, the system creates a resource group under a random name like the one below. You can see in the image below that a new resource group has been created by Azure.
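If you prefer scripting over the portal, the same workspace can also be created with the Azure CLI. This is only a minimal sketch: every resource name and the password below are placeholders, and flags can differ slightly between CLI versions.
az synapse workspace create \
  --name my-synapse-ws \
  --resource-group my-demo-rg \
  --storage-account mydatalakegen2 \
  --file-system myfilesystem \
  --sql-admin-login-user sqladminuser \
  --sql-admin-login-password '<StrongPassword123>' \
  --location westeurope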
Now that the Azure Synapse workspace has been created, let's open Synapse Studio. One way to open Azure Synapse Studio is to go into your newly created Synapse Analytics workspace and click the 'Open Synapse Studio' link.
The other, simpler way is to access it by typing the URL 'web.azuresynapse.net' in your browser, or by simply clicking the link given in the workspace web URL. When you navigate to this URL it will automatically show you the options to select your subscription and workspace name.
You can see the options in the left pane for the navigation buttons: Data, Develop, Integrate, Monitor, and Manage. It looks similar to what we saw in Azure Data Factory studio, but with some additional options. We can click on the Data button and see our linked Data Lake storage and the containers under it that we have mapped to this Synapse Analytics workspace.
This is a very basic article on creating our first Synapse Analytics workspace. There is more to come in the coming weeks, stay tuned.
Microsoft official docs
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647639.37/warc/CC-MAIN-20230601074606-20230601104606-00572.warc.gz
|
CC-MAIN-2023-23
| 3,132 | 12 |
https://www.libhunt.com/compare-toga-vs-pywebview
|
code
|
| |toga|pywebview|
|Last commit|9 days ago|7 days ago|
|License|BSD 3-clause "New" or "Revised" License|BSD 3-clause "New" or "Revised" License|
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Creating Python Android/iOS Apps
3 projects | reddit.com/r/learnpython | 25 Mar 2022
PS: While I'm at it, I'm wondering if there's a better Python-friendly UI framework than Toga. It seems to be very basic and focused on cross-platform compatibility - which is nice, but if there's something better, that would be nice to know too.
How long does it take for you to get ready to develop a good Python library?
2 projects | reddit.com/r/Python | 15 Mar 2022
I started pywebview eight years ago by just releasing code I had written for a personal project that I thought other people might find useful. The community found it useful indeed, and little by little it became a full-fledged library.
4 ways to create modern gui in python in easiest way possible
2 projects | dev.to | 3 Dec 2021
Electron Adventures: Episode 96: Pywebview Terminal App
1 project | dev.to | 12 Nov 2021
Now that we've done some hello worlds in Pywebview, let's try to build something more complicated - a terminal app.
Electron Adventures: Episode 95: Pywebview
1 project | dev.to | 12 Nov 2021
Pywebview staples together Python backend with OS-specific web engine frontend.
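For anyone who hasn't seen it, the minimal pywebview program is tiny. This sketch (the URL is just an example) opens one native window backed by the OS web engine:
import webview

# One window, rendered by the platform's own web engine (no bundled browser)
window = webview.create_window('Hello pywebview', 'https://example.com')
webview.start()  # blocks until the window is closed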
Is there a way to use a python GUI that’s based off of css and html?
2 projects | reddit.com/r/learnpython | 19 Oct 2021
I’ve never used this and just googled it because your question piqued my curiosity: https://pywebview.flowrl.com
How do you create a cross-platform GUI without using Electron?
21 projects | news.ycombinator.com | 10 Sep 2021
There's pywebview (https://github.com/r0x0r/pywebview/) which is a Python lib that uses whatever native webview implementation exists. Obviously means some compatibility work between each OS, but gives out very small apps what work very well on the whole. I'm using it on my cross platform email client (https://kanmail.io).
Is pywebview the way to go for a web-based UI-framework?
2 projects | reddit.com/r/learnpython | 9 Aug 2021
I found eel and pywebview. Eel's GitHub page looks a bit old. For pywebview, although recent releases and stars on GitHub indicate an active community, I am not able to find more than one or two recent tutorials.
What are some alternatives?
Eel - A little Python library for making simple Electron-like HTML/JS GUI apps [Moved to: https://github.com/ChrisKnott/Eel]
kivy - Open source UI framework written in Python, running on Windows, Linux, macOS, Android and iOS
PySimpleGUI - Launched in 2018. It's 2022 and PySimpleGUI is actively developed & supported. Create complex windows simply. Supports tkinter, Qt, WxPython, Remi (in browser). Create GUI applications trivially with a full set of widgets. Multi-Window applications are also simple. 3.4 to 3.11 supported. 325+ Demo programs & Cookbook for rapid start. Extensive documentation. Examples for Machine Learning(OpenCV Integration, Chatterbot), Rainmeter-like Desktop Widgets, Matplotlib + Pyplot integration, add GUI to command line scripts, PDF & Image Viewer. For both beginning and advanced programmers. docs - PySimpleGUI.org GitHub - PySimpleGUI.com. The Minecraft of GUIs - simple to complex... does them all.
Flexx - Write desktop and web apps in pure Python
PySide - ATTENTION: This project is deprecated, please refer to PySide2
DearPyGui - Dear PyGui: A fast and powerful Graphical User Interface Toolkit for Python with minimal dependencies
urwid - Console user interface library for Python (official repo)
EasyGUI - easygui for Python
Python bindings for Sciter - Python bindings for Sciter
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00709.warc.gz
|
CC-MAIN-2022-27
| 3,976 | 38 |
https://www.zoominfo.com/about/careers?gh_jid=5174954002
|
code
|
ZoomInfo is seeking a Senior Product Manager to join our Innovation and Data R&D team to deliver cutting-edge data products in its industry-leading B2B sales and marketing intelligence platform. The position will report directly to the Senior Vice President of Innovation & Data R&D and work alongside the fast-growing, bright and collaborative Data Science team in Vancouver, WA.
About our Team:
We are a small, nimble team of data professionals who love data, appreciate and are kind to each other, and continually push ourselves to get better every day. Innovation is about collaboration, new ideas, new approaches. We feed off each other and believe that we’re building a dream team of individuals with varied skills, backgrounds, and experiences that can help accelerate the evolution of ZoomInfo’s industry leading platform.
Our Innovation and Data R&D team combines rapid, product-oriented, iterative, ML and analytics approaches to build an ensemble of data products that drive ZoomInfo’s ability to help its customers hit their business objectives. We are passionate about driving business results with our data-informed products in collaboration with data scientists, product strategists, cross-functional teams, and our stakeholders. We prioritize incremental product requirements and rapidly iterate in weekly sprints to deliver impactful experimental and production models.
About the Job:
As part of our Innovation & Data R&D Product team, you will partner with the Sr. VP of Innovation and the data scientists to:
- Own/manage the Innovation/Data R&D roadmap and drive deliverability timelines of the ML models from concept to production on a quarterly cadence.
- Define, scope, and deliver production-ready ML models working cross-functionally with our Data/Product/Engineering teams. Some ML projects you will be working on include:
- ML solutions to model social interactions and predict relationship strength over large networks of business associates.
- Classification algorithms that derive confidence and accuracy of a given company/contact lead
- Recommender systems based on marketing engagement metrics or individual topic consumption/intent.
- Anomaly detection algorithms distilling a company’s future intents and interests
- Work with integrated agile/CRISP-DM methodology to support DS workflow and lead weekly sprints.
- Be firm but flexible with business stakeholders and a champion for the needs of the data scientists with respect to project requests, scale, deliverability timelines, and data quality.
- Bachelor's/Master's in a quantitative discipline such as Computer Science, Statistics, Engineering, Economics.
- Solid understanding of ML approaches/model experimentation/evaluation and ML projects full lifecycle (post model deployment).
- Ability to translate business problems into data science questions and success metrics
- SQL, Python, R experience a huge plus.
- Humble, Team Player, curious, a passion for continuous learning.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154127.53/warc/CC-MAIN-20210731234924-20210801024924-00633.warc.gz
|
CC-MAIN-2021-31
| 2,976 | 19 |
http://stackoverflow.com/questions/12879719/uniform-grid-inside-a-listbox
|
code
|
I'm in the development of a program that requires its own custom file manager/explorer. It's pretty typical: I have a user-control (named FileItem) and it includes an Image (thumbnail) and a Label (file name).
The problem I'm experiencing is with the layout; I need it in a grid layout (so I'm using Uniform Grid which nails it), exactly like Windows Explorer when you're on icon view:
I also need to be able to select file, or multiple files etc. which ListBox does perfectly. The problem is that I can't use both.
So I tried to insert a Uniform Grid inside a ListBox. The layout was great, but I couldn't select the files (as if the ListBox wasn't there).
A quick Google search suggested just using a ListView, but it doesn't do a good job since it has fixed columns and rows (and in the program I'm working on the size will change).
So how can I exactly achieve both of the functionality of ListBox and Uniform Grid?
Edit (Important): If you also need this look and want an answer I actually recommend WrapPanel and not Uniform Grid. It will automatically allow you to dynamically change the number of the rows according to the content.
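A minimal sketch of that setup (the ItemsSource binding is just an example): the ListBox keeps the multi-select behaviour, while the ItemsPanel supplies the wrapping, Explorer-like layout.
<ListBox ItemsSource="{Binding Files}"
         SelectionMode="Extended"
         ScrollViewer.HorizontalScrollBarVisibility="Disabled">
    <!-- Replace the default vertical StackPanel with a WrapPanel -->
    <ListBox.ItemsPanel>
        <ItemsPanelTemplate>
            <WrapPanel />
        </ItemsPanelTemplate>
    </ListBox.ItemsPanel>
</ListBox>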
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00008-ip-10-164-35-72.ec2.internal.warc.gz
|
CC-MAIN-2016-26
| 1,140 | 7 |
https://discuss.zerotier.com/t/opening-required-tcp-and-udp-ports-of-the-network-without-having-to-connect-zerotier-to-two-specific-computers/12787
|
code
|
Can I just completely break down a NAT wall? I simply want to run an application (gamecc) on a cloud computer. ZeroTier works beautifully when two computers are connected to the same network. Is there any way we could simply open the required UDP and TCP ports?
Opening required TCP and UDP ports of the network without having to connect zerotier to two specific computers?
Hard to say without any specifics about the network topology, ie how clients and the cloud environment are configured.
Dst-nat is probably needed if you don’t have a fully functional routing to the cloud environment. Check this out: Route between ZeroTier and Physical Networks where “Physical Network” represents the cloud environment in your case.
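As a rough illustration of the dst-nat idea on a Linux gateway (the interface names, address, and port are made up; adjust them to your own setup):
# Forward TCP 8080 arriving on the ZeroTier interface to a host on the LAN
iptables -t nat -A PREROUTING -i zt0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.10:8080
# Rewrite the source address so replies flow back through the gateway
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE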
It’s also possible to route multiple local networks (LAN) to the cloud network the same way. Then all clients on the respective LAN will have direct access to the cloud environment without needing their own installation of ZT. Each LAN must be published using the ZT manager (my.zerotier.com) under “Advanced->Managed Routes”. Be aware that this allows traffic between all LANs, not just to the cloud. This can be restricted using “Flow Rules” in the ZT manager.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651325.38/warc/CC-MAIN-20230605053432-20230605083432-00195.warc.gz
|
CC-MAIN-2023-23
| 1,297 | 6 |
http://www.mmo-champion.com/threads/1256862-Unable-to-upload-screenshots-to-wowdb?p=20022077&viewfull=1
|
code
|
I thought I'd try participating in the contest by uploading some screenshots, however when I click on the button to upload a screenshot it just brings up a small window with a text input box that says "title" and a "submit" button that does nothing.
Is this the what it looks like for you? (click for bigger)
If it is, half the box where you put in the info is getting cut off, which I don't know how to fix, but it gives the people who can a starting point. If what you are seeing is something different, can you please post a screenshot of that?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123276.44/warc/CC-MAIN-20170423031203-00355-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 534 | 3 |
http://www.freekb.net/Article?id=715
|
code
|
Let's say you create a new project in Visual studio, and create a website with a master page and a default.aspx page. After building the site and going to www.example.com/default.aspx, the parser error appears. Notice in this example the error references migrationdb.FrontEnd.
First, let's make sure that both the master page and default.aspx page are in the target directory. As an example, if the Web server is IIS and the directory is C:\inetpub\wwwroot\example, ensure the default.aspx and master page files are in the directory.
When first creating the project, you probably selected one of the following two:
If you selected ASP.NET Web Application, your default.aspx page will need to use CodeBehind (not CodeFile). If you selected Website, your default.aspx page will need to use CodeFile (not CodeBehind).
<%@ Page Title="" Language="vb" AutoEventWireup="false" MasterPageFile="./MasterPages/FrontEnd.Master" CodeBehind="default.aspx.vb" Inherits="migrationdb._default" %>
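If the project was created as a Website instead, the equivalent directive would use CodeFile; a sketch reusing the same example names:
<%@ Page Title="" Language="vb" AutoEventWireup="false" MasterPageFile="./MasterPages/FrontEnd.Master" CodeFile="default.aspx.vb" Inherits="migrationdb._default" %>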
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585460.87/warc/CC-MAIN-20211022052742-20211022082742-00103.warc.gz
|
CC-MAIN-2021-43
| 978 | 5 |
https://git.amrita.edu/explore?language=4&sort=stars_desc
|
code
|
This is the official repo for the Hodor project.
This is a project for CORE lab. The project includes CRUD services for Users and Books.
This project was built using Django & SQL.
one time use
Script to generate id cards for Anokha.
Classify Chest X-rays using Probabilisitic Graphical Models
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103810.88/warc/CC-MAIN-20231211080606-20231211110606-00798.warc.gz
|
CC-MAIN-2023-50
| 292 | 6 |
https://carlespradofonts.com/cv-of-failures/
|
code
|
Gathering resources for my work helping out in my department's faculty development. To combat impostor syndrome and the perpetual self-perception of failure, I forward several CVs inspired by Melanie Stefan's idea of a CV of failures, published in Nature in 2010:
As scientists, we construct a narrative of success that renders our setbacks invisible both to ourselves and to others. Often, other scientists’ careers seem to be a constant, streamlined series of triumphs. Therefore, whenever we experience an individual failure, we feel alone and dejected.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00222.warc.gz
|
CC-MAIN-2022-21
| 566 | 2 |
http://fig.cox.miami.edu/~cmallery/150/life/big.bang.htm
|
code
|
the Big Bang is the Cosmological Theory of the Origin of the Universe... the Big Bang is a unifying theme of astrophysics, as Darwinian Evolution is a unifying theme of biology.
EXPANSION is the basis of the Big Bang model - sort of an explosion of space itself that happened everywhere, much like the surface of a balloon expanding - it happens everywhere at once.
1. It was not a "bomb" that went off at the center of the universe hurling matter outward. It did not explode from a particular location into a preexisting void.
2. The Big Bang model does not describe the Big Bang itself, only what happened afterward.
3. The universe does not expand into space, but sort of around it - it has no center or edge.
4. The Universe was not the size of a grapefruit. What is often meant is that the Universe we can observe (not space itself, but the solids in it - galaxies) was more tightly packed. It was packed smaller... but small is a relative term. The totality of space is infinite; shrink an infinite space a little and it is still infinite.
5. As space expands, not everything in it does also. Gravity overpowers expansion and results in an equilibrium of size; thus galaxies remain the same size and do not expand.
a. You live on the surface of an inflating 2D balloon; with inflation the distance between points is increasing, thus the surface of the balloon is expanding; the distance to remote galaxies is increasing, but the galaxies themselves are not moving away - it is the space between galaxies that expands.
b. In reverse: any given region of the Universe shrinks and all of the galaxies within it get closer, until they smash together = the Big Bang
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578584186.40/warc/CC-MAIN-20190423015050-20190423041050-00250.warc.gz
|
CC-MAIN-2019-18
| 1,622 | 23 |
https://ollyjackson.co.uk/wordpress/archives/2005/03/21/footer-fixing/
|
code
|
I finally fixed up the footer over the weekend to display properly in IE & Safari. It had been bugging me for a while but displayed perfectly in everything else. Giving the right footer div a width property seemed to fix it. For some reason IE & Safari were collapsing it down to the width of the link images.
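Something along these lines, in other words (the selector and width here are illustrative, not my actual stylesheet):
#footer-right {
    /* an explicit width stops IE & Safari collapsing the div
       down to the width of the link images */
    width: 320px;
}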
I also managed to fit in a ton of work on the GameSoc re-design. It’s all top secret at the moment but I’m sure I’ll be blabbing about it once it’s finished 🙂
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474660.32/warc/CC-MAIN-20240226130305-20240226160305-00680.warc.gz
|
CC-MAIN-2024-10
| 479 | 2 |
https://www.fortiss.org/en/results/software/toki
|
code
|
Platform for prototyping and evaluating operating system concepts in real-time environments
Simple and easy to use prototyping platform for embedded real-time systems for a hassle free evaluation of operating system concepts in industrial applications (up to technology readiness level 7). With toki, developing, building, simulating, and flashing of embedded software is brought to the convenience level of Linux application development.
Typically, even low-level operating system concepts, such as resource sharing strategies and predictability measures, are evaluated with Linux on PC hardware. This leaves a large gap to real industrial applications. Hence, the direct transfer of the results might be difficult. As a solution, we present toki, a prototyping and evaluation platform based on FreeRTOS and several open-source libraries. toki comes with a unified build- and test-environment based on Yocto and QEMU, which makes it well suited for rapid prototyping. With its architecture chosen similar to production industrial systems, toki provides the ground work to implement early prototypes of real-time systems research results, up to technology readiness level 7, with little effort.
Currently, most applied real-time systems research prototypes are developed and evaluated on top of Linux on PC hardware. This leaves a large gap between real industrial applications in that field and the prototype. In the case of low-level operating system concepts concerning, e.g., context switch times, resource sharing, intra-node communication, and predictability, the conclusions drawn could even be void due to the completely different nature of the industrial platform. Furthermore, we see a lack of practical examinations of what latency and how much temporal predictability are achievable with certain configurations (e.g., software architectures and predictability measures). Hence, we see the need to ease the construction of early prototypical implementations of research results on relevant hardware in relevant environments.
Therefore, we see the need for a minimal, yet flexible real-time system framework that provides comfort similar to commercial platforms, but comes without a complex software architecture and strict inherent design concepts. Accordingly, toki provides a flexible and configurable operating system framework, close to production industrial systems, but without the hassle of complex software architectures.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816977.38/warc/CC-MAIN-20240415111434-20240415141434-00484.warc.gz
|
CC-MAIN-2024-18
| 2,429 | 5 |
https://documentation.commvault.com/commvault/v11_sp8/article?p=products/search/compliance/tag_set.htm
|
code
|
Tagging and Tag Sets in Compliance Search
Tags are short descriptive phrases that you can add to items in a Compliance Search review set. You can use tags to label items during the review phase of electronic discovery. For example, tags can help you to identify data in a review set that might require additional actions for further processing. You can also share tags and work collaboratively with other Compliance Search users.
Changes to Tagging in Service Pack 7
The tagging feature was updated in Version 11 Service Pack 7. The updates introduced some new capabilities and changes to tagging, as follows:
- You can now filter items in a review set based on the tags that you apply to items after updating to SP7.
- You can refine the displayed items in a review set using search or filters and apply tags to only the items that match your search and filter criteria.
- There is a new column in the review set, named Tag(s), that lists the tags that you apply after updating to SP7. Tags that you applied in earlier versions of Compliance Search appear in the Review Tag(s) column.
Enabling Tagging in Compliance Search
To use the tagging feature in Compliance Search, your administrator must assign you to a role with the Tag Management capability. For more information, see Roles Overview.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670821.55/warc/CC-MAIN-20191121125509-20191121153509-00510.warc.gz
|
CC-MAIN-2019-47
| 1,295 | 9 |
https://www.springerprofessional.de/symmetry-and-consistency/1134356
|
code
|
We introduce a novel and exciting research area: symmetrising levels of consistency to produce stronger forms of consistency and more efficient mechanisms for establishing them. We propose new levels of consistency for Constraint Satisfaction Problems (CSPs) incorporating the symmetry group of a CSP. We first define Sym(i, j)-consistency, show that even Sym(1, 0)-consistency can prune usefully, and study some consequences of maintaining Sym(i, 0)-consistency. We then present pseudocode for SymPath consistency, and a symmetrised version of singleton consistency, before presenting experimental evidence of these algorithms' practical effectiveness. With this contribution we establish the study of symmetry-based levels of consistency of CSPs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986670928.29/warc/CC-MAIN-20191016213112-20191017000612-00253.warc.gz
|
CC-MAIN-2019-43
| 747 | 2 |
https://blogs.aalto.fi/mediatutkimus/2016/02/
|
code
|
The first Mlab DA seminar of spring 2016 is this Thu 25.2 in room 429, Miestentie 3B, 5-7pm. Welcome!
Eunice Sari: Transforming Learning with Mobile Technology
Jukka Purma: Kataja – Visualizing Biolinguistics
This presentation discusses various mobile learning projects and their contributions to the development of mobile teaching and learning framework in Indonesia.
With an aim of redefining a learning experience through the use of mobile technology in teaching and learning, I have conducted a number of studies between 2006 and 2016 in both developed and developing countries to investigate user interaction and attitude toward mobile technology for teaching and learning. The studies apply design and action research approaches in order to dynamically design and develop good teaching and learning experience using mobile technology. In this presentation, I will present several projects conducted between 2006-2016 that involved the use of mobile technology to support teaching and learning in both developed and developing countries and some lessons learned for the development of mobile teaching and learning framework for Indonesia.
The first case study discusses the early adoption of mobile technology in early 2006. I conducted a series of feasibility studies in Thailand, Indonesia, Malaysia, Singapore, Denmark and Finland. Employing an ethnographic-inspired user research approach (i.e. contextual inquiry, participant observation, shadowing), interviews, focus groups and cultural probes, I gained rich insights from students, teachers, lecturers and other relevant stakeholders from various educational institutions about their perspective on the use of mobile technology as a tool for teaching and learning. The concept of mobile learning was still distant for the majority of users, while there was large adoption of feature phones across the region, particularly in Indonesia, the 4th largest mobile market in the world.
The second case study discusses the changing landscape of Information Communication Technology (ICT) that started around 2007-2008 in Southeast Asia. The general landscape for smartphones had grown significantly and social media started to become a new common basic requirement for mobile end-users. This significant development influenced the adoption of mobile technology in the education field. However, the education market was in general still an unsure and undecided market due to the cost of mobile data, education and social ethics. In this project, a series of online activities was conducted to educate teachers and school leaders on the use of ICT in teaching and learning. Social media was employed in the development of an online learning community to socialize the concept of professional learning, and the use of mobile technology was encouraged to support flexible learning, especially for educators who work in rural and remote areas in Indonesia.
The third case study started in 2013, when I investigated the use of mobile technology as a learning tool for large flipped-learning multidisciplinary classes at a university in Australia. The project was repeated twice from 2014-2015 and is now entering its third year of experimentation with some modification of the mobile automated assessment (2016-2017). While the context is Australia, there are a lot of lessons that can be learnt and transferred from this project for developing a mobile teaching and learning framework in Indonesia, considering that more than half of the participants in the projects are from Asia and have a lot of similarities with students in Indonesia.
In addition to this project, I also conducted a number of smaller studies on mobile learning. The first one is a study at the primary level of education on how mobile technology could be implemented to engage students and teachers in classroom teaching and learning, and how a participatory design process can engage a teacher, a UX Designer and an Instructional Designer, as well as students, at different levels and phases of the design process to create a good learning experience.
Jukka Purma: Kataja – Visualizing Biolinguistics
This presentation focuses on section ‘Tools for syntacticians’ of my thesis work ‘Kataja – Visualizing Biolinguistics’. The thesis includes production of visualization software Kataja and a written part. Kataja is a digital instrument for researching syntax in biolinguistics, one theoretical direction in linguistics. The design of the instrument depends heavily on understanding of the current theory and possible directions it may take, so most of the thesis is about syntactic theory and hypotheses employed in biolinguistics, and how to turn them to visual interactionable components in software. Aside those concerns, the thesis aims to position Kataja as a digital research instrument by examining history of research instruments in general and diagramming tools in linguistics.
The study of diagramming tools shows that the development of software tools, especially parsers and treebanks, in linguistics has had a significant role in the separation of computational linguistics from generative linguistics: in general, when a new software tool is presented as an intended goal of research, the scope of its usefulness has been broader than just the theoretical questions that inspire the tool. These practical promises of usefulness can take over the tool development. Development of parser tools broke away from their original theoretical questions into parsing by any means necessary, creating the fields of computational linguistics and natural language processing. Tree diagrams were continuously used for testing, teaching and debugging parsers, while many assumptions behind these diagrams were rejected or replaced in generative theory.
Viewed from the generative tradition, diagramming tools employed in parser and treebank development belong to unfamiliar tool chains. To benefit from these tools, a computational linguist's approach to research would need to be emulated. Tree drawing tools developed in the generative tradition are few, and they are designed to support introductory courses, omitting recent approaches to tree construction, namely those based on the operation Merge. The divergence of computational linguistics from generative linguistics teaches that new tools import some aspects from the theory they are built upon, but assumptions of the theory that are not made explicit in the tool are easy to misconstrue.
With digital research instruments, the design space of what is explicitly labeled or immutable, and what is left out, or for users to define, is larger than with analog instruments. Biolinguistics needs a visualization tool that puts the operation Merge into focus, as it is in the recent theoretical work. As long as Merge is at the center, misconstruing or reinterpreting other aspects of the theory is not a risk for the advancement of biolinguistics. Emphasis on Merge is also necessary to escape the usage assumptions of computational linguistics' tool chains: if the tool supports exploring properties of Merge-based structures, it requires a new take on how this task would connect to other tools.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473871.23/warc/CC-MAIN-20240222225655-20240223015655-00862.warc.gz
|
CC-MAIN-2024-10
| 7,166 | 14 |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-1-foundations-for-algebra-1-4-properties-of-real-numbers-practice-and-problem-solving-exercises-page-27/22
|
code
|
Work Step by Step
Original expression: $4+(105x+5)$. Combine like terms: 4 and 5 are the only terms we can combine, because there are no other constant terms and $105x$ is the only term containing the variable. We can regroup and add them thanks to the associative and commutative properties of addition: $5+4=9$. The expression should now look like this: $105x+9$. It is impossible to simplify this further because all like terms have already been combined. So your final, simplified answer is $105x+9$.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100047.66/warc/CC-MAIN-20231129010302-20231129040302-00554.warc.gz
|
CC-MAIN-2023-50
| 433 | 2 |
https://learn.microsoft.com/da-dk/azure/search/search-indexer-overview
|
code
|
Indexers in Azure Cognitive Search
An indexer in Azure Cognitive Search is a crawler that extracts searchable content from cloud data sources and populates a search index using field-to-field mappings between source data and a search index. This approach is sometimes referred to as a 'pull model' because the search service pulls data in without you having to write any code that adds data to an index. Indexers also drive the AI enrichment capabilities of Cognitive Search, integrating external processing of content en route to an index.
Indexers are cloud-only, with individual indexers for supported data sources. When configuring an indexer, you'll specify a data source (origin) and a search index (destination). Several sources, such as Azure Blob Storage, have more configuration properties specific to that content type.
You can run indexers on demand or on a recurring data refresh schedule that runs as often as every five minutes. More frequent updates require a 'push model' that simultaneously updates data in both Azure Cognitive Search and your external data source.
Indexer scenarios and use cases
You can use an indexer as the sole means for data ingestion, or in combination with other techniques. The following table summarizes the main scenarios.
| Scenario | Description |
| Single data source | This pattern is the simplest: one data source is the sole content provider for a search index. Most supported data sources provide some form of change detection so that subsequent indexer runs pick up the difference when content is added or updated in the source. |
| Multiple data sources | An indexer specification can have only one data source, but the search index itself can accept content from multiple sources, where each indexer run brings new content from a different data provider. Each source can contribute its share of full documents, or populate selected fields in each document. For a closer look at this scenario, see Tutorial: Index from multiple data sources. |
| Multiple indexers | Multiple data sources are typically paired with multiple indexers if you need to vary run time parameters, the schedule, or field mappings. Cross-region scale out of Cognitive Search is another scenario. You might have copies of the same search index in different regions. To synchronize search index content, you could have multiple indexers pulling from the same data source, where each indexer targets a different search index in each region. Parallel indexing of very large data sets also requires a multi-indexer strategy, where each indexer targets a subset of the data. |
| Content transformation | Indexers drive AI enrichment. Content transforms are defined in a skillset that you attach to the indexer. |
Supported data sources
Indexers crawl data stores on Azure and outside of Azure.
- Azure Blob Storage
- Azure Cosmos DB
- Azure Data Lake Storage Gen2
- Azure SQL Database
- Azure Table Storage
- Azure SQL Managed Instance
- SQL Server on Azure Virtual Machines
- Azure Files (in preview)
- Azure MySQL (in preview)
- SharePoint in Microsoft 365 (in preview)
- Azure Cosmos DB for MongoDB (in preview)
- Azure Cosmos DB for Apache Gremlin (in preview)
Indexers accept flattened row sets, such as a table or view, or items in a container or folder. In most cases, an indexer creates one search document per row, record, or item.
Indexer connections to remote data sources can be made using standard Internet connections (public) or encrypted private connections when you use Azure virtual networks for client apps. You can also set up connections to authenticate using a managed identity. For more information about secure connections, see Indexer access to content protected by Azure network security features and Connect to a data source using a managed identity.
Stages of indexing
On an initial run, when the index is empty, an indexer will read in all of the data provided in the table or container. On subsequent runs, the indexer can usually detect and retrieve just the data that has changed. For blob data, change detection is automatic. For other data sources like Azure SQL or Azure Cosmos DB, change detection must be enabled.
For each document it receives, an indexer implements or coordinates multiple steps, from document retrieval to a final search engine "handoff" for indexing. Optionally, an indexer also drives skillset execution and outputs, assuming a skillset is defined.
Stage 1: Document cracking
Document cracking is the process of opening files and extracting content. Text-based content can be extracted from files on a service, rows in a table, or items in container or collection. If you add a skillset and image skills, document cracking can also extract images and queue them for image processing.
Depending on the data source, the indexer will try different operations to extract potentially indexable content:
When the document is a file with embedded images, such as a PDF, the indexer extracts text, images, and metadata. Indexers can open files from Azure Blob Storage, Azure Data Lake Storage Gen2, and SharePoint.
When the document is a record in Azure SQL, the indexer will extract non-binary content from each field in each record.
When the document is a record in Azure Cosmos DB, the indexer will extract non-binary content from fields and subfields from the Azure Cosmos DB document.
Stage 2: Field mappings
An indexer extracts text from a source field and sends it to a destination field in an index or knowledge store. When field names and data types coincide, the path is clear. However, you might want different names or types in the output, in which case you need to tell the indexer how to map the field.
To specify field mappings, enter the source and destination fields in the indexer definition.
Field mapping occurs after document cracking, but before transformations, when the indexer is reading from the source documents. When you define a field mapping, the value of the source field is sent as-is to the destination field with no modifications.
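As a sketch, with invented field names, the mapping section of an indexer definition is just source/target pairs:
"fieldMappings": [
  { "sourceFieldName": "_id", "targetFieldName": "id" },
  { "sourceFieldName": "desc", "targetFieldName": "description" }
]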
Stage 3: Skillset execution
Skillset execution is an optional step that invokes built-in or custom AI processing. Skillsets can add optical character recognition (OCR) or other forms of image analysis if the content is binary. Skillsets can also add natural language processing. For example, you can add text translation or key phrase extraction.
Whatever the transformation, skillset execution is where enrichment occurs. If an indexer is a pipeline, you can think of a skillset as a "pipeline within the pipeline".
Stage 4: Output field mappings
If you include a skillset, you'll need to specify output field mappings in the indexer definition. The output of a skillset is manifested internally as a tree structure referred to as an enriched document. Output field mappings allow you to select which parts of this tree to map into fields in your index.
Despite the similarity in names, output field mappings and field mappings build associations from different sources. Field mappings associate the content of source field to a destination field in a search index. Output field mappings associate the content of an internal enriched document (skill outputs) to destination fields in the index. Unlike field mappings, which are considered optional, an output field mapping is required for any transformed content that should be in the index.
The next image shows a sample indexer debug session representation of the indexer stages: document cracking, field mappings, skillset execution, and output field mappings.
Indexers can offer features that are unique to the data source. In this respect, some aspects of indexer or data source configuration will vary by indexer type. However, all indexers share the same basic composition and requirements. Steps that are common to all indexers are covered below.
Step 1: Create a data source
Indexers require a data source object that provides a connection string and possibly credentials. Data sources are independent objects. Multiple indexers can use the same data source object to load more than one index at a time.
You can create a data source using any of these approaches:
- Using the Azure portal, on the Data sources tab of your search service pages, select Add data source to specify the data source definition.
- Using the Azure portal, the Import data wizard outputs a data source.
- Using the REST APIs, call Create Data Source.
- Using the Azure SDK for .NET, call SearchIndexerDataSourceConnection class
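Taking the REST option as an example, the Create Data Source call accepts a definition along these lines (service name, key, connection string, and object names are placeholders, and the api-version may differ):
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name": "hotels-sql-ds",
  "type": "azuresql",
  "credentials": { "connectionString": "Server=..." },
  "container": { "name": "hotels" }
}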
Step 2: Create an index
An indexer will automate some tasks related to data ingestion, but creating an index is generally not one of them. As a prerequisite, you must have a predefined index that contains corresponding target fields for any source fields in your external data source. Fields need to match by name and data type. If not, you can define field mappings to establish the association.
For more information, see Create an index.
Step 3: Create and run (or schedule) the indexer
An indexer definition consists of properties that uniquely identify the indexer, specify which data source and index to use, and provide other configuration options that influence run time behaviors, including whether the indexer runs on demand or on a schedule.
Any errors or warnings about data access or skillset validation will occur during indexer execution. Until indexer execution starts, dependent objects such as data sources, indexes, and skillsets are passive on the search service.
For more information, see Create an indexer
After the first indexer run, you can rerun it on demand or set up a schedule.
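A schedule is part of the indexer definition itself. A minimal sketch (names are placeholders; the interval is an ISO 8601 duration that, as noted earlier, can be as short as five minutes):
{
  "name": "hotels-sql-indexer",
  "dataSourceName": "hotels-sql-ds",
  "targetIndexName": "hotels-index",
  "schedule": { "interval": "PT2H" }
}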
You can monitor indexer status in the portal or through Get Indexer Status API. You should also run queries on the index to verify the result is what you expected.
Indexers don't have dedicated processing resources. Based on this, indexers' status may show as idle before running (depending on other jobs in the queue) and run times may not be predictable. Other factors define indexer performance as well, such as document size, document complexity, image analysis, among others.
Now that you've been introduced to indexers, a next step is to review indexer properties and parameters, scheduling, and indexer monitoring. Alternatively, you could return to the list of supported data sources for more information about a specific source.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00773.warc.gz
|
CC-MAIN-2023-14
| 10,327 | 66 |
https://sourceforge.net/p/mysql-python/discussion/70461/thread/2b2bedeb/
|
code
|
how can i compare the result of a fetchone() with a string?
this is my query and fetch. PetName is a string.
cursor.execute("SELECT AnimalName FROM Animals WHERE UPPER(AnimalName) = '%s'"
max = cursor.fetchone()
the print statement and the return:
print "Animal:%s Pet:%s" % (max, PetName)
Animal: Pet: Hamster
now i want to compare the two:
if max == PetName:
match will never increment :(
i tried max[0] but got the error:
TypeError: 'NoneType' object is unsubscriptable
any suggestions or hints?
If max is None, that means you hit the end of the result set. fetchone() returns a tuple for each row returned.
The real problem is your query. Rewrite it like this:
cursor.execute("SELECT AnimalName FROM Animals WHERE AnimalName = %s", (PetName,)
MySQL is normally case-insensitive (depends on the default character set and collation you have configured). But more importantly, you were not passing parameters correctly, which is why you got None instead of a row (tuple).
I receive an error when writing the query like this:
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Monkey's' at line 1
I have to query with the '%s', no?
You have to lose the Python % operator in your original query. Even though MySQLdb uses %s as a parameter placeholder, it is not doing straight string substitution. The values you pass are quoted as needed.
I'd also say, since you are only testing for the presence of that row in the database, you can and probably should do this:
if max is not None:
    print max  # the name, but not necessary to test it, the query did that
    match += 1
okay, i now use this query:
cursor.execute("SELECT AnimalName FROM Animals WHERE AnimalName = %s", (PetName,))
max[0] is still giving a: TypeError: 'NoneType' object is unsubscriptable
the print of max is and not the string Hamster
== Hamster is still not true
is the data in my database the reason?
fetchone() returns tuples, not strings, even if you only selected a single column. When it returns None, you've come to the end of the result set.
I recommend reading http://www.python.org/dev/peps/pep-0249/
i understand the concept of how the return of fetchone() works, but shouldn't I be able to access the first item in a tuple via tuple[0]?
Yes, you should, unless fetchone() returned None, which it eventually will if you call it enough times.
Thank you. I appreciate your help!
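Putting the advice in this thread together, a minimal working version of the check looks like this (connection details are placeholders; the Animals table and PetName are from the posts above):
import MySQLdb

# Placeholder connection details
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="pets")
cursor = conn.cursor()

PetName = "Hamster"
match = 0

# Let the driver quote the parameter: no % operator, no manual quotes
cursor.execute("SELECT AnimalName FROM Animals WHERE AnimalName = %s", (PetName,))

row = cursor.fetchone()  # a tuple like ('Hamster',), or None if no row matched
if row is not None:
    match += 1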
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688997.70/warc/CC-MAIN-20170922145724-20170922165724-00121.warc.gz
|
CC-MAIN-2017-39
| 2,461 | 47 |
https://www.wolfssl.com/forums/post5856.html
|
code
|
Topic: build support available?
Hello to all at wolfSSL + community.
I've been tasked with evaluating options which could provide SSL capacity to our products.
The question is: what degree of support might we anticipate when it comes to building wolfSSL on our platform.
Initial attempts have thrown up problems with the older gcc/autotool/hardware in our environment.
It would be great to be able to report that, if we went with wolfSSL, support for building within our environment is available.
Thanks for your time!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510781.66/warc/CC-MAIN-20231001041719-20231001071719-00649.warc.gz
|
CC-MAIN-2023-40
| 516 | 7 |
http://www.moddb.com/members/john-silver/comments
|
code
|
This member has provided no bio about themself...
This Su-34 has become 20% cooler.
Hey, TimeS. Have some important message for you from an author of one of What-If plane designs. Check your PM, please.
Thanks! PM sent. Warning, long post (due to many illustrations) incoming.
Hey, TimeS, I know you're busy lately, but I have a detailed question about the Flanker family (it concerns the Su-27SM\SMK and Su-37).
Can I PM you? (I know you check the front page more often than PMs, so I'm asking here.)
You'll have to wait and see. It's a surprise, but a pleasant one.
TimeS, two more very minor notes about the editor. Not critical, but still would be better to fix those.
1) Editor text input sections (dialogue, briefing) do not support colon or semicolon input; it doesn't cause an error, the button just does nothing.
2) While you did fix lowercase "w" on the HUD, lowercase "x" is now showing up as "w". (Like "Ewit point" instead of "Exit point").
Thanks. Both the scripting advice and the editor improvement are much appreciated.
On the subject of the mission editor: TimeS, the editor really needs a "Unit is dead" trigger. "Unit: HP" is not always working right.
Will it be in April, at least?
I really hoped that plane would be in this update.
Enjoying the update so far. By the way, I think special test-purpose explosion effects were not turned off. There are red-light effects when missile hits the water.
Also, I remember you mentioning "Su-27SM3(with the jammer pods in the wing tips)" was it moved to the next update?
That is not true.
Xpand has also made models of XF-108 and Il-76 for VTRP project.
Joke skins FTW!
Also, because I can't really help myself
"Reality is an illusion, the universe is a hologram, buy gold".
This is a known bug IIRC, and the developers have been notified of it.
You can try editing mission file manually with a text editor (I recommend Notepad++). Although, I don't know how that works with voicepacks.
Update on on Silent Flanker situation:
(I really hate how this forum works sometimes).
Not sure if it's latest update's issue or not, but hostile aircraft seem to ignore me and other friendly fighters if there are friendly SAM\AAA around.
Even if their NPC mission has "Fighters" or "Aircraft" set as target priority.
Also, found another editor bug. After launching a mission\exiting the editor, the Condition limiter resets to "1".
TimeS, I've got a question about the new "Squad" field (Page 3 unit properties\Player B) in the editor. Is it a placeholder? I can't find a way to link units to the squad.
This pack concluded my plans for Zipang. It is, however, entirely possible to make a grey unmarked skin for F-14A+ on your own, using the existing files, as the colors and 95% of mapping are the same.
I will start working on the Knights of Roynd table, once all their planes are added, so I can release them as a single package.
Give me the decals, and I'll make it. :)
Um, to clarify.
The game doesn't have a way to make AIs ignore the player's stealth now, but you're considering adding that option, right?
Also, by "targeting player" I meant that it doesn't have to be tied to ignoring player stealth, but rather be the NPC mission for enemy ace squadrons, so that they would target the player specifically, even when there are other friendly units present. (Alternately, it could be realized through adding a "player" target priority setting for "Destroy enemies" missions).
TimeS, I've got a question.
I'm making a mission that has two stealth aircraft face each other. Is there any way I can make an enemy AI ace ignore my (the player's) stealth capabilities? Too much chance of losing track of each other otherwise.
Also, I think editor would benefit from NPC mission "pursue player" (or "pursue specific unit" with "player" as subsetting), that would make enemy go after the player, while ignoring other friendly units.
Actually, I believe, they based it on T-10M (T-10M-1, to be precise), which is also been known as Su-35 at the time.
MiG-29UB challenge appears to be broken. It doesn't end after all F-15 ACTIVE's are gone.
Note 1: Since F-15 ACTIVE avoids close combat my only viable tactics of dealing with them was to force them to crash into mountains, while chasing me. Than may be the reason.
Note 2: REALLY need to have mission borders visible on the normal radar. Failed the challenge the first time because the bastard ran off the map. Also, it'd be very convenient if enemy missiles' radar signatures showed up above enemy signatures.
F-15:Unlock F-16A Block 5.
F-22: Ulnock two of three (F-15C, XST, F-117A).
Press F11 in the challenge menu, then play challenges (incubator branch).
Yeah, like Su-34, Su-27SM3, F-4E and F-16 Block 60 (F\I).
TimeS, is the new update is coming this week or next week (or later)?
I've also wanted to inquire about more production planes being added - prototypes and rare models are good and all, but the last few updates contained either those or bombers. So when can we expect the more common ones? (And new normal weapons like missiles and bombs).
Thanks in advance.
TimeS, a question. What program do you use to make textures? Specifically, how do you draw panel lines?
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645348533.67/warc/CC-MAIN-20150827031548-00123-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 5,151 | 50 |
http://secondthoughts.typepad.com/second_thoughts/2012/10/whats-wrongly-and-rightly-about-kitely.html
|
code
|
Everybody knows what "history" is in your browser. But that's *in your browser*. This isn't a browser; it's a web page. And everybody knows that on PayPal or Second Life, there is a *history* of your transactions, but it isn't called "history," it's called just that: "past transactions" or "my account" or "bill history". It's never just called "history" since that is AMBIGUOUS.
If you don't know already that a virtual world has a system whereby it will give you a free 114 minutes, then debit you as you visit each world, then how can you POSSIBLY know that a tab called HISTORY is going to have a record of all your world visits (!), and isn't just the *corporate* history of that company?!
And there is nothing on here to tell you there will be a meter running on your time in worlds with the free account. Nothing to explain that the "history" tab is a record of all the worlds you've visited -- and the metered time you have in this world.
This is all you can see on the map.
You can't really find anything by scrolling on the map. You can try it for minutes on end -- not a solution.
Since I know there are sims named "fun" I will try to find them in search -- because search just doesn't seem to work at all!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122955.76/warc/CC-MAIN-20170423031202-00531-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,212 | 6 |
https://developer.worldpay.com/docs/wpg/reference/aboutthisguide/
|
code
|
Created and rewritten from the combination of these guides:
© Worldpay 2018. All rights reserved.
This document and its content are proprietary to Worldpay and may not be reproduced, published or resold. The information is provided on an "AS IS" basis for information purposes only and Worldpay makes no warranties of any kind including in relation to the content or sustainability. Terms and Conditions apply to all our services.
Worldpay (UK) Limited (Company No. 07316500 / FCA No. 530923), Worldpay Limited (Company No. 03424752 / FCA No. 504504), Worldpay AP Limited (Company No. 5593466 / FCA No. 502597). Registered Office: The Walbrook Building, 25 Walbrook, London EC4N 8AF and authorised by the Financial Conduct Authority under the Payment Service Regulations 2009 for the provision of payment services. Worldpay (UK) Limited is authorised and regulated by the Financial Conduct Authority for consumer credit activities.
Worldpay, the logo and any associated brand names are all trade marks of the Worldpay group of companies.
We use Google code-prettify to syntax highlight the code examples in this guide.
Copyright 2018 Worldpay
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for specific language governing permissions and limitations under the License.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888402.81/warc/CC-MAIN-20201025070924-20201025100924-00039.warc.gz
|
CC-MAIN-2020-45
| 1,836 | 14 |
https://www.sqlstad.nl/scripting/powershell-module-pssqllib/
|
code
|
PowerShell is very good for retrieving information from SQL Server. The problem is that most of the time we have to create the scripts every time or load them from a previous file. Because I hate to do things twice I created a PowerShell module for SQL Server which enables me to retrieve information from my instances with a few simple commands and is called PSSQLLib.
Due to the immense popularity and professionalism of the dbatools module, I decided to no longer actively develop this module. The main reason is that I think dbatools is a wonderful project and I could never achieve the same quality and results on my own. Because of that I will put my efforts into the dbatools module.
Is uses the SQL Server SMO for retrieving information from SQL Server and uses the WMI to get information from the host.
The module is definitely not done and additions can be made. If you have any functionality that could be included please send it to me and I’ll include it in the library.
The library has the following features:
- Export database objects
- Export SQL Server objects
- Get the host hard-disk information
- Get the host hardware
- Get the hosts SQL Server services
- Get the host operating system information
- Get the host up-time
- Get the SQL Server Agent jobs
- Get the SQL Server backups
- Get the SQL Server configuration settings
- Get the SQL Server databases
- Get the SQL Server database files
- Get the SQL Server database privileges
- Get the SQL Server database users
- Get the SQL Server disk latencies
- Get the SQL Server instance settings
- Get the SQL Server privileges
- Get the SQL Server up-time
The module needs at least PowerShell version 3.0 installed.
Additionally, you need the SQL Server SMO installed. If you have SQL Server Management Studio installed on the machine you're running this module from, you don't have to do anything.
If it's not installed, you can download and install the SQL Server Feature Pack. The version for 2014 can be found here.
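If you are not sure whether SMO is present, a quick probe like the following will tell you (a minimal sketch; LoadWithPartialName is deprecated but still works for a one-off check):
# Try to load the SMO assembly; $null means SMO is not installed.
$smo = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
if ($smo) { "SMO $($smo.GetName().Version) found" } else { "SMO not found" }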
To install the module, open a PowerShell command window and enter the following script:
(new-object Net.WebClient).DownloadString("https://github.com/sanderstad/PSSQLLib/raw/master/GetPSSQLLib.ps1") | iex
Alternative installation method
Alternatively you can download the module from here.
Unzip the file.
Make a directory (if not already present) named “PSSQLLib” in one of the following standard PowerShell Module directories:
- $Home\Documents\WindowsPowerShell\Modules (%UserProfile%\Documents\WindowsPowerShell\Modules)
- $Env:ProgramFiles\WindowsPowerShell\Modules (%ProgramFiles%\ WindowsPowerShell\Modules)
- $Systemroot\System32\WindowsPowerShell\v1.0\Modules (%systemroot%\System32\ WindowsPowerShell\v1.0\Modules)
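To see exactly which directories your copy of PowerShell searches for modules, you can split the PSModulePath environment variable:
# List the module search paths on this machine.
$env:PSModulePath -split ';'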
Place both the “psd1” and “psm1” files in the module directory created earlier.
Execute the following command in a PowerShell window to import the module:
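For a module named PSSQLLib that command is:
Import-Module PSSQLLib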
To check whether the module was imported correctly, execute one of the following commands:
Get-Command -Module PSSQLLib
Get-Module -Name PSSQLLib
If you see a list of the functions, the module is installed successfully. If nothing shows up, something has gone wrong.
Functions and examples
The following functions make all this possible:
This function will retrieve the disk, volume name, free space in MB, size in MB, and the percentage used. An example of the result:
This function will retrieve hardware information from the host, like the number of logical processors, physical memory, the model, etc. An example of the result:
This function retrieves information about the operating system of the host, like the architecture, the version, the free physical memory, the free space in the paging file, etc. An example of the result:
This function retrieves all the configurations of the SQL Server instance with their possible values and the currently configured and running values. An example of the result:
This function gets information about the databases present on the SQL Server instance. It retrieves a lot of information about the databases, most of which isn't shown in the example below.
This function retrieves all the data and log files for all the databases with their size and location.
This function retrieves all the logins that are users in a database.
This function retrieves all the database users with their privileges summed up for all the database roles assigned to the user.
This function retrieves valuable information about the settings of the instance like the default file and log directory, the collation etc.
This function retrieves all the jobs that are present in the SQL Server Agent.
This function is similar to the database privileges function but looks up all the server roles for each login.
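As a usage sketch, here is what calling a few of these functions might look like. The function and parameter names below are assumptions inferred from the feature list rather than taken from the module source, so run Get-Command -Module PSSQLLib to see the real ones:
# Hypothetical calls -- verify the actual names with Get-Command -Module PSSQLLib.
Get-HostHarddisk -hst 'SQLSERVER01'          # disk, volume name, free space and size
Get-SQLDatabases -inst 'SQLSERVER01\SQL1'    # databases on a named instance
Get-SQLAgentJobs -inst 'SQLSERVER01\SQL1'    # SQL Server Agent jobs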
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573197.34/warc/CC-MAIN-20220818124424-20220818154424-00720.warc.gz
|
CC-MAIN-2022-33
| 4,695 | 53 |
https://flytrapcare.com/phpBB3/post424870.html
|
code
|
Location: Long Island, NY
Joined: Sun Jul 07, 2013 4:16 pm
Donald, Donald. Here is my answer true: I'm not crazy over the likes of you! If you can't afford a carriage, you can forget your blooming marriage! 'Cause I'll be damned if I'll be crammed on a bicycle built for two!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00389.warc.gz
|
CC-MAIN-2023-06
| 275 | 3 |
http://www.linuxforums.org/forum/newbie/166293-solved-textmode-vs-graphical-2.html
|
code
|
Originally Posted by hazel Well, in this forum it goes by the number of posts, not by whether you can install Gentoo or not (and btw coffee lounge posts don't ...
- 07-03-2010 #11
- 07-04-2010 #12
If the former, I've never really found a big difference. Graphical installer UIs can be more convenient. Usually, installers have a less complete range of hardware support than the final installed OS does. If your video hardware or input devices (mouse/keyboard) aren't supported by the installer, the text mode interface is there as a fallback. I've never seen a text mode installer that didn't let you get to the same place the graphical installer would.
I'm a full-on commandline advocate, but that doesn't mean I'm likely to ever install Linux as a text-only installation. Even when I set up a host that I will run headless, I still install the GUI, and then run the host in runlevel 3 (text mode). When I want a commandline, I probably want a bunch of them, and using terminal emulators with good copy & paste support, tabbed windowing, etc. is too helpful to give up.
The thing I want in linux more than anything else is a tool that lets me easily find which tab on which konsole window on which desktop contains the session that I'm looking for at any particular instant. All text/commandline stuff, but still making heavy use of a GUI.
--- rod. Stuff happens. Then stays happened.
- 07-05-2010 #13
Like many newbie questions, mine was based on an assumption that proved false. I thought text mode install meant you had to know how to use the command line to install that distro. After this thread corrected my ignorance, I tried a couple of text mode install distros and couldn't really tell any big difference. Thanks for educating me.
- 07-08-2010 #14
Text mode only seems more difficult than the graphical install because it looks more basic. The difficulty, however, isn't that different from graphical, because in most installers things are done through menus, which work quite simply.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447819.35/warc/CC-MAIN-20141017005727-00352-ip-10-16-133-185.ec2.internal.warc.gz
|
CC-MAIN-2014-42
| 2,132 | 15 |
https://news.illinoisstate.edu/2014/04/school-information-technology-names-new-director/
|
code
|
Mary Elaine Califf has accepted the position of director of the School of Information Technology. Califf has been serving as the interim director; her permanent duties will begin July 1.
Califf received her B.A., M.A., and M.S. from Baylor University. She earned her Ph.D. in computer science from the University of Texas at Austin and began teaching at Illinois State University in 1998.
Her primary research interests are in artificial intelligence, particularly in the areas of machine learning and natural language processing, and she teaches the school’s Introduction to Artificial Intelligence course. She has also published papers on computer science education, primarily concerning teaching beginning programming courses.
Califf’s current research projects involve applying learning algorithms to natural language processing tasks and data analysis.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00545.warc.gz
|
CC-MAIN-2021-17
| 861 | 4 |
https://blog.collabware.com/2013/12/02/the-advantages-of-document-sets-and-metadata-sharing
|
code
|
One of the huge advantages of Document Sets is their ability to be treated as a single item. Collabware CLM leverages this feature and allows users to classify, declare, and move Document Sets like any other record, while maintaining the documents within the Document Set as a single entity.
But there is another advantage to using Document Sets that may not be known to some users, and this is the feature of Metadata Sharing. Document Sets can be configured to push down their metadata to documents inside, and any updates made to the Document Set will be reflected on all of the documents. The best use of this feature is for searching purposes. While all documents in the Document Set may share the same metadata, search still becomes that much more powerful; more metadata to search on will always aid your users. Likewise, it cuts down on time spent by users entering the same metadata for every document.
Setting up Metadata Sharing is quite simple. The main requirement is that the Document Set content type and the Document content type use the same metadata columns. Once that is done, you simply have to turn on Metadata Sharing on the Document Set.
To do this, simply click on the Document Set Content Type in the Content Types area of Site Settings.
And then click on the Document Set Settings link.
Document Set Settings has many different options that can be useful, so take the time to review each one. The section we are looking for is the Shared Columns section.
Once you check a Metadata Column, the Document Set will push down the value into the same column located on any documents inside the Document Set. If you add a document that doesn’t have these Metadata Columns, the metadata won’t be pushed down.
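If you would rather script this than click through the UI, the same setting can be flipped with the SharePoint server object model. A minimal sketch, assuming a site at http://sharepoint/sites/records with a "Case File" Document Set content type and a "Case Number" site column (all three names are made up for illustration):
# Run in the SharePoint Management Shell on a farm server.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.DocumentManagement")
$web = Get-SPWeb "http://sharepoint/sites/records"   # hypothetical site URL
$ct = $web.ContentTypes["Case File"]                 # hypothetical Document Set content type
$template = [Microsoft.Office.DocumentManagement.DocumentSets.DocumentSetTemplate]::GetDocumentSetTemplate($ct)
# Share the column so its value is pushed down to every document in the set.
$template.SharedFields.Add($web.Fields["Case Number"])
$template.Update($true)                              # $true pushes the change down to child content types
$web.Dispose()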
Now, when a new Document Set is created, all of the important metadata can be added on the Document Set at the time of creation.
And as soon as a new document is added to the Document Set, it instantly has the metadata synced.
The only drawback to the Shared Metadata feature is that the Document Set will always force down its metadata. This means if a user tries to manually enter a different value on a document, the Document Set will overwrite it.
Beyond this, the Shared Metadata feature is an excellent option for organizations wanting to have consistent metadata for their case files, and will increase the user-friendliness and searchability of SharePoint.
We hope you found this helpful! Want to learn more about metadata sharing? Check out our article on 6 Ways Exporting Metadata Can Bring Organizational Value. To find out more about Collabware CLM, download our free brochure:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00067.warc.gz
|
CC-MAIN-2024-18
| 2,628 | 11 |