url | tag | text | file_path | dump | file_size_in_byte | line_count |
---|---|---|---|---|---|---|
http://docwiki.cisco.com/wiki/Cisco_Unified_Presence,_Release_7.x_--_utils_fior_enable_CLI_Command | code | Cisco Unified Presence, Release 7.x -- utils fior enable CLI Command
Main page: Cisco Unified Presence, Release 7.x
The File I/O Reporting service provides a kernel-based daemon for collecting file I/O per process. This command configures the service to start automatically when the machine boots.
Note: This command does not start the service until the next reboot; to start it immediately, use utils fior start.
utils fior enable
Command privilege level: 1
Allowed during upgrade: Yes | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742793.19/warc/CC-MAIN-20181115161834-20181115183834-00072.warc.gz | CC-MAIN-2018-47 | 463 | 7 |
https://www.ifsqn.com/forum/index.php/topic/40280-shelf-life-of-barbeque-sauce/ | code | Hello IFSQN members, my team is reformulating a barbeque sauce taking the Biocoop barbeque sauce as the reference. For our product, the major ingredients are beetroot sugar, beetroot syrup, tomato concentrate, vinegar, onion powder. We need to find the estimated shelf life of the product based on risk assessment. We also need to consider the most suitable packaging material: I am thinking of considering a 250ml glass container with plastic cap or tin cap with plastic liner material (eg. plastisol) and a paper label. Usually the shelf life of unopenened barbeque sauce is 4-6 months in room temperature. Please suggest how can I carry out the risk assessment. Thank you in advance.
Edited by Jacob Timperley, 27 January 2021 - 06:58 PM. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585737.45/warc/CC-MAIN-20211023162040-20211023192040-00560.warc.gz | CC-MAIN-2021-43 | 741 | 2 |
https://projectfunction.io/about-us | code | ProjectFunction came about as a result of how difficult it is for beginners to pick up web development and design. After taking part as instructors in a recent web development course aimed at encouraging more women into tech, we discovered that the resources which were provided by the organizer were either outdated or incorrect. And as minorities in tech with some years of experience, we felt quite let down by the organizers -even worse, we felt like we were letting down those we were meant to be helping.
After some further research into the organization, we realized that the resources were so bad because the motive was not to educate, but to encourage women into tech by making it appear easier than it is - just to be able to have the statistics to say they "helped" women get into the tech industry. We felt like we had to do something; hence ProjectFunction was born: a small startup that aims to educate and encourage minorities to get involved in the tech industry.
Make 2020 the year you code!
Our aim is simple; we want to provide a platform where ideas, step-by-step tutorials and pointers can be shared by creators, learners and designers, in order to encourage and support minorities in tech. We run courses on Web Development, Web Design, and User experience. Unlike other organisations in this field, our courses are open to anyone with an interest in learning, regardless of their age or gender. We try especially hard to create an environment where minorities in tech feel welcomed, and our courses are designed to be comprehensive and easy to pick up by anyone, regardless of their skill-set.
In addition to providing free courses, we strive to maintain an online platform where thoughtful and educational exchanges of ideas can take place. As of January 2019, we launched our free online tools to enhance our members' learning experience.
We value all feedback, and are open to suggestions. We will try our best to implement any changes if there is enough demand for it. And where necessary we will create community threads to discuss or collect feedback on new features or ideas. Together, we hope to shape this platform into a community-driven space, where content is kept up to date, accessible, and free.
Lets do this! 💪 | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370509103.51/warc/CC-MAIN-20200402235814-20200403025814-00026.warc.gz | CC-MAIN-2020-16 | 2,274 | 7 |
https://plug.org/pipermail/plug/2007-November/015241.html | code | Clone Failing Hard Drive?
nick at leippe.com
Thu Nov 8 10:27:43 MST 2007
Here's one way:
1) get the new drive installed, and boot from a rescue/live CD such as
Damn Small Linux or a Gentoo install CD, etc.
2) copy the boot record (including partition table) to the new drive:
dd if=<old drive> of=<new drive> bs=512 count=1
3) resize the partitions on the new drive as desired:
fdisk <new drive>
4) use dd (or dd_rescue if necessary) to copy the old drive, partition
by partition to their new location on the new drive.
5) if you enlarged the partitions on the new drive, you can enlarge
the filesystems using their respective tools.
6) if you moved the boot partition, you may need to reinstall your
bootloader, if you left your boot partition the same, (at least for
grub) you shouldn't have to do anything
7) mount the filesystems from their new location and test it
8) power down, move the new drive to be the boot drive, remove the old
drive, and you should be good to go.
Another way would be to simply do steps 1-3, then create new filesystems on
the new drive, and use tar to move the data from one drive to the other. This
could potentially be faster. (Especially if you use the dd double-buffer
trick to speed it up.)
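A condensed shell sketch of steps 1-8 above (the device names are hypothetical assumptions: /dev/sda as the old drive and /dev/sdb as the new one; check yours with fdisk -l before running anything destructive):

```sh
# 2) copy the boot record + partition table (hypothetical device names)
dd if=/dev/sda of=/dev/sdb bs=512 count=1

# 3) adjust the partitions on the new drive as desired
fdisk /dev/sdb

# 4) copy partition by partition (use dd_rescue instead if the old drive has bad sectors)
dd if=/dev/sda1 of=/dev/sdb1 bs=4M conv=noerror,sync

# 5) if a partition was enlarged, grow its filesystem (ext2/3 example)
e2fsck -f /dev/sdb1 && resize2fs /dev/sdb1

# 7) mount and sanity-check before swapping drives
mount /dev/sdb1 /mnt && ls /mnt
```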
| s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887535.40/warc/CC-MAIN-20180118171050-20180118191050-00471.warc.gz | CC-MAIN-2018-05 | 1,257 | 25 |
https://hi.service-now.com/kb_view.do?sysparm_article=KB0754339 | code | HR tasks with type "Submit Order Guide" may generate a null pointer exception on the Service Portal "hrj_ticket_page".
Steps to Reproduce
1. Set the glide.sc.sp.twostep property to false.
2. Install Employee Service Center and Lifecycle Events. Kick off a lifecycle event that includes a "Submit Order Guide" task.
3. Impersonate the user with subject person associated to the HR task.
4. Open the record in "hrj_ticket_page" as the subject person.
5. Submit the order guide. Observe the null pointer exception.
After carefully considering the severity and frequency of the issue, and the cost and risk of attempting a fix, it has been decided to not address this issue in any current or near future releases. We do not make this decision lightly, and we apologize for any inconvenience.
The workaround consists of the following:
- Set the property "glide.sc.sp.twostep" to true
Or, alternatively:
- In the SC Order Guide widget, replace:
data.sys_id = $sp.getParameter("sys_id");
with the following fallback chain (fragments as given; see the consolidated sketch after this list):
data.sys_id = input.sys_id;
else if (options.sys_id)
data.sys_id = options.sys_id;
data.sys_id = $sp.getParameter("sys_id")
- In the SC Order Guide widget, under the following line in the client controller:
$scope.data.action = 'checkout_guide';
add:
$scope.data.requested_for_id = c.options.requested_for_id;
and under the following line in the server script:
- In the HRJ Task Submit Order Guide widget, in the client controller, change the following line:
$scope.data.childCaseId = childCase.sys_id;
to:
$scope.data.childCaseId = childCase.sys_id || childCase.request_id;
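Read together, the SC Order Guide fragments above appear to describe a sys_id fallback chain. A consolidated sketch of that chain follows; this is only one plausible reading, has not been verified against the actual widget server script, and the surrounding if/else structure is an assumption:

```javascript
// Sketch of the sys_id fallback suggested by the KB fragments above.
// Assumes this runs in the widget's server script, where `input`, `options`
// and `$sp` are provided by the Service Portal framework.
if (input && input.sys_id)
    data.sys_id = input.sys_id;                 // value posted back from the client
else if (options.sys_id)
    data.sys_id = options.sys_id;               // value passed in as a widget option
else
    data.sys_id = $sp.getParameter("sys_id");   // fall back to the URL parameter
```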
Related Problem: PRB1346339 | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141203418.47/warc/CC-MAIN-20201129214615-20201130004615-00263.warc.gz | CC-MAIN-2020-50 | 1,574 | 25 |
https://staging.sparkpost.com/blog/why-we-hack/ | code | Internal hackathons have become a common practice for tech companies these days. Here at SparkPost, we hold two hackathons a year and are happy to have made them a long standing tradition. So the question is, why do we hack? What value do we see in taking people away from their normal work and having them spend two days on completely different projects?
The first benefit of internal hackathons is the social aspect. During hackathons you get to work with people from other teams, people you’ve possibly never worked with before. There is a lot of value in forming new friendships and strengthening bonds by working toward a common goal together. Hackathons also let you see your coworkers in a new light. Maybe someone has a real talent for something and only during a hackathon do you get to see them excel in their area of expertise. Learning people’s hidden strengths makes them more valuable to the company and allows them to grow even further in areas that are of interest to them.
Creativity is a second important benefit. Hackathons allow people to unleash their creativity. Taking a break from your normal workload for a couple of days can help refresh and reset the mind allowing you to tackle your normal projects with renewed vigor when you get back to it on Monday. Engineers often encounter small problems during their normal work day but don’t have time to fix them. Or they see ways in which their project or product can be improved but don’t have the bandwidth to expand on those ideas. Hackathons provide a space for people to voice their ideas and get immediate proof of concept.
All of this leads to the third important benefit, hackathons encourage innovation which can be invaluable to a company. So many people have great ideas for ways to improve the product they’re working on and hackathons give them the opportunity to delve into that idea and potentially get it to production. At SparkPost several of our hackathon projects have made it to production including our Zapier integration, HEML, our A/B testing API, our internal customer service app, and more. In addition to producing customer facing products, hackathons also allow the time needed for people to improve architecture and problem solve.
All in all, hackathons are great for our company. At SparkPost, we highly encourage cross team and cross company camaraderie and teamwork. We highly value creativity and praise innovation. Hackathons allow people to look up from their computers and see what could be in the future. It gives them a chance to focus on the bigger picture and flex their inventive muscles. We’ve done ten hackathons so far and we can’t wait to see what cool projects come from the next one.
Check out this video from our most recent hackathon to learn about some of the exciting projects our engineers are working on! | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00015.warc.gz | CC-MAIN-2023-06 | 2,841 | 6 |
https://python.libhunt.com/categories/453-speech-data | code | 10 Speech Data packages and projects
8.9 4.4 L3 Python Speech recognition module for Python, supporting several engines and APIs, online and offline.
6.9 5.5 L5 Python :snake: Client library to use the IBM Watson services in Python and available in pip as watson-developer-cloud
6.2 0.0 L3 Python aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka forced alignment)
4.5 0.0 Python :speech_balloon: SpeechPy - A Library for Speech Processing and Recognition: http://speechpy.readthedocs.io/en/latest/
3.2 0.0 L4 Python Python interface for forced audio alignment using HTK and SoX
2.8 0.0 L5 Python Python client that interacts with the IBM Watson Speech To Text service through its WebSockets interface
2.5 6.1 L3 Python A python library for working with praat, textgrids, time aligned audio transcripts, and audio files. It is primarily used for extracting features from and making manipulations on audio files given hierarchical time-aligned transcriptions (utterance > word > syllable > phone, etc).
1.6 0.7 L4 Python A collection of python scripts for extracting and analyzing acoustics from audio files.
1.5 1.0 L4 Python Prosody Morph: A python library for manipulating pitch and duration in an algorithmic way, for resynthesizing speech.
1.1 7.2 L4 Python Python interface to ISLEX, an English IPA pronunciation dictionary with syllable and stress marking.
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103645173.39/warc/CC-MAIN-20220629211420-20220630001420-00557.warc.gz | CC-MAIN-2022-27 | 1,802 | 14 |
http://www.coderanch.com/t/196669/java-programmer-SCJP/certification/difference-casting-conversion | code | File APIs for Java Developers
Manipulate DOC, XLS, PPT, PDF and many others from your application.
A friendly place for programming greenhorns!
Big Moose Saloon
Register / Login
Win a copy of
REST with Spring (video course)
this week in the
Programmer Certification (SCJP/OCPJP)
difference between casting and conversion
Joined: Dec 04, 2000
Jan 05, 2001 14:32:00
Can anybody tell me what the difference is between casting and conversion?
Joined: Nov 08, 2000
Jan 06, 2001 00:29:00
Casting is explicit and conversion is implicit.
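A minimal Java illustration of that distinction (my own sketch, not from the original thread):

```java
public class CastVsConversion {
    public static void main(String[] args) {
        int i = 42;

        // Conversion: implicit widening, done by the compiler with no cast.
        long l = i;
        double d = i;

        // Cast: explicit, required for narrowing, and may lose information.
        long big = 10000000000L;
        int truncated = (int) big;   // value changes because it does not fit

        System.out.println(l + " " + d + " " + truncated);
    }
}
```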
I agree. Here's the link:
Casting & Conversion
Difference between casting and conversion in Java
what is the diff?
| s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737927030.74/warc/CC-MAIN-20151001221847-00240-ip-10-137-6-227.ec2.internal.warc.gz | CC-MAIN-2015-40 | 801 | 24 |
https://forum.matic.network/t/counter-stake-cs-2006-new-validators-wave-6/405 | code | As mentioned earlier, we will be adding validators to the staggered manner. Here is the list of the next of validators who are selected to join the network.
@MARIIA LAZAREVA#9176 0x09efd0d038dab6aDf8436118DBD51b117B50d4F3
@ALEKSANDR KOLODIN#5747 0x9397C1d08bbd7B06f63B73753c17B8689Ee237b1
We have already sent tokens to all the participants above.
CS-2006 details here:
Heimdall chainID: heimdall-cs2006
Bor chainID: 2006
Staking testnet tokens contract address on Goerli:
You will need to delete all previous entries of Heimdall and Bor; you can follow the instructions here: https://forum.matic.network/t/how-to-delete-previous-entries-of-heimdall-and-bor/163. This is in case you are running a previous setup.
Once you are done with this, you can then follow the instructions from our docs to set up your nodes. We have 2 different methods to set up nodes:
Once you’re done with setting up you can then move on to staking on Matic. You can either stake through CLI or use the Dashboard to stake.
Stake through Dashboard: https://wallet.matic.today/staking
Note: If you’re running a full node, we recommend not setting it up on a laptop
Happy Staking & Please stay safe! | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513321.91/warc/CC-MAIN-20200606124655-20200606154655-00318.warc.gz | CC-MAIN-2020-24 | 1,172 | 14 |
https://forum.obsidian.md/t/block-links-not-showing-up-past-first-heading/65234 | code | I am looking to link in the same document/note I am typing in.
There’s a lot of text, mostly lists.
If I # I get my headings, but I am not looking for headings.
If I ^ I should get my blocks, but only the blocks in the first heading show up. ??
So the block I am looking for is not on the list.
Now I know I can’t select a heading, and then select the block in the heading, but that would be nice for large notes. Otherwise block linking becomes pretty useless in long notes.
In any case it would be nice to find my block. Unless there’s another type of link I should be looking for.
I don’t understand exactly what you are trying to achieve, so here are some thoughts based on my current understanding.
- If it’s a concept that you would like to reference ever-so-often, make a new note for it and link the new note and old note together.
- I don’t know what is happening in your vault; it should provide all “blocks” that you have in the note.
The second item.
I am just trying to access all blocks in my note but it limits itself to those in my first heading. No idea why.
Type a few words to allow Obsidian to filter out the blocks.
- Also, I have uploaded the edited version of the vault in your other post; unfortunately I cannot solve the indentation problem other than by fixing it manually.
- I humbly suggest that you don't put everything in one single file, and if you have time check out
(6) How to Take Smart Notes - Book on a Page - YouTube
Thanks, typing in text works for finding blocks past the first heading.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511361.38/warc/CC-MAIN-20231004052258-20231004082258-00638.warc.gz | CC-MAIN-2023-40 | 1,609 | 18 |
https://infocenter-archive.sybase.com/help/topic/com.sybase.dc73430_1250/html/aserbsun/aserbsun39.htm | code | [Bug #240950] The following only occurs on servers that use a 4K page size.
If you use alter table on DOL tables with a clustered index to add non-NULL columns, or modify or drop existing columns, Adaptive Server may issue an error message similar to the following:
Msg 1530, Level 16, State 1: Line 1: Create index with sorted_data was aborted because of row out of order. Primary key of first out of order row is '27, "default string"
This error occurs after alter table performs the data copy and is in the process of rebuilding the clustered index. This is a generic problem; it is highly data dependent and only occurs on DOL tables with a clustered index. If this error occurs, the entire operation is rolled back, so tables are not corrupted, and the original schema of the table being altered is reinstated.
Workaround: Drop the clustered index, alter the table to implement the schema change, and then rebuild the clustered index manually after the alter table operation has committed. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818711.23/warc/CC-MAIN-20240423130552-20240423160552-00551.warc.gz | CC-MAIN-2024-18 | 995 | 5 |
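A sketch of that workaround in SQL (the table, column, and index names are hypothetical; the actual alter table clause and clustered index definition depend on your schema):

```sql
-- 1) drop the clustered index on the DOL table
drop index orders.orders_cidx
go

-- 2) apply the schema change with alter table
alter table orders add order_note varchar(80) default '' not null
go

-- 3) rebuild the clustered index manually after the alter table has committed
create clustered index orders_cidx on orders (order_id)
go
```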
http://pandawhale.com/post/42875/brienne-and-podrick-gif-s4e5-first-of-his-name | code | Thank you NeVeNGamingYT for making this gif.
Stashed in: gifs, Gifs of Glory, Best of GoT, High Quality gifs
Game of Thrones S4E5: "God fucking dammit pod!"
"Have you even ever killed a man?"
More Game of Thrones 4x05:
Doctor Who emotional man Wilfred Mott crying gif
Simon Pegg WHAT gif
Best combined mixing gifs are amusing.
Monty Python Holy Grail gifs | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00041-ip-10-31-129-80.ec2.internal.warc.gz | CC-MAIN-2016-50 | 355 | 9 |
http://stackoverflow.com/questions/8514725/getting-a-timer-in-a-javaee-friendly-way-that-works-on-javase-too-for-use-in-a | code | I'm interested in doing some work on the PostgreSQL JDBC driver to help with an implementation of
Statement.setQueryTimeout(...), one of the more problematic spec conformance holes in the driver. To do this, I need a portable way to get a timer or set an alarm/callback that works in all Java EE app servers, servlet containers, and in Java SE environments.
It seems that's not as simple as it should be, and I'm stuck enough that I'm throwing myself upon your mercy for a hint. How the hell do I do a simple timer callback that works in Java SE, Java EE, and servlet containers?
If necessary I can possibly endure doing separate -ee and -se releases, but it's extremely undesirable. Releases for each container are completely impractical, though auto-selected per-container adapters might be acceptable if strongly undesirable.
The PgJDBC driver must run in crusty old app servers under ancient versions of the JVM, but I don't care in the slightest if statement timeouts are only available in the JDBC4 version of the driver for modern containers and JVMs. There's already a conditional compilation infrastructure in place to allow releases of JDBC3/JDK 1.4 and JDBC4/JDK 1.5 drivers, so code that only works under 1.5 or even 1.6 isn't a problem.
(edit): An added complication is that the JDBC driver may be deployed by users:
- As a container module or built-in component that's launched at container startup;
- As a standalone deployment object that can be undeployed and redeployed at runtime; or
- Embedded inside their application
... and we need to support all those scenarios, preferably without requiring custom application configuration! If we can't support all those scenarios it needs to at least work without statement timeout support and to fail gracefully where statement timeouts can't be supported.
Ah, write once, run anwwhere....
I can't just use java.util.Timer
I see broad statements that the use of
java.util.Timer or the Java SE concurrency utilities (JSR-166) in the
java.util.concurrent package is discouraged in Java EE, but rarely any detail. The JSR 236 proposal says that:
java.util.Timer, java.lang.Thread and the Java SE concurrency utilities (JSR-166) in the java.util.concurrency (sic) package should never be used within managed environments, as it creates threads outside the purview of the container.
A bit more reading suggests that calls from unmanaged threads won't get container services, so all sorts of things throughout the app may break in exciting and unexpected ways. Given that a timer invocation may result in an exception being thrown by PgJDBC and propagating into user application code this is important.
(edit): The JDBC driver its self does not require any container services so I don't care if they work within its timer thread(s) so long as those threads never run any user code. The issue is reliably ensuring that they don't.
The JSR 236 timer abstraction layer is defunct
JSR 236 is defunct, and I don't see any replacement that satisfies the same requirements for portable timers.
I can't find any reference to a cross-container portable way to obtain a container-pooled timer either. If I could grab a timer from JNDI on containers and fall back to direct instantiation where getting one from JNDI failed that'd be OK... but I can't even find a way to do that.
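For what it's worth, here is a sketch of that "JNDI first, direct instantiation as a fallback" idea (Java 8 syntax for brevity). The JNDI names are examples only, containers differ and many bind nothing suitable at all, and the fallback still creates a thread outside the container's purview, so this illustrates the approach rather than a recommended solution:

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import javax.naming.InitialContext;
import javax.naming.NamingException;

final class SchedulerLocator {

    // Hypothetical candidate bindings; real containers expose different (or no) names.
    private static final String[] CANDIDATE_JNDI_NAMES = {
        "java:comp/DefaultManagedScheduledExecutorService",
        "java:comp/env/concurrent/timers"
    };

    static ScheduledExecutorService obtainScheduler() {
        for (String name : CANDIDATE_JNDI_NAMES) {
            try {
                Object found = new InitialContext().lookup(name);
                if (found instanceof ScheduledExecutorService) {
                    return (ScheduledExecutorService) found;   // container-managed threads
                }
            } catch (NamingException ignored) {
                // not bound in this container, try the next candidate
            }
        }
        // Fallback for Java SE / plain servlet containers: a single daemon thread,
        // so an embedded driver never blocks JVM shutdown.
        ScheduledThreadPoolExecutor fallback = new ScheduledThreadPoolExecutor(1, runnable -> {
            Thread t = new Thread(runnable, "driver-timer-sketch");
            t.setDaemon(true);
            return t;
        });
        fallback.setRemoveOnCancelPolicy(true);
        return fallback;
    }
}
```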
EJB timers are unsuitable
There are EJB timers, but they're unsuitable for low-level stuff like a JDBC driver implementation because they're:
- persistent across container or machine restart
- high overhead
- may be implemented using a database for timer persistence
- oriented toward business time not machine time
- unavailable in plain servlet containers
- unavailable in "web profile" EE app servers
So EJB timers can be struck entirely off the list.
I can't roll my own timer thread
The same issues that prevent the use of
java.util.Timer and friends prevent me from launching my own timer thread and managing my own timers. That's a non-starter.
The Java EE spec says:
The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt to start, stop, suspend, or resume a thread, or to change a thread’s priority or name. The enterprise bean must not attempt to manage thread groups.
and the EE tutorial says:
Resource adapters that improperly use threads can jeopardize the entire application server environment. For example, a resource adapter might create too many threads or might not properly release threads it has created. Poor thread handling inhibits application server shutdown and impacts the application server’s performance because creating and destroying threads are expensive operations.
WorkManager doesn't do timers
There's a javax.resource.spi.work.WorkManager, but (a) it's intended for use on the service provider side not the application side, and (b) it's not really designed for timers. A timer can probably be hacked in using a Work item that sleeps with a timeout, but that's ugly at best and probably going to be quite inefficient.
It doesn't look like it'll work on Java SE, either.
The Java Connector Architecture (JCA)
As referred to in the Java EE tutorial, the Connector Architecture might be a viable option for EE containers. However, again, servlet containers like Tomcat or Jetty may not support it.
I'm also concerned about the performance implications of going down this route.
So I'm stuck
How do I do this simple task?
Do I need to write a new ThreadPoolExecutor that gets threads from the container via JNDI, then use that as the base for a new ScheduledThreadPoolExecutor? If so, is there even a portable way to get threads from a container, or do I need per-container JNDI lookup and adapter code?
Am I missing something stupid and blindingly obvious?
How do other libraries that need asynchronous work, timers or callbacks handle portability between Java EE and Java SE? | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146550.16/warc/CC-MAIN-20160205193906-00120-ip-10-236-182-209.ec2.internal.warc.gz | CC-MAIN-2016-07 | 5,859 | 50 |
https://learn.microsoft.com/en-us/azure/information-protection/install-powershell | code | Installing the AIPService PowerShell module
Use the following information to help you install the Windows PowerShell module for the protection service from Azure Information Protection. The name of this module is AIPService, and it replaces the previous version that was named AADRM.
You can use this PowerShell module to administer the protection service (Azure Rights Management) from the command line by using any Windows computer that has an internet connection and that meets the prerequisites listed in the next section. Windows PowerShell for Azure Information Protection supports scripting for automation or might be necessary for advanced configuration scenarios. For more information about the administration tasks and configurations that the module supports, see Administering protection from Azure Information Protection by using PowerShell.
This table lists the prerequisites to install and use the AIPService PowerShell module for the protection service from Azure Information Protection.
| Prerequisite | Requirement |
|---|---|
| Minimum version of Windows PowerShell | Version 3.0. You can confirm the version of Windows PowerShell that you are running by typing $PSVersionTable.PSVersion. Note: the AIPService PowerShell module only supports Windows PowerShell; PowerShell 7 is not supported. |
| Minimum version of the Microsoft .NET Framework | Version 4.6.2. This version of the Microsoft .NET Framework is included with later operating systems, so you should need to install it manually only if your client operating system is older than Windows 8.0 or your server operating system is older than Windows Server 2012. If the minimum version is not already installed, you can download Microsoft .NET Framework 4.6.2; it is required for some of the classes that the AIPService module uses. |
| Required license | You must have a product license to use this feature. For more information, see Microsoft 365 licensing guidance for security & compliance - Service Descriptions. |
If you have the AADRM module installed
The AIPService module replaces the older module, AADRM. If you have the older module installed, uninstall it and then install the AIPService module.
The newer module has aliases to the cmdlet names in the older module so that any existing scripts will continue to work. However, plan to update these references before the old module falls out of support. Support for the AADRM module will end July 15, 2020.
If you installed the AADRM module from the PowerShell Gallery, to uninstall it, start a PowerShell session with the Run as Administrator option, and type:
Uninstall-Module -Name AADRM
If you installed the AADRM module with the Azure Rights Management Administration Tool, use Programs and Features to uninstall Windows Azure AD Rights Management Administration.
How to install the AIPService module
The AIPService module is on the PowerShell Gallery and is not available from the Microsoft Download Center.
To install the AIPService module from the PowerShell Gallery
If you're new to the PowerShell Gallery, see Get Started with the PowerShell Gallery. Follow the instructions for the gallery requirements, which include installing the PowerShellGet module and the NuGet provider.
To see details about the AIPService module on the PowerShell Gallery, visit the AIPService page.
To install the AIPService module, start a PowerShell session with the Run as Administrator option, and type:
Install-Module -Name AIPService
If you are warned about installing from an untrusted repository, you can press Y to confirm. Or, press N and configure the PowerShell Gallery as a trusted repository by using the command
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted and then rerun the command to install the AIPService module.
If you have a previous version of the AIPService module installed from the Gallery, update it to the latest by typing:
Update-Module -Name AIPService
In a Windows PowerShell session, confirm the version of the installed module. This check is particularly important if you upgraded from an older version:
(Get-Module AIPService –ListAvailable).Version
If this command fails, first run Import-Module AIPService.
To see which cmdlets are available, type the following:
Get-Command -Module AIPService
Use the Get-Help <cmdlet_name> command to see the Help for a specific cmdlet, and use the -online parameter to see the latest help on the Microsoft documentation site. For example:
Get-Help Connect-AipService -online
For more information:
Full list of cmdlets available: AIPService Module
List of main configuration scenarios that support PowerShell: Administering protection from Azure Information Protection by using PowerShell
Before you can run any commands that configure the protection service, you must connect to the service by using the Connect-AipService cmdlet.
When you have finished running your configuration commands, as a best practice, disconnect from the service by using the Disconnect-AipService cmdlet. If you do not disconnect, the connection is automatically disconnected after a period of inactivity. Because of the automatic disconnection behavior, you might find that you need to occasionally reconnect in a PowerShell session.
If the protection service is not yet activated, you can do this after you have connected to the service, by using the Enable-AipService cmdlet. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506632.31/warc/CC-MAIN-20230924091344-20230924121344-00085.warc.gz | CC-MAIN-2023-40 | 5,368 | 41 |
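A sketch of a typical session using only the cmdlets mentioned above (the middle step is a placeholder; substitute whichever AIPService configuration cmdlets you actually need):

```powershell
# Install (or later update) the module from the PowerShell Gallery - run as Administrator
Install-Module -Name AIPService

# Connect to the protection service (prompts for tenant administrator credentials)
Connect-AipService

# Activate the protection service if it is not yet activated
Enable-AipService

# ... run your configuration cmdlets here (placeholder) ...
# Get-Command -Module AIPService    # lists everything that is available

# Best practice: disconnect when you are finished
Disconnect-AipService
```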
https://www.dnnsoftware.com/community-blog/cid/136611 | code | This post is to announce that the new version of Newsfeeds has actually passed all stages of testing last Thursday. This is a bit of a champagne moment as it marks the end of a very long process. Initially another approach was taken by the former team lead and after a lot of consideration I decided to discontinue that code branch. Instead the current upgrade is almost a complete rewrite of the module incorporating what I believe to be the most important features that were still missing: aggregation, feed format flexibility and caching.
The new module uses a new (2.0) and adapted version of the RssToolkit as posted on CodePlex. This component is able to recursively (OPML is also supported) aggregate feeds of various formats into a single Rss 2.0 feed. We tried using the RssToolkit that comes with the DNN installation but it was not recent enough nor adapted to our task. So the RssToolkit code was incorporated into the module and adjusted to fit its needs. It is unlikely that this situation will stay like this forever. In all likeliness we will either go solo on aggregation and/or move the RssToolkit code we have to the DNN core. But for now it is in the module (and hence also translated to VB thanks to SharpDevelop).
The new aggregation does bring one drawback: the module is less forgiving about feeds that do not conform to one of the standards Rss 0.91, Rdf, Rss 2.0, ATOM 1.0. The aggregator (i.e. the code that merges the feeds) needs to read the contents of the feed and merge this into one stream of items. This means that the feed must adhere to its standard or the aggregator will not be able to find the content where it expects. I have already received several feed urls from big news corporations that did not conform to its standard. This is somewhat disturbing. If you experience this the only option you have is to use your own transformation and use the XML module. It will not pass through the aggregator.
One other note on aggregation. Merging multiple feeds really only makes sense if the items are dated. This is only the case in Rss 0.91, 2.0, and ATOM 1.0 feeds. Rdf and Rss 1.0 feeds do not require a date/time stamp on individual items. So aggregating feeds that use those standards is kind of nonsensical. For now the module will just interleave them one by one.
DNN has its own caching mechanism for modules. In short the HTML of the module is cached for a specific timeframe. During this timeframe, the module code will therefore not even be called. Instead the HTML is injected directly into the output stream from cache. This is all good and well, but it does mean that the Newsfeeds module will not be able to check for any updates of feeds it needs to make. So instead the new module now has its own solution of ‘multilayered’ caching. If it can render immediately it will do the XML/XSL transformation. This means it will be slightly outperformed by the HTML caching the DNN platform would do. But the advantage of the new solution is that we can fine tune what and how we cache. It does however rely on your setting of the module cache time in the module settings. If you leave that at non zero, it will override the mechanism we provide in the module.
The new mechanism caches at two levels: the feed and the aggregated feed. It will cache all incoming feeds in SQL. This will prevent it to trip up on not being able to retrieve a particular feed. For each feed you can specify a refresh time. When called, the module will check if any feeds need to be refreshed and will get them if necessary. The aggregated feed is stored as XML on disk. This will also make it possible to reuse the feed (not yet implemented as we’re waiting on the core team to make a move in regards to an overhaul of syndication in the platform). The new approach in the Newsfeeds module should ensure we have all the caching we need for any future developments.
Another change is the way in which the merged feed is rendered. This used to be a straightforward XML/XSL transformation using the standard ASP.NET XML component. Now, we use the same mechanism as the XML module. This means we transform in code using a parameterized stylesheet. This allows us to pass in variables like ‘number of items to display’, etc. These parameters are set under module settings. This solves one of those pressing issues we had: how to limit the amount of items displayed on screen. In the forums you could find the solution to this using a non-default XSL sheet. Now you can specify it as a parameter under module settings.
One oft heard complaint was that a DNN page with the Newsfeeds module would not render until the feed had been retrieved. This I felt was a very urgent issue to solve. In my opinion the ideal situation would be that the page would render before any feeds were being retrieved by the server. That way the user would see the page immediately. The feed result would then be passed through Ajax to fill the module on the client.
The new module does just this. It first performs a check whether Ajax should be used. This depends on:
- whether it is installed
- if the module is not being cached by DNN as one piece of HTML, and
- if the module needs to refresh one of its feeds.
If all these conditions are met, then the module will render an empty container that calls back after the page has loaded to retrieve its contents. Then, the necessary feeds are retrieved, cached, and merged. The result is sent to the waiting module. You’ll see the DNN Ajax loading gif displayed in the meantime.
I just want to end with a word or two about security and RSS. A feed is sent in plain XML from a source to the module upon request. This source can be an internal page. If this is so the module will call this module (for the RSS) within the current context (i.e. the current user’s login). Assuming you cache the feed this may lead to the following security breach: if you cache a feed from content that is not visible to all, you potentially will show the cached content to someone with a different security clearance. Keep this in mind as you set up caching and security in your portal.
Newsfeeds 04.00.00 was compiled against DotNetNuke 04.06.00. So this is the minimal version you need for the module to work. The module now uses the database so you’ll also need to be running on SQL 2000/2005.
Find the module here:
All comments/questions/etc can be asked here: | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663021405.92/warc/CC-MAIN-20220528220030-20220529010030-00676.warc.gz | CC-MAIN-2022-21 | 6,405 | 17 |
https://ericlbarnes.com/category/programming/ | code | The old saying, “sleep on it”, meaning spend some time tonight thinking about the question and have an answer tomorrow. It’s a common phrase in our language but how often do you literally put aside time to actually think about it? Having a busy life outside of work with family, hobbies, dinner, everything else it’s […]Read more "Sleep on it"
I recently launched Laravel Events and it was one of those projects that came to life because I wanted to better keep the community informed on what all local events were happening. As I sat down and started planning the project I wanted to start with the goal of doing the absolute minimum to make […]Read more "Striving for Simplicity"
Stripe just announced support for Twilio Pay: Phone payments are especially common in industries like food, travel, healthcare, retail, and nonprofits, but payment details are often keyed in manually by a human agent. This flow can be riddled with errors and expose companies to security and compliance risks. That’s why we’re partnering with Twilio to […]Read more "Process Cards with Stripe + Twilio"
I saw this question pop up on a forum and I know every PHP developer has their own reasons, it made me think of why I initially picked PHP way back in the early 2000’s when v4.3 was king. It all started when my family hired a web developer to create a website for their […]Read more "What Makes PHP Popular?"
Here is an interesting plugin I came across today that stores every button you press with the mouse that could be used with a keyboard shortcut. It currently supports toolbar buttons, menu buttons, and tool windows and the actions.Read more "Learn keyboard shortcuts in IntelliJ products with Key Promoter X"
IDEA is a unique project that uses an IKEA style of instructions to present algorithm assembly instructions.Read more "IDEA – IKEA Style Instructions for algorithms"
Today I needed to write some code to grab a list of CC addresses from an email, and I thought showing the steps I took could make for an entertaining blog post.Read more "Simple PHP Refactoring" | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741176.4/warc/CC-MAIN-20181113000225-20181113022225-00057.warc.gz | CC-MAIN-2018-47 | 2,081 | 7 |
https://www.prajbasnet.com/mysql-and-unix-timestamps/ | code | MySQL comes with two handy functions for working with unix timestamps. The unix timestamp or unix epoch time represents the number of seconds since midnight (UTC) 01/01/1970.
The two functions are:
unix_timestamp() to convert a standard date to unix timestamp format
from_unixtime() to convert a unix timestamp to standard date format
Note: if you want the current date only (not time) in unix format use:
If you want the current date AND local time in unix format use:
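A few illustrative queries (my own examples; the exact snippets that originally followed the two notes above were not preserved, so the date-only and date-and-time variants shown here are assumptions based on the surrounding text):

```sql
-- convert a standard datetime to a unix timestamp, and back again
SELECT UNIX_TIMESTAMP('2020-09-19 10:30:00');   -- e.g. 1600511400 (depends on the server time zone)
SELECT FROM_UNIXTIME(1600511400);               -- '2020-09-19 10:30:00' in the server time zone

-- current date only (midnight today) in unix format
SELECT UNIX_TIMESTAMP(CURDATE());

-- current date AND local time in unix format
SELECT UNIX_TIMESTAMP();
```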
https://www.greatamericandebate.org/the-great-american-debate-team | code | Jamie Joyce has a background in non-profit management. She left her position as COO of an international sustainability organization to work full time on her founding non-profit programs. She’s a passionate patriot who wants to create tech to both defend and reinvent the democratic process.
Tim High is a software developer with over 20 years of experience, and has been a senior engineer, team lead and CTO of several successful startups. He has been developing decision-making and debate tools since 2008.
Kevin is a developer and law student at Johannes Gutenberg University Mainz. He has been developing the tools needed to compile and sort information for the Great American Debate. Kevin is also a member of Canonical Debate Lab
Stephen Wicklund is the founder of Debate Map. He’s passionate about using software to collect and organize the information that is spread out among forum threads, wiki pages, and individuals in order to bring them together into a community-edited, tree-based structure that's easier to traverse and explore.
Bentley Davis has been developing software for 30 years, managing teams for 15 years and advising startups for 5 years. He was also the CTO of a startup with a successful exit. Bentley builds unity through clarity with technology. He is also the cofounder of Reason Score. | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00538.warc.gz | CC-MAIN-2019-39 | 1,320 | 5 |
https://www.behance.net/kannibalkat/wip | code | I'm a Graphic Design student at EASD Castellón, Spain. I love visual communication in all its aspects and I hope to devote myself completely to it someday; especially I have a great interest in the audiovisual media: motion graphics, 3D animation and videogames.
Pamela S. hasn't uploaded any work in progress.
Please check back soon. Until then, you can explore other creative work in Discover. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450559.94/warc/CC-MAIN-20151124205410-00084-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 396 | 3 |
https://www.apaperaday.com/02/24/2020/13fastai-1intro/ | code | The paper is longer than I want to spend reading in depth, so I will try to skim intelligently and spend about 5 days on this paper. The abstract generally will call out the important points from the perspective of the authors, and from this abstract, it appears important the various levels of abstraction, the ease for new programmers, and the novel 2-way callback system.
Design Goals approachable and rapidly productive. With the multi-tiered API format, the authors believe to have successfully met their design goals. Two competitors to the library they call out as Keras and PyTorch which each have their own benefits in approachability and expressivity.
Fig1 shows the layers of the API abstractions:
The authors explain each of the layers
High Level is made for beginners and practitioners who are using deep learning in applications where it has already been applied successfully. Though it is able to derive state-of-the-art results, it is generally not going to surpass the SOTA. The high-level abstraction is intelligent: for instance, it is able to detect the best loss function. The upside is that this can be fairly good for getting early results, but it can lead to practitioners not understanding exactly what is going on with their models.
Mid Level provides the core of the deep learning methods. Built on previous optimized libraries, the underlying code is not hidden or protected from the user so they can dive in themselves and adjust code as needed. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513230.90/warc/CC-MAIN-20200606093706-20200606123706-00097.warc.gz | CC-MAIN-2020-24 | 1,477 | 6 |
https://blog.bestsoftware4download.com/2015/01/better-search-the-search-assistant-tool-for-chrome-users/ | code | Better Search – The Search Assistant Tool For Chrome Users
Better Search is an interactive tool that helps you preview the results you want, in a new tab, on various search engines like Bing, Google, Yandex, Baidu, etc. All you have to do is hover the mouse cursor over a result's page description; a magnifying glass icon appears, and clicking the preview option opens the document in a floating preview window.
For any kind of web research project, Chrome is one of the best companions: it is quick, easy to use, and has a capable bookmarking feature for recording all your favorite websites. Of course, there is still plenty of room for improvement, and the Better Search tool adds a lot of extensive, search-related functionality to the browser.
An interesting aspect of this tool is that it highlights your search terms right inside the preview window. The scroll bar on the right-hand side marks where the keywords occur within the document as a whole, which makes searching quick and makes it easy to spot the references.
Another effective feature of the Better Search tool is that it allows you to rate pages according to their usefulness. Results are then ordered by those ratings: Best pages come first, followed by Good pages, while Poor pages are pushed toward the end and often hidden from the results. Pages can also be given a Block rating, which hides results from that particular site altogether. This is a great way to remove unwanted results from your searches.
In addition, there is a right-click option for highlighting and annotating text. You can also revisit the pages later to view the highlighted text, and simply hovering the mouse cursor shows the comments.
In case you cannot remember what you have done, don't worry. Just click the Better Search icon and choose the option called My Library to see the pages that you have rated and the ones you have annotated. This is one of the easiest ways of keeping track of the research you have done.
There are small icons at the top of the search results page that allow any user to run the same search on other websites and search engines like Google, eBay, Amazon, GitHub, Wikipedia, etc. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00153.warc.gz | CC-MAIN-2024-10 | 2,661 | 8 |
https://www.ninelions.ca/blog-1/2017/9/19-cg5e6 | code | Where do you get your best ideas?
And do you write them down?
Once in a while I see this meme on Facebook: “The biggest lie I tell myself is: I don’t have to write that down, I’ll remember it later.” :)
I can SO relate!
But thanks to my colleague and friend, Ric Durrant, I have realized that it’s more than just the act of remembering my ideas, it’s about the act of placing value on them.
I recently asked for Ric’s help on a big project and in our conversation he posed these questions:
- What’s the best idea you’ve had in the last 30 days and did you write it down?
- Do you see yourself as having valuable ideas?
- Or do you edit your ideas out of existence? Dismiss them on a personal level before they’ve even had a chance?
If I’m honest, my answers go something like this: No… I guess so… Yes and yes.
Letting my ideas come and go without any concern or reflection is nothing new for me, but this new perspective that I may be pre-editing or disregarding my ideas has given me pause.
So I’ve started to write my ideas down. Mostly because I’m scared I’ll forget them, but more and more I’m recognizing that my ideas could hold some value.
What I’ve observed lately is now that I’m committed to taking my work on The Changemaker’s Project further, I’m getting a lot of ideas (usually while I’m doing mundane things like sorting laundry or walking on the treadmill), and I do want to keep track of my thoughts. And with the tracking I am noticing repetition and patterns, which for me points to the ideas’ increased value.
My process for writing my ideas is nothing fancy or grand, it’s just a simple notebook with lined pages. But I like it and it’s working for me. I find it interesting to see what comes out of my pen, and I’m curious to discover where it will lead.
What about you? What are your best ideas? And how are you valuing them? | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570740.10/warc/CC-MAIN-20190915052433-20190915074433-00198.warc.gz | CC-MAIN-2019-39 | 1,900 | 15 |
https://discuss.pytorch.org/t/confusion-about-loss-objective-with-sgd/137083 | code | Hi, I have confusion about optimizing the loss objective with SGD
loss = 1 - F.cosine_similarity(x, y) and loss = -F.cosine_similarity(x, y): do both of these minimize the similarity using SGD? Apparently (1 - CosineSim) seems suitable to me, but I have seen (-CosineSim) used in a contrastive self-supervised learning objective. So how do these two behave differently?
What's the difference between loss = -F.cross_entropy(logits, label) and loss = F.cross_entropy(-logits, label)? Do these two both maximize the cross entropy? | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00644.warc.gz | CC-MAIN-2022-21 | 533 | 3 |
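On the first question, a small PyTorch sketch (my own illustration, not from the thread): the two cosine losses differ only by the constant 1, so their gradients, and therefore the SGD updates, are identical, which is easy to check:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(4, 8, requires_grad=True)
y = torch.randn(4, 8)

# Loss A: 1 - cosine similarity (averaged over the batch)
loss_a = (1 - F.cosine_similarity(x, y)).mean()
# Loss B: negative cosine similarity
loss_b = (-F.cosine_similarity(x, y)).mean()

grad_a, = torch.autograd.grad(loss_a, x)
grad_b, = torch.autograd.grad(loss_b, x)

# The losses differ by a constant, so the gradients match exactly.
print(torch.allclose(grad_a, grad_b))  # True
```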
https://developer.arm.com/docs/dui0472/l/standard-c-implementation-definition/arrays-and-pointers | code | Arrays and pointers
Describes implementation-defined aspects of the ARM C compiler and C library relating to arrays and pointers, as required by the ISO C standard.
The following statements apply to all pointers to objects in C and C++, except pointers to members:
Adjacent bytes have addresses that differ by one.
NULL expands to the value 0.
Casting between integers and pointers results in no change of representation.
The compiler warns of casts between pointers to functions and pointers to data.
size_t is defined as unsigned int.
ptrdiff_t is defined as signed int. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00007.warc.gz | CC-MAIN-2020-24 | 543 | 9 |
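A small C illustration of the pointer statements above (my own example; using uintptr_t for the round trip is a portability choice on my part, not something the ARM document mandates):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int value = 42;
    int *p = &value;

    /* Casting between integers and pointers: no change of representation. */
    uintptr_t bits = (uintptr_t)p;
    int *q = (int *)bits;

    /* Adjacent bytes have addresses that differ by one. */
    unsigned char *b = (unsigned char *)p;

    printf("%p %p %d\n", (void *)b, (void *)(b + 1), *q);
    return 0;
}
```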
https://mattjmcnaughton.com/ | code | Since I last posted, I’ve been focusing almost all of my “fun coding” time on contributing to the Kubernetes code base. You can see my contributions on my Github. I’ve been particularly focusing on the components of the code base owned by sig/node. It’s been rewarding and a great learning experience… but it does mean I haven’t been focusing on adding features to my personal Kubernetes cluster. And since that’s what I mainly blogged about, it also means I’ve been blogging less.
Regularly updating our k8s cluster, and the applications running on it, is one of the most powerful tools we have for ensuring our cluster functions securely and reliably. Staying vigilant about applying updates is particularly important for a fast moving project like Kubernetes, which releases new minor versions each quarter. This blog post outlines the process we’re proposing for ensuring our cluster, and the applications running on it, remain up to date.
Kubernetes is a incredibly exciting and fast moving project. Contributing to these types of projects, while quite rewarding, can have a bit of a startup cost. I experienced the start up cost a bit myself, after returning to contributing to the Kubernetes after a couple of months of focusing on running my own Kubernetes cluster, as opposed to contributing source code. So this post is partially for y’all and partially for future me :)
We’re excited to announce all connections to mattjmcnaughton.com, and its subdomains (i.e. blog.mattjmcnaughton.com, etc.), are now able, and in fact forced, to use HTTPS. After reading this post, we hope you’ll be convinced of the merits of using HTTPS for public-internet facing services, and also have the knowledge to easily modify your services to start supporting HTTPS connections. Why do we care about HTTPS? via GIPHY Offering, and defaulting to, HTTPS is good web hygiene.
So far, most of our posts on this blog have been about the exciting parts of running our own Kubernetes cluster. But, running a Kubernetes cluster isn’t all having fun deploying applications. We also have to be responsible for system maintenance. That need became particularly apparent recently with the release of CVE-2019-5736 on 2/11/19. What is CVE-2019-5736 via GIPHY CVE-2019-5736 is the CVE for a container escape vulnerability discovered in runc.
In our last blog post, we increased the stability of our Kubernetes cluster and also increased its available resources. With these improvements in place, we can tackle deploying our most complex application yet: Nextcloud. By the end of this blog post, you’ll have insight into the major architecture decisions we made when deploying Nextcloud to Kubernetes. As always, we’ll link the full source code should you want to dive deeper.
In our blog series on decreasing the cost of our Kubernetes cluster, we suggested replacing on-demand EC2 instances with spot instances for Kubernetes’ nodes. When we first introduced this idea, we mentioned that this strategy could have negative impacts on both our applications’ availability and our ability to monitor our applications’ availability. At the time, we still converted to spot instances because we believed the savings benefits were worth the decrease in reliability.
In the first blog post in this series, we examined how our previous deployment strategy of running kubectl apply -f on static manifests did not meet our increasingly complex requirements for our strategy/system for deploying Kubernetes’ applications. In the second, and final, post in this mini-series, we’ll outline the new deployment strategy and how it fulfills our requirements. The new system via GIPHY Our new deployment strategy makes use of Helm, a popular tool in the Kubernetes ecosystem which describes itself as “a tool that streamlines installing and managing Kubernetes applications.
Over the holiday break, I spent a lot of my leisure coding time rethinking the way we deploy applications to Kubernetes. The blog series this post kicks off will explore how we migrated from an overly simplistic deploy strategy to one giving us the flexibility we need to deploy more complex applications. To ensure a solid foundation, in this first post, we’ll define our requirements for deploying Kubernetes’ applications and evaluate whether our previous systems and strategies met these requirements (spoiler alert… it didn’t).
Overall impact In parts one and two of this series, we sought to reduce our AWS costs by optimizing our computing, networking, and storage expenditures. Since this post is the final one in the series, let’s consider how we did in aggregate. Before any resource optimizations, we had the following bill: master ec2 (1 m3.medium): (1 * 0.067 $/hour * 24 * 30) = 48.24 nodes ec2 (2 t2.medium): (2 * 0. | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00176.warc.gz | CC-MAIN-2019-47 | 4,820 | 10 |
https://nookipedia.com/w/index.php?title=Template:YouTube&action=edit | code | (Redirected from Template:YouTube)
- Video Code is the unique sequence of letters, numbers, and characters that point to the video.
- Video Size is the desired size of the video. Defaults to 250 pixels.
- Location is where to have the video. (i.e. right, left, or center.) Defaults to left if blank.
- Caption is an optional caption to describe the video.
- If location is neither left, right, nor center, then it is assumed to be a caption and the video's location is set to "left". | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526386.37/warc/CC-MAIN-20190719223744-20190720005744-00168.warc.gz | CC-MAIN-2019-30 | 487 | 6 |
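A hypothetical usage sketch based on the parameter list above (the video code is a placeholder, and the positional parameter order is assumed from the descriptions; check the template source before relying on it):

```
{{YouTube|dQw4w9WgXcQ|300|right|An example caption describing the video.}}
{{YouTube|dQw4w9WgXcQ}}
```

The second call relies on the defaults described above (250 pixels, left alignment, no caption).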
http://www.britfa.gs/g/res/27107.html | code | |>>|| No. 27111
Appreciate the answer, especially the encouragement to go with multiple locations. Once I've charmed them a bit more at the new job I might even get away with sticking one on the ramp at the airport, who knows.
I didn't even think about ACARS, that's very useful. I sort of already have access to that sort of data but not directly - would be genuinely helpful to see that.
Have ordered a twin NooElec dongle bundle for now, and will go from there. Will update with my levels of success later in the week.
I've noticed you can get a HackRF One on eBay for about £140 now - are they weird bootleg versions, or just cheaper than I remember them being? Last time someone (you?) mentioned them on here they were about £250.
Basically, yes. All those sites like Flightradar, Flightaware, Planefinder etc work by pooling a load of receivers that pick up a plane's transmissions, which pump out information such as their callsign, ID, location, altitude, and so on. It's functionally a secondary radar and an extremely cheap and effective way to monitor air traffic, both on the ground and in flight. | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740679.96/warc/CC-MAIN-20200815035250-20200815065250-00440.warc.gz | CC-MAIN-2020-34 | 1,111 | 6
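If you want to poke at the feed yourself once the dongles arrive, here's a minimal sketch (assumptions: dump1090 is running locally with its usual SBS/BaseStation output on port 30003, and the field positions follow the standard BaseStation CSV layout):

import socket

# Read dump1090's SBS (BaseStation) feed and print ICAO address, callsign,
# altitude and position whenever they appear in a message.
with socket.create_connection(("localhost", 30003)) as sock:
    buffer = b""
    while True:
        buffer += sock.recv(4096)
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            fields = line.decode(errors="ignore").strip().split(",")
            if len(fields) > 15 and fields[0] == "MSG":
                icao, callsign = fields[4], fields[10].strip()
                alt, lat, lon = fields[11], fields[14], fields[15]
                if callsign or lat:
                    print(icao, callsign, alt, lat, lon)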
https://docs.3dolphins.ai/bot-settings/faq-knowledge/remove-faq-knowledge | code | Sometimes you need to remove knowledge from a system when it’s no longer needed. In the FAQ Knowledge feature, you can remove knowledge manually one by one or remove all knowledge based on modules.
For example, when you need to delete some knowledge contained in the dokumentasi module, select the knowledge you want to delete by searching for it by title or question, then click the 'Remove' icon. You will then be prompted to confirm the remove request: click 'Yes' to proceed with removing the knowledge, or click 'No' to return to the FAQ Knowledge list page.
However, if you want to delete all the knowledge contained in the dokumentasi module, you can click the 'Remove' icon on the module. You will then be prompted to confirm the remove request: click 'Yes' to proceed with removing the module, or click 'No' to return to the FAQ Knowledge page.
Or you can also re-save some of the knowledge you still need to another knowledge module before you actually delete that module.
Sometimes you need to change data in the system, for example because the data has changed or because there's an error in it.
When you have finished, click 'Save Knowledge' to save the modified knowledge.
If you need to create a module with knowledge content that is almost the same or even the same as an existing module, you can use the Clone Module feature.
Once clicked, you will be asked to fill in the name of the new module, and you can fill in "Dokumentasi - EN" and click the clone button.
The module has been duplicated successfully. Furthermore, you can add or change knowledge as needed. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816954.20/warc/CC-MAIN-20240415080257-20240415110257-00122.warc.gz | CC-MAIN-2024-18 | 1,630 | 9 |
https://forums.theregister.com/forum/all/2007/11/08/beck_xp_shape_up/ | code | Remember who's paying
I'm speaking here as a twenty+ year veteran of the software development business, and -- yes -- I wear jeans (and a smart shirt) to work. I'm now a CTO-level consultant, and also CEO of a set-top box manufacturer.
Although, as Martin comments, Beck wasn't articulating himself too well, Beck has a very good point. Software developers have for a very long time been treated as a species apart: strange, feral creatures that ingest caffeine and extrude code, and operate according to their own lights.
It's suited the developers very well. But it's also insulated them from the most salient fact of their employment: that they're paid to make money for their employers. It's great to feel the aesthetics of the code innately, but if you're spending all day recoding a member function to be blisteringly fast, just because it "looked wrong", and the change makes no impact on the product, you've just sneaked a day's paid holiday.
The jeans-and-scruffy-T-shirt "dress code" doesn't encourage programmers to think business. Unfortunate, since it's business that's paying their wages, and reasonably expects that their every working action is directed towards increasing value in the product.
Ask the average programmer what are the market drivers for the product they're working on, and you'll probably get a blank stare. ("Why do I need to know that?") Yet they are expected to be working towards satisfying those needs. This is a dangerous disconnect, because developers are keenly intelligent people, on the whole, and if they understand the business model, have exposure to the end users, and are (in short) brought into the business, they can contribute massively to its profitability, and direct their own efforts more accurately towards a better product, and a healthier company.
But that requires a shift from both sides. Senior managers need to stop treating developers like quarantine victims; developers need in turn to stop treating work like play. Take a look at an outsourcing centre in India, and you'll see those principles at work. We, in the Western Hemisphere, need to learn those lessons, get a smarter (in both senses) attitude, and get down to work -- whilst there's still work to be had. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643462.13/warc/CC-MAIN-20230528015553-20230528045553-00721.warc.gz | CC-MAIN-2023-23 | 2,229 | 7 |
https://www.thecirclepit.com/2013/01/throwback-thursday-reflux | code | They released one album, “The Illusion of Democracy”, in 2004 which got many of the band members recognized in their own right. For example, Prosthetic Records approached Tosin Abasi with the concept of him writing a solo album. You can listen to some tracks from that LP down below. I also recommend looking into what each of the artists are doing currently. Reflux worked as a springboard for future greatness. | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154471.78/warc/CC-MAIN-20210803191307-20210803221307-00485.warc.gz | CC-MAIN-2021-31 | 416 | 1 |
http://casperfabricius.com/site/2009/03/26/uploading-multiple-files-with-progress-indicator-using-jquery-flash-and-rails/ | code | I just implemented a new way of uploading assets such as photos and PDF-files to Lokalebasen.dk. There is nothing revolutionary about it, but I hit a few snags on the way, and I thought I’d share my choices here.
Several ready-made solutions exist, and I chose one that was built as a jQuery plugin, was updated recently and was easy to use while being highly configurable: Uploadify. This article is not an Uploadify tutorial – you'll have to work that out from the documentation and the examples. Rather, it's about the last piece of the puzzle, how to make Rails play nicely with Uploadify.
I needed the functionality several places, so I wrote a partial I could reuse:
There is a lot going on here, so let's take it from the top. The
content_for :head section is the code that will be placed inside the
<head> part of the page – my layout takes care of that with a
file_uploader div seen later in the partial. This includes a lot of options, some of which use variables supplied to the partial. Here is an example of how I call the partial:
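(Illustrative sketch only — the names of the locals here are placeholders rather than the app's real ones:)

<%= render :partial => 'shared/uploadify',
           :locals => { :element_id => 'file_uploader',
                        :upload_path => assets_path,
                        :model_name  => 'asset' } %>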
I won’t walk you through all the options I supply for Uploadify, but let’s take a look at the important ones:
script is where Uploadify will post the uploaded file to. This should be the
create action of your asset controller.
scriptData is the most tricky one. The option specifies what parameters should be posted to the controller along with the file.
'format': 'json' ensures that the
wants.json block is invoked in the controller, instead of the default
wants.html. This helps the controller to separate Flash uploads from ordinary uploads. The two other parameters in
scriptData will be explained later in the article, as they are the key to getting the upload past the security and authentication measures taken.
fileDataName extracts the name to use for the uploaded file (e.g.
asset[:uploaded_data] for attachment_fu) directly from the fallback form.
wants.js in the controller.
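To make the controller side concrete, here is a rough sketch of what such a create action can look like (this is not the exact code from my app – the model and redirect paths are placeholders, Rails 2.x style):

class AssetsController < ApplicationController
  # Allow the session id to arrive as a parameter for the Flash upload (see below)
  session :cookie_only => false, :only => :create

  def create
    @asset = Asset.new(params[:asset])
    respond_to do |wants|
      if @asset.save
        wants.html { redirect_to assets_path }                     # ordinary fallback form
        wants.json { render :json => { :result => 'success' } }    # Uploadify's Flash post
      else
        wants.html { render :action => 'new' }
        wants.json { render :json => { :result => 'error' }, :status => 422 }
      end
    end
  end
end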
There are several gotchas when you upload files through Flash. The most common one, which also applies to ajax, is the infamous
ActionController::InvalidAuthenticityToken exception. You will get this exception on any default Rails installation with authenticity checking enabled. Rails expects any post to an action to include the
authenticity_token parameter. It is used to verify the post actually came from the same application, and Rails automatically adds it to all the forms and ajax requests it generates. In this case, we have to apply it manually, and this is what happens with
'authenticity_token': encodeURIComponent('<%= form_authenticity_token if protect_against_forgery? %>'). The
form_authenticity_token returns a valid token, but first we check if forgery is enabled. If it is disabled (which it usually is in tests), we will get an error if we invoke it.
encodeURIComponent makes sure that any characters in the token are encoded correctly. With this parameter, we should be able to make an authentic post through Flash – or maybe not …
Rails uses data in the user session to authenticate requests, and requests directly from Flash do not include the session cookie that Rails uses to find the session. Thus we still get the
ActionController::InvalidAuthenticityToken exception, even with the
authenticity_token parameter added. We have to make a slight hack into how Rails handles sessions to make this work. Place this code in a file in
config/initializers to apply the hack, which tells Rails to try to read the session id from a parameter if it can't find it in a cookie. This will only happen, however, if we add the line
session :cookie_only => false, :only => :create to the asset controller. Also, we must supply the session id in a parameter, which is what
'<%= Rails.configuration.action_controller.session[:session_key]%>': '<%= u session.session_id %>' does. The unique session key of the application is taken from the Rails configuration, and the session id is taken from the session.
With these measures in place, Rails can now properly authenticate our Flash upload request as a legal, secure post. As an added bonus, actions protected behind session-based logins now also just works. And I would guess that most applications require their users to register and login before they can upload files.
Finally, here is a trick if you use http basic authentication e.g. for the administration tool like we do on Lokalebasen.dk. Place an
is_admin? flag in the session, as shown in the code below. This will allow even Flash uploads to be authenticated, even if they don’t supply http basic authentication information: | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701168998.49/warc/CC-MAIN-20160205193928-00129-ip-10-236-182-209.ec2.internal.warc.gz | CC-MAIN-2016-07 | 4,565 | 33 |
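(A sketch of the idea – the filter name and credentials here are placeholders, not my actual code:)

before_filter :authenticate_admin

protected

def authenticate_admin
  return true if session[:is_admin]
  authenticate_or_request_with_http_basic('Admin') do |user, password|
    session[:is_admin] = (user == 'admin' && password == 'secret')
  end
end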
https://mattmazur.com/2015/07/03/lean-domain-search-at-3%C2%BD/ | code | Lean Domain Search, despite almost no work since its acquisition by Automattic two years ago, has continued to thrive, now handling more than 160,000 searches per month:
It’s monthly growth rate works out to be about 6.5%. Not huge, but not bad for maintenance mode. :)
I think its growth is still driven almost entirely by word of mouth so if you’ve ever shared it with anyone (I’m looking at you, Jay Neely), thanks!
Looks to me like getting put into maintenance mode was the best thing that could have happened to it :)
Haha, you joke, but if I kept working on it I probably would have added a bunch of features no one cared about, making it complex and hard to use in the process, hurting its long term growth. The trick is knowing when that’s the case and when it’s not… :)
Well deserved growth Matt! :) | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00052.warc.gz | CC-MAIN-2023-14 | 820 | 6 |
https://www.nudevotion.com/s/p/senreve-maestra-mini-suede-convertible-backpack-bag/ | code | Senreve "Maestra" convertible backpack satchel bag in genuine Italian leather and suede. Small exterior back compartment. Flap closure with magnetic snap. Hidden zipper closure for extra security. 7 interior compartments, including: Padded compartment (fits iPad mini); 4 small sleeve pockets; larger zipper pocket for loose items. 1.8" drop handle. Adjustable shoulder straps. Microsuede lining. 7.5"H x 17.5"W x 5.5"D. Made in Italy. | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989006.71/warc/CC-MAIN-20210509153220-20210509183220-00163.warc.gz | CC-MAIN-2021-21 | 435 | 1 |
http://www.webassist.com/forums/post.php?pid=47134 | code | The solution that Neilo proposes is one way to go about it, though Dreamweaver will throw an alert saying the site is contained in another site.
another way to go about it is to separate the sites locally, then put them back together when uploaded to the host.
What I mean by this is to have two separate site in Dreamweaver that each have a different local folder:
On the remote tab of Dreamweaver, set the main site to upload to www folder. and set the store site to upload to the www/store folder.
The error you are reporting is occurring because the power store files are not at the root of the Dreamweaver site.
OK - Great. What do I do now? I need to set up a store where the Power Store files are not at the root. This is because this particular store will be accessible via a password only. (Meaning you will not be able to even SEE the store without first logging in.)
This does go back to my frustration about the lack of documentation. That is a huge thing that should be mentioned in some sort of FAQ about PowerStore: that the files must be in the root of your site.
Is there no work around here? Could just the eCart and Template files be in the root - and could the index.php and other files be stored within a subfolder? | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828501.85/warc/CC-MAIN-20181217091227-20181217113227-00318.warc.gz | CC-MAIN-2018-51 | 1,232 | 8 |
https://stackoverflow.com/questions/19506983/pip-install-custom-include-path?noredirect=1 | code | On an ubuntu system on which I don't have sudo previleges, I wish to install a package via
pip (matplotlib to be precise), but some source packages are not installed on the system (however the binaries are installed).
I have created a virtual environment in which to install, and have downloaded the required source code, but I can't place them in the default
/usr/include/ etc.. When
pip runs matplotlib's
setup.py script, the source files are reported as missing.
Is there a way to instruct
setup.py where to look for the source?
CPPFLAGS adds the locations of the downloaded source to compile instructions, but
setup.py didn't find the source, so didn't attempt to compile some components (graphic backends).
pps: this is similar to, but more specific than this question | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201672.12/warc/CC-MAIN-20190318191656-20190318213656-00140.warc.gz | CC-MAIN-2019-13 | 773 | 11 |
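ppps: for what it's worth, a hedged sketch of the kind of thing I mean (paths are placeholders) – either exporting the flags before running pip inside the virtualenv, or calling build_ext directly so the extra directories are passed to the compiler:

export CPPFLAGS="-I$HOME/local/include"
export LDFLAGS="-L$HOME/local/lib"
pip install matplotlib

# or, from an unpacked source tree:
python setup.py build_ext --include-dirs=$HOME/local/include --library-dirs=$HOME/local/lib
python setup.py install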
http://docwiki.cisco.com/w/index.php?title=Open_System_Interconnection_Routing_Protocol&oldid=17514 | code | |Attention: DocWiki has reached EOL and will be decommissioned January 25, 2019.|
Open System Interconnection Routing Protocol
The International Organization for Standardization (ISO) developed a complete suite of routing protocols for use in the Open System Interconnection (OSI) protocol suite. These include Intermediate System-to-Intermediate System (IS-IS), End System-to-Intermediate System (ES-IS), and Interdomain Routing Protocol (IDRP). This chapter addresses the basic operations of each of these protocols.
IS-IS is based on work originally done at Digital Equipment Corporation (Digital) for DECnet/OSI (DECnet Phase V). IS-IS originally was developed to route in ISO Connectionless Network Protocol (CLNP) networks. A version has since been created that supports both CLNP and Internet Protocol (IP) networks; this version usually is referred to as Integrated IS-IS (it also has been called Dual IS-IS).
OSI routing protocols are summarized in several ISO documents, including ISO 10589, which defines IS-IS. The American National Standards Institute (ANSI) X3S3.3 (network and transport layers) committee was the motivating force behind ISO standardization of IS-IS. Other ISO documents include ISO 9542 (which defines ES-IS) and ISO 10747 (which defines IDRP).
OSI Networking Terminology
The world of OSI networking uses some specific terminology, such as end system (ES), which refers to any nonrouting network nodes, and intermediate system (IS), which refers to a router. These terms form the basis for the ES-IS and IS-IS OSI protocols. The ES-IS protocol enables ESs and ISs to discover each other. The IS-IS protocol provides routing between ISs.
Other important OSI networking terms include area, domain, Level 1 routing, and Level 2 routing. An area is a group of contiguous networks and attached hosts that is specified to be an area by a network administrator or manager. A domain is a collection of connected areas. Routing domains provide full connectivity to all end systems within them. Level 1 routing is routing within a Level 1 area, while Level 2 routing is routing between Level 1 areas.
Figure: Areas Exist Within a Larger Domain and Use Level 2 Routing to Communicate illustrates the relationship between areas and domains, and depicts the levels of routing between the two.
Figure: Areas Exist Within a Larger Domain and Use Level 2 Routing to Communicate
End System-to-Intermediate System
End System-to-Intermediate System (ES-IS) is an OSI protocol that defines how end systems (hosts) and intermediate systems (routers) learn about each other, a process known as configuration. Configuration must happen before routing between ESs can occur.
ES-IS is more of a discovery protocol than a routing protocol. It distinguishes among three different types of subnetworks: point-to-point subnetworks, broadcast subnetworks, and general topology subnetworks. Point-to-point subnetworks, such as WAN serial links, provide a point-to-point link between two systems. Broadcast subnetworks, such as Ethernet and IEEE 802.3, direct a single physical message to all nodes on the subnetwork. General topology subnetworks, such as X.25, support an arbitrary number of systems. Unlike broadcast subnetworks, however, the cost of an n-way transmission scales directly with the subnetwork size on a general topology subnetwork.
Figure: ES-IS Can Be Deployed in Point-to-Point, Broadcast, and General Topology Subnetworks illustrates the three types of ES-IS subnetworks.
Figure: ES-IS Can Be Deployed in Point-to-Point, Broadcast, and General Topology Subnetworks
ES-IS configuration is the process whereby ESs and ISs discover each other so that routing between ESs can occur. ES-IS configuration information is transmitted at regular intervals through two types of messages: ES hello messages (ESHs) and IS hello messages (ISHs). ESHs are generated by ESs and are sent to every IS on the subnetwork. ISHs are generated by ISs and are sent to all ESs on the subnetwork. These hello messages primarily are intended to convey the subnetwork and network layer addresses of the systems that generate them. Where possible, ES-IS attempts to send configuration information simultaneously to many systems. On broadcast subnetworks, ES-IS hello messages are sent to all ISs through a special multicast address that designates all end systems. When operating on a general topology subnetwork, ES-IS generally does not transmit configuration information because of the high cost of multicast transmissions.
ES-IS Addressing Information
The ES-IS configuration protocol conveys both OSI network layer addresses and OSI subnetwork addresses. OSI network layer addresses identify either the network service access point (NSAP), which is the interface between OSI Layer 3 and Layer 4, or the network entity title (NET), which is the network layer entity in an OSI IS. OSI subnetwork addresses, or subnetwork point-of-attachment addresses (SNPAs) are the points at which an ES or IS is physically attached to a subnetwork. The SNPA address uniquely identifies each system attached to the subnetwork. In an Ethernet network, for example, the SNPA is the 48-bit Media Access Control (MAC) address. Part of the configuration information transmitted by ES-IS is the NSAP-to-SNPA or NET-to-SNPA mapping.
Intermediate System-to-Intermediate System
Intermediate System-to-Intermediate System (IS-IS) is an OSI link-state hierarchical routing protocol that floods the network with link-state information to build a complete, consistent picture of network topology. To simplify router design and operation, IS-IS distinguishes between Level 1 and Level 2 ISs. Level 1 ISs communicate with other Level 1 ISs in the same area. Level 2 ISs route between Level 1 areas and form an intradomain routing backbone. Hierarchical routing simplifies backbone design because Level 1 ISs need to know only how to get to the nearest Level 2 IS. The backbone routing protocol also can change without impacting the intra-area routing protocol.
OSI Routing Operation
Each ES lives in a particular area. OSI routing begins when the ESs discover the nearest IS by listening to ISH packets. When an ES wants to send a packet to another ES, it sends the packet to one of the ISs on its directly attached network. The router then looks up the destination address and forwards the packet along the best route. If the destination ES is on the same subnetwork, the local IS will know this from listening to ESHs and will forward the packet appropriately. The IS also might provide a redirect (RD) message back to the source to tell it that a more direct route is available. If the destination address is an ES on another subnetwork in the same area, the IS will know the correct route and will forward the packet appropriately. If the destination address is an ES in another area, the Level 1 IS sends the packet to the nearest Level 2 IS. Forwarding through Level 2 ISs continues until the packet reaches a Level 2 IS in the destination area. Within the destination area, ISs forward the packet along the best path until the destination ES is reached.
Link-state update messages help ISs learn about the network topology. First, each IS generates an update specifying the ESs and ISs to which it is connected, as well as the associated metrics. The update then is sent to all neighboring ISs, which forward (flood) it to their neighbors, and so on. (Sequence numbers terminate the flood and distinguish old updates from new ones.) Using these updates, each IS can build a complete topology of the network. When the topology changes, new updates are sent.
IS-IS uses a single required default metric with a maximum path value of 1024. The metric is arbitrary and typically is assigned by a network administrator. Any single link can have a maximum value of 64, and path links are calculated by summing link values. Maximum metric values were set at these levels to provide the granularity to support various link types while at the same time ensuring that the shortest-path algorithm used for route computation will be reasonably efficient. IS-IS also defines three optional metrics (costs): delay, expense, and error. The delay cost metric reflects the amount of delay on the link. The expense cost metric reflects the communications cost associated with using the link. The error cost metric reflects the error rate of the link. IS-IS maintains a mapping of these four metrics to the quality of service (QoS) option in the CLNP packet header. IS-IS uses these mappings to compute routes through the internetwork.
IS-IS Packet Formats
IS-IS uses three basic packet formats: IS-IS hello packets, link-state packets (LSPs), and sequence-number packets (SNPs). Each of the three IS-IS packets has a complex format with the following three different logical parts. The first part consists of an 8-byte fixed header shared by all three packet types. The second part is a packet type-specific portion with a fixed format. The third part is also packet type-specific but of variable length.
Figure: IS-IS Packets Consist of Three Logical Headers illustrates the logical format of IS-IS packets.
Figure: IS-IS Packets Consist of Three Logical Headers
Figure: IS-IS Packets Consist of Eight Fields shows the common header fields of the IS-IS packets.
Figure: IS-IS Packets Consist of Eight Fields
The following descriptions summarize the fields illustrated in Figure 45-4:
- Protocol identifier - Identifies the IS-IS protocol and contains the constant 131.
- Header length - Contains the fixed header length. The length always is equal to 8 bytes but is included so that IS-IS packets do not differ significantly from CLNP packets.
- Version - Contains a value of 1 in the current IS-IS specification.
- ID length - Specifies the size of the ID portion of an NSAP address. If the field contains a value between 1 and 8 inclusive, the ID portion of an NSAP address is that number of bytes. If the field contains a value of zero, the ID portion of an NSAP address is 6 bytes. If the field contains a value of 255 (all ones), the ID portion of an NSAP address is zero bytes.
- Packet type - Specifies the type of IS-IS packet (hello, LSP, or SNP).
- Version - Repeats after the Packet Type field.
- Reserved - Is ignored by the receiver and is equal to 0.
- Maximum area addresses - Specifies the number of addresses permitted in this area.
Following the common header, each packet type has a different additional fixed portion, followed by a variable portion.
Integrated IS-IS is a version of the OSI IS-IS routing protocol that uses a single routing algorithm to support more network layer protocols than just CLNP. Integrated IS-IS sometimes is called Dual IS-IS, named after a version designed for IP and CLNP networks. Several fields are added to IS-IS packets to allow IS-IS to support additional network layers. These fields inform routers about the reachability of network addresses from other protocol suites and other information required by a specific protocol suite. Integrated IS-IS implementations send only one set of routing updates, which is more efficient than two separate implementations.
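For illustration only (this configuration sketch is not part of the original chapter; the NET and metric values are placeholders), a router running Integrated IS-IS for IP is typically configured along these lines:

router isis
 net 49.0001.1921.6800.1001.00
 is-type level-1-2
!
interface GigabitEthernet0/0
 ip router isis
 isis metric 20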
Integrated IS-IS represents one of two ways of supporting multiple network layer protocols in a router; the other is the ships-in-the-night approach. Ships-in-the-night routing advocates the use of a completely separate and distinct routing protocol for each network protocol so that the multiple routing protocols essentially exist independently. The different types of routing information basically pass like ships in the night. Integrated routing has the capability to route multiple network layer protocols through tables calculated by a single routing protocol, thus saving some router resources. Integrated IS-IS uses this approach.
Interdomain Routing Protocol
The Interdomain Routing Protocol (IDRP) is an OSI protocol that specifies how routers communicate with routers in different domains. IDRP is designed to operate seamlessly with CLNP, ES-IS, and IS-IS. IDRP is based on the Border Gateway Protocol (BGP), an interdomain routing protocol that originated in the IP community. IDRP features include the following:
- Support for CLNP quality of service (QoS)
- Loop suppression by keeping track of all RDs traversed by a route
- Reduction of route information and processing by using confederations, the compression of RD path information, and other means
- Reliability by using a built-in reliable transport
- Security by using cryptographic signatures on a per-packet basis
- Route servers
IDRP introduces several environment-specific terms. These include border intermediate system (BIS), routing domain (RD), routing domain identifier (RDI), routing information base (RIB), and confederation.
A BIS is an IS that participates in interdomain routing and, as such, uses IDRP. An RD is a group of ESs and ISs that operate under the same set of administrative rules and that share a common routing plan. An RDI is a unique RD identifier. An RIB is a routing database used by IDRP that is built by each BIS from information received from within the RD and from other BISs. A RIB contains the set of routes chosen for use by a particular BIS. A confederation is a group of RDs that appears to RDs outside the confederation as a single RD. The confederation's topology is not visible to RDs outside the confederation. Confederations must be nested within one another and help reduce network traffic by acting as internetwork firewalls.
Figure: Domains Communicate via Border Intermediate Systems (BISs) illustrates the relationship between IDRP entities.
Figure: Domains Communicate via Border Intermediate Systems (BISs)
An IDRP route is a sequence of RDIs, some of which can be confederations. Each BIS is configured to know the RD and the confederations to which it belongs. It learns about other BISs, RDs, and confederations through information exchanges with each neighbor. As with distance-vector routing, routes to a particular destination accumulate outward from the destination. Only routes that satisfy a BIS's local policies and that have been selected for use will be passed on to other BISs. Route recalculation is partial and occurs when one of three events occurs: an incremental routing update with new routes is received, a BIS neighbor goes down, or a BIS neighbor comes up.
Q - What two types of messages are sent between systems in a ES-IS?
A - Between ES and IS systems, IS hellos and ES hellos are sent at regular intervals to maintain the connections and to exchange subnetwork and network layer addresses.
Q - What link-state hierarchical routing protocol floods the network with link-state information when performing updates?
A - Intermediate System-to-Intermediate System (IS-IS) is an OSI link-state hierarchical routing protocol that floods the network with link-state information to build a complete, consistent picture of network topology. To simplify router design and operation, IS-IS distinguishes between Level 1 and Level 2 ISs. Level 1 ISs communicate with other Level 1 ISs in the same area. Level 2 ISs route between Level 1 areas and form an intradomain routing backbone.
Q - How is the IS-IS metric figured on each link?
A - IS-IS uses a single required default metric with a maximum path value of 1024. The metric is arbitrary and typically is assigned by a network administrator. Any single link can have a maximum value of 64, and path links are calculated by summing link values. | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660070.15/warc/CC-MAIN-20190118110804-20190118132804-00572.warc.gz | CC-MAIN-2019-04 | 15,507 | 61 |
https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/abort-processing-of-the-request/qaq-p/405963 | code | Usually, you would include your filter into the filter chain and override the doFilter() method as @bilal_ahmad outlined in his code example.
Depending on your understanding of how to "block/abort" the request, you could do one of the following:
Check for the right condition and send an appropriate HTTP response (e.g. 404 Not Found or any other 4xx or 5xx code that makes sense for your use case) - exactly what @bilal_ahmad mentioned; a rough sketch follows after this list. You can find a simple example of a LoggingFilter in the AEM Maven Archetype.
Proceed with request processing but skip any additional filters. Probably not what you are looking for.
Stop processing altogether and not even waste additional computing resources by sending any kind of response. You probably don't want to do that as this may lead to unexpected results.
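As a rough illustration of the first option (a generic servlet-filter sketch, not @bilal_ahmad's code; in AEM you would register it as an OSGi service with a Sling filter scope, like the archetype's LoggingFilter):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BlockingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;

        if (shouldBlock(httpRequest)) {
            // Send an appropriate status and stop: the chain is never invoked.
            httpResponse.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        // Otherwise continue normal request processing.
        chain.doFilter(request, response);
    }

    private boolean shouldBlock(HttpServletRequest request) {
        // Placeholder condition - replace with whatever check makes sense for you.
        return request.getRequestURI().startsWith("/content/blocked");
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}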
If your use case is really about blocking and/or dropping incoming requests (e. g. as a protective measure against DoS/DDoS, unwanted crawlers or other clients, etc.) my suggestion would be to stop them at an earlier stage. You could leverage some kind of web application firewall (WAF). Sometimes certain modules for Apache HTTPD do the job (mod_security , mod_qos or for simple cases even mod_rewrite ). | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00371.warc.gz | CC-MAIN-2021-21 | 1,215 | 6 |
https://doc.oroinc.com/4.1/backend/configuration/yaml/assets/ | code | You are browsing the documentation for version 4.1 of OroCommerce, OroCRM and OroPlatform, which is no longer maintained. Read version 5.1 (the latest LTS version) of the Oro documentation to get the updated information.
See our Release Process documentation for more information on the currently supported and upcoming releases.
The assets.yml file can be used to define CSS file groups. The listed files will be
automatically merged and optimized for web presentation.
The following example creates two groups (first_group and second_group), each of them containing three CSS files:
# src/Acme/DemoBundle/Resources/config/oro/assets.yml
assets:
    css:
        first_group:
            - 'First/Assets/Path/To/Css/first.css'
            - 'First/Assets/Path/To/Css/second.css'
            - 'First/Assets/Path/To/Css/third.css'
        second_group:
            - 'Second/Assets/Path/To/Css/first.css'
            - 'Second/Assets/Path/To/Css/second.css'
            - 'Second/Assets/Path/To/Css/third.css'
By default, when you install the application’s assets using the
assets:install command, all
CSS files from all groups and all bundles will be merged and optimized.
For debugging purposes, compression of the CSS files of certain groups can be disabled with the css_debug option:
oro_assetic:
    css_debug:
        - first_group
You can use the
oro:assetic:groups command to get a list of all available groups. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511075.63/warc/CC-MAIN-20231003092549-20231003122549-00860.warc.gz | CC-MAIN-2023-40 | 1,304 | 17 |
https://central.owncloud.org/t/install-owncloud-with-composer/20375 | code | I’ve installed your system to Ubuntu 19.0 in a local network. Pls note, that there’s one specific - my installation was done to one machine and a containers were used.
I downloaded a file from here https://github.com/ONLYOFFICE/docker-onlyoffice-owncloud
Pls clarify some questions:
where does your program save the data and settings in such a case?
where is a user's data (permissions and so on) saved? SQLite and MySQL aren't used in your system
how can I make a data archive/backup? If I want to move the system to another machine, will it be enough to do a new installation and copy all the files?
I’ve found a files here /var/lib/docker/volumes/docker-onlyoffice-owncloud_app_data/_data/data | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635920.39/warc/CC-MAIN-20210618073932-20210618103932-00123.warc.gz | CC-MAIN-2021-25 | 695 | 7 |
https://thefiringline.com/forums/showpost.php?p=5173771&postcount=2 | code | I have both sub compact autos and stub nose revolvers. I tend to carry the revolvers more. I like the fact of no external safeties, lines break up better in front pocket and can be fired from the pocket if needed.
I have a few but my most carried are the 638 and recently the Ruger LCR.
Know of that you speak,
Last edited by blackamos; August 6, 2012 at 03:22 PM. | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174157.36/warc/CC-MAIN-20170219104614-00095-ip-10-171-10-108.ec2.internal.warc.gz | CC-MAIN-2017-09 | 364 | 4 |
https://onoroff.biz/ms-sql-server-2005-developer-edition-free-download.php | code | System Requirements Supported Operating System. Install Instructions Click the Download button on this page to start the download, or choose a different language from the drop-down list and click Go. I still use it for a changing some photos because it's so quick and easy. Unfortunately, it lost support. The posting of advertisements, profanity, or personal attacks is prohibited.
Tales from documentation: Write for your clueless users. It will download and install the latest components selected for installation.
The former one is recommended to install. While this isn't really programming related, I'll answer anyway. Thanks - I need to tinker with both the mirroring (high-performance) and the table partitioning functions. Ah, then that would explain it! Unfortunately it still looks like you're going to have to get yourself an MSDN subscription in order to get ahold of Developer edition. | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890028.58/warc/CC-MAIN-20201025212948-20201026002948-00216.warc.gz | CC-MAIN-2020-45 | 1,170 | 3
http://www.sevenforums.com/software/234401-autom-nesting-groups-w7-client.html | code | Autom. nesting groups in w7 client
At our firm we install Exact Globe with SCCM (MSI). But we now face the following challenge:
Exact creates a group called "SQLServer2005MSSQLUser$SYXXXXX$MSSQLSERVER"
This is a group which we automatically want to nest with Authenticated Users.
That we can do with net localgroup .. /add, but the SQL group name contains a variable, %computername%
Anyone have an idea?
Can I rename the group name through variables? Then I can add the name through a command..
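For example, would something along these lines work, expanding the computer name at run time (just a sketch)?

net localgroup "SQLServer2005MSSQLUser$%COMPUTERNAME%$MSSQLSERVER" "Authenticated Users" /add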
We are working in an windows2008r2 network and are users of;
- windows 7 32 bit
Thank you verry much..
mark ten bos
Last edited by mtenbos; 11 Jun 2012 at 06:24 AM..
Reason: possible sollution/.. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699068791/warc/CC-MAIN-20130516101108-00052-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 680 | 13 |
https://paizo.com/people/Iskander | code | Put them in a between a rock and a hard place, make them choose between two evils, and have someone come after them as a consequence of the lesser one.
Sure they can take out a divine messenger or four... but do they REALLY want to?
One way to approach this is to try adding significant collateral damage to encounters as a way to escalate the conflict:
- do the PCs REALLY want to take out the bad guy(s) if it means killing all those orphans as well?
- if the PCs killed the orphans, how do all the local not-so-bad guys feel about that?
- do the PCs REALLY want to slaughter the demonstrators when the protest turns ugly?
- if the PCs kill all the townsfolk, how does that make the city's patron deity feel?
Just how far are your characters willing to go to get their prize? | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00516.warc.gz | CC-MAIN-2022-40 | 777 | 8 |
https://community.bt.com:443/t5/Archive-Staging/FON-varying-max-proportion-of-bandwith/m-p/1996984 | code | You can't change the speed.
It's max 3MB and your traffic always takes priority.
You can't make any changes to BTWifi-Fon but you can opt out of it if you want.
If you do opt out you will not be able to use BTWifi hotspots when out and about.
See link about opting out.
Thanks - does the visitor's usage add to the host's usage, such that the host's monthly allowance might be exceeded?
any usage on BTWIFI/FON is not added to the home data usage | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00335.warc.gz | CC-MAIN-2022-49 | 446 | 7 |
http://www.sharepointsecurity.com/sharepoint/antigen-services-and-simple-command-line-job-management/ | code | * This article was written in the context of Sybari Antigen For SharePoint, a technology now considered deprecated with the introduction of Forefront Security for SharePoint 2010. Variations may exist. *
The Antigen Services
The Antigen Services are what are the backbone of the Antigen framework. By implementing these services within your environment, it not only allows your detection engine to interact with your SharePoint environment, but also allows you to manage processes and other methods related to the client applications.
The two large services that compose the Antigen 8.0 environment are:
Antigen Services Breakdown
The first of these services, AntigenService, acts as the mediator on the server, providing functionality to client side applications that are responsible for the configuration of the Antigen processes. It is the most vital service, since it is also responsible for the scanning on the SharePoint server.
The second service, AntigenSp2Service, is the service which converses with the SQL database in order since SharePoint relies on the SQL backend for content storage.
Simple ways to manipulate with the Command Line
Using the command line against the first Antigen service, AntigenService, is perhaps one of the most useful tools. Since the Antigen scan jobs are pretty much what makes the Antigen framework, it is useful to know how to disable and enable these jobs quickly through the command prompt.
Start -> Run -> type cmd -> enter -> navigate to your Antigen directory using the cd prefix
Once there, locate the antigenstarter console application. From here you can load or unload engines using the d/e switch, in other words:
- Antigenstarter d (disable engine)
- Antigenstarter e (enable engine)
There are a variety of engines to unload / load; the parameters to pass with the above command-line arguments are:
- 1 – Norman
- 8 – Sophos
- 16 – CA InoculateIT
- 32 – CA Vet
- 64 – Command
- 128 – AhnLab
- 256 – Sybari
- 512 – VBuster
- 2048 – Kaspersky
You can also use these commands remotely against the Antigen instance by using a /RemoteServer suffix at the end of your command.
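For example (these are illustrative commands only; the engine number is taken from the parameter table above):

antigenstarter d 8     (disable the Sophos engine)
antigenstarter e 8     (re-enable the Sophos engine)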
There are several other command-line tools; however, they are tailored around performing Antigen diagnostics, a subject for another article. | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999838.27/warc/CC-MAIN-20190625132645-20190625154645-00517.warc.gz | CC-MAIN-2019-26 | 2,250 | 25
http://drownedinsound.com/community/boards/social/4185821 | code | Normally, you can understand why hyper-popular things are hyper-popular.
Coca Cola, McDonalds etc. They're not the best, but they're easily likeable.
The biggest mystery of hyper-popularity is Lee Evans. Whenever you get a new job, it's highly likely that the majority of your workmates will find him funny. And when they tell you they've got tickets for him, it's near-impossible to feign congratulations or even jealousy.
And so I set thee a task: Name me a popular comedian LESS amusing than this creature. I bet you'll struggle. | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583728901.52/warc/CC-MAIN-20190120163942-20190120185942-00575.warc.gz | CC-MAIN-2019-04 | 532 | 4 |
https://bulldogjob.com/companies/profiles/1783-cryptosoftwares | code | Technical skills we value
CryptoSoftwares is a leading, goal oriented, well-established Blockchain Development company around the world. Our team of experienced blockchain developers are well expertised in decentralised and custom blockchain software development on multiple frameworks like Hyperledger and Ethereum among many others.
What you would create with us?
- Blockchain Application Development
- Cryptocurrency Development
- Cryptocurrency Wallet Development
- Cryptocurrency Exchange Software Development
- ICO Development
- Smart Contract Development | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588246.79/warc/CC-MAIN-20211028003812-20211028033812-00456.warc.gz | CC-MAIN-2021-43 | 561 | 9 |
http://stackoverflow.com/questions/19505083/creating-extra-presentation-properties-for-itemssource-items | code | I have an ObservableCollection of items bound to a listbox as the ItemsSource.
Some of these items are also located in another collection on the same ViewModel (call it CollectionTwo).
I want to be able to take the count of the item in Collection2 and display it for the respective item in CollectionOne. When CollectionTwo properties change (ie the Count), it must also be reflected back to CollectionOne.
I would guess the best way to do this in MVVM is to wrap items in CollectionOne with a viewmodel class with an extra Count property on it. Can someone point me to a good example of this? Or perhaps another method to tackle this problem that won't hugely weigh down my ItemsSource performance. | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447554236.61/warc/CC-MAIN-20141224185914-00079-ip-10-231-17-201.ec2.internal.warc.gz | CC-MAIN-2014-52 | 699 | 4 |
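Something like this wrapper is what I have in mind (a rough sketch only; the Item class is just a stand-in for my real model type):

using System.ComponentModel;

// Placeholder for the real model type held in CollectionOne.
public class Item { }

public class ItemViewModel : INotifyPropertyChanged
{
    private readonly Item _item;
    private int _count;

    public ItemViewModel(Item item)
    {
        _item = item;
    }

    public Item Item
    {
        get { return _item; }
    }

    // Mirrors how many matching entries exist in CollectionTwo.
    public int Count
    {
        get { return _count; }
        set
        {
            if (_count == value) return;
            _count = value;
            OnPropertyChanged("Count");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(name));
        }
    }
}

The parent viewmodel would then listen to CollectionTwo's CollectionChanged event and push updated counts into the matching wrappers.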
https://opensprinkler.com/forums/reply/69710/ | code | Yes I meant two columns. My bad!
The ideal situation for me is to have 2 columns and use my iPad in landscape view. As well any pc browser.
Just today I was trying to find more info on the UI and found out that the UI assets can be stored locally and be altered which is good!
I will follow this road and inform with progress in case somebody is interested as well! | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00528.warc.gz | CC-MAIN-2023-06 | 365 | 4 |
https://forum.peplink.com/t/wpa2-enterprise-setup-802-1x-v1-v2-difference/20316 | code | Does anyone know what the V1 / V2 versions of the 802.1x mean for setting up a wireless network with WPA2 Enterprise?
In the manual it states only that:
“When WPA/WPA2 - Enterprise is configured, RADIUS-based 802.1 x authentication is enabled. Under this configuration, the Shared Key option should be disabled. When using this method, select the appropriate version using the V1/V2 controls. The security level of this method is known to be very high.”
I tried both settings with a MacBook Pro wireless client (OS X 10.14) and it only seems to work with V1.
My other WPA2 settings are EAP-PEAP and MSCHAP v2. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206133.46/warc/CC-MAIN-20200922125920-20200922155920-00130.warc.gz | CC-MAIN-2020-40 | 613 | 5 |
https://news.ycombinator.com/item?id=1540805 | code | The problem is, C is a language where = vs == is actually something you have to think about. It's natural that IDEs arise to address this.
XCode is fixing what is really a problem at the language level. So it's the wrong place to fix the problem. At the same time, trying to fix the problem at the programmer level (by forcing programmers to check = vs ==) is equally the wrong place to fix it.
It's a language issue, and that's the place it really should be addressed. But it will never be addressed there, because it would break backwards compatibility since decades.
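For anyone who hasn't been bitten by it, the slip under discussion looks like this (a contrived example):

#include <stdio.h>

int main(void) {
    int x = 5;

    if (x = 0) {          /* assignment: x becomes 0 and the test is always false */
        printf("never reached\n");
    }

    if (x == 0) {         /* the comparison that was actually intended */
        printf("x is zero\n");
    }
    return 0;
}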
Uhh, what? In Lisp's case, most people use emacs and coding Lisp in Notepad is mostly IMPOSSIBLE because trying to balance parentheses and navigate and indent s-expressions manually (and correctly) is idiotic.
There, problem solved.
I would solve the problem in Obj-C by introducing a directive on the file-level for the newer syntax, like how they did it in F# with the "#light" directive. And when a file is compiled with that directive, the compiler could trigger compile-time errors for those obviously dangerous constructs.
x = Post.first ? do_foo(x) : do_bar # does the assignment, then evaluates the result
A nice shorthand in my opinion, so the python way isn't a cure-all :)
(x = Post.first) ? do_foo(x) : do_bar | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609613.73/warc/CC-MAIN-20170528101305-20170528121305-00149.warc.gz | CC-MAIN-2017-22 | 1,291 | 9 |
http://people.sutd.edu.sg/~stefano_galelli/ | code | Welcome to the Resilient Water Systems Group website! Here is our video introduction!
Our recent paper Complex relationship between seasonal streamflow forecast skill and value in reservoir operations (with Sean Turner, James Bennett, and David Robertson) has just been selected as editor’s highlight for Hydrology and Earth System Sciences. Check out the paper here!
BATADAL was recently completed, with the results presented at the World Environmental & Water Resources Congress (Sacramento, California). Visit BATADAL webpage for more info on datasets, algorithms and participants.
We are looking for two Postdoctoral researchers for a project focussing on the Water-Energy-Climate Nexus in the Greater Mekong Sub-region. If you are interested, visit our Openings page. | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806676.57/warc/CC-MAIN-20171122213945-20171122233945-00385.warc.gz | CC-MAIN-2017-47 | 774 | 4 |
https://www.my.freelancer.com/projects/php-website-design/clone-website.78977/ | code | I'm looking to get a clone of a website. I'm not sure if they are using a CMS, but mine would need to incorporate some PHP. The site name is included in the text file that was uploaded with this posting. I don't need all of the features, just most. I just need the basic structure and I will be using my own content.
We are a group of professionals and have 20+ PHP developers and over 200 clients. We can do this with full support. Our site could be visited at : [login to view URL] Thx,Vivek
16 freelancers are bidding on average $254 for this job
Hi i ican do this for u please check PMB for previous work and details , I have done 4 poker sites that are also mentioned at PMB so please check . Kindly send me full details regarding ur website. Thanks | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589251.7/warc/CC-MAIN-20180716095945-20180716115945-00173.warc.gz | CC-MAIN-2018-30 | 762 | 4 |
https://forums.techweez.com/t/wordpress-for-google-docs-chrome-extension-for-collaborating-introduced/957 | code | So Wordpress earlier today introduced “Wordpress for Google DOcs” a Google Chrome extension for collaborating on Wordpress articles. Something I think is brilliant and overdue. I wonder why Wordpress thought to introduce a collaborating feature outside of Wordpress itself.
We are happy to announce WordPress.com for Google Docs, a new add-on that lets you write, edit, and collaborate in Google Docs, then save it as a blog post on any WordPress.com or Jetpack-connected WordPress site. Your images and most formatting will carry over too. No more copy-and-paste headaches!
Compose a document in Google Docs and send it directly to any WordPress.com or Jetpack powered WordPress.org site as a draft post.
Instead of copying and pasting from Google Docs to WordPress and losing your images and formatting in the process, this add-on makes it easy to compose in Google Docs and publish to WordPress with formatting intact and images being uploaded properly.
After installing the add-on, you’ll be able to open WordPress.com for Google Docs in any Google Doc and connect your sites. | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738573.99/warc/CC-MAIN-20200809192123-20200809222123-00236.warc.gz | CC-MAIN-2020-34 | 1,086 | 5 |
https://www.xenfun.com/tags/custom/ | code | This addon allows you to set permissions for Custom Fields:
Input permissions: select which usergroups can access and edit the custom field (supports the 3 original locations)
Output permissions on profile pages: select which usergroups can...
Arguably the most important part of a forum is its user base. XenForo contains a wide range of user management tools in the Users section of the admin control panel.
This section covers the more complex features that relate to users in XenForo.
Custom user fields
This add-on allows you to create custom statistics fields on your forums. You can revise the phrases as you wish through your language file.
I want to make an important reminder.
Yes, you can show the areas you want as a widget but I recommend not using it in the sidebar.
You can simply add a...
The add-on allows you to edit the lock page as you like. Of the options to configure the template, configure and specify the contacts that will be displayed for communication. In addition to all this, you can display a standard message as desired.
Add animation to predefined elements or elements of your choice with extra settings.
add animations to predefined elements (currently: badge for Inbox and Alerts, badge for staff bar links, online indicator)
add animations to 3 separate groups of custom elements (simply...
The add-on implements a new custom field type - Location, allowing to collect location information from your users. It provides the list of countries/areas/cities based on World Cities Database - https://simplemaps.com/data/world-cities and allows admins to custom the list, add, edit and delete...
Custom Username Icons brings a new level of customization for you and your users!
This add-on lets your user select an icon to be displayed near their username everywhere on your board.
Icons can be selected either from a list of Font Awesome icons (more than 500 icons) or from a...
XenFun submitted a new resource:
How to add a Custom Group Badge Banner to User Info - Just a short tutorial on how to add a custom user group badge/banner to the user info area
Read more about this resource...
Display image with "User banner text" in "User info
STEP 1: Go to: AdminCP > Appearance > Templates > and search for the "extra.less" file
Open the file and add this code (Call it anything you want. In the example here, I called it ".myBadge" ):
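(Illustrative version only — the background image path and sizes below are placeholders; adapt them to your badge graphic:)

.myBadge {
    display: inline-block;
    width: 16px;
    height: 16px;
    margin-right: 4px;
    background: url('styles/default/mybadge.png') no-repeat center;
    background-size: contain;
    vertical-align: middle;
}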
Adjust the settings as needed for sizing and...
Threads will be where your primary forum content is posted. Each thread will be created within a particular forum, though they can be moved by moderators if needed.
At its simplest, a thread is created by a user with a title and the text that will make up the first post. All replies... | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195745.90/warc/CC-MAIN-20201128184858-20201128214858-00143.warc.gz | CC-MAIN-2020-50 | 2,701 | 27 |
https://reliabilityanalyticstoolkit.appspot.com/reliability_growth_planning | code | This tool is intended to assist in developing an idealized reliability growth curve using methods described in MIL-HDBK-189. The input table below is pre-filled with an example test data set,
"example of case 2," found on page 46 of from MIL-HDBK-189 (Ref. 1).
For a given initial test period (phase 1) and associated average MTBF and growth rate, the tool calculates an idealized reliability growth curve that can be expected from subsequent test phases.
By selecting plot option two below, the tool calculates the test time required to achieve an MTBF goal based on the phase 1 parameters entered in the first line of the table. For example, the pre-populated test data set results in an achieved MTBF of 110 hours after 10,000 hours of testing. Entering only the phase 1 information from the test data set and selecting plot option 2 with an MTBF goal of 110 hours, results in 10,000 hours of test time required (subject to rounding errors), as expected. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00523.warc.gz | CC-MAIN-2023-14 | 957 | 4 |
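For readers who want to reproduce the arithmetic offline, the sketch below implements a generic Duane/Crow-style idealized growth curve; it is not the tool's source code, and the inputs shown are placeholders rather than the handbook's case 2 values.

def idealized_mtbf(t, t1, m_i, alpha):
    # Instantaneous MTBF on the idealized growth curve for t > t1.
    return m_i * (t / t1) ** alpha / (1.0 - alpha)

def test_time_for_goal(m_goal, t1, m_i, alpha):
    # Invert the curve: cumulative test time needed to reach an MTBF goal.
    return t1 * ((1.0 - alpha) * m_goal / m_i) ** (1.0 / alpha)

# Placeholder phase-1 inputs: 1,000 test hours, 50-hour average MTBF, growth rate 0.3
print(idealized_mtbf(10000.0, t1=1000.0, m_i=50.0, alpha=0.3))
print(test_time_for_goal(110.0, t1=1000.0, m_i=50.0, alpha=0.3))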
http://www.codingforums.com/showthread.php?t=284147 | code | Originally Posted by helen11
Hi I'm new to PHP and I would like to know how to sort text box values entered by a user alphabetically. Thanks in advance
You might want to separate the values using a delimiter. If you want, you can store them in a database. Once done, you can run a SQL query and add ORDER BY ... ASC at the end. This will sort them alphabetically
You can either store them in array or use while loop key in values.
Tell me more about your program or logic if I can be of any help to you. | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164987957/warc/CC-MAIN-20131204134947-00089-ip-10-33-133-15.ec2.internal.warc.gz | CC-MAIN-2013-48 | 495 | 5 |
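If the values arrive in a single text box, one simple approach is to split, sort, and re-join them in PHP itself (a sketch only — the form field name and the comma delimiter are just examples):

<?php
// Split the submitted value on commas, trim each piece, drop empties,
// sort alphabetically (case-insensitive), then print the result.
$raw   = isset($_POST['values']) ? $_POST['values'] : '';
$items = array_filter(array_map('trim', explode(',', $raw)));
natcasesort($items);
echo implode(', ', $items);
?>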
https://gitlab.kitware.com/paraview/paraview/-/issues/17372 | code | Extract time step filter not not working as expected
Even though you use the Extract Time Steps filter to limit the time steps to load, you still get all those time steps you didn't want.
Here is the problem. The Extract Time Steps filter works by modifying the
TIME_STEPS information given in the request information phase of the pipeline. When you apply this as a filter in ParaView, the source is marked as Ignore Input and the new filter reports only those selected time steps. If you then hit play, it visits each selected time step and everything works as designed.
The problem is that the filter still makes it too easy to get data in unselected time steps. If the filter gets a time value different than what was selected, it will happily send that up to the source and give you data for a time value you didn't select. This can happen a lot. For example, when you first apply the filter, ParaView might not immediately change the time and refresh, so you end up with the same data from an unselected time step. An even more important example: you should be able to create two Extract Time Steps filters to get data from two different time steps simultaneously.
In the update request extent pass, Extract Time Steps should modify the
UPDATE_TIME_VALUE to match one of the selected time steps. | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488540235.72/warc/CC-MAIN-20210623195636-20210623225636-00129.warc.gz | CC-MAIN-2021-25 | 1,291 | 7 |
https://karthikbgl.wixsite.com/karthikr | code | Karthik makes world a better place using technology
I am a web developer, passionate about technology. I enjoy creating things. I have assisted a wide variety of firms ranging from art and media companies, intellectual property firms, health care startups, to research and analysis companies.
Here are some Social ways to find me: | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100555.27/warc/CC-MAIN-20231205172745-20231205202745-00560.warc.gz | CC-MAIN-2023-50 | 330 | 3 |
https://hub.docker.com/r/iansmith/musl-fileutils/ | code | This image is part of an attempt to create "static world" for docker containers.
In static world if you don't want something in your docker image, just delete it. There should be no dependencies--or perhaps as few as possible--between the tools in the container.
This gives you a /bin/ directory which has most of the core unix tools, all statically linked against musl, for about 38MB.
The things that would be nice but are not included in this yet are:
make sed file gawk | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864837.40/warc/CC-MAIN-20180522170703-20180522190703-00148.warc.gz | CC-MAIN-2018-22 | 473 | 5 |
https://xenanetworks.com/?knowledge-base=knowledge-base/valkyrie/downloads/scripting-specifications-examples | code | Xena’s Layer 2-3 and Layer 4-7 script samples are stored on GitHub incl. TCL, Perl, Java, Python, Ruby & BASH. Click the button to visit GitHub and browse the library.
https://libguides.uwinnipeg.ca/rdm/dmp | code | A Data Management Plan or DMP is a formal document detailing the strategies and tools you will use to manage your research data and/or other research materials through each stage of your project from proposal to publication and beyond. Your DMP should include details on:
Your DMP should be a "living" document that you review, modify, and update throughout your project to accurately reflect your data management strategy.
Portage Network. (2020, August 25). Primer - Data Management Plan. Zenodo. https://doi.org/10.5281/zenodo.4495631
Portage Network. (2020). Brief Guide - Create an Effective Data Management Plan. Zenodo. https://doi.org/10.5281/zenodo.4004957
Portage, a CARL initiative for data stewardship in Canada, provides a simple walk through Data Management Plan tool: DMP Assistant
Portage Network has created several Data Management Plan Templates for a variety of different data types, including (links to download PDFs):
How to create a DMP using one of these templates in the DMP Assistant:
The following DMP Exemplars are available in the Digital Research Alliance of Canada's Training Resources. Please check the list for more examples. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510520.98/warc/CC-MAIN-20230929154432-20230929184432-00862.warc.gz | CC-MAIN-2023-40 | 1,157 | 8 |
http://crypto.stackexchange.com/users/4218/miljen-mikic?tab=activity&sort=all | code | |visits||member for||1 year, 11 months|
|seen||Aug 31 at 12:01|
Math & problem solving lover. Java EE professional with considerable experience in Linux and database worlds.
Why is elliptic curve cryptography not widely used, compared to RSA?
Excellent question. However, things have been changing. Have a look at the NSA specifications and pay attention to key exchange and digital signatures: nsa.gov/ia/programs/suiteb_cryptography
https://www.pcmag.com/article2/0,2817,1886639,00.asp | code | Pirates are crafty. Now that they've built OS X PCs, their minds have grown curious. How to improve the beta? Rip it apart and start again, of course! Not necessarily the hardware, though; some smart folks have been working on several fascinating software projects. The nexus for this project is currently www.osx86project.org: Check the site often to see what everyone is up to. Here are some great project ideas:
- Dual-boot using one drive. We set up our system as a dual-boot, but using two hard disks. What a pain! Things would be easier if it ran on a single drive instead. The dd command to copy the image is different: dd if=tiger-x86-flat.img of=\\?\Device\Harddisk0\Partition3 bs=512 skip=63. You need the skip variable in order to pull the actual partition image out of the entire drive image file. You can use the Grub or PartitionMagic apps (and the related BootMagic utility) in order to set up a dual-boot between the two operating systems.
- Extend the size of the image. The downside of the 6GB image file is just that you're limited to a total of 6GB in OS X using this file, even if you copied it to a 160GB disk like we did. For information on workarounds, check out this thread: forum.osx86project.org/index.php?showtopic=753.
- Adding peripherals. We were psyched that the on-board sound chip worked on the first try. Same goes for on-board Ethernet; we were on the Internet the minute OS X started up for the first time. Got an odd-ball video capture card that you know has Mac drivers? What about burning DVDs? There's a lot of work still to be done; give one of these a shot and post the results to the wiki.
- Sluggo/Power Computing paint job. OK, we're just kidding about this one. Mostly. Though you could add chrome side pipes for effect, and... | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257939.82/warc/CC-MAIN-20190525084658-20190525110658-00105.warc.gz | CC-MAIN-2019-22 | 1,773 | 5 |
https://www.dk.freelancer.com/projects/website-design-graphic-design/make-word-doc-ebook/ | code | I've got a book written in Word right now. It's about 250 pages, but with very large margins, double spaced, and with large a large font (size 16). It's all pretty well organized with Chapters and titles already.
I'm wanting to take this doc to the next level and make it a really slick ebook so that I can start selling it. What I need is someone to help make it look really sharp and not just another boring document. I'm guessing we'll need a linkable index page, a cool layout, better font selections and some stuff like that. Since I've never made an ebook before, I really don't know what to even ask for. I really would like to be impressed.
The contents of this ebook is a Christian study on the book of Romans. It's written by a very well respected author here in Brazil, so that means it's all in Portuguese. I can help explain what stuff means, but I'm sure with some help from Babelfish, anyone here who knows ebooks well could do a great job for me.
I'm also thinking about using PDF as my Ebook format, but I'm open for suggestions here as well.
Thanks for your time and your help in advance. | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001089.83/warc/CC-MAIN-20190627095649-20190627121649-00325.warc.gz | CC-MAIN-2019-26 | 1,104 | 5 |
https://www.mail-archive.com/[email protected]/msg13899.html | code | On 05/01/2017 18:24, Jonas Smedegaard wrote:
Quoting Ximin Luo (2017-01-05 17:50:00)
I propose some reasonable checks, to ensure that we get people who are
interested. I disagree that this attitude is flawed.
What do others think?
Until now the discussion seems to have been centered on the Debian
grant them access? Won't they leave the team out in the cold after
pushing their packages?
But those are actual people, not just names : are all applicants aware
they are expected to maintain those packages? Are they really interested
in doing so? Do they have any long-term use for an account on alioth?
I'm for granting them access if they have something to contribute and
they want to join, but I'm against forcing them to join to contribute.
Snark on #debian-js | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745800.94/warc/CC-MAIN-20181119150816-20181119172816-00438.warc.gz | CC-MAIN-2018-47 | 764 | 14 |
https://www.br.freelancer.com/projects/php-data-entry/retrieve-email-adresses-from-specified/ | code | For marketing purposes I would like to have someone code a script for;
- Email collecting / harvesting.
The following parameters/settings should be customizable before executing the "search":
1) Preferred search engine(s) - Example: [url removed, login to view], [url removed, login to view], aol, google.com..
2) Domain to collect email addresses for. Example: domain.com.
3) Option to input specified/relevant keywords (instead of the domain or in addition to it), as well as negative keywords. (If not too much work, it would be preferred to have an option to save these settings.)
4) Save output (collected email addresses) to a file (preferably: csv, txt, vcard ..), to be able to import into Outlook for bulk email.
5) Set a proxy with login/pw for the collecting process (optional).
I would like to have this script accessible online (www) through my website in a pw-protected dir, of course (this I will take care of..).
I know there are software out there offering these type of "services", but I think they could be even better and tailored.
Looking forward for your solutions/contributions!
I would appreciate it if you could give me a link to let me test and see how many email addresses your script can actually collect, so I can perform a few test runs and choose the best script for my purpose.
Thanks in advance. Happy bidding! :)
3 freelancers are bidding on average kr757 for this job
Hello Sir i have 5+ member team sir my team have good experience of data entry job and seo [url removed, login to view] plz view my profile and last client review. | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937074.8/warc/CC-MAIN-20180419223925-20180420003925-00579.warc.gz | CC-MAIN-2018-17 | 1,578 | 15 |
http://nemozen.semret.org/2009/07/ | code | - When you receive a message on Facebook, it sends you a notification by email. Great. But then you can't reply! Why? Why don't they just set the reply-to in the email header to an address that will send it back to the person's inbox in Facebook? That way you and your friend are still communicating through Facebook but with the added convenience of email e.g. on your mobile. But no, they force you to login to the Facebook website to reply.
- Similarly, suppose you are logged in to Facebook, and you want to email your friend. You go to you friends profile, and guess what, you can't click on the email address to send them email! Why? Why can't they just make it a mailto link?
- So ok, you decide to just copy and paste the address, of course. But you can't -- it's an image! Why? Whyyyyy? Why can't they just leave it in plain text, why do they want to go through the extra expense of converting everyone's email into an image?
The technical ideal here is obviously flexibility: let users exchange emails, SMSes, IMs, everything they want with their friends, with Facebook being the hub of their online universe. Instead of re-inventing separate and more primitive versions of email and IM inside their closed world, they could inter-connect and inter-operate. They could also for example enable you to chat with your Facebook contacts directly even if only one of you is logged in to Facebook and the other is on AIM, Yahoo Messenger, MSN messenger, or Google Talk... If Gaim and Trillian could do that years ago, surely Facebook can. They could effectively unify all the existing message systems into a grand Facebook Open Overlay IM ("FOO IM"). It would be a great service to their users, and a manifestation of the core raison d'être of a social network. And of course, they already have plenty of employees there who are very smart and experienced with this kind of stuff, so they definitely could. But, instead of doing the right thing, their business model is forcing them to instead handicap their users' communications!
Every company must have a way to make money of course. Through some combination of good ideas, timing, environment, luck etc. companies end up with very different business models. Here it looks like Facebook is trending toward one which requires an "adversarial" relationship with the user. We're seeing hints that their need to reach profitability is starting to go against the best interest of their users. Sure you can still make money that way. But that road is ugly. Down that road you end up with health insurance companies whose profits rely on denying coverage to people who thought they had paid for it. Shady calling cards where they put obstacles in your way so you can't fully use the advertised number of minutes. Sleazy subscription schemes that generate profits by making it difficult to cancel even when you are entitled to. Everyone knows that world, those businesses you just hate, the ones you complain about. Those are simply businesses where the company's incentives are not aligned with the users'.
In that sense, Facebook today is eerily reminiscent of AOL in the late 1990s. Facebook is the king of social networks with something like 300 million users. AOL was the king of Internet access providers, with 30 million users paying $20/month! (Here's an interesting side question: I wonder how Facebook users as a percentage of total Internet users today, compares to AOL subscribers as a percentage of total Internet population in 1999? I wouldn't be surprised if it's roughly the same.) And at the very peak of its dominance, AOL was showing the same signs. Instead of letting their users just go to any website directly, they had this limited proprietary system with "rooms", "keywords", "channels", their own content, their own applications, etc. The reason was because they were stuck in a business model of a closed online service from the 1980s. So even though they knew the open network was infinitely better, they were devoted to a doomed goal of keeping the users inside their own closed world. Inevitably their users realized they could get more for less: pay $10/month to a no-name ISP, use a free browser and just surf the web... ("surf the web" sounds so quaint doesn't it?) And they started leaving AOL in droves. Even after merging with Time Warner, AOL couldn't capitalize on the shift to broadband. They remained desperately focused on trying to keep subscribers from leaving the old "America On Line", they became a monster that took adversarial customer relations to a whole new level, before finally giving up in 2006. (By the way all this has little to do with what AOL is today in 2009).
To be sure.... Wow for a long time, I've wanted to start a paragraph with "To be sure ...", and this is the first! But I digress.
To be sure, despite the dramatic title of this post, and despite the fact that I've picked on them once before, it's far from over for Facebook. They may yet decide to give the users the obvious flexibility, and make enough money with higher quality ad targeting when the users naturally come to the site anyway. Maybe they will find new ways to advertise as messages flow openly in and out of their network, or maybe they will figure out brand new business models. Whatever the case is, they do have one great thing going for them. Execution. They know how to get things done. You don't get to 300 million users by being stupid or lazy. They just need to make sure they are not smartly and expertly marching off a cliff.
The title of this post by the way is from a novel by one my favorite authors. Not his best novel, but a great title. And a pretty good movie too. | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607620.78/warc/CC-MAIN-20170523103136-20170523123136-00589.warc.gz | CC-MAIN-2017-22 | 5,667 | 9 |
https://blog.codersty.com/what-is-node-js/ | code | What's in the article ?
Big Companies Using Node JS
Now let's look at some real-world examples. One of the best-known companies to adopt Node.js is LinkedIn, which used Ruby on Rails on the mobile server side before moving to Node.js. After switching to Node.js, server costs dropped to roughly a tenth, and some operations ran up to 20 times faster. PayPal is, I think, the best example, and a more recent one. To avoid risk to its working platform, PayPal developed the application in parallel on two platforms, Java and Node.js, instead of switching to Node.js outright. As a result, the Node.js version handled about twice as many requests per second as the Java one. Moreover, the Node.js application ran on a single-core processor, while the Java application ran on a five-core processor, so Node.js reduced that cost to 1/5. In addition, response time improved by 35%.
Almost everyone, from startups to big companies, uses it. Apple, Google, IBM, Microsoft, NASA, Netflix, PayPal and Pinterest all use Node JS technology.
What Does Node JS do ?
Basically two things:
- We would say it is the only way to write both backend and frontend code in the same language, but some would object to that, so let's just say it's the easiest way.
- Today, most applications spend most of their time querying databases or various services on the Internet and waiting for the results. Node JS is asynchronous by nature; it never likes to wait. It makes requests in parallel instead of making them one by one, and when the requests finish, they call back into Node JS. This allows you to quickly process a large number of requests, as the sketch after this list illustrates.
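A minimal sketch of that pattern in TypeScript (it runs on Node 18+ where fetch is built in; the URLs are placeholders):

// Fire all requests at once and collect the results as they complete,
// instead of waiting for each one before starting the next.
async function fetchAll(urls: string[]): Promise<string[]> {
  const responses = await Promise.all(urls.map(u => fetch(u)));
  return Promise.all(responses.map(r => r.text()));
}

fetchAll(["https://example.com/a", "https://example.com/b"])
  .then(bodies => console.log(`fetched ${bodies.length} responses`));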
What Makes Node.js Different from the Others?
Node.js promotes modular code writing due to its functional programming structure. Each module tries to do just one job, just like the Unix philosophy. No more, no less. Writing code in Node.js is like joining pieces of Lego. Instead of using a cumbersome framework that tries to do everything, you can mix and match the modules you want from the NPM (Node Package Manager) ecosystem of hundreds of thousands of open source packages.
https://confluence.man.poznan.pl/community/display/WLT/Note+about+release+from+2013-06-07 | code | Information about release:
Date of release: 2013-06-07
Changes and improvements made in release:
- Download transcription in text format
In a project management view, the transcription downloading options have been grouped into one button .
There is a possibility to download transcription in epub, hocr and from now on also in plain txt format.
- Link to preview the whole scan in the transcription editor
The original scan can now be displayed in a new browser window from the editor by pressing Page preview in the editor's top menu.
- Line numbers in transcription view
Each line shown in the transcription editor view now displays its line number.
- Moving lines to a chosen position
The lines menu now allows moving the currently marked line to a chosen position in the transcription view. After pressing the Move the line to the position button, a dialogue window opens in which the target position can be entered.
- Mail sent to the project’s owner after finishing the batch OCR
After finishing the batch OCR process for a project, its owner is informed about it by an e-mail.
- Access to the contact page without logging in
Since the current version of VTL, contacting an administrator no longer requires logging in. Users who are not logged in must fill in a required e-mail address field when composing a message in the contact form.
- Long-lasting task during TIFF files upload to the project
TIFF files added to a project are converted to PNG asynchronously by a long-lasting task. While files are being converted, the project files are blocked. Information about the running conversion task is visible in the Current tasks tab available in the user's profile. After sending TIFF files to the server, the user is informed about the conversion process in the add-new-files view by the following message:
Information about conversion in the Current tasks view:
- Automatic project page refreshing while blocked by long-lasting tasks
While long-lasting tasks are running for a project, both the project page and its management page are blocked until the task finishes. Those pages are automatically refreshed every 15 s to check whether the tasks have finished; once they have, the project pages are unblocked without any user interaction.
- Information about the author of changes added to the history view
The change history view now shows the user who created each transcription version for a given page. The user's name is a link leading to their contact form.
- Line verification mechanism added
A line verification mechanism was introduced in the transcription editor. A verified line is marked with an icon in the transcript preview. Lines are verified automatically after being displayed for 3 s in the editor dialogue. Any user may also cancel an existing verification and enter a new one manually from the given line's editor window.
The project owner can additionally approve or cancel verification for the whole document from the main menu of the transcription editor. The buttons below are used for this:
- Project author as an optional field
The Author field in the new-project form is now optional; filling it in is not required.
- Support for multicolumn documents
Support for multicolumn documents was added to the transcription editor. Lines are now traversed in an order consistent with vertical columns. The quality and character of the transcribed document can affect the accuracy of the multicolumn structure analysis. Note that this feature is available only for new projects.
- Possibility of adding files with the same name to one project
It is now possible to add files with the same name; previously this failed.
- Explanatory information added to the log-in page
If a user who is not logged in tries to create a new project, they are redirected to the log-in page, together with a message explaining that logging in is required.
- Possibility to delete the transcription of a given file
A given file's transcription can be deleted by reverting to version zero in the change history view.
- Uservoice-based feedback forum
In the upper right corner of the VTL website, a link was added to a widget that simplifies submitting suggestions to the VTL team. With it you can not only submit your own idea but also browse and vote for suggestions that have already been submitted. More information can be found on the Uservoice site.
- Displaying thumbnails of deleted files in the profile view
Fixed an error where thumbnails of files that had already been deleted were displayed in the user's profile view.
- Problem with batch OCR for projects that included deleted files
The batch OCR process now works in projects from which files have been deleted, regardless of who deleted them or when.
- Problem with the editor crashing while deleting a line
Sometimes the transcription editor crashed while deleting the marked line. The problem was solved, among other things, by blocking the user interface during the operation.
- Problem with single-page OCR
When the marked area of the page was too big, the OCR process could return no results: the text recognition results page was simply blank.
https://pkg.go.dev/github.com/cilium/ebpf/cmd/bpf2go | code | bpf2go compiles a C source file into eBPF bytecode and then emits a
Go file containing the eBPF. The goal is to avoid loading the
eBPF from disk at runtime and to minimise the amount of manual
work required to interact with eBPF programs. It takes inspiration from
bpftool gen skeleton.
Invoke the program using go generate:
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go foo path/to/src.c -- -I/path/to/include
This will emit foo_bpfel.go and foo_bpfeb.go, with types using foo
as a stem. The two files contain compiled BPF for little and big
endian systems, respectively.
You can use environment variables to affect all bpf2go invocations across a project, e.g. to set specific C flags:
BPF2GO_FLAGS="-O2 -g -Wall -Werror $(CFLAGS)" go generate ./...
Alternatively, by exporting
$BPF2GO_FLAGS from your build system, you can
control all builds from a single location.
Most bpf2go arguments can be controlled this way. See
bpf2go -h for an up-to-date list.
bpf2go generates Go types for all map keys and values by default. You can
disable this behaviour using
-no-global-types. You can add to the set of
types by specifying
-type foo for each type you'd like to generate.
See examples/kprobe for a fully worked out example.
Program bpf2go embeds eBPF in Go.
Please see the README for details on how to use it.
https://www.livechat.com/marketplace/apps/message-translator/ | code | Translate chat messages in real time
$5 / mo, per agent
Developed by LiveChat
Works with LiveChat
The Message Translator app allows you to translate the chat messages in real time. Thanks to the integration between Translation Service and LiveChat, you can freely talk with your customers despite the language differences. Seamless translation process makes it easy for your agents to chat with all customers from anywhere in the world.
No special configuration required, simply install the app and start chatting with real-time translations automatically displayed in the chat. If you’d like to adjust the default settings of the Message Translator, you can always do so in the App Settings page.
Translate the chat messages in real time
Overcome the language barrier and chat with your customers freely thanks to the automatic message translations.
Effortless translation on both ends of the conversation
No need for any complex configuration. Messages are automatically translated on both sides of the chat - simple as that!
Reach more customers
Message Translator will help you reach customers from a wide range of countries despite the language differences.
Expand your sales possibilities
You can easily offer your products to potential customers worldwide and, at the same time, provide support to your existing customer base.
Tutorial & Support
App tutorial and setup instructions how to use and properly configure this app with your LiveChat account.
To get help and support contact LiveChat. You can also suggest improvements or request new features in the upcoming versions of Message Translator. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643585.23/warc/CC-MAIN-20230528051321-20230528081321-00231.warc.gz | CC-MAIN-2023-23 | 1,680 | 20 |
https://www.gadgetdaily.xyz/issue-82-out-now/ | code | Issue 82 of Linux User & Developer is on sale now!
Master Cloud Computing
Why the future of the cloud is open source
Create apps for Palm Pre
Build an RSS reader for the latest mobile platform
Linux veteran and Linux Driver Project Head Greg Kroah-Hartman tells us all about his work.
“Right now I’m just having a hard time finding devices to writes drivers for: everything seems to ‘just work’ with Linux these days…”
Develop with Moblin
Set up server virtualisation
How to create a multi-boot system
Create your own cloud computing environment
Use Xen to create virtual servers on your Linux box
Understanding I/O and subsystems in a Linux system
And much more!
Buy it today at all good newsagents or subscribe online via the Imagine Shop and save 30%
http://mathforum.org/kb/message.jspa?messageID=8323600 | code | On Thursday, February 14, 2013 5:29:06 AM UTC-8, [email protected] wrote: > two players Ann and Bob roll the dice. each rolls twice, Ann wins if her higher score of the two rolls is higher than Bobs, other wise Bob wins. please give the analyse about what is the probability that Ann will win the game
Please note that my previous response assumed that each toss involved TWO dice (because of the use of your term "dice" instead of the singular "die"). If you meant instead that each player tosses a (single) die twice and takes the maximum score, the problem is much less onerous, and others have already supplied the solution.
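For reference, under that single-die reading (each player rolls one fair die twice and keeps the higher number, with ties going to Bob as the problem states), the computation is short:

P(\max = k) = (k/6)^2 - ((k-1)/6)^2 = \frac{2k-1}{36}, \qquad k = 1, \dots, 6
P(\text{tie}) = \sum_{k=1}^{6} \left( \frac{2k-1}{36} \right)^2 = \frac{286}{1296}
P(\text{Ann wins}) = \frac{1}{2} \left( 1 - \frac{286}{1296} \right) = \frac{505}{1296} \approx 0.39, \qquad P(\text{Bob wins}) = \frac{791}{1296} \approx 0.61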
https://phabricator.wikimedia.org/T107814 | code | Search handles a lot of interesting and problematic data. As we hire more engineers and researchers, we need to put together some guidelines on how to properly handle this data in a way that respects our users privacy and our obligations but allows us to perform the research needed to improve the search systems.
Oliver will sit down with Legal and Security representatives and draft some guidelines on:
- The types of data we have;
- Where we can safely store them;
- What data we are comfortable releasing and how to release it securely;
- Contacts if you are interested in releasing data not covered in (3). | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514355.90/warc/CC-MAIN-20181021203102-20181021224602-00221.warc.gz | CC-MAIN-2018-43 | 611 | 6 |
https://crushwalls.org/event-calendar/2018/9/7/lit-crawl-denver | code | Catch the Lit Crawl in the RiNo Arts District, Lit Crawl Denver is a project of the Litquake Foundation. View the full schedule at https://www.litquake.org/lit-crawl-denver.html
GELATO BOY is giving FREE scoops for the first 25 attendees, thereafter it’s Buy 1 Get 1 Free! You must grab a tote bag from the bookmobile, parked on Blake Street between 33rd and 34th Streets. Show your tote bag for free dessert! Find Gelato Boy inside Zeppelin Station, 3501 Wazee St. The store closes at 10 p.m.
Facebook Event: https://www.facebook.com/events/1481700455264009/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256724.28/warc/CC-MAIN-20190522022933-20190522044933-00295.warc.gz | CC-MAIN-2019-22 | 561 | 3 |
https://developer.apple.com/documentation/apple_pay_on_the_web/applepaysession/1778013-onpaymentmethodselected | code | An event handler that is called when a new payment method is selected.
- Safari Desktop 10.0+
- Safari Mobile 10.0+
- Apple Pay JS
This attribute must be set to a function that accepts an event argument. The event parameter contains the selected payment method in its paymentMethod attribute, which the handler can read as event.paymentMethod. The
onpaymentmethodselected function must respond by calling the session's completion method (completePaymentMethodSelection) before the 30 second timeout, after which a message appears stating that the payment could not be completed.
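A minimal sketch of such a handler, assuming an existing ApplePaySession instance named session; the totals are placeholder values and the completion call's exact parameters vary between Apple Pay JS versions, so treat this as illustrative rather than as the reference signature:

session.onpaymentmethodselected = function (event) {
    // The payment method the user just picked; inspect its type or network
    // here if the choice should change the totals.
    const method = event.paymentMethod;
    console.log("selected payment method:", method);

    // Respond within 30 seconds by completing the selection.
    const newTotal = { label: "Example Store", amount: "10.00" }; // placeholder values
    const newLineItems: any[] = [];                               // placeholder values
    session.completePaymentMethodSelection(newTotal, newLineItems);
};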
https://mariomods.net/profile/1887956641-donezo | code | |Total posts||1 (0.00 per day)
This is a Super Mario Maker 2 mod that replaces the New Super Mario Bros. U game style with the look of New Super Mario Bros. on the Nintendo DS
What's added from the Super Mario Maker 1 Mod by Louiskovski:
-New course themes, and accompanying tiles and aspects
-New sprites for enemies that were not replaced before (Bob-Omb, Cheep Cheeps)
-New sprites for enemies, objects, and items that were not in Super Mario Maker 1
-Classic DS models for Mario, Bowser, and Jr.
-Separate speeds for both layers of the background scrolling, contributing to a much cleaner look
-More planned, like menu icons
As far as all current contributions to the mod go:
Louiskovskie - Original mod developer, Tilesets, Management
donezo - Sprites, Tilesets, Management
MelonSpeedruns - Modeling, File Replacement, Management
Weegeepie - Modeling, File Replacement
The Mario Modder - Modeling, Tilesets, File Replacement
JumboDS64 - Sprites, Music
Lakifume - Sprites
Buntendo - Sprite Shading
Here are some screenshots. We have most of the new items in Super Mario Maker 2 in NSMB sprite style already, so we won't be needing any help with that, but these screenshots don't have everything put together yet because we're waiting on a tool from KillzxGaming that will allow us to edit texture animations, add all of the enemies and items, and complete the mod.
*NOTICE* These screenshots are lacking sprites, models, polish, or other criteria that is already essentially done or being worked on, please judge for only what is added, not what isn't.
I don't like giving release dates, but I will say that progress has gone surprisingly smoothly. Like I said, we have most of the assets we need, and are in the process of putting them in. Thanks for your patience.
Posted on 07-10-19, 02:02 am in New Super Mario Bros. DS Style in Super Mario Maker 2 (mod) (rev. 4, 07-10-19, 02:22 am)
https://docs.microsoft.com/en-us/archive/blogs/dan_fay/smart-buildings-pilot-at-microsoftenergy-analytics | code | Smart Buildings pilot at Microsoft–energy analytics
There's a good cover story article on Microsoft's Smart Building Pilot Program in the latest issue of The Leader; it describes the Microsoft Real Estate and Facilities group's effort to use more technology to improve the energy performance of the buildings they manage. The article describes the use of the corporate campus as a living lab focusing on Fault Detection and Diagnosis, Alarm Management, and Energy Management. There's also a technical overview of the Smart-Building Architecture that is being used – which includes the use of Azure Connect to securely transmit data to relevant vendor applications.
There’s more information in the whitepaper – and other resources…. | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594662.6/warc/CC-MAIN-20200119151736-20200119175736-00148.warc.gz | CC-MAIN-2020-05 | 739 | 3 |
https://newkidsvideo.com/bath-time-song-swimming-song-kids-songs-nursery-rhymes/ | code | #B2ETUn_lY9k #KidsTV #NurseryRhymes #Child #Tunes
▶Title: Bath Time Song | Swimming Song Kids Songs & Nursery Rhymes .
▶Duration : 1:01:34 .
▶Published at : 2019-07-05 08:30:00 .
▶Source: Video shared from YouTube channel ➡ Kids TV – Nursery Rhymes And Baby Songs
Description : Bath Time Song by Kids TV – the nursery rhymes channel for kindergarten-aged children. These kids' songs are great for learning the alphabet, numbers, shapes, colours and lots more. We are a one-stop shop for your children to discover the many joys of nursery rhymes. Subscribe to our channel and be the first to watch our latest fun kids' learning animations!
Hi toddlers, Bob The Train is here to make your learning time easy and fun with these playful toys. Click on the link to explore the toys now! – https://amzn.to/2PCeSDS
🌸 Share this video – Bath Time Song, here's the link: https://youtu.be/B2ETUn_lY9k
🌸 Subscribe for free now to get notified about new videos – ytbkidstv?sub_affirmation=one
🌸 Nursery Rhymes Karaoke Playlist – https://little bit.ly/2DASesR
🌸 If you liked this selection, you might also like these compilations:
🌸 Old MacDonald Had a Farm – https://youtu.be/Rf8e4JF8nXs
🌸 The Wheels on the Bus – https://youtu.be/MWyordkvD0k
🌸 Five Little Monkeys – https://youtu.be/VAL4w6JvDHc
⭐ Finger Family – https://youtu.be/IiA17oFD1bs
⭐ Ten in the Bed – https://youtu.be/Rxxsvc2XXxY
⭐ Phonics Song – https://youtu.be/pFR_8zHCLio
🎶 Incy Wincy Spider – https://youtu.be/_6fwkxtp4bw
🌼 English Wheels on the Bus – https://youtu.be/IEje6DRukxg
🌼 Five Little Ducks – https://youtu.be/vp5rwesz8eU
🌼 Johny Johny Yes Papa – https://youtu.be/PyZq_Jm0aqQ
🌼 Humpty Dumpty – https://youtu.be/ZSoougnleU8
🌼 Twinkle Twinkle Little Star – https://youtu.be/yvcrrmeBwlo
🌼 Ten in the Bed – https://youtu.be/GRyxRadTM3c
Nursery rhymes and kids' songs speed up phonetic awareness, improving children's word comprehension, reading and writing skills. These rhymes for children help teach basic skills and improve their ability to understand and follow instructions.
We hope you are having a fun time with all your friends here at Kids TV. If you liked watching this video then check out our channel for many more interesting and fun learning videos for kids.
If you are still reading this far, we know you enjoy our animations, but we are always happy to hear from you on how we can improve and what you would like to see in the pipeline!
If you enjoy our content, don't forget to support us and subscribe 🙂
Get the Bob The Train app here:
Bob The Train (Android & iOS): http://onelink.to/bobfree
JellyFish Adventures (Android & iOS): http://onelink.to/jelly
Find Kids TV on:
Internet: https://www.uspstudios.co/generation/channel/kids-television set/one/
Kids First: https://enjoy.google.com/retailer/applications/information?id=com.lookedigital.kidsfirst
© 2017 USP Studios Private Limited
Music and Lyrics: Copyright USP Studios™
Video: Copyright USP Studios™
http://www.linuxquestions.org/questions/fedora-35/fedora-5-speed-gnome-desktop-etc-428414/page2.html | code | FedoraThis forum is for the discussion of the Fedora Project.
I'm using FC5 on my ThinkPad T23, with PIII 1.13ghz.
There is no such thing as slowing down when selecting the screen with left mouse button pressed.
I really believe this is X problem,not FC...
Get 2.6.16 kernel, nvidia driver, and it should be fine.
I'm a newcomer to fedora, and I must say I find it polished and well made enough to deserve my attention.
You should tweak it a bit, anyway, and I'm sure it'll run perfectly on your box, because it does on mine, which is roughly twice slower.
Also, it really doesn't matter whether you have an AMD or Intel CPU; the default kernel is optimized for Pentium Pro/II or something just as generic. And processor optimizations don't make that big a difference; they add a bit, of course, but not that much.
It's worth spending a few days reading and trying stuff, recompiling your own kernel, etc.
I am also running an AMD and have noticed FC5 to be very sluggish on the desktop but I have only booted fedora once since I installed it so I haven't gotten around to doing anything yet but something that may help is that I have been running ubuntu dapper from day one and after a kernel update the desktop on that was very sluggish also, after about 3 months of this I happened to copy the information from xorg.conf on my kanotix install over to my ubuntu xorg.conf as last ditch effort to work out the problem and now it runs much fater and is very responsive. It seems it was down to the refresh rates in xorg.conf being wrongly setup by ubuntu. I'll be trying the same thing with fedora when I get round to it.
[EDIT] The horrible scrolling in firefox was also a "feature" of this xorg problem in dapper.
Last edited by fannymites; 03-29-2006 at 03:46 PM.
I have a similar problem with my Intel Celeron CPU. I downloaded FC5 after it was officially released, and upgraded from FC3. Many of my old applications cannot run, such as xfig, gsview and dia. It took one minute to start up Emacs. The problem, I think, is that I need to reinstall everything from scratch rather than upgrade.
I just installed FC5 (fresh install) after using FC4. I, too, have noticed some serious performance issues since upgrading as well. And, wouldn't ya know it, I'm running an AMD system.
The bootup time is acceptable--I wouldn't say faster than FC4, but not likely slower either (I've disabled all unneccessary services). Logging into GNOME has been pretty quick before--however, lately, it's been pausing for about 10 seconds on the splash screen.
The main problem is application loading time for me. I wish I had more real numbers for you, but I'll toss out some estimates...
1. Firefox takes ~25 seconds from the time I click the icon to seeing the browser. Significantly longer than on FC4.
2. OpenOffice takes 20 seconds to launch (I timed this). Granted, it doesn't load terribly quick on any machine I have, but it certainly is slower than FC4 on this same machine.
3. I tried launching Yumex as a normal user; the root password prompt window took about 10 seconds to pop up. This used to be instantaneous--I actually thought maybe I hadn't clicked it and had enough time to try to launch it a second time on accident while it was still loading.
4. NX viewer takes ~7 seconds to load the login prompt. This is a small applicaiton that used to be almost instantaneous (maybe 2 secs at most).
This is on my Compaq laptop: AMD Turion64 (3700+), 1GB RAM, ATI graphics (32 MB shared memory reserved), and a "fast" 60GB hard drive (I forget the spindle speed, but I do recall paying extra for getting the faster drive). I am running the x86_64-compiled FC5.
Interestingly enough, I have the i386 version of FC5 running in a virtual machine on my desktop (VMWare on an Athlon XP 2700+, 1GB RAM). The performance there is very acceptable, if not faster than FC4. I plan on installing FC4 in a virtual machine as well so that I can compare apples to apples. While the host machine is FC4, VMWare doesn't necessarily emulate an AMD CPU for the virtual machine (somebody correct me on this if I'm wrong).
I am having some difficulty with FC5 as well, but performance is not one of the problems. I also have FC4 on my machine and FC5 as a separate partition. I find FC5 to be slightly faster than FC4 on most applications, so speed isn't an issue. I have found that a number of multimedia applications won't work (as they do in FC4 with the same set up). I am running the x86 64 version on an AMD64 3400+ with 1.5GB memory. So, I don't believe that the processor is what is slowing your performance. In most cases, an element of the xorg.conf file needs tweaking.
Just thought I would comment since AMD seems to be the suspect here, but it runs great on my machine.
When a program is optimized for Pentium 4, that means it will run great on a Pentium 4 and poorly on anything else. This is why no one optimizes their distro for a specific processor. There is nothing you can do other than change distro, or downgrade to version 4 and hope that they change it back in Fedora Core 6. This is an Intel problem, not Red Hat's.
Distribution: OpenSuSe 10.2 (Home and Laptop) CentOS 5.0 (Server)
I use Fedora Core 5 on an Athlon 2000+ with a gig of RAM and a nice Radeon 9800 Pro 256, plus 4 hard drives (2 running from an add-in PCI IDE card), and I get blazing speed. Now, I trust your PC is a beast and is beautiful, but I must say I doubt it's a major issue with the core. BUT it might be a mobo issue.
I've got an ASUS A7N8X Deluxe, plain Jane from around 2004... works like a charm.
A couple of days ago I installed linux (fc5 with gnome) to my work computer for the first time and instantly noticed the desktop graphics performance to be horrible compared to WinXP. I really would like to use linux for my work, but now it feels like going 5 years back in time... When moving a window, the processor runs at 90%.. So, the question is: Is this just normal for linux desktops? I've never before had Linux on my computer so I don't have a reference.
I've got a P4 2.8GHz with 2GB of mem, ATI Radeon 9600XT, and a plenty of SATA disk space. Programs load fast so the issue is not the hard drive. Now I'm back in Windows writing this and everything (for example switching and moving windows) except loading times are many times faster. Might be something with the graphics driver? I noticed that direct-rendering is off but can't figure out how to enable it.. I tried setting swappiness to a lower value and checked that the hard disk has dma on etc. but they had no effect.
Distribution: OpenSuSe 10.2 (Home and Laptop) CentOS 5.0 (Server)
Um, you are for sure having a hardware issue; FC5 is not slower than XP. I have run everything since Fedora Core 3 and honestly FC5 is the fastest yet. We have heard a lot about these issues recently and most of them have come down to hardware issues, and perhaps a not-100%-finished kernel, but it is not a general Linux or Fedora Core thing.
To all with performance issues - you really SHOULD try with SELinux disabled during the installation process, some say it makes 60% difference, I don't remember the website but it was a review of FC5, and I found the link from another topic here.
Disabling selinux made a slight difference and modifying my xorg.conf file made a huge difference.
Everything is running fast now, but not as fast as SUSE, Dapper or Kanotix. In general I'm pretty unimpressed by this release so far. I'm going to have to do a lot more downloading and installing to get a decent setup than I did with FC4; there isn't even a menu editor that I can find.
I haven't even got internet access yet since I'm going to have to recompile the kernel just to get my modem to work (eciadsl) which I haven't had to do since I used gentoo and that means downloading the kernel source on another distro then copying it over and installing and recompiling on fedora.
At the moment I can't find anything special enough about FC5 to make me think it's worth all the trouble.
I'm glad I used rewritable cd's, I can use em for suse 10.1 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118519.29/warc/CC-MAIN-20170423031158-00506-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 9,419 | 47 |
http://kristingraefe.de/372/ | code | In order to answer the question how digital data is becoming meaningful I compared the properties and benefits of digital and physical objects. (more information can be found in the book “The Digital Afterlife” pg. 16)
Some insights I had:
Physical objects are tangible and can be held in the same way over generations, as a result owners can physically connect with each other. Digital object, in contrast, are intangible, to actually hold a digital object a device (like iPad) is needed. Therefore the meaningful digital data itself can never be held and the owner can only connect with previous owners through content.
Physical objects can break down and show age, which shows that the object has survived time and events: it tells a story. Because digital objects do not age, they do not show this quality, but meaning can be added by additional information (metadata) which can grow over time.
Physical objects take up physical space, which can become a visible place in someone's life. Digital objects require minimal space and are not necessarily visible; however, the act of revisiting a digital object can be a good opportunity to create a meaningful experience.
https://www.mathworks.com/matlabcentral/answers/59305-how-to-correctly-read-video-frames-from-webcam-in-linux?requestedDomain=www.mathworks.com&nocookie=true | code | Hello guys, I am struggling with the getsnapshot function in Linux platforms. With the next code I am able to visualize the video from my webcam with the preview command, but the message "linuxvideo: A frame was dropped during acquisition." appears when I try to get an snapshot. Isn't it weird? What should I do to correctly acquire frames from the webcam?
clc, clear
vid = videoinput('linuxvideo',1);
set(vid,'FramesPerTrigger',1);
set(vid,'TriggerRepeat',Inf);
set(vid,'ReturnedColorSpace','rgb')
vidRes = get(vid, 'VideoResolution');
nBands = get(vid, 'NumberOfBands');
ax = preview(vid);
frame = getsnapshot(obj);
delete(vid);
You're getting one frame. But it's acquiring 30 frames a second or something like that. Who cares if one got dropped? You might try putting in a pause(0.1) after the preview() and before the getsnapshot() if you're trying to avoid warning messages - see if that gets rid of it. You need to wait about 4 frame times after you go live to get a good image. One frame needs to get thrown away because you have only a partial exposure. Then you get a good full exposure during the next frame time. The next frame, it gets transferred off the sensor to the image buffer to get ready for sending to the computer. The next frame time it gets digitized and sent over the signal cable to the computer's memory. So it may take a few frames for signal that you send to the camera to make it from the computer to the camera and then come back with an image.
And what is obj? Shouldn't it be vid:
frame = getsnapshot(vid); | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806760.43/warc/CC-MAIN-20171123070158-20171123090158-00557.warc.gz | CC-MAIN-2017-47 | 1,540 | 5 |
http://stackoverflow.com/questions/3613512/cannot-install-powershell-snap-in/3613568 | code | When executing iis7psprov_x64.msi I immedialy receive a message saying:
The PowerShell snap-in is part of the Windows Operating System. Please install it via 'Programs and Features' or 'Server Manager'.
Extracting the msi and attempting to run it that way also yields the same message.
I am installing on Win7(64bit) with IIS 7.0. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451744.67/warc/CC-MAIN-20151124205411-00172-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 326 | 4 |
http://www.huffingtonpost.com/news/mission-control/ | code | Voters owe it to the citizens of Flint and kids everywhere to ask a different question this election cycle. Don't ask for less government -- instead, ask: how do we make sure government works for the citizens who need it the most?
If you grew up in the late '80s watching Double Dare, Nick Arcade and Guts, you probably regarded a trip to Space Camp as the ultimate grand prize. I, however, can clearly remember watching those shows with no desire to go to Space Camp.
After dealing with a buildup of static electricity and a few other minor problems, an announcement suddenly flashed across the main TV screen -- an unknown object had been detected, on a collision course with the Spacecraft!
You shot yourself in the foot by calling them Space Shuttles. "Shuttles" don't boldly go where no man has gone before, they go to Chicago, and occasionally bring you from Parking Lot T to the front entrance of the State Fair. | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189316.33/warc/CC-MAIN-20170322212949-00316-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 919 | 4 |
https://discuss.rubyonrails.org/t/how-to-get-my-cart-destroy-to-run-a-line-item-destroy-method/58866 | code | Complete novice here. I have a line_item that once created or updated
calculates the capacity left and displays it in the events view. The
Add to Cart button triggers the create line_item method. I've added
further capacity calculations to the line_item destroy method. My
problem is that the line_item destroy method is not called when the cart is destroyed:
class Cart < ActiveRecord::Base
has_many :line_items, :dependent => :destroy
The line items are destroyed but the functionality in the line_item
destroy method does not run.
Line Item Controller ----
@line_item = LineItem.find(params[:id])
event = @line_item.event
new_quantity = params[:line_item_quantity].to_i
change_in_capacity = @line_item.quantity + new_quantity
respond_to do |format|
event.capacity = change_in_capacity
So I tried to add a call to it in the Cart Controller as follows ----
@cart = Cart.find(params[:id])
@line_item = LineItem.find(params[:cart_id])
session[:cart_id] = nil
but I can't seem to find the line item this way
"Couldn't find LineItem without an ID"
Any ideas? Thanks in advance from a complete beginner. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104375714.75/warc/CC-MAIN-20220704111005-20220704141005-00154.warc.gz | CC-MAIN-2022-27 | 1,085 | 23 |
https://jeanbaptisteaudras.com/en/2019/09/reusable-block-extended-a-cool-wordpress-plugin-to-extend-gutenberg-reusable-block-feature/ | code | 💙 Reusable blocks, an awesome Gutenberg feature
Reusable blocks is one of my favorite features in Gutenberg block editor.
This feature provides the ability to reuse content across your whole website. What's the difference between Reusable blocks and regular Gutenberg blocks? Reusable blocks are stored in their own dedicated custom post type.
Reusable blocks also have their own admin screen. Though it's hidden for the moment, you can access it directly by its URL.
You can see an example of that screen in the screenshot on the right.
This admin screen allows you to edit your reusable blocks from a single place, so they are all modified at the same time since all reusable blocks are synchronized.
Brilliant! But what about extending that feature? This is why I created Reusable Blocks Extended, which is now available on WordPress.org plugins repository:
While being one of the Focus leads of WordPress 5.3, I finally found a couple of days to work on that shiny little plugin. Let’s see what’s in the box.
🔎 Introducing my advanced Admin screen for Gutenberg reusable blocks
First, I started by making the Reusable Block Admin screen accessible in WordPress Admin menu.
Then I extended the corresponding screen, by adding some columns:
- Title of the reusable block, with native features like editing, trash and export tool
- Used in: for each reusable block, get the list of all Posts that are using it. Very useful to clean up unused Reusable blocks and to see how many Posts will be modified when you are doing some changes on a reusable block.
- Usage: for each reusable block, get:
- A custom shortcode to use your reusable block everywhere, even on Posts that don’t use the block editor!
- A custom PHP function to implement your reusable block in your theme files/templates.
- A HTML preview to see what your block looks like. This is an experimental feature and it doesn’t work with all themes (for example it works well on Twenty Twenty or Twenty Nineteen, but not on Twenty Sixteen).
- The date the block was last modified.
🛠 Custom PHP functions to use Gutenberg Reusable blocks in your theme/plugins
The plugin provides two functions to use your reusable blocks anywhere you need to, like in your theme or plugins:
reblex_display_block( $id ), where $id is the ID of your Reusable Block. This function will display/echo your reusable block.
reblex_get_block( $id ), where $id is the ID of your Reusable Block. This function will return the content of your reusable block. For example, $my_block = reblex_get_block( 64 ) will put the content of the Reusable Block with ID 64 into the $my_block PHP variable.
On Reusable Blocks Admin screen, you’ll get a pre-filled PHP snippet for each block:
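For example, calling them from a theme template looks roughly like this (a quick sketch; block ID 64 is just the example ID used above):

<?php
// Somewhere in a theme template, e.g. footer.php (sketch only).
if ( function_exists( 'reblex_display_block' ) ) {
    // Echo the reusable block with ID 64 directly.
    reblex_display_block( 64 );
}

if ( function_exists( 'reblex_get_block' ) ) {
    // Or fetch the markup first if you want to wrap or filter it.
    $my_block = reblex_get_block( 64 );
    echo '<div class="reusable-notice">' . $my_block . '</div>';
}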
🧷 Custom Shortcode to implement Reusable blocks in your theme/plugins
A shortcode is also available; it displays the reusable block whose ID you pass to it.
For each reusable block, you can get a custom shortcode on the Reusable Blocks admin screen:
It provides the same feature as the reblex_display_block() PHP function, but is available to non-developers. This shortcode is very useful to integrate reusable blocks into post types that are not using Gutenberg or in other plugins' TinyMCE fields.
For example, here is an implementation of a reusable block into Gravity Forms confirmation message:
🔌 Implement Gutenberg Reusable Blocks using Widgets
The plugin also provides a dedicated widget to add Reusable blocks in your widget areas. Just choose the name of your block in the list and it will be displayed on your website!
👇 Last but not least
All your reusable blocks are also listed on your WordPress Admin Dashboard, with a link to the Reusable block screen:
🔗 Download Reusable Extended on WordPress.org
And if you want to support my work on this plugin, feel free to buy me a beer! 🍺🎉
- Whodunit WordPress Agency – my employer, for the time allowed on WordPress contribution 💚
- Émilie Lebrun, for the great banner/icon assets on w.org
🤗 Other Reusable Blocks plugins available on WordPress.org
While it’s the most complete as of today, my plugin is not the only one to extend Reusable Blocks Gutenberg feature. You’ll find them on WordPress plugin repository as well: | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817289.27/warc/CC-MAIN-20240419043820-20240419073820-00047.warc.gz | CC-MAIN-2024-18 | 4,174 | 44 |
https://github.com/tagtime/TagTime/pull/77 | code |
Add buttons to the notification for popular tags #77
This PR adds buttons to notifications, allowing you to select a tag for a ping in one click.
The feature is intended to provide some support for smartwatches, e.g. Android Wear or Samsung Gear, as specified in #42.
Here is how it's done:
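Roughly, the idea is to attach one notification action per popular tag (the snippet below is a simplified, generic sketch, not the actual diff; names like mostPopularTags, buildTagIntent, notificationManager and PING_NOTIFICATION_ID are placeholders):

// Sketch: add one action button per popular tag to the ping notification.
NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_stat_ping)        // placeholder icon
        .setContentTitle("TagTime ping")
        .setContentText("What are you doing right now?");

for (String tag : mostPopularTags) {                  // e.g. the three most used tags
    Intent intent = buildTagIntent(context, tag);     // broadcast that records this tag
    PendingIntent pi = PendingIntent.getBroadcast(
            context, tag.hashCode(), intent, PendingIntent.FLAG_UPDATE_CURRENT);
    builder.addAction(0, tag, pi);                    // 0 = no icon, label = the tag
}

notificationManager.notify(PING_NOTIFICATION_ID, builder.build());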
Are the tag buttons also present on phone notifications?
I use Tagtime but I don't want to send all my pings to Beeminder.
I often want to tag with the same tags as the previous ping.
Thanks for doing atomic commits
Yes they are. In fact there is nothing specific to smartwatch support in this PR. Also that's why it should work on most smartwatch platforms, because they can parse phone notifications and display the buttons on the watch.
Yes, that's how I use it.
I think this is a different matter, and there is another issue about that (#48). I could work on it later. | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583656530.8/warc/CC-MAIN-20190115225438-20190116011438-00209.warc.gz | CC-MAIN-2019-04 | 1,015 | 13 |
http://njtebq.xyz/archives/12353 | code | Jam-upnovel Dragon King’s Son-In-Law update – Chapter 670 – The Upheaval In The Demon Sea reject wiry reading-p1
south sea bubble summary
Novel–Dragon King’s Son-In-Law–Dragon King’s Son-In-Law
Chapter 670 – The Upheaval In The Demon Sea coordinated taste
“Golden s.h.i.+eld!” Hao Ren identified as out yet again.
The perfect super bolts swept former him and smacked on the level 10 demon beasts!
With stage 8 demon beasts comparable to best-tier Nascent Heart and soul Kingdom and point 10 demon beasts similar to peak Nascent Spirit Realm, level 10 demon beasts could overcome Hao Ren, nevertheless they had been worried that as soon as they killed Hao Ren, the much stronger compact demon kings would take their victim and may also even eliminate them in the operation!
does a lieutenant outrank a commander
In addition to, Hao Ren possessed viewed through their plan to encompass him!
Previously, the great s.h.i.+eld was no match to the crimson gold hairpin since the latter possessed expert two Perfect Tribulations. As a result, it was subsequently one stage more than the fantastic s.h.i.+eld.
The Divine Doctor and Stay-at-home Dad
“Master, what decent do you see during this Hao Ren that you might want him to accept oath and get your blood buddy? Look! Perhaps the dharma prize he served doesn’t prefer to adhere to him.”
Calmly, the crimson yellow gold hairpin didn’t avoid even though its crimson light-weight brightened, building many phantoms of by itself.
Hua… More than a dozen demon beasts increased from your in close proximity ocean area. People were all stage 10 demon beasts which had been the small demon kings who switched directly back to their original variety. Soon after their ineffective attempts to prevent Hao Ren making use of their our forms, that they had altered to their demon beast types to encircle Hao Ren and actually eat him together with each other.
Clang… Each dharma treasures collided collectively, generating explosions and surging up water surf. A lot of levels 10 demon beasts dived deeply within the water right after ability to hear the noises.
The wonderful s.h.i.+eld that have ruined to a greater kingdom got already been thrown to the similar lat.i.tude as Hao Ren.
Getting a compact turn around Hao Ren, it flew to the extended distance promptly.
The interaction got their start in one area.
These compact demon kings possessed consumed a smaller bit of territory on the outside part of the Demon Seas. They made an effort to prevent Hao Ren with a group of stage 7 and level 8 demon beasts but pointed out that it was not possible due to the wonderful pace of your purple golden hairpin and Hao Ren’s bizarre sword energies.
Soon after simply being tossed up higher in to the heavens, Hao Ren looked down and found only bright white mist on the sea. There seemed to be no locate of Penglai Island any more.
Right after snapping shots out another purple gentle, the purple precious metal hairpin turned into the glowing watercraft.
The little demon kings dreaded the Perfect Tribulation, and Hao Ren’s super attack was best for them!
Having a dark-colored ‘iron plate” following him, Hao Ren rode around the surf on the golden boat.
hopes and fears examples
“d.a.m.n! Ungrateful thing!” Hao Ren cussed and touched the pendant on his pectoral before drawing out your purple golden hairpin.
Withdrawing some aspect heart and soul, he slowed around the great yacht, and also the wonderful s.h.i.+eld which checked similar to a black beaten steel plate adhered to Hao Ren, hovering above his arm.
If an ordinary cultivator emerged in the region with Lady Zhen’s crimson gold hairpin, no one makes difficulties for him.
Soon after pa.s.sing out two Heavenly Tribulations, it may overcome standard supreme psychic treasures without Hao Ren. Even so, soon after flying for over ten kilometers all itself, it suddenly changed its brain.
For the ocean, there had been numerous celestial mountain range.
On the vision from the purple golden hairpin, the golden s.h.i.+eld flew lower back and photo a wonderful gentle toward the crimson golden hairpin.
can a prince marry a normal person
These smaller demon kings experienced undertaken a smallish part of territory inside the exterior part of the Demon Ocean. They made an effort to obstruct Hao Ren with a small grouping of point 7 and level 8 demon beasts but pointed out that it was actually extremely hard due to terrific rate from the purple rare metal hairpin and Hao Ren’s unusual sword energies.
It spat out a demonic lighting ray.
Using their own learning ability, the supreme spiritual treasures didn’t want to have purchases from standard cultivators. Now that the fantastic s.h.i.+eld got heightened with a increased realm, it dreamed of being free from any limits associated with a cultivator and cover in between the heavens and the world.
“Well…” Girl Zhen yawned and glanced within the doorway. “It doesn’t make any difference allow them to combat. Only whenever they begin to deal with are we able to give troops for better reasons.”
Please, I Really Didn’t Want To Fall in Love With My Master!
If the common cultivator came into your spot with Woman Zhen’s purple golden hairpin, no one would make problems for him.
The great s.h.i.+eld that had chose to observe Hao Ren at the moment suddenly changed above his shoulder blades and picture out a golden mild during this modest demon master.
Withdrawing some character substance, he slowed down down the golden yacht, and the golden s.h.i.+eld which appeared similar to a dark beaten metallic platter followed Hao Ren, hovering above his shoulder blades.
The gold motorboat plus the fantastic s.h.i.+eld flew in contrary instructions.
you are still here quotes
Following pa.s.sing out two Perfect Tribulations, it may possibly overcome standard supreme psychic treasures without the need of Hao Ren. Even so, immediately after traveling for over ten kilometers all itself, it suddenly evolved its brain. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00617.warc.gz | CC-MAIN-2023-14 | 6,031 | 42 |
http://stackoverflow.com/questions/1563150/enableclientvalidation-on-master-page | code | I'm trying out ASP.NET MVC 2 Preview and when I use client side validation it all works if the following:
<% Html.EnableClientValidation() %>
is used on a content page. If it is on a Master Page, client-side validation fails.
Is it just me or is this by design? If so - why? | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828322.57/warc/CC-MAIN-20160723071028-00026-ip-10-185-27-174.ec2.internal.warc.gz | CC-MAIN-2016-30 | 275 | 4 |
https://extensions.joomla.org/extension/access-a-security/site-access/akeeba-sociallogin/ | code | Allow your users to log in with their social media profile (Facebook, Twitter etc) or account (GitHub, Google, Microsoft).
Your users can login or create a new user account to your site using their social media profile (Facebook, Twitter etc) or account (GitHub, Google, Microsoft).
For example, someone can create a user account on your site using their Facebook login, without going through Joomla's user registration process. The created user accounts can either be activated immediately (e.g. when it's a verified Facebook account, i.e. Facebook has verified the user's email and/or mobile phone number) or go through Joomla's account activation process (click on the link sent by email). This is faster for the user and easier for you, since in most cases the email address of the user has already been verified by the social network and they don't have to go through Joomla's email address verification.
The following social networks are supported
The following third party services are supported:
- AppleID (requires a paid Apple developer subscription)
- GitHub account
- Google account
- Microsoft account
We have two actively maintained versions. SocialLogin 3 is compatible with Joomla 3.10 and PHP 7.2 or later. SocialLogin 4 is compatible with Joomla 4 and PHP 7.2 or later. Older, unsupported versions for end–of–life versions of PHP and Joomla are available on our site.
Both versions can be easily extended by developers using standard Joomla plugins. This allows you to integrate social networks we don't support yet, or even quickly develop an integration with a Single Sign On (SSO) solution used by your company.
- Akeeba Ltd
- Last updated:
Jan 16 2023
4 months ago
- Date added:
- Mar 06 2018
- GPLv2 or later
- Free download
- J3 J4
Write a review | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647459.8/warc/CC-MAIN-20230531214247-20230601004247-00529.warc.gz | CC-MAIN-2023-23 | 1,774 | 21 |
https://www.codingfish.com/forums/25-marketplace-2-en/4419-configure-duration-of-entries-option | code | I'm new to Joomla as well as MarketPlace component.
I'm trying to create a simple Job Ads in my website.
I saw that one of the 2.3 features is Configure duration of entries.
But I'm having problem in finding the option.
Can anyone guide me through this? | s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930109.71/warc/CC-MAIN-20150521113210-00130-ip-10-180-206-219.ec2.internal.warc.gz | CC-MAIN-2015-22 | 252 | 5 |
https://itecnote.com/tecnote/php-dealing-with-timezones-in-php/ | code | Some issues with timezones in PHP have been in the back of my mind for a while now, and I was wondering if there are better ways to handle it than what I'm currently doing.
All of the issues revolve around reformating database stored dates:
When dealing with a site that has to support multiple timezones (for users), to normalize the timezone offset of stored timestamps I always store it with the server timezone using the CURRENT_TIMESTAMP attribute or the NOW() function.
This way I don't have to consider what timezone was set for PHP when the timestamp was entered (since PHP time functions are timezone aware). For each user, according to his preference I set the timezone somewhere in my bootstrap file using date_default_timezone_set().
When I'm looking to format dates with the PHP date() function, some form of conversion has to take place, since MySQL stores the timestamp in the format Y-m-d H:i:s. With no regard to timezone, you could simply run:
$date = date($format, strtotime($dbTimestamp));
The problem with this is that date() and strtotime() are both timezone-aware functions, meaning that if the PHP timezone is set differently from the server timezone, the timezone offset will be applied twice (instead of once as we would like).
To deal with this, I usually retrieve MySQL timestamps using the UNIX_TIMESTAMP() function, which is not timezone aware, allowing me to apply date() directly on it – thereby applying the timezone offset only once.
I don't really like this 'hack', as I can no longer retrieve those columns as I normally would, or use * to fetch all columns (sometimes it simplifies queries greatly). Also, sometimes it's simply not an option to use UNIX_TIMESTAMP() (especially when using open-source packages without much abstraction for query composition).
Another issue is when storing the timestamp, when usage of NOW() is not an option – storing a PHP-generated timestamp will store it with the timezone offset applied, which I would like to avoid.
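One direction I've been experimenting with (sketch below; the timezone names are placeholders) is to skip the implicit default timezone for the conversion itself and attach explicit DateTimeZone objects when parsing the MySQL string and when formatting for the user:

// $dbTimestamp comes from MySQL as 'Y-m-d H:i:s' in the server timezone.
$serverTz = new DateTimeZone('Europe/London');     // zone the server/DB stores in
$userTz   = new DateTimeZone('America/New_York');  // the user's preference

$date = DateTime::createFromFormat('Y-m-d H:i:s', $dbTimestamp, $serverTz);
$date->setTimezone($userTz);                       // offset applied exactly once
echo $date->format($format);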
I'm probably missing something really basic here, but so far I haven't been able to come up with a generic solution to handle those issues so I'm forced to treat them case-by-case. Your thoughts are very welcome | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00765.warc.gz | CC-MAIN-2022-49 | 2,144 | 20 |
https://discourse.mc-stan.org/t/gpu-stan/643 | code | I’m curious if there is a future in which Stan is able to compute its lp in a GPU accelerated fashion. GPUs are a no-go for smaller models, but for example, large-ish time series models there is enough parallelism, along at least one dimension, to benefit. For example, when fitting a very long nonlinear AR process, the AD lp gradient calculation would be dominated by the map & scan over the time dimension of the model, and this could be parallelized with a GPU. Is this something which could be accommodated within Stan, or am I better off attempting this by writing the code by hand?
(I’ve seen some discussions here on using OpenMP/MPI, but those APIs are significantly different from the GPU APIs.)
One of the big problems with GPU speedups for us is that we are almost certaintly going to need more than single precision. And it’s hard with our architecture to preload and reuse data. So we’re first looking at low data flow, high compute cycle operations like Cholesky decomposition (quadratic data, cubic time).
See also the branch with working code and corresponding pull request from Steve Bronder. The pull request isn’t ready to go, but we’re very very keen to start GPU-enabling some of our big matrix operations:
We’re attacking the parallelization of higher-level algorithms, like running multiple ODE solvers in parallel, using OpenMP for multi-threading and MPI for multi-process. Sebastian Weber has examples of this going that gives you nearly embarassingly parallel speedups. And these ODE problems are some of our harder problems.
Then, underneath the ODE solvers, we can start exploiting GPUs again, but these tend not be great GPU candidates as they involve repeated application of relatively small matrix operations.
The trick in all of this is getting the automatic differentiation synched up. And I think we can do this all asynchronously with our autodiff design and only have to block for required input values (in other words, very much the same kind of high-level design as TensorFlow uses as it’s the obvious way to cut a differentiable function apart [you see this going back decades in the autodiff literature]).
in other words, very much the same kind of high-level design as TensorFlow uses as it’s the obvious way to cut a differentiable function apart
what about STALINGRAD? Wouldn't it also require an optimizing compiler for the graph, with a "load balancer" to shift the calculation around efficiently? Optionally, one could have an integrated nested Laplace approximation module and a neural network that learns when to use INLA or not.
In case tempers rise: this is not criticism. It may be manageable by using already existing libraries and combining them efficiently.
I don’t know anything about the Pearlmutter and Siskind implementation.
If you do reverse mode autodiff, the condition is that all the parents of a node have to have passed gradients down before it can pass gradients down. This is solved in most system by keeping a stack of the expression graph which if you build it up in order is naturally topologically sorted. If you don’t do that, you can keep passing things down, but it becomes very inefficient as you have potentially exponentially paths to pursue in an unfolded graph. Like I said, there’s an enormous autodiff literature on this and sparsity detection, both of which are critical (speaking of INLA!).
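To make that ordering concrete, here is a toy sketch of the bookkeeping (nothing like the real implementation): the tape records nodes in the order they are created, so replaying it in reverse automatically respects the parents-before-children constraint.

# Toy reverse-mode autodiff: the tape is built in forward (topological) order,
# so walking it backwards passes adjoints down before any node needs them.
tape = []  # each entry: (output_name, [(input_name, local_partial), ...])

def record(out, inputs):
    tape.append((out, inputs))

# forward pass for f(a, b) = (a * b) + a, with a = 2, b = 3
a, b = 2.0, 3.0
t = a * b                       # dt/da = b, dt/db = a
record("t", [("a", b), ("b", a)])
f = t + a                       # df/dt = 1, df/da = 1
record("f", [("t", 1.0), ("a", 1.0)])

# reverse pass: seed the output adjoint, then sweep the tape backwards
adjoint = {"f": 1.0}
for out, inputs in reversed(tape):
    for name, partial in inputs:
        adjoint[name] = adjoint.get(name, 0.0) + adjoint[out] * partial

print(adjoint["a"], adjoint["b"])  # df/da = b + 1 = 4.0, df/db = a = 2.0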
The only motivation we’d have for using INLA in applied work would be if full Bayes via MCMC is too slow. Same motivation as for variational inference, expectation propagation, and any other appoximation. What we tend to do instead is work with people who claim only INLA can fit their models and then show them how to do it with Stan—that was the root motivation of the CAR case study Mitzi just did (http://mc-stan.org/documentation/case-studies/IAR_Stan.html); she started with Max Joseph’s CAR case study: http://mc-stan.org/documentation/case-studies/mbjoseph-CARStan.html, and a working INLA model that failed to run in BUGS/JAGS—I think she’s going to build out some of the comparisons with INLA in a later case study. | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662556725.76/warc/CC-MAIN-20220523071517-20220523101517-00345.warc.gz | CC-MAIN-2022-21 | 4,130 | 15 |
http://feedback.tapforms.com/forums/5402-tap-forms-feedback/suggestions/5164588-icloud-on-quit?edit=1 | code | iCloud on Quit
Having read your blog post on 2.0 I'm looking forward to the upgrade. One addition that caught my eye was backup on quit, a nice idea. I'm frequently forgetting to push updates to iCloud from my various devices. Perhaps an "iCloud on Quit" option would be a nice addition?
Of course, allowing us to schedule these events or, even better, making them automatic with each change would be preferable... but I appreciate the fact that the safety of the data has to come first. | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540536855.78/warc/CC-MAIN-20191212023648-20191212051648-00052.warc.gz | CC-MAIN-2019-51 | 487 | 3 |
https://www.interviewhelp.io/blog/posts/data-structures-and-algorithm-interview-questions/ | code | Data structures and Algorithm Interview Questions
Linus Torvalds: “Bad programmers worry about the code. Good programmers worry about data structures and their relationships."
A data structure is a way of organizing (storing) data so that it can be accessed and modified easily, such as a tabular representation or a sequence of numbers. An algorithm is a stepwise procedure that yields the desired output. Combined, data structures and algorithms (DSA) are the foundation of software engineering: they are applied throughout the software development process and enable developers to write efficient code.
Upgrade your coding skills with our FREE Data Structure and Algorithm class.
Here’s what we’ll cover in this article:
- Why Mastering DSA is critical
- How to Prepare for DSA Interviews
- Basic Data Structure Interview Questions
- Advanced-Data Structure Interview Questions
- Data structure Interview Questions on Strings
- Data structure Interview Questions on Arrays
- Data structure Interview Questions on Linked Lists
- Data structure Interview Questions on Binary Tree
- Data structure Interview Questions on Search and Sort Algorithms
Why Mastering DSA is Critical
Are you wondering why you need to study complicated stuff such as Array, Linked List, Stack, Queues, Searching, Sorting, Tree, Graphs etc.? And why do companies ask questions related to DSA instead of language/frameworks/tools-specific questions?
Many beginners and experienced programmers avoid learning Data Structures and Algorithms because they think they are complicated and not useful in real life. A strong understanding of data structures and algorithms helps programmers evolve and advance their careers. At the same time, interviewers often use DSA questions to evaluate candidates during the interview to test potential employees’ problem-solving skills, coding skills, and clarity of thought.
Data structures and algorithms play a major role in implementing software and the hiring process. Companies encounter a lot of complex and unstructured data on a larger scale. They are looking to hire software developers who can make the right decisions and efficiently complete the requisite tasks in a short amount of time and using fewer resources.
Knowledge of DSA goes a long way in solving these problems efficiently, and the interviewers are more interested in seeing how candidates use these tools to solve a problem. Just like a car mechanic needs the right tool to fix a car and make it run properly, a programmer needs the right tools (DSAs) to make the software run properly.
In product development, companies of the ilk of Google, Microsoft, Facebook, and Amazon allot only 20-30% of a project's effort to coding, the implementation aspect. Most of the time goes into designing things with the best and most optimal algorithms to save on the company's resources (servers, computation power, etc.). This is why interviews in these companies are mainly focused on algorithms, as they want people who can think out of the box and design algorithms that can save the company thousands of dollars.
How to Prepare for DSA Interviews?
Whether interviewing for startups or established FAANG or MAANG organizations, every company will expect you to have strong knowledge of basic data structures concepts, including String, Array, Linked List, Binary Tree, hash table, Stack, Queue, and advanced data structures such as binary heap, self-balanced Tree, circular buffer, etc.
Stage 1: Refresh your knowledge
First, start with revisiting the basics of DSA and focus on solving several problems. You should be able to write 20-30 programs without errors after practising about 100 problems. Important topics to learn in this stage include the following:
Data structures: Array, Linked List, Stack, Queue, Hash Table, BST, Map (Hash vs. Tree), Set, Trie, Graph. Applications and pros & cons of those.
Algorithms: Time complexity, Space complexity, Sorting, Searching, BFS & DFS, Dynamic programming, Recursion, Divide and Conquer, and Bit manipulations.
Maths: Permutations, Combinations, Medians, Probability, Geometry, …
Problem-solving: How to reduce any given problem to a known Math or DS or DS+algo problem given enough hints.
Searching and sorting (Merge Sort, Quick Sort, Bubble Sort, Heap Sort, etc.)
Recursion, Divide and Conquer
Dictionaries and Sets, Arrays, LinkedList
HashTable, Binary Search Tree, BFS, DFS
Algorithms: Depth-first search, Breadth-first search, Binary search, Sorting, Dynamic programming, Greedy algorithms, Backtracking, Divide and conquer, Learn a framework
It is highly recommended that you thoroughly practice basic and advanced data structures problems to give yourself the best chance to ace programming or coding rounds.
Overall, you can devote around 100-150 hours of focused learning in this stage to build a strong base of knowledge in fundamentals.
Stage 2: Expand Your Knowledge
Start with commonly asked questions of medium difficulty on LeetCode. After practising 100-150 such problems, you should be able to come up with multiple solutions to a problem, be familiar with common syntax errors, and have improved your coding and debugging skills. You can devote about 150-200 hours of focused practice to this stage, concentrating on the following:
- Writing graph traversals within 10 minutes
- Implementation of Stack, Queue, Hash, BST, and Tree
- Application of BST, Tree, Heap, and Graph
- BFS (Breadth-First Search)
- DFS (Depth-First Search)
Stage 3: Polish Your Skills
In this stage, all you need to do is to practise more and increase your coding speed. Try solving a wide variety of problems to avoid getting stuck when you encounter an unfamiliar question during the interview. Additionally, you can focus on improving your knowledge about the following topics:
- DP memoization
- DP tabulations
- Advanced recursion
- Greedy method
- Topological sort
- Graph partitioning
- Shortest path
Learn From Top SDMs: The Secret to Landing Your Dream Job in Software Development. Get The Free E-Book
"The expert tips and advice provided by this guide are invaluable. I followed the advice and was able to land a Software Development Manager job at a top tech company. Highly recommend!" - John, Software Development Manager.
Let’s go ahead and look at some popular data structure interview questions that you may be asked at FAANG, MAANG, or any technical interviews.
Basic Data Structure Interview Questions
- How will a variable declaration activity consume or affect the memory allocation?
- What is a Dequeue?
- What Is the Difference Between File Structure and Storage Structure?
- What is the difference between Linear and Non-Linear Data Structures?
- Explain how dynamic memory allocation will help you in managing data?
- What operations can be performed on a data structure?
- Which data structures can you use for the BFS and DFS of a graph?
- What is a binary search? When is it best used?
- What are FIFO and LIFO?
- What is the difference between NULL and VOID?
- How will you implement a stack using a queue and vice-versa?
- What is data abstraction?
- What are the three characteristics of data structures?
- What are the advantages of the heap over a stack?
- What is the difference between a PUSH and a POP?
Advanced-Data Structure Interview Questions
Can you share some examples of greedy and divide-and-conquer algorithms?
Examples of algorithms that follow the greedy approach are:
- Dijkstra's Shortest Path
- Graph – Map Coloring
- Graph – Vertex Cover
- Job Scheduling Problem
- Knapsack Problem
- Kruskal’s Minimal Spanning Tree
- Prim’s Minimal Spanning Tree
- Travelling Salesman
Examples of the divide and conquer approach are:
- Binary Search
- Closest Pair of Points
- Merge Sort
- Quick Sort
- Strassen’s Matrix Multiplication
What is a spanning tree? What is the maximum number of spanning trees a graph can have?
A spanning tree is a subset of a graph that has all the vertices but with the minimum possible number of edges. A spanning tree cannot be disconnected and does not have cycles.
The maximum number of spanning trees in a graph depends on how the graph is connected. A complete undirected graph with n nodes can have a maximum of n^(n-2) spanning trees (Cayley's formula).
What is recursion?
It is the ability of a function or module to call itself. Either a function 'f' calls itself directly, or it calls another function 'g' that in turn calls 'f'. The function f is known as a recursive function, and it follows two recursive properties:
Base criteria: Where the recursive function stops calling itself.
Progressive approach: Where the recursive function tries to meet the base criteria in each iteration.
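A minimal Python illustration of those two properties:

def factorial(n):
    # Base criteria: the recursion stops here.
    if n <= 1:
        return 1
    # Progressive approach: each call moves n closer to the base case.
    return n * factorial(n - 1)

print(factorial(5))  # 120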
What is the Tower of Hanoi problem?
It is a mathematical puzzle that comprises three towers (or pegs) and more than one disk. The disks are of varying sizes and stacked upon one another such that a larger disk is always beneath a smaller one. The goal of the Tower of Hanoi problem is to move the entire tower of disks from one peg to another, one disk at a time, without ever placing a larger disk on top of a smaller one.
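A standard recursive sketch in Python (moving n disks takes 2^n - 1 moves):

def hanoi(n, source, target, auxiliary):
    if n == 1:
        print(f"Move disk 1 from {source} to {target}")
        return
    hanoi(n - 1, source, auxiliary, target)   # park n-1 disks on the spare peg
    print(f"Move disk {n} from {source} to {target}")
    hanoi(n - 1, auxiliary, target, source)   # move them onto the largest disk

hanoi(3, "A", "C", "B")  # prints 2**3 - 1 = 7 moves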
What is the maximum number of nodes in a binary tree of height k?
The maximum number of nodes in a binary tree of depth k is 2^k - 1, for k >= 1 (here the depth of the root is taken as 1). A binary tree is a tree data structure in which each node has at most two children, referred to as the left and right child.
What is an asymptotic analysis of an algorithm?
Asymptotic analysis determines an algorithm’s running time in mathematical units to determine the program’s limits, also known as “run-time performance.” The purpose is to identify the best-case, worst-case, and average-case times for completing a particular activity. Asymptotic analysis is an essential diagnostic tool for programmers to analyze an algorithm’s efficiency rather than its correctness.
What are the differences between the B tree and the B+ Tree?
The B tree is a self-balancing m-way tree, with m defining the Tree’s order. Btree is an extension of the Binary Search tree in which a node can have more than one key and more than two children. The data is provided in the B tree in a sorted manner, with lower values on the left subtree and higher values on the right subtree.
B+ Tree is an advanced self-balanced tree where every path from the Tree’s root to its leaf is of the same length.
What is an AVL tree?
An AVL (Adelson-Velsky and Landis) tree is a self-balancing binary search tree in which the difference between the heights of the left and right subtrees of any node is at most one. This keeps the binary search tree from becoming skewed and bounds its height. It is useful when working with a large data set, with continual pruning through the insertion and deletion of data.
Where is the LRU cache used in the data structure?
LRU (Least Recently Used) cache is used to organize items in order of usage, enabling you to quickly find out which item hasn’t been used for a long time.
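For instance, a compact Python sketch built on collections.OrderedDict:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return -1
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the least recently used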
How do you find the height of a node in a tree?
You can use a recursive Depth-First Search (DFS) algorithm to find the height of a node in a tree.
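A short Python sketch (using one common convention: an empty subtree has height -1, so a leaf has height 0):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def height(node):
    # Height of an empty subtree is -1, so a single leaf ends up with height 0.
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))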
What are Infix, prefix, and Postfix notations in data structure?
Notation is the approach used to write the same arithmetic expression in different ways without changing the essence or output of the expression. These notations are:
Infix notation: X + Y, operators are written between their operands. An example of infix notation is A * ( B + C ) / D
Postfix notation (Reverse Polish notation): X Y +, operators are written after their operands. The infix expression given above is equivalent to A B C + * D /
Prefix notation (Polish notation): + X Y, operators are written before their operands. The expression given above is equivalent to / * A + B C D
Can a doubly linked list be implemented using a single pointer variable in every node?
Describe the Different Kinds of Tree Traversals for a Binary Search Tree.
There are three ways to traverse a tree. They are:
Inorder traversal: traverse the tree starting at the left subtree, then visit the root, and finish off at the right subtree.
Preorder traversal: start at the root, then traverse the left subtree and, finally, the right subtree.
Postorder traversal: cover the tree starting from the left subtree, then the right subtree, and visit the root last to complete the traversal.
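Reusing the Node class from the height sketch above, the three traversals look like this in Python:

def inorder(node):
    if node:
        inorder(node.left)
        print(node.value)
        inorder(node.right)

def preorder(node):
    if node:
        print(node.value)
        preorder(node.left)
        preorder(node.right)

def postorder(node):
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.value)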
What Is the Difference Between the Breadth-First Search (BFS) and Depth-First Search (DFS)?
Breadth-first search uses the queue data structure and explores a graph level by level, which is why it can find the shortest path in an unweighted graph. Depth-first search explores as far as possible along each branch before backtracking, and uses the stack data structure (or recursion) instead of a queue.
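A small Python sketch of both, using an adjacency-list graph:

from collections import deque

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

def dfs(graph, start):
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            order.append(node)
            stack.extend(graph[node])
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
print(dfs(graph, "A"))  # ['A', 'C', 'D', 'B']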
Can Dynamic Memory Allocation Help in Managing Data?
What Is the Left View of a Binary Tree?
The left view of a binary tree is all the nodes you visit when you traverse the Tree from its left side.
Explain Graph Data Structures
What Are the Applications of Graph Data Structures?
Can You Use an Object as a Key in HashMap?
What are asymptotic notations?
How does insertion sort differ from selection sort?
How does Kruskal’s Algorithm work?
How do you check if the given Binary Tree is BST or not?
Which data structures are used for the BFS and DFS of a graph?
Which data structure is used for the dictionary and spell checker?
What process is used to store variables in memory?
Which Sorting Algorithm Is Considered the Fastest?
What Is the Maximum Number of Nodes in a Binary Tree of Height K?
String Interview Questions
The String is probably the most used data structure. You can solve string-based questions easily if you know the array because strings are nothing but a character array. From the C++ perspective, String is nothing but a null-terminated character array, but from the Java perspective, String is a full-fledged object backed by a character array.
Here is the list of frequently asked string coding questions in job interviews (a sample solution for the anagram check is sketched after the list):
- How do you print duplicate characters from a string?
- How do you check if two strings are anagrams of each other?
- How do you print the first non-repeated character from a string?
- How can a given string be reversed using recursion?
- How do you check if a string contains only digits?
- How are duplicate characters found in a string?
- How do you count the number of vowels and consonants in a given string?
- How do you count the occurrence of a given character in a string?
- How do you find all permutations of a string?
- How do you reverse words in a given sentence without using any library method?
- How do you check if two strings are a rotation of each other?
- How do you check if a given string is a palindrome?
- For a given string, write a code to reverse the string without disturbing the individual words.
- Write a program to remove duplicate elements from the String.
- Write a program to find the longest substring’s length with distinct values.
- Write a code to remove successive duplicate characters recursively
- For two strings, A and B, write a program to determine if B can be obtained by rotating A in at least two places.
- How will you determine whether a string only consists of digits?
- How do you reverse words in a given sentence without using any library method?
- Explain the implementation process of a bubble sort algorithm.
- How is an iterative quicksort algorithm implemented?
- How to check if two Strings are anagrams of each other?
- Print the first non-repeated character from String.
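As an example of the kind of solution interviewers expect, here is one way to check whether two strings are anagrams (the sample referred to above):

from collections import Counter

def are_anagrams(first, second):
    # Two strings are anagrams when they use exactly the same characters
    # with the same frequencies (ignoring case and spaces here).
    def normalise(s):
        return Counter(s.replace(" ", "").lower())
    return normalise(first) == normalise(second)

print(are_anagrams("listen", "silent"))   # True
print(are_anagrams("hello", "world"))     # False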
Data Structure Interview Questions on Arrays
An array is a data structure that stores elements at a contiguous memory location. It is a crucial part of coding or programming interviews, e.g., reversing an array, sorting the array, or searching elements on the array.
Here are some of the popular array-based coding interview questions for your practice (a sample solution for the missing-number problem is sketched after the list):
- How Do You Reference all Elements in a One-Dimension Array?
- What Is a Jagged Array?
- Find a missing number in a given integer array of 1 to 100?
- Find a duplicate number on a given integer array?
- Find all pairs of an integer array whose sum equals a given number?
- How do you find duplicate numbers in an array containing multiple duplicates?
- How is an integer array sorted in place using the quicksort algorithm?
- How do you remove duplicates from an array in place?
- How do you reverse an array in place in Java?
- How are duplicates removed from an array without using any library?
- For a given array of size N-1, containing integers in the range from 1 to N, write a program to find the missing element in the array.
- For a given unsorted array of size N, write a code to rotate it anticlockwise by D elements.
- For a given array of size N, write a code to print the reverse of the array.
- For a given array A, write a code to delete the duplicate elements in the array.
- Write a code to sort the array in the wave fashion for a given array of size N containing distinct integer numbers.
- Write a code to find the maximum subarray of non-negative numbers from a given array containing integer values.
- Find all pairs of integer arrays whose sum is equal to a given number?
- Find multiple missing numbers in a given integer array with duplicates?
- How do you find the largest and smallest number in an unsorted integer array?
- How do you identify duplicate numbers in an array if it consists of multiple duplicates?
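As promised above, one way to find the missing number from 1 to N, using the arithmetic-series sum:

def find_missing_number(numbers, n):
    # The array holds n-1 distinct values from 1..n; the gap between the
    # expected sum n*(n+1)/2 and the actual sum is the missing element.
    expected = n * (n + 1) // 2
    return expected - sum(numbers)

print(find_missing_number([1, 2, 4, 5, 6], 6))  # 3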
Linked List Interview Questions
A linked list is a linear data structure or a sequence of data objects where elements are not stored in contiguous memory locations. It is a dynamic data structure where the number of nodes is not fixed, and the list has the ability to grow and shrink on demand. Each list element is connected to the next using a pointer to form a chain. Each element is a separate object called a node. Each node has a data field and a reference to the next node.
Here are some of the most common and popular linked list interview questions and their solutions:
What are the operations supported by a Linked List
Basic operations supported by a linked list:
Insertion - Inserts an element at the list beginning.
Deletion - Deletes an element at the list beginning.
Display - Displays the complete list.
Search - Searches an element using the given key.
Delete - Deletes an element using the given key.
Some implementations include stacks and queues, graphs, a directory of names, dynamic memory allocation, and arithmetic operations on long integers.
Explain Doubly-Linked Lists (DLL).
A doubly linked list is a modification of a linked list in which every element points to both the element before it and the element after it. It is easy to navigate doubly linked lists forwards and backward for this reason.
Every entry in a doubly linked list has the following:
- A data field to carry a particular data value
- A link to the previous entry in the list
- A link to the next entry in the list
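A minimal Python sketch of such a node, plus a front insertion that wires up both links:

class DLLNode:
    def __init__(self, data):
        self.data = data
        self.prev = None   # link to the previous entry
        self.next = None   # link to the next entry

class DoublyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):
        node = DLLNode(data)
        node.next = self.head
        if self.head:
            self.head.prev = node
        self.head = node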
What Are the Applications of Doubly Linked Lists?
Any application that requires quick forward and backward navigation can employ doubly linked lists. Some examples include:
The actions you’ve taken in an image editing app can be stored in a doubly linked list to allow for each undo/redo operation.
More complex data structures like binary trees and hash tables can be constructed using doubly linked lists.
Search engine results pages can be linked to each other using doubly linked lists.
A music playlist with next and previous navigation buttons
The browser cache with BACK-FORWARD visited pages
The undo and redo functionality in a browser, where you can move back to the previous page.
Here are some of the frequently asked linked list questions from programming interviews (a sample solution for reversing a linked list is sketched after the list):
- What are the advantages of a linked list over an array? In which scenarios do we use Linked List, and when array?
- Do you know how to reverse a linked list?
- How do you find the third node from the end in a singly linked list?
- How do you find the sum of two linked lists using Stack?
- How do you find a single linked list’s middle element in one pass?
- Write a code to add two numbers represented by Linked Lists
- Write a function to remove the nth node from a Linked List
- Write a program to swap adjacent nodes in a Linked List
- Write a code to reverse a Linked List from position X to position Y
- Write a program to flatten a given multi-level linked list
- Write a code to find the next greater node for a given Linked List
- Find the middle element of a singly linked list in one pass?
- Find the 3rd node from the end in a singly linked list?
- Find the length of a singly linked list?
- Remove duplicate nodes in an unsorted linked list?
- Remove duplicate elements from a sorted linked list?
- How do you check if a given linked list contains a cycle? How do you find the starting node of the cycle?
- How do you find the third node from the end in a singly linked list?
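Here is the reversal sketch referred to above: iterate once, flipping each node's next pointer as you go.

class ListNode:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def reverse_list(head):
    previous = None
    while head:
        nxt = head.next        # remember the rest of the list
        head.next = previous   # flip the pointer
        previous, head = head, nxt
    return previous            # new head of the reversed list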
Binary Tree Interview Questions
- How is a binary search tree implemented?
- How do you perform preorder traversal in a given binary tree?
- How do you traverse a given binary tree in preorder without recursion?
- How do you implement a postorder traversal algorithm?
- How do you traverse a binary tree in postorder traversal without recursion?
- How are all leaves of a binary search tree printed?
- How do you count the number of leaf nodes in a given binary tree?
- How do the leaves of a binary search tree get printed?
- How will you perform a binary search in a given array?
- How are the leaf nodes in a given binary tree counted?
- Print all nodes of a given binary Tree using inorder traversal without recursion
- Check if a given binary tree is a binary search tree?
- Check if a binary tree is balanced or not?
- Convert a binary search tree to a sorted doubly-linked list.
- Given a binary search tree and a value k, How do you find a node in the binary search tree whose value is closest to k.
When solving binary tree questions, having a good grasp of the theoretical concepts is vital; as an illustration, a short BST-validation sketch follows.
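The sketch below (reusing the Node class from the earlier snippets) checks whether a binary tree is a binary search tree: each node must fall inside the range imposed by its ancestors.

def is_bst(node, low=float("-inf"), high=float("inf")):
    # Every node must stay strictly within the (low, high) range
    # inherited from its ancestors.
    if node is None:
        return True
    if not (low < node.value < high):
        return False
    return (is_bst(node.left, low, node.value) and
            is_bst(node.right, node.value, high))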
Search and Sort Algorithmic Interview Questions
Search- and sort-based questions are commonly asked in any programming round. You may be asked to implement various sorting algorithms, like Bubble Sort, Quicksort, or Merge Sort, or to perform a binary search (a sample binary search is sketched after the list below), etc.
- Write a code for Bubble Sort algorithm?
- Implement Iterative QuickSort Algorithm?
- Implement the Insertion Sort Algorithm?
- Implement the Radix Sort Algorithm?
- Implement Sieve of Eratosthenes Algorithm to find Prime numbers?
- Find GCD of two numbers using Euclid’s Algorithm?
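As referenced above, a classic iterative binary search looks like this:

def binary_search(sorted_values, target):
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1   # not found

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # 4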
Now that you understand the importance of data structures and algorithms, you can begin to improve your skill set. Practise more and more with set goals. Try experimenting with different patterns of problem-solving, establish familiarity with problems and see if you can crack a similar kind of problem which you have solved already, and you will be all set to face your job interview. It is also important to practice writing code on a paper or whiteboard and talk through your approach as you code.
Upgrade your coding skills with our Data Structure and Algorithm class." | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511361.38/warc/CC-MAIN-20231004052258-20231004082258-00231.warc.gz | CC-MAIN-2023-40 | 23,025 | 259 |
https://empyriononline.com/threads/sv-mouse-controls.91590/ | code | It's been a year since I've played, and as a returning player I like almost all of what I'm seeing. But I've hit a wall with the new SV controls, and I need a hand or I won't be able to advance. SV controls used to move in the direction you pushed the mouse. This meant that you had to pick up the mouse to re-center it on your mousepad repeatedly to continue a turn, but it made for some very fine controls for combat. Also, to stop any turn the player only needed to stop moving the mouse. The new controls puts a small circle on your hud. When you move the mouse the cursor moves, and the cursor controls a sustained turn away from the crosshair. However, the controls are overly sensitive, and the player must instantly move the cursor back to the crosshair to stop the turn. I want to re-enable the old controls for SVs, if at all possible. I've tried to adapt for three or four game hours, and the new controls are unintuitive for a mouse and completely cripple the player for targeting in combat. If there is no optional toggle, I don't think I'll be able to adapt the the new system without purchasing other control hardware. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228707.44/warc/CC-MAIN-20200925182046-20200925212046-00313.warc.gz | CC-MAIN-2020-40 | 1,133 | 1 |
https://www.ephe-paleoclimat.com/system-thread-exception-not-handled-error/ | code | System Thread Exception Not Handled Error is a very common Windows Vista or XP error. It happens when a certain driver is not installed properly. The hardware must be properly matched or the application will just not start at all. If one of the drivers is faulty, then the whole computer system may be inoperable until a new driver is found and installed. A simple way to determine if you have a problem with this error is to restart the computer and again check if the system works properly or not. If your problem still persists, it might mean that the drivers are faulty and need to be replaced.
How to fix the system thread exception not handled error is simple. The first thing that you should do is to download and install the latest drivers for every hardware that is installed in the machine. If the drivers are already outdated, then this could also be the cause of the problem. To make sure, you can check if the latest drivers are already installed on the machine.
Once you have updated the drivers, you can now try to restart the machine and see if the error message still appears. You can then try to repair the drivers if needed. One way of doing this is through the Device Manager. Click on the connection that is listed under your network connection or the printer and wait for the devices to appear. You can then select each of the devices and click on the repair option to see if the problem is fixed.
What to Do If You Have a System Thread Exception Not Handled Error in Your Computer Startup
Another method of fixing this error is to follow steps from step 1 and just reverse them. For example, if the error is about a device named USB driver, you can follow steps from step 2 and select the device and then press on the update driver button. If the device does not appear in the list, then you might need to update the drivers.
If neither of these methods work, then the next step you can take is to use Microsoft's system restore feature. If your computer does not have an operating system that supports it, this might be the reason why you are getting the System Thread Exception Nothandled Error Message. To do this, you should first download and install the Microsoft System Restore software. After installing the system restore software, open your windows setup menu. It is where you will find the restore point and click on it.
Thereafter, you should locate the system thread exception and search for the missing drivers. For each missing driver software that you find, you should uninstall it. This will make the missing drivers accessible so that you can install them one by one. Once you have installed the new ones, you should search automatically for the missing ones. If the System Thread Exception Not Handles Error Message is not responding, then you need to restart the system.
One thing that could be causing the error occurs when the computer has to restart for some reason. For instance, you might have deleted a system file making the system unable to restart. The last resort is to reinstall the operating system so that you are able to restart your computer. The first thing that you should do if you have a System Thread Exception Nothandled Error Message in your computer startup is to search for the causes of the problem.
You might have a virus or spyware that is causing the System Thread Exception Not Handled Error message in your computer startup. The next thing you should do is to clean your computer of such viruses. For this, you should install anti-virus programs that are certified by leading antivirus vendors. Then, you should run a complete system scan with these programs. In order to fix the problem of the System Thread Exception Not Handled Error, you should uninstall the malicious programs from your computer and then re-install them. Re-installing the programs could also help if the problem originated from malware infections in your system.
Thank you for checking this article, If you want to read more blog posts about system thread exception not handled error don't miss our blog - Ephe Paleoclimat We try to update our site every day | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303385.49/warc/CC-MAIN-20220121131830-20220121161830-00170.warc.gz | CC-MAIN-2022-05 | 4,104 | 10 |
https://oajournals.fupress.net/index.php/cdg/article/view/13836 | code | Gravitational scattering, inspiral, and radiation
- Gravitational waves,
- numerical relativity,
- perturbative calculations,
- string theory,
- modified theories of gravity
This work is licensed under a Creative Commons Attribution 4.0 International License.
The workshop gathered theorists working on different though connected areas concerning the recent discovery of gravitational waves. It fostered new collaborations between the quantum gravitational scattering amplitude and the general relativity community, leading to the calculation of new, high-order, terms in the post-Newtonian and post-Minkowskian perturbative approaches to the physics of binary systems, at both analytical and numerical level, in order to construct the waveform templates necessary for the analysis of LIGO/Virgo data. The use of recent progress on gravitational scattering and radiation in ultra-relativistic collisions of elementary particles or strings improved the determination of parameters appearing in the effective-one-body approach to the relativistic two-body problem. Various consequences of modified gravity theories for the LIGO/Virgo discoveries were also explored. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00544.warc.gz | CC-MAIN-2023-06 | 1,163 | 8 |
http://stackoverflow.com/users/1098873/eric | code | CTO at Justly
Apparently, this user prefers to keep an air of mystery about them.
55 How do I revert my changes to a git submodule? jun 5 '12
9 How do I use git flow with a staging environment? apr 15 '13
6 How can I write an Omniauth RSpec for the login? sep 22 '12
3 What rails relationship am I looking for here? apr 15 '12
3 Sunspot/Solr not able to search by boolean value nov 6 '12 | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161942.67/warc/CC-MAIN-20160205193921-00200-ip-10-236-182-209.ec2.internal.warc.gz | CC-MAIN-2016-07 | 387 | 7 |