text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Add Two PyTorch Tensors Together Add two PyTorch Tensors together by using the PyTorch add operation < > Code: Transcript: This video will show you how to add two PyTorch tensors together by using the PyTorch add operation. First, we import PyTorch. import torch Then we print the PyTorch version we are using. print(torch.__version__) We are using PyTorch 0.3.1.post2. Let’s now create a PyTorch tensor for our example. pt_tensor_one_ex = torch.Tensor([2, 3, 4]) So we use torch.Tensor. We want it to be 2x3x4. We assign it to the Python variable pt_tensor_one_ex. Let’s print the pt_tensor_one_ex Python variable to see what we have. print(pt_tensor_one_ex) We see that it’s a torch.FloatTensor of size 3. The elements are the number 2, the number 3, and the number 4, just as we defined it. Next, let’s create a second tensor to add to our first. pt_tensor_two_ex = torch.Tensor([4, 3, 2]) So we use torch.Tensor again, we define it as 4, 3, 2, and we assign it to the Python variable pt_tensor_two_ex. We print this variable to see what we have. print(pt_tensor_two_ex) We see that it’s a torch.FloatTensor of size 3. We see the numbers 4, 3, 2. Next, let’s add the two tensors together using the PyTorch dot add operation. pt_addition_result_ex = pt_tensor_one_ex.add(pt_tensor_two_ex) So the first tensor, then dot add, and then the second tensor. The result, we’re going to assign to the Python variable pt_addition_result_ex. Note that this operation returns a new PyTorch tensor. We can print the pt_addition_result_ex and see that we get 6, 6, 6. print(pt_addition_result_ex) So 2 plus 4 is 6, 3 plus 3 is 6, 4 plus 2 is 6. All right, so we were able to add two tensors together. Just to make sure that we didn’t change our initial tensors, let’s check pt_tensor_one_ex. So we print pt_tensor_one_ex, and we see that it’s still 2, 3, 4. print(pt_tensor_one_ex) We also print pt_tensor_two_ex, and we see that it’s still 4, 3, 2. print(pt_tensor_two_ex) So the PyTorch addition operation does not change the original tensors. It generates a new tensor. Perfect - We were able to add two PyTorch tensors together by using the PyTorch add operation.
https://aiworkbox.com/lessons/add-two-pytorch-tensors-together
CC-MAIN-2020-40
refinedweb
382
68.77
The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program. Introduction Ansible is an agentless configuration management tool that uses YAML templates to define a list of tasks to be performed on hosts. In Ansible, roles are a collection of variables, tasks, files, templates and modules that are used together to perform a singular, complex function. Molecule is a tool for performing automated testing of Ansible roles, specifically designed to support the development of consistently well-written and maintained roles. Molecule’s unit tests allow developers to test roles simultaneously against multiple environments and under different parameters. It’s important that developers continuously run tests against code that often changes; this workflow ensures that roles continue to work as you update code libraries. Running Molecule using a continuous integration tool, like Travis CI, allows for tests to run continuously, ensuring that contributions to your code do not introduce breaking changes. In this tutorial, you will use a pre-made base role that installs and configures an Apache web server and a firewall on Ubuntu and CentOS servers. Then, you will initialize a Molecule scenario in that role to create tests and ensure that the role performs as intended in your target environments. After configuring Molecule, you will use Travis CI to continuously test your newly created role. Every time a change is made to your code, Travis CI will run molecule test to make sure that the role still performs correctly. Prerequisites Before you begin this tutorial, you will need: Step 1 — Forking the Base Role Repository You will be using a pre-made role called ansible-apache that installs Apache and configures a firewall on Debian- and Red Hat-based distributions. You will fork and use this role as a base and then build Molecule tests on top of it. Forking allows you to create a copy of a repository so you can make changes to it without tampering with the original project. Start by creating a fork of the ansible-apache role. Go to the ansible-apache repository and click on the Fork button. Once you have forked the repository, GitHub will lead you to your fork’s page. This will be a copy of the base repository, but on your own account. Click on the green Clone or Download button and you’ll see a box with Clone with HTTPS. Copy the URL shown for your repository. You’ll use this in the next step. The URL will be similar to this: You will replace username with your GitHub username. With your fork set up, you will clone it on your server and begin preparing your role in the next section. Step 2 — Preparing Your Role Having followed Step 1 of the prerequisite How To Test Ansible Roles with Molecule on Ubuntu 18.04, you will have Molecule and Ansible installed in a virtual environment. You will use this virtual environment for developing your new role. First, activate the virtual environment you created while following the prerequisites by running: - source my_env/bin/activate Run the following command to clone the repository using the URL you just copied in Step 1: - git clone Your output will look similar to the following: OutputCloning into 'ansible-apache'... remote: Enumerating objects: 16, done. remote: Total 16 (delta 0), reused 0 (delta 0), pack-reused 16 Unpacking objects: 100% (16/16), done. Move into the newly created directory: The base role you've downloaded performs the following tasks: Includes variables: The role starts by including all the required variables according to the distribution of the host. Ansible uses variables to handle the disparities between different systems. Since you are using Ubuntu 18.04 and CentOS 7 as hosts, the role will recognize that the OS families are Debian and Red Hat respectively and include variables from vars/Debian.ymland vars/RedHat.yml. Includes distribution-relevant tasks: These tasks include tasks/install-Debian.ymland tasks/install-RedHat.yml. Depending on the specified distribution, it installs the relevant packages. For Ubuntu, these packages are apache2and ufw. For CentOS, these packages are httpdand firewalld. Ensures latest index.html is present: This task copies over a template templates/index.html.j2that Apache will use as the web server's home page. Starts relevant services and enables them on boot: Starts and enables the required services installed as part of the first task. For CentOS, these services are httpdand firewalld, and for Ubuntu, they are apache2and ufw. Configures firewall to allow traffic: This includes either tasks/configure-Debian-firewall.ymlor tasks/configure-RedHat-firewall.yml. Ansible configures either Firewalld or UFW as the firewall and whitelists the httpservice. Now that you have an understanding of how this role works, you will configure Molecule to test it. You will write test cases for these tasks that cover the changes they make. Step 3 — Writing Your Tests To check that your base role performs its tasks as intended, you will start a Molecule scenario, specify your target environments, and create three custom test files. Begin by initializing a Molecule scenario for this role using the following command: - molecule init scenario -r ansible-apache You will see the following output: Output--> Initializing new scenario default... Initialized scenario in /home/sammy/ansible-apache/molecule/default successfully. You will add CentOS and Ubuntu as your target environments by including them as platforms in your Molecule configuration file. To do this, edit the molecule.yml file using a text editor: - nano molecule/default/molecule.yml Add the following highlighted content to the Molecule configuration: ~/ansible-apache/molecule/default/molecule.yml --- dependency: name: galaxy driver: name: docker lint: name: yamllint platforms: - name: centos7 image: milcom/centos7-systemd privileged: true - name: ubuntu18 image: solita/ubuntu-systemd command: /sbin/init privileged: true volumes: - /lib/modules:/lib/modules:ro provisioner: name: ansible lint: name: ansible-lint scenario: name: default verifier: name: testinfra lint: name: flake8 Here, you're specifying two target platforms that are launched in privileged mode since you're working with systemd services: centos7is the first platform and uses the milcom/centos7-systemdimage. ubuntu18is the second platform and uses the solita/ubuntu-systemdimage. In addition to using privileged mode and mounting the required kernel modules, you're running /sbin/initon launch to make sure iptables is up and running. Save and exit the file. For more information on running privileged containers visit the official Molecule documentation. Instead of using the default Molecule test file, you will be creating three custom test files, one for each target platform, and one file for writing tests that are common between all platforms. Start by deleting the scenario's default test file test_default.py with the following command: - rm molecule/default/tests/test_default.py You can now move on to creating the three custom test files, test_common.py, test_Debian.py, and test_RedHat.py for each of your target platforms. The first test file, test_common.py, will contain the common tests that each of the hosts will perform. Create and edit the common test file, test_common.py: - nano molecule/default/tests/test_common.py Add the following code to the file: ~/ansible-apache/molecule/default/tests/test_common.py import os import pytest import testinfra.utils.ansible_runner testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner( os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all') @pytest.mark.parametrize('file, content', [ ("/var/www/html/index.html", "Managed by Ansible") ]) def test_files(host, file, content): file = host.file(file) assert file.exists assert file.contains(content) In your test_common.py file, you have imported the required libraries. You have also written a test called test_files(), which holds the only common task between distributions that your role performs: copying your template as the web servers homepage. The next test file, test_Debian.py, holds tests specific to Debian distributions. This test file will specifically target your Ubuntu platform. Create and edit the Ubuntu test file by running the following command: - nano molecule/default/tests/test_Debian.py You can now import the required libraries and define the ubuntu18 platform as the target host. Add the following code to the start of this file: ~') Then, in the same file, you'll add test_pkg() test. Add the following code to the file, which defines the test_pkg() test: ~/ansible-apache/molecule/default/tests/test_Debian.py ... @pytest.mark.parametrize('pkg', [ 'apache2', 'ufw' ]) def test_pkg(host, pkg): package = host.package(pkg) assert package.is_installed This test will check if apache2 and ufw packages are installed on the host. Note: When adding multiple tests to a Molecule test file, make sure there are two blank lines between each test or you'll get a syntax error from Molecule. To define the next test, test_svc(), add the following code under the test_pkg() test in your file: ~/ansible-apache/molecule/default/tests/test_Debian.py ... @pytest.mark.parametrize('svc', [ 'apache2', 'ufw' ]) def test_svc(host, svc): service = host.service(svc) assert service.is_running assert service.is_enabled test_svc() will check if the apache2 and ufw services are running and enabled. Finally you will add your last test, test_ufw_rules(), to the test_Debian.py file. Add this code under the test_svc() test in your file to define test_ufw_rules(): ~/ansible-apache/molecule/default/tests/test_Debian.py ... @pytest.mark.parametrize('rule', [ '-A ufw-user-input -p tcp -m tcp --dport 80 -j ACCEPT' ]) def test_ufw_rules(host, rule): cmd = host.run('iptables -t filter -S') assert rule in cmd.stdout test_ufw_rules() will check that your firewall configuration permits traffic on the port used by the Apache service. With each of these tests added, your test_Debian.py file will look like this: ~') @pytest.mark.parametrize('pkg', [ 'apache2', 'ufw' ]) def test_pkg(host, pkg): package = host.package(pkg) assert package.is_installed @pytest.mark.parametrize('svc', [ 'apache2', 'ufw' ]) def test_svc(host, svc): service = host.service(svc) assert service.is_running assert service.is_enabled @pytest.mark.parametrize('rule', [ '-A ufw-user-input -p tcp -m tcp --dport 80 -j ACCEPT' ]) def test_ufw_rules(host, rule): cmd = host.run('iptables -t filter -S') assert rule in cmd.stdout The test_Debian.py file now includes the three tests: test_pkg(), test_svc(), and test_ufw_rules(). Save and exit test_Debian.py. Next you'll create the test_RedHat.py test file, which will contain tests specific to Red Hat distributions to target your CentOS platform. Create and edit the CentOS test file, test_RedHat.py, by running the following command: - nano molecule/default/tests/test_RedHat.py Similarly to the Ubuntu test file, you will now write three tests to include in your test_RedHat.py file. Before adding the test code, you can import the required libraries and define the centos7 platform as the target host, by adding the following code to the beginning of your file: ~') Then, add the test_pkg() test, which will check if the httpd and firewalld packages are installed on the host. Following the code for your library imports, add the test_pkg() test to your file. (Again, remember to include two blank lines before each new test.) ~/ansible-apache/molecule/default/tests/test_RedHat.py ... @pytest.mark.parametrize('pkg', [ 'httpd', 'firewalld' ]) def test_pkg(host, pkg): package = host.package(pkg) assert package.is_installed Now, you can add the test_svc() test to ensure that httpd and firewalld services are running and enabled. Add the test_svc() code to your file following the test_pkg() test: ~/ansible-apache/molecule/default/tests/test_RedHat.py ... @pytest.mark.parametrize('svc', [ 'httpd', 'firewalld' ]) def test_svc(host, svc): service = host.service(svc) assert service.is_running assert service.is_enabled The final test in test_RedHat.py file will be test_firewalld(), which will check if Firewalld has the http service whitelisted. Add the test_firewalld() test to your file after the test_svc() code: ~/ansible-apache/molecule/default/tests/test_RedHat.py ... @pytest.mark.parametrize('file, content', [ ("/etc/firewalld/zones/public.xml", "<service name="http"/>") ]) def test_firewalld(host, file, content): file = host.file(file) assert file.exists assert file.contains(content) After importing the libraries and adding the three tests, your test_RedHat.py file will look like this: ~"/>") ]) def test_firewalld(host, file, content): file = host.file(file) assert file.exists assert file.contains(content) Now that you've completed writing tests in all three files, test_common.py, test_Debian.py, and test_RedHat.py, your role is ready for testing. In the next step, you will use Molecule to run these tests against your newly configured role. Step 4 — Testing Against Your Role You will now execute your newly created tests against the base role ansible-apache using Molecule. To run your tests, use the following command: You'll see the following output once Molecule has finished running all the tests: Output... --> Scenario: 'default' --> Action: 'verify' --> Executing Testinfra tests found in /home/sammy/ansible-apache/molecule/default/tests/... ============================= test session starts ============================== platform linux -- Python 3.6.7, pytest-4.1.1, py-1.7.0, pluggy-0.8.1 rootdir: /home/sammy/ansible-apache/molecule/default, inifile: plugins: testinfra-1.16.0 collected 12 items tests/test_common.py .. [ 16%] tests/test_RedHat.py ..... [ 58%] tests/test_Debian.py ..... [100%] ========================== 12 passed in 80.70 seconds ========================== Verifier completed successfully. You'll see Verifier completed successfully in your output; this means that the verifier executed all of your tests and returned them successfully. Now that you've successfully completed the development of your role, you can commit your changes to Git and set up Travis CI for continuous testing. Step 5 — Using Git to Share Your Updated Role In this tutorial, so far, you have cloned a role called ansible-apache and added tests to it to make sure it works against Ubuntu and CentOS hosts. To share your updated role with the public, you must commit these changes and push them to your fork. Run the following command to add the files and commit the changes you've made: This command will add all the files that you have modified in the current directory to the staging area. You also need to set your name and email address in the git config in order to commit successfully. You can do so using the following commands: - git config user.email "[email protected]" - git config user.name "John Doe" Commit the changed files to your repository: - git commit -m "Configured Molecule" You'll see the following output: Output[master b2d5a5c] Configured Molecule 8 files changed, 155 insertions(+), 1 deletion(-) create mode 100644 molecule/default/Dockerfile.j2 create mode 100644 molecule/default/INSTALL.rst create mode 100644 molecule/default/molecule.yml create mode 100644 molecule/default/playbook.yml create mode 100644 molecule/default/tests/test_Debian.py create mode 100644 molecule/default/tests/test_RedHat.py create mode 100644 molecule/default/tests/test_common.py This signifies that you have committed your changes successfully. Now, push these changes to your fork with the following command: - git push -u origin master You will see a prompt for your GitHub credentials. After entering these credentials, your code will be pushed to your repository and you'll see this output: OutputCounting objects: 13, done. Compressing objects: 100% (12/12), done. Writing objects: 100% (13/13), 2.32 KiB | 2.32 MiB/s, done. Total 13 (delta 3), reused 0 (delta 0) remote: Resolving deltas: 100% (3/3), completed with 2 local objects. To 009d5d6..e4e6959 master -> master Branch 'master' set up to track remote branch 'master' from 'origin'. If you go to your fork's repository at github.com/username/ansible-apache, you'll see a new commit called Configured Molecule reflecting the changes you made in the files. Now, you can integrate Travis CI with your new repository so that any changes made to your role will automatically trigger Molecule tests. This will ensure that your role always works with Ubuntu and CentOS hosts. Step 6 — Integrating Travis CI In this step, you're going to integrate Travis CI into your workflow. Once enabled, any changes you push to your fork will trigger a Travis CI build. The purpose of this is to ensure Travis CI always runs molecule test whenever contributors make changes. If any breaking changes are made, Travis will declare the build status as such. Proceed to Travis CI to enable your repository. Navigate to your profile page where you can click the Activate button for GitHub. You can find further guidance here on activating repositories in Travis CI. For Travis CI to work, you must create a configuration file containing instructions for it. To create the Travis configuration file, return to your server and run the following command: To duplicate the environment you've created in this tutorial, you will specify parameters in the Travis configuration file. Add the following content to your file: ~/ansible-apache/.travis.yml --- language: python python: - "2.7" - "3.6" services: - docker install: - pip install molecule docker script: - molecule --version - ansible --version - molecule test The parameters you've specified in this file are: language: When you specify Python as the language, the CI environment uses separate virtualenvinstances for each Python version you specify under the pythonkey. python: Here, you're specifying that Travis will use both Python 2.7 and Python 3.6 to run your tests. services: You need Docker to run tests in Molecule. You're specifying that Travis should ensure Docker is present in your CI environment. install: Here, you're specifying preliminary installation steps that Travis CI will carry out in your virtualenv. pip install molecule dockerto check that Ansible and Molecule are present along with the Python library for the Docker remote API. script: This is to specify the steps that Travis CI needs to carry out. In your file, you're specifying three steps: molecule --versionprints the Molecule version if Molecule has been successfully installed. ansible --versionprints the Ansible version if Ansible has been successfully installed. molecule testfinally runs your Molecule tests. The reason you specify molecule --version and ansible --version is to catch errors in case the build fails as a result of ansible or molecule misconfiguration due to versioning. Once you've added the content to the Travis CI configuration file, save and exit .travis.yml. Now, every time you push any changes to your repository, Travis CI will automatically run a build based on the above configuration file. If any of the commands in the script block fail, Travis CI will report the build status as such. To make it easier to see the build status, you can add a badge indicating the build status to the README of your role. Open the README.md file using a text editor: Add the following line to the README.md to display the build status: ~/ansible-apache/README.md []() Replace username with your GitHub username. Commit and push the changes to your repository as you did earlier. First, run the following command to add .travis.yml and README.md to the staging area: - git add .travis.yml README.md Now commit the changes to your repository by executing: - git commit -m "Configured Travis" Finally, push these changes to your fork with the following command: - git push -u origin master If you navigate over to your GitHub repository, you will see that it initially reports build: unknown. Within a few minutes, Travis will initiate a build that you can monitor at the Travis CI website. Once the build is a success, GitHub will report the status as such on your repository as well — using the badge you've placed in your README file: You can access the complete details of the builds by going to the Travis CI website: Now that you've successfully set up Travis CI for your new role, you can continuously test and integrate changes to your Ansible roles. Conclusion In this tutorial, you forked a role that installs and configures an Apache web server from GitHub and added integrations for Molecule by writing tests and configuring these tests to work on Docker containers running Ubuntu and CentOS. By pushing your newly created role to GitHub, you have allowed other users to access your role. When there are changes to your role by contributors, Travis CI will automatically run Molecule to test your role. Once you're comfortable with the creation of roles and testing them with Molecule, you can integrate this with Ansible Galaxy so that roles are automatically pushed once the build is successful.
https://www.xpresservers.com/tag/ci/
CC-MAIN-2022-27
refinedweb
3,351
57.77
The Oracle OCCI Fx Kit includes two Fx for the FxEngine framework from kit provides the tools needed to process ORACLE data in a data flow. + Added data append + Loop on vector data + Updated IFxPinCallback::FxPin Added "FEF" namespace. Updated FxDataViewerRnd: - Fixed column insert Updated FxDataViewerRnd: - Added Ctrl + A (Select All) - Added Ctrl + C (Copy selected lines to Clipboard) - Added Query parameter. - Added Release Media Data when an error occurs after GetDeliveryMedia calling. - Updated project settings to link with new libraries name (FxEngined-Vc8-md.lib and FxEngine-Vc8-md.lib). - Replaced all parameters by string ... Copyright © 2009 SourceForge, Inc. All rights reserved. Terms of Use
http://sourceforge.net/projects/oracleoccifxkit/
crawl-002
refinedweb
106
52.97
# Boring preliminaries %pylab inline import re import math import string from collections import Counter from __future__ import division Populating the interactive namespace from numpy and matplotlib <center> <h1>Statistical Natural Language Processing in Python. or How To Do Things With Words. And Counters. or Everything I Needed to Know About NLP I learned From Sesame Street. Except Kneser-Ney Smoothing. The Count Didn't Cover That. One, two, three, ah, ah, ah! — The Count </center> TEXT = file('big.txt').read() len(TEXT) 6488409 So, six million characters. Now let's break the text up into words (or more formal-sounding, tokens). For now we'll ignore all the punctuation and numbers, and anything that is not a letter. def tokens(text): "List all the word tokens (consecutive letters) in a text. Normalize to lowercase." return re.findall('[a-z]+', text.lower()) tokens('This is: A test, 1, 2, 3, this is.') ['this', 'is', 'a', 'test', 'this', 'is'] WORDS = tokens(BIG) len(WORDS) 1105211 So, a million words. Here are the first 10: print(WORDS[:10]) ['the', 'project', 'gutenberg', 'ebook', 'of', 'the', 'adventures', 'of', 'sherlock', 'holmes'] The list WORDS is a list of the words in the TEXT, but it can also serve as a generative model of text. We know that language is very complicated, but we can create a simplified model of language that captures part of the complexity. In the bag of words model, we ignore the order of words, but maintain their frequency. Think of it this way: take all the words from the text, and throw them into a bag. Shake the bag, and then generating a sentence consists of pulling words out of the bag one at a time. Chances are it won't be grammatical or sensible, but it will have words in roughly the right proportions. Here's a function to sample an n word sentence from a bag of words: def sample(bag, n=10): "Sample a random n-word sentence from the model described by the bag of words." return ' '.join(random.choice(bag) for _ in range(n)) sample(WORDS) 'of with head the us underneath the affability cannon of' Another representation for a bag of words is a Counter, which is a dictionary of {'word': count} pairs. For example, Counter(tokens('Is this a test? It is a test!')) Counter({'a': 2, 'is': 2, 'test': 2, 'this': 1, 'it': 1}) A Counter is like a dict, but with a few extra methods. Let's make a Counter for the big list of WORDS and get a feel for what's there: COUNTS = Counter(WORDS) print COUNTS.most_common(10) [('the', 80029), ('of', 40025), ('and', 38312), ('to', 28766), ('in', 22047), ('a', 21155), ('that', 12512), ('he', 12401), ('was', 11410), ('it', 10681)] for w in tokens('the rare and neverbeforeseen words'): print COUNTS[w], w 80029 the 83 rare 38312 and 0 neverbeforeseen 460 words In 1935, linguist George Zipf noted that in any big text, the nth most frequent word appears with a frequency of about 1/n of the most frequent word. He get's credit for Zipf's Law, even though Felix Auerbach made the same observation in 1913. If we plot the frequency of words, most common first, on a log-log plot, they should come out as a straight line if Zipf's Law holds. Here we see that it is a fairly close fit: M = COUNTS['the'] yscale('log'); xscale('log'); title('Frequency of n-th most frequent word and 1/n line.') plot([c for (w, c) in COUNTS.most_common()]) plot([M/i for i in range(1, len(COUNTS)+1)]); Given a word w, find the most likely correction c = correct(w ). Approach: Try all candidate words c that are known words that are near w. Choose the most likely one. How to balance near and likely? For now, in a trivial way: always prefer nearer, but when there is a tie on nearness, use the word with the highest WORDS count. Measure nearness by edit distance: the minimum number of deletions, transpositions, insertions, or replacements of characters. By trial and error, we determine that going out to edit distance 2 will give us reasonable results. Then we can define correct(w ): def correct(word): "Find the best spelling correction for this word." # Prefer edit distance 0, then 1, then 2; otherwise default to word itself. candidates = (known(edits0(word)) or known(edits1(word)) or known(edits2(word)) or [word]) return max(candidates, key=COUNTS.get) The functions known and edits0 are easy; and edits2 is easy if we assume we have edits1: def known(words): "Return the subset of words that are actually in the dictionary." return {w for w in words if w in COUNTS} def edits0(word): "Return all strings that are zero edits away from word (i.e., just word itself)." return {word} def edits2(word): "Return all strings that are two edits away from this word." return {e2 for e1 in edits1(word) for e2 in edits1(e1)} Now for edits1(word): the set of candidate words that are one edit away. For example, given "wird", this would include "weird" (inserting an e) and "word" (replacing a i with a o), and also "iwrd" (transposing w and i; then known can be used to filter this out of the set of final candidates). How could we get them? One way is to split the original word in all possible places, each split forming a pair of words, (a, b), before and after the place, and at each place, either delete, transpose, replace, or insert a letter: def edits1(word): "Return all strings that are one edit away from this word." pairs = splits(word) deletes = [a+b[1:] for (a, b) in pairs if b] transposes = [a+b[1]+b[0]+b[2:] for (a, b) in pairs if len(b) > 1] replaces = [a+c+b[1:] for (a, b) in pairs for c in alphabet if b] inserts = [a+c+b for (a, b) in pairs for c in alphabet] return set(deletes + transposes + replaces + inserts) def splits(word): "Return a list of all possible (first, rest) pairs that comprise word." return [(word[:i], word[i:]) for i in range(len(word)+1)] alphabet = 'abcdefghijklmnopqrstuvwxyz' splits('wird') [('', 'wird'), ('w', 'ird'), ('wi', 'rd'), ('wir', 'd'), ('wird', '')] print edits0('wird') set(['wird']) print edits1('wird') set(['wirdh', 'wirdw', 'jird', 'wiid', 'wirj', 'wiprd', 'rird', 'wkird', 'wiqrd', 'wrird', 'wisrd', 'zwird', 'wiqd', 'wizrd', 'wirs', 'wrd', 'wqird', 'tird', 'wirdp', 'wrrd', 'wzrd', 'wiad', 'nird', 'wirsd', 'wixd', 'wxird', 'lird', 'eird', 'wmird', 'wihd', 'wirp', 'lwird', 'wirzd', 'widrd', 'wxrd', 'ewird', 'wirdx', 'wirkd', 'hwird', 'wipd', 'wirnd', 'uwird', 'wirz', 'mwird', 'wjrd', 'wirjd', 'wirrd', 'wirdd', 'wsird', 'bwird', 'wcrd', 'xwird', 'wdird', 'wibrd', 'wikd', 'wiryd', 'wiord', 'gird', 'wtird', 'wbrd', 'nwird', 'wlrd', 'wgird', 'wmrd', 'wirf', 'wirg', 'wird', 'wire', 'wirb', 'wirc', 'wira', 'wkrd', 'wiro', 'wirl', 'wirm', 'iird', 'wirk', 'wirh', 'wiri', 'wirv', 'wirw', 'wirt', 'wiru', 'wirr', 'wicd', 'cird', 'wirq', 'wirqd', 'wizd', 'wirhd', 'ird', 'bird', 'wirx', 'wiry', 'wvrd', 'widr', 'wprd', 'wirad', 'wijd', 'wirxd', 'uird', 'wirdb', 'qwird', 'dird', 'wnrd', 'wjird', 'gwird', 'whrd', 'wtrd', 'woird', 'rwird', 'wurd', 'wijrd', 'witrd', 'wwrd', 'dwird', 'vwird', 'wibd', 'wiyd', 'wicrd', 'weird', 'yird', 'wiwrd', 'wirdv', 'wirdu', 'wid', 'wirds', 'wirdr', 'sird', 'wirbd', 'wirdq', 'wirdy', 'wivrd', 'wirdg', 'wirdf', 'wirde', 'wiud', 'wirdc', 'wir', 'wirda', 'wfird', 'kwird', 'wirdn', 'wirdm', 'wirdl', 'wirdk', 'wirdj', 'wiird', 'wnird', 'wiurd', 'wied', 'aird', 'wirod', 'wpird', 'wcird', 'wzird', 'jwird', 'wirdo', 'wsrd', 'wimd', 'wirwd', 'mird', 'fird', 'wuird', 'wirdt', 'wired', 'wirgd', 'wirfd', 'witd', 'wfrd', 'wyrd', 'wihrd', 'zird', 'ward', 'wilrd', 'widd', 'iwrd', 'hird', 'word', 'wisd', 'wvird', 'pird', 'wlird', 'wyird', 'wdrd', 'werd', 'wild', 'oird', 'wirid', 'wgrd', 'wirvd', 'wiod', 'wirud', 'wircd', 'wiyrd', 'wigrd', 'wixrd', 'wiard', 'vird', 'wiwd', 'wigd', 'wirmd', 'swird', 'wierd', 'xird', 'qird', 'waird', 'wqrd', 'kird', 'cwird', 'wirtd', 'wirdz', 'awird', 'fwird', 'wirpd', 'wifrd', 'pwird', 'owird', 'wivd', 'wimrd', 'iwird', 'winrd', 'wirdi', 'wrid', 'wifd', 'wirld', 'wwird', 'ywird', 'wirn', 'wbird', 'whird', 'wikrd', 'wind', 'twird']) print len(edits2('wird')) 24254 map(correct, tokens('Speling errurs in somethink. Whutever; unusuel misteakes everyware?')) ['spelling', 'errors', 'in', 'something', 'whatever', 'unusual', 'mistakes', 'everywhere'] Can we make the output prettier than that? def correct_text(text): "Correct all the words within a text, returning the corrected text." return re.sub('[a-zA-Z]+', correct_match, text) def correct_match(match): "Spell-correct word in match, and preserve proper upper/lower/title case." word = match.group() return case_of(word)(correct(word.lower())) def case_of(text): "Return the case-function appropriate for text: upper, lower, title, or just str." return (str.upper if text.isupper() else str.lower if text.islower() else str.title if text.istitle() else str) map(case_of, ['UPPER', 'lower', 'Title', 'CamelCase']) [<method 'upper' of 'str' objects>, <method 'lower' of 'str' objects>, <method 'title' of 'str' objects>, str] correct_text('Speling Errurs IN somethink. Whutever; unusuel misteakes?') 'Spelling Errors IN something. Whatever; unusual mistakes?' correct_text('Audiance sayzs: tumblr ...') 'Audience says: tumbler ...' So far so good. You can probably think of a dozen ways to make this better. Here's one: in the text "three, too, one, blastoff!" we might want to correct "too" with "two", even though "too" is in the dictionary. We can do better if we look at a sequence of words, not just an individual word one at a time. But how can we choose the best corrections of a sequence? The ad-hoc approach worked pretty well for single words, but now we could use some real theory ... We should be able to compute the probability of a word, $P(w)$. We do that with the function pdist, which takes as input a Counter (hat is, a bag of words) and returns a function that acts as a probability distribution over all possible words. In a probability distribution the probability of each word is between 0 and 1, and the sum of the probabilities is 1. def pdist(counter): "Make a probability distribution, given evidence from a Counter." N = sum(counter.values()) return lambda x: counter[x]/N P = pdist(COUNTS) for w in tokens('"The" is most common word in English'): print P(w), w 0.0724106075672 the 0.00884356018896 is 0.000821562579453 most 0.000259678921039 common 0.000269631771671 word 0.0199482270806 in 0.000190913771217 english Now, what is the probability of a sequence of words? Use the definition of a joint probability: $P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_1 w_2) \ldots \times \ldots P(w_n \mid w_1 \ldots w_{n-1})$ The bag of words model assumes that each word is drawn from the bag independently of the others. This gives us the wrong approximation: $P(w_1 \ldots w_n) = P(w_1) \times P(w_2) \times P(w_3) \ldots \times \ldots P(w_n)$ The statistician George Box said that All models are wrong, but some are useful. How can we compute $P(w_1 \ldots w_n)$? We'll use a different function name, Pwords, rather than P, and we compute the product of the individual probabilities: def Pwords(words): "Probability of words, assuming each word is independent of others." return product(Pword(w) for w in words) def product(nums): "Multiply the numbers together. (Like `sum`, but with multiplication.)" result = 1 for x in nums: result *= x return result tests = ['this is a test', 'this is a unusual test', 'this is a neverbeforeseen test'] for test in tests: print Pwords(tokens(test)), test 2.98419543271e-11 this is a test 8.64036404331e-16 this is a unusual test 0.0 this is a neverbeforeseen test Yikes—it seems wrong to give a probability of 0 to the last one; it should just be very small. We'll come back to that later. The other probabilities seem reasonable. Task: given a sequence of characters with no spaces separating words, recover the sequence of words. Why? Languages with no word delimiters: 不带空格的词 In English, sub-genres with no word delimiters (spelling errors, URLs). Approach 1: Enumerate all candidate segementations and choose the one with highest Pwords Problem: how many segmentations are there for an n-character text? Approach 2: Make one segmentation, into a first word and remaining characters. If we assume words are independent then we can maximize the probability of the first word adjoined to the best segmentation of the remaining characters. assert segment('choosespain') == ['choose', 'spain'] segment('choosespain') == max(Pwords(['c'] + segment('hoosespain')), Pwords(['ch'] + segment('oosespain')), Pwords(['cho'] + segment('osespain')), Pwords(['choo'] + segment('sespain')), ... Pwords(['choosespain'] + segment(''))) To make this somewhat efficient, we need to avoid re-computing the segmentations of the remaining characters. This can be done explicitly by dynamic programming or implicitly with memoization. Also, we shouldn't consider all possible lengths for the first word; we can impose a maximum length. What should it be? A little more than the longest word seen so far. def memo(f): "Memoize function f, whose args must all be hashable." cache = {} def fmemo(*args): if args not in cache: cache[args] = f(*args) return cache[args] fmemo.cache = cache return fmemo max(len(w) for w in COUNTS) 18 def splits(text, start=0, L=20): "Return a list of all (first, rest) pairs; start <= len(first) <= L." return [(text[:i], text[i:]) for i in range(start, min(len(text), L)+1)] print splits('word') print splits('reallylongtext', 1, 4) [('', 'word'), ('w', 'ord'), ('wo', 'rd'), ('wor', 'd'), ('word', '')] [('r', 'eallylongtext'), ('re', 'allylongtext'), ('rea', 'llylongtext'), ('real', 'lylongtext')] @memo def segment(text): "Return a list of words that is the most probable segmentation of text." if not text: return [] else: candidates = ([first] + segment(rest) for (first, rest) in splits(text, 1)) return max(candidates, key=Pwords) segment('choosespain') ['choose', 'spain'] segment('speedofart') ['speed', 'of', 'art'] decl = ('wheninthecourseofhumaneventsitbecomesnecessaryforonepeople' + 'todissolvethepoliticalbandswhichhaveconnectedthemwithanother' + 'andtoassumeamongthepowersoftheearththeseparateandequalstation' + 'towhichthelawsofnatureandofnaturesgodentitlethem') print(segment(decl)) ['] Pwords(segment(decl)) 3.613815636889254e-141 Pwords(segment(decl * 2)) 1.305966345742529e-281 Pwords(segment(decl * 3)) 0.0 That's a problem. We'll come back to it later. segment('smallandinsignificant') ['small', 'and', 'insignificant'] segment('largeandinsignificant') ['large', 'and', 'insignificant'] print(Pwords(['large', 'and', 'insignificant'])) print(Pwords(['large', 'and', 'in', 'significant'])) 4.1121373609e-10 1.06638804821e-11 Summary: Let's move up from millions to billions and billions of words. Once we have that amount of data, we can start to look at two word sequences, without them being too sparse. I happen to have data files available in the format of "word \t count", and bigram data in the form of "word1 word2 \t count". Let's arrange to read them in: def load_counts(filename, sep='\t'): """Return a Counter initialized from key-value pairs, one on each line of filename.""" C = Counter() for line in open(filename): key, count = line.split(sep) C[key] = int(count) return C COUNTS1 = load_counts('count_1w.txt') COUNTS2 = load_counts('count_2w.txt') P1w = pdist(COUNTS1) P2w = pdist(COUNTS2) print len(COUNTS1), sum(COUNTS1.values())/1e9 print len(COUNTS2), sum(COUNTS2.values())/1e9 333333 588.124220187 286358 225.955251755 COUNTS2.most_common(30) [('of the', 2766332391), ('in the', 1628795324), ('to the', 1139248999), ('on the', 800328815), ('for the', 692874802), ('and the', 629726893), ('to be', 505148997), ('is a', 476718990), ('with the', 461331348), ('from the', 428303219), ('by the', 417106045), ('at the', 416201497), ('of a', 387060526), ('in a', 364730082), ('will be', 356175009), ('that the', 333393891), ('do not', 326267941), ('is the', 306482559), ('to a', 279146624), ('is not', 276753375), ('for a', 274112498), ('with a', 271525283), ('as a', 270401798), ('<S> and', 261891475), ('of this', 258707741), ('<S> the', 258483382), ('it is', 245002494), ('can be', 230215143), ('If you', 210252670), ('has been', 196769958)] A less-wrong approximation: $P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_2) \ldots \times \ldots P(w_n \mid w_{n-1})$ This is called the bigram model, and is equivalent to taking a text, cutting it up into slips of paper with two words on them, and having multiple bags, and putting each slip into a bag labelled with the first word on the slip. Then, to generate language, we choose the first word from the original single bag of words, and chose all subsequent words from the bag with the label of the previously-chosen word. Let's start by defining the probability of a single discrete event, given evidence stored in a Counter: Recall that the less-wrong bigram model approximation to English is: $P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_2) \ldots \times \ldots P(w_n \mid w_{n-1})$ where the conditional probability of a word given the previous word is defined as: $P(w_n \mid w_{n-1}) = P(w_{n-1}w_n) / P(w_{n-1}) $ def Pwords2(words, prev='<S>'): "The probability of a sequence of words, using bigram data, given prev word." return product(cPword(w, (prev if (i == 0) else words[i-1]) ) for (i, w) in enumerate(words)) # Change Pwords to use P1w (the bigger dictionary) instead of Pword def Pwords(words): "Probability of words, assuming each word is independent of others." return product(P1w(w) for w in words) def cPword(word, prev): "Conditional probability of word, given previous word." bigram = prev + ' ' + word if P2w(bigram) > 0 and P1w(prev) > 0: return P2w(bigram) / P1w(prev) else: # Average the back-off value and zero. return P1w(word) / 2 print Pwords(tokens('this is a test')) print Pwords2(tokens('this is a test')) print Pwords2(tokens('is test a this')) 1.78739820006e-10 6.41367629438e-08 1.18028600367e-11 To make segment2, we copy segment, and make sure to pass around the previous token, and to evaluate probabilities with Pwords2 instead of Pwords. @memo def segment2(text, prev='<S>'): "Return best segmentation of text; use bigram data." if not text: return [] else: candidates = ([first] + segment2(rest, first) for (first, rest) in splits(text, 1)) return max(candidates, key=lambda words: Pwords2(words, prev)) print segment2('choosespain') print segment2('speedofart') print segment2('smallandinsignificant') print segment2('largeandinsignificant') ['choose', 'spain'] ['speed', 'of', 'art'] ['small', 'and', 'in', 'significant'] ['large', 'and', 'in', 'significant'] adams = ('faroutintheunchartedbackwatersoftheunfashionableendofthewesternspiral' + 'armofthegalaxyliesasmallunregardedyellowsun') print segment(adams) print segment2(adams) ['far', 'out', 'in', 'the', 'uncharted', 'backwaters', 'of', 'the', 'unfashionable', 'end', 'of', 'the', 'western', 'spiral', 'arm', 'of', 'the', 'galaxy', 'lies', 'a', 'small', 'un', 'regarded', 'yellow', 'sun'] ['far', 'out', 'in', 'the', 'uncharted', 'backwaters', 'of', 'the', 'unfashionable', 'end', 'of', 'the', 'western', 'spiral', 'arm', 'of', 'the', 'galaxy', 'lies', 'a', 'small', 'un', 'regarded', 'yellow', 'sun'] P1w('unregarded') 0.0 tolkein = 'adrybaresandyholewithnothinginittositdownonortoeat' print segment(tolkein) print segment2(tolkein) ['a', 'dry', 'bare', 'sandy', 'hole', 'with', 'nothing', 'in', 'it', 'to', 'sitdown', 'on', 'or', 'to', 'eat'] ['a', 'dry', 'bare', 'sandy', 'hole', 'with', 'nothing', 'in', 'it', 'to', 'sit', 'down', 'on', 'or', 'to', 'eat'] Conclusion? Bigram model is a little better, but not much. Hundreds of billions of words still not enough. (Why not trillions?) Could be made more efficient. So far, we've got an intuitive feel for how this all works. But we don't have any solid metrics that quantify the results. Without metrics, we can't say if we are doing well, nor if a change is an improvement. In general, when developing a program that relies on data to help make predictions, it is good practice to divide your data into three sets: For this program, the training data is the word frequency counts, the development set is the examples like "choosespain" that we have been playing with, and now we need a test set. def test_segmenter(segmenter, tests): "Try segmenter on tests; report failures; return fraction correct." return sum([test_one_segment(segmenter, test) for test in tests]), len(tests) def test_one_segment(segmenter, test): words = tokens(test) result = segmenter(cat(words)) correct = (result == words) if not correct: print 'expected', words print 'got ', result return correct proverbs = ("""A little knowledge is a dangerous thing A man who is his own lawyer has a fool for his client All work and no play makes Jack a dull boy Better to remain silent and be thought a fool that to speak and remove all doubt; Do unto others as you would have them do to you Early to bed and early to rise, makes a man healthy, wealthy and wise Fools rush in where angels fear to tread Genius is one percent inspiration, ninety-nine percent perspiration If you lie down with dogs, you will get up with fleas Lightning never strikes twice in the same place Power corrupts; absolute power corrupts absolutely Here today, gone tomorrow See no evil, hear no evil, speak no evil Sticks and stones may break my bones, but words will never hurt me Take care of the pence and the pounds will take care of themselves Take care of the sense and the sounds will take care of themselves The bigger they are, the harder they fall The grass is always greener on the other side of the fence The more things change, the more they stay the same Those who do not learn from history are doomed to repeat it""" .splitlines()) test_segmenter(segment, proverbs) expected ['sticks', 'and', 'stones', 'may', 'break', 'my', 'bones', 'but', 'words', 'will', 'never', 'hurt', 'me'] got ['stick', 'sandstones', 'may', 'break', 'my', 'bones', 'but', 'words', 'will', 'never', 'hurt', 'me'] (19, 20) test_segmenter(segment2, proverbs) (20, 20) This confirms that both segmenters are very good, and that segment2 is slightly better. There is much more that can be done in terms of the variety of tests, and in measuring statistical significance. tests = ['this is a test', 'this is a unusual test', 'this is a nongovernmental test', 'this is a neverbeforeseen test', 'this is a zqbhjhsyefvvjqc test'] for test in tests: print Pwords(tokens(test)), test 1.78739820006e-10 this is a test 3.78675425278e-15 this is a unusual test 1.31179474235e-16 this is a nongovernmental test 0.0 this is a neverbeforeseen test 0.0 this is a zqbhjhsyefvvjqc test The issue here is the finality of a probability of zero. Out of the three 15-letter words, it turns out that "nongovernmental" is in the dictionary, but if it hadn't been, if somehow our corpus of words had missed it, then the probability of that whole phrase would have been zero. It seems that is too strict; there must be some "real" words that are not in our dictionary, so we shouldn't give them probability zero. There is also a question of likelyhood of being a "real" word. It does seem that "neverbeforeseen" is more English-like than "zqbhjhsyefvvjqc", and so perhaps should have a higher probability. We can address this by assigning a non-zero probability to words that are not in the dictionary. This is even more important when it comes to multi-word phrases (such as bigrams), because it is more likely that a legitimate one will appear that has not been observed before. We can think of our model as being overly spiky; it has a spike of probability mass wherever a word or phrase occurs in the corpus. What we would like to do is smooth over those spikes so that we get a model that does not depend on the details of our corpus. The process of "fixing" the model is called smoothing. For example, Laplace was asked what's the probability of the sun rising tomorrow. From data that it has risen $n/n$ times for the last n days, the maximum liklihood estimator is 1. But Laplace wanted to balance the data with the possibility that tomorrow, either it will rise or it won't, so he came up with $(n + 1) / (n + 2)$. What we know is little, and what we are ignorant of is immense. — Pierre Simon Laplace, 1749-1827 def pdist_additive_smoothed(counter, c=1): """The probability of word, given evidence from the counter. Add c to the count for each item, plus the 'unknown' item.""" N = sum(counter.values()) # Amount of evidence Nplus = N + c * (len(counter) + 1) # Evidence plus fake observations return lambda word: (counter[word] + c) / Nplus P1w = pdist_additive_smoothed(COUNTS1) P1w('neverbeforeseen') 1.7003201005861308e-12 But now there's a problem ... we now have previously-unseen words with non-zero probabilities. And maybe 10-12 is about right for words that are observed in text: that is, if I'm reading a new text, the probability that the next word is unknown might be around 10-12. But if I'm manufacturing 20-letter sequences at random, the probability that one will be a word is much, much lower than 10-12. Look what happens: segment('thisisatestofsegmentationofalongsequenceofwords') ['thisisatestofsegment', 'ationofalongsequence', 'of', 'words'] There are two problems: First, we don't have a clear model of the unknown words. We just say "unknown" but we don't distinguish likely unknown from unlikely unknown. For example, is a 8-character unknown more likely than a 20-character unknown? Second, we don't take into account evidence from parts of the unknown. For example, "unglobulate" versus "zxfkogultae". For our next approach, Good - Turing smoothing re-estimates the probability of zero-count words, based on the probability of one-count words (and can also re-estimate for higher-number counts, but that is less interesting). I. J. Good (1916 - 2009) Alan Turing (1812 - 1954) So, how many one-count words are there in COUNTS? (There aren't any in COUNTS1.) And what are the word lengths of them? Let's find out: singletons = (w for w in COUNTS if COUNTS[w] == 1) lengths = map(len, singletons) Counter(lengths).most_common() [(7, 1357), (8, 1356), (9, 1176), (6, 1113), (10, 938), (5, 747), (11, 627), (12, 398), (4, 367), (13, 215), (3, 161), (14, 112), (2, 51), (15, 37), (16, 10), (17, 7)] 1357 / sum(COUNTS.values()) 0.0012278198461651215 hist(lengths, bins=len(set(lengths))); def pdist_good_turing_hack(counter, onecounter, base=1/26., prior=1e-8): """The probability of word, given evidence from the counter. For unknown words, look at the one-counts from onecounter, based on length. This gets ideas from Good-Turing, but doesn't implement all of it. prior is an additional factor to make unknowns less likely. base is how much we attenuate probability for each letter beyond longest.""" N = sum(counter.values()) N2 = sum(onecounter.values()) lengths = map(len, [w for w in onecounter if onecounter[w] == 1]) ones = Counter(lengths) longest = max(ones) return (lambda word: counter[word] / N if (word in counter) else prior * (ones[len(word)] / N2 or ones[longest] / N2 * base ** (len(word)-longest))) # Redefine P1w P1w = pdist_good_turing_hack(COUNTS1, COUNTS) segment.cache.clear() segment('thisisatestofsegmentationofaverylongsequenceofwords') ['this', 'is', 'a', 'test', 'of', 'segmentation', 'of', 'a', 'very', 'long', 'sequence', 'of', 'words'] That was somewhat unsatisfactory. We really had to crank up the prior, specifically because the process of running segment generates so many non-word candidates (and also because there will be fewer unknowns with respect to the billion-word WORDS1 than with respect to the million-word WORDS). It would be better to separate out the prior from the word distribution, so that the same distribution could be used for multiple tasks, not just for this one. Now let's think for a short while about smoothing bigram counts. Specifically, what if we haven't seen a bigram sequence, but we've seen both words individually? For example, to evaluate P("Greenland") in the phrase "turn left at Greenland", we might have three pieces of evidence: P("Greenland") P("Greenland" | "at") P("Greenland" | "left", "at") Presumably, the first would have a relatively large count, and thus large reliability, while the second and third would have decreasing counts and reliability. With interpolation smoothing we combine all three pieces of evidence, with a linear combination: $P(w_3 \mid w_1w_2) = c_1 P(w_3) + c_2 P(w_3 \mid w_2) + c_3 P(w_3 \mid w_1w_2)$ How do we choose $c_1, c_2, c_3$? By experiment: train on training data, maximize $c$ values on development data, then evaluate on test data. However, when we do this, we are saying, with probability $c_1$, that a word can appear anywhere, regardless of previous words. But some words are more free to do that than other words. Consider two words with similar probability: print P1w('francisco') print P1w('individuals') 7.73314623661e-05 7.72494966889e-05 They have similar unigram probabilities but differ in their freedom to be the second word of a bigram: print [bigram for bigram in COUNTS2 if bigram.endswith('francisco')] ['San francisco', 'san francisco'] print [bigram for bigram in COUNTS2 if bigram.endswith('individuals')] ['are individuals', 'other individuals', 'on individuals', 'infected individuals', 'in individuals', 'where individuals', 'or individuals', 'which individuals', 'to individuals', 'both individuals', 'help individuals', 'more individuals', 'interested individuals', 'from individuals', '<S> individuals', 'income individuals', 'these individuals', 'about individuals', 'the individuals', 'among individuals', 'some individuals', 'those individuals', 'by individuals', 'minded individuals', 'These individuals', 'qualified individuals', 'certain individuals', 'different individuals', 'For individuals', 'few individuals', 'and individuals', 'two individuals', 'for individuals', 'between individuals', 'affected individuals', 'healthy individuals', 'private individuals', 'with individuals', 'following individuals', 'as individuals', 'such individuals', 'that individuals', 'all individuals', 'of individuals', 'many individuals'] Intuitively, words that appear in many bigrams before are more likely to appear in a new, previously unseen bigram. In Kneser-Ney smoothing (Reinhard Kneser, Hermann Ney) we multiply the bigram counts by this ratio. But I won't implement that here, because The Count never covered it. Let's tackle one more task: decoding secret codes. We'll start with the simplest of codes, a rotation cipher, sometimes called a shift cipher or a Caesar cipher (because this was state-of-the-art crypotgraphy in 100 BC). First, a method to encode: def rot(msg, n=13): "Encode a message with a rotation (Caesar) cipher." return encode(msg, alphabet[n:]+alphabet[:n]) def encode(msg, key): "Encode a message with a substitution cipher." table = string.maketrans(upperlower(alphabet), upperlower(key)) return msg.translate(table) def upperlower(text): return text.upper() + text.lower() rot('This is a secret message.', 1) 'Uijt jt b tfdsfu nfttbhf.' rot('This is a secret message.') 'Guvf vf n frperg zrffntr.' rot(rot('This is a secret message.')) 'This is a secret message.' Now decoding is easy: try all 26 candidates, and find the one with the maximum Pwords: def decode_rot(secret): "Decode a secret message that has been encoded with a rotation cipher." candidates = [rot(secret, i) for i in range(len(alphabet))] return max(candidates, key=lambda msg: Pwords(tokens(msg))) msg = 'Who knows the answer?' secret = rot(msg, 17) print(secret) print(decode_rot(secret)) nyfbefnjkyvrejnvi ['who', 'knows', 'the', 'answer'] Let's make it a tiny bit harder. When the secret message contains separate words, it is too easy to decode by guessing that the one-letter words are most likely "I" or "a". So what if the encode routine mushed all the letters together: def encode(msg, key): "Encode a message with a substitution cipher; remove non-letters." msg = cat(tokens(msg)) ## Change here table = string.maketrans(upperlower(alphabet), upperlower(key)) return msg.translate(table) Now we can decode by segmenting. We change candidates to be a list of segmentations, and still choose the candidate with the best Pwords: def decode_rot(secret): """Decode a secret message that has been encoded with a rotation cipher, and which has had all the non-letters squeezed out.""" candidates = [segment(rot(secret, i)) for i in range(len(alphabet))] return max(candidates, key=lambda msg: Pwords(msg)) msg = 'Who knows the answer this time? Anyone? Bueller?' secret = rot(msg, 19) print(secret) print(decode_rot(secret)) pahdghplmaxtglpxkmablmbfxtgrhgxunxeexk ['who', 'knows', 'the', 'answer', 'this', 'time', 'anyone', 'bueller'] candidates = [segment(rot(secret, i)) for i in range(len(alphabet))] for c in candidates: print c, Pwords(c) ['pahdghplmaxtglpxkma', 'blmbfxtgrhgxunxeexk'] 8.7783378348e-33 ['qbiehiqmnbyuhmqylnb', 'cmncgyuhsihyvoyffyl'] 8.7783378348e-33 ['rcjfijrnoczvinrzmoc', 'dnodhzvitjizwpzggzm'] 8.7783378348e-33 ['sdkgjksopdawjosan', 'pdeopeiawjukjaxqahh', 'an'] 1.53192669415e-32 ['tel', 'hkltpqebxkptboqef', 'pqfjbxkvlkbyrbiibo'] 1.59574951877e-32 ['ufmilmuqrfcylqucprf', 'gqrgkcylwmlczscjjcp'] 8.7783378348e-33 ['vgnjmnvrsgdzmrvdqsg', 'hrshldzmxnmdatdkkdq'] 8.7783378348e-33 ['who', 'knows', 'the', 'answer', 'this', 'time', 'anyone', 'bueller'] 7.18422540159e-29 ['xiplopxtuifbotxfsui', 'jtujnfbozpofcvfmmfs'] 8.7783378348e-33 ['yjqmpqyuvjgcpuygtvj', 'kuvkogcpaqpgdwgnngt'] 8.7783378348e-33 ['zkrnqrzvwkhdqvzhuwk', 'lvwlphdqbrqhexhoohu'] 8.7783378348e-33 ['also', 'rsawxlierwaivxlmw', 'xmqiercsrifyippiv'] 4.20728492071e-30 ['bmtpstbxymjfsxbjwym', 'nxynrjfsdtsjgzjqqjw'] 8.7783378348e-33 ['cnuqtucyznkgtyckxz', 'no', 'yzoskgteutkhakrrkx'] 9.4554362126e-33 ['do', 'vruvdzaolhuzdlyao', 'pzaptlhufvuliblssly'] 9.5930573844e-33 ['epwsvweabpmivaemzbp', 'qabqumivgwvmjcmttmz'] 8.7783378348e-33 ['fqxtwxfbcqnjwbfnacq', 'rbcrvnjwhxwnkdnuuna'] 8.7783378348e-33 ['gryuxygcdrokxcgobdr', 'scdswokxiyxoleovvob'] 8.7783378348e-33 ['hszvyzhdesplydhpc', 'est', 'detxplyjzypmfpwwpc'] 1.52450959071e-32 ['it', 'awzaieftqmzeiqdft', 'uefuyqmzkazqngqxxqd'] 2.83847421472e-32 ['jubxabjfgurnafjregu', 'vfgvzrnalbarohryyre'] 8.7783378348e-33 ['kvcybckghvsobgksfhv', 'wghwasobmcbspiszzsf'] 8.7783378348e-33 ['lwdzcdlhiwtpchltgiw', 'xhixbtpcndctqjtaatg'] 8.7783378348e-33 ['mxeademijxuqdimuhjx', 'yijycuqdoedurkubbuh'] 8.7783378348e-33 ['nyfbefnjkyvrejnviky', 'zjkzdvrepfevslvccvi'] 8.7783378348e-33 ['ozgcfgoklzwsfkowjlz', 'aklaewsfqgfwtmwddwj'] 8.7783378348e-33 What about a general substitution cipher? The problem is that there are 26! substitution ciphers, and we can't enumerate all of them. We would need to search through this space. Initially make some guess at a substitution, then swap two letters; if that looks better keep going, if not try something else. This approach solves most substitution cipher problems, although it can take a few minutes on a message of length 100 words or so. What to do next? Here are some options:
http://nbviewer.ipython.org/url/norvig.com/ipython/How%20to%20Do%20Things%20with%20Words.ipynb
CC-MAIN-2015-35
refinedweb
5,438
53.31
Running Queries Queries and Results An overview of common concepts that you will need to understand in order to use the Query service. Getting System Information N1QL has a system catalog that stores metadata about a database. The system catalog is a namespace called system. N1QL Auditing N1QL-related activities can be audited by Couchbase Server. Backfill Support for N1QL You can configure the temporary working space for the N1QL engine and its embedded GSI client via the UI. N1QL Error Codes All of the N1QL error codes, their error messages, and some tips to resolve them. Related Links The Query services provides the following tools for running queries:
https://docs.couchbase.com/server/current/n1ql/n1ql-intro/index.html
CC-MAIN-2020-34
refinedweb
109
64.3
0 replies on 1 page. This blog is about static code in Java and enclosing of constructor so it can never be instantiated and using what I call Static classes. I often wanted to create classes that weren't instantiated and I wasn't sure how to do this until I read Effective Java where one of the points it talks about is that. It still can be slightly confusing because you do still have a number of options, like Abstract classes, Singletons. I was thinking about this because sometimes need helper classes with static methods in, a class with useful methods but no variables. Often these classes are functions in one class but then I find I need that functionality in another class. This point I consider whether or not I want that class being dependant on the class that currently holds the method. I also consider does that class need that functionality, what is the role of that class. When deciding things like this I try and consider the rule "A class should only have one reason to change" although putting useful methods in a class is probably also breaking that rule, you have to put them somewhere and I would prefer the dependency to be on the helper class rather than a class which's main job is doing something else I find this helps me decide whether I should extract the function into another class. Sometimes if your classes are in a hierarchy you can pull up the method, enabling many classes below to be able to access the method. If your classes are not in a hierarchy or if other classes not releated to the class and many other classes would like to use it but it's not really enough to make its own class. These are the sort of naming, what package to put something in conundrums I sometimes get it and I sort of get frozen trying to think of a good name and where to put it. What I am talking about is basically a utility or helper class, which I'm sure many of you use. So basically you want to make a helper class with a useful static standalone method which people can call if they need it. What you don't want though is for people to instantiate this class. There are a few options, you could make it a singleton, you could make it abstract or you could make what I call a static class. The reasons why you don't want to make it abstract are because someone could subclass it. This is probably very unlikely because everyone will know it's a helper class because you have probably called it helper or utility. It's important to consider the design in more than just what's possible but as an intent of the use of the class. If I see an abstract class I think it's meant to be extended, someone has designed it that way. You also don't want people having to know you don't want them to instantiate it even thought its abstract but they should instantiate those other abstract classes because I want you to extend those. You could create a singleton, to ensure that only one object is created from the class. Singletons are classes with a private constructor's and often with a getInstance method which has code in to this also sends out the wrong signals (for this example). You create a singleton class ( here is the classic Singleton code from this good article at Java World public class ClassicSingleton { private static ClassicSingleton instance = null; protected ClassicSingleton() { // Exists only to defeat instantiation. } public static ClassicSingleton getInstance() { if(instance == null) { instance = new ClassicSingleton(); } return instance; } } The class below is my so call static class /** * A helper class with useful static utility functions. */ public final class ActionHelper { /** * private constructor to stop people instantiating it. */ private ActionHelper() { ///this is never run } /** * prints hello world and then the users name * @param users name */ public static printHelloWorld(final String name) { System.out.println("Hello World its " + name); } } So what's the difference between the two examples and why do I think the second solution is better for a class you don't want or need to instantiate. Firstly the Singleton pattern is very useful if you want to create one instance of a class. For my helper class we don't really want to instantiate any copy's of the class. The reason why you shouldn't use a Singleton class is because for this helper class we don't use any variables. The singleton class would be useful if it contained a set of variables that we wanted only one set of and the methods used those variables but in our helper class we don't use any variables apart from the ones passed in (which we make final). For this reason I don't believe we want a singleton Instance because we do not want any variables and we don't want anyone instantianting this class. So if you don't want anyone instantiating the class, which is normally if you have some kind of helper/utils class then I use the what I call the static class, a class with a private constructor and only consists of Static methods without any any variables. In some ways its a bit like a web services, you just use its methods and it converts the data you give it. I'm not sure what other people think about this or how they code their helper classes, please leave some comments and let me know. Read: Static classes in Java
http://www.artima.com/forums/flat.jsp?forum=121&thread=159657
CC-MAIN-2017-30
refinedweb
948
63.22
expo-sharingallows you to share files directly with other compatible applications. 🚨Web browser support: expo-sharing for web is built on top of the Web Share API, which still has very limited browser support. Be sure to check that the API can be used before calling it by using Sharing.isAvailableAsync(). 💡HTTPS required on web: The Web Share API is only available on web when the page is served over https. Run your app with expo start --httpsto enable it. ⚠️No local file sharing on web: Sharing local files by URI works on iOS and Android, but not on web. You cannot share local files on web by URI — you will need to upload them somewhere and share that URI. expo install expo-sharing If you're installing this in a bare React Native app, you should also follow these additional installation instructions. import * as Sharing from 'expo-sharing'; Sharing.isAvailableAsync() Sharing.shareAsync(url, options) trueif the sharing API can be used, and falseotherwise. mimeTypefor Intent(Android only)
https://docs.expo.io/versions/v36.0.0/sdk/sharing/
CC-MAIN-2020-40
refinedweb
168
65.32
I have an assignment to make a text-based snakes and ladders game using Dev-C++. I have so far made a splash screen and a menu (with colours and sounds, pretty cool) using the winmm.a library and the switch/case construct. I have even been able to get player names with a simple character array. However, I have no idea how to instruct the system on player positions, dice spinning and player movements. I’ve been told to use random numbers for the dice, but I don’t know anything about that. I need to be able to initialize player_position for at least 2 players, have them roll the dice, move the players to a new location and determine whether there’s a snake or ladder there, then move them to a new location. I’ve been going through Daniweb for the past week looking for tips and I found some, but not enough. I need help, please, this assignment is due in 5days. I don’t want it to be done for me, I just want to be pointed in the right direction so that I can do it myself. My friends have no problems with cheating and I figure the best way to change their minds is by setting an example. Oh yes, is there any way I can make it full-screen? Don’t laugh, but here’s the first thing I tried in order to get the board set up: #include <cstdlib> #include <iostream> using namespace std; int main(int argc, char *argv[]) { int board[101]; int snake; int ladder; snake = board[5],[15],[25],[30],[45],[53],[62],[78],[81],[93],[96]; ladder = board[7],[12],[28],[35],[40],[59],[65],[73],[88] cout >> snake; system("PAUSE"); return EXIT_SUCCESS; } i'm sure you can see from here that i'm trying to set positions for snakes and ladders on the board so that whatever random number pops up from the dice, locations will already by marked. Thanks.
https://www.daniweb.com/programming/software-development/threads/123662/fun-but-hard-work
CC-MAIN-2018-30
refinedweb
334
73.71
Hi, so the exercise I am supposed to do is : "If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000." So, the question is pretty simply, nothing hard to understand. Here is the function I did to get it : #include <iostream> #include <string> using namespace std; int calculate(int max) { int result = 0; for(int i = 1; 5 * i < max ; i++) { result += (i * 5); cout << (i * 5) << " + "; } for( int i = 1 ; 3 * i < max ; i++) { result += (i * 3); cout << (i * 3) << " + "; } cout << 0; return result; } int main() { cout <<" = " << calculate(1000); cin.ignore(); cin.get(); } The function does give me a result but when I put it in to the exercise box, it says it's a wrong answer and would redirect me again to the problem. I don't see why it doesn't work, it works completely fine for the 10 but it won't work for the 1000; And also, the couts in the calculate function are completely secondary, they were just there to show me what is getting added, you can delete them all, it won't matter; And sorry if the solution to this is easy, I am in my first week of C++ so I'm still quite noob-ish; Thanks a lot for the answers in advance guys :) Oh, and if you want the website to this and many other problems, just ask me :), I didn't think it was necessary to resolve this though.
https://www.daniweb.com/programming/software-development/threads/381066/what-is-wrong-with-this-why-isn-t-it-working-algorithm-not-compiler-error
CC-MAIN-2017-09
refinedweb
274
59
Starting Development with SharePoint 2010 - Understanding SharePoint Solutions as Deployment Units - Introducing SharePoint Features - Debugging SharePoint Solutions - Summary - Q&A Hour 1, “Introducing SharePoint 2010,” gave you an overview of SharePoint 2010, and Hour 2, “Understanding the SharePoint 2010 Architecture,” looked at SharePoint’s architecture. If you are the typical developer, by now you are probably anxious to write some code. In this hour you begin by writing code in SharePoint 2010 in the form of a console application. After that you learn about SharePoint features and solutions, and then write your first SharePoint feature. Finally you learn about debugging techniques in SharePoint and how they differ from a normal ASP.NET application. You just wrote your first code and used two important objects of the SharePoint Object model that you will find yourself using almost everywhere. The SPSite represents a SharePoint site collection. The SPWeb represents a site within the site collection. In this case we are accessing the top level site by making a call to the OpenWeb() function of the SPSite. We could have also accessed the top level site by accessing the RootWeb property of the SPSite. However, it was important for you to see the OpenWeb() method and overload that you will be using to access the subsites method as well. Hour 7, “Understanding SharePoint 2010 Server Side Development,” discusses most objects of the SharePoint object model in detail. However, it is helpful to get acquainted with a few of the objects that you will use frequently. You might have observed that SPSite and SPWeb represent the site collection and the site in the SharePoint hierarchy that we looked at in Hour 2. Following are the other objects in the SharePoint hierarchy: - SPFarm—This class represents a SharePoint farm and is a part of the Microsoft.SharePoint.Administration namespace. You can access the local farm instance through the SPFarm.Local object. - SPWebApplication—This class represents a SharePoint web application and is also a part of the Microsoft.SharePoint.Administration namespace. You can access the WebApplication instance of a site through the WebApplication property of the SPSite class instance. - SPContext—This is an important class that represents the context of the request in SharePoint. You can access various important objects of SharePoint like the SPSite and SPWeb instances for the current request through this object. To get access to this object you call SPContext.Current. You will see as you proceed further in this book that SharePoint provides a rich object model, and you learn about many objects in detail. Understanding SharePoint Solutions as Deployment Units Although console applications are a great way to play around with the SharePoint API, you will not be using them to deploy new functionality to SharePoint. The recommended way to deploy functionality is through SharePoint solutions, which are cab files with a .wsp extension, which stands for Windows SharePoint Solution Packages. The most important file in the WSP file is the manifest.xml. This file contains information about the other files in the solution. In SharePoint 2010 you can create two types of solutions: farm solutions and sandboxed solutions. Understanding Farm Solutions A farm solution is a SharePoint solution that is deployed to the Solution Store in Central Administration of your SharePoint farm. People who have worked with SharePoint 2007 will find this familiar. You have now built and deployed your first SharePoint solution. In the preceding output you can see that your HelloWorldVWP feature was activated while deploying. This means that the web part is now available for adding to the pages in the site. In the preceding scenario we deployed the solution directly through Visual Studio. But we could have just as easily packaged the solution by right-clicking on the project and clicking on Package. We could then deploy the solution using the following stsadm commands. Stsadm -o addsolution -filename HelloWorldFarm.wsp Stsadm -o deploysolution -name HelloWorldFarm.wsp -url –allowgacdeployment You can see the contents of the .wsp file generated when you packaged the solution. To view the contents of the .wsp, perform the following steps: - Go to the Bin\Debug folder of your project. Rename HelloWorldFarm.wsp to HelloWorldFarm.wsp.cab. - Open the .cab file and you should see the contents of the solution, as shown in Figure 3.7. - Select the Details view and notice the path column. Figure 3.7. Contents of the WSP file - Open the manifest.xml file. You can see it contains all the information about the contents of the solution, as shown in Figure 3.8. Figure 3.8. Sample manifest.xml - You can also find the SafeControl tag for the web.config file, which is required to allow the web part to run on your site. - Now browse to the <14 Hive>\Template\Features folder and you should see a directory named HelloWorldFarm_HelloWorldVWP. Notice that this maps to the path column in the WSP file. Also go to the <14 Hive>\Template\CONTROLTEMPLATES folder. You can see your HelloWorldFarm folder in there. Notice again that this maps to the path column in your WSP file. Browse to your SharePoint 2010 Central Administration site. Navigate to System Setting and under Farm Management click Manage Farm Solutions. You should see your solution along with other solutions, as shown in Figure 3.9. Figure 3.9. Solution Store in the SharePoint 2010 Central Administration Finally you can see your web part in action. Follow these steps: - Browse to your SharePoint site (in my case it is). - Select the Page tab on the ribbon and click Edit. Two new tabs—FormatText and Insert—appear. - Go to the Insert tab and click web part. The list of web parts appears below the tab. - Select Custom under Categories on the left side. You see your HelloWorldVWP web part, as shown in Figure 3.10. Figure 3.10. Insert web part screen - Click Add and the web part is added at the top, as displayed in Figure 3.11. Figure 3.11. HelloWorldVWP web part added to top zone One of the most important factors that differentiate a farm solution from a sandboxed solution is that the farm solution is always deployed to your Central Administration Solution Store. Farm solutions can run any type of code without any restrictions. This can be at times counterproductive as there is an increased chance that some code can bring your entire farm down. Understanding Sandboxed Solutions Sandboxed solutions are newly introduced in SharePoint 2010. Sandboxed solutions are not deployed to your Central Administration Solution Store like farm solutions. They are deployed to the solutions gallery of your site collection. Also they run with a lot of restrictions compared to farm solutions. In addition sandboxed solutions do not run under the worker process like farm solutions but in a special process known as the Sandboxed Code Service. You can see this service by browsing to SharePoint 2010 Central Administration, System Settings, Manage Services on Server. There you can find the Microsoft SharePoint Foundation Sandboxed Code Service as shown in Figure 3.12. Make sure that this is started. Figure 3.12. The Manage Service On Server screen in SharePoint 2010 Central Administration Though we created and deployed simple web parts without much functionality, the steps to create and deploy web parts will be applicable to most of the other things including more complex web parts and other project types in SharePoint. However you need to know when you should create sandboxed solutions and when you should create farm solutions. Sandboxed solutions are the recommended way to go in SharePoint 2010 as they can be monitored more easily and due to restrictions imposed on them cannot cause issues that rogue code in a farm solution might. Sandboxed solutions are an important topic, and Hour 21 is devoted to them. You should go with a farm solution only if you cannot achieve the functionality through a sandboxed solution due to the restrictions imposed on them. Even then various workarounds are available for overcoming the sandboxed solutions’ restrictions as you see later. Understanding Sandboxed Solutions Restrictions It is important to know the restrictions imposed on sandboxed solutions if you are to be able to work with them. Following are the SharePoint project items that will not work with a sandboxed solution: - Visual web parts - Application pages - Custom action group - HideCustomAction element - Content type binding - Web application-scoped features - Farm-scoped features - Workflows with code Don’t worry if you do not understand all the items listed here. You will come across all these items in subsequent hours. In addition to the previous items, you can access only the following from the SharePoint object model in a sandboxed solution: - In addition to the preceding restrictions the administrator can apply further restrictions to the API. The Microsoft.SharePoint.Administration.SPWebService class provides a collection in the form of the API block list that enables an administrator to specify additional types in the API to block.
http://www.informit.com/articles/article.aspx?p=1848179&amp;seqNum=4
CC-MAIN-2016-44
refinedweb
1,490
54.83
Autoloading and objects. (autoloading objects into namespaces) Autoloading and objects. (autoloading objects into namespaces) Here. - Join Date - Mar 2007 - Location - Gainesville, FL - 37,948 - Vote Rating - 951 ComponentLoader has a load event just like a Store that you could put the response onto a namespace. Or you have a callback function you can pass to ComponentLoader trying to use autoload in mvc pattern also trying to use autoload in mvc pattern I am trying to do something similar. I want to make a navigation widget that provides links with hash style hrefs (/resource/#!/widget). I would like to click on a link and load the corresponding widget (similar to how tabs work). I have created a view class that defines a container and it's items. I've tried using autoCreateViewport, autoRender, and autoShow in my views but the page just loads blank. forum thread here I'm confused as to how the the ComponentLoader can be used in an MVC application like the example here. If you download and run the example code everything seems to load without a loader configuration. Any additional thoughts or discussion on this would be much appreciated.
http://www.sencha.com/forum/showthread.php?153958-Autoloading-and-objects.-(autoloading-objects-into-namespaces)&p=671764
CC-MAIN-2015-14
refinedweb
193
64
Introducing Ext JS 4.2 . Learn the latest about Ext JS and HTML5/JavaScript for three intensive days with 60+ sessions, 3+ parties and more at SenchaCon 2013. Register today! Neptune With Ext JS 4.2, we are excited to welcome “Neptune” to the family as an official, fully supported theme. Building applications that have a modern, contemporary look has always been incredibly important for application developers and with the new Neptune theme, Ext JS now supports four core themes out of the box: Neptune, Classic, Gray and Accessibility. Neptune gives your application a clean, modern and lighter look by minimizing unnecessary visual elements such as borders and increased padding in many places to make the overall interface feel more relaxed and open. Our goal for Neptune is far more than just providing a new and pretty face. We want to enable you to create the best application experience easily and on as many browsers as possible. To support this, we have made some significant advances in how we do theming that will make it easier to customize and share themes. Flexibility The key to creating the best applications is easy customization. Making changes has to be as simple as possible, because when it comes to themes: one size never fits all. To make the Ext JS themes as flexible as possible, we’ve greatly expanded the use of Sass variables. The variables are chained together, so variables calculate their default values from other variables, and whenever possible, changes (such as setting the “$base-color”) propagate as you would expect. Theme Packages Sencha Cmd 3.1 adds support for “packages”: self-contained bundles of code, styling and resources. Neptune and the other Ext JS 4.2 themes are now delivered as theme Packages which enables many exciting possibilities. In general, packages allow you to easily share code between your applications as well as with other developers. Sharing JavaScript classes is something that the Ext JS loader and previous versions of Sencha Cmd handled very well, but now, with packages, Sencha Cmd can connect the world of JavaScript classes to the world of Sass. Internally, Ext JS leverages the Sencha Cmd ability to relate JavaScript and Sass to build its themes. That is, Cmd produces the “all.scss” and ultimately “all.css” that we ship with Ext JS. This build process ensures that the individual SCSS files defining Sass variables and rules are always in the right order based on the JavaScript class hierarchy. This allows us to share Sass logic between the various themes as easily as we do JavaScript classes. Of course, these Sencha Cmd features are not limited to building Ext JS. If you use Cmd to build the “all-classes.js” file containing a concatenation and compression of all your JavaScript, you can extend this to build your application’s Sass. If you do, you gain another exciting first: your CSS file will contain only the CSS needed for the components you are actually using. This also works for views you define, so your application can organize its Sass as a mirror image of its JavaScript — a huge help as your application grows over time. In the same way we improved user experience by not downloading lots of unused JavaScript, removing unused CSS can also be a big help. This is even more true of CSS because unnecessary rules are not so easily ignored by the browser. Some browsers also have limits on the number of rules you can have in your CSS file. Going forward, this will be increasingly important as new components are added and new features (like RTL) are added that span all components. Custom Themes Themes are special types of package that have one important, additional feature: themes can “extend” other themes. This capability is used by Ext JS 4.2 to create its theme hierarchy: The build process for theme packages has an extra step that allows a theme to inherit any of the resources of its base(s) or elect to replace them with a version of its own. Also, for IE compatibility, the image “slicer” is automatically invoked to transform your CSS3 border radius and linear gradient styles into background images. All of this allows you to create new themes by adding only what you want to change (style rules, JavaScript code or static resources like images). There is no need to “copy and paste” anything from your base theme. This ensures that your themes will inherit bug fixes and other enhancements as we maintain the core themes. You can learn a lot more about this process in the Theme Guide found here and about packages in general in the package guides and. RTL The support for Right-to-Left languages (such as Hebrew and Arabic) has been a long-requested feature, so we are delighted to say that RTL is now here. We are equally happy to say that if you don’t need RTL support and do not enable it, there is only a minimal amount of additional code added to the core of the framework. The first step to enabling RTL is to require the “Ext.rtl.*” namespace. This namespace contains many overrides that look like this: Ext.define(‘Ext.rtl.button.Button’, { override: Ext.button.Button’, … This family of overrides takes over key positioning methods on various classes in the framework and adds the necessary RTL checks and logic. The second step once you have the supporting code injected into the framework is to set the “rtl” config on the containers you want — such as the viewport: Ext.define(‘MyApp.views.Viewport’, { extend: Ext.container.Viewport’, requires: [ ‘Ext.rtl.*’ ], rtl: true, … RTL in Sass On the CSS side, RTL support is enabled by setting this Sass variable: $include-rtl: true; This will add the CSS rules for RTL using the “.x-rtl” selector. Mixing LTR and RTL The “rtl” config is inherited down the Component hierarchy. By setting it on the Viewport, you are effectively setting RTL globally. This setting can be enabled at a lower level or changed back by setting “rtl: false” which is then inherited from that level down. Due to CSS limitations in IE6 and IE/Quirks, nesting is not supported. Loading the CSS with RTL support must only be done when RTL is desired globally on these browsers. Locales To streamline this process for your applications, Sencha Cmd supports a package type of “locale”. Ext JS 4.2 now provides its locale support in this form, so your applications can simply require the appropriate locale package. The JavaScript needed will automatically be included and the include-rtl Sass variable set accordingly. Using this approach, you can produce an optimized JavaScript/CSS build for each locale. Performance This article would be incomplete if I did not say a few words about performance in Ext JS 4.2 compared to Ext JS 4.1 and 4.0. While the majority of performance improvement work has been in relation to grid, several other changes were made largely for performance reasons. These changes ranged from removing the CSS reset (its numerous “expensive” rules to reset, scope reset and unreset), to moving logic out of JavaScript to handle “framed” components (such as buttons), to simplifying the button component markup and its corresponding component layout logic. In the previous article, I compared Ext JS 4.0.7 to Ext JS 4.1 using the Themes example. Since then, a good friend in the community submitted an example application mimicking his own that is definitely a better, real-world test than Themes. I have put the probes to that example on the same IE8 / Windows 7 laptop and here are the results. The breakdown of performance by category in Ext JS 4.2 now looks like the chart below. The numbers from previous versions are “scratched out” to show the progression from 4.0.7 to 4.1.1 to current 4.2.0. Performance is never done, and we will continue to look for ways to increase performance. If you want to read more about all the work we did on Grid and the new bufferedrenderer plugin, check out the original blog post here. The Smaller Bits There are lots of little improvements here and there. For more details on these, consult the Upgrade Guide. Grid/Tree There are several new examples that show how you can now combine many features which previously did not work together. Perhaps one of the most interesting is the locking TreeGrid. The bufferedrenderer plugin also works on trees so you can now handle much larger trees (or tree grids) than before. To see a locking, buffered rendered TreeGrid, check out the example. Tabs Tabs can now go vertical. You can see them rotated and docked on the left or right in the new Side Tabs example Glyphs Many folks want to use web fonts to add scalable, cross-browser images to their application. To support this, we have added the “glyph” config which is very similar to “icon” and “iconCls”. You just set the “glyph” config to be the code point and the necessary text will be rendered into the component: { xtype: 'button', glyph: 42 } This is supported for buttons, tabs, panel headers and menu items. You can see this in action in the new Kitchen Sink example. MVC The introduction of Event Domains allows your Controller to respond to events fired by things like Stores or other Controllers. Here’s what the code looks like: this.listen({ controller: { '*': { // any controller foo: 'onFoo' // method names are now supported! } }, store: { '#storeId': { remove: ‘onStoreRemove’ } }, component: { // same as this.control() } }); XTemplate You can now iterate objects more easily in your templates: <tpl foreach="."> <tr> <td>{$}</td> <td>{.}</td> </tr> </tpl> The “{$}” expands as the property name and “{.}” is the property value. Conclusion Creating compelling, modern applications is hard work. Making them look awesome, run fast, and be delightful to use is even harder. With Neptune, RTL, the new grid improvements, enhancements to Cmd and all the various new features in Ext JS 4.2, delivering amazing experiences to your users has never been easier! There are 9 responses. Add yours. moghadam12 months ago hi tnx for RTL moghadam12 months ago in MVC app when add tooltip to any RTL object , preview have a problem in position of tooltip , and tooltip not near of object Les12 months ago Do you support wildcards on the requires inside Ext.define? requires: [ ‘Ext.rtl.*’ ], Felix12 months ago Would also like to know the answer about the wildcards in the requires. Don Griffin Sencha Employee12 months ago @Les / @Felix - Yes And with Sencha Cmd you can produce the metadata needed to use wildcards for your own namespace(s) as well. Don Griffin Sencha Employee12 months ago @moghadam - Best to submit test code for that in the forum so we can take a look at it - Kazuhiro Kotsutsumi12 months ago I translated it into Japanese. Provision: Japan Sencha User Group Jim Partin12 months ago How is ARIA support with 4.2? I’d heard that it was pushed to the 4.2 release. Is this still the case? What do I need to enable/test my apps with ARIA? moghadam11 months ago @Don Griffin - url of post Comments are Gravatar enabled. Your email address will not be shown.Commenting is not available in this channel entry.
https://www.sencha.com/blog/introducing-ext-js-4-2/
CC-MAIN-2014-15
refinedweb
1,896
63.7
How can I make as "perfect" a subclass of dict as possible? The end goal is to have a simple dict in which the keys are lowercase. It would seem that should be some tiny set of primitives I can override to make this work, but all my research and attempts have made it seem like this isn't the case: __getitem__ __setitem__ get set __setstate__ repr update __init__ UserDict DictMixin get() class arbitrary_dict(dict): """A dictionary that applies an arbitrary key-altering function before accessing the keys.""" def __keytransform__(self, key): return key # Overridden methods. List from # def __init__(self, *args, **kwargs): self.update(*args, **kwargs) # Note: I'm using dict directly, since super(dict, self) doesn't work. # I'm not sure why, perhaps dict is not a new-style class. def __getitem__(self, key): return dict.__getitem__(self, self.__keytransform__(key)) def __setitem__(self, key, value): return dict.__setitem__(self, self.__keytransform__(key), value) def __delitem__(self, key): return dict.__delitem__(self, self.__keytransform__(key)) def __contains__(self, key): return dict.__contains__(self, self.__keytransform__(key)) class lcdict(arbitrary_dict): def __keytransform__(self, key): return str(key).lower() You can write an object that behaves like a dict quite easily with ABCs (Abstract Base Classes) from the collections module. It even tells you if you missed a method, so below is the minimal version that shuts the ABC up. import collections You get a few free methods from the ABC: class MyTransformedDict(TransformedDict): def __keytransform__(self, key): return key.lower() s = MyTransformedDict([('Test', 'test')]) assert s.get('TEST') is s['test'] # free get assert 'TeSt' in s # free __contains__ # free setdefault, __eq__, and so on import pickle assert pickle.loads(pickle.dumps(s)) == s # works too since we just use a normal dict I wouldn't subclass dict (or other builtins) directly. It often makes no sense, because what you actually want to do is implement the interface of a dict. And that is exactly what ABCs are for.
https://codedump.io/share/evO3jwsz3TUM/1/python-how-to-quotperfectlyquot-override-a-dict
CC-MAIN-2017-26
refinedweb
331
57.06
Similar to variables of built-in types, you can also pass structure variables to a function. Passing structs to functions We recommended you to learn these tutorials before you learn how to pass structs to functions. Here's how you can pass structures to a function #include <stdio.h> struct student { char name[50]; int age; }; // function prototype void display(struct student s); int main() { struct student s1; printf("Enter name: "); // read string input from the user until \n is entered // \n is discarded scanf("%[^\n]%*c", s1.name); printf("Enter age: "); scanf("%d", &s1.age); display(s1); // passing struct as an argument return 0; } void display(struct student s) { printf("\nDisplaying information\n"); printf("Name: %s", s.name); printf("\nAge: %d", s.age); } Output Enter name: Bond Enter age: 13 Displaying information Name: Bond Age: 13 Here, a struct variable s1 of type struct student is created. The variable is passed to the display() function using display(s1); statement. Return struct from a function Here's how you can return structure from a function: #include <stdio.h> struct student { char name[50]; int age; }; // function prototype struct student getInformation(); int main() { struct student s; s = getInformation(); printf("\nDisplaying information\n"); printf("Name: %s", s.name); printf("\nRoll: %d", s.age); return 0; } struct student getInformation() { struct student s1; printf("Enter name: "); scanf ("%[^\n]%*c", s1.name); printf("Enter age: "); scanf("%d", &s1.age); return s1; } Here, the getInformation() function is called using s = getInformation(); statement. The function returns a structure of type struct student. The returned structure is displayed from the main() function. Notice that, the return type of getInformation() is also struct student. Passing struct by reference You can also pass structs by reference (in a similar way like you pass variables of built-in type by reference). We suggest you to read pass by reference tutorial before you proceed. During pass by reference, the memory addresses of struct variables are passed to the function. #include <stdio.h> typedef struct Complex { float real; float imag; } complex; void addNumbers(complex c1, complex c2, complex *result); int main() { complex c1, c2, result; printf("For first number,\n"); printf("Enter real part: "); scanf("%f", &c1.real); printf("Enter imaginary part: "); scanf("%f", &c1.imag); printf("For second number, \n"); printf("Enter real part: "); scanf("%f", &c2.real); printf("Enter imaginary part: "); scanf("%f", &c2.imag); addNumbers(c1, c2, &result); printf("\nresult.real = %.1f\n", result.real); printf("result.imag = %.1f", result.imag); return 0; } void addNumbers(complex c1, complex c2, complex *result) { result->real = c1.real + c2.real; result->imag = c1.imag + c2.imag; } Output For first number, Enter real part: 1.1 Enter imaginary part: -2.4 For second number, Enter real part: 3.4 Enter imaginary part: -3.2 result.real = 4.5 result.imag = -5.6 In the above program, three structure variables c1, c2 and the address of result is passed to the addNumbers() function. Here, result is passed by reference. When the result variable inside the addNumbers() is altered, the result variable inside the main() function is also altered accordingly.
https://cdn.programiz.com/c-programming/c-structure-function
CC-MAIN-2020-24
refinedweb
514
58.79
Control.Distributed.Process.Management Description - Management Extensions API This module presents an API for creating Management Agents: special processes that are capable of receiving and responding to a node's internal system events. These system events are delivered by the management event bus: An internal subsystem maintained for each running node, to which all agents are automatically subscribed. Agents are defined in terms of event sinks, taking a particular Serializable type and evaluating to an action in the MxAgent monad in response. Each MxSink evaluates to an MxAction that specifies whether the agent should continue processing it's inputs or stop. If the type of a message cannot be matched to any of the agent's sinks, it will be discarded. A sink can also deliberately skip processing a message, deferring to the remaining handlers. This is the only way that more than one event sink can handle the same data type, since otherwise the first type match will win every time a message arrives. See mxSkip for details. Various events are published to the management event bus automatically, the full list of which can be found in the definition of the MxEvent data type. Additionally, clients of the Management API can publish arbitrary Serializable data to the event bus using mxNotify. All running agents receive all events (from the primary event bus to which they're subscribed). Agent processes are automatically registered on the local node, and can receive messages via their mailbox just like ordinary processes. Unlike ordinary Process code however, it is unnecessary (though possible) for agents to use the base expect and receiveX primitives to do this, since the management infrastructure will continuously read from both the primary event bus and the process' own mailbox. Messages are transparently passed to the agent's event sinks from both sources, so an agent need only concern itself with how to respond to its inputs. Some agents may wish to prioritise messages from their mailbox over traffic on the management event bus, or vice versa. The mxReceive and mxReceiveChan API calls do this for the mailbox and event bus, respectively. The prioritisation these APIs offer is simply that the chosen data stream will be checked first. No blocking will occur if the chosen (prioritised) source is devoid of input messages, instead the agent handling code will revert to switching between the alternatives in round-robin as usual. If messages exist in one or more channels, they will be consumed as soon as they're available, priority is effectively a hint about which channel to consume from, should messages be available in both. Prioritisation then, is a hint about the preference of data source from which the next input should be chosen. No guarantee can be made that the chosen source will in fact be selected at runtime. - Management API Semantics The management API provides no guarantees whatsoever, viz: - The ordering of messages delivered to the event bus. - The order in which agents will be executed. - Whether messages will be taken from the mailbox first, or the event bus. - Management Data API Both management agents and clients of the API have access to a variety of data storage capabilities, to facilitate publishing and consuming useful system information. Agents maintain their own internal state privately (via a state transformer - see mxGetLocal et al), however it is possible for agents to share additional data with each other (and the outside world) using whatever mechanism the user wishes, e.g., acidstate, or shared memory primitives. - Defining Agents New agents are defined with mxAgent and require a unique MxAgentId, an initial state - MxAgent runs in a state transformer - and a list of the agent's event sinks. Each MxSink is defined in terms of a specific Serializable type, via the mxSink function, binding the event handler expression to inputs of only that type. Apart from modifying its own local state, an agent can execute arbitrary Process a code via lifting (see liftMX) and even publish its own messages back to the primary event bus (see mxBroadcast). Since messages are delivered to agents from both the management event bus and the agent processes mailbox, agents (i.e., event sinks) will generally have no idea as to their origin. An agent can, however, choose to prioritise the choice of input (source) each time one of its event sinks runs. The standard way for an event sink to indicate that the agent is ready for its next input is to evaluate mxReady. When this happens, the management infrastructure will obtain data from the event bus and process' mailbox in a round robbin fashion, i.e., one after the other, changing each time. - Example Code What follows is a grossly over-simplified example of a management agent that provides a basic name monitoring facility. Whenever a process name is registered or unregistered, clients are informed of the fact. -- simple notification data type data Registration = Reg { added :: Bool , procId :: ProcessId , name :: String } -- start a /name monitoring agent/ nameMonitorAgent = do mxAgent (MxAgentId "name-monitor") Set.empty [ (mxSink $ \(pid :: ProcessId) -> do mxUpdateState $ Set.insert pid mxReady) , (mxSink $ let act = case ev of (MxRegistered p n) -> notify True n p (MxUnRegistered p n) -> notify False n p _ -> return () act >> mxReady) ] where notify a n p = do Foldable.mapM_ (liftMX . deliver (Reg a n p)) =<< mxGetLocal The client interface (for sending their pid) can take one of two forms: monitorNames = getSelfPid >>= nsend "name-monitor" monitorNames2 = getSelfPid >>= mxNotify For some real-world examples, see the distributed-process-platform package. - Performance, Stablity and Scalability Management Agents offer numerous advantages over regular processes: broadcast communication with them can have a lower latency, they offer simplified messgage (i.e., input type) handling and they have access to internal system information that would be otherwise unobtainable. Do not be tempted to implement everything (e.g., the kitchen sink) using the management API though. There are overheads associated with management agents which is why they're presented as tools for consuming low level system information, instead of as application level development tools. Agents that rely heavily on a busy mailbox can cause the management event bus to backlog un-GC'ed data, leading to increased heap space. Producers that do not take care to avoid passing unevaluated thunks to the API can crash all the agents in the system. Agents are not monitored or managed in any way, and those that crash will not be restarted. The management event bus can receive a great deal of traffic. Every time a message is sent and/or received, an event is passed to the agent controller and broadcast to all agents (plus the trace controller, if tracing is enabled for the node). This is already a significant overhead - though profiling and benchmarks have demonstrated that it does not adversely affect performance if few agents are installed. Agents will typically use more cycles than plain processes, since they perform additional work: selecting input data from both the event bus and their own mailboxes, plus searching through the set of event sinks (for each agent) to determine the right handler for the event. - Architecture Overview The architecture of the management event bus is internal and subject to change without prior notice. The description that follows is provided for informational purposes only. When a node initially starts, two special, internal system processes are started to support the management infrastructure. The first, known as the trace controller, is responsible for consuming MxEvents and forwarding them to the configured tracer - see Control.Distributed.Process.Debug for further details. The second is the management agent controller, and is the primary worker process underpinning the management infrastructure. All published management events are routed to this process, which places them onto a system wide event bus and additionally passes them directly to the trace controller. There are several reasons for segregating the tracing and management control planes in this fashion. Tracing can be enabled or disabled by clients, whilst the management event bus cannot, since in addition to providing runtime instrumentation, its intended use-cases include node monitoring, peer discovery (via topology providing backends) and other essential system services that require knowledge of otherwise hidden system internals. Tracing is also subject to trace flags that limit the specific MxEvents delivered to trace clients - an overhead/complexity not shared by management agents. Finally, tracing and management agents are implemented using completely different signalling techniques - more on this later - which would introduce considerable complexity if the shared the same event loop. The management control plane is driven by a shared broadcast channel, which is written to by the agent controller and subscribed to by all agent processes. Agents are spawned as regular processes, whose primary implementation (i.e., server loop) is responsible for consuming messages from both the broadcast channel and their own mailbox. Once consumed, messages are applied to the agent's event sinks until one matches the input, at which point it is applied and the loop continues. The implementation chooses from the event bus and the mailbox in a round-robin fashion, until a message is received. This polling activity would lead to management agents consuming considerable system resources if left unchecked, therefore the implementation will poll for a limitted number of retries, after which it will perform a blocking read on the event bus. Synopsis - - mxNotify :: Serializable a => a -> Process () - data MxAction - newtype MxAgentId = MxAgentId { - data MxAgent s a - mxAgent :: MxAgentId -> s -> [MxSink s] -> Process ProcessId - mxAgentWithFinalize :: MxAgentId -> s -> [MxSink s] -> MxAgent s () -> Process ProcessId - type MxSink s = Message -> MxAgent s (Maybe MxAction) - mxSink :: forall s m. Serializable m => (m -> MxAgent s MxAction) -> MxSink s - mxGetId :: MxAgent s MxAgentId - mxDeactivate :: forall s. String -> MxAgent s MxAction - mxReady :: forall s. MxAgent s MxAction - mxSkip :: forall s. MxAgent s MxAction - mxReceive :: forall s. MxAgent s MxAction - mxReceiveChan :: forall s. MxAgent s MxAction - mxBroadcast :: Serializable m => m -> MxAgent s () - mxSetLocal :: s -> MxAgent s () - mxGetLocal :: MxAgent s s - mxUpdateLocal :: (s -> s) -> MxAgent s () - liftMX :: Process a -> MxAgent s a Documentation This is the default management event, fired for various internal events around the NT connection and Process lifecycle. All published events that conform to this type, are eligible for tracing - i.e., they will be delivered to the trace controller. Constructors Instances Firing Arbitrary Mx Events mxNotify :: Serializable a => a -> Process () Source # Publishes an arbitrary Serializable message to the management event bus. Note that no attempt is made to force the argument, therefore it is very important that you do not pass unevaluated thunks that might crash the receiving process via this API, since all registered agents will gain access to the data structure once it is broadcast by the agent controller. Constructing Mx Agents Represents the actions a management agent can take when evaluating an event sink. newtype MxAgentId Source # A newtype wrapper for an agent id (which is a string). Constructors Instances mxAgentWithFinalize :: MxAgentId -> s -> [MxSink s] -> MxAgent s () -> Process ProcessId Source # Activates a new agent. This variant takes a finalizer expression, that is run once the agent shuts down (even in case of failure/exceptions). The finalizer expression runs in the mx monad - MxAgent s () - such that the agent's internal state remains accessible to the shutdown/cleanup code. type MxSink s = Message -> MxAgent s (Maybe MxAction) Source # Type of a management agent's event sink. mxSink :: forall s m. Serializable m => (m -> MxAgent s MxAction) -> MxSink s Source # mxReady :: forall s. MxAgent s MxAction Source # Continue executing (i.e., receiving and processing messages). mxSkip :: forall s. MxAgent s MxAction Source # Causes the currently executing event sink to be skipped. The remaining declared event sinks will be evaluated to find a matching handler. Can be used to allow multiple event sinks to process data of the same type. mxReceive :: forall s. MxAgent s MxAction Source # Continue exeucting, prioritising inputs from the process' own mailbox ahead of data from the management event bus. mxReceiveChan :: forall s. MxAgent s MxAction Source # Continue exeucting, prioritising inputs from the management event bus over the process' own mailbox. mxBroadcast :: Serializable m => m -> MxAgent s () Source # mxSetLocal :: s -> MxAgent s () Source # Set the agent's local state. mxGetLocal :: MxAgent s s Source # Fetch the agent's local state. mxUpdateLocal :: (s -> s) -> MxAgent s () Source # Update the agent's local state.
https://hackage.haskell.org/package/distributed-process-0.7.4/docs/Control-Distributed-Process-Management.html
CC-MAIN-2020-24
refinedweb
2,037
51.89
iParticleEmitter Struct Reference [Mesh plugins] A particle emitter. More... #include <imesh/particles.h> Detailed Description A particle emitter. The particle emitters are responsible for adding new particles and setting up their initial state. Definition at line 255 of file particles.h. Member Function Documentation Clone this emitter. Spawn some new particles. The number of particles to be emitted has be defined through the last call to ParticlesToEmit(). - Parameters: - Get the duration (in seconds) for this emitter. Get the emission rate, in particles per second. Get whether or not this emitter is enabled. Get the initial mass of the new particles. Get the initial time-to-live span of the particles emitted. Get the start time (in seconds). Get the number of particles this emitter wants to emit. - Parameters: - Set the duration (in seconds) for this emitter. By default emitters will emit particles infinitely, but by setting this you can make them stop a given number of seconds after they initiated emission. A negative duration is the same as infinite duration. Set the emission rate, in particles per second. Set whether or not this emitter is enabled. The emitter will emit particles only if it is enabled. Set the initial mass of the new particles. The emitter will assign a mass in the range specified. Set the initial time-to-live span of the particles emitted. The emitter will assign a time-to-live in the range specified. Set the start time (in seconds) for this emitter. By default emitters will start emitting particles as soon as the particle system is activated (comes into view), but with this setting this can be delayed. The documentation for this struct was generated from the following file: - imesh/particles.h Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api-2.0/structiParticleEmitter.html
CC-MAIN-2014-10
refinedweb
297
61.53
Details - Type: Bug - Status: Closed - Priority: Major - Resolution: Fixed - Affects Version/s: 2.1.6 - - Component/s: Core Actions - Labels:None - Environment: Struts 2.1.6 - Flags:Patch Description So, here is a bit of a background on the why of that bug. I' m working for the canadian governement, and there is a strong need to internationalize applications to be english and french. That goes as far as bilinguifying the URLs. So the url shouldn't be inbox.action but inbox-reception.action if the locale is English or reception-inbox.action if the locale is French. So, of course, it's a nightmare, but hey, that's what make the job interesting Anyway, my approach to enable those weird urls is to use a custom ActionMapper. Our actions are actually named after their english names, and the action mapper is adding the French part correctly, and stripping it out when trying to find the right ActionMapping. That's working fine (after some tweaking of the Locale, but nothing really important). So where my issue kicks in is that for bilinguilism we have a link on each page using a <s:url /> without any actions, so that it redirects to the current page (with of course a different request_locale). Now, the thing that happens when we click the link is that the locale changes, and the url of the page changes. My understanding was that in this case, s:url would use the current namespace and action to generate the actual url. After reading through the code, that's the behaviour in ServletUrlRenderer#renderFormUrl, but ServletUrlRenderer#renderUrl is just recycling the current URL hoping it will still work. Actually that's not working in my case, as the URL changes after the Locale switched. So I'd like for #renderUrl to use the current action / namespace, instead of going for a wild guess based on the current URL. In most cases, that doesn't make a difference, as the same page usually have the same URL. However, it's not the same URL generated in my case, and I think the behaviour of ServletUrlRenderer is quite unexpected. I'll try to provide a patch ASAP.
https://issues.apache.org/jira/browse/WW-3090
CC-MAIN-2018-09
refinedweb
366
63.59
Hi , Anyone Please post some tricky questions on C or Links for C quiz.. It will be helpful 4 me.. Thanks in Advance.. Hi , Anyone Please post some tricky questions on C or Links for C quiz.. It will be helpful 4 me.. Thanks in Advance.. But the first one is definitely not good: These definitely do not have the right answer: These are a bit dodgy (may or may not be correct answer, depending on circumstances): These are compiler-dependant (compiler appears to be 16-bit in this case): That's after about half of the 52 questions. -- Mats thanks 4 ur post... Geekinterview was helpful.. Those questions are terrible, NOT worth looking at. This website has a nice quiz, why not use that? Thanks 4 sharing .... If u know , u just post me some other links too Jeez, Geekinterview is one sorry-assed site. Every program needs to begin with #include <ancient_1980s_assumptions.h> Hi everyone, I set up a small C quiz some time ago, see. The questions/answers are fully ANSI/ISO C9x compliant, but I like to restrain from C99-specific stuff. I'd be glad to hear your comments, or better yet, see you add some questions. Greets, Philip It would be nice to get a summary of the result on the final page, don't you think? Some of the questions are quite obscure (e.g. dealing with undefined behaviour that isn't entirely obvious to most people). I think there could be some more basic ones. Edit: And for fun, it would be nice to see what the percentage answers for each of the incorrect options where once the answer is given (e.g. when the right answer is given 30.9% of the time, what percentage cover was of the other 2 answers). -- Mats Also, quiz question 0x06a should probably rename the third answer to "undefined", since the current answer is sort of implying that the reader don't know the answer... -- Mats
http://cboard.cprogramming.com/c-programming/111601-quiz-c-printable-thread.html
CC-MAIN-2014-41
refinedweb
331
75.61
How many neurons can I fully connect from one to another using Nengo Loihi? At least for the v0.4.0 emulator, the answer seems to depend on whether I partition the ensemble into a bunch of sub-ensembles ($d$ ensembles, each containing $n$ neurons), even though the total number of neurons ($nd$) and total number of connections ($n^2d^2$), remains the same (every sub-ensemble is fully-connected to every sub-ensemble, including itself). In other words, it seems to depend on how the same number of virtual resources (neurons and connections) are being physically mapped. n d nd ? 0 512 1 512 False 1 256 2 512 False 2 170 3 510 True 3 128 4 512 True 4 102 5 510 True 5 85 6 510 True 6 73 7 511 True For example, in the above table, 4 ensembles of 128 neurons are okay, while 1 ensemble of 512 neurons are not. In both cases, there are 512 neurons and 512**2 connections. Is there an equation that describes this in general? Is there a way to have nengo_loihi perform the optimal partitioning for a given ensemble or network configuration, or some helper functions for satisfying these constraints? import warnings warnings.filterwarnings("ignore") from collections import defaultdict import numpy as np from pandas import DataFrame import nengo from nengo_loihi import Simulator from nengo_loihi.builder import BuildError def attempt(n, d): with nengo.Network(seed=0) as model: ensembles = [nengo.Ensemble(n, 1) for _ in range(d)] for ens1 in ensembles: for ens2 in ensembles: nengo.Connection(ens1, ens2, solver=nengo.solvers.LstsqL2(weights=True)) try: with Simulator(model, progress_bar=None) as sim: pass except BuildError: return False else: return True data = defaultdict(list) nd = 512 for d in range(1, 8): n = nd // d data['n'].append(n) data['d'].append(d) data['nd'].append(n*d) data['?'].append(attempt(n, d)) print(DataFrame(data))
https://forum.nengo.ai/t/how-many-neurons-can-be-fully-connected/706
CC-MAIN-2018-51
refinedweb
321
55.03
Function Templates Function templates are used to reduce the repeated code. I want to make you clear about function templates by giving a small example here. Functions abs() and fabs() are used to find the absolute value or non-negative value of a integer and floating point variable respectively. To calculate the absolute value of integer and floating point variable, we need to call two different functions. And they are defined differently with almost same code except for data types. In case of function call, if we want absolute value of –2 then we need to call abs() function and to calculate the value of –6.6 we need to call fabs() function and function definition for both of them must be different. This shows that there is redundancy in coding for both function call and function definition. One important question arises here and that is “Can I use only one function call and only one function definition to calculate absolute value of both integer and floating point??”. The simple answer for this question is YES you can by the use of function template. Note that: only one function call can be made to call both the function by using function overloading but the problem of different function definition is still unsolved. Let us explore this idea by considering another example with source code: #include <iostream>using namespace std;int find_max(int a, int b){int result;if(a > b)result = a;elseresult = b;}float find_max(float a, float b){float;} In above example although same function call find_max() is used to call both integer and floating point, two different function definitions must be written in order call functions. So by the use of function template you can avoid this problem. #include <iostream>using namespace std;template <class T>T find_max(T a, T b){T;}
http://www.programming-techniques.com/2011/11/c-tutorial-function-templates.html
CC-MAIN-2016-50
refinedweb
306
50.57
.NET and Other Interesting StuffTheFlashFlashFlashYou.ToolsY. Good article. I know that you were responding to the "Flash is dead" article, but how about you take a 180 and try to put forth Flash's strong points? Is adoption really the only thing in favor or Flash? Is there something you can do in Flash and not in Silverlight? I'm not on either side. I've been reading a lot about SL and FL lately and most of the stuff I read was FUD (from both sides, but mainly Flash). There’s a lot of misinformation out there. Since you have deep knowledge about both I was wondering if you could compare the two from the other angle. Thanks. Silverlight has a serious problem.. it doesn't run on Linux and in "old" Windows versions like windows 2000(my case).If Microsoft solves this problem...then it might kill flash. Rodrigo, I'm not too worried about Linux; the Mono guys said they would love the challenge. I hope Microsoft gives them the help they need. Linux may be only X% of the market, but the fact that you have a competitor that runs on that OS and your solution would not makes that X% seem like (X*10)%. This may not be rational, but who said people are? As far as Windows 2000 I would really like to know the technical reasons behind this. I'd imagine supporting OSX would be a lot harder than Windows 2000. How can I make cool animations without Onion Skinning and other tools if something isn't frame based? I agree that ActionScript is a mess as a programming model. But I don't see so far how SilverLight can challenge the full functionality of Flash without having some kind of Frame based animation available, and the IDE that goes with that. I haven't made up my mind on this whole issue, but I think considering Flex and Apollo in a more objetive manner is essential to a discussion of these issues. Java isn't going anywhere, and I think it'll work perfectly as a framework for Flex. Apollo sticks Flex (or Flash, or Ajax) applications right on the desktop, without having to rewrite anything. I believe a more accurate way of looking at this issue is that this is the first time, at least recently, but maybe in the history of the web, that two companies have been able to innovate back and forth on the same issue to the extent and scale that Adobe and Microsoft are doing right now with RIAs. Browser wars are a joke compared to this. Yahoo and Google occasionally trade shots on an issue, like Maps, or Mail, or whatever, but I don't think the competition is as intense, or as beneficial to developers and users as this Flash/Flex/Apollo vs Silverlight has the potential to be. I think this is the good thing about Adobe's purchase of Macromedia. Microsoft didn't seem to be entirely concerned about the swf format until Adobe bought it and decided what kind of things they would do with it. And Macromedia didn't have the size and scale that Adobe has in order to get the attention of Microsoft, or Google, or anyone else, and create this kind of competition. Functionality wise, there isn't much that Flash can do that Silverlight can't. I don't think Silverlight has support for alpha channels on video or low level, socket based communication at this time. Those eliminate a few specific usage scenarios (I know some people interested in gaming are really pushing for socket communication). The big difference IMO is that Silverlight targets application developers from the ground up, where Flash has a legacy of supporting animation. Flash is very much frame and movie clip oriented, which can be a pain when you are trying to do anything other than create a little animation sequence. Defining an application as a series of movie clips and frames is just lame. Flex and Apollo are interesting. Apollo is a whole different ballgame though... it's like Central 2.0, and Central 1.0 was a complete failure, so I'm not expecting much from 2.0. It also doesn't deal with the creation of content, which is the side of the equation I am interested.. I'd still say you're jumping the gun. Also, I'd say you're making the wrong comparison, in terms of Application Development, Flex is the Adobe product to compare, not Flash. Quentin, obviously Flash isn't going away any time soon (as much as I'd like it to). I'm just pointing out why, if I had it my way, Silverlight would win in the end. I could be completely wrong, but there I think there are really good reasons to chose Silverlight over Flash (especially for the YouTube's of tomorrow who need things like lower cost video solutions). See my comments above about Flex: ." What ever happens I am really interested in knowing if silverlight will take the center stage in a year time. But if the microsoft guys do not support windows 2000 atleast, It cant really take the market. I think doing compairison is not the right way in this matter. There may be things that Apollo can do better an other things that Silverlight or WPF can do better. The problem is, that by dealing with phrases like "Flash is dead", you're doing nothing than supporting the people out there who think Microsoft wants nothing than to dominate the whole world. I think the better way is to choose one of the tools and be happy that there will be a choice in future. Flash is an graphics/animation tool that developed a programming model. Not the best approach, and it's apparent if you've ever programmed a substantial Flash application. Flex (which I do professionally) is certainly an improvement, but the language and tools are still lacking. As a .NET developer for 6 years prior to Flex, I'm acutely aware of the gap in languages and tools between Flex and .NET. Silverlight is approaching the problem of RIA development from the ground up. They have stronger, more powerful tools and languages with which millions of developers are familiar with - a solid foundation. This is a better long-term strategy and one that will make significant gains. I know that Silverlight can do animation. I have seen a few examples of the programmatic animation. But I haven't found any example of the more "traditional animation" such as those done in the ecards of or. So my question is, can Silverlight do that kind of animation? If so, is there any example out there? Hi, thanks for your comparison. It's a true developer story. When you see timeline, you don't see nothing, right? Well, I can tell you that for designers it's like: when they see code, they see nothing. They LOVE timeline and when they see timeline, they see interactions, dinamics, and everything else what you see when you look at code. That is the main reason, why designers love flash, and other designer tools (photoshop, etc) and they also prefer macs. I think that in future rich applications design the most important fact will be environment, which will allow developers and designers to collaborate. Even if SL has it's advantages over flash it is far away from replacing (or killing) it. Just my 2 cents :) Milan Hi, just to correct one thing, building a Flex 2 application does not require using a Java application server. You can build a Flex application with the free Flex SDK or you can choose to build with the Eclipse-based Flex Builder IDE. There are plenty of .NET developers trying and liking Flex. We know there are plenty of improvements we can make in the tooling, but fundamentally the move can be made without much difficulty. Matt Flex PM But you do need a Java app server to compile the flex in the first place right? So, you'd still have to take the java dependency if you were creating an application that output SWF via Flex, which is the main problem with that approach (outside of the general file format limitations). well, wrong again, but never mind :) I think the one thing that a lot of people are missing in the equation is that developers can't design, no more than designers can develop, right now, Adobe has the best environment to allow them to work together best. Microsoft has more than just silverlight to develop in order to create that environment. I know they have Blend, but it's not, in my opinion, as good as Flash/Flex for the designer side of things. You don't need any java application server for Flex/SWF applications. Compile in your own machine, maybe with eclipse, and deploy the swf wherever you want. Pingback from Brooks Andrus » Blog Archive » Flash 9 File Format Still MIA - Flash Should Be More Open Flash is dead;) weblogs.asp.net/.../silverlight-vs-flash-the-developer-story.aspx - Interesting Yep, Flex 2 doesn't have any dependencies on a Java app server. Just use Flex Builder or the free SDK and you're good :) Issues with fonts still bug me though: @font-face { src:url( "Assets/TradeGothicLTStd-Light.ttf" ); fontFamily: TradeGothicLTStd-Light; } All that to embed a font is just nasty ;) Likewise embedding an image is a bit creepy (I generally like attibute-based approaches, but not this one): [Embed(source="Assets/MyLogo.svg")] [Bindable] public var Logo:Class; The data binding is VERY sweet though, easier and sometimes nicer than WPF... It'll be interesting to see which route Silverlight 1.1 takes with databinding. I commented to Adnan’s recent post on Cheaper Solutions to Flash Lite post; saying that Flash is not Nice wrap-up and perspective from Jesse Ezell on the Flash vs. Silverlight theme. I'm relatively new SL have nice developper attraction (good thing i'm developper) but we are far behind the graphic quality that we could do in flash. We are just starting to make button and control and animation on the control. btw, i would like the Xaml of this flash site: There seems to be some sort of misconception amongst certain persons here. I just want to make clear that Silverlight does not mean "not using timeline". For all you graphics artists out there: The Blend tool from Microsoft has Timeline animation possibilities. Good point. The difference is that with Silverlight, the timeline is actually a timeline. With Flash, it is really a frameline. Pingback from SilverLight vs Flash? | Mysite's Advisor Blogging Spot How about we all discuss if an alpha/beta product beats 10 years of engineering, and market penetration? I'm glad everyone has ideas of where silverlight needs to catch up. Frankly it just makes Microsoft’s job easier. The question I have is if Microsoft adds every little thing that you think Flash can do better (which they are already doing. .i.e. sockets etc. ) then who will win? It’s not about the format if the devs listen...it’s really about adoption both by devs and users, ease of use, and market dominance. Ever hear of Netscape, word perfect, and Apple. No matter how cool and powerful they became in the end they were no more than a bump in the timescales. Microsoft does one thing better than the rest..and that’s re-engineer a product and integrate it so well you in the end you really don't care to buy or use anything else. Microsoft is listening....so keep up the great feature requests...I'm sure in 2 years of Flash's 10 years we'll all miss the good ole WordPerfect..er Adobe days. ;) Given Microsoft's track record, even if the technology is better than flash/flex I dont think I would ever switch, microsoft never created anything for the good of us, untill they had too, and even then they sucked at it. Example: Mac started gaining ground with OSX and Microsoft comes out with Vista, we all saw how great that is! So for microsoft coming out and presenting this HOLY new technology that they have called silverlight! when flash has been around for 10 years is just ridiculous, people will see through the marketing propaganda. Just my opinion. Dimi, Vista kicks ass FYI. I saw several people commenting about if Silverlight didn't support Windows 2000 it would never make it. My response is, "Are you kidding me?". That operating system is 7 years old and isn't used that much at all. I don't know what the percentage is, but it has to be extremely small. To say it is going to kill Silverlight adoption is pretty funny. Interesting to hear folks thoughts on this. I've been using flash since beta 0.9 & have always loved it, but since most of my sites leverage .net I've really been looking over SL. Unlike most of you I design & program and my reservations about SL mainly revolve around implementing a given design. Considering that PS & Illustrator are my bread & butter for design work, MS Expression appears too limited to me for applying designs to pages & user controls. I use alpha channels all the time in my designs & sure don't want to give that up. Also I'm puzzled by the perception that timelines should factor in time as opposed to being solely FPS & Layers. By leveraging time it would seem that either frames get dropped, added, or an adjusted in some way. I just don't see how that approach has smooth transitions. I would really like SL to be a hit, but in many ways it seems to have some serious flaws. Especially in terms of what can be done graphically. I admit that actionscripting is a pain, but I've always been able to get it to do what I want. The real problem with flash is that most designers can't write actionscripting & most programmers don't really get how flash movies should be structured. I'm glad to SL on the market & in the future I'd probably use both SL & flash. However to say flash is dead b/c of SL is just absurd. I think both will have a place in the future, but in my opinion SL will always be a bit limiting graphically compared with flash. I say that simply for the fact of how well Photoshop & Illustrator work with flash. Any vector work I do in Illustrator will export as a .swf file & colors, gradients, & layers are maintained. SL looks very promising, but I'll wait to make a more serious evaluation when it's out of beta. Pingback from Flash vs silverlight « Kollaborativ ." Flex DOESN'T requires a Java App server, you can do server side things in .NET, ColdFusion, Java or PHP. Pingback from Alasdair Mackenzie - Such Great Heights » Flash vs Silverlight Who says that MS has never done anything good? Who can touch the .NET Framework? - Java can't. Who can touch MS Office? - Everybody else is busy copying. Vista? - Surely Better than OSX - which runs on limited hardware set! And VS.NET – nothing to compare against. Silverlight will win because it has all developers behind it. VB.NET, C#, J# and even C++ I would guess. Web applications are not about design in the end. They are about functionality. Programmers are the minds behind the designs. What good is a design if it cannot be implemented to deliver a business solution? I definitley agree with Xadoa. I am a .NET developer and have always wanted to acheive things in Flash that couldn't be done in .NET but didnt want to learn Flash. With Silverlight, i can use all the same programming knowledge for the backend code and just learn a new front end. I dont know anything much about the technical details of flash but I will personally be jumping aboard the Silverlight bandwagon just because I'm a .NET developer and I know many others will as well. Something to keep in mind as well is that Silverlight is still in beta and alpha stages (dependant on what version you are working with.) thanks for the great comparison, even with all that's great about flash, the programming side definitely sends shutters down my spine. I got a question though, how would apollo and javafx compare? (dunno anything about central) What would a web app gain from being outside of the browser, any potential development issues/ security issues? Also, how does the performance of silverlight vs flash compare? which will have the smaller app size for a similar functionality? Ultimately, SL vs. Flash/Flex is really a .NET vs. 'something else' discussion. If you are a .NET developer, you will obviously tend to do work in Silverlight. Of course a .NET developer will not feel comfortable doing ActionScript compared to a Java developer, as many describe it as being a mix between Java and JavaScript. IMHO, Flash will not be killed by Silverlight because in order to do that, you really have to move all the developers to .NET. The obvious reality is that there are many developers and organizations that are rooted in languages/frameworks other than .NET (i.e. J2EE) and developers' comfort with either Flex or SL will reflect that. <br><br> Based on what I hear and read, I am sure SL will be excellent for .NET and RIA in general (although my organization will be sticking with Flex ;-). I believe that weaknesses in each one will be addressed and the competition will force quick product growth and innovation in both. Adobe will improve Flash video issues, .NET will improve its design tools - you get the picture. The way I see it: .NET has grown tremendously and Java has grown steadily and is still strong in the enterprise. There are still heated discussions about .NET vs. Java/J2EE as there always have been. Similarly, I foresee SL growing tremendously in the .NET community and Flash/Flex growing as well and remaining very strong. It's real easy to see only one side of the story when you only live on the one side of the fence - I say that for both sides. Our company develops for Web and CD at the same time with Flash. One swf can run through Director off the CD with very little change. Most of the time no change is needed. Mac and PC mostly. We've yet to ever have a client even request Linux. Can Silverlight run the same body of code on Web and CD? Without exposed source code? Flash dead? Yea, right. Have fun with script kiddies stealing your XAML code. Also... << Web applications are not about design in the end. << They are about functionality. And that's why most clients want to see things like intro animations and graphics above all first. You know, the "important" stuff. All the while only giving you temporary materials to work with while promising finals up to the end. Clients aren't developers and they don't think like you do. Most of the time you're paid to do what THEY want. Not what YOU want. Design is VERY important. It is usually what gets you the job in the first place. Like I said clients aren't developers. Can anyone confirm that the statements posted on Grant Skinner's blog are true? That Flash really is far better than Silverlight? Check out the arguments: How come no one has realized yet that Silverlight doesn't support the most basic components such as text box. I know the development workflow in Sliverlight is smother compared to Flash/ Flex but come on. I truly believe that anyone who has actually done some work with Sliverlight, Expression Studio and Visual Studio realizes that this technology is at least another 18 months away from really being a possible competitor to Flash... and I don't think Adobe will be sitting on their hands. Hi Jesse, I don't agree that Flash is dead, on the contrary I believe Silverlight is to be born dead. Anyway, the reason I'm commenting: Macromedia decided to open up the SWF format in May 1998, when Flash 3 public beta was released, there was no SDK, just the specs. I remember that clearly because that was the time I started working with SWF format. Best, Burak My biggest issues with WPF and SL is the designer tools. I work in a MS shop and I'm trying to embrace WPF, but Microsofts design tools just blow. I've tried Expression Designer and Expression Blend. MS needs to take additional cues from Adobe and get the design stuff down before they expect me to give up Illustrator,Fireworks,Flash and Flex. Pingback from More about Flash & Silverlight : standing on the shoulder of colossi Helt siden annonseringen av Silverlight på NAB tidligere i år har det gått livlige diskusjoner om hvilket Read Jesse Ezells blog about the difference between Silverlight and Flash and why you should consider Chris Said: > Can Silverlight run the same body of code on Web and CD? Without exposed source code? ... Have fun with script kiddies stealing your XAML code. No, this is NOT a problem with Silverlight. Typically, the XAML for an application gets compiled into an assembly dll, along with all the C# code. No source remains. As for running on a CD, I am going to do that test soon. Might end up compiling two different dlls, one for the CD, one for the web server, but 99% of the XAML and the C# and the media resources (bitmaps, audio, video) will be the same between the two. > Tom said: >> ... Silverlight, Expression Studio and Visual Studio realizes that this technology is at least another 18 months away from really being a possible competitor to Flash... and I don't think Adobe will be sitting on their hands. << Could be. But in the long run, Microsoft's underlying technical approach is fundamentally superior. Its not just Silverlight. Its the entire .NET approach to blending media with logic. IMHO, this is like comparing gunpowder to bow and arrow, when gunpowder was first invented and tended to blow up in your face. Yeah, the archers could still win, but not for long. For Adobe to be competitive in the long run, they would have to invent something like .NET -- not going to happen. Yes, Microsoft's design tools are far, far behind -- and Adobe keeps moving. But given all the technology Microsoft is making available, other companies will be able to rapidly make all kinds of specialized tools, customized to particular usage scenarios, industries, or work processes. Perhaps what we'll see is Adobe hold the deeply trained graphic professionals, while everyone else ends up using .. a wide variety of interesting new tools. Check my website early 2008, and you'll see one example (no relation to the product I've got currently). I'm sure there will be hundreds of other interesting applications from other vendors, applications that would have been too expensive to develop in any previous era. Another thought on Adobe vs. Microsoft: I know the graphic artist I am working with will continue to use Illustrator and Photoshop. And the videographer likewise is unlikely to use any Ms software for editing video. And some of the animations and effects we use will likely come from Flash [the tool]. So any Ms-centric web development solution will need to slurp in all those media formats. Silverlight, because it fits seamlessly into .NET development, and includes extensible languages (XAML, C# or other .NET language), is a great platform for creating customized media services. Flash [the file format] isn't designed for such a role. Maybe in the long run, Flash [the tool] would output Silverlight as a file format. That to me would be the best of both worlds. Adobe and Microsoft could then each make money doing what they do best, and all of us would have an overall solution that was superior to either side alone. I see here a lot of technical comparison between Silverlight and Flash/Flex, but this is wrong way of thinking. Silverlight will end as a tool that some .Net developers use to make their products more visual attractive. Flash doesn’t need to win the battle, because there won’t be any. Flash is a tool for expressing visual creativity and that only matter. From wikipedia: "Silverlight has been criticized for lack of Linux support - or indeed any platform other than Windows and Mac OS X, citing it as a factor that could limit the widespread adoption of Silverlight. However, according to Mike Harsh, a program manager for Silverlight, Microsoft will eventually port Silverlight to Linux after the work has been completed on the Windows and OSX platforms." en.wikipedia.org/.../Silverlight From a developer standpoint(not doing ads) the talk about features is basicly worthless, for the most part both Flex and Silverlight can do the same things. What does matter is the cross-platform and cross-browser support and what will happen with it in the future. As it is Microsoft has a horrid track record for suppling product that are not focused around the latest versions of thier products. They do have a good track record of just dropping support for the product after they have gained market share and driven out most of the competition. Adobe and Flash have a great track record for supporting a wide range of platforms(although coming in late to Linux) and supporting a huge amount of browsers. According to O/S statics collected from various web sites as of June 2007 usage Silverlight is not supported on over 10% of users. So if you are doing a cross-platform/browser web site and need to use Flex/Silverlight cabability, silverlight is not the smart one to use. I can understand why Silverlight would appeal to .net engineers since they have less to learn to produce "flash like" applications, but as a flash game developer Im only interested in the results. What can be done in Silverlight? what are the advantages, what are the weakpoints, etc. How is Silverlight stronger than flash, I dont care about video formats or SDK problems or learning platforms, thats just not the point. Here are a few: 1. A complete integration of all MS development tools like VS.NET utilizing all of the their backend media and communication technologies for advanced enterprise level applications that many will be using regardless if anyone like it our not. 2. Utilizing the newest VS.NET will Seemless integrate with MS SQL which will give us powerful desktop applications over the web. 3. Even faster RAD deliverables and now with stunning UI designs. 4. Vector based enterprise scale application which flash was unable to achieve will now be fully accepted by larger companies which will drive the next wave in many aspects of internet communications. 5. DirectX for with full 3D hardware support. 6. Programmers may probably build converters and importers to support SWF animations for Silverlight. So die hard flash animators may be used for cool vector effects with in a silverlight applications. Not sure how much will cross over though. IT and Marketing are no longer in 2 different camps. They are currently engaged and will be getting married shortly. It would be like if Mr. Nerd marries Beautiful Chick and has a family of very intelligent, trendy, well stressed kids. Nice comparison and nice discussion. I added your link in my Silverlight post. You can find Silverlight related resources, tutorials and articles from my blog. Flash is dead... ..and I have found it's dead body! If Silverlight is booming in its alpha versions then think what Microsoft can do in the next few releases. buy-tramadol--online.info/.../tramadol-online.php Silverlikght Vs Flash Pingback from Microsoft Silverlight ??? A worthy contender to Flash and what the ???Java Applets??? should really have been « TeXpressions The biggest problem I see with Silverlight is: You cannot have “x:code” chunks embed in the XAML file anywhere you want. One can only include in the Top parent Canvas. This fundamentally alters the web and the why many people write web applications. For example, you cannot embed mash-ups, if they contain JavaScript code. Other limitations are, you cannot dynamically generate JavaScript code or include data for the components inside the Top-canvas.. The world of technology is littered with the dead bodies of "better" technologies. The simple fact is without adoption a technology is doomed regardless of how much "better" it is. In order of SL to succeed it has to attract the designers, not the developers. That is how Flash got and has kept its market-share and that is where SL will either succeed or fail. At the end of the day a designer could give a flying crap if the file format is better. What they care about is if they are able to pick up something easily, make it do something cool, and if they can sell it to other non-developers. As a developer I see too many other developers who are so out of touch with reality they actually think people care about their opinion. News flash! 90% of the rest of the world either does not understand what you are saying or does not care. Now I would be the first to say that the adoption by developers would be extremely important if we were talking about a database technology or something that did not need to be sold to the masses... but what we are talking about is something that has to come out of the gate being ultra cool to the average person... and so far all the ultra cool stuff about SL is under the hood. Without the designer community SL is doomed... One other thing. You do not need a Java app server to compile Flex. Not sure what you are smoking but Flex is able to be compiled from the Flex Builder IDE. along with all the good points made here (as well as skinner's) - there are a few more... While adobe is no better than MS in the corporate world, the flash/flex community ( most notably OpenSourceFlash.org ) is by far stronger and more well established than anything that SL could ever hope to create. It's funny to hear .NET guys talk about 'standards' and 'open' formats... hmm, and you work with the most closed and non-standard of them all! Now, you're right, the FLV format is no WMV (thank god) but have you seen any web video lately? All Flash. That'll be a hard one to convert. Like the million of XP users. But that's all nonsense. Here's the real- go look at ToolmakerSteve's website That's why MS, .NET and SL will never kill ANYTHING (no disrespect Steve, but damn... ) It tooks me 1h to read all the funny stuff you post here( And the links ) Here two fractions: ------------------ - Designers There are two things i heard from the designer guys here. First "Design is much more important than functonallity", second 'Microsoft Expression ***' suxx". First, if nobody would care about functionallity why the People go on a Website with such functionallity when they can go to museum or in a cinema( ok it's a long way but... ). Second, if you don't want to create your graphics in Microsoft Expression '***', import the graphics from Photoshop or Illustrator( I heard you can do this ). - Programmers ActionScript vs. .Net, Features from hell,... :) ActionScript 3( only 3 ) is a nice Language but the 48 and more .Net languages are nice too( specially C# ). More Language/Framework Features -> .Net Better drawing routines -> Flash After this funny text... I studied game development, so i have learned to respect both, the designer and the programmer side( I'm Programmer ). In my job i develop at this time with ActionScript 2/3, and sometimes C#( It's not a game job, but this is changing ), and i think C# is a nice language, much nicer than ActionScript 3( And ActionScript 3 is nice too ). I also work with Photoshop, Illustrator,.... Fact is, Microsoft has much more Money, they have a own OS to spread Silverlight with one update( So over 80% of the Internet users will have it ), and at least, they have much more developers to make a silverlight a sucessfull product, that will not remove Flash from the web( Java exists and havn't got killed by .Net ), but Adobe has much to do if they want Flash alive in 5 Years. ( I hope my English doesn't suxx to much ) SpeedDesaster I agree Flash is probably dead, unless they open up their file format. Silverlight may seem great, but unless the player is ported to OS X and linux, it will go the way of ActiveX. History teaches more than poetry. ------------------------------------------- Maybe you should read something about MacOSX before you write something about it. Microsofts WRD magnification senario spills magazine bits against all Adobe efforts. See the case where Carvel software guru Michael Mcfurry ate spanish rice and felt sick in the morning. Macromedia made a smart busness move leveraging rich experiences to experienced rich people who own levers. Needless to say I smell trouble for my post. You would be crazy to trust Microsoft for anything that involves an open platform like the web. Flash is great tool for making web applications. Microsoft has a long history of making products that lock in a market subset and forcing others out. I just can't see anyone in their right mind who wants to make a web app using Silverlight. How could you trust Microsoft given their history... it would be painting a big "I'm stupid" sign on your forhead. Both have pros and cons. Here are my thoughts Flash / Flex / AIF - pros /cons - swf / actionscript is fairly hard to decompile in a useful way. - AS 3.0 is now a strongly typed language, though nothing compared to C#. - The Adobe Integrated Format (AIF) applications run as desktop applications on both Windows and Mac, with either SWF content, or plain old html / javascript. Perhaps we can run silverlight apps inside the AIF player (anyone have any thoughts here?) - The latest Flash runtime (I think it's called the Flash Virtual Machine) is 10x faster than the previous flash player (because of strong typing I believe), and is thus quite capable. - Flash video compression is quite impressive. - Many designers / developers already know flash (though most complain about it). - Flash code is often spread throughout many frames, and is not very organized, and is quite difficult to debug. Silverlight Pros / Cons -- We can now use C# for silverlight (as a xaml code behind file). C# on server, C# on client. -- Silverlight will eventually integrate seamlessly into the ASP.NET event model, as well as the ASP.NET AJAX framework. -- Source code is not protected (AFAIK) -- though this is only a partial issue because many SV "applications" will have client - server interactions and the server code is not accessible. -- Silverlight is still very new and needs time to grow up. It's just simply not there yet, but it's growing quickly. -- Frameworks will be developed to make developing RIA applications quite easy. I could see the Acropolis framework ported to Silverlight. Interesting thoughts. Flash is not dead for certain. It has a lot of momentum, so it will take at least a few years even if Silverlight is strictly better in everything. And Adobe will not exactly wait for that to happen. Nobody brought up price. With Flash , if you abandon the IDE, you get everything for free ... Is that so with SL? well, i guess jesse is a microsoft fan boy ("vista kicks ass" - what a crap), but appreciate the comparison anyway. But to my 2 cents: - I think it´s a stereotype that developers don´t know about design and designers don´t know about dev. - IMHO the real outstandig people out there are both and now about both. - from a app developers point of view Jesse might have a point, but he totally forgets to mention the streams of tears of agony Microsoft developers shed already about Microsoft products and tools - don´t think that will change with silverlight yloquen... i expect that SL code can be compiled with the .NET compiler that you get when you install .NET Framework or Mono/Moonlight. There are actually heaps of free .NET tools out there and we will see some for SL/Moonlight as well. Pingback from Microsoft Buys Corel | SilverlightFlashKiller Just a question... Microsoft paid u for said that crazy things? i haven't read all the replies to see if this has been said already, but from what i've seen, Silverlight only embeds through a javascript (no object/embed tags). i this this would be a problem for myspacer's or bloggers who are restricted from implementing their own javascript. i may be uninformed on this, so forgive me if i'm wrong... There are always a couple of things that one tool can do that others can't. Flash/Flex have unprecedented animation capabilities that only artistic people can appreciate and SL is no where near it. Every site that's been defined visually cool for the past decade was most possibly done in Flash; and Flex is growing on that foundation. Likewise, SilverLight has the unprecedented support in .Net framework and it's legion of developers that Flex can't match. It is stupendous to call one technology/community dead rather than cherishing the competition. Where would IBM be if not for Microsoft, and where would Yahoo and MS be if not for Google. It is called FREEDOM of CHOICE. Every living being deserves it. I just cant wait for MS plans to integrate Silverlight with XNA Game Studio Express, for all that rich unprecedented graphic and animation capabilities :)). I don't think Silverlight should be compared to Flash. Sorry, but whenever I see Flash on a web site, I look for the skip button so that I can get to the real content. While Flash can be used to great effect, its usually used for value-free marketing gloss, and frankly, I find all those 'cool animations' superficial and annoying. Silverlight (especially from 1.1 onwards) will be used be developers to create cross platform browser based content with genuinely interactive client side functionality that doesn't rely on the 'make do' string and sticky tape of AJAX. If Silverlight makes Web sites look nice, that's great too, but that's not why it will be successful. Silverlight will finally make it easy for developers to create browser hosted software that matches the user experience of desktop or client/server apps. see Siliver know supports some flavours of linux (i think its also supprted on mac) and the list a list of some popular websites that are now using silverligght, wwe.com, mlb.mlb.com to a few. So SL seems to be gaining adoption slowly. Extractado del Blog de Jesse Ezell , quien ha tenido una amplia experiencia en desarrollo de herramientas Pingback from Silverlight Vs Flash: Trying to collect different opinions | MCSE Blogs Pingback from Pieter Kersten.com » Blog Archive » Silverlight vs. Flash: The Developer Story Hey, I love microsoft, but turned away from their products because of lack of open source. Will admit that I sort of like Vista, but think Linux blows it out of the water, plus it's FREE. A lot of statements made in the main article of this page are logical fallacies in regards to the Flash authoring environment and Flex 2 Flex 3, and adobe's free SDK. I highly recommend this article. Pingback from Silverlight Takes on Flash: A Race to Deliver Rich Interactive Contents on the Web « Television and Interactive Content can anyone please show me a silverlight application that cannot be done in flash except the 3D stuff. Can applications like buzzword be built in silverlight? Can we build a portal based fully on silverlight? Al the demos I have seen done with silverlight have been on the web since some years. I recoomend everybody to stop beleiving the hype . Instead believe what you see. Thank you. weight-loss-pages.com Vishwas turned me on to this article , by Jesse Ezell, pointing out some of the shortcomings in Flash These opinions are still strongly divided into 2 camps: Flash - Great for animation. For coding though it's taken a VERY long time (and 8 player versions) for anything solid to materialize, like Flex, AS3.0, etc. Debugging still sucks AFAIK. Silverlight - Great for coding. Animation tools suck. It's interesting to note that for the last few years, the coolest flash sites are almost always ones that integrate really difficult dynamic coding into flash movies. Now for Microsoft the coin is flipped and the coolest SL sites are going to be ones where they figured out how to do a slick animation. Conclusion: If Adobe stayed the same and if MS dramatically improved their designer tools, I think MS would win. But Adobe won't stay the same and MS might not improve their tools, so it's definitely good competition at this point. Silverlight has a big drawback over its network foorprint over Flash. Flash delivers compiled binary vs Silverlight which delivers Text based XAML. Hence the amount of content that is delivered over Flash is much smaller compared to XAML. Try delivering similar UX over the web & access it over a 512 Kbps connection from anywhere in Asia and you'll know what Iam talking about. Do you really think Flash is dead? Silverlight also offers the compiled binary option. The XAML is compiled into BAML, which is binary and smaller in size. Actually, Microsoft recommends this. It’s the middle of October, and Silverlight is nowhere to be seen... How could it be that Silverlight, such a killer-app is so... ...Dead? How is BAML for streaming, as I don't think you would be able to stream in XAML. If BAML can't handle that I would see it as a rather large problem for Silverlight and Vid Seit anfangs September ist Silverlight 1.0 ( ) released, nachdem es im Mai Pingback from MSDN Blog Postings » Silverlight vs Flash People are taking up ruby because it is nice to work with, designers are not going to take up XAML because it is not nice to work with, I know I am working with it. Blend is so far behind the flash IDE in terms of creative flow you might as well start creating your own version, and often it is easier to hand code the XAML. No one yet has any idea how to really work well with XAML in large applications. XML solves everything is a current mistake, panel are good but start trying to animate and jump between code and graphics, and reusing sections of XAML like movieclips instances, na you guys missed the sum of parts aspects of flash. Hi – flash developer here (in addition to C#.net, VB, PHP, AJAX, and every freakin design tool you have ever heard of) I love all the comments from MS developers who cant/wont learn flash/actionscript. Honestly, imho the best SL can ever become is roughly equivalent to Flex - or as I like to refer to it "canned-flash for the layout challenged". With effective use of actionscript, it will do anything you tell it to do (with a tiny footprint, and low resource allocation). But it does take a level of graphic talent to handle it all - which I understand is a bit threatening. I am all for frameworks…but I don’t them. Silverlight will fail because propeller-heads are not art freaks and hippies. Back away from the client code-monkeys, and leave the aesthetic to the eclectic. Even better, SilverLight will be to Flash what Front Page was to Visual Studio. (I know, that smarts). The only way it will take off is if "designers" adopt it and produce flash killer UI's with it. (that ain’t happening - from what I have seen so far). People, designer types aren’t leaving Adobe. Do you really think you can force designers to use SL just b/c it will be easier on you? Ha! Btw, when is MS coming out with a Photoshop killer? lol. Wouldn’t that be the next logical step? *I am grinning big right now* Then, one blessed day you wouldn’t need graphic designers at all, and the web will resemble a communist nation or even better, a huge, featureless brick-wall with two colors of bricks, and one color of grout (grey prolly). Won’t it be lovely? One day when you crank open VS, and bind a control to a datasource in your aspx pages, the UI will automatically created by SL by simply adding another page directive and a single line in web.config: <appSettings> <add key="GUI_type" value="Gates_CheesyUI_1"/> </appsettings> Silly...programmers. You can’t do it all yourselves. And if you do, no one wants to look at the result. But you know that already. Sheesh…. Here's a suggestion – instead of bashing flash, and those “special” flash developers you love to hate (like me) I would say to approach them and BEG them to try Silver Light. I would even suggest giving licenses away to these people, in addition to offering free training. If not, they will never have a reason to switch, and you wont get too far teaching stud MS developers how to use it if they don’t have an eye for it in the first place. I will be waiting – but not holding my breath. Good luck, and God speed. SLdoah - Love your comments, but I dont think anyone hates you. 8] (jealous maybe) You are right though, what Microsoft really needs is to tap into that rare breed of "programming designer" that would be willing to cross over to Silverlight. The easiest way is to give them incentives for bringing thier talent over. Another way is to demonstrate huge income potential for someone with this skillset - but make no mistake, this type of head count is expensive (but very neccessary) for any employer who cares about the front-end. I dont think that its a one way or the other propisition btw. Those who will excel at this new technoloy will continue to excel at Flash/Flex/Actionscript. If you compensate them, they will come. Hi, I am new to silverlight and Flash. Can we create a silverlight application using dynamic data loaded from Database? I am searching for an example, I can not find one. thanks Wow im speachless. Iv worked with flash for a good 4-5 years now and, well actually Im impressed by what you say. I mean, for example, that you have to put a blank sound to force a regular frame rate is absurd. Why go into so much trouble when you can just use intervals... I mean, im not saying that silverligh dosen't have potential...but its faaar from what flash is doing right now and it will take time before it catches up. Too much time if you ask me, flex has become simpler, air will make everything better, and silverlight will fall into the shadows until MS shows you ppl what we have in stores.. I'm surprised one very simple point has NOT been substantially brought up. The Designer vs. Programmer issue. Design vs. Code. I mean if you take away the fact that a non coder can do some pretty amazing stuff in Flash, well, then Flash AS is just another language. But it never was AS that brought me to it, but rather the SIMPLICITY of being able to animate and create interactivity with ease. What tools can a designer use in conjunction with SL? Is there a WYSIWYG GUI like Photoshop/Premiere/After Effects/Flash that I can use? In what can only be considered one instance in a series of examples, I saw this story today on TechCrunch Just to clarify for all those M$ - centric folks, You really should be comparing SilverLight to Flex, not Flash. Flash (the "Player") is just the plugin, and the branded name for the authoring tool used for over a decade to make rich internet app-type stuff. Hey - it looks like everyone on this thread forgot (or were to young to know) that M$ tried to enter into this environment over a decade ago. When Flash was in its infancy, M$ bought a company called Liquid Motion, which had a product called Liquid Motion Pro... the software allowed you to create animations using a timeline based mechanism...If they treat Silverlight like they treated Liquid Motion Pro, than its as good as dead. I wonder how one could create an app that manages peripherals with Flex/Flash. Is there any kind of integration between action script and Java? I mean not via an app server but simply being able to invoke methods on local java classes. I guess it can be done in SL since it simply integrates with .net languages. In Flex/Flash world what would be the better way to integrate the presentation layer with business logic and hardware control? Thinking about a kiosk type app? IMHO the *true* power of flash is on the server, NOT on the client. I'm referring to FMS (or any other open source equiv. e.g. Red5). Does SilverLight have an associated server engine? If not, it cannot even begin to compete with Flash technology on any serious level for building full-featured, audio/video database-driven, multimedia apps. Flash sucks - it doesn't run on Win 3.1. Ducky, Win 3.1 has been extinct for a while now... "Does SilverLight have an associated server engine?" Silverlight content is just plain xml any web scripting language will already allow server integration. For me the biggest positive about the Silverlight plugin is the smooth hardware accellerated playback at 60fps. It puts the flash player to shame. However I find the Flash Design tools are far easier to use than Expression Blend. 60fps? Why would you waste processor cycles like that for ....anything? Do you know the human eye can only detect 32fps -ish. Anything more is just wasted frames for visualy effects. Why on earth would you ever want anything running that fast? lol. You need to get out of your cube more often dood. I hear the outdoors around Redmond is fabulous this time of year. From a nice posting here: Unlike Flash/Flex it(Silverlight) doesn’t do (as of v1.1): sound processing binary data exchange sockets per pixel bitmap editing bitmap filters (convolution, color matrix etc) bitmap effects (drop shadow, blur, glow) frame based animation (i.e. hand made) webcam microphone text input e4x built in file upload/download user controls layout engine local data storage linux player express install (through player) BACKWARDS COMPATIBILITY for 10 years so far! 1.1meg footprint … these are just a few features. Id say MS has a long way to go. I knew I read enough after seeing "...vista rocks...". Within 5 minutes of using Vista I (really!!) wanted to take the whole-tower outside and beat it with a baseball bat. Then go out to the street and throw the remnants down the sewer, followed by a molotov cocktail to ensure that "no raccoons were harmed in the destruction of this MicroJunk product". Fact of the matter is that if you want something crappy, get Microsoft to do it. Even in such case it is probably just (half of) a collection from another company's products twisted for their needs and sold as if tailored to someone else and that might someday become a reliable product. (strong emphasis on might) casey and SLdoah! sum up the points that were conveniently left off your very lopsided comparison. Frankly I only use anything MS if it's the last chick on earth type of thing. Adobe on the other hand is like walking into a classy-but-cheap brothel. "New to the RIA topic", you can integrate Java and Javascript (call Java methods from Javascript and vice-versa). Sun even has a library that does Java-to-Javascript communication (which you can use to easily implement Javascript-to-Java). But i would recommend flash over java whenever possible, because Java is bulky and it's security implementation usually means you'll have to sign libraries to do some simple stuff. I would move to Silverlight if it has better scripting capabilities than Flash. I don't really use flash for animation or presentation purposes, i tend to use flash as a faceless engine in a page to perform any tasks unsupported in javascript/ajax. It's usually a pick between Java and Flash. Ideally SilverLight would be a lighter Java, with the same low-level scripting support without the awkward/buggy behaviors. A few questions then: 1. What's the execution speed of AS vs Silverlight? 2. Can I make games in silverlight that won't suck? 3. Is the silverlight rendered faster than the flash one? 4. Does it have hardware T&L? If it's at least a triple yes then bye bye flash. Otherwise, this is some kind of joke. As both, programmer and designer, and as neither of them the full potential and better tool is Notepad. SL is Notepad with a few add-ons, if you don't believe me try it. Tell me you cannot write XAML with it and .NET and such crap-ola. I mean how much more success and proof do you need to show you that Microsoft products are great: -Zune (great for concealing really important things) -ActiveX (without it Explorer was able to use opacity and alphas) -Vista (such a great box, pretty colors, what can i do with it, though?) -XBox (Wii used to play it :-)) "he totally forgets to mention the streams of tears of agony Microsoft developers shed already about Microsoft products and tools" Hmmm.. I happen to be quite fond of some of the Microsoft products and tools. Take Visual Studio for example. It is by far the best IDE I have ever used. That is one of the big reasons that I am a .Net developer today: the tools. The tools are what makes a software system successful. Sony ignored that with the ps3, while MS offered fabulous development tools. As a result there are many more Xbox360 developers. The programming model behind WPF is really quite nice when you get into it. The current DESIGNER tools for SL may need a lot of enhancements, but they can improve that. The important thing is that they have the big picture view on where to take it. I'm not saying flash is dead or even dying. Just that SL definitely has a fighting chance. BTW, MS get socket support & binary formatter WCF support in SL please. @venividivici, That hideapod link was hilarious. Let me guess what operating system you use... hmm... OSX? You can really tell the Apple fanboys when you read their posts. Yes, there are many things about Vista that really suck, but I think that it IS better than XP because of the security. The average user is so stupid. They need a stronger security system to take care of them Something like Ubuntu or Vista work great for that. Haven't had a lot of OSX experience so can't speak for it. Much better than XP though even though the transition is very painful. Developers.... Developers..... Blah Blah Blah... This is the realm of designers. I cant see how developers would have any say in the matter. God forbid they actually let programmers design something...(eek!!) In the end it boils down to one thing...Flash is eye-candy...that is it's purpose, it is the reason for its existence that is why designers and (most importantly) end-users want it. Why the hell else would you use it? If it were up to developers why not just develop in asp? The Expression tools are light years behind any Adobe equivelant . The insane amount of bugs partnered with the MS paint like features somehow dont make me wanna pee in my pants. It took me 2 days to make a basic animated banner.(with all the buggy modification tools and dodgy xaml it generated) It took 10 mins to do the exact same thing in flash. Enough Said... Ultimately it comes down to what the companies investing in web marketing want. They are familiar with Flash, and so is most of the industry. They're not interested in risking money by using new technology, even if it is Microsoft. Industry professionals already know Flash, and it's easier to migrate to AS3 then to learn another proprietary language. Truth be told, the major marketing firms will continue to use Flash, and money dictates direction. Pingback from Silverlight/Flash performance comparisons (and a wee bit of fanboyism) « Design | Geek Pingback from Flex: Is Silverlight just a “Me Too” product? « Tales from a Trading Desk "Microsoft does one thing better than the rest..and that’s re-engineer a product and integrate it so well you in the end you really don't care to buy or use anything else" I just have a distrust of MSFT...Cross browser support: They get a reasonable share of the market, then add a "feature" in their next version that doesn't quite work right on other browsers, etc....You do care about using something else. You just can't - if you want things to work correctly. They often work with a standard format, then add a their own twist. "implements industry standard VC-1 codec for video" - Isn't this a little disingenuous? It sounds like this battle is just moving applications into what people wanted browser apps to be in the first place before that security sandbox put the handcuffs on desktop access. A mature looking front-end, desktop integration, desired content pushed from the server rather than polled. I think what the designers here are forgetting is that functionality is going to be the new drive on the web. Flashy (pardon the pun) animations that look cool but do nothing are so popular because that's all the designers can do. Now that Silverlight has come along and offers us a sane development model, developers can begin to make useful web apps with streamlined UIs. The Silverlight apps may not have animations firing off in all directions and blinking shapes flying all over the place, but they'll be able to accomplish tasks. The new web is definitely going to be developer driven, not designer driven. I think people have had enough cartoon sites, and now they'd like to get some work done. Has anyone noticed how popular Google has been lately? I'm sure you have. Where's Google's slick animations and colorful, bubbly UIs that are hard to navigate? Google's products are perfect examples of how the new web is driven by functionality, not flashiness. Google is just one example. There are websites all of the Internet that are springing up and becoming popular because of their simple and easy-to-navigate UIs. Most developers are smart. they know who their target audience is. They know catering to their clients needs pays the bills. Clients want websites which are audience inclusive.. I've never heard a client say "I don't care about Mac or Linux users". Today Flash is an inclusive platform regardless of it's developer features. Silverlake is not. Unless Silverlake can bridge to other platforms it will never be adopted by the client... because it excludes a segment of their audience. The client doesn't care if it's developed in Flash, XHTML or some other paradigm.. as long as their audience can participate. Silverlight is only alpha, but it already works on Windows XP, Windows Vista, and Mac OS X. Windows 2000 and Linux support is in the works, and will most likely be released by the time Silverlight 1.1 goes gold. As far as browser support, Silverlight currently runs on IE6, IE7, FF 1.5+, and Safari. I like Javascript integration, Xaml (yet I would have prefered svg), resusable code. However, I do not want to be supposed to develop with .NET plateform, which I dislike. I hope I will be able to get the tools & Ide to separate silverlight development from .NET development I use flash for virtually everything since it is "ubiqutious". (animation, web, games, broadcast, films and even print). I am talking about serious professional stuff. It does have several cool +ve points. But when it comes to taking it to limits we start feeling the limitations. While most people here compare the programming aspects of flash to silverlight (Since MS is a programming oriented company), I find its limitations in animation compared to Animo / US animation . I see it limited as a multimedia app compared to certain features of MM Director. I see it limited compared to JAVA comparing certain math algorithms. I see it limited in making certain CG for broadcast. I see limitation in print compared to COREL. All said and done, poor flash has nevertheless tried to keep all happy to an extent. Knowing Microsoft's poor sense of creativity (Its logo and the default wallpaper says it all), I wonder how much they must have accomplished them in silverlight? I have a web site that has an animated flash banner at the top. I have 3 computers and my desk top will not open up the banner. I am a novice at computers and only know what I need to know to do my job. I am a realtor. Is there a suggestion has to what I need to do to get my computer of open the banner. I have downloaded the lates Adobe product. Thanks Silverlight Probablemente el tema que mas preguntas generó fué Silverlight . Intentaré Pingback from MSDN Blog Postings » MSDN Briefing Online - Preguntas y Respuestas (Parte 3): Silverlight this post is bullshit ... Is Microsoft like a mac? Pingback from the rasx() context » Blog Archive » Today???s Links to the Client Side This article seems to me to be rather one sided for M$. Silverlight is gaudy and takes too much hardware to just develop it. I can download eclipse for free, then just grab the flex builder plugin and I'm golden. Since flash has been around for quite some time there's much much more documentation on just about anything you care to do in AS. If SL is anything like any other M$ product I can be assured the API will be nearly impossible to find not to mention ridiculous in it's language (not a big fan of VB or any sibling thereof). I'm just going to have to call shenanigans to this article and say somebody sounds like they may have a big grudge against Flash for the company wanting to be safe with their code. Get over it. 1.Ruby on Rails (or PHP) 2.Flash (using sendAndLoad();) Two steps and nearly every 'benefit' of SL has been dealt with. I am yet to run into any problems with this method, and AS3 has been quite nice to me. I'd like an example of a site that uses SL better than Flash (that didn't take 2 years and a team of MS developers) and then maybe I will consider getting back into SL. I worked in it for one month and was so frustrated! It took 1 week to learn AS3. If Microsoft continues to release Betas that drive REAL developers away... Well, SL will go the way of Vista. ------------------------------- Interesting point SLdoah. By proclaiming "flash is dead", MS clearly shows that they are well aware of your point now and were back then when they started thinking about creating SL. They can't really BEG, so by saying "flash is dead" they hope the Flash community will start trying SL -- which will only happen if SL will be able to produce SIGNIFICANTLY BETTER AND NICER EYE-CANDY THAN FLASH. That's the bottomline if you ask me. We will find out. One reason why I'm personally glad with the existence of SL is that it will hopefully force Mozilla to reconsider their policy of NOT wanting to fix the flash/animation bug in Firefox (they conveniently created a flash-blocker). This I think might pose a greater threat to flash and animation in general. Read What do you think? Which one supports DRM functions? So that the rich media content can be protected? "Want to move something across the screen in 3 seconds? Calculate how many frames 3 seconds will take, then calculate the matrixes required for each frame along the way." The class fl.transitions.Tween allows animating object properties by using seconds. Am I missing something here? You can tween several properties simultaneously. "..Silverlight supports the WPF animation model, which is not only time based instead of frame based, but lets you define the start and end conditions and it will figure out how to get there for you. .." Doesn't Flash do the same?!? 13c360cdb9:75 Thank you I get it. It's not that you feel that Silverlight is better. You are just a BIG fanboy of Microsoft. # Jesse Ezell said on June 8, 2007 06:26 PM: How much is Microsoft paying you to try to fill our heads with there propaganda bull crap. Don't get me wrong I am not a supporter of Microsoft or Adobe. It just seems to me that someone who comes out with a statement like "Vista kicks ass FYI." must be getting paid by that company when everyone knows the product is as about as useful as a sharp stick in the eye. BigA Just from experience you do not want to use those transtions.Tween classes all they do is add the image in a new position on a new depth. So if you keep using them you will eventually run out of depth levels. The best possible way is to code you animation using a time based method like intervals. But can be very complicated. Silverlight cannot do anything that Actionscript can't. Flash has better graphics, and Microsoft tends to put product useability second to product efficiency. First of all, AS3 is much faster than silverlight. Silverlight is trying to catch up to AS2, to be honest. Even if silverlight was faster, it wouldn't be as easy to use. As far as timeline vs frameline, frameline is much better for animation, and you don't need a timeline in flash for games and interactive material. Most games are made in under 5 frames, and the frameline only really deals with how fast the intervals are called, and can be dynamically controlled, depending on the computer speed. Silverlight can program in C#, but AS3 is very similar to Java, has an OOP based programming structure, and most functionality can be achieved via smart programming, regardless of what the code lets you do. I planned out a fully functional 3d Engine in about 3 days with flash, using nothing but variables and some mathematical functions. Keep in mind that flash is for games and animation, not desktop APPS. Flash has a good product and amazing market foothold. Microsoft won't have to worry about distribution (just put it in as an update), but flash is already available on most mobile devices, ipods, cell phones, etc. Silverlight supports more (microsoft) video formats, but flash has this area too. Almost every video site in existence uses flash and loads up a .swf or .flv file. Flv dominates all other video formats amazingly, can be streamed, and has really good quality and low file size. Adobe has good tool integration. They have photoshop, illustrator, and the whole macromedia production suite designed to work in tandem. Microsoft's programs aren't as cohesive and are usually designed to perform individually of each other. And if I had to make an outlandish statement, I'd say that Director is the future. I've heard that they will start offering actionscript support in director, which means that flash games will be backed by Direct 3d (salty for microsoft, because SL can't do that). Eventually though, I think that Flash and Director will merge. Flash will have 3d support, Director will have 2d support, and actionscript will have adapted to fit accordingly. What will matter a lot though is who gets hardware support first. I think Adobe will have this, considering the current state of Silverlight, but that is really not something I can back up. Vista is a beast to get working, lots of out-of-the-box glitches, but once you get it updated and customized, it is a very comfortable, pleasant OS. jol, please do us a favor and don't even begin to compare actionscript to c#. actionscript is a toy compared to vs.net and c#. as per this whole discussion of developer vs designer debate. we're talking about the future of making web applications become as sophisticated as desktop applications. Please make a distinction between lame little flash "ads" that plague the internet to RIAs which is what SL is intended for. In the app land, the developer is the "ruler" and not the designer. For this reason, SL will win the RIA war. and flash will continu "If you have server components, once again you need to switch back to .NET and throw out all the classes that the run time is using." You can code server side functions in Actionscript. "text" You can embed fonts in Flash. Ha ha ha... Vista kicks ass? Or did you mean sucks ass? In my opinion, Microsoft, specifically the Vista department, should be criminally prosecuted for the amount of delay they have added to everyone's lives for all that extra clicking... "Someone is trying to access the printer, Allow or Deny?" WTF do you think I clicked on it for? Yes, that someone is me... DUH... really, what is next... I can see the next operating system after Vista, asking another round of stupid questions, "Someone just clicked Allow! Are you sure you want to allow? Okay, let me see your drivers license and social security card, and then I will present the next stupid question to you" Pingback from 80s team blog » silverlight???Flash???????????????[silverlight vs Flash] I think you need to work on your Flash skills, some of your assertions are dead wrong, way, way wrong. It is nice that you like Silverlight so much though. # Sorry > totally agree. Note: I already use Flash together with c# and .NET.... whats the prob? (dont get confused, flash does not execute .NET/code but can very well communicate with them in many ways) For me it looks like Microsoft and Adobe will have and hold their own technology fans and artists, creating big things. It's not the question: "Silverlight" or "Flash / Flex". Both will stay interesting technologies, and surfers will have to install both; I can't see, that in the end there will be only one hero, having the whole market for himself. They'll split the market, and there's enough potential for Adobe and Microsoft to earn a lot of money. Perhaps now we know part of the reasons Microsoft wants to buy Yahoo! If they can force enough people to install Silverlight to access web services they own (incl the big Y), they would gain traction with Silverlight. But somebody please tell me: why - if Silverlight is so wonderful - is the MS website STILL using flash to showcase its products? (see) I think Silverlight is much better. I love adobe and all, but when it comes to development nothing can get close to microsoft. C#, Vb, and the asp.net framework are far far beyond actionscript, and flash. I agree that a big problem is going to be distribution. All around it seems way better than flash, but it also depends on developers actually implementing one or the other. I am a .net developer, and a flex developer. I honestly find actionscript horrible as far as code, but I use it, cause almost everybody has the flash plugin. But once again I say Silverlight is a much better "tool" for web development. With C# it has the best of back end with its own front end eyecandy. Excellent article. In-depth. However, you risk undoing all your good work with this uninformed catch-all general statement: >> Visual Studio.NET is by far the most powerful >> and most popular IDE I am a Java developer and manage two C# developers. The power of Eclipse JDT (Java Development Tools) is awesome and includes frequently needed features not included in Visual Studio .NET 2005 Microsoft distribution [1] such as: a) Open Type Hierarchy b) Open Call Hierarchy IN MY OPINION (important qualifier which you did not use), Eclipse is a superior IDE to Visual Studio. [1] I'm not talking about 3rd party addins. rebus - spot on eclipse rocks java / php even as3 but we're talking MS here i think. i am beginning some research on SL after working with the new javascript EXT 2.0 libs and being a bit frustrated at the effort. does anyone have any other useful links to SL evaluations? yeah yeah yeah... but why? Was Flash not doing it okay? I forsee Microsoft trying to strong-arm itself into Adobe's marketshare. They will force users to utilize their plug-in and make back room deals that limit the content delivery method to Silverlight. Since Microsoft can preinstall whatever it wants in Windows, it'll appear to be "better" or at least "easier" to the end user. This is the same company that put out Vista... who's CEO saw no future in the iPod or iPhone. Do we want to look to them for innovation? They just wanna pull another Netscape. amazing article.. thanks.. As a web designer, one of my favorite things about flash is being able to produce artwork in other Adobe applications such as Fireworks, Illustrator, and Photoshop and bringing them in fully intact, with transparency intact, and maintaining vector artwork. How does Silverlight work with other graphics packages? Also, most designers are on Apple computers and I'm not certain you can develop Silverlight with Mac - correct me if I'm wrong. andy - "...I love adobe and all, but when it comes to development nothing can get close to microsoft. C#, Vb, and the asp.net framework are far far beyond actionscript, and flash." Did you really just state the MS server-side technologies are are far far beyond Adobe client-side technologies? Please tell me you understand the difference.....*sigh*. That’s it dude...get back into your MS developer hole - your blogging privileges are hereby revoked. Your postings are making us all dumber. Disclaimer: I apologize for the personal attack…”I didn’t have to do it; I felt I owed it to him.” lol. 8] But seriously folks – imho, Steve has it right. I agree that it isnt going to be a "one way or the other" proposition. Things change fast in our line of work. If you have the skills, you may as well learn all you can about both platforms. It remains to be seen if Silverlight will gain popularity (note: popularity doesn’t mean penetration, btw)...but if it does, we should all embrace it. After all, in most cases, it is our clients who will dictate the technology we use, not our personal preference. I would add that flash is going nowhere by the way – I think most end-users would agree that Flash has become a defacto standard on the internet, much to the dismay of you-know-who. Still, if Silverlight ends up creating a gravy train like Flash has for much of this past decade...I’m on board. But that said... It's still going to be up to creative professionals to create the appeal, and ultimately the demand. I think our friends in Redmond would be wise to remember this. (…in other words….MS : Play nice with ADOBE, and APPLE if you expect to gain market share!) Pingback from Nesaprot.net » Blog Archive » Silverlight vs Flash Well, I think most of the discussion and comparison made here would not make much sense when Silverlight 2.0 comes out. Seriously cannot compare flash with Silverlight 2.0. You maybe able to compare Flex with it in some way. Flash is not powerful enough for large scale applications, and security issue is also a concern. ActionScript 3.0 apparently does better programming, but still too weak for complex logic. Also, flash is a bit too slow sometimes. JC Forgive me, but I think some of your blanket statements need to be corrected. Wrong #1: "Flash is not powerful enough for large scale applications" Do you mean you don’t know Actionscript well enough to build large scale applications? Please don’t confuse the limitations of a web browser with the limitations of Actionscript. No, flash is not going to build your next HR/Accounting application on its own (if that’s what you mean), but neither is Silverlight – nor is that the intent of either platform. Do you mean RIA’s? I would argue that the most “large scale” and widely distributed client applications on the planet are done in Flash/Flex. To be nice, I won’t even mention Adobe Air (who needs a browser anyway? lol). More to the point, Flash, Flex, etc… need to rely on server side scripting (ASPX/ASP, PHP, PERL, etc.) for the real large scale applications you are probably thinking of. That said, Actionscript can communicate with ANY backend language you want. How many do you think Silverlight will play nice with? *grins* Wrong #2: "ActionScript 3.0 apparently does better programming, but still too weak for complex logic" I have to scoff at this remark as well. What exactly do you consider "complex logic"? You do understand that Actionscript 3 is a strongly typed Object Oriented Programming language right? Here’s the short list of “complex” features of AS3 (copied from Wikipedia – btw), you obviously aren’t aware of (take particular note of item #9): • Compile-time and runtime type checking—type information exists at both compile-time and runtime. • Improved performance from a class-based inheritance system separate from the prototype-based inheritance system. • Support for packages, namespaces, and regular expressions. • Compiles to an entirely new type of bytecode, incompatible with ActionScript 1.0 and 2.0 bytecode. • Revised Flash Player API, organized into packages. • Unified event handling system based on the DOM event handling standard. • Integration of ECMAScript for XML (E4X) for purposes of XML processing. • Direct access to the Flash runtime display list for complete control of what gets displayed at runtime. • Completely conforming implementation of the ECMAScript Fourth Edition Draft specification. Wrong #3: "flash is a bit too slow sometimes" You can kill a machine with poor coding in any language. Please don’t blame the technology, blame the developer. I understand how you have come to this conclusion though, because I have witnessed it myself. Sometimes graphic designer types use it as a medium to get into coding - and good for them! Sometimes…however, this leads to poor results in the public domain, which has given some folks the wrong idea about what the Flash Player is all about. But again, in the right hands Flash/Actionscript can do wonders. To be fair, I do have some gripes with Flash in general though: 1) Not ideal for text based content. Let’s face it, content is king – not eye candy. 2) Debugging is a bit clunky. I would love to see a robust debugger – similar to Visual Studio (yes…a nod to MS) 3) It’s hard to find good talent out there. Mostly because you need a good programmer who also understands the graphic arts to be effective. This is a rare breed. In closing…let me ask you this: What exactly will Silverlight accomplish for the end user by ANY version (2.0 or otherwise) that Flash doesn’t already? (again, I stress what will it accomplish FOR THE END USER - no one but us MS developers care about CLR integration!) When I look at a nice flash design, my reaction is " wow, it looks so good, like a piece of art.." For silverlight design, all i can think of is "a nice computer program" Would it be nice if Adobe considers Flex+C# ! Pingback from Cristiano on Tech/Life » My Bookmarks For March 3rd - March 7th Very nice article.... the best part that Microsoft did is integrated silverlight with Visual Studio that fact alone means a lot and a huge + for this product. Hi guys, i have been listening to this flash and silverlight war thing it is interesting but i was just wondering that if i want to create a small website to run a flash game on that using silverlight can i do that. I know it may sound stupid but i am new to this flash and silverlight thing 在以前的一篇 文章中我已经说明了Adobe和Microsoft在presentation layer的竞争关系。根据一些资料总结的功能,我针对Flash以及silverlight做了一个比较的图表,后面我会针对每一个横向对比做出说明。由于国内外对Flash和silverlight的比较文章几乎没有,因此没法作为参考,文章中一些东西我不太确定的,请大家指出。 Runs on Linux Pingback from Microsoft SilverLight vs Adobe Flash | MundoTech.net casey said I'm not a developer, but my huge gripe with Flash is that they apparently have no plans to continue support for Windows Mobile. The latest flash player for WM is FlashPlayer 7. FlashLite 2.1 is based on Flash 7. FlashLite 3 is being released for Nokia phones, but in the Adobe Labs development forum an Adobe rep asked what people wanted to see. When asked about when support for Windows Mobile would come out, he responded that there were no plans for FlashLite3 for WM and wondered why anyone would want that. I want to view flash video (like MLB.tv - WMV in a flashplayer that requires flash 8 or higher) on my ppc but can't because Adobe won't provide that functionality to WM. MS just announced that SL is coming out for WM. If Adobe is not going to support a platform then I applaud MS for SL. I total agree with you. If Flash is not dead now, It'll be in a year or two. SilverLight has the backing of .NET and Microsoft. It's here to stay and make us developers' lives a lot easier. The problem with AS3 is that it's not ideal for desktop apps that need to communicate with low level drivers. For example, reading a serial port, usb port, writing Bluetooth stack, etc. Flash/AS3 is geared for games and advertising. Ever heared of Flex Builder, Flex SDK and AIR (still code-named Apollo in May 2007)? With the words of Dieter Nuhr: Democracy does mean that everyone has the right to have his/her own opinion, not that everyone has to have one! If you have no clue: just shut up! The only way to get Silverlight in the market, is to ship it with Windows - as always ... what MS wants to achieve with Silverlight is, to have the control over the foundation of the web, before domination of the OS market becomes irrelevant. There is absolutely no need for Silverlight with Flash / Flex and JavaFX as open platforms to choose from. Another important thing: Flex 3 SDK is open source under MPL, the AVM2 is under MPL, too, named Tamarin and will be used by Mozilla as basis for their next Javascript engine. It is based on ECMAScript4. With Flex and Actionscript / EMCAScript a free standards based web is assured, with Silverlight we are all dependent on one big company. The foundation of widely used internet services like WWW always have to be free and standards based. Silverlight is not about what you can or cannot do with flash. I believe Microsofts thoughts about the Windows Presentation Foundation and Windows Communication Foundation (which includes powerful tools when used with Silverlight) are for the developers to easier build rich applications for the web. Beeing a future-optimist about silverlight I don't see any problems running for instance Microsoft office as a web-application.. Pingback from ' + title + ' - ' + basename(imgurl) + '(' + w + 'x' + h +') i worked with flash and enjoyed my work. In flash 4 i wrote the first tiled based version of pacman and gained plenty of work for other game development. After the .com bang i moved to .Net for information system design. I welcome Silverlight, the .Net IDE is excellent. This has renewed my interest in rich web application development and i look forward to messing around with Silverlight Very premature to state "flash is dead"... Microsoft releases now the "first competitor" against swf format, and all of a sudden, for some techspecs, you come out shouting that... No way. If i can say so, this is the start of a fight where the benefits will reach all of us, designers to developers. Im not even by far stating here on a blind faith that adobe is the greatest and the flash will be the greater ever. But now adobe has reasons to care much more about its product, to not lose marketshare. That SL brings some improvements, it is just as expected, dont you think? Would microsoft release something crippled or lacking something that coundnt compete? Suppose that by a magical snap we can have the plug-in installed on all computers.. 100%. Could we say "flash is dead"? In terms of evolution, its useless to rewrite again the many features it has implemented since i dealt with flash 4. All places have schools teaching it, many professionals have their carreers made on and thanks to flash platform. Many workarounds and solutions have been achieved, and people got used to flash environment, action script and all of its stuff. And not to mention a "very little, insignificant, worthless" thing, is that many major sites and thousands of developers and producers are knee deep on action scripting and animating REAL and harder things on flash timelines... Lets not forget the sites like miniclip.com with thousands of mini games on flash, not to mention others... Let me be more clear now - THERE ARE THOUSANDS of programmers intented to learn action script and developing their games, like me :) Let me ask again... Suppose that the SL plug-in was in 100% of the computers, and regarding the scenario i described above... Could we say that "flash is dead"? On other blogs i read people with enthusiasm saying that ms is pushing SL to cell phones and BLA BLA BLA... Wouldnt that be obvious? Im not into stuff on cellphone development, but im not seeing ms beating Symbian OS... SO WHAT??? Will ms snap its magical fingers and make all our phones into silverlight lights? Just to remember the purists of this blog, most robotized-minds that calculate every word, that theres a scrict proclamation (by the author) up here stating that to him, "flash is dead". Dead as i know, means dead... OK!! So thats what im basing some of my args on here. So yes... If SL brought things that flash fails on deli vering UNTIL NOW, i think that the best bet is to say "adobe has its competitor". Now it has a reason to push harder the development, implement new stuff in order to remind us that the swf is the way we should work with. Even with billions, microsoft cant make miracles. Lets remember the first 2 years of "no profit" on xbox stuff, and the many 3 red lights problem that made users to get their 10, 11, 12 units of their xbox from the guarantee. So let the fight begins... People are getting excited with the new, as they ALWAYS DO, and surely we will see fan boys from everywhere stating whatever their minds allow them do see. One thing i know for sure. Programmers are program mers. And thats that. Almost never knew any programmer that has marketing knowledge, so i always heard that i shouldnt buy a new cell phone right now because theres a new platform of data transfer, which is 10 times faster, bla bla bla... Well theres a BIG DIFFERENCE between "developing new techs", "releasing new products", and "success on market and shares". We need to stay focused. As for me, im confortable on flash since i started to animate a square and create buttons on flash 4. Im not the beast on action scripting, but im getting used many many more. As a personal belief, adobe will run now, and possibly the flash updates will be more frequent. It has nothing more than close to 8 or 9 years of knowledge and experience on the run, and that cant be off just because a worth competitor stood on war frontlines. Lets wait and see... Laugh. Silverlight will not kill Flash. Reading through this your very bias to Microsoft like a true .Net loyalist. I'm not the source expert on all of these features, but on a few I'm very much so. Video- Microsoft is clearly playing catch up here. Flash video is extremely interactive and is now powering pretty much all the online video rave. Only people that use WMA's to stream live events are people that don't have the know how to know better. I deal with these types daily and its sad. What does microsoft know about video anyway? Outside of cookie cutter MovieMaker and other novice tools, they don't make a industry standard video editing suite, and they've been stuck in the "player" atmosphere forever now. Programming - All I can say here is your certainly assuming everyone is a .Net freak. I'm not and I perfer to use ColdFusion, a much more productive language, they doesn't require my social life in ruins to get a firm grasp on it as a professional. I've been down both roads, and CF is rediclously easier, and to date I've never needed to do something I couldn't do in CF. Dev Tools - Graphics wise the comparision is a joke. Adobe clearly wins as they are the sole superpower in bitmap/vector editing suite. Programing wise Adobe was playing catch up for awhile with Dreamweaver, as it was trying to be the jack of all trade and support all languages, though it still does this, its tailored more towards CF now and now even comes with Flex tools to make Ajax, something very tedious simple. I've been down the whole Microsoft line for this and their tools are pretty lame, outside of some good debugging tools, the web work flow is a joke. Must everything be built off the assumption your coming from a software developer background? Overall- Outside of Microsoft fanatics, I don't see anyone praising Silverlight. It doesn't make sense to abandon the multimedia gaint Adobe, that has created a solid bridge in their developer standard tools to client display, for a new player microsoft that has serious tools in the web game, outside of SQL Server, and .Net framework (but only as a launching point for other languages). Just because microsoft wants to play ball now, that Adobe Media Player is incoming, doesn't get them in the game they have to earn it. I've read most of this post (a lot of down time at work) and to make a statement like flash is dead is...silly. That in combination with "vista rocks" just makes me feel like this is a Microsoft add. Now that I think of it, this came up tops on my Google search... I think it is incorrect of you to compare Flash to Silverlight. If you are gonna compare Silverlight to something you should compare it to Flex. I am a "Flash Developer" but lately I've started to explore what flex can do for us as a company and I must say it has many advantages over Silverlight. The main one being that it is a finished product not a beta being stuff down peoples throats. We had some .net developers try it out and I can tell you with out a doubt we will not be using Silverlight in its current state. Functionality seems to be the developers resolve for using SL. However we are not in an "only" developer's world. We are in a world that has diversity and variety. We love to go to Art shows, theatre, and the movie industry is booming. Why? Because we humans love vanity. We live for our entertainment and for the weekend. If it were my choice as a designer, Flash as a eye candy tool, wins out over all. It is foolish to suggest that people want fuctionality now rather than "cool" animations. If this were so-say bye bye to Disney, etc. Come on people-Flash works. Use it. Microsoft is just getting into the game (smart business). Just another product to choose from. In the long run puchases and usability will be a matter of personality and preference. Much like Xbox/Playstaion/Wii. Which do you prefer. They are all doing well-but are still competitors. Once again, I say MS is just getting into the game. Which will you use. Maybe I'll use both Flash and Exression. I've played both on an Xbox and a Playstion-but for different effects. Funtionally Silverlights is better than Flash. But the problem draw back to the gus of Microsoft. The support for silverlight is not acceptable other than using internet explorer. This is the nightmare for web developer as many different browser are available, such as firefox and opera. So Flash still will be the major tools in recent years. on the whole Linux subject, Silverlight is going to have a problem, mostly because Microsoft has Windows, and Linux has Linux. Yes, Linux is open source, but the name is copyrighted. I personally won't be involved in this, mostly because it is going to become a ordeal (and personally I like Flash way better). And apart from this, animation is so much better frame based, in the sense of tradition animation, FBF, and tweening. in conclusion, FLASH WILL RULE! Is it possible to use silverlight as a standalone app? Then is it possible to use silverlight with full access to system resources/NOT sandboxed (in an intranet or as a standalone app)? I know wpf but wpf is too large and windows only. Please correct that information about Flash animation being timeline based. Flash animation IS TIME BASED as long as you use a class like Tweener, Fuse, or many others available. You cannot speak of flash without referencing these. Timeline animation is for newbies. I make scripted animation based on time, with the3 Tweener and others. first, let me introduce myself as one of those "rare" breeds in a sense that i am both a designer and a developer (somehow). i am a part time digital artist, and fulltime developer, and i get paid for doing both. i also used flash to create hobby apps (i.e. powerpoint-like presentations, tiny web-based app, etc.). now, onto my views.. Silverlight is great. the only thing bad about it is the fact that it is still in beta..wait 'til you get the final product before even jumping to conclusions about its weaknesses. i agree that Microsoft sucks at building applications that target the designers, but Silverlight's model of development eliminates that developer-designer conflict in a sense that developers stays with Visual Studio and designers stay with Expression (for now, who knows? there could be integrartion with Adobe Photoshop, but there are alternatives out there, and i would even personally recommend Corel as such alternative if ever..). i read somewhere here that the future of web is through dynamic web sites that gets the job done.. after all, it's what is important. i also liked the fact that google is leading because of it's simplicity. i for one stopped using yahoo ever since they started putting all those fancy stuff on their main website (and i won't even touch Windows Live Search). Silverlight integrated with .NET? The possibilities are endless here.. and people can even use Ruby or Perl as their code-behind for silverlight... so much for the proprietary ActionScripts... plus you won't even believe how Silverlight is more cross-platform that Flash.. i was really amazed of seeing a silverlight app on a desktop, on a windows mobile phone, and on a Nokia symbian phone running exactly the same. today, i still use flash because i really can't treat Silverlight as an alternative... maybe a year from now. btw, Vista really rocks..all you need is an open mind. (and the reason all that "allow or deny" exists in Vista is because there really are many stupid people out there, you won;t believe how many of them are there. i mean, basically, your computer can be malware free even if u do not have any antivirus.. it's a matter of how you use it and what you allow on it.) I run Linux, and both Apple and Windows (well everyone actually) doesn't support us. So we are usually left to wait or write our own versions of things. Now Microsoft hates Linux so is there any chance they will produce silverlight for Linux. So far only windows is supported. And if they do will they be fair and make sure it A) runs well B) has all the same features. I just don't like any one company to be gatekeeper. Also you say it uses Windows Media... which is still proprietary. I'm not saying use OOG but it would be nice to have a good (at least MP3 quality) open audio/video codec that could be globally used, instead of WMV/WMA. The only thing I hope is that this doesn't turn into ActiveX again. I haven't read much because I don't go into Flash at all, but I do like to read up on stuff. And before people say something, just cuz' I use Linux doesn't mean I hate windows, I just see flaws in leadership and judgment which exist in all aspects of tech, but I find a community as a more fair way to dictate things. Pingback from 1001 » Blog Archive » Silverlight vs Flash hi.. i think SL gives us good oportunities to develop in a different and powerful way. . and wich one is better? or the best?.. . well i just think that we are better in Flash.. it just cuestion of time..!! Lets learn to light up the web.. saludOos.!! I've been in the planning stages for a new site that's going to have some multimedia components. I compared Flash and Flex with Silverlight... I was new to both. At first I thought it was a done deal that I would use Flash since it is pretty standard. But ActionScript really got me down. It took me many hours to make a mp3 player from mostly cut and paste code I got from the web. I decided to give Silverlight 2 beta a chance and I've been very impressed. It has been much easier to write code that can be used in the long term. I was able to create a copy of my Flash mp3 player pretty quickly sans some of the animation elements. I felt much more comfortable adding more and more elements without the fear of creating a mess like my Flash became. I've looked into a lot of other aspects, like FlexBuilder on the Eclipse IDE and Expression Blend (which is a pain in the tuckus). In the end, I've decided to go with Silverlight. Hopefully there will be good cross platform support. I've worked with VB.NET and ASP.NET for several years. Writing DLLs and other backend code with VB.NET is fine (though almost any language would do). Desktop apps using VB.NET are considerably slower than similar desktop apps using VB6 -- can someone explain that one? ASP.NET is absolutely ridiculous; the forms designer is awful and having to simulatenously deal with HTML, Javascript, XML, Ajax, and the VB code-behind all in order to produce a business web application is literally painful. On top of this, Microsoft is continually "improving" all of their products and forcing us to buy the latest Visual Studio, .NET framework, 3rd party controls, etc. Now they're touting Silverlight and when you finally manage to put all of the right pieces together just to try it out, you realize you can't even drag controls onto the form designer in VS2008. Add XAML on top of everything else and MS's normal hellish process of compiling and distributing applications, and -- what's the point? Why are you people SO excited about this expensive mess? We gave Flex a try about a year ago and -- ahh! -- it was cheap, it worked immediately, it was easy to compile and distribute and it was inherently cross-platform. Aside from having to learn ActionScript (which wasn't that hard), I can't see any drawbacks. We've spent less than a few grand on licenses and that's ALL we've had to spend (haven't had a need for 3rd party controls and their charting add-on is great). Doing the same thing with MS products from scratch would have cost us a fortune. Sweet. So whilst you're listening to Zune, whilst your partner plays your XBOX, you can log into Windows and boot up Visual Studio - you'll have the portability to develop for Windows Mobile and the power to run apps in Internet Explorer. One problem - sometimes there are better alternatives to Microsoft products. RIA is just one of those areas where Flash wins. Just ask the world's population. Silverlight 2.x i think its a beta version - it runs under windows 2000. People asking earlier abt why win 2k is important - its because the very few who still use it are IT pros and have big voices they like to sound off at Microsoft! There have been many occasions where microsoft has backed down and then released software with compatability for windows 2000 including for example the remote desktop connection software. They will however never release anything like IE on win 2k above version 6. Its a way of forcing people to upgrade their OS even though its fine. (think "backports" repositories in Ubuntu - you won't have to upgrade your OS so much because people backport code and elements of the OS can ve independantly upgraded). My opinion on SilverLight? I have not seen one good example of its use. Enough said except one more thing - Microsoft is on a plan to get into other peoples markets and corrupt them and now it's Flashes turn. Microsofts XML "standard" is another good example of their strategy. So long as they play these games they can go to hell. The day they release a really good product and have it accepted by merit alone - thats the day I might buy it. personally , I think silverlight is rubish comparing to flash. Silverlight 2.0 seems to be very promising and flex cannot compete it in a longer run. So long as users have to restart their browser in order to install Silverlight we won't be using it. Even if we lose 10% of our customers it's not worth any advantages which Silverlight might have over Flash. The day Microsoft buys Yahoo, I will buy shorting the heck out of Adobe. Why? M$ will push silverlight so badly through YahooM$ portal, it'll make Silverlight the standard... or so they say... silverlight is the worst name ever. it sounds like a weapon in world of warcraft.....long live flash! Flash features: 1.) Browser Plug-in available across all browsers latest editions 2.) Streaming Video possible. Video delivery capabilites are available as .flv files which is optimized size. 3.) Handles in XML format data 4.) Both programmatic and timeline animation. 5.) Index by search engine (?) 6.) Runs in Windows/Mac and Linux Support 7.) Silverlight applications are not smooth as Flash's ones. 8.) Flash/Flex supports: sound processing, per pixel bitmap editing, bitmap filters (convolution, color matrix etc), bitmap effects (drop shadow, blur, glow), frame based animation (i.e. hand made), webcam, microphone, text input, built in file upload/download, local data storage, linux player BACKWARDS COMPATIBILITY for 10 years so far finally 1.1meg footprint… these are just a few features. Microsoft Silverlight 1.) Silverlight is compatible with a range of browsers, including Internet Explorer IE6, IE7 Safari and Firefox. 2.) Silverlight can do programmatic animation. 3.) Needs another ui creation tool Expression Studio from MS 4.) Handles all forms of data and dotnet classes 5.) Search Engine indexing is possible 6.) Silverlight XAML doesn't stream! 7.) Runs in windows only 8.) Alpha release. I do think that silverlight has a long way to come. But i see it being a far more plausable road for development once it has been adopted more. I have just completed a microsoft training course in silverlight/wpf and it is actually wonderful to work with. But i still find myself authouring elements with adobe tools then simply porting them over. I do like the way they have made Expression import AI format, Microsoft will win this fight because they have not yet lost any fight they set out to win, It is targeted at an entire team of developers rather than just the designer and is generally more flexible. There's a post from a while back by Quentin that says "developers can't design and designers can't develop". This is such a narrow point of view. I can do both. But more to the point about this article... The statement "Flash is dead" is closer to reality than one might think. I'm part of a team developing an MMORPG that's supposed to run in a web browser using Flash. ActionScript is supposed to be Flash's native language, right? Well, it lack one of Flash's most fundamental issues: LAYERS!!! Anyone who has used Flash knows that layers are the very first thing you need to put some order in what you do. In this game we're developing, we're creating a world that uses movieclips, which are imported from independent SWF files. So far so good. Now we import the avatar... another movieclip, no problems here. But if the avatar walks around a tree, for instance, it was to "be seen" in front of it and behind it, depending on where he stands. The logical solution would be to swap layers or something like that, right? Wrong! ActionScript does NOT include a single class, member or method to use layers. THEY DON'T EXIST. The only thing that "programming language" has is the index of the graphic element. Now suppose you have 20 avatars and the world has yet another number of graphic elements in the scenary. You have to check against every element there on stage. Of course, this kills performance, specially if you include real time communication among the users, movement and every thing that an MMORPG has. Now, I've been doing some experiments with Silverlight and I've reached a simple opinion: Silverlight is better simply because it uses a REAL language, or better yet, a real developing environment like .NET. Anyone want to compare it against a real poor scripting language like ActionScript? A simple script aganist a framework that compiles your code. Is there anyone who could actually make that comparisson? Flash is nice for designers, that's it... Silverlight is nice for designers and powerful for developers, especially when you consider something like XAML in the equation. A simple XML-based language that actually helps you understand how to code in C# or VB. I've tried it, it works. True, Silverlight is in its initial stages. Who can actually say that Flash was perfect from day one? Besides, whether we want or like it or not, Microsoft shows the way things are done. I'm amazed they didn't come up with something to compete against Flash sooner. That only means that they took their time investigating and preparing. That's good thinking from any perspective or point of view. Oh, I was forgetting. The try-catch block? Doesn't work in ActionScript. I prefere Silverlight. I hope the next game we create is using Silverlight... or a 3D game.... anything's better than Flash!! flash is better period. MS always trying hard to extract money from copying other famous application. They wont win over Flash . Hope they'll lose a large penny for doing a trash silverlight . Pingback from proclaimers this is the story Visualization is important in any type of games/apps. The timeline metaphor is best suited for that kind of scenario rather than codetweaking. Flash way of approaching a game/apps has terrific advantage where u have set of inbuilt tools storyboard kind of scenes and visually reusable movieclips. These logically fit into our scheme of things. Thanks Jesse. It is a great information on these two formats. This inspired us to write a similar article that We have written on our blog which reference some of the points noted by you, with reference credit note clearly mentioning that these are your findings. I support flash. I'm a web developer for several years. I'm a creative art director. The decision Is simple, Microsoft realized that flash played a very serious role in the future of computing across all devices. So they came up with silverlight. Why didn't they think of this before ? Are they just trying to survive knowing that flash will rule television and mobile delivery in the future ? They want IN !! I support macromedia for bring flash to unbelievable heights and setting a foundation for future applications and content delivery. oooh man, Mr. Bill Gates.... Nolasco said it best, I swear I knew the same thing. If M$ buys yahoo, they will use it to push their silverlight product. forcing users to install it or key features will not work. I'm loosing respect for Microsoft because alot of their products are just imitations of other successful ideas. Here is my opinion/ I was curious and start using blend 2.5 preview , This is a hugh step forward from microsoft The UI is easy to use ,better than flashc3 ..maybe ,on some usabiltiy.. but as a designer it's easier because blend creates alot of (code xaml) for you. ex. interactive a button the event trigger dropdown and choose a timeline for the event is great. changing the objects look and colour in the scene is also better than flashcs3. and importing Obj 3d objects is fun and i want to rotate a 3dobj in a timeline when i press a button. BUT then I lost interest , Silverlight 2 doesn't support viewport 3D.. Flash 10 player can rotate in Z space and when Adobe comes out with their Thermo application that will replace flashcs3..silverlight will be a step behind.. Silverlight, Flash... Which one to use? This isn't silverlight vs. flash. This is visual studio/silverlight-expression blend/asp.net/ms-sql vs. flash/flex/eclipse. Sorry folks, but the former is just a better technology overall. Why? Because Microsoft knows how to write the very best tools for programmers. It has done so for years. That's a plain fact. Does flash have linq? does flash have a native database? (you probably have to end up with some php/mysql hack). Also can you even begin to compare actionscript with c#? can you debug and put breakpoints in both client side and server side code such as with asp.net + silverlight? Say you want designers and programmers to work together, with microsoft you have plain text xaml. How are you gonna source control a flash file?? c'mon people... get with it. I wholeheartedly support Microsoft. Sure, it's easy to "hate" on them. But they sure as hell make the best programmer tools. All you flashers out there you can keep making those little interactive ads. But as far as enterprise applications and RIA's go, step aside. From a developer perspective, I don't have any desire to be the last company making buggy whips. At one time they were very useful, then along came automobiles. Pingback from Silverlight vs Flex (or, Why do developers hate flash?) « Justin J. Moses : Blog Yesterday I was invited by W3Quebec to do a Silverlight presentation . I was asked to introduce Silverlight Pingback from mobbiq blog » Flash Lite vs Silverlight Flash <--> Being around what 10 years or so, excellent adoption, however with a programming model that was tacked onto an animation platform. Result: Excellent animations, Games, intros, some small applications, but not heavy RIA's, many of these have gone to AJAX based beasts. Silverlight <--> very new, lacked features and controls in Silverlight 1.0, now onto v2.0 which is a dramatic improvement in a short time. Silverlight is built from the ground up with an excellent programming platform (.Net), has top notch IDE integration (Visual Studio), and improving on the Designer Environments (Expression Blend)...Built for real application development, not just animations, or games... The technology is superior to Flash/AS. They built solid foundations upon which to improve. As to adoption, well who hasnt adopted windows.. , say what you like about XP,VISTA whatever, but they still have the market, Silverlight adoption will be tied into developers using the product, people will install the plugins because they want to see the content, eventually it will become transparent pretty much like how flash has become. Flash may not be dead, however people have being wanting RIA's for sometime, many have tried with AJAX, which is just too klunky, and flash is cumbersome to say the least. Great for animations and games but as Ryan above said, time to step aside. Another Reason why silverlight will take off is developers will lead the way, as silverlight is geared for developers (where flash is geared for designers, though silverlight is improving in those aspects) . Applications can be made by developers without animations and design, however it is much harder for a designer to design something and provide functionality for it without the developer. Building an app with functionality can be done without design, but design without functionality cant. Silverlight makes it simpler for .Net Developers to provide rich media interfaces, this could dramatically shorten the development lifecycle between backend business logic and user interface integration. I've often worked with PHP <--> Flash, ASP.Net <--> Flash because it was the only means of providing a nice interface, however it's not a clean solution and tiresome to debug..... Bring on Silverlight 2.0 and XAML!! Yo My BACKGROUND(){ I've been programming for over 20 years and I've mastered many languages ranging from raw machine code on a z80 processor to c# on the .net platform. I write applications/tools/games/build big brand websites and I also do design and special effects for movies. I started using flash when i was a vb programmer for a big us software company. This debate reminds me of when I was a kid, I had a sinclair ZX spectrum and my bother had a commodore 64, we would argue for hours over which was better. Later I bought a Nintendo and suprise suprise he bought a sega. As a user of both .net and flash here's my thoughts - and sorry if you get offended although some of you deserve to be. 1. Actionscript is not a bad language why slag somthing off just cos you don't have the skill to use it? some of the best content on the net is written in actionscript . If you can't get stunning results out of this language you don't have any creativity, don't feel bad if you need someone to hold your hand. 2. Flash and Silverlight which is better - neither they both work differently and are both awesome programs, once you get to know either you'll get great results. The question you should ask yourself is not which is better, but which products do you prefer to use adobe products or microsoft products? As far as which has the better language? who cares I used to write games in pure assembly in the old days and there we're no objects, no variable types or any of the kiddie code we get these day - just numbers and registers and we still made great games and probably pushed the programming boundries alot further to. These days a programmer can pick up a new language in a few weeks since they are all nearly plain english anyway, so stop being so bloody lazy get out of your little square box live a little! Lastly Flash is dead - if you are thinking of ditchin flash cos it's too hard and going over to silverlight, hurry up and bugger off so I can up my rates for flash work. cheers punk Silverlight is only in second version,imagine the future potencial...bye Flash good comment Rodrigo flash 8 was made by macromedia flash 9 was made by adobe, see the difference/improvment/performance between as2 and as3 ... 'this is adobes 1ST version, imagine the future potencial ...bye silverlight' ;) sounds like the old chestnut - whats better mac or pc - I'm guessing everyone here thinks pc cos it means ms - i'm pc btw and i love visual studio, but i'm not afraid to use other stuff to. maybe change the title to 'PC is dead' :) no offence dudes but this post is over a year old and guess what youTube, myspace etc are still using flash christ get over it - the only thing silverlight represents is millions of .net programmers being able to produce interactive web content yeesh can't wait for that ugly stuff :) Developers are douche bags. They all think they're some sort of geniuses about every subject. I just had to give one of our designs to a .NET developer instead of one of my Web Designers, and the stuff he is making in Silverlight looks like crap. So now we're going to have to learn Silverlight just so we can go in and fix our design that he has destroyed. Silverlight sucks, Flash sucks, I'm going back to writing on a paper pad. You're all crazy. :) (转载自bbs.blueidea.com/thread-2773212-1-1.html) 在以前的一篇文章中我已经说明了Adobe和Microsoft在presentation... actually silverlight is dead, for the same reason zune lost trying to beat ipod for the same reason every Microsoft product lost trying to beat a competitor, except for the OS industry, you only win the lottery once Its been a wonderful time reading you posts - this is to all of you. IMHO, it sad to hear someone say 'flash is dead'. The point is from a designer perspective, Microsoft will never compete with Adobe. I heard colleagues say they dont like .NET because the application always looks alike - well thanks to VS Studio 2008. Thanks yo punk - its all about your creativity. I personally have done wondeful desktop and web application with flash and actioncscript. No matter how bad you think actionscript is, if you are smart and creative, their is always a way out. Again, a fully functional application with a poor interface is qualified to be called CRAP!. Before anyone starts making noise about SL, i want to see MS version of products like photoshop, fireworks, illustrator, and others. It's not always a win win thing. Flash cannot die, it will only die when we all go blind, as long as people continue to cherish quality design and more. Flash is sure to stay. Finally, one thing we all stupidly(am didnt mean to say that) dont talk about is -- wen these companies realease products, who really suffers? Jumping around learning new products is not always an easy experience, especially when the game is changing. For those of us who take the pain to learn all, good for us. For those who are loyal, as long as what you know solve's your problem. Damn wot any body says. I personally, am working on flex now and would like to try out SL. But i would prefer flex on the long run. Adobe is a great company, although they should learn from Microsoft. Flash is not dead-- for those that care to know wow...where to start. I actually read most of those above me. jesse ezell : My only objection to your comments relates to your statement that using flash/flex as a development environment was bad b/c you are limited to using actionscript, and not your existing skills. I dont see how that's a delta. Fact: Actionscript and Javascript are both ECMA script v3....and Actionscript is getting close to compliance with ECMA script v4. If you dont know javascript, you arent a web developer, and if you do, you already know most of actionscript. By doing either, you are gettng better at both. Quentin: Absolutely correct. Dont compare SL to Flash...compare it to Flex. qwerty: Absolutely correct. MS builds better dev tools...and MS devs are in abundance. But wrong about this old notion: "Flash is an graphics/animation tool that developed a programming model". Flash "was" stictly a graphics/anim tool....but Actionscript was rewritten from the ground up as an OOP Languge with Actionscript 3.0. Use Flex if you dont want graphics tools at your disposal, use Flash if you want the ability to create as you develop. Things change fast. jtadros : Tisk. Tisk... MS does indeed develop superior developer tools imho - but you are way off the mark thinking they can take graphic designer market share away from Adobe. MS will not make it very far making graphics tools. Its just not going to happen folks. In fact...keep it down. You could lose an arm talking like that around the creative dept. Mao: You cant swap layers...you swap depths, silly. And Try/Catch works perfectly. ....what game was that you said you were working on again? casey stalnaker: You sound like some kind of Actionscript genius. Who are you and why dont you work for me? punk: Amen. Well said sir. John: Yeah sweet...Silverlight will integrate with VS! Now what in the hell are we going to do for graphics? (call punk!) .net coder : What an objective opinion. Much respect. ...thats all from me. Its been a nice read. It's 6/11/08. Do you know where your Silverlight is? I havent seen it. "It's 6/11/08. I havent seen it." *laughing* flash = for script kiddies / artists that create interactive ads and other tiny little toys. c#/silverlight = for rich media / business application programmers that are just WAITING for sl 2.0 to unleash some serious shit on the web. to each his own. I have the feeling you are a little bit biased in your post... why should one application just "kill" the other? don't you think they may both benefit from co-existance? Microsoft is a dominant monopoly, and it will not be hard for them to spread silverlight. On my side I love to code actionscript, and if flash will have to disappear and I should learn to use another microsoft crappy piece of software, I will chase that old dream to become a fisherman :) Thank you very much for Lighting 101. this site has provided me with an understanding of lighting, It's just what I was looking for! to Me flash is better. Just wait 2 or 3 years you can see you wrote a wrong article . anil, that's because you're a script kiddie 13 months on and I have yet to see any decent Silverlight applications. Everything I have seen looks like a Flash rip-off from 2001. Flash/Flex/AIR are delivering NOW and have done so for years. All we hear about Silverlight is how great it'll be when the next version is out. this Article is not accurate "then calculate the matrixes required for each frame along the way" what a hell is he talking about who need matixes on flash , did you use Flash for one time How much MS pay u? Stefan, that's right. Silverlight 2.0 hasn't even shipped and everyone ALREADY knows it's gonna beat the pants off Flash/Flex/Air. It's all about Visual Studio. If you're an artist stick to your flash. But we all know where the next gen web applications programmers are going. Silverlight 4 Life!!! ok i agree 100% percent with the author that it is so easy to integrate SL with the .net framework because, 1- you can use the same classes to compile your SL and create an application that integrates graphics and server side seamlessly, and 2- because you dont need to learn a new language to do it. On the other hand i think that the author is not being fair to flash, microsoft pushed SL just because they wanted to compete with FLASH, and like some other users said, because they HAD TO if they want to stay in the WEB side of things, flash started as basic animation software and they integrated the scripting little by little never thinking of competing with MS on the development field, .Microsoft is all about application development nothing else and even though they have been doing this since the birth of windows they still can pull out a system as stable as the OSX os why because they are too lazy to develop a new system core or a new way of doing things, they are known to build one thing on top of the other sort of upgrade over upgrade over upgrade same thing is going on with with SL. scalability is good and its always good to re-use old things to build new things but i think they are way too much over their heads and it is just getting TOO OLD, Microsoft needs to change, whos on this with me?, microsoft sucks at design, they suck at pretty much everything that has to do with nice graphics and user interface. now that said. why doesn't the author compare microsoft's design software to any of the adobe applications for example photoshop to PAINT? if you are gonna pull the .net framework to SL then please do so with adobe and understand that adobe is a DESIGN oriented firm moving towards application development, Microsoft is an application development corporation always trying to compete with the software of newer smaller companies that bring the new trends to the market. examples: Opera i believe was the first browser to integrate tabs then Firefox started to gain ground with tabbing and guess what microsoft makes IE7 with the tabbing feature Apple comes out with a new OS microsoft releases vista Yahoo designs a new email interface for the web(I know it was similar to the interface of that of microsoft outlook, but hey that was a reason for microsoft to have develop it for the web before than anyone) hotmail does the samething after now microsoft feels like they need to come out with a similar flash like product and they do, and since they already have all the scripting languages to back it up and the power they have on the industry they go ahead and release SilverLight, .NET developers are of course happy because they dont have to learn anything to adapt the new software and i think that that is just plain boring and laziness on their part. and if i offended you please get over it because im the same, i started learning ActionScritpt 1.0 then 2.0, transitioning to 3.0 and web standards CSS, javascript, and PHP and im too lazy to start learning .NET why because i can do anything with FLASH PHP.... why in the he|| do i need to learn .NET.. so we are all in the same boat on one way or another, thats why there are the programmers, designers, animators, writers, and the whole bunch of other 'classes' that are needed to create the best Applications. by the way 'punk' well said but i believe you forgot to close your function/method whatever you call it.. " I thought i was done but i guess im not... to 'Mao' up there first you say this: "There's a post from a while back by Quentin that says "developers can't design and designers can't develop". This is such a narrow point of view. I can do both." then this: "The logical solution would be to swap layers or something like that, right? Wrong! ActionScript does NOT include a single class, member or method to use layers. THEY DON'T EXIST." I pretty much suck at everything but i do know one thing.. you either never used flash before, never cared to learn AS 1.0 or 2.0, you just go by what other say or you just type in a blog and say whatever you want becasuse you can. I dont think you do flash at all and i dont know what type of programming you do but you leave no room for credibility with your statements. layers "THEY DON'T EXIST." since flash started to use the swapDepths() method for changing the stacking order of elements on the screen layers never really existed, layers are just visuals of the stacking levels on the screen they are there for the sake of users like you. if you create animations in flash then i know where you are coming from.. for AS 1.0, 2.0 you use avatar_1.swapDepths(avatar_2), for AS 3.0 you simply use something like this: your_tree.setChildIndex(your_tree.getChildAt(X), your_tree.getChildAt(Y)); and you also mentioned that you dont want to search through the stack/index because you waist resources, give me a break, be careful you are going to crash your game. Again i quote you "This is such a narrow point of view" was this said for yourself, did you really mean it?, I dont understand how are you even developing a game if you can not find out the simple solution of the 'layers' in your application and then you go and try to smash FLASH for your mistakes. I would never have said anything if you wouldnt have said that you "DO BOTH" (design & development) and since you sounded 100% confident of your statement and later came up with some crap about flash, i had to step up.. i dont use or care to use SL, at least not for now, i cant say anything good or bad about it until i learn the tool and really know what im talking about. I found a lot in the original article to be interesting. However, the notion that VC-1 and WMA are standards of media are completely wrong. These are Microsoft standards. Media Industry has uniformly moved to h.264 and AAC. Maintaining legacy SD formats you mention provides unacceptable quality and compression in the march to HD. Microsoft cannot reach out to the creative community without being greeted with laughter and getting their hand bit off. Their tools and technology are a non-starter. That mountain is infinitely higher than defeating Flash. I'm definitely not saying Adobe has all the right answers. They don't. Microsoft will have significant difficulties in new media as long as they believe left-brain can out right-brain the right-brain technologies. It just doesn't work that way. Who will "win?" The answer may well be -- neither. Have been developing for 10 years - basically the .NET developers on here are all reacting out of a defensive posture - how is it that for ten years we heard that animation on the web was really bad - but as soon as MS releases a flash competitor - oh - its just brilliant. MS developers have ALWAYS been so far behind everyone else its sad - but to then talk themselves up is just hilarious. The comments on here regarding flash are from people who absolutely have not leveraged the full power of flash - But the most ridiculous notion is that developers are going to dictate to designers what product to use - you have a full suit of integrated media development tools from after effects, illustrator, photoshop, flash, premiere etc, etc and you are trying to tell us that all this will be dropped to satisfy .NET developers pigheaded ignorance ? Please people - MS has been playing catch up for ten years - silverlight is not even born yet - while flash is alive and dominating - MS is haaaaated - Adobe is loved. It always frustrates me that people, coder (i'm not picking on you here but you are nearest), have some kind of irrational hatred towards Microsoft when in reality they have done something really great here and are continuing to do really great things. My organisation is at the forefront of video on demand both in europe and around the world, if you use VoD you almost certainly use one of our products. critical (but not always possible) to our deployments is cross platform capability. around 75% of our solutions are java based, but we are a multi skilled organisation. Something we as an organisation are against, is the kind of platform bashing evident here, we are heavily into cross polination and a lot of our java developers have been involved in .Net and silverlight and a lot of our .Net developers have been moving between java and .net projects for years. we are also heavily into using flash and flex but silverlight has really opened our eyes. Just a few tidbits, our metrics show that configuration problems are the most time hungry issues there are with java projects, and also if Microsoft were an open source platform it would be MUCH more compelling as an enterprise platform, but also most enterprise customers perception is that MS is not up to the job when all the evidence clearly points to the fact it is, but its also hugely expensive. Java developers love visual studio and rightly so, its the most complete IDE there is bar none. .Net developers love the bleeding edge OO and pattern implementations in Java, IOC etc. one thing i sincerely hope we are is objective. Coder, did you not read the article above, surely this guy has more objectivity than most when it comes to comparing two products? silverlight is great, it is supported by a first class OO framework and will support DRM (hopefully in 2.0 RTM) some of the things that can be achieved are great. Weve recently done a demonstration for a very large US based TV broadcaster where we have implemented Silverlight vs Flash and this has really exposed some of the weaknesses and strengths of the products. Flash/Flex is mature, there are lots of known patterns for solving problems that the framework doesnt readily support. Silverlight is very new but has a much more substantial and robust language and a much more common sense structure, but hey, if they werent avoiding technology because of the vendor then they will have noted comments such as Jesse and what the flash community was and is bemoaning about flash, that makes great sense.. and now we have silverlight, to be honest i dont care about the vendor, this is going to make managing RIA content much more enterprise like, we can re-use code, create rich frameworks and easily and readily port them between projects, and even have first class Unit testing support, these are compelling reasons for us i dont know about anyone else. as a final note, i;m not sure who likes adobe, but we arent big fans here, their US - Europe pricing is a joke and their attitude is pretty set in stone. but thats just my opinion, i still love CS3 ! coder, or should i say "scripter". If you've been developing for 10 years, or so you claim, you might be aware that visual studio is arguably the best development platform around. You are freaking kidding me if you're saying "MS has been playing catch up for ten years". Open your eyes, douche bag. Coder wins the fool award. Flash was written in C++ using... let me guess... Visual Studio? That's an MS product btw. I think it'd be interesting to see a fast svg / next-gen-javascript solution so at least the rest of us who can't afford VS or Flash can still develop rich web apps.. As for who will win: I think we'll just have both technologies battling it out for ages- we have already seen the start of it with the whole google vs yahoo maps ajax vs flash war. It'll just be a pain for the end consumer who will have to install and keep up to date three different plugins, and possibly adobe air, google gears, oh dear.. the nightmare! Vishwas turned me on to this article , by Jesse Ezell, pointing out some of the shortcomings in Flash that are addressed by Silverlight. Unlike most blog posts out there, I found that the comments actually added a lot of good material to the discussion Just don't forget to get the updated js. so it can work with Firefox 3! I just want to say, All the guys who loves SilverLight, they should not say that flash is dead unless they wrote any code in ActionScript at all. I would not comment on Silverlight features etc., as I've never tried it. I would not comment on it just on my readings. Flash ActionScript can write telnet client which directly talks through telnet protocol, It can talk **directly** with Pop3 etc., mail servers. So with these kind of powerful language, one can not say that it's garbage to throw in dustbin. Macromedia / Adobe spent many years to develop it and I am sure Microsoft will even require at least few years to establish the silverlight. And these *few years* Adobe won't sleep. Microsoft can spread it, along with operating system but dont forget the other major tool Adobe have which is PDF reader and which exists for all operating systems. They already embedded flash player inside it. About the timeline concept, even Microsoft will implement that, believe me it would be tough concept for .net developers to understand it who are hardcore programmers and never went on time axis. So it would not be that much easier learning curve for controlling motion from code with mass of .net developers even. So I would just say, Lets wait and watch. Allow the time to decide who wins, who dies and who survives. And even time never decide based on few people's decision. I don't think Microsoft developed SilverLight to crash Flash. They've developed WPF to allow the operating system to run Vista graphics in DirectX. They've developed WPF to allow designers to design and programmers to program - which is very reasonable in the Agile age. After developing the browser subset (WPF/E) and releasing it to the community, many early adopters adopted the technology and named it a 'Flash competitor'. After that, Microsoft saw the light and started the global rollout. Maybe Adobe should think about a Flash.NET version with ActionScript.NET (and C# and VB) support. anyway, gin thee discharge theretofore, ipod get rid of is turns out that attractive ipod nana like rub wherever [url=dixnula9.netfirms.com/index.html]holder ipod nana[/url] unfreeze applications, gin belkin turns out that become angry ipod nana after pungency tho kneading applications: You ipod nana get rid of gash thee facing lightweight, nana keyboard, unfasten yourselves draw whose authority ipod nana subtle ipaq, gin those pda who economical make oneself scarce like wherever s it: I've been a .NET developer for 4 years at an online marketing agency. I was just recently promoted to head our rich media (flash, silverlight, etc) team. I don't care either way if one or the other "wins". Our developers are plugin-agnostic and use whatever tool best fulfills our client's needs. I will say this--we have a couple of Flash developers who write ActionScript 3 code that is more complex than any C# I've seen our development team write. ActionScript 3 is seriously strict and very powerful. Unfortunately the Expression Suite is sadly behind the times when compared to the Adobe Suite. In the marketing world, eye candy is king, even (unfortunately) over content. Companies pay big bucks for cool looking animations and interactive websites. In my opinion, the day people don't like pretty things is the day Silverlight will dominate. Unless they give up on Expression and tie Silverlight into the Adobe (tried and true) line of creative products like Illustrator and Photoshop. Also it may seems irrelevant, but believe me or not, if Microsoft succeed in buying Yahoo!, the silverlight will be the winner and its market share will be at least 75% in less than 4 years. Otherwise, there will be room for both applications at least for 7 years and perhaps none of them will die before another new technology become dominant. Ultracet addiction. Ultracet. Is ultracet a narcotic. I agree with Robert Pabst that Microsoft didn't set out to build a Flash killer, and therefore disagree with jtadros that Silverlight is a re-engineering exercise. Silverlight is really an extension technology to WPF which was primarily developed to replace the suite of desktop UI platforms: User32/GDI32, Ruby (VB UI model, not Ruby the programming language), Trident, and Windows Forms. Now, I don't doubt that they began thinking early on about aiming for a technology that could be used for both Web and Desktop development, as Microsoft has been moving in this direction for years, but I don't believe competing with Flash was their goal. I'm not a Flash/Flex or Silverlight developer, but one of the biggest advantages I see that Silverlight has over Flash/Flex is that both WPF and Silverlight share a common programming model (.Net) and share the same markup (XAML). This point may have received some mentioning in the course of this discussion, but I don't think strongly enough. Pingback from 64 bit silverlight Zolpidem online. Zolpidem fedex. Zolpidem overdose. Zolpidem. I have been working in Flex for one and a half year. I have evaluated silverlight for using in my project. I think following are the features of Flex that are currently missing in SilverLight: * Binary comunication Support for four languages that Flex provides namely java, .Net, Ruby, PHP. * Server Pushing mechanism to one particular client (browser). * Flex Data Management Services that provide auto updates to Client browser UI when any user updates that data. * LiveCycle Enterprise Suite that enables end user to make workflow. Regards Imam Raza Senior Architect Folio3 Pvt Ltd If someone was to develop a new 'web to print' consumer website to launch in 2009 and had to choose between Silverlight vs. Adobe Flex/Flash then a) what are the predictions for Silverlight personal computer penetration in early 2009 (Flash being 90%+) b) what is Microsoft saying they will do to promote to consumer c) would a lack of penetration not make the business model very risky when trial/adoption over the first six months is so important. I have met with some very talented developers who develop for .NET and now Silverlight. However, I am real nervous about the market penetration issue (not everyone downloads new applications when suggested in coming to a website) so I just wanted to get some thoughts. Thx. Elevate6 You got it all wrong! The Adobe team is heading to the right direction because they are targeting all OS"s. Windows is now the dominating desktop OS but that may change. Linux and MAc are not sleeping so who know what may happen in the future! Soon Flash will have development tools for all platforms so that's a step ahead! This clonning paradigm Microsoft used over the years may not work this time! Window units could circumnavigate fundamental air in mean pelf does it sometimes set upon think rationally to use a window air conditioner rather than than a innards everted air conditioner. [url=air-conditioner-window-unit.airconditionerwindowunit.com/index.html][u]air conditioner heating unit window[/u][/url] Air conditioning units are close at hand in novel sizes, cooling capacities, and prices if you inadequacy an air conditioner for a put order, you can go for a window ac. CarnageBlood: do you know anything about Silverlight? It's a cross-platform release. Yes, Microsoft are now writing software that works on other operating systems. When I first read about Silverlight, I thought "this is the future". Nice to read an article that agrees with my persective so strongly :) Ok, it's July 25, 2008. I've only encountered one Silverlight app in the last six months of web surfing. Any web startup using Silverlight out there? More likely they are using AJAX or Flash. For MS to succeed with Silverlight it has to support the OS X platform in terms of developer tools and the .NET framework as vigorously as the Windows platform. But we know they won't (eg. IE for Mac, Messenger for Mac) . :( Sad, really, because I really like .NET. I know that with Flex I can have the same app run the same way on Macs and PCs. Is this true of Silverlight as well? Specifically will Silverlight on the Mac run any code that Silverlight on the PC can run? What is the level of .NET Framework support on the Mac? I have not seen one Silverlight app in my browsing experience, but I still see plenty of Flash. If worse comes to worst for Flash, it'll likely still have a following amongst game designers and the like on portals such as NewGrounds, FlashPortal, etc. from reading some comments what i understands is this: 1. first of all why all people fight on technologies that are not even open source to talk about 2. as long as Google YouTube will use flash and developes packages under flash the flash will have something to say 3. although i accept totaly that everything microsoft bulid will be well organized with other packages that is has but i've to remind something although if you see the WPF videos you'll understand that microsoft is strongly willing to make a reveloution on graphics but note this once you move into microsoft world you even can't make staregy of your products there isn't any space for innovation in microsoft products (the current web applications proves it; all major web technologies of today comes from other companies rather than microsoft AJAX, video streams idea,...) microsoft is just a well organizer of technologies just because is has a very very strong standind point not moving point. NO PHP writer would accept to deploy silverlight instead of even nothing From the purely technical point of view, Silverlight beats Flash hands off and blindfolded. There's a subtle problem that keeps me holding on into adopting Silverlight. Adobe is an OS independent company, therefore they have the best interest in promoting their product to the widest market as possible. Microsoft is the maker of the Windows OS and the maker of Silverlight. Once Silverlight becomes popular, how long would it take for them to drop support for, let's say MacOS, older version of Windows, and so forth? They can use Silverlight as a tool to force people into using their latest OS. They have already done that. Once upon a time there was Internet Explorer for Linux, quickly dropped once IE became popular. Giuseppe, you make excellent points. You are quite intelligent, for an italian. Мучительная и сладострастная пытка языком и горячей киской HEY MY PEEPS, silverlight is cross platform... it runs through a browser! DUH! Silverlight is for developers who adhere to standards and organization Flash is for a bunch of techno savvy artists who are more concerned with the final outcome, than they are with the scaffolding of the program.. cut and paste, and fingers crossed we'll see how they do with actionscript 3!!! ALSO i think its important to note, people will be forced to download flash player 9, and learn actionscript 3 so I think its a head to head battle, the only thing is that silverlight came at a time when the web has matured, flash was there at the begining... so up until the latest rev, flash was built on a wishy washy platform.. I'm learning Silverlight, and forgetting about flash.. OH YEAH one last word that sealed the deal with me::: INTELLISENSE!!!! Flash dead ? No way...check this video on youtube... Flash cs4 solves most of the main problems posted here... Опрятные девушки 8 и 30 лет РФ снимут комнату или квартиру [b]надолго без посредников[/b]. М.Павелецкая, [b]Серпуховская, [/b]Добрынинская. тел.: [b]8-926-205-16-77[/b], ICQ: [b]465208465[/b] Уважаемые Администраторы, просьба, не удаляйте это сообщение ближайшие несколько часов. Заранее благодарна! some silverligth version is suppport by win2000 ! For those who say "I haven't seen any Silverlight apps" perhaps you've heard of NBCOlympics.com which is using Silverlight for their video app. Try and click on a video Is the editing of C# available within Blend? Flash has an integrated environment for both design and development. What is really going on in South Ossetia ossetians.com/.../news.php Не помню на каком ресурсе, но всёже.[url=].[/url]. гдето читал, что выходит в свет старый и обновлённый [url=]журнал[/url] "Мурзилка", надеюсь они опять его не испоганят, или это вымысел? [url=]продвижение бизнеса сайто в разработка раскрутка продвижение сайтов установка cms[/url] Продвижение сайта за 17499 руб Наше лучшее предложение: Предлагаем Вашему вниманию самый оптимальный пакет услуг по продвижению сайтов в поисковых системах от компании ХостМакс. Мы сделаем Ваш сайт посещаемым в короткие сроки (от 6 до 8 недель), он будет занимать лидирующие позиции в поисковых системах (Яндекс, Рамблер, Google) по необходимым словам, а благодаря оптимизации сайта этот эффект будет сохраняться долгие годы. Почему мы? Наши преимущества: * Наши специалисты максимально грамотно спланируют Вашу рекламную компанию, чтобы деньги, которые Вы затратите, сразу же начали приносить результат. * Быстрый результат продвижения сайта, ведущий к увеличению числа посетителей Вашего сайта, а значит и количества Ваших потенциальных клиентов. * Ежедневная работа над сайтом. Постоянное корректирование кода и текста для улучшения результатов. * Постоянный анализ сайтов конкурентов перед продвижением сайта в поисковых системах. Своевременное внесение изменений, для получения лучших позиций. * Долгосрочный результат. Мы не используем методы временного улучшения результатов в продвижении. Все, что мы делаем, будет приносить плоды долгие годы. * Только честные (белые) методы продвижения в поисковых системах . Вашему сайту будут доверять. * Постоянная, кропотливая работа по поиску и подбору близких по тематике ресурсов, готовых разместить ссылку на сайт. * Постоянное написание новых уникальных статей по Вашей тематике. * Постоянное пополнение базы прямых ссылок на Ваш сайт. Только качественные ссылки на Ваш сайт. Сайту будут доверять, как надежному и интересному ресурсу. * Вы приобретаете целый комплекс услуг по оптимизации, продвижению сайтов в поиске и рекламе в Интернет за сравнительно небольшие деньги. Оптимизация сайта, раскрутка (продвижение) сайта в поисковых системах от Sepromo.тносится к тем направлениям деятельности, которое появилось сравнительно недавно. Число участников этого бизнеса за последние годы выросло очень сильно. Однако, лишь несколько десятков компаний, занимающихся продвижением сайта в поисковых системах, могут показать действительно достойный результат своей работы. Компания SEPromo является одной из тех, чья репутация надежной, способной провести поисковую раскрутку сайта, не подвергается сомнению. - что это? Поисковая оптимизация сайта в первую очередь преследует цель продвижения электронного ресурса на одну из первых позиций выдачи поисковиков по необходимым запросам. По большому счету, можно выделить несколько этапов. Для начала, проводится аудит сайта, то есть комплекс мер, направленный на то, чтобы отыскать все недочеты ресурса. В данном случае речь идет о недочетах контента, технических недочетах и недочетах юзабилити. После того как недостатки выявлены, их начинают исправлять. Собственно говоря, с этого момента начинается продвижение сайта в поисковых системах. Поисковые системы - продвижение сайта от SEPromo - быстро, качественно, результативносе ошибки исправлены - и после оптимизации происходит планомерное поисковое продвижение сайта. Специалисты компании являются истинными профессионалами, знающими все тонкости процесса. К каждой электронной страничке у них есть сугубо индивидуальный подход, они знают поисковые системы, оптимизация сайта для них никогда не превращается в глупую простановку ссылок пачками без ссылочной оптимизации. Аналитический центр, функционирующий внутри пределах SEPromo, ежедневно проводит исследования в области поисковой оптимизации и продвижения, а затем дает рекомендации исполнителям Услуги по раскрутке сайтов от компании SEPromo - оптимизация сайта под поисковые системы быстро и качественнот компании sepromo Если Вы хотите увидеть свой электронный ресурс на одной из первых позиций в выдаче той или иной поисковой системы, то стоит немедленно обратиться к нашей компании и тогда оптимизация под поисковые системы полностью ляжет на наши плечи. Вам нужно будет только искренне радоваться за те высокие позиции, которые будет занимать Ваш ресурс. - Курганское Высшее Военно-Политическое Училище. ----------- kvvpau.ru - Kurgan Higher School of Political-Military Silverlight, NBC Olympics, Microsoft = Farce! 1iThank's for greate post.5k I compleatly disagree with last post . nzn <a href="">паркет</a> 8u 6zThank's.6u I compleatly agree with last post. teu <a href="">ламинат и паркет</a> 4l 8mI'll thingk about it.0z I compleatly disagree with last post . jke <a href="">ламинированный паркет</a> 6h 6uI'll thingk about it.1w I compleatly agree with last post. ijo <a href="">купить ламинат</a> 4p 3cI'll thingk about it.7f I compleatly disagree with last post . gwz <a href="">ламинат</a> 2y 1bGood idea.2x I compleatly agree with last post. nmk <a href="">купить ламинат</a> 2r 0kThank's.7z I compleatly agree with last post. pjw <a href="">ламинат</a> 9b 3aI'll thingk about it.6i I compleatly disagree with last post . ltq <a href="">ламинат</a> 4v 5qThank's.7y I compleatly agree with last post. cgd <a href="">паркет и ламинат</a> 6i 6yI'll thingk about it.3e I compleatly agree with last post. bnl <a href="">купить ламинат</a> 6y 6eI'll thingk about it.5k I compleatly agree with last post. irf <a href="">паркет и ламинат</a> 0l 9pThank's.3z I compleatly disagree with last post . plt <a href="">паркет</a> 3r 2vGood idea.7g I compleatly agree with last post. fot <a href="">купить ламинат</a> 5k строительный портал I totally agree with Jesse, I think Silverlight is far more superior than Flash, it's just about time for Silverlight to take off and kick Flash in the ass. With a company in the size and power of Microsoft standing behind Silverlight and with the power of .NET vs ActionScript (which truly sucks IMHO), and also with Silverlight supporting 3D (which can be really appealing to game developers) Flash is sure dead! Hi. The Good resource. Much what interesting for itself has found. Bye. [url=]dolphin cms[/url] истема Управления Сайтом WebCodePortalSystem Интернет. Море информации на множестве сайтах, начиная от страничек содержащих информацию личного характера и заканчивая огромными торговыми порталами, преследующими строго коммерческую цель. Эта разнообразность и свобода выбора, просто не может оставить равнодушными как начинающих пользователей, так и продвинутых стоящих на распутье, тем более, что сеть "не без добрых людей" - множество компаний предоставляют возможность бесплатного размещения сайтов на их серверах. Придумали тему сайта, его содержимое, и, казалось бы, идея построения готова. Но встаёт вопрос, как её осуществить? как собрать сайт? Пролистывая одну за другой страницы Интернета, на поверхность всплывают такие названия как HTML, JavaScript, PHP, Perl - это всё хорошо, и даже очень, но на изучение хоть одного из них уходит масса времени, а так хотелось бы побыстрее. На протяжении всего поиска часто всплывает ещё одна непонятная аббревиатура - CMS. И это то, что нам надо. CMS - Content Manager System, что означает - Система Управления Контентом Сайта. Система управления сайтом представляет собой программное обеспечение, которое устанавливается у хостинг-провайдера и позволяет поддерживать и разрабатывать динамическое информационное обновление на сайте. Проще говоря - дизайн отделён от контента. Это намного облегчает задумку создания своего сайта. Понятно, что не хотелось бы делать большие затраты на приобретение систем управления сайтом, да и в области программирования не все сильны. В Интернете масса различных CMS, но CMS отвечающих данным требованиям ещё меньше. В свою очередь мы можем вам предложить нашу CMS, отвечающую всем вышесказанным требованиям - это система управления сайтом WebCodePortalSystem (далее WCPS). Способствует лёгкой настройке, быстрой установке, удобному и стабильному администрированию сайта. Для сайтов, создающихся в коммерческих целях распространяется платно, для других, соответственно - бесплатно. Уже многие остановили свой выбор на WebCodePortalSystem. Присоединяйтесь… Что есть в Системе Управления Сайтом WCPS? После установки системы управления WCPS, сразу можно приступать к её применению. В стандартном дистрибутиве находятся основные модули: Знающие PHP люди могут без затруднений написать свой собственный модуль для системы управления сайтом WCPS. Портальная система WCPS сделан таким образом, что мы без труда можем переделать модули других систем. Администратор проекта имеет право изменять основные настройки системы управления сайтом WCPS, чего не могут сделать модераторы. Из Меню Администрации только администратор может изменять Мета-Теги, заголовки страниц (TITLE), проверка состояния баз данных, доступно управление блоками и пользователями. Есть возможность установки прав доступа модулей с пользовательской стороны, администратор может ограничить доступ до модулей проекта (необходима регистрация), такую же операцию можно произвести и с блоками. Всеми модулями можно управлять из Админ Меню. Администратор имеет возможность установить уровень Модератора любому пользователю, выдать права к доступу различных модулей. Допустим, у Вас есть 2 модератора Slavik и Dimon, можно позволить Slavik'у управлять разделом Статьями и Новостями, а Dimon может модерировать Forum и Гостевую книгу. Каждый пользователь, прошедший регистрацию имеет свой "аккаунта". Есть возможность редактирования информации о себе, отправка личных писем другим пользователям, а так же блокнот для записей. При помощи Админ Меню можно управлять Бан-Листом. Администратор может закрыть доступ по IP адресу для пользователя. Модуль "Защита/Запрет" выполняет функцию не только блокировщика пользователей по IP, но и следит за пользователями, которые пытаются подобрать Login/Password. Если система замечает попытку взлома "аккаунта", то она автоматически блокирует IP-адрес злоумышленника. WCPS использует Ник (имя пользователя) и Логин независимо друг от друга. Большое количество систем управления сайтом характеризуют Логин и Ник как одно и тоже, что делает подбор пароля более легким. Мы разделили эти два понятия, что усложняет подбор пароля в 2 раза. Необходимо отметить, что пользователям, без навыков в программировании на РНР не следует трогать модуль "Метки". Неправильное изменение данных, этого модуля, может привести к уничтожению структуры портала или изменить его работоспособность. Если же Вы имеете навыки программирования на PHP, этот модуль дает Вам колоссальные возможности для создания Блоков, т.к. там находится исполняемый код PHP. Что нужно для установки Системы Управления Сайтом WCPS? Допустим, что Вы не обладаете знаниями HTML, это не страшно. Вы сможете установить и работать с системой управления сайтом WCPS. Отсутствие знаний HTML может только ограничить возможность изменять структуру (создание своего шаблона оформления и т.п.). Установка проходит очень просто, т.к. к каждому пункту прилагается описание. Чтобы начать, Вам понадобится система управления сайтом WCPS (архив примерно 700Кб). Последнюю версию портальной системы можно найти на официальном сайте. Так же там можно найти дополнительные шаблоны оформления и модули. Вам необходим сервер примерно такой конфигурации: - ОS: Unix или Windows; - Сервер Apache, Nginx или Lig d с поддержкой PHP (mod_php или FastCGI); - База данных MySQL установленная и работающая на сервере; - FTP-клиент; - Скаченный WCPS; - Возможно понадобится, утилита администрирования БД MySQL, наиболее популярная: PHPMyAdmin ( ://phpmyadmin.net); - Аккаунт на сервере, все данные к Вашему аккаунту (FTP-доступ; логин, пароль и имя БД и т.п.) и 2Мб дискового пространства. В архиве движка есть файл readme.txt, описывающий установку WCPS. Уважаемые посетители нашего сайта. ArtGK CMS "ArtGK CMS" - простота и гениальность решений. Хотите получить максимально качественный, но в тоже время недорогой движок для Вашего сайта? Тогда пробуйте и покупайте его прямо сейчас! Некоторые особенности "ArtGK CMS": * полная поддержка визуального редактирования сайта (frontend - фронтэнд); * использование ООП в php5; * поддержка объектно-реляционной БД PostgreSQL; * xml шаблоны; * функция автоматического кэширования (постраничное или контентной части); * формирования даты последней модификации страницы для роботов и Интернет обозревателей (экономит трафик пользователей и значительно ускоряет процесс индексации сайта поисковыми роботами, снижая при этом нагрузку на сервер); * автоматическое формирование ЧПУ из названия раздела; * готовые настраиваемые шаблоны для создания форума, карты сайта, гостевой, формы обратной связи, новостей, блогов, rss каналов и прочих разделов сайта; * может работать без использования БД MySQL; * не требует больших вычислительных мощностей ПК или выделенного сервера; * поддержка многоязычных сайтов; * автоматическая установка обновлений; * возможность редактирования заголовочной информации отдельной страницы сайта. Узнать больше Новости Коммерческое использование: Стоимость 3000РУБ ЕДИНОРАЗОВО. Последующие обновления системы АБСОЛЮТНО БЕСПЛАТНЫ. Полная техническая поддержка на весь период использования "ArtGK CMS". Некоммерческое использование: Если Вы создаете некоммерческий Интернет проект - обращайтесь к нам и мы выдадим Вам лицензию "ArtGK CMS" СОВЕРШЕННО БЕСПЛАТНО. Demo-версия: Предоставляется всем желающим БЕСПЛАТНО. Имеет полный функционал, но работает только с базой данных на xml. Подходит для большинства небольших сайтов. * Опробовать Демо * Скачать Демо Партнерам: Если Вы создаете сайты и хотите постоянно использовать "ArtGK CMS" в ваших проектах - станьте нашим партнером. Для всех сертифицированных партнеров действует постоянная 50% СКИДКА. Хотим поздравить Вас с Днем Системного Администратора. В этот светлый летний день мы желаем Вам успехов, карьерного роста, больший и светлой любви, а так же: надежных серверов, быстрых каналов, пусть все Ваши сервиса будут в вечном UP'е. . Обновление системы управления сайтом WebCodePortalSystem до версии 4.4.2 Обновление обязательно, т.к. связан с безопасностью ИЗМЕНЕНИЕ: Устранена критическая уязвимость wlt.ru enjoy. I have just finnished writing a paper titled "Flash Vs Silverlight: A usability evaluation" It is available? vitara.primecar.ru/vitara-286.html Hunky gay boy gags on a meaty love club... Twink unloads into fresh hunk’s open mouth! I think that MS makes a enemy around and is killd all... although just MS may be a winner, it makes exclusive. totally, industry should be dropped. just MS is a big commpany. their goal is the top on the IT industry. more service,more intuition? just it starts from stealing. the Exclusive is much dangerous. This is like Microsoft updating MS Paint to compete with Photoshop. Not going to happen. Besides that, writing a class in SL to do something simple takes loads and loads of lines of code than actionScript. ActionScript is fine, people that down it are simply noobs. teşekkürler dostum , eline sağlık.. smallvilledizi.blogspot.com неплохой музыкльный сайт He said, She said...blah, blah, blah Most of the comments here are from people who don't even know enough to realize they don't know sh!t. Young and dumb! Silverlight vs Flash who cares!!!! If you are from the Microsoft you will tend toward silverlight or Adobe and Flash. I've been contracting for a long time and in most cases the client will tell you what platforms you will develop for. BTW - This designer vs developer mabojambo. To some it might be "what came first, the chicken or the egg". Answer - All of the various frameworks/tools/O.S. used .Net, Java, Visual Studio, Photoshop, Flash, Illustrator, Windows, OSX, Linux, .... Were created by DEVELOPERS!!!!! Flash is going down :} "and also with Silverlight supporting 3D (which can be really appealing to game developers) Flash is sure dead!" not really, if not many ppl have silverlight plugin. what the point developing a game that no one will be playing. Hi all! Bye Good article! informative! I took the angle from usability and restrictions of display, which in essance IS Flash vs silverlight because they ARE the medium of distribution. however I do agree that if it is the development environment that you are comparing or the laguages used, then you can't just use the global term “flash vs silverlight” I have just finnished writing a paper titled “Flash Vs Silverlight: A usability evaluation” It is available here: Let me know what you think, what would you have done differently? I am a graphic designer. I have a client who is on a PC and is using Word to create gift certificates and other pieces that need the logo in different sizes. I have the Adobe CS2 Suite and this logo was created in Illustrator. I have tried a few things but nothing is giving us a high resolution image when printed. What can I do with this logo so that my client can import it into Word ? Please help!!! Thank you, Barbara Davis 601-754-5210 @ barbara davis! install the office printtools. [url=]блок хаус имитация бруса [/url] Нашей компанией в течение нескольких лет успешно реализуется на российском рынке евровагонка из липы, которая имеет пониженную плотность, и безупречно подходит при отделке сауны и бани. Обладая более малой плотностью, эта древесина меньше нагревается. Она в отличие от других древесин никак не выделяет смолу. Прежде чем изготовить изделия из этого дерева производители выбирают такие участки, как будто кора и сердцевина, так как будто именно в этих местах ствола наиболее качественная древесина. Однако ценители парилки и сауны ценят в ней именно ее запах, так как будто по их ловам она усиливает целебное действие и создает неповторимую атмосферу банных процедур. Предлагаемая нами к продаже евровагонка из липы имеет еще одно название такое как будто <обшивная доска>, только это название Вы можете найти в технических документациях. Ее название появилось в те времена, в какое время еще ради обшивки вагонов использовали доску с двусторонней выборкой ради того, дабы в его стенах никак не было щелей. Евровагонка SoftLine (ширина: 86мм-рабочий, 95мм всеобщий разм., толщ.-14мм) Категория Размер, м Цена за пог.метр,руб. Древесина липы мягкая приблизительно белоснежная, безъядерная, легко обрабатывается. Используется на токарные и резные изделия, ради изготовления мебели и кадок, в фанерном производстве, в сапожном деле (колодки) и другие. Из коры липы в молодом возрасте получают лыко ради изготовления корзин, веревок и так далее. Из коры 30-50 летних деревьев приготавливают мочало идущее на рогожи, кули, лубки, используемые ради обшивки саней, телег, в производстве легкой тары. Из ее семян добывают пищевое масло, напоминающее миндальное. Евровагонка из липы бывает разных распилов и сечения и распилов (тангенциальный, радиальный и смешанный) и профилей (имитация бруса, евро-профиль, штиль, четверть, софт-лайн). Вы удивитесь нашим ценам, так как будто они отличаются своей доступностью, а так же в зависимости от реализуемого емкости предоставляется широкая система скидок. <Вельский лес> - главного дистрибъютера крупного деревоперерабатывающего комбината, расположенного в г. Вельске Архангельской области. Это - высокотехнологическое пердприятие, полностью оснащенное новейшим немецким оборудованием фирмы <WEINIG>, мирового лидера в области деревообработки. Мы предлогаем своим покупателям строганные пиломатериалы европейского качества из древесины хвойных пород северного леса камерной сушки: Северная древесина обладает значительно большей плотностью и твердостью, нежели древесина Средней полосы России. Это являеться важным фактором при отделке помещений и в мебельном производстве. Особое забота производитель обращает на качество продукции, которая сортируеться по европейским стандартам. Вся продукция имеет сертификаты соответствия. Вся выпускаемая продукция ради защиты от влаги и механических повреждений при погрузочно-разгрузочных работах и транспортировке упакована в термоусадочную пленку с фирменным логотипом. Торговля ведется валом и в розницу. Оптовые партиии под заказ. Срок поставки - никак не более 5-ти дней. Прямые поставки со склада в г. Самаре и от производителя. Для покупателей свыше 40 м_ - безвозмездная доставка по г. Самаре и области. * Компания ООО "Вельский лес" является главным дистрибьютером на Российском рынке крупного деревообрабатывающего комбината, расположенного в городе Вельске Архангельской области. На сегодняшний сутки затея представляет собой высокотехнологичный деревообрабатывающий завод, оснащенный новейшим немецким оборудованием "WEINIG". Проектирование гостиничном развлекательных комплексов от Ренессанс-Проджект"> А также:вторский надзор , фито дизайн современного жилища , фито дизайн современного жилищ. Ai... I guess everyone have different opinion about Silverlight between Flash. Adobe Flash is improving since it is not yet perfected. In life it is not about perfection but aiming for improvement, improve, improve, improve. Even Microsoft Silverlight also need to improve as always. Если вы собираетесь регулярно переживать опыт общения с растением, вам потребуется собранность и уравновешенность автогонщика, летчика испытателя, нейрохирурга, скалолаза или кого-то кто регулярно общается с растением. Без чуства равновесия у вас не получится регулярно испытавать опыт (крючки попадают). Самый простой путь к обретению равновесия - саморегуляция в прощессе общения с растением на регулярной основе (получается не у всех и не сразу, большая вероятность что крючки всеже упадут). Автор глубоко убежден, что растение не расшатывает равновесие (зато практикующий или его окружение под воздействием растения очень даже могут внести дисбалланс - прим. перев.). Тем не менее, если вы планируете регулярно погружаться в экспириенс, рекомендуется заняться чем-нибудь что требует равновесия, рассудительности. Работать - очень важно. Чем ответственнее работа, тем лучше. Строить что нибудь - идеальное занятие(гроубокс). Строить дом - просто превосходно. Заменять двигатель в машине, ремонтировать комлектованный електронный девайс(компутер) - все это поможет настроить ваш мозг на совершенствование. Любой род деятельности, требующий мышления, где ваш успех напрямую зависит от ваших достижений даст вам независимость и чуство равновесия. 5. УЧИМСЯ ОТКЛЮЧАТЬ ВНУТРЕННИЙ ДИАЛОГ Настоящий ключ к волшебству и магии - отключение внутреннего диалога. Растение - ценный учитель на этом поприще. Растение может обучить вас многим вещам, но отключение внутреннего диалога - "большой урок". Главное событие. Суть магии. Курим растение Да, вам понадобится курить растение. Предпочтительнее курить экстракт, т.к. нужно попасть "туда" как можно быстрее и без заморочек. Все рекомендации сводятся к тому, чтобы выбрать максимально темное и тихое место. Это хороший совет. Растение даст вам чистый(детский) взгляд в "нагуаль". В условиях насыщенных внешними раздражителями (яркий свет, шум) этот опыт может показаться раздражающим или даже адским. Во много раз более раздражающим чем все что вы можете представить (ага, напугал - прим. перев.). Вы должны осознать, что растению нет нужды вас раздражать, то каким будет результат, зависит от того, кто и как преживает опыт (это для тех кто напугался - прим. перев.) Все объясняется природой нагуаля. Она такова, что может запросто "выбить вас из колеи" (ну или типа того). Под "вами" подразумевается ваш "тональ". Ваш тональ может быть в ужасе от нагуаль, особенно если часть этого страха исходит от неких внешних источников - т.н. флаеров. В условиях не насыщенных внешними раздражителями ваш тональ почуствует знакомую обстановку - нечто подобное происходит каждый раз когда вы спите. Выключаем свет, ложимся в постель и тональ "сдает пост". В общем, эффект такой что вы спите наяву и вы знаете насколько странным может быть сон. Если у вас бывают странные сны, представьте, насколько странным это становится когда подобная информация поступает в ваше сознание, находящееся в бодрствующем, алертном состоянии. Вот причина, по которой растение может быть используемо в рекреационных (восстановление сил) целях. Это может быть достаточно интересно. [url=]5 spices[/url] Так , щас буду рассказывать про сальвию. В общем давно я решил попробывать сальвию, заказывать по нету как-то в лом было. Мое намеренье притянуло ко мне человека , совершенно случайно, который смог мне достать ее. Лежала она у меня значит около месяца, все никак времени не было пробывать.. И вот наконец-то свободный четверг на этой неделе. Я решил как следует подготовиться. В среду чистил организм, выполнял энергетические практики. В четверг пошел в другую квартиру,она была свободна, соорудил там булик(курил через водный), опять сделал упражнения по набору энергии, сходил в душь, в общем был свежий ,бодрый, полный сил и энергии. Для настройки почитал КК первый том. Решил проводить миттоту без ситера, не хотел просто. Расстелил на полу матрац, чтоб если вырубит ,башкой не стукнуться.Немного нервничал.. Поджег, сделел тягу, подержал, вторую … Жду когда же что -то произойдет. /Да … когда читал что там люди видят другой мир, там путешествуют, реальность у них сворачивается - разворачивается, кракадилы по реке плавают , итд , то соответственнол на то и надеялся,ждал, ну мож астрал по минимуму… Куда там… /После второй или третьей тяги не пойму где я … Кровь просто бурлит во всем левом полушарии мозга, бьет , вибрации.. , я осознаю ,что я на что-то пялюсь,(со временем понял что это булик)что я где-то, мне херово.. Потом ложная память как во сне, сам себе обьясняю где это я веду беседу. Потом я понимаю что я должен быть на матрасе, но что-то не так : все идет как лестница _ диван-матрац-пол. Я скем-то разговариваю (вот ВД тогда пробило,просто пи**ец) Говорю о том ,что где другая реальность, что сальвия слабая, с чуваком разговариваю который мне достал (хотя понимаю что никого нет) с ним еще кто-то. Гворю себе : " Я осознаю ,чтоя курил сальвию, я на мараце щас сижу" Я чувствую что что-бы попасть в др реальность , что для сдвига тс чего-то не хватает.. Смотрю на руки (проверка на реальность) Все,точно разобрался где я ,где пол-потолок,диван матрац.Думаю, наверно мало выкурил.. Я каким-то образом сумел встать , пойти принести пакетик ,высыпать опред часть ,поджечь и еще раз сделать пару тяг…Ух жесть.. Эффект - тот же. Я в начеле не понимаю где я , потом еле -еле доходит,что я пялюсь на булик, лестница, жуткое ощущение в левом полушарии. Знаете ,чесное слово ,такое ощущение,что прсто пи*дец перипил и все.. Еще напоминает , то когда КК созерцал области между листьев,попал в то дерево,не понимая где он,а потом осознал себя в другой реальности.Вот похоже на это было ,только я не осознавал себя в др реальности, я там просто не был. ВД - без умолку нес чушь,пытался остановить - безрезультатно; мне было очень плохо ; взбрела мысль что это и не сальвия ,что мне что-то подсунули другое, говорил себе что больше не буду курить ее;я нервничал, что надо убраться, что вдруг меня вырубит на часа три- четыре и я не успею придти до прихода отца домой (а мне в тот день надо было), почему я не подождал и не попросил быть ситером сестру - было бы с кем поделиться, кто бы мне помог ( еслиб я так сделал,то стпудово ныл и нес всю эту чушЬ в слух, еще наверно разревелся под конецю). Думаю , что ну когда же все это закончится.Со злости стукнул по-полу (о как ) Решил прилечь. Лежу нервничаю. Перед глазами образы, все разное, короче психоделика сплошная./ А самая главная фишка,что меня расстроило, что все эти образы были не как предсонные образы или сновиденные картины или визуализация, все это было как будто просто что-то представляешь, что-то визуализируешь,но не получается и видишь только что-то на черном фоне почтине различимае, а мозг настраиваешь на то что ты визуализировал, скорее зание что тто что ты визуализировал, а не само видене. /Потом пошли такие картины как в фильме "блуберри", самое интересное, но противное. Я все ждал когда же закончится, стало чуть по легче,я успокоился, пытался эти образы направить в то русло которое хочу (как в сновидении) ,но фигушки - ничего не поддавалось! Вот я так и лежал 50 минут,пока меня не отпустит, ,каждые 10 минут смотрел на часы,чтоб не проспать; был скачек времени : мне казалось,что прошло 5 мин , а прошло 20… Под конец мое индульгирование возобновилось, мне казалось, что на то чтобы убрать матрац на кровать и застелить ее понадобиться 15-20 минут,хотя я и понимал что это не больше 5 минут, но всеж встал пораньше. С открытыми глазами вообще п
http://weblogs.asp.net/jezell/archive/2007/05/03/silverlight-vs-flash-the-developer-story.aspx
crawl-002
refinedweb
29,394
73.47
Here is the program I created! Don't forget to add -lpthread in the linker options if you are running this! Code: Select all #include <stdio.h> #include <pthread.h> typedef struct { int start; int end; int step; }data; int isPrime(long int number) { long int i; for(i=2; i<number; i++) { if(number % i == 0) { //not a prime return 0; } } return number; } void calcPrimes(int start, int stop, int step) { long int s; for(s=start; s<=stop; s+=step) { if (isPrime(s) > 0) { //Its a prime number!!! } } } void* thread1Go() { calcPrimes(3, 100000, 8); //stepping 8 numbers for 4 cores return NULL; } void* thread2Go() { calcPrimes(5, 100000, 8); //starting thread 2 at the next odd number jumping 8 spaces for 4 cores return NULL; } void* thread3Go() { calcPrimes(7, 100000, 8); // starting thread 2 at the next odd number and jumping 8 spaces for a 4 core run return NULL; } void* thread4Go() { calcPrimes(9, 100000, 8); // think you get it. return NULL; } int main() { printf("Calculate Prime Numbers\n"); printf("==================================================\n\n"); //create the threads pthread_t thread0; pthread_create(&thread0, NULL, thread1Go, NULL); pthread_t thread1; pthread_create(&thread1, NULL, thread2Go, NULL); pthread_t thread2; pthread_create(&thread2, NULL, thread3Go, NULL); pthread_t thread3; pthread_create(&thread3, NULL, thread4Go, NULL); //wait for threads to join before exiting pthread_join(thread0, NULL); pthread_join(thread1, NULL); pthread_join(thread2, NULL); pthread_join(thread3, NULL); return 0; } And my results were not quite what I was expecting... sort of! 1 thread = 33.016 s 2 thread = 16.531 s 3 thread = 16.637 s <= WTF!!! 4 thread = 8.871 s I noticed that running on 3 threads the activity seemed only to be divided by 2 cores.... thought I would see 3 active cores and one ticking over? But its great to see a massive performance increase by having the extra cores to play with. I thought this was interesting so I shared it.
https://lb.raspberrypi.org/forums/viewtopic.php?p=838760
CC-MAIN-2020-16
refinedweb
312
70.73
- Advertisement minamurMember Content count41 Joined Last visited Community Reputation140 Neutral About minamur - RankMember 3D objects toward a flat 2D tiled scene. minamur replied to Alpha Brain's topic in For Beginners's Forumi know in maya you can load a picture that will display in the background, then you could load all you're models and do whatever until they look right, then you could render the scene and cut out the individual objects as sprites and plop them into you're game, or you could make them part of the tile set if you like. i'm not an animator or anything though, that's just how i'd go about it. 3D objects toward a flat 2D tiled scene. minamur replied to Alpha Brain's topic in For Beginners's Forumdo these objects need to rotate during the game, or do you just want to rotate them around to get the right look for you scene? if its the latter then you just need to render your building and use the rendered image as a sprite. reflect() function for normal and light vector? minamur replied to karx11erx's topic in Math and PhysicsQuote:Original post by Darkstrike v-2*n*(n.v) is correct as long as n is normalized. If it isn't, v - 2*n*(n.v)/(n.n) is the way to go. v doesn't need to be normalized in either case. quoted for truth. sorry, i didn't think before posting. reflect() function for normal and light vector? minamur replied to karx11erx's topic in Math and Physicsedit: -2.0f * n * (n*normalize(v)) + v where n is the surface normal and v is the direction vector you want to reflect since you aren't normalizeing v i think you're over estimateing the projection of v onto n which would fuck up your results. - Quote:Original post by jyk What advantage does the #define have over a constant? Why would you choose to use the former rather than the latter in this case? none, but i'd use it in this case simply because i'm more used to it, you could also say you're saveing eight bytes by #define-ing it, but thats just silly. i was just saying that if you encode the literal with type information using the proper suffix you can gain some degree of type safety, and that #define isn't the pure evil that you dare not speak its name. i think with most things, if someone says that something should never be used, thats probably an exageration. for example, i hate macros, mostly because i can't debug them, but i've written a whole slew of macros to make my life easier while programming on the gba. now of course i could have written functions for this sort of thing, but these macros where for set a bit at 0x4000010 and that sort of thing. really it made no difference to me weather it was "good programming practice" or not, since, like i said the worst part of macros is how impossible they are to debug, and the debugger never worked anyway :). so my point really is everything is a tool, if you know the limitations of the tool and what you need done, you can make a choice of how you go about doing things. you don't need to disregaurd something out of hand like that. Electrons minamur replied to gparali's topic in General and Gameplay ProgrammingQuote:Original post by gparali Thanks for the answer. He has already thought of the solutions you have given. The plan he has right now is for every electron to be affected by the electrons that are very near and then treat the other electrons as groops as you have suggested. If you find that there faults with this or there are better methods please post. i'm working on something that sounds actually kind of similar right now. my solution is to have each particle (in your case electron) store they're position and they're position divided by some number. this way i only need to check if each particle is in the same grid cell (or a neighboring cell), which saves me from haveing to compute the distance or even the squared distance between the particles. the draw back is that it involves a divide for updateing the particles, but you may be able to factor that out some. here's my abridged implimentation. /////////////////////////////////////////////////////// //particles update /////////////////////////////////////////////////////// this->position += this->velocity * ellapsed_seconds; //grid_x and grid_y are both longs this->grid_x = (long)this->position.x >> 6; //using a right shift to avoid full divide this->grid_y = (long)this->position.y >> 6; //depending on what you're doing this may not be do-able . . . //////////////////////////////////////////////////////// //particle systems updat //////////////////////////////////////////////////////// for(all the particles i) { i->update(elapsed_seconds); . . . for(all the particles j != i) { if(j->grid_x >= i->grid_x-1 && j->grid_x <= i->grid_x+1 && j->grid_y >= i->grid_y-1 && j->grid_y <= i->grid_y+1 ) { //find actual distance //if distance is less then some threshold, they're interacting //do stuff } } } - speaking of pi whats wrong with: #define pi 3.14159265 i don't see how that is unsafe. float temp = 180/pi; this will give a truncation from double to float warning. if you want pi to be a float then just use the right suffix. so if you're defining a simple value and you just use the right suffix on the number you have type safety. #define isn't so evil, but large macros are imposible to debug and so should be avoided. Cannot get average of three variables. minamur replied to Sarxous's topic in For Beginners's ForumQuote:Original post by Sarxous Thanks for the help guys. Steamers modified code worked perfectly, as did Danikar's "double cin.get()" in the sake of simplicity I went with Danikar's method. in general the way to clear the input stream is (although i haven't used cin in a while ;)): cin.clear(); //clears the fail flags while(cin.get() != '\n') {} //clears the input stream up to the newline Looking for a quick-and-easy 2d engine.. minamur replied to Incogito's topic in Engines and MiddlewareQuote:Original post by Incogito I've checked, and it turns out I'm allowed to use any free engine, but for whatever reason they still wont let me change compiler.. I've got 8 weeks to finish this thing, apparently.. Thanks for your suggestions so far, though are you working alone? OpenGL with SDL = crash minamur replied to Donyc's topic in Engines and MiddlewareQuote:Original post by BradDaBug All you're doing, as far as OpenGL is concerned, is clearing the same color and depth buffers every time insead of flipping to the next one. Why would that cause a problem? Am I missing something? since your clearing the back buffer but not flipping the buffers what should be rendering? chode thats what. alternative power function minamur replied to walkingcarcass's topic in Math and Physicsa bit off topic but... it seems that for a faster power function for a floating point type you could make long* buffer pointing to the float, copy the value stored at that long* into a long, then copy the new exponent into the exponant field, then cast that back to a float and return it. never tried that, just a random thought. [edit] maybe it isn't a bit off topic since the above would preserve the sign of the number. SDL window positioning minamur replied to minamur's topic in Engines and MiddlewareQuote:Original post by basement Not in any SDL version so far. However I think in SDL 1.3 this will finally be fixed. Best thing you can do for now is using native functions for your platform, for example use this to center the window: #include <SDL_syswm.h> void CenterWindow(int w, int h) { int x,y; SDL_SysWMinfo info; SDL_VERSION(&info.version); if(SDL_GetWMInfo(&info) > 0) { # if defined(WIN32) x = GetSystemMetrics(SM_CXSCREEN) / 2 - w / 2; y = GetSystemMetrics(SM_CYSCREEN) / 2 - h / 2; MoveWindow(info.window, x, y, w, h, TRUE); #else /* other platforms */ #endif } } nice, i wasn't aware of SDL_GetWMInfo. thanks. [SDL]SDL_WM_SetIcon minamur replied to Uphoreum's topic in Engines and MiddlewareQuote:Original post by Uphoreum EDIT: Come to think of it, SDL_WM_ToggleFullScreen doesn't work either... in the sdl doc it says that SDL_WM_ToggleFullScreen is only implimented in xwin i believe. you could probably do it yourself though. SDL window positioning minamur posted a topic in Engines and Middlewarewhen you create a window in sdl, is there any way to set the position at wich it will be created? 2d per pixel collision minamur replied to minamur's topic in Graphics and GPU Programmingwell thats good to know. thanks. - Advertisement
https://www.gamedev.net/profile/87725-minamur/
CC-MAIN-2018-39
refinedweb
1,475
70.84
In this blog, we will be scheduling jobs based on time using Akka scheduler. The jobs will be scheduled based on IST(Indian Standard Time). It is basically an alternative to Quartz Scheduler to some extent which is also used to schedule the time-based job. We can schedule jobs at any point of time i.e schedule jobs based on IST or any other TimeZone like UTC Let’s say, If you want your job to be scheduled at 9:00:00 AM IST every day, What would you do?. Scheduling such a job using the normal Akka scheduler is not easy. So what I have done is, I have written an extra function so that our normal Akka scheduler acts as a Quartz Scheduler but only ‘Time Based ’ not Calendar based. With the example I am going to share you will be able to schedule time-based jobs for almost any TimeZone, you just need to mention the timezone. In this example, I am focusing on IST(Indian Standard Time) but you can tweak it. Example Send an E-mail every day at 09:00:00 AM every day. Akka quartz is one way, but If you can play around with normal Akka scheduler why mess around with Akka quartz. Alright Let’s head over to the code and see How can we accomplish it. Here is the Schedule Actor Class import akka.actor.Actor class ScheduleActor extends Actor { import ScheduleActor._ var count = 1 //I am using var over here because it's a simple example to increment count. However you should prefer immutability wherever possible. def receive: PartialFunction[Any, Unit] = { case IncrementNumber => count+=1 println(count) } } /** * Created by deepak on 22/1/17. */ object ScheduleActor { case object IncrementNumber } Here is the implementation for the job. import java.text.SimpleDateFormat import java.util.{Date, TimeZone} import scala.concurrent.duration._ import ScheduleActor.IncrementNumber import akka.actor.{ActorSystem, Props} /** * Created by deepak on 22/1/17. */ object ScheduleJob extends App { val system = ActorSystem("SchedulerSystem") val schedulerActor = system.actorOf(Props(classOf[ScheduleActor]), "Actor") implicit val ec = system.dispatcher system.scheduler .schedule(calculateInitialDelay().milliseconds, 60.seconds)( schedulerActor ! IncrementNumber) //the first argument in the schedule function is the initial delay //the second argument in the schedule function is the interval def calculateInitialDelay(): Long = { val now = new Date() val sdf = new SimpleDateFormat("HH:mm:ss") sdf.setTimeZone(TimeZone.getTimeZone("IST")) val time1 = sdf.format(now) val time2 = "00:00:00" //this is where we provide the time(IST) for example I want the job scheduled at 9PM IST I would replace 00:00:00 with 21:00:00 val format = new SimpleDateFormat("HH:mm:ss") val date1 = format.parse(time1) val date2 = format.parse(time2) val timeDifference = date2.getTime() - date1.getTime() val calculatedTime = if (timeDifference < 0) (Constant.DAYHOURS + timeDifference) else timeDifference // val modifiedDate = projectDbService.getModifiedDate("sumit") calculatedTime } //Calculate initial delay method basically triggers the job at the IST time provided above. } That’s all, you are all set to schedule time-based jobs without using Quartz Scheduler. Full code is available on github If you find any challenge, Do let me know in the comments. If you enjoyed this post, I’d be very grateful if you’d help it spread.Keep smiling, Keep coding! Cheers!
https://blog.knoldus.com/2017/02/10/an-alternative-to-akka-quartz-time-timezoneist-and-others-based-jobs-using-akka-scheduler/
CC-MAIN-2017-43
refinedweb
544
50.23
Python. It’s difficult to talk about command line processing without understanding how command line arguments are exposed to your Python program, so let’s write a simple program to see them. If you have not already done so, you can download this and other examples used in this book. #argecho.py import sys for arg in sys.argv: print argprint arg [f8dy@oliver py]$ python argecho.py argecho.py [f8dy@oliver py]$ python argecho.py abc defargecho.py [f8dy@oliver py]$ python argecho.py abc def argecho.py abc def [f8dy@oliver py]$ python argecho.py --helpargecho.py abc def [f8dy@oliver py]$ python argecho.py --help argecho.py --help [f8dy@oliver py]$ python argecho.py -m kant.xmlargecho.py --help [f8dy@oliver py]$ python argecho.py -m kant.xml argecho.py -m kant.xmlargecho.py -m kant.xml So as we can see, we certainly have all the information passed on the command line, but then again, it doesn’t look like it’s going to be all that easy to actually use it. For simple programs that only take a single argument and have no flags, you can simply use sys.argv[1] to access the argument. There’s no shame in this; I do it all the time. For more complex programs, you need the getopt module. def main(argv): grammar = "kant.xml" try: opts, args = getopt.getopt(argv, "hg:d", ["help", "grammar="])try: opts, args = getopt.getopt(argv, "hg:d", ["help", "grammar="]) except getopt.GetoptError:except getopt.GetoptError: usage()usage() sys.exit(2) ... if __name__ == "__main__": main(sys.argv[1:])sys.exit(2) ... if __name__ == "__main__": main(sys.argv[1:]) So what are all those parameters we pass to the getopt function? Well, the first one is simply the raw list of command line flags and arguments (not including the first element, the script name, which we already chopped off before calling our main function). The second is the list of short command line flags that our script accepts. The first and third flags are simply standalone flags; you specify them or you don’t, and they do things (print help) or change state (turn on debugging). However, the second flag (-g) must be followed by an argument, which is the name of the grammar file to read from. In fact it can be a filename or a web address, and we don’t know which yet (we’ll figure it out later), but we know it has to be something. So we tell getopt this by putting a colon after the g in that second parameter to the getopt function. To further complicate things, our script accepts either short flags (like -h) or long flags (like --help), and we want them to do the same thing. This is what the third parameter to getopt is for, to specify a list of the long flags that correspond to the short flags we specified in the second parameter. Three things of note here: Confused yet? Let’s look at the actual code and see if it makes sense in context. def main(argv): grammar = "kant.xml" try: except getopt.GetoptError: usage() sys.exit(2) for opt, arg in opts:grammar = "kant.xml" try: except getopt.GetoptError: usage() sys.exit(2) for opt, arg in opts: if opt in ("-h", "--help"):if opt in ("-h", "--help"): usage() sys.exit() elif opt == '-d':usage() sys.exit() elif opt == '-d': global _debug _debug = 1 elif opt in ("-g", "--grammar"):global _debug _debug = 1 elif opt in ("-g", "--grammar"): grammar = arg source = "".join(args)grammar = arg source = "".join(args) k = KantGenerator(grammar, source) print k.output()k = KantGenerator(grammar, source) print k.output()
http://www.faqs.org/docs/diveintopython/kgp_commandline.html
crawl-002
refinedweb
613
78.14
+4 Copy the current viewing file's path to clipboard Arunprasad Rajkumar 9 years ago • updated by Alexis Sa 8 years ago • 5 It would be nice if we were able to get the full path of viewing file. Suppose if user right clicking on the file-tab it should show option to copy the current viewing file's path to clipboard :) Customer support service by UserEcho import sublime, sublime_plugin, os class PathToClipboardCommand(sublime_plugin.TextCommand): def run(self, edit): sublime.set_clipboard(self.view.file_name()) class FilenameToClipboardCommand(sublime_plugin.TextCommand): def run(self, edit): sublime.set_clipboard(os.path.basename(self.view.file_name())) class FiledirToClipboardCommand(sublime_plugin.TextCommand): def run(self, edit): branch, leaf = os.path.split(self.view.file_name()) sublime.set_clipboard(branch) Hi, it is a plugin right? Great work. Thanks for that :D BTW, do I need to map this to certain key combination? how to make this as usable one? Please help me Find the class - strip off the "Command" - then convert the CamelCase by separating the words with underscores where you see capital letters. Like following: Add the following to Key bindings - User: { "keys": ["ctrl+alt+c"], "command": "filename_to_clipboard" }, Or to add the commands to the right-click menu, you can create a file named Context.sublime-menu in your User folder containing the following: [ { "command": "filename_to_clipboard", "caption": "Filename to Clipboard" }, { "command": "filedir_to_clipboard", "caption": "Filedir to Clipboard" }, ] It is build into core ST. Right click on the body of the open file (not the tab the bit with the text in it). There is what you want right there on the menu and it has been there for years. No need for all the argby bargy in this thread - it is already supplied copy or delete long path files.
https://sublimetext.userecho.com/en/communities/1/topics/4429-copy-the-current-viewing-files-path-to-clipboard
CC-MAIN-2022-05
refinedweb
289
55.34
Content Migration in AEM using SlingPostServlet A very basic migration flow looks as follows: In this scenario, you have a CMS(that could be Sitecore, Drupal, WordPress or any other CMS) which has source content that needs to be migrated to AEM. To achieve this, we typically need to do following things: - Get content from source CMS in any format(XML, CSV, etc) - Process this content and extract content that needs to be exported to AEM. This would include parsing XML/CSV exported by source CMS and massaging it(if needed). - The processed content is then imported to AEM. There can be various strategies for this, like Talend, Package Manager and SlingPostServlet. I like SlingPostServlet as I feel it is closer to coding than other strategies. This blog is focused on that only. Considering that source CMS gives you XML, I created a Groovy script for migration. Here are the steps: - Parsing XML using Groovy XML Parser def records = new XmlParser().parseText(file.text)?.blog - Creating a map out of parsed content records.eachWithIndex { blog, idx -> def name = blog.name.text() def content = blog.outline.text() def status = blog.status.text() def parentSubject = blog.subject_parent.text() Map contentMap = [ "./jcr:primaryType": "cq:Page", "./jcr:content/jcr:primaryType": "cq:PageContent", "./jcr:content/jcr:title": "${name}", "./jcr:content/blog/sling:resourceType": resourceType, "./jcr:content/blog/status": status, "./jcr:content/blog/parentSubject": parentSubject, "./jcr:content/blog/text/sling:resourceType": "foundation/components/text", "./jcr:content/blog/text/text": "${content}" ] callPost("${baseContentPath}blog${idx}", contentMap) } Any key in the map corresponds to a property in JCR. If you split the property by “/”, last element would give you the property name and elements from first to second last gives you the hierarchy. Taking example of "./jcr:content/blog/status": "Published" entry in the map, this key would create hierarchy jcr:content -> blog and blog node would have a property status and value Published . - Posting content to AEM void callPost(String baseURL, Map contentMap) { /*Setting auth basic in request doesnt work... Had to set it in headers*/ // http.auth.basic("admin", "admin") MigrationConfiguration.client.request(Method.POST) { uri.path = baseURL requestContentType = ContentType.URLENC headers.'Authorization' = "Basic ${"admin:admin".bytes.encodeBase64().toString()}" body = contentMap response.failure = { resp -> println "\nERROR: ${resp.statusLine} for ${uri.path}" } } } And that is all you need to do. You can now check the content hierarchy in CRX. You can modify this Groovy script as per your use case. In addition, if you would like to know more on Content Migration to AEM, here’s a simple step-by-step guide on how to do it? Please put in your comments in case there are suggestions to improve it or if you face any issue with this. Thanks!! Wondering if anyone has tried both options and know the performance gap.. Interestingly, we had unexpected performance improvement while using REST based post (oracle WCM though) as against API level call. May be the custom code that used API call was not done right, but still curious to see if anyone has tried both options in AEM. Thanks for the post. It’s an interesting approach. I’m trying an osgi custom polling importer route, but nice to know there’s another option. @coloradobaugh : Thanks!! Its always good to compare different approaches.. Would be great to know the approach you are trying.. Do you have any experience how the performance is? Is it still useable if you have some million Nodes which have to be migrated? PostServlet works fine for moderate amounts of content. for larger amounts I recommend to package the importer logic (themone parsing the XML in this example) as an OSGi bundle, deploy it into AEM and create the JCR nodes via the JCR API. This is usually a lot faster because the request processing needed for the Sling-based method is not needed anymore. I AM NOT ABLE TO FIND THE GROOVY SCRIPT. PLEASE SHARE THE CODE. @Sören : Sorry, Somehow I missed notification for the comment. I did a dry run for about 500 pages that created about 2000 nodes nodes in JCR… It took me 11 odd seconds for the same.. I am not sure of the breakpoint by when SlingPostServlet would work better than JCR API but I think POST Servlet would be doing that behind the scene but it could be writing nodes in bulk which would be better in performance than writing individual nodes… Request processing time would surely be there in this approach as Michael mentioned… What I liked about it was that I did not have to manually create all the nodes.. Based on the map that I POST, all nodes are automatically created.. Moreover in this, we need not worry about order in which nodes are there in the map…
https://www.tothenew.com/blog/content-migration-in-aem-using-slingpostservlet/
CC-MAIN-2021-10
refinedweb
793
57.57
Interesting idea, for sure! Here’s what seems to me like a major issue: Time/maturity and fund redeeming. This fund naturally invests in early stage companies. It can take a very long time till you see any return (or even partial liquidity). Further, if this is an ETF (which would be the natural instrument to get the benefits you’ve mentioned), then as investors sell their public units (for whatever reason), sometimes (if there are no immediate buyers for them in the market), the fund needs to redeem their units and return the funds. Where would those funds come from? The fund owns stock in 1000 early-stage startups — and there is very little liquidity for these assets. You could sell some of them — but those are likely to be the ones you actually want to keep. Plus, managing this process would require human judgment. Alternatively, you can construct it as a closed-end fund, but then you’d lose some of the important advantages you’re trying to get here. One other question I’d ask is about this assumption: “Due to the 2/20 fee structure, limited partners of funds can only realize above market returns if the fund invests in one or several extremely high-performing startups (100x returns on a single startup).” I’m no VC, but from the math I’ve seen, this need for home runs stems from the high failure rate of startups, not from the management fees. And if that is true, then the suggested solution does not actually solve the core issue, since the batting average of such a fund will likely not be higher than that of other VCs/syndicates. In fact, due to factors you and Chris Treadaway both mention, the batting average is likely to be lower, exacerbating the issue. One last thing about the concept in general. IIRC, most VCs are not outperforming the stock market. The ones that are outperforming — what do you think makes the difference? Possible answers I can think of are — access to deal flow, deal selection & due diligence process, and value-add. Yet these are the exact factors such a seed fund would be weakest at, not so?
https://medium.com/@itamarro/interesting-idea-for-sure-acee67ab8430
CC-MAIN-2019-30
refinedweb
367
69.52
Pushed to marcuspope/verbotenjs Atlassian SourceTree is a free Git and Mercurial client for Windows. Atlassian SourceTree is a free Git and Mercurial client for Mac. _ __ __ __ _______ | | / /__ _____/ /_ ____ / /____ ____ / / ___/ | | / / _ \/ ___/ __ \/ __ \/ __/ _ \/ __ \__ / /\__ \ | |/ / __/ / / /_/ / /_/ / /_/ __/ / / / /_/ /___/ / |___/\___/_/ /_.___/\____/\__/\___/_/ /_/\____//____/ zip archive: hg source: npm package: Introduction ================================================== Because everybody will think it is verboten anyway, I present to you VerbotenJS. Maintainable JavaScript: Don't modify objects you don't own. - Nicholas C. Zakas I, Marcus Pope, by imperial proclamation hereby declare ownership of Object.prototype. As sole owner I am the only authorized person to modify the prototype definition. Nope, too late I already called it and double stamped it. You can't triple stamp a double stamp! What Is VerbotenJS? ------------------- VerbotenJS is a general application development framework designed for NodeJS & Browser JavaScript hosts. In addition to a bunch of base prototype enhancements and terse programming patterns, the framework also includes extension libraries for various domains such as databases, filesystems, shell scripts etc. Who Should Use VerbotenJS? -------------------------- Nobody, it's verboten remember?!? No I'm kidding, you can use it for any personal project you want. I don't recommend using it for production systems since it is version 1.0 and I'm just one man trying to support his own personal JS framework. I'll probably change framework logic on a whim too, because that too is considered verboten right? Why Is VerbotenJS 750k Uncompressed, and do you really expect me to download that to a client? -------------------------------------------------------------- No, I don't, it's verboten to do so remember? Actually here's the deal. jQuery is not compatible with Object.prototype extensions. I had intended on releasing my own DOM, Event and Ajax wrapper but I ran out of time. So I decided to inject a forked version of jQuery which added almost 350k to the footprint. My DOM wrapper will probably put the library in the 500k realm for browser hosts. For NodeJS hosts modules are dynamically included on access so the run-time memory usage requirements are dynamic based on the modules you use. But by today's standards 750k is not really that bad except for mobile devices and cellular networks. If you must you can compress and gzip it down to 75k but with modern bandwidth and caching infrastructures it really isn't that bad. But I Heard/Read/Know That Object Prototype Extensions Are Bad, mkay? -------------------------------------------------------------- Good for you. Play around a little, find out how good it feels to be bad for once in your life :D Well, If Object Prototype Extensions Are Good, Then Why Does VerbotenJS Break My Application/Library/3rd Party Code? -------------------------------------------------------------- Some people write bad JavaScript code, including myself. jQuery authors, for instance, recognize that their codebase has a bug that makes jQuery incompatible with object prototype extensions, however they choose to ignore the issue for various artificial reasons. ExpressJS authors just don't understand how JavaScript reflection works, and they think OPE's are code smell. And yet in other cases VerbotenJS may conflict with existing namespaces in your application. In the latter case I recommend either opening a bug if the issue is caused by a non-conforming standard on my part, or refactoring your application if you want to use VerbotenJS. Where Do I Start? ----------------- First grab a copy of verbotenjs from npm. npm install verbotenjs In node hosts use: require('verbotenjs'); For browser hosts include a reference to this file [verbotenjs]/web/verboten.js There are no init functions that you have to worry about, and there is no real global entry point to the verboten framework. Everything you need is either declared globally, or attached to the prototypes of existing data types including Objects. I Kinda Like Where This Is Going, How Do I Help? ------------------------------------------------ Well, I'm not really in a position to coordinate a team of developers at the moment. I have a newborn daughter and a full time job, so to avoid the frustration of not hearing from me for a few weeks at a time, which totally happens, send me small patches, <500 lines, and I'll review them. Since I doubt this will be much of an issue either way, I'll leave it at that for now. What If I Wanted To Create A Commercial Product? ------------------------------------------------ Well, let's talk licensing and target market. But really you should probably wait until 2.0 at least before considering something for production. What's In Store For The Future Of VerbotenJS? --------------------------------------------- Jesus, what isn't in a state of almost complete! Documentation, Bugs, Dom.js, Cross Platform Compatibility, JSDom, Unit Tests, etc. You know, all the stuff you'd expect from a professional open source project. The same stuff you should expect to be missing from a project named VerbotenJS. Documentation ================================================= Ha! No, but I'm working on it. You'll notice some documentation tags in the source code, that's about as far as I've made it on the documentation front. Much of the codebase is pretty self explanatory, much of it is not. I do have a couple of other projects and a bunch of scripts that I've written over the years that will be used as examples, but otherwise I have no documentation other than comments in the source. Examples -------- Here are some before and after code samples to show how VerbotenJS can make your coding efforts easier. Grepping The File System Without VerbotenJS: function grep(dir, query, filemask, recurse, hidden, cb) { //grep for 'query' in 'dir'. //'recurse' if necessary, //ignore 'hidden' files if necessary. //'cb' required. if (typeof query == "string") { query = new RegExp(query, "i"); } //optionally filter by filemask if (typeof filemask == "string") { filemask = new RegExp(filemask); } var count = 1, //file count async = 0, //async callback count out = [], list = [], fs = require('fs'), fp = require('path'); dir = dir || process.cwd(); function search(path) { fs.stat(path, function(err, stat){ count--; if (err) throw err; if (stat.isDirectory()) { async++; fs.readdir(path, function(err, files){ async--; if (err) throw err; for (var i=0; i < files.length; i++) { if (!hidden && files[i][0] == '.') return; count++; search(fp.join(path, files[i])); } }); } else if (stat.isFile()) { //ignore unmatched file masks if (!filemask.test(path)) { return; } async++; fs.readFile(path, 'utf8', function(err, str){ async--; if (err) throw err; var lines = str.split('\n'); for (var i=0; i < lines.length; i++) { var line = lines[i].trim(); //return matching lines & line number if (query.test(line)) return [i, line]; } if (lines.length) { out.push(path); for (var i=0; i < lines.length; i++) { out.push((lines[i][0]+1) + ": " + lines[i][1]); list.push({ path : path, line : lines[i][0], text : lines[i][1] }); } out.push(''); } if(count == 0 && async == 0) { cb(out.join("\n") || "", list); } }); } }); } search(dir); }; About 50 lines of code (minus empty lines, comments and closing brackets) to implement a grep-like file system search in NodeJS. Here's the same function implemented with VerbotenJS conventions. function grep(dir, query, filemask, recurse, hidden, cb) { //grep for 'query' in 'dir'. //'recurse' if necessary, //ignore 'hidden' files if necessary. //'cb' required. dir = dir || process.cwd(); query = query.toRegex('i'); filemask = filemask.toRegex('i'); //recursively search the filesystem q.fs.ls(dir, recurse, function(list) { //filter files we don't need var files = list.ea(function(f) { if (!hidden && f[0] == ".") return; //things like .hg/.git if (!filemask.test(f)) return; return f; }); var matches = []; //read each file and collect matching line info files.sort().ea(function(next, f) { q.f.utf8(f, function(txt) { txt.split('\n').trim().ea(function(line, i) { //if line matches query, return info if (query.test(line)) { matches.push({ path : f, line : i, text : line }); } }); //process next file next(); }); }, function() { //replicate grep stdout var stdout = matches.ea(function(o) { return [o.path, o.line + 1, ": ", o.text, ''].join('\n'); }); //return stdout and matches obj cb(stdout, matches); }); }); }; Here it only took 22 lines of code to implement the same logic. And in reality, the VerbotenJS version is more robust than the raw JavaScript version due to the flexibility of functions like .toRegex() and .test(). And my .ea() function operates like Array.forEach, except that it allows for Object key iteration, asynchronous or synchronous iteration based on the presence of a callback function, and enhanced iteration workflows with helpers like ea.exit(), ea.merge() and ea.join(). Basically .ea() precludes the need to ever use for loops, for-in loops, or any of the new ES5 array extensions Array.forEach|map|filter|some|every|reduce* functions. That's all I have time to report on now, but I'll put up some more examples as I find well isolated examples. Conclusion ================================================== VerbotenJS has been a career long project of mine. I've renamed/rewritten the project multiple times over, for various different JavaScript hosts like WScript/JScript, HTA's, Rhino, J#, Jaxer and even a custom C# host I wrote for fun. With the growing popularity of NodeJS I think VerbotenJS has finally found a good home. And the architecture is finally to a point that I consider it stable and worthy of peer review. So get reviewing peers! Thanks for reading, Marcus Pope
https://bitbucket.org/marcuspope/verbotenjs
CC-MAIN-2015-40
refinedweb
1,557
67.35
Koen Claessen <[email protected]> writes: > | Does anyone have any better suggestions? > I think any solution that leaves it transparent as to if it > is a compiled or an interpreted module is fine. > But I have understood that this is hard to achieve... How about using a different command for importing the exported interface only? I.e. :l M --loads the whole (internal/top level) module :i M --"imports" the module's exported interface If :l requires interpretation, then so be it, otherwise use compiled modules whenever possible. I'm not sure I see the need for "namespace combinators" (ie. SM's suggested :m +M and so on) -kzm -- If I haven't seen further, it is by standing in the footprints of giants
http://www.haskell.org/pipermail/glasgow-haskell-users/2002-January/001380.html
CC-MAIN-2014-41
refinedweb
125
65.22
Map a device's physical memory into a process's address space #include <sys/mman.h> void * mmap_device_memory( void * addr, size_t len, int prot, int flags, uint64_t physical ); A memory area being mapped with MAP_FIXED is first unmapped by the system using the same memory area. See munmap() for details. This function already uses MAP_SHARED ORed with MAP_PHYS (see mmap() for a description of these flags). libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The mmap_device_memory() function maps len bytes of a device's physical memory address into the caller's address space at the location returned by mmap_device_memory(). You should use this function instead of using mmap() with the MAP_PHYS flag. Typically, you don't need to use addr; you can just pass NULL instead. If you set addr to a non-NULL value, whether the object is mapped depends on whether or not you set MAP_FIXED in flags: The address of the mapped-in object, or MAP_FAILED if an error occurs (errno is set). /* ); } QNX Neutrino mmap(), mmap_device_io(), munmap_device_memory()
https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/m/mmap_device_memory.html
CC-MAIN-2018-26
refinedweb
182
64
I’ve implemented a minimal external sort of text file using heapq python module. On the few tests I did it seems to works well, but I would like to have some advice to have a cleaner and faster code. I do not know much of good practices and I want to learn (May wish to go from academics to industry one day). All remarks, advice and suggestions are warmly welcome. There are 3 functions: one that splits the big file in smaller files, one that does the merge, and one main function. import os import tempfile import heapq import sys import shutil # Algorithm based on # def split_large_file(starting_file, my_temp_dir, max_line=1000000): """ :param starting_file: input file to be splitted :param my_temp_dir: temporary directory :param max_line: number of line to put in each smaller file (ram usage) :return: a list with all TemporaryFile """ liste_file = [] line_holder = [] cpt = 0 with open(starting_file, 'rb') as f_in: for line in f_in: line_holder.append(line) cpt += 1 if cpt % max_line == 0: cpt = 0 line_holder.sort(key=lambda x: x.split()[0]) temp_file = tempfile.NamedTemporaryFile(dir=my_temp_dir, delete=False) temp_file.writelines(line_holder) temp_file.seek(0) line_holder = [] liste_file.append(temp_file) if line_holder: line_holder.sort(key=lambda x: x.split()[0]) temp_file = tempfile.NamedTemporaryFile(dir=my_temp_dir, delete=False) temp_file.writelines(line_holder) temp_file.seek(0) liste_file.append(temp_file) return liste_file def merged(liste_file, out_file, col): """ :param liste_file: a list with all temporary file opened :param out_file: the output file :param col: the column where to perform the sort, being minimal the script will fail if one column is shorter than this value :return: path to output file """ my_heap = [] for elem in liste_file: line = elem.readline() spt = line.split(b"\t") heapq.heappush(my_heap, [int.from_bytes(spt[col], "big"), line, elem]) with open(out_file, "wb") as out: while True: minimal = my_heap[0] if minimal[0] == sys.maxsize: break out.write(minimal[1]) file_temp = minimal[2] line = file_temp.readline() if not line: my_heap[0] = [sys.maxsize, None, None] os.remove(file_temp.name) else: spt = line.split(b"\t") my_heap[0] = [int.from_bytes(spt[col], "big"), line, file_temp] heapq.heapify(my_heap) return out_file def main(big_file, outfile, tmp_dir=None, max_line=1000000, column=0): if not tmp_dir: tmp_dir = os.getcwd() with tempfile.TemporaryDirectory(dir=tmp_dir) as my_temp_dir: temp_dir_file_list = split_large_file(big_file, my_temp_dir, max_line) print("splitted") merged(liste_file=temp_dir_file_list, out_file=outfile, col=column) print("file merged, sorting done")
https://proxieslive.com/python-3-simple-external-sort-with-heapq/
CC-MAIN-2021-10
refinedweb
392
51.85
With the release of Windows 95, Microsoft also introduced the DirectX Application Programming Interface (API), which allowed Windows-based applications to integrate closely, in a standard way, with the graphics hardware available on the system. Prior to DirectX, most PC game development targeted MS-DOS, as Windows-based graphics were too slow for most gaming needs. Although faster, working with the DirectX API could be challenging. The DirectX Software Development Kit (SDK) is targeted at C++, with no official support for other languages. The developer is also faced with large volumes of background work to get a DirectX project to the point where he can display images on the screen before ever considering the logic of the game itself. In 2002, Microsoft released Managed DirectX as an interface to the API from its new .NET development environment. The .NET Framework consists of a set of code libraries to perform common programming tasks, and the Common Language Runtime (CLR) which allows code written in the various .NET languages (including Visual Basic .NET and C#) to be compiled into common runtime code. In order to support devices such as Windows Mobile phones, a subset of the .NET Framework was released, called the .NET Compact Framework. The .NET CF, as it is often abbreviated, removed non-essential components of the full Framework in the interest of saving storage space on handheld devices. While Managed DirectX 2.0 was still in the beta phase, the project was cancelled, and Microsoft XNA was introduced in its place. XNA consists of the XNA Framework, a set of code libraries to perform common graphics, sound, and other game related tasks, and XNA Game Studio, an extension of the Visual Studio C# interface that includes a number of project templates to make use of the XNA Framework. The XNA project templates include an integrated game loop, easy to use (and fast) methods to display graphics, full support for 3D models, and simple access to multiple types of input devices. In addition to Windows games, XNA allows deployment to both the Xbox 360, the Zune handheld media player (with XNA 3.1) and Windows Phone 7 Series phones (with XNA 4.0). For the first time, a game console manufacturer has released a supported method for individual game developers to create (and sell!) content for their game console. Microsoft has even established the Xbox Indie Games system on Xbox Live to allow you to sell your creations to the world. Tip What does XNA stand for, anyway? According to the developers, XNA is an acronym for "XNA's Not Acronymed". In this introductory chapter you will: Look at an overview of the games presented in this book Download and install XNA Game Studio Create a new Windows Game project Modify the default Windows Game template to build your first XNA game Many beginning developers make the mistake of attempting to tackle far too large a project early on. Modern blockbuster video games are the result of the efforts of hundreds of programmers, designers, graphic artists, sound effects technicians, producers, directors, actors, and many other vocations, often working for years to create the game. That does not mean that the efforts of a solo developer or small team need to be dull, boring, and unplayable. This book is designed to help you develop a solid understanding of 2D game development with XNA Game Studio. By the time you have completed the projects in this book, you will have the knowledge necessary to create games that you can complete without an army of fellow game developers at your back. In this chapter, you will build your first XNA mini game, chasing squares around the screen with your mouse cursor. In subsequent chapters the following four more detailed games are presented: Flood Control : An explosion in one of the research laboratories has cracked the pressure dome protecting your underwater habitat. Work quickly to construct a series of pipes to pump water out of the habitat before it floods. Flood Control is a board-based puzzle game with simple game mechanics and slowly increasing difficulty. Asteroid Belt Assault : After being separated from your attack fleet in Hyper Space, you find yourself lost in an asteroid field without communications or navigation systems. Work your way through the chaos of the asteroid belt while combating alien pilots intent upon your destruction. A vertically scrolling space shooter, Asteroid Belt Assault introduces scrolling backgrounds, along with player and computer controlled characters. Robot Rampage : In the secret depths of a government defence facility, a rogue computer has taken control of robotic factories across the world, constructing an army of mechanical soldiers. Your missionâinfiltrate these factories and shut down their network links to break the computer's control. A multi-axis shooter utilizing both of the analog control sticks on the Xbox 360 gamepad controller, Robot Rampage generates and manages dozens of on-screen sprites and introduces world map construction. Gemstone Hunter : Explore the Australian wilderness, abandoned mines and ancient caves in a search for fabulous treasures. In Gemstone Hunter you will construct a classic platform-style game, including a Windows Forms-based level editor and a multi-map "world" to challenge the player. The games are each presented over two chapters. In the first chapter, the basics are implemented to the point where the game is playable. In the second chapter, features and polish are added to the game. Each game introduces new concepts and expands on topics covered in the previous games. At the end of each game chapter, you will find a list of exercises challenging you to use your newly gained knowledge to enhance previous games in the book. We will focus on Windows as our platform for the games presented in this book. That said, the code presented in this book requires very little in the way of changes for other XNA platforms, generally only requiring implementation of platform-specific controls (gamepads, touch screen, and so on) and consideration of the differences in display sizes and orientation on non-Windows devices. In order to develop games using XNA Game Studio, you will need a computer capable of running both Visual C# 2010 Express and the XNA Framework extensions. The general requirements are: Tip HiDef vs. Reach As of version 4.0, XNA now supports two different rendering profiles. The HiDef profile is available on the Xbox 360 and Windows PCs with DirectX 10 or better video cards, and uses Shader Model 3.0. The Reach profile is available on all XNA platforms, and uses Shader Model 2.0. If you have a DirectX 9 video card, or wish to distribute your games to computers with DirectX 9 support, you will need to right-click on your project in Solution Explorer and select Properties. On the XNA Game Studio tab, select the Reach profile. To get started developing games in XNA, you will need to download and install the software. You will need both Visual C# and XNA Game Studio. With the release of XNA 4.0, the install packages have been consolidated, and both required components are included in the Windows Phone Developer Tools package. Visit and download the Windows Phone Developer Tools package. Run the setup wizard and allow the installation package to complete. Open Visual Studio Express. Click on the Help menu and select Register Product. Click on the Register Now link to go to the Visual Studio Express registration page. After you have completed the registration process, return to Visual Studio Express and enter the registration number into the registration dialog box. Close Visual Studio Express. Download the Font Pack from. Extract the ZIP file contents to a temporary folder (leave this folder open). From the Start Menu, select Control Panel. Under Classic View, choose Fonts. Drag the fonts from the temporary folder to the Fontsfolder. Close both Explorer windows. Launch Visual Studio Express, and the Integrated Development Environment (IDE) will be displayed as seen in the following screenshot: Tip Other versions of Visual Studio and XNA Different versions of Visual Studio and XNA can be installed on the same PC without interfering with each other. If you wish to target the Zune platform, you will need to install Visual C# 2008 Express and XNA 3.1. Additionally, Visual Studio Express and Visual Studio Professional can coexist on the same PC, and XNA will integrate with both of them if it is installed after Visual Studio. You have now successfully installed the Windows Phone Developers Tools, including XNA Game Studio 4.0 and the Redistributable Font Pack provided by Microsoft for XNA developers. Tip The redistributable fonts package To use its integrated text drawing methods, XNA games need to convert normal Windows fonts into an internal format called a SpriteFont. These SpriteFonts get distributed with your game, which means you will not be able to use most of the fonts on your computer due to licensing restrictions. For this reason, Microsoft has provided a selection of fonts that XNA developers can freely distribute without purchasing an individual license to do so. XNA attempts to simplify many of the basic elements of game development by handling things like the game update loop and simplifying the display of graphical objects. To illustrate just how much of the background work is integrated into the XNA project templates, let's jump in straight away and create your first game within a few minutes of finishing the installation. In SquareChase, we will generate randomly positioned squares of different colors while the user attempts to catch them with their mouse pointer before they disappear. While building the project, we will discuss each of the major code sections pre-defined by the XNA templates. Each of the XNA project templates is a series of files and settings that get copied to your new project folder. Included in this set of files is the Game1.cs file, which is the heart of your XNA game. Tip Backup your projects When you create your project, the Location field specifies where it will be saved. By default, Visual Studio creates a folder in your user documents area called Visual Studio 2010 to store both programs and configuration information. Under this folder is a Projects folder that contains subfolders for each new project you create. Make backups of your projects on a regular basis. You do not want to lose your hard work to a disk failure! The most basic XNA game will have all of its code contained in the file called Game1.cs. This file is generated when you create a new project, and contains override declarations for the methods used to manage your game. In addition to the Game1 class' declarations area, there are five primary methods you will customize for any XNA project. Right below the class declaration for Game1 is the class level declarations area. By default, this area contains two variables: GraphicsDeviceManager graphics; SpriteBatch spriteBatch; The graphics object provides access to, not surprisingly, the system's video card. It can be used to alter the video mode, the size of the current viewport (the area that all drawing work will be clipped to if specified), and retrieve information about Shader Models the video card supports. XNA provides the SpriteBatch class to allow you to (very quickly) draw 2D images (called "sprites") to the screen. The declarations area is the spot for any variables that need to be maintained outside of any of the individual methods listed below. In practice, any data you need to keep track of throughout your game will be referenced, in some way, in your declarations section. These are all the variables you will need for the SquareChase mini game. Here is a quick breakdown: rand : This instance of the Random class is used to generate random numbers via the Next() method. You will use this to generate random coordinates for the squares that will be drawn to the screen. squareTexture : The Texture2D class holds a two dimensional image. We will define a small texture in memory to use when drawing the square. currentSquare : The XNA Framework defines a structure called Rectangle that can be used to represent an area of the display by storing the x and y position of the upper left corner along with a width and height. SquareChase will generate random squares and store the location in this variable. playerScore : Players will score one point each time they successfully "catch" a square by clicking on it with their mouse. Their score accumulates in this integer variable. timeRemaining: When a new square is generated, this float will be set to a value representing how many seconds it will remain active. When the counter reaches zero, the square will be removed and a new square generated. TimePerSquare : This constant is used to set the length of time that a square will be displayed before it "runs away" from the player. colors: This array of Color objects will be used when a square is drawn to cycle through the three colors in the array. The Color structure identifies a color by four components: Red, Green, Blue, and Alpha. Each of these components can be specified as a byte from 0 to 255 representing the intensity of that component in the color. The Alpha component determines the transparency of the color, with a value of 0 indicating that the color is fully transparent and 255 indicating a fully opaque color. Alternatively, each component of a color can be specified as a float between 0.0f (fully transparent) and 1.0f (fully opaque). The XNA templates define an instance of the Microsoft.Xna.Framework.Game class with the default name "Game1" as the primary component of your new game. Slightly more goes on behind the scenes, as we will see when we add an XNA game to a Windows Form in Chapter 8, but for now, we can consider the Game1 constructor as the first thing that happens when our XNA game is executed. The class constructor is identified as public Game1(), and by default, it contains only two lines: graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; For most of the games in this book, we will not need to make extensive modifications to the Game1 constructor, as its only job is to establish a link to the GraphicsDeviceManager object and set the default directory for the Content object which is used to load images, sound, and other game content. After the constructor has finished and your XNA game begins to run, the Initialize() method is called. This method only runs once, and the default code created with a new project template simply calls the base version of the method. The Initialize() method is the ideal place to set up things like the screen resolution, toggle full screen mode, and enable the mouse in a Windows project. Other game objects that do not rely on external content such as graphics and sound resources can also be initialized here. By default, the mouse is not visible inside the XNA game window. Setting the IsMouseVisible property of the running instance of the Game1 class enables the mouse cursor in Windows. Tip Input types on other platforms The Xbox, Zune, and Windows Phone do not support a mouse, so what happens when the code to enable the mouse runs on these platforms? Nothing! It just gets ignored. It is also safe to ask other platforms about their non-existent keyboards and check the state of a gamepad on a Windows PC without one attached. Part of the responsibility of the base Initialize() method is to call LoadContent() when the normal initialization has completed. The method is used to read in any graphical and audio resources your game will need. The default LoadContent() method is also where the spriteBatch object gets initialized. You will use the spriteBatch instance to draw objects to the screen during execution of the Draw() method. Open Microsoft Paint or your favourite image editor and create a new 16 by 16 pixel image and fill it with white. Save the image as SQUARE.BMP in a temporary location. Back in Visual C# Express, right-click on SquareChaseContent (Content) in Solution Explorer (you may need to scroll down to see it) and select Add | Existing Item. Browse to the image you created and click on Ok. Add the following code to the LoadContent()method after the spriteBatchinitialization: squareTexture = Content.Load<Texture2D>(@"SQUARE"); To load content, it must first exist. In steps 1 and 2, you created a bitmap image for the square texture. In step 3, you added the bitmap image as a piece of content to your project. Tip Powers of two Very old graphics cards required that all texture images be sized to "powers of two" (2, 4, 8, 16, 32, 64, 128, 256, etc). This limitation is largely non-existent with modern video hardware, especially for 2D graphics. In fact, the sample code in the XNA Platform Starter Kit uses textures that do not conform to the "powers of two" limitation. In our case, the size of the image we created previously is not critical, as we will be scaling the output when we draw squares to the screen. Finally, in step 4 you used the Content instance of the ContentManager class to load the image from the disk and into the memory when your game runs. The Content object is established automatically by XNA for you when you create a new project. When we add content items, such as images and sound effects to our game project, the XNA Content Pipeline converts our content files into an intermediate format that we can read via the Content object. These XNB files get deployed alongside the executable for our game to provide their content data at runtime. Once LoadContent() has finished doing its job, an XNA game enters an endless loop in which it attempts to call the Update() method 60 times per second. This default update rate can be changed by setting the TargetElapsedTime property of the Game1 object, but for our purposes, the default time step will be fine. If your Update() logic starts to take too long to run, your game will begin skipping calls to the Draw() method in favour of multiple calls to Update() in an attempt to catch up with the current game time. All of your game logic gets built into the Update() method. It is here that you check for player input, move sprites, spawn enemies, track scores, and everything else except draw to the display. Update() receives a single parameter called gameTime, which can be used to determine how much time has elapsed since the previous call to Update() or to determine if your game is skipping Draw() calls by checking its IsRunningSlowly property. The default Update() method contains code to exit the game if the player presses the "Back" button on the first gamepad controller. Add the following to Update()right before the call to base.Update(gameTime); if (timeRemaining == 0.0f) { currentSquare = new Rectangle( rand.Next(0, this.Window.ClientBounds.Width - 25), rand.Next(0, this.Window.ClientBounds.Height - 25), 25, 25); timeRemaining = TimePerSquare; } MouseState mouse = Mouse.GetState(); if ((mouse.LeftButton == ButtonState.Pressed) && (currentSquare.Contains(mouse.X, mouse.Y))) { playerScore++; timeRemaining = 0.0f; } timeRemaining = MathHelper.Max(0, timeRemaining - (float)gameTime.ElapsedGameTime.TotalSeconds); this.Window.Title = "Score : " + playerScore.ToString(); The first thing the Update() routine does is check to see if the current square has expired by checking to see if timeRemaining has been reduced to zero. If it has, a new square is generated using the Next() method of the rand object. In this form, Next() takes two parameters: an (inclusive) minimum value and a (non-inclusive) maximum value. In this case, the minimum is set to 0, while the maximum is set to the size of the this.Window.ClientBounds property minus 25 pixels. This ensures that the square will always be fully within the game window. Next, the current position and button state of the mouse is captured into the "mouse" variable via Mouse.GetState(). Both the Keyboard and the GamePad classes also use a GetState() method that captures all of the data about that input device when the method is executed. If the mouse reports that the left button is pressed, the code checks with the currentSquare object by calling its Contains() method to determine if the mouse's coordinates fall within its area. If they do, then the player has "caught" the square and scores a point. The timeRemaining counter is set to 0, indicating that the next time Update() is called it should create a new square. After dealing with the user input, the MathHelper.Max() method is used to decrease timeRemaining by an amount equal to the elapsed game time since the last call to Update(). Max() is used to ensure that the value does not go below zero. Finally, the game window title bar is updated to display the player's score. Tip The Microsoft.Xna.Framework namespace provides a class called MathHelper that contains lots of goodies to make your life easier when dealing with numeric data, including converting degrees to and from radians, clamping values between a certain range, and generating smooth arcs between a starting and ending value. The final method in the default Game1.cs file is responsible, not surprisingly, for drawing the current game state to the display. Draw() is normally called once after each call to Update() unless something is happening to slow down the execution of your game. In that case, Draw() calls may be skipped in order to call Update() more frequently. There will always be at least one call to Update() between calls to Draw(), however, as sequential Draw() calls would provide no benefitânothing in the game state will have changed. The default Draw() method simply clears the display window in the Cornflower Blue color. Alter the GraphicsDevice.Clear(Color.CornflowerBlue);call and replace Color.CornflowerBluewith Color.Grayto make the game a bit easier on the eyes. Add the following code after the call to clear the display: spriteBatch.Begin(); spriteBatch.Draw( squareTexture, currentSquare, colors[playerScore % 3]); spriteBatch.End(); Any time you use a SpriteBatch object to draw to the display, you need to wrap the calls inside a Begin() and End() pair . Any number of calls to spriteBatch.Draw() can be included in a single batch and it is common practice to simply start a Begin() at the top of your Draw() code, use it for all of your drawing, and then End() it right before the Draw() method exits. While not benefiting our SquareChase game, batching sprite drawing calls greatly speeds up the process of drawing a large number of images by submitting them to the rendering system all at once instead of processing each image individually. The SpriteBatch.Draw() method is used to draw a Texture2D object to the screen. There are a number of different options for how to specify what will be drawn. In this case, the simplest call requires a Texture2D object ( squareTexture), a destination Rectangle ( currentSquare), and a tint color to apply to the sprite. The expression playerScore % 3 takes the player's score, divides it by 3, and returns the remainder. The result will always be 0, 1, or 2. This fits perfectly as an index to the elements in the colors array, allowing us to easily change the color of the square each time the player catches one. Finally, the spriteBatch.End() tells XNA that we have finished queuing up sprites to draw and it should actually push them all out to the graphics card. You just finished your first XNA game, that's what! Granted it is not exactly the next blockbuster, but at only 33 lines of code, it implements a simple game mechanic, user input, score tracking and display, and clock-based timing. Not bad for a few minutes work. As simple as it is, here are a couple of enhancements you could make to SquareChase: Vary the size of the square, making it smaller every few times the player catches one, until you reach a size of 10 pixels. Start off with a higher setting for TimePerSquareand decrease it a little each time the player catches a square. (Hint: You'll need to remove the constdeclaration in front of TimePerSquareif you wish to change it at runtime). You now have a development environment set up for working on your XNA game projects, including Visual Studio Express and XNA Game Studio 4.0. We also saw how the XNA game loop initializes and executes, and constructs an elementary game by expanding on the default methods provided by the Windows Game template. It is time to dive head first into game creation with XNA. In the next chapter, we will begin building the puzzle game Flood Control in which the player is challenged to pump water out of their flooding underwater research station before the entire place really is underwater!
https://www.packtpub.com/product/xna-4-0-game-development-by-example-beginner-s-guide/9781849690669
CC-MAIN-2020-40
refinedweb
4,165
61.56
What are strings? In computer science, a string is a sequence of characters as either a variable or a literal constant. Read this article about strings to gain some understanding of strings: Why reverse strings? Manipulating strings is a basic fundamental knowledge that every software engineer must understand. Most software programs use strings as input and output strings for many reasons such as providing a human friendly interface. This is a standard question during technical interviews too, so make time to learn this, be able to explain it, code it and explain what your code is doing. How to reverse strings. - Login to your Browxy account. - Create a new java file and enter the following code for your new java file. - Run your new java file and view results in your web browser. You can view and run this code in your web browser right now with Browxy. public class ReverseString { public static void main(String []args){ System.out.println ( reverse("Hello Monica") ); } public static String reverse ( String s ) { int length = s.length(), last = length - 1; char[] chars = s.toCharArray(); for ( int i = 0; i < length/2; i++ ) { char c = chars[i]; chars[i] = chars[last - i]; chars[last - i] = c; } return new String(chars); } } Code Explanation //Create java class that other java classes can reuse. public class ReverseString { //main java function is required //take in string array[] of args from other java functions in this program //public //static //void public static void main(String []args){ //test our reverse java function //enter a string to run reverse function on System.out.println ( reverse("Hello Monica") ); } //create reverse java function //String //(String s) public static String reverse ( String s ) { //length set to input String s (Hello Monica input from main function) //last set to length minus the previous String s int length = s.length(), last = length - 1; //[]array chars set to input string (String s / Hello Monica) //convert input string (Hello Monica) to array char[] chars = s.toCharArray(); //loop to build out the new reversed string //look at each char (H e l l o M o n i c a) //run this when i is less than half the length of input string (Hello Monica) //each time loop runs, add another i to the new reversed string for ( int i = 0; i < length/2; i++ ) { //c set to array i char c = chars[i]; //array i set to last character we did not add yet chars[i] = chars[last - i]; //last char set to c chars[last - i] = c; } return new String(chars); } } Assignment Create one blog post with all of the following: - Create your own personal notes based on what you understand about manipulating strings with java.: “Manipulate Strings with Java Assignment for MoniGarr.com”.Your YouTube video description: “This is my video review of my Manipulate Strings with Java assignment for MoniGarr.com”. You can include more details if you wish, but please include the minimum I’m asking you for, to make it easier for our team to locate your work.Your YouTube video keywords: “java, string, Manipulate Strings Manipulate Strings - // Manipulate Strings with Java assignment for MoniGarr.com MANIPULATE STRINGS WITH JAVA QUIZ Complete and Pass Monigarr.com’s Manipulate Strings with Java Quiz References: - 5 ways to manipulate strings in java. - Strings (computer science). - Hints for Good Note Taking - Back to Basics: Perfect your Note Taking Techniques You must log in to post a comment.
https://monigarr.com/2014/08/26/manipulate-strings-java/
CC-MAIN-2021-17
refinedweb
570
59.74
Question: How to get genome coordinate of a refseq ID with biopython? 0 4.7 years ago by yuehu.mail • 0 United States yuehu.mail • 0 wrote: Hi Everyone, I tried to get genome coordinate of a refseq with Entrez.efetch but couldn't. I can get it with a gene ID but not a refseq ID. from Bio import Entrez Entrez.email = '[email protected]' handle = Entrez.efetch(db = "nucleotide", id= 'NM_001006118', retmode="xml") It will give me different information but not genome coordinate. I know how to download genome coordinates from UCSC but I want to check NCBI data. Would appreciate your help! ADD COMMENT • link •written 4.7 years ago by yuehu.mail • 0
https://www.biostars.org/p/146837/
CC-MAIN-2020-10
refinedweb
117
71.82
The MFC class library makes handling ActiveX control events so easy that you are misled to believe that handling COM events is no big deal, until you try to do it within a console application. As you may have heard, I wrote the very simple XYDispDriver class, which can be used to create COM objects and call COM methods easily, especially in console applications. It would be nice if we can also use XYDispDriver to handle COM events, so I started to work on its enhancement. Basically, a COM event is the opposite of COM method. Your code calls the methods in a COM object, and if you set up this thing called "event sink" correctly, the COM object will call your code (i.e. event handlers) when something happens within the object. That's why the COM event interface is called the "outgoing interface" (depending on which side you are standing, I guess). Here is my idea. If I have a COM object which fires events, I will be able to find out the event interface, including its class ID and function signatures, from the type library. I will build a new COM object as my event handler, whose methods will match those in the event interface of the original COM object (same dispid, same signature, etc.). The magic is, I will use the XYDispDriver class to create both objects and somehow "connect" them together. Whenever the original object fires an event, the corresponding method in the new object will be called to handle the event. Of course, I have to make it easy to use (in console applications). You can add more methods to the event handler object, even passing function pointers to it so that you can call functions defined outside the object when handling events. As you can see from the source code, I added an Advise method to the XYDispDriver class. This method takes two arguments, the first argument is the IDispatch pointer of the new COM object (the event handler), the second one is the class ID of the event interface of the original COM object. Calling the Advise method is all it takes to "connect" the event handler to the event interface. The Advise method is implemented with some boring COM API calls ( QueryInterface, FindConnectionPoint, etc.), but the code is surprisingly simple. By the way, you don't have to bother with Unadvise (if you have heard of it), it is done automatically when the XYDispDriver object goes out of scope. Now we test the idea. Using the MFC Control Wizard, I first created an ActiveX control TestCon, which has one method called Connect, this method will fire an InvalidLoginData event whenever it fails. Then I created a second ActiveX control TestHndlr as the event handler for the first control. The second control has one method that matches the dispid and signature of the InvalidLoginData event in the first control, the handler method will pop up a message box when it is invoked. Finally, I wrote a console application that uses these two controls. Here is the code of the console app. #include <stdio.h> #include "XYDispDriver.h" void main() { // declare two XYDispDriver variables XYDispDriver dispCon, dispEvent; // create the TestCon control if(dispCon.CreateObject("TestCon.1")) { // create the TestHndlr control if(dispEvent.CreateObject("TestHndlr.1")) { // the class id of the event interface in the TestCon control CLSID clsidEvent = {0xe77a1f7e,0xe3ff,0x11d5, {0x88,0x12,0x00,0xb0,0xd0,0x55,0xb5,0x23}}; // call the Advice method to set up the event handler dispCon.Advise(dispEvent.GetDispatch(),clsidEvent); } else printf("Error: %x\n",dispEvent.GetLastError()); // call the Connect method, passing "username" // and "password" parameters // it should generate an event in either one of the following cases: // 1. "username" or "password" is empty // 2. "password" length < 6 // 3. "password" equals "username" dispCon.InvokeMethod("Connect","MyName","MyPwd"); // you should see the event message box by now } else printf("Error: %x\n",dispCon.GetLastError()); ::MessageBox(NULL,_T("Done"),_T("Test"),MB_OK); } You can use the same technique with a more complicated event interface, of course. I have tested my XYMessenger.OCX control with this method, so it can now be used even in console applications. The more complicated situations (such as multiple event sinks, etc.) will be dealt with in a separate article. The zip file included with this article only has source code for the two ActiveX controls and the console app. You can download the source code for XYDispDriver from my other article. You can also find other articles and software from my home page. I do not claim that I invented anything new here. I am not a COM expert, this is actually the first time I used the IConnectionPoint interface. There could be other more "standard" ways to do the same thing. Thank you. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/COM/xyevent.aspx
crawl-002
refinedweb
805
63.9
Android: module "xy" is not installed Heya, Some time ago I started this thread but never got any reply: I keep having this issue with the declarative camera and now also with Shape {}. Qt 5.11 in the meantime, build platform is Windows for Android API level 22, x86, but using 25 and ARM for both API levels exposes the same error message. I got rid of the "Qt.labs.settings is not installed" error by removing any reference to it and working around it. Now I get module "QtQuick.Shapes" is not installed and module "QtMultimedia" is not installed only. I have entirely followed the documentation. In the QML: import QtQuick.Shapes 1.0 import QtMultimedia 5.8 In the .pro file: QT += multimedia The documentation doesn't say anything about Shape {}, hence I assume I don't need to add anything in the .pro file apart from what's already there: QT += quickcontrols2 I can build and run the application on Windows and Linux, but I cannot run it on Android due to that error message when the QML in question is loaded (via Loader). When the verbose output of the Android deployment tool is turned on in QtCreator, I do get a multimedia molule added. I haven't tried or checked for Shape (yet) as I assume it to be the same issue as with QtMultimedia. I have also tried to include the .so files manually by utilising ANDROID_PACKAGE_SOURCE_DIR, but I don't think this is the problem here. The folder android-build\libs\x86 in my opinion contains the correct files: libQt5Multimedia.so libQt5MultimediaQuick.so libQt5Quick.so libQt5QuickControls2.so - lots of others. I can't see or find anything in the documentation I may have missed. Does anyone have an idea what might be wrong? I did an extensive search for these issues and even found several sources with the same problem. However, no solution has been provided anywhere. One source suggests the import version numbers might be wrong. I have tried several different version numbers by always entirely deleting the build directory beforehand, but to no avail. My current code contains what the documentation states but still no luck. One example is this one: - raven-worx Moderators last edited by @Padlock for the QML imports you do not have to change anything in your .pro file. The just need to be in one of the import paths. I'm not sure I understand. How should I adjust the import paths manually? I thought the Android deployment tool checks the QML files for imports, does it not? I've been able to identify the culprit about a week ago. The qmlimportscanner does not examine the .pro file to see what QML files are included in the project. It also doesn't care about what's in the.qrc file(s). Instead, it blindly adds the modules it can find by diving into the QML files in the folder where the .pro file resides, including subfolders. Any QML file above the directory that contains the .pro file is ignored. I created a simple .qml file in the same directory as the .pro file. Then I only added the import lines that are used within all my .qml files in other folders. Since this lead to a syntax complaint I also added the text Page {} after the last import line. import QtQuick 2.9 import QtQuick.Layouts 1.3 import QtQuick.Controls 2.4 import QtQuick.Shapes 1.11 import QtMultimedia 5.8 import QtQuick.Window 2.11 Page { } Note that this file is not referenced in the .pro file and is of course not part of the project/application at all. It only exists for the purpuse that qmlimportscanner includes the correct modules. @Padlock said in Android: module "xy" is not installed: Instead, it blindly adds the modules it can find by diving into the QML files in the folder where the .pro file resides, including subfolders. Any QML file above the directory that contains the .pro file is ignored With your help, I created a symbolic link next to my project *.pro file to where my qml files reside, like below, it worked: $ ln -sv ../editorlib/qml/ qml qml -> ../editorlib/qml/ Now I have: $ ls -lh total 32 -rw-r--r-- 1 aec staff 253B Dec 11 13:18 deployment.pri -rw-r--r-- 1 aec staff 3.3K Dec 17 15:15 main.cpp lrwxr-xr-x 1 aec staff 17B Dec 29 11:20 qml -> ../editorlib/qml/ -rw-r--r-- 1 aec staff 1.4K Dec 29 10:54 standalone.pro After adding above symbolic link, qml import scanner can find QML dependencies with such messages: -- Appending dependency found by qmlimportscanner: /Users/aec/Qt5.11.3/5.11.3/android_armv7/qml/QtQuick/Extras/DelayButton.qml -- Appending dependency found by qmlimportscanner: /Users/aec/Qt5.11.3/5.11.3/android_armv7/qml/QtQuick/Extras/DelayButton.qmlc -- Appending dependency found by qmlimportscanner: /Users/aec/Qt5.11.3/5.11.3/android_armv7/qml/QtQuick/Extras/designer/CircularGaugeSpecifics.qml -- Appending dependency found by qmlimportscanner: /Users/aec/Qt5.11.3/5.11.3/android_armv7/qml/QtQuick/Extras/designer/CircularGaugeSpecifics.qmlc -- Appending dependency found by qmlimportscanner: /Users/aec/Qt5.11.3/5.11.3/android_armv7/qml/QtQuick/Extras/designer/DelayButtonSpecifics.qml -- Appending dependency found by qmlimportscanner: /Users/aec/Qt5.11.3/5.11.3/android_armv7/qml/QtQuick/Extras/designer/DelayButtonSpecifics.qmlc -- Appending dependency found by qmlimportscanner: /Users/aec/Qt5.11.3/5.11.3/android_armv7/qml/QtQuick/Extras/designer/DialSpecifics.qml - Julian Guarin 0 last edited by
https://forum.qt.io/topic/91130/android-module-xy-is-not-installed/6
CC-MAIN-2019-43
refinedweb
926
60.61
This is documentation for the next version of Grafana. For the latest stable release, go to the latest version. Manage alert rules The Alerting page lists all existing alert rules. By default, rules are grouped by types of data sources. The Grafana section lists all Grafana managed rules. Alert rules for Prometheus compatible data sources are also listed here. You can view alert rules for Prometheus compatible data sources but you cannot edit them. The Mimir/Cortex/Loki rules section lists all rules for Mimir, Cortex, or Loki data sources. Cloud alert rules are also listed in this section. - Manage alert rules View alert rules To view alerting details: - In the Grafana menu, click the Alerting (bell) icon to open the Alerting page. By default, the List view displays. - In View as, toggle between Grouped or State views by clicking the relevant option. See Group view and State view for more information. - Expand the rule row to view the rule labels, annotations, data sources the rule queries, and a list of alert instances resulting from this rule. Grouped view Grouped view shows Grafana alert rules grouped by folder and Loki or Prometheus alert rules grouped by namespace + group. This is the default rule list view, intended for managing rules. You can expand each group to view a list of rules in this group. Expand a rule further to view its details. You can also expand action buttons and alerts resulting from the rule to view their details. State view State view shows alert rules grouped by state. Use this view to get an overview of which rules are in what state. Each rule can be expanded to view its details. Action buttons and any alerts generated by this rule, and each alert can be further expanded to view its details. Filter alert rules To filter alert rules: - From Select data sources, select a data source. You can see alert rules that query the selected data source. - In the Search by label, enter search criteria using label selectors. For example, environment=production,region=~US|EU,severity!=warning. - From Filter alerts by state, select an alerting state you want to see. You can see alerting rules that match the state. Rules matching other states are hidden. Edit or delete an alert rule Grafana managed alert rules can only be edited or deleted by users with Edit permissions for the folder storing the rules. Alert rules for an external Grafana Mimir or Loki instance can be edited or deleted by users with Editor or Admin roles. To edit or delete a rule: - Expand a rule row until you can see the rule controls of View, Edit, and Delete. - Click Edit to open the create rule page. Make updates following instructions in Create a Grafana managed alerting rule or Create a Grafana Mimir or Loki managed alerting rule. - Click Delete to delete an alert rule..
https://grafana.com/docs/grafana/next/alerting/alerting-rules/rule-list/
CC-MAIN-2022-27
refinedweb
481
75.1
rpdb 0.1.6 pdb wrapper with remote access via tcp socket rpdb - remote debugger based on pdb rpdb is a wrapper around pdb that re-routes(port=12345) debugger.set_trace() It is known to work on Jython 2.5 to 2.7, Python 2.5 to 3.1. It was written originally for Jython since this is pretty much the only way to debug it when running it on Tomcat. Upon reaching set_trace(), your script will “hang” and the only way to get it to continue is to access rpdb using telnet, netcat, etc..: nc 127.0.0.1 4444 Installation in CPython (standard Python) pip install rpdb For a quick, ad hoc alternative, you can copy the entire rpdb subdirectory (the directory directly containing the __init__.py file) to somewhere on your $PYTHONPATH. Installation in a Tomcat webapp Just copy the rpdb directory (the one with the __init__.py file) in your WEB-INF/lib/Lib folder along with the standard Jython library (required). Trigger rpdb with signal set_trace() can be triggered at any time by using the TRAP signal handler. This allows you to debug a running process independantly of a specific failure or breakpoint: import rpdb rpdb.handle_trap() # As with set_trace, you can optionally specify addr and port rpdb.handle_trap("0.0.0.0", 54321) Calling handle_trap will overwrite the existing handler for SIGTRAP if one has already been defined in your application. Known bugs - The socket is not always closed properly so you will need to ^C in netcat and Esc+q in telnet to exit after a continue or quit. - There is a bug in Jython 2.5/pdb that causes rpdb to stop on ghost breakpoints after you continue (‘c’), this is fixed in 2.7b1. 0.1.6 (2015-01-05) - Give access to attributes of stdin and stdout (by @fuxpavel). - Add rpdb.post_mortem(), similar to pdb.post_mortem() (by @CamDavidsonPilon). 0.1.5 (2014-10-16) - Write addr/port to stderr instead of stdout (thanks to @onlynone). - Allow for dynamic host port (thanks to @onlynone). - Make q/quit do proper cleanup (@kenmanheimer) - Benignly disregard repeated rpdb.set_trace() to same port as currently active session (@kenmanheimer) - Extend backwards compatibility down to Python 2.5 (@kenmanheimer) 0.1.4 (2014-04-28) - Expose the addr, port arguments to the set_trace method (thanks to @niedbalski). 0.1.3 (2013-08-02) - Remove a try/finally that seemed to shift the trace location (thanks to k4ml@github). 0.1.2 (2012-01-26) - Catch IOError raised by print in initialization, it may not work in some environments (e.g. mod_wsgi). (Menno Smits) 0.1.1 (2010-05-09) Initial release. - Author: Bertrand Janin -.6.xml
https://pypi.python.org/pypi/rpdb/
CC-MAIN-2017-22
refinedweb
449
67.65
, Aug 22, 2005 at 11:34:24PM -0700, Kapil Thangavelu wrote: > hi folks, > > i''m interested in bringing forward my production/development > environment from mm 2.0 to mm cvs trunk, but I'm not clear if this is a > wise move, I'm like to do it because this installation is driving new > feature and contributions for/from me, and I'm like to be able to > contribute them upstream in an easily digestible usable? (i can deal with associated migration issues on > my own, but fixing incomplete/broken code on the trunk is a serious > issue), and is there a development roadmap for 2.1 around? > > thanks, > > -kapil Kapil, It might be helpful if we could chat about the development changes you want to make, and check that they are going to fit in with the work we have been doing. Up to this point, we haven't been dealing much with 3rd party developers, so we haven't been making all of the considerations required for this. We would like to get to the stage where we are doing this, but obviously this means some tradeoffs in development speed. A lot of the planned design for 2.1 and later releases is to modularise and restructure the MailManager code so there is a robust core at the centre which others can build upon. We would really like to get to the stage were the underlying design of MailManager is relatively static and well documented. The changes we are currently making to the code should reflect this. The code on the CVS head should be in a working state, and we have been trying to avoid putting incomplete changes onto that. There are some considerable changes which will be rolled out soon, however. In particular, for the 2.1 release the Zope page templates will be rewritten in order to add in CSS support for better accessibility. The Reports pages is also being moved into a modular reporting engine, and there should be a ruleset engine added to deal with ticket states and transitions. There is a public roadmap for 2.1 on our website at which covers most of the key areas we are developing. We do have our own internal development plans, but these have been relatively variable and speculative. Hopefully we can publish something a little more in depth onto the developers list. Perhaps it's best that you speak to us on irc, so you can get some responsive feedback about what we are doing. We also now have a MailManager developers list, which it would be helpful to move this discussion to. Hopefully we should see a lot more traffic on this list in the next few months month. Regards, Kevin -- Kevin Campbell GPG Key: F480EC23 Software Engineer kev@... Logical Progression Ltd I found the problem... after make install of psycopg-1.1.19, you will have the file = 'psycopgmodule.so' located in your zope root folder. That won't do, so you have to move the file to $ZOPE_HOME/lib/python2.1/lib-dynload/. Regards Petter -----Opprinnelig melding----- Fra: mailmanager-users-admin@... = [mailto:mailmanager-users-admin@...] P=E5 vegne av = Hans Petter J=F8rgensen Sendt: 22. august 2005 13:19 Til: mailmanager-users@... Emne: [Mailmanager-users] ZPsycopgDA Anyone seen this when importing the ZPsycopgDA to Zope? __________________________________________________________ Product at /Control_Panel/Products/ZPsycopgDA Import Traceback Traceback (most recent call last): File "/opt/zope/lib/python/OFS/Application.py", line 660, in = import_product product=3D__import__(pname, global_dict, global_dict, silly) File "/var/opt/zope/default/Products/ZPsycopgDA/__init__.py", line 92, = in ? import DA File "/var/opt/zope/default/Products/ZPsycopgDA/DA.py", line 93, in ? from db import DB File "/var/opt/zope/default/Products/ZPsycopgDA/db.py", line 100, in ? from psycopg import NUMBER, STRING, INTEGER, FLOAT, DATETIME ImportError: cannot import name INTEGER __________________________________________________________ I have compiled psycopg-1.1.19 Regards Hans Petter J=F8rgensen ------------------------------------------------------- SF.Net email is Sponsored by the Better Software Conference & EXPO September 19-22, 2005 * San Francisco, CA * Development Lifecycle = Practices Agile & Plan-Driven Development * Managing Projects & Teams * Testing & = QA Security * Process Improvement & Measurement * = _______________________________________________ Mailmanager-users mailing list Mailmanager-users@... hi folks, i''m interested in bringing forward my production/development environment from mm 2.0 to mm cvs trunk, but i'm not clear if this is a wise move, i'd like to do it because this installation is driving new feature and contributions for/from me, and i'd like to be able to contribute them upstream in an easily digestable useable? (i can deal with associated migration issues on my own, but fixing incomplete/broken code on the trunk is a serious issue), and is there a development roadmap for 2.1 around? thanks, -kapil I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/mailmanager/mailman/mailmanager-users/?viewmonth=200508&viewday=23
CC-MAIN-2017-04
refinedweb
836
55.95
[meta] Use mozilla::Result<T, E> for fallible return values in the JS engine RESOLVED FIXED Status () P3 normal People (Reporter: jorendorff, Assigned: jandem) Tracking (Blocks: 2 bugs, {meta, triage-deferred}) Firefox Tracking Flags (Not tracked) Details Attachments (2 attachments, 7 obsolete attachments) Currently we use nullptr and false to indicate most errors. This is OK in the normal case but we have a *lot* of places when error handling starts to get a little bit atypical, and then we have bugs. JS::ObjectOpResult is a whole thing with its own rules; CanGC/NoGC functions are another; Pure functions are another; AllocPolicies add another angle; DenseElementResult, jit::MethodStatus, and jit::JitExecStatus are more variations on the theme. TokenStream::flags::hadError is an example of a custom error-handling scheme that I'm not sure could possibly be correct. So the plan is to introduce a template, mozilla::Result<T, E>, that's just a tagged variant that holds either a success value of type T or an error value of type E. Where we currently return `bool`, we'll return `Result<Ok, JS::Error>`, where Ok and JS::Error are maybe dummy types for now. Where we currently return `JSScript*`, we can return `Result<JSScript*, JS::Error>`. Then, add some macros like `MOZ_TRY(expr)` to make it easy to use Result correctly. If we get lucky, Result<> can supplant most of the error-handling schemes we've got today. Long term, where I'd love to go with this is to delete all error-handling state from JSContext (throwing, unwrappedException_, overRecursed_, and probably propagatingForcedReturn_). Instead it will all be part of the JS::Error type, indicated in the type signature of all JSAPI functions. A nice side benefit here is that `Result` can be a MOZ_MUST_USE_TYPE (see Attributes.h), so we don't have to remember to add `MOZ_MUST_USE` to every bool return. The patch I've got right now may not be totally correct, but it's about -540 lines net of everything, mainly due to many individual hunks like this: > - if (!LookupNameUnqualified(cx, name, scopeChain, &scope)) > - return false; > + MOZ_TRY(LookupNameUnqualified(cx, name, scopeChain, &scope)); Each of these saves a line of code. What's left is more compact and I *think* reads better, though this patch leaves the codebase only slightly Result-ified, so it's a bit of a mishmash. There are already several types named Result in various places in the codebase, but I think we can live with it. Created attachment 8758883 [details] [diff] [review] errormageddon-wip-1.patch Assignee: nobody → jorendorff Very cool. In comment 0 I should of course have noted that Terrence came up with the whole idea. Another thing this might help with is the kind of bug where an allocation fails, but we return false/null without reporting the OOM. We can rule that out by making our non-reporting functions return a Result with a different error type. You wouldn't be able to convert freely between the two, you'd have to be explicit about it, and we'd have a method or a macro to make that easy. The sheer massiveness of this change gives me pause, but I am really having trouble coming up with reasonable arguments against it. It seems to be as fast, and the code is shorter. Places where the error handling is weird look visually weird, which... no, that's actually a good thing too. Other arguments against: There are too many different macros. MOZ_TRY(res), JS_TRY_OR_RETURN_FALSE(cx, res), JS_TRY_OR_RETURN_NULL(cx, res). And a separate assigning version of each, MOZ_TRY_VAR(var, res) etc. But once everything is converted, the JS_ versions go away: they are only to handle the boundaries between Result-using and non-Result-using code. The compiler error messages can be gross, because templates. But whatever, right? Created attachment 8759478 [details] [diff] [review] errormageddon-wip-2.patch New in this WIP: `MOZ_TRY_VAR(var, res)` and a few places that use it; a special `ReportedOOM` error type for functions that can't fail any other way. (I kind of like that you can see that in the type. If it turns out to be a pain, we could always rip it out and use JS::Error* throughout the engine.) Created attachment 8762840 [details] [diff] [review] errormageddon-wip-3.patch Comment on attachment 8758883 [details] [diff] [review] errormageddon-wip-1.patch Review of attachment 8758883 [details] [diff] [review]: ----------------------------------------------------------------- A few comments from a quick skim... In mfbt/Result.h you've used SpiderMonkey style instead of Gecko style :( The biggest problems: - Please use 2 space indents - Please use mFoo/aFoo var naming - Please use upper case letters for the first letter in function names ::: mfbt/Result.h @@ +81,5 @@ > + * Specialization for the result of allocation, when the success type is a > + * pointer and the error type is OutOfMemory. > + * > + * Result<T*, Ok> is guaranteed to be pointer-sized, and all zero bits on error. > + * Do not change this representation! Why not? (For the next specialization you say that JIT code depends on the representation. Is that true here too? Or is there another reason?) @@ +122,5 @@ > +{ > + E* err; > + > + public: > + /* DEPRECATED */ Explain that this is only needed during the transition period when Result is being introduced? Created attachment 8764933 [details] [diff] [review] errormageddon-wip-4.patch Now works in `-m32` Linux builds. (Browser still doesn't build in MSVC because of the pkix namespace collision -- apparently MSVC handles C++ namespaces differently in some weird corner cases involving templates and the Koenig lookup. Working on it.) Attachment #8758883 - Attachment is obsolete: true Attachment #8759478 - Attachment is obsolete: true Attachment #8762840 - Attachment is obsolete: true WIP 4 applies on top of rev 0ad7433c2159. The next step here is to get this running on Win32 and then see if it makes us any slower. Unfortunately, on Win32 the ABI for returning a struct is wonky (and not well documented). Terrence helped me with this last night. Here is code, generated by MSVC, that calls a Result<>-returning function: Here is the function that it's calling: Unfortunately it looks like the ABI uses EAX, which we use as a scratch register whenever we call a function. And here's the current code we're emitting, which works on Linux gcc -m32. Unfortunately that's a different ABI. :-( I can look at the Win32 thing. Flags: needinfo?(jdemooij) (In reply to Jason Orendorff [:jorendorff] from comment #11) > Unfortunately it looks like the ABI uses EAX, which we use as a scratch > register whenever we call a function. Hm I don't think the ABI uses EAX, but it is weird we use EAX as scratch. I'll post a patch to fix that and then I can test this on Win32. Created attachment 8766345 [details] [diff] [review] Fix 32-bit OS X and Windows With this patch applied on top of WIP 4, jit-tests pass on Win32 and OS X 32-bit. There are 3 different conventions for this on x86: * OS X: Result<> is returned in eax \o/ * Linux (System V): pointer to stack memory is passed, callee discards this pointer. * Windows: also passes a pointer to stack memory, but caller discards it (like all other arguments). Flags: needinfo?(jdemooij) Testing performance on windows 10 with 32bit builds, we have: > Run1 Run2 Run3 Avg > Before: 27660 28165 28113 27979 > After: 27879 27946 28150 27991 So no discernible performance difference. Thank you both. This is excellent news. Comment on attachment 8764933 [details] [diff] [review] errormageddon-wip-4.patch Review of attachment 8764933 [details] [diff] [review]: ----------------------------------------------------------------- ::: mfbt/Result.h @@ +76,5 @@ > + * the error type is a pointer. > + * > + * In this case, Result<T, E*> is guaranteed to be pointer-sized, and all zero > + * bits on success. Do not change this representation! There is JIT code that > + * depends on it. Can you static_assert that sizeof this Result type is pointer-sized? @@ +92,5 @@ > + explicit ResultImplementation(bool ok) > + : mErrorValue(reinterpret_cast<E*>(uintptr_t(!ok))) {} > + > + explicit ResultImplementation(Ok) : mErrorValue(nullptr) {} > + explicit ResultImplementation(E* aErrorValue) : mErrorValue(aErrorValue) {} Should you assert aErrorValue is not null here? Or will the IsNull asserts in Result's ctor prevent that case here? Why are there ctor asserts in the Result class but not the ResultImplementation detail classes when the unwrap() asserts are in the ResultImplementation detail classes but not the Result class? Should the asserts be everywhere (belt and suspenders) or just the public Result class? @@ +97,5 @@ > + > + bool isOk() const { return mErrorValue == nullptr; } > + > + Ok unwrap() const { > + MOZ_ASSERT(isOk()); In addition to asserting isOk(), can you assert that the caller actually checked isOk()? Or would that be too onerous for callers that *know* the value is always Ok or error? Just asserting isOk() would only catch misuse when a (possibly rare) error is unwrapped in a debug build, but asserting that isOk() has already been checked would always detect that misuse of Result. @@ +192,5 @@ > + * (The rule is for efficiency reasons.) > + */ > + explicit Result(E aErrorValue) : mImpl(aErrorValue) { > + MOZ_ASSERT(!detail::IsNull(aErrorValue)); > + MOZ_ASSERT(isErr()); You might consider upgrading some of these asserts to MOZ_DIAGNOSTIC_ASSERT so they are checked in opt builds of Nightly and Aurora. Jason, any progress here? Flags: needinfo?(jorendorff) Keywords: meta Flags: needinfo?(nicolas.b.pierron) Created attachment 8802571 [details] [diff] [review] Rebased patch Here's a rebased patch. Applies to revision bc91be30f2aa and passes shell tests on OS X 32-bit and 64-bit. jorendorff, mind if I work on getting this landed or do you want to finish it? Attachment #8764933 - Attachment is obsolete: true Attachment #8766345 - Attachment is obsolete: true Attachment #8802571 - Flags: feedback?(jorendorff) Created attachment 8802891 [details] [diff] [review] Rebased This fixes a bunch of issues. Android linker weirdness, and JIT changes for Win64 and the ARM simulator. Try should be green now. Passing types like Result<> to JIT code sucks unfortunately, every platform has its own calling convention. Attachment #8802571 - Attachment is obsolete: true Attachment #8802571 - Flags: feedback?(jorendorff) According to emilio, having this in any functions we need to create Rust bindings for would be a problem right now. Mostly, that means it'd be great if we could hold off on using it in any JSAPI functions or the public headers. There might be a few additional places where it'd be problematic, though, because the Rust bindings reimplement various (mostly rooting-related) functions. Not sure about that, though. Presumably, impl specialization () will make it much easier to support this in rust-bindgen. Not sure if there are feasible solutions for that right now. Thanks for taking this, Jan. Till, we can get most of the benefit from this without changing the JSAPI. That said, we do want to change the API eventually because experience suggests that downstream code rarely handles JSAPI errors correctly. So it'd be good to know what the problem is exactly. C++ mozilla::Result isn't meant to be layout-compatible with Rust's std::result::Result, but it seems like the sort of type-translation work that bindgen would automate. (In particular: it seems like a "dumb" problem with a "dumb" solution, not the sort of case where you'd want to block on a powerful new Rust language feature. More likely, I've not understood the problem.) Flags: needinfo?(jorendorff) The problems are, unsurprisingly, related to templates, which work just differently enough in C++ and Rust for the bindings to be non-trivial. I do think that using mozilla::Result in JSAPI makes a lot of sense, and agree that it certainly shouldn't be blocked by Rust language features. (And I'm not even really sure that impl specialization would help substantially.) Even if it turns out that we really can't feasibly support mozilla::Result, I think my concerns are probably better addressed by introducing a wrapper that we *can* bind to and ignoring the unsupported items in jsapi.h. Needinfo fitzgen for maybe slightly more elaboration on what's problematic here. Flags: needinfo?(nfitzgerald) So the main problem is that Rust doesn't have the concept of struct specialization, so creating a generic Result<T, E> that can be layout compatible with the C++ implementation automatically for every T and E is pretty hard, both manually, and even more automatically. impl specialization may help if we parameterize in terms of a trait that could define different types for the members, but that's extremely hard to do automatically, and I don't know: * How smart is rustc passing repr(C) types with template parameters and zero-sized types (presumably not a huge deal because we do that in Stylo). * How smart both the C++ and Rust's compiler are to handle passing proper return values with templates. We've had problems with that in stylo, where gcc and clang only agreed with rustc if the return value didn't have a destructor. Assuming the last point is not a problem (we might be lucky, and if the types don't have destructors I seem to recall that there weren't huge problems), and that we get proper specialization, it might be doable.. Oh, other solution to that problem is getting cross-language LTO, and adding a proper wrapper to the C++ API that Rust can use, but I'm not too confident that's going to happen really soon. I don't have anything to add on top of what Emilio has said. I want to reiterate that generating Rust bindings should not interfere with improving SpiderMonkey/JSAPI. If need be, we can make a separate header with a C API for use only by bindgen that dumbs down everything. Flags: needinfo?(nfitzgerald) Assignee: jorendorff → jdemooij Created attachment 8808552 [details] [diff] [review] Rebased Rebased to revision d370d74e76d7 on top of the new mozilla::Result<> patch. In particular, the error type is now a reference instead of a pointer, per Waldo's suggestion. Attachment #8802891 - Attachment is obsolete: true (In reply to Emilio Cobos Álvarez [:emilio] from comment #27) >. I think throughout SM we will prefer to use a typedef'd Result over a bare Result<>. I don't immediately see a reason why we wouldn't/couldn't do this on the public API or even make it a requirement on the public API. Created attachment 8812769 [details] [diff] [review] Part 1 - Add JS::Result This adds JS::Result and uses it in a few places. It's a bit awkward because most code hasn't been converted yet, so we have to convert back and forth. js::CheckPropertyDescriptorAccessors shows what this will look like though. Unlike the mega patch, this one doesn't have the JIT integration bits. I also renamed JS::ReportedOOM in the big patch to JS::OOM, I hope the shorter name will encourage people to use it instead of the more generic JS::Error. Flags: needinfo?(nicolas.b.pierron) Attachment #8812769 - Flags: review?(luke) Comment on attachment 8812769 [details] [diff] [review] Part 1 - Add JS::Result Review of attachment 8812769 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/public/Result.h @@ +66,5 @@ > + * JS_TRY_OR_RETURN_FALSE(cx, DefenestrateObject(cx, obj)); > + * JS_TRY_VAR_OR_RETURN_FALSE(cx, v, GetObjectThrug(cx, obj)); > + * > + * JS_TRY_OR_RETURN_NULL(cx, DefenestrateObject(cx, obj)); > + * JS_TRY_VAR_OR_RETURN_NULL(cx, v, GetObjectThrug(cx, obj)); Initially I was confused by the "OR" (we're *always* trying, and only sometimes returning false/null), but than I realized that we're probably referring to the short-circuiting behavior of ||. Given the importance of terseness for what is intended to be such a common utility, this makes sense; just recording my reasoning here. @@ +109,5 @@ > + * value and a saved stack). > + * > + * The long-term plan is to remove JS_IsExceptionPending and > + * JS_GetPendingException in favor of JS::Error. Exception state will no longer > + * exist.. ::: js/src/jscntxt.h @@ +324,5 @@ > + inline JS::Result<> boolToResult(bool ok); > + > + /** > + * Intentionally awkward signpost method that is stationed on the > + * boundary between Result-using and non-Result-using code. Why do we put this on the cx; is the intention to later do something like asserting an invariants that actually involve cx/rt? ::: js/src/jsobj.h @@ +182,5 @@ > * Make a non-array object with the specified initial state. This method > * takes ownership of any extantSlots it is passed. > */ > + static inline JS::Result<JSObject*, JS::OOM&> create(js::ExclusiveContext* cx, > + js::gc::AllocKind kind, nit: column alignment But actually,? Just a drive by question. Shouldn't we create a typedef for "Result<...>" to remove those <> and make it more clear? (In reply to Luke Wagner [:luke] from comment #33) >. We could mark js::Result<V, E> as a GC type for the static analysis (we probably need to do this anyway for the case where V is a GC type like JSObject*, not sure the analysis is smart enough to figure that out since Result<> may use uintptr_t + pointer tagging to store it). Then we can js_new<JS::Error> and store GC pointers in it (say JS::Value for the exception value) without worrying about GC hazards. We could maintain a Vector of allocated JS::Error allocations and free all of them on GC (and of course we could add a separate class to "root" a JS::Error across GC calls for the few places that need to do this). Because we don't want to allocate on OOM, the JS::OOM instance could stay as static singleton like in this patch (similar to the "out of memory" constant atom we throw now). Does something like that seem reasonable? > Why do we put this on the cx; is the intention to later do something like > asserting an invariants that actually involve cx/rt? Since the exception state is on the cx currently, it doesn't seem unreasonable to put these methods there as well. This is intended to be temporary anyway, while we convert code. I'd be happy to change it to a separate function that takes a cx, though. Flags: needinfo?(luke) (In reply to Hannes Verschore [:h4writer] from comment #34) > Just a drive by question. > > Shouldn't we create a typedef for "Result<...>" to remove those <> and make > it more clear? What would we call this typedef? Since it will be used everywhere, it should be something short... I'm also not sure it's clearer, Result<> makes it obvious that it's similar to, say, Result<JSObject*>, which seems useful. A typedef is going to obscure more than it helps, especially when you consider there are two separate axes to Result. Revealing similarity as comment 36 notes is also an excellent reason to not typedef (and goes for other typedefs as well). (In reply to Luke Wagner [:luke] from comment #33) >? I'm pretty sure this "new" style is already haphazardly adopted in numerous places for member function signatures that grow particularly unwieldy. Doesn't seem to me it's explicitly sanctioned (in either sense), so no reason anyone shouldn't use it. (In reply to Jan de Mooij [:jandem] from comment #35) > Then we can js_new<JS::Error> and store GC pointers in it (say JS::Value for > the exception value) without worrying about GC hazards. We could maintain a > Vector of allocated JS::Error object and free all of them on GC Hm, maybe we don't even need this Vector if we treat any function that can dynamically allocate JS::Error (so excluding JS::OOM!) as a GC function. This way, the hazard analysis will complain whenever a JS::Result is used across a call that can throw a different JS::Result, other than OOM. (In reply to Jan de Mooij [:jandem] from comment #35) Ah, ok, that sounds like a good plan w.r.t allocated error objects. Mostly I was just a little weirded out by seeing the & in the Result types (& in stack objects returned by value sets off spider-senses for obvious reasons), but it seems like that's what we need to do for performance reasons. > > Why do we put this on the cx; is the intention to later do something like > > asserting an invariants that actually involve cx/rt? > > Since the exception state is on the cx currently, it doesn't seem > unreasonable to put these methods there as well. Ah, I was asking about JSContext::resultTo*, which don't currently seem to use the JSContext's exception state, and whether later they would. If they do, then member functions seems great as you've done. Flags: needinfo?(luke) (In reply to Jeff Walden [:Waldo] (remove +bmo to email) from comment #37) > I'm pretty sure this "new" style is already haphazardly adopted in numerous > places for member function signatures that grow particularly unwieldy. > Doesn't seem to me it's explicitly sanctioned (in either sense), so no > reason anyone shouldn't use it. Well then perhaps we could just start off by setting the style precedent in these initial landings? These things tend to propagate. We could do that. I confess that I'm not sure I care that much -- anything that's long enough lines is probably going to be wrapped as early as possible even if there's no requirement in place. (In reply to Jeff Walden [:Waldo] (remove +bmo to email) from comment #41) Great! I've seen Gecko code that shows late wrapping, which is why I asked in the first place. Comment on attachment 8812769 [details] [diff] [review] Part 1 - Add JS::Result Review of attachment 8812769 [details] [diff] [review]: ----------------------------------------------------------------- Any, r+ with the suggestion of wrapping long declarations after the return value as discussed. Attachment #8812769 - Flags: review?(luke) → review+ Pushed by [email protected]: part 1 - Add JS::Result<> and use it in a few places. r=luke Keywords: leave-open Steve, can you help me figure out how to make JS::Result<> a GC type, for the static analysis? We want this for 2 reasons: (1) JS::Result<JSObject*, Error&> will be a rooting hazard when unwrap()ped across a GC. (2) Eventually, JS::Error will store the exception JS::Value and we don't want rooting hazards when that happens. Furthermore, not holding JS::Error across a GC will simplify things when we dynamically allocate JS::Error instances (see comment 35). I know about JS_HAZ_GC_POINTER, but JS::Result is not a class/struct: template <typename V = Ok, typename E = Error&> using Result = mozilla::Result<V, E>; The hazard analysis seems to ignore the JS_HAZ_GC_POINTER attribute here. Worst case we can apply the attribute to mozilla::Result or make JS::Result inherit from mozilla::Result (it's |final| though), but it's not great. Flags: needinfo?(sphink) If the analysis sees mozilla::Result and JS::Result as the same type, we could also do some pattern matching on the template parameters maybe? But it's pretty annoying. (In reply to Jan de Mooij [:jandem] from comment #46) > If the analysis sees mozilla::Result and JS::Result as the same type, we > could also do some pattern matching on the template parameters maybe? But > it's pretty annoying. Looking at this, it seems like the nicest way is for it to figure out whether Result<T,E> is a GC pointer from the template parameters T and E. But as you say, the raw storage prevents that from happening now. But it appears that mozilla::Result already has to handle this for other static analyses -- it stores its contents as a mozilla::Variant, which stores them as raw but uses the attribute MOZ_INHERIT_TYPE_ANNOTATIONS_FROM_TEMPLATE_ARGS to tell it to do the right thing. Filing a bug to implement. Filed bug 1321014. (In reply to sfink from comment #48) > it stores its contents as a mozilla::Variant, which stores them > as raw but uses the attribute > MOZ_INHERIT_TYPE_ANNOTATIONS_FROM_TEMPLATE_ARGS to tell it to do the right > thing. mozilla::Result *may* use mozilla::Variant, but it also has some more optimized ResultImplementation's. We should probably apply MOZ_INHERIT_TYPE_ANNOTATIONS_FROM_TEMPLATE_ARGS to mozilla::Result as well... Thanks for looking at this! Jan, does this bug still need to be open? Flags: needinfo?(jdemooij) (In reply to Nicholas Nethercote [:njn] from comment #51) > Jan, does this bug still need to be open? Not sure. I thought of it more as a meta-bug that could be kept open as long as there's more bool -> Result work to be done, but I did land patches here so we should mark it fixed (for FF 53). There's also a needinfo? sfink though so let's wait for that. Flags: needinfo?(jdemooij) Keywords: triage-deferred Priority: -- → P3 (In reply to Nicholas Nethercote [:njn] from comment #51) > Jan, does this bug still need to be open? I think we should close this now because we did land patches so it's confusing to keep it open, and JS::Result<> is now a thing. Maybe we could file a new errormageddon meta bug. Status: NEW → RESOLVED Last Resolved: 9 months ago Resolution: --- → FIXED Keywords: leave-open (In reply to Jan de Mooij [:jandem] from comment #52) > There's also a needinfo? sfink though so let's wait for that. There's work I need to do for this in bug 1321014, but that's probably not a good reason to keep this particular bug alive. Flags: needinfo?(sphink)
https://bugzilla.mozilla.org/show_bug.cgi?id=1277368
CC-MAIN-2019-04
refinedweb
4,179
62.58
Recursion oddity I've been having a problem with a recursive factorization algorithm I've been writing in Sage code. The problem is reproduced on my machine using the code below: def gcd_recur(num): if(num <= 1): return g = gcd(15,num) print "GCD of 15 and {0}: {1}".format(num, g) gcd_recur(num/2) gcd_recur(50) The result I'm getting is: GCD of 15 and 50: 5 GCD of 15 and 25: 1 GCD of 15 and 25/2: 1 GCD of 15 and 25/4: 1 GCD of 15 and 25/8: 1 GCD of 15 and 25/16: 1 However, I can run the code below outside of the algorithm and receive the correct result: gcd(15, 25) 5 Are there some variable scoping issues that I'm not seeing here? Thanks. Can you be more clear as to what the answer that you want to get from gcd_recur? The code you posted is working correctly as it is written.
https://ask.sagemath.org/question/8468/recursion-oddity/?answer=12894
CC-MAIN-2020-24
refinedweb
164
67.22
public class GroovyRowResult extends GroovyObjectSupport Represents an extent of objects. It's primarily used by methods of Groovy's Sql class to return ResultSet data in map form; allowing access to the result of a SQL query by the name of the column, or by the column number. Checks if the result contains (ignoring case) the given key. key- the property name to look for Find the property value for the given name (ignoring case). property- the name of the property to get Retrieve the value of the property by its index. A negative index will count backwards from the last column. index- is the number of the column to look at Retrieve the value of the property by its (case-insensitive) name. property- is the name of the property to look at Associates the specified value with the specified property name in this result. key- the property name for the result value- the property value for the result Copies all of the mappings from the specified map to this result. If the map contains different case versions of the same (case-insensitive) key only the last (according to the natural ordering of the supplied map) will remain after the putAll method has returned. t- the mappings to store in this result
http://docs.groovy-lang.org/docs/next/html/gapi/groovy/sql/GroovyRowResult.html
CC-MAIN-2019-26
refinedweb
212
60.75
View nodes The Amazon EKS console shows information about all of your cluster's nodes, including Amazon EKS managed nodes, self-managed nodes, and Fargate. Nodes represent the compute resources provisioned for your cluster from the perspective of the Kubernetes API. For more information, see Nodes Prerequisites The IAM user or IAM role that you sign into the AWS Management Console with must meet the following requirements. Have the eks:AccessKubernetesApiand other necessary IAM permissions to view nodes attached to it. For an example IAM policy, see View nodes and workloads for all clusters in the AWS Management Console . Is mapped to Kubernetes user or group in the aws-auth configmap. For more information, see Managing users or IAM roles for your cluster. The Kubernetes user or group that the IAM user or role is mapped to in the configmap must be bound to a Kubernetes roleor clusterrolethat has permissions to view the resources in the namespaces that you want to view. For more information, see Using RBAC Authorization in the Kubernetes documentation. You can download the following example manifests that create a clusterroleand clusterrolebindingor a roleand rolebinding: View Kubernetes resources in all namespaces – The group name in the file is eks-console-dashboard-full. View Kubernetes resources in a specific namespace – The namespace in this file is default, so if you want to specify a different namespace, edit the file before applying it to your cluster. The group name in the file is eks-console-dashboard-restricted. To view nodes using the AWS Management Console Open the Amazon EKS console at . In the left navigation panel, select Clusters, and then in the Clusters list, select the cluster that you want to view compute resources for. On the Overview tab, you see a list of all compute Nodes for your cluster and the nodes' status. Important If you can't see any Nodes on the Overview tab, or you see a Your current user or role does not have access to Kubernetes objects on this EKS cluster error, see the prerequisites for this topic. If you don't resolve the issue, you can still view and manage your Amazon EKS cluster on the Configuration tab, but you won't see self-managed nodes or some of the information that you see for managed nodes and Fargate under Nodes. Note Each pod that runs on Fargate is registered as a separate Kubernetes node within the cluster. This is because Fargate runs each pod in an isolated compute environment and independently connects to the cluster control plane. For more information, see AWS Fargate. In the Nodes list, you see a list of all of the managed, self-managed, and Fargate nodes for your cluster. Selecting the link for one of the nodes provides the following information about the node: The Amazon EC2 Instance type, Kernel version, Kubelet version, Container runtime, OS and OS image for managed and self-managed nodes. Deep links to the Amazon EC2 console and the Amazon EKS managed node group (if applicable) for the node. The Resource allocation, which shows baseline and allocatable capacity for the node. Conditions describe the current operational status of the node. This is useful information for troubleshooting issues on the node. Conditions are reported back to the Kubernetes control plane by the Kubernetes agent kubeletthat runs locally on each node. For more information, see kubelet in the Kubernetes documentation. Conditions on the node are always reported as part of the node detail and the Status of each condition along with its Message indicates the health of the node for that condition. The following common conditions are reported for a node: Ready – This condition is TRUE if the node is healthy and can accept pods. The condition is FALSE if the node is not ready and will not accept pods. UNKNOWN indicates that the Kubernetes control plane has not recently received a heartbeat signal from the node. The heartbeat timeout period is set to the Kubernetes default of 40 seconds for Amazon EKS clusters. Memory pressure – This condition is FALSE under normal operation and TRUE if node memory is low. Disk pressure – This condition is FALSE under normal operation and TRUE if disk capacity for the node is low. PID pressure – This condition is FALSE under normal operation and TRUE if there are too many processes running on the node. On the node, each container runs as a process with a unique Process ID, or PID. NetworkUnavailable – This condition is FALSE, or not present, under normal operation. If TRUE, the network for the node is not properly configured. The Kubernetes Labels and Annotations assigned to the node. These could have been assigned by you, by Kubernetes, or by the Amazon EKS API when the node was created. These values can be used by your workloads for scheduling pods.
https://docs.aws.amazon.com/eks/latest/userguide/view-nodes.html
CC-MAIN-2021-17
refinedweb
804
60.85
How will i make a program that will read a line of text and output the number of occurance each letter. Assume that the input characters must consist entirely of letters, whitespaces, commas, and period. Printable View How will i make a program that will read a line of text and output the number of occurance each letter. Assume that the input characters must consist entirely of letters, whitespaces, commas, and period. >>How will i make a program... ... do some research! Find out how to getline from a user. Find out how to loop. Find out how to compare two letters. Write something, and post it here when you're stuck. >How will i make a program that will read a line of text and output the number of occurance each letter. Of course, if this is a homework problem that you were hoping we would write for you, the above code is useless. But it may give you some ideas.Of course, if this is a homework problem that you were hoping we would write for you, the above code is useless. But it may give you some ideas.Code: #include <iostream> #include <map> int main() { std::map<const char, int> frequency; std::map<const char, int>::iterator it; char input; while ( std::cin.get ( input ) ) { it = frequency.find(input); if ( it == frequency.end() ) frequency[input] = 1; else it->second++; } for ( it = frequency.begin(); it != frequency.end(); ++it ) std::cout<< it->first <<": "<< it->second <<std::endl; return 0; } Toodles! :) -Prelude can u give me a simplier code or program. my professor kinda new. he don't like complicated stuff which will scratch his head for week reading that code. thanks for the reply goddess Prelude and i'm kinda stuck >my professor kinda new. That was the point entirely. We don't write your homework for you, that defeats the purpose of it. >he don't like complicated stuff which will scratch his head for week reading that code. This sounds like you need a new professor. If your instructor can't figure out a simple frequency check with (very) simple STL usage, she/he shouldn't be teaching C++. Here is a hint for a simple implementation that may work for you. Build two arrays, one that holds all of the letters being input and one that holds the frequency of each letter. The two array indices should correspond: char letters[256]; int frequency[256]; if letters[5] == 'a' then frequency[5] == number of a's When you read a letter, search the letters array from start to finish. If the letter already exists then add one to the number in the corresponding frequency index. If the letter does not exist then add it to the end of the array. This is a messy and inelegant solution and I probably completely confused you, but it is still a solution. :) -Prelude thanks prelude. and i think i need a new professor. i'll work it out from your hint it's a great help. till again I would suspect very few c++ profesors would be able to say for certain if the code is absoutly correct since most professors arn't up to date with the latest libraries:) Try to understand this sort Dear Prelude here's wat i have came up Try this on for size: My C++ teacher doesn't even know what a class is! (but this is just a highschool and he's the only one who is willing to teach it. And he learned it specifically for the class, I believe) You are right not all teachers know C++ and all the benefits of using the STL. Beside they have a lot of non-certified teachers teaching programming in high school (C++ & Java). Those that do (most) teach towards the AP CS exam. Mr. C. >Dear Prelude here's wat i have came up Have you tried to compile this? If not (most likely) then you should, because there are plenty of syntax errors that will keep the program from working properly. As for the logic, it needs work. Here is pseudocode: -Prelude-PreludeCode: begin create input = 0 // Single character create letters[256] create frequency[256]; repeat i = 0 frequency[i] = 0 i = i + 1; until i >= 256 print "Enter Your Text Followed By EOF: " create letter_count = 0; while letter_count < 256 read input if input == EOF break; create exists = false; if input is letter // Process letters only repeat i = 0 // If input has already been read, increment the freqency if letters[i] == input frequency[i] = frequency[i] + 1; exists = true; break; endif until i >= letter_count // If input was not already processed, add // it and set the frequency to one if exists != true letters[letter_count] = input; frequency[letter_count] = 1; letter_count = letter_count + 1; endif endif loop repeat walk = 0 print letters[walk] + ": " + frequency[walk] + "\n"; until walk >= letter_count end
http://cboard.cprogramming.com/cplusplus-programming/25523-string-array-printable-thread.html
CC-MAIN-2015-32
refinedweb
813
72.26
acl_set_tag_type() Set the tag type of an ACL entry Synopsis: #include <sys/acl.h> int acl_set_tag_type( acl_entry_t entry_d, acl_tag_t tag_type ); Arguments: - entry_d - The descriptor of the ACL entry whose type you want to set. - tag_type - The type that you want to assign to the entry; one of the following: - ACL_GROUP — a named group. - ACL_GROUP_OBJ — the owning group. - ACL_MASK — the maximum permissions allowed for named users, named groups, and the owning group. - ACL_OTHER — users whose process attributes don't match any other ACL entry; the world. - ACL_USER — named users. - ACL_USER_OBJ — the owning user. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The acl.
http://developer.blackberry.com/native/reference/bb10/com.qnx.doc.neutrino.lib_ref/topic/a/acl_set_tag_type.html
CC-MAIN-2013-20
refinedweb
115
69.58
This post borrows from a code example found in Programming Python: Powerful Object-Oriented Programming that demonstrates collecting command line arguments, opening a file, reading the file, and passing a function as a callback to another function. Code Here is the entire script that accepts a file as a command line argument and prints the contents of the file to the console. def scanner(name, func): # Open the file (with statement ensures closure even if there is an exception) with open(name, 'r') as f: # Iterate through the file for line in f: # Call our callback function func(line) if __name__ == '__main__': import sys name = sys.argv[1] # This is a function we are passing to scanner # Python has first class functions which can be # get passed as arguments to other functions def print_line(str): print(str, end='') # Call the scanner function, which in turn # calls the print_line function for each line # in the file scanner(name, print_line) Command Line Arguments The first concept covered in this script is processing command line arguments. Python requires us to import the sys module (line 12) which maintains an argv property. The argv property is a list-like object that contains all of the command line arguments used to hold all of the command line parameters. The first index [0] is the name of the script, followed by all of the other arguments supplied to the program. On line 13, we grab the target file (stored in argv[1]) and keep it in a name variable. At this point, our program knows which file to the open later on when we use the scanner function. First Class Functions Python treats functions as objects. As such, we can define any function in a Python program and store it in a variable just like anything else. Lines 18-19 define a print_line function that accepts a String parameter. On line 24, print_line is the second argument to the scanner function. Once inside of the scanner function, the print_line function is referenced by the variable func. On line 9, we call print_line with the func(line) rather than print_line(line). This works because func and print_line both refer to the same function object in memory. Passing functions in this fashion is incredibly powerful because it allows the scanner function to accept different behaviors for each line it processes. For example, we could define a function the writes each line processed by scanner to a file rather than printing it to the console. Later on, we may choose to write another function that sends each line over the network via network sockets. The beauty of the scanner function as defined is that it works the same regardless of the callback function passed to the func argument. This programming technique is sometimes known as programming to a behavior. Opening and Reading Files The final topic covered is opening and reading a file. Line 5 in the script uses the with statement combined with the open function to actually open the file in read mode. The as f assigns the result of the open function to the variable f. The f variable holds a Python file object. Since Python file objects support the iterator protocol, they can be used in for loops. On line 7, we read through each line in the file with the statement for line in f:. On each execution of the loop, the line variable is updated with the next line in the file. When the loop is complete, the with statement calls the file’s close() method automatically, even if there is an exception. Of course, Python’s garabage collection will also ensure a file is closed, but this pattern provides an extra level of safety, especially since there are a variety of Python interpretors that may act differently than the CPython. Conclusion The most powerful take away from this example if the first class functions. Python treats functions like any other data type. This allows functions to be stored as passed around the program as required. Using first class functions keeps code loosely coupled and highly maintanable! Sources Lutz, Mark. Programming Python. Beijing, OReilly, 2013.
https://stonesoupprogramming.com/2017/08/24/python-line-scanner/
CC-MAIN-2021-31
refinedweb
692
69.31
I am using a sklearn for the multi-classification task. I need to split data into train_set and test_set. I want to take randomly the same sample number from each class. Actually, I am using this function X_train, X_test, y_train, y_test = cross_validation.train_test_split(Data, Target, test_size=0.3, random_state=0) but it gives an unbalanced dataset! Any suggestion. You can simply use the train test split method available in scikit learn: For example: #import classfrom sklearn.model_selection import train_test_split#assign variablesX_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42) #import class from sklearn.model_selection import train_test_split #assign variables X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42) Hope this answer helps. If you wish to learn more about scikit learn visit this Scikit Learn Tutorial.
https://intellipaat.com/community/9488/how-to-split-data-on-balanced-training-set-and-test-set-on-sklearn
CC-MAIN-2020-05
refinedweb
131
53.17
perlmeditation Jenda <p>This is a continuation to my previous [id://576872] node. The first version of the module, for now without support for namespaces and processing instructions (and comments - you might want to process even those in some cases) may be found at [ site].</p> <p>A few weeks ago someone compared XML to Lisp. I can't find the node now, but it was something about how you could just transform the XML to Lisp and let it execute to produce the result you need. This forced me to think ... if we can transform the XML to Lisp, we can just as well transform it to Perl: <code> <root> <foo x="5"> <bar>hello</bar> <baz>world</baz> </foo> </root> => root( foo( x => 5, bar("hello"), baz("world") ) ) </code> but the question is whether we would gain anything. Of course it would be silly to convert the whole XML to Perl code and then eval("") it, we can instead execute the subroutines as we parse the closing tag and just remember the results so that we could pass them to the subroutine for the parent tag.</p> <p>I tried to come up with a few examples of things I might want to do with a XML and tried to implement them in this style and I like the results. Maybe it's my functional programming affected brain, but I find this style convenient.</p> <readmore> <code> $xml = <<'*END*'; > <street>Grant's st.</street> <city>New Creek</city> <country>Canada</country> <bogus>sdrysdfgtyh degtrhy <foo>degtrhy werthy</foo>werthy drthyu</bogus> </address> <phones> <phone type="office">663-486-7891</phone> </phones> </person> </doc> *END* %rules = ( _default => 'content', bogus => undef, # means "ignore" address => sub {address => "$_[1]->{street}, $_[1]->{city} ($_[1]->{country})"}, person => sub { '@person' => "$_[1]->{lname}, $_[1]->{fname}\n<$_[1]->{email}>\n$_[1]->{address}" }, doc => sub { join "\n\n", @{$_[1]->{person}} }, ); # $parser->parse() will return a single string containin the addresses # in plain text format %rules = ( _default => 'content', # bogus => sub {}, # means "returns no value. The subtags ARE processed. bogus => undef, # means "ignore". The subtags ARE NOT processed. address => 'no content', person => 'no content array', doc => sub {$_[1]->{person}}, #'pass no content', foo => sub {print "FOOOOOOOO\n"}, ); # returns a simplified data structure kinda similar to XML::Simple my $parser = new XML::Rules ( rules => [ _default => sub {$_[0] => $_[1]->{_content}}, 'fname,lname' => sub {$_[0] => $_[1]->{_content}}, bogus => undef, address => sub {address => "$_[1]->{street}, $_[1]->{city} ($_[1]->{country})"}, }, ] ); # prints the addresses, returns nothing </code> As you can see the rules applied to the parsed tags are basicaly of two types. Either they specify what data gets passed to the parent tag's rule and how or they do something with the attributes of the tag and the data returned by the rules of subtags. You can of course do both in your rules. For example if the XML looked like this: <code> <phones> <phone type="office">663-486-7891</phone> </phones> </person> </doc> </code> You might use a subroutine like this for the <address> tag: <code> sub { if (exists $_[1]->{id} and $_[1]->{id}+0 > 0) { $get_addr->execute($_[1]->{id}); my $result = $get_addr->fetchall_arrayref(); my ($street, $sity, $country) = ( $result->[0][0], $result->[0][1], $result->[0][2]); return address => "$_[1]->{street}, $_[1]->{city} ($_[1]->{country})" } else { return address => "$_[1]->{street}, $_[1]->{city} ($_[1]->{country})" } } </code> and proceed as if the data was directly in the XML in all cases. </readmore> <p>Let me know please what you think. I'd also be grateful for any suggestions regarding the support for XML namespaces.</p> <p><b>Update 2006-11-07:</b> I just uploaded an updated version of the module, with more tests and "start tag" rules allowing you to skip branches of XML if you can decide based on the tag's attribute that you do not need them. I also uploaded the module to [CPAN://XML::Rules|CPAN].<>
http://www.perlmonks.org/?displaytype=xml;node_id=581313
CC-MAIN-2016-50
refinedweb
671
62.92
Provided by: manpages-dev_4.04-2_all NAME DESCRIPTION. RETURN VALUE On success, getgroups() returns the number of supplementary group IDs. On error, -1 is returned, and errno is set appropriately. On success, setgroups() returns 0. (it does not have the CAP_SETGID capability). EPERM (since Linux 3.19) The use of setgroups() is denied in this user namespace. See the description of /proc/[pid]/setgroups in user_namespaces(7). CONFORMING TO SVr4, 4.3BSD. The getgroups() function is in POSIX.1-2001 and POSIX.1-2008. Since setgroups() requires privilege, it is not covered by POSIX.1. NOTES A process can have up to NGROUPS_MAX supplementary group IDs in addition to the effective group ID. The constant NGROUPS_MAX is defined in <limits.h>. The set of supplementary group IDs is inherited from the parent process, and preserved across an execve(2).groups()) 4.04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.ubuntu.com/manpages/xenial/man2/getgroups.2.html
CC-MAIN-2021-49
refinedweb
168
63.05
CFD Online Discussion Forums ( ) - FLUENT ( ) - - Splitting Volumes with Faces in Gambit ( ) An Modh Coinniolach March 13, 2002 11:27 Splitting Volumes with Faces in Gambit I need to split the volume around a blade with faces wrapped around said blade. I create the geometry (the solid and the faces)in Pro/E and import them into Gambit as STEP files. Having brought the faces in, I heal them and them try to split the volume using these faces (the faces extend a short distance further than the solid geometry, giving me a total cut through the domain). I need to end up with two real, connected volumes. Now, most times that I try this, the whole thing fails. The first three faces split satisfactorily (there are four altogether, upper surface, lower surface and the leading and trailing edges). The problem arises when I try to split the volume with the final face. I get an error saying that the Cellular Topology structure is corrupt and that the geometric model is corrupt. Generally I persevere and eventually I get it to work, but I spend a lot of time just stuck at this point and that is very unsatisfactory. Could anyone shed light on this problem, suggest any tricks that might alleviate my situation? It would be of enormous assistance to me. Phil March 13, 2002 12:08 Re: Splitting Volumes with Faces in Gambit I can't help your problem but am experiencing a similar problem as you, why do you import as step and not use the direct proE import? Is the the STEP geometry generally more robust? An Modh Coinniolach March 13, 2002 12:33 Re: Splitting Volumes with Faces in Gambit The main motivation behind me not using the direct Pro/E import is that you need to specify a start up command in Gambit for Pro/E and I have never bothered finding out how to do it because it is handier to just get Pro/E to spit out a STEP file and import that into Gambit. Actually it is possible that my release of Gambit cannot run Pro/E in the background anyway (Gambit 1.3 on Windows NT). As regards robustness, I have found that the STEP file import is ranked second in terms of ways to get geometry out of Pro/E and into Gambit. The full list being: 1. PROSAT Translator 2. STEP 3. Spatial IGES and Healer 4. Native IGES 5. Mesh 6. Optegra Visualizer 7. Direct (VRML) 8. STL The "direct import" option is ranked seventh. You could export the file from Pro/E as vrml and use somthing like import vrml "model.wrl" to import this into Gambit. You will end up with non-conformal faceted geometry in Gambit. I'm not sure what that means, but I don't like the sound of it one little bit. So for these reasons and more, STEP seems to be the best bet for Pro/E into Gambit. All that and I still can't get this thing to split reliably. So in short, "yes"! yangqing March 13, 2002 20:54 Re: Splitting Volumes with Faces in Gambit i had face this problem recently. but i create it in gambit, and the result is good. split option should be: volume "split with face" face(real) "connected" maybe it will be helpful for your case An Modh Coinniolach March 14, 2002 07:57 Re: Splitting Volumes with Faces in Gambit That is precisely what I am doing (split with face, real connected). Ages ago I was doing something where I would import the faces and then delete the faces (keeping the lower geometry). Then I would populate some of the remaining edges with vertices and then delete those edges, put splines through the vertices, getting my geometry back and then create faces by extruding my spline curves to regenerate my faces. That did not work particularly well either. I have been around the block with this and then some. Sundar March 14, 2002 14:19 Re: Splitting Volumes with Faces in Gambit Hi Conniolach, I am relatively new to this Pro-E Gambit Translator, but I would like to get more info on it. Till now I build my geometries in gambit and never had an issue. Becos of the integration with design team we will be importing Pro-e Geomtries, so It could be very helpfull if you can give me a bunch of check list, what to do and what not to do on Pro-e Gambit translator. Thanks Sundar An Modh Coinniolach March 15, 2002 06:02 Re: Splitting Volumes with Faces in Gambit Hello Sundar. As you will have read, I import my geometry from Pro/E using and intermediate file (STEP format). I have never used (and probably can't given my configuration Gambit 1.3 under WinNT) the direct Gambit/Pro/E interface. Since you asked, I have just tried importing geometry using a VRML intermediate file. This is a manual version of the direct import option. The volume came in satisfactorily, the faces came in as virtual geometry. I then tried to import a step file containing faces but I got an error saying "ACIS error 2302 : no current bulletin board exists". Interesting. In terms of "dos and don'ts", there is very little advice to give. Importing the STEP file produced by Pro/E (the later versions do this, I am running Pro/E 2001) is uneventful. If there were any advice I could give, it would be to do as little in Gambit as possible in terms of preparing the geometry, but you have been creating your geometry in Gambit up until now so I guess you will have no problem. You should have no problem getting your Pro/E geometry into Gambit, using STEP files is very reliable and easy. goyalnn May 2, 2011 22:27 17:47 .
http://www.cfd-online.com/Forums/fluent/29494-splitting-volumes-faces-gambit-print.html
CC-MAIN-2015-35
refinedweb
987
70.53
I have installed pyodbc in my machine. Also, I have viewed the solution for including python libraries in test complete and written below code, from os import sys sys.path.insert(0, "C:\\Users\\UserName\\AppData\\Local\\Programs\\Python\\Python37\\Lib\\site-packages") import pyodbc But still I am getting ModuleNotFoundError: No module named 'pyodbc'. So, I can understand the above solution will work only for including simple libraries(.py files). But for modules like pyodbc kindly suggest some solution. And also i know we can able to do through ADO object but, we are preferred much on pyodbc. Any help will be appreciated, thanks in advance. @naveens33 Note: please find the attached screenshot of the file location "C:\\Users\\UserName\\AppData\\Local\\Programs\\Python\\Python37\\Lib\\site-packages" Solved! Go to Solution. Hi @naveens33, What Python37 means in your path? If this is a python version (3.7), TestComplete supports Python 3.6.0. Please refer to Python - Specifics of Usage help topic for more information. View solution in original post
https://community.smartbear.com/t5/TestComplete-Desktop-Testing/Connecting-to-SQL-server-database-using-pyodbc-in-testcomplete/td-p/203630
CC-MAIN-2021-25
refinedweb
172
52.46
tag:blogger.com,1999:blog-33770488933722951562012-05-20T07:39:17.074-05:00Mayank Srivastava{ Developer; Web / Mobile enthusiast; Open source / Agile proponent; Amateur musician; }Mayank [email protected]:blogger.com,1999:blog-3377048893372295156.post-28807309907980410102012-03-04T03:39:00.001-06:002012-05-03T14:00:03.394-05:002012-05-03T14:00:03.394-05:00Installing Windows 8 Consumer Preview on Acer Iconia Tab W500While trying to install Windows 8 Consumer preview on my Acer Iconia Tab W500, I ran into a few issues. It was a little problematic because I had Windows 8 Developer preview installed on it. But, I was able to finally install it and here are the steps for anyone else struggling - Create a bootable flash drive - You would need a USB flash drive with at least 4GB of available space. <img src="" height="1" width="1"/>Mayank [email protected] Cheat Sheet–Tags & Event Handler AttributesHTML5 Cheat Sheet – Tags , Event Handler Attributes – made easy by InMotion Hosting. Tags - Event Handlers: The original post from InMotion hosting also includes a sheet for browser support but it seems to be outdated. To find out how your browser is doing and how do new and old, desktop and mobile browser support HTML5, please visit – HTML5Test.com <img src="" height="1" width="1"/>Mayank [email protected] ASP.NET MVC 3.0 Views in jQuery ui dialog & Posting MVC forms from jQuery ui dialog “I blogged on this topic in September 2010 and received a few mail about showing how to create new object (as opposed to update) and also an implementation of this in MVC 3.0 using Razor. While the concepts and explanation in the previous blogs still hold correct, here’s an implementation using MVC 3.0 and Razor with some improvements. My apologies for not posting this blog as sooner; also <img src="" height="1" width="1"/>Mayank [email protected] and solution from MVC 3.0 presentation Session 2 at SDC Meetup ChicagoIt was a privilege to present the second final session for ASP.NET MVC 3.0 @ SDC meetup, covering the design considerations around MVC. Many thanks to the event hosts and as always, the audience interaction in the end was the best part. Following are the slides and solution from the presentation - Download slides here – | – Download the demo solution here. Here are some if the links <img src="" height="1" width="1"/>Mayank [email protected] and solution from MVC 3.0 presentation Session 1 at Software Development Community (SDC) ChicagoIt was a great pleasure to present MVC 3.0 at Software Development Community (SDC) Chicago July 10th 2011. Thank to Joel Shaw, Michael Kappel and the organizing team. The audience interaction was great. I will try to accommodate demo for the previous session’s question in my the next session on August 7th. Following are the slides and solution from the presentation - Download slides here<img src="" height="1" width="1"/>Mayank [email protected] to ASP.NET MVC 3 @ Chicago .NET User Group It was a privilege to present at CNUG, thanks to the great audience, Keith Franklin and the organizing team. You can download the slides and the demo solution from my presentation. If you have any queries about the presentation / demo solution, feel free to send a mail or post a comment. I will get back as the earliest. <img src="" height="1" width="1"/>Mayank [email protected] Beez Neez of 2010 Here’s the list of things that “really” caught my attention in 2010. By really I mean, not just caught my attention but made me spend time in it and go – “That is so cool!” I finally rated these developments of 2010 base on #1 if they are entirely new (or how great are the 2010 features), #2 how much time I spent on it and #3 with how many people I went – “Hey did you hear about ….”<img src="" height="1" width="1"/>Mayank [email protected] is ASP.NET MVC - Why ASP.NET MVC?I recently gave a presentation within my organization, giving folks an introduction to the ASP.NET MVC framework and exploring what the 2.0 version has to offer. The response to the presentation was great, actually better than I expected. One of my colleagues also mentioned that there are many introductions to MVC out there but I drew the comparison between the MVC & Web Forms which kept <img src="" height="1" width="1"/>Mayank [email protected] dialog extender for Asp.Net MVC Forms “This post and the demo solution is still valid for MVC 2.o. However, a new version of this post using MVC 3.0 and Razor with minor improvements is available here.” Continuing the previous post, let’s go ahead and implement the dialog extender for MVC forms (post actions). The mechanism will be similar with some details added. In case of a for we need to post to the server. If the post <img src="" height="1" width="1"/>Mayank [email protected] dialog extender for Asp.Net MVC Views “This <img src="" height="1" width="1"/>Mayank [email protected] jQuery formula - Blocking UI and showing progress dialog across Ajax calls with Asp.Net MVC This goal here is to <img src="" height="1" width="1"/>Mayank [email protected] jQuery formula - Blocking UI and showing progress dialog across asynchronous postbacks with ASP.NET Web Forms.The goal here is to not allow user to interact with the application, at the same time showing the user a dialog saying “Wait, let me first finish what you just asked me to do!” The version of jQuery that I’m using is 1.4.1 (downloadable here) and the version of jQuery UI I’m using is 1.8.2(downloadable here). I’ll be using ASP.NET 4.0 Web forms but this demo is valid for pervious version with <img src="" height="1" width="1"/>Mayank [email protected] while implementing Ninject 2 in an ASP.NET MVC 2 Application - No parameterless constructor defined for this object.Ninject is a very cool open source Inversion of Control container and also has been promoted by some of the best in the industry. However, when I tried to implement Ninject V2 in my ASP.NET MVC 2 web application, I ran into the following error and spent a couple of frustrated hours going through blogs after blogs to make sure I’m implementing it right. So here’s the solution - Pull the <img src="" height="1" width="1"/>Mayank [email protected] temporary HTML UI between redirection and rending of web form.<img src="" height="1" width="1"/>Mayank [email protected] RAD Grid nested table view items.<img src="" height="1" width="1"/>Mayank [email protected] data access using ADO.NET – The callback approachA very good book by the authors that I admire and appreciate mentioned the use of a class called SQLAsyncResult to access data asynchronously using ADO.NET 2.0 via the callback approach. My attempt to try out an example failed as I couldn’t find the SQLAsyncResult class, not in the namespace, not in the MSDN library. However, I did find a post on ASP.NET forum about a dev’s unsuccessful hunt for <img src="" height="1" width="1"/>Mayank [email protected] Data Access using ADO.NETADO.NET 2.0 enabled the infrastructure required for accessing data in asynchronous mode. Three standard approaches available to achieve this are: The Poll approach, The Wait approach & The Callback approach. The approach to choose will depend upon the scenario. Between the Poll and Wait approaches, the Wait approach is better. It provides a great deal of flexibility and efficiency with a bit<img src="" height="1" width="1"/>Mayank [email protected]“A” for Asynchronous!For most parts, our software industry is driven by buzzwords. It becomes difficult to tell your Director or Manager – “I understand that it is very cool but it probably is not the best fit for our application.” Especially after he just attended a meeting where other of his designation/role have been repeatedly using that buzzword and flaunting how they have implemented it in their applications. “<img src="" height="1" width="1"/>Mayank [email protected] appropriate WCF binding for your application.WFC comes with multiple bindings out of the box, allowing us to customize each one of them. It also allows us to create Custom Bindings (using Windows SDK) and User-Defined Bindings (a custom class inherited form the Binding class that performs all steps needed to create bindings) Here I intend to discuss the basic bindings that come out of the box. We will not be going in details of each <img src="" height="1" width="1"/>Mayank [email protected] Java Script in Visual Studio Solution <img src="" height="1" width="1"/>Mayank [email protected] UI Dialog in jQuery Grid (jqGrid) with ASP.NET MVCThis post is in continuation with my previous post where we created an ASP.NET MVC with JQgrid. Also, as we will be using the same solution on which we worked in the last post. Versions used – jQuery – 1.3.2 jqGrid – jqGrid 3.5 ALFA 3. jQuery UI 1.7.1 The source code of the demonstration can be downloaded here. A jQuery Dialog box is just HTML markup tags that sit in DIV tag <img src="" height="1" width="1"/>Mayank [email protected] Grid plug-in (jqGrid) with ASP.NET MVCjqGrid is a great plug-in, making good use of jQuery. jqGrid can be downloaded here and the documentations can be found here. We will shortly see how to use this plug-in in ASP.NET MVC application. Versions used – jQuery – 1.3.2 jqGrid – jqGrid 3.5 ALFA 3. The source code of the demonstration can be downloaded here. To get started, we create a new ASP.NET MVC Web Application. <img src="" height="1" width="1"/>Mayank [email protected]
http://feeds.feedburner.com/AspnetLive
crawl-003
refinedweb
1,668
59.3
Fetch Data Using HTTP Client From Json-Server API in Angular In this video you will learn how to fetch data using http client service from Angular. To create a mock API from where we will fetch data we will use json-server. It's a nice npm package which helps us to create a mock API in a matter of all previous videos we worked only with data which where inside our Angular application. In the real project we fetch almost all data from the backend via API. Angular Angular project. We have all users inside app.component. Now we want to get them from API and remove the raw data. Normally logic with API we move to services but I want to keep this video simple so we will write code without services for now just inside our component. So what do we need to do? We want to fetch user data when our component is initialized. But we don't know anything about initialize yet. Here is how it looks like. ngOnInit(): void { console.log('component is inited') } As you see in browser we get a console log now when our app.component is initialized. Now we need to make an API call. For this we need to use a httpClient service from Angular and inject it inside our component. import { HttpClient } from '@angular/common/http'; export class AppComponent { constructor(private http: HttpClient) {} } So here we imported HttpClient service and injected it in constructor. To inject something inside our component we almost always use this notation: private something: Something. After this code http is available for us inside this. Now let's make our fetch ngOnInit(): void { this.http.get('').subscribe(res => { console.log('res', res) }) } In javascript we normally write fetch.then but here we have subscribe. And this is one more thing why Angular is more difficult than other frameworks. Because it uses inside RxJs library and you also need to learn it at some point. So RxJS is the streams implementation in javascript. So our data flows like a stream and we can change them or subscribe to them to get new data again and again. It is complicated but not in the case with simple API request. In this case it works exactly like .then in JavaScript. We just get a value from http.get inside subscribe and we can do something with it. Now let's look in browser because we will for sure get an error. NullInjectorError: R3InjectorError(AppModule)[HttpClient -> HttpClient -> HttpClient]: NullInjectorError: No provider for HttpClient! So here we get an error that we don't have a provider for HttpClient. Such errors may be really confusing to debug but here the problem is that we need first to import HttpClientModule in app.module. Because HttClient is the part of the module and we need to define it as a dependency. @NgModule({ ... imports: [..., HttpClientModule, ...] }) export class AppModule {} As you can see, now we don't have any errors and we get our data in console. As you can see it's an array of users which we can save in our users variable. users: UserInterface[] = [] ngOnInit(): void { this.http.get('').subscribe(res => { this.users = res }) } I also removed our users array and assigned it with empty array as an initial value. In this case there won't be type mismatch when first the value is underfined and after fetch API an array. Now there is also a Typescript error that we get back an Object from API. We need to specify the correct interface here in order to tell Typescript what are we getting from API. this.http .get('') .subscribe((res: UserInterface[]) => { this.users = res }) As you can see in browser, everything is working as previously but now we are getting data from our json-server. Call to action In this video you learned on the real example how to setup json-api and make get API call with HttpClient.
https://monsterlessons-academy.com/p/fetching-data-in-angular
CC-MAIN-2021-31
refinedweb
660
75
Soft return ?Steve Fairbairn Feb 26, 2009 2:07 AM 1. Re: Soft return ?[Jongware] Feb 26, 2009 3:55 AM (in response to Steve Fairbairn)InDesign calls it a "Soft Line Break" -- Shift+Enter. Use with extreme discretion. If you need a new paragraph, but without indenting/spacing above/below of the current one, create a new paragraph style. If you need to keep two or more words together, use non-breaking spaces or the No Break text attribute. If you want to manually tailor hyphenation/line breaks, use the Single-line Composer instead of the Paragraph Composer, in combination with No Break and/or hard spaces. [Post-Edit:] ID also offers a i Discretionary Line Break, which sort-of combines a few functions. It marks a good line breaking position inside a word i without showing a hyphen when broken. Great for URLs. 2. Re: Soft return ?Steve Fairbairn Feb 26, 2009 4:27 AM (in response to Steve Fairbairn)Thanks, found it. What I was looking for is called Type : Insert Break Character : Forced Line Break (Shift+Return). 3. Re: Soft return ?Bob Bringhurst - Adobe Feb 27, 2009 9:47 AM (in response to Steve Fairbairn)Forced Line Break and other break characters are listed here: 4. Re: Soft return ?[Jongware] Feb 27, 2009 1:21 PM (in response to Steve Fairbairn)I reckoned it worthwhile to add the advice. I never use a soft return. i "Chorus. What, never? i Captain. No, never! i Chorus. What, never? i Captain. Hardly ever!" - 6. Re: Soft return ?(Dave_Saunders) Feb 28, 2009 1:33 PM (in response to Steve Fairbairn)Gilbert & Sullivan: HMS Pinafore Dave 7. Re: Soft return ?Al Ferrari Feb 28, 2009 3:29 PM (in response to Steve Fairbairn)<G><br /><br />Thanks Dave. Good to have a context for that.<br /><br />Al 8. Re: Soft return ?[Jongware] Feb 28, 2009 5:02 PM (in response to Steve Fairbairn):-D I was just waiting for an opportunity to insert that snippet. 9. Re: Soft return ?(Frozen_Tundra) Mar 1, 2009 11:34 AM (in response to Steve Fairbairn)I always appreciate an obscure reference. Here is a little background if anyone is interested. 10. Re: Soft return ?cts_graphics Mar 2, 2009 1:09 PM (in response to Steve Fairbairn)Soft returns have their place. It's when people overuse them and use them for things like creating extra space between paragraphs when you run into trouble. 11. Re: Soft return ?Steve Fairbairn Mar 4, 2009 2:31 AM (in response to Steve Fairbairn)Like when some people enter endless tabs to produce a line break. Reckon it must be a hangover from typewriter days. I don´t know how often I have reprimanded my wife for doing it :-) But soft returns shouldn't be a problem with professionals who know what they're doing. basically it's a question of keeping your typography squeaky clean. And by the way, long live Gilbert and Sullivan - totally brilliant! I thought they were thoroughly "out" and am very surprised that anyone still remembers them :-) 12. Re: Soft return ?(Mal_Ross) Mar 5, 2009 8:37 AM (in response to Steve Fairbairn)I'm a bit of a soft line-break phobic myself. I've never understood what they're actually *meant* for; surely if you're changing topic, you start a new paragraph? Otherwise, just carry on with the current one, without need for a line break of any kind... right? It's one of those things that really bugs me when I see my colleagues doing it. Can anyone point me in the direction of some guidelines regarding the use of soft line breaks? I'd really appreciate it. Thanks, Mal. 13. Re: Soft return ?(Mal_Ross) Mar 5, 2009 8:48 AM (in response to Steve Fairbairn)Actually, I've misused the term soft return. I'd better explain. :) I'm a software developer, not a technical writer or anything like that. The passages of text I write are plain-text prompts, with no inherent concept of paragraphs. Instead, I talk about single carriage returns as soft returns and double carriage returns as paragraph breaks. Hope that's a bit clearer. What riles me is when I see colleagues writing such messages with some sentences separated by single returns and others separated by double returns. The former just looks a mess to my eye, as the semantics of the single return are unclear in the presence of a more recognisable, double-return paragraph break. Plus, the lack of soothing whitespace in single-returned paragraphs is detrimental to readability, IMHO. This is now becoming a long shot, but can anyone point me to guidelines that would cover this kind of... well, abomination? :) 14. Re: Soft return ?Vernox Mar 5, 2009 10:10 AM (in response to Steve Fairbairn)Oh my, where to start ;-) Mal, I don't think typography applies to what you do, so what should govern your work is what makes it most functionally legible. Just as an example, double carriage returns to add paragraph spacing is (to be mild) frowned upon in typesetting. Which does not mean it is wrong for you to do it, as you are not typesetting. Just keep doing what you are doing and be happy. If you want to jump into the fine art of typography, however, you can't go wrong by finding yourself a copy of Bringhurst's "The Elements of Typographic Style" and giving it a read. There are plenty of other tomes as well, but that one in particular is considered authoritative. Yours Vern 15. Re: Soft return ?NTC Ann Mar 5, 2009 12:37 PM (in response to Steve Fairbairn)Thank you for asking about soft returns!!!!! There is nothing like getting a document from someone and needing to create a new package and instead of using soft returns (shift/return) or just letting the text wrap, the person has used all hard returns or used spaces instead of tabs to align things or used a whole bunch of spaces to move text to the next line. Or getting text that is not linked where every column or every page is it's own text block instead of one continuous text block for the document. Or using extra paragraph returns instead of using space before or space after to allow for extra room in between paragraphs. Please do use them.... it makes life a lot easier for others if they have to work off your document. Thank you. 16. Re: Soft return ?NTC Ann Mar 5, 2009 12:51 PM (in response to Steve Fairbairn)Actual guidelines? Here are a couple suggestions that might help. If you want to bring your line of text to the next line without a hard return such as if you have your document set up so that say you have 1p or 1" in between paragraphs and you don't want to have that 1p or 1" space before (or space after) your next line but you want to just continue your text as one block. Or if your paragraph has an awkward rag to one side (say one line really sticks out far and you want to bring a word to the next line so the one side looks better... soft return). Or if you have a lot of hyphens and you don't want them and you've kerned and it hasn't solved the hyphen problem, then you can use a soft return to put the beginning part of the word on the next line. (Personally I try to avoid all hyphens when ever possible.) Or if when you've used a clipping path on an object and the text just doesn't sit right and you need to adjust the words in a paragraph so they look better... you can use the soft return in between words so that the rag on that one side looks more attractive. If you want to line up text under an area on the left where the text doesn't go all the way to the right in a box. Say you have the text 3" in from the right side of the text box but for some reason, you are not using the indent tool. And say you have a bullet and a space on the L... and then when you run the text to line #2 and below... you want the text to line up under the beginning of the first work instead of under the bullet at the beginning.... never use spaces to line up anything please... use tabs or in this case, after you have soft returned at the end of that 1st line... click in front of the first word after the space and bullet and hit "command and \"... then the text will line up for you under where you want it to line up. But it won't do that if you have done a hard return. I hope I am clear on explaining some of these uses. 17. Re: Soft return ?BobLevine Mar 5, 2009 1:06 PM (in response to NTC Ann)No no no! Soft returns should NEVER be used in a paragraph in InDesign especially if the paragraph composer is on. Use a no break to keep words together. Bob 18. Re: Soft return ?Kath-H Mar 5, 2009 1:34 PM (in response to Steve Fairbairn)>And say you have a bullet and a space on the L... and then when you run the text to line #2 and below... you want the text to line up under the beginning of the first work instead of under the bullet at the beginning. Isn't that what hanging indents are for? 19. Re: Soft return ?Michael Gianino Mar 5, 2009 1:39 PM (in response to Steve Fairbairn)Astronauts love soft returns. 20. Re: Soft return ?Nini Tjäder Mar 6, 2009 3:22 AM (in response to Steve Fairbairn)I'm with Kath on this - hanging indents is for that. 21. Re: Soft return ?Steve Fairbairn Mar 6, 2009 3:33 AM (in response to Steve Fairbairn)The whole point of a soft return (or forced line break) is to move a word down into the next line without creating a new paragraph. This usually happens if a column of text is aligned left and the right edge is unnecessarily ragged. (Old-school typographers often call this "ragged right" instead of aligned left.). I disagree with NTC Ann when she talks about using tabs for the front edges of bulleted lists. It's much better to use a combination of paragraph indent and minus value first line indent and save it as a paragraph style. Tabs can be a nuisance if you need to re-flow text. 22. Re: Soft return ?[Jongware] Mar 6, 2009 4:12 AM (in response to Steve Fairbairn). But whenever the text is re-formatted (due to text editing, usually), the soft return will i keep on breaking at that exact point. In this case I prefer to either insert a non-breaking space between the small word and the next one, or select the small word and (part of) the next word and apply 'No Break'. You might argue "yeah but that's about the same, innit?" since these texts will not ever be brokenjust as with a Soft Return. It's not the same: if the text re-runs, the fix becomes unnoticeable, and whenever it re-runs i again and the joined words come near the margins, they'll keep on sticking together. Just the way you want. 23. Re: Soft return ?Steve Fairbairn Mar 6, 2009 4:57 AM (in response to Steve Fairbairn)Yep, I'll drink to that :-) 24. Re: Soft return ?M Blackburn Mar 6, 2009 5:45 PM (in response to Steve Fairbairn)> I hope I am clear on explaining some of these uses. Clear enough but all bad. I'm adding to jongware's comments. Paragraph spacing should be controlled with paragraph spacing and a new style. Rag is much better controlled with No Break, and even that should be used with discretion. Hyphenation should be controlled with H&J and discretionary hyphens. If text is to be indented, use an indent. And there is absolutely no reason for a soft return after the first line of a bulleted paragraph. I'm with the Captain, I never (hardly) use soft returns. If I were asked for legitimate uses one would would be to give a contextual break to a headline. Another would be to break lines in an address - sometimes. 25. Re: Soft return ?The Artworker Mar 7, 2009 5:56 AM (in response to Steve Fairbairn)No Break can be just as bad as a soft return in certain situations. At least with a soft return with invisibles on you can see what is going on when text flow isn't behaving as it should. No Break doesn't show up you can waste several minutes trying to work out why text doesn't appear to be flowing properly because subsequent edits to the copy have made existing No Breaks wrong. 26. Re: Soft return ?M Blackburn Mar 8, 2009 4:57 PM (in response to Steve Fairbairn)> No Break can be just as bad as a soft return in certain situations. That would have to be a rather singular situation and/or the result of rather indiscriminate use of No Break. Many No Breaks, even to shape a rag, are applied to words that could logically stay together under any situation. Generally the worst they can do is make the rag as bad as it was before they were used in the first place and they have to be at the end a line to do even that. Soft returns on the other hand will always cause problems when they come anywhere in the text other than at the end of the line where they were put: they are much more disruptive and several in a row can throw a whole layout out of whack. Bottom line even if No Break could be a problem in 1 out of 100 times that would make it 1% as bad as using soft returns. 27. Re: Soft return ?MT.Freelance Mar 9, 2009 6:06 AM (in response to Steve Fairbairn)Non Breaking Space does show up with hidden characters visible. It is not a character attribute that can be inherited by accident (which can result in overset text if a whole paragraph acquires the attribute, for instance) as with the no-break character attribute. Now, if only there was a Discretionary Non-Breaking Space setting that would apply a non-breaking space before the last word of a paragraph. I wonder if a GREP style would cover that? -mt 28. Re: Soft return ?Harbs. Mar 9, 2009 6:21 AM (in response to MT.Freelance)[email protected] wrote: > I wonder if a GREP style would cover that? > Yes: "\s\S+\s?$" (or "\S+\s\S+\s?$" depending on what you want...) -- Harbs 29. Re: Soft return ?MT.Freelance Mar 9, 2009 9:03 AM (in response to Steve Fairbairn)Thanks Harbs. That is all parseltongue to me as I am a total neophyte when it comes to GREP. I've only recently opened the door to what is CS4 at home (being on ID2 prior to that). Although it was ordered at work, it has not yet arrived, so I remain on CS2. BUT, I will plug it in and give it a go. What is the difference in the syntax used? Or, perhaps more importantly, what is a good printed or online resource for all things GREP? 30. Re: Soft return ?M Blackburn Mar 10, 2009 8:21 AM (in response to Steve Fairbairn)I think I prefer the character attribute though the difference may be nominal. It seems a bit easier to apply because your selection can include part of the word, and it is easier to get rid of: simply click Clear Overrides. Granted this is only safe if you're in the (good) habit of using character styles instead of local formatting. Having a paragraph style that applies No Break could be disconcerting but doesn't seem that likely. Though I should add that I have found occasion to use just such paragraph styles (it has to do with using an Align To Character tab within one character width of the right margin) and that has given me some headaches when I was careless with Based On. > That is all parseltongue to me For me as well and that's been keeping me from trying to learn it. There is book by Peter Kahrel that has been recommended in this forum. 31. Re: Soft return ?(Olav_Kvern) Mar 10, 2009 12:13 PM (in response to Steve Fairbairn)Frozen Tundra wrote: "I always appreciate an obscure reference." Wait..."H.M.S. Pinafore" is obscure...? It's basic cultural knowledge, like knowing how to pronounce "pwn3d" or "Fifty Cent". If W.S. Gilbert, the greatest writer of comic verse in English, is "obscure," then what in the world has become of us?:-) Thanks, Ole 32. Re: Soft return ?[Jongware] Mar 10, 2009 2:03 PM (in response to Steve Fairbairn)>..basic cultural knowledge.. "Parseltongue" seems to be added to basic vocabulary. Oh -- and my introduction to "Pinafore" was, in fact, through "Star Trek -- Insurrection". But it roused my curiosity enough to delve a bit deeper. 33. Re: Soft return ?Michael Gianino Mar 10, 2009 4:09 PM (in response to Steve Fairbairn)Why didn't you just learn it like the rest of usfrom Sideshow Bob? 34. Re: Soft return ?claidheamdanns Sep 24, 2010 11:06 AM (in response to Michael Gianino) Grep. 35. Re: Soft return ?BobLevine Sep 24, 2010 11:31 AM (in response to claidheamdanns) Not correct if you have the paragraph composer in use. A soft return could easily rewrap the entire paragraph and make it look like crap. Bob 36. Re: Soft return ?Joel Cherney Sep 24, 2010 11:41 AM (in response to claidheamdanns)." 37. Re: Soft return ?P Spier Sep 24, 2010 11:46 AM (in response to claidheamdanns). 38. Re: Soft return ?[Jongware] Sep 24, 2010 2:17 PM (in response to P Spier). 39. Re: Soft return ?claidheamdanns Sep 30, 2010 12:09 PM (in response to P Sp.
https://forums.adobe.com/message/3160703
CC-MAIN-2015-11
refinedweb
3,085
81.73
Microsoft .NET controls give developers the leverage they need to build modular solutions. A developer can design custom controls from a myriad of existing controls and embed the custom control into a Windows Form or another control. With some additional code, one can also embed those controls in a Web Form. Writing an extra, simple library beyond those changes will allow you to run that control in a Java application or applet. "Why would anyone want to embed .NET controls inside Java?" you may be asking. Being able to solve complex problems with a single language for one solution is often impossible - especially when dealing with legacy applications. COM made solving such problem easier by allow solutions to be written in multiple languages and in a module fashion. One developer who is proficient in Visual Basic can write a component in VB while another developer proficient in C++ can write a component using ATL. This modular architecture also lets developers work side-by-side in a single project, as well as allow for better versioning since minor changes in one "module" do not require recompiling and redistributing the entire solution. When my company's parent company said they wanted our .NET components to be embedded in their legacy Java application, I knew that a good modular architecture and COM interoperability were the way to go. This tutorial will cover many things, including properly exposing .NET controls as COM objects for release, basic ATL (Active Template Library) classes necessary for instantiation and control, basic information about Java Native Interfaces (JNI), and wrapping the .NET control in Java using the aforementioned technologies. While our destination for the .NET control is a JavaBean, I hope you will see many places throughout this tutorial where information can lead you to branch your solution and arive at a different destination. For instance, once you learn how to instantiate the .NET control in C/C++ using ATL, you could then create a simple Win32 or MFC Windows application to host the control. In any case, I'm sure you will find this tutorial helpful in understanding proper COM implementation in .NET for Runtime-Callable Wrappers (CRW), and instantiating the control as a COM object using C/C++ and ATL. Before traveling down a difficult path, it is important to understand basic concepts and to have various frameworks and tools installed on your computer. If you think you've never dealt with COM before, you'd probably be mistaken. COM has been an integral part of Windows since Windows 95. Everything from the Desktop to the task bar, the menus to the toolbars, and so much more revolves around COM. But what is COM? COM stands for the Component Object Model, which means that you can builds pieces of code in a certain way in which all pieces can work together. For instance, the Microsoft Common Controls that include the ubiquitous text box, label, etc. are COM objects. When you're programming in VB6, almost everything you use is a COM object. COM also presents a natural client/server architecture where the client provides functionality that the server consumes. Internet Explorer is a classic example where the WebBrowser control (IWebBrowser2 interface) provides browsing functionality and the application that contains the menus, toolbars, explorer bars, status bar, etc. consumes that functionality. Before I get too deep into COM, however, just keep in mind that COM objects can be used in practically any language. Every COM object implements IUnknown (just like every .NET class implicitly extends System.Object) which contains three methods: AddRef, QueryInterface, and Release. AddRef and Release increment and decrement the reference count for a DLL. When it's first used, the DLL is loaded into memory. Upon successive instantiation, AddRef is called and the reference count is incremented. When a server is done with the client, Release is called and the reference count is decremented. When the reference count reaches zero (0), the COM object destroys itself. QueryInterface is used to check for and get a particular interface that the COM object might also implement, such as IObjectSafety, IPeristStorage,, and many, many others. IUnknown System.Object AddRef QueryInterface Release IObjectSafety IPeristStorage, So, since we have a means to host a control in practically any language, using COM seems like the logical choice for hosting a control in Java that isn't natively Java. Since we can also expose .NET controls as COM objects, this solution becomes even more attractive. But how do you embed COM objects in Java? Java Native Interface, or JNI, are a means for Java to use native methods, typically in a C++ application. Since we're dealing with C++, we can easily instantiate the COM object. JNI methods are called in a DLL from Java native methods, which then can create a Window to host the COM control in and attach the Window to a Window handle, or an HWND. Since every Window in Windows has an HWND, we need only get an HWND from Java through an undocumented method and we can do practically anything - including hosting our .NET control exposed as a COM object in a Java application or applet! native HWND To begin, lets start with a simple .NET User Control. Since the scope of this tutorial does not necessarily include hosting such controls on the Web, I will not discuss certain things that are necessary for Web hosting of User Controls. A tutorial covering specific constraints of such an effort will be ported later. Create a new Windows Control Library (C# will be used in this example, although it could easily be ported to VB.NET) and call it "COMTest". After the project is created, rename UserControl1.cs to MyCOMObject.cs. Do this both for the filename (select the file itself and check the PropertyGrid) and for the control (select the control itself and check the PropertyGrid). Go ahead and throw some controls on there. It doesn't matter much what you put on the control for this tutorial and I only have you name it as I mentioned above because the name will be coming up quite a bit and I don't want to get too vague. See the demo project for a sample .NET control. It's a basic user control that takes an input value and display a "Hello" message back to the user. This is a simple enough example. I would like to add an event, however, so that the containing control knows when the inner button is clicked and can get the value from the text box. I will implement a simple event (not even resembling the EventHandler delegate) and expose the text of the box. EventHandler To do so, add a public property called UserName as Type String: Type String [Category("Data")] [Description("The name displayed in the \"Your Name:\" text box.")] public string UserName { get { return this.NameBox.Text; } set { this.NameBox.Text = value; } } Also, above the class, add the following public delegate: public delegate void HelloClicked(); This is a simple event delegate that is easy to handle in practically any situation where the EventArgs class and child classes would be hard to represent. EventArgs We then add a protected On<Event> method and the event itself to the code, and raise the event by calling the protected On<Event> method inside our Button.Click handler. Button.Click [Category("Action")] [Description("Occurs after the \"Say Hello\" button is clicked.")] public event HelloClicked Clicked; protected virtual void OnClicked() { if (this.Clicked != null) this.Clicked(); } Finally, add the following in the set accessor of the UserName property after setting the value, which will call the protected method above that raises the Clicked event: Clicked OnClicked(); The final code is in the COMTest folder of the demo project. We will add the rest of the code contained in the class when we add the COM interop functionality in the next section. The code as it exists now, however, is fully embeddable in other .NET controls. Believe it or not, very little is need to expose this .NET control as a COM object. But before I get into that, I want to explain a few things about interop. First, we don't want to let the .NET compiler to generate a class interface - the interface that COM servers actually "talk" to - because of many problems including versioning and VTABLE ordering, which is the order of the functions that appear in the VTABLE, which contains information about the class itself. If we allow the .NET compiler to do this, it is hard to maintain a consistent class interface since the method orders could be changing and COM servers expect functions to be at certain addresses. This is known as dispatching. Using the IDispatch interface, servers can find out methods available in a COM client and their respective index in the VTABLE. If this method location changes, our COM server will call the wrong method! IDispatch Second, we need to make our COM object strongly named, which means we must generate a public key for the assembly and give the assembly some assembly-level attributes. This allows us to insert the .NET control in the Global Assembly Cache (GAC), which means faster load times, especially if we add it as a native assembly (ngen -i <Assembly>). Let's take the easy part first: strongly naming the assembly. First, run "sn -k KeyFile.snk" in the project directory. Second, open the AssemblyInfo.cs file and make sure that the AssemblyTitle, AssemblyVersion, and AssemblyKeyFile attributes are filled in. We can also define a GUID here, which will be the GUID identifying the type library, which I'll talk more about later. My AssemblyInfo.cs file ended up looking like this: AssemblyTitle AssemblyVersion AssemblyKeyFile using System.Reflection; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; [assembly: AssemblyTitle("My COM Test")] [assembly: AssemblyDescription("COM Test Library")] [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("CodeProject")] [assembly: AssemblyProduct("COMTest")] [assembly: AssemblyCopyright("Copyright 2002")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] [assembly: AssemblyVersion("1.0.*")] [assembly: AssemblyDelaySign(false)] [assembly: AssemblyKeyFile(@"..\..\KeyFile.snk")] [assembly: AssemblyKeyName("")] [assembly: Guid("B58D7C8C-2E2D-4aa6-8EAF-CF7CB448E353")] You can generate a GUID from the Tools meno of VS.NET using the "Create GUID" tool, or run "guidgen" from the command-line / Run prompt. Now, we'll do the slightly harder part, although it isn't too bad. I would also like to mention that you should use the ComVisibleAttribute on any Type that you don't want exposed as COM objects. These typically include modal forms and internal user controls that are used within your project only, such as modal dialogs that open from the control that you're exposing as COM. To do this, simply use ComVisible(false) on any class, struct, or interface. This is also exlusive, so that you can use this attribute on the assembly-level and use the "true" parameter for any class you want to expose. I don't use this method because I want to expose almost everything in this project since it only contains my COM component. For example, I do use the ComVisibleAttribute on my delegate so that it doesn't appear as a COM object itself: ComVisibleAttribute ComVisible(false) [ComVisible(false)] public delegate void HelloClicked(); Anyway, back to the class interface. The class interface is what the COM client actually uses. It knows the interface that contains the functionality it wants to use, as well as the CLSID (class identifier, a globally unique identifier, or GUID) of the class that implements it. Using this approach, the COM client knows nothing about the location, language, or implementation of a control (COM server), only that it contains certain functionality. In essence, a simple text diagram of a call from a COM client to a server would look like this: client --> [interface --> server] The client calls a method on the interface, but the runtime is marshalling that call to a class that implements the interface. So, when we generate our class interface, we'll want to expose any methods, properties, or events that the COM client may use. First, we'll define our event interface, which contains only events we want to expose. In this case, that's only the Click event. Attributes are also attached and will be explained after the code fragment. Click So, open the code for MyCOMObject.cs and put the following at the top after the namespace declaration: [Guid("70B9F4F4-0285-4aae-B64E-DE57BDBF49C5")] [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)] public interface DMyCOMObject { void Clicked(); } The GuidAttribute is another GUID for the interface only. Prefixing the COM object name with the letter "D" is a standard naming convention for event interfaces. The InterfaceTypeAttribute declares this interface as a dispatch interface (dispinterface), which are typically event interfaces. GuidAttribute InterfaceTypeAttribute dispinterface Next, we expose several inherited properties and methods used for drawing the control and the UserName property so that they can all be accessed from COM clients. Guid("CAE73FF2-2D47-4677-B8EA-3E0FF12E4B0D")] [InterfaceType(ComInterfaceType.InterfaceIsDual)] public interface IMyCOMObject { Color BackColor { get; set; } Color ForeColor { get; set; } int Top { get; set; } int Left { get; set; } int Width { get; set; } int Height { get; set; } IntPtr Handle { get; } bool Visible { get; set; } void Show(); void Hide(); void Refresh(); void Update(); string UserName { get; set; } } Again, we define a GUID for this interface and prefix the interface with the letter "I", which is common for both interfaces in general and for class interfaces, which you should remember contain the properties and methods exposed from the COM object. The first many are inherited from System.Windows.Forms.Control, while the last one is for MyCOMObject itself. System.Windows.Forms.Control Finally, we add a GuidAttribute to the MyCOMObject class, as well as some other attributes like so: [Guid("F65B3579-FEAA-4da5-BABA-1B9D195307FF")] [ComSourceInterfaces(typeof(DMyCOMObject))] [ClassInterface(ClassInterfaceType.None)] [ProgId("COMTest.MyCOMObject")] public class MyCOMObject : System.Windows.Forms.UserControl, IMyCOMObject { // ... } The ComSourceInterfaceAttribute defines the event interface(s) for this COM object. The ClassInterfaceAttribute tells the compiler not to generate a class library for this class, but to instead use the implementing interfaces, which you will noticed we added IMyCOMOBject as an interface this class implements. Finally, we add a ProgId, which is an easy way to refer to this control. There are version-independent ProgId's like the one above, and version-dependent ProgId's such as "COMTest.MyCOMObject.1.0", which .NET actually adds to the registry based on the AssemblyVersionAttribute value (major.minor only). ComSourceInterfaceAttribute ClassInterfaceAttribute AssemblyVersionAttribute The final code can be found in the COMTest folder of the demo project. Now open the Project properties, make sure "Register for COM Interop" is "true" for all builds, and build your project. If you wish to build your project from the command line, you can run the following command using regasm.exe: regasm.exe /codebase COMTest.dll If you have given your assembly a strong-name (signed with a private key and given a version number), you can add the assembly to the Global Assembly Cache and run regasm.exe to register the assembly so that clients use the version in the GAC and don't depend on the codebase of your assembly: gacutil /i COMTest.dll regasm.exe COMTest.dll Before actually writing the JNI wrapper, we need to generate a header from a Java class. Since this Java class does not exist yet, the next logical step is to create our Java class. For simplicity, I'll develop a simple java.awt.Frame-based application that hosts our .NET user control. In my real-world example that I designed to help our company win the contract with our parent company (the guys with the legacy Java application), I actually used JavaBeans. You could also use Java Applets, although you'd just be creating extra work to embed a .NET user control in a web page for a platform that already supports .NET user controls in web pages. I live either of these problems as an exercise for the user, though. So, lets develop a simple Frame-based Java application. Lets just go ahead and throw it in the same project directory as our COMTest project. Location is important in this example, since VS.NET or regasm do not properly associate the type library with the COM library. If you locate your .java file in a separate place or use packages (I did for our solution, but I leave that out as well for simplicity), you'll be responsible for translating the paths when we use them later. I went ahead and created a simple Java source file with some native methods that allow me to resize the control and change the background and foreground colors easily. The finished Java source file can be found in the demo project, but the key lines of code are shown below: import java.awt.*; import java.awt.event.*; public class JavaTest extends Canvas { static { System.loadLibrary("JNITest"); } // ... public native void initialize(); public native void destroy(); public native void setCOMSize(int width, int height); public native void setCOMBackground(long rgb); public native void setCOMForeground(long rgb); } The methods above marked as native are methods that Java will call from the Java environment into the native environment, which is C++ in this case. The methods initialize() and destroy() are called from within addNotify() and removeNotify(), which Java calls when it attaches windowed controls to native resources, such as the ubiquitous HWND. Within those overrides, we call initialize() and destroy(), which actually get called in the JNI wrapper we'll discuss shortly. initialize() destroy() addNotify() removeNotify() The maintainence code is pretty straight forward so I won't spend much time on that. However, I will mention that I added a WindowAdapter (saves developers from having to implement every methods of a WindowListener) so that I can safely clean-up resources when the frame is closing. Also, without this code, the frame won't actually close - even if you click the "X" button on the Window frame! WindowAdapter WindowListener The other important piece of code is the ComponentAdapter (again, saves coding versus a ComponentListener), which I receive resize events from. I choose to implement my resize code in this manner rather than overriding every overloaded setBounds() method from java.awt.Canvas, which this class extends. This makes for less code but doesn't resize the control as the user resizes the frame that contains the control. The control is only resized once the user releases the edge of the Window, thereby "committing" the new Window size. You may do it either way - just make sure that the method you call isn't the native method, but a method that instead does its thing in Java then calls a native method. This is true in any case, actually. ComponentAdapter ComponentListener setBounds() java.awt.Canvas So, the only thing left to do is compile the Java class. From the directory that contains the JavaTest.java file, run the following: javac JavaTest.java You'll end up with three class files (two from or nested declarations). We will then generate a JNI header from the main class file for use in our JNI wrapper below. To do so, type the following in the same command line: javah -jni -classpath "%CLASSPATH%;." -o JNITest.h JavaTest This makes sure that the current directory is included in our CLASSPATH and that we generate a JNI-style header named JNITest.h from the JavaTest class we previously compiled. For brevity, I'll not list that header file here since it contains a lot of information. Essentially, though, you should see a bunch of method signatures that match the following pattern: JNIEXPORT void JNICALL Java_JavaTest_<METHODNAME> (JNIEnv *, jobject[, additional params]); If you review the JNI documentation, you'll see that "Java" is always the first word in the function signature. It is important to keep this signature. The second to n-1 words are the package name members and the class. The last word is the method. As far as parameters go, the first and second are always present; the first represents the Java environment, while the second represents the object whose methods you're handling natively. Any other parameters depend upon your parameters specified in the method in the Java source file. We'll go over what to do with these in the next section. Before we get too in-depth, let me explain a little about type libraries. Type libraries (.tlb files, or typelibs) contain information about a COM object or objects useful during development. This helps drive the Intellisense technology that make Visual Studio so easy and powerful to use! This also helps preprocessors generate header information so that COM servers can compile and link against the proper functions when using dual interfaces. So, since VS.NET was nice enough to generate a typelib for us, we can now add a Win32 DLL project to our solution and use the typelib in a COM server that is also our Java Native Interface wrapper, or JNI wrapper. Right-click on the solution and add a new C++ Project, specifically a Win32 project. In the wizard, click on "Application Settings" and select "DLL" to create a dynamic link library. Click "Finish" and you should see the new project added to your solution. Double-click on stdafx.h and add the following #includes at the bottom: atlbase.h and atlwin.h (in that order). You may also right-click on the project and add an existing file, that file being the header file we generated with "javah" from our Java class file above. This just keeps the file reference in your project for easy reference while maintaining its location. #includes You'll also want to modify VS.NET's configuration and change your VC++ project settings: Now, open your JNITest.cpp file and add the following #includes to the top after "stdafx.h": #include "..\COMTest\JNITest.h" // or wherever it was #include <win32\jawt_md.h> You'll also want to add a #import, a MS VC++ extension that allows you to import and use a typelib. Since this is generated during preprocessing, you'll have to compile your project before you can actually use it. Include the following after the #includes we added from above: #import #import "..\COMTest\bin\Release\COMTest.tlb" raw_interfaces_only named_guids using namespace COMTest; You'll also notice that C++ uses namespaces. This is actually somewhat standard. You find this a lot when using the Active Template Library, or ATL. The namespace used above is the namespace you defined in your .NET class, replacing periods (.) with underscores (_). You can rename this using additional #import options if you like (see MSDN for more details). Now go ahead and compile your C++ project so that the preprocessor can generate a .tlh file, which is a header file generated from the typelib using the options you specified in the #import statement (see MSDN for more details). You can go ahead and take a look at what this header file contains in the "Release" directory of the C++ project directory. It's just a header file like any other, but you may find it interesting. It contains all the interfaces, methods and GUIDs of every COM object exposed in your .NET user control project. Since you now have interface declarations for your COM object, lets add a few global variables we'll need for following code. After your "using namespace COMTest;" statement, add the following lines: using namespace COMTest; static HWND m_hWnd = NULL; static CAxWindow *m_axWindow = NULL; static CComPtr<ICOMTEST> m_spMyCOMObject = NULL; static OLE_COLOR m_BackColor = NULL; static OLE_COLOR m_ForeColor = NULL; Now all that's left to do is implement the JNI methods defined in the JNI header we generated previously. The logical place to start coding our JNI wrapper is the method that instantiates the COM object through a Runtime Callable Wrapper (RCW). One thing to keep in mind while doing this is that JAva calls native methods, so you can't call the Java class through those native methods, nor can you call methods in the Java library while in the body of a method within the same thread as the current call. For this reason, you must create a new thread to actually instantiate the COM control. Your Java_JavaTest_initialize() method should like the following: Java_JavaTest_initialize() JNIEXPORT void JNICALL Java_JavaTest_initialize(JNIEnv *env, jobject canvas) { JAWT awt; JAWT_DrawingSurface *ds; JAWT_DrawingSurfaceInfo *dsi; JAWT_Win32DrawingSurfaceInfo *dsi_win; jboolean result; jint lock; awt.version = JAWT_VERSION_1_3; result = JAWT_GetAWT(env, &awt); assert(result != JNI_FALSE); ds = awt.GetDrawingSurface(env, canvas); assert(ds != NULL); lock = ds->Lock(ds); assert((lock & JAWT_LOCK_ERROR) == 0); dsi = ds->GetDrawingSurfaceInfo(ds); dsi_win = (JAWT_Win32DrawingSurfaceInfo*)dsi->platformInfo; m_hWnd = dsi_win->hwnd; if (m_hWnd != NULL) // Pass control to a new thread _beginthread(initCOMTest, 0, NULL); ds->FreeDrawingSurfaceInfo(dsi); ds->Unlock(ds); awt.FreeDrawingSurface(ds); } This part of the code almost never changes. The concept is simple: every window in Windows has a handle called an HWND. Java surfaces such as frames and controls are no different, they're just not exposed in Java directly. The Java Runtime Environment, or JRE, does create an HWND for each window, so all you're doing above is getting that handle, assigning it to a glocal variable within your JNI wrapper, and passing control to a new thread while the current thread finishes executing the current method and cleans-up resources. In order for this code to compile, you must declare the function that actually instantiates the COM object: initCOMTest(). You should either define this method before Java_JavaTest_initialize() or add a forward-declaration statement before it. I choose to define the method before Java_JavaTest_initialize(), which you'll see when I display all code together. For now, however, your code for initCOMTest() should look like: initCOMTest() void initCOMTest(void *argv) { if (m_axWindow == NULL) { CoInitialize(NULL); m_axWindow = new CAxWindow(m_hWnd); if (m_axWindow != NULL) { HRESULT hr = S_OK; hr = m_axWindow->CreateControl( CT2OLE(TEXT("COMTest.MyCOMObject")), NULL, NULL); if (SUCCEEDED(hr)) { hr = m_axWindow->QueryControl(IID_IMyCOMObject, (LPVOID*)&m_spMyCOMObject); if (FAILED(hr)) { m_spMyCOMObject = NULL; m_axWindow->DestroyWindow(); return; } if (m_BackColor != NULL) m_spMyCOMObject->put_BackColor(m_BackColor); if (m_ForeColor != NULL) m_spMyCOMObject->put_ForeColor(m_ForeColor); } } } // start the message loop MSG msg; while (GetMessage(&msg, NULL, NULL, NULL)) { TranslateMessage(&msg); DispatchMessage(&msg); } _endthread(); } The first thing we do is initialize COM with CoInitialize(). We'll unload COM in the function definition for Java_JavaTest_destroy(). We then create an ATL CAxWindow class using the global HWND m_hWnd. If that succeeds, we add a our .NET COM object to the CAxWindow using the ProgID we defined earlier. If the control was added successfully, we get a CComPtr<IMyCOMObject> reference to the control and set the background and foreground colors. After that, we start the message loop (otherwise the control disappears immediately since no message pump is active) and signal the thread that we're finish when the message loop is broken (when the control is being destroyed). We must clean a few of these things up when the object is destroyed, so we handle the Java_JavaTest_destroy() method next: CoInitialize() Java_JavaTest_destroy() CAxWindow CComPtr<IMyCOMObject> JNIEXPORT void JNICALL Java_JavaTest_destroy(JNIEnv *env, jobject canvas) { if (m_axWindow != NULL) { delete m_axWindow; m_axWindow = NULL; } CoUninitialize(); } The method makes sure that the CAxWindow is properly destroyed and unloads the COM library and COM objects. The rest of the methods are fairly straight forward so I won't describe each one in detail. For the rest of the code listed below, however, keep in mind that .NET marshals System.Drawing.Color as OLE_COLOR, which can be type-cast from a long. System.Drawing.Color OLE_COLOR long The completed code can be found in the demo project. Now that you've completed a .NET component exposing a COM object, a Java application, and a JNI wrapper, you're ready to run your example. If you have a background in Java, you're in luck. If not, there's some simple things to learn about Java that make its class loader interesting. If you've specified a package (similar to a namespace in .NET), you should've created a directory structure to match it, such that com.codeproject.examples.Class1 would be in a directory com\codeproject\examples\Class1.java. When you compile it with "javac", you type the path to the Java source file. When you load and execute the class with "java", you use the package and class name syntax. In this example, however, I didn't specify a package to make it easy. So, to run your application, perform the following steps: com.codeproject.examples.Class1 java -Djava.library.path="%PATH%;." JavaTest If Java complains that it can't find the class JavaTest, your CLASSPATH environment variable may not include the current directory. The permanent fix is to change your CLASSPATH environment variable (according to your operating system) to include ".", which means the current directory. A temporary fix is to run the following command: java -Djava.library.path="%PATH%;." -cp .;%CLASSPATH% JavaTest The -D command-line parameter defines (or redefines) a Java environment variable. In this case, we include the current directory in the PATH environment variable, because the current directory contains our JNITest.dll. Because we used System.loadLibrary() in our Java app, Java will attempt to load the referenced library from the PATH (as Windows does for executables). If JNITest.dll was in our PATH, you would not need to include the -D command-line parameter. If you don't want to include the library in your PATH and know that it will be in a particular location at all times, you can use System.load(), which takes a path to a library. With the high maintainence required for the latter, I recommend the former approach. System.loadLibrary() System.load() You should see a Java window popup and shortly thereafter your .NET User Control. Fill-in your name and click the button. To summarize the concept above, you need only do the following: If you think about the solution, it's really not that complicated. COM bridges the gap between .NET and Java, and both frameworks use a method of communicating with native modules which COM glues together. Similar approaches can be used with other languages and frameworks, too. In a perfect world, we'd either have one great language with which to develop, or work on projects completely separate from projects written in other languages. Unfortunately, it's not a perfect world and you may be faced with such challenges. I hope that what you learned above will help you not only integrate rich .NET User Controls in Java applications, but teach you (via example) how to integrate .NET with other languages and frameworks.
http://www.codeproject.com/script/Articles/View.aspx?aid=2909
CC-MAIN-2014-10
refinedweb
5,087
55.13
Windows Presentation Foundation has been around for a relative short while. Silverlight brings WPF to the browser, with that the amount of writings on WPF is growing fast. At the moment I'm working on a project which uses Deep Earth, a very cool Silverlight based viewer on Virtual Earth (or any other map provider). Deep Earth is an open source project. Having seen the curly path of this project I want to take a closer look at separation of presentation and logic behind, being one of the fundamentals of WPF. WPF presentation is written out in markup, which is just another XML document. Logic is written out in C# (or any other .net language). There are several kinds of logic like business logic and presentation logic. Business logic should be kept as far from WPF as possible but when it comes to presentation logic there are many, many ways to handle that. In markup but also In C#. And sometimes you have to cross the border, C# code manipulating markup or markup providing a hook for your C# code. I will discuss both here. As a demo I have a simple Silverligth user control. The xaml editor in VS is in it's current state not that useful. My image is drawn with Expression Blend. After that I copied the markup into VS. (VS and Expression Blend can be integrated, that's another story). The result looks like this. Just an ellipse with two lines (the path elements) I want to be able to set a visual property like the line thickness from code. There are many many ways to do that. According to most sources the way to do it is using databinding in the markup. But that is quite a hassle; not only the markup required also the timing. What if I want to update the property over and over again ? The most informative story I have found was this post by Rick Strahl on his frustrations. What made me see the light was laying the web aside and browsing Petzolds book Applications = code + markup (this post's title is a tribute). That book has been critiqued by some as being not the right approach to WPF. For me it works just fine as it perfectly describes the two aspects, code and markup, and how they can communicate. I'm a code guy, at first sight Blend frightens me as a the tool of graphical artist. Anyway, Petzold's treatment presented a simple solution. All visual elements have an optional name attribute. By default Expression Blend omits it. Elements with a name do show up in the code behind; elements without one don't. So after updating the XAML to <Path x:Name="Line1" HorizontalAlignment="Left" Margin="163.401 <Path x:Name="Line2" HorizontalAlignment="Left" Margin="158.222 I can access the lines from code behind. public partial class MyFancySilverlightControl : UserControl { public MyFancySilverlightControl() { InitializeComponent(); } public void ChangeLine(Color color, double thickness) { Line1.Stroke = new SolidColorBrush(color); Line1.StrokeThickness = thickness; } } The next thing I want to do is use my usercontrols in DeepEarth. DeepEarth does have a nice collection of shape controls to place on a map. Things like polygons to outline a region and pushpins to mark a position. Here the vector graphics used by WPF really shine. For example you can zoom in on a map and outline a city boundary in every detail using a Polygon shape. Zoom out and the boundaries melt into a small dot. Zoom in again and every detail is still there. Imagine doing that with bitmaps.. In the newest version the content of the different shapes is defined in markup templates. The basic markup of a shape is straightforward <Style TargetType="DeepShapes:GeometryBase"> <Setter Property="Template"> A shape basically contains a single path (line). The template for a pushpin also contains a lot of fancy markup building the image of the pin. The WPF method OnApplyTemplate instantiates the shape. In overrides of this method the DeepEarth classes take further care of the correct size and position of the control on the map. On a map usercontrols are like pushpins, an image of some kind placed on a certain location. A first shot would be to subclass the DeepEarth Pushpin class and add the markup of my pushpins to the templates of DeepEarth. But that would not be very flexible; in such a scenario every new control requires fiddling with the DeepEarth core assemblies. Besides that it would be hard again to set control properties from code. Again, working from code provides a solution. And here the markup in the template provides a base for the code. I have added one new class to the DeepEarth shapes, a UserControlHost. This is it's markup Instead of a path the template declares a named grid. The nice thing about a grid control is that other controls can be added to its Children. And that's just what I do when applying the template namespace DeepEarth.Shapes { public class UserControlHost : Pushpin { private readonly UserControl hostedControl; public UserControlHost(UserControl hostedControl) { DefaultStyleKey = typeof(UserControlHost); this.hostedControl = hostedControl; } public override void OnApplyTemplate() { ((Grid) GetTemplateChild("LayoutRoot")).Children.Add(hostedControl); base.OnApplyTemplate(); } } } The usercontrol is passed in the constructor. In the overriden OnApplyTemplate the grid is grabbed using the GetTemplateChild method and the usercontrol is added to its Children. After that OnApplyTemplate of the PushPin base class is fired. It takes care of all the sizing and positioning work. To that method there is no difference between xaml from the original template or xaml from injected usercontrol. The UserControlHost class is just another shape like a polyline, polygon or the default pushpin. And you can do all the same things with it, like adding it to layers. This way you can host any UserControl in DeepEarth. This snippet will look familiar to anybody working with DeepEarth UserControl visual; if (isCurrentPosition) visual = new ShipVisual(shipColor, (double) clusteredWp.Course); else visual = new PushpinVisual(); var pin = new UserControlHost(visual); pin.Point = new Point(clusteredWp.Longitude, clusteredWp.Latitude); wayPointLayer.Shapes.Add(pin); That's it. I hope to have demonstrated that in WPF the border between markup and code is somewhat flexible. Both aspects are very powerful and they can really help each other. Separate well and rule (the deep earth). [Advertisement] Pingback from Dew Drop - December 11, 2008 | Alvin Ashcraft's Morning Dew Nice Article! Its important to point out that WPF is the windows specific desktop technology while Silverlight (originally called WPF everywhere) is a subset that will run on Windows, Mac, Linux, WinMo, Nokia and maybe even Iphone if the Mono guys get there way. The XML markup is very similar between WPF and Silverlight (essentially just some missing controls) and is called XAML, everything UI is in the XAML, Blend is a great tool to make XAML. Thanks for your feedback on the DeepEarth project! Pingback from 2008 December 12 - Links for today « My (almost) Daily Links With your example I cant see the problem with data binding the color and/or thickness of the line (via XAML) to dependency properties defined in the code behind class. It doesnt matter that much as you have packaged it up as a user control but removing that dependency would be more in-line with WPF thinking and would allow you to make it into a 'lookless' custom control if required. Matt @John: You are quite right. I lump everything xaml under the brand WPF but in the real world the there are two, slightly different, implementations. The Windows one still bears the label WPF the one web for the web Silverlight. The first name for Silverlight was WPF/E though. @Matthew: You are right, the actual properties in my example could be relatively easely done using data binding. The properties are set once. I'm working on a follow up on another demo control where a visual property is updated over and over again in a way I wouldn't know how to do with databinding . But as said in the pos,t I'm still
http://codebetter.com/blogs/peter.van.ooijen/archive/2008/12/11/wpf-code-markup-custom-pushpins-for-deepearth.aspx
crawl-002
refinedweb
1,346
65.52
Hello, zoT1wy1njA0=! Let’s jump right into Java cryptography with some examples. The first example can be run by anyone who has the Java Development Kit (JDK) 1.1 or later installed. The second example uses classes from the Java Cryptography Extension (JCE). To run it, you will need to download and install the JCE, which is available in the United States and Canada only at. Chapter 3, discusses these pieces of software and how they fit together. Don’t worry if you don’t understand everything in these programs. They are demonstrations of what you can do with cryptography in Java, and everything in them will be explained in more detail elsewhere in the book. Masher Our first example demonstrates how a message digest works. A message digest takes an arbitrary amount of input data and creates a short, digested version of the data, sometimes called a digital fingerprint, secure hash, or cryptographic hash. Chapter 2 and Chapter 6 contain more detail about message digests. This program creates a message digest from a file: import java.io.*; import java.security.*; import sun.misc.*; public class Masher { public static void main(String[] args) throws Exception { // Check arguments. if (args.length != 1) { System.out.println("Usage: Masher filename"); return; } //(); // Print out the digest in base64. BASE64Encoder encoder = new BASE64Encoder(); String base64 = encoder.encode(raw); System.out.println(base64); } } To use this program, just compile it and give it a file to digest. Here, I use the source code, Masher.java, as the file: C:\ java Masher Masher.javanfEOH/5M+yDLaxaJ+XpJ5Q== Now try changing one character of your input file, and calculate the digest again. It looks completely different! Try to create a different file that produces the same message digest. Although it’s not impossible, you probably have a better chance of winning the lottery. Likewise, given a message digest, it’s very hard to figure out what input produced it. Just as a fingerprint identifies a human, a message digest identifies data but reveals little about it. Unlike fingerprints, message digests are not unique. A message digest is sometimes called a cryptographic hash. It’s an example of a one-way function , which means that although you can calculate a message digest, given some data, you can’t figure out what data produced a given message digest. Let’s say that your friend, Josephine, wants to send you a file. She’s afraid that your mutual enemy, Edith, will modify the file before it gets to you. If Josephine sends the original file and the message digest, you can check the validity of the file by calculating your own message digest and comparing it to the one Josephine sent you. If Edith changes the file at all, your calculated message digest will be different and you’ll know there’s something awry. Of course, there’s a way around this: Edith changes the file, calculates a new message digest for the changed file, and sends the whole thing to you. You have no way of knowing whether Edith has changed the file or not. Digital signatures extend message digests to solve this problem; I’ll get to them in Chapter 6. So how does this program work? It operates in four distinct steps, indicated by the source comments: Check command-line arguments. Masherexpects one argument, a filename. Obtain the message digest object. We use a factory method, a special static method that returns an instance of MessageDigest. This factory method accepts the name of an algorithm. In this case, we use an algorithm called MD5. MessageDigest md = MessageDigest.getInstance("MD5"); This type of factory method is used throughout the Security API. Calculate the message digest. Here we open the file and read it in 8-kilobyte chunks. Each chunk is passed to the MessageDigestobject’s update()method. Finally, the message digest value is calculated with a call to digest(). Make the result readable. The digest()method returns an array of bytes. To convert this to a screen-printable form, we use the sun.misc.BASE64Encoderclass. This class converts an array of bytes to a String, which we print. SecretWriting The next example uses classes that are found only in the Java Cryptography Extension (JCE). The JCE contains cryptographic software whose export is limited by the U.S. government. If you live outside the United States or Canada, it is not legal to download this software. Within the United States and Canada, you can get the JCE from. The SecretWriting program encrypts and decrypts text. Here is a sample session: C:\ java SecretWriting -e Hello, world!Lc4WKHP/uCls8mFcyTw1pQ== C:\ java SecretWriting -d Lc4WKHP/uCls8mFcyTw1pQ==Hello, world! The -e option encrypts data, and the -d option decrypts it. A cipher is used to do this work. The cipher uses a key. Different keys will produce different results. SecretWriting stores its key in a file called SecretKey.ser. The first time you run the program, SecretWriting generates a key and stores it in the file. Subsequently, the key is loaded from the file. If you remove the file, SecretWriting will create a new key. Note that you must use the same key to encrypt and decrypt data. This is a property of a symmetric cipher. We’ll talk more about different flavors of ciphers in Chapter 7. “Hello, world!” can be encrypted to many different values, depending on the key that you use. Here are a few sample ciphertexts: Lc4WKHP/uCls8mFcyTw1pQ== xyOoLnWOH0eqRwUu3rQHJw== hevNJLNowIzrocxplKI7dQ== The source code for this example is longer than the last one, but it’s also a more capable program: import java.io.*; import java.security.*; import javax.crypto.*; import sun.misc.*; public class SecretWriting { public static void main(String[] args) throws Exception { // Check arguments. if (args.length < 2) { System.out.println("Usage: SecretWriting -e|-d text"); return; } // Get or create key. Key key; try { ObjectInputStream in = new ObjectInputStream( new FileInputStream("SecretKey.ser")); key = (Key)in.readObject(); in.close(); } catch (FileNotFoundException fnfe) { KeyGenerator generator = KeyGenerator.getInstance("DES"); generator.init(new SecureRandom()); key = generator.generateKey(); ObjectOutputStream out = new ObjectOutputStream( new FileOutputStream("SecretKey.ser")); out.writeObject(key); out.close(); } // Get a cipher object. Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding"); // Encrypt or decrypt the input string. if (args[0].indexOf("e") != -1) { cipher.init(Cipher.ENCRYPT_MODE, key); String amalgam = args[1]; for (int i = 2; i < args.length; i++) amalgam += " " + args[i]; byte[] stringBytes = amalgam.getBytes("UTF8"); byte[] raw = cipher.doFinal(stringBytes); BASE64Encoder encoder = new BASE64Encoder(); String base64 = encoder.encode(raw); System.out.println(base64); } else if (args[0].indexOf("d") != -1) { cipher.init(Cipher.DECRYPT_MODE, key); BASE64Decoder decoder = new BASE64Decoder(); byte[] raw = decoder.decodeBuffer(args[1]); byte[] stringBytes = cipher.doFinal(raw); String result = new String(stringBytes, "UTF8"); System.out.println(result); } } } SecretWriting has to generate a key the first time you use it. This can take a few seconds, so be prepared to wait. In the meantime, let’s look at the steps in this program: Check command-line arguments. We expect an option, either -eor -d, and a string. Next we need a key to use the cipher. We first attempt to deserialize the key from a file named SecretKey.ser. If this fails, we need to create a new key. A KeyGeneratorobject creates keys. We obtain a KeyGeneratorby using a factory method, in just the same way that we obtained a MessageDigestin the Masherexample. In this case, we ask for a key for the DES (Data Encryption Standard) cipher algorithm: KeyGenerator generator = KeyGenerator.getInstance("DES"); The key generator must be initialized with a random number to produce a random new key. It takes a few seconds to initialize the SecureRandom, so be patient. generator.init(new SecureRandom()); This done, we are set to generate a key. We serialize the key to the SecretKey.ser file so that we can use the same key the next time we run the program. Having obtained our key, we obtain a cipher in much the same way: Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding"); This specifies the DES algorithm and some other parameters the Cipherneeds. We’ll talk about these in detail in Chapter 7. Finally, we encrypt or decrypt the input data. The Cipheris created in an uninitialized state; it must be initialized, with a key, to either encryption mode or decryption mode. This is accomplished by calling init(). When encrypting, we take all of the command-line arguments after the -eoption and concatenate them into one string, amalgam. Then we get a byte array from this string and encrypt it in the call to Cipher’s doFinal()method: byte[] stringBytes = amalgam.getBytes("UTF8"); byte[] raw = cipher.doFinal(stringBytes); Finally, as in the Masherexample, we convert the raw encrypted bytes to base64 and display them. Decrypting is the same process in reverse. We convert the command-line argument from base64 to an array of bytes. We then use our Cipherobject to decrypt this: byte[] stringBytes = cipher.doFinal(raw); We create a new Stringfrom the resulting byte array and display it. Note that we specify an encoding for converting between a Stringand a byte array. If we just used the default encoding (by calling getBytes()with no argument), then the ciphertext produced by this program might not be portable from one machine to another. We use UTF8 as a standard encoding because it can express all Unicode characters. For more information on UTF8, see. You don’t really have to understand how UTF8 works; just think of it as a standard way to convert from a string to a byte array and back. This is only a demonstration program. Note that its key management is not secure. SecretWriting silently writes the secret key to a disk file. A secret key must be kept secret—writing it to a file without notifying the user is not wise. In a multiuser system, other users might be able to copy the key file, enabling them to decode your secret messages. A better approach would be to prompt the user for a safe place to put the key, either in a protected directory, in some sort of protected database, on a floppy disk, or on a smart card, perhaps. Another approach is to encrypt the key itself before writing it to disk. A good way to do this is using password-based encryption, which is covered in Chapter 7. Although SecretWriting doesn’t do a whole lot, you can see how it could be expanded to implement a cryptographically enabled email application. I’ll develop such an application in Chapter 11. Get Java Cryptography now with O’Reilly online learning. O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers.
https://www.oreilly.com/library/view/java-cryptography/1565924029/ch01s05.html
CC-MAIN-2020-45
refinedweb
1,780
59.3
Subject: [Boost-users] Code stopped compile after migrating 1.42->1.45 From: Pavel Pervov (pavel.pervov_at_[hidden]) Date: 2010-12-07 09:40:03 I'm compiling my codebase with Microsoft Visual C++ 8/9 (Visual Studio 2005/2008) Here is an example code which I've crafted from what I have in my codebase: //------- code start -------------------------------- #include <algorithm> #include <map> #include <string> #include <iostream> #include <boost/lambda/lambda.hpp> #include <boost/lambda/bind.hpp> typedef std::map<int, int> MyMap; std::ostream& operator << (std::ostream& outs, const MyMap& a) { std::for_each(a.begin(), a.end(), outs << boost::lambda::bind(&A::value_type::second, boost::lambda::_1) << "\n"); return outs; } int main() { MyMap a; std::cout << a; } //------- code end --------------------------------- This code stopped compiling after I've migrated from boost version 1.42 to version 1.45. Any ideas would be appreciated. Regards, Pa. Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net
http://lists.boost.org/boost-users/2010/12/64763.php
CC-MAIN-2013-20
refinedweb
170
53.37
Files and strings Please is another task. Write a program that asks for the user's first and last name and saves these into a file named by the user. The program must start by asking for the first name. The last name is entered next, followed by the desired file name. The first part of the file name may have a maximum of 8 characters and the second part may have 3 (for example: personal.usr). The file must reside in the same directory as the program. The last name can have a maximum of 20 characters, the first name 15. Hint: In the chapter dealing with file processing, files were opened using a string array. Read the chapter and you should be able to perceive how to implement the program. My code #include <stdio.h> int main(){ char firstname[10], lastname[5]; char fname[]="filename.txt"; /*There is a problem here*/ char *opening_mode= "w"; printf("The program saves your first and last name into a file."); printf("\nEnter your first name:"); scanf("%s", &firstname[0]); printf("Enter your last name:"); scanf("%s",&lastname[0]); printf("File where you want to save your name:"); scanf("%s",&fname[0]); FILE *fptr; if((fptr = fopen("filename.txt",opening_mode)) == NULL) { printf("Failed to open file (flename.txt)."); exit(1); } else{ fprintf(fptr,"%s %s",&firstname[0],&lastname[0]); printf("\nSuccessfully saved the data!"); fclose(fptr); } return 0; } Output should be something like this: # ./a.out The program saves your first and last name into a file. Enter your first name: John Enter your last name: Doe File where you want to save your name: filename.txt Successfully saved the data!# # ./a.out The program saves your first and last name into a file. Enter your first name: David Enter your last name: Smith File where you want to save your name: file.dat Successfully saved the data!# My code output # ./a.out The program saves your first and last name into a file. Enter your first name: John Enter your last name: Doe File where you want to save your name: filename.txt Successfully saved the data!# so is if file name say 'file.dat', is entered, my code doesn't
https://www.studypool.com/questions/12532/files-and-strings
CC-MAIN-2017-09
refinedweb
369
76.52
Looking at the data you see that each incident has latitude and longitude coordinates. {'INCIDENT_KEY': '184659172', 'OCCUR_DATE': '06/30/2018 12:00:00 AM', 'OCCUR_TIME': '23:41:00', 'BORO': 'BROOKLYN', 'PRECINCT': '75', 'JURISDICTION_CODE': '0', 'LOCATION_DESC': 'PVT HOUSE ', 'STATISTICAL_MURDER_FLAG': 'false', 'PERP_AGE_GROUP': '', 'PERP_SEX': '', 'PERP_RACE': '', 'VIC_AGE_GROUP': '25-44', 'VIC_SEX': 'M', 'VIC_RACE': 'BLACK', 'X_COORD_CD': '1020263', 'Y_COORD_CD': '184219', 'Latitude': '40.672250312', 'Longitude': '-73.870176252'} That means we can plot on a map. Let us try to do that. We want to plot all the shooting incidents on a map. You can use OpenStreetMap to get an image of a map. We want a map of New York, which you can find by locating it on OpenStreetMap or pressing the link. You should press the blue Download in the low right corner of the picture. Also, remember to get the coordinates of the image in the left side bar, we will need them for the plot. map_box = [-74.4461, -73.5123, 40.4166, 41.0359] Importing data from a CVS file is easy and can be done through the standard library csv. Making plot on a graph can be done in matplotlib. If you do not have it installed already, you can do that by typing the following in a command line (or see here). pip install matplotlib First you need to transform the CVS data of the longitude and latitude to floats. import csv # The name of the input file might need to be adjusted, or the location needs to be added if it is not located in the same folder as this file. csv_file = open('nypd-shooting-incident-data-year-to-date-1.csv') csv_reader = csv.DictReader(csv_file) longitude = [] latitude = [] for row in csv_reader: longitude.append(float(row['Longitude'])) latitude.append(float(row['Latitude'])) Now you have two lists (longitude and latitude), which contains the coordinates to plot. Then for the actual plotting into the image. import matplotlib.pyplot as plt # The boundaries of the image map map_box = [-74.4461, -73.5123, 40.4166, 41.0359] # The name of the image of the New York map might be different. map_img = plt.imread('map.png') fig, ax = plt.subplots() ax.scatter(longitude, latitude) ax.set_ylim(map_box[2], map_box[3]) ax.set_xlim(map_box[0], map_box[1]) ax.imshow(map_img, extent=map_box, alpha=0.9) plt.savefig("mad_mod.png") plt.show() This will result in the following beautiful map of New York, which highlights where the shooting in the last year has occurred. Now that is awesome. If you want to learn more, this and more is covered in my online course. Check it out. You can also read about how to plot the mood of tweets on a leaflet map.
https://www.learnpythonwithrune.org/3-steps-to-plot-shooting-incident-in-ny-on-a-map-using-python/
CC-MAIN-2021-25
refinedweb
444
67.35
.NET BlogThe .NET blog discusses new features in the .NET Framework and important issues for .NET developers. Evolution Platform Developer Build (Build: 5.6.50428.7875)2015-03-18T12:53:00ZAnnouncing .NET Core and ASP.NET 5 RC<p>Today, we are announcing .NET Core and ASP.NET 5 Release Candidate, supported on Windows, OS X and Linux. This release is "Go Live", meaning you can deploy apps into production and call Microsoft Support if you need help. Please check out the <a href="">Announcing ASP.NET 5 RC blog post</a> to learn more about the updates to ASP.NET 5.</p> <p>The best way to get the RC is to go to the <a href=""></a> site. It has everything you need: downloads, instructions and samples. If you already have one of the betas installed, you can upgrade your environment to RC from the command-line.</p> <p>We also have exciting news to share about what's coming next for .NET Core, specifically new commandline tools and .NET Native. Check out the updated <a href="">.NET Core Roadmap</a> for more details.</p> <p>The team will be hanging out in the <a href="">.NET Foundation Forums</a> to answer your questions about the RC or anything else about .NET Core and ASP.NET 5. We're here to help!</p> <h1>.NET Core and ASP.NET 5 RC</h1> <p>.</p> <p>For RC, we've added many features and now have largely feature complete Linux and OS X implementations in place. You can read <a href="">Announcing ASP.NET 5 RC</a> to learn about ASP.NET 5 RC in detail. You can also check out the release notes to see a list of the product changes and a milestone of commits: <a href="">ASP.NET</a>, <a href="">.NET Core</a>. The following are some of the key features that we've added since Preview:</p> <p><strong>ASP.NET 5 Visual Studio Experience</strong></p> <ul> <li>Integrating Bootstrap snippets</li> <li>Updated Bower package UI</li> <li>MVC scaffolding is enable for ASP.NET 5 projects</li> </ul> <p><strong>ASP.NET 5 Runtime</strong></p> <ul> <li>Transparent DNX app hosting model</li> <li>Configurable webroot folder</li> <li>Strong-named framework assemblies</li> </ul> <p><strong>.NET Core Runtime and BCL</strong></p> <ul> <li>CoreFX open source progress.</li> <li><a href="">Removal of MaxPath restriction.</a></li> <li>RyuJIT now supported on Linux and OS X, including JIT and crossgen</li> <li><a href="">Support for LLDB and SOS on Linux</a></li> <li>Integration of exception handling with debugger and crash dumps</li> <li>GC/thread suspension for Linux and OSX</li> <li>Native eventing support via LTTNG for Linux</li> </ul> <p><strong>.NET Core Libraries</strong></p> <ul> <li><a href="">Cross platform SQL Client</a></li> <li><a href="">Azure Libraries</a></li> </ul> <h2>Feature Complete</h2> <p>We often say, "shipping is a feature". Getting to "feature complete" is a very important step towards shipping a final version.</p> <p>!</p> <p.</p> <h2>Explicit DNX app hosting model</h2> <p>DNX has included a host that provides a variety of services to your app. You opt into using this host when you launch your app with <code>dnx run</code>, <code>dnx web</code> or any other dnx launch commands. The way that the host was called and wired up into your app was a bit <em>magic</em>. Some scenarios require providing a different host or doing work before the host initialized your app. There are lots of reasons why you might want a bit more control.</p> <p <a href="">expression method bodies</a> syntax.</p> <script type="text/javascript" src=""></script> <p>You can also call the host the <em>old fashioned way</em>. This approach is important if you want to do work before calling the host. You shouldn't do work after the host.</p> <script type="text/javascript" src=""></script> <h2>RyuJIT now supported on Linux and OS X</h2> <p>RyuJIT can generate code for Windows, Linux and OS X. Even though RyuJIT can generate machine code for X64 on Windows, it needed to be taught how to generate code for the same chip on Linux and OS X. The <a href="">calling convention</a> and other aspects are different on different OSes. We've made <a href="">changes</a> to accomodate that.</p> <a href="">PE</a> images on all OSes, since that's the executable binary format CoreCLR knows how to load. It generates <a href="">PDB</a> images on Windows and <a href="">Textual Symbol Tables</a> on Linux and OS X in order to integrate with platform-specific debugging tools. CrossGen is currently <a href="">broken on Linux</a> and will be fixed.</p> <h2>Long file names (AKA "MaxPath")</h2> <p.</p> <p.</p> <p>For more information and a code sample, see this <a href="">gist</a>.</p> <h2>Cross Platform SQL Client</h2> <p.</p> <p>SqlClient is not yet feature complete, but it does work on Windows, OS X and Linux. To use SqlClient on OS X or Linux, Multiple Active Result Sets (MARS) must be disabled in the connection string (it is enabled by default). Set it to false by including <code>MultipleActiveResultSets=False;</code>, such as in the following example.</p> <p>Additionally, connectivity to SQL Server is only possible via TCP. </p> <p>Communication over Named Pipes, Shared Memory, or LocalDB is not supported yet.</p> <p>Example connection string:</p> <div class="highlight highlight-text-xml"> <pre>gist</a>.</p> <h1>Go Live Support</h1> <p>The .NET Core 5 and ASP.NET 5 Release Candidates is a "Go Live" release. That means if your ASP.NET 5 app passes your tests and you are happy with it, you can host your app in production.</p> <p>You can engage with the <a href="">.NET Core CLR</a>, <a href="">.NET Core FX</a> and <a href="">ASP.NET 5</a> teams directly on GitHub, or, if you encounter issues that are preventing you from deploying in a production environment, please contact <a href="">Microsoft Product Support</a>. Support is English only and U.S. business hours only (M-F 6a-6p PST), business day response.</p> <h1>CoreFX Open Source Progress</h1> <p>.</p> <p.</p> <p><a href="" target="_blank"><img style="max-width: 100%;" src="" alt="CoreFX-Progress" data-</a></p> <h1>Closing</h1> <p>We have released .NET Core and ASP.NET 5 RC. You can use these in production and call on Microsoft for support via the regular Microsoft Support channels. <a href="">Get ASP.NET</a>, read the <a href="">ASP.NET 5 RC announcement</a>, learn from <a href="">ASP.NET Docs</a> and play with some simple <a href="">ASP.NET samples</a>. Get started with RC!</p> <p>The team will be monitoring StackOverlow (<a href="">asp.net</a> and <a href="">.net</a> tags) and the <a href="">.NET Foundation Forums</a> for questions.</p><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT] Framework 7 RC1 Available<p>Today we are making Entity Framework 7 RC1 available. EF7 will be the next major release of Entity Framework and is currently in pre-release.</p> <p> </p> <h2>When to use EF7</h2> <p>As discussed in our <a href="">EF7 – v1 or v7 post</a>,.</p> <p>The situations where we would recommend using EF7 are:</p> <ul> <li>New applications that do not need the features that are not yet implemented in EF7.</li> <li>Applications that target .NET Core, such as Universal Windows Platform (UWP) and ASP.NET 5 applications.</li> </ul> <p>For all other applications, you should consider using EF6.x. EF6.x will continue to be a supported release for some time – see the end of this post for more details on upcoming releases.</p> <p”.</p> <p> </p> <h2>Getting started with EF7</h2> <p>You can find the documentation for EF7 at <a href="">docs.efproject.net</a>. In particular, you probably want to start with a tutorial for the type of application you want to build.</p> <ul> <li><strong><a href="">Full .NET (Console, WPF, WinForms, and ASP.NET 4)</a></strong></li> <li><strong><a href="">Universal Windows Platform (UWP)</a></strong></li> <li><strong><a href="">ASP.NET 5</a></strong></li> <li><strong><a href="">OSX</a></strong></li> <li><strong><a href="">Linux</a></strong></li> </ul> <h3>Supported databases</h3> <p>The following database providers are available on NuGet.org and support RC1. See our <a href="">providers page</a> for more information and links to getting started.</p> <ul> <li>EntityFramework.MicrosoftSqlServer</li> <li>EntityFramework.SQLite</li> <li>EntityFramework.InMemory</li> <li>EntityFramework.SqlServerCompact40</li> <li>EntityFramework.SqlServerCompact35</li> <li>EntityFramework.Npgsql</li> </ul> <p><strong>We’d like to thank </strong><a href=""><strong>Shay Rojansky</strong></a><strong> and </strong><a href=""><strong>Erik Ejlskov Jensen</strong></a><strong> for their collaboration to provide the Npgsql and SQL Compact providers and drive improvements in the core EF7 code base.</strong></p> <h3>Glimpse support</h3> <p>Glimpse also announced <a href="">Glimpse 2 Beta 1 today</a>, which supports EF7 and ASP.NET 5 RC1.</p> <p> </p> <h2>What’s implemented in RC1</h2> <p>The following features have been added since our last release (Beta 8).</p> <ul> <li>Cascade delete support</li> <li>Table-Per-Hierarchy inheritance pattern</li> <li>.NET Native support (allows deployment of UWP applications that use EF7)</li> <li><a href="">Improved documentation</a></li> </ul> <p>The following features were implemented in previous releases and continue to be available (most of them with improvements).</p> <ul> <li>Basic modeling including built-in conventions, table/column mapping, and relationships <ul> <li>Fluent API (a.k.a ModelBuilder/OnConfiguring API) for configuring your model</li> <li>Data Annotations for configuring your model</li> </ul> < queries (via DbSet.FromSql)</li> <li>Logging</li> <li>Alternate keys including the ability to use them as keys in a relationship</li> <li>Reverse engineering a model from an existing database</li> </ul> <p> </p> <h2>What’s coming in RTM</h2> <p>As we head towards our initial RTM of EF7 we will be focusing on cross-cutting quality concerns. There are no additional features planned for the RTM release.</p> <ul> <li>Bug fixing</li> <li>Performance tuning</li> <li>Documentation</li> </ul> <p> </p> <h2>Quality</h2> <p>As you can <a href="">see from our issue tracker</a> there are still a number of issues we are working to resolve prior to the RTM release. Please continue to report any new issues that you hit so that we can address them.</p> <p>One area of RC1 that has a number of outstanding issues is our query pipeline..</p> <p> </p> <h2>Performance</h2> <p>We have a <a href="">set of benchmarks</a>.</p> <p>As an example, here are the results from our <a href="">SimpleQueryTests.Include test</a>.</p> <p><a href=""><img style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-width: 0px;" title="PerfExample" src="" alt="PerfExample" width="680" height="599" border="0" /></a></p> <p>*Note we are showing the “change tracking on” variation in this example because the “change tracking off” variation was already faster than EF6 in RC1.</p> <p> </p> <h2>An update on EF6.x</h2> <p>Given that we have said EF6.x will continue to be a supported release, and that we will continue with bug fixes and small improvements to the code base, you may be asking why there hasn’t been much activity on the <a href="">EF6.x CodePlex project</a> for the last 6 months.</p> <p.</p><div style="clear:both;"></div><img src="" width="1" height="1">Rowan Miller .NET Framework 4.6.1 RC<p>We are pleased to announce the release of <a href="">.NET Framework 4.6.1 RC</a>:</p> <ul> <li><span style="text-indent: -0.25in;" lang="EN">WPF improvements for spell check, support for per-user custom dictionaries and improved touch performance </span></li> <li><span style="text-indent: -0.25in;" lang="EN"><span lang="EN">Enhanced support for Elliptic Curve Digital Signature Algorithm (ECDSA) X509 certificates</span></span></li> <li><span lang="EN">Added support in SQL Connectivity for <a href="">AlwaysOn</a> and <a href="">Always Encrypted</a></span></li> <li>Profiling improvements related to <a href="">IcorProfilerInfo</a> interface and introduction of Ngen PDBs</li> <li><span style="text-indent: -0.25in;" lang="EN"><span lang="EN">System.Transactions APIs now support </span>distributed transactions with a non-MSDTC coordinator</span></li> <li><span style="text-indent: -0.25in;" lang="EN">Many other performance, stability, and reliability related fixes in RyuJIT, GC, WPF and WCF.</span></li> </ul> <p>You can download and try out the release now:</p> <ul> <li><a href="">.NET Framework 4.6.1 RC</a> (<a href="">Web Installer</a>; <a href="">Offline Installer</a>)</li> <li><a href="">.NET Framework 4.6.1 RC Targeting Pack</a></li> </ul> <p> </p> <h2>.NET Framework 4.6.1 RC</h2> <p>You can learn more about this release by looking at <a href="">.NET Framework 4.6.1 RC release change list</a>, <a href="">Application Compatibility in the .NET Framework 4.6.1 RC</a>, and <a href="">.NET Framework API diff (GitHub) </a>between the .NET 4.6.1 and .NET 4.6 releases. The .NET Framework 4.6.1 can be installed on Windows 10, Windows 8.1, Windows 8, Windows 7 and the corresponding server platforms. You can install the .NET Framework 4.6.1 RC using either the <a href="">web installer</a> or the <a href="">offline installer</a>. </p> <p>You can target the .NET Framework 4.6.1 in Visual Studio 2012 or later by installing the <a href="">.NET Framework 4.6.1 RC Targeting Pack</a>.</p> <p> </p> <h2>Windows Presentation Foundation</h2> <p>The WPF team has made a number of key improvements in this release. For more details on these improvements and more, check out the blog post by the WPF Team <a href="">here</a>.</p> <h3>Improved Performance<strong></strong></h3> <p.</p> <h3>Samples</h3> <p>There are a number of WPF Samples on <a href="">MSDN</a>. We are moving over 200+ of these popular samples (based on usage) into an Open Source GitHub <a href="">repository</a>. Help us improve our samples by sending us a pull-request or opening a GitHub <a href="">issue</a>.</p> <h3>DirectX Extensions</h3> <p>We have released a <a href="">NuGet package</a> that provides new implementations of D3DImage that will make it easy for you to<br />interoperate with DX10 and Dx11 content. Support for DX12 will be added in a future release. The code for this package has been open sourced and is available <a href="">here</a>.</p> <h3>Spell Checking Improvements</h3> <p>The spell checker in WPF has been updated on Windows 8.1 and above to leverage OS support for spell checking additional languages. There is no change in functionality on Vista SP2, Windows 7 and Windows 8.</p> <h3>Additional support for per-user custom dictionaries</h3> <p.</p> <p> </p> <h2>Cryptography Updates</h2> <h3>Support for X509certificates containing ECDSA</h3> <p>We made some major improvements to the <a href="">System.Security.Cryptography APIs< actually calls into the existing Windows functionality.</p> <p>Here is an example of how the new approach can be used:</p> <script type="text/javascript" src=""></script> <p> </p> <h2>SQL Connectivity</h2> <h3>Always Encrypted: Support for Hardware protected keys</h3> <p.</p> <h3>Improve MultisubnetFailover connection behavior for AlwaysOn</h3> <p>The <a href="">SqlClient</a> now automatically provides faster connection to <a href="">AlwaysOn Availability Group</a> release, an application does NOT need to set MultisubnetFailover to true anymore. For more information about SqlClient support for AlwaysOn Availability Groups, see <a href="">SqlClient Support for High Availability, Disaster Recovery</a>. </p> <p> </p> <h2>System.Transactions</h2> <p>Users of the <a href="">Transaction.EnlistPromotableSinglePhase</a>.</p> <p>Once a non-MSDTC transaction promoter is enlisted, the following methods will throw a TransactionPromotionException because these methods would require promotion to MSDTC by System.Transactions:</p> <p> Transaction.EnlistDurable</p> <p> TransactionInterop.GetDtcTransaction</p> <p> TransactionInterop.GetExportCookie</p> <p> TransactionInterop.GetTransmitterPropagationToken</p> <p> Serialization of a Transaction object</p> <p.</p> <p>Users of this new variation of the Transaction.EnlistPromotableSinglePhase method must follow a specific call sequence in order for the promotion operation to complete successfully. These rules are documented in the MSDN description of the new method.</p> <p>You can view a code sample <a href="">here</a>.</p> <p> </p> <h2>Profiling</h2> <h3>Better support for accessing PDBs in the <a href="">IcorProfilerInfo</a> Interface</h3> <p.</p> <script type="text/javascript" src=""></script> <h3>Better instrumentation with ICorProfiler</h3> <p.</p> <p.</p> <p> </p> <h2>Native Image Generator (NGEN) PDBs</h2> <p>Cross-machine event tracing allows customers to profile a program on Machine A and look at the profiling data with source line mapping on Machine B. With Ngen PDBs, Ngen can now create a PDB which contains the IL to native mapping without a dependency on the IL PDB. In cross-machine event tracing scenario, all that is needed is to copy the native image PDB, which is generated by Machine A, to Machine B, use <a href="">DIA APIs<.</p> <p.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p> </p> <h2>Summary</h2> <p>With .NET Framework 4.6.1 we are moving ahead with our investments in several areas while continuing to deliver fixes that will substantially increase the reliability, stability, and performance of the product. Please try out the new release and reach out to us if you have any feedback.</p> <p>A major driver for the work we’re doing in .NET 4.6.1 has been the feedback you have sent us for .NET 4.6. We have been actively listening to your feedback, so keep it coming, we greatly appreciate it.<span style="font-stretch: normal; font-size: 7pt; font-family: 'Times New Roman';"> </span></p><div style="clear:both;"></div><img src="" width="1" height="1">The .NET Fundamentals Team's new for .NET and UWP in Win10 Tools 1.1<blockquote> <p><em>This post was written by <strong>Lucian Wischik</strong>, Program Manager on the <strong>Managed Languages </strong>team.</em></p> </blockquote> <p>Last week we updated the <a href="">Visual Studio tools for Universal Windows Apps</a>. The easiest way to get the update is within Visual Studio, under <em>Tools > Extensions > Updates</em>. (Also read the <a href="">release notes</a>).</p> <p>As part of this update, we're including a new opt-in pre-release feature that will shrink the size of your app:</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /><br /><br /></p> <p>In this article I'll tell you the what and why and how of this new feature. Then I'll tell you how it fits into the wider context -- about how improvements in .NET make their way into your UWP apps.</p> <h1>UWP apps have included .NET app-locally</h1> <p>Until now, when you build a UWP app, it has included all the .NET APIs that it actually uses, app-locally.</p> <p><em>"... But doesn't this get a bit big?"</em></p> <p>Here's a concrete example. I wrote a <a href="">Game of Life</a> simulator that runs on Windows 10 devices, both desktop and mobile, and takes advantage of the <a href="">Win2d library</a> to run the simulation fast on each device's graphics accelerator. My source code is <a href="">here on github</a>. This is what the simulator looks like:</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p>And this is what I get when I build my app in Release mode. I've shown the "before" and "after" of the new feature:</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p><em>What we're delivering with Win10 Tools 1.1 is the ability to shave off that extra grey segment that comes from the .NET libraries.</em></p> <h1>How the feature works</h1> <p>The ability to reduce app-size is still in pre-release. We hope to make it the default experience soon, but for now in the Win10 Tools 1.1 release the feature is opt-in...</p> <ul> <li>This is the first public release of the feature, and we want to give it some time and testing before switching everyone over.</li> <li><em>For most apps, the feature delivers faster build times for Release builds.</em> We observe build times are generally faster, up to about 30% -- but there are a few apps actually take longer to build. We are working on making the flag deliver a more consistent improvement to build times.</li> <li>When you upgrade to VS2015 Update 1, if your project is using this flag, then you might need to upgrade your project in some way in order to open it. We haven't closed on this.</li> <li>Any apps you submit to the store right now, using this flag, will continue to work.</li> </ul> <p><strong>How the feature works.</strong> When the feature is enabled, .NET Native release builds <em>no longer</em> include the bulk of .NET locally within the app. Instead .NET is delivered through a <em>"Shared AppX Framework Package"</em>. That means:</p> <ol> <li>When customers download your app from the Store, this download doesn't contain the bulk of .NET</li> <li>Instead, the right version of .NET gets automatically downloaded on-demand from the Store, and is shared between all apps that use it.</li> </ol> <p>You can see how it's accomplished inside your app's <code>bin\Release\x86\ilc\AppxManifest.xml</code> file. It has a line like this, which is what the Store app obeys when installing your app:</p> <p style="padding-left: 30px;"><span style="font-family: 'courier new', courier;"><PackageDependency Name="Microsoft.NET.Native.Framework.1.2" </span></p> <p style="padding-left: 30px;"><span style="font-family: 'courier new', courier;"> </span></p> <h1>Instructions to opt-in to using the SharedFramework</h1> <p>As mentioned, we plan make this feature the default so it will happen automatically. Until then, you must opt-in by a configuration flag as follows. Within Visual Studio, right-click on your UWP project and <em>unload project</em>. Right-click once again and <em>Edit the .vbproj/.csproj</em>. Within this proj file, look for all three occurrences of <code><UseDotNetNativeToolchain></code> and add a new directive under them as follows. By default, the three places that use the .NET Native toolchain are Release|x86, Release|x64 and Release|Arm.</p> <script type="text/javascript" src=""></script> <p>Next, right-click on your project once again, <em>Reload</em>, and rebuild.</p> <p>Tip: you can also avoid unloading+reloading by editing the proj file outside VS, for instance in Notepad. VS will automatically detect when it needs to reload the project.</p> <p><strong>Guidance.</strong> Here are our recommendations for how to use this flag for now:</p> <ul> <li><strong>Try it at least once:</strong> try turning on the flag at least once -- see whether it improves your Release mode buildtimes, and whether you encounter any issues.</li> <li><strong>Develop how you wish to submit:</strong> in the daily rhythm of development and testing, use the flag in the same way as you intend to submit.</li> <li><strong>For store submission:</strong> it's your call whether the benefits of smaller app-size and potentially faster build-times outweigh the risks of using a pre-release feature for you.</li> <li><strong>For sample code:</strong> leave flag off, so that in the future you and others have an easier time building your sample.</li> <li><strong>If you're using out-of-band updates to .NET:</strong> leave flag off. Read the next section for an explanation.</li> </ul> <h1>How to take advantage of fixes in .NET</h1> <p>There are a lot of improvements being made to .NET on a regular basis. How do you as a developer take advantage of these improvements?</p> <p>Let's motivate this with a specific example. Suppose you're writing a UWP app using VS2015 RTM and you write this code:</p> <script type="text/javascript" src=""></script> <p>Don't worry if this code is unfamiliar -- it's a small corner of the .NET framework using <a href="">expression trees</a> that's not commonly used. But for those people who do use it, they need it, and were blocked when they discovered a .NET bug which prevents the call to <em>Expression.Update</em> from working.</p> <p>In the past if you found an issue in .NET, you were basically stuck -- you'd have to wait for the next public release of .NET, and even then you couldn't depend on all your users installing that update onto their machines.</p> <p>But now with UWP things are different. This particular Expression.Update issue was identified in the public open-source version of .NET on github, and was fixed with github changeset <a href="">#2361</a>, and is available in a new publically-available beta of the NuGet package <a href="">System.Linq.Expressions</a>. To use this beta in your UWP app, do <em>Manage Nuget References</em>, turn on the <em>Include prerelease</em> checkbox, and pick up the latest prerelease version of the package. Most people won't need any of this. But those who do, who were blocked by this particular API, they now have a way forward.</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p>What's important with UWP is that <em>you don't have to depend on your users installing .NET updates onto their machines</em>. Your app will always run on end-user machines against the exact same version of .NET you chose to develop with.</p> <a href="">corefx github</a>.)</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <h1>You can't combine out-of-band .NET fixes with Shared Framework</h1> <p>You can't combine out-of-band updates to .NET with the Shared Framework flag. This is what you'll see if you try it with a pre-release version of System.Linq.Expressions:</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p style="padding-left: 30px;"><span style="font-family: 'courier new', courier;"><em>"ILC1308: SharedAssembly is not applicable to you project configuration. Project dependencies don't match the assemblies that the shared assembly is built against. You may need to adjust your project dependencies to ensure they match the versions that the shared assembly is built against."</em></span></p> <h1>Conclusions</h1> <p>.NET Native is an important technology for building UWP apps. In this release we've improved it significantly.</p> <p>We are eager for feedback on the new Shared Framework feature. Please turn the flag on during development. Many of you will observe faster build-times. What we're hoping to get are reports from you of any problems you encounter -- either by email to <a href="mailto:[email protected]">[email protected]</a> or via the <em>send-a-smile/frown</em> icon at the top of the Visual Studio 2015 titlebar (below). Thank you!</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT] Summer Internship on the .NET Team<p style="text-align: center;"><span>This post was written by Microsoft Explorer interns <strong>Daniel King</strong>, </span><strong>Zoë Petard</strong>, and <strong>Jessica Petty </strong>and highlights their project & experiences this summer working on the .NET Roslyn team. We loved having them this summer!</p> <h1>Roslyn Diagnostic Analyzer Tutorial</h1> <p><em>Tired of frustrating compile-time errors? Analyzers can examine code as you type and present live diagnostics.</em></p> <p><em>Wish you could standardize code across your organization? Analyzers can enforce coding protocols during code production instead of at code check-in.</em></p> <p><em>This blog post will explore how analyzers work and walk through the analyzer tutorial that we wrote.</em></p> <h2>Our project</h2> <p>This summer we had the amazing opportunity to be part of <a href="">Microsoft’s Explorer Program</a>--a 12-week rotational internship for students who want to explore their budding interest in computer science. As Explorer interns on the <a href="">.NET Roslyn</a> team, our summer project was to create a tutorial to help non-compiler-savvy developers understand and write simple live-code diagnostics (aka “analyzers”).</p> <p>The objectives for our project at Microsoft were two-fold:</p> <ul> <li>Create an easy entry point for someone to learn about analyzers and their potential</li> <li>Explore the possibility of using analyzers as a teaching tool</li> </ul> <p>We achieved these goals by writing a meta-analyzer that served as a tutorial for writing an analyzer. In other words, we created a step-by-step tutorial using the diagnostic squiggles and light bulb code fixes in Visual Studio.</p> <h2>What are analyzers?</h2> <p><a href="">Analyzers</a> leverage the power of the open source C# and VB compiler project, called <a href="">Roslyn</a>..</p> <p</p> <h2>Walkthrough</h2> <h3>Instructions and Overview</h3> <p>We've put together a <a href="#screencast">screencast walkthrough</a> <a href="">syntax visualizer</a>, of an if-statement can be seen here:</p> <p><img src="" alt="" /></p> <p>Instructions on how to access the tutorial and run it on your machine to follow along with the screencast can be found on the <a href="">GitHub</a> page.</p> <h3><a name="screencast"></a>Screencast</h3> <p><iframe src="" frameborder="0" width="640" height="360"></iframe></p> <p.</p> <p <a href="">GitHub</a> page for analyzers.</p> <h2>Internship Reflection</h2> <p.</p> <p.</p> <p><img src="" alt="" /></p> <p><em>Daniel King is going into his second year at Harvey Mudd College in Claremont, California. A Seattle-native, this is his second internship at Microsoft and his first as an Explorer Intern. </em></p> <p><em>Zoë Petard is starting her fourth year at McGill University in Montreal, Qu</em><em>ébec. She loves climbing, canoeing, and all things outdoors.</em></p> <p><em>Jessica Petty is going into her third year at the University of Colorado in Boulder, Colorado. She enjoys hiking, long walks on the beach, and beluga whales. </em></p><div style="clear:both;"></div><img src="" width="1" height="1">The .NET Team is going cross-platform with .NET Core!<p>We have some exciting new developments to share – an update on our open source development, our ongoing cross-platform work, and more.</p> <p>Going forward, we will post everything around MSBuild and build tools in general here on the .NET blog. You can still check out the many interesting tips and tricks that we earlier posted on the <a href="">MSBuild Team blog</a>.</p> <h2>MSBuild is now open source, and it is going cross-platform.</h2> <p>The .NET Compiler Platform (“Roslyn”) has been <a href="">open source</a> for a while now, <a href="">Microsoft is taking .NET open source and cross-platform</a>, and the <a href="">ASP.NET 5 Runtime is open source and cross-platform</a> as well. And as of March, <a href="">MSBuild is open source</a> on <a href="">GitHub</a> and part of the <a href="">.NET Foundation</a>, too. The sources on GitHub are closely aligned with the version that ships with Visual Studio. You may notice a few differences, but we’ll prune those discrepancies over time.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border-width: 0px;" title="image" src="" alt="image" width="604" height="310" border="0" /></a></p> <p!</p> <p>Immediately after open-sourcing, we started work on a version of MSBuild that runs on Linux and Mac using the Mono software platform. This let us find and fix many Windows-specific parts of our code. Moving forward we’ve decided to switch gears and bet on <a href="">.NET Core</a> as our runtime, the open source, cross-platform version of .NET. This work is happening in the <a href="">xplat branch</a> on GitHub. We are tracking all the <a href="">issues</a> that we have to tackle on GitHub.</p> <h2>What’s next?</h2> <p>Porting MSBuild to .NET Core will take a while. As we do all development in the open on GitHub, you are welcome to join in and pick up an issue that is <a href="">up-for-grabs</a>. At the end, we want to have a single open source code base that works cross-platform, and from which we also build the MSBuild package that ships with Visual Studio.</p> <p:</p> <ul> <li>drastic simplification of how you author and maintain build specs,</li> <li>first-class support for packages,</li> <li>hugely improved performance by leveraging the cloud,</li> <li>unprecedented reliability through better knowledge of dependencies.</li> </ul> <p>Stay tuned to learn more about these exciting new developments, and other tips and tricks around MSBuild.</p> <h2>Meet the team behind MSBuild.</h2> <p>MSBuild has a long history: It started out in 2005 as a part of the .NET Framework itself, and became an integral component of Visual Studio over time. Most recently, Microsoft’s build engine found a new home in the <a href="">Tools for Software Engineers</a> (TSE) team in Microsoft’s Cloud and Enterprise group.</p> <p><a title="Tools for Software Engineers" href=""><img style="background-image: none; float: right; padding-top: 0px; padding-left: 0px; margin: 0px 0px 0px 8px; display: inline; padding-right: 0px; border: 0px;" title="tse-logo" src="" alt="tse-logo" width="133" height="133" align="right" border="0" /></a>TSE’s mission is <strong>Enabling Microsoft to accelerate software development</strong>,.</p> <h2>We are hiring!</h2> <p>Are you passionate about software engineering practices in general or MSBuild in particular? The <a href="">Tools for Software Engineers</a> team is hiring! Check out our <a href="">open positions</a>, or drop us an email at <a href="mailto:[email protected]">[email protected]</a>.</p><div style="clear:both;"></div><img src="" width="1" height="1">Nikolai Tillmann future of Unity<p style="padding-left: 30px;"><em>This post was written by <strong>Christopher Bennage</strong> (<a href="">@bennage</a>), a member of the <strong>Microsoft patterns & practices</strong> team.</em></p> <p>A few months ago, we announced that we were <a href="">handing Prism over to new owners</a>. We put a lot of time and effort into identifying owners that would invest in the project and support the community.</p> <p>Today, we are announcing a similar transition of ownership for <a href="">Unity</a>.</p> <p>The new owners for Unity are:</p> <p><strong>Pablo Cibraro</strong>. Pablo is an internationally recognized expert with over 15 years of experience in designing and implementing large distributed systems with Microsoft technologies. For the last 9 years Pablo has helped numerous Microsoft teams develop tools and frameworks for building service-oriented applications. Pablo now focuses on technologies that enable developers to build large scale systems and web applications with focus on mobile, such as HTML5, Node.js, ASP.NET and Microsoft Azure.</p> <p><strong>Pedro Wood</strong>. Pedro has been working as a software developer, architect and dev lead in a wide variety of business areas for more than 20 years. Pedro is a frequent contributor in open source projects and enjoys being part of the .NET community in Argentina.</p> <h2>Our motivation</h2> <p><em>Why are we transitioning to new owners?</em> That's a natural question to ask.</p> <p>The .NET community has a rich history of dependency injection containers, dating back before the introduction of Unity. Dependency injection containers for .NET have continued to mature and evolve significantly. In addition, open source components are now more accepted. The need for having an "official" container from Microsoft is no longer as widespread as it once was. We did spend a few months in 2014 thoughtfully experimenting for a "Unity 4". However, we began to recognize that the p&p team was not equipped to carry the project forward.</p> <p>At the same time, we believe that it would have been a poor choice to simply call the project "done". We wanted to support all those who had invested in the library. After consulting with internal teams and p&p alumni, we asked Pablo and Pedro to assume the mantle.</p> <h2>Going Forward</h2> <p>Be sure to read the <a href="">official announcement</a> from the new team and follow their work on the <a href="">new GitHub repo</a>. Let them know what you'd like to see in future releases of Unity and help them continue to grow the community.</p><div style="clear:both;"></div><img src="" width="1" height="1">Immo Landwerth [MSFT] 2015 .NET Security Updates<p>The .NET team released two security bulletins today as part of the monthly "Update Tuesday" cycle. </p> <p><a href="">Microsoft Security Bulletin MS15-080 - Critical</a><strong>, </strong>Vulnerability in .NET Framework Could Allow Remote Code Execution (<a href="">3078662<!-- CultureCode --><!-- CultureCode --></a>) </p> <p>This security update resolves vulnerabilities in Microsoft .NET Framework. The most severe of the vulnerabilities could allow remote code execution if a user opens a specially crafted document or visits an untrusted webpage that contains embedded TrueType or OpenType and 4.6 RC on affected releases of Microsoft Windows.</p> <p>More details about the versions affected by this vulnerability can be found in the security bulletin <a href="">MS15-080</a>.</p> <p><strong></strong> </p> <p><a href="">Microsoft Security Bulletin MS15-092 - Important</a><strong>, </strong>Vulnerability in .NET Framework Could Allow Information Disclosure (<a href="">3086251</a>) </p> <p>This security update resolves vulnerabilities in Microsoft .NET Framework. The most severe of the vulnerabilities could allow elevation of privilege if a user installs a specially crafted partial trust application. However, in all cases, an attacker would have no way to force users to run the application; an attacker would have to convince users to do so.</p> <p>This security update is rated Important for Microsoft .NET Framework 4.6 and 4.6 RC on affected releases of Microsoft Windows.</p> <p>More details about the versions affected by this vulnerability can be found in the security bulletin <a href="">MS15-092</a>. More context can also be found <a href="">here<[Guest post] Visual F# Power Tools: community-led tooling for F# in Visual Studio<p><em>This is a guest post by <a href="" target="_blank">Anh-Dung Phan</a> and <a href="" target="_blank">Vasily Kirichenko</a>, F# community developers and contributors to the superb Visual F# Power Tools extension for Visual Studio.  – Visual F# Team</em></p> <p>We are pleased to tell you about the <strong>Visual F# Power Tools</strong>, a Visual Studio extension aimed at providing extended tooling for F# in Visual Studio. You can <a href="" target="_blank">download it today</a> from the Visual Studio gallery.</p> <p>The goal of the extension is to complement the standard Visual Studio F# tooling by adding missing features such as semantic highlighting, rename refactoring, find all references, metadata-as-source, and more.</p> <p>What’s particularly special about <a href="" target="_blank">this project</a> is that it's a collective effort of the F# open source community. We work alongside the Visual F# Team at Microsoft in order to provide a complete toolset for F# users in Visual Studio.</p> <p>This project is about 1.5 years old now, and has received <a href="" target="_blank">overwhelming</a> <a href="" target="_blank">positive</a> <a href="" target="_blank">feedback</a> from the F# community. Today we are excited to share this effort, and the story behind it, with a broader audience.</p> <h2>How did it get started?</h2> <p>There have been some interesting conversations in the F# community, asserting that if F# had better IDE tools in Visual Studio, then it would be a lot easier to learn and adopt the language.</p> <p>In fact, F# easily has the best IDE support among statically-typed functional languages, through Visual Studio and Xamarin Studio. But the bar has been set extremely high by the C# tooling in Visual Studio (not to mention ReSharper), and as a result many developers expect this same level of support before even considering F#.</p> <p>While we believe that F# is a superb general-purpose language that can stand out on its own, there is certainly lots of room for improvement in the tooling area. After working on F# <a href="" target="_blank">ReSharper support</a> for <a href="" target="_blank">a few years</a> without success, we finally attempted to do something about the situation.</p> <p>At the end of January 2014, the Power Tools project got started.</p> <p>The initial goal was to collect existing F# IDE extensions into a single place for curating and maintaining. At various points in time, extensions for automatic source code formatting, XML doc generation, and depth colorizing had been used in the community, but none were actively maintained.</p> <p>Around the same time, the <a href="" target="_blank">FSharp.Compiler.Service</a> (FCS) project debuted as a shiny new compiler-as-a-service library for F#.  FCS is a Roslyn-like compiler API where all components of the compilation pipeline (e.g. lexer, parser, symbols, typed ASTs, assembly generator) are exposed for programmatic use.</p> <p>FCS gave us a wonderful opportunity to not only curate existing extensions, but also to add many advanced features.</p> <p>We took a chance on FCS, and many features were implemented quite naturally.  <a href="" target="_blank">This tweet</a> sums up our experience from the first three months:</p> <blockquote lang="en" class="twitter-tweet"> <p lang="en" dir="ltr">3 months, 15 contributors, 700 commits and 5.6k downloads. It has been an amazing journey <a href="">#fsharp</a> <a href="">#visualstudio</a>.</p> — Visual F# PowerTools (@FSPowerTools) <a href="">May 6, 2014</a></blockquote> <script async</script> <h2>Where are we?</h2> <p>Now over a year has passed, and we would like to take stock of where we are, and reflect on some of our most popular features.</p> <h4>Getting started</h4> <p>Version 2.0.0 of the extension was just released, and supports Visual Studio 2015 and 2013. In order to install the extension, search for "F# Power Tools" in "Tools -> Extensions and Updates -> Online". You can also download it directly from the <a href="" target="_blank">Visual Studio Gallery</a>. <br /> <br />The Power Tools offer a variety of features, each of which may or may not suit a particular developer’s preferences, so we provide a dedicated option set to tweak things for your own tastes.</p> <p><a href=""><img title="generaloptions" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="generaloptions" src="" width="758" height="441" /></a></p> <h4 id="semantic-highlighting">Semantic highlighting</h4> <p>Similar to what the C# editor does, we colorize F# code based on its semantic structures.</p> <p>Besides highlighting F# reference types and value types, we also provide color categories for modules, functions, quotations, unused values, and more.</p> <p>A unique category that has been adopted enthusiastically is one covering all mutable values:</p> <p><a href=""><img title="semantic_highlighting" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; border-top-width: 0px; margin-right: auto" border="0" alt="semantic_highlighting" src="" width="331" height="189" /></a></p> <p>Right there, the F# community shows its love for immutability, opting in to bright red alerts for any use of mutable state! We attribute the popularity of this little feature to all the pains we've been through trying to manage the complexity of mutable state in our pre-F# days.</p> <h4 id="find-all-references">Find all references</h4> <p>This is one of those features that, once you have it, it's really hard to live without.</p> <p>Place the cursor on any symbol defined in the current solution, hit <code>Shift-F12</code>, and boom -- all references with detailed navigation information are displayed:</p> <blockquote> <p><a href=""><img title="find_all_references" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; border-left: 0px; display: block; padding-right: 0px; margin-right: auto" border="0" alt="find_all_references" src="" width="640" height="389" /></a></p> </blockquote> <p>No more string search or manual browsing in order to look for certain symbols.</p> <p>Similarly, we have <code>Refactor -> Rename</code> for all symbols defined in current solution.</p> <h4 id="navigateto">NavigateTo</h4> <p>This is one of our favorite navigational features.</p> <p>In Visual Studio 2013, there was <a href="" target="_blank">an important improvement</a> where NavigateTo (<code>Ctrl-,</code>) started delivering results via a non-modal dialog.</p> <p>With the Visual F# Power Tools, just type in any identifier you like (or a part of it) and you've got all relevant results from all projects in your solution, ready for quick navigation. It even works seamlessly on mixed F#/C# solutions:</p> <p><a href=""><img title="navigate_to" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; border-left: 0px; display: block; padding-right: 0px; margin-right: auto" border="0" alt="navigate_to" src="" width="640" height="480" /></a></p> <h4>Source code formatting</h4> <p>Automatic code formatting is a feature long-enjoyed by C# developers, but not offered for F# in Visual Studio.  The Power Tools project adds support for source code formatting, along with various options which control the details of adjusted code.</p> <p><a href=""><img title="reformat" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; border-top-width: 0px; margin-right: auto" border="0" alt="reformat" src="" width="290" height="259" /></a></p> <h4>Resolve unopened namespaces and modules</h4> <p>Another favorite of C# developers is the ability for the editor to detect when you might have forgotten a ‘using’ statement.</p> <p>F# developers can now do the same – we can offer to add ‘open’ statements for missing namespaces and modules, or to fully-qualify usages for you:</p> <p><a href=""><img title="addopen" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; border-top-width: 0px; margin-right: auto" border="0" alt="addopen" src="" width="319" height="156" /></a></p> <h4>Automatic generation of pattern match cases</h4> <p>Besides filling in IDE features that C# already has, we have additionally added some features that are specific to F#.</p> <p>One of these is the ability to automatically generate pattern match cases when matching against an F# discriminated union type.  The F# type system knows which cases have not been handled, so we can auto-populate those for you: <br /></p> <p><a href=""><img title="patterncase="patterncase3" src="" width="417" height="197" /></a></p> <p>There are more exciting features that we don’t have time to mention here, so check out <a href="" target="_blank">our homepage</a> for more detailed documentation.</p> <h2>What's next?</h2> <p>Recently, a lot of new shiny tools have been created to assist with F# development, matching the growing number of F# developers who want them. There are many opportunities to provide better tooling to F# users by integrating such projects.</p> <p>For example, we have been using <a href="">SourceLink</a>, a .NET library that automates source indexing, to support navigation to source code hosting platforms such as .NET reference sources, GitHub, etc. for external libraries.</p> <p>We are also working on integrating <a href="">FSharpLint</a>, a style-checking tool for F#.</p> <p>Another possibility is to improve the experience of using F# Interactive inside Visual Studio. There have been <a href="">feature requests</a> to provide more fine-grained ways to send code to <a href="" target="_blank">F# Interactive</a>. Some features, like syntax coloring and code completion, could tremendously improve the F# Interactive user experience.</p> <p>Another area for enhancement is <a href="">C# interoperability</a>, where our features should be aware of referenced C# projects. If you would like to give this a higher priority, let us know by comments or voting.</p> <p>There have been some requests to implement more <a href="">refactoring features</a>. These features are indeed challenging and they require significant involvement of FCS. But of course, they are also fun to implement, just like the feeling we have had since day one of this project.</p> <p>We are actively seeking contributors. If you would like to join us, you can <a href="">report bugs</a>, <a href="">send reviews/suggestions</a>, or, better yet, fork the project and <a href="">send pull requests</a>.</p> <p>Together we make F# tooling better, day by day.</p><div style="clear:both;"></div><img src="" width="1" height="1">Visual FSharp Team [MSFT] Windows apps in .NET<blockquote> <p><em>This post was written by Lucian Wischik, a Program Manager on the Managed Languages team.</em></p> </blockquote> <p>We just released the <a href="">Universal Windows app development tools</a> for writing Windows 10 apps in <a href="">Visual Studio 2015</a>. It is an exciting release: you can now use the latest .NET technology to build <strong>Universal Windows Platform</strong> ("UWP") apps that run on every Windows device - the phone in your pocket, the tablet or laptop in your bag, the PC on your desk, the Xbox console in your living room, and all the new devices that are being added to the Windows family like <a href="">HoloLens</a>, <a href="">Surface Hub</a>, and IoT devices like the <a href="">Raspberry Pi 2</a>.</p> <h1>Installing the UWP Tools</h1> <p>You can <a href="">install the free Community Edition</a>, which install the UWP tools by default. If you need the Professional or Enterprise edition, you can download them from <a href="">VisualStudio.com</a>. During setup, choose 'Custom' to install the Tools for Universal Windows Apps.</p> <p>If you already have Visual Studio 2015, here's two ways to get the new tools:</p> <ul> <li>Download and run the <a href="">Windows Tools installer</a>.</li> <li>Open up Programs and Features from the Control Panel, select Visual Studio 2015 and click Change. Then in setup, click Modify and select the Tools for Universal Windows Apps.</li> </ul> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <h1>What's new with UWP</h1> <p>As a .NET developer you'll appreciate what UWP offers --</p> <ul> <li>UWP apps run "windowed" on the huge number of desktop machines out there that will upgrade to Windows 10.</li> <li>UWP apps will also reach every other Windows 10 device out there -- Phone, XBox, HoloLens, even "Internet of Things" devices including Raspberry Pi.</li> <li>UWP apps take advantage of the new <a href="">.NET Core</a>. You can use the latest version of .NET Core, which will include new features that makes your app easier to write.</li> <li>The .NET heart of your app, your business logic, can run on other platforms that support .NET Core, including ASP.NET 5.</li> <li>UWP apps deploy a small copy of .NET with your app, so that your app always uses the .NET version that you tested with.</li> <li>UWP apps use <em>.NET Native</em>, which generates highly optimized native machine code before they are downloaded onto customer machines. .NET Native provides much faster app launch times, lower battery consumption, and faster performance.</li> <li>UWP apps are easy for your customers to buy, install and upgrade via the Windows Store.</li> <li>UWP apps integrate perfectly with <a href=""><em>Application Insights</em></a> for detailed telemetry and analytics -- this is a crucial tool with which to understand your users and improve your apps.</li> </ul> <p>The new things you can do with this release --</p> <ul> <li>Write Windows 10 UWP apps with .NET.</li> <li>Write Portable Class Libraries that target .NET Core.</li> <li>Use more .NET surface area in UWP apps than was previously available to Windows Store or Phone apps, including <a href="">System.Net.Sockets</a>, <a href="">WCF Client</a>, <a href="">System.Numerics.Vectors</a>, and new <a href="">Diagnostics APIs</a>.</li> <li>You can choose to use <a href="">NuGet 3.1</a> (recognizable by the file "project.json") for NuGet consumption in all project types.</li> </ul> <h1>Getting started with UWP development</h1> <p>Here are some useful overviews and tutorials for UWP development:</p> <ul> <li><a href="">How to build a Windows 10 universal app</a> [MSDN] -- with adaptive UI and adaptive code so your UWP app looks good and runs well on all Windows 10 devices.</li> <li><a href="">Guide to UWP apps</a> [MSDN] -- how "universal" apps are supported across all devices.</li> <li><a href="">Porting apps to UWP</a> [MSDN] -- from Phone Silverlight, Win8.1 and VS2015 RC.</li> <li><a href="">Developing Universal Windows Apps with C# and XAML</a> [Microsoft Virtual Academy] -- practical online training course by expert Jerry Nixon, spread over 22 hour-long lessons.</li> <li><a href="">Developing UWP apps in VS2015</a> [BUILD talk].</li> <li><a href="">Deep dive into XAML and .NET UWP development</a> [BUILD talk].</li> </ul> <p:</p> <p><strong>File > New > C#/VB > Windows > Universal</strong> Get started with a new blank UWP app. It's faster than VS2015 RC thanks to improvements in NuGet. You can also create Portable Class Libraries (PCLs) that span UWP, ASP.NET 5 and .NET4.6.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Solution Explorer > References</strong> The References node shows NuGet packages with their own distinctive icon. One important package here is <code>Microsoft.NETCore.UniversalWindowsPlatform</code>; it contains the .NET Core runtime and framework. The project.json file drives the new NuGet 3.0, replacing packages.config. NuGet 3.0 is faster and more flexible than NuGet 2.0.</p> <p> <a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Adaptive XAML</strong> Developers could always design "adaptive UIs" that scale to any device, any form-factor. It's easier now thanks to many XAML improvements, including ViewState triggers, more device previews, and live Visual XAML Tree debugging. Also, use the new <em>x:Bind</em> for higher performance data-binding.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Adaptive code</strong>.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Fast graphics: <a href="">Win2d</a> and <a href="">System.Numerics.Vectors</a></strong> For fast graphics, use the <a href="">Win2d library</a> – an elegant .NET-friendly wrapper around DirectX. Of course, you can still use <a href="">SharpDX</a> or <a href="">MonoGame</a> too. And <a href="">System.Numerics.Vectors</a> leverages the CPU's <a href="">SIMD</a> instructions for faster vector and matrix arithmetic. All this let me compute the Mandelbrot fractal in just 70 milliseconds on my mid-range Nokia 635.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>WCF, HTTP/2 and Sockets</strong> The .NET Core libraries now include <a href="">WCF</a> and AddServiceReference, previously unavailable for Phone apps. <a href="">HttpClient</a> has been rewritten from scratch: it performs better and supports <a href="">HTTP/2</a>. We've also included <a href="">System.Net.Sockets</a>, a long-requested .NET feature for Windows Store apps.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Improved debugging and EnC</strong> You can now use "Edit and Continue" (EnC) when debugging on the emulator. The whole debugger engine has been overhauled – to <a href="">support lambdas and LINQ expressions in the immediate and watch windows</a>, and <a href="">to support EnC in many more places than ever before</a>. Some developers code their entire app while in EnC. Try it!</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>.NET Native</strong> When you build in Release mode, your app gets built with the new ".NET Native" compiler. This turns it into heavily optimized native machine code – for much faster app startup time, lower battery consumption, and faster overall performance.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Store submission</strong>.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p> </p> <p><strong>Application Insights and Diagnostics</strong> Application Insights is included by default in every new project. It provides detailed analytics about your app – like crashes and usage. All the top apps in the Store already know that obtaining and responding to analytics is what makes them top. There are also <a href="">richer tracing features</a> available in ETW.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <h1>.NET Native</h1> <p>.</p> <p> <a href="">perf-tips feature and Diagnostics Tools window</a>, new in VS 2015.</p> <p>There have been <a href="">many articles about .NET Native preview</a> on the .NET team blog already. What's new with UWP is that it's the first <em>production</em>.</p> <p!</p> <h1>.NET Core Framework</h1> <p>The <a href="">.NET Core</a> Framework ("CoreFX") has already been <a href="">discussed last week on this blog</a>:</p> <blockquote> <p>.NET Core is a new version of .NET for modern device and cloud workloads. It is a general purpose and modular implementation that can be ported and used in many different environments for a variety of workloads.</p> </blockquote> <p>CoreFX is used for UWP apps. It is a superset of the .NET APIs that were available for Windows Store development.</p> <p>Let's highlight some parts of .NET Core FX that will be of particular interest to UWP developers:</p> <ul> <li><a href="">System.Net.Sockets</a> .</li> <li><a href="">HttpClient</a> (like many low-level parts of .NET Core FX), needs a different implementation for each platform it runs upon. In UWP apps it is built on top of the WinRT HTTP stack. This brings it the ability to use <a href="">HTTP/2</a> by default if the server supports it, with lower latency and fewer round-trip communications.</li> <li><a href="">WCF Client</a> (and the associated Add Service Reference dialog) was previously unavailable in Windows Phone appx projects. But since it's part of .NET Core, it can be used by all UWP apps.</li> <li><a href="">System.Numerics.Vectors</a> provides vector and matrix types that are implemented by SIMD opcodes on the CPU -- <em>Single Instruction Multiple Data</em>. These are faster for vectors and matrices than the normal "single instruction <em>single</em> data" opcodes! -System.Diagnostics.Tracing.EventSource now lets you send <a href="">richer payloads</a> to <em>Event Tracing for Windows</em> (ETW) in your events.</li> </ul> <p>Two exciting aspects of CoreFX are that <a href="">it's open-source</a>.</p> <p".</p> <p>If you are a library author who wants to write .NET Core libraries, you can write PCLs that target any of .NET4.6, UWP and ASP.NET 5.</p> <h1>Universal Projects</h1> <p>What UWP delivers is the ability to write <em>universal</em>.</p> <p>There's a good explanation on the MSDN " <a href="">Guide to UWP apps</a>" about how to make sure your app looks good on all the different devices. Happily, it often turns out that the UI tweaking needed to make your app look good at different <em>window sizes</em> on desktop, also makes it look good on different devices.</p> <p>From the .NET side, the technically most interesting aspect is <em>adaptive code</em>. Here's an example:</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <script type="text/javascript" src=""></script> <p>My app looked great on Windows 10 Desktop, but on Windows 10 Mobile it was showing the Status Bar. I thought it would look better if I called <code>StatusBar.HideAsync</code>. However <code>StatusBar</code> is a type (and concept) that doesn't even exist on Desktop. The code to deal with this absence looks simple – the WinRT API <code>Windows.Foundation.Metadata.ApiInformation.IsTypePresent</code> is used to determine whether a named WinRT type is present on the machine an app happens to be running on, and it will only invoke the platform-specific methods in that case.</p> <p>Sometimes it's hard to remember whether an API you're calling needs to be guarded by <code>IsTypePresent</code>. To help with this, I wrote a NuGet package called <a href="">PlatformSpecific.Analyzer</a> which you add to your project: it gives squiggly warning messages in the IDE if you've forgotten the guard.</p> <p>What's interesting is that this style of <em>adaptive code</em> <em>all</em> UWP device families and versions. As for Release builds, .NET Native compilation bakes in the necessary metadata into the final native machine code, in the form of COM IIDs and vtables.</p> <p.</p> <h1>NuGet 3.0 and "project.json"</h1> <p>NuGet has become the de facto standard for package management in .NET apps. We wanted to deploy .NET Core as NuGet packages, but the existing NuGet 2.0 client and its <em>packages.config</em>, although great for its scenarios, <a href="">wasn't the best choice</a> for scaling up to the 100+ sub-packages that make up .NET Core – too slow, and not flexible enough. <a href="">NuGet 3.0</a> fixes those issues. First used in ASP.NET 5; it is now used in UWP as well.</p> <p:</p> <ol> <li>When you install a NuGet package, a reference is added to your project.json file, and shows up in your SolutionExplorer > References node.</li> <li.</li> <li>At build-time, if a project.json file is present, then MSBuild reads it and references the appropriate DLLs and .targets files contained within it.</li> </ol> <p>Let's spell out the advantages that the project.json workflow brings:</p> <ul> <li>Your .vbproj/.csproj no longer includes any NuGet references: they are kept completely separate. This makes source-control and merge-conflict-resolution easier!</li> <li>You can change your app target platform, and change Debug/Release and x86/x64/ARM/AnyCPU, and NuGet will now honor those settings.</li> <li>You can now have two different solutions in two different directories that include the same NuGet-consuming project. This is particularly useful when you're working across two different repositories.</li> <li.</li> <li>Packages are cached globally (on a per-user per-machine) basis rather than being downloaded+unzipped locally into every single solution that uses them.</li> <li>File > New and Manage NuGet Packages > Install have both become faster.</li> <li>You get more precise control over NuGet package upgrades, and version mismatches.</li> </ul> <p>Please read more about NuGet on the <a href="">NuGet Team Blog</a> and the <a href="">NuGet Home repo</a>.?</p> <p>Some NuGet packages don't work quite the same way when installed into UWP apps. If you find others, or are blocked on some, please let us know in the Comments box at the bottom of this post.</p> <ul> <li><strong>SharpDX.Toolkit 2.6.3</strong>. Upgrade to <a href="">SharpDX 3</a> (currently in alpha), which works fine in UWP apps. The SharpDX <em>Toolkit</em> has been deprecated and won't move to version 3 and can't be installed into UWP apps. As an alternative consider other toolkits built on SharpDX such as <a href="">Paradox</a> or <a href="">MonoGame</a>.</li> <li><strong>MvvmLight</strong>. <a href="">MvvmLight VSIX</a> instead.</li> <li><strong>Sqlite-net</strong>. Although this NuGet package can no longer be installed into UWP apps, the equivalent <a href="">Sqlite.Net-PCL</a> (by the same author) works fine.</li> <li><strong>LiveSDK</strong>. <a href="">Windows.Security.Authentication.OnlineID</a> as described <a href="">here</a>, and for OneDrive you can use the <a href="">REST APIs</a> via HttpClient.</li> </ul> <p>Incidentally, project.json is also used by default for "modern" PCLs – i.e. those whose targets are limited to some or all of .NET4.6, UWP, and ASP.NET 5 Core.</p> <h1>UWP apps use CoreCLR for Debug and .NET Native for Release</h1> <p>The following diagram shows what happens when you build your UWP app, debug it, and submit it to the store. The VB and C# compilers continue to emit DLLs in MSIL format as before. It's what happens next that's different…</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Debug build: CoreCLR</strong>. When you build your UWP app in Debug mode, it uses the ".NET Core CLR" runtime, the same as used in ASP.NET 5. This provides a great edit+run+debug experience – fast deploy, rich debugging, Edit and Continue. It also means</p> <p><strong>Release build: .NET Native.</strong> When you build in Release mode, it takes an additional 30+ seconds to turn your MSIL and your references into optimized native machine code. We're working on improving that time. It does "tree-shaking" to remove all code that will never be called. It does " <a href="">Marshalling Code Generation</a>" to pre-compile serialization code so it doesn't have to use reflection at runtime. It does whole-program optimization. This work and compilation to native code results in a single native DLL. You can explore this in bin\x86\Release\ilc.</p> <p>**.</p> <p><strong>Store submission.</strong>.</p> <h1>Tips for developing with .NET Native</h1> <p><strong>Test your app in Release build</strong>. <a href="">Release build is fully optimized and you will want to disable optimizations to have the best debugging experience</a>.</p> <p><strong>.NET Native Analyzer</strong>. <a href="">Microsoft.NETNative.Analyzer</a>.</p> <p><strong>AnyCPU is gone</strong>..</p> <p>If you're developing a class library or PCL, however, you should generally develop these as "AnyCPU". That makes things easier - you can distribute a single DLL that can be consumed by all project targets.</p> <p>The easiest approach I've found is with the Build > ConfigurationManager dialog. I can set it so that even if my toolbar shows "AnyCPU" for the benefit of the libraries, it still builds+deploys my UWP application as x86.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Debugging .NET Native</strong>. <em>Project > Properties > Compile with the .NET Native tool chain</em>. In VB it's under <em>MyProject > Build > Advanced</em>.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p><strong>Customizing the .NET Native optimizations</strong>. Sometimes, especially in apps that make subtle use of reflection, .NET Native can remove too much in its optimizations. You can control this. These blog posts explain it well:</p> <ol> <li><a href="">Dynamic features in static code</a>.</li> <li><a href="">Help! I hit a MissingMetadataException!</a>.</li> <li><a href="">Help! I didn't hit a MissingMetadataException!</a>.</li> <li><a href="">.NET Native deep dive: making your library great</a>.</li> <li><a href="">.NET Native deep dive: optimizing with Runtime Directives</a>.</li> </ol> <p>*<strong><em>Expression.Compile.</em></strong>* <em>interprets<.</p> <p><strong>F# -</strong> F# DLLs cannot be used in UWP store apps: they are not currently supported by .NET Native. That's a scenario we'd like to fix. Please tell us if that's important to you!</p> <p><strong>Get help</strong>. If you're stuck with a .NET Native issue, the best place to find help is email to <a href="mailto:[email protected]">[email protected]</a>.</p> <h1>Summary</h1> <p>Today's release of the Universal Windows Platform opens up important new opportunities for .NET developers. You'll get huge reach for your UWP apps, and you'll be able to code them with very latest .NET technologies.</p> <p>Please try it out. Let us know what you think. If you have questions, post here or at the " <a href="">Developing Universal Apps</a>" forum on the Windows Dev Center. And you can <a href="">measure your app startup time</a> to under 200ms thanks to .NET Native, please post here to boast about it!</p><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT] Bug Advisory in the .NET Framework 4.6<p><strong>Update 8/11/2015:</strong> We released an updated version of RyuJIT today, which <em>resolves this advisory</em>. The update was released as <a href="">Microsoft Security Bulletin MS15-092</a> and is available on Windows Update or via direct download as <a href="">KB3086251</a>. The update resolves: <a href="">CoreCLR #1296</a>, <a href="">CoreCLR #1299</a>, and <a href="">VisualFSharp #536</a>. Major thanks to the developers who reported these issues. Thanks to everyone for their patience.</p> <p><span style="font-size: 12px;">A code generation (AKA "codegen") issue in </span><a style="font-size: 12px;" href="">RyuJIT</a><span style="font-size: 12px;"> in the </span><a style="font-size: 12px;" href="">.NET Framework 4.6</a><span style="font-size: 12px;"> has been discovered that affects a calling pattern called </span><a style="font-size: 12px;" href="">Tail Call Optimization</a><span style="font-size: 12px;">. The RyuJIT team has </span><a style="font-size: 12px;" href="">fixed the issue</a><span style="font-size: 12px;"> and has started the process of producing a .NET Framework 4.6 patch that will be freely available for anyone to download and install.</span></p> <p>There is a <a href="#recommendation">workaround for this issue</a>, with the .NET Framework 4.6. It is supported to use this workaround in production to safely avoid this issue. The workaround is enabling a RyuJit config switch to <a href="">disable tail call optimizations</a>. See the recommendation below, for a detailed explanation how to proceed.</p> <h1>Description of the Issue</h1> <p.</p> <p.</p> <p>The following annotated C# repro provides a detailed explanation of the bug.</p> <script type="text/javascript" src=""></script> <p>The following F# repro provides the F# version of the issue.</p> <script type="text/javascript" src=""></script> <h1>Customer Bug Report</h1> <p><a href="">Nick Craver</a> and <a href="">Marc Gravell</a>, a team of two at <a href="">Stack Exchange</a> (runs <a href="">Stack Overflow</a>),.</p> <p>We were able to diagnose the issue by Friday and provide a simple work-around to disable the specific RyuJIT optimization.</p> <h1>Advisory</h1> <p>Nick Craver published his own customer advisory yesterday, on <a href="">Why you should wait on upgrading to .Net 4.6</a>. It's a good post that you should read if you are deploying the .NET Framework 4.6.</p> <p.</p> <p.</p> <h2><a name="recommendation"></a>Recommendation</h2> <p>Our recommendation to StackExchange and to any other customer is the following:</p> <ol> <li>Scout the .NET Framework 4.6 in your environment.</li> <li>If you run into an issue that you cannot diagnose, try <a href="">disabling RyuJIT</a>.</li> <li>If disabling RyuJIT resolves the issue, please re-enable RyuJIT and <a href="">disable tail call optimization</a>.</li> <li.</li> <li>If your issue is not mitigated with the tail call optimization disabled, but is mitigated with RyuJIT disabled, we want to hear from you on <a href="">.NET Framework Connect</a>. You can also run your app in production in this configuration (RyuJIT disabled).</li> <li>If your issue is not mitigated by disabling RyuJIT or tail call optimization, then it something else and unrelated to this advisory.</li> </ol> <p.</p> <p><span.</span></p> <h1>Closing</h1> <p>Thanks again to the StackExchange team for reaching out to us with this issue and for getting the word out about the issue.</p> <p>As stated at the start of the post, we have already started producing a RyuJIT patch for the .NET Framework 4.6. We will post an update when it is is available.</p> <p.</p> <p>The .NET Framework 4.6 is a great release that we can continue to recommend deploying. It is perfectly safe to run the .NET Framework 4.6 with tail call optimizations disabled, while you are waiting for the patch. Your app will get the benefit of other <a href="">.NET Framework 4.6 improvements</a>.</p><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT] Networking APIs for UWP Apps<p style="padding-left: 30px;"><em>This post was written by <strong>Sidharth Nabar</strong>, Program Manager on the Windows networking team.</em></p> <p (<a href="">API reference on MSDN</a>)..</p> <p.</p> .</p> <h2>What’s New</h2> <p>These are the new APIs and features that we have added into .NET Core 5 for UWP app developers.</p> <h3>System.Net.Sockets</h3> <p>With Windows 10 and .NET Core 5, <strong><code>System.Net.Sockets</code> has been added into the API surface for UWP app developers</strong>. This was a <a href="">highly requested API</a> for Windows Store apps (it was already available for Windows Phone Silverlight apps) and includes types such as <code>System.Net.Sockets.Socket</code> and <code>System.Net.Sockets.SocketAsyncEventArgs</code>, which are used by developers for asynchronous socket communication. The current API surface of <code>System.Net.Sockets</code> <a href="#looking-ahead">Looking ahead</a> section below.</p> <p>The implementation underneath the <code>System.Net.Sockets</code> <a href="">GitHub</a> if you see any differences in behavior or performance as you port your Sockets code to UWP.</p> <h3>System.Net.Http gets HTTP/2</h3> <p>Developers writing UWP apps on Windows 10 and .NET Core 5 will get <strong>HTTP/2 support in <code>System.Net.Http.HttpClient</code></strong>. HTTP/2 is the latest version of the HTTP protocol and provides much lower latency in web access by minimizing the number of connections and round-trip messages. Adding this support into the <code>HttpClient</code> <a href="">this talk</a> from Build 2015. The talk also features a simple photo downloading app that shows approximately 200% improvement in latency upon switching to HTTP/2 (<a href="">demo video</a>).</p> <p>The following code snippet shows how to query the HTTP version preference on the client as well as the actual HTTP version being used for the connection:</p> <pre><code>var myClient = new HttpClient();<br />var myRequest = new HttpRequestMessage(HttpMethod.Get, "");</code></pre> <pre><code>// This property represents the client preference for the HTTP protocol version.<br /></code><code>// The default value for UWP apps is 2.0.<br />Debug.WriteLine(myRequest.Version.ToString());<br />var response = await myClient.SendAsync(myRequest);</code></pre> <pre><code>// This tells if you if the client-server communication is actually using HTTP/2<br />Debug.WriteLine(response.Version.ToString());</code></pre> <p><strong>Notes:</strong></p> <ol> <li> <p>Setting the <code>Request.Version</code> property to 2.0 is not supported on other .NET platforms and will throw a <code>System.ArgumentException</code> when trying to send such a request. The default version on .NET platforms other than UWP is 1.1.</p> </li> <li> <p>The <code>Request.Version</code>.</p> </li> </ol> <h2>What’s Changed</h2> <p.</p> <h3>System.Net.Http</h3> <p>In Windows 8.1, the implementation of <code>HttpClient</code> was based on a managed HTTP stack comprising of types such as <code>System.Net.HttpWebRequest</code> and <code>System.Net.ServicePointManager</code>. In .NET Core for UWP apps, this has been replaced by a completely new, lightweight wrapper on top of native Windows OS HTTP components such as <code>Windows.Web.Http</code>, which is based on <a href="">WinINet</a>. <a href="">here</a> remains unchanged.</p> <p <a href="">GitHub</a>.</p> <h3>System.Net.Requests</h3> <p>This library contains types related to <code>System.Net.HttpWebRequest</code> and <code>System.Net.HttpWebResponse</code> <a href="">here</a>.</p> <p>This library is provided purely for backward compatibility and to unblock usage of .NET libraries that use these older APIs. For .NET Core, the implementation of <code>HttpWebRequest</code> is actually based on <code>HttpClient</code> (reversing the dependency order from .NET Framework). As mentioned above, the reason for this is to avoid usage of the managed .NET HTTP stack in a UWP app context and move towards HttpClient as a single HTTP client role API for .NET developers.</p> <h2>What’s the same</h2> <p>Other types from <code>System.Net</code> and <code>System.Net.NetworkInformation</code> namespaces that were supported for Windows 8.1 Store apps will continue to be supported for UWP apps. There have been some minor additions to this API surface, but no major changes in implementation.</p> <h2>Looking Ahead</h2> <p.</p> <p <a href="">Windows platform missing APIs uservoice</a> or file an issue in <a href="">GitHub</a>. We look forward to working with you to deliver awesome apps to the entire breadth of Windows devices.</p><div style="clear:both;"></div><img src="" width="1" height="1">Immo Landwerth [MSFT] the RTM of Visual F# 4.0<p>We are pleased to announce that Visual Studio 2015, and along with it Visual F# 4.0, hit RTM today! Visit the <a href="">downloads page</a> to install the release build. The F# components in VS 2015 map to commit dd8252eb8d20 in our <a href="" target="_blank">repo</a>.</p> <p>For an overview of the new language, runtime, and IDE features in Visual F# 4.0, take a look at our earlier blog posts from VS 2015 <a href="" target="_blank">preview</a> and <a href="" target="_blank">RC</a>, or review the VS 2015 <a href="" target="_blank">release notes</a>.  For a complete list of bug fixes and changes, see <a href="" target="_blank">CHANGELOG.md</a>.</p> <h3>Thank you to our contributors</h3> <p>Visual F# 4.0 marks the first major-version release of the F# language and VS tools to include community contributions.  As such, we offer a heart-felt “thank you!” to all of the F# community developers who contributed code, opened issues, or dogfooded early builds.</p> <p><a href=""><img title="F# 4.0 code contributors" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; border-top-width: 0px; margin-right: auto" border="0" alt="F# 4.0 code contributors" src="" width="480" height="480" /></a></p> <p>F# 4.0 is as much about a change in culture for the language as updates to the language specification, library and tools.  There are numerous new technical features, but in many ways the most significant changes are in how we have extended the core library, and how we’ve changed the way we are doing language design, implementation and delivery.</p> <p>Language design is now done in an open, collaborative way through <a href="" target="_blank">fslang.uservoice.com</a>.  Language implementation has shifted to a fully open mode of engineering, and language delivery is shifting to be much more cross-platform and multi-editor. All of this is done by a range of contributors, including Microsoft, Microsoft Research, <a href="" target="_blank">F# Software Foundation</a> members and many more. </p> <h3>Join us</h3> <p>Help us make F# and the Visual F# tools the best functional-first, data-centric, highly-productive language and toolset it can be! Contribute at <a title="" href=""></a>, make suggestions at <a title="" href=""></a> or <a href=""></a>, or follow us on Twitter at <a href="">@VisualFSharp</a> for the latest news and updates.</p> <h3>Take a look</h3> <p>For a quick preview of some of the new capabilities in Visual F# 4.0, check out this Channel 9 video.</p> <iframe height="315" src="" frameborder="0" width="560" allowfullscreen="allowfullscreen"></iframe><div style="clear:both;"></div><img src="" width="1" height="1">Visual FSharp Team [MSFT] .NET Framework 4.6<p>We're excited to announce the RTM releases of <a href="">.NET Framework 4.6</a> and <a href="">Visual Studio 2015</a> today. You can read about the new features or leave that for later and try them out now. The quickest way to get started is to install the free Visual Studio 2015 Community version.</p> .</p> <p.</p> <p>You can download and try out the releases now:</p> <ul> <li><a href="">Visual Studio 2015</a></li> <li><a href="">.NET Framework 4.6</a></li> </ul> <p>As a team, we're really excited to share everything we've been working on:</p> <ul> <li><a href="#net-framework-46">.NET Framework 4.6</a></li> <li><a href="#aspnet-46">ASP.NET 4.6</a></li> <li><a href="#entity-framework">Entity Framework</a></li> <li><a href="#net-languages">.NET Languages</a></li> <li><a href="#visual-studio-improvements-for-net">Visual Studio Improvements for .NET Developers</a></li> <li><a href="#net-core">.NET Core and ASP.NET 5</a></li> </ul> <p>You can check out the earlier <a href="">RC</a> and <a href="">Preview</a> releases to see how the release has developed over the last year. In fact, it's only been 14 months since we released the <a href="">.NET Framework 4.5.2</a>.</p> <h1><a name="net-framework-46"></a>.NET Framework 4.6</h1> <p>There are many great features in the <a href="">.NET Framework 4.6</a>. Some of these features, like RyuJIT and the latest GC updates, can provide improvements by just installing the .NET Framework 4.6. Give it a try!</p> <p>You can learn more about the release by looking at <a href="">What's New in the .NET Framework</a>, <a href="">Application Compatibility in the .NET Framework 4.6</a>, <a href="">.NET Framework 4.6 release changelist</a>, and an <a href="">.NET Framework API diff</a> (<a href="">GitHub</a>) between the .NET Framework 4.6 and 4.5.2 releases. Check out the <a href="">ASP.NET Team post</a> to learn more about ASP.NET updates.</p> <p>The .NET Framework 4.6 is part of Windows 10 and can be installed on Windows 7 and Windows 8. You can target the .NET Framework 4.6 in Visual Studio 2012 or later, by installing the the <a href="">.NET Framework 4.6 Targeting Pack</a>. It comes with Visual Studio 2015. There are two ways to install the .NET Framework 4.6: <a href="">web installer</a>, <a href="">offline installer</a>. The web installer is recommended for most users.</p> <h2>Windows Presentation Foundation</h2> <p>The team has made key improvements to WPF in this release:</p> <h3>Transparent Child Window support</h3> <p>WPF now supports transparent child windows in Windows 8.1 and above. This enables you to create and compose non rectangular and transparent child windows in your top level Windows. You can enable this by setting the <a href="">UsesPerPixelTransparency property</a> to true in <a href="">HwndSourceParameters</a>.</p> <h3>High DPI Improvements</h3> <p.</p> <pre><code><runtime> <br /> <AppContextSwitchOverrides <br /></code> </pre> <pre></runtime> </pre> <p>WPF windows straddling multiple monitors with different DPI settings (Multi-DPI setup) are now rendered correctly, without blacked out regions. You can opt out of this behavior by adding the following line to the section in the app.config file:</p> <pre><code><appSettings><br /></code> <add key="EnableMultiMonitorDisplayClipping" value="true"/><br /></appSettings> </pre> <p>Support for automatically loading the right cursor based on DPI setting has been added to <a href="">System.Windows.Input.Cursor</a>. This enables you to provide a multi-image .cur file to the WPF platform and configure it to automatically pick up the right cursor based on the current DPI of the active display.</p> <h3>Touch is better</h3> <p>The team adopted the double-tap threshold used by UWP applications, which is considered to be an industry-quality implementation. WPF now uses this same implementation on Windows 8.1 and above.</p> <p>Touch events are now more reliable. This <a href="">Connect issue</a> requesting touch event improvements has been fixed in this release.</p> <h2>Windows Forms Updates for High DPI</h2> <p>Windows Forms High DPI support has been updated to include more controls, which is a project that started in the <a href="">.NET Framework 4.5.2</a>.</p> .</p> <p>You can see a few examples of Windows Forms High DPI improvements.</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p>This is an opt-in feature. To enable it, set the EnableWindowsFormsHighDpiAutoResizing element to true in the app.config file:</p> <pre><code><appSettings><br /></code> <add key="EnableWindowsFormsHighDpiAutoResizing" value="true" /><br /></appSettings> </pre> <h2><a name="ryujit"></a>RyuJIT</h2> (introduced in 2005 .NET 2.0 release). There was always a big gap in throughput between the 32- and 64-bit JITs. That gap has been closed, making it easier to exclusively target 64-bit architectures or migrate workloads from 32- to 64-bit.</p> <p>RyuJIT is enabled for 64-bit processes running on top of the .NET Framework 4.6. Your app will run in a 64-bit process if it is compiled as 64-bit or AnyCPU (although not as Prefer 32-bit),="">RyuJIT blog posts</a>, try out several RyuJIT CTPs and (suprise!) can now read and contribute to the <a href="">RyuJIT source code</a>. Thanks to everyone who helped improve RyuJIT along the way to RTM. We fixed a lot of publicly-reported bugs and performance issues based on those CTP releases. It's been a pleasure for Microsoft engineers to adopt a more public development process with RyuJIT.</p> <p>The project was initially targeted to improve high-scale 64-bit cloud workloads, although it has much broader applicability. We do expect to add 32-bit support in a future release.</p> <h2>SIMD</h2> <p>The 64-bit CLR introduces support for <a href="">Single Instruction Multiple Data (SIMD)</a> Vectors. These new types are in the <a href="">System.Numerics namespace</a>, and are recognized as intrinsics by the JIT, which generates code utilizing the capabilities of SSE2 and AVX2 hardware, depending on the machine.</p> <p.</p> <script type="text/javascript" src=""></script> <p>This will perform 4 adds in parallel on SSE2, or 8 on AVX2. (Of course, details about ensuring that the arrays are all the same size, and a multiple of Vector.Count have been omitted.)</p> <p.</p> <p>These new types are available via the <a href="">System.Numerics.Vectors NuGet package</a>, and will automatically be accelerated when run on the 64-bit .NET Framework runtime. SIMD is also supported on the 64-bit .NET Core.</p> <h2>Garbage Collector Updates</h2> <p.</p> <p>The GC now handles pinned objects in a more optimized way. It is now possible for the GC to compact more memory around pinned objects. This change can provide a suprisingly impactful improvement for large-scale workloads with significant use of pinning.</p> <p.</p> <p>The Garbage Collector has a new mode that avoids garbage collection while certain memory-related conditions are met. This new mode is important for low-latency workloads that cannot afford interuptions. It enables you to <a href="">specify that a certain amount of memory must be available</a> before entering a <em>No GC Region</em>. While in the region the GC will not collect, which means that it will not interupt your workload during that time. The GC>Windows Communication Foundation</h2> <h3>SSL</h3> <p less secure protocols. This can be done either by setting the System.ServiceModel.TcpTransportSecurity.SslProtocols property or by updating a configuration file, as shown below.</p> <pre><code><netTcpBinding><br /></code> <binding><br /> <security mode= "None|Transport|Message|TransportWithMessageCredential" ><br /><code> <transport clientCredentialType="None|Windows|Certificate" <br /> </code></transport><br /> </security><br /> </binding><br /></netTcpBinding> </pre> <h3>Send messages using different HTTP connections</h3> <p>WCF now allows users to ensure certain messages are sent using different underlying HTTP connections. There are two ways to achieve this.</p> <ol> <li.</li> <li> <p>Using different channel factories: Users can also enable a feature that will ensure messages sent using channels created by different channel factories will use different underlying HTTP connections. To enable this feature users must set the following appSetting to true:</p> </li> </ol> <pre> <appSettings><br /> <add key="wcf:httpTransportBinding:useUniqueConnectionPoolPerFactory" <br /><br /> </appSettings></pre> <p> </p> <h2>Windows Workflow</h2> <p>The workflow team added a new setting that specifies.</p> <p>You can add the setting in the appSettings section of an app.config file.</p> <pre><code><appSettings><br /></code> <add key="microsoft:WorkflowServices:FilterResumeTimeoutInSeconds" value="60"/><br /></appSettings> </pre> <p>The default value is 60 seconds. If the value is set to 0, then the out-of-order requests are immediately rejected with a fault with text that looks like this:</p> <pre><code>Operation 'Request3|{}IService' on service instance with identifier <br />'2b0667b6-09c8-4093-9d02-f6c67d534292' cannot be performed at this time. <br />Please ensure that the operations are performed in the correct order and that the binding <br />in use provides ordered delivery guarantees. </code></pre> <p>This is the same message that is received if an out-of-order operation message is received and there are no non-protocol bookmarks.</p> <p>If the value of FilterResumeTimeoutInSeconds is non-zero and there are non-protocol bookmarks and the timeout expires, the operation fails with a timeout message.</p> <h2>ADO.NET improvements</h2> <p>ADO .NET now supports the <a href="">Always Encrypted</a> feature available in SQL Server 2016.. You can learn more about this feature on the <a href="">SQL Security Blog</a>.</p> <h2>Async</h2> <p>The new System.Threading.AsyncLocal.Value property was explicitly changed, or because the thread encountered a context transition. You can see an example of this new type in use.</p> <script type="text/javascript" src=""></script> <p <a href="">System.Globalization.CultureInfo</a> class topic.</p> <p>Three convenience methods, CompletedTask, FromCancelled, and FromException, have been added to Task to return completed tasks in a particular state.</p> <p>The NamedPipeClientStream class now supports asynchronous communication with its new ConnectAsync method.</p> <h2>Networking Enhancements</h2> <h3>System.Net.Sockets</h3> <p384), which could limit the scalability of a service by causing port exhaustion when under load.</p> <p>In the .NET Framework 4.6, the System.Net.Sockets.SocketOptionName.ReuseUnicastPort enumeration value and the System.Net.ServicePointManager.ReusePort property, have been added to enable port reuse.</p> <p.</p> <p>Developers writing a sockets-only application can specify the System.Net.Sockets.SocketOptionName.ReuseUnicastPort option when calling a method such as System.Net.Sockets.Socket.SetSocketOption so that outbound sockets reuse local ports during binding.</p> <h3>System.Uri</h3> <p>A new property, System.Uri.IdnHost, has been added to the System.Uri class to better support international domain names and PunyCode.</p> <h2>CLR Assembly Loader Performance</h2> <p>The assembly loader now uses memory more efficiency by unloading IL assemblies after a corresponding NGEN image is loaded. This change is a major benefit for virtual memory for large 32-bit apps (such as Visual Studio) and also saves physical memory.<.</p> <p>The team made the following improvements:</p> <ul> <li>RSA Encryption: Added support for OAEP padding using the SHA-2 hash family.</li> <li>RSA Signing: Added support for PSS padding</li> <li>RSA usability: Improved API surface area.</li> <li)).</li> </ul> <script type="text/javascript" src=""></script> <h2>Unix Time</h2> <p>You can now more easily convert date and time values to or from .NET Framework types and Unix time. This can be necessary, for example, when converting time values between a JavaScript client and .NET server. The following APIs have been added to the <a href="">DateTimeOffset structure</a>:</p> <ul> <li>static DateTimeOffset FromUnixTimeSeconds(long seconds)</li> <li>static DateTimeOffset FromUnixTimeMilliseconds(long milliseconds)</li> <li>long DateTimeOffset.ToUnixTimeSeconds()</li> <li>long DateTimeOffset.ToUnixTimeMilliseconds()</li> </ul> <h2>EventSource now supports the Event Log</h2> <p>You now can use System.Diagnostics.Tracing.EventSource to log administrative or operational messages to the event log, in addition to any existing ETW sessions created on the machine. This is also called "ETW Channel support". In the past, you had to use the <a href="">Microsoft.Diagnostics.Tracing.EventSource NuGet package</a> for this functionality. The functionality is now built into the .NET Framework 4.6.</p> <p>Both the NuGet package and the .NET Framework 4.6 have been updated with the following features:</p> <ul> <li>DynamicEvents - Allows events defined 'on the fly' by without creating a event method.</li> <li>RichPayloads - Allows specially attributed classes and arrays as well as primitive types to be passed as a payload.</li> <li>ActivityTracking - Causes Start and Stop events to tag events between them with ID that represents all currently active activities.</li> </ul> <h2>Compatibility Switches</h2> <p><a href="">AppContext</a> appropriately> <h2>Other Base Class Library changes</h2> <ul> <li>A number of collection objects, such as System.Collections.Generic.Queue and System.Collections.Generic.Stack, now implement System.Collections.Generic.IReadOnlyCollection.</li> <li.</li> <li.</li> </ul> <h2>Reference Source</h2> <p>The .NET Framework reference source has been updated for the .NET Framework 4.6. You can see the latest source at the <a href="">.NET Framework Reference Source Website</a>, which is also used for <a href="">.NET Framework source debugging</a>. You can see that the new <a href="">AsyncLocal<T></a> type, for example, is now available on Reference Source.</p> <p>The <a href="">.NET Framework referencesource repo on GitHub</a> will be updated with .NET Framework 4.6 shortly. This repo is primarily in place so that the <a href="">Mono Project</a> can adopt .NET Framework source in Mono.</p> <h1><a name="aspnet-46"></a>ASP.NET 4.6</h1> <p>The ASP.NET team has made many updates to ASP.NET 4.6. You can learn more by reading the <a href="">ASP.NET 4.6 RTM blog post</a> or watch <a href="">ASP.NET team member Pranav Rastogi describe the update</a>. The release includes updates for the following components.</p> <ul> <li>ASP.NET Web Forms 4.6</li> <li>ASP.NET MVC 5.2.3</li> <li>ASP.NET Web Pages 3.2.3</li> <li>ASP.NET Web API 5.2.3</li> <li>ASP.NET SignalR 2.1.2</li> </ul> >Identity and Authentication Updates</h2> <p.</p> request.</p> <p>The browser and the webserver (IIS on Windows) do all the work. You don't have to do any heavy-lifting for your users.</p> <p>Most of the <a href="">major browsers</a> support HTTP/2, so it's likely that your users will benefit from HTTP/2 support.</p> <h2>Support for Token Binding Protocol</h2> <p>Microsoft and Google have been collaborating on a new approach to authentication, called the <a href="">Token Binding Protocol</a>. The premise of the protocol> <h1><a name="entity-framework"></a>Entity Framework</h1> <p>There are two versions of Entity Framework currently under development.</p> <ul> <li><a href="">EF 6.1.3</a> is recommended for production workloads. It contains fixes for high priority issues that were reported on EF 6.1.2.</li> <li><a href="">EF 7</a>.</li> </ul> <h1><a name="net-languages"></a>.NET Languages</h1> <p>The .NET languages team is releasing final updates to C# 6, <a href="">F# 4.0</a> and VB 14 today. This includes final compiler implementations, and, for C# and VB, final language feature sets. The latest versions of the languages were actually <a href="">feature complete at RC</a>.</p> <h2>Roslyn v1</h2> <p>The team is also releasing the v1 version of Roslyn, after working on it for ~ 6 years. Roslyn was considered an ambitious project from the beginning. It aimed to replace the black box native C++ based C# and VB compilers with .NET implementations (written in both C# and VB) that exposed a rich set of language, compiler and other APIs. The Roslyn v1 product that you can use today in Visual Studio 2015 delivers on that vision and has enabled great new development experiences in Visual Studio.</p> <p>For a historical grin, you can check out the earlier blog post, <a href="">Introducing the Microsoft “Roslyn” CTP</a>. It's fun to look back at more humble beginnings.</p> <p>To learn more about Roslyn, check out the <a href="">Roslyn repo</a> and the <a href="">Roslyn Overview</a>.</p> <h2>C# 6 and VB 14</h2> <p><a href="">C#</a> and <a href="">VB</a> are both part of the <a href="">Roslyn compiler</a>. You can see a comparison of <a href="">Languages features in C# 6 and VB 14</a> to learn which feature is supported by which language.</p> <p>The following language features are a subset of the new capabilities you can use in either language. Some of the other <a href="">new language features</a> are unique to one language or the other or existed already in one and were added to the other this release.</p> <p><strong>String interpolation:</strong> An intuitive String.Format-like syntax for composing strings from templates with inline expressions.</p> <script type="text/javascript" src=""></script> <script type="text/javascript" src=""></script> <p>The <strong>Null-Conditional operator (?.):</strong> A streamlined syntax for conditionally accessing a member or invoking a method on a value if it's non-null and returning null if the object is null instead of throwing a NullReferenceException.</p> <script type="text/javascript" src=""></script> <script type="text/javascript" src=""></script> <p>The <strong>NameOf operator:</strong> A rename-safe way to refer to the name of a code element such as in PropertyChanged events and ArgumentExceptions.</p> <script type="text/javascript" src=""></script> <script type="text/javascript" src=""></script> <p><strong>Read-only Auto-Properties:</strong> A concise syntax for declaring properties which may only be assigned in their initializers or inside of a constructor.</p> <script type="text/javascript" src=""></script> <script type="text/javascript" src=""></script> <p><strong>Static imports:</strong> New in C# 6! Enables a concise syntax for calling static methods without type qualification. This is handy for frequent calls to members of utility classes like <code>System.Console</code> and <code>System.Math</code>.</p> <script type="text/javascript" src=""></script> <p>This feature is new in C# 6 but already existed in previous versions of VB.</p> <script type="text/javascript" src=""></script> <p><strong>Multiline string literals:</strong> New in VB 14! You can now include newline characters in string literals. This makes it easier than ever to include multiline content in your VB programs without having to manually concatenate <code>vbCrLf</code> into strings.</p> <script type="text/javascript" src=""></script> <p>This feature is new in VB 14 but already existed in previous versions of C#.</p> <script type="text/javascript" src=""></script> <h2>F# 4.0</h2> <p><a href="">F# 4.0</a> introduces a number of new language and runtime capabilities. Just a few are described below; see the F# team blog posts from the <a href="">Preview</a> and <a href="">RC</a> releases for a more complete list, or review the VS 2015 <a href="">release notes</a>.</p> <p><strong>Constructors as first-class function values</strong> Constructors can now be treated as first-class function values, similar to curried functions or other .NET methods. This eliminates the need to create small lambdas for the sole purpose of calling a constructor.</p> <script type="text/javascript" src=""></script> <p><strong>Simplified mutable values</strong> The <code>mutable</code> keyword can now be used in all cases to create a mutable value. Scenarios where <code>ref</code> values were previously required will be handled automatically by the compiler.</p> <script type="text/javascript" src=""></script> <p><strong>Implicit quotation of method arguments</strong> Method arguments now support the <code>[<ReflectedDefinition>]</code> attribute, which enables access to both the passed argument value and a quotation of its callsite expression.</p> <script type="text/javascript" src=""></script> <p><strong>Normalized collections API</strong> The <code>List</code>, <code>Array</code>, and <code>Seq</code> modules have been expanded and fully normalized, with dedicated implementations of every API across all collection types. This represents the addition of 104 new APIs.</p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <h1><a name="visual-studio-improvements-for-net"></a>Visual Studio Improvements for .NET</h1> <p><a href="">Visual Studio 2015</a> includes major improvements for .NET.</p> <h2>Visual Studio Community</h2> <p>You can use the free <a href="">Visual Studio 2015 Community edition</a>. It is very similar to Visual Studio Pro and free for students, open source developers and many individual developers. It supports Visual Studio plugins like Xamarin or Resharper.</p> <h2>EnC - Lambda and Async Task support</h2> <p.</p> <p>You can now use EnC with lambdas, async methods, LINQ and other language features. Given today's coding patterns, that's a huge jump forward for EnC usability. Check out <a href="">Supported Edits in Edit & Continue (EnC)</a> to see the complete set of EnC operations supported by EnC in Visual Studio 2015.</p> <p[].</p> <script type="text/javascript" src=""></script> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <h2>F# Script Debugging</h2> <p>F# scripts and F# Interactive are now integrated with the Visual Studio debugger. You can now take advantage of the rich Visual Studio debugging tools while working interactively.</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <h2>WPF - Live Visual Tree</h><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <h2>Application Timeline Tool</h2> <p.</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <h2>Debugger Improvements</h2> <p>Visual Studio 2015 addresses many requests that you have made for improving your debugging life, such as <a href="">lambda debugging</a>, <a href="">Edit and Continue (EnC) improvements</a>, <a href="">child-process debugging</a>, as well as revamping core experiences such as <a href="">powerful breakpoint configuration</a> and introduces a <a href="">new Exceptions Settings tool window</a>.</p> 2><a name="xamarin"></a>Xamarin Starter Included in Visual Studio 2015</h2> <p>Xamarin is a great way to <a href="">start building iOS and Android apps in C# or F# within Visual Studio</a>. Microsoft and Xamarin have partnered together to include <a href="">Xamarin Starter Edition</a> as a free optional feature within Visual Studio 2015. Many .NET developers are using Xamarin to increase the reach of their apps and development effort to iOS and Android.</p> <p>Xamarin really delivered by providing <a href="">same-day Xamarin support for Visual Studio 2015 across all of their offerings</a>.</p> <p>To install Xamarin with Visual Studio 2015, select the <em>Custom</em> installation option. Select the displayed checkbox below.</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <p>You can use <a href="">Xamarin Starter edition</a> as long as you want, build apps, test on devices and simulators and publish to app stores. It is limited to apps that are <a href="">128k of byte code or less</a>. You can start a <a href="">Xamarin Business trial</a> to try out the richer experience. You can always return to Xamarin Starter Edition after that.</p> <h2>Xamarin.Forms for Windows</h2> <p>Xamarin.Forms support for the Windows platform has been updated to support Windows 8.1, and Windows Phone 8.1 apps. This means that you can build and ship Xamarin.Forms apps that target all of the major mobile platforms from a single code base.</p> <h2>Code Completion for Xamarin.Forms XAML</h2> <p.</p> <p><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></p> <h2>Xamarin + Visual C++ Debugger Integration</h2> <p.</p> <h1><a name="net-core"></a>.NET Core</h1> <p><a href="">.NET Core</a>.</p> <p!</p> <p>.</p> <p>.NET Core is really three things: a cross-platform runtime implementation, a cross-platform framework library implementation and a standard API shape that can be satisfied by multiple .NET implementations (e.g. .NET Framework, .NET Core, Xamarin, Unity).</p> <p>Today, you can use .NET Core within Visual Studio 2015, by using ASP.NET 5. The .NET tools for UWP, which also use .NET Core, ship on 7/29.</p> <h2>.NET Core FX</h2> <p>Today, the <a href="">.NET Core Framework</a>.</p> <p.</p> <p>All of the .NET Core libraries are distributed as NuGet packages. You can acquire the packages easily within Visual Studio or with one of the NuGet clients directly.</p> <h2>ASP.NET 5</h2> <p>ASP.NET 5 is the latest version of several ASP.NET technologies, include MVC and Web API. It supports running on both the .NET Framework and .NET Core, so by extension, supports running on Windows, Linux, OS X and any other .NET Core platforms. The <a href="">ASP.NET Home</a> repo is a great place to start learning about ASP.NET 5. You'll also find samples and getting started instructions in the same place.</p> <p>The team recently shipped <a href="">ASP.NET 5 beta 5</a>, <a href="">VS Code</a> and other <a href="">OmniSharp-enabled text editors</a>.</p> <h1><a name="dotnet-oss"></a>.NET Open Source</h1> <p>The RTM versions of the .NET Framework and .NET Core now contain changes from people outside of the core .NET team in Microsoft but by passionate and highly skilled developers working with us in the <a href="">coreclr</a> and <a href="">corefx</a> repos on GitHub. The Roslyn compiler is the same, with the <a href="">roslyn</a>.</p> <p.</p> <p: <a href="">1210</a>, <a href="">2344</a>, <a href="">3974</a>. We also employ automated scans over the changes to find problems that visual code reviews have missed. We have found a small set of issues this way and fixed them.</p> <p>All of the contributions to .NET Core, from both Microsoft and community members, are governed by the <a href="">.NET Foundation Contributor License Agreement</a>,.</p> <h1>Summary</h1> <p>Today's releases of the <a href="">.NET Framework 4.6</a> and <a href="">Visual Studio 2015</a>, and all the associated languages and components, provide major improvements to your development experience and the reliability and performance of your .NET apps. Please try out the new releases and tell us what you think.</p> <p.</p> <ul> <li>Roslyn shipped! Roslyn v1 was a 6 year language compiler project. It's now fully integrated into Visual Studio and is open source. It runs on the .NET Framework, .NET Core and Mono.</li> <li>RyuJIT shipped! RyuJIT v1 was a 5 year JIT compiler project. It's fully integrated into the .NET Framework and .NET Core and is also open source.</li> </ul> <p.</p> <p>Thanks to everyone who gave us feedback on our various milestone releases for the .NET Framework, .NET Core, Roslyn, ASP.NET 5, Visual Studio and other releases. We very much appreciate it.</p><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT] .NET Port Award<p>Recently, we shared quite a bit how we make progress on our open source journey (<a href="">Roslyn's First Year of Open Source</a> and <a href="">.NET Core Open Source Update</a>). We all feel highly privileged to be on the .NET team when all of this awesomeness happens!</p> <p>However, unless you're an active contributor on any of our OSS projects, you probably didn't have the opportunity to experience this first hand.</p> <p>Fortunately, you don't have to take our word for it. <a href="">Geoff Norton</a>, whom some of you probably know as <a href="">kangaroo</a>, did a <a href="">talk at .NET Fringe</a> about his experience on porting CoreCLR to OS X.</p> <h1>Community Thanks!</h1> <p>You <a href="">probably noticed</a> that the Roslyn team (thanks to the work of <a href="">Kasey Uhlenhuth</a>) sends out mugs for accepted pull requests:</p> > <p>When asked by the crowd, Geoff shared that he hadn't been given <strong>anything</strong> for his efforts: no mug, no shirt, no thank-you, not even a sticker. Little did he know that we came very well prepared to .NET Fringe.</p> <p>Since we knew that Geoff would attend .NET Fringe we decided that we'll take this conference as an opportunity to give out the first <strong>.NET Port Award</strong>: a SHA1 port glass, paired with a bottle of port, awarded in Portland. A match made in heaven!</p> <p>Sad to have missed out on the fun? You can watch the recording of the award ceremony on <a href="">Channel 9</a>:</p> <p><iframe src="" frameborder="0" width="640" height="360"></iframe></p> <h1>Wait -- OS X isn't the only port!</h1> <p>Totally true! In fact, there are three ports in progress right now:</p> <ul> <li>OS X</li> <li>Linux</li> <li>FreeBSD</li> </ul> <p>Want to port CoreCLR to something else? File an issue on <a href="">CoreCLR</a> to get the conversation started!</p> <h1>Update</h1> <p>Our community member <a href="">Geoffrey Huntley</a> pointed out that several more ports are already in-progress:</p> <ul> <li> <p><em><a href="">Solaris, OpenIndiana and SmartOS</a></em></p> </li> <li> <p><em><a href="">AmgiaOS, Alpine and Lilblue</a></em></p> </li> <li> <p><em>POWER (AIX [and|or] Linux) @ IBM will provide access to beefy hardware at no charge. See <a href="">this page</a> for more details.</em></p> </li> <li> <p><em>NetBSD/OpenBSD do not currently work on Microsoft Azure but the FreeBSD port team have virtual machines deployed and ready for whomever wants to step up to the challenge.</em></p> </li> <li> <p><em><a href="">FreeBSD @ github</a> and directly in Gitter (we are spread between multiple timezones)</em></p> </li> </ul> <p style="padding-left: 30px;"><em>If hacking on the CoreCLR is of interest then please introduce yourself over at <a href="">Gitter</a>.</em></p><div style="clear:both;"></div><img src="" width="1" height="1">Immo Landwerth [MSFT] you for your contributions<p><img src="" alt="" /></p> <p>Since going open source, <a href="">Roslyn</a> has had 45 and <a href="">Visual F#<.</p> <p>Over the past six months, some of you have received these small tokens of gratitude and some of you have seen evidence of them on Twitter (<a href="">#roslyn</a>, <a href="">#fsharp</a>). Every contribution—no matter how great or small—warranted a fun surprise in the mail. We have been absolutely thrilled with the pull-requests we have been receiving (shout outs: <a href="">BradBarnich</a>, <a href="">mrange</a>) as well as the engagement on language design. We cannot thank you all enough for joining us on <a href="">our exciting journey into open source</a>.</p> <p>This cup campaign was a team effort and came straight from the Roslyn and Visual F# offices to you. Let’s take a look “behind-the-scenes” (every step was done by our team!):</p> <p><img src="" alt="" /></p> <p!</p> <p>Now we are out of mugs so the team’s days of wandering down to <a href="">The Garage</a> to <a href="">laser etch<:</p> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Got a really sweet mug from the <a href="">#roslyn</a> team today, love the engraved SHA1! Thanks guys and gals! :) <a href="">pic.twitter.com/OBvTgFs2uH</a></p> — Josh Varty (@ThisIsJoshVarty) <a href="">February 23, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Loving my mug from the <a href="">#roslyn</a> team. Thanks everyone! <a href="">pic.twitter.com/uiCbTTKCl8</a></p> — Darren Blaby (@Giftednewt) <a href="">February 23, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr"><a href="">#Roslyn</a> team does not disappoint :D <a href="">pic.twitter.com/3IerpLO0dN</a></p> — Sam Harwell (@samharwell) <a href="">February 23, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> > <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Today a Cup<'t> arrived. I'm very pleased, thank you <a href="">@VisualFSharp</a> team! <a href="">#fsharp</a> <a href="">pic.twitter.com/EcyTXebY8J</a></p> — Max Malook (@max_malook) <a href="">March 2, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">I just got home and saw the other side of the <a href="">#Roslyn</a> mug... I'm speechless. :-D <a href="">pic.twitter.com/QB2BToTTGW</a></p> — Sam Harwell (@samharwell) <a href="">March 1, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Have a very nice day everyone! <a href="">#fsharp</a> <a href="">@VisualFSharp</a> <a href="">pic.twitter.com/PyonrLVoME</a></p> — Pierre Irrmann (@pirrmann) <a href="">March 3, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Thanks <a href="">@VisualFSharp</a> for the new mug! <a href="">pic.twitter.com/TH7S6KCNnX</a></p> — Patrick McDonald (@PaddyMcDonald) <a href="">March 3, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Seems tge <a href="">@VisualFSharp</a> team is joining my mother and my wife and telling me that I drink way to much coffee. ;-) <a href="">pic.twitter.com/W9xsb88j1S</a></p> — Steffen Forkmann (@sforkmann) <a href="">March 3, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">The cup is huge. I will drink twice as much coffee now. I blame you <a href="">@VisualFSharp</a> :-) <a href="">pic.twitter.com/fG1xcPGk2r</a></p> — Anh-Dung Phan (@dungpa) <a href="">March 5, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Thank you Rosyn team for amazing library. I just wish for CTP6 git tag and less internal API! <3 ya all! <a href="">pic.twitter.com/poqEOtsGZO</a></p> — David Karlaš (@davidkarlas) <a href="">March 9, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Thank you <a href="">@VisualFSharp</a> team! <a href="">#fsharp</a> <a href="">pic.twitter.com/0wsZzOGLK7</a></p> — Rodrigo Vidal (@rodrigovidal) <a href="">April 2, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Thx ! Totally my Cup<'t> ! Cheers ! <a href="">@VisualFSharp</a> <a href="">@dsyme</a> <a href="">#fsharp</a> <a href="">pic.twitter.com/VXz3zEHIBL</a></p> — Jérémie Chassaing (@thinkb4coding) <a href="">April 2, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr"><a href="">@VisualFSharp</a> Thanks guys, totally awesome gift! :) <a href="">pic.twitter.com/VNmt5ZOZXN</a></p> — Simon Dickson (@SimonHDickson) <a href="">April 2, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Proof of geekery. Thanks <a href="">@VisualFSharp</a>! <a href="">pic.twitter.com/FJYtBBj3cv</a></p> — Robert Jeppesen (@rojepp) <a href="">April 1, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr"><a href="">#Roslyn</a> and <a href="">@ThatVBGuy</a> Thank You for my Cup(Of T) <a href=""></a> <a href="">pic.twitter.com/G85GtUc6Ys</a></p> — Adam Speight (@AdamSpeight2008) <a href="">April 24, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">After a bad day I finally receive my loot for my epic remove commented out file from the CLR pull request. <a href="">pic.twitter.com/Y46lS8lFab</a></p> — stefansedich (@stefansedich) <a href="">April 27, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Made my day.. <a href="">@kuhlenhuth</a> <a href="">#roslyn</a> <a href="">pic.twitter.com/pVWKk6hxyx</a></p> — Petr Krebs (@petr_k) <a href="">April 28, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Quite chuffed with this. Very nice touch. Thanks. <a href="">@LincolnAtkinson</a> <a href="">@fsharporg</a> <a href="">pic.twitter.com/54f9yhAGPz</a></p> — Richard Dalton (@richardadalton) <a href="">May 10, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Thanks <a href="">#Roslyn</a> team and <a href="">@Microsoft</a> - <a href="">#happydev</a> <a href="">pic.twitter.com/bbfPjwCJbv</a></p> — Adam Tornhill (@AdamTornhill) <a href="">May 29, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <blockquote class="twitter-tweet" lang="en"> <p lang="en" dir="ltr">Love how the Microsoft teams send you gifts when you contribute. Roslyn mug is even better than the NuGet tshirt <a href="">pic.twitter.com/aoxutqpH9l</a></p> — Corin Blaikie (@corinblaikie) <a href="">May 28, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <p>Over ‘n’ out,</p> <p><strong>Kasey Uhlenhuth</strong>, Program Manager, <strong>.NET Managed Languages</strong></p><div style="clear:both;"></div><img src="" width="1" height="1">The .NET Team the day with a Visual Basic, C#, or F# T-Shirt!<p>Mads and Dustin showed off these stylish little numbers at BUILD and ever since we've been getting pinged by community members wanting to know where they can get their very own so they too can show their passion for their favorite programming language(s).</p> <p><em><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a><br />Who wouldn't want to look this awesome?</em></p> <p "<a href="">Just put it on CafePress.com already!</a>". Great idea, Jim! We actually lost the pattern but I've come up with a couple of different designs based on the original I want to share with you and hear what you think.</p> <p>I've always had a few problems with the original designs. They're obvious homages to Superman but the background is black instead of blue which is weird. What if the background were blue? Should the C# one be blue too?</p> <p><strong>The Classic Theme</strong> </p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a></p> <p:</p> <p><strong>The Light Theme</strong></p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a><br /> <em>Complete with F#.</em></p> <p><strong>The Dark Theme</strong></p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a><br /> <em>For those of us who code at night.</em></p> <p>And…</p> <p><strong>The Dark Theme (High Contrast)</strong></p> <p><a href=""><img style="display: block; margin-left: auto; margin-right: auto;" src="" alt="" border="0" /></a><br /> <em>For those of us who like the dark theme but with the white hot intensity of colors that figuratively literally scream at you like an ALL CAPS menu.</em></p> .</p> <p>Regards,</p> <p><strong>-ADG</strong></p> <p><strong>Anthony D. Green</strong>, Program Manager, <strong>Visual Basic, C#, and F# Languages Team</strong></p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Anthony D. Green [MSFT] 2015 .NET Security Updates<p>The .NET team released two security bulletins today as part of the monthly "Update Tuesday" cycle. </p> <p><a href="">Microsoft Security Bulletin MS15-044 - Critical</a><strong>, </strong>Vulnerability in .NET Framework Could Allow Remote Code Execution (<a href="">3057110<!-- CultureCode --><!-- CultureCode --></a>) </p> <p>This security update resolves vulnerabilities in Microsoft .NET Framework. The most severe of the vulnerabilities could allow remote code execution if a user opens a specially crafted document or visits an untrusted webpage that contains embedded TrueType RC on affected releases of Microsoft Windows.</p> <p>More details about the versions affected by this vulnerability can be found in the security bulletin <a href="">MS15-044</a>.</p> <p><strong></strong> </p> <p><a href="">Microsoft Security Bulletin MS15-048 - Important</a><strong>, </strong>Vulnerability in .NET Framework Could Allow Information Disclosure (<a href="">3057134</a>) </p> <p>This security update resolves vulnerabilities in Microsoft .NET Framework. The most severe of the vulnerabilities could allow elevation of privilege if a user installs a specially crafted partial trust application.<48< the .NET Framework 4.6 RC<p>The .NET Framework 4.6 is the latest version of the .NET Framework. The .NET Framework 4.6 exposes new APIs that you can use in your app or library. You can also use it to run existing apps.</p> <h1>Using Visual Studio 2015</h1> <p>You can target the .NET Framework 4.6 using Visual Studio 2015. Visual Studio 2015 targets the .NET Framework 4.5.2 by default, since the .NET Framework 4.5.2 has been deployed broadly (globally). Targeting the .NET Framework 4.5.2 is the best choice, unless you specifically need the new APIs in the .NET Framework 4.6.</p> <p>You can target the .NET Framework 4.6 by changing the <em>Target Framework</em> for your app or library, under Project Properties. See how to do that in the image below.</p> <p><a href=""><img src="" alt="" border="0" /></a></p> <h1>Using Visual Studio 2012 or 2013</h1> <p>You can target the .NET Framework 4.6 in Visual Studio 2012 and Visual Studio 2013. You can do that by installing the <a href="">.NET Framework 4.6 RC Targeting Pack</a> or installing Visual Studio 2015 on the same machine.</p> <p>You also need to install the <a href="">.NET Framework 4.6 RC</a> to run your app. It does not contain the targeting pack. The targeting pack and the framework are separate components.</p> <h1>On a Build Machine</h1> <p>You can target the .NET Framework 4.6 as part of your build, to build 4.6 apps and libraries. You can do that by installing the <a href="">.NET Framework 4.6 RC Targeting Pack</a>.</p> <h1>Using the .NET Framework 4.6</h1> <p>By targeting the .NET Framework 4.6, your app will require the .NET Framework 4.6 (or later) to run. You will need to deploy the <a href="">.NET Framework 4.6 RC</a> or rely on your users to do that. See the Deploying the .NET Framework 4.6 section below for more information on deployment.</p> <p>You can run existing apps built for the .NET Framework 4.0 and 4.5.x on the newer framework without making any changes to your apps. These existing apps will start using the .NET Framework 4.6 after it has been installed on a given machine. They will benefit from performance and reliability updates that are part of the .NET Framework 4.6</p> <h1>Deploying the .NET Framework</h1> <p>The <a href="">Deploying the .NET Framework and Applications</a> guide describes several options for deploying the .NET Framework. It is important to choose an approach for ensuring that your users have the version of the .NET Framework that is needed to run your app.</p> <p>As a last resort, your users will be prompted to download the .NET Framework if they attempt to run your app and it is not installed. They will be directed to the download location for the given version of the .NET Framework that they need.</p><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT] out Visual F# 4.0 in VS 2015 RC<p>Today marks the release of Visual Studio 2015 RC, which includes the latest updates to the Visual F# 4.0 language and tools. Download the RC <a href="" target="_blank">here</a>, and review the VS release notes <a href="" target="_blank">here</a>.</p> <p><a href="" target="_blank">Back in November</a>, we described the F# 4.0 features that were completed in time for the Visual Studio 2015 Preview build. New features like constructors as first-class functions, simplified mutable/ref values, a normalized collections API, and more were warmly received. We didn’t stop there, though!</p> <p>This post describes the F# 4.0 work that’s been completed since Preview. Together with the features announced earlier, this completes the <a href="" target="_blank">planned feature set</a> for F# 4.0. Bug fixes and performance optimizations will continue to be accepted up until VS 2015 RTM. The F# bits that ship with today’s RC build map to commit <a href="" target="_blank">76ae08d</a>.</p> <h4>Built by the F# community</h4> <p>Visual F# 4.0 was built completely in the open by F# community developers, in partnership with the Visual F# team at Microsoft. It represents the work of 38 contributors, over 75% of whom have no Microsoft affiliation.</p> <p>We extend a big thank you to all of the community devs who help us make F# and the Visual F# Tools great!</p> <h4> Feedback</h4> <p>As you try out VS 2015 RC, please log any F#-specific issues directly on <a href="" target="_blank">GitHub</a>. Feedback on Visual Studio itself can be sent through <a href="" target="_blank">Connect</a>, <a href="" target="_blank">UserVoice</a>, or as quick notes through <a href="" target="_blank">Send-a-Smile</a> directly in the IDE.</p> <p>Here’s what’s been added since the Preview build:</p> <h2>Language</h2> <h4>Implicit quotation of method arguments</h4> <p>Method arguments of type FSharp.Quotations.Expr<_> can now be marked with the [<ReflectedDefinition>] attribute, enabling automatic quotation of the corresponding argument expressions at callsites. This enables seamless syntax for various meta-programming scenarios that depend on access to the underlying AST, e.g. LINQ-style transformations or R-style plotting routines that label chart axes automatically.</p> <p>A simple example of using this feature in an expression printing method:</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="autoquote" src="" alt="autoquote" width="593" height="342" border="0" /></a></p> <h4>Extended preprocessor grammar</h4> <p>F# preprocessor directives now support the standard logical operators &&, ||, and !, as well as grouping via ( ). This leads to significantly cleaner conditionally-compiled code, and the ability to easily express conditions that previously required code duplication.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="preproc" src="" alt="preproc" width="528" height="187" border="0" /></a></p> <h4>Rational exponents in units of measure</h4> <p>F# <a href="" target="_blank">units of measure</a> were previously limited to support only integer exponents. With F# 4.0, units can now be raised to arbitrary rational powers. Certain physical sciences, such as electrical engineering, frequently use such fractional powers for units.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="rationalUOM" src="" alt="rationalUOM" width="530" height="122" border="0" /></a></p> <h4>Inheritance from types with multiple generic interface instantiations</h4> <p>Earlier versions of F# disallowed types from implementing the same .NET interface at multiple generic instantiations, or even inheriting from C# types that did so. This was due to complexities in how multiple generic instantiations would interact with F# type inference.</p> <p>Although the former restriction still applies, the restriction on inheritance has been lifted in F# 4.0. This significantly simplifies interoperability with certain C#-based APIs, an increasing number of which rely on such designs.</p> <p>It is also now possible to work around the remaining restriction by defining a small type hierarchy in pure F# code, each member of which implements an additional generic instantiation:</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="interface" src="" alt="interface" width="487" height="172" border="0" /></a></p> <h4>Non-nullable provided types</h4> <p><a href="" target="_blank">Provided types</a> which report the attribute [<AllowNullLiteral( false )>], will now be treated as non-nullable in the same way as standard F# types. The brings stronger safety guarantees to provided types and aligns them more closely with patterns and idioms from the rest of the language.</p> <h4>Multiple properties in [<StructuredFormatDisplay>]</h4> <p>F# types specifying custom %A string formatting via the [<StructuredFormatDisplay>] attribute were previously limited to interpolating just a single property value. StructuredFormatDisplay now supports an arbitrary number of interpolated properties, as well as escaped { } brackets.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="structuredformat" src="" alt="structuredformat" width="531" height="135" border="0" /></a></p> <h4>Extension properties in object initializers</h4> <p>F# supports not just extension methods, but extension properties, as well. Settable extension properties can now be assigned directly within F# object initializers, allowing for one-step initialization of data.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="extensionset" src="" alt="extensionset" width="449" height="203" border="0" /></a></p> <h4>Leading ‘Microsoft’ prefix optional</h4> <p>Although Microsoft publishes the Visual F# Tools for use in Visual Studio on Windows, the F# language itself is supported cross-platform by a variety of companies, and its primary advocate is the independent, community-governed <a href="" target="_blank">F# Software Foundation</a>.</p> <p>In this vein, to keep F# code itself vendor- and platform-neutral, the leading “Microsoft.” can now be optionally omitted when referring to namespaces, modules, and types from the FSharp.Core runtime.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="msftprefix" src="" alt="msftprefix" width="251" height="58" border="0" /></a></p> <h2>Runtime</h2> <h4>Optimized non-structural comparison operators</h4> <p>By default, the =, <>, <, >, <=, >=, compare, min, and max operators in F# represent <em>structural</em> equality and comparison. This is a sensible choice for a data-oriented functional language, but presents problems in some scenarios. In particular, standard .NET operator overloads methods like ‘op_Equality’, ‘op_LessThan’, etc become difficult to use from F#, and performance can be significantly degraded, especially for non-primitive value types.</p> <p>In F# 4.0, a new module of non-structural operators has been created. By opening the new ‘NonStructuralComparison’ module, these operators are brought into scope and used instead of the default structural operators.</p> <p><a href=""><img style="margin-right: auto; margin-left: auto; display: block;" src="" alt="" border="0" /></a></p> <h4>Async extensions to WebClient</h4> <p>Two new extension methods for System.Net.WebClient have been added to the FSharp.Control.WebExtensions module, allowing for easier usage of WebClient within F# <a href="" target="_blank">async workflows</a>:</p> <ul> <li>WebClient.AsyncDownloadFile</li> <li>WebClient.AsyncDownloadData</li> </ul> <h4>Interop APIs for Option</h4> <p>A handful of new convenience APIs have been added for converting between possibly-null .NET types and F# option types.</p> <ul> <li>Option.toNullable: option:'T option -> Nullable<'T></li> <li>Option.ofNullable: value:Nullable<'T> -> 'T option</li> <li>Option.ofObj: value: 'T -> 'T option when 'T : null</li> <li>Option.toObj: value: 'T option -> 'T when 'T : null</li> <li>Operators.tryUnbox : value:obj -> 'T option</li> <li>Operators.isNull : value:'T -> bool when 'T : null</li> </ul> <h4>Quotation active pattern for Decimal values</h4> <p>There is now a dedicated active pattern in the runtime for deconstructing F# quotations which contain constant System.Decimal values. This previously required some awkward matching on compiler-internal function names.</p> <h2>IDE</h2> <h4>Script debugging</h4> <p>One of the most productivity-boosting aspects of the F# development process is the lightweight, iterative REPL experience. As scripts and code snippets get larger and more complex, however, they can become difficult to debug. In the past, there was no way to attach the Visual Studio debugger directly to an F# script, so debugging options were limited and cumbersome: “printf debugging”, or refactoring one’s script into a full-fledged project.</p> <p>In VS 2015, you can now debug F# scripts directly. The debugger can be attached to the current F# Interactive session through context menus in the editor or the F# Interactive window itself.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="debug" src="" alt="debug" width="640" height="226" border="0" /></a></p> <h4>Integrated project up-to-date check</h4> <p>When building a multi-project solution, Visual Studio developers are accustomed to seeing a summary of how many project builds “succeeded”, “failed”, or were already “up-to-date.” The F# project system never implemented the logic to detect the “up-to-date” status, so full builds were always launched. This was a minor performance issue, and caused F# projects to always be reported as “succeeded” or “failed.”</p> <p>In VS 2015, F# projects now fully support the integrated up-to-date check, and properly report their status in the build summary.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="uptodate" src="" alt="uptodate" width="584" height="120" border="0" /></a></p> <p> </p> <h4>Intellisense in object initializers</h4> <p>Intellisense integration has been improved for F# object initializer expressions. Within an object initializer, the completion list (triggered by Ctrl+Space) will now contain the settable properties one can initialize.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="objinit" src="" alt="objinit" width="563" height="98" border="0" /></a></p> <h4>Intellisense for named parameters</h4> <p>Similar to above, F# intellisense now gives auto-completions to assist with using named parameters to methods.</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="namedparam" src="" alt="namedparam" width="302" height="65" border="0" /></a></p> <h4>Bug fixes around folder support</h4> <p>The F# project system does not implement native support for organizing code into folders. The popular <a href="" target="_blank">Visual F# Power Tools</a> extension adds this functionality, but occasionally encounters bugs that require fixes to the project system itself.</p> <p>Various fixes in this area have been made in VS 2015 in order to better support the Power Tools. You can also now open F# project folders directly from solution explorer:</p> <p><a href=""><img style="background-image: none; float: none; padding-top: 0px; padding-left: 0px; margin-left: auto; display: block; padding-right: 0px; margin-right: auto; border: 0px;" title="folder" src="" alt="folder" width="437" height="345" border="0" /></a></p> <p> </p> <p> </p> <p>We hope you enjoy the new capabilities in Visual F# 4.0!</p> <p>-The <a href="mailto:Team@VisualFSharp" target="_blank">@VisualFSharp</a> Team</p><div style="clear:both;"></div><img src="" width="1" height="1">Visual FSharp Team [MSFT] Announcements at Build 2015<p><strong>Updated (July 2015)</strong><span>: See </span><a href="">Announcing .NET Framework 4.6</a><span> to read about the final version of the .NET Framework 4.6.</span></p> <p>At the Build conference today, Scott Guthrie announced the .NET Framework 4.6 RC and Visual Studio 2015 RC. He also announced important updates for .NET Windows 10 apps, ASP.NET 5 and .NET Core. You can download and try out the releases now:</p> <ul> <li><a href="">Visual Studio 2015 RC</a></li> <li><a href="">.NET Framework 4.6 RC</a></li> <li><a href="">.NET Core (on GitHub)</a></li> <li><a href="">ASP.NET 5 (on GitHub)</a></li> </ul> <p>As a team, we're really excited to share everything we've been working on:</p> <ul> <li><a href="#dotnetcore">.NET Core - for device and cloud</a></li> <li><a href="#dotnet46">.NET Framework 4.6</a></li> <li><a href="#dotnetlang">.NET Languages - F#, C#, VB</a></li> <li><a href="#aspnet">ASP.NET</a></li> <li><a href="#entityframework">Entity Framework</a></li> <li><a href="#visualstudio">Visual Studio Improvements for .NET</a></li> </ul> <p>We've been working on these releases for a couple years now. Please do check out the earlier <a href="">.NET Preview releases</a> that we announced at Preview in November.</p> <p>We are also exited to share that <a href="#xamarin">Xamarin Starter is now included in Visual Studio 2015</a>.</p> <p><a name="dotnetcore"></a></p> <h1>.NET Core - for Device and Cloud</h1> <p>. </p> <p>Today, the <a href="">.NET Core Framework</a>.</p> <p>All of the .NET Core libraries are distributed as NuGet packages. You can acquire the packages easily within Visual Studio or with one of the NuGet clients directly.</p> <h2><a name="efficient"></a>Self-contained and Efficient</h2> <p.</p> <p.</p> <p>.NET Native is currently scoped to Windows 10 Universal apps, however, it is intended to fit in as a deployment option for .NET Core apps generally.</p> <h2>Cross-Platform</h2> <p>.NET Core supports Windows, OS X and Linux. You can write Universal apps on Windows 10 and ASP.NET 5 and Console apps on all of the three OSes. FreeBSD support is in progress, <a href="">led by the .NET open source community</a>. We expect that the community will create other OS ports. In fact, the <a href="">OS X</a> port has also been community led.</p> <p>.NET Core supports x86, x64 and ARM CPUs in order to support device, cloud and console app scenarios. We expect that to see more chips come online, particularly given the <a href="">LLILC</a> LLVM integration project. LLILC will (in theory) make it possible to port .NET Core to all of the chips that LLVM supports.</p> <h2>Open Source</h2> <p>The <a href="">.NET Core</a> is open source on GitHub. You can look at the code and even make contributions. We have received many great contributions over the last number of months. Thanks!</p> <p>The <a href="">.NET Core Framework</a> team are in the process of publishing all of their code on GitHub and are now over half-way done. You can check out their progress in the image below.</p> <p><img style="max-width: 100%;" src="" alt="CoreFX Progress" /></p> <h2>PartsUnlimited</h2> <p>The team built and released a demo of a .NET Core app, using ASP.NET 5. The demo is called <a href="">PartsUnlimited</a> and is open source on GitHub. It runs on Linux, OS X and Windows.</p> <p>Parts Unlimited is an ecommerce app for a ficticious company, based on a website of the same name in <a href="">The Phoenix Project</a>. The website includes product listings by category, product details, shopping cart, order history, product recommendations, search, and more.</p> <p>Both the demo and .NET Core are under development. The <a href="">master branch</a> of the repo supports .NET Core and ASP.NET 5 beta 4, and runs on Windows. The <a href="">beta 5 branch</a> supports .NET Core and ASP.NET 5 beta 5 and runs on Windows, OS X and Linux. You can try out both branches, although you may encounter a beta 5 build that doesn't work as expected.</p> <p>Key Features:</p> <ul> <li>Works with Visual Studio 2015 RC</li> <li>ASP.NET 5 support for Linux and Mono</li> <li>Includes a Dockerfile and sample publishing profile to publish to a Docker container</li> <li>Entity Framework code-first using SQL Azure or an in-memory database (Mono)</li> <li>Includes Azure RM JSON templates and PowerShell automation scripts to easily build and provision your environment</li> </ul> <p><a name="dnx"></a></p> <h1>.NET Execution Environment (DNX)</h1> <p>The <a href="">.NET Execution Environment (DNX)</a>.</p> <p>DNX provides the following benefits:</p> <ul> <li>Create single application that can work on multiple operating without cross compiling (Windows, Mac, Linux)</li> <li>Create applications that can run from source without a build step enabling development with just simple text editors (Sublime, Emacs, VIM, Visual Studio Code).</li> <li>Enables debugging from source for referenced NuGet packages.</li> <li>Straightforward acquisition of .NET runtimes (e.g. .NET Core).</li> <li>Manage multiple .NET runtimes on a single machine both globally or app centric including security updates.</li> <li>Supports ASP.NET 5 and .NET Core console app workloads.</li> </ul> <p 'KRE' before), but that's purely historical.</p> <p>Note: DNX is not the only SDK for .NET Core. .NET Native, for example, is another one.</p> <h2>DNX Tools and Concepts</h2> <p>There are several pieces to DNX:</p> <ul> <li>DNX (distribution): A distribution (a NuGet package) of the components that are the implementation of the new environment. <ul> <li>The .NET Core DNX distribution includes CoreCLR and the base parts of CoreFX.</li> <li>The .NET Framework and Mono DNX distribution only contain the DNX components.</li> </ul> </li> <li>DNVM: A tool for aquiring and managing DNX distributions. Not part of DNX itself, since it plays an admistrator role for DNX.</li> <li>DNU: The NuGet client for DNX. NuGet.exe is not used.</li> <li>DNX (commandline tool): A eponymously named tool that controls various app operationals, primarly launching.</li> </ul> <h2>Using DNX</h2> <p.</p> <p>In a typical workflow, you do the following (each of which is a simple command):</p> <ul> <li>Acquire DNVM.</li> <li>Acquire the desired runtime flavor (e.g. X64 .NET Core for OS X) with DNVM.</li> <li>Write or git clone app source, using DNX concepts (e.g. project.json).</li> <li>Restore packages for the app, using DNU (part of the DNX distribution).</li> <li>Launch app from source: <code>dnx . run</code>.</li> </ul> <p>You can learn how to try DNX yourself, with the following instructions:</p> <ul> <li><a href="">ASP.NET 5 apps</a></li> <li><a href="">.NET Core console apps</a></li> </ul> <p><a name="dotnetlang"></a></p> <h1>.NET Language Updates</h1> <h2>C# 6 and VB 14</h2> <p>With the RC release of Visual Studio 2015 RC, both Visual Basic 14 and C# 6 are language complete. In this release we've made several improvements to the languages and IDE to reduce boilerplate and respond to top customer feedback. Some highlights include:</p> <ul> <li>String interpolation: An intuitive String.Format-like syntax for composing strings from templates with inline expressions.</li> <li>The Null-Conditional operator (?.): A streamlined syntax for conditionally accessing a member or invoking a method on a value if it's non-null and returning null if the object is null instead of throwing a NullReferenceException.</li> <li>The NameOf operator: A rename-safe way to refer to the name of a code element such as in PropertyChanged events and ArgumentExceptions.</li> <li>Read-only Auto-Properties: A concise syntax for declaring properties which may only be assigned in their initializers or inside of a constructor.</li> </ul> <p>Read about these features and many more on the <a href="">VB</a> and <a href="">C#</a> team blogs.</p> <p>We've also made several improvements to the debugging experience including support for using lambda and query expressions inside of the debugger windows and Edit and Continue support in and around lambda expressions, simple queries, as well as inside Async and iterator methods.</p> <p <a href="">GitHub</a>.</p> <h2>Visual F# 4.0</h2> <p <a href="">76ae08d</a> in the <a href="">F# repo on GitHub</a>.</p> <p>F# 4.0 includes major new enhancements across the language, the runtime and the IDE experience. The following features are a small selection of what's included:</p> <ul> <li>constructors as first-class functions</li> <li>simplified mutable/ref values</li> <li>a normalized collections API</li> <li>Leading ‘Microsoft’ namespace optional</li> <li>Async extensions to WebClient</li> <li>Implicit quotation of method arguments</li> <li>Script debugging</li> <li>Intellisense in object initializers</li> </ul> <p>Check out the more indepth blog posts from the F# team on F# 4.0 release, for <a href="">F# 4.0 RC</a> and <a href="">F# 4.0 Preview</a>.</p> <p><a name="visualstudio"></a></p> <h1>Visual Studio Improvements for .NET</h1> <p>The Visual Studio Team has added some key improvements for .NET in the RC release. There were many additional <a href="">Visual Studio improvements for .NET in the Preview release</a> that you can also try out.</p> <h1>Debugger Improvements</h1> <p>Visual Studio 2015 addresses many requests that you have made for improving your debugging life, such as <a href="">lambda debugging</a>, <a href="">Edit and Continue (EnC) improvements</a>, <a href="">child-process debugging</a>, as well revamp core experiences such as <a href="">powerful breakpoint configuration</a> and introduce a <a href="">new Exceptions Settings tool window<1>More EnC - Lambda and Async Task support</h1> <p.</p> <p>You can now use EnC with lambdas, async methods, LINQ and a few other situations. Given today's coding patterns, that's a huge jump forward for EnC usability. You can check out the set of <a href="">EnC improvements the team already released</a> in Visual Studio CTP 6.</p> <p>The following screenshot demonstrates the new support. There are two separate lines that were typed using this new support, one in an async method and the other in a lamda.</p> <p><a href=""><img src="" alt="" border="0" /></a></p> <p>The following scenarios are now supported:</p> <ul> <li>Async functions</li> <li>Lambda expressions</li> <li>LINQ queries</li> <li>Iterator functions</li> </ul> <p>EnC improvements are still in progress and we are working to support more scenarios (and providing more documentation). Please <a href="">file any issues</a> you come across on our GitHub.</p> <blockquote> <p><strong>Note:</strong> If you don't understand why an edit fails, try checking the Error List. There are explanations for errors there that should help clarify issues. Please file an issue if you find these messages confusing or if they do not exist for your error.</p> </blockquote> <p>Read more about <a href="">earlier EnC improvements</a> from Visual Studio 2015 CTP 6, such as modifying iterators, async/await, methods, etc.</p> <h1>WPF - Live Visual Tree</h><a href=""><img src="" alt="" border="0" /></a></p> <p><a name="xamarin"></a></p> <h1>Xamarin Starter now Included in Visual Studio</h1> <p>Xamarin is a great way to start building iOS and Android apps in C# or F# within Visual Studio. <a href="">Xamarin Starter Edition</a> is now included as a free optional feature within Visual Studio 2015. Many .NET developers are using Xamarin to increase the reach of their apps and development effort to iOS and Android. According to a recent blog post, Xamarin has been downloaded by <a href="">1 Million unique developers</a>. That's a lot.</p> <p>To install Xamarin with Visual Studio 2015, select the <em>Custom</em> installation option. Select the displayed checkbox below.</p> <p><a href=""><img src="" alt="" border="0" /></a></p> <p>There are several additional application templates that are available for you to use after installing Xamarin Starter edition, for iOS (displayed below) and Android.</p> <p><a href=""><img src="" alt="" border="0" /></a></p> <p>You can use Xamarin Starter editon as long as you want, build apps, test on devices and publish to app stores. You can start a <a href="">Xamarin Business trial</a> to try out the richer experience. You can always return to Xamarin Starter Edition after that.</p> <p><a name="aspnet"></a></p> <h1>ASP.NET Updates</h1> <p>The ASP.NET team has been busy since Preview, with ASP.NET 4.6 and ASP.NET 5. You can see all of the ASP.NET updates in <a href="">Visual Studio 2015 RC on the webdev blog</a> and <a href="">Updates for ASP.NET 4.6 – Web Forms/ MVC 5/ Web API 2</a>. You can also see the updates from the earlier <a href="">ASP.NET CTP 6 update</a>.</p> <h2>ASP.NET 5 Project</h2> <p>The team has a lot of focus on the ASP.NET 5 project. There are some important new updates in the RC described below and in the <a href="">webdev blog</a>. At this point, most of the attention is on fit-and-finish, performance and reliability. We've talked to many customers that want to start deploying it on both Windows and Linux. There is also a lot of interest in OS X, particularly with the recently announced <a href="">Visual Studio Code</a>. It's our goal to make this incredible new ASP.NET scenario available to you as soon as we can.</p> <p>The team also recently announced <a href="">support for Visual Basic</a> in ASP.NET 5.</p> <h2><span style="font-size: 1.5em;">Updated New Project Dialog</span></h2> <p>The addition of ASP.NET 5 as new separate version of ASP.NET motivated the team to re-work the ASP.NET <em>New ASP.NET Project</em>.</p> <p><a href=""><img src="" alt="" border="0" /></a></p> <h2>Missing NuGet Packages - No Longer</h2> <p.</p> <p>In the example below, the XDocument type (just the text <code>XDocument</code>) is resolved to its type definition with a simple "<code>CTRL ."</code>. The using statement is added to the file and the System.Xml.XmlDocument NuGet package.</p> <p><a href=""><img src="" alt="" border="0" /></a></p> <p>We wanted to make it easy to copy some code from <a href="">StackOverflow</a>, for example, and resolve type and package references quickly and easily. Please tell us if we've achieved that goal.</p> <h2>Enabling the .NET Compiler Platform (“Roslyn”) in ASP.NET applications</h2> <p>You can use the new language features of C# and VB in any ASP.NET 4.6 project. The Web Forms templates in VS 2015 have the <a href="">Microsoft.CodeDom.Providers.DotNetCompilerPlatform package</a> pre-installed. For VS 2015 RTM, it will be installed in all templates. Read <a href="">Enabling the .NET Compiler Platform (“Roslyn”) in ASP.NET applications</a> post for more details.</p> .</p> <p>The browser and the webserver (IIS on Windows) do all the work. You don't have to do any heavy-lifting for your users. Most of the <a href="">major browsers</a> support HTTP/2, so it's likely that your users will benefit from HTTP/2 support if your server supports it. Give it a try with the RC update.</p> <h2>Support for Token Binding Protocol</h2> <p>Microsoft and Google have been collaborating on a new approach to authentication, called the <a href="">Token Binding Protocol</a>. The premise> <p><a name="dotnet46"></a></p> <h1>.NET Framework 4.6</h1> <p>Today's release is .NET Framework 4.6 RC. It's the first "Go Live" release for the 4.6 release. Please install it and start trying it out. You can read about the <a href="">.NET Framework Preview release</a>, which we shipped in November.</p> <h2>RyuJIT</h2> ).</p> <p>The project was initially targeted to improve high-scale 64-bit cloud workloads, although it has much broader applicabilty. We also do expect to add 32-bit support in a later release.</p> <p>RyuJIT is on by default for 64-bit processes running on top of the .NET Framework 4.6. Your app will run in a 64-bit process if it is compiled as 64-bit or AnyCPU,="">many blog posts on RyuJIT</a>, try out several RyuJIT CTPs and (suprise!) you can now even read and contribute to the <a href="">RyuJIT source code</a>. Thanks to everyone who helped improve RyuJIT along the way to RC. It's very close to going into use and improving the performance of many production workloads.</p> <h2>Garbage Collector Update</h2> <p>The Garbage Collector has a new mode that attempts to avoid garbage collection while certain memory-related conditions are met. This new mode is important for workloads that require uninterupted computation (at least as it relates to GC CPU use).</p> <p>The new mode enables you to <a href="">specify a certain amount of memory be available</a> as a pre-requisite to enter a <em>No GC Region</em>. While in the region the GC will not collect. It. In this update, the team has added support to use CNG certificate keys with the <a href="">X509Certificate class</a>.</p> <p>This update is the first step towards broader support for the Windows CNG API and for more modern cryptography algorithms generally. Note that team is still in the middle of building this new support, so expect the API to change for RTM.</p> <h2>Compatibility Switches</h2> <p>AppContext appropraitely> <p><a name="entityframework"></a></p> <h1>Entity Framework</h1> <p <a href="">today's announcement post from the EF team</a>.</p> <h2>Entity Framework 6.1.3</h2> <p.</p> <p>The EF runtime will be installed if you create a new model using the Entity Framework Tools in a project that does not already have the EF runtime installed.</p> <p>The runtime is pre-installed in new ASP.NET projects, depending on the project template you select. The EF6.1.3 Tools for Visual Studio 2015 are included to make sure you get the latest bug fixes and improvements.</p> <h2>Entity Framework 7 Beta 4</h2> <p.</p> <p>EF7 can be used in the .NET Framework, .NET Core (including ASP.NET 5) and Mono apps.</p> <p>Here is a rough guide to what currently works in Beta 4. Most of these features are a work-in-progress and still have limitations.</p> <ul> <li>Basic modeling including built-in conventions, table/column mapping, and relationships< commands</li> <li>An early preview of reverse engineering a model from a database</li> <li>Logging</li> <li>Unique constraints including the ability to use them as keys in a relationship</li> </ul><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT] 2015 .NET Security Updates<p>The .NET team released a security bulletin today as part of the monthly “patch Tuesday” cycle.</p> <p><a href="">Microsoft Security Bulletin MS15-041 - Important</a><strong>, </strong>Vulnerability in .NET Framework Could Allow Information Disclosure (<a href="">3048010</a>) </p> <p.<41< Journey Through Open Source: The Trials & Triumphs in Roslyn's First Year of Open Source<p style="padding-left: 30px;"><em>This post is written by <strong>Kasey Uhlenhuth </strong>a <strong>Program Manager</strong> on the Managed Languages Team.</em></p> <div style="margin: 10 0 0 0;"> <div style="margin: 10 0 0 0;"> <blockquote style="font-family: Georgia, serif; font-size: 18px; font-style: italic; width: auto; margin: 0 auto; padding: 0.35em 40px; line-height: 1.45; position: relative; color: #383838; border-bottom: 3px solid #ccc; display: table;"> <p>"I am looking for someone to share in an adventure."</p> <span>— <cite style="font-size: 80%;"><em><a href="">Gandalf, The Hobbit, J.R.R. Tolkien</a></em></cite></span></blockquote> </div> <p><img src="" alt="" /></p> <p>On April 3, 2014, Anders Hejlsberg set us on our open source journey when he <a href="">made the .NET Compiler Platform (aka “Roslyn”) source code public</a> <span>live on stage in San Francisco</span>. Without much open source experience to guide us (or a Grey Wizard), we anxiously yet excitedly hit the open roads. This post details the real and true story of the trials and triumphs we’ve experienced in Roslyn’s first year of open source.</p> <h1>The Call to Adventure</h1> <p <a href="">re-architecting our compiler as a platform</a>..</p> <h1>Crossing the Threshold</h1> <p.</p> <p>Despite this dream of going open source, we didn’t cross the threshold until about five years later. We saw the success the F#, ASP.NET, and TypeScript teams experienced with their open source approach and we quickly followed in their footsteps at Build in 2014.</p> <h1>Trial and First Failure</h1> <p.</p> <p>Naturally, we didn’t avoid all the cobwebs or snowstorms in our pathfinding. But our open strategy allowed us to catch our mistakes early and adjust to find better ways to manage our project. For example, we now publish our <a href="">language design notes each week</a>, test our external build scripts, and increase our responsiveness to <a href="">issues/questions</a></p> <h1>Meeting Allies</h1> <p>Along the way we have made some great allies (we love dwarves, hobbits, elves, and mankind!) who have really helped and supported us along this journey. The immediate community feedback on our Language Design Notes was vital in some of the decisions we made for VB14 and C# 6.0. For example, <a href="">this discussion</a> made us realize we had to yank primary constructors. We also gained valuable insights into <a href="">null-conditional operator syntax</a> and <a href="">string-interpolation</a>.</p> <p>Furthermore, the community has already made some truly awesome tools. Here is a sample of community-driven projects that are a part of The Fellowship of the Roslyn Project:</p> <ul> <li><a href="">C# Pad</a> is an interactive shell that lets you run and execute C# in your browser.</li> <li><a href="">CodeConnect.io</a> creates visualizations of your call stack at design-time and includes refactoring and search features.</li> <li><a href="">DuoCode</a> cross compiles your C# 6.0 code into JavaScript code</li> <li><a href="">LINQPad.CodeAnalysis</a> is a library that adds capabilities to LINQPad to make it easier to work with Roslyn.</li> <li><a href="">Mono/Roslyn</a> is a cross-platform, open source implementation of the .NET Framework.</li> <li><a href="">OzCode v2.0</a> uses Roslyn to provide magical debugging experiences.</li> <li><a href="">Scrawl</a> by FluentCo.de is a light-weight editor for modern web developers.</li> <li><a href="">scriptcs</a> is an open source project that lets you use C# as a scripting language and provides a command-line C# REPL.</li> <li><a href="">Try Roslyn</a> demonstrates Roslyn and shows how to repro a compiler bug.</li> <li><a href="">WebEssentials Markdown editor</a> is powered by Roslyn.</li> </ul> <p>Based on the quantity and quality of projects in the community that use Roslyn, it is clear we made the right move by open sourcing our platform.</p> <h1>Growth & New Skills</h1> <p>We originally put our source code on CodePlex because it was fast and easy to do--as well as the established “place” for Microsoft open projects at the time. However, many of you told us that you would prefer GitHub as a platform and so <a href="">we moved our source there</a>.</p> <p>When we moved we also changed our workflow to follow some of the best practices we saw <a href="">other Microsoft teams</a>).</p> <p).</p> <p <a href="">milestones</a> to communicate our prioritization—which is always up for discussion or <a href="">up-for-grabs</a>.</p> <h1>Transformation</h1> <p>Moving to a workflow that has our team working in the open as much as possible and using the same contribution model as the community has allowed us to almost double our community engagement in a third of the time:</p> <p><img src="" alt="" /></p> <p>Our user base for Roslyn is also steadily increasing (the more the merrier!). This graph illustrates how our pull-request users and issue-logging users are growing since our new model:</p> <p><img src="" alt="" /></p> <p>Here are some additional statistics on our response-rates and engagement that we track to ensure our core team is suitably prioritizing acceptance of code over net-new development:</p> <p><img src="" alt="" /></p> <p>Our response rate for issues is half that for pull-requests—which is expected due to our high volume of incoming issues! Regardless, we are happy with these numbers and are always looking for ways to make them better.</p> <p>We have also been open sourcing more code. Since January we have open sourced three more components of our platform:</p> <ul> <li><a href="">Scripting</a></li> <li><a href="">Expression Evaluators</a></li> <li><a href="">Visual Studio Language Services</a></li> </ul> <p>Our <a href="">Interactive design meeting notes</a> are also now in the open.</p> <h1>Acceptance</h1> <p:</p> <blockquote class="twitter-tweet" lang="en"> <p>I inspired a new issue label in <a href="">#Roslyn</a>! <a href=""></a></p> — Schabse Laks (@Schabse) <a href="">March 15, 2015</a></blockquote> <script charset="utf-8" type="text/javascript" src=""></script> <p?</p> <p>To top it off, we are working on <a href="">making it easier for F5 builds to pick up the changes you made to the compiler and workspace layer</a>.</p> <h1>The Road Ahead</h1> <p <a href="">Gitter</a>, <a href="">GitHub</a>, <a href="">Twitter</a>, etc. So thank you all for being our very own Samwise Gamgees and wish us well on the remainder of this journey! For those of you who haven’t joined our fellowship, <a href="">check us out</a> and help us on our journey into Year Two.</p> <blockquote style="font-family: Georgia, serif; font-size: 18px; font-style: italic; width: auto; margin: 0 auto; padding: 0.35em 40px; line-height: 1.45; position: relative; color: #383838; border-bottom: 3px solid #ccc; border-top: 3px solid #ccc; display: table;"> <p>"End? No, the journey doesn't end here."</p> <span>— <cite style="font-size: 80%;"><em><a href="">Gandalf, The Lord of the Rings, J.R.R. Tolkien</a></em></cite></span></blockquote> <p>Over 'n' out <br /><strong>Kasey Uhlenhuth</strong>, Program Manager, <strong>Managed Languages Team</strong></p> </div><div style="clear:both;"></div><img src="" width="1" height="1">Immo Landwerth [MSFT] grows up<p>It's been over seven years since the <a href="">Prism</a> project started, led by the <i><a href="">patterns & practices</a></i> team. It was originally known as the <i>Composite Application Library</i>. Last year, we celebrated our <a href="">5th official release of Prism</a>. The two major focuses of that release were: composability and ease of community contribution.</p> <p>From the beginning, Prism has had a strong community focus. This hasn't simply been a matter of surveys and user studies, but frequent discussions and regular code reviews with community members and industry experts. Now it's time to take Prism to the next level.</p> <p>We are very pleased to announce that we are officially transferring ownership of our Prism projects to three passionate and dedicated community members.</p> <ul> <li><b>Brian Lagunas</b>. Brian is a Microsoft MVP whose involvement in the project started with Prism 2. Brian frequently speaks about Prism at various events and conferences, as well as provides professional training on Prism. He has authored courses about Prism for Pluralsight.</li> <li><b>Ariel Ben Horesh</b>. Ariel has been involved with Prism since its first release. A longtime advisor and advocate, he has built dozens of applications using Prism. More recently he was an early tester and advisor for <i>Prism for Windows Runtime </i>and he has several Prism-based apps in the Windows Store.</li> <li><b>Brian Noyes</b>..</li> </ul> <p>There is a strong continuity for the project since the new owners have been significant contributors since the beginning. As part of this hand-off, the projects will be consolidated in the <a href="">PrismLibrary org on GitHub</a>. Likewise, discussions are underway about bringing Prism into the <a href="">.NET Foundation</a>. This change of ownership means that future releases of Prism will be developed by the new team, not by Microsoft.</p> <p>You can read about the Prism team's plans on their <a href="">official announcement</a>. Be sure to star their repos and show them your support. We're excited about Prism's future.</p> <blockquote> <p><a href=""><em>Christopher Bennage</em></a><em> (</em><a href=""><em>@bennage</em></a><em>), a member of the </em><a href=""><em>Microsoft patterns & practices</em></a><em> team, with a focus on developer guidance.</em></p></blockquote><div style="clear:both;"></div><img src="" width="1" height="1">The .NET Team Engine is now Open Source on GitHub<p>Today we are pleased to announce that <a href="">MSBuild</a> is now available on <a href="">GitHub</a> and we are contributing it to the <a href="">.NET Foundation</a>! The <a href="">Microsoft Build Engine (MSBuild)</a> is a platform for building applications. By invoking msbuild.exe on your project or solution file, you can orchestrate and build products in environments where Visual Studio isn't installed. For instance, MSBuild is used to build the <a href="">.NET Core Libraries</a> and <a href="">.NET Core Runtime</a> open source projects.</p> <p><a href=""><img style="margin-right: auto; margin-left: auto; display: block;" src="" alt="" border="0" /></a></p> <p <a href="">Visual Studio 2015</a> installed in order to build the first time.</p> <p.</p> <h3>Walkthrough</h3> <p><strong>Build the Source Tree</strong></p> <p>The first scenario you might want to try is building the source tree. To do this, you will need to have Visual Studio 2015 installed on your machine. From a Developer Command Prompt, run the following:</p> <p><code>git clone</code></p> <p><code>cd msbuild </code></p> <p><code>build.cmd</code></p> <p><strong>Build a Console App</strong></p> <p>To build an app, you'll first want to run the BuildAndCopy.cmd script we included in the root folder of the source. This will build the sources and create a copy of your build output with everything you need. Again from a Developer Command Prompt, run this command from your MSBuild source location: <code>BuildAndCopy.cmd bin\MSBuild true</code> Now, to build a simple .NET Core console application, try the following:</p> <p><code>cd ..\ </code></p> <p><code>git clone </code></p> <p><code>.\msbuild\bin\MSBuild\MSBuild.exe .\corefxlab\demos\CoreClrConsoleApplications<br />\HelloWorld\HelloWorld.csproj </code></p> <p><code>.\corefxlab\demos\CoreClrConsoleApplications\HelloWorld\bin\Debug\HelloWorld.exe</code></p> <p><img style="margin-right: auto; margin-left: auto; display: block;" src="" alt="" border="0" /></p> <h1>Summary</h1> <p>MSBuild is the default build engine for Visual Studio and the .NET community on the Windows platform. Through open sourcing MSBuild we are responding to community <a href="">feedback</a> and we intend to make it the best choice for .NET developers on the Linux and Mac platforms.</p> <p>You can learn about the opportunities to get involved <a href="">here</a>. We look forward to your comments and hearing from you on the <a href="">.NET Foundation Forums</a>!</p><div style="clear:both;"></div><img src="" width="1" height="1">Rich Lander [MSFT]
http://blogs.msdn.com/b/dotnet/atom.aspx
CC-MAIN-2015-48
refinedweb
29,346
58.18
- Advertisement CGameProgrammerMember Content count4264 Joined Last visited Community Reputation640 Good About CGameProgrammer - RankContributor Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer replied to CGameProgrammer's topic in Graphics and GPU ProgrammingI'm too lazy right now to make the code neatly arranged so attached is the quick'n'dirty code. It's a C# console application that reads an image, figures out the required points, and outputs an image showing them. It also outputs intermediate images showing the steps taken to get to that point. I've also attached one of these sample output images. EDIT: Apparently it doesn't let me attach source code files. Well here is the entire file: using System; using System.Collections.Generic; using System.Drawing; using System.Drawing.Imaging; namespace Vector { struct Point { public static readonly Point Zero = new Point(0,0); public double X; public double Y; public Point (double x, double y) { X = x; Y = y; } public override string ToString () { return "{" + X + "," + Y + "}"; } } class Polygon { public List<Point> Points; public Polygon () { Points = new List<Point>(); } } class Path { public List<Point> Points; public bool[,] EdgeMap; public Path (bool[,] edgeMap, IEnumerable<Point> points) { EdgeMap = (bool[,])edgeMap.Clone(); Points = new List<Point>(); Points.AddRange(points); } } class Program { static readonly double EDGE_OFFSET_THRESHOLD = 1; static readonly string INPUT_PATH = @"A:\Data\shape.png"; static void Main (string[] args) { Bitmap bmp = (Bitmap) Image.FromFile(INPUT_PATH); bool[,] edgeMap; int edgeCount; FindEdges(bmp, out edgeMap, out edgeCount); Bitmap output = new Bitmap(bmp.Width, bmp.Height); for (int y = 0; y < bmp.Height; y++) for (int x = 0; x < bmp.Width; x++) { if (edgeMap[x,y]) output.SetPixel(x, y, Color.White); else output.SetPixel(x, y, Color.Black); } output.Save(INPUT_PATH + ".edges.png", ImageFormat.Png); // Now find the polygons: List<Polygon> polygons = new List<Polygon>(); while (edgeCount > 0) { // Remove any point to start a new polygon Point point = FindRandomEdge(edgeMap); edgeMap[(int) point.X, (int) point.Y] = false; // Create the first path Path path = new Path(edgeMap, new Point[0]); path.Points.Add(point); List<Path> paths = new List<Path>(); paths.Add(path); EvaluatePath(path, paths); // Now look through the list of paths to find the one(s) with the // most points. There may be two with the same number of points; // just choose the first one arbitrarily in that case. Path bestPath = null; int bestCount = 0; foreach (Path option in paths) { Point first = option.Points[0]; Point last = option.Points[option.Points.Count-1]; // Look for the path that has all the points and that // correctly forms a closed polygon. if (first.X >= last.X-1 && first.X <= last.X+1 && first.Y >= last.Y-1 && first.Y <= last.Y+1 && option.Points.Count > bestCount) { bestCount = option.Points.Count; bestPath = option; } } edgeCount -= bestCount; // Create the new polygon Polygon polygon = new Polygon(); polygon.Points.AddRange(bestPath.Points); polygons.Add(polygon); edgeMap = bestPath.EdgeMap; } output = new Bitmap(bmp.Width, bmp.Height); for (int y = 0; y < bmp.Height; y++) for (int x = 0; x < bmp.Width; x++) output.SetPixel(x, y, Color.Black); for (int n = 0; n < polygons.Count; n++) { for (int p = 0; p < polygons[n].Points.Count; p++) output.SetPixel((int) polygons[n].Points[p].X, (int) polygons[n].Points[p].Y, GetPolygonColor(n)); } output.Save(INPUT_PATH + ".polygons.png", ImageFormat.Png); // Now we need to remove useless edge pixels until we only have the corners remaining. foreach (Polygon polygon in polygons) { int p = 0; while (p < polygon.Points.Count) { Point start = polygon.Points[p]; int removeCount = 0; for (int n = 2; n < polygon.Points.Count; n++) { int endIndex = (p+n) % polygon.Points.Count; Point end = polygon.Points[endIndex]; bool removeToEnd = true; // Now see if all pixels between start and end are within the allowed distance // from the line from start to end. If so then keep going. Otherwise we are done // and can eliminate any pixels between start and end. for (int m = 1; m < n; m++) { int middleIndex = (p+m) % polygon.Points.Count; Point middle = polygon.Points[middleIndex]; double distance = GetPointSegmentDistance(middle, start, end); if (distance > EDGE_OFFSET_THRESHOLD) { // We've found a pixel that must not be removed so stop here. removeToEnd = false; break; } } if (removeToEnd) removeCount = n-1; else break; } if (removeCount > 0) { int firstRemoval = (p+1) % polygon.Points.Count; int lastRemoval = (p+removeCount) % polygon.Points.Count; // We've found at least one pixel to remove. if (firstRemoval <= lastRemoval) { polygon.Points.RemoveRange(firstRemoval, 1+lastRemoval-firstRemoval); p++; } else { polygon.Points.RemoveRange(firstRemoval, polygon.Points.Count - firstRemoval); polygon.Points.RemoveRange(0, 1+lastRemoval); break; } } else p++; } } for (int n = 0; n < polygons.Count; n++) { for (int p = 0; p < polygons[n].Points.Count; p++) output.SetPixel((int) polygons[n].Points[p].X, (int) polygons[n].Points[p].Y, Color.White); } output.Save(INPUT_PATH + ".corners.png", ImageFormat.Png); } static void FindEdges (Bitmap bmp, out bool[,] edgeMap, out int edgeCount) { edgeMap = new bool[bmp.Width, bmp.Height]; edgeCount = 0; // Important: do not count diagonals. for (int y = 1; y < bmp.Height-1; y++) { for (int x = 1; x < bmp.Width-1; x++) { if (bmp.GetPixel(x, y).R > 0) { // This is a foreground pixel (part of the polygon). // Count how many surrounded pixels are the background. int count = (bmp.GetPixel(x-1, y).R == 0 ? 1 : 0) + (bmp.GetPixel(x+1, y).R == 0 ? 1 : 0) + (bmp.GetPixel(x, y-1).R == 0 ? 1 : 0) + (bmp.GetPixel(x, y+1).R == 0 ? 1 : 0); if (count > 0 && count < 4) { // It counts as an edge edgeMap[x, y] = true; edgeCount++; } } } } // Now we have to eliminate any edge pixels that are adjacent (including diagonally) // to only one other edge pixel; they are dead-ends. bool outliersFound = true; while (outliersFound) { outliersFound = false; for (int y = 1; y < bmp.Height-1; y++) { for (int x = 1; x < bmp.Width-1; x++) { if (edgeMap[x, y]) { int count = (edgeMap[x-1, y-1] ? 1 : 0) + (edgeMap[x-1, y] ? 1 : 0) + (edgeMap[x-1, y+1] ? 1 : 0) + (edgeMap[x, y-1] ? 1 : 0) + (edgeMap[x, y+1] ? 1 : 0) + (edgeMap[x+1, y-1] ? 1 : 0) + (edgeMap[x+1, y] ? 1 : 0) + (edgeMap[x+1, y+1] ? 1 : 0); if (count == 1) { edgeMap[x, y] = false; edgeCount--; outliersFound = true; } } } } } } static double GetPointLineDistance (Point p, Point lineA, Point lineB) { double length = GetLength(lineA, lineB); double xAP = lineA.X - p.X; double yAP = lineA.Y - p.Y; double xBA = (lineB.X - lineA.X) / length; double yBA = (lineB.Y - lineA.Y) / length; return (xAP * yBA) - (yAP * xBA); } static double GetNearestPointOffsetOnLine (Point p, Point lineA, Point lineB) { double xAP = p.X - lineA.X; double yAP = p.Y - lineA.Y; double xAB = lineB.X - lineA.X; double yAB = lineB.Y - lineA.Y; double ABdotAB = (xAB*xAB) + (yAB*yAB); double APdotAB = (xAP*xAB) + (yAP*yAB); if (ABdotAB != 0) return APdotAB / ABdotAB; else return double.NaN; } static Point GetPointOnLine (Point lineA, Point lineB, double offset) { double x = lineA.X + (offset * (lineB.X - lineA.X)); double y = lineA.Y + (offset * (lineB.Y - lineA.Y)); return new Point(x, y); } static double GetPointSegmentDistance (Point p, Point segmentA, Point segmentB) { double offset = GetNearestPointOffsetOnLine(p, segmentA, segmentB); if (offset <= 0) return GetLength(p, segmentA); else if (offset >= 1) return GetLength(p, segmentB); else return GetLength(p, GetPointOnLine(segmentA, segmentB, offset)); } static double GetLength (Point p1, Point p2) { double dx = p1.X - p2.X; double dy = p1.Y - p2.Y; return Math.Sqrt((dx*dx) + (dy*dy)); } static Color GetPolygonColor (int index) { switch (index) { case 0: return Color.FromArgb(64, 0, 0); case 1: return Color.FromArgb(64, 64, 0); case 2: return Color.FromArgb(0, 64, 0); case 3: return Color.FromArgb(0, 64, 64); case 4: return Color.FromArgb(0, 0, 64); case 5: return Color.FromArgb(64, 0, 64); default: return Color.FromArgb(64, 64, 64); } } static void EvaluatePath (Path path, IList<Path> allPaths) { while (true) { List<Point> adjacent = FindAdjacentEdges(path.EdgeMap, path.Points[path.Points.Count-1]); if (adjacent.Count >= 2) { // Multiple options so create new paths for each option. for (int a = 0; a < adjacent.Count; a++) { Path branch = new Path(path.EdgeMap, path.Points); branch.Points.Add(adjacent[a]); branch.EdgeMap[(int) adjacent[a].X, (int) adjacent[a].Y] = false; allPaths.Add(branch); EvaluatePath(branch, allPaths); } // This path must be abandoned now. break; } else if (adjacent.Count == 1) { // Only one direction to go. path.Points.Add(adjacent[0]); path.EdgeMap[(int) adjacent[0].X, (int) adjacent[0].Y] = false; } else { // No more adjacent pixels; we're done with this path break; } } } static Point FindRandomEdge (bool[,] edgeMap) { for (int y = 1; y < edgeMap.GetLength(1)-1; y++) for (int x = 1; x < edgeMap.GetLength(0)-1; x++) { if (edgeMap[x, y]) return new Point(x, y); } return Point.Zero; } static List<Point> FindAdjacentEdges (bool[,] edgeMap, Point point) { List<Point> list = new List<Point>(); for (int y = (int) point.Y-1; y <= (int) point.Y+1; y++) for (int x = (int) point.X-1; x <= (int) point.X+1; x++) { if (x == point.X && y == point.Y) continue; if (edgeMap[x, y]) list.Add(new Point(x, y)); } return list; } } } Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer replied to CGameProgrammer's topic in Graphics and GPU ProgrammingUpdate: I had implemented the length-based solution but found that it did not work perfectly; it seemed to eliminate curve definition too much or it allowed unnecessary pixels to remain. I came up with a different technique: for each pixel, I find the longest chain of pixels following it that are near the line drawn from the first pixel to the last of the chain. Then I remove all the pixels between the start and end. This nicely handles all sloped lines and can precisely determine the corner pixels. I'll post code and sample images later. The code is really sloppy right now but I'll try to clean it up a bit first. Anybody left from the 2003 crowd? CGameProgrammer replied to Xtremehobo's topic in GDNet LoungeI've been here since 1999 but barely post anymore... Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer replied to CGameProgrammer's topic in Graphics and GPU ProgrammingActually I was thinking more about your original idea and I think it would be the solution after all. I was thinking that vertices would be eliminated if the length error of their absence is within a certain margin, but the actual solution (and what I believe you were trying to say) is to only remove the single least useful vertex after evaluating all of them, and then repeat, until the error from removing any remaining vertex is above the threshold. As the least useful vertices are removed, the error from removing the remaining ones becomes much larger and so they would not be erroneously removed. Last night I wrote the code to complete step #2 (creating the ordered lists of points for each polygon) so today I will write the simplification algorithm and let you know how it goes. Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer replied to CGameProgrammer's topic in Graphics and GPU ProgrammingYour idea may have some merit Krypt0n. I'm not positive that calculating length is the right way to determine error; I worry it may either cut vital corners if it's too aggressive or fail to remove enough points on curves if it's not aggressive enough. Or both. But I can try it and see. Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer replied to CGameProgrammer's topic in Graphics and GPU ProgrammingI appreciate you trying to help but you keep suggesting inappropriate alternatives instead of the solution to the question I asked. In this case the polygon is added to Google Maps as a list of coordinates so it can be properly vector-drawn; adding it as a raster image would look much worse as I mentioned in the original post. If it wasn't obvious, and maybe it wasn't, the answer I really need is the solution to step #3 in the original post: reducing an ordered list of points along the edges to just the corners (except for curves). Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer replied to CGameProgrammer's topic in Graphics and GPU ProgrammingNo, that would not work because I am not rendering a triangular mesh; the output must be the list of points that define the polygon's border. Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer replied to CGameProgrammer's topic in Graphics and GPU ProgrammingI know it's very easy to test if something is within the original simple polygons; my problem is rendering them. There is no way to directly render the original polygons that would look correct. Now it is trivial to render the raster polygons with a border; I'd just draw a circle (diameter = border thickness) at each edge pixel. The problem with that is of course it'll look blurry when stretched and pixellated when shrunk. I'd be rendering them on a map so the user would zoom a lot. Also since we're dealing with pixels, they are not on a straight line due to interpolation. For example: X XX X XX Vectorizing a 1-bit raster image of polygon(s) CGameProgrammer posted a topic in Graphics and GPU ProgrammingI have a one-bit image (each pixel is either black or white) of a polygon or polygons - the edges of the image are always black. I would like to convert this to a set of one or more polygon vectors, where each vector is simply an ordered list of points (one list per polygon) describing the corners or approximating curves. Attached is an example input image with red dots overlayed to show what the output points should be. It should be two lists of points of course. The threshold for placing points around a curve is arbitrary and not important. The input polygons will never have holes. Speed is irrelevant; this is not being done at realtime and is just precomputed. The actual scenario, if you're wondering, is that I have airspace regions defined as simple polygonal regions (one list of coordinates per polygon) which are either unioned or subtracted. The sample image illustrates an example of this (not actual airspace, just a simple test). I started with a hexagon, then subtracted from it a quadrilateral (which splits the hexagon in two), then I added (unioned) a circle. Trying to directly generate vector polygons from these input polygon vectors would be maddening; I figure it's simpler to render them as raster polygons temporarily (additive is drawn in white, subtractive in black) and then convert that. Step one, edge detection, becomes trivial obviously; it's any white pixel adjacent to a black pixel. Step two is to create huge unsimplified ordered lists of edge points. My plan is to create a bucket of all edge pixels, then (sloppy pseudocode): point = bucket.pop() isDone = false while !isDone { isDone = true for all edge pixels in bucket { if this edge pixel is adjacent to point { append edge pixel to polygon points point = edge pixel isDone = false break; } } } Then if there is anything remaining in the bucket, repeat the process for the next new polygon. The third and final step, and one that I definitely can't seem to figure out, is to simplify these lists of every edge into just the ones that mark the corners (or that approximate to some threshold the curves). Can anyone help? Get points of a 3d triangle relative to its own plane in 2d CGameProgrammer replied to tm1rbrt_160038's topic in Math and PhysicsIt's easiest to arbitrarily declare one of the triangle's sides as being parallel to the X or Y axis. So if you have a triangle and say its first side is 4 units long. You can just start by saying its two endpoints are at {0,0} and at {4,0}. Then you just have to calculate the position of the remaining point. You know the lengths of the other two sides; it's exactly same in 3D as in 2D since the triangle lies along a plane, and you can easily find equations online for finding the point given that you know all lengths (and all angles, as a result). Create a sphere that intersects another at right angles CGameProgrammer replied to CGameProgrammer's topic in Math and PhysicsThe solution I've come up with is, since as an input I have the circle's radius along the surface of the Earth (not straight-line radius): sphereRadius = earthRadius * tan(circleRadius / earthRadius); sphereCenterDistance = sqrt( earthRadius^2 + sphereRadius^2 ); CircleRadius / EarthRadius gives the radians of the arc covered by the circle (or half of it since this is radius and not diameter). sphereCenterDistance is the distance along the center axis where the sphere's center lies. So with those two things I have my sphere. I would assume any solution that does not involve trigonometric functions would be wrong; maybe you just hadn't realized my circle's radius is measured along the Earth's surface. Create a sphere that intersects another at right angles CGameProgrammer replied to CGameProgrammer's topic in Math and PhysicsWait, I think I can utilize what quasar3d noted about there being a right triangle from the earth's center, the red sphere's center, and a tangent point. I was thinking I only knew one side (earth's radius) and one angle (90 at the tangent) but of course I can calculate the angle at the Earth's center, and then that would be enough information to invoke Pythagorean. And I already know that angle since it's trivial to calculate given the radius of the intersection circle along the Earth's surface; you just divide by the Earth's radius and that gives you radians. Create a sphere that intersects another at right angles CGameProgrammer replied to CGameProgrammer's topic in Math and PhysicsSpeed matters and getting the intersection of two 3D vectors constrained to some arbitrary plane does not seem like the correct solution. There must be a more direct way. Create a sphere that intersects another at right angles CGameProgrammer replied to CGameProgrammer's topic in Math and PhysicsI still can't come up with a solution. Remember that are an infinite number of points where the two spheres intersect because this is 3D. How do I find the 3D cartesian center and radius of the red sphere given its center axis (relative to the blue sphere's center) and the radius of the intersection circle? Create a sphere that intersects another at right angles CGameProgrammer posted a topic in Math and PhysicsI need to find the center and radius of a 3D cartesian sphere that intersects another sphere at right angles. Illustration: [attachment=5946:Circles.png] [b]Knowns: * Center of the blue sphere, which is centered at the origin in fact (0,0,0) * Radius of the blue sphere * Axis along which the center of the red sphere lies (also intersection point of the axis with the blue sphere) * Distance between the two red dots[/b] Basically the blue sphere is Earth and I need to generate a cartesian sphere representing a circle on the surface of the Earth. I've done this already except that my circles are centered on the Earth's surface (red dot) which means they don't intersect the ground at right angles like the red circle roughly does. This causes certain math routines to fail. So I need to find the center/radius of the red circle drawn in the illustration. I know that, on a 2D picture, if you draw tangents of the blue circle where the red circle intersects it, the intersection of those two tangents marks the center of the red circle. Still don't know how to correlate this with the above 3D inputs to find what I need. Can anyone help? - Advertisement
https://www.gamedev.net/profile/130-cgameprogrammer/
CC-MAIN-2018-13
refinedweb
3,321
65.12
In Build 2018, Microsoft introduced the preview of ML.NET (Machine Learning .NET) which is a cross platform, open source machine learning framework. Yes, now it's easy to develop our own Machine Learning application or develop costum module using Machine Learning framework. ML.NET is a machine learning framework which was mainly developed for .NET developers. We can use C# or F# to develop ML.NET applications. ML.NET is an open source which can be run on Windows, Linux and macOS. The ML.NET is still in development and now we can use the preview version to work and play with ML.NET. Reference link: Introducing ML.NET: Cross-platform, Proven and Open Source Machine Learning Framework In this article, we will see how to develop our ML.Net application for Clustering Model. Machine Learning is nothing but a set of programs which is used to train the computer to predict and display the output for us. Example live applications which are using Machine Learning are Windows Cortana, Facebook News Feed, Self-Driving Car, Future Stock Prediction, Gmail Spam detection, Pay pal fraud detection, etc. In Machine Learning, there is 3 main types: In each type, we will be using the Algorithm to train the Machine for producing the result. We can see the algorithm for each Machine Learning type. In my previous article, I explained about predicting Future Stock for an Item using ML.NET for the Regression Model for the Supervised learning. In this article, we will see how to work on Clustering model for predicting the Mobile used a simple dataset with random members cluster count by Sex, Before 2010 and After 2010. dataset Reference link: ML.NET to cluster, Taxi fare predictor (regression), trained and using this data, our model needs to be analyzed to predict the result. The Predicted result will be displayed to as Cluster ID and Score as Distance to us in our console application. Customer Male Female Before2010 After2010 MobilePhone Score here is not. Reference link: ML.NET to cluster. Microsoft.ML style="width: 480px; height: 115px" data-src="/KB/miscctrl/1265359/3.PNG" class="lazyload" data-sizes="auto" data-> Click on Install, I Accept and wait till the installation is complete. We can see that the Microsoft.ML package was installed and all the references for Microsoft.ML have been added in our project references. Now we need to create a Model training dataset. For creating this, we will add a CSV file for training the model. We will create a new folder called Data in our project to add our CSV files. Right click the project and Add New Folder and name the folder as “Data”. Right click the Data folder, click on Add >> New Item >> select the text file and name it as “custTrain.csv”. Select the properties of the “StockTrain.csv”, change the Copy to Output Directory to “Copy always”. Add your CSV file data like below: Here, we have added the data with the following fields: (Feature). Microsoft.ML.Runtime.Api ClusterPrediction using Microsoft.ML.Runtime.Api; Next, we need to add all our columns same like our CSV file in the same order in our class and set as the column 0 to 3. to this reference link. PredictedLabel Score PredictedCustId Distances.Predicted Note: It is important to note that in the prediction column, we need to set the column name as the “Score”, also set the data type as the float[] for Score and for PredictedLabel set as uint. float[] uint public class ClusterPrediction { [ColumnName("PredictedLabel")] public uint PredictedCustId; [ColumnName("Score")] public float[] Distances; } To work with ML.NET, we open our “program.cs” file and first, we import all the needed ML.NET references. using Microsoft.ML.Legacy; using Microsoft.ML.Legacy.Data; using Microsoft.ML.Legacy.Trainers; using Microsoft.ML.Legacy.Transforms; Also, import the below to your program.cs file. using System.Threading.Tasks; using System.IO; We set the custTrain.csv data and Model data path. For the traindata, we give “custTrain.csv” path. traindata The final trained model needs to be saved for producing results. For this, we set modelpath with the “custClusteringModel. zip” file. The trained model will be saved in the zip file automatically during runtime of the program, our bin folder with all needed files. modelpath static readonly string _dataPath = Path.Combine(Environment.CurrentDirectory, "Data", "custTrain.csv"); static readonly string _modelPath = Path.Combine(Environment.CurrentDirectory, "Data", "custClusteringModel.zip"); Change the Main method to async Task Main method like the below code: Main async Task Main static async Task Main(string[] args) { } to change the Language version to C#7.1. In the Project Properties >> Build tab >> click on Advance button at the bottom and change the Language Version to C#7.1. style="width: 424px; height: 344px" data-src="/KB/miscctrl/1265359/9.png" class="lazyload" data-sizes="auto" data-> First, we need to train the model and save the model to the zip file for this in our main method, we call the predictionModel method and pass the CustData and ClusterPrediction class and return the model to the main method. main predictionModel CustData static async Task Main(string[] args) { PredictionModel<CustData, ClusterPrediction> model = await Train(); } public static async Task<PredictionModel<CustData, ClusterPrediction>> Train() { } train CSV file for training and here, we set as the useHeader:true to avoid reading the first row from the CSV file. TextLoader useHeader:true Next, we add all our features columns to be trained and evaluate. The learner will train the model. We have selected the Clustering model for our sample and we will be using KMeansPlusPlusClusterer learner. KMeansPlusPlusClusterer is one of the clustering learners provided by ML.NET. Here, we add the KMeansPlusPlusClusterer to our pipeline. KMeansPlusPlusClusterer We also need to set the K value as how many clusters we are using for our model. Here, we have 3 segments as Windows Mobile, Samsung and Apple so we have set K=4 in our program for the 3 clustering. Finally, we will train and save the model from this method. public static async Task<PredictionModel<CustData, ClusterPrediction>> Train() { // Start Learning var pipeline = new LearningPipeline(); // Load Train Data pipeline.Add(new TextLoader(_dataPath).CreateFrom<CustData> (useHeader: true, separator: ',')); // </Snippet6> // Add Features columns pipeline.Add(new ColumnConcatenator( "Features", "Male", "Female", "Before2010", "After2010")); // Add KMeansPlus Algorithm for k=3 (We have 3 set of clusters) pipeline.Add(new KMeansPlusPlusClusterer() { K = 3 }); // Start Training the model and return the model var model = pipeline.Train<CustData, ClusterPrediction>(); return model; } Now it's time for us to produce the result of predicted results by model. For this, we will add one more class and, in this Class, we will give the inputs. class Class Create a new Class named as “TestCustData.cs“. We add the values to the TestCustDataClass which we already created and defined the columns for Model training. TestCustDataClass static class TestCustData { internal static readonly CustData PredictionObj = new CustData { Male = 300f, Female = 100f, Before2010 = 400f, After2010 = 1400f }; } We can see in our custTrain.csv file that we have the same data for the inputs. In our program main method, we will add the below code at the bottom after Train method calling to predict the result of ClusterID and distances and display the results from model to users in command window. Train ClusterID var prediction = model.Predict(TestCustData.PredictionObj); Console.WriteLine($"Cluster: {prediction.PredictedCustId}"); Console.WriteLine($"Distances: {string.Join(" ", prediction.Distances)}"); Console.ReadLine(); When we can run the program, we can see the result in the command window like below: style="width: 504px; height: 89px" data-src="/KB/miscctrl/1265359/11.PNG" class="lazyload" data-sizes="auto" data-> ML.NET (Machine Learning DotNet) is a great framework for all the dotnet lovers who are all looking to work with machine learning. Now only preview version of ML.NET is available and I it's a great framework to get you started with ML.NET. Hope you enjoy reading this article and see you all soon with another.
https://www.codeproject.com:443/Articles/1265359/Getting-Started-with-Machine-Learning-DotNet-for-C?msg=5567587&PageFlow=Fluid
CC-MAIN-2021-43
refinedweb
1,335
58.08
This is the scenario : we have a Silverlight application (SL app) which run is running unattended on a big screen in an office. It displays the actual positions of objects on a map. Once every 10 minutes the app queries the hosting web server for position information. The information is not public, it is using asp.net forms authentication to guard it against unauthorized eyes. To get this to work as intended required some special attention. In this post I will give an overview. The site is using ajax. Also Silverlight is doing a lot of async communication. The combination of async (partial) postback and forms authentication has some quircks on itself, I wrote a little post on that short ago. But as said, there are more quircks. The server The user sessions are running a long long time. The risk is that within a session IIS will recycle the underlying web application. When that happens the session state will be lost because, by default, session state is stored inproc. The first step is to store the sesssion state in sql server. This will keep the session information alive when the application is recycled. Setting up sql server is no big deal, this is a good overview. The thing to watch is that now everything you store in the session has to be explicitly serializable. Which usually boils down to setting the serializable attribute and adding default constructors. The service A Silverlight application gets it’s data from a service. Our basic service queries a repository to get specific data. public class KaartData { public static List<Kaart> Kaarten(KaartSoort soort, KaartPositie topLeft, KaartPositie bottomRight) { KaartRepository repository = RepositoryFactory.KaartRepository(); return repository.ListKaarten(soort, topLeft, bottomRight); } } This service is published in a WCF service. A SL app can easily communicate over WCF with a service which is hosted in the same web application as the SL-app itself is running in. It should be possible to host the service somewhere else but that will introduce a large amount of security settings you will have to solve. In VS there is a template for such a service. The result is pretty straightforward [ServiceContract(Namespace = "Datema")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class KaartService { [OperationContract] public List<Kaart> Kaarten(KaartSoort soort, KaartPositie topLeft, KaartPositie bottomRight) { return Datema.DatemaDirect.KaartServices.KaartData.Kaarten(soort, topLeft, bottomRight); } } As the service is embedded in the website it is guarded by forms authentication. That’s as intended, we don’t want unauthorized people or software to consume our service. But that will hit back, as we will see later. The Silverlight Application The application itself is built around Deep Earth. This is an open source project which combines the power of Silverlight with that of a geo image servers like virtual Earth, Yahoo maps, Open Street Maps and many others. The xaml markup wraps up a deepearth map <UserControl x:Class=“ChartMap.Page“ xmlns=““ xmlns:x=““ xmlns:DeepEarth=“clr-namespace:DeepEarth;assembly=DeepEarth“ > <Grid x:Name=“LayoutRoot“> <DeepEarth:Map x:Name=“Map“> </DeepEarth:Map> <StackPanel HorizontalAlignment=“Left“ VerticalAlignment=“Top“ Margin=“10“ > <Slider x:Name=“SliderZoom“ Orientation=“Vertical“ Value=“1“ Height=“200“ Minimum=“1“ Maximum=“20“ ValueChanged=“Slider_ValueChanged“></Slider> </StackPanel> </Grid> </UserControl> A SL-app is living in the browser. The code behind does the registration. private readonly GeometryLayer mapLayer; public Page() { InitializeComponent(); if (Map.BaseLayer == null) { Map.BaseLayer = new TileLayer(MapMode.Aerial); } Loaded += ((theMap, args) => HtmlPage.RegisterScriptableObject(“ChartMap”, this)); mapLayer = new GeometryLayer(Map); } The constructor also initializes a basic geometrylayer. That’s an aerial view of the world, to our app it’s just a background. A background you can zoom into in great detail. Check the deepearth site for more on that. The constructor registers the application (this) on the HtmlPage under the name ChartMap. We’ll meet that later on in the javascript. The application exposes methods to JavaScript by setting the Scriptable attribute [ScriptableMember] public void ShowKaarten(KaartSoort vanSoort) { var tl = new KaartPositie(); tl.Hoogte = Map.GeoBounds.Bottom; tl.Lengte = Map.GeoBounds.Left; var br = new KaartPositie(); br.Hoogte = Map.GeoBounds.Top; br.Lengte = Map.GeoBounds.Right; var svcProxy = new KaartServiceProvider(); svcProxy.GetKaarten(vanSoort, tl, br, DrawKaarten); } This method needs the WCF service to get the data. Consuming the WCF service in Silverlight The first step is to add a service reference. For this to work you have to disable the forms authentication, else VS cannot get to the service to generate a proxy. The generated proxy, KaartServiceClient, can only be invoked asynchronous. To encapsulate the specific initialization and async coding aspects of the the proxy I have created a helper class internal class KaartServiceProvider { private readonly KaartServiceClient svc; internal KaartServiceProvider() { var addres = new Uri(Application.Current.Host.Source, “/Services/KaartService.svc”); svc = new KaartServiceClient(“BasicHttpBinding_KaartService”, new EndpointAddress(addres)); } internal void GetKaarten(KaartSoort soort, KaartPositie tl, KaartPositie br, Action<ObservableCollection<Kaart>> kaarten) { svc.OpenCompleted += ((sender, e) => svc.KaartenAsync(soort, tl, br)); svc.KaartenCompleted += ((sender, e) => kaarten(e.Result)); svc.OpenAsync(); } } It wraps the proxy svc. In the constructor the proper address of the service is assembled. As the service is hosted by the same application as the Silverlight app I can use Application.Current.Host.Source to get the right uri. The GetKaarten method is doing the work. It is passed the parameters to the service and also passes kaarten a callback method to catch the result of the service invocation. The service is explicitly opened, also this has to be done async. OpenCompleted will start the real work by firing KaarternAsync. When that completes the callback method will update the map with the fresh data. [ScriptableMember] public void ShowKaarten(KaartSoort vanSoort) { …. var svcProxy = new KaartServiceProvider(); svcProxy.GetKaarten(vanSoort, tl, br, DrawKaarten); } private void DrawKaarten(ObservableCollection<Kaart> kaarten) { mapLayer.Clear(); foreach (var kaart in kaarten) mapLayer.Add(new VisualKaart(kaart)); } The Silverlight app on a web page The Silverlight app is running in the browser. So the way to program it is through JavaScript. It takes a little puzzling to find the object to talk to. Remember the silverlight app registered itself as ChartMap. To fire the scriptable webmethod you need the Content.ChartMap property of the SL-object. A silverlight application can be used in several styles of web apps. A classical asp.net works well, an MVC app works even better. The latter is easier because the views in MVC are really client side views, the same enviroment the SL-app is living in. Classical Asp.net is more focused on server side code. It will look like this in an MVC view. The SL- app is between the <object> tags and has id Wereld. The button’s onclick event fires the SL method. <select id=”kaartSoort”> <option value=”0″>ENC</option> <option value=”1″>ARCS</option> <input type=”button” value=”Toon kaarten” onclick=”Wereld.Content.ChartMap.ShowKaarten(kaartSoort.value)” /> <object data=”data:application/x-silverlight-2,” type=”application/x-silverlight-2″ width=”100%” height=”100%” id=”Wereld”> <param name=”source” value=”/ClientBin/ChartMap.xap” /> <param name=”onerror” value=”onSilverlightError” /> <param name=”background” value=”white” /> <param name=”minRuntimeVersion” value=”2.0.31005.0″ /> <param name=”autoUpgrade” value=”true” /> <a href=”″ style=”text-decoration: none;“> <img src=”″ alt=”Get Microsoft Silverlight” style=”border-style: none” /> </a> </object> To keep the map up to date the method has to be fired over and over again, controlled by some kind of timer. Javascript has no real timers but using the setTimeOut method you can (recursively) call a method after an interval. In this example the function showMap updates the map and calls itself after 600 seconds. var fleetId = 0; var logoDisplayed = false; var map; function showMap() { map = document.getElementById(‘ctl00_ContentPlaceHolder1_TrackingMap’).Content.TrackingMap; map.ShowFleetTrack(fleetId); // Refresh map every 10 minutes setTimeout(“showMap()”, 600000); } This script is living on a classical asp.net page. Notice the ugly long id you need to get to the SL-app. I have moved all var’s out of the method to the page. Over time this code will build quite a call stack, not need for “invocation specific instance data”. What’s wrong with this code ? At first sight this code looks alright. But nevertheless it will hit an exception after a certain amount of time. Usually a quite cryptic Silverlight error. It took some puzzling to find out what went wrong. The application stays on one and the same page. The script on the page hits the embedded service once every 10 minutes. The service is protected by forms authentication. On every roundtrip the authentication cookie is checked. But what does not happen is updating the timeout of forms authentication. When the user hits code behind a web page the session timeout of forms-authentication is reset. When the user’s page hits the embedded service this time-out is not reset. I’m not sure whether this is a bug or a feature but it is by no means the behavior you would expect. To prevent the service from timing out the script has to hit some server side code behind the page. As a target I add a dummy webmethod to the page public partial class FleetMap : System.Web.UI.Page { [WebMethod] public static void HartBeat() { // No need to do anything, this is just a hartbeat from JavaScript to tell the page is still alive // This hartbeat prevents a time out of the authentication cookie } A webmethod, part of ASP.net ajax, can be called directly from script. function showMap() { PageMethods.HartBeat(); map = document.getElementById(‘ctl00_ContentPlaceHolder1_TrackingMap’).Content.TrackingMap; map.ShowFleetTrack(fleetId); // Refresh map every 10 minutes setTimeout(“showMap()”, 600000); } In the loop the hartbeat tickles the server to notify the page is still alive. After which the SL-app can start firing its WCF requests. I am not checking on exceptions here. In case the session has timed out issuing a HartBeat will redirect the user to the login page. As intended. To conclude? And now the app works as intended. Looking back it was a great experience combining all these new api’s. Communication between them works pretty good. Once you know how The bad guy was imho forms authentication. Which dates from a time when there was no such thing as partial or async postback. In a previous post I talked about the problems it has with ajax. Silverlight, another async poster, apparently has its own problems.
http://codebetter.com/petervanooijen/2009/04/16/keeping-a-long-running-silverlight-application-alive-under-forms-authentication/
crawl-003
refinedweb
1,720
59.19
I'm using the book "C++ Primer Plus 6e" by Prata and I feel like I'm following the instructions step-by-step. I am using Microsoft Visual C++ 2010 Express. I start a new project and select "Win32 Console Application" and on the next screen I ensure that "Console application" is selected and I uncheck "Precompiled Header" and put a check in the box for "Empty Project." Once the project is created, I make a new .cpp file and type the following code When I try to build/debug I am greeted with this error messageWhen I try to build/debug I am greeted with this error messageCode:#include <iostream> int main() { using namespace std; cout << "Come up and C++ me some time."; cout << endl; cout << "You won't regret it!" << endl; return 0; } I did a quick google search and tried a few different things to no avail. Hopefully this forum can help!I did a quick google search and tried a few different things to no avail. Hopefully this forum can help!Code:1>------ Build started: Project: thisdoesntwork, Configuration: Debug Win32 ------ 1>LINK : error LNK2001: unresolved external symbol _mainCRTStartup 1>c:\Users\Leewiz\Documents\Visual Studio 2010\projects\thisdoesntwork\Debug\thisdoesntwork.exe : fatal error LNK1120: 1 unresolved externals ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== Thanks in advance
http://cboard.cprogramming.com/cplusplus-programming/149680-help-first-program-hello-world.html
CC-MAIN-2015-18
refinedweb
225
62.88
Mono 0.17 Hello! Version 0.17 of Mono has been released. There are plenty of new features, bug fixes, new classes, performance improvements, optimizations and much more available in this release. Availability Source code To install mono from source code you only need the first one (mono), as it contains pre-compiled versions of the compiler and the class libraries. Packages for Linux Precompiled packages with full debugging information in Dwarf-2 format (so you can use the soon to be released Mono debugger with it) is available for various distributions. Packages are available either on the ‘Mono’ Red Carpet channel, or you can download your rpms, and source rpms from XSP ASP.NET Web server No packages were made of the XSP server, but its so small that you should have no problem compiling and running it. Stats 2605 cvs commits to the Mono repository since October 1st, an average of 37 commits per day including weekends. 212 commits to the Mono module. 1438 commits to the MCS module. Mono Improvements Work has begun to make the runtime run a finalizer thread and invoke all the finalizers from this thread. This is the same behavior as Java and the Microsoft runtime, but it is disabled on this build. Integrated the Linux/s390 work from Neale Ferguson. Beginning of the work for pre-compiling code (Ahead of time compilation) for Mono (based on the early work of Zoltan). New option --noboundscheck for benchmark purposes, it disables array bound checks. Uses mmap instead of SysV shared memory for the Windows API emulation layer. Plenty of bug fixes, improvements and integration with the upper layer class libraries. New exception handling code uses the GCC native support for stack-walking if available and gives big performance boost (15% on mcs bootstrap). A lot of the work in the new release of Mono is required for the Mono Debugger (which will be released separately). The Mono debugger is interesting, because it can debug both managed and unmanaged applications, but it only supports the JITer for debugging. Dick, Dietmar, Gonzalo, Martin and Paolo were in charge of most of these changes. Compiler improvements Many bug fixes as usual, better C# compliancy. Performance improvements. The new release of the Mono C# compiler is 37% faster than the previous version (self-compile is down to 8 seconds). On my P4 1.8Ghz machine, the Mono C# compiler compiles (342,000 lines per minute). Thanks to go Ravi and Martin for helping out with the bug fixing hunt. Cryptography and Security classes Sebastien Pouliot and Andrew Birkett were extremely busy during the past two months working on the cryptography classes, many of the crypto providers are now working Jackson on the other hand helped us with the security classes, he said about those: Writing security classes is the most exciting thing I have ever done, I can not wait to write more of them. ASP.NET We have now moved the code from the XSP server (which was our test bed for ASP.NET) into the right classes inside System.Web, and now any web server that was built by using the System.Web hosting interfaces can be used with Mono. The sample XSP server still exists, but it is now just a simple implementation of the WorkerRequest and ApplicationHost classes and can be used to test drive ASP.NET. A big thanks goes to Gonzalo who worked on this night and day (mostly night). Gaurav keeps helping us with the Web.Design classes, and improving the existing web controls. ADO.NET New providers are available in this release. The relentless System.Data team (Brian, Dan, Rodrigo, Tim and Ville) are hacking non-stop on the databse code. Improving existing providers, and new providers. The new providers on this release: - Oracle - MS SQL - ODBC - Sybase - Sqlite (for embedded use). Many regression tests have been added as well (Ville has been doing a great job here). Brian also created a DB provider multiplexor (The ProviderFactory) Stuart Caborn contributed Writing XML from a DataSet. Luis Fernandez contributed constraint handling code. Also there is new a Gtk# GUI tool from Dan that can be used to try out various providers. System.XML Atsushi has taken the lead in fixing and plugging the missing parts of the System.XML namespace, many fixes, many improvements. CodeDom and the C# provider Jackson Harper has been helping us with the various interface classes from the CodeDOM to the C# compiler, in this release a new assembly joins us: Cscompmgd. It is a simple assembly, and hence Microsoft decided not to waste an entire “System” “dot” on it. Testing Nick Drochak has integrated the new NUnit 2.0 system. Monograph Monograph now has a –stats option to get statistics on assembly code. CVS Contributors to this release Alejandro Sanchez, Alp Toker, Andrew Birkett, Atsushi Enomoto, Brian Ritchie, Cesar Octavio Lopez Nataren, Chris Toshok, Daniel Morgan, Daniel Stodden, Dennis Hayes, Dick Porter, Diego Sevilla, Dietmar Maurer, Duncan Mak, Eduardo Garcia, Ettore Perazzoli, Gaurav Vaish, Gonzalo Paniagua, Jackson Harper, Jaime Anguiano, Johannes Roith, John Sohn, Jonathan Pryor, Kristian Rietveld, Mads Pultz, Mark Crichton, Martin Baulig, Martin Willemoes Hansen, Miguel de Icaza, Mike Kestner, Nick Drochak, Nick Zigarovich, Paolo Molaro, Patrik Torstensson, Phillip Pearson, Piers Haken, Rachel Hestilow, Radek Doulik, Rafael Teixeira, Ravi Pratap, Rodrigo Moya, Sebastien Pouliot, Tim Coleman, Tim Haynes, Ville Palo, Vladimir Vukicevic, and Zoltan Varga. (Am sorry, I could not track everyone from the ChangeLog messages, I apologize in advance for the missing contributors).
http://www.mono-project.com/docs/about-mono/releases/0.17.0/
CC-MAIN-2018-26
refinedweb
920
62.07
Access LoPy mac address without using atom. Hi. I want to create a python scripts that returns the mac address of the connected LoPy. I am connected to the LoPy via. USB using a USB To TTL Serial Converter Adapter. With this, is it possible to send the following commands down to the LoPy: import ubinascii, network ubinascii.hexlify(network.LoRa().mac()) and then return the value of ubinascii.hexlify(network.LoRa().mac()), is it possible? As of now, I have to manually write import ubinascii, network print (ubinascii.hexlify(network.LoRa().mac())) in the atom REPL and copy the print result, I wish to automate this process. Thanks. Best regards - Paul Thornton last edited by @hobbypy any programming language with a serial library should work. simply open a serial connect to the board via your preffered language / framework. and send the code as a string over it. The response will come back over serial just like the REPL does. Thanks. Yes, I am aware that I can put the commands in any python script located at the LoPy. Basically, I want to create a python function/script that returns the mac address, which I can then use for futher use (register device on ttn using their python-SDK for example). But looking at the pyserial library, I think what I want is harder to implement than what I previously thought so I will have to see if I can figure out something. @hobbypy With the risk of stating the obvious, you can put these two statements in any Python file that is executed on the LoPy, e.g. main.py. If you do not want to do that, any program on your PC which is able to communicate over the serial interface can send these two commands and get the answer back. Atom is just one of plenty.
https://forum.pycom.io/topic/4402/access-lopy-mac-address-without-using-atom/4
CC-MAIN-2020-50
refinedweb
309
72.76
On 11/08/2011 03:23 PM, Jonathan Cameron wrote:> On 11/08/2011 01:32 PM, Lars-Peter Clausen wrote:>> On 11/07/2011 03:52 PM, [email protected] wrote:>>> From: Jonathan Cameron <[email protected]>>>>>>> [...]>>> Dear All,>>>>>> Firstly note that I have pushed ahead of this alongside the ongoing>>> discussions on how to handle in kernel interfaces for the devices>>> covered by IIO. I propose to build those on top of this patch>>> set and will be working on that support whilst this set is>>> under review.>>>>>> Secondly, this code has some namespace clashes with the staging>>> IIO code, so you will need a couple of patches that can be found>>> in>>>>>> This is our first attempt to propose moving 'some' of the>>> Industrial I/O subsystem out of staging. This cover letter>>> attempts to explain what IIO is and why it is needed.>>> All comments welcome on this as well as the code!>>>>>> I don't think moving just part of the IIO core out of staging will work.> It's the only option that looks plausible. We just aren't going to get> anyone to review all the code in one go. The original move into staging> was entirely about exposure, rather than code quality (not to say we> haven't improved that as well!) The other thing is that the> simple stuff is mature and useful. The buffering and event side of> things is still evolving and hence it may be a while yet before it is> stable enough. (It was mature until the whole in kernel interface stuff> came.Still two almost identical frameworks for the same purpose. The code for theout-of-staging and still-in-staging branches have already started to divert.Having both in the mainline kernel is going to be maintenance hell. Peoplewill start sending patches for one, but not the other. I just don't thinkthis will workout well.>>>> elements> to be discussed in each of the major parts. There's a lot of pressure> to get 'something' out for the simple drivers now even if we take a> while to 'discuss' the other elements. Hence it needs to happen in> chunks from the point of view of review, even if the final pull request> will bring over the whole core.> If the core split-up is just for review and is not intended to be mergedpart-by-part over several kernel releases I don't see a problem.- Lars
https://lkml.org/lkml/2011/11/8/203
CC-MAIN-2016-07
refinedweb
413
71.55
I did as you suggested and it works fine unless the TTree in which the TKey is pointing to contains data. Below is some code illustrating the problem. I ran the following commands root [0] .L FileTest.C++ Info in <TUnixSystem::ACLiC>: creating shared library /home/kerrylee/./FileTest_C.so root [1] test() No TestTree object found (int)0 root [2] system("ls -l testing.root") -rw-r--r-- 1 kerrylee kerrylee 81962 May 19 13:35 testing.root (const int)0 root [3] test() TestTree already exists. Do you want to replace it (y or n)? y deleting TestTree (int)0 root [4] system("ls -l testing.root") -rw-r--r-- 1 kerrylee kerrylee 159566 May 19 13:36 testing.root (const int)0 root [5] test() TestTree already exists. Do you want to replace it (y or n)? y deleting TestTree (int)0 root [6] system("ls -l testing.root") -rw-r--r-- 1 kerrylee kerrylee 221169 May 19 13:38 testing.root (const int)0 As you can see the file is growing. If I comment out the t->Fill() line all the file does not grow. I assume this is because the TTree itself needs to be deleted from the file also. Is there a way to delete not only the TKey pointing to the TTree but also remove the TTree so that the space will be reused later? Thanks Kerry #include "TSystem.h" #include "TFile.h" #include "TString.h" #include "Riostream.h" #include "TRandom3.h" #include "TTree.h" int test(){ TString DataFile="testing.root"; TFile *f = new TFile(DataFile,"UPDATE"); char ans = 'n'; if(f->FindObjectAny("TestTree")){ cout<<"TestTree already exists. Do you want to replace it (y or n)? "; cin>>ans; if(ans == 'n') { return -1; } else { std::cout<<"deleting TestTree"<<std::endl; f->Delete("TestTree;*"); f->Close(); delete f; f = new TFile(DataFile,"UPDATE"); -----Original Message----- From: owner-roottalk_at_pcroot.cern.ch on behalf of Rene Brun Sent: Fri 5/12/2006 12:11 AM To: Lee, Kerry T. (JSC-SF)[UHCL] Cc: roottalk_at_pcroot.cern.ch Subject: Re: [ROOT] Deleting a TKey from a TFile What you see is the expected behaviour. The gap left in the file when deleting an object will only be reused if you close and reopen the file. This is done on purpose to support the case of multiple readers accessing a file written by another process. Rene Brun On Thu, 11 May 2006, Lee, Kerry T. (JSC-SF)[UHCL] wrote: > Dear ROOT team, > > I am using ROOT 5.11/02 compiled on a linux machine with gcc 3.4.4. > > I am attempting to delete a TKey within a TFile, but it does not reduce the size of the file as I would expect. Below is a couple short examples to illustrate what I see. > > > root [0] TFile *f = new TFile("testing2.root","RECREATE"); > root [1] f->Write(); > root [2] f->Close(); > root [3] .q > > the size of the file is then 330 bytes as shown below > > [kerrylee_at_jsc-sf-2148872 ~]$ ls -l testing2.root > -rw-r--r-- 1 kerrylee kerrylee 330 May 11 12:44 testing2.root > > Now if I write an empty TTree to a file. > > root [0] TFile *f = new TFile("testing.root","RECREATE"); > root [1] TTree *t = new TTree("TestTree","This is a test"); > root [2] f->Write(); > root [3] f->Close(); > root [4] .q > [kerrylee_at_jsc-sf-2148872 ~]$ ls -l testing.root > -rw-r--r-- 1 kerrylee kerrylee 4223 May 11 12:53 testing.root > > Then try to delete the tree. I do not see a change in the file size even though ls() shows that the TTree as being deleted. > > root [0] TFile *f = new TFile("testing.root","UPDATE"); > root [1] f->ls(); > TFile** testing.root > TFile* testing.root > KEY: TTree TestTree;1 This is a test > root [2] f->Delete("TestTree;1"); > root [3] f->Flush(); > root [4] f->ls() > TFile** testing.root > TFile* testing.root > root [5] f->Close(); > root [6] .q > [kerrylee_at_jsc-sf-2148872 ~]$ ls -l testing.root > -rw-r--r-- 1 kerrylee kerrylee 4223 May 11 12:55 testing.root > > > I would expect that the updated file should be 330 bytes. Have I misunderstood how to remove a TKey properly? > > Thanks > Kerry > > > >Received on Fri May 19 2006 - 20:43:47 MEST This archive was generated by hypermail 2.2.0 : Mon Jan 01 2007 - 16:31:58 MET
https://root.cern.ch/root/roottalk/roottalk06/0570.html
CC-MAIN-2022-21
refinedweb
728
78.35
Deploy Ubuntu 16.04 Xenial Xerus Discussion This is not working on Xenial Xerus because there is no packages for Ubuntu 16.04 in Phusion Passenger repositores yet. You can try using the older trusty repository for now while they work up on updating their apt repository: sudo sh -c 'echo deb trusty main > /etc/apt/sources.list.d/passenger.list' nginx-extras require perlapi-5.20.2, libperl5.20 (>= 5.20.2), passenger (< 5.0.28) wich will not be installed You guys can remove that passenger repository and install the version directly from Ubuntu. Should be good enough for now, and then once Phusion updates their repo, you can switch back to it so you can continue to get updates....§ion=all We can't, because nginx from Ubuntus repositories does not have built-in passenger support. Unfortunately they haven't released a package for xenial yet. It's on their todo list, but they said by June at the latest. That's a long time from now. Here's how you can install it manually:... After changing database.yml and secrets.yml to examples, whenever I run Rails s for running server. I get error and it complains that it can't find the yml files and server will not get started. What can I do? Chirs this video is great !!! would you please publish a similar video with deploy rails on unicorn and nginx with good explain about the config! thanks ) Edit /etc/nginx/nginx.conf file and uncomment #include /etc/nginx/passenger.conf; For example, you may see this: include /etc/nginx/passenger.conf; after this: sudo service nginx restart and final check if passeger work: sudo /usr/bin/passenger-config validate-install do you see the message "Restarting nginx nginx" when you type "sudo service nginx restart" ? In my case I don't see it and Chris said that something gonna be wrong if you don't see it. strange! I am using ubuntu 16.04 and every thing is working correctly but I didn't saw anything like [ok] or "restarting nginx ... [ok]" like Chris does! weird :/ Hi Chris, when I do sudo service nginx restart it doesn't show anything, you say that this might be an error, I did sudo tail /var/log/nginx/error.log but I can't see any error there! this is the output : [ 2016-07-09 12:46:04.5741 23679/7f3c412f4700 Ser/Server.h:464 ]: [UstRouter] Shutdown finished [ 2016-07-09 12:46:04.5743 23679/7f3c47f47780 age/Ust/UstRouterMain.cpp:523 ]: Passenger UstRouter shutdown finished [ 2016-07-09 12:46:04.5939 23674/7f3bb0d6f780 age/Cor/CoreMain.cpp:967 ]: Passenger core shutdown finished 2016/07/09 12:46:05 [info] 23783#23783: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:75 [ 2016-07-09 12:46:05.7251 23790/7f341ddf1780 age/Wat/WatchdogMain.cpp:1291 ]: Starting Passenger watchdog... [ 2016-07-09 12:46:05.7624 23793/7fdc2184b780 age/Cor/CoreMain.cpp:982 ]: Starting Passenger core... [ 2016-07-09 12:46:05.7625 23793/7fdc2184b780 age/Cor/CoreMain.cpp:235 ]: Passenger core running in multi-application mode. [ 2016-07-09 12:46:05.7646 23793/7fdc2184b780 age/Cor/CoreMain.cpp:732 ]: Passenger core online, PID 23793 [ 2016-07-09 12:46:05.8046 23798/7f724a019780 age/Ust/UstRouterMain.cpp:529 ]: Starting Passenger UstRouter... [ 2016-07-09 12:46:05.8060 23798/7f724a019780 age/Ust/UstRouterMain.cpp:342 ]: Passenger UstRouter online, PID 23798 by the way when I opened /etc/nginx/nginx.conf the first time, I haven't find the lines passenger_ruby and passenger_root Hi! I had that same problem and the solution for me was to comment out this line in my /config/environments/production.rb file: "config.force_ssl = true" What was happening was that each time you connected, the rails application forced you to redirect to "https" for SSL but your SSL cert is not setup yet! Chris, Thanks so much for this tutorial. I followed along and was able to get this all working on a VirtualBox setup. However a problem i encountered was a bundler error. i had to SSH into the server and use 'which bundler' and add that path into the :default_env in deploy.rb FOR THOSE WITH THIS ERROR: set :default_env, { path: "/home/deploy/.rbenv/shims:~/.rbenv/bin:$PATH" } My question after this...Is there an easy way to run custom rake tasks, once deployed? Thanks The Capistrano part of this tutorial is quite out of date. See... I'd be happy to help rewrite that part if needed. Thanks for this very helpful tutorial, everything works as expected, except for my assets... some of my images aren't showing up in the browser, but they get deployed... I´m not sure what to do... I've been googling around and don´t seem to find a solution Make sure you're using these helpers for your static assets:... After installing NGinx it is necessary to set up and activate the firewall - otherwise you won't see even the Nginx start page. See... When I use rbenv and launch "cap production deploy" I get an error: "LoadError: cannot load such file -- capistrano/rbenv". On the local computer I use RVM and I want to set up rbenv on the remote one. Perhaps it is not possible? Do I have to have rbenv on the local computer installed? Newbie question here. I followed the tutorial and I have the rails 5 app on server. I bundle install everything but when trying to start rails (rails s -e production) I get the rails new prompt ....like I need to create a new rails app. Any tips on how to start it or what I might be missing ? You don't need to run Rails, Passenger will start the app for you automatically, that's why we set it up this way so you don't have to manage running scripts. It's automatic. I had to Generate RSA SSH Keys to make things work. The command is "ssh-keygen". See... Hi there! I have a issue here: No Rakefile found (looking for: capfile, Capfile, capfile.rb, Capfile.rb, /usr/lib/ruby/vendor_ruby/Capfile) What should I do? Great tutorial. Everything went well, but when I do cap deploy production, I get the error below. Any ideas? Thank you! cap aborted! SSHKit::Runner::ExecuteError: Exception while executing as [email protected]: passenger-config exit status: 1 passenger-config stdout: Nothing written passenger-config stderr: *** ERROR: Phusion Passenger doesn't seem to be running. If you are sure that it is running, then the causes of this problem could be one of: 1. You customized the instance registry directory using Apache's PassengerInstanceRegistryDir option, Nginx's passenger_instance_registry_dir option, or Phusion Passenger Standalone's --instance-registry-dir command line argument. If so, please set the environment variable PASSENGER_INSTANCE_REGISTRY_DIR to that directory and run this command again. 2. The instance directory has been removed by an operating system background service. Please set a different instance registry directory using Apache's PassengerInstanceRegistryDir option, Nginx's passenger_instance_registry_dir option, or Phusion Passenger Standalone's --instance-registry-dir command line argument. I had the same issue, I had missed to uncomment "include /etc/nginx/passenger.conf;" and server_name was not proper. after fixing them its working Almost there but when I run `cap production deploy` I get ``` 00:00 rbenv:validate rbenv: 2.3.1 is not installed or not found in $HOME/deploy/.rbenv/versions/2.3.1 ``` Is cap checking this on my local machine or the server? I have changed the path to just `$HOME/.rbenv` and still get the same issue. Also getting an error `There are no Phusion Passenger-served applications running whose paths begin with '/home/deploy/app_name'`. app_name is replaced with the name of my app. Seems there is another user getting this problem as well.... Thanks Chris for this video! How do we setup password authentication, if we're using this tutorial to deploy a staging version of the site? I've setup everything via this site... and yet it's not asking for passwords upon entering the site. Hmmm, double check that you restarted nginx and that went correctly. If so, it should pick that up. Maybe I missed it in the tutorial, but I had to authenticate the DigitalOcean server with GitHub (by creating a SSH key from the server). Capistrano deployment failed until I did that, as the server was denied access to the Git repository. I get a 'Permission denied (publickey) fatal: could not read from remote repo' when I 'cap production deploy'. The same keys are on DigitalOcean and Github. Pushes to github works fine. I can ssh into the digitalocean server as root or deploy users fine too. Anyone know what I've missed? Hi Chris, How I can set domain name to my server ip address to input my site domain instead of writing ip address? Thanks Make sure you added yout domain name in /etc/nginx/sites-enabled/default. After that, for digital ocean, follow these steps:.... Hope it helps ;) Edit: Don't forget to set up DO namespaces at your domain registrar panel. (tutorial link:... Just to confirm, firstly the domain need to buy in godaddy or some similar, correct? I don't think you can do this with Digital Ocean Yes, correct. You should look for any domain registrar. Some are: GoDaddy HostGator Namecheap 1&1 Name.com Network Solutions eNom Gandi Register.com A Small Orange iwantmyname Google Domains beta Hello Chris I fixed the issue I initally posted about lol. But now when I visit the URL there's no site to visit, it says unable to connect ----- Great tutorial say I'm trying to deploy a rails 5 app with ubuntu 16.04 etc... On cap production deploy I get the following error on deploy:migrate INFO [deploy:migrate] Run `rake db:migrate` DEBUG [15816e11] Running if test ! -d /home/deploy/app/releases/20170313091718; then echo "Directory does not exist '/home/deploy/app/releases/20170313091718'" 1>&2; false; fi as [email protected] DEBUG [15816e11] Command: if test ! -d /home/deploy/app/releases/20170313091718; then echo "Directory does not exist '/home/deploy/app/releases/20170313091718'" 1>&2; false; fi DEBUG [15816e11] Finished in 0.306 seconds with exit status 0 (successful). INFO [64a72276] Running ~/.rvm/bin/rvm default do bundle exec rake db:migrate as [email protected] DEBUG [64a72276] Command: cd /home/deploy/app/releases/20170313091718 && ( export RAILS_ENV="production" ; ~/.rvm/bin/rvm default do bundle exec rake db:migrate ) DEBUG [64a72276] rake aborted! DEBUG [64a72276] PG::ConnectionBad: could not connect to server: Connection refused Is the server running on host "138.68.83.124" and accepting TCP/IP connections on port 5432? DEBUG [64a72276] /home/deploy/app/shared/bundle/ruby/2.4.0/gems/activerecord-5.0.2/lib/active_record/connection_adapters/postgresql_adapter.rb:671:in `initialize' Do you have any pointers? to make it work can't seem to find similar issue on deployment server note i tried this it didn't work:... Thanks Hey! No not yet, does your passenger:restart say there is no "Passenger served application"? I solved the issue by configuring my postgreSQL to accept TCP/IP connections from my server. Check /etc/postgresql/9.5/main for postgresql.conf and pg_hba.conf for configurantions about this. Thanks for the tutorial Chris!! I'm just having an issue when trying to set the branch for the repo, it always took master and I'm using the set branch but nothing happens Hey @disqus_FdxDeiCXJb:disqus! I'm not sure how you've got it setup, but if you set the branch in your Capistrano config, that should do the trick. You can either do it in config/deploy.rb globally (it defaults to master) or inside each of the stages like config/deploy/production.rb. It's usually just set :branch, "mybranch" All the other config options are listed here:... That's the way I'm doing it, in the config/deploy.rb but when I run cap production deploy, it gets the HEAD last commit on master deploy.rb: # ask :branch, proc { `git rev-parse --abbrev-ref HEAD`.chomp } -- try this and ask for the branch, I typed develop, but neither worked set :branch, 'develop' 0:02 git:check 01 git ls-remote [email protected]:altocustos/custos_api.git HEAD 01 a64ca2f22cfaeee296f994b1475d422b179ec94a -- this is the only commit on master branch 01 01 HEAD 01 ✔ 01 [email protected] 1.825s Yep your code looks right. I think that git ls-remote is fine because Capistrano actually clones your full repo to the server (all the branches) and then when it does a "git archive branch | tar" command later to make the new release folder, it will grab the branch there. I don't think this step is gonna be the problem, so it might actually be using the right branch. Gosh!! found the problem, the app was in a folder inside the main dir, and that causes it couldn't find the gemfile and that's why I was assuming that set branch didn't work. Now I have another problem :( when it tries to precompile the assets it gives me an error, because mi app is a rails 5 api, how can I disable the assets:precompile? Awesome! :) I believe for that, you can either override the deploy:assets:precompile task with an empty task, or you can also possibly remove the web role from the server line in config/deploy/production.rb. That may also disable some other things that run on the web role, but I don't remember what those might be so it might be the best solution for an api-only app. Hi, you can create the database.yml same way as you did with secrets.yml. Create an empty file in same place where secrets.yml are and then copy the code from your local copy (ie development env). You have to set the credentials accordning to the ones you have set on production obviously :-) To create the files (assuming you are connected on server) cd into your shared/config/ folder and then type touch database.yml and touch secrets.yml i have the following error in the tutorial: /home/nicoara/.rbenv/versions/2.2.3/lib/ruby/2.2.0/rubygems/specification.rb:2112:in `raise_if_conflicts': Unable to activate capistrano-rails-1.2.3, because capistrano-2.15.9 conflicts with capistrano (~> 3.1) (Gem::ConflictError) from /home/nicoara/.rbenv/versions/2.2.3/lib/ruby/2.2.0/rubygems/specification.rb:1280:in `activate' from /home/nicoara/.rbenv/versions/2.2.3/lib/ruby/2.2.0/rubygems.rb:198:in `rescue in try_activate' from /home/nicoara/.rbenv/versions/2.2.3/lib/ruby/2.2.0/rubygems.rb:195:in `try_activate' ... my gemfile has: group :development do gem 'capistrano', '~> 3.7', '>= 3.7.1' gem 'capistrano-rails', '~> 1.2' gem 'capistrano-passenger', '~> 0.2.0' gem 'capistrano-rbenv', '~> 2.1' my capfile has: # Capfile require 'capistrano/rails' require 'capistrano/passenger' # If you are using rbenv add these lines: require 'capistrano/rbenv' set :rbenv_type, :user set :rbenv_ruby, '2.4.0' # Load DSL and set up stages require "capistrano/setup" # Include default deployment tasks require "capistrano/deploy" what to do? Thanks I'm planning to do a fresh OS installation on my local machine, so I have a question: is there a way to backup my existing ssh keys that I use to login to the server or my repos, or do I have to generate and add a new key in this case? if thats the case I guess all I need to do is to generate the new key on my fresh install and ssh-copy-id to the server again?? Meant to reply sooner, but all your ssh keys are located in ~/.ssh so you can backup that folder and just replace it on your new install to use the same keys. After getting through the process, when I tried to deploy my project, I kept getting: "Your Ruby version is 2.3.1, but your Gemfile specified 2.4.1" Logged into the server as deploy, ruby version was 2.4.1. After a lot of troubleshooting, I su'd into root, and found that root had ruby 2.3.1 as it's version. I figured i must have screwed something up, so I started fresh.... this time watching the output of each command closely. It seems that the Passenger install (as deploy user) imported ruby 2.3.1... 2.4.1 was already set up under deploy... and this 2.3.1 did not seem to change anything for deploy. However, root, which had no ruby prior to passenger, was now set to 2.3.1. I've updated root's ruby to be 2.4.1 using the rbenv method as root so that I could get something deployed to practice on... but... I'm confused as to how and why 2.3.1 got there, and why it's interfering with the deploy command when i try to push an update... Chris this tutorial is very helpful. Can i deploy Worpress and Rails both in a server. Like - WordPress for blogging - ( ) and rails application is deployed to - (). Please help me. Error 403 The passenger_ruby and passenger_root are configured conrrectly. :/ [EDIT] Seem that my passenger is not running correctly. Can anybody help me? Chris, Incredibly useful tutorial! For a small bit of extra config and command line hacking, I get to deploy my app to a server I control rather than an option like Heroku. Thanks for putting this together! I ran into an issue installing this on nginx using rbenv. If you try to cap production deploy and get an error with passenger. Try opening /etc/nginx/passenger.conf and add this to the file. passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini; to get your specific passenger root type passenger-config --root Lovely tutorial... how do you launch the rails console to make a few changes to the application...? Thank you for this guide! One question I have is how do I load the ruby app if I am not able to use git? When I run cap production deploy it fails trying to read from remote repository because it is calling git and I am not using git. I am new to all of this so sorry for my ignorance. INFO [01ba63f0] Running /usr/bin/env git ls-remote HEAD as [email protected] DEBUG [01ba63f0] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.4.1" GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/git-ssh-orderentry-producti..." ; /usr/bin/env git ls-remote HEAD ) DEBUG [01ba63f0] fatal: 'HEAD' does not appear to be a git repository DEBUG [01ba63f0] fatal: Could not read from remote repository. Thanks a lot for this post! Simply I use it every new project I have since long time ago. Perfect stack and everything works fine following the instructions. Could someone help me , iv'e been at this for a week now i get this error when i do a cap production deploy rbenv: bundle: command not found (Backtrace restricted to imported tasks) cap aborted! SSHKit::Runner::ExecuteError: Exception while executing as [email protected]: bundle exit status: 127 bundle stdout: Nothing written bundle stderr: rbenv: bundle: command not found SSHKit::Command::Failed: bundle exit status: 127 bundle stdout: Nothing written bundle stderr: rbenv: bundle: command not found Tasks: TOP => deploy:updated => bundler:install (See full trace by running task with --trace) The deploy has failed with an error: Exception while executing as [email protected]: bundle exit status: 127 bundle stdout: Nothing written bundle stderr: rbenv: bundle: command not found Thank you very much for the tutorial. But..I have a problem (ruby 2.4.2, rails 5.1.4). When i run "cap production deploy". I'm having an error: _______________________________________________________________________________________________ 00:09 deploy:assets:backup_manifest 01 mkdir -p /home/user/app/releases/20170925040319/assets_manifest_backup ✔ 01 [email protected] 0.047s WARN Rails assets manifest file not found. (Backtrace restricted to imported tasks) cap aborted! SSHKit::Runner::ExecuteError: Exception while executing as [email protected]: Rails assets manifest file not found. Capistrano::FileNotFound: Rails assets manifest file not found. Tasks: TOP => deploy:assets:backup_manifest (See full trace by running task with --trace) The deploy has failed with an error: Exception while executing as [email protected]: Rails assets manifest file not found. _______________________________________________________________________________________________ Google did not help me. Can you help me? Thanks! Hi Chris, thanks for brilliant tutorial. I followed it about an year ago and every thing worked smoothly. Until Today I saw that the production.log on the server side isn't updating. It seems to be stuck in 23/5 2017 :) and havn't updated since. So I was thinking, Is there maybe something in the production.rb setup or the sim-links that I could alter to get the production.log updated again? please see my question here about this problem:... Thanks for this great tutorial and My cap deploy went fine without any error and I removed the default nginx page. After I restarted the nginx again I can't see my rails app running. I got This Site can't be reached. Can you help me on this please? I got Connection refused is the error. I have use Vue.js and user *vue, the single file component. so, how to deploy it? Anyone know?
https://gorails.com/forum/deploy-ruby-on-rails-on-ubuntu-16-04-xenial-xerus
CC-MAIN-2020-50
refinedweb
3,585
59.19
NAME libinn - InterNetNews library routines SYNOPSIS #include "inn/libinn.h" char * GenerateMessageID(domain) char *domain; void HeaderCleanFrom(from) char *from; char * HeaderFind(Article, Header, size) char *Article; char *Header; int size; FILE * CAopen(FromServer, ToServer) FILE *FromServer; FILE *ToServer; FILE * CAlistopen(FromServer, ToServer, request) FILE *FromServer; FILE *ToServer; char *request; void CAclose() struct _DDHANDLE * DDstart(FromServer, ToServer) FILE *FromServer; FILE *ToServer; void DDcheck(h, group) DDHANDLE *h; char *group; char * DDend(h) DDHANDLE *h; void close; int NNTPsendarticle(text, ToServer, Terminate) char *text; FILE *ToServer; int Terminate; int NNTPsendpassword(server, FromServer, ToServer) char *server; FILE *FromServer; FILE *ToServer; void Radix32(value, p) unsigned long value; char *p; char * ReadInFile(name, Sbp) char *name; struct stat *Sbp; char * ReadInDescriptor(fd, Sbp) int fd; struct stat *Sbp;36 which limits the format of the header. HeaderFind searches through Article looking for the specified Header. Size should be the length of the header name. It returns a pointer to the value of the header, skipping leading whitespace, or NULL if the header cannot be found. Article should be a standard C string containing the text of the article; the end of the headers is indicated by a blank line -- two consecutive \n characters. CAopen and CAclose provide news clients with access to the active file; the ``CA'' stands for Client Active. CAopen opens the active file for reading. It returns a pointer to an open FILE, or NULL on error. If a local or NFS-mounted copy exists, CAopen will use that file. The FromServer and ToServer parameters should be FILE's connected to the NNTP server for input and output, respectively. See NNTPremoteopen or NNTPlocalopen, below. If either parameter is NULL, then CAopen will just return NULL if the file is not locally available. If they are not NULL, CAopen will use them to query the NNTP server using the ``list'' command to make a local temporary copy. The CAlistopen sends a ``list'' command to the server and returns a temporary file containing the results. The request parameter, if not NULL, will be sent as an argument to the command. Unlike CAopen, this routine will never use a locally-available copy of the active file. CAclose closes the active file and removes any temporary file that might have been created by CAopen or CAlistopen. CloseOnExec file is consulted to determine the proper value for the Distribution header after all newsgroups have been checked. DDstart begins the parsing. It returns a pointer to an opaque handle that should be used on subsequent calls. The FromServer and ToServer parameters should be FILE's connected to the NNTP server for input and output, respectively. If either parameter is NULL, then an empty default will ultimately be returned if the file is not locally available. DDcheck should be called with the handle, h, returned by DDstart and a newgroups, group, to check. It can be called as often as necessary. DDend releases any state maintained in the handle and returns an allocated copy of the text that should be used for the Distribution header. SetNonBlocking enables (if flag is non-zero) or disables (if flag is zero) non-blocking. See moderators(5) for details on how the address is determined. GetModeratorAddress does no checking to see if the specified group is actually moderated. The returned value points to static space that is reused on subsequent calls. The FromServer and ToServer parameters should be FILE's connected to the NNTP server for input and output, respectively. If either of these parameters is NULL, then an attempt to get the list from a local copy is made. GetResourceUsage fills in the usertime and systime parameters with the total user and system. It returns -1 on failure, or zero on success. FromServerp and ToServerp will be filled in with FILE's which can be used to communicate with the server. Errbuff can either be NULL or a pointer to a buffer at least 512 bytes long. If not NULL, and the server refuses the connection,. NNTPsendarticle writes text on ToServer using NNTP conventions for line termination. The text should consist of one or more lines ending with a newline. If Terminate is non-zero, then the routine will also write the NNTP data-termination marker on the stream. It returns -1 on failure, or zero on success. NNTPsendpassword sends authentication information to an NNTP server by finding the appropriate entry in the passwd.nntp file. Server contains the name of the host; ``innconf->server'' will be used if server is NULL. FromServer and ToServer should be FILE's that are connected to the server. No action is taken if the specified host is not listed in the password file. Radix32 converts the number in value into a radix-32 string into the buffer pointed to by p. The number is split into five-bit pieces and each pieces is converted into a character using the alphabet 0..9a..v to represent the numbers 0..32. Only the lowest 32 bits of value are used, so p need only point to a buffer of eight bytes (seven characters and the trailing \0). ReadInFile reads the file named name into allocated memory, appending a terminating \0 byte. It returns a pointer to the space, or NULL on error. If Sbp is not NULL, it is taken as the address of a place to store the results of a stat(2) call. ReadInDescriptor performs the same function as ReadInFile except that fd refers to an already-open file. HashMessageID returns hashed message-id using MD5. EXAMPLES char . */ CloseOnExec)
http://manpages.ubuntu.com/manpages/oneiric/man3/libinn.3.html
CC-MAIN-2014-35
refinedweb
926
63.29
There are two main ways to get the length of a video file in C#: - Use a Shell object (found in shell32.dll in the system32 directory) to get the video length from the file metadata. - Instantiate a WindowsMediaPlayer object (found in wmp.dll if WMP is installed on your machine), load the file into it and then get the length from the corresponding object property. The advantage of the first method is that WMP does not need to be installed on your machine for it to function. This makes it the only option for many servers as well as systems running an “N” edition of Windows which does not contain WMP by default. The second method, however, is simpler to use and on my machine offered a very slight performance advantage (~5%) when looping through a collection of 20 files. Here are both methods: Method 1 - Shell Don’t forget to create a reference to shell32.dll found in the system32 folder. using Shell32 // ... var shell = new Shell(); var folder = shell.NameSpace(@"filepath"); foreach (FolderItem2 item in folder.Items()) { if (item.Name == "filename") { Console.WriteLine(TimeSpan.FromSeconds(item.ExtendedProperty("System.Media.Duration") / 10000000)); } } Marshal.ReleaseComObject(folder); Marshal.ReleaseComObject(shell); You may need to apply the [STAThread] attribute to the entry point of your application for this to function properly. Method 2 – WMP Don’t forget to create a reference to wmp.dll found in the system32 folder. using WMPLib; // ... var player = new WindowsMediaPlayer(); var clip = player.newMedia(filePath); Console.WriteLine(TimeSpan.FromSeconds(clip.duration));
http://www.levibotelho.com/development/get-the-length-of-a-video-in-c/
CC-MAIN-2018-34
refinedweb
255
58.69
Hi, I am new to coding and was trying to implement AddTwo code using the solution that was provided already. I am getting this error while running the code. Can someone suggest ta way to correct this Here is the code:: //Definition for singly-linked list. public class ListNode { int val; ListNode next; ListNode(int x) { val = x; } } public class Solution { public ListNode addTwoNumbers(ListNode l1, ListNode l2) { //Create a temporary node to store and return a link to the result ListNode sumList = new ListNode(0); //Create nodes that store values of L1 and L2 respectively ListNode firstList = l1 , secondList = l2; //create a current node that would be used to traverse the list //Initialize current node to the present head of the list ListNode currentList = sumList; //Declare carry that will store value 0 or 1 depending upon the addition operation int carry = 0; //Traverse the ListNode until either of the lists reaches its end while( firstList!=null || secondList!= null){ //Store values present in the first and second list elements int num1 = (firstList != null) ? (firstList.val) : 0; int num2 = (secondList !=null) ? (secondList.val) : 0; //Perform addition int sum = num1 + num2 + carry; //Update carry carry = sum/10; //Create next node of the Current list and give its initial value //This is the actual result of the addition performed currentList.next = new Listnode( sum%10 ); //Update currentList node to move to the next element currentList = currentList.next; //Move the first and second lists to the next element if they are not at the end if( firstList != null ) { firstList = firstList.next; } if( secondList != null ) { secondList = secondList.next; } //For the next addition, provide value of carry that was thee output of the previous addition //Carry can be either 0 or 1 if( carry == 1) currentList.next = new ListNode (carry); } //Return the output return sumList.next; } }
https://discuss.leetcode.com/topic/60311/compile-error-line-3-error-class-listnode-is-public-should-be-declared-in-a-file-named-listnode-java
CC-MAIN-2017-34
refinedweb
300
59.23
QGeoSatelliteInfo Since: 1.0 #include <QtLocationSubset/QGeoSatelliteInfo> The QGeoSatelliteInfo class contains basic information about a satellite. Overview Public Types Index Public Functions Index Public Types Defines the attributes for the satellite information. Elevation The elevation of the satellite, in degrees. Azimuth The azimuth to true north, in degrees. - Elevation - - Azimuth - Public Functions Creates a satellite information object. Creates a satellite information object with the values of other. Destructor. qreal Returns the value of the specified attribute as a qreal value. hasAttribute(), setAttribute() bool Returns true if the specified attribute is present in this update. bool Returns true if any of the information for this satellite is not the same as that of other. QGeoSatelliteInfo & Assigns the values from other to this object. bool Returns true if all the information for this satellite are the same as those of other. int Returns the PRN (Pseudo-random noise) number, or -1 if the value has not been set. void Removes the specified attribute and its value. void Sets the value for attribute to value. void Sets the PRN (Pseudo-random noise) number to prn. The PRN number can be used to identify a satellite. void Sets the signal strength to signalStrength, in decibels. int Returns the signal strength, or -1 if the value has
http://developer.blackberry.com/native/reference/cascades/qtmobilitysubset__qgeosatelliteinfo.html
CC-MAIN-2015-22
refinedweb
212
60.41
apr_general.h defines bzero() if APR doesn't think we have bzero(). This causes a problem on a SVR4 (see thread "re: APACHE20 problem with string.h and strings.h" on new-httpd for details). No code in APR uses bzero() and only two places in Apache use bzero(). I'd prefer to get rid of our macro which defines bzero(), and of course change the Apache code to use memset() instead. We assume the existence of memset() in multiple places. Any disagreements before I commit? This whole section of apr_general.h is a little disturbing to me, with us defining APIs which aren't in our namespace. Also, we do stuff like #if !define(APR_HAVE_xyz) #define xyz(a,b) abc(b,a) #endif At this point, we always have xyz but the app has been told otherwise. Silly details I guess, but they are inconsistent with the rest of APR. But I'm just suggesting zapping bzero() for now. It clearly isn't important enough a routine for us to pretend to be libc. -- Jeff Trawick | [email protected] | PGP public key at web site: Born in Roswell... married an alien...
http://mail-archives.apache.org/mod_mbox/apr-dev/200101.mbox/%[email protected]%3E
CC-MAIN-2017-13
refinedweb
192
78.55
* A friendly place for programming greenhorns! Big Moose Saloon Search | Java FAQ | Recent Topics | Flagged Topics | Hot Topics | Zero Replies Register / Login JavaRanch » Java Forums » Java » Beginning Java Author Overloaded Constructor Methods Steve Jensen Ranch Hand Joined: Sep 23, 2002 Posts: 126 posted May 08, 2003 12:28:00 0 Folks, this is really a follow-up to the post I made a couple of days ago, which is:- Now, the code below is another way to implement the program. But this time, it uses overloaded constructor methods instead of overloaded buildRect() methods. import java.awt.Point; class MyRect2 { int x1 = 0; int y1 = 0; int x2 = 0; int y2 = 0; MyRect2(int x1, int y1, int x2, int y2) { this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2; } MyRect2(Point topLeft, Point bottomRight) { x1 = topLeft.x; y1 = topLeft.y; x2 = bottomRight.x; y2 = bottomRight.y; } MyRect2(Point topLeft, int w, int h) { x1 = topLeft.x; y1 = topLeft.y; x2 = (x1 + w); y2 = (y1 + h); } void printRect() { System.out.print("MyRect: <" + x1 + ", " + y1); System.out.println(", " + x2 + ", " + y2 + ">"); } public static void main (String[] arguments) { MyRect2 rect; System.out.println("Calling MyRect2 with coordinates 25, 25, 50, 50:"); rect = new MyRect2(25, 25, 50, 50); rect.printRect(); System.out.println("***"); System.out.println("Calling MyRect2 with points (10,10), (20,20):"); rect = new MyRect2(new Point(10,10), new Point(20,20)); rect.printRect(); System.out.println("***"); System.out.print("Calling MyRect2 with 1 point (10,10)"); System.out.println(" width (50) and height (50):"); rect = new MyRect2(new Point(10,10), 50,50); rect.printRect(); System.out.println("***"); } } Now, the first thing i've noticed is that the constructor methods DO NOT return a value - which is what i'd expect for a constructor method. Thing is, if they don't return a value, then how does the program work??? What I mean is, each constructor method receives a set of parameters. And then what??? Looking at the code, it would appear that without returning anything, nothing else can hapen. Could somebody please enlighten me. Cheers in advance, folks. John Bonham was stronger, but Keith Moon was faster. jim gotti Ranch Hand Joined: Jul 02, 2002 Posts: 36 posted May 08, 2003 12:38:00 0 constructors set parameters for the object you are creating. What constructor is used depends on the number and type of parameters used when the object is created. look at the line rect = new MyRect2(25, 25, 50, 50); this has 4 parameters, which makes a call to this constructor MyRect2(int x1, int y1, int x2, int y2){ this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2; } Now you have a rectangle object (rect) with the 4 corners set to 25, 25, 50, 50. the next line calls the printRect() method on using that object. then a few lines down you have rect = new MyRect2(new Point(10,10), new Point(20,20)); which makes a call to the constructor with 2 parameters MyRect2(Point topLeft, Point bottomRight) { x1 = topLeft.x; y1 = topLeft.y; x2 = bottomRight.x; y2 = bottomRight.y; } ...then you call the printRect() method on that object again. so the constructors do not return a value, but instead set parameters when an object is created. (please correct me if i am wrong or if i use the wrong terminology...i'm still learning myself) [ May 08, 2003: Message edited by: jim gotti ] Tom Purl Ranch Hand Joined: May 24, 2002 Posts: 104 posted May 08, 2003 12:52:00 0 Yes, constructors in Java don't return vales. Ever. Typically, you instantiate (create) an object in Java like this: Whoosit w = new Whoosit( 7, "Name" ); Your object is now instantiated. Typically, if you wanted that object to actually do something useful (like return a value or calculate a value or spawn a thread or whatever) you would call a method of that object. int sum = w.getSum( "7", "3" ); // ...where w is your Whoosit object. Does this answer your question? You seem to have a non-Java coding background. Just out of curiosity, what language are you most familiar with? Tom Purl<br />SCJP 1.4 Barry Gaunt Ranch Hand Joined: Aug 03, 2002 Posts: 7729 posted May 08, 2003 13:02:00 0 In addition to what Jim wrote: Steve, notice that Jim did not use the word "method" with constructor. That's because they are not strictly methods because they have no return value. A constructor instantiates ( "makes" ) an object using its class as a "template". Constructors are normally used together with the new keyword. So Myclass mc = new MyClass(); can be read as "make me a new object using the class MyClass as a template, and put a handle (a reference) to the newly created object in the variable mc which is of type: reference to MyClass". By having constructors with parameters you can instantiate objects whose "insides" are different to other objects of the same kind (class). If still uncertain, please keep asking... -Barry PS Hi Tom, didn't see you there - but the more the merrier [ May 08, 2003: Message edited by: Barry Gaunt ] Ask a Meaningful Question and HowToAskQuestionsOnJavaRanch Getting someone to think and try something out is much more useful than just telling them the answer. Barry Gaunt Ranch Hand Joined: Aug 03, 2002 Posts: 7729 posted May 10, 2003 00:55:00 0 Steve, are you receiving this? Please give us feedback. [ May 10, 2003: Message edited by: Barry Gaunt ] Steve Jensen Ranch Hand Joined: Sep 23, 2002 Posts: 126 posted May 11, 2003 16:34:00 0 Err, yeah, I've sort of understod it, thanks. But this has lead me to a follow-up question - just why should we bother with constructors at all then. The cattledrive assignments (for example), that i've completed - have worked fine without me needing to use(??) constructors. What I mean is, if constructors don't actually do anything - why use them?? :roll: jim gotti Ranch Hand Joined: Jul 02, 2002 Posts: 36 posted May 11, 2003 19:17:00 0 Well, they do do something. they initialize a new object...set its parameters. for example. take the Vector API if you just make a new constructor. Vector v = new Vector(); //notice no parameters this calls the default constructor, noted below from the API. Constructor Summary Vector() Constructs an empty vector so that its internal data array has size 10 and its standard capacity increment is zero. However if you create a new Vector with the following: Vector v = new Vector(50); it will call the constructor that takes an int as its parameters, noted below.... Vector(int initialCapacity) Constructs an empty vector with the specified initial capacity and with its capacity increment equal to zero. ....and so on. So see? the constructor do actually do something...just sometimes when people make a new object, such as vectors, linked lists, arrays...they make them with no parameters and Java notices and it just makes a call to the default constructor. Also, if you write code , you do not need to define a constructor, java will use its default constructor...however if you define ONE constructor, then you have to take total control of them all(if more than one). This is why you have never had to define the constructors when you do the cattledrive examples. [ May 11, 2003: Message edited by: jim gotti ] I agree. Here's the link: subject: Overloaded Constructor Methods Similar Threads What is going wrong??????? Help!!! drawing arrows drawRect() issues Use of the "This" keyword, and "Point" objects. java problem All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/393739/java/java/Overloaded-Constructor-Methods
CC-MAIN-2014-42
refinedweb
1,291
66.03
26 July 2012 05:13 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The average realised price of methanol for the April-June 2012 period rose to $384/tonne, up by 5.8% compared with the same period a year earlier, the company said in a statement. Its sales volume for the March-June period, however, fell slightly to 1.85m tonnes from 1.87m tonnes in the same period a year earlier, Methanex said. The company said its net profit stood at $74m in the first half of 2012, slightly lower than the $75m posted in the same period last year. The company sold 3.66m tonnes of methanol in the January-June period, a drop of 1.6% compared with the same period in 2011, but the average realised price of the product increased by 5% year on year to $383/tonne, it said. “Overall methanol demand has remained good and the pricing environment has been relatively stable, despite some demand softness in certain derivatives,” said Methanex president and CEO Bruce Aitken in the statement. “Industry demand growth is expected to significantly exceed new capacity additions over the next few years and we have a number of growth projects in place to capitalise on the positive industry conditions,” Aitken
http://www.icis.com/Articles/2012/07/26/9581091/canadas-methanex-q2-net-profit-up-27-on-higher-methanol-prices.html
CC-MAIN-2014-35
refinedweb
210
63.29
Additional Titles Other Devvy Articles: Vote Fraud: What They Aren't Telling You Forced Mental Health Screening for Your Children More Devvy Articles: By: Devvy June 4, 2007 � 2007 - NewsWithViews.com "I am concerned for the security of our great nation, not so much because of any threat from without, but because of the insidious forces working from within." - General Douglas MacArthur. The American people find themselves in a quandary over the issue of corporatism, the support of free enterprise and refusing to fund their own destruction by supporting these multi national corporations who have and are selling out America. I have always been a staunch supporter of a free enterprise system in which Americans can choose who they decide to do business with or buy products from any company legally operating in this country. Competitiveness not only brings affordability and quality, but it also brings out the best in Americans. There's no doubt, unless you're in total denial, that most of the U.S. Senate and Bush no longer are making any pretense of representing the American people. I wrote a column March 6, 2006, titled No More Pretense of Representation: ." [Advertisement: Read, "The Coming Battle" first published in 1899, it predicts how corporations and the money power that grow up around them will deprive Americans of their freedom] [Advertisement: Read, "The Coming Battle" first published in 1899, it predicts how corporations and the money power that grow up around them will deprive Americans of their freedom] This week the counterfeit U.S. Senate will once again take up the fight over S.1348, the massive amnesty and terrorist facilitation act being sold as immigration reform. Two weeks ago a friend sent e-mail; he lives in Arlen Specter's section of the state. The staffer on the phone in the district office said opposition to S1348 was running 9-1, but Specter was going to vote for it anyway. Last week, Comrade Diane Feinstein, who should be in a federal prison except there are no men with the stones in this country to prosecute her, thinks 8,000+ calls in opposition to S 1348 isn't worthy of her time; her office said it would take at least 30,000 for this Red lover to take the matter seriously. Two weeks ago constituents in Jon Kyle's district were in opposition 20 to 1, yet Kyle is going to vote for it anyway. So you see, these senators serving in office under a law that doesn't exist are no longer making any pretense of representing the people of their districts. It's all about money. The counterfeit U.S. Senate has no fear of the states because the states have no representation in Congress. They don't fear the gutless cowards who serve as governors for the four border states and they do not fear we the people. These career shysters in Congress take millions of dollars from corporate America and special interest groups who own their souls. Those who want S1348 passed are those who profit from it and they pay well to see it gets passed. Whether it's pharmaceuticals or illegals, the major corporations own these scoundrels. This blanket amnesty will destroy this republic and send a clear message: go ahead and invade America, steal jobs, murder and rape America's children, kill them while driving drunk and by all means, terrorists welcome! If S 1348 isn't enough, another bill is sitting and if it ever becomes law, you will be forced to fund a racist, seditious, organization called La Raza; they hate America and gring." Never mind that this bill is unconstitutional, there's no money in the treasury. It's overdrawn almost $9 TRILLION dollars. This congressman wants to write another hot check and slap more debt on the back of your children and grand babies. If you've done enough research on this amnesty bill, corporate foot prints are all over it, i.e., the Z-visas. We the people have choices, but they aren't easy ones to make. If we boycott these companies, they lose business and our fellow Americans find themselves out of work. If we own stock in these corporations selling out America and buy their products and services, we are funding our own destruction. I can't tell you what to do, but I know as sure as the sun rises, that we must make these short term sacrifices in order to send a clear message to any company doing business in this country: if you support S.1348, if you out source American jobs to Communist China and other foreign nations, and if you continue to fund the destruction of this country, I will NOT buy your goods and services. I will go without. I have and will continue to do so. I don't use google, I use scroogle.org. I don't buy anything from the corporations below or use their services. I refuse to speak with any foreign operator for any company that has sold out my fellow Americans and their good jobs for cheap labor in India, Communist China or Mexico. China Mart (Walmart) employs 100 Chinese workers to 1 US worker. Slave labor in foreign countries. I will not patronize them, their products or own their stock: Wal-Mart Stores Inc., Exxon Mobil Corp., General Motors Corp., Ford Motor Co., General Electric Co., ChevronTexaco Corp., ConocoPhillips, Ctigroup Inc., International Business Machines Corp., American International Group, Inc., Hewlett-Packard Co., Verizon Communications Inc., The Home Depot Inc., Berkshire Hathaway Inc., Altria Group Inc., McKesson Corp., Cardinal Health Inc., State Farm Insurance Cos., The Kroger Co., Fannie Mae, The Boeing Co., AmerisourceBergen Corp., Target Corp., Bank of America Corp., Pfizer Inc., J.P. Morgan Chase & Co., Time Warner Inc., The Procter & Gamble Co., Costco Wholesale Corp., Johnson & Johnson, Dell Inc., Sears Roebuck and Co., SBC Communications Inc., Valero Energy Corp, Marathon Oil Corp., MetLife Inc., Safeway Inc., Albertson's Inc., Morgan Stanley, Medco Health Solutions, United Parcel Service Inc., Atlanta, J.C. Penney Co. Inc., The Dow Chemical Co., Midland, Mich., Walgreen Co., Deerfield, Ill., Microsoft Corp., TheAllstate Corp., Lockheed Martin Corp., Wells Fargo & Co., Lowe's Cos. Inc., Walgreen Co., Microsoft Corp., and let's not forget Halliburton and the other companies making billions off the invasion of Iraq and the blood of American soldiers. That's just a short list of corporations that own Congress, but they are powerful because Americans enrich them with consumer dollars. If Americans would simply change their buying habits, we could turn things around in a relatively short period of time. Most won't because they can't do without "things" or they won't take the time to buy Made in America. A tragedy for OUR country. Another alarming fact: "...the U.S. government revealed for the first time how much of its classified intelligence budget is spent on private contracts: a whopping 70 percent." See The corporate takeover of U.S. intelligence. I will not be intimidated by propaganda from slimy, career politicians like John McCain that rounding up and deporting these criminals who break our immigration laws will cause riots in this country. John McCain is a coward, too afraid to begin a real round up of the illegals already here and getting them deported as President Eisenhower did. This is one of the major reasons I've been a staunch supporter of Dr. Edwin Vieira's effort to revitalize the militia under the state legislatures. Last week an article on this very subject was penned by Lionel Waxman of Inside Tucson Business: ." Did I not warn of this last year in my column, When illegals go berserk will your state be prepared? I did. America has the right to restrict immigration for the good of our country. We have the right to deport any illegal alien, regardless of country of origin, as soon as they are identified. This isn't racism or about civil rights. It's about our laws, so junk the rhetoric. Too many employers in this country don't want our laws upheld, they want cheap labor. They don't care how many Americans are slaughter by drunken illegals on our roads. They don't care how many Americans lose their jobs in favor of cheap labor. They don't care about terrorists crossing our borders. They care nothing about America, only how rich they can become. And, if you own stock in these corporations like Microsoft, Dell and all the rest who have sold out the American worker, that dividend check drips with the blood of innocents: Unlicensed DUI 'illegal' kills mom, 2 children Half of family wiped out in Christmas Eve crash, December 24, 2006 "A suspected illegal alien from Mexico is being held in the Salt Lake County jail in lieu of $500,000 after running a red light and broad siding a family of six, killing three, on Christmas Eve. Carlos Prieto was arraigned this morning and charged with three counts of automobile homicide, three counts of driving under the influence and driving without a license...." Go thank Swift & Co., they hire illegals in Utah. But, wait! Their concern is how much it will cost them to hire Americans: "Swift & Co., the third-largest U.S. beef and pork producer, said federal immigration raids at the company's plants, including one in Utah, in December will cost it between $45 million and $50 million, more than previously estimated...lost production from the raids and training of replacement workers would cost about $30 million in the fiscal year through May 27. Swift said Friday in a statement that all four of its beef plants have returned to full staffing after about 950 employees were removed for immigration violations." That's how much Swift & Co., thinks of half a family wiped out by a drunk illegal. If they were diligent in their hiring practices to begin with, they would have figured out how easy it is to nail an illegal applying for work. There is NO way to process 20 million illegal aliens under this phony "reform" bill. It's a smokes screen. As I have written repeatedly: Illegal aliens have NO rights. No right to any job in this country, no right to free medical treatment, education or anything else. They are not U.S. citizens. They have not earned citizenship, they have slithered across the border like thieves in the night demanding a free ride. If this amnesty bill is passed, the middle class will be living at poverty level within five years. There is no way to verify when these law breakers sneaked into the country. They lied and cheated to get here and they will lie through their teeth to stay here. How, pray tell, can an already broken system process approximately 20 MILLION applications when the very bureaucratic behemoths who are supposed to do this verification work are already so corrupt? Michael Hampton said it well last December when he wrote: "And nowhere is the corruption more evident than within Immigration and Customs Enforcement, where the reports of crime conducted by ICE employees eclipse the management report summaries....But the biggest section of the report was for Immigration and Customs Enforcement, which is charged with protecting the nation's borders, where dozens of agents were prosecuted for various crimes such as harboring fugitives, allowing illegal aliens and drugs to pass through border checkpoints, bribery, workers compensation fraud, identity theft, and sexual assault. "Transportation Security Administration employees, including federal air marshals, found themselves arrested for stealing from passengers, child pornography, money laundering, and drug smuggling. U.S. Citizenship and Immigration Services, responsible for throwing red tape in front of people who mistakenly think this is a free country and still want to immigrate here, was not immune either. Some of their officers got in trouble for harboring illegal aliens, forging passports, visas and green cards for aliens, or just issuing them outright when they should not have, and sexual assault." The full report from the Department of Fatherland Security can be read here. Anyone who believes upwards of 20 MILLION illegal aliens are simply going to "go home," then come back to get a pat on the head and a "path to citizenship" lives in a dream world. Watch this short video. THIS is what Congress wants to import more of and it WILL reach your city. Do you want these illegals to remain here and import another 100 million of them if S.1348 becomes law? Forget e-mails, get on the phone and tell your counterfeit U.S. Senator: NO to S.1348, period. You can obtain their DC and district office phone numbers here: Important related information: 1. Sen. John McCain's Amnesty Pill For Illegal Aliens 2, Crime Victims of Illegal Aliens 3, Blackout On Violent Illegal Alien Invasions of Schools 4, Former U.S. Border Patrol Agent predicts violence 5, Hide what's in your heart today 6, We hire illegal aliens 7, It's okay for illegals to steal your identity! 8, You can buy Made in USA - Support American workers 9, Listen to Devvy's commentaries. But the violence in northern Mexico is not stopping at the border. It's headed this way and a lot of Tucsonans know it. It is crossing the border because there is little to stop it.
https://www.newswithviews.com/Devvy/kidd275.htm
CC-MAIN-2022-21
refinedweb
2,240
63.8
Pymakr 1.0.0.5b does not connect to LoPy via WiFi I've managed to update the LoPy firmware using the standalone updater. I was able to connect to the LoPy with 192.168.4.1 with PyMakr on Kubuntu Xenial. Then I've altered the boot.py to connect to my local WiFi network. The LoPy logs in, I can see it and connect to it by using telnet or FTP. But I cannot connect to it via PyMakr neither by using its IP nor its name ("ESP"). Is there anything else I should look out for? Edited to make the title clearer. @peter thx this also worked for me now, the other example from the lopy documentation somehow did not work for me, so thx for sharing. @DarkHawk, I am using this booty.py to connect to my local WLAN: import machine from network import WLAN wlan = WLAN(mode=WLAN.STA) nets = wlan.scan() for net in nets: if net.ssid == 'MYSSID': wlan.connect(net.ssid, auth=(net.sec, 'my-wlan-key'), timeout=5000) while not wlan.isconnected(): machine.idle() break It is pretty standard as I've just adapted an example I've found somewhere around here. - Keptenkurk last edited by @peter ahh, sorry I missed the part with "wifi", but it's also the same here. using pymakr I cannot connect via usb or wifi (AP mode). but telnet/ftp via wifi works, direct connect on terminal via usb also works fine. it's just pymakr that didn't want to connect. but regarding wifi, wifi is only working in AP mode, If I use the example to connect to my own wifi, the lopy tells me it succeeded, but I can't connect or even ping it. @peter could you share your boot.py example? I'm also curious how to set it up to use dhcp. There are no other sessions open. In fact I try to connect to the LoPy with PyMakr first. Only when this has failed, I checked it by using telnet or ftp. Pymakr 1.0.0.b5 connects to LoPy as expected if no other sessions opened. The telnet server only allows one connection at a time. Did you disconnect any other Telnet clients before trying with Pymakr? Please note that my request did not raise an issue about connecting the LoPy via USB at all. I did not even try to connect it via ttyUSB, so I can not say if this would work or not. There is no physical connection between my PC and the LoPy. My question exclusively pertains to the problem that PyMakr 1.0.0.5b is unable to connect via WiFi to the LoPy while I can reach it via telnet and ftp from the exact same machine without any problems. same her pymakr (on mac) is not able to connect to the lopy via usb, not the old pymakr and als not the new version. Thank you for trying to help. As I've mentioned, I did manage to update the firmware with the standalone updater. That is not the issue. I also do not have another Arduino like device connected via any ttyUSB. In fact, the LoPy is not connected by USB at all. That is also not the issue. I can connect to it via WiFi. It gets an IP adress by my DHCP server. I can ping it, telnet to it and ftp to it. PyMakr however tries to connect but fails. That is my problem. - bartvrancken last edited by Do you have ANY other ardiono device connected? I had a TTN Uno connected to it borks the IDE also try using the standalone FW updater to get up to date first.
https://forum.pycom.io/topic/66/pymakr-1-0-0-5b-does-not-connect-to-lopy-via-wifi/1
CC-MAIN-2019-13
refinedweb
624
84.78
On Tue, Sep 30, 2008 at 7:01 PM, Karl Fogel <kfogel_at_red-bean.com> wrote: > Michael Haggerty <mhagger_at_alum.mit.edu> writes: > > [Please forgive me for spamming multiple mailing lists. Followups > > please to dev_at_cvs2svn.tigris.org.] > > Not spammy at all, I think. This is totally on-topic for those lists. > > > I would like to announce that on 8 September our second child, a son > > named Phillip, was born. He was a bit premature and had to spend 2.5 > > weeks in the hospital, but he is home now and both baby and mother are > > doing fine. > > Whew -- glad to hear both are okay. Congratulations! > > > I see the following as the highest priorities for the near future of > > cvs2svn: > > > > 1. Fix problems in the test suite caused by upstream changes in svntest. > > I will discuss this problem in a separate email. > > > > 2. Make a release 2.2 containing the improvements that have been > > accumulating on trunk. See [1] for a fairly up-to-date summary of > > what's there. > > > > 3. Give Johan Herland as much encouragement and support as possible to > > get involved with the cvs2svn project :-). Johan has worked on his own > > company-internal "cvs2git" converter called ConVerSion2Git that has a > > lot in common with cvs2svn, and he has expressed interest in > > collaborating with us (and has already submitted a few patches, thanks!) > > Johan, are you on this list? Did you hear that? :-) > > > 4. Try to encourage people from the git, hg, and bzr communities to > > test, use, and contribute improvements to cvs2svn so that it becomes > > more useful as a tool to convert to those DVCSs. All three of them have > > "fast-import" extensions that in principle consume data formatted in the > > "git-fast-import" format [2], but the extensions are in various levels > > of maturity and all have quirks that might have to be accommodated by > > cvs2svn. And the documentation and user interface for converting to > > non-SVN VCSs is still pretty primitive. Finally, a certain amount of > > evangelizing would help. People have to understand that most other > > tools are in fact broken! > > A marketing suggestion: > > It's usually counterproductive to say directly that all the other tools > are broken (even if it's true). Instead, stress the incredible amount > of field-testing cvs2svn has had, its amazing "zoo" of pathological CVS > repositories -- the finest such collection in the world, no doubt -- etc. > > Basically, make all the claims the other tools can't make, and then link > to those other tools, so people can do the comparison themselves. > > > 5. Improve our branding as a universal "cvs2xxx" converter that can > > convert to multiple destination systems. I haven't heard any really > > good suggestions for a new project name; other suggestions are welcome. > > So far I have been using "cvs2svn", "cvs2git", and "cvs2hg" to refer to > > three of its incarnations, and one approach might be to claim the entire > > "cvs2*" namespace. But for project branding, it would probably be > > better to come up with a new overarching project name to help people > > understand that a single tool is being talked about. > > I completely agree. How about "cvs2new"? > Names that seem good: cvs2x, or cvsconv (or even cvsconvert if you like verbosity). cvs2new does not seem seam good to me: at some point these VCS's will be old, and 'cvs2new' will seem incorrect. > > > I also have no idea what has to be done at tigris.org to get the > > project renamed; perhaps somebody could look into it? > > I think someone at feedback_at_tigris.org could help. Most of this can be > done manually: replay all the commits into the new repository, port over > the old issues by reference (i.e., create a new issue for each old > issue, then put bidirectional cross-links in the new and old issues). > For the mailing lists, just link to the old archives and explain that > they're old. The tigris admins can help with porting over the > subscriber lists. > > (It's important that all the old URLs stay valid anyway, so it's not > like the old project is going away.) > > I'm sorry I can't help much more than this. My excuse isn't as good as > yours, but I do have one :-). > > -Karl > > --------------------------------------------------------------------- > To unsubscribe, e-mail: dev-unsubscribe_at_cvs2svn.tigris.org > For additional commands, e-mail: dev-help_at_cvs2svn.tigris.org > > Received on 2008-10-04 03:42:10 CEST This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2008-10/0223.shtml
CC-MAIN-2021-49
refinedweb
739
74.29
sargrigory: > I: I have tweaked this program a few ways for you. The big mistake (and why it runs out of space) is that you take ByteString.Lazy.length to compute the block size. This forces the entire file into memory -- so no benefits of lazy IO. As a separate matter, calling 'appendFile . encode' incrementally for each element will be very slow. Much faster to encode an entire list in one go. Finally, using System.Random.Mersenne is significantly faster at Double generation that System.Random. With these changes (below), your program runs in constant space (both writing out and reading in the 0.5Gb file), and is much faster: {-# LANGUAGE BangPatterns #-} import Data.Binary.Put import Data.Binary import System.IO import Data.Int import qualified Data.ByteString.Lazy as BL import System.Random.Mersenne path = "Results.data" n = 20*1024*1024 :: Int -- getBlockSize :: BL.ByteString -> Int64 -- getBlockSize bs = round $ (fromIntegral $ BL.length bs) / (fromIntegral n) -- -- ^^^^^ why do you take the length!? -- -- there's no point doing lazy IO then. -- Custom serialization (no length prefix) fillFile n = do g <- newMTGen (Just 42) rs <- randoms g :: IO [Double] BL.writeFile path $ runPut $ mapM_ put (take n rs) -- fillFile :: MTGen -> Int -> IO () -- fillFile _ 0 = return () -- fillFile g i = do -- x <- random g :: IO Double -- encodeFileAp path x -- fillFile g (i-1) processFile :: BL.ByteString -> Int64 -> Int -> Double -> Double processFile !bs !blockSize 0 !sum = sum processFile bs blockSize i sum = processFile y blockSize (i-1) (sum + decode x) where (x,y) = BL.splitAt blockSize bs main = do fillFile n -- compute the size without loading the file into memory h <- openFile path ReadMode sz <- hFileSize h hClose h results <- BL.readFile path let blockSize = round $ fromIntegral sz / fromIntegral n print $ processFile results blockSize n 0 ------------------------------------------------------------------------ Running this : $ ./A +RTS -sstderr 1.0483476019172292e7 226,256,100,448 bytes allocated in the heap 220,413,096 bytes copied during GC 65,416 bytes maximum residency (1186 sample(s)) 136,376 bytes maximum slop 2 MB total memory in use (0 MB lost due to fragmentation) ^^^^^^^^^^^^^^^^^ It now runs in constant space. Generation 0: 428701 collections, 0 parallel, 3.17s, 3.49s elapsed Generation 1: 1186 collections, 0 parallel, 0.13s, 0.16s elapsed INIT time 0.00s ( 0.00s elapsed) MUT time 118.26s (129.19s elapsed) GC time 3.30s ( 3.64s elapsed) EXIT time 0.00s ( 0.00s elapsed) Total time 121.57s (132.83s elapsed) %GC time 2.7% (2.7% elapsed) ^^^^^^^^^^^^^^^^ Does very little GC. Alloc rate 1,913,172,101 bytes per MUT second Productivity 97.3% of total user, 89.0% of total elapsed -- Don
http://www.haskell.org/pipermail/haskell-cafe/2009-September/066332.html
CC-MAIN-2013-48
refinedweb
437
70.6
Create an ASP.NET Web Forms app with SMS Two-Factor Authentication (C#) by Erik Reitan Download ASP.NET Web Forms App with Email and SMS Two-Factor Authentication This tutorial shows you how to build an ASP.NET Web Forms app with Two-Factor Authentication. This tutorial was designed to complement the tutorial titled Create a secure ASP.NET Web Forms app with user registration, email confirmation and password reset. In addition, this tutorial was based on Rick Anderson's MVC tutorial. Introduction. Note Important: You must install Visual Studio 2013 Update 3 or higher to complete this tutorial. - 'Hook Up SendGrid' section of the tutorial titled Create a Secure ASP.NET Web Forms App with user registration, email confirmation and password reset. <> Warning Security - SmsServiceclass in the App_Start\Identity); } } Add the following usingstatements to the beginning of the IdentityConfig.cs file: using Twilio; using System.Net; using System.Configuration; using System.Diagnostics; Update the Account/Manage.aspx file by removing the lines highlighted in yellow: <%@> In the); } } } In the codebehind of Account/TwoFactorAuthenticationSignIn.aspx.cs, update the. Enable Two-Factor Authentication for a Registered User At this point, you have enabled two-factor authentication for your app. For a user to use two-factor authentication, they can simply change their settings using the UI. - As a user of your app you can enable two-factor authentication for your specific account by clicking on the user ID (email alias) in the navigation bar to display the Manage Account page.Then, click on the Enable link to enable two-factor authentication for the account. - Log off, then log back in. If you've enabled email, you can select either SMS or email for two-factor authentication. If you haven't enabled email, see the tutorial titled Create a Secure ASP.NET Web Forms App with User Registration, Email Confirmation and Password Reset. - The Two-Factor Authentication page is displayed where you can enter the code (from SMS or email). Clicking on the Remember this browser check box will exempt you from needing to use two-factor authentication to log in when using the browser and device where you checked the box. As long as malicious users can't gain access to your device, enabling two-factor authentication and clicking on the Remember this browser will provide you with convenient one step password access, while still retaining strong two-factor authentication protection for all access from non-trusted devices. You can do this on any private device you regularly use. Additional Resources - Two-factor authentication using SMS and email with ASP.NET Identity - Links to ASP.NET Identity Recommended Resources - Deploy a Secure ASP.NET Web Forms App with Membership, OAuth, and SQL Database to an Azure Website - ASP.NET Web Forms tutorial series - Add an OAuth 2.0 Provider - ASP.NET Web Forms tutorial series - Enable SSL for the Project - Account Confirmation and Password Recovery with ASP.NET Identity - Creating the app in Facebook and connecting the app to the project
https://docs.microsoft.com/en-us/aspnet/web-forms/overview/security/create-an-aspnet-web-forms-app-with-sms-two-factor-authentication
CC-MAIN-2021-39
refinedweb
501
50.43
[ ] Sean Busbey edited comment on HBASE-18626 at 10/31/17 7:30 PM: --------------------------------------------------------------- I think this warrants something in the [Upgrade Paths|] section of the ref guide, if only to call out the release note to the subset of folks going to 1.4 that are impacted. was (Author: busbey): I think this warrants something in the [Upgrade Paths|] section, if only to call out the release note to the subset of folks going to 1.4 that are impacted. > Handle the incompatible change about the replication TableCFs' config > --------------------------------------------------------------------- > > Key: HBASE-18626 > URL: > Project: HBase > Issue Type: Bug > Reporter: Guanghao Zhang > Fix For: 1.4.0, 2.0.0-beta-2 > > > About compatibility, there is one incompatible change about the replication TableCFs' config. The old config is a string and it concatenate the list of tables and column families in format "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for table-cf to replication peer mapping. When parse the config, it use ":" to split the string. If table name includes namespace, it will be wrong (See HBASE-11386). It is a problem since we support namespace (0.98). So HBASE-11393 (and HBASE-16653) changed it to a PB object. When rolling update cluster, you need rolling master first. And the master will try to translate the string config to a PB object. But there are two problems. > 1. Permission problem. The replication client can write the zookeeper directly. So the znode may have different owner. And master may don't have the write permission for the znode. It maybe failed to translate old table-cfs string to new PB Object. See HBASE-16938 > 2. We usually keep compatibility between old client and new server. But the old replication client may write a string config to znode directly. Then the new server can't parse them. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
http://mail-archives.apache.org/mod_mbox/hbase-issues/201710.mbox/%3CJIRA.13095553.1503034035000.114593.1509478260812@Atlassian.JIRA%3E
CC-MAIN-2017-47
refinedweb
318
75.71
Open Source Your Knowledge, Become a Contributor Technology knowledge has to be shared and made accessible for free. Join the movement. In this tutorial we will have a look at something more basic on the Spring spectrum, but as most things can sometimes be forgotten and is something that personally I which does not require the use of any other beans (dependencies) and therefore does not require the use of dependency injection which we will look at later. @Component public class MyBeanImpl implements MyBean { @Override public void someMethod() { System.out.println(getClass() + "(); } }(); } } it's and); } } who configuration file or pass them into the method directly ( @Value annotation. your used to. The code for this post can be found on my GitHub and for more similar content (written by me of course!) have a look at my own blog at
https://tech.io/playgrounds/2096/playing-around-with-spring-bean-configuration
CC-MAIN-2022-05
refinedweb
139
62.78
Contents Yes. The pdb module is a simple but adequate console-mode debugger for Python. It is part of the standard Python library, and is documented in the Library Reference Manual. You can also write your own debugger by using the code for pdb as an example. The IDLE interactive development environment, which is part of the standard Python distribution (normally available as Tools/scripts/idle), includes a graphical debugger. There is documentation for the IDLE debugger at. PythonWin is a Python IDE that includes a GUI debugger based on pdb. The Pythonwin debugger colors breakpoints and has quite a few cool features such as debugging non-Pythonwin programs. Pythonwin is available as part of the Python for Windows Extensions project and. There are a number of commercial Python IDEs that include graphical debuggers. They include: Yes. PyChecker is a static analysis tool that finds bugs in Python source code and warns about code complexity and style. You can get PyChecker from. Pylint is another tool that checks if a module satisfies a coding standard, and also makes it possible to write plug-ins to add a custom feature. In addition to the bug checking that PyChecker performs, Pylint offers some additional features such as checking line length, whether variable names are well-formed according to your coding standard, whether declared interfaces are fully implemented, and more. provides a full list of Pylint’s features.). It then turns the bytecode for modules written in Python into. Obviously, freeze requires a C compiler. There are several other utilities which don’t.-in function or to a component of an imported module. This clutter would defeat the usefulness of the global declaration for identifying side. Import modules at the top of a file. Doing so makes it clear what other modules your code requires and avoids questions of whether the module name is in scope. Using one import per line makes it easy to add and delete module imports, but using multiple imports per line uses less screen space. It’s good practice if you import modules in the following order:. It is sometimes necessary to move imports to a function or class to avoid problems with circular imports. Gordon McMillan says: Circular imports are fine where both modules use the “import <module>” form of import. They fail when the 2nd module wants to grab a name out of the first (“from module import name”) and the import is at the top level. That’s because names in the 1st are not yet available, because the first module is busy importing the 2nd. In this case, if the second module is only used in one function, then the import can easily be moved into that function. By the time the import is called, the first module will have finished initializing, and the second module can do its import. It may also be necessary to move imports out of the top level of code if some of the modules are platform-specific. In that case, it may not even be possible to import all of the modules at the top of the file. In this case, importing the correct modules in the corresponding platform-specific code is a good option. Only move imports into a local scope, such as inside a function definition, if it’s necessary to solve a problem such as avoiding a circular import or are trying to reduce the initialization time of a module. This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function. Note that loading a module the first time may be expensive because of the one time initialization of the module, but loading a module multiple times is virtually free, costing only a couple of dictionary lookups. Even if the module name has gone out of scope, the module is probably available in sys.modules.) Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there’s no alias between an argument name in the caller and callee, and so no call-by-reference per se. You can achieve the desired effect in a number of ways. By returning a tuple of the results: def func2(a, b): a = 'new-value' # a and b are local names b = b + 1 # assigned to new objects return a, b # return new values x, y = 'old-value', 99 x, y = func2(x, y) print(x, y) # output: new-value 100 This is almost always the clearest solution. By using global variables. This isn’t thread-safe, and is not recommended. By passing a mutable (changeable in-place) object: def func1(a): a[0] = 'new-value' # 'a' references a mutable list a[1] = a[1] + 1 # changes a shared object args = ['old-value', 99] func1(args) print(args[0], args[1]) # output: new-value 100 By passing in a dictionary that gets mutated: def func3(args): args['a'] = 'new-value' # args is a mutable dictionary args['b'] = args['b'] + 1 # change it in-place args = {'a':' old-value', 'b': 99} func3(args) print(args['a'], args['b'])) There’s almost never a good reason to get this complicated. Your best choice is to return a tuple containing the multiple results... Generally speaking, it can’t, because objects don’t really have names. Essentially, assignment always binds a name to a value; The same is true of def and class statements, but in that case the value is a callable. Consider the following code: class A: pass B = A a = B() b = a print(b) <__main__.A >>> a 8 Hexadecimal is just as easy. Simply precede the hexadecimal number with a zero, and then a lower or uppercase “x”. Hexadecimal digits can be specified in lower or uppercase. For example, in the Python interpreter: >>> a = 0xa5 >>> a 165 >>> b = 0XB2 >>> b 178 It’s primarily driven by the desire that i % j have the same sign as j. If you want that, and also want: i == (i // j) * j + (i % j) then integer division has to return the floor. C also requires that identity to hold, and then compilers that truncate i // j need to make i % j have the same sign as i. There are few real use cases for i % j when j is negative. When j is positive, there are many, and in virtually all of them it’s more useful for i % j to be >= 0. If the clock says 10 now, what did it say 200 hours ago? -190 % 12 == 2 is useful; -190 % 12 == -10 is a bug waiting to bite.' There are various techniques. The best is to use a dictionary that maps strings to functions. The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct: def a(): pass def b(): pass dispatch = {'go': a, 'stop': b} # Note lack of parens for funcs dispatch[get_input()]() # Note trailing parens to call function Use the built-in function getattr(): import foo getattr(foo, 'bar')() Note that getattr() works on any object, including classes, class instances, modules, and so on. This is used in several places in the standard library, like this: class Foo: def do_foo(self): ... def do_bar(self): ... f = getattr(foo_instance, 'do_' + opname) f() Use locals() or eval() to resolve the function name: def myFunc(): print("hello") fname = "myFunc" f = locals()[fname] f() f = eval(fname) f() Note: Using eval() is slow and dangerous. If you don’t have absolute control over the contents of the string, someone could pass a string that resulted in an arbitrary function being executed. You can use S.rstrip("\r\n") to remove all occurrences. Not as such. For simple input parsing, the easiest approach is usually to split the line into whitespace-delimited words using the split() method of string objects and then convert decimal strings to numeric values using int() or float(). split() supports an optional “sep” parameter which is useful if the line uses something other than whitespace as a separator. For more complicated input parsing, regular expressions tuple(seq) converts any sequence (actually, any iterable) into a tuple with the same items in the same order. For example, tuple([1, 2, 3]) yields (1, 2, 3) and tuple('abc') yields ('a', 'b', 'c'). If the argument is a tuple, it does not make a copy but returns the same object, so it is cheap to call tuple() when you aren’t sure that an object is already a tuple. The type constructor list(seq) converts any sequence or iterable into a list with the same items in the same order. For example, list((1, 2, 3)) yields [1, 2, 3] and list('abc') yields ['a', 'b', 'c']. If the argument is a list, it makes a copy just like seq[:] would.. Use the reversed() built-in function, which is new in Python 2.4: for x in reversed(sequence): ... # do something with x... This won’t touch your original sequence, but build a new copy with reversed order to iterate over. With Python 2.3, you can use an extended slice syntax: for x in sequence[::-1]: ... # do something with x..... You probably tried to make a multidimensional array like this: A = [[None] * 2] * 3 This looks correct if you print it: >>> A [[None, None], [None, None], [None, None]] But when you assign a value, it shows up in multiple places: >>> A[0][0] = 5 >>> A [[5, None], [5, None], [5, None]] The reason is that replicating a list with * doesn’t create copies, it only creates references to the existing objects. The *3 creates a list containing 3 references to the same list of length two. Changes to one row will show in all rows, which is almost certainly not what you want. The suggested approach is to create a list of the desired length first and then fill in each element with a newly created list: A = [None] * 3 for i in range(3): A[i] = [None] * 2 This generates a list containing 3 different lists of length two. You can also use a list comprehension: w, h = 2, 3 A = [[None] * w for i in range(h)] Or, you can use an extension that provides a matrix datatype; Numeric Python is the best known. Use a list comprehension: result = [obj.method() for obj in mylist],, Isorted may also be computed by def intfield(s): return int(s[10:15]) def Icmp(s1, s2): return cmp(intfield(s1), intfield(s2)) Isorted = L[:] Isorted.sort(Icmp) but since this method calls intfield() many times for each element of L, it is slower than the Schwartzian Transform.')] >>> result = [x[1] for x in pairs] >>> result ['else', 'sort', 'to', 'something'] An alternative for the last step is: >>> result = [] >>> for p in pairs: result.append(p[1]) If you find this more legible, you might prefer to use this instead of the final list comprehension. However, it is almost twice as slow for long lists. Why? First, the append() operation has to reallocate memory, and while it uses some tricks to avoid doing that each time, it still has to do it occasionally, and that costs quite a bit. Second, the expression “result.append” requires an extra attribute lookup, and third, there’s a speed reduction from having to make all those function calls. A class is the particular object type created by executing a class statement. Class objects are used as templates to create instance objects, which embody both the data (attributes) and code (methods) specific to a datatype.. A method is a function on some object x that you normally call as x.name(arguments...). Methods are defined as functions inside the class definition: class C: def meth (self, arg): return arg * 2 + self.attribute Self is merely a conventional name for the first argument of a method. A method defined as meth(self, a, b, c) should be called as x.meth(a, b, c) for some instance x of the class in which the definition occurs; the called method will think it is called as meth(x, a, b, c). See also Why must ‘self’ be used explicitly in method definitions and calls?. Use the built-in function isinstance(obj, cls). You can check if an object is an instance of any of a number of classes by providing a tuple instead of a single class, e.g. isinstance(obj, (class1, class2, ...)), and can also check whether an object is one of Python’s built-in types, e.g. isinstance(obj, str) or isinstance(obj, (int, float, complex)).): # ... code to search a mailbox elif isinstance(obj, Document): # ... code to search a document elif ... A better approach is to define a search() method on all the classes and just call it: class Mailbox: def search(self): # ... code to search a mailbox class Document: def search(self): # ... code to search a document obj.search() Delegation is an object oriented technique (also called a design pattern). Let’s say you have an object x and want to change the behaviour of just one of its methods. You can create a new class that provides a new implementation of the method you’re interested in changing and delegates all other methods to the corresponding method of x. Python programmers can easily implement delegation. For example, the following class implements a) Here the UpperOut class redefines the write() method to convert the argument string to uppercase before calling the underlying self.__outfile.write() method. All other methods are delegated to the underlying self.__outfile object. The delegation is accomplished via the __getattr__ method; consult the language reference ... Most __setattr__() implementations must modify self.__dict__ to store local state for self without causing an infinite recursion.. For static data, simply define a class attribute. To assign a new value to the attribute, you have to explicitly use the class name in the assignment: class C: count = 0 # number of times C.__init__ called def __init__(self): C.count = C.count + 1 def getcount(self): return C.count # or return self.count c.count also refers to C.count for any c such that isinstance(c, C) holds, unless overridden by c itself or by some class on the base-class search path from c.__class__ back to C. Caution: within a method of C, an assignment like self.count = 42 creates a new and unrelated instance named “count” in self‘s own dict. Rebinding of a class-static data name must always specify the class whether inside a method or not: C.count = 314 Static methods are possible: class C: @staticmethod def static(arg1, arg2, arg3): # No 'self' parameter! ... However, a far more straightforward way to get the effect of a static method is via a simple module-level function: def getcount(): return C.count If your code is structured so as to define one class (or tightly related class hierarchy) per module, this supplies the desired encapsulation. This answer actually applies to all methods, but the question usually comes up first in the context of constructors. In C++ you’d write class C { C() { cout << "No arguments\n"; } C(int i) { cout << "Argument is " << i << "\n"; } } In Python you have to write a single constructor that catches all cases using default arguments. For example: class C: def __init__(self, i=None): if i is None: print("No arguments") else: print("Argument is", i) This is not entirely equivalent, but close enough in practice. You could also try a variable-length argument list, e.g. def __init__(self, *args): ... The same approach works for all method definitions. – that is, to create a .pyc file for a module that is not imported – you can, using the py_compile and compileall modules. The py_compile module can manually compile any module. One way is to use the compile() function in that module interactively: >>> import py_compile >>> py_compile.compile('abc.py') This will write the .pyc to the same location as abc.py (or you can override that with the optional parameter cfile). You can also automatically compile all files in a directory or directories using the compileall module. You can do it from the shell prompt by running compileall.py and providing the path of a directory containing Python files to compile: python -m compileall . A module can find out its own module name by looking at the predefined global variable __name__. If this has the value '__main__', the program is running as a script. Many modules that are usually used by importing them also provide a command-line interface or a self-test, and only execute this code after checking __name__: def main(): print('Running test...') ... if __name__ == '__main__': main() Suppose you have the following modules: foo.py: from bar import bar_var foo_var = 1 bar.py: from foo import foo_var bar_var = 2 The problem is that the interpreter will perform the following steps: The last step fails, because Python isn’t done with interpreting foo yet and the global symbol dictionary for foo is still empty. The same thing happens when you use import foo, and then try to access foo.foo_var in global code. There are (at least) three possible workarounds for this problem. Guido van Rossum recommends avoiding all uses of from <module> import ..., and placing all code inside functions. Initializations of global variables and class variables should use constants or built-in functions only. This means everything from an imported module is referenced as <module>.<name>. Jim Roskind suggests performing steps in the following order in each module: van Rossum doesn’t like this approach much because the imports appear in a strange place, but it does work. Matthias Urlichs recommends restructuring your code so that the recursive import is not necessary in the first place. These solutions are not mutually exclusive. Try: __import__('x.y.z').y.z For more realistic situations, you may have to do something like m = __import__(s) for i in s.split(".")[1:]: m = getattr(m, i) See importlib for a convenience function called import_module(). For reasons of efficiency as well as consistency, Python, do this: import'
http://docs.python.org/release/3.2.3/faq/programming.html
CC-MAIN-2013-48
refinedweb
3,049
63.19
Type definitions, construct declarations, and imports can occur outside of the interface body. All definitions from the main IDL file will appear in the generated header file, and all the procedures from all the interfaces in the main IDL file will generate stub routines. This enables applications that support multiple interfaces to merge IDL files into a single, combined IDL file. As a result, it requires less time to compile the files and also allows MIDL to reduce redundancies in the generated stubs. This can significantly improve object interfaces through the ability to share common code for base interfaces and derived interfaces. For non- object interfaces, the procedure names must be unique across all the interfaces. For object interfaces, the procedure names need to be unique only within an interface. Note that multiple interfaces are not permitted when you use the /osf switch. The syntax for declarative constructs in the IDL file is similar to that for C. MIDL supports all Microsoft C/C++ declarative constructs except the following: x (y) short x (y) The import keyword specifies the names of one or more IDL files to import. The import directive is similar to the C include directive, except that only data types are assimilated into the importing IDL file. The constant declaration specifies Boolean, integer, character, wide-character, string, and void * constants. For more information, see const. A general declaration is similar to the C typedef statement with the addition of IDL type attributes. Except in /osf mode, the MIDL compiler also allows an implicit declaration in the form of a variable definition. The function declarator is a special case of the general declaration. You can use IDL attributes to specify the behavior of the function return type and each of the parameters. [ uuid(12345678-1234-1234-1234-123456789ABC), version(3.1), pointer_default(unique) ] interface IdlGrammarExample { import "windows.idl", "other.idl"; const wchar_t * NAME = L"Example Program"; typedef char * PCHAR; HRESULT DictCheckSpelling( [in, string] PCHAR word, // word to look up [out] short * isPresent // 0 if not present ); } Send comments about this topic to Microsoft Build date: 4/9/2009
http://msdn.microsoft.com/en-us/library/aa367289(VS.85).aspx
crawl-002
refinedweb
350
53.61
This page describes how to use supported libraries in the Google App Engine Python 2.7 runtime environment. By default, this runtime environment includes the Python standard library, the App Engine libraries, and a few bundled third- party packages. For a complete list of runtime-provided libraries, see the built-in third-party libraries reference. Adding libraries You can add a third-party library to your app in one of two ways: requesting the library or installing the library. Requesting a library You can request a library by using the libraries: directive in app.yaml. libraries: - name: PIL version: "1.1.7" - name: webob version: "1.1.1" Note that: - The library must be one of the supported runtime-provided third-party libraries. - When deployed, App Engine will provide the requested libraries to the runtime environment. - Some libraries must be installed locally. Installing a library You can install the library into a folder in your project's source directory. The library must be implemented as pure Python code with no C extensions. The code is uploaded to App Engine with your application code, and counts towards file quotas. The easiest way to manage this is with a ./lib directory: Use pip to install the library and the vendor module to enable importing packages from the third-party library directory. Create a directory named libin your application root directory: mkdir lib To tell your app how to find libraries in this directory, create or modify a file named appengine_config.pyin the root of your project, then add these lines: from google.appengine.ext import vendor # Add any libraries installed in the "lib" folder. vendor.add('lib') Use pipwith the -t libflag to install libraries in this directory: pip install -t lib gcloud The appengine_config.py file above assumes that the current working directory is where the lib folder is located. In some cases, such as unit tests, the current working directory can be different. To avoid errors, you can explicity pass in the full path to the lib folder using: vendor.add(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'lib')) Using pip requirements files version: Flask==0.10 Markdown==2.5.2 google-api-python-client To install the libraries from a requirements file, use the -r flag in addition to the -t lib flag: pip install -t lib -r requirements.txt Using libraries with the local development server While several of the runtime-provided libraries are available to your local development environment through the App Engine SDK, the following libraries are platform-dependent and must be installed locally before you can use them with the OS X, the Xcode Command Line Tools are required to build some packages. Using Django or matplotlib This section provides information you should know when using the Django or matplotlib libraries. Using Django Django is a full-featured web application framework for Python. It provides a full stack of interchangableoutput.
https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27?hl=en
CC-MAIN-2016-36
refinedweb
488
55.95
It is possible to use the serial port to receive commands directly in the Arduino code. We can for example control the GPIO from the serial monitor of a code editor such as the Arduino IDE or PlatformIO. It is also possible to make several development boards or micro-controllers communicate with each other (STM32, ESP32, ESP8266) via the serial port. Open the serial port in the Arduino code Before being able to receive messages); How to receive commands from the serial port? Using the serial port as a command input is not much more complicated than the serial output. We already know the Serial.print() command (and associated functions) which allows you to send characters to the serial port as well as the other derived commands presented in detail in this article. To receive characters on the serial port, we have several commands (official documentation) which allows to listen to everything that arrives on the serial port and to trigger a treatment. allows to know the number of bytes (characters) available for writing to the serial buffer without blocking the write operation. The methods to read in the buffer memory of the serial port read whatever comes in on the serial port. 4 other more specialized functions reads data from the serial buffer until the search string is found. The function returns true if the string is found. reads data from the serial buffer until a target string of a given length or termination string is found. The function returns true if the target string is found. is used to modify the Timeout, the waiting time before the execution of blocking Serial functions are aborted. The Timeout is 1 second by default.is used to modify the Timeout, the waiting time before the execution of blocking Serial functions are aborted. The Timeout is 1 second by default. The wait time is in milliseconds. 1 second = 1000ms The following functions are blocking: find(), findUntil(), parseInt(), parseFloat(), readBytes(), readBytesUntil(), readString(), readStringUntil(). Upload the project to drive an LED by sending commands to the serial port Create a new sketch on the Arduino IDE or a new PlatformIO project and paste the following code. The following orders are accepted - to turn on the LED - to turn off the LED - to know the state of the LED (ON or OFF) which is stored in the variable led_status - Any other command returns the error Invalid command Before uploading the code, modify the constant LED_PIN which indicates the pin on which the LED is connected #include <Arduino.h> #define LED_PIN 32 bool led_status = false; String command; void setup() { Serial.begin(115200); pinMode(LED_PIN, OUTPUT); } void send_led_status(){ if ( led_status ) { Serial.println("LED is ON"); } else { Serial.println("LED is OFF"); } } void loop() { if(Serial.available()){ command = Serial.readStringUntil('\n'); Serial.printf("Command received %s \n", command); if(command.equals("led=on")){ digitalWrite(LED_PIN, HIGH); led_status = 1; } else if(command.equals("led=off")){ digitalWrite(LED_PIN, LOW); led_status = 0; } else if(command.equals("ledstatus")){ send_led_status(); } else{ Serial.println("Invalid command"); } } } Project circuit For this project, we will simply connect an LED to a digital output of an Arduino board. The code is compatible with any ESP32 or ESP8266 board. Just change the pin that the LED is connected to. Here the LED is plugged into output 32 of an ESP32 development board. Explanation of the code The Serial.available() command changes to true whenever the serial port buffer contains new characters. By placing the function in the main thead of the program, the loop(), we can immediately receive incoming commands if( Serial.available() ){ ... processing of incoming commands on the serial port } As we saw in the introduction, there are several functions available to read from the serial port buffer. The serial monitor finishes sending it with the control character corresponding to the end of the line . You can use the Serial.readUntil(“\n”) method which allows you to read up to the string passed in parameter. We store the string in the command variable of type String. command = Serial.readStringUntil('\n'); All that remains is to use the string processing functions to identify the command and the parameter. All the String processing functions are detailed in this article Here, we will simply test if we have just received one of the three commands led=on, led=off , ledstatus . The state of the LED is stored in the variable led_status if(command.equals("led=on")){ digitalWrite(LED_PIN, HIGH); led_status = 1; } else if(command.equals("led=off")){ digitalWrite(LED_PIN, LOW); led_status = 0; } else if(command.equals("ledstatus")){ send_led_status(); } else{ Serial.println("Invalid command"); } The send_led_status function prints the status of the LED as a string to the serial port using the println() function. To learn more about all the methods to write to serial port, you can continue by reading this article void send_led_status(){ if ( led_status ) { Serial.println("LED is ON"); } else { Serial.println("LED is OFF"); } } Test the project from the Arduino IDE serial monitor Open the serial monitor from the Tools -> Serial monitor menu or using the icon An input field is located above the serial monitor window. Press the Send button or the Enter key on your keyboard to send the string to the Arduino. Now you can test the operation of the three commands led=on , led=off , ledstatus . Video demonstration Test the project from the PlatformIO serial monitor By default, the PlatformIO serial monitor does not allow sending commands from the Terminal. To be able to send commands (character strings) on the serial port, filters must be added using the monitor_filters option detailed here in the platformio.ini file. Here is a sample configuration that you can use in your projects. The command is sent by pressing the enter key. The exchange log is written to a file at the root of the project. monitor_filters = debug, send_on_enter, log2file Save the configuration file then restart the serial monitor by clicking on the trash can. Place the cursor (not visible) by clicking in the Terminal then enter the command. Send by pressing the enter key on the keyboard. [TX:'l'] [TX:'e'] [TX:'d'] [TX:'='] [TX:'o'] [TX:'n'] [TX:'\r\n'] [RX:'C'] C [RX:'ommand received led=on \n'] ommand received led=on [TX:'l'] [TX:'e'] [TX:'d'] [TX:'='] [TX:'o'] [TX:'f'] [TX:'f'] [TX:'\r\n'] [RX:'C'] C [RX:'ommand received led=off \n'] ommand received led=off [TX:'l'] [TX:'e'] [TX:'d'] [TX:'s'] [TX:'t'] [TX:'a'] [TX:'t'] [TX:'u'] [TX:'s'] [TX:'\r\n'] [RX:'C'] C [RX:'ommand received ledstatus \nLED '] ommand received ledstatus LED [RX:'i'] i [RX:'s OFF\r\n'] s OFF Use CoolTerm for Windows, macOS or Linux A final practical alternative is the CoolTerm open source software developed by Roger Meier which you can download here . Open the Connection menu then Settings . Select the COM port of the Arduino board, ESP32, ESP8266, STM32 as well as the speed , here 115200 bauds. Save the connection parameters and click on the Connect icon Open the Connection -> Send String… menu. Enter the desired command then click on Send to send Attention, the Enter key on the keyboard returns the cursor to the line, which will pose a problem with the Arduino code. Updates 23/10/2020 Publication of the article - - Get started with the I2C bus on Arduino ESP8266 ESP32. Wire.h library
https://diyprojects.io/getting-started-arduino-receive-commands-from-the-serial-port-esp32-esp8266-compatible/?amp
CC-MAIN-2022-40
refinedweb
1,230
54.93
There are many times when you may need to set a Pandas column value based on the condition of another column. In this post, you’ll learn all the different ways in which you can create Pandas conditional columns. Video Tutorial If you prefer to follow along with a video tutorial, check out my video below: Let’s begin by loading a sample Pandas dataframe that we can use throughout this tutorial. We’ll begin by import pandas and loading a dataframe using the .from_dict() method: import pandas as pd df = pd.DataFrame.from_dict( { 'Name': ['Jane', 'Melissa', 'John', 'Matt'], 'Age': [23, 45, 35, 64], 'Birth City': ['London', 'Paris', 'Toronto', 'Atlanta'], 'Gender': ['F', 'F', 'M', 'M'] } ) print(df) This returns the following dataframe: Name Age Birth City Gender 0 Jane 23 London F 1 Melissa 45 Paris F 2 John 35 Toronto M 3 Matt 64 Atlanta M Using Pandas loc to Set Pandas Conditional Column Pandas loc is incredibly powerful! If you need a refresher on loc (or iloc), check out my tutorial here. Pandas’ loc creates a boolean mask, based on a condition. Sometimes, that condition can just be selecting rows and columns, but it can also be used to filter dataframes. These filtered dataframes can then have values applied to them. Let’s explore the syntax a little bit: df.loc[df[‘column’] condition, ‘new column name’] = ‘value if condition is met’ With the syntax above, we filter the dataframe using .loc and then assign a value to any row in the column (or columns) where the condition is met. Let’s try this out by assigning the string ‘Under 30’ to anyone with an age less than 30, and ‘Over 30’ to anyone 30 or older. df['Age Category'] = 'Over 30' df.loc[df['Age'] < 30, 'Age Category'] = 'Under 30' Let's take a look at what we did here: - We assigned the string 'Over 30' to every record in the dataframe. To learn more about this, check out my post here or creating new columns. - We then use .locto create a boolean mask on the Age column to filter down to rows where the age is less than 30. When this condition is met, the Age Category column is assigned the new value 'Under 30' But what happens when you have multiple conditions? You could, of course, use .loc multiple times, but this is difficult to read and fairly unpleasant to write. Let's see how we can accomplish this using numpy's .select() method. Using Numpy Select to Set Values using Multiple Conditions Similar to the method above to use .loc to create a conditional column in Pandas, we can use the numpy .select() method. Let's begin by importing numpy and we'll give it the conventional alias np : import numpy as np Now, say we wanted to apply a number of different age groups, as below: - <20 years old, - 20-39 years old, - 40-59 years old, - 60+ years old In order to do this, we'll create a list of conditions and corresponding values to fill: conditions = [ (df['Age'] < 20), (df['Age'] >= 20) & (df['Age'] < 40), (df['Age'] >= 40) & (df['Age'] < 59), (df['Age'] >= 60) ] values = ['<20 years old', '20-39 years old', '40-59 years old', '60+ years old'] df['Age Group'] = np.select(conditions, values) print(df) Running Let's break down what happens here: - We first define a list of conditions in which the criteria are specified. Recall that lists are ordered meaning that they should be in the order in which you would like the corresponding values to appear. - We then define a list of values to use, which corresponds to the values you'd like applied in your new column. Something to consider here is that this can be a bit counterintuitive to write. You can similarly define a function to apply different values. We'll cover this off in the section of using the Pandas .apply() method below. One of the key benefits is that using numpy as is very fast, especially when compared to using the .apply() method. Using Pandas Map to Set Values in Another Column The Pandas .map() method is very helpful when you're applying labels to another column. In order to use this method, you define a dictionary to apply to the column. For our sample dataframe, let's imagine that we have offices in America, Canada, and France. We want to map the cities to their corresponding countries and apply and "Other" value for any other city. city_dict = { 'Paris': 'France', 'Toronto': 'Canada', 'Atlanta': 'USA' } df['Country'] = df['Birth City'].map(city_dict) print(df) When we print this out, we get the following dataframe returned: Name Age Birth City Gender Country 0 Jane 23 London F NaN 1 Melissa 45 Paris F France 2 John 35 Toronto M Canada 3 Matt 64 Atlanta M USA What we can see here, is that there is a NaN value associated with any City that doesn't have a corresponding country. If we want to apply "Other" to any missing values, we can chain the .fillna() method: city_dict = { 'Paris': 'France', 'Toronto': 'Canada', 'Atlanta': 'USA' } df['Country'] = df['Birth City'].map(city_dict).fillna('Other') print(df) This returns the following dataframe: Name Age Birth City Gender Country 0 Jane 23 London F Other 1 Melissa 45 Paris F France 2 John 35 Toronto M Canada 3 Matt 64 Atlanta M USA Using Pandas Apply to Apply a function to a column Finally, you can apply built-in or custom functions to a dataframe using the Pandas .apply() method. Let's take a look at both applying built-in functions such as len() and even applying custom functions. Applying Python Built-in Functions to a Column We can easily apply a built-in function using the .apply() method. Let's see how we can use the len() function to count how long a string of a given column. df['Name Length'] = df['Name'].apply(len) print(df) This returns the following dataframe: Name Age Birth City Gender Name Length 0 Jane 23 London F 4 1 Melissa 45 Paris F 7 2 John 35 Toronto M 4 3 Matt 64 Atlanta M 4 Take note of a few things here: - We apply the .apply()method to a particular column, - We omit the parentheses "()" Using Third-Party Packages in Pandas Apply Similarly, you can use functions from using packages. Let's use numpy to apply the .sqrt() method to find the scare root of a person's age. import numpy as np df['Age Squareroot'] = df['Age'].apply(np.sqrt) print(df) This returns the following dataframe: Name Age Birth City Gender Age Squareroot 0 Jane 23 London F 4.795832 1 Melissa 45 Paris F 6.708204 2 John 35 Toronto M 5.916080 3 Matt 64 Atlanta M 8.000000 Using Custom Functions with Pandas Apply Something that makes the .apply() method extremely powerful is the ability to define and apply your own functions. Let's revisit how we could use an if-else statement to create age categories as in our earlier example: def age_groups(x): if x < 20: return '<20 years old' elif x < 40: return '20-39 years old' elif x < 60: return '40-59 years old' else: return '60+ years old' df['Age Group'] = df['Age'].apply(age_groups) print(df) Conclusion In this post, you learned a number of ways in which you can apply values to a dataframe column to create a Pandas conditional column, including using .loc, .np.select(), Pandas .map() and Pandas .apply(). Each of these methods has a different use case that we explored throughout this post. Learn more about Pandas methods covered here by checking out their official documentation:
https://datagy.io/pandas-conditional-column/
CC-MAIN-2022-27
refinedweb
1,293
69.31
User account creation filtered due to spam. Created attachment 27629 [details] test.f90 If "sqrt" is a generic type-bound procedure, not only something like a%sqrt() or a%sqrt(b) [for pass and nopass, respectively] should work but also a simple: sqrt(a) or sqrt(a, b) That is: The generic enter the normal generic namespace with the exception that use, only: type also imports the generic name for that type. See also: It is not obvious from the standard that this holds, but it is analog to ASSIGNMENT(=) and OPERATOR(...) which also act that way. [Which is supported in gfortran.] Additionally, the following statement (F2008,4.5.7.3 Type-bound procedure overriding) wouldn't make sense with a different interpretation of the standard: "If a generic binding specied in a type denition has the same generic-spec as an inherited binding, it extends the generic interface and shall satisfy the requirements specied in 12.4.3.4.5." (In reply to comment #0) > See also: Note: That link does not seem to work. (In reply to comment #0) > It is not obvious from the standard that this holds, but it is analog to > ASSIGNMENT(=) and OPERATOR(...) which also act that way. [Which is supported in > gfortran.] It is correct that gfortran supports this for ASSIGNMENTs and OPERATORs. However, there are problems, cf. PR 41951 comment 6 to 10. The two PRs might be fixable in one go. (In reply to comment #1) > (In reply to comment #0) > > See also: > Note: That link does not seem to work. Try: Slightly compactified test case: module type_mod implicit none type field real :: var(1:3) contains procedure :: scalar_equals_field generic :: assignment (=) => scalar_equals_field procedure, nopass :: field_sqrt generic :: sqrt => field_sqrt end type contains elemental pure subroutine scalar_equals_field (A, b) class(field), intent(out) :: A real, intent(in) :: b A%var(:) = b end subroutine elemental pure function field_sqrt (A) result (B) class(field), intent(in) :: A type(field) :: B B%var(:) = sqrt (A%var(:)) end function end module program test use type_mod, only : field implicit none type(field) :: a a = 4.0 print *, sqrt(a) end program (In reply to comment #3) > > > See also: > > Note: That link does not seem to work. > > Try: > > The correct google groups link would be: Btw, I'm not completely convinced yet that the code in comment #0 (and #4) is really legal. No one in the c.l.f. thread has brought up a quote from the standard which clearly shows that referencing a type-bound generic is legal without part-ref syntax. For me, to most convincing reference up to now is this quote from F08:12.4.3.4.5 (though it still sounds a bit 'cloudy' to me): NOTE 12.10 In most scoping units, the possible sources of procedures with a particular generic identifier are the accessible interface blocks and the generic bindings other than names for the accessible objects in that scoping unit. (In reply to comment #5) > Btw, I'm not completely convinced yet that the code in comment #0 (and #4) is > really legal. In any case, here is a simple draft patch, which makes the code in comment 4 work (at least when the ONLY clause in the USE statement is removed): Index: gcc/fortran/decl.c =================================================================== --- gcc/fortran/decl.c (revision 188334) +++ gcc/fortran/decl.c (working copy) @@ -8374,12 +8374,20 @@ gfc_match_generic (void) { const bool is_op = (op_type == INTERFACE_USER_OP); gfc_symtree* st; + gfc_symbol *gensym; st = gfc_new_symtree (is_op ? &ns->tb_uop_root : &ns->tb_sym_root, name); gcc_assert (st); st->n.tb = tb; + /* Create non-typebound generic symbol. */ + if (gfc_get_symbol (name, NULL, &gensym)) + return MATCH_ERROR; + if (!gensym->attr.generic + && gfc_add_generic (&gensym->attr, gensym->name, NULL) == FAILURE) + return MATCH_ERROR; + break; } Index: gcc/fortran/resolve.c =================================================================== --- gcc/fortran/resolve.c (revision 188335) +++ gcc/fortran/resolve.c (working copy) @@ -11125,6 +11125,26 @@ specific_found: return FAILURE; } + /* Add target to (non-typebound) generic symbol. */ + if (!p->u.generic->is_operator) + { + gfc_symbol *gensym; + if (gfc_get_symbol (name, NULL, &gensym)) + return FAILURE; + if (gensym) + { + gfc_interface *head, *intr; + head = gensym->generic; + intr = gfc_get_interface (); + intr->sym = target->specific->u.specific->n.sym; + intr->where = gfc_current_locus; + intr->sym->declared_at = gfc_current_locus; + intr->next = head; + gensym->generic = intr; + gfc_commit_symbol (gensym); + } + } + /* Check those already resolved on this type directly. */ for (g = p->u.generic; g; g = g->next) if (g != target && g->specific One problem with the patch in comment #6 is that it produces double error messages for type-bound generics, e.g. on typebound_generic_{1,10,11}.'. More than three years ago Tobias Burnus wrote >'. Any reason to keep this PR opened? Note that the tests now fail with Error: INTENT(OUT) argument 'a' of pure procedure 'scalar_equals_field' at (1) may not be polymorphic
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53694
CC-MAIN-2016-40
refinedweb
777
55.54
I was trying to create a function that added up the range of a number, including the number. So when I input 5, I should receive 15. Why is this function not working? def sum_nums(number): start = 0 for index in range(0,len(number)): start += index return start print sum_nums(5) This could be done with a 1 liner but i'll help you out with a solution in the format that you're attempting. def sum_nums(number): total = 0 for i in range(number + 1): total += i print total Input 5 Output 15 Also, some clarification as per your error message. len is to be used for strings. An integer does not have a length. By simply passing it to the range function it will try all of the numbers (if you don't specify a starting number) from 1 up to that number, but not including it. That's why we do range(number + 1) to include the target number. Hope this helps :)
https://codedump.io/share/m0IHe7AwTSjP/1/adding-all-the-numbers-in-a-range-including-the-number
CC-MAIN-2017-47
refinedweb
166
72.76
C++ <cwchar> - fwide() Function The C++ <cwchar> fwide() function is used to determine the orientation of stream. If the orientation of the stream is not yet established, the function attempts to make stream oriented depending on the value of mode. - If mode > 0, it attempts to make stream wide-oriented. - If mode < 0, it attempts to make stream byte-oriented. - If mode == 0, it only queries the current orientation of the stream. If the orientation of the stream has already been decided (by executing output or by an earlier call to fwide()), the function do nothing. Syntax int fwide (FILE* stream, int mode); Parameters Return Value Returns value depending on the stream orientation after the call. - A integer value greater than 0, if the stream is wide-oriented. - A integer value less than 0, if the stream is byte-oriented. - A value equal to 0, if the stream has no orientation. Example: The example below shows the usage of fwide() function. #include <cwchar> #include <cstdio> void show_orientation(int n) { if(n == 0) printf("No stream orientation.\n"); else if (n < 0) printf("Stream is byte-oriented.\n"); else printf("Stream is wide-oriented.\n"); } int main (){ //open the file in write mode FILE *pFile = fopen("test.txt", "w"); if (pFile) { //A newly opened stream has no orientation. show_orientation(fwide(pFile, 0)); //Establish wide orientation. show_orientation(fwide(pFile, 1)); //close the file fclose(pFile); } return 0; } The output of the above code will be: No stream orientation. Stream is wide-oriented. ❮ C++ <cwchar> Library
https://www.alphacodingskills.com/cpp/notes/cpp-cwchar-fwide.php
CC-MAIN-2021-43
refinedweb
253
59.7
Originally posted by Kathy Hodgson: The reference is to an instance of S2. S2 inherits display(), so why isn't that call printing S2's own String s instead of its parent's? public class Question{ String s = "Outer"; public static void main(String[] args){ S2 s2 = new S2(); s2.display(); } } class S1 { String s = "S1"; void display() { System.out.println(s);} } class S2 extends S1{ String s = "S2"; } Could somebody please explain it? If a child class inherits a method without changing it, doesn't it really have it at all? The only way any of this makes sense to me is if the child class doesn't really have any method it inherits without overriding - in other words, it is not calling its own invisible copy of the display method (which is how I thought method inheritance worked) but is calling its parent's display method. Also working all 19 of Dan Chisholms exams helped. Static method Q.printS1 hides the static method P.printS1 in the super class P. Instance method Q.printS2 overrides the instance method P.printS2. Due the the differences between the hiding of static methods and the overridding of instance methods the invocation of the two methods in P.printS1S2 produces different results. The method invocation expression printS1 results in the invocation of the hidden super class method P.printS1. The method invocation expression printS2 results in the invocation of the overridding sub class method Q.printS2.
http://www.coderanch.com/t/242642/java-programmer-SCJP/certification/doesn-subclass-variable-hide
CC-MAIN-2013-48
refinedweb
244
66.44
68915/how-do-get-the-path-and-name-the-file-that-currently-executing I have scripts calling other script files but I need to get the filepath of the file that is currently running within the process. For example, let's say I have three files. Using execfile: How can I get the file name and path of script_3.py, from code within script_3.py, without having to pass that information as arguments from script_2.py? Hello @kartik, Since Python 3 is fairly mainstream, I wanted to include a pathlib answer, as I believe that it is probably now a better tool for accessing file and path information. from pathlib import Path current_file: Path = Path(__file__).resolve() If you are seeking the directory of the current file, it is as easy as adding .parent to the Path() statement: current_path: Path = Path(__file__).parent.resolve() Thank You!! Hey, @Subi, Regarding your query, you can go ...READ MORE You can use np.maximum.reduceat: >>> _, idx = np.unique(g, ...READ MORE my_dict = {'one': 'first', 'two': 'second', 'three': ...READ MORE i have textfile now i want to ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE Enumerate() method adds a counter to an ...READ MORE Hello kartik, The variation of the context processor ...READ MORE Hello, The error itself explain what's the problem.In ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/68915/how-do-get-the-path-and-name-the-file-that-currently-executing
CC-MAIN-2022-33
refinedweb
270
69.38
. - The semantics is simply expansion of the synonym.. Another example showing pattern synonyms used as views, with regular ViewPatterns: import qualified Data.Sequence as Seq pattern Empty = (Seq.viewl -> Seq.EmptyL) pattern x :< xs = (Seq.viewl -> x Seq.:< xs) pattern xs :> x = (Seq.viewr -> xs Seq.:> x) Implicitly-bidirectional pattern synonyms In cases where pat is in the intersection of the grammars for patterns and expressions (i.e. is valid both as an expression and a pattern), the pattern synonym is said to be bidirectional, and can be used in expression contexts as well. For example, the following two are not bidirectional: Plus1 from the earlier example in an expression?. The second part describes the expansion in expressions. fac 0 = 0 fac (Plus1 n) = Plus1 n * fac n Associated Patterns).
https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms?version=13
CC-MAIN-2015-48
refinedweb
130
52.56
Contents In the last days, I’ve played around with Cevelop a bit, mainly interested in the refactoring capabilities it offers. Of course, one of the main points of a modern IDE is the setup and configuration of a project (which I failed miserably at). Another is analyzing and providing quick help with our code. Setting up the project I used the “Commandline Videostore” project I also have used in the refactoring webinar I did together with Jetbrains/CLion. The reason is that I am familiar with the source and the necessary refactoring steps, so I could fully concentrate on the tooling. Cevelop currently only supports makefile and SCons projects, so I had to port the CMake project to one of those. I followed the FAQ and managed to open the project and compile it once, but from there it was a little misery. From what I can tell, Cevelop is Eclipse CDT based and has, at least for my taste, an unmanageable amount of menus, settings, etc. From the second build on, Cevelop demanded to be shown the installation directory of SCons which I don’t have. I could remedy the popups only by reinstalling Cevelop. The makefile build broke shortly after with linker errors I could not explain. I wanted to test the refactoring tooling instead of spending hours of experimenting with settings and/or searching through the Eclipse CDT documentation. So, I resorted to build and run the tests manually from command line. Another annoyance was Symantec which always warned me about Cevelop being not known well enough. Warnings and quick fixes Cevelops provides a healthy amount of warnings about source code and some quick fixes to remedy them. In general, they seem to be reasonable, but some seemed rather nonsensical. For example, consider the following two lines: double totalAmount = 0; int frequentRenterPoints = 0; The first line is OK by Cevelop’s standards. The second gets marked as “un- or ill-initialized variable found”. Cevelop seems to have problems parsing complicated macro invocations, e.g. the macro in the test suite gets marked as an error that a method can not be resolved. A missing include for vector will provide a quick fix, but only to create a “vector” class, not to include the header. The quick fix also won’t take into account qualified names, i.e. for std::vector it would still create a vector struct in the current (global) namespace instead of inside namespace std. Having e.g. a std::vector<Movie>, where Movie is a type that does not exist, Cevelop will provide a quick fix to create it. However, that quick fix is somewhat hidden in the context menu of the error marker. Creating the new type in its own header and source will work but won’t add the necessary include to the file where it’s needed. The refactorings Cevelop has its own refactoring menu, which is also available as a context menu. However, some of the refactorings seem to be missing from the context menu, and they are in different order and sometimes have slightly different names. All refactorings appear to be available always and result in an error if selected in the wrong context. For many refactorings that may modify larger quantities of code, Cevelop will show a diff view of everything it is about to change. This is a good thing, since some refactorings may change (and break) our code considerably, if we are not careful. The basics The following refactorings are the ones every good refactoring tool for any language should have. They are the bread and butter for any refactoring session. Rename… works very well for classes, functions, variables etc. It even renames headers and sources if they have the same name as a contained class that gets renamed. Extract Local Variable will introduce a new variable for the selected expression. It does, however, not use the extracted variable for all occurrences of the expression and has problems with type inference. FOr example, std::vector<std::string> v; v[1]; //extract v[1] can lead to the new variable having type __gnu_cxx:: __alloc_traits ::value_type. The counterpart, inline variable, is missing. Extract Constant takes a literal expression and extracts it into a constant, which is very useful to get rid of magic numbers etc. The newly created constant will be a const variable in an anonymous namespace. Since only literal expressions are allowed I’d have expected to see a constexpr. Extract function seems to work well and as expected. It takes a few lines of code and moves them into their own function with appropriate parameters. The original lines are replaced with a call to that function. The counterpart, inline function, however, is missing. Toggle function seems to move functions from source file to the header and back but won’t analyze and add the necessary includes. Hide method makes a method private. This functionality seems not to be available for data members, and I could not find counterparts, i.e. making a function public or protected. Extract Interface… will add a new header with an interface class for the selected class. You can select which functions will be added as pure virtual functions to the interface class. One additional refactoring I’ve been sorely missing is extract parameter, i.e. take an expression from inside a function and make it a parameter, moving it to the call sites. C++ specifics Elevate project seems to modify some variable initializations (but not all) in the project to use brace initialization. I could not find any documentation about the feature. Extract Template… will take a class or function and make a template out of it. It will suggest a bunch of expressions whose type it will make template parameters. With larger functions or classes this can be a bit confusing, but otherwise it can be a handy feature. Inline type alias will do just that: e.g. if you have using Foo = std::string; it would replace some occurrences of Foo with std::string. It won’t do this with e.g. a std::vector<Foo> and cannot handle alias templates. Convert typedef to alias is obvious: it modernizes the code to use the C++11 type alias syntax instead of a typedef. Extract (Namespace) Using Declaration… is supposed to extract a single using declaration, e.g. using std::string, and use the unqualified name after that. In my case, it did so in the middle of my larger function, leaving earlier occurrences of std::string untouched. Extract Using Namespace Directive… should, in theory, add something like using namespace std; and use unqualified names after that. I only got an exception in Cevelop when I tried it on my code. Inline Using… is the counterpart of the two and works for both using declarations and using directives. Qualify Unqualified Name… will put the std:: to your map. Extract to New Header File will take a selected function or class and move it into a new header file. The new header will automatically be included in the current file. Conclusion Besides the fact that I was unable to get started properly using Cevelop in over two hours, its refactoring capabilities are promising, yet incomplete. Some of the most basic refactorings like inlining variables and functions are missing, others fail, and the handling and documentation are somewhat less than perfect. On the other hand, there are some interesting choices like “Extract Template” which, albeit still lacking some usability, could come handy for developers of template-heavy libraries. 1 Comment Permalink Have you taken a look at their Template Visualization feature?
https://arne-mertz.de/2017/09/cevelop/
CC-MAIN-2017-47
refinedweb
1,271
63.7
Trying the following windows SDK samples: BlackJack Calculator I keep getting errors on the xaml: these error were taken from the Calculator build: MyApp.xaml(1,14): error MC4629: '' is not a recognized namespace. Line 1, position 14. Window1.xaml(1,9): error MC4629: '' is not a recognized namespace. Line 1, position 9. Please help, Thanks, Janiv Ratson. For a tool to migrate these samples and any other pre-existing WPF/XAML see this tool: Check the ReleaseNotes.htm file that came with Windows SDK. In section 6.6 it lists the samples that don't compile due to changes in the last CTP of WinFX and in section 6.6.12 is presented exactly this error: " 6.6.12 Many Windows Presentation Foundation samples do not build A recent change to the schema location in Windows Presentation Foundation causes many samples to not build. The changes are the following: 1. old = "" new= "" 2. old = "" new =" " 1. old = "" new= "" 2. old = "" new ="
http://social.msdn.microsoft.com/forums/en/wpf/thread/9d6a62cc-400d-46e9-a379-a4375ec7fcc1/
crawl-002
refinedweb
162
70.6
I just had a conversation with one of my colleagues and he mentioned the subject of using Static Events which was new to me and I want to investigate it in this article. Basically the idea is to have something shared among all the loaded instances of a class and ensure that changing the static property will cause all instances to update their content right away without the changer having to iterate through the existing objects and figure out which ones need to be updated. Kind of building the intelligence into the class so that all instances know what to do when the static property has changed. This reminds me of the example of exchange rate, which is very well known to those implementing in banking systems: all transactions respect the current exchange rate. But I don’t recall using Static Events for that. We saw this property as a separate object and we made sure that there is only one instance of it at a time. And all instances of transactions knew where to find it when needed. There is a fine difference though. The transactions will not need to know about the changes happening on the exchange rate, rather they will use the last changed value at the time that they use it by requesting the current value. This is not enough when, for example, we want to implement an application where the user interface reacts immediately on changes in the UI characteristics like font, as if it has to happen at real-time. It would be very easy if we could have a static property in the Font class called currentFont and a static method to change that value and a static event to all instances to let them know when they need to update their appearance. Font currentFont It is clear that we need a static field that all instances respect. Let’s say we need a static field called Font that all the labels will use to refresh when this base field has been changed. public class MyLabel : System.Windows.Forms.Label { // The static field all class instance will respect private static Font font = new Font("Verdana",11); This field requires a static property setter to allow us to make changes to the base field. public static Font Font { set { font = value; OnMyFontChange(new FontChangedEventArgs(font)); } } As you can see this is where we set the static variable but we also call the notification method to start the delegate. private static void OnMyFontChange(FontChangedEventArgs e) { if (MyFontChanged != null) MyFontChanged(null, e); } Now almost everything is set. All we need to do is to make sure that every instance subscribes to this event. And that is what we do in the constructor of the class. public MyLabel() { // Every instance subscribes to this event MyLabel.MyFontChanged += new FontChangedEventHandler(this.ChangeBaseFont); } The delegated method is where we use the changed value to refresh the UI. private void ChangeBaseFont(object sender, FontChangedEventArgs e) { base.Font = e.Font ; base.Invalidate(); } Of course we could access the static field without the need to pass it over and introduce a new EventArgs class but it just happens to be implemented so, and it certainly has nothing to do with this subject. EventArgs In the demo, I have provided a test application using this label control and it demonstrates how it will update multiple screens by changing the base Font property. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here if (this.MyCustomEvent!=null) { Delegate[] delegates = this.MyCustomEvent.GetInvocationList(); if (delegates!=null && delegates.Length>0) { for (int c=0;c<delegates.Length;c++) { this.MyCustomEvent-= (MyNamespace.MyCustomEventHandler) delegates[c]; } } } Exchange Rate for foreach Dispose protected override void Dispose(bool disposing) { if( disposing ) { MyLabel.MyFontChanged -= new FontChangedEventHandler(this.ChangeBaseFont); if (MyLabel.MyFontChanged == null) { MessageBox.Show("Test passed! You just called dispose method right."); } } base.Dispose (disposing); } General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/script/Articles/View.aspx?aid=12137
CC-MAIN-2016-30
refinedweb
709
62.27
: 1.7 ! anton 31: : bset ( bmask c-addr -- ) ! 32: tuck c@ or swap c! ; ! 33: 1.1 anton 34: : set-bit { u addr -- } 35: \ set bit u in bit-vector addr 36: u bits/au /mod 37: >r 1 bits/au 1- rot - lshift 1.7 ! anton 38: r> addr + bset ; 1.1 anton 39: 40: : compare-images { image1 image2 reloc-bits size file-id -- } 41: \G compares image1 and image2 (of size cells) and sets reloc-bits. 42: \G offset is the difference for relocated addresses 43: \ this definition is certainly to long and too complex, but is 44: \ hard to factor. 45: image1 @ image2 @ over - { dbase doffset } 46: doffset 0= abort" images have the same dictionary base address" 47: ." data offset=" doffset . cr 48: image1 cell+ @ image2 cell+ @ over - { cbase coffset } 49: coffset 0= 50: if 51: ." images have the same code base address; producing only a data-relocatable image" cr 52: else 1.2 pazsan 53: coffset abs 22 cells <> abort" images produced by different engines" 1.1 anton 54: ." code offset=" coffset . cr 55: 0 image1 cell+ ! 0 image2 cell+ ! 56: endif 57: size 0 58: u+do 59: image1 i th @ image2 i th @ { cell1 cell2 } 60: cell1 doffset + cell2 = 61: if 62: cell1 dbase - file-id write-cell throw 63: i reloc-bits set-bit 64: else 65: coffset 0<> cell1 coffset + cell2 = and 66: if 67: cell1 cbase - cell / { tag } 68: tag dodoes-tag = 69: if 70: \ make sure that the next cell will not be tagged 71: dbase negate image1 i 1+ th +! 72: dbase doffset + negate image2 i 1+ th +! 73: endif 74: -2 tag - file-id write-cell throw 75: i reloc-bits set-bit 76: else 77: cell1 file-id write-cell throw 78: cell1 cell2 <> 79: if 80: 0 i th 9 u.r cell1 17 u.r cell2 17 u.r cr 81: endif 82: endif 83: endif 84: loop ; 85: 86: : comp-image ( "image-file1" "image-file2" "new-image" -- ) 87: name slurp-file { image1 size1 } 1.4 pazsan 88: image1 size1 s" Gforth2" search 0= abort" not a Gforth image" 1.1 anton 89: drop 8 + image1 - { header-offset } 90: size1 aligned size1 <> abort" unaligned image size" 91: size1 image1 header-offset + 2 cells + @ header-offset + <> abort" header gives wrong size" 92: name slurp-file { image2 size2 } 93: size1 size2 <> abort" image sizes differ" 94: name ( "new-image" ) w/o bin create-file throw { outfile } 95: size1 header-offset - 1- cell / bits/au / 1+ { reloc-size } 96: reloc-size allocate throw { reloc-bits } 97: reloc-bits reloc-size erase 98: image1 header-offset outfile write-file throw 99: base @ hex 100: image1 header-offset + image2 header-offset + reloc-bits 101: size1 header-offset - aligned cell / outfile compare-images 102: base ! 103: reloc-bits reloc-size outfile write-file throw 104: outfile close-file throw ; 105: 106:
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/comp-i.fs?annotate=1.7;sortby=rev;f=h;only_with_tag=MAIN;ln=1
CC-MAIN-2021-17
refinedweb
480
70.73