2009-09-03 09:07:59 8 Comments
For example, if we have a table Books, how would we count the total number of book records with Hibernate?
@xrcwrn 2015-03-16 05:53:10
@rajadilipkolli 2017-07-03 07:26:25
It should be ```Long count = (Long) session.createQuery("select count(1) from Book").uniqueResult();``` as it will improve performance.
@Salandur 2009-09-03 10:34:05
For older versions of Hibernate (<5.2):
Assuming the class name is Book:
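The code snippet itself is not shown in this copy of the answer. Judging from the comments below (which refer to createCriteria and Projections.rowCount()), it was almost certainly the Criteria row-count idiom; a minimal sketch, assuming an open Hibernate session in scope:

```java
// Criteria-based row count (pre-5.2 API); "Book" is the entity name
Number count = (Number) session.createCriteria("Book")
        .setProjection(Projections.rowCount())
        .uniqueResult();
```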
It is at least a Number, most likely a Long.
@dj_segfault 2011-11-15 18:48:31
It returns a long.
@Jerry Tian 2012-02-22 03:28:21
As @Salandur suggests, "It is at least a Number", and Number type has "intValue()", "longValue()" methods, so we can easily get the desired primitive type we want: ((Number) criteria.uniqueResult()).intValue()
@Lion 2012-07-27 03:20:50
It returns a type of Object.
@Tobias M 2012-09-17 01:27:16
If the entity mapping can't be found using a string parameter to the createCriteria method, session.createCriteria(Book.class) can also be used.
@bcmoney 2013-04-04 18:36:42
Like @MontyBongo said, I actually had to refer to the class like this:
return (Number) session.createCriteria(Book.class).setProjection(Projections.rowCount()).uniqueResult();
@Salandur 2013-05-08 08:55:42
Actually, Hibernate also maps the class name (Book in this case) as an entity name. So you can use both the entity name ("Book") and the class name (Book.class). But if you have 2 entity mappings on the same class then you should use the entity name, otherwise both entity mappings will be loaded.
@nikel 2014-10-02 02:19:59
But if the number of rows in the table is more than what Long can hold, then what do we cast to?
@Salandur 2014-10-03 13:56:00
Then you should not use a relational database ;). Max value of long is 9.223372037×10¹⁸, which is laaaaaaaaaarge
@hram908 2016-12-16 16:06:21
how can you get the row count with connection?
@Lluis Martinez 2018-04-18 09:34:41
@dj_segfault are you sure it returns a long always? It can depend on the driver and the DBMS flavour I guess.
@Capn Sparrow 2018-10-14 00:38:41
createCriteria is deprecated since 5.2.
@Jon Spokes 2009-09-03 09:32:58
You could try count(*) in an HQL query:
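The HQL snippet itself is not shown here. Based on the follow-up comments (the int/Integer cast and the class-name-versus-table-name discussion), it was presumably along these lines; a sketch, assuming an open session:

```java
// HQL count over the Books class (not the database table)
Integer count = (Integer) session
        .createQuery("select count(*) from Books")
        .uniqueResult();
```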
Where Books is the name of the class, not the table in the database.
@craftsman 2009-09-03 09:55:25
Sorry, but it's not working with Java and Hibernate :( (I did replace int with Integer, as it is in Java for type casting.)
@Jon Spokes 2009-09-03 12:21:50
It should work - with Integer instead of int? You need to put the class name in the HQL, not the table name - that is the only thing I can think of that may be wrong.
@Matt Sidesinger 2009-09-04 13:10:48
I believe the post directly below this is more in line with the core Hibernate principles.
@rParvathi 2016-07-01 11:10:29
For me it's not working with Java and Hibernate. What should I do instead?
@Vlad Mihalcea 2018-06-27 06:11:23
It's very easy, just run the following JPQL query:
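The query itself is missing here; given the explanation that follows, it was presumably something like this sketch (the entity name Book and the entityManager variable are assumptions):

```java
// JPQL count, cast to Number for portability across databases
int count = ((Number) entityManager
        .createQuery("select count(b) from Book b")
        .getSingleResult())
        .intValue();
```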
The reason we are casting to Number is that some databases will return Long while others will return BigInteger, so for portability's sake you are better off casting to a Number and getting an int or a long, depending on how many rows you are expecting to be counted.
@LucianoDemuru 2018-03-14 10:48:26
This works in Hibernate 4 (tested); the snippet and the getCurrentSession() helper it relies on are shown below.
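Neither snippet survives in this copy of the answer. A plausible sketch of both the counting call and the helper (sessionFactory is assumed to be an injected field):

```java
// Hibernate 4 style count via the current session
long count = (Long) getCurrentSession()
        .createCriteria(Book.class)
        .setProjection(Projections.rowCount())
        .uniqueResult();

// ...where getCurrentSession() is:
private Session getCurrentSession() {
    return sessionFactory.getCurrentSession(); // sessionFactory assumed injected
}
```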
@rajadilipkolli 2017-02-14 20:40:31
If you are using Hibernate 5+, then the query will be modified as shown below; or, if you need a TypedQuery, use the second variant.
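The two snippets are missing here; for Hibernate 5.2+ the untyped and typed variants usually look like this (a sketch, not necessarily the author's exact code):

```java
// Plain query
Long count = (Long) session
        .createQuery("select count(b) from Book b")
        .uniqueResult();

// TypedQuery variant
Long typedCount = session
        .createQuery("select count(b) from Book b", Long.class)
        .uniqueResult();
```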
@Antonio 2011-02-19 12:10:06
Here is what official hibernate docs tell us about this:
You can count the number of query results without returning them:
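The quoted snippet itself is missing; the corresponding example in the old Hibernate reference documentation was essentially the following (reproduced from memory, so treat it as approximate):

```java
( (Integer) session.createQuery("select count(*) from ....")
        .iterate().next() ).intValue()
```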
However, it doesn't always return an Integer instance, so it is better to use java.lang.Number for safety.
@Tom 2013-05-23 02:05:42
+1 for an answer that gives the Hibernate team recommended method.
@rogerdpack 2013-09-17 20:17:43
For me this gave "java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer" but casting to a Long instead works...
@machinery 2014-02-26 12:02:40
@rogerdpack this is because Hibernate changed the returned type in 3.5 to Long: community.jboss.org/wiki/HibernateCoreMigrationGuide35
@Guillaume Husta 2018-02-06 09:55:36
The return type for the count function can be found in org.hibernate.dialect.function.StandardAnsiSqlAggregationFunctions.CountFunction (StandardBasicTypes.LONG).
@marioosh 2010-11-27 08:06:09
In Java I usually need to return an int, so I use this form:
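The snippet was not preserved here; based on the comments below (it is HQL with count(*), and Hibernate 3.5+ returns Long), it presumably had this shape. The getSession() helper is an assumption:

```java
// HQL count(*) unwrapped to a primitive int
int count = ((Long) getSession()
        .createQuery("select count(*) from Book")
        .uniqueResult())
        .intValue();
```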
@Jason Nichols 2012-01-18 14:59:08
The accepted answer for this question didn't work for me, but yours did. Thanks!
@kommradHomer 2012-03-16 10:28:37
Is this the fastest and cheapest way to get the count of a query? I mean Hibernate-wise.
@thermz 2012-04-17 15:35:55
What's the point of using an ORM if we end up coding SQL anyway?
@Pramod 2012-09-21 05:37:42
That's my main concern (using SQL instead of HQL). I have to use nested SELECT just to count number of rows that comes after left outer join (I did not find proper implementation of left outer join in hibernate).
@BrunoJCM 2012-12-28 20:22:16
First off, this solution doesn't use SQL, it's HQL. And using count(*) instead of 'select count(e) from E e' or criteria works with @EmbeddedId and databases that don't support tuple count (eg. MySQL, where queries like 'select count((a,b)) from table1' doesn't work).
@another 2017-01-17 08:57:46
Why not just cast directly to Integer, and let the unboxing do its work? Please, is there an explanation?
@Jerry Chin 2017-08-14 07:03:34
The latest version of Hibernate (5.1.9.final at the time of writing) makes it very hard to do a simple conditional querying, so I'd prefer this method.
@Lluis Martinez 2018-04-18 09:35:46
@thermz That's why ORMs are leaky abstractions. Very clear in this case.
POSIX
This page is based on POSIX.1-2008 2016 Edition
POSIX defines the operating system interface. The standard contains these volumes:
- Base Definitions: conventions, regular expressions, headers
- System Interfaces: system calls
- Shell & Utilities: shell command language and shell utilities
- Rationale
I found most of them not that interesting, except Base Definitions section 9, regular expressions. This definition is used by many shell utilities such as awk.
1 Regular Expression
1.1 Definition
The search for a match starts at the beginning of the string, and stops when the first matching sequence is found. If the pattern can match several strings at a start point, the longest such sequence is matched. In other words: the longest of the leftmost matches.
Similarly, each subpattern (capturing group?), from left to right, shall also match the longest possible string. E.g. when matching \(.*\).*, the first subexpression (\1) matches the whole string.
The end of the string should be a NUL. Some utilities use <newline>.
1.2 Basic Regular Expressions (BRE)
Special characters
- <period>, <left-square-bracket>, <backslash>
  - special when NOT in a bracket expression
  - <period> matches any character other than NUL
- <asterisk>
  - special when NOT in a bracket expression
  - and NOT the first character of the entire BRE (after the initial ^, if any)
  - and NOT the first character of a subexpression (after the initial ^, if any)
- <circumflex>
  - special when used as the first character of the entire BRE
  - or when used as the first character in a bracket expression
- <dollar-sign>
  - special when used as the last character of the entire BRE
1.2.1 Bracket Expression
[^][:alnum:]1-8.-] (an example combining the cases below)
- the close bracket can be ordinary if it is the first character (after the initial ^, if any)
- the hyphen (-) can be ordinary if it is first or last. Put it last if you want to use it together with the close bracket above
- a hyphen range is inclusive
- ^ immediately after the left bracket turns it into a non-matching list, i.e. it matches any character except these
- [:alnum:] is only valid inside a bracket expression. It is called a character class expression. All such expressions:
  - alnum, cntrl, lower, upper, space, alpha, digit
  - print, blank, graph, punct, xdigit
1.2.2 Matching Multiple Characters
- * matches zero or more occurrences of the preceding element
- \(\) is used for subexpressions. It can be arbitrarily nested.
- \N is a back-reference. It shall match the same string as was matched by the N-th subexpression; it can be empty.
- \{m\}, \{m,\}, \{m,n\}: interval expressions; the bounds are inclusive.
1.2.3 Precedence
From high to low:
- [::] (collation-related bracket symbols)
- escape \
- bracket expression
- subexpression / back-reference
- multiple characters (* and \{m,n\}; + and ? for ERE)
- concatenation
- anchor
- alternation (| for ERE)
1.3 Extended Regular Expressions (ERE)
- ( ) are special
- + ? { are special
- | is special
2 pThread
```
#include <pthread.h>

pthread_create (thread, attr, start_routine, arg)
pthread_exit (status)
pthread_join (threadid, status)
pthread_detach (threadid)
```
2.1 Create threads
If main() finishes before the threads it has created, and exits with pthread_exit(), the other threads will continue to execute. Otherwise, they will be automatically terminated when main() finishes.
```cpp
#define NUM_THREADS 5

struct thread_data {
    int thread_id;
    char *message;
};

int main() {
    pthread_t threads[NUM_THREADS];
    struct thread_data td[NUM_THREADS];
    int rc;
    int i;

    for (i = 0; i < NUM_THREADS; i++) {
        td[i].thread_id = i;
        td[i].message = "This is message";
        rc = pthread_create(&threads[i], NULL, PrintHello, (void *)&td[i]);
        if (rc) {
            cout << "Error:unable to create thread," << rc << endl;
            exit(-1);
        }
    }
    pthread_exit(NULL);
}
```
2.2 Join and Detach
```cpp
int main() {
    int rc;
    int i;
    pthread_t threads[NUM_THREADS];
    pthread_attr_t attr;
    void *status;

    // Initialize and set thread joinable
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    for (i = 0; i < NUM_THREADS; i++) {
        cout << "main() : creating thread, " << i << endl;
        rc = pthread_create(&threads[i], &attr, wait, (void *)i);
        if (rc) {
            cout << "Error:unable to create thread," << rc << endl;
            exit(-1);
        }
    }

    // free attribute and wait for the other threads
    pthread_attr_destroy(&attr);
    for (i = 0; i < NUM_THREADS; i++) {
        rc = pthread_join(threads[i], &status);
        if (rc) {
            cout << "Error:unable to join," << rc << endl;
            exit(-1);
        }
        cout << "Main: completed thread id :" << i;
        cout << " exiting with status :" << status << endl;
    }

    cout << "Main: program exiting." << endl;
    pthread_exit(NULL);
}
```
Now that we have covered the Windows Store App specific design and coding adventures, we can make a quick detour through some other coding gems. Today we will peek behind the solution scene to explore the News feature.
Those familiar with the first version of the ALM Rangers Treasure Map will recognise two new island categories, namely Favourites and News. The Treasure Map under the bonnet (hood) #5 … Finishing touches, but not yet finished post briefly mentioned the ability to mark categories and projects as favourites using the AppBar … therefore we can skip that island and sail to the News island.
When we select (click) the News island we are presented with an aggregated list of news titles and summary extracts. In this post we will investigate where these gems come from.
If we look into the TreasureMapDataModel.xml configuration file we recognise a news tag, embracing five (5) RSS feeds. You can explore these feeds to validate that the entries above are indeed legit and to dig into more of the details of each News post.
To explore the News feature you need to visit two areas of the Windows Store App solution.
As always we recommend the use of the CodeMap feature in Visual Studio to visually map the dependencies and execution of the code.
It is evident that the RangersNewsFeed is referenced and called both by the App when initializing and when navigating to the News View.
Again the team uses async features to ensure a seemingly instantaneous application initialisation and to avoid bottlenecking the application's performance and behaviour through the loading of the static configuration for the map and the expensive retrieval of News items.
The code contains some regular expressions. Use the UNISA Chatter – Design patterns in C++ Part 6: Widgets Validation and Regular Expressions post, created a long, long time ago, if you need a quick reference sheet for regular expressions.
These are strictly sample code extracts and most likely have been updated in the interim to meet quality bars, support new features or other code churn factors.
```csharp
public App()
{
    this.InitializeComponent();
    RangersNewsFeed = new RangersNewsFeed();
    Database = new DB();
    Database.LoadDBAsync().ContinueWith(t =>
    {
        TileUpdater = TileUpdateManager.CreateTileUpdaterForApplication();
        TileUpdater.EnableNotificationQueue(true);
        TileGenerator.GeneratePercentageTile();

        RangersNewsFeed.LoadItemsAsync().ContinueWith(_ =>
        {
            UpdateNewsTiles();
        });
    }).Wait();
}
```
```csharp
//-----------------------------------------------------------------------
// <copyright file="RangersNewsFeed.cs" company="Microsoft Corporation">
//   Copyright Microsoft Corporation. All Rights Reserved. This code released
//   under the terms of the Microsoft Public License (MS-PL). This is sample
//   code only, do not use in production environments.
// </copyright>
//-----------------------------------------------------------------------

namespace Microsoft.ALMRangers.VsarTreasureMap.WindowsStoreApp.News
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text.RegularExpressions;
    using System.Threading.Tasks;
    using Windows.Web.Syndication;

    /// <summary>
    /// Defines the class which handles retrieval of RSS feeds.
    /// </summary>
    internal class RangersNewsFeed
    {
        /// <summary>
        /// The closing paragraph break tag
        /// </summary>
        private Regex closingParagraphBreakTag = new Regex("</?[pP]>");

        /// <summary>
        /// The line break tag
        /// </summary>
        private Regex lineBreakTag = new Regex("</?[bB][rR]/?>");

        /// <summary>
        /// The tag remover regex
        /// </summary>
        private Regex tagRemoverRegex = new Regex("<.*?>");

        /// <summary>
        /// Initializes a new instance of the <see cref="RangersNewsFeed"/> class.
        /// </summary>
        public RangersNewsFeed()
        {
            this.Items = new SortedSet<NewsStory>(new NewsStoryComparer());
        }

        /// <summary>
        /// Gets or sets the items.
        /// </summary>
        /// <value>The items.</value>
        public SortedSet<NewsStory> Items { get; set; }

        /// <summary>
        /// Loads the items async.
        /// </summary>
        /// <returns>Task.</returns>
        public async Task LoadItemsAsync()
        {
            this.Items.Clear();
            var client = new SyndicationClient();
            var tasks = (from url in App.Database.NewsUrls
                         select client.RetrieveFeedAsync(url).AsTask()).ToList();

            while (tasks.Count > 0)
            {
                var nextTask = await Task.WhenAny(tasks);
                if (nextTask.Status == TaskStatus.RanToCompletion)
                {
                    this.ParseSyndicationFeed(nextTask.Result);
                }

                tasks.Remove(nextTask);
            }
        }

        /// <summary>
        /// Cleanups the specified content.
        /// </summary>
        /// <param name="content">The content.</param>
        /// <returns>System.String.</returns>
        private string Cleanup(string content)
        {
            var result = this.lineBreakTag.Replace(content, Environment.NewLine);
            result = this.closingParagraphBreakTag.Replace(result, Environment.NewLine);
            result = this.tagRemoverRegex.Replace(result, string.Empty);
            result = result.Replace("&amp;", "&");
            return result.Trim();
        }

        /// <summary>
        /// Parses the syndication feed.
        /// </summary>
        /// <param name="syndicationFeed">The syndication feed.</param>
        private void ParseSyndicationFeed(SyndicationFeed syndicationFeed)
        {
            foreach (var item in syndicationFeed.Items)
            {
                this.Items.Add(new NewsStory()
                {
                    Id = item.Id,
                    Title = item.Title.Text,
                    Published = item.PublishedDate.DateTime,
                    Author = item.Authors.Aggregate<SyndicationPerson, string>(
                        string.Empty,
                        (current, next) =>
                        {
                            if (current.Length > 0)
                            {
                                current += ", ";
                            }

                            current += next.NodeValue;

                            return current;
                        }),
                    Content = this.Cleanup(item.Summary.Text),
                    Link = item.Links[0].Uri
                });
            }
        }
    }
}
```
Dev Lead question time …
Q: What, if anything, was a challenge with the News feature and why?
Robert MacLean, our dev lead, replies …
Nothing :) Windows Store Apps really treat the web as a first class citizen, so connecting to websites, grabbing RSS etc... are all built in, so it is super easy to work with. Add to that the new async options and we can build some super slick pieces of code like LoadItemsAsync, which does the loading of feeds but does it async & loads the content as fast as possible. Now if you asked me what sections I think could have more work - the number one is Cleanup, which tries to clean up the HTML for presentation & without an HTML to Text parser we have added some of our own logic, but this is an area which could use a better parser than what we have got.
Next we will peek into the code that tracks the progress through the treasure map to see if we can reveal a few more gems.
macro_rules! write { ($dst:expr, $($arg:tt)*) => { ... }; }
Writes formatted data into a buffer.
This macro accepts a ‘writer’, a format string, and a list of arguments. Arguments will be
formatted according to the specified format string and the result will be passed to the writer.
The writer may be any value with a
write_fmt method; generally this comes from an
implementation of either the
fmt::Write or the
io::Write trait. The macro
returns whatever the
write_fmt method returns; commonly a
fmt::Result, or an
io::Result.
See
std::fmt for more information on the format string syntax.
Examples
```rust
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut w = Vec::new();
    write!(&mut w, "test")?;
    write!(&mut w, "formatted {}", "arguments")?;
    assert_eq!(w, b"testformatted arguments");
    Ok(())
}
```
A module can import both
std::fmt::Write and
std::io::Write and call
write! on objects
implementing either, as objects do not typically implement both. However, the module must
import the traits qualified so their names do not conflict:
```rust
use std::fmt::Write as FmtWrite;
use std::io::Write as IoWrite;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut s = String::new();
    let mut v = Vec::new();
    write!(&mut s, "{} {}", "abc", 123)?; // uses fmt::Write::write_fmt
    write!(&mut v, "s = {:?}", s)?; // uses io::Write::write_fmt
    assert_eq!(v, b"s = \"abc 123\"");
    Ok(())
}
```
Note: This macro can be used in
no_std setups as well.
In a
no_std setup you are responsible for the implementation details of the components. | https://doc.rust-lang.org/nightly/core/macro.write.html | CC-MAIN-2022-21 | refinedweb | 251 | 68.47 |
How to reverse a vector in c++
In this tutorial, we will learn how to reverse a vector in C++. We will use the STL (Standard Template Library); STL is a set of C++ template classes. Using the STL, it is easy to implement this code.
Here we will use a template function, std::reverse. It reverses the order of the elements in the range from first to last.
In addition, the reverse function calls iter_swap to swap the elements into their new locations. The parameters that are passed are iterators to the first element and to one past the last element of the vector. The range includes the first element but not the element the end iterator points to.
Moreover, reverse() is a predefined function in the header file algorithm, so it is easy to reverse the elements of a vector with its help.
Firstly, we will store the numbers in a vector and we will create an iterator for it.
In this tutorial, we have learned a technique to reverse the vector that is simple and uncomplicated. In conclusion, there are many more functions in the STL that make programming simple and unfussy; we can use those functions directly in our code.
Finally, you need to follow all the steps of this code to reverse the elements of the vector.
REVERSE A VECTOR in C++
```cpp
#include<iostream>
#include<bits/stdc++.h>
#include<vector>
#include<algorithm>
using namespace std;

int main()
{
    //Get the vector
    vector<int> v = {1, 54, 23, 45, 2, 87, 56, 9, 4};

    //create an iterator
    vector<int>::iterator itr;

    //print the vector
    cout << "before reversing";
    cout << endl;
    for (int i = 0; i < v.size(); i++)
    {
        cout << v[i] << " ";
    }

    //reverse the vector here
    reverse(v.begin(), v.end());

    //print the reversed vector
    cout << endl << " after reversing";
    cout << endl;
    for (int i = 0; i < v.size(); i++)
    {
        cout << v[i] << " ";
    }
    return 0;
}
```
OUTPUT EXAMPLE:
Before reversing
1 54 23 45 2 87 56 9 4
After reversing
4 9 56 87 2 45 23 54 1
Hello, I am having an issue where GameObject is acting like gameObject would; the game doesn't recognize the GameObject put into the slot in the Unity inspector. It instead keeps resolving it, whenever I GetComponent, as gameObject, i.e. the object that the script is currently on. Not sure what is going on or how to fix it, so any help is appreciated.
public class Scp_ConstantValues : MonoBehaviour
{
public GameObject Color1;
public GameObject Color2;
public Color Red = new Color(1.0F, 0.3F, 0.4F);
public Color Green = new Color(0.2F, 1.0F, 0.4F);
public Color Blue = new Color(0.2F, 0.3F, 1.0F);
public void SetColor1Red()
{
RocketColor1 = "Red";
ParticleSystem Color1 = GetComponent<ParticleSystem>();
Color1.startColor = Red;
}
public void SetColor1Green()
{
RocketColor1 = "Green";
}
Answer by UnityCoach · Jul 28, 2017 at 11:55 PM
There are inconsistencies between Color1, the GameObject member of the class, and Color1, the ParticleSystem within the SetColor1Red method.
By the way, I'm not sure why you have individual methods. You could have a more generic method SetColor (int colorIndex, Color color) {}
Mostly because I am totally new to programming and just go off the help of the Unity page and trial and error currently, so I'm not sure what you mean by inconsistencies. Could you explain a bit more, please?
Color1, within the scope of SetColor1Red method, is declared as a ParticleSystem type.
At the root of the class (MonoBehaviour Script), it's declared as a GameObject.
So, when you access the startColor property of Color1, it's the ParticleSystem property, the one found on the same game object, as you assigned it with GetComponent.
GameObject doesn't have a startColor property, so the compiler knows which "Color1". But if it did, the compiler would be lost.
Ooh, I see what you mean. Then maybe it's the way I'm thinking it should go. I was trying to access another game object that is known in this script as Color1, then I am trying to access the particle system of that object, then trying to access the startColor of the particle system and setting it to the color red, green, or blue. I'm assuming I have done this totally wrong.
This article is aimed at explaining how the
WebBrowser control in C# works and how to build an RSS Feed reader using several
WebBrowser controls. It also explains how RSS feed XML files can be processed using functions provided by
System.Xml namespace.
The application created here has the features described below.
The application in this article has been developed using Visual Studio 2008, but in .NET Framework 2.0. It can be built and run as is using .NET Framework 3.0 and 3.5.
The code in this article works using
WebBrowser controls which will be explained in detail later in the article. The code takes in an RSS Feed link and adds it to the Web content of the LHS pane. Any click on a feed in the RHS pane is directed towards loading the corresponding feed after being formatted using the
XPathNavigator, in the RHS top pane. Any click on a feed link in this pane is redirected to loading the corresponding page in the RHS bottom pane. The user can further browse other pages in this pane by clicking on other links on this page. The user can move back and forth between browsed pages using the Forward and Back buttons in the toolbar.
Now, coming to the
WebBrowser control. This control provides us with all the functionality needed to build a Web browsing application. I will go through all its methods and properties used in the application being built here. But first, let us see how we can fetch and process a page provided to us by an RSS Feed link.
Since RSS feeds are XML, we need to use XML objects to process it. For this, we need to import the namespace
System.Xml.
using System.Xml;
Then we need to create an XML document and load up the RSS content into it:
```csharp
XmlDocument RSSXml = new XmlDocument();
RSSXml.Load(txtURL.Text);
```
Then we need to get the list of nodes from this XML document, that can be used to display the feed content. After that, we go through each node and pull out display relevant information from it, like the title, link and description.
```csharp
XmlNodeList RSSNodeList = RSSXml.SelectNodes("rss/channel/item");
StringBuilder sb = new StringBuilder();
foreach (XmlNode RSSNode in RSSNodeList)
{
    XmlNode RSSSubNode;

    RSSSubNode = RSSNode.SelectSingleNode("title");
    string title = RSSSubNode != null ? RSSSubNode.InnerText : "";

    RSSSubNode = RSSNode.SelectSingleNode("link");
    string link = RSSSubNode != null ? RSSSubNode.InnerText : "";

    RSSSubNode = RSSNode.SelectSingleNode("description");
    string desc = RSSSubNode != null ? RSSSubNode.InnerText : "";

    sb.Append("<font face='arial'><p><b><a href='");
    sb.Append(link);
    sb.Append("'>");
    sb.Append(title);
    sb.Append("</a></b><br/>");
    sb.Append(desc);
    sb.Append("</p></font>");
}
```
At the end of this, the string builder contains the content of RSS feed, as a well formatted HTML. This is where we start off with the
WebBrowser control. The
WebBrowser control can be loaded with a page using several ways. A URL can directly be loaded to the
WebBrowser control using
Navigate(Uri) function of the
WebBrowser. However to load HTML content that has been created programmatically, we need to populate the
DocumentText of the
WebBrowser using:
RSSBrowser.DocumentText = sb.ToString();
Now we need to learn how to communicate between two browser controls. By default, if a link is clicked on a page loaded in a
WebBrowser control, it loads the target page in the same control. In case we want it to load the content in another browser, we need to block this control and invoke the other control's navigation.
To do this, we need to understand the
Navigating event of the
WebBrowser control. This event is triggered before a
WebBrowser control starts navigating to a new page. It can be used to stop the navigation by setting its event argument's
Cancel value to
true as in the following code:
```csharp
private void RSSList_Navigating(object sender, WebBrowserNavigatingEventArgs e)
{
    if (!m_bFromLoadEvent)
    {
        e.Cancel = true;
        NetBrowser.Navigate(e.Url);
    }
    else
    {
        m_bFromLoadEvent = false;
    }
}
```
This code, based on a boolean value, blocks the current object's navigation and navigates another
WebBrowser control
NetBrowser, to the target URL. This is how the different panes communicate with each other in the given application.
Lastly, we need to learn how to navigate back and forth between the pages already visited by the
WebBrowser control. This is quite simple and can be done as follows:
```csharp
if (NetBrowser.CanGoBack)
{
    NetBrowser.GoBack();
}

if (NetBrowser.CanGoForward)
{
    NetBrowser.GoForward();
}
```
The
CanGoForward and
CanGoBack properties show the history status in forward and backward directions. The
GoBack() and
GoForward() functions navigate back and forth in the history of visited pages.
The application stores the list of subscribed RSS feeds in a text file in the same location as the program EXE. Coming soon in the next versions of
ReadForSpeed are the features to tag feeds and store feed content for offline usage.
Test-driven Development Support with the Generate From Usage Feature
- Tuesday, October 28, 2008 6:33 PMModeratorHi all -
My name is Karen Liu and I'm the lead program manager of the C# and VB IDE's. One direction we've taken in this release is to provide great support when you're writing code to consume an API before it even exists (one example is in test-driven development where you write tests first). Check out our new generate from usage feature -- it allows you to generate stubs for any class, constructor, method, or property you use before it exists (think about VS as laying down the train tracks for you).
We'd love your feedback on the workflow and scope of the feature! Do you wish we generated something else? Are there scenarios you use often that we're missing?
For more information on the CTP itself, check out.
Thanks!
Karen Liu
Lead Program Manager C#/VB
Program Manager, Visual C#
All Replies
- Saturday, November 15, 2008 6:42 PMHi. I must say this is an awesome and useful feature and I expect it will help a lot in moving to TDD.
While reproducing the walkthrough in my VS2010 VPC and generating a class from the unit test, I chose to place the class in another class library project, but when I opened the class I noticed that the generated class is placed inside the test class's namespace, not in the class library's default namespace.
Not sure if this is a bug or by design, but I would really expect to have the new class in the class library namespace for better grouping and organization.
Is it possible to make this change in this feature?
Thanks,
Julio
- Edited by Julio O Casal Saturday, November 15, 2008 6:42 PM
- Monday, January 12, 2009 7:25 AM (Moderator)
Julio -
Thanks for the feedback here! You'll be happy to see that we've made this change for the Beta.
Karen
Lead Program Manager - C#/VB
- Monday, January 12, 2009 3:46 PM
Thanks a lot!
Julio | http://social.msdn.microsoft.com/Forums/en-US/vs2010ctpvbcs/thread/d37cd027-dfa0-4bbd-958e-765446dd3476 | crawl-003 | refinedweb | 348 | 65.96 |
Example 1: Using a for loop
The content of the file
my_file.txt is
honda 1948
mercedes 1926
ford 1903
Source Code
```python
def file_len(fname):
    with open(fname) as f:
        for i, l in enumerate(f):
            pass
    return i + 1

print(file_len("my_file.txt"))
```
Output
3
Using a for loop, the number of lines of a file can be counted.
- Open the file in read-only mode.
- Using a for loop, iterate through the object
f.
- In each iteration a line is read, and enumerate() increases the value of the loop variable i; after the loop, i + 1 is the number of lines.
Example 2: Using a generator expression
```python
num_of_lines = sum(1 for l in open('my_file.txt'))
print(num_of_lines)
```
Output
3
- Open the file in read-only mode.
- Using a for loop, iterate through
open('my_file.txt').
- After each iteration, return 1.
- Find the sum of the returned values. | https://www.programiz.com/python-programming/examples/line-count | CC-MAIN-2022-21 | refinedweb | 139 | 68.47 |
Every so often, it’s good to get a little practice in using regular expressions… Via the wonderful F1 Metrics blog, I noticed that the @f1debrief twitter account had been tweeting laptimes from the F1 testing sessions. The data wasn’t republished in the F1metrics blog (though I guess I could have tried to scrape it from the charts) but it still is viewable on the @f1debrief timeline, so I grabbed the relevant tweets using the Twitter API statuses/user_timeline call:
```python
response1 = make_twitter_request(twitter_api.statuses.user_timeline,
                                 screen_name='f1debrief', count=200,
                                 exclude_replies='true', trim_user='true',
                                 include_rts='false',
                                 max_id='705792403572178944')
response2 = make_twitter_request(twitter_api.statuses.user_timeline,
                                 screen_name='f1debrief', count=200,
                                 exclude_replies='true', trim_user='true',
                                 include_rts='false',
                                 max_id=response1[-1]['id'])
tweets = response1 + response2
```
The tweets I was interested in look like this (and variants thereof):
The first thing I wanted to do was to limit the list of tweets I’d grabbed to just the ones that contained a list of laptimes. The way I opted to do this was to create a regular expression that spotted patterns of the form N.NN, and then select tweets that had three or more instances of this pattern. The regular expression .findall method will find all instances of the specified pattern in a string and return them in a list.
```python
import re

regexp = re.compile(r'\d\.\d')

#reverse the order of the tweets so they are in ascending time order
for i in tweets[::-1]:
    if len(re.findall(regexp, i['text'])) >= 3:
        #...do something with the tweets containing 3 or more laptimes
```
Inspecting several of the timing related tweets, they generally conform to a pattern of:
- first line: information about the driver and the tyres (in brackets)
- a list of laptimes, each time on a separate line;
- an optional final line that typically started with a hashtag
We can use a regular expression match to try to pull out the name of the driver and tyre compound based on a common text pattern:
```python
#The driver name typically appears immediately after the word του
#The tyre compound appears in brackets
regexp3 = re.compile('^.* του (.*).*\s?\((.*)\).*')
#I could have tried to extract drivers more explicitly from a list of
#drivers names I knew to be participating

#split the tweet text by end of line character
lines = i['text'].split('\n')

#Try to pull out the driver name and tyre compound from the first line
m = re.match(regexp3, lines[0])
if m:
    print('', m.group(1).split(' ')[0], '..', m.group(2))
    #There is occasionally some text between the driver name and the
    #bracketed tyre compound, so split on a space and select the first item
    dr = m.group(1).split(' ')[0]
    ty = m.group(2)
else:
    dr = lines[0]
    ty = ''
```
For the timings, we need to do a little bit of tidying. Generally times were of the form N:NN.NN, but some were of the form NN.NN. In addition, there were occasional rogue spaces in the timings. In this case, we can use regular expressions to substitute on a particular pattern:
```python
for j in lines[1:]:
    j = re.sub('^(\d+\.)', '1:\\1', j)
    j = re.sub('^(\d)\s?:\s?(\d)', '\\1:\\2', j)
```
The final code can be found in this gist and gives output of the form:
There are a few messed up lines, as the example shows, but these are easily handled by hand. (There is often a trade-off between fully automating and partially automating a scrape. Sometimes it can be quick just to do a final bit of tidying up in a text editor.) In the example output, I also put in an easily identified line (starting with == that shows the original first line of a tweet (it also has the benefit of making it easy to find the last line of the previous tweet, just in case that needs tidying too…) These marker lines can easily be removed from the file using a regular expression pattern as the basis of a search and replace (replacing with nothing to delete the line).
So that’s three ways of using regular expressions – to count the occurrences of a pattern and use that as the basis of a filter; to extract elements based on pattern matching in a string; and as the basis for a direct pattern based string replace/substitution. | https://blog.ouseful.info/2016/03/15/using-regular-expressions-to-filter-tweets-based-on-the-number-of-times-a-pattern-appears-within-them/ | CC-MAIN-2021-25 | refinedweb | 721 | 55.37 |
Opened 4 years ago
Closed 4 years ago
#12149 closed (invalid)
pre_save is not called before the overridden save() method on a model
Description (last modified by kmtracey)
If I have a model where I override the save() method, then the pre_save signal is not sent before the save method is called.
Example:

```python
class MyModel(models.Model):
    name = models.CharField(max_length=20)

    def save(self, force_insert=False, force_update=False):
        if self.name == "dont_save":
            return
        super(MyModel, self).save(force_insert, force_update)
```

```python
def presave_handler(sender, instance, **kwargs):
    instance.name = "dont_save"

signals.pre_save.connect(presave_handler, sender=MyModel, dispatch_uid="abc")
```
In the above case, the flow goes like this
- call overridden save method
- check the condition in save method (condition is false)
- call super
- call pre_save
- set name to "dont_save"
- object saved to database with name = "dont_save"
This is rather unintuitive that the pre_save gets called in the middle of the save method. Also, any processing done in the pre_save cannot be handled in the save method as the flow has gone to the super class by then.
The expected flow should be like this
- call overridden save method
- call pre_save
- set name to "dont_save"
- execution enters save method
- check condition in overridden save method (condition is true)
- return without saving
Attachments (0)
Change History (3)
comment:1 follow-up: ↓ 2 Changed 4 years ago by kmtracey

(Fixed formatting. Note you've got to put a space before the 1. in your lists to get them to format properly.)

The signals doc:

notes that if you override save() you must call the parent class method in order for the signals to be sent. That makes it pretty clear the parent class code is what is going to send the signals. It isn't clear to me how you expect a signal to get sent at step 2 here:

- call overridden save method
- call pre_save
- set name to "dont_save"

At that point execution is in your own code, how is Django supposed to cause a signal to be sent?
comment:2 in reply to: ↑ 1 Changed 4 years ago by siddhi
- Resolution invalid deleted
- Status changed from closed to reopened
It isn't clear to me how you expect a signal to get sent at step 2 here:
- call overridden save method
- call pre_save
- set name to "dont_save".
At that point execution is in your own code, how is Django supposed to cause a signal to be sent?
As currently implemented, the signal can't be sent. But it should. That is the bug :) Perhaps steps 1 and 2 should be exchanged in that flow.
The design for sending the signals needs to be changed. Maybe the metaclass should automatically decorate the save method to send the signal before and after, instead of sending it within the parent save(). The right design and alternatives can be discussed.
As it stands, you can't use the changes from the pre_save within the custom save method. By the time pre_save is called, execution is already out of the custom save and in the superclass. This is very illogical. You would expect a signal called pre_save to be triggered before the save.
comment:3 Changed 4 years ago by Alex
- Resolution set to invalid
- Status changed from reopened to closed
The pattern you want is simply not possible with code, you need to manually send the signal (or some custom signal).
First, try a standard command in Console tab in Layer Manager in GRASS GUI:
r.info map=elevation -g
We are running r.info with the option map set to elevation. Now, switch to the Python tab and type the same command, but in Python syntax:
grass.read_command('r.info', map='elevation', flags='g')
We used the function read_command() from the grass.script package, which is imported under the name grass in the Python tab in the GRASS GUI. There are also other functions besides read_command(), most notably run_command(), write_command() and parse_command(). The first parameter for functions from this group is the name of the GRASS module as a string. Other parameters are options of the module. Python keyword arguments syntax is used for the options. Flags can be passed in a parameter flags where the value of the parameter is a string containing all the flags we want to set. The general syntax is the following:
function_name('module.name', option1=value1, option2=..., flags='flagletters')
The function parameters are the same as the module options, so you can just use the standard module manual page to learn about the interface.
Most of the GRASS functionality is available through modules and all of them can be called using the functions above. However, in some cases, it is more advantageous to use specialized Python functions. This is the case for the mapcalc() function (a wrapper for the r.mapcalc module) and the list_strings() function (a wrapper for the g.list module).
To launch a Python script from GUI, use File -> Launch Python script.
```python
import grass.script as gscript

def main():
    input_raster = 'elevation'
    output_raster = 'high_areas'
    stats = gscript.parse_command('r.univar', map='elevation', flags='g')
    raster_mean = float(stats['mean'])
    raster_stddev = float(stats['stddev'])
    raster_high = raster_mean + raster_stddev
    gscript.mapcalc('{r} = {a} > {m}'.format(r=output_raster, a=input_raster,
                                             m=raster_high))

if __name__ == "__main__":
    main()
```
```python
import grass.script as gscript

def main():
    rasters = ['lsat7_2002_10', 'lsat7_2002_20', 'lsat7_2002_30', 'lsat7_2002_40']
    max_min = None
    for raster in rasters:
        stats = gscript.parse_command('r.univar', map=raster, flags='g')
        if max_min is None or max_min < stats['min']:
            max_min = stats['min']
    print max_min

if __name__ == "__main__":
    main()
```
```python
#!/usr/bin/env python

#%module
#% description: Adds the values of two rasters (A + B)
#% keyword: raster
#% keyword: algebra
#% keyword: sum
#%end
#%option G_OPT_R_INPUT
#% key: araster
#% description: Name of input raster A in an expression A + B
#%end
#%option G_OPT_R_INPUT
#% key: braster
#% description: Name of input raster B in an expression A + B
#%end
#%option G_OPT_R_OUTPUT
#%end

import sys
import grass.script as gscript

def main():
    options, flags = gscript.parser()
    araster = options['araster']
    braster = options['braster']
    output = options['output']
    gscript.mapcalc('{r} = {a} + {b}'.format(r=output, a=araster, b=braster))
    return 0

if __name__ == "__main__":
    sys.exit(main())
```
The options which have something like G_OPT_R_INPUT after the word option are called standard options. Their list is accessible in the GRASS GIS C API documentation of the STD_OPT enum from the gis.h file. Always use standard options if possible. They are not only easier to use but also ensure consistency across the modules and easier maintenance in case of updates to the parameter parsing system. Typically, you change description (and/or label), sometimes key and answer. There are also standard flags to be used with flag which work in the same way.
Examples of the syntax of options and flags (without the G_OPT... part) can be obtained from any GRASS module using the special --script flag. Alternatively, you can use the GRASS source code to look at how different scripts actually define and use their parameters.
Note that the previous code samples were missing some whitespace which the Python PEP8 style guide requires, but this last sample fulfills all the requirements. You should always use the pep8 tool to check your syntax and style, or set your editor to do it for you. Note also that although some mistakes in Python code can be discovered only when executing the code, due to the dynamic nature of Python, there is a large number of tools such as pep8 or pylint which can help you identify problems in your Python code.
Dec 11, 2006 05:32 AM|pdabak|LINK
Hello,
I have written a provider that is registered with MPS. While executing the method in the provider, I get the following error
<response><errorContext description="Resolution of execute statements failed from a procedure=main namespace=request executing procedure=MyMethod namespace=MyNamespace. Check Event Viewer for additional namespace load errors." code="0xc2201
My namespace is registered properly and is visible in the provisioning manager GUI.
I started filemon to see the file system activity and it is not even reaching to the point where it will load my provider's assembly.
What are the typical causes for the above error message?
Thanks.
-Prasad
Dec 11, 2006 09:01 AM|DGaikovoi|LINK
Any error messages in Event Viewer?
Did you register your provider as a COM component?
Dec 12, 2006 01:18 AM|pdabak|LINK
There are no error messages in the Event Viewer except the one that is the same as shown by provtest.exe.
Yes, I registered my provider as a COM component.
Interestingly, if I get rid of some other providers in the system, my provider starts working fine. My provider doesn't depend on any other providers and no other providers depend on my provider. This makes me believe that there may be some hardcoded limit on the number of providers that can be registered in the system. Any idea?
Thanks.
-Prasad
Dec 12, 2006 09:35 AM|DGaikovoi|LINK
Prasad,
When you said - "I get rid of some other providers in the system..." - do you mean custom made providers (like yours) or standard providers?
Did you use MPS SDK for your provider development?
Dec 18, 2006 02:41 AM|pdabak|LINK
Hello,
I got rid of a random set of standard providers. There is one more interesting observation.
When you add/remove providers, typically the following message is posted in the event viewer:
"Provisioning Engine Successfully updated the named procedure definitions from the configuration database"
We observe that, on the setups where we run into issues, the above message is not posted in the event viewer when we register our provider using ProvNamespace.exe. Later, when we get rid of some random set of standard providers, eventually the above message is posted in the event viewer and then our provider starts working fine.
We also notice that there is a registry value called HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Provisioning\Engine\Namespace Cache. We suspect that, when the above message is posted in the event viewer, it is actually updating this registry value, and namespace loading always happens using the cached list in the registry. This is why, until the above message is posted, our provider is not recognized.
So that leads to the question: under what circumstances may it happen that the namespace cache is not updated after registering the provider in the system?
Thanks.
-Prasad
Dec 18, 2006 11:07 AM|DGaikovoi|LINK
Prasad,
According to Provisioning Manager Help: "The provisioning engine checks the configuration database for namespace changes every five minutes or on restart of the provisioning engine COM+ application in Component Services." Could you check whether restarting the MPS Engine immediately after provider registration solves your issue?
Dec 19, 2006 08:29 AM|pdabak|LINK
Dec 19, 2006 09:39 AM|DGaikovoi|LINK
Hello,
Could you post your namespace registration XML and a sample request?
Differences between obsolete,discontinued,deprecated in the documentation
- Monday, October 27, 2008 9:27 PM
Papy Normand (Moderator)
Hello,
In the SQL Server 2008 documentation, I have found 4 types of changes:
- discontinued : the use automatically causes an error, as is the case for:
- sp_addalias
- sp_addgroup
- registered servers APIs ( replaced by a new API )
- sac.exe
- deprecated : no error when it is used (but in the next or a future version, it will be discontinued):
- DATABASEPROPERTY
- sp_dboption
- sp_attach_db
- sqlmaint.exe
- sys.database_principal_aliases
- the sp_configure option 'user instances enabled' (the explanation is not clear)
- for SMO :
- obsolete : I found it in one place
Why has MS used a really different term (obsolete), and what is its meaning compared to deprecated, discontinued, ...?
I hope that somebody will be able to find an explanation, as I have an application using registered servers with this Smo.RegisteredServers namespace (I have a solution with SmoApplication.EnumRegisteredServers() but it is less evident).
Thanks beforehand
Have a nice day
Answers
Discontinued - no longer in this release of SQL Server.
Deprecated - in this release, but no longer being developed or supported. Will be discontinued in the next release or two.
Obsolete - a better alternative is available so this is now deemed redundant. Will probably be discontinued in the next release.
- Friday, October 31, 2008 6:18 PM
Alan Brewer [MSFT] (Answerer)
The Smo.RegisteredServers topic is part of the managed reference doc set for that .NET Framework namespace. We use the same reflection-based tools to generate our mref docs as are used by the teams who build the .NET Framework SDK docs. When the developer marked the namespace as obsolete, that information is available to the writer and she could include it in the topic.
So the terminology in that topic is more aligned with reference material in the .NET Framework SDK because we're using the same authoring tools.
All Replies
- Tuesday, October 28, 2008 1:32 PM
Papy Normand (Moderator)
Hello gvee,
Many thanks.
For obsolete, it is amusing that this term is mainly used by the .Net Framework.
It seems that SMO is developed by people who are close to the .NET Framework.
Have a nice day
- Friday, October 31, 2008 8:15 PM
Papy Normand (Moderator)
Hello Alan,
I have posted on the SQL Server Documentation forum, but I was thinking that SMO is so "special" that it would be logical that I could not get an answer there.
Thanks for your clear answer, which confirms what I was thinking: SMO is related to SQL Server but, at the same time, it is in a "different world".
Just one last question: is this the right forum to report a problem in the SMO documentation? For example, an error (in my view) such as a function declared as Sub in VB (void in VC#) which should return a DataTable.
Anyway, thank you very much and have a nice day
- Saturday, November 01, 2008 6:32 PM
Papy Normand (Moderator)
Hello,
I have discovered why the namespace Microsoft.SqlServer.Management.Smo.RegisteredServers is "obsolete".
It is because it is replaced by a new namespace; this replacement is noticed in the new namespace, not in the obsolete namespace.
A little omission....
Have a nice day
- Monday, November 03, 2008 8:32 PM
Alan Brewer [MSFT] (Answerer)
The best way to report feedback for anything in SQL Server Books Online is to use the feedback UI in the topics.
If you are looking at the topic in a local copy of Books Online, the feedback UI generates an email that goes to a mailbox monitored by our feedback processor. The processor looks up the topic owner, then creates a doc bug with your feedback and assigns it to the writer. If the process cannot determine an owner for some reason, it puts the bug in our team triage queue and we manually assign it to the owner.
If you are looking at the topic in either the MSDN or TechNet library, the feedback comment is stored in a database. Our writers periodically scan that database for comments on the topics they own.
Alternatively, you could also open a Connect item on a doc issue, just as you do with any other issue you find in SQL Server. | http://social.msdn.microsoft.com/forums/en-US/sqldocumentation/thread/899f6b8c-165b-45ff-ad2b-6bd77e6dfdd6/ | crawl-002 | refinedweb | 691 | 52.09 |
Hi, I am helpless; I need to deserialize a JSON string in this format into a class.
JSON string
{"newNickInfo":{"2775040":{"idUser":2775040,"nick":"minci88","sefNick":"minci88","sex":2,"photon":"http:\/\/213.215.107.125\/fotky\/277\/50\/n_2775040.jpg?v=4","photos":"http:\/\/213.215.107.125\/fotky\/277\/50\/s_2775040.jpg?v=4","logged":false,"idChat":0,"roomName":"","updated":1289670130}}}
Class:
public class User
{
public string idUser{get;set;}
public string nick{get;set;}
public string sefNick{get;set;}
public string sex{get;set;}
public string photon{get;set;}
public string photos{get;set;}
public bool logged{get;set;}
public int idChat{get;set;}
public string roomName{get;set;}
public string updated{get;set;}
}
The first problem is that the class properties must be lower case, and the second problem is that I have tried many ways but I don't know how to cut the JSON object out of this JSON string and deserialize it into the class. Any ideas? Thanks for any help.
One more question: what format would the C# class have if I want to deserialize this JSON string into it?
Subject: [[email protected]: BOUNCE [email protected]: Non-member submission from ["William Kreamer"
From: Sam TH ([email protected])
Date: Wed Mar 14 2001 - 09:16:03 CST
sam th --- [email protected] ---
OpenPGP Key: CABD33FC ---
DeCSS:
Return-Path: <[email protected]>
Delivered-To: [email protected]
Received: from 954access.net (mail.954access.net [216.235.105.251])
by parsons.abisource.com (Postfix) with ESMTP id 1CD4B13B8AF
for <[email protected]>; Wed, 14 Mar 2001 08:29:43 -0600 (CST)
Received: from default [216.235.99.60] by 954access.net
(SMTPD32-5.05) id A093570B0114; Wed, 14 Mar 2001 09:30:43 -0500
Message-ID: <001201c0ac92$dba68560$3c63ebd8@default>
From: "William Kreamer" <[email protected]>
To: "Paul Filiault" <[email protected]>,
"Dan Stromberg" <[email protected]>
Cc: "Bernard_REVET" <[email protected]>,
"Kevin Vajk" <[email protected]>, <[email protected]>
References: <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]>
Subject: Re: libgal.so.4 ATTENTION WITH RED HAT 7 gcc 2.96 compiler
Date: Wed, 14 Mar 2001 09:27:07 -0500
intend to install Linux as a second OS in the near future, and I want
Abiword to be a cross-platform word processor.
From: "Paul Filiault" <[email protected]>
To: "Dan Stromberg" <[email protected]>
Cc: "Bernard_REVET" <[email protected]>; "Kevin Vajk" <[email protected]>;
<[email protected]>
Sent: Tuesday, March 13, 2001 20:10
Subject: Re: libgal.so.4 ATTENTION WITH RED HAT 7 gcc 2.96 compiler
have no
> major problems but would like to contact users that use it more heavily
than I.
>
> Dan Stromberg wrote:
>
> > On Tue, Mar 13, 2001 at 10:40:58AM -0500, Bernard_REVET wrote:
> > > Dear all
> > > If it was only Gnome for which you are going to have trouble would be
fine.
> > > The worst are the compilers which come with Red Hat 7 even the
upgrade
> > > versions . Being so in a bleeding edge gcc 2.96 is just not stable
and should
> > > not be used if you do not want to be with nighmares while compiling .
> > > Just replace it with 2.95.2.1
> > > For example you can get it from
> > >
> > >
> > > or wait for 3.0....
> > >
> > > Look at
> > >
> > >
> > > for extra information
> >
> > Actually if you patch your 2.96 it's a better compiler than 2.95.1.
> >
> > Practically everything that fails to build with 2.96, fails to build
> > because the code wasn't standards conformant, and the older gcc's
> > passed nonconformant code. Typically you just have to define one or
> > two preprocessor symbols to bring back the heavily populated
> > namespaces ("overpopulated", according to the standards), and you're
> > ok, but there are other places where standards compliance was improved
> > as well.
> >
> > And of course if you just hate progress, you can always use the "kgcc"
> > that comes with redhat 7.
> >
> > > I mentioned this point to Red Hat but did not get any answer
> >
> > They're probably tired of hearing it. How do you respond to hoardes
> > of people insisting that an improvement isn't an improvement? I'd
> > probably consider ignoring it too. Perhaps I should have this time
> > (I'll know I should have if this turns into a battle).
> >
> > BTW, the mesa modes in xscreensaver run MUCH faster on redhat 7 than
> > they did on 6.2. I haven't figured out why yet. It could even be
> > because of the improvements in the compiler's output, but perhaps it's
> > more to do with using XFree86 4.x with an 3d video card.
> >
> > --
> > Dan Stromberg UCI/NACS/DCS
> >
>
------------------------------------------------------------------------
> > Part 1.2Type: application/pgp-signature
>
>
> -----------------------------------------------
> | https://www.abisource.com/mailinglists/abiword-user/01/March/0071.html | CC-MAIN-2021-49 | refinedweb | 631 | 61.02 |
/* * __GNUC__ >= 3 ; ; This function was generated by disassembling the 'OSObject::free(void)' ; function of the Panther7B7 kernel in gdb. ; ; Then add the 'li r4,3' flag taken fropm the Puma kernel 'OSObject::free' ; .text .align 5 .globl __ZN8OSObject4freeEv __ZN8OSObject4freeEv: ; function prologue stw r31,-4(r1) mflr r0 stw r0,8(r1) mr r31,r3 stwu r1,-80(r1) ; const OSMetaClass *meta = getMetaClass(); lwz r9,0(r3) lwz r12,32(r9) mtctr r12 bctrl ; if (meta) ; meta->instanceDestructed(); cmpwi r3,0 beq delete_this bl __ZNK11OSMetaClass18instanceDestructedEv delete_this: ; delete this; lwz r9,0(r31) mr r3,r31 li r4,0 ; Load up some sort of flags, for 2.95 destructors? lwz r0,88(r1) addi r1,r1,80 lwz r12,8(r9) mtlr r0 lwz r31,-4(r1) mtctr r12 bctr #endif /* __GNUC__ >= 3 */ | http://opensource.apple.com/source/xnu/xnu-1504.9.17/libkern/c++/OSObjectAsm.s | CC-MAIN-2016-26 | refinedweb | 131 | 54.26 |
Hey guys,
Getting back into Java. Never really worked much with Objects, new types, and the sort. What I'm trying to do is make a recipe book program that will help me sort my recipes. What I have so far is just up to entering a recipe. My problem is that I'm not quite sure how to create a new type. What I've done here is create a class Recipe and have the constructor have some strings you need. Then when I create the array for the recipes, I find the pieces of info I need using a scanner, then set the current piece of array with those pieces of info.
Here is the main class:
Code Java:
import java.util.Scanner;[]) { String input; if(hasRunBefore == false) {() { String input; System.out.print("\nWhat would you like to do?"); input = scan.nextLine(); switch(input.toLowerCase()) { case "tutorial": Tutorial(); RunCommand(); case "recipe": EnterRecipe(); } } public static void EnterRecipe() { String name; String tag; String author; String recipe; System.out.print("Name?"); name = scan.nextLine(); System.out.print("Tag?"); tag = scan.nextLine(); System.out.print("Author?"); author = scan.nextLine(); System.out.print("Recipe?"); recipe = scan.nextLine(); recipes[currentRecipeNumber] = new Recipe(name, tag, author, recipe); System.out.print(recipes[currentRecipeNumber].name); /* This is a debug line while I figure out what's wrong */ } }
And here is my Recipe class:
Code Java:
I am probably missing something, or doing something wrong. This is my first time working with stuff like this. Can someone help me?
Thanks,
-Silent
P.S: In the main class, the .name in the last line (System.out.print(recipes[currentRecipeNumber].name);) has an error. It says it can't find it. So I just need to know how to create a type so that it can be created and referenced. | http://www.javaprogrammingforums.com/%20object-oriented-programming/24699-recipe-book-arrays-objects-types-printingthethread.html | CC-MAIN-2015-18 | refinedweb | 300 | 67.55 |
13 August 2010 11:41 [Source: ICIS news]
SINGAPORE (ICIS)--Wilmar International Ltd's second-quarter net profit declined 15.4% year on year to $344.5m (€268.7m) as margins were lower and its convertible bonds had a negative change in valuation of $41.7m, the Singapore-listed crude palm oil (CPO) producer said on Friday.
Sales for the period jumped 18.3% to $6.76bn as a result of strong sales volumes and higher prices, with net profit excluding non-operating items growing 12.8% to $380.3m, the company said in a statement.
The company, which has palm plantations in ?xml:namespace>
Wilmar's plantations and palm oil mills segment, however, saw a 24% year-on-year fall in pre-tax profit to $76.6m due to lower CPO prices and higher production costs in the June quarter, it said.
“Yield dropped 10% to 4.51 tonnes per hectare in the second quarter as a result of lower yield of newly matured hectarage and wet weather in most parts of
For the first half of the year, Wilmar had a 5.2% decline in net profit to $745.9m even as sales surged 27% to $13.52bn, it said.
Net profit excluding non-operating items for the six-month period was up 8.8% at $772.1m, it said.
“The group is positive on the prospects of Asian economies, especially
Wilmar said it was planning a major expansion into sugar to boost its long-term profitability, with the proposed acquisition of Sucrogen Ltd and the development of sugar production in
In a separate statement, the company said it is in discussions to take a minority stake in fellow CPO producer Kencana Agri Ltd | http://www.icis.com/Articles/2010/08/13/9384791/wilmar-q2-net-profit-falls-15-to-344.5m-despite-sales-up-18.html | CC-MAIN-2014-15 | refinedweb | 286 | 67.55 |
.script.el.parser;19 20 /**21 * Describes the input token stream.22 */23 24 public class Token {25 26 /**27 * An integer that describes the kind of this token. This numbering28 * system is determined by JavaCCParser, and a table of these numbers is29 * stored in the file ...Constants.java.30 */31 public int kind;32 33 /**34 * beginLine and beginColumn describe the position of the first character35 * of this token; endLine and endColumn describe the position of the36 * last character of this token.37 */38 public int beginLine, beginColumn, endLine, endColumn;39 40 /**41 * The string image of the token.42 */43 public String image;44 45 /**46 * A reference to the next regular (non-special) token from the input47 * stream. If this is the last token from the input stream, or if the48 * token manager has not read tokens beyond this one, this field is49 * set to null. This is true only if this token is also a regular50 * token. Otherwise, see below for a description of the contents of51 * this field.52 */53 public Token next;54 55 /**56 * This field is used to access special tokens that occur prior to this57 * token, but after the immediately preceding regular (non-special) token.58 * If there are no such special tokens, this field is set to null.59 * When there are more than one such special token, this field refers60 * to the last of these special tokens, which in turn refers to the next61 * previous special token through its specialToken field, and so on62 * until the first special token (whose specialToken field is null).63 * The next fields of special tokens refer to other special tokens that64 * immediately follow it (without an intervening regular token). If there65 * is no such token, this field is null.66 */67 public Token specialToken;68 69 /**70 * Returns the image.71 */72 public String toString() {73 return image;74 }75 76 /**77 * Returns a new Token object, by default. However, if you want, you78 * can create and return subclass objects based on the value of ofKind.79 * Simply add the cases to the switch for all those special cases.80 * For example, if you have a subclass of Token called IDToken that81 * you want to create if ofKind is ID, simlpy add something like :82 * <p/>83 * case MyParserConstants.ID : return new IDToken();84 * <p/>85 * to the following switch statement. Then you can cast matchedToken86 * variable to the appropriate type and use it in your lexical actions.87 */88 public static final Token newToken(int ofKind) {89 switch(ofKind) {90 default :91 return new Token();92 }93 }94 95 }96
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/apache/beehive/netui/script/el/parser/Token.java.htm | CC-MAIN-2018-26 | refinedweb | 453 | 61.36 |
I think system call 62 in Riscv32 Linux is _llseek:
It appears to have 5 arguments of 32 bits each (x12-x16).
The library appears to use only lseek and _lseek, expecting them to be the same. The wrapper for both could be:
long lseek(int fd, long offs, int whence) { long ret[2]; _llseek(fd, 0, offs, &ret[0], whence); return ret[0]; }
#include <sys/types.h> #include <unistd.h> #include <errno.h> #include <stdio.h> #include <fcntl.h> struct loff { long offset0; long offset1; }; typedef struct loff loff_t; extern int _llseek(unsigned int fd, unsigned long offset_high, unsigned long offset_low, loff_t *result, unsigned int whence); main() { int fd; char buf[32]; int r, err; loff_t result; fd = open("test.txt",O_RDONLY); errno = 0; r = _llseek(fd, 0, 8, &result, SEEK_SET); err = errno; printf("Result: %d\n",r); printf("errno: %d\n",err); printf("Offset 0: %d\n", result.offset0); printf("Offset 1: %d\n", result.offset1); read(fd,buf,2); printf("Second row:\n"); write(1,buf,2); return 0; }
root@buildroot:~# as -o as.o as.s Assembling module 'as.s'... root@buildroot:~# ./a.out -o as1.o as.s Assembling module 'as.s'... root@buildroot:~# diff as.o as1.o
root@buildroot:~# time ./lccas -o as1.o as.s Assembling module 'as.s'... real 0m 2.28s user 0m 2.11s sys 0m 0.17s root@buildroot:~# time as -o as1.o as.s Assembling module 'as.s'... real 0m 3.01s user 0m 2.80s sys 0m 0.17s
@lawrie: that is a great result! What was the issue with the fail on fputc? In any case,
as is a non-trivial 3,000 line C program. That it compiles and works correctly gives a lot of confidence in what we have now. Maybe the gcc version will be faster if compiled with -O2.
@emard: I think the game code needs raw input as well as non-blocking. What the current LCC lib offers is ioctl() and you can use it to achieve both, I think.
interesting ISSI vs. winbond flash differences in this chat, I'd like to confirm that my stuff (doesn't) work on the ISSI but I only have a blue winbond board here
@emard do you know if the ISSI parts on the ULX are preconfigured to have the QE enabled by default, if that's possible? I know for the winbond ones you can order them that way but otherwise the QE configuration must be done before trying quad cmds
make ulx3s_prog Makefile:176: icestation-32/software/common/core.mk: No such file or directory make: *** No rule to make target 'icestation-32/software/common/core.mk'. Stop.
fujprog -j flash -f 0x200000 prog.bin fujprog ulx3s.svf
yeah probably is, thanks for confirming. the sound effects are pretty loud when just plugging in headphones so it is noticable
My plan is to use the standard JEDEC ID cmd in the boot code and do any vendor specific stuff when I know which vendor flash is on board, should be no problem
#include <stdio.h> #include <unistd.h> #include <errno.h> #define KDGETMODE 0x4B3B #define KDSETMODE 0x4B3A #define FIONBIO 21537 int main() { int r; int mode; char buf[2]; buf[1] = 0; mode = 0; r = ioctl(0, FIONBIO, &mode); printf("Result: %d\n", r); printf("errno: %d\n", errno); do { r = read(0, buf, 1); if (r > 0) printf("key is %d\n", buf[0]); } while (r <= 0); printf("r is %d\n", r); mode = 0; r = ioctl(0, FIONBIO, &mode); return 0; } | https://gitter.im/ulx3s/Lobby?at=5f69b77d8fe6f11963536401 | CC-MAIN-2021-31 | refinedweb | 594 | 77.03 |
UFDC Home | Help | RSS TABLE OF CONTENTS HIDE Copyright Front Cover Prelude Sweet corn production guide and... Planting dates, spacing and seeding,... Suckering, chemical weed control,... Precautions and nematode contr... Back Cover Group Title: Circular - Florida Cooperative Extension Service - 100 C Title: Bean (bush and pole) production guide CITATION PAGE IMAGE ZOOMABLE PAGE TEXT Full Citation STANDARD VIEW MARC VIEW Permanent Link: Material Information Title: Bean (bush and pole) production guide Series Title: Circular (Florida Cooperative Extension Service) Alternate Title: Bean production guide Physical Description: 11 p. : ; 23 x 10 cm. Language: English Creator: Montelaro, JKostewicz, S. RFlorida Cooperative Extension ServiceUniversity of Florida -- Institute of Food and Agricultural Sciences Publisher: Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, University of Florida Place of Publication: Gainesville Fla Publication Date: 1974 Subjects Subject: Beans ( lcsh ) Genre: government publication (state, provincial, terriorial, dependent) ( marcgt )non-fiction ( marcgt ) Notes Statement of Responsibility: prepared by James Montelaro and Stephen R. Kostewicz. General Note: Cover title. General Note: "3-7.5M-74." Funding: Florida Historical Agriculture and Rural Life Record Information Bibliographic ID: UF00067 - 51249003 Table of Contents Copyright Copyright Front Cover Page 1 Prelude Page 2 Sweet corn production guide and varieties Page 3 Planting dates, spacing and seeding, and seed treatment Page 4 Suckering, chemical weed control, and fertilization Page 5 Page 6 Page 7 Page 8 Precautions and nematode control Page 9 Page 10 Page 11 Back Cover FI / C 1) 0,3 0 CIRCULAR 100 C ~II:ii ;: ~ A' *,,. FLORIDA COOPERATIVE EXTENSION SERVICE STITUTE OF FOOD AND AGRICULTURAL SCIENCES UNIVERSITY OF FLORIDA, GAINESVILLE BEAN (BUSH & POLE) PRODUCTION GUIDE (Revision of Circular 100B) This guide presents general recommendations for the production of bush and pole beans in Flor. ida. Modification may be necessary as improve( practices are developed through research and ex- perience. For details on local application of these prac tices, see your County Agricultural Extension Agent. Other publications on bush and pole bear production are: (1) "Commercial Vegetable Insect and Diseas Control Guide," Extension Circular 193. (2) "Chemical Weed Control for Florida Veg etable Crops," Extension Circular 196. (3) "Commercial Vegetab e Fertilizatio Guide," Extension Circular 225. (4) "Vegetable Variety Trial Results for 1969 1970-1971 and Recommended Varieties," Fla. Ex: Sta. Circular S-223. NOTE: Since these publications are revise from time to time, be sure to get the latest ed itions. ACREAGE HARVESTED* (1971-72 Season) County Bush Beans Pole Beans** Tot Gadsden 950 9 Alachua 1,060 1,0 Hillsborough 850 8 Palm Beach 16,350 16,3 Broward 8,950 8,9. Dade 2,950 3,600 6,5 All Others 1,390 1,3 TOTAL 32,500 3,600 36,1 *From "Florida Agricultural Statistics. Vegetable Summary 1972." **Pole beans are also grown on limited acreage in Gadsden, Hil borough and other counties. 
YIELD, COSTS AND RETURNS* (1971-72 Season, Range Per Acre) Palm Beach-Broward Dade County (Pole Item Range: from to Range: from Yield (bu/acre) 33 87 246 329 Total growing cost $ 180.67 $313.74 $ 499.36 $ 790 Total harvest/market cost $ 51.97 $153.59 $ 401.29 $ 519 Total crop cost $ 260.62 $467.33 $ 986.75 $1,295 Crop sales $ 148.50 $435.00 $1,181.89 $1,543 Net return (or loss) ($-131.62) $106.78 $ 174.43 $ 534 *From University of Florida, Ag. Econ. Report 44. (Note: Ran from low to high are for each item and are not additive in t umns.) SWEET CORN PRODUCTION GUIDE The purpose of this guide is to present general recommendations for the production of sweet corn in Florida. For details on local application of these practices, see your county Extension agent. Additional information on sweet corn production can be found in the following publications: Uni- versity of Florida, Extension Circulars 193, 196, 225 and Experiment Station Bulletins 596 and 714. Since Extension circulars are revised from time to time, be sure to obtain latest editions. ACREAGE AND HARVEST PERIODS* (1968-69 Season) Usual Areas in Florida Harvest Period Acreage Harvested North & West Florida May-June 1,610 North Central Florida May-July 8,300 West Central Florida April-June 290 Everglades Oct.-June 32,870 South Florida Dec.-May 13,530 State Total 56,600 *From Florida Agricultural Statistics, Vegetable Sum- mary 1969. YIELDS, COSTS AND RETURNS* (5-Season Average 1963-64 to 1968-69) Lower East Everglades Coast Zellwood (Muck) (Sand) (Muck) Crates (5 doz.)/A 156 149 231 Production Costs/A $202.63 $252.27 $212.79 Harvest-Market Costs/A 176.71 168.87 261.26 Total Costs/A 379.34 421.14 474.05 Sales F.O.B. 370.68 462.01 514.61 Profit or Loss/A -8.66 +40.87 +40.56 *From University of Florida Agricultural Economics Report 2, by D. L. Brooke. VARIETIES Many sweet corn hybrids are released each year. Trials to screen sweet corn varieties for performance under Florida conditions are con- ducted each year by research workers of the Flor- ida Agricultural Experiment Stations. Varieties listed here are those that performed well in one or more locations in Florida. Listing of these varieties only is not meant to imply that other varieties may not be suitable to Florida, also. Carmelcross and Northern Belle.-Plant small, early maturity. Ear yellow and only of fair ap- pearance. Susceptible to leaf blights. Iobelle (Fla. 104).-Medium size plant of mid- season maturity. Ear attractive, pale yellow. Planted for all season production. Susceptible to leaf blights. Wintergreen.-Medium plant of midseason ma- turity. Ear yellow, fair appearance. Tolerant to leaf blights. Florigold 106, 106A and 107.-Medium size plants of midseason maturity. Ears yellow, very good appearance but with poor husk cover under some conditions. Tolerant to leaf blights. Golden Belle (3373).-Tall plant of midseason maturity. Ear yellow. Susceptible to leaf blights. Suitable for fall production in the organic soils of the Everglades. Gold Cup.-Medium size plant of midseason maturity. Yellow. Fair quality. Susceptible to leaf blights. More suitable for production in Cen- tral Florida. Silver Queen.-Tall plant of midseason ma- turity. White. Excellent quality. Susceptible to leaf blights. PLANTING DATES Dates Days to Maturity North Florida: March-April Central Florida: February-March 75 to 100 South Florida: August-March SPACING AND SEEDING Planting Distances Depths of Seed/Acre(1) Seeding Between rows: 28" to 42" 1" to 2" 5 to 12 lbs. 
Between plants: 8" to 15" () Rates can be reduced by 50% or more with precision seeding. SEED TREATMENT For seed that is purchased untreated, treat with the following fungicide: (1) Thiram (50%)-2 ounces per 100 lbs. seed SUCKERING Neither yield nor earliness is improved by re- moving suckers which are normally produced at the base of the plant. CHEMICAL WEED CONTROL The amounts listed here are for overall appli- cation. When band treatment is used, reduce the amount proportionately. Chemical weed control may not prove to be very effective on the very light sandy soils that are subject to severe shift- ing of the surface soil from wind or rain. HERBICIDES -RATES AND USE Amt./Acre (Active Ingredient) Herbicide2 Sandy Soils Muck Soils Pre-emergence to Crop CDAA (Randox)" 5 lbs. CDEC (Vegadex)' 6 lbs. 4 to 6 lbs. CDAA + CDEC' 6 lbs. 4 to 6 lbs. Propachlor (Ramrod) 4 to 5 lbs. (4 to 5) lbs.1 Atrazine (AAtrex)' 1 to 2 lbs. (3 to 4) lbs.1 Simazine (Princep)' 1 to 2 lbs. - Post-emergence to Crop' Atrazine' 1 lb. % to 1/ lbs. Atrazine & Oil8P, 1 lb. + 1 gal. 1 lb. + 1 gal. 2,4-D %/ to % lb. % to % lb. 1 Rates given in parenthesis ( ) are suggested for trial purposes only. 2 All treatments are "preemergence" to weeds unless stated otherwise. 3 CDAA is more effective against grasses than broad- leaf weeds. 4 CDEC is more effective against broadleaf weeds than grasses. 5 Combine CDAA + CDEC for mixed broadleaf weed and grass populations. 6 Good surface soil moisture is necessary for best re- sults. 7 Apply directionally to base of plants. 8 Will kill weeds up to 1% inches tall. 9 Use a non-phytotoxic crop oil plus emulsifier and ap- ply in 40 gallons of water. FERTILIZATION Placement.-Fertilizer applied at seeding or during early stages of growth should be placed in bands two to three inches to each side, at or slightly below the level of the seed or the grow- ing tips of roots. Timing.-The basic application of fertilizer may be applied before planting, during planting, short- ly after planting, or in split applications combin- ing any two or all three of these. Supplemental fertilizer may be applied whenever needed during the growing season and especially after heavy, leaching rains. Liming.-Optimum pH range of sweet corn production is between 5.8 and 6.2. If soil tests show that lime is needed, apply and mix it in well with the soil two to three months before planting. Minor Elements.-Soils with pH 6.2 or above may be deficient in some of the minor elements. Zinc and manganese can be supplied by fungi- cides, which are usually needed for disease con- trol. Zinc and manganese as well as the other minor elements may also be applied to the soil in fertilizer or to the foliage as sprays or dusts. FERTILIZERS -RATES AND USE Basic Supplemental Application Applications Soil Actual Lbs./ Actual Lbs. Acre Each No. of Application Applica- N-P205-KO N-PO,-KO tions Mineral soils' 90-120-120 30-0-15 1 to 4 (irrigated) Mineral soils2 72- 96- 96 30-0-15 1 to 2 (unirrigated) Muck and peat 0-160-180 (3) (3) Marl 54- 72- 72 30-0-15 1 to 2 Rockland 45- 60- 60 30-0-30 1 to 2 1 Includes all mineral soils (except marl and rockland) having a dependable supply of moisture. 2 Includes all mineral soils (except marl and rockland) not having a dependable supply of moisture. 3 The amount of fertilizer suggested here is the amount needed for organic soils low in P2Os and K.O. 
When soil tests show a medium level of PAOs in an organic soil reduce the amount of P205 suggested here by one- third; when soil PsOs levels are high, reduce by two- thirds. Follow the same suggestions for medium and high levels of KIO. The amount of fertilizer suggested here is sufficient to grow these crops under normal conditions. Most crops will respond to supplemental applications of nitrate-nitro- gen during periods of cool weather or following heavy rainfall. On new peat soils, make a broadcast application of 15 pounds of .CuO, 10 pounds of MnO, and 4 pounds of B2Os per acre before any crop is planted. I)O U-o rn V 0 0 t=1- 0 a (D ' (D1 Va USn tc"^ acp (DO (D ^ niV (D3 0 (D;' rny _si a a 5*0 , CD <- 0 |. -o 7 an SCfli an wn Ui s-s- Min. Days Insects Sprays Dust To Harvest Aphids1 Parathion 4E, 1/ pt. Parathion 1-2% 3 Spider Mites Phosdrin 2E, 1 pt. Phosdrin 1 % 1 Fall Armyworms2 Gardona 75% WP, 1-11/ Ibs. ** and Lannate 90% SP, % lb. ** Corn Earworms Parathion 4E, 1/ pt. plus ** feeding in bud Toxaphene 8E, 1%/ pts. Parathion-methyl parathion 6-3E 1/2 pt. Silk-Fly" Parathion 4E, % pt. 3 Earworms' Gardona 75% WP, 2/-1 lb.* NTL Lannate 90% SP, lb. NTL Parathion 4E, pt. plus DDT 2E, 4 qts.* DDT 5%-Parathion 1% 3 Parathion-methyl parathion 6-3E, pt.* Parathion 2% 3 Sevin 80% WP, 2 lbs.* Sevin 10% NTL Corn Stem WeevilP Lannate 90% SP, lb. ** Gardona 75% WP, 11/ lbs. ** DDT 2E, 4 qts. Cutworms See footnote No. 6 below. Wireworms See footnote No. 7 below. Lesser Cornstalk See footnote No. 8 below. Borer *These amounts should be mixed in 50 gallons of water and applied to one acre. **These usages should not result in a residue problem on the edible ears. first appear and continued until all the silks are dry or brown. Additional applications may be needed where re- newed silk growth occurs after normal browning. Appli- cations of one of the recommended insecticides will give control when applied at 48-hour intervals to sweet corn silking during October through March. During the rest of the year, apply one of the recommended insecticides every 24 hours. The amounts of insecticides recommended in the preceding table should be mixed with 50 gallons of water and applied to one acre. The dust must be ap- plied at 25 to 30 pounds per acre. Dusts or sprays should be directed to thoroughly cover the silks. (5) Corn Stem Weevil.-Treatments must be started when the first seedlings emerge from the soil and con- tinued every four days or two times a week for six ap- plications or until the corn stem is %1-inch or more in diameter. Sprays must be directed to the lower stem and the adjoining soil. Use overhead nozzles to apply 50 gal- lons per acre until the foliage begins to form a canopy that prevents the spray from reaching the ground and lower stem. Then, add a nozzle to each side of the row and increase the rate to 100 gallons per acre. Cultivation should be as infrequent as possible during this spray schedule and should precede a spray application. Pre-emergence chemical weed control (See Extension Circular 196C) and delay of the first cultivation until after the final corn stem weevil spray has resulted in better corn stem weevil control. The corn stem weevil has been recognized as a pest only in the Everglades. (6) Cutworms.-Apply toxaphene, or chlordane at 2 pounds active ingredient (5 pounds of 40% WP or 20 pounds of 10% dust or granules) per acre to the soil surface before planting if cutworms are known to be present. Do not disturb soil for three to five days. 
A 2%% toxaphene or 2% chlordane or 5% Dylox bait can be used as above at 20 to 40 pounds per acre. If cutworm damage to young plants is noted, baits should be used at once. Regular applications of insecticides, including para- thion, toxaphene, etc., for control of foliage insects will prevent the establishment of cutworms. A home-made bait can be prepared by thoroughly mix- ing 5 pounds of 40% chlordane WP or 6 pounds of 40% toxaphene WP with 100 pounds of wheat bran. Moisten bait slightly with water and apply in late afternoon. Use freshly mixed baits. (7) Wireworms.-Apply parathion or diazinon at 2 pounds active ingredient per acre on mineral soils; on organic soils apply parathion at 5 pounds or diazinon at 4 pounds active ingredient per acre. Distribute evenly over the soil surface 2 to 3 weeks before planting and immediately mix into the upper 6 inches of soil. (8) Lesser Cornstalk Borer.-In the Everglades area apply parathion, using a wetting agent or detergent in the spray water to help wet the soil and the webbing. Make first application broadcast (covering rows and mid- dles) just before crop emerges, using 1 pint of parathion 4E per acre. Make second application as soon as crop emerges and before cultivation, using 1 pint of parathion 4E per acre. Higher gallonage (up to 300 per acre) of more dilute coarse sprays at about 100 pounds pressure may be more effective. The lesser cornstalk borer is an erratic pest with out- breaks during dry periods; doubtful that routine control measures would be profitable. PRECAUTIONS Read pesticide labels thoroughly before opening container, and observe all safety precautions. Dis- pose of empty containers promptly and safely. Information is given on recommended pesticides and minimum days between last application and harvest. There will be changes and cancellations; therefore, the grower is urged to keep abreast of developments through county agents, experiment stations, industry, etc. DISEASE CONTROL Min. Days Disease Spray To Harvest Helminthosporium Maneb 80% 11/ lbs., or NTL* Leaf Blights- Polyram 80% 2 lbs., or 1* ** (Helminthosporilm Zineb 75% 2 lbs., or NTL*** maydis and Dithane M-45 80%, 7** Hclminthosporium 11 lbs., or turcicum) Manzate 200, 80%, 7** 1./ lbs. *Kqrnel and cob Do not feed treated forage to livestock. **Use restricted to Florida. ***Do not feed forage to dairy animals, or animals being finished for slaughter. Any of the materials properly applied once or twice weekly, depending on weather, disease con- ditions and locations, will give economic control. For crops in the "whorl stage" of growth, the sprayer should have two nozzles over the row in addition to the side nozzles required for complete coverage of unfurled leaves. Application of fun- gicide should cease 10 days before harvest unless younger corn is growing nearby. Maneb, Man- zate 200, Dithane M-45 and Zineb should also give satisfactory control of corn rust. Where it is practicable use Helminthosporium resistant varieties. Bacterial Leaf Blight (Pseudomonas alboprecipitans). No chemical control. NEMATODE CONTROL Sweet corn is subject to injury from nem- atodes. Planting in soils known to be heavily infested with nematodes should be avoided when- ever possible. Fallowing, flooding, pangolagrass and other crop rotations are practices which can be used to help control nematodes. When it is necessary to use nematode-infested soils, they should be fumigated with one of the materials suggested in the following table. 
NEMATICIDES-RATES AND USE Broadcast',2 Row Pints/chisel Gal/Acre Pints/chisel Materials Gal/Acre per 1000 ft. 36" Row3 per 1000 ft.' Remarks D-D, Vidden D Telone Ethylene Dibromide (EDB-85) Vorlex' For broadcast application, set shanks 12" apart. Inject at a depth of 8" 10". Fumigate only when soil is moist. Avoid undecomposed trash in field. Wait at least 3 weeks before planting. For Broadcasting application, set shanks 8" apart. Inject at a depth of 6" 8". For organic (peat and muck) soils, rates should be increased 50-75%. Broadcast rate per acre on 12-inch chisel spacing, except Vorlex which requires an 8-inch chisel spacing. These gallonages are given as a guide to determine total amount of chemical needed for a field. Closer row spacing will require more chemical per acre; wider spacing less. Amount per single chisel per row regardless of row spacing. Vorlex at rates of 25-40 gal/A (mineral soils) and 40-60 gal/A (organic soils) is effective against nematodes, soil diseases, soil insects and weeds. HARVESTING Quality of sweet corn st mediately after harvest. Ti vest and pre-cooling (remo should be kept to a minimum to Sweet corn should be harvested a stage of maturity, cooled to 32 to 400 F. an maintained in that temperature range until con- sumed. The use of trade names in this publication is solely for the purpose of providing specific information. It is not a guarantee or warranty of the products named and does not signify that they are recommended to the exclusion of others of suitable composition. Prepared by James Montelaro and M. E. Marvel in cooperation with personnel of the Institute of Food and Agricultural Sci- ences, University of Florida. Special thanks go to J. E. Brogdon, D. W. Dickson and R. S. Mullin for valuable assistance received from them in the preparation of this publication. Revised January 1971 COOPERATIVE EXTENSION WORK IN AGRICULTURE AND HOME ECONOMICS (Acts of May 8 and June 30,1914) Cooperative Extension Service, IFAS, Unversity | http://ufdc.ufl.edu/UF00067888/00001 | CC-MAIN-2018-26 | refinedweb | 3,235 | 66.44 |
create installer - Development process
please send me how to create .rpm for my java project...create installer "I have create installer (.exe) for my java project with the help of izpack. Now i want to create .rpm installer for my java
Open Source Installer
Open Source Installer
Open source installer tool
NSIS (Null... point for your own work.
Litestep Open Source Installer...;
Open source, multiplatform installer and builder
InstallJammer
Open Source GPS
Open Source GPS
Open
Source GPSToolKit
The goal of the GPSTk project is to provide a world class, open source computing suite to the satellite....
Open Source GPS Software
Working with GPS
Open Source Metaverses
Open Source Metaverses
OpenSource Metaverse Project
The OpenSource Metaverse Project provides an open source metaverse engine along the lines... of an emerging concept in massively multiplayer online game circles: the open-source
creation of installer - Java Magazine
creation of installer plz tell me how can be create installer for any developed application in java? visit the following url izpack.org..
it will helps u software
Open-source software Hi,
What is Open-source software? Tell me the most popular open source software name?
Thanks
Hi,
Open-source software is also know as OSS. Open-source software are those software that comes
Open Source Content Management
money pit?" The article prodded me to learn more about open source CM
error please send me the solution
error please send me the solution HTTP Status 500 -
type Exception...
java.sql.DriverManager.getConnection(Unknown Source)
java.sql.DriverManager.getConnection(Unknown Source)
DisplayServlet.doPost(DisplayServlet.java:56
Open Source Groupware
Open Source Groupware
Open
Groupware
Open Group... Software
This list is focused on open source software projects relating... see the Free Software Foundation and Open Source Intiative for definitions of Free
Open Source Jobs
Open Source Jobs
Open Source Professionals
Search firm specializing in the placement of Linux / Open Source professionals, providing both... / Open Source landscape and employment marketplace make us your most efficient
Open Source VoIP
the Asterisk open source PBX phone system has given me some hope that we?re returning...Open Source VoIP/TelePhony
Open source VoIP/Telephony
One of the first open source VoIP projects -and one of the earliest VoIP PBXes
Open Source Version Control
CVS:
Open source version control
CVS...;
Two Open-Source Version Control Programs Spring
Critical flaws have been found in two open-source applications: Concurrent Versions System (CVS
Open Source JMS
Open Source JMS
Open Source JMS
OpenJMS is an open source... detection
.
Open Source JMS Implementations
JMS4Spread.... JMS4Spread is being actively developed as a open source project at the Center
Open Source Community
developers tell me over and over again that now there is no myth of open source..
Open Source CD
TheOpenCD
TheOpenCD is a collection of high quality Free and Open Source Software. The programs run in Windows and cover the most... CD is a collection of over 100 Free/Open Source software for home and business
Open Source Servers
Open Source Servers
Open Source Streaming Server
the open source...; these tools are not available as part of the open source project. Technical..., and SUSE Linux Enterprise Server, an award-winning open-source server for delivering
Open Source Testing
Open Source Testing
Open Source Security Testing Methodology Manual
The Open Source Security Testing Methodology Manual (OSSTMM) is a peer-reviewed... are regularly added and updated.
Open Source
Open Source Business Model
such as
Send mail. The open source business model relies on shifting.. Accounting Software
Open Source Accounting Software
Open
Source Accounting Software
TurboCASH .7 is an open source accounting package that is free for everyone...). It is one of the world's first fully-featured open source accounts packages for small
Open Source Images
Open Source Images
Open
Source Image analysis Environment
TINA (TINA Is No Acronym) is an open source environment developed to accelerate... the development of key open source software within Europe and represents clear recognition Intelligence
Open Source Intelligence
Open
source Intelligence
The Open Source..., a practice we term "Open Source Intelligence". In this article, we use three...;
Open source intelligence Wikipedia
Open Source Intelligence (OSINT
Open Source program
Open Source program
Applications for Open Sound System
SLab Direct... or display functions loaded at run time.
Open source sound project
The l.o.s.s project promotes and supports the use of free, open source
Open Source E-mail
Open Source E-mail
Open Source E-Mail...;
hMailServer -Open source email
hMailServer is a free, open...;
POPFile: Open Source E-Mail Solution
POPFile is a program
Open Source Frameworks
Matrix, please send them to
[email protected].
* The Open Source...Open Source Frameworks
Open
Source Web Frameworks in Java...;
Building enterprise with open source frameworks
Any software developer worth
why the php is open source?
why the php is open source? why the php is open source
Open Source Text Editor
Open Source Text Editor
jEdit Programmers text
editor
jEdit... on the client computer.
Because it is Open Source, you may use it however you want...
text editor Open source Text and Programming Editor
Syn is an Open Source
Microsoft Open source
ever decided to embrace open source.
If you had asked me that question...Microsoft Open source
Microsoft open to open source
Microsoft Corp. says it is looking to turn over more of its programs to open-source software
Open Source proxy
Open Source proxy
Open-source HTTP proxies
A HTTP proxy is a piece...;
RabbIT is the First Open Source Proxy
Robert Olofsson, author... to improve RabbIT performance and reliability.
Open Source
Open Source Hardware
Open Source Hardware
What is open source hardware
Open-ness in hardware terms can have a whole range...;
Open source Hardware
Open source hardware is computer
open source - Java Beginners
open source hi! what is open source? i heard that SUN has released open source .what is re requisite to understand that open source. i know core concepts only Hi friend,
Open source is an approach to design
Open Source content Management System
the $1.2m CMS money pit?" The article prodded me to learn more about open source...
Open Source content Management System
The Open Source Content Management System
OpenCms is a professional level Open Source Website Content
send me example of jmsmq - JMS
send me example of jmsmq please send me example about jmsmq (java microsoft message queuing ) library
source code
source code sir...i need an online shopping web application on struts and jdbc....could u pls send me the source code PIM
Open Source PIM
Open Source PIM Software
Being organised is something...;
The Battle of the Open Source PIM
Alright. Got my... this is a no brainer for me. I?m sure I?ll get along fine with that.
Open
please send me the answer - JDBC
please send me the answer -difference between DriverManager and DataDourse what is Datasourse? What r the advantages? what is the difference between DriverManager and DataDourse
Open Source Chat
Open Source Chat
Open Source Chat Program
FriendlyTalk is a simple... most people think of open source database products what comes to mind more often...;
Open Source APIs for Java Technology Games
Welcome to today's Java Live chat
Plz send me answer quckly
Plz send me answer quckly Respected Sir,
myself is pavan shrivastava.i want ask a question that is ( we can't
create object of interface then how would possible to create object
Open Source ISO
Open Source ISO
Open Document Format Gets ISO Approval
The Open...
The International Organisation for Standardisation (ISO) has approved the open source... open source cousin,
Open Office. Not surprisingly, that's where work
Open Source FTP
Open Source FTP
Open source FTP clients
The always-excellent... it.
In the meantime, if you need a good, free, open source FTP client for Windows...).
Open Source FTP Benchmark
Dkftpbench is an FTP
please send me the banking data base in swings
please send me the banking data base in swings sir, please send me how to create the banking data base program in swings
Open Source Media Center
Open Source Media Center
Open source media center for Windows
Why buy.../nothing/nada/nopes and best of all it is
open source. This means anyone can...;
Elish Media center
Elisa is a project to create an open source
Open Source Database
Open Source Database
Open Source... Source Java Database
One$DB is an Open Source version of Daffodil... editions.
Open-Source Database Games
recommended me "Open Source Game Development: Qt Games for KDE, PDA's and Windows...Open Source Games
Playing the Open Source Game
In this article I... of software. I will suggest a few reasons why an open source approach could Databases
Open Source Databases
The Open Source Database Benchmark
Featuring... Source Databases: A brief look
This month I take a brief look at Open Source... have an understanding of what Open Source is, and why one would use it. If not, see
Open Source projects
Open Source projects
Mono
Open Source Project
Mono provides...://), the Mono open source project has an active and enthusiastic contributing... standards
* Can run .NET, Java, Python and more.
* Open Source, Free Software
Open Source Hardware
Open Source Image
GNU Image...;
Open Source Image Analysis Environment
TINA Is No Acronym (TINA ) is an open source environment developed to accelerate
Open Source Intranet
Open Source Intranet
Digger Solutions-Intranet Open Source
Digger... newsletter software (previously Newsletter Open Source) that has been ported...;
An Open Source Intranet Focused on Projects
In this series
Palm Open Source
;
Interest in Open Source Palm OS clone
A while back it became clear to me...Palm Open Source
Open
source software for Palm OS... synchronise your Palm device.
Open Source Business Models
Send me Binary Search - Java Beginners
Send me Binary Search how to use Binary think in java
give me the Binary Search programm
thx.. Hi friend,
import java.io.*;
public class BinarySearchDemo {
public static final int NOT_FOUND = -1
Open Source Shopping Cart
Open Source Shopping Cart
Open Source Shopping carts software... portal then you can choose any
of the good open source shopping cart software... is ready to go live.
Choosing the right Open Source shopping cart is very Calendar
Open Source Calendar
Choosing an open calendar manager...;
Open Source Calendar/Alarm/Planner
I did a little searching and I found Rainlendar. It's a open source calendar with a todo list, eventlist, pop up
Open Source PVR
Open Source PVR
MythTV:
The Open Source PVR
Linux users have been... Tivo illegal.
That's why it's heartening to see open source alternatives... and video.
When we made VP3 available to the open source community, we did so
send me javascript code - Java Beginners
send me javascript code please send me code javascript validation code for this html page.pleaseeeeeeeee.
a.first:link { color: green;text-decoration:none; }
a.first:visited{color:green;text
Open Source SQL
Open Source SQL
Open Source SQL Clients in Java
SQuirreL SQL Client.... The minimum version of Java supported is 1.3.
iSQL-Viewer is an open-source JDBC 2.x... out common database tasks.
Open Source: Data from MS SQL
Open Source Vector
Open Source Vector
Open source vector graphics
The open source Xara....
Open source graphics software
At the end of 2005 we announced our intention to Open Source our flagship graphics software, Xara
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/46268 | CC-MAIN-2015-48 | refinedweb | 1,903 | 64.91 |
After reading the article of Michael Birken, one has to admit that it is again proven that there is only one ingredient that makes a good game: the gameplay. Not the looks, not the sound. As a huge fan of the Introversion Software games, which all have a special 'old' look, I thought why not add this simple retro look to this game.
Like James Curran stated, Michael Birken's article covers the game from top to bottom, there isn't much light I can shed over this. So, I will focus more on the differences of building a 2D game (which isn't a console) that looks like a console game.
The first question I needed to solve was, which technique I would use for the game. DirectX or OpenGL? The second was even more important: Since this is my first project on graphics, would I be able to master the chosen technique and write all the needed code for this on time?
After looking for a while on the web, I stumbled upon the following site: IrrLicht. This is a free Graphics Engine that supports both DirectX and OpenGL, which also solved my first question. This engine provides both 2D and 3D modes, which is perfect. I could start in 2D, and move on to 3D without the need to change the engine.
One of the largest differences when not using a console is the way we process inputs. A console program is an input-driven program. After analysing the input, we perform the required logic and then redraw the screen. A Windows (non console) program is an event-driven program. This means that I would need to analyse the events and create my own input to drive the game logic. And, because it's a window, I would need to draw the screen myself and keep on updating the screen while waiting for input.
If you want to use and compile this code, you also need the IrrLicht SDK, this can be downloaded here. It contains everything you need to start, including an already built DLL and lib.
After placing the SDK on your disk, you need to add the location of the include files and the library files to Visual Studio. Open the Options menu under the Tools menu. Select the option Project and Solution and the sub option VC++ directories.
You are know ready to use and compile this code. I have started from a console project template; this gives the advantage that the console will be used as the output and trace window, which makes it easier to debug the engine. In the main, I create a
Game object. This object contains the functionality to setup, run, and end the game.
int _tmain(int argc, _TCHAR* argv[]) { Game* pTheGame = Game::getInstance(); if(pTheGame) { pTheGame->setupGame(); pTheGame->createData(); pTheGame->runGame(); pTheGame->endGame(); } return 0; }
The
setupGame() method creates the game window and the game acts. The game acts will take care of the game logic. I have put the game logic in an
Act object. This
Act object is handed over to a
Director object. And, this
Director object will allow you to switch between acts. There are three acts in this game:
IntroAct,
PlayAct, and
CreditsAct. The Intro will draw a Star Trek logo and will show the goal of the game. The Play will contain the actual game, while the Credits will be called when the game is over, to show tribute to our victorious captain or to weep over the destruction of the greatest starship ever. The last object we need is an
InputManager. This object will convert the events returned by the IrrLicht engine to inputs we like to be notified about. In our case, these will be KeyPress events.
In order to send the KeyPress events to the acts, I could have used a simple callback function, but I like the delegate concept of C#. In C++, this can be done by a Functor. The boost library provides several classes to do this, but I didn't want to use boost (at least not for this article). So, I created my own hardcoded Functor for this job.
struct FKeyPressed { virtual ~FKeyPressed() {}; virtual bool operator()(EKEY_CODE) = 0; }; template class KeyPressed : public FKeyPressed { public: typedef bool (ACTOR::*FunctionType)(EKEY_CODE); public: KeyPressed(ACTOR* pActor, FunctionType pFunctor) { m_pActor = pActor; m_pFunctor = pFunctor; } virtual ~KeyPressed() {}; virtual bool operator()(EKEY_CODE keyCode) { return (m_pActor->*m_pFunctor)(keyCode); } protected: ACTOR* m_pActor; FunctionType m_pFunctor; };
The
Act object that would like to receive KeyPress events just needs to provide a function of the following signature:
bool OnKeyPressed(EKEY_CODE keyCode);
In the function
runGame, the screens are rendered. For us, this means the draw function of the active
Act will be called. Either we draw everything here, which I will do for both the
IntroAct and the
CreditsAct, because it isn't much, or we load some
IDrawable objects into the
Act.
I have used two types of
IDrawable, those that remain rather static, and those that are more dynamic. An example of a static drawable is the
ShipDisplay. This class shows the status of the ship and the game on the right side of the screen. This information always needs to be drawn. An example of a dynamic drawable is the
Torpedo, which will be drawn for some period of time and will be removed from the screen afterwards. This type, that I have called an
Animator, implements the
IAnimator interface, which expands the
IDrawable interface. An example of this is the
Phaser class. The
Animator provides all the code to draw it and to control the lifecycle. So, the
Phaser only needs to add what is different.
void Phaser::updateInfo(Info& rInfo) { if(rInfo.Alpha >= 0) { rInfo.Alpha += rInfo.Fade; } if(rInfo.Alpha >= 255) { rInfo.Alpha = 255; rInfo.Fade = - rInfo.Fade; } }
When the lifecycle of the
Phaser ends, the
endAnimator is called. In the case of the
Phaser, the targeted vessel is hit.
void Phaser::endAnimator() { if (m_pVessel) { m_pVessel->hitPhaser(m_iEnergy); } }
One of the most difficult parts was to implement the console interface. The
Console class handles the looks and the input. Whenever the Enter or Escape key is pressed, the
CommandManager is triggered. This manager stores the state where the input of the game is in.
enum Mode { WaitForCommand, NavigateWaitForCourse, NavigateWaitForDistance, LaunchWaitForEnergy, LaunchWaitForHit, LaunchWaitForCourse, LaunchWaitForCoordinates, TransferWaitForEnergy, ComputerWaitForCommand, WaitForAnimation };
The
CommandManager validates the input and takes the appropriate action, e.g., let the
Enterprise object transfer energy to the shields. The third class is the
Controller, this object actually creates the
Torpedo or
Phaser animators and adds them to the
PlayAct. The function
runEnemyAI checks if there is a need to run and whether there are enemy vessels in the sector or not. There are three vessels:
Enterprise,
KlingonShip, and
StarBase, which all implement the
IVessel interface.
First, I would like to apologize for using a great library like IrrLicht to just draw a simple console look. But, it is fairly easy to change the characters <E> to a 2D image of the Enterprise. Or, you could go even further and make it full 3D. It should also be easy to change the
Controller and
CommandManager classes to make it real-time instead of turn-based.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/game/StarTrekRetro.aspx | crawl-002 | refinedweb | 1,212 | 63.9 |
Mac-Forums
iPhoto Won't Import Pictures Properly
MSN won't sign in
Question about iTunes
iChat video quality poor - help please?
Dboost
InCopy/InDesign
Will my MACBOOK PRO run World of Warcraft?
Help With Maximizing The Remote Desktop Window
MSN Messenger
iPhoto - Two Libraries???
What's the deal with iWork?
itunes store question
Andkon Problems?
Quicken 2007 -- can't specify date for scheduled
Remote Mac Log-In
Looking for Fungus (the game)
Anti Virus Software?
iWork 08 Problem
Mac Mail or Entourage??
Warnings in Numbers?
pie chart
Can Automator do this...?
iPhoto Library
QuickTime Pro
Need Free Image Software
game:Need for Speed Carbon
iTunes Music Store File Formats.
Slide show Screensaver
Avenir
Downloading and installing flash 9
Internet Explorer 5.5
Crashing of Devonthink, DevonAgent and Omniweb
Help with iPhoto
Entourage send/receive
iPhoto-->Aperture import problems
record yourself
java problems
Macbook Stolen and now I found great payback software
Looking for Coin Database Software for Mac
Simple and free voice recording app
Archiving in Mail?
What's a good Accounting/Business/Finance app?
Removing Photos and Videos from the Web Gallery?
CD Labeling Software
Microsoft announces MacOffice 2008 pricing.
Safari closes unexpectedly
Easy Question.. i apologize ahead of time
backing up to a fat32 drive
Finally, A new version of Office coming out for OS X on January 15th
Counter-strike
EyeTV 2.5
Apple Mail Won't Send Messages!!!
iChat connection problem
Bitlord for Mac
Favorite Freeware Apps?
BootCamp Clean Install?
Problem with Handbrake
MMORPGs that work on mac
Graphing Calculator
Aim 4.7
Fastest Browser? New Release, 1.4
Problems with World Of Warcraft!!!!!
Using Webcam On Msn Messenger
discount software for college students?
import outlook 2003 to apple mail
Photography Software?
MBP won't run a DVD I rented??
quicktime subtitles
Pages... Help
I am in awe.
Transferring favorites?
Can't import .mpg or .avi files into iMovie
Diagram drawing software
Vuze\Azureus Forum?
3DMark equivalent for OSX?
Where did my DVD app go?
Preview printing help?
work applications.
Safari can't open some webpages
pictures not showing up
Entourage and Mail
Photoshop/ Illustrator
logic pro
Sorting Files and Folders
Is there any good photo viewing application for mac beside preview?
mp3 to wav
No Front Row Support for the Mac Pro - BOO!!
Logic express help!?!?
old software, new mac?
Password cracking
Help! I'm losing all my notes! Files not saving properly
Tool Tech says 1 error found
photobooth effects in skype
imac logs off after an hour
Uninstalling DeskTunes
Archiver program
changing icons
Skype?
Keynote Jeopardy thing
Safari 3.0.3
shockwave crashes intel mac browsers
flip 4 mac
I-tunes Store Trouble
Photoshop Element
ShakesPeer File List Issues
network/resource viewer
Free Invoicing software
Mac 4.7 Version of AIM
MAC Office Entourage 2008
Videora Alternative?
PPC games on intel
Record Alarm
id3 tagger
Playing Starcraft with a Windows user?
Anyone having problems with the FireFox update?
Slow quicktime
Where to change default browser for FTP
What's with Mail?
Call of Duty 2 (online error) HELP PLEASE :(
Photoshop cs3 problem
Paint
Xbox 360 backups
Madden 08
OOXML-The Great Debate!
Setting up mail
hiding open apps from the dock?
Installing update of expired demo
Mac Pilot - Is it good?
Transmission?
PrefsWriter?
prompt box - exists beyond my monitor
iWork '08 or Office?
Lost my iWeb site.. kind of
macoffice2008.com
iTunes display
Pages '08 - Sorting numbered lists
Using Bonjour with a Imac and Win xp box for printing
Dealing with ad ware on Macs
Freeware
Just curious
Safari help
Final Cut Studio install problem
Pages- Memory hog?
Office for Mac or the Windows side?
ebook library app
iWork 08 or Microsoft Office?
financial software
Quicken open-source equivalents?
Blocking Safari, programs, applications
Office Problem
For Sale Nokia N95(8GB)
Alterpod!
Changing the placement of items on the toolbar?
Outlook express files into Mail?
creating a 'folder' in mail
mail app
iWeb Save Feature
region free superdrive: dvd player works, but not frontrow?
IM & IRC clients?
Chessmaster 9000
iWork 06 for cheap, or upgrade to iWork 08 (more expensive)?
Delocalizer Uh Oh
Please help
Hand-me-down iMac missing apps
Video Chat
Ok, this is making me mad.
Youtube to iPod converter for Mac?
Unreal tournament in Mac, from a Mac newbie
I need Microsoft Office for my Mac.
Itunes Gone and Macbook in Trouble?
Limewire or Frostwire
is there a....?
Mail Makeover
Lost Slide Bar
Quake 3 Arena Demo
iDVD--how to burn to external DVD drive?
.avi not playing completely in QT
Safari not displaying HTML
Mira Remote for free?
iWork: An Office flop or killer?
copy itunes shared music
I Know I Must Be Doing Something Wrong, But...
Enabling sftp and ssh?
best browser?
How to install individual apps
what's hogging my space?
Keynote Help
Mail.app removes message content, converts to txt
Parallels or VMWare Fusion
Questions about World of Warcraft on my Macbook Pro.
problem with front row
Recording Audio Output... possible on mac?
Newbie: how do I properly remove LimeWire and such programs?
MacBook and itunes
Limewire
Secure Emails & Internet browsing?
ilife 08
Calculator widget
Counter Strike: Condition Zero
Removing icons from desktop
Battery Updater
Webcam Program
Beat Program for Mac?
Question about mail
Will Bioshock run on my mac?
iWork Pages/OS X PDF Export
iChat-Video Frustration
Quicksilver recently slow
iMovie '06: Isolating Music?
iTunes Skinning?
Mac Mame
Safari Hijacking
Adium/msn will not work in school - mac only...
aMSN
FireFox Problem - Always on Top
Two ?'s: Airtunes and NeoOffice
Temperature Monitoring & program to load CPU?
NeoOffice instead of office or iwork suite
What program do I use to download bit torrent in mac?
iMovie 08 launches, but beach ball stays and does not respond
iCal Import for Automator
Keep getting beachball in Safari
installing photoshop CS3
Help me please - my MBP is so sloooooow
iTunes... anywhere to find 7.3?
harleyhock
Trace of an application which is deleted.
iPhoto Albums/Events Hide & Share Q's
Mac Mail and Gmail
Adobe CS3 apps not showing for simple finder user accounts
?! My Browser Bar in Safari disappeared (Pic included)
Pages WordArt?
Still cannot send Entourage email-have read all threads
disk order 2.5.1 FTP help
Msn Messenger
frontrow enhancements/alternatives?
Mail - mutliple recipients
QuickTime Huh???
gimp need x11 but won't install
free mac wma to .mp3 converter?
Need HELP in defeating teenage hacker
iPhoto '08 importing help!!!
iTunes & music library help
Problem with Onyx - desktop has 'disappeared'
Adium vs MSN Messenger for mac
iChat! On/off/on/off/on/off
Microsoft Office 2008 for Mac Pricing
I wanna be an itunes store member... bt im not from the US?!
Growl
folding at home wont work.
Backup software
Anyway to change to Start-up screen?
HELP- 863 Mail messages DELETED
Game Problems
iChat problems
need help here!!
Front Row bug on my Macbook Pro
iTunes Troubles
Remote backup software ?
Mail 2.0 randomly not working
blue phone elite!
Microsoft Office 2007 Windows
parallels
Note taking apps?
iPhoto problem
Free trial for CD burning software
Dot Mac Backup
Only Albums?
Oblivion on macbook with some problems
Flv
iChating with MSN Messenger, Can it be done?
Programming Applications
How do i download AIM for mac? Ichat sucks
Terminal
Is there ANY WAY AT ALL...
iTunes 7.4
Syncing, but not with a mobile device...
Free File Recovery software?
Watch TV on Mac ?
ical not syncing around the office
Problems with iChat, please help!!
What do these apps do???
itunes myspace exporter
Restrict access to applications on time basis
RTF with Textframes?
CD DVD Label Maker for Mac
RSS Software
Really annoying entourage issue
Office error
bit torrent
Personal Finance software
iPhoto and General Picture storage
Ahh why? Netscape to blame?
Looking for good business/accounting prg for small online retail business
Strangest problem with Safari
YouTube problem
Any way to tell Quicksilver to ignore an external h/d?
backlight in frontrow?
Multiple .docs to pdf?
Fonts and photoshop problem.
I accidently deleted script editor....
Programs for chatting?
Accounts being windowed
reinstall
Windows Sizer, and better dual monitor support?
Help my button's missing!
Halo graphics
iWeb '08 gutted my website. Images are gone. No drop shadows. Ugly!
Are all versions of newer Adobe products universal?
missing Keynote 4 effect
iWork 2006 Pages Key Bindings
KiSMAC problems
Safari - tabbed browsing/external links
please .rar??
Problem with Camino
xterm
New here!
Bulk Mailer Help
Pro desktop
iChat shows address book names, HELP!
thunderbird setup
Adium File Transfer Problems
two questions about rremote and vlc...
iPhoto '08 not saving changes to photos??
Quick Aperture question
Newbie RSS reader Question
adobe help
Any apps that use the apple remote?
Playing torrent files.
Azureus gurus v.pleasehelp!
.wmv
iMovie Import Problem
good .gif animator
Photosharing
QuickTime and iChat
Microsoft Office: Mac - Installation Problem
looking for: Bass Guitar FX Program
Is it OK to move programs off of my HD and reinstall them later?
Microsoft Office X help
iWork Numbers question?
Safari/IE/Itunes/Software Updates internet acess problem
Using Disk Utility
Text to speech
Halo demo
Password Protecting Files and Applications
parallels
Getting .docs to always open in Pages
Keynote Problem
Company of Heroes in Boot Camp?
phantom of the [browser]opera?
The Sims 2?
Limewire problem
iTunes files
MacMail won't launch
Garageband: Changing pitch (?) of imported songs
Histotime question
Garageband Won't Open
iLife Included?
Missing font
Iphoto Help Please
HandBreak
paralles vs. bootcamp
Best Math Program?
iDvd not working
gotcha
Question about Fire messaging program
neoffice? Is it worth it?
imail password lost
another ITUNES question
iPhoto - library - map with external harddrive?
a question about the dashboard...
Apps..are they closed or just minimized?
Microsoft Office 2004 language help needed.
caps in mail
no video: parallels + fredoria linux
toast/general question
Mac Mail program online
Automator Problems
HELP HELP! ReadIris Pro 11.5 for Mac Install??
Is It Okay to Use Other Programs While Burning Discs?
Mail
New Macbook Pro Owner!Love It! Two ?'s
How can i import my sites o Safari ??
Mail giving error - refused on port 110
Quicktime
Google Earth
Software does not support this hardware?
iPhoto shows wrong thumbnails - how to fix?
Safari memory hog, hanging, dead downloads
Need .rm/.rmvb to Apple TV/MP4 converter
Um... did I turn off the "quit unexpectedly" dialog?
iCal Question: Email Alerts
Quicktime loading slow
creating virtual people, true to life
iTunes questions
Mac mail and gmail?
Mac mail and Yahoo
Attempting to copy to the disk "Mac hard drive" failed. This disk cannot be.......
Fast DVD Copy4/iTunes dual layer burning trouble
trick website into thinking....
Office won't start
Creating PDFs
Database Software
Photoshop CS3 Trial
AntiVirus Software?
Mail.app missing default folders
iTunes questions... HELP!
Having Some Trouble With Mail Application..
Frontrow: Browse pics without the slideshow??
Mozzilla Install Problems
Riven/Myst
Pages iWork 08 question
iMovie '08 ... iDon't Like it... Do you?
adium timeout errors
Adium Errors loggin into accounts
Pages iWork 08 - Magazine style
Recording music from a DVD movie
Help! Need app to monitor bandwidth.
Anyway to make Adium not move?
entorage help,,
.mac account
how does a vCard work? (before I send one)
Toast wont open-bounces on dock, then disappears
iMovie and Panasonic NV-GS230
Bookmarking RSS
Removing a HDD Image from the Desktop
Playing World of Warcraft at school
Garage Band - Jam Pack question
Word 2004 corrupt fonts - word won't run
Audio hijack normal or Pro?
bit of a strange post...
Temporary internet files (with firefox and Safari)
copying Entourage or other apps between machines
Norton Antivirus uninstall
looking for a particular software
Help me find me peer with great music!!! (itunes help)
MS Paint equivalent
mpeg-2 playback/quicktime pro
DVD Burning
Mail+Gmail problems
free ventrilo server
exporting ical
sharing files b/t osx and bootcamp
good stiching program.
Need a printable, editable calenedar
aMSN music plugin
Keychain Message When Opening Safari
Safari & RSS Widgets
for DJ
Pretty free mac games
Quicksilver
uninstal helpp????
Please recommend backup software
itunes help
Disk labeling software
Garage Band Loops
Help with GarageBand 08
pages ilife 08 big problem
Gaming on Macs
DVD burning program
Need Picture Screensaver with Multiple Directory Support
A newbie writes....where's the word processor??
Need an editor: TextMate or BBEdit?
Quicktime/Safari/Tinkertool
telling itunes music is on another disk
multiplayer games???
Need help running metatrader with crossover
Editing PDF Docs....not just text...but hand written notes after scanning in...
ripping subtiles
Removing Itunes completely
How do I make files visible again?
P2p
Standalone ODF to DOC converter
Office 2007 Compatibility
Is your Mac Jealous of how the iPhone scrolls??
Front Row
Administrator Privileges To Install Painter IX?
Theft Recovery Software
Stalker wont work with Crossover!
Changing the display fonts?
iPhoto/aperture slit gallery in two locations
Mailplane
Mail - Server Issues
Safari for Widows and Yahoo Mail OK..but Not OSX
iCal addons?
Best Program for Web Design on Mac
Adiumx with OSX
Smackbook / Bumptunes
windows media error message
iTunes Movie Artwork Crashing
Omg Pages>>> Challenge
Please help with reading CD/DVD's in boot camp??
FLAC player compatible with OSX???
Help ! DVD made with IDVD '06 - Playback fails on some home DVD players...
Can't Connect to Adium X.........Please help!!
how many macbooks...iwork 08 | http://www.mac-forums.com/mac-forums-sitemap.php?forumid=23&page=42 | CC-MAIN-2014-42 | refinedweb | 2,208 | 69.38 |
Introduction :
On Android and iOS, if you tap one button, the opacity of it decreases during the time you pressed it down. TouchableHighlight is used to implement similar effects. It decreases the opacity of the wrapped view on pressed.
You will have to play with colors to find out the best combination. Note that it should have only one child. If you want to have several child components, wrap them in a view.
Props :
Following props are available for this view :
[number] activeOpacity :
It defines the opacity of the wrapped view when touch is active. The default is 0.85 and it should be always between 0 to 1.
[function] onHideUnderlay :
Function that is called after the underlay is hidden.
[function] onShowUnderlay :
Function that is called after the underlay is shown.
[View.style] style :
Style of the TouchableHighlight.
[color] underlayColor :
When the touch is active, it is the color of the underlay that will show through.
[bool] hasTVPreferredFocus :
Apple TV preferred focus.
[bool] nextFocusDown, nextFocusUp, nextFocusRight, nextFocusLeft :
TV next focus down, up, right, left.
[Bool] testOnly_pressed :
Handy for snapshot tests.
Example program :
Let’s take a look at the below example program :
import React from 'react'; import {StyleSheet, TouchableHighlight, Text, View} from 'react-native'; export default function ExampleProgram() { const onPress = () => {}; return ( <View style={styles.container}> <TouchableHighlight underlayColor={'#283593'} style={styles.touchable} onPress={onPress}> <Text style={styles.text}>Click Me</Text> </TouchableHighlight> </View> ); } const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', margin: 12, }, touchable: { alignItems: 'center', backgroundColor: '#7986cb', padding: 10, }, text: { color: 'white', }, });
Here, the we are using one backgroundColor and one underlayColor for the TouchableHighlight view. Once you click on it, it will change the color as like below :
| https://www.codevscolor.com/react-native-touchablehighlight | CC-MAIN-2020-50 | refinedweb | 280 | 59.3 |
After introducing this series in the last post, today we’re going to address the first 3 items on our TODO list:
- The initial blue text
- The theme music
- The star field
- The disappearing Star Wars logo
- The crawling text
The following two items are fairly significant, in their own right, so they’ll each take a post of their own to complete. Oh, and I’ve thrown in a surprise item 6, which I’ll unveil when we implement the crawling text.
Before we dive in, it’s important to make some points about the code: because this is mainly just a bit of fun, this code hasn’t been generalised to work with any drawing and any view, etc. I’ve hardcoded a lot of values that just work on a specific drawing and on my system. The timing works well for me, but may be off when working on systems with different performance profiles. Mileage may vary, as they say.
That’s one reason I’m providing a drawing with the appropriate views set up. I originally planned on putting the layers in there, too, but then decided to create those at runtime (as it was simple to do so). I could have done that with the views, too, but it didn’t seem worth the extra effort.
Another thing I should mention: in my first run at this I used transient graphics to display the intro and crawl text. I then decided to switch to db-resident objects, as I thought I could place them on layers that I turn off when they’re no longer needed. I ended up finding that didn’t work – as I have a single transaction making all the drawing modifications in the command, and couldn’t find a way to have the graphics system reflect the pending database changes – so I went ahead and erased them instead. I decided to stick with db-resident rather than transient graphics, nonetheless, but using transient graphics remains a viable approach for this: the reason I’m not doing so isn’t especially significant.
With that, here’s a look at the code in this post running inside AutoCAD:
A few task-specific comments:
1. The initial blue text
This was straightforward to implement. I ended up creating MText with the font information embedded in the contents, rather than a separate style. This is mainly because it’s the approach I used for the crawl text, later on, which uses multiple fonts. Overkill for this text, of course, but it saves me creating the style.
We’re using the intro text to do a bunch of things, behind the scenes. It’s essentially our splash screen for doing things like downloading the MP3 file for the theme music, etc.
2. The theme music
I came across an online version of the crawl music accessed in this great HTML implementation of the opening crawl. Seems like an ideal candidate for SWAPI integration, extending it beyond the single episode. :-) Here’s the code associated with it, if you want to take a look.
I found out that the System.Windows.Media namespace contains a MediaPlayer object that allows you to access/play an MP3 file via a URL. All that remained was to work out some of the timings related to the music – it starts at 8.5 second in, for instance – and apply these to the code. In this version of the code, the player stops either when the music finishes or when AutoCAD is closed, whichever happens first.
3. The star field
This was a fairly simple matter of generating a bunch of random numbers and using them to define stars. I didn’t want to pass around large numbers of AutoCAD objects, though – whether DBPoints or even Point3ds – so I used a list of F# tuples (with x and y values, each of which holds a float between 0 and 1) that later gets transformed into DBPoints somewhere in the screen space. Although for the surprise item 6 I extended that space to be twice as high as currently needed.
Here’s the F# code implementing our first pass at the EPISODE command:
module StarWars.Crawler
open Autodesk.AutoCAD.ApplicationServices
open Autodesk.AutoCAD.ApplicationServices.Core
open Autodesk.AutoCAD.DatabaseServices
open Autodesk.AutoCAD.Geometry
open Autodesk.AutoCAD.Runtime
open System
open System.Windows.Media
// The intro music MP3 file
let mp3 =
""
// The layers we want to create as a list of (name, (r, g, b))
let layers =
[
("Stars", (255, 255, 255));
("Intro", (75, 213, 238))
]
// Create layers based on the provided names and colour values
// (only creates layers if they don't already exist... could be
// updated to make sure the layers are on/thawed and have the
// right colour values)
let createLayers (tr:Transaction) (db:Database) =
let lt =
tr.GetObject(db.LayerTableId, OpenMode.ForWrite) :?> LayerTable
layers |>
List.iter (fun (name, (r, g, b)) ->
if not(lt.Has(name)) then
let lay = new LayerTableRecord()
lay.Color <-
Autodesk.AutoCAD.Colors.Color.FromRgb(byte r, byte g, byte b)
lay.Name <- name
lt.Add(lay) |> ignore
tr.AddNewlyCreatedDBObject(lay, true)
)
// Get a view by name
let getView (tr:Transaction) (db:Database) (name:string) =
let vt =
tr.GetObject(db.ViewTableId, OpenMode.ForRead) :?> ViewTable
if vt.Has(name) then
tr.GetObject(vt.[name], OpenMode.ForRead) :?> ViewTableRecord
else
null
// Add an entity to a block and a transaction
let addToDatabase (tr:Transaction) (btr:BlockTableRecord) o =
btr.AppendEntity(o) |> ignore
tr.AddNewlyCreatedDBObject(o, true)
// Flush the graphics for a particular document
let refresh (doc:Document) =
doc.TransactionManager.QueueForGraphicsFlush()
doc.TransactionManager.FlushGraphics()
// Transform between the Display and World Coordinate Systems
let dcs2wcs (vtr:AbstractViewTableRecord) =
Matrix3d.Rotation(-vtr.ViewTwist, vtr.ViewDirection, vtr.Target) *
Matrix3d.Displacement(vtr.Target - Point3d.Origin) *
Matrix3d.PlaneToWorld(vtr.ViewDirection)
// Poll until a music file has downloaded fully
// (could sleep or use a callback to avoid this being too
// CPU-intensive, but hey)
let rec waitForComplete (mp:MediaPlayer) =
if mp.DownloadProgress < 1. then
System.Windows.Forms.Application.DoEvents()
waitForComplete mp
// Poll until a specified delay has elapsed since start
// (could sleep or use a callback to avoid this being too
// CPU-intensive, but hey)
let rec waitForElapsed (start:DateTime) delay =
let elapsed = DateTime.Now - start
if elapsed.Seconds < delay then
System.Windows.Forms.Application.DoEvents()
waitForElapsed start delay
// Create the intro text as an MText object relative to the view
// (has a parameter to the function doesn't execute when loaded...
// also has hardcoded values that make it view-specific)
let createIntro _ =
let mt = new MText()
mt.Contents <-
"{\\fFranklin Gothic Book|b0|i0|c0|p34;" +
"A long time ago, in a galaxy far,\\Pfar away...}"
mt.Layer <- "Intro"
mt.TextHeight <- 0.5
mt.Width <- 10.
mt.Normal <- Vector3d.ZAxis
mt.TransformBy(Matrix3d.Displacement(new Vector3d(1., 6., 0.)))
mt
// Generate a quantity of randomly located stars... a list of (x,y)
// tuples where x and y are between 0 and 1. These will later
// get transformed into the relevant space (on the screen, etc.)
let locateStars quantity =
// Create our random number generator
let ran = new System.Random()
// Note: _ is only used to make sure this function gets
// executed when it is called... if we have no argument
// it's a value that doesn't require repeated execution
let randomPoint _ =
// Get random values between 0 and 1 for our x and y coordinates
(ran.NextDouble(), ran.NextDouble())
// Local recursive function to create n stars at random
// locations (in the plane of the screen)
let rec randomStars n =
match n with
| 0 -> []
| _ -> (randomPoint 0.) :: randomStars (n-1)
// Create the specified number of stars at random locations
randomStars quantity
// Take locations from 0-1 in X and Y and place them
// relative to the screen
let putOnScreen wid hgt dcs (x, y) =
// We want to populate a space that's 2 screens high (so we
// can pan/rotate downwards at the end of the crawl), hence
// the additional multiplier on y
let pt = new Point3d(wid * (x - 0.5), hgt * ((y * -1.5) + 0.5), 0.)
pt.TransformBy(dcs)
// Commands to recreate the open crawl experience for a selected
// Star Wars episode
[<CommandMethod("EPISODE")>]
let episode() =
// Make sure the active document is valid before continuing
let doc = Application.DocumentManager.MdiActiveDocument
if doc <> null then
let db = doc.Database
let ed = doc.Editor
// Start our transaction and create the required layers
use tr = doc.TransactionManager.StartTransaction()
createLayers tr db
// Get our special Initial and Crawl views
let ivtr = getView tr db "Initial"
let cvtr = getView tr db "Crawl"
if ivtr = null || cvtr = null then
ed.WriteMessage(
"\nPlease load StarWarsCrawl.dwg before running command.")
doc.TransactionManager.EnableGraphicsFlush(true)
let btr =
tr.GetObject(doc.Database.CurrentSpaceId, OpenMode.ForWrite)
:?> BlockTableRecord
// Set the initial view: this gives us higher quality text
ed.SetCurrentView(ivtr)
// First we create the intro text
let intro = createIntro ()
intro |> addToDatabase tr btr
// Make sure the intro text is visible
doc |> refresh
ed.UpdateScreen()
// We'll now perform a number of start-up tasks, while our
// initial intro text is visible... we'll start vy recording
// our start time, so we can synchronise our delay
let start = DateTime.Now
// Get our view's DCS matrix
let dcs = dcs2wcs(cvtr)
// Create a host of stars at random screen positions
locateStars 1000 |>
List.iter
(fun xy ->
let p = putOnScreen cvtr.Width cvtr.Height dcs xy
let dbp = new DBPoint(p)
dbp.Layer <- "Stars"
dbp |> addToDatabase tr btr)
// Open the intro music over the web
let mp = new MediaPlayer()
mp.Open(new Uri(mp3))
// Wait for the download to complete before playing it
waitForComplete mp
// Have a minimum delay of 5 seconds showing the intro text
waitForElapsed start 5
// Start the audio at 8.5 seconds in
mp.Position <- new TimeSpan(0, 0, 0, 8, 500)
mp.Play()
// Switch to the crawl view: this will also change the
// visual style from 2D Wireframe to Realistic
ed.SetCurrentView(cvtr)
// Remove the intro text
intro.Erase()
tr.Commit() // Commit the transaction
That’s it for this part in the series. In the next part we’ll take a look at the code to show the disappearing Star Wars logo inside AutoCAD. | http://through-the-interface.typepad.com/through_the_interface/2015/01/recreating-the-star-wars-opening-crawl-in-autocad-using-f-part-2.html | CC-MAIN-2017-51 | refinedweb | 1,687 | 55.24 |
).
// Get the normal to a triangle from the three corner points, a, b and c. function GetNormal(a: Vector3, b: Vector3, c: Vector3) { // Find vectors corresponding to two of the sides of the triangle. var side1 = b - a; var side2 = c - a;
// Cross the vectors to get a perpendicular vector, then normalize it. return Vector3.Cross(side1, side2).normalized; }
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { Vector3 GetNormal(Vector3 a, Vector3 b, Vector3 c) { Vector3 side1 = b - a; Vector3 side2 = c - a; return Vector3.Cross(side1, side2).normalized; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/Vector3.Cross.html | CC-MAIN-2017-43 | refinedweb | 101 | 58.18 |
Setting Up OpenGL
To set up OpenGL, depending on your programming platform, read:
- How to write OpenGL programs in C/C++.
- How to write OpenGL programs in Java: JOGL or LWJGL.
- How to write OpenGL|ES programs in Android.
Example 1: Setting Up OpenGL and GLUT (GL01Hello.cpp)
Make sure that you can run the "
GL01Hello.cpp" described in "How to write OpenGL programs in C/C++", reproduced below:
#include <windows.h>
The header "
windows.h" is needed for the Windows platform only.
#include <GL/glut.h>
We also included the GLUT header, which is guaranteed to include "
glu.h" (for GL Utility) and "
gl.h" (for Core OpenGL).
The rest of the program will be explained in due course.
Introduction
OpenGL (Open Graphics Library) is a cross-platform, hardware-accelerated, language-independent, industrial standard API for producing 3D (including 2D) graphics. Modern computers have dedicated GPU (Graphics Processing Unit) with its own memory to speed up graphics rendering. OpenGL is the software interface to graphics hardware. In other words, OpenGL graphic rendering commands issued by your applications could be directed to the graphic hardware and accelerated.
We use 3 sets of libraries in our OpenGL programs:
- Core OpenGL (GL): consists of hundreds of commands, (such as setting camera view and projection) and more building models (such as qradric surfaces and polygon tessellation). GLU commands start with a prefix "
glu" (e.g.,
gluLookAt,
gluPerspective).
- OpenGL Utilities Toolkit (GLUT): OpenGL is designed to be independent of the windowing system or operating system. GLUT is needed to interact with the Operating System (such as creating a window, handling key and mouse inputs); it also provides more building models (such as sphere and torus). GLUT commands start with a prefix of "
glut" (e.g.,
glutCreatewindow,
glutMouseFunc). GLUT is platform independent, which is built on top of platform-specific OpenGL extension such as GLX for X Window System, WGL for Microsoft Window, and AGL, CGL or Cocoa for Mac OS.. A standalone utility called "
glewinfo.exe" (under the "
bin" directory) can be used to produce the list of OpenGL functions supported by your graphics system.
- Others.
Vertex, Primitive and Color
Example 2: Vertex, Primitive and Color (GL02Primitive.cpp)
Try building and runnng this OpenGL C/C++ program:
The expected output and the coordinates are as follows. Take note that 4 shapes have pure color, and 2 shapes have color blending from their vertices.
I shall explain the program in the following sections.
OpenGL as a State Machine
OpenGL operates as a state machine, and maintain a set of state variables (such as the foreground color, background color, and many more). In a state machine, once the value of a state variable is set, the value persists until a new value is given.
For example, we set the "clearing" (background) color to black once in
initGL(). We use this setting to clear the window in the
display() repeatedly (
display() is called back whenever there is a window re-paint request) - the clearing color is not changed in the entire program.
// In initGL(), set the "clearing" or background color glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // black and opaque // In display(), clear the color buffer (i.e., set background) with the current "clearing" color glClear(GL_COLOR_BUFFER_BIT);
Another example: If we use
glColor function to set the current foreground color to "red", then "red" will be used for all the subsequent vertices, until we use another
glColor function to change the foreground color.
In a state machine, everything shall remain until you explicitly change it!
Naming Convention for OpenGL Functions
An OpenGL functions:
- begins with lowercase
gl(for core OpenGL),
glu(for OpenGL Utility) or
glut(for OpenGL Utility Toolkit).
- followed by the purpose of the function, in camel case (initial-capitalized), e.g.,
glColorto specify the drawing color,
glVertexto define the position of a vertex.
- followed by specifications for the parameters, e.g.,
glColor3ftakes three
floatparameters.
glVectex2itakes two
intparameters.
(This is needed as C Language does not support function overloading. Different versions of the function need to be written for different parameter lists.)
The convention can be expressed as follows:
returnType glFunction[234][sifd] (type value, ...); // 2, 3 or 4 parameters returnType glFunction[234][sifd]v (type *value); // an array parameter
The function may take 2, 3, or 4 parameters, in type of
s (
GLshort),
i (
GLint),
f (
GLfloat) or
d (
GLdouble). The '
v' (for vector) denotes that the parameters are kept in an array of 2, 3, or 4 elements, and pass into the function as an array pointer.
OpenGL defines its own data types:
- Signed Integers:
GLbyte(8-bit),
GLshort(16-bit),
GLint(32-bit).
- Unsigned Integers:
GLubyte(8-bit),
GLushort(16-bit),
GLuint(32-bit).
- Floating-point numbers:
GLfloat(32-bit),
GLdouble(64-bit),
GLclampfand
GLclampd(between 0.0 and 1.0).
GLboolean(unsigned char with 0 for false and non-0 for true).
GLsizei(32-bit non-negative integers).
GLenum(32-bit enumerated integers).
The OpenGL types are defined via
typedef in "
gl.h" as follows:
typedef unsigned int GLenum; typedef unsigned char GLboolean; typedef unsigned int GLbitfield; typedef void GLvoid; typedef signed char GLbyte; /* 1-byte signed */ typedef short GLshort; /* 2-byte signed */ typedef int GLint; /* 4-byte signed */ typedef unsigned char GLubyte; /* 1-byte unsigned */ typedef unsigned short GLushort; /* 2-byte unsigned */ typedef unsigned int GLuint; /* 4-byte unsigned */ typedef int GLsizei; /* 4-byte signed */ typedef float GLfloat; /* single precision float */ typedef float GLclampf; /* single precision float in [0,1] */ typedef double GLdouble; /* double precision float */ typedef double GLclampd; /* double precision float in [0,1] */
OpenGL's constants begins with "
GL_", "
GLU_" or "
GLUT_", in uppercase separated with underscores, e.g.,
GL_COLOR_BUFFER_BIT.
For examples,
glVertex3f(1.1f, 2.2f, 3.3f); // 3 GLfloat parameters glVertex2i(4, 5); // 2 GLint paramaters glColor4f(0.0f, 0.0f, 0.0f, 1.0f); // 4 GLfloat parameters GLdouble aVertex[] = {1.1, 2.2, 3.3}; glVertex3fv(aVertex); // an array of 3 GLfloat values
One-time Initialization initGL()
The
initGL() is meant for carrying out one-time OpenGL initialization tasks, such as setting the clearing color.
initGL() is invoked once (and only once) in
main().
Callback Handler display()
The function
display() is known as a callback event handler. An event handler provides the response to a particular event (such as key-press, mouse-click, window-paint). The function
display() is meant to be the handler for window-paint event. The OpenGL graphics system calls back
display() in response to a window-paint request to re-paint the window (e.g., window first appears, window is restored after minimized, and window is resized). Callback means that the function is invoked by the system, instead of called by the your program.
The
Display() runs when the window first appears and once per subsequent re-paint request. Observe that we included OpenGL graphics rendering code inside the
display() function, so as to re-draw the entire window when the window first appears and upon each re-paint request.
Setting up GLUT - main()
GLUT provides high-level utilities to simplify OpenGL programming, especially in interacting with the Operating System (such as creating a window, handling key and mouse inputs). The following GLUT functions were used in the above program:
glutInit: initializes GLUT, must be called before other GL/GLUT functions. It takes the same arguments as the
main().
void glutInit(int *argc, char **argv)
glutCreateWindow: creates a window with the given title.
int glutCreateWindow(char *title)
glutInitWindowSize: specifies the initial window width and height, in pixels.
void glutInitWindowSize(int width, int height)
glutInitWindowPosition: positions the top-left corner of the initial window at (x, y). The coordinates (x, y), in term of pixels, is measured in window coordinates, i.e., origin (0, 0) is at the top-left corner of the screen; x-axis pointing right and y-axis pointing down.
void glutInitWindowPosition(int x, int y)
glutDisplayFunc: registers the callback function (or event handler) for handling window-paint event. The OpenGL graphic system calls back this handler when it receives a window re-paint request. In the example, we register the function
display()as the handler.
void glutDisplayFunc(void (*func)(void))
glutMainLoop: enters the infinite event-processing loop, i.e, put the OpenGL graphics system to wait for events (such as re-paint), and trigger respective event handlers (such as
display()).
void glutMainLoop()
In the
main() function of the example:
glutInit(&argc, argv); glutCreateWindow("Vertex, Primitive & Color"); glutInitWindowSize(320, 320); glutInitWindowPosition(50, 50);
We initialize the GLUT and create a window with a title, an initial size and position.
glutDisplayFunc(display);
We register
display() function as the callback handler for window-paint event. That is,
display() runs when the window first appears and whenever there is a request to re-paint the window.
initGL();
We call the
initGL() to perform all the one-time initialization operations. In this example, we set the clearing (background) color once, and use it repeatably in the
display() function.
glutMainLoop();
We then put the program into the event-handling loop, awaiting for events (such as window-paint request) to trigger off the respective event handlers (such as
display()).
Color
We use
glColor function to set the foreground color, and
glClearColor function to set the background (or clearing) color.
void glColor3f(GLfloat red, GLfloat green, GLfloat blue) void glColor3fv(GLfloat *colorRGB) void glColor4f(GLfloat red, GLfloat green, GLfloat blue, GLfloat alpha) void glColor4fv(GLfloat *colorRGBA) void glClearColor(GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha) // GLclampf in the range of 0.0f to 1.0f
Notes:
- Color is typically specified in
floatin the range
0.0fand
1.0f.
- Color can be specified using RGB (Red-Green-Blue) or RGBA (Red-Green-Blue-Alpha) components. The 'A' (or alpha) specifies the transparency (or opacity) index, with value of 1 denotes opaque (non-transparent and cannot see-thru) and value of 0 denotes total transparent. We shall discuss alpha later.
In the above example, we set the background color via
glClearColor in
initGL(), with R=0, G=0, B=0 (black) and A=1 (opaque and cannot see through).
// In initGL(), set the "clearing" or background color glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Black and opague
In
display(), we set the vertex color via
glColor3f for subsequent vertices. For example, R=1, G=0, B=0 (red).
// In display(), set the foreground color of the pixel glColor3f(1.0f, 0.0f, 0.0f); // Red
Geometric Primitives
In OpenGL, an object is made up of geometric primitives such as triangle, quad, line segment and point. A primitive is made up of one or more vertices. OpenGL supports the following primitives:
A geometric primitive is defined by specifying its vertices via
glVertex function, enclosed within a pair
glBegin and
glEnd.
void glBegin(GLenum shape) void glVertex[234][sifd] (type x, type y, type z, ...) void glVertex[234][sifd]v (type *coords) void glEnd()
glBegin specifies the type of geometric object, such as
GL_POINTS,
GL_LINES,
GL_QUADS,
GL_TRIANGLES, and
GL_POLYGON. For types that end with '
S', you can define multiple objects of the same type in each
glBegin/
glEnd pair. For example, for
GL_TRIANGLES, each set of three
glVertex's defines a triangle.
The vertices are usually specified in
float precision. It is because integer is not suitable for trigonometric operations (needed to carry out transformations such as rotation). Precision of
float is sufficient for carrying out intermediate operations, and render the objects finally into pixels on screen (with resolution of says 800x600, integral precision).
double precision is often not necessary.
In the above example:
glBegin(GL_QUADS); .... 4 quads with 12x glVertex() .... glEnd();
we define 3 color quads (
GL_QUADS) with 12x
glVertex() functions.
glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.8f, 0.1f); glVertex2f(-0.2f, 0.1f); glVertex2f(-0.2f, 0.7f); glVertex2f(-0.8f, 0.7f);
We set the color to red (R=1, G=0, B=0). All subsequent vertices will have the color of red. Take note that in OpenGL, color (and many properties) is applied to vertices rather than primitive shapes. The color of the a primitive shape is interpolated from its vertices.
We similarly define a second quad in green.
For the third quad (as follows), the vertices have different color. The color of the quad surface is interpolated from its vertices, resulting in a shades of white to dark gray, as shown in the output.
glColor3f(0.2f, 0.2f, 0.2f); // Dark Gray glVertex2f(-0.9f, -0.7f); glColor3f(1.0f, 1.0f, 1.0f); // White glVertex2f(-0.5f, -0.7f); glColor3f(0.2f, 0.2f, 0.2f); // Dark Gray glVertex2f(-0.5f, -0.3f); glColor3f(1.0f, 1.0f, 1.0f); // White glVertex2f(-0.9f, -0.3f);
2D Coordinate System and the Default View
The following diagram shows the OpenGL 2D Coordinate System, which corresponds to the everyday 2D Cartesian coordinates with origin located at the bottom-left corner.
The default OpenGL 2D clipping-area (i.e., what is captured by the camera) is an orthographic view with x and y in the range of -1.0 and 1.0, i.e., a 2x2 square with centered at the origin. This clipping-area is mapped to the viewport on the screen. Viewport is measured in pixels.
Study the above example to convince yourself that the 2D shapes created are positioned correctly on the screen.
Clipping-Area & Viewport
Try dragging the corner of the window to make it bigger or smaller. Observe that all the shapes are distorted.
We can handle the re-sizing of window via a callback handler
reshape(), which can be programmed to adjust the OpenGL clipping-area according to the window's aspect ratio.
Clipping Area: Clipping area refers to the area that can be seen (i.e., captured by the camera), measured in OpenGL coordinates.
The function
gluOrtho2D can be used to set the clipping area of 2D orthographic view. Objects outside the clipping area will be clipped away and cannot be seen.
void gluOrtho2D(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top) // The default clipping area is (-1.0, 1.0, -1.0, 1.0) in OpenGL coordinates, // i.e., 2x2 square centered at the origin.
To set the clipping area, we need to issue a series of commands as follows: we first select the so-called projection matrix for operation, and reset the projection matrix to identity. We then choose the 2D orthographic view with the desired clipping area, via
gluOrtho2D().
// Set to 2D orthographic projection with the specified clipping area glMatrixMode(GL_PROJECTION); // Select the Projection matrix for operation glLoadIdentity(); // Reset Projection matrix gluOrtho2D(-1.0, 1.0, -1.0, 1.0); // Set clipping area's left, right, bottom, top
Viewport: Viewport refers to the display area on the window (screen), which is measured in pixels in screen coordinates (excluding the title bar).
The clipping area is mapped to the viewport. We can use
glViewport function to configure the viewport.
void glViewport(GLint xTopLeft, GLint yTopLeft, GLsizei width, GLsizei height)
Suppose the the clipping area's (left, right, bottom, top) is (-1.0, 1.0, -1.0, 1.0) (in OpenGL coordinates) and the viewport's (xTopLeft, xTopRight, width, height) is (0, 0, 640, 480) (in screen coordinates in pixels), then the bottom-left corner (-1.0, -1.0) maps to (0, 0) in the viewport, the top-right corner (1.0, 1.0) maps to (639, 479). It is obvious that if the aspect ratios for the clipping area and the viewport are not the same, the shapes will be distorted.
Take note that in the earlier example, the windows' size of 320x320 has a square shape, with a aspect ratio consistent with the default 2x2 squarish clipping-area.
Example 3: Clipping-area and Viewport (GL03Viewport.cpp)
A
reshape() function, which is called back when the window first appears and whenever the window is re-sized, can be used to ensure consistent aspect ratio between clipping-area and viewport, as shown in the above example. The graphics sub-system passes the window's width and height, in pixels, into the
reshape().
GLfloat aspect = (GLfloat)width / (GLfloat)height;
We compute the aspect ratio of the new re-sized window, given its new
width and
height provided by the graphics sub-system to the callback function
reshape().
glViewport(0, 0, width, height);
We set the viewport to cover the entire new re-sized window, in pixels.
Try setting the viewport to cover only a quarter (lower-right qradrant) of the window via
glViewport(0, 0, width/2, height/2).
glMatrixMode(GL_PROJECTION); glLoadIdentity(); if (width >= height) { gluOrtho2D(-1.0 * aspect, 1.0 * aspect, -1.0, 1.0); } else { gluOrtho2D(-1.0, 1.0, -1.0 / aspect, 1.0 / aspect); }
We set the aspect ratio of the clipping area to match the viewport. To set the clipping area, we first choose the operate on the projection matrix via
glMatrixMode(GL_PROJECTION). OpenGL has two matrices, a projection matrix (which deals with camera projection such as setting the clipping area) and a model-view matrix (for transforming the objects from their local spaces to the common world space). We reset the projection matrix via
glLoadIdentity().
Finally, we invoke
gluOrtho2D() to set the clipping area with an aspect ratio matching the viewport. The shorter side has the range from -1 to +1, as illustrated below:
We need to register the
reshape() callback handler with GLUT via
glutReshapeFunc() in the
main() as follows:
int main(int argc, char** argv) { glutInitWindowSize(640, 480); ...... glutReshapeFunc(reshape); }
In the above
main() function, we specify the initial window size to
640x480, which is non-squarish. Try re-sizing the window and observe the changes.
Note that the
reshape() runs at least once when the window first appears. It is then called back whenever the window is re-shaped. On the other hand, the
initGL() runs once (and only once); and the
display() runs in response to window re-paint request (e.g., after the window is re-sized).
Translation & Rotation
In the above sample, we positioned each of the shapes by defining their vertices with respective to the same origin (called world space). It took me quite a while to figure out the absolute coordinates of these vertices.
Instead, we could position each of the shapes by defining their vertices with respective to their own center (called model space or local space). We can then use translation and/or rotation to position the shapes at the desired locations in the world space, as shown in the following revised
display() function.
Example 4: Translation and Rotation (GL04ModelTransform.cpp)
glMatrixMode(GL_MODELVIEW); // To operate on model-view matrix glLoadIdentity(); // Reset
Translation and rotation are parts of so-called model transform, which transform from the objects from the local space (or model space) to the common world space. To carry out model transform, we set the matrix mode to mode-view matrix (
GL_MODELVIEW) and reset the matrix. (Recall that in the previous example, we set the matrix mode to projection matrix (
GL_PROJECTION) to set the clipping area.)
OpenGL is operating as a state machine. That is, once a state is set, the value of the state persists until it is changed. In other words, once the coordinates are translated or rotated, all the subsequent operations will be based on this coordinates.
Translation is done via
glTranslate function:
void gltranslatef (GLfloat x, GLfloat y, GLfloat z) // where (x, y, z) is the translational vector
Take note that
glTranslatef function must be placed outside the
glBegin/
glEnd, where as
glColor can be placed inside
glBegin/
glEnd.
Rotation is done via
glRotatef function:
void glRotatef (GLfloat angle, GLfloat x, GLfloat y, GLfloat z) // where angle specifies the rotation in degree, (x, y, z) forms the axis of rotation.
Take note that the rotational angle is measured in degrees (instead of radians) in OpenGL.
In the above example, we translate within the x-y plane (z=0) and rotate about the z-axis (which is normal to the x-y plane).
Animation
Idle Function
To perform animation (e.g., rotating the shapes), you could register an
idle() callback handler with GLUT, via
glutIdleFunc command. The graphic system will call back the
idle() function when there is no other event to be processed.
void glutIdleFunc(void (*func)(void))
In the
idle() function, you could issue
glutPostRedisplay command to post a window re-paint request, which in turn will activate
display() function.
void idle() { glutPostRedisplay(); // Post a re-paint request to activate display() }
Take note that the above is equivalent to registering
display() as the
idle function.
// main glutIdleFunc(display);
Double Buffering
Double buffering uses two display buffers to smoothen animation. The next screen is prepared in a back buffer, while the current screen is held in a front buffer. Once the preparation is done, you can use
glutSwapBuffer command to swap the front and back buffers.
To use double buffering, you need to make two changes:
- In the
main(), include this line before creating the window:
glutInitDisplayMode(GLUT_DOUBLE); // Set double buffered mode
- In the
display()function, replace
glFlush()with
glutSwapBuffers(), which swap the front and back buffers.
Double buffering should be used in animation. For static display, single buffering is sufficient. (Many graphics hardware always double buffered, so it is hard to see the differences.)
Example 5: Animation using Idle Function (GL05IdleFunc.cpp)
The following program rotates all the shapes created in our previous example using idle function with double buffering.
In the above example, instead of accumulating all the translations and undoing the rotations, we use
glPushMatrix to save the current state, perform transformations, and restore the saved state via
glPopMatrix. (In the above example, we can also use
glLoadIdentity to reset the matrix before the next transformations.)
GLfloat angle = 0.0f; // Current rotational angle of the shapes
We define a global variable called
angle to keep track of the rotational angle of all the shapes. We will later use
glRotatef to rotate all the shapes to this angle.
angle += 0.2f;
At the end of each refresh (in
display()), we update the rotational angle of all the shapes.
glutSwapBuffers(); // Swap front- and back framebuffer glutInitDisplayMode(GLUT_DOUBLE); // In main(), enable double buffered mode
Instead of
glFlush() which flushes the framebuffer for display immediately, we enable double buffering and use
glutSwapBuffer() to swap the front- and back-buffer during the VSync for smoother display.
void idle() { glutPostRedisplay(); // Post a re-paint request to activate display() } glutIdleFunc(idle); // In main() - Register callback handler if no other event
We define an
idle() function, which posts a re-paint request and invoke
display(), if there is no event outstanding. We register this
idle() function in
main() via
glutIdleFunc().
Double Buffering & Refresh Rate
When double buffering is enabled,
glutSwapBuffers synchronizes with the screen refresh interval (VSync). That is, the buffers will be swapped at the same time when the monitor is putting up a new frame. As the result,
idle() function, at best, refreshes the animation at the same rate as the refresh rate of the monitor (60Hz for LCD/LED monitor). It may operates at half the monitor refresh rate (if the computations takes more than 1 refresh interval), one-third, one-fourth, and so on, because it need to wait for the VSync.
Timer Function
With
idle(), we have no control to the refresh interval. We could register a
Timer() function with GLUT via
glutTimerFunc. The
Timer() function will be called back at the specified fixed interval.
void glutTimerFunc(unsigned int millis, void (*func)(int value), value) // where millis is the delay in milliseconds, value will be passed to the timer function.
Example 6: Animation via Timer Function (GL06TimerFunc.cpp)
The following modifications rotate all the shapes created in the earlier example counter-clockwise by 2 degree per 30 milliseconds.
void Timer(int value) { glutPostRedisplay(); // Post re-paint request to activate display() glutTimerFunc(refreshMills, Timer, 0); // next Timer call milliseconds later }
We replace the
idle() function by a
timer() function, which post a re-paint request to invoke
display(), after the timer expired.
glutTimerFunc(0, Timer, 0); // First timer call immediately
In
main(), we register the
timer() function, and activate the
timer() immediately (with initial timer = 0).
More GLUT functions
glutInitDisplayMode: requests a display with the specified mode, such as color mode (
GLUT_RGB,
GLUT_RGBA,
GLUT_INDEX), single/double buffering (
GLUT_SINGLE,
GLUT_DOUBLE), enable depth (
GLUT_DEPTH), joined with a bit
OR'
|'.
void glutInitDisplayMode(unsigned int displayMode)
For example,
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH); // Use RGBA color, enable double buffering and enable depth buffer
Example 7: A Bouncing Ball (GL07BouncingBall.cpp)
This example shows a ball bouncing inside the window. Take note that circle is not a primitive geometric shape in OpenGL. This example uses
TRIANGLE_FAN to compose a circle.
[TODO] Explanation
Handling Keyboard Inputs with GLUT
We can register callback functions to handle keyboard inputs for normal and special keys, respectively.
glutKeyboardFunc: registers callback handler for keyboard event.
void glutKeyboardFunc (void (*func)(unsigned char key, int x, int y) // key is the char pressed, e.g., 'a' or 27 for ESC // (x, y) is the mouse location in Windows' coordinates
glutSpecialFunc: registers callback handler for special key (such as arrow keys and function keys).
void glutSpecialFunc (void (*func)(int specialKey, int x, int y) // specialKey: GLUT_KEY_* (* for LEFT, RIGHT, UP, DOWN, HOME, END, PAGE_UP, PAGE_DOWN, F1,...F12). // (x, y) is the mouse location in Windows' coordinates
Example 8: Switching between Full-Screen and Windowed-mode (GL08FullScreen.cpp)
For the bouncing ball program, the following special-key handler toggles between full-screen and windowed modes using F1 key.
[TODO] Explanation
[TODO] Using
glVertex to draw a Circle is inefficient (due to the compute-intensive
sin() and
cos() functions). Try using GLU's quadric.
Example 9: Key-Controlled (GL09KeyControl.cpp)
For the bouncing ball program, the following key and special-key handlers provide exits with ESC (27), increase/decrease y speed with up-/down-arrow key, increase/decrease x speed with left-/right-arrow key, increase/decrease ball's radius with PageUp/PageDown key.
[TODO] Explanation
Handling Mouse Inputs with GLUT
Similarly, we can register callback function to handle mouse-click and mouse-motion.
glutMouseFunc: registers callback handler for mouse click.
void glutMouseFunc(void (*func)(int button, int state, int x, int y) // (x, y) is the mouse-click location. // button: GLUT_LEFT_BUTTON, GLUT_RIGHT_BUTTON, GLUT_MIDDLE_BUTTON // state: GLUT_UP, GLUT_DOWN
glutMotionFunc: registers callback handler for mouse motion (when the mouse is clicked and moved).
void glutMotionFunc(void (*func)(int x, int y) // where (x, y) is the mouse location in Window's coordinates
Example 10: Mouse-Controlled (GL10MouseControl.cpp)
For the bouncing ball program, the following mouse handler pause the movement with left-mouse click, and resume with right-mouse click.
[TODO] Explanation
Example 11: A Simple Paint program
[TODO] Use mouse-motion and
GL_LINE_STRIP.
Link to OpenGL/Computer Graphics References and ResourcesLink to OpenGL/Computer Graphics References and Resources | http://www3.ntu.edu.sg/home/ehchua/programming/opengl/CG_Introduction.html | CC-MAIN-2017-17 | refinedweb | 4,460 | 55.03 |
cexp, cexpf, cexpl - complex exponential function
#include <complex.h> double complex cexp(double complex z); float complex cexpf(float complex z); long double complex cexpl(long double complex z); Link with -lm.
The function calculates e (2.71828..., the base of natural logarithms) raised to the power of z. One has: cexp(I * z) = ccos(z) + I * csin(z)
These functions first appeared in glibc in version 2.1.
C99.
cabs(3), clog(3), cpow(3), complex(7)
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. 2008-08-11 | http://huge-man-linux.net/man3/cexp.html | CC-MAIN-2021-21 | refinedweb | 109 | 68.26 |
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources.
In Chapter 1, I discuss the concepts behind the use of business objects and distributed objects. In Chapter 2, I explored the design of the business framework. Chapters 3 through 5 cover object-oriented design in general and then focus more around the specific stereotypes directly supported by CSLA .NET. In this chapter, I start walking through the implementation of the CSLA .NET framework by providing an overview of the namespaces and project structure of the framework. Then in Chapters 7 through 16, I provide detail about the implementation of each of the major features of the framework as discussed in Chapter 2.
The focus in this chapter is on the overall project structure and namespaces used to organize all the framework code and a walkthrough of the structure of the major types in the Csla and Csla.Core namespaces. | http://my.safaribooksonline.com/book/programming/csharp/9781430210191/business-framework-implementation/business_framework_implementation | CC-MAIN-2014-15 | refinedweb | 157 | 51.89 |
csKeyEventData Struct Reference
[Event handling]
Structure that collects the data a keyboard event carries. More...
#include <iutil/event.h>
Detailed Description
Structure that collects the data a keyboard event carries.
The event it self doesn't transfer the data in this structure; it is merely meant to pass around keyboard event data in a compact way within client code without having to pass around the event itself.
- See also:
- csKeyEventHelper
Definition at line 128 of file event.h.
Member Data Documentation
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4.1/structcsKeyEventData.html | CC-MAIN-2014-10 | refinedweb | 104 | 55.24 |
A brief intro to Plan 9 with no assumptions about prior knowledge
So, you have successfully booted a Plan 9 system (either a VM or native) and have more or less a blank screen staring back at you. What next? The very first thing to know is how to quit safely. The command
fshalt
stops the root fileserver and should always be used prior to exiting the system. "fshalt -r" will stop the root fs and then reboot.
Rio: The graphical user interface
Rio (or "grio" in the case of ANTS) is the main interface/window manager for Plan 9. It does not implement an interface with icons and/or "pull-down" menus. Instead it mostly just allows you to create/move/resize application windows, which run a textual shell named "rc" by default. Holding the right mouse button down brings up the basic menu of rio controls. In general, you use the right button to select an option from that menu, and then use the right button again to apply it. To make a new window, right-click and select "New" from the menu, then move the mouse cursor to the desired position and press and hold the right button, and "sweep out" the size of the new window.
rc: The textual shell
The shell is the primary interface to the system. "rc" stands for "run commands" and that is the primary job of the shell: to provide a way to run all the rest of the commands in the operating system. Many of the programs that rc runs are shared with the standard unix environment: "man" to view documentation ("man rc" for instance), "ls" to list files in the current directory ("lc" columnizes the list), "cat" to display the contents of a file, "cd" to change directories; a slew of other common commands are listed later. rc will also run graphical programs, which will then take over the window they are running in until they exit or are interrupted. The most important of these is "games/catclock", which should be run frequently and is worthy of deep contemplation. To send an interrupt note to a running program, press the delete key.
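For example, a short session in an rc window might look like this (exactly what gets listed will of course depend on your system):

cd /sys/doc          # change into the system documentation directory
lc                   # list its contents in columns
man cat              # print the manual page for cat
games/catclock       # takes over this window until interrupted with delete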
Acme: The editor/directory browser/alternate interface
Acme is an interesting program. It is rather an exception to the general Plan 9/unix philosophy that a program should only have one task: Acme is at once a text editor, a directory browser, and a whole alternative interface. It has a different model of mouse usage than the rest of the system, and some people find it integrates poorly into the whole. The main thing to know is that the words in the upper title bars of the windows are commands that will be executed if you middle-click on them. So, to make a new panel, you middle-click "New". To save a file that you are editing, you middle-click "Put". Inside panels, right-clicking acts in a way similar to following links. If a panel contains a directory listing, right-clicking on a subdirectory will open a new panel listing it, and right-clicking on a file will open that file in a new panel for editing. If you type "win" and middle-click it, a new panel will open with the rc shell running.
Mothra: The simple web browser
Mothra is a browser for "the web that we have lost" - it doesn't do javascript or multimedia, just text and simple pictures, and it eliminates most of the formatting used by modern websites. It has a url bar and history of pages visited at the top of the screen; the left button opens links, and the right mouse button opens a menu. The "save hit" and "hit list" options are for saving and viewing a set of bookmarks.
Page: The document/image viewer
Page will display pdf, ps, or image files. There are many documents about aspects of the os saved in /sys/doc. "page /sys/doc/9.ps" opens the primary paper about the design of Plan 9 (which is now well over 20 years old, but mostly still applicable). The left button can be used to drag the document within the containing window, and the right mouse button brings up a navigation menu of the pages. The middle button has an additional menu of commands such as resizing. Page can also be used to view files such as .jpg or .png images.
Winwatch: Window selection
The winwatch program shows a list of windows open within the current rio/grio. Right click on the name of a window to "surface" it, right click again to hide it. Middle-clicking allows you to change the displayed name of a window. Because rio can run inside rio, it is often useful to organize your workspace in terms of multiple subrios which you select between using a winwatch running in the top-level rio. (For convenience, I will be referring to grio as "rio" because grio has all of rio's functionality.)
Stats: Status monitoring
The stats program provides a graphical real-time view of system resource use. It can view many different aspects, but what I find most useful is to start the program with "stats -lems", which displays the load (how much work the cpu is doing), the ethernet (how much network traffic there is), the memory usage, and the rate of syscalls (requests to kernel functions). Feel free to view different menus and do different things with the system and see how they affect what stats displays.
Working in rc
Feeling comfortable working in a text-based command shell is the mark of a seasoned user of unix-related operating systems. There is a core of commands which are shared between original unix, linux, and Plan 9. Before discussing these shared commands, let's look at something Plan 9 specific:
Namespace viewing and alteration
ns
Type "ns" into a shell to view the namespace that shell is running within. The namespace can be thought of as a map of your environment - what's where. The namespace is displayed as a list of the commands that are used to create it.
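A few lines from a typical ns listing might look something like the following (the exact set of lines varies between systems and even between windows):

mount -aC '#s/boot' /root
bind -a /root /
bind '#p' /proc
bind -b '#d' /fd

Each line is a command that could be replayed to rebuild that piece of the namespace; the meaning of mount, bind, and their flags is explained below.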
mount
The mount command adds a new resource into the namespace. The basic syntax is "mount SOURCE TARGET"
bind
The bind command makes a piece of the namespace visible at a new name as well as the original. Basic syntax is "bind SOURCE TARGET"
flags to mount and bind
The mount and bind commands share several flags. The -c flag means that (given correct permissions) new files can be created in the target of the command. The -a or -b flag makes the mount/bind a "union" - the newly added mount or bind is "merged" with the target and does not replace it. The -a or -b flags choose whether the newly added files are "after" or "before" those that were there originally. In other words, if a file named "foo" exists in both old and new, this specifies which version of foo will you see if you do "cat /target/foo"
kernel devices
Many of the commands that construct the namespace use # as the source. This special character in the pathnames indicates that the source is special files provided by the kernel. The drivers for the computer hardware are provided as # trees, and several special "synthetic" file systems are also created by the kernel. For instance, the kernel creates a special purpose filesystem for viewing and manipulating running processes named 'p' so the
bind '#p' /proc
in the namespace makes this kernel-based set of files visible at /proc.
The /srv device
One of the kernel devices, bound to /srv has the special function of being the place where userspace (non-kernel) fileservers register themselves and provide a "file descriptor" for access. You will see many of the mounts into the namespace come from #s, the /srv device.
The root filesystem
The most important of all of these is the root disk-based filesystem which provides the majority of the files you interact with - the programs you run, and the data files you use or create. There are several different disk-based fileservers but they all provide similar functionality to the user. They are started at boot, and place a filedescriptor at /srv/boot, and are then mounted right at the beginning of constructing the namespace.
Standard shell commands
These are the commands that are mostly equivalent between Plan9/unix/linux. There are some differeces (the linux versions usually have quite a few more options) but the basic use is very similar. A standard convention: if you place a leading / in front of a file path, it is an absolute path from the root. Without a leading slash, it is relative to your current working directory.
- man - the most important command for new users. "man foo" prints documentation about foo.
- lookman - "lookman foo" searches the documentation index for which manpages mention foo and lists them
- ls - list the contents of the current directory. You can also supply a path such as ls /foo/bar to view the contents of bar. "lc" prints in columns.
- cd - change directory. cd / will put you in the lowest level. just "cd" on its own will put you in your home directory. cd /foo/bar changes to the bar subdirectory of foo.
- pwd - print working directory. Shows the location of your current shell.
- cat - short for "concatenate" but mostly used to view files. "cat foo" dumps the contents of foo to standard output.
- cp - copy. the syntax is "cp OLD NEW" to make NEW a copy of OLD. In plan 9, cp only works on single files, not directories.
- dircp - copy a directory. "dircp OLD NEW" puts the contents of OLD into NEW. NEW must be an already existing directory.
- mv - move a file. "mv OLD NEW" moves OLD to NEW. OLD is deleted by this operation. In plan9, directories cannot be mv.
- rm - remove. "rm TARGET" removes target. If TARGET is a directory, you need to use rm -rf TARGET. Use caution.
- echo - print to standard output. "echo foo" just prints foo.
- grep - search. "grep foo bar" searches file bar for all lines containing the string foo.
sed - stream edit. The most common use of sed is to search and replace. To make NEWFILE a copy of FILE but with all instances of foo replaced with bar, use this syntax:
sed 's/foo/bar/g' FILE >NEWFILE
date - print the current time.
fortune - print a random piece of wisdom or humor.
Shell redirections and piping
The most famous innovation of unix is its use of "pipes" - this is part of the toolbox philosphy. Because many shell tools work by default on "standard input" and write to "standard output" they can be chained together. For instance:
cat /sys/man/7/* |grep are|sed 's/e/EE/g'
This uses a shell wildcard to concatenate all files in section 7 of the manual, and pipe them to grep, which searches for all lines containing the word "are", and print them, and sed receives this as input and substitutes EE for all "e"s. In addition to the | pipe character, the > character redirects output to a file, so adding >EEman.txt to the end of the above command would save the output in that file. The < character means "take input from the specified file".
Environment variables
If you type
foo=bar
Then type
echo $foo
The output produced will be "bar" because you have created a variable named foo with bar as the contents. The current variables defined in the shell are stored in a special directory called /env so if you do
ls /env
you will see what variables are currently defined in the environment. | http://doc.9gridchan.org/guides/9start | CC-MAIN-2021-21 | refinedweb | 1,956 | 70.13 |
/* * Copyright (c) 1993. * * Author: Hans-J. Boehm ([email protected]) */ /* Boehm, October 5, 1995 4:20 pm PDT */ /* * Cords are immutable character strings. A number of operations * on long cords are much more efficient than their strings.h counterpart. * In particular, concatenation takes constant time independent of the length * of the arguments. (Cords are represented as trees, with internal * nodes representing concatenation and leaves consisting of either C * strings or a functional description of the string.) * * The following are reasonable applications of cords. They would perform * unacceptably if C strings were used: * - A compiler that produces assembly language output by repeatedly * concatenating instructions onto a cord representing the output file. * - A text editor that converts the input file to a cord, and then * performs editing operations by producing a new cord representing * the file after echa character change (and keeping the old ones in an * edit history) * * For optimal performance, cords should be built by * concatenating short sections. * This interface is designed for maximum compatibility with C strings. * ASCII NUL characters may be embedded in cords using CORD_from_fn. * This is handled correctly, but CORD_to_char_star will produce a string * with embedded NULs when given such a cord. * * This interface is fairly big, largely for performance reasons. * The most basic constants and functions: * * CORD - the type of a cord; * CORD_EMPTY - empty cord; * CORD_len(cord) - length of a cord; * CORD_cat(cord1,cord2) - concatenation of two cords; * CORD_substr(cord, start, len) - substring (or subcord); * CORD_pos i; CORD_FOR(i, cord) { ... CORD_pos_fetch(i) ... } - * examine each character in a cord. CORD_pos_fetch(i) is the char. * CORD_fetch(int i) - Retrieve i'th character (slowly). * CORD_cmp(cord1, cord2) - compare two cords. * CORD_from_file(FILE * f) - turn a read-only file into a cord. * CORD_to_char_star(cord) - convert to C string. * (Non-NULL C constant strings are cords.) * CORD_printf (etc.) - cord version of printf. Use %r for cords. */ # ifndef CORD_H # define CORD_H # include <stddef.h> # include <stdio.h> /* Cords have type const char *. This is cheating quite a bit, and not */ /* 100% portable. But it means that nonempty character string */ /* constants may be used as cords directly, provided the string is */ /* never modified in place. The empty cord is represented by, and */ /* can be written as, 0. */ typedef const char * CORD; /* An empty cord is always represented as nil */ # define CORD_EMPTY 0 /* Is a nonempty cord represented as a C string? */ #define CORD_IS_STRING(s) (*(s) != '\0') /* Concatenate two cords. If the arguments are C strings, they may */ /* not be subsequently altered. */ CORD CORD_cat(CORD x, CORD y); /* Concatenate a cord and a C string with known length. Except for the */ /* empty string case, this is a special case of CORD_cat. Since the */ /* length is known, it can be faster. */ /* The string y is shared with the resulting CORD. Hence it should */ /* not be altered by the caller. */ CORD CORD_cat_char_star(CORD x, const char * y, size_t leny); /* Compute the length of a cord */ size_t CORD_len(CORD x); /* Cords may be represented by functions defining the ith character */ typedef char (* CORD_fn)(size_t i, void * client_data); /* Turn a functional description into a cord. 
*/ CORD CORD_from_fn(CORD_fn fn, void * client_data, size_t len); /* Return the substring (subcord really) of x with length at most n, */ /* starting at position i. (The initial character has position 0.) */ CORD CORD_substr(CORD x, size_t i, size_t n); /* Return the argument, but rebalanced to allow more efficient */ /* character retrieval, substring operations, and comparisons. */ /* This is useful only for cords that were built using repeated */ /* concatenation. Guarantees log time access to the result, unless */ /* x was obtained through a large number of repeated substring ops */ /* or the embedded functional descriptions take longer to evaluate. */ /* May reallocate significant parts of the cord. The argument is not */ /* modified; only the result is balanced. */ CORD CORD_balance(CORD x); /* The following traverse a cord by applying a function to each */ /* character. This is occasionally appropriate, especially where */ /* speed is crucial. But, since C doesn't have nested functions, */ /* clients of this sort of traversal are clumsy to write. Consider */ /* the functions that operate on cord positions instead. */ /* Function to iteratively apply to individual characters in cord. */ typedef int (* CORD_iter_fn)(char c, void * client_data); /* Function to apply to substrings of a cord. Each substring is a */ /* a C character string, not a general cord. */ typedef int (* CORD_batched_iter_fn)(const char * s, void * client_data); # define CORD_NO_FN ((CORD_batched_iter_fn)0) /* Apply f1 to each character in the cord, in ascending order, */ /* starting at position i. If */ /* f2 is not CORD_NO_FN, then multiple calls to f1 may be replaced by */ /* a single call to f2. The parameter f2 is provided only to allow */ /* some optimization by the client. This terminates when the right */ /* end of this string is reached, or when f1 or f2 return != 0. In the */ /* latter case CORD_iter returns != 0. Otherwise it returns 0. */ /* The specified value of i must be < CORD_len(x). */ int CORD_iter5(CORD x, size_t i, CORD_iter_fn f1, CORD_batched_iter_fn f2, void * client_data); /* A simpler version that starts at 0, and without f2: */ int CORD_iter(CORD x, CORD_iter_fn f1, void * client_data); # define CORD_iter(x, f1, cd) CORD_iter5(x, 0, f1, CORD_NO_FN, cd) /* Similar to CORD_iter5, but end-to-beginning. No provisions for */ /* CORD_batched_iter_fn. */ int CORD_riter4(CORD x, size_t i, CORD_iter_fn f1, void * client_data); /* A simpler version that starts at the end: */ int CORD_riter(CORD x, CORD_iter_fn f1, void * client_data); /* Functions that operate on cord positions. The easy way to traverse */ /* cords. A cord position is logically a pair consisting of a cord */ /* and an index into that cord. But it is much faster to retrieve a */ /* charcter based on a position than on an index. Unfortunately, */ /* positions are big (order of a few 100 bytes), so allocate them with */ /* caution. */ /* Things in cord_pos.h should be treated as opaque, except as */ /* described below. Also note that */ /* CORD_pos_fetch, CORD_next and CORD_prev have both macro and function */ /* definitions. The former may evaluate their argument more than once. 
*/ # include "private/cord_pos.h" /* Visible definitions from above: typedef <OPAQUE but fairly big> CORD_pos[1]; * Extract the cord from a position: CORD CORD_pos_to_cord(CORD_pos p); * Extract the current index from a position: size_t CORD_pos_to_index(CORD_pos p); * Fetch the character located at the given position: char CORD_pos_fetch(CORD_pos p); * Initialize the position to refer to the given cord and index. * Note that this is the most expensive function on positions: void CORD_set_pos(CORD_pos p, CORD x, size_t i); * Advance the position to the next character. * P must be initialized and valid. * Invalidates p if past end: void CORD_next(CORD_pos p); * Move the position to the preceding character. * P must be initialized and valid. * Invalidates p if past beginning: void CORD_prev(CORD_pos p); * Is the position valid, i.e. inside the cord? int CORD_pos_valid(CORD_pos p); */ # define CORD_FOR(pos, cord) \ for (CORD_set_pos(pos, cord, 0); CORD_pos_valid(pos); CORD_next(pos)) /* An out of memory handler to call. May be supplied by client. */ /* Must not return. */ extern void (* CORD_oom_fn)(void); /* Dump the representation of x to stdout in an implementation defined */ /* manner. Intended for debugging only. */ void CORD_dump(CORD x); /* The following could easily be implemented by the client. They are */ /* provided in cordxtra.c for convenience. */ /* Concatenate a character to the end of a cord. */ CORD CORD_cat_char(CORD x, char c); /* Concatenate n cords. */ CORD CORD_catn(int n, /* CORD */ ...); /* Return the character in CORD_substr(x, i, 1) */ char CORD_fetch(CORD x, size_t i); /* Return < 0, 0, or > 0, depending on whether x < y, x = y, x > y */ int CORD_cmp(CORD x, CORD y); /* A generalization that takes both starting positions for the */ /* comparison, and a limit on the number of characters to be compared. */ int CORD_ncmp(CORD x, size_t x_start, CORD y, size_t y_start, size_t len); /* Find the first occurrence of s in x at position start or later. */ /* Return the position of the first character of s in x, or */ /* CORD_NOT_FOUND if there is none. */ size_t CORD_str(CORD x, size_t start, CORD s); /* Return a cord consisting of i copies of (possibly NUL) c. Dangerous */ /* in conjunction with CORD_to_char_star. */ /* The resulting representation takes constant space, independent of i. */ CORD CORD_chars(char c, size_t i); # define CORD_nul(i) CORD_chars('\0', (i)) /* Turn a file into cord. The file must be seekable. Its contents */ /* must remain constant. The file may be accessed as an immediate */ /* result of this call and/or as a result of subsequent accesses to */ /* the cord. Short files are likely to be immediately read, but */ /* long files are likely to be read on demand, possibly relying on */ /* stdio for buffering. */ /* We must have exclusive access to the descriptor f, i.e. we may */ /* read it at any time, and expect the file pointer to be */ /* where we left it. Normally this should be invoked as */ /* CORD_from_file(fopen(...)) */ /* CORD_from_file arranges to close the file descriptor when it is no */ /* longer needed (e.g. when the result becomes inaccessible). */ /* The file f must be such that ftell reflects the actual character */ /* position in the file, i.e. the number of characters that can be */ /* or were read with fread. On UNIX systems this is always true. On */ /* MS Windows systems, f must be opened in binary mode. 
*/ CORD CORD_from_file(FILE * f); /* Equivalent to the above, except that the entire file will be read */ /* and the file pointer will be closed immediately. */ /* The binary mode restriction from above does not apply. */ CORD CORD_from_file_eager(FILE * f); /* Equivalent to the above, except that the file will be read on demand.*/ /* The binary mode restriction applies. */ CORD CORD_from_file_lazy(FILE * f); /* Turn a cord into a C string. The result shares no structure with */ /* x, and is thus modifiable. */ char * CORD_to_char_star(CORD x); /* Turn a C string into a CORD. The C string is copied, and so may */ /* subsequently be modified. */ CORD CORD_from_char_star(const char *s); /* Identical to the above, but the result may share structure with */ /* the argument and is thus not modifiable. */ const char * CORD_to_const_char_star(CORD x); /* Write a cord to a file, starting at the current position. No */ /* trailing NULs are newlines are added. */ /* Returns EOF if a write error occurs, 1 otherwise. */ int CORD_put(CORD x, FILE * f); /* "Not found" result for the following two functions. */ # define CORD_NOT_FOUND ((size_t)(-1)) /* A vague analog of strchr. Returns the position (an integer, not */ /* a pointer) of the first occurrence of (char) c inside x at position */ /* i or later. The value i must be < CORD_len(x). */ size_t CORD_chr(CORD x, size_t i, int c); /* A vague analog of strrchr. Returns index of the last occurrence */ /* of (char) c inside x at position i or earlier. The value i */ /* must be < CORD_len(x). */ size_t CORD_rchr(CORD x, size_t i, int c); /* The following are also not primitive, but are implemented in */ /* cordprnt.c. They provide functionality similar to the ANSI C */ /* functions with corresponding names, but with the following */ /* additions and changes: */ /* 1. A %r conversion specification specifies a CORD argument. Field */ /* width, precision, etc. have the same semantics as for %s. */ /* (Note that %c,%C, and %S were already taken.) */ /* 2. The format string is represented as a CORD. */ /* 3. CORD_sprintf and CORD_vsprintf assign the result through the 1st */ /* argument. Unlike their ANSI C versions, there is no need to guess */ /* the correct buffer size. */ /* 4. Most of the conversions are implement through the native */ /* vsprintf. Hence they are usually no faster, and */ /* idiosyncracies of the native printf are preserved. However, */ /* CORD arguments to CORD_sprintf and CORD_vsprintf are NOT copied; */ /* the result shares the original structure. This may make them */ /* very efficient in some unusual applications. */ /* The format string is copied. */ /* All functions return the number of characters generated or -1 on */ /* error. This complies with the ANSI standard, but is inconsistent */ /* with some older implementations of sprintf. */ /* The implementation of these is probably less portable than the rest */ /* of this package. */ #ifndef CORD_NO_IO #include <stdarg.h> int CORD_sprintf(CORD * out, CORD format, ...); int CORD_vsprintf(CORD * out, CORD format, va_list args); int CORD_fprintf(FILE * f, CORD format, ...); int CORD_vfprintf(FILE * f, CORD format, va_list args); int CORD_printf(CORD format, ...); int CORD_vprintf(CORD format, va_list args); #endif /* CORD_NO_IO */ # endif /* CORD_H */ | http://opensource.apple.com//source/gcc/gcc-5363/boehm-gc/include/cord.h | CC-MAIN-2016-44 | refinedweb | 2,002 | 66.94 |
Instead of "static", shall i use "unsigned char" in my program code. What will be change in the code ?
Through Google search I found out " use of static keyword with a variable that is local to a function, it allows the last value of the variable to be preserved between successive calls to that function"
#include <stdio.h> int *Randombits( ) { static int result[33]; int bit; int i=0; static int reg = 0xACF5; // starting value, generates different patterns with each call to Randombits for (i=0;i<=32;i++) { bit = ((reg >> 0) ^ (reg >> 2) ^ (reg >> 3) ^ (reg >> 5) ) & 1; reg = (reg >> 1) | (bit << 15); result [i] = bit; } return result; } int main () { /* a pointer to an int */ int *p; int i; p = Randombits(); for ( i = 0; i < 30; i++ ) { printf("%d", *(p + i) ); } return 0; }
This post has been edited by ndc85430: 18 May 2017 - 10:56 PM
Reason for edit:: Added code tags. | https://www.dreamincode.net/forums/topic/404086-for-pseudo-random-number-generator-why-we-using-static-keyword/ | CC-MAIN-2020-45 | refinedweb | 153 | 60.32 |
schema
example needs to exist and the connection parameters need to be correct.
import 'package:ddo/ddo.dart'; import 'package:ddo/drivers/ddo_mysql.dart'; main() async { Driver driver = new DDOMySQL('127.0.0.1', 'example', 'root', 'password'); DDO ddo = new DDO(driver); await ddo.exec('DROP TABLE IF EXISTS person'); await ddo.exec(''' CREATE TABLE IF NOT EXISTS `person` ( `id` INT NOT NULL AUTO_INCREMENT, `name` VARCHAR(200) NOT NULL, `created` DATE NOT NULL, PRIMARY KEY (`id`)); '''); var now = new DateTime.now().toIso8601String(); for(var x = 1; x <= 10; x++) { await ddo.exec("INSERT INTO person (`name`, `created`) VALUES ('person-${x}', '${now}')"); } DDOStatement stmt = await ddo.query('select * from person'); var results = stmt.fetchAll(DDO.FETCH_ASSOC); for(Map<String, dynamic> row in results) { for (String cName in row.keys) { print("Column '${cName}' has value '${row[cName]}'"); } } } | https://pub.dartlang.org/documentation/ddo/latest/ | CC-MAIN-2018-30 | refinedweb | 135 | 53.47 |
Hi.
I have trouble linking an application against a static library in xcode. The static library is created with the introducer and uses some juce functions internally. I've created a small test library and it has the same problems. Here's what I did:
1. created a static library project with juce. It just has one function:
#include "test.h" #include "juceHeader.h" const char * testfunction() { juce::String test("test"); return test.toRawUTF8(); }
2. compiled the library without errors.
3. created a project for a console application in xcode. In build phases I added my library to "link binary with libraries". It shows up in the project and the path is added to the search path.
4. Added some code to test:
#include <iostream> extern const char * testfunction(); int main(int argc, const char * argv[]) { std::cout << testfunction() << std::endl; std::cin.get(); return 0; }
5. When compiling this, i get:
Undefined symbols for architecture x86_64:
"_CFArrayGetValueAtIndex", referenced from:
juce::SystemStats::getDisplayLanguage() in libNewProject.a(juce_core.o)
"_CFLocaleCopyCurrent", referenced from:
juce::SystemStats::getUserLanguage() in libNewProject.a(juce_core.o)
juce::SystemStats::getUserRegion() in libNewProject.a(juce_core.o)
"_CFLocaleCopyPreferredLanguages", referenced from:
juce::SystemStats::getDisplayLanguage() in libNewProject.a(juce_core.o)
... and a lot more of those. Compiling libraries for windows and android works just fine. I'm not a frequent mac user though. Maybe I'm missing something? (I did check that the library as well as the test app are compiled for x86_64. ) I also verified with:
nm -gU libNewProject.a | grep 'getDisplayLanguage'
and the library has this symbol:
000000000005f020 T __ZN4juce11SystemStats18getDisplayLanguageEv
Any help on this would be great. I'm completely clueless about what is wrong. | https://forum.juce.com/t/xcode-linker-problem/15356 | CC-MAIN-2020-40 | refinedweb | 276 | 53.68 |
- Compare Two Strings
In this article, you will learn and get code on comparing of two strings in C++. The program is created in following ways:
- Compare Two Strings without using strcmp() Function
- Using strcmp() Function
Compare Two Strings without strcmp()
To compare two strings in C++ programming, you have to ask from user to enter the two strings and compare them without using any type of library function like strcmp() as shown in the program given below. Let's have a look at the program first, will get the explanation later on:
#include<iostream> using namespace std; int main() { char str1[50], str2[50]; int i=0, chk=0; cout<<"Enter the First String: "; cin>>str1; cout<<"Enter the Second String: "; cin>>str2; while(str1[i]!='\0' || str2[i]!='\0') { if(str1[i]!=str2[i]) { chk = 1; break; } i++; } if(chk==0) cout<<"\nStrings are Equal"; else cout<<"\nStrings are not Equal"; cout<<endl; return 0; }
This program was build and run under Code::Blocks IDE. Here is its sample run:
Now enter the value of first string say codescracker and then again enter the value of second string
say codescracker. Press
ENTER key to see the output as shown in the snapshot given below:
Here is another sample run, with user input codescracker as first string and crackercodes as second string:
And here is the last sample run with user input code as first string and codes as second input:
When user enters first string, then it gets initialized to str1 in a way that (supposing user enters code as first string):
- First character (c) gets initialized to str1[0]
- Second character (o) gets initialized to str1[1]
- Similarly str1[2]=d, str1[3]=e
- Then a null terminated character \0 automatically assigned after the last character of entered string, so str[4]=\0
In similar way, the second string gets initialized to str2. Now with user input, code and codes as first and second string, the dry run of above program goes like:
- Inside the while loop, the condition:
str1[i]!='\0' || str2[i]!='\0'
since initially i=0, therefore after replacing i with 0, the condition becomes
str1[0]!='\0' || str2[0]!='\0'
on putting the character present at 0th index of both the string
c!='\0' || c!='\0'
then the condition evaluates to be true.
- Because neither first character of first string nor first character of second string is equal to null terminated character \0
- Therefore program flow goes inside the loop and evaluates the condition of if statement, that is
str1[i]!=str2[i]
or
c!=c
evaluates to be false. Therefore program flow does not goes inside the if's body, rather increments the value of i and goes back to the condition of while loop again
- Because i=1 now, therefore proceed the same process using the updated value of i
- Process continues until any of the two condition inside while loop evaluates to be false. Or if the condition of if (inside the loop) evaluates to be true, therefore program flow goes inside the if's body and initializes 1 to chk, then using break keyword, program flow quit the while loop for further execution
- After exiting from the while loop (either by evaluating its condition as false, or by evaluating the if's condition as true), check for the value of chk in a way that, if it holds its initial value (0), then program flow never goes inside the if's body. That means, no any character at same index (of both string) mismatched. It means, string are equal. Otherwise if any mismatched found, then strings are not equal
What if User enters Strings of Unequal length ?
As you can see the output with two strings code and codes, the while loop runs 4 times (comparing code, first 4 character of first string, with code, first 4 character of second string) without any reason. Because if the length of two strings are not equal, then it is surely that strings can not be same.
So we do not need to evaluate while loop in that case. Therefore the program given below first finds the length of string and then checks whether length of both strings are equal or not. If it is equal, then will proceed to compare the strings. Otherwise print a message like unequal length or anything else (depends on you):
#include<iostream> using namespace std; int main() { char str1[50], str2[50]; int i, chk=0, len1=0, len2=0; cout<<"Enter the First String: "; cin>>str1; cout<<"Enter the Second String: "; cin>>str2; i=0; while(str1[i]!='\0') { len1++; i++; } i=0; while(str2[i]!='\0') { len2++; i++; } if(len1==len2) { i=0; while(str1[i]!='\0' || str2[i]!='\0') { if(str1[i]!=str2[i]) { chk = 1; break; } i++; } if(chk==0) cout<<"\nStrings are Equal"; else cout<<"\nStrings are not Equal"; } else cout<<"\nString Length must has to be same, to Compare!"; cout<<endl; return 0; }
Here is its sample run with user input code and codes as first and second string:
Compare Two Strings using strcmp()
With the help of library functions, the program becomes smaller. Because we do not need to write some extra code like finding length of string or comparing string, that can easily be processed with the help of library function like strlen() or strcmp().
The function, strcmp() takes two string as argument and returns 0 if both strings are equal. And the function strlen() takes single string as argument and returns its length.
#include<iostream> #include<string.h> using namespace std; int main() { char str1[50], str2[50]; int len1, len2; cout<<"Enter the First String: "; cin>>str1; cout<<"Enter the Second String: "; cin>>str2; len1 = strlen(str1); len2 = strlen(str2); if(len1==len2) { if(strcmp(str1, str2)==0) cout<<"\nStrings are Equal"; else cout<<"\nStrings are not Equal"; } else cout<<"\nStrings are not Equal"; cout<<endl; return 0; }
Here is its sample run with user input codescracker as first and second string both:
Same Program in Other Languages
« Previous Program Next Program » | https://codescracker.com/cpp/program/cpp-program-compare-two-string.htm | CC-MAIN-2022-21 | refinedweb | 1,016 | 59.67 |
Docker¶
Tutorials¶
Installation¶
Directly type
docker in the terminal,
$ docker Command 'docker' not found, but can be installed with: sudo snap install docker # version 19.03.11, or sudo apt install docker.io See 'snap info docker' for additional versions.
then run
sudo apt install docker.io
Without permisson, it will report the following message
$ docker version Client: Version: 19.03.6 API version: 1.40 Go version: go1.12.17 Git commit: 369ce74a3c Built: Fri Feb 28 23:45:43 2020 OS/Arch: linux/amd64 Experimental: false Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get: dial unix /var/run/docker.sock: connect: permission denied
To avoid permission issue,
sudo usermod -aG docker $USER
But it is necessary to log out and log back in to re-evaluate the group membership.
change the root folder¶
to save space, I want to change the image installation directory:
$ sudo vi /etc/docker/daemon.json { "data-root": "/new/path/to/docker-data" } $ sudo systemctl daemon-reload $ sudo systemctl restart docker
where the official explanation is that
data-rootis the path where persisted data such as images, volumes, and cluster state are stored. The default value is /var/lib/docker
we can validate it with the
hello-world image,
$ docker image pull hello-world # or docker image pull library/hello-world $ docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE hello-world latest d1165f221234 7 weeks ago 13.3kB $ docker image inspect d1165 [ { "Id": "sha256:d1165f2212346b2bab48cb01c1e39ee8ad1be46b87873d9ca7a4e434980a7726", "RepoTags": [ "hello-world:latest" ... "GraphDriver": { "Data": { "Merged/merged", "Upper/diff", "Work/work" }, ...
R¶
r-base¶
The images are hosted on Docker Official Images: r-base.
Without pulling,
docker pull r-base:4.1.0
we can directly run the following code,
docker run -it r-base:4.1.0
then the R session would appear. We can install packages as usual, such as
> install.packages("dplyr")
but note that the modification would discard after quitting. So it is necessary to save the changes (refer to How to Commit Changes to a Docker Image with Examples) via
$ docker commit container-ID new-image-name
Next time, we can run with the installed packages,
$ docker run -it new-image-name
If we want to plot, then it is necessary to forward X11, see Alternatives to ssh X11-forwarding for Docker containers for more details. Or as @edd suggested in Enable plotting for r-base container via X-Forwarding, a better one might be to use a container with RStudio Server.
XAMPP¶
XAMPP is a free and open-source cross-platform (X) web server solution stack package developed by Apache Friends, consisting mainly of the Apache HTTP Server (A), MariaDB database (M) (formerly MySQL), and interpreters for scripts written in the PHP (P) and Perl (P) programming languages.
Info
Here is a great docker image!
Start via
#$ docker pull tomsik68/xampp $ docker run --name myXAMPP -p 41061:22 -p 41062:80 -d -v ~/my_web_pages:/www tomsik68/xampp:8
Tip
docker runand
docker container runare exactly the same
- since
docker runwill automatically download the image if no installed, then
docker pullis unnecessary.
-v /HOST-DIR:/CONTAINER-DIRcreates a bind mount.
-p hostPort:containerPortpublishes the container’s port to the host.
-druns the container in the background and print the new container ID.
More details can be checked via
man docker-run.
Then we can see the container via
$ docker container ls CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 43e5a49cbfd5 tomsik68/xampp "sh /startup.sh" 18 seconds ago Up 17 seconds 3306/tcp, 0.0.0.0:41061->22/tcp, 0.0.0.0:41062->80/tcp myXAMPP
Stop via
#$ docker container stop/kill [containerID] $ docker stop/kill [containerID] # then $ docker stop 43e5 43e5 $ docker container ls CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Tip
- similarly,
docker container stop/killcan be abbreviated as
docker stop/kill
kill向容器里面的主进程发出 SIGKILL 信号,而
stop发出 SIGTERM 信号,然后过一段时间再发出 SIGKILL 信号。两者差异是,应用程序收到 SIGTERM 信号后,可以自行进行收尾清理工作,但也可以不理会这个信号。如果收到 SIGKILL 信号,就会强行立即终止,那些正在进行的操作会全部丢失。
containerID无需写全,只要能区分即可
- since we have specified the name via
--name myXAMPP, we can replace the containerID with such name.
Restart via
# find the container ID $ docker container ls -a $ docker container start [containerID]/[containerNAME]
Tip
docker container lsonly shows the running ones, but
-awill show all containers. More details can be found in
man docker-container-ls
Establish a ssh connection,
$ ssh [email protected] -p 41061
it sounds like the port-forwarding if we view the container as another linux machine.
Info
Both default username and password are
root.
Alternatively, we can get a shell terminal insider the container, just like ssh,
$ docker exec -it myXAMPP bash
Tip
If we are inside the container, we can export the path to use the commands provided by XAMPP,
# inside docker container export PATH=/opt/lampp/bin:$PATH # or add it to `.bashrc` of the container
If we modified the configuration of XAMPP, we need to restart the Apache server via
docker exec myXAMPP /opt/lampp/lampp restart
Python (for a non-internet env)¶
First of all, write a dockerfile
cat Dockerfile FROM python:3.7 RUN pip install jieba
then build it with
$ docker image build -t py37jieba:0.0.1 .
and test it locally with
$ docker run -it py37jieba:0.0.1 Python 3.7.10 (default, May 12 2021, 16:05:48) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import jieba >>> jieba.cut("他来到了网易杭研大厦") <generator object Tokenizer.cut at 0x7fda1c981bd0> >>> print(", ".join(jieba.cut("他来到了网易杭研大厦"))) Building prefix dict from the default dictionary ... Dumping model to file cache /tmp/jieba.cache Loading model cost 0.924 seconds. Prefix dict has been built successfully. 他, 来到, 了, 网易, 杭研, 大厦
save the image with
$ docker save py37jieba:0.0.1 | gzip > py37jieba-0.0.1.tar.gz
refer to hubutui/docker-for-env-without-internet-access
no space left on device: unknown¶
$ docker system prune
refer to Docker error : no space left on device
ownership of generated files¶
The default ownership file is
root, and fail to accessed.
We can specify the owner if necessary,
$ docker run -v $PWD:/root -w /root -u $UID:$UID -it py37jieba:0.0.5 bash
where the valid formats from
man docker-run include
--user [user | user:group | uid | uid:gid | user:gid | uid:group ]
but a direct username
weiya throws,
docker: Error response from daemon: unable to find user weiya: no matching entries in passwd file. ERRO[0000] error waiting for container: context canceled
with a specific uid, the prompt displays
I have no name!@afea1a0b4bd7:/root$ echo "own by weiya" > test_ownship.txt
Test with
1000 and
1000:1000 respectively, the results are
-rw-r--r-- 1 weiya weiya 13 Jul 14 13:36 test_ownship2.txt -rw-r--r-- 1 weiya root 13 Jul 14 12:04 test_ownship.txt
refer to Files created by Docker container are owned by root
Run in background¶
Suppose I want to run a script in the background, the classical way is
$ python test.py &
But with docker, the correct way is to add option
-d.
$ docker run -d
refer to How to run a docker container in the background or detached mode in the terminal?
- directly appending
&did not work.
nohup docker ... &also failed
Info
See SZmedinfo/issues/11 for some failed attempts
Another related task is to run the script in the background of docker, which can keep the container running forever,
RUN bash -c "sh script.sh & tail -F /dev/null"
refer to Run a background script inside a docker container
from time import sleep sleep(30) f = open("test.txt", "w") f.write("test") f.close()
write out log file, do not use the abbreviated version,
&> since it is only supported in bash, and instead write the full format
RUN sh -c 'python test.py > out.log 2>&1' | https://tech.hohoweiya.xyz/dev/docker/ | CC-MAIN-2022-27 | refinedweb | 1,311 | 54.83 |
Python regular expressions for beginners: what it is, why and what for
Over the past few years, machine learning, data science, and related industries have made great strides forward. More and more companies and developers are using Python and JavaScript to work with data.
And this is where we need regular expressions. Whether parsing all or portions of text from web pages, analyzing Twitter data, or preparing data for text analysis – regular expressions come to the rescue.
By the way, Alexey Nekrasov, the leader of the Python department at MTS, and the program director of the Python department at Skillbox, added his advice on some functions. To make it clear where the translation is, and where are the comments, we will highlight the latter with a quote.
Why are regular expressions needed?
They help to quickly solve a variety of tasks when working with data:
- Determine the required data format, including phone number or e-mail address.
- Split strings into substrings.
- Search, extract and replace characters.
- Perform non-trivial operations quickly.
The good news is that the syntax of most of these expressions is standardized, so you need to understand them once, after which you can use them anytime, anywhere. And not only in Python, but also in any other programming languages.
When are regular expressions unnecessary? When there is a similar built-in function in Python, and there are quite a few of them.
What about regular expressions in Python?
There is a special re module here, which is exclusively for working with regular expressions. This module needs to be imported, after which you can start using regulars.
import re
As for the most popular methods provided by the module, here they are:
- re.match ()
- re.search ()
- re.findall ()
- re.split ()
- re.sub ()
- re.compile ()
Let’s take a look at each of them.
re.match (pattern, string)
The method is designed to search for a given pattern at the beginning of a string. So, if you call the match () method on the line “AV Analytics AV” with the template “AV”, then it will be completed successfully.
import re result = re.match(r'AV', 'AV Analytics Vidhya AV') print(result) Результат: <_sre.SRE_Match object at 0x0000000009BE4370>
Here we found the required substring. The group () method is used to display its contents. This uses “r” in front of the template string to indicate that it is a raw string in Python.
result = re.match(r'AV', 'AV Analytics Vidhya AV') print(result.group(0)) Результат: AV
Okay, now let’s try to find “Analythics” on the same line. We won’t succeed, since the line begins with “AV”, the method returns none:
result = re.match(r'Analytics', 'AV Analytics Vidhya AV') print(result) Результат: None
The start () and end () methods are used to find out the start and end position of the found string.
result = re.match(r'AV', 'AV Analytics Vidhya AV') print(result.start()) print(result.end()) Результат: 0 2
All of these methods are extremely useful when working with strings.
re.search (pattern, string)
This method is similar to match (), but the difference is that it searches not only at the beginning of a string. So, search () returns an object if we try to find “Analythics”.
result = re.search(r'Analytics', 'AV Analytics Vidhya AV') print(result.group(0)) Результат: Analytics
As for the search () method, it searches the entire string, returning, however, only the first match it finds.
re.findall (pattern, string)
Here we have a return of all found matches. For example, the findall () method has no restrictions on whether to search at the beginning or end of a line. For example, if you search for “AV” in a string, then we get all occurrences of “AV” returned. It is recommended to use this method for searching, since it knows how to work both re.search () and re.match ().
result = re.findall(r'AV', 'AV Analytics Vidhya AV') print(result) Результат: ['AV', 'AV']
re.split (pattern, string, [maxsplit=0])
This method splits a string according to a given pattern.
result = re.split(r'y', 'Analytics') print(result) Результат: ['Anal', 'tics']
In this example, the word “Analythics” is separated by the letter “y”. The split () method here also accepts a maxsplit argument with a default value of 0. Thus, it splits the string as many times as possible. However, if you specify this argument, then the division cannot be performed more than the specified number of times. Here are some examples:
result = re.split(r'i', 'Analytics Vidhya') print(result) Результат: ['Analyt', 'cs V', 'dhya'] # все возможные участки. result = re.split(r'i', 'Analytics Vidhya', maxsplit=1) print(result) Результат: ['Analyt', 'cs Vidhya']
Here the maxsplit parameter is set to 1, which results in the line being split into two instead of three.
re.sub (pattern, repl, string)
Helps to find a pattern in a string, replacing it with the specified substring. If the desired item is not found, then the string remains unchanged.
result = re.sub(r'India', 'the World', 'AV is largest Analytics community of India') print(result) Результат: 'AV is largest Analytics community of the World'
re.compile (pattern, repl, string)
Here we can collect the regular expression into an object, which in turn can be used for searching. This option avoids rewriting the same expression.
pattern = re.compile('AV') result = pattern.findall('AV Analytics Vidhya AV') print(result) result2 = pattern.findall('AV is largest analytics community of India') print(result2) Результат: ['AV', 'AV'] ['AV']
Up to this point, we have considered the option with the search for a specific sequence of characters? In this case, there is no pattern, the set of characters must be returned in the order corresponding to certain rules. This is a common task when dealing with retrieving information from strings. And this is easy to do, you just need to write an expression using a special. characters. The most common ones are:
- … Any single character except newline n.
- ? 0 or 1 occurrence of the pattern to the left
- + 1 or more occurrences of the pattern on the left
- * 0 or more occurrences of the pattern on the left
- w Any number or letter ( W – everything except letter or number)
- d Any digit [0-9] ( D – everything except digit)
- s Any whitespace character ( S is any non-whitespace character)
- b Word boundary
- [..] One of the characters in parentheses ([^..] – any character except those in brackets)
- Escaping special characters (. Stands for period or + for plus sign)
- ^ and $ Beginning and end of line respectively
- {n, m} n to m occurrences ({, m} – 0 to m)
- a | b Matches a or b
- () Groups the expression and returns the found text
- t, n, r Tab, newline, and carriage return, respectively
It is clear that there may be more symbols. Information about them can be found in documentation for regular expressions in Python 3.
Some examples of using regular expressions
Example 1. Returning the first word from a string
Let’s first try to get each character using (.)
result = re.findall we will do the same, but so that the final result does not include a space, we use w instead of (.)
result = re.findall(r let’s do a similar operation with each word. We use in this case * or +.
result = re.findall(r'w*', 'AV is largest Analytics community of India') print(result) Результат: ['AV', '', 'is', '', 'largest', '', 'Analytics', '', 'community', '', 'of', '', 'India', '']
But even here, as a result, there were gaps. Reason – * means “zero or more characters”. The “+” will help us remove them.
result = re.findall(r'w+', 'AV is largest Analytics community of India') print(result) Результат: ['AV', 'is', 'largest', 'Analytics', 'community', 'of', 'India']
Now let’s extract the first word using
^:
result = re.findall(r'^w+', 'AV is largest Analytics community of India') print(result) Результат: ['AV']
But if you use $ instead of ^, then we get the last word, not the first:
result = re.findall(r'w+$', 'AV is largest Analytics community of India') print(result) Результат: [‘India’]
Example 2. Returning two characters of each word
Here, as above, there are several options. In the first case, using w, we extract two consecutive characters, except for those with spaces, from each word:
result = re.findall(r'ww', 'AV is largest Analytics community of India') print(result) Результат: ['AV', 'is', 'la', 'rg', 'es', 'An', 'al', 'yt', 'ic', 'co', 'mm', 'un', 'it', 'of', 'In', 'di']
Now we try to extract two consecutive characters using the word boundary character ( b):
result = re.findall(r'bw.', 'AV is largest Analytics community of India') print(result) Результат: ['AV', 'is', 'la', 'An', 'co', 'of', 'In']
Example 3. Returning domains from a list of email addresses.
In the first step, we return all characters after the @:
result = re.findall(r'@w+', '[email protected], [email protected], [email protected], [email protected]') print(result) Результат: ['@gmail', '@test', '@analyticsvidhya', '@rest']
As a result, the parts “.com”, “.in”, etc. do not end up in the result. To fix this, you need to change the code:
result = re.findall(r'@w+.w+', '[email protected], [email protected], [email protected], [email protected]') print(result) Результат: ['@gmail.com', '@test.in', '@analyticsvidhya.com', '@rest.biz']
The second solution to the same problem is to extract only the top-level domain using “()”:
result = re.findall(r'@w+.(w+)', '[email protected], [email protected], [email protected], [email protected]') print(result) Результат: ['com', 'in', 'com', 'biz']
Example 4. Getting a date from a string
To do this, you need to use d
result = re.findall(r'd{2}-d{2}-d{4}', 'Amit 34-3456 12-05-2007, XYZ 56-4532 11-11-2011, ABC 67-8945 12-01-2009') print(result) Результат: ['12-05-2007', '11-11-2011', '12-01-2009']
To extract only the year, the parentheses help:
result = re.findall(r'd{2}-d{2}-(d{4})', 'Amit 34-3456 12-05-2007, XYZ 56-4532 11-11-2011, ABC 67-8945 12-01-2009') print(result) Результат: ['2007', '2011', '2009']
Example 5. Extracting words starting with a vowel
At the first stage, you need to return all the words:
result = re.findall(r'w+', 'AV is largest Analytics community of India') print(result) Результат: ['AV', 'is', 'largest', 'Analytics', 'community', 'of', 'India']
After that, only those that begin with certain letters, using “[]”:
result = re.findall(r'[aeiouAEIOU]w+', 'AV is largest Analytics community of India') print(result) Результат: ['AV', 'is', 'argest', 'Analytics', 'ommunity', 'of', 'India']
In the resulting example, there are two shortened words, “argest” and “ommunity”. In order to remove them, you need to use b, which is necessary to denote a word boundary:
result = re.findall(r'b[aeiouAEIOU]w+', 'AV is largest Analytics community of India') print(result) Результат: ['AV', 'is', 'Analytics', 'of', 'India']
Alternatively, you can use and ^ inside square brackets to help invert groups:
result = re.findall(r'b[^aeiouAEIOU]w+', 'AV is largest Analytics community of India') print(result) Результат: [' is', ' largest', ' Analytics', ' community', ' of', ' India']
Now we need to remove words with a space, for which we include the space in the range in square brackets:
result = re.findall(r'b[^aeiouAEIOU ]w+', 'AV is largest Analytics community of India') print(result) Результат: ['largest', 'community']
Example 6. Checking the format of a telephone number
In our example, the length of the number is 10 characters, it starts with 8 or 9. To check the list of phone numbers, use:
li = ['9999999999', '999999-999', '99999x9999'] for val in li: if re.match(r'[8-9]{1}[0-9]{9}', val) and len(val) == 10: print('yes') else: print('no') Результат: yes no no
Example 7. Splitting a string into multiple delimiters
Here we have several solutions. Here’s the first one:
line="asdf fjdk;afed,fjek,asdf,foo" # String has multiple delimiters (";",","," "). result = re.split(r'[;,s]', line) print(result) Результат: ['asdf', 'fjdk', 'afed', 'fjek', 'asdf', 'foo']
Alternatively, the re.sub () method can be used to replace all delimiters with spaces:
line="asdf fjdk;afed,fjek,asdf,foo" result = re.sub(r'[;,s]', ' ', line) print(result) Результат: asdf fjdk afed fjek asdf foo
Example 8. Extracting data from an html file
In this example, we extract data from an html file that is enclosed between and, except for the first column with a number. We also assume that the html code is contained in the string.
Sample file
In order to solve this problem, we perform the following operation:
result=re.findall(r'<td>w+</td>s<td>(w+)</td>s<td>(w+)</td>',str) print(result) Output: [('Noah', 'Emma'), ('Liam', 'Olivia'), ('Mason', 'Sophia'), ('Jacob', 'Isabella'), ('William', 'Ava'), ('Ethan', 'Mia'), ('Michael', 'Emily')]
Alexey’s comment
When writing any regex in the code, adhere to the following rules:
- Use re.compile for any more or less complex and long regular expressions. Also, avoid calling re.compile multiple times on the same regex.
- Write verbose regular expressions using the optional re.VERBOSE argument. When re.compile use the re.VERBOSE flag, write regex on multiple lines with comments on what is going on. See documentation on links here and here…
Example:
compact view
pattern = '^M{0,3}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$' re.search(pattern, 'MDLV')
Detail view)
Use named capture group for all capture groups if there is more than one (? P …). (even if there is only one capture, it is also better to use).
regex101.com is a great site for debugging and checking regex
When developing a regular expression, you must not forget about its complexity, otherwise you can step on the same rake that Cloudflare did relatively recently. | https://prog.world/python-regular-expressions-for-beginners-what-it-is-why-and-what-for/ | CC-MAIN-2022-21 | refinedweb | 2,280 | 55.24 |
Provides a generic brain tree item to hold real time data. More...
#include <networktreeitem.h>
Provides a generic brain tree item to hold real time data.
NetworkTreeItem provides a generic item to hold information about real time connectivity data to plot onto the brain surface.
Definition at line 101 of file networktreeitem.h.
Const shared pointer type for NetworkTreeItem class.
Definition at line 107 of file networktreeitem.h.
Shared pointer type for NetworkTreeItem class.
Definition at line 106 of file networktreeitem.h.
Default constructor.
Definition at line 86 of file networktreeitem.cpp.
Adds actual rt connectivity data which is streamed by this item's worker thread item. In order for this function to worker, you must call init(...) beforehand.
Definition at line 142 of file networktreeitem.cpp.
This function set the threshold values.
Definition at line 166 of file networktreeitem.cpp. | https://mne-cpp.github.io/doxygen-api/a02215.html | CC-MAIN-2022-27 | refinedweb | 141 | 53.37 |
Re: Misuse of XML namespaces; call for help in marshalling arguments
Discussion in 'XML' started by Peter Flynn, Aug 6,:
- 373
- Simon North
- Aug 5, 2004
Page won't validate -- misuse of A element?Michael Laplante, May 18, 2006, in forum: HTML
- Replies:
- 3
- Views:
- 480
- Jonathan N. Little
- May 18, 2006
Re: Misuse of <tab>John Roth, Jul 30, 2003, in forum: Python
- Replies:
- 8
- Views:
- 380
- Robin Munn
- Aug 12, 2003
Re: Misuse of <tab>Michael Sampson, Jul 30, 2003, in forum: Python
- Replies:
- 5
- Views:
- 369
- Ben Finney
- Jul 31, 2003
naive misuse?, Aug 28, 2006, in forum: Python
- Replies:
- 3
- Views:
- 348
- Simon Forman
- Aug 29, 2006 | http://www.thecodingforums.com/threads/re-misuse-of-xml-namespaces-call-for-help-in-marshalling-arguments.167613/ | CC-MAIN-2014-42 | refinedweb | 111 | 69.65 |
gradle dependencies allows to display dependencies in your project printed as pretty ascii tree. Unfortunately it does not work well for submodules in multi-project build. I was not able to find satisfactory solution on the web, so after worked out my own that blog post arose.
Multiple subprojects
For multi-project builds
gradle dependencies called in the root directory unexpectedly displays no dependencies:
In fact Gradle is right. Root project usually has no code and no compile or runtime dependencies. Only in case of using plugins there could be some additional configurations created by them.
You could think about
--recursive or
--with-submodules flags, but they do not exist. It is possible to display dependencies for subprojects with “
gradle sub1:dependencies” and “
gradle sub2:dependencies“, but this is very manual and unpractical for more than a few modules. We could write a shell script, but having regard to (potential) recursive folders traversal there are some catches. Gradle claims to be very extensible with its Groovy based DSL, so why not take advantage of that. Iteration over subprojects can give some effects, but after testing a few conception I ended with pure and simple:
subprojects { task allDeps(type: DependencyReportTask) {} }
When called
gradle allDeps it executes
dependencies task on all subprojects.
Remove duplication
All dependencies belong to us, but some parts of the tree looks similar (and duplication is a bad thing). Especially configurations
default,
compile and
runtime and the second group
testCompile and
testRuntime in most cases contain (almost) the same set of dependencies. To make the output shorter we could limit it to
runtime (or in case of test dependencies
testRuntime).
dependencies task provides convenient parameter
--configuration and to focus on test dependencies “
gradle allDeps --configuration testRuntime” can be used.
Summary
Where it could be useful? Recently I was pair programming with my old-new colleague in a new project (with dozens submodules) where SLF4J in addition to expected
slf4j-logback provider discovered on a classpath also
slf4j-simple. We wanted to figure out which library depends on it. Logging dependencies tree to file with a help of grep gave us the answer.
As a bonus during my fights with
DependencyReportTask I found an easier way how get know who requires given library. I will write about it in my next post.
Tested with Gradle 2.2.
Very nice (big) gradle banner, do you work for gradle?
Not yet, but it can be possibly changed soon.
[…] Gradle tricks – display dependencies for all subprojects in multi-project build […]
For a multiproject setup, i had to use following script:
Maybe it would cleaner to create that task only in the subproject with source code?
[…] Gradle tricks – display dependencies for all subprojects in multi-project build […]
If you would like to analyze result ascii tree dependencies, you can try to use this tool:
Looks interesting. Thanks for sharing.
Wonderful, what a web site it is! This weblog provides valuable facts to us, keep it up.
Thanks for sharing!
“`
}
“`
If gradle is configured to run tasks in parallel, this wil make the output useless, because it interweaves output from the different subprojects.
I added this to solve it:
// Create a chain of dependencies between all sub project’s “allDeps” tasks, so that the output is linear
// even when we run gradle in default “–parallel” mode.
def allSubProjects = subprojects as List
for (def index = 1; index < allSubProjects.size; ++index) {
allSubProjects[index].tasks.allDeps.dependsOn allSubProjects[index – 1].tasks.allDeps
Thanks for sharing. That in fact could be some problem, but for most of the projects I worked with it was doable to just wait a little bit longer executing just that one task. However then, you need to remember to disable the parallel mode :).
Btw, what Gradle version do you use? I believe in one of the Gradle versions released this year I have seen an improvement which was sorting out the console output in the parallel mode. Therefore, maybe it is already fixed out-of-box in 4.10? | https://solidsoft.wordpress.com/2014/11/13/gradle-tricks-display-dependencies-for-all-subprojects-in-multi-project-build/ | CC-MAIN-2022-21 | refinedweb | 666 | 55.54 |
Turbo C - Distance between the two points
Here is the Turbo C program for the Distance between the two points.
It uses the following formula give points are (x1,y1) and (x2, y2)
SQRT( (x2-x1) * (x2-x1) + (y2-y1) * (y2-y1) )
Source Code
#include <stdio.h>
#include <math.h>
void main()
{
float distance;
int x1, y1, x2, y2;
int dx, dy;
printf("Program for distance between the two points\n");
printf("Enter X1: ");
scanf("%d", &x1);
printf("Enter Y1: ");
scanf("%d", &y1);
printf("Enter X2: ");
scanf("%d", &x2);
printf("Enter Y2: ");
scanf("%d", &y2);
dx = x2 - x1;
dy = y2 - y1;
distance = sqrt(dx*dx + dy*dy);
printf("%.4f", distance);
}
Output
Program for distance between the two points
Enter X1: 10
Enter Y1: 10
Enter X2: 30
Enter Y2: 30
Distance between (10, 10) and (30, 30) = SQRT(800) = 28.2843 | http://www.softwareandfinance.com/Turbo_C/Distance_Two_Points.html | CC-MAIN-2017-43 | refinedweb | 141 | 64.44 |
table of contents
NAME¶
setgid - set group identity
SYNOPSIS¶
#include <sys/types.h>
#include <unistd.h>
int setgid(gid_t gid);
DESCRIPTION¶
setgid() sets the effective group ID of the calling process. If the calling process is privileged (more precisely:¶
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS¶
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, SVr4.
NOTES¶¶
getgid(2), setegid(2), setregid(2), capabilities(7), credentials(7), user_namespaces(7)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at | https://dyn.manpages.debian.org/testing/manpages-dev/setgid.2.en.html | CC-MAIN-2022-21 | refinedweb | 112 | 60.61 |
April 25, 2017
Bioconductors:
We are pleased to announce Bioconductor 3.5, consisting of 1383 software packages, 316 experiment data packages, and 911 annotation packages.
There are 88 new software packages, and many updates and improvements to existing packages; Bioconductor 3.5 is compatible with R 3.4, and is supported on Linux, 32- and 64-bit Windows, and Mac OS X. This release will include an updated Bioconductor Amazon Machine Image and Docker containers.
Visit for details and downloads.
To update to or install Bioconductor 3.5:
Install R 3.4. Bioconductor 3.5 has been designed expressly for this version of R.
Follow the instructions at.
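For reference, a minimal sketch of the installation route current at the time of this release (the biocLite() installer; follow the official instructions above if they differ):

    ## Install core Bioconductor 3.5 packages under R 3.4
    source("https://bioconductor.org/biocLite.R")
    biocLite()
    ## Install a specific package, e.g. DESeq2
    biocLite("DESeq2")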
There are 88 new software packages in this release of Bioconductor.
AnnotationFilter This package provides class and other infrastructure to implement filters for manipulating Bioconductor annotation resources. The filters will be used by ensembldb, Organism.dplyr, and other packages.
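A brief sketch of how such filters can be constructed (GeneIdFilter() and the formula-based AnnotationFilter() interface come from the package; the field names used are assumptions to check against the vignette):

    library(AnnotationFilter)
    flt  <- GeneIdFilter("ENSG00000000003")          # a single filter object
    flts <- AnnotationFilter(~ seq_name == "X" &
                               gene_biotype == "protein_coding")
    ## 'flt' and 'flts' can then be passed to consumers such as ensembldb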
ATACseqQC.
banocc.
basecallQC.
BiocFileCache This package creates a persistent on-disk cache of files that the user can add, update, and retrieve. It is useful for managing resources (such as custom Txdb objects) that are costly or difficult to create, web resources, and data files used across sessions.
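A minimal usage sketch (assuming the bfcadd()/bfcrpath() accessors described in the package documentation):

    library(BiocFileCache)
    bfc <- BiocFileCache()                        # default on-disk cache
    url <- "https://bioconductor.org/index.html"  # any web resource
    rid <- bfcadd(bfc, rname = "bioc-home", fpath = url)  # download and register
    path <- bfcrpath(bfc, rnames = "bioc-home")   # reuse across sessions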
BioCor Calculates functional similarities based on the pathways described on KEGG and REACTOME or in gene sets. These similarities can be calculated for pathways or gene sets, genes, or clusters and combined with other similarities. They can be used to improve networks, gene selection, testing relationships…
BioMedR The BioMedR package generates various molecular representations for chemicals, proteins, DNAs/RNAs and their interactions.
biotmle This package facilitates the discovery of biomarkers from biological sequencing data (e.g., microarrays, RNA-seq) based on the associations of potential biomarkers with exposure and outcome variables by implementing an estimation procedure that combines a generalization of the moderated t-statistic with asymptotically linear statistical parameters estimated via targeted minimum loss-based estimation (TMLE).
BLMA Suite of tools for bi-level meta-analysis. The package can be used in a wide range of applications, including general hypothesis testing, differential expression analysis, functional analysis, and pathway analysis.
BPRMeth.
branchpointer Predicts branchpoint probability for sites in intronic branchpoint windows. Queries can be supplied as intronic regions; or to evaluate the effects of mutations, SNPs.
BUMHMM.
cellbaseR.
cellscape CellScape facilitates interactive browsing of single cell clonal evolution datasets. The tool requires two main inputs: (i) the genomic content of each single cell in the form of either copy number segments or targeted mutation values, and (ii) a single cell phylogeny. Phylogenetic formats can vary from dendrogram-like phylogenies with leaf nodes to evolutionary model-derived phylogenies with observed or latent internal nodes. The CellScape phylogeny is flexibly input as a table of source-target edges to support arbitrary representations, where each node may or may not have associated genomic data. The output of CellScape is an interactive interface displaying a single cell phylogeny and a cell-by-locus genomic heatmap representing the mutation status in each cell for each locus.
chimeraviz chimeraviz manages data from fusion gene finders and provides useful visualization tools.
ChIPexoQual Package with a quality control pipeline for ChIP-exo/nexus data.
clusterSeq Identification of clusters of co-expressed genes based on their expression across multiple (replicated) biological samples.
coseq Co-expression analysis for expression profiles arising from high-throughput sequencing data. Feature (e.g., gene) profiles are clustered using adapted transformations and mixture models or a K-means algorithm, and model selection criteria (to choose an appropriate number of clusters) are provided.
cydar Identifies differentially abundant populations between samples and groups in mass cytometry data. Provides methods for counting cells into hyperspheres, controlling the spatial false discovery rate, and visualizing changes in abundance in the high-dimensional marker space.
DaMiRseq.
DelayedArray.
discordant Discordant is a method to determine differential correlation of molecular feature pairs from -omics data using mixture models. Algorithm is explained further in Siska et al.
DMRScan.
epiNEM epiNEM is an extension of the original Nested Effects Models (NEM). EpiNEM is able to take into account double knockouts and infer more complex network signalling pathways.
EventPointer.
flowTime.
funtooNorm Provides a function to normalize Illumina Infinium Human Methylation 450 BeadChip (Illumina 450K), correcting for tissue and/or cell type.
GA4GHclient GA4GHclient provides an easy way to access public data servers through Global Alliance for Genomics and Health (GA4GH) genomics API. It provides low-level access to GA4GH API and translates response data into Bioconductor-based class objects.
gcapc Peak calling for ChIP-seq data with consideration of potential GC bias in sequencing reads. GC bias is first estimated with generalized linear mixture models using weighted GC strategy, then applied into peak significance estimation.
geneClassifiers This package aims for easily accessible application of classifiers which have been published in literature, using an ExpressionSet as input.
GenomicDataCommons Programmatically access the NIH / NCI Genomic Data Commons RESTful service.
GenomicScores Provide infrastructure to store and access genomewide position-specific scores within R and Bioconductor.
GISPA.
goSTAG.
GRridge.
heatmaps This package provides functions for plotting heatmaps of genome-wide data across genomic intervals, such as ChIP-seq signals at peaks or across promoters. Many functions are also provided for investigating sequence features.
hicrep.
ideal This package provides functions for an Interactive Differential Expression AnaLysis of RNA-sequencing datasets, to extract quickly and effectively information downstream the step of differential expression. A Shiny application encapsulates the whole package.
IMAS Integrative analysis of Multi-omics data for Alternative splicing.
ImpulseDE2.
IntEREst This package performs Intron-Exon Retention analysis on RNA-seq data (.bam files).
IWTomics.
karyoploteR.
Logolas Produces logo plots of a variety of symbols and names comprising English alphabets, numerics and punctuations. Can be used for sequence motif generation, mutation pattern generation, protein amino acid geenration and symbol strength representation in any generic context.
mapscape.
MaxContrastProjection A.
MCbiclust Custom-made algorithm and associated methods for finding, visualising and analysing biclusters in large gene expression data sets. The algorithm is based on finding, with a supplied gene set of size n, the maximum strength correlation matrix containing m samples from the data set.
metavizr.
methylInheritance Permutation analysis, based on Monte Carlo sampling, for testing the hypothesis that the number of conserved differentially methylated elements, between several generations, is associated to an effect inherited from a treatment and that stochastic effect can be dismissed.
MIGSA.
mimager Easily visualize and inspect microarrays for spatial artifacts.
motifcounter ‘motifcounter’ provides functionality to compute the statistics related to motif matching and the counting of motif matches in DNA sequences.
msgbsR Pipeline for the analysis of a MS-GBS experiment.
multiOmicsViz Calculate the spearman correlation between the source omics data and other target omics data, identify the significant correlations and plot the significant correlations on the heat map in which the x-axis and y-axis are ordered by the chromosomal location.
MWASTools MWAS.
NADfinder Call peaks for two samples: target and control. It will count the reads for tiles of the genome and then convert them to ratios. The ratios will be corrected and smoothed. The z-scores are calculated for each counting window over the background. The peaks will be detected based on z-scores.
netReg.
Organism.dplyr This package provides an alternative interface to Bioconductor ‘annotation’ resources, in particular the gene identifier mapping functionality of the ‘org’ packages (e.g., org.Hs.eg.db) and the genome coordinate functionality of the ‘TxDb’ packages (e.g., TxDb.Hsapiens.UCSC.hg38.knownGene).
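A short sketch of the intended use (src_organism() is from the package; the "id" table and its column names follow the package vignette and may differ):

    library(Organism.dplyr)
    library(dplyr)
    src <- src_organism("TxDb.Hsapiens.UCSC.hg38.knownGene")
    tbl(src, "id") %>%                     # gene identifier mappings
        filter(symbol == "BRCA1") %>%
        select(entrez, ensembl, symbol)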
pathprint Algorithms to convert a gene expression array provided as an expression table or a GEO reference to a ‘pathway fingerprint’, a vector of discrete ternary scores representing high (1), low(-1) or insignificant (0) expression in a suite of pathways.
pgca.
phosphonormalizer It uses the overlap between enriched and non-enriched datasets to compensate for the bias introduced in global phosphorylation after applying median normalization.
POST Perform orthogonal projection of high dimensional data of a set, and statistical modeling of phenotype with projected vectors as predictors.
PPInfer.
RaggedExperiment.
ramwas RaMWAS provides.
REMP Machine learning.
RITAN Tools for comprehensive gene set enrichment and extraction of multi-resource high confidence subnetworks.
RIVER An implementation of a probabilistic modeling framework that jointly analyzes personal genome and transcriptome data to estimate the probability that a variant has regulatory impact in that individual. It is based on a generative model that assumes. See the RIVER website for more information, documentation and examples.
RJMCMCNucleosomes This package does nucleosome positioning using informative Multinomial-Dirichlet prior in a t-mixture with reversible jump estimation of nucleosome positions for genome-wide profiling.
RnaSeqGeneEdgeRQL A workflow package for RNA-Seq experiments
rqt Despite>.
RTNduals RTNduals is a tool that searches for possible co-regulatory loops between regulon pairs generated by the RTN package. It compares the shared targets in order to infer ‘dual regulons’, a new concept that tests whether regulon pairs agree on the predicted downstream effects.
samExploreR.
sampleClassifier The package is designed to classify gene expression profiles.
scDD.
scone SCONE is an R package for comparing and ranking the performance of different normalization schemes for single-cell RNA-seq and other high-throughput analyses.
semisup.
sparseDOSSA.
splatter Splatter is a package for the simulation of single-cell RNA sequencing count data. It provides a simple interface for creating complex simulations that are reproducible and well-documented. Parameters can be estimated from real data and functions are provided for comparing real and simulated datasets.
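A compact sketch of the estimate-then-simulate workflow (splatEstimate() and splatSimulate() as named in the splatter documentation; the toy matrix is illustrative only):

    library(splatter)
    counts <- matrix(rpois(2000, lambda = 5), nrow = 200)  # toy count matrix
    params <- splatEstimate(counts)   # estimate parameters from (real) data
    sim    <- splatSimulate(params)   # simulate a dataset with those parameters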
STROMA4.
swfdr.
TCGAbiolinksGUI “TCGAbiolinksGUI: A Graphical User Interface to analyze cancer molecular and clinical data. A demo version of GUI is found in”
TCseq Quantitative and differential analysis of epigenomic and transcriptomic time course sequencing data, clustering analysis and visualization of temporal patterns of time course data.
timescape.
treeio Base classes and functions for parsing and exporting phylogenetic trees.
TSRchitect.
twoddpcr ‘definetherain’ (Jones et al., 2014) and ‘ddpcRquant’ (Trypsteen et al., 2015) which both handle one channel ddPCR experiments only. The ‘ddpcr’ package available on CRAN (Attali et al., 2016) supports automatic gating of a specific class of two channel ddPCR experiments only.
wiggleplotr Tools to visualise read coverage from sequencing experiments together with genomic annotations (genes, transcripts, peaks). Introns of long transcripts can be rescaled to a fixed length for better visualisation of exonic read coverage.
Changes in version 1.5.10:
NEW FEATURES
USER-LEVEL CHANGES
Changes in version 1.5.9:
NEW FEATURES
New function get_annotated_genes returns genes annotated to enriched or user-defined brain regions
New function get_id returns the ID of a brain region given its name
USER-LEVEL CHANGES
results from aba_enrich get sorted on times_FWER_under_0.05 followed by min_FWER and mean_FWER (order of min and mean switched)
genes in aba_enrich(…)[[2]] are sorted alphabetically
when genomic regions are provided as input, candidate regions are implicitly also part of the background for the randomsets (like it is for single-genes-input)
Changes in version 1.5.8:
IMPROVEMENTS
Use dynamic tolerance for FWER calculation
Added more tests and checks
Improved checking of arguments when genomic regions are provided as input
Changes in version 1.1.1:
NEW FEATURES
Changes in version 1.0.1:
USER-LEVEL CHANGES
Changes in version 1.2.0:
Changes in version 1.3.4:
NEW FEATURES
Proper print() methods for AneuFinder objects.
plotProfile(…, normalize.counts=’2-somy’) option added for plotting of normalized counts.
heatmapGenomewideCluster() added for convenient assessment of the clusterByQuality() result.
BUG FIXES
Fixed option Aneufinder(…, strandseq = TRUE) for DNAcopy method.
Fixed a bug for SCE plotting in heatmapGenomewide().
Proper creation of variable-width bins for huge reference files.
Better heatmap dimensions for few cells.
Bugfix for Inf values in clusterByQuality().
SIGNIFICANT USER-LEVEL CHANGES
Changes in version 1.3.3:
NEW FEATURES
New function plotPCA() to do principal component analysis.
Introduced parameter ‘exclude.regions’ to karyotypeMeasures(), plotHeterogeneity() and heatmapGenomewide(). This should facilitate excluding artifact regions from the clustering and karyotype measures.
Parameter ‘regions’ now also available in plotHeterogeneity() (only in karyotypeMeasures() before).
SIGNIFICANT USER-LEVEL CHANGES
Changes in version 1.3.1:
NEW FEATURES
Changes in version 0.99.5:
NEW FEATURES
Add convertFilterExpressionQuoted function.
Add field method.
Changes in version 1.18.0:
MODIFICATIONS
RSQLite deprecated dbGetPreparedQuery/dbSendPreparedQuery; Updated with dbGetQuery/dbSendQuery/dbBind/dbFetch
update for building BioC 3.5
BUG FIXES
Resolve tmp issue, change outputDir to NCBIFilesDir
Fixed a bug in makeOrgPackageFromNCBI when there are no GO terms
Changes in version 2.8.0:
NEW FEATURES
add .get1,RDSResource-method
add RdsResource class
add EnsDb dispatch class
expose rdatapath in metadata
MODIFICATIONS
modify records exposed as metadata:
- expose records added <= snapshot date
- expose a single OrgDb per organism per BioC version
edits to .get1,GenomicScores-method and .get1,GenomicScoresResource-method
work on biocVersion and snapshotDate relationship:
- snapshotDate() must be <= biocVersion() release date
- possibleDates() are now filtered by snapshotDate()
remove GenomicScoresResource; Robert Castelo will handle loading these resources in his GenomicScores software package
Changed show method for hub object:
- removed sourcelastmodifieddate
- added rdatadateadded
BUG FIXES
fix bug in ordering of output from .uid0()
fix bugs in ‘snapshotDate<-‘ method
Changes in version 1.6.0:
NEW FEATURES
add makeStandardTxDbsToSqlite() recipe
add ‘ensembl’ and ‘MySQL’ as possible SourceType values
tidy and export makeStandard*ToAHMs and makeNCBIToOrgDbsToAHMs
MODIFICATIONS
move currentMetadata
tidy pushResources interface
modified parsing of species name and genome in .ensemblMetadataFromUrl()
modified standard OrgDb recipe
enhance and clean vignette
move ‘Tags’ check from readCsvFromMetadata() to makeAnnotationHubMetadata()
remove dependency on xml2, curl, httr and probably other wheel reinventions, alter imports and suggests
specify multiple ‘Tags’ as colon separated string instead of comma separated; avoids problems with read.csv()
select data moved to GenomeInfoDbData package
Added additional documentation instructions for core members to add contributed data to AnnotationHub
rename files; remove old JSON test file no longer applicable
pass ‘install’ argument down through recipe
General code tidy; remove unused functions and comments; clarify checks
BUG FIXES
readMetadataFromCsv() fills in DataProvider and Coordinate_1_based if missing
fix bug introduced in checking ‘release’ in makeEnsemblTwoBit recipe
makeAnnotationHubMetadata() now processes all inst/extdata/*.csv files
fix subset and import bug in makeAnnotationHubMetadata()
Fix bug in Rdatapath and sourceurl for makeEnsemblFasta.R
Changes in version 1.49.1 (2017-03-15):
Changes in version 1.2.0:
NEW FEATURES
Add support for CpG annotations for hg38, mm10, and rn6 via the UCSC goldenpath URLs.
Add a function to build annotations from AnnotationHub resources, build_ah_annots().
Add support for chromHMM tracks (chromatin state) from the UCSC Genome Browser.
Users may annotate to chromatin states in multiple cell lines, if desired.
Use rtracklayer::liftOver to lift hg19 and mm9 enhancers into hg38 and mm10.
USER-FACING CHANGES
Add minoverlaps parameter to annotate_regions() that is passed to GenomicRanges::findOverlaps().
Change supported_annotations() and supported_genomes() into builtin_annotations() and builtin_genomes(). This enables more flexibility required for AnnotationHub annotations.
Added documentation for coercing result of annotate_regions() to data.frame and subsetting based on gene symbol to the vignette.
BUGFIXES
Fixed a bug in coercion of GRanges to data.frame where row.names could be duplicated. Thanks to @kdkorthauer.
Require GenomeInfoDb >= 1.10.3 because of changes to NCBI servers.
Change scale_fill_brewer() to scale_fill_hue() in plot_categorical() to enable more categories and avoid plotting abnormalities.
Fixed bug that mistakenly displayed some supported annotations.
Fixed a bug in lncRNA annotation building caused by incomplete reference.
Changes in version 3.5.1 (2017-04-14):
SIGNIFICANT CHANGES
SOFTWARE QUALITY
Changes in version 3.5.0 (2016-10-18):
Changes in version 1.1.1:
NEW FEATURES
Added Citation file
Added citation reference to documentation
Corrected typo in documentation
Changes in version 0.99:
Changes in version 2.2.0:
Instead of returning a topGO-compatible object, topAnat.R now returns an object from the topAnatData class, an extension of topGOdata class from the topGO package.
Fixed small issue with management of data types given as input by the user (dataType argument when creating new Bgee class)
Fixed bug in experiment Id check step. Now accommodates SRA Ids.
Fixed data frames header names that included double dots.
Removed dependency on biomaRt in the vignette. Code is still detailed but not run; instead a pre-created gene list object is loaded from the data/ directory.
Version: 1.1.14 Text:
Version: 1.2.21 Category: Last commit was not completed Text:
Changes in version 0.99.0:
SIGNIFICANT NEW FEATURES
Changes in version 2.4.0:
NEW FEATURES
Vignette “Authoring R Markdown vignettes”
R Markdown templates for ‘pdf_document2’ and ‘html_document2’
Standard way of specifying author affiliations
Support for short title in R Markdown PDF output
Argument ‘relative.path’ to ‘latex2()’ ()
SIGNIFICANT USER-VISIBLE CHANGES
Increase column width in order to accommodate 80 characters wide code chunks
Separate caption title from description with newline
Use canonical URL to link to CRAN packages ()
Consistently number equations on right hand side across different output formats
Numerous CSS tweaks
BUG FIXES AND IMPROVEMENTS
Support for PDFs typeset with 9pt and 8pt font size
Proper formatting of ‘longtable’ captions
Fix to retain spaces in ‘\texttt’
Replace carets “\^{}” by “\textasciicircum” to fix incompatibility with LaTeX ‘soul’ package used for inline code highlighting
Patch to avoid overfull pages containing a float followed by a longtable
Changes in version 2.32.0:
BUG FIXES
Changes in version 1.3.6:
INTERNAL MODIFICATIONS
Changes in version 1.3.4:
MINOR MODIFICATIONS
Changes in version 1.3.2:
MINOR MODIFICATIONS
Changes in version 1.0.0:
Initial release of package BLMA includes
bilevelAnalysisGeneset: a function to perform a bi-level meta-analysis in conjunction with geneset enrichment methods (ORA/GSA/PADOG) to integrate multiple gene expression datasets.
bilevelAnalysisPathway: a function to perform a bi-level meta-analysis conjunction with Impact Analysis to integrate multiple gene expression datasets.
intraAnalysisClassic: a function to perform an intra-experiment analysis in conjunction with any of the classical hypothesis testing methods, such as t-test, Wilcoxon test, etc.
bilevelAnalysisClassic: a function to perform a bi-level meta-analysis in conjunction with any of the classical hypothesis testing methods, such as t-test, Wilcoxon test, etc.
intraAnalysisGene: a function to perform an intra-experiment analysis in conjunction with the moderated t-test (limma package) for the purpose of differential expression analysis of a gene expression dataset
bilevelAnalysisGene: a function to perform a bi-level meta-analysis in conjunction with the moderate t-test (limma package) for the purpose of differential expression analysis of multiple gene expression datasets
loadKEGGPathways: this function loads KEGG pathways and names
addCLT: a function to combine independent studies using the average of p-values
fisherMethod: a function to combine independent p-values using the minus log product
stoufferMethod: a function to combine independent studies using the sum of p-values transformed into standard normal variables.
Changes in version 1.11:
bsseq now uses DelayedMatrix objects from the DelayedArray package for all matrix-like data. This enables large data to be stored on disk rather than in memory.
Serialized (saved) BSseq, BSseqTstat, and BSseqStat objects will need to be updated by invoking x <- updateObject(x).
Changes in version 0.99.6:
Changes in version 0.99.5:
Changes in version 0.99.3:
Changes in version 0.99.2:
Changes in version 0.99.1:
Changes in version 0.99.0:
Changes in version 1.18.0:
Added Charles Plessy as co-maintainer.
Remove warning by replacing deprecated ignoreSelf() with drop.self().
Changes in version 1.7.2:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.7.1 (2016-11-29):
NEW FEATURES
PCA is now supported for larger-than-memory on-disk datasets
External ‘matter’ matrices replace ‘Binmat’ matrices for on-disk support
Added ‘image3D’ aliases for all ‘ResultSet’ subclasses
SIGNIFICANT USER-VISIBLE CHANGES
Version: 1.9.1 Category: Bugfixes in functions pes, sfpmean and plot.cellsurvLQfit: calculation of mean values at different doses for curve and point plotting when PEmethod = "fix" Text:
Changes in version 2.6.2:
The DMRcate package was updated; errors like “Error in if (nsig == 0) { : missing value where TRUE/FALSE needed” have been solved.
In champ.load(), instead of replacing all zero and negative values with 0.0001, we now replace them with the smallest positive value.
Fixed warnings() in GUI() functions.
In the champ.runCombat() function, removed restriction on factors like Sample_Group. Also added a “variable” parameter so that users may assign variables other than “Sample_Group”.
Modified the champ.DMR() function: for ProbeLasso, there is no need to input myDMP anymore; it is calculated inside the function.
Changes in version 2.0.0:
NEW FEATURES
A new method for enrichment, polyenrich(), is designed for gene set enrichment of experiments where the presence of multiple peaks in a gene is accounted for in the model. Use the polyenrich() function for this method (a call sketch follows this feature list).
New features resulting from chipenrich.data 2.0.0:
New genomes in chipenrich.data: danRer10, dm6, hg38, rn5, and rn6.
Reactome for fly in chipenrich.data.
Added locus definitions, GO gene sets, and Reactome gene sets for zebrafish.
All genomes have the following locus definitions: nearest_tss, nearest_gene, exon, intron, 1kb, 5kb, 10kb, 1kb_outside_upstream, 5kb_outside_upstream, 10kb_outside_upstream, 1kb_outside, 5kb_outside, and 10kb_outside.
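A hedged call sketch for the new method (the argument names mirror chipenrich() and the BED path is hypothetical; see ?polyenrich for the authoritative signature):

    library(chipenrich)
    results <- polyenrich(
        peaks    = "my_peaks.bed",   # hypothetical input file
        genome   = "hg19",
        genesets = "GOBP",           # one of supported_genesets()
        locusdef = "nearest_tss")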
IMPROVEMENTS
The chipenrich method is now significantly faster. Chris Lee figured out that spline calculations in chipenrich are not required for each gene set. Now a spline is calculated as peak ~ s(log10_length) and used for all gene sets. The correlation between the resulting p-values is nearly always 1. Unfortunately, this approach cannot be used for broadenrich().
The chipenrich(…, method=’chipenrich’, …) function automatically uses this faster method.
Clarified documentation for the supported_locusdefs() to give explanations for what each locus definition is.
Use sys.call() to report options used in chipenrich() in opts.tab output. We previously used as.list(environment()) which would also output entire data.frames if peaks were loaded in as a data.frame.
Various updates to the vignette to reflect new features.
SIGNIFICANT USER-LEVEL CHANGES
As a result of updates to chipenrich.data, ENRICHMENT RESULTS MAY DIFFER between chipenrich 1.Y.Z and chipenrich 2.Y.Z. This is because revised versions of all genomes have been used to update LocusDefinitions, and GO and Reactome gene sets have been updated to more recent versions.
The broadenrich method is now its own function, broadenrich(), instead of chipenrich(…, method = ‘broadenrich’, …).
User interface for mappability has been streamlined. ‘mappability’ parameter in broadenrich(), chipenrich(), and polyenrich() functions replaces the three parameters previously used: ‘use_mappability’, ‘mappa_file’, and ‘read_length’. The unified ‘mappability’ parameter can be ‘NULL’, a file path, or a string indicating the read length for mappability, e.g. ‘24’.
A formerly hidden API for randomizations to assess Type I Error rates for data sets is now exposed to the user. Each of the enrich functions has a ‘randomization’ parameter. See documentation and vignette for details.
Many functions with the ‘genome’ parameter had a default of ‘hg19’, which was not ideal. Now users must specify a genome and it is checked against supported_genomes().
Input files are read according to their file extension. Supported extensions are bed, gff3, wig, bedGraph, narrowPeak, and broadPeak. Arbitrary extensions are also supported, but there can be no header, and the first three columns must be chr, start, and end.
SIGNIFICANT BACKEND CHANGES
Harmonize all code touching LocusDefinition and tss objects to reflect changes in chipenrich.data 2.0.0.
Alter setup_ldef() function to add symbol column. If a valid genome is used, orgDb is used to get eg2symbol mappings and fill them in for the user. Users can give their own symbol column which will override using orgDb. Finally, if neither a symbol column nor a valid genome is used, symbols are set to NA.
Any instance of ‘geneid’ or ‘names’ to refer to Entrez Gene IDs are now ‘gene_id’ for consistency.
Refactor read_bed() function as a wrapper for rtracklayer::import().
Automatic extension handling of BED3-6, gff3, wig, or bedGraph.
With some additional code, automatic extension handling of narrowPeak and broadPeak.
Backwards compatible with arbitrary extensions: this still assumes that the first three columns are chr, start, end.
The purpose of this refactor is to enable additional covariates for the peaks for possible use in future methods.
Refactor load_peaks() to use GenomicRanges::makeGRangesFromDataFrame().
Filtering gene sets is now based on the locus definition, and can be done from below (min) or above (max). Defaults are 15 and 2000, respectively.
Randomizations are all done on the LocusDefinition object.
Added lots of unit tests to increase test coverage.
Make Travis builds use sartorlab/chipenrich.data version of data package for faster testing.
DEPRECATED AND DEFUNCT
Calling the broadenrich method with chipenrich(…, method = ‘broadenrich’, …) is no longer valid. Instead, use broadenrich().
Various utility functions that were used in the original development have been removed. Users never saw or used them.
BUG FIXES
Fixed bug in randomization with length bins where artifactually, randomizations would sort genes on Entrez ID introducing problems in Type I error rate.
Fixed a bug where the dependent variable used in the enrichment model was used to name the rows of the enrichment results. This could be confusing for users. Now, rownames are simply integers.
Fixed a bug that expected the result of read_bed() to be a list of IRanges from initial development. Big speed bump.
Changes in version 1.12.1:
BUG FIXES
Changes in version 3.9.19:
Changes in version 3.9.18:
Changes in version 3.9.17:
Changes in version 3.9.16:
Changes in version 3.9.14:
Changes in version 3.9.13:
Changes in version 3.9.12:
Changes in version 3.9.11:
Changes in version 3.9.10:
Changes in version 3.9.9:
Changes in version 3.9.8:
Changes in version 3.9.7:
Changes in version 3.9.6:
Changes in version 3.9.5:
Changes in version 3.9.4:
update the documents of getEnrichedGO
add more output for findOverlapsOfPeaks.
Changes in version 3.9.3:
Changes in version 3.9.2:
Changes in version 3.9.1:
Changes in version 1.11.4:
Changes in version 1.11.3:
bug fixed of dropAnno <2017-04-10, Mon>
bug fixed of peak width generated by shuffle <2017-03-31, Fri>
Changes in version 1.11.2:
optimize getGeneAnno <2016-12-21, Wed>
change plotAnnoBar and plotDistToTSS according to stacking bar order change in ggplot2 (v2.2.0) <2016-12-16, Fri>
Changes in version 1.11.1:
Changes in version 1.1.4:
BUG FIXES
Changes in version 1.1.2:
NEW FEATURES
BUG FIXES
Changes in version 1.1.1:
NEW FEATURES
Selection of peaks can be done with ‘changeFDR’.
Peak calls are available in each chromstaR-object as list entry ‘$peaks’.
SIGNIFICANT USER-LEVEL CHANGES
‘plotFoldEnrichment’ renamed to ‘plotEnrichment’.
‘exportBinnedData’, ‘exportUnivariates’, ‘exportMultivariates’, ‘exportCombinedMultivariates’ replaced by ‘exportCounts’, ‘exportPeaks’, ‘exportCombinations’.
BUG FIXES
DEPRECATED AND DEFUNCT
‘changePostCutoff’.
‘plotFoldEnrichment’.
‘exportBinnedData’, ‘exportUnivariates’, ‘exportMultivariates’, ‘exportCombinedMultivariates’.
Version: 1.3.1 Text:
Version: 1.3.0 Text:
Changes in version 1.10.0:
Changes in version 1.13.2 (2017-02-14):
Use expect_equal instead of expect_identical to avoid failing of the .revsequence unit test on “toluca2” (Mac OS X Mavericks (10.9.5) / x86_64).
Add “importFrom(stats, setNames)” to NAMESPACE.
Changes in version 1.13.1 (2017-01-04):
Remove GPL headers from *.R files.
Remove useless return calls.
Changes in version 1.1.2 (2017-04-04):
Changes
RSEC now has option rerunClusterMany, which if FALSE will not rerun the clusterMany step if RSEC is called on an existing clusterExperiment object (assuming, of course, clusterMany has already been run on the object)
setBreaks now has option makeSymmetric to force symmetric breaks around zero when using the quantile option.
setBreaks now has a default for breaks (i.e. for minimal use, the user doesn’t have to give the argument, just the data), in which case setBreaks will automatically find equally spaced breaks of length 52 filling the range of data compatible with aheatmap. The order of the arguments data and breaks has been switched, however, to better accommodate this usage.
plotClusters can now handle NA values in the colData
plotClusters for clusterExperiment objects now allows setting sampleData=TRUE to indicate plotting all of the sampleData in the colData slot.
nPCADims now allows values between 0 and 1, to allow for keeping a proportion of variance explained.
addClusters now allows for argument clusterLabel to assign a clusterLabel when the added cluster is a vector (if a matrix, then clusterLabel is just the column names of the matrix of cluster assignments)
Bug fixes
fixed bug in clusterExperiment subsetting to deal with orderSamples correctly.
fixed bug in mergeClusters unable to plot when too big of edge lengths (same as plotDendrogram)
fixed bug in subsetting, where unable to subset samples by character
fixed bug in removeClusters so that correctly updates dendro_index and primary_index slots after cluster removed.
Changes in version 1.1.1 (2016-10-14):
Changes
Bug fixes
add check in clusterMany that non-zero dimensions
changed ‘warning’ to ‘note’ in combineMany when no clusters specified.
fixed bug in plotDendrogram unable to plot when makeDendrogram used dimReduce=”mad”
fixed bug in clusterMany where beta set to NA if clusteringFunction is type K. Added check that beta cannot be NA.
Added check that alpha and beta in (0,1)
Changes in version 3.3.6:
update kegg_species information <2017-03-26, Sun>
bug fixed of bitr_kegg for converting ID to kegg <2017-03-01, Wed>
better support of plotGOgraph for gseGO, use core enriched gene info <2017-02-27, Mon>
Changes in version 3.3.5:
solve #81 <2017-02-12, Sun>
bitr_kegg support converting Path/Module to geneID and vice versa <2017-01-03, Tue>
Changes in version 3.3.4:
bug fixed of download_KEGG for supporting different keyType <2017-01-02, Mon>
split 3 GO sub-ontologies for enrichGO <2016-12-12, Mon>
dotplot for compareClusterResult to support 3 GO sub-ontologies <2016-12-8, Thu>
bug fixed of determining ont from expand.dots <2016-12-06, Mon>
Changes in version 3.3.3:
switch from BiocStyle to prettydoc <2016-11-30, Wed>
change summary to as.data.frame in internal calls to prevent warning message <2016-11-14, Mon>
Changes in version 3.3.2:
Changes in version 3.3.1:
update startup message <2016-11-09, Wed>
bug fixed in enrichDAVID <2016-11-06, Sun>
bug fixed in head, tail and dim for compareCluster object <2016-10-18, Tue>
Changes in version 1.3.2:
added p-value estimate confidence intervals via conf.int function
Some vignette updates including adding the “common questions” section.
Changes in version 1.3.1:
Added plotting of individual comparisons in classification and permutation plots.
Added df argument (degrees of freedom, passed to smooth.spline) to projection and permute function. This allows some degree of control over how linear or crooked the principal curve is drawn. NOTE: you must (!) give the same df value to the projection and permute functions for your results to be valid and, at the moment, there is no automated checking that this is the case.
The classification step now uses a new method for separation scoring and fixes a previous bug which could occur when group sizes were not equal. This change is reflected in the vignette.
Fixed bug in concatenation c(), when some iterations fail for a specific comparison thus causing a error when concatenating permute results.
Changes in version 3.5:
NEW FEATURES
Add function orgKEGGIds2EntrezIDs to fetch the mapping between KEGG IDs and Entrez IDs
Add function makeAxtTracks
Add function addAncestorGO
Changes in version 1.13.2:
add packLegend()
Legend() supports to add txt labels on grids
Changes in version 1.13.1:
Heatmap(): add km_title to set the format of row title when km is set
anno_link(): add extend to extend the regions for the labels
anno_boxplot(): for row annotation, outliers are now correctly placed on the y-axis. Thanks @gtg602c for the fix
HeatmapAnnotation(): gaps are included in the size of the annotations
anno_link(): graphic parameters are correctly reordered
densityHeatmap(): viewport is created with clip = TRUE
decorate_*(): add envir option to control where to look for variables inside code
Legend(): title supports expression
anno_*(): if the input is a data frame, warn that users may convert it to matrix
Changes in version 1.3.7:
Updates to vignette
Fix bug removing variants by name in variantCounts
Fixed argument legend.symbol.size being ignored in plotAlignments,DNAString-method.
Changes in version 1.3.6:
Changes in version 1.3.5:
New option “minoverlap” in readsToTarget allows reads that do not span the target region to be considered
plotAlignments now works with character as well as DNAString objects
Merging of long gaps mapped as chimeras now possible
Changes in version 1.3.4:
New function refFromAlns infers the reference sequence from aligned reads
Fixed bug causing an empty plot when plotting a single alignment with a large deletion
Changed annotateGenePlot from panel.margin to panel.spacing in accordance with recent ggplot2 versions
Added “create.plot” argument to plotAlignments for signature CrisprSet to make plot customisation easier.
Fixed bug in argument names when all alignments are chimeric
CrisprRun name now defaults to the coordinates when no name is provided
Changes in version 1.3.3:
Changes in version 1.10.0:
Added calculation of dominant directionality in combineTests(). Fixed out-of-array indexing bug in the C++ code.
Supported factor input for ids argument in combineTests(), getBestTest().
Added the empiricalFDR(), empiricalOverlaps() functions for controlling the empirical FDR.
Added the mixedClusters(), mixedOverlaps() functions for testing for mixed clusters.
Ensured that window-level FDR threshold chosen by controlClusterFDR() is not above the cluster-level FDR.
Minor fix to scaledAverage() to avoid slightly inaccurate results. Also, zero or negative scale factors now return -Inf and NA, respectively.
Switched to new scaleOffset() function for adding offsets in asDGEList(). Added option to specify the assay to be used.
Added multi-TSS support in detailRanges().
Modified paired-end machinery in windowCounts(), getPESizes() to be more accommodating of overruns.
Ignored secondary and supplementary alignments in all functions.
Added options to specify assay in SE objects in filterWindows().
Replaced weighting with normalization options in profileSites().
Updated user’s guide.
Changes in version 1.15.2:
UPDATED FUNCTIONS
Changes in version 1.14.1:
UPDATED FUNCTIONS
Update mouse dbsnp version part in function PrepareAnnotationEnsembl.R
Update function OutputVarproseq.R, JunctionType.R and calculateRPKM.R
NEW FEATURES
Changes in version 1.0.0:
Changes in version 1.6.6 (2017-04-11):
NEW FEATURES
add fixedLogicle transformation option on the GUI, with a popup window to allow specifying the w, t, m, a parameters for the logicle transformation.
add openShinyAPP (boolean parameter) option in the cytofkit main function, which can open the shinyAPP once the analysis is done and automatically load the result object into the shinyAPP for exploration.
add cytofkitShinyAPP2 function which can take cytofkit analysis_results (either a file name or an R object) as input and automatically load them into the shinyAPP once launched.
Changes in version 1.6.5 (2017-03-27):
MODIFICATION
Changes in version 1.6.4 (2017-03-17):
MODIFICATION
Changes in version 1.6.3 (2017-03-16):
MODIFICATION
Changes in version 1.6.2 (2017-03-08):
NEW FEATURES
add default linear transformation to FSC and SSC channels
add support for PDF figure download on shinyAPP, update the side panel to be tab dependent
add new color palatte in heatmap (greenred and spectral) and level plot (spectral)
Add cluster filter in cluster plot on shinyAPP
Allow multiple annotation for same cluster (specify cluster_annotation name) on shinyAPP
Allow color selection for each cluster on shinyAPP
Allow modification of the marker name on shinyAPP
Changes in version 1.6.1 (2016-10-27):
MODIFICATION
updated colorPalette options in the cytof_colorPlot function and in the Shiny APP; added spectral.
debugged rowname conflicts when regrouping the samples in the shinyAPP; now only the global ID is used and the local cell ID is discarded, which avoids duplicate-rowname conflicts but results in failure to save new FCS files.
Version: 1.1.2 Category: support diva workspace parsing Text:
Changes in version 0.99.0:
Version: 1.11.7 Text: 2017-04-08 Lorena Pantano [email protected] Fix: Add new contributor
Version: 1.11.6 Text: 2017-04-07 Lorena Pantano [email protected] Feature: Add function to plot genes in a wide format
Version: 1.11.4 Text: 02-17-2017 Lorena Pantano [email protected] Feature: Re-organize vignette. Feature: Ignore warnings when plotting Feature: Improve volcano plot
Version: 1.11.3 Text: 01-03-2017 Lorena Pantano [email protected] Fix: fix order of clusters figures that are not in the correct place in some cases with many groups.
Version: 1.11.2 Text: 12-09-2016 Lorena Pantano [email protected] Features: Add degCheckFactors functions to plot sizefactors used to normalize count data.
Version: 1.11.1 Text: 10-18-2016 Lorena Pantano [email protected] Fixes: print clusterProfiler output.
Changes in version 1.9.6:
BUG FIXES
Changes in version 1.9.3:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.9.2:
BUG FIXES
Changes in version 1.16.0:
For DESeq() and nbinomWaldTest(), the default setting will be betaPrior=FALSE, and the recommended pipeline will be to use lfcShrink() for producing shrunken LFCs (a brief sketch follows this list).
Added a new function unmix(), for unmixing samples according to linear combination of pure components, e.g. “tissue deconvolution”.
Added a new size factor estimator, “poscounts”, which evolved out of use cases in Paul McMurdie’s phyloseq package.
Ability to specify observation-specific weights, using assays(dds)[[“weights”]]. These weights are picked up by dispersion and NB GLM fitting functions.
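A brief sketch of that recommended pipeline (lfcShrink() with a coefficient index following resultsNames(dds); check the DESeq2 vignette for the authoritative workflow):

    library(DESeq2)
    dds <- makeExampleDESeqDataSet()   # toy dataset shipped with DESeq2
    dds <- DESeq(dds)                  # betaPrior = FALSE is now the default
    res <- results(dds)
    resultsNames(dds)                  # inspect coefficient names
    shrunk <- lfcShrink(dds, coef = 2, res = res)  # shrunken log2 fold changes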
Changes in version 1.15.40:
Changes in version 1.15.39:
Changes in version 1.15.36:
Changes in version 1.15.28:
Changes in version 1.15.12:
Changes in version 1.15.9:
Adding prototype function lfcShrink().
Vignette conversion to Rmarkdown / HTML.
Changes in version 1.15.3:
Changes in version 2.4.0:
Changes in version 1.8.0:
Streamlined filterDirect(), filterTrended(), and added tests for them. Also allowed specification of which assay to use for the data and reference objects.
enrichedPairs() and neighborCounts() now return counts for neighbourhood regions, not just the enrichment values.
filterPeaks() will compute (and optionally return) enrichment values from neighbourhood counts.
normalizeCNV() and correctedContact() allow specification of which assay matrix to use from the SE objects.
Refactored a great deal of the C++ code for improved clarity.
Overhauled handling of DNase Hi-C data, so that pseudo-fragments are no longer necessary. Most functions now automatically recognise DNase-C data from an empty GRanges in param$fragments. Deprecated segmentGenome() and prepPseudoPairs(), added the emptyGenome() function.
Updated user’s guide.
Changes in version 3.1.3:
output expected sample gene IDs when the input gene ID type does not match <2017-03-27, Mon>
dotplot for gseaResult <2016-11-23, Fri>
Changes in version 3.1.2:
in gseaplot, call grid.newpage only if dev.interactive() <2016-11-16, Wed>
change minGSSize < geneSet_size & geneSet_size < maxGSSize to minGSSize <= geneSet_size & geneSet_size <= maxGSSize <2016-11-16, Wed>
fixed show method issus of unknown setType in clusterProfiler::GSEA output <2016-11-15, Tue>
throw more friendly error msg if fail to determine setType automatically in setReadable function <2016-11-15, Tue> +
apply minGSSize and maxGSSize to fgsea <2016-11-14, Mon>
change summary to as.data.frame in internal calls to prevent warning message <2016-11-14, Mon>
Changes in version 3.1.1:
update startup message <2016-11-09, Wed>
fixed parallel in Windows (not supported) <2016-10-24, Mon>
options(DOSE_workers = x) to set using x cores for GSEA analysis is removed <2016-10-24, Mon>; instead let MulticoreParam() decide how many cores to use (can be set by options(mc.cores = x)).
Changes in version 1.5.3:
Changes in version 1.5.2:
CITATION file was added.
Changes in version 4.18.0:
NEW FEATURES
new arguments to ‘display()’ enabling control over layout and appearance of the image grid in “raster” mode: ‘nx’ (number of frames in a row), ‘drawGrid’ (draw lines between frames), ‘spacing’ (separation between frames) and ‘margin’ (outer margin around the image); a usage sketch follows this list
new function ‘clahe()’ for improving local contrast in images by performing Contrast Limited Adaptive Histogram Equalization
re-introduced ‘output.origin’ argument to ‘rotate()’
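A usage sketch built from the arguments named above (parameter values are illustrative only):

    library(EBImage)
    nuc <- readImage(system.file("images", "nuclei.tif", package = "EBImage"))
    display(nuc, method = "raster", all = TRUE,   # draw all frames in a grid
            nx = 2, drawGrid = TRUE, spacing = 10, margin = 20)
    nuc_eq <- clahe(nuc)                          # local contrast enhancement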
SIGNIFICANT USER-VISIBLE CHANGES
object masks returned by bwlabel(), propagate(), and watershed(), as well as the result of thresh(), are now of storage mode integer rather than double
binary kernels constructed by ‘makeBrush()’ have storage mode integer (previously double)
‘rmObjects()’ and ‘reenumerate()’ now require input of storage mode integer
‘untile()’ and morphology operations preserve data storage mode
modified boundary behaviour of ‘thresh()’ to reduce artifacts at image borders and to match the output of a corresponding call to ‘filter2()’
added the ability for different boundary values for different frames in ‘filter2()’ linear mode ()
removed defunct ‘…GreyScale’ family morphological functions
PERFORMANCE IMPROVEMENTS
significantly improved performance of ‘transpose()’, ‘getFrame()’ and ‘getFrames()’ by using C implementation
numerous small improvements to execution time and memory consumption across the whole package, mostly by avoiding storage mode conversion and object duplication in C
BUG FIXES
proper origin handling in ‘resize()’
import ‘methods::slot’ (fixes)
fixed a bug in ‘filter2()’ ()
proper check of filter size in ‘thresh()’ and rectified behavior when filter dimensions are equal to image dimensions
correct computation of ‘selfComplementaryTopHat()’
address PROTECT errors reported by Tomas Kalibera’s ‘maacheck’ tool ()
fixed class retention in ‘colorLabels()’, ‘colormap()’, ‘rgbImage()’, ‘stackObjects()’, ‘tile() and untile()’
Changes in version 3.18.0:
roast.DGEList(), mroast.DGEList(), fry.DGEList() and camera.DGEList() now have explicit arguments instead of passing arguments with … to the default method.
New function scaleOffset() to ensure the scale of offsets is consistent with library sizes.
Added decideTests() S3 methods for DGEExact and DGELRT objects. It now works for F-tests with multiple contrasts.
Report log-fold changes for redundant contrasts in F-tests with multiple contrasts.
Modified plotMD() S3 method for DGELRT and DGEExact objects. It now automatically uses decideTests() and highlights the DE genes on the MD plot.
New argument ‘plot’ in plotMDS.DGEList().
Removed S3 length methods for data objects.
gini() now supports NA values and avoids integer overflow.
Changes in version 1.3.2 (2017-03-16):
Changes in version 1.3.1 (2017-01-28):
Fixed: bug in the row names of limmaTopTable. Thanks to Ali Jalali from MD Anderson for reporting it.
Fixed: bug in plotSummaryHeatmap when there is only one single contrast.
Added: “avg.logFC.Dir” to the EGSEA scores
Improved: the plotSummaryHeatmap function to work with Direction scores
Modified: EGSEA scores to be all lowercase and updated egsea.sort() accordingly
Added: median to combining p-values
Added: plotBars function
Fixed: bug in buildMSigDBIdx when no genes mapped to c5
Improved: the wrapper I/O interfaces to become standard
Improved: the implementation of GSVA by parallelizing the calculations on gene sets and calculating the gene set scores using the whole expression matrix
Improved: the parallelization of several wrappers
Added: ability to accept a design matrix with an intercept and a contrast vector of coefficient indexes.
Added: a function to optimize the number of cores to be used for running EGSEA. It helps to avoid CPU overloading.
Added: information about the running time of the analysis.
Added: a new way of report generation that completely depends on the EGSEAResults object. This allows users to re-generate their reports with different parameter values, e.g., display.top, sort.by, sum.plot.axis, sum.plot.cutoff.
Added: summary heatmaps and bar plots to the report.
Improved: the colour scheme of the summary heatmaps and bar plots.
Fixed: bug in visualizations when log10(x) = Inf
Added: fdr.cutoff to the calculation of Significance Score and Regulation Direction.
Improved: the colour of summary heatmaps.
Modified: buildMSigDBIdx to work with C5 collection of version 5.2
Changes in version 1.5.1:
anno_enriched(): can visualize positive signals and negative signals separately
add rbind.normalizedMatrix function
Changes in version 2.4.1:
Changes in version 1.99.13:
USER VISIBLE CHANGES
Most filter classes are now imported from the AnnotationFilter package.
Parameter ‘filter’ now supports filter expressions.
Changes in version 1.99.11:
BUG FIXES
NEW FEATURES
Changes in version 1.99.10:
BUG FIXES
Changes in version 1.99.6:
BUG FIXES
Changes in version 1.99.5:
NEW FEATURES
Changes in version 1.99.3:
BUG FIXES
Add two additional uniprot table columns to internal variable and fix failing unit test.
NEW FEATURES
USER VISIBLE CHANGES
Changes in version 1.99.2:
BUG FIXES
Changes in version 1.99.1:
BUG FIXES
Changes in version 1.99.0:
NEW FEATURES
The perl script to create EnsDb databases fetches also protein annotations.
Added functionality to extract protein annotations from the database (if available) ensuring backward compatibility.
Add proteins vignette.
USER VISIBLE CHANGES
NOTE: As of Ensembl release 88 the name of the script was changed from variant_effect_predictor.pl to vep.
NEW FEATURES
o add support for Ensembl release 85-88
MODIFICATIONS
o document parseCSQToGRanges() behavior when no 'CSQ' data are found
o parseCSQToGRanges() returns mcols with names from CSQ field when CSQ is present but empty
o add DBI perl module to SystemRequirements
Changes in version 2.5:
Add ‘revisualize’ method to add a new visualization using the same measurements as an existing visualization
can save an ‘EpivizApp’ to disk as an ‘rda’ file and restart it using the ‘restartEpiviz’ function
can use measurements from a remote epiviz UI server session to create visualizations from R. With this, remote epiviz UI sessions are now fully scriptable through R.
Changes in version 999.999:
Changes in version 999.999:
Changes in version 999.999:
Changes in version 1.1.2:
fixes for ggplot 2.2.0: count for stat_bin_hex, center title
fix use axis.text.y twice
Changes in version 1.1.1:
add Rbuildignore
test with rbokeh 0.5.0
correct documentation to avoid a warning in R CMD check
Changes in version 1.3.3:
NEW FEATURES
Changes in version 1.3.2:
BUG FIXES
Changes in version 1.3.1:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.1:
Changes in version 1.1.2:
Changes in version 1.1.1:
Changes in version 1.1.9 (2017-04-19):
User Visible Changes
Users may now specify one or more internal standard sizes in the functions FlowHist() and batchFlowHist(), via the argument ‘standards’. These values will be presented to the user in the browseFlowHist() viewer, and after being set by the user, the value will be used to calculate the GC size in pg in the function tabulateFlowHist (a call sketch follows this list).
Users may now select which peak in the histogram to treat as the internal standard when calculating GC values.
New vignettes added: “Getting Started”, and “Histogram Tour”.
Old vignette “overview” removed.
Internal help pages greatly expanded, including many internal functions. See ?flowPloidy for an overview
Many minor bug fixes and GUI tweaks (for browseFlowHist).
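An illustrative sketch of the new ‘standards’ workflow (the example files come from the companion flowPloidyData package and the channel name follows its examples; both are assumptions to verify locally):

    library(flowPloidy)
    library(flowPloidyData)            # provides flowPloidyFiles
    fh <- batchFlowHist(flowPloidyFiles, channel = "FL3.INT.LIN",
                        standards = c(2.5, 5.0))  # candidate standard sizes (pg)
    fh <- browseFlowHist(fh)           # select peaks/standard in the GUI
    tabulateFlowHist(fh)               # GC sizes reported in pg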
Changes in version 1.1.3 (2016-11-25):
User Visible Changes
Changes in version 1.1.2 (2016-11-22):
User Visible Changes
Added support for processing files with two standards present. A new argument is available for functions that load FCS files (batchFlowHist, FlowHist etc.): samples. By default this is set to 2, to account for a single unknown and a co-chopped standard. If you are using two co-chopped standards (or really anytime you have three distinct samples chopped together), set samples = 3. This can also be changed interactively in the browseFlowHist GUI.
The layout of the browseFlowHist GUI has been re-arranged somewhat to accommodate the new features mentioned above.
The linearity flag is over-ridden when no G2 peaks are present. Without a G2 peak, linearity can’t be properly fit. This leads to linear gradients, because the linearity parameter is used in the S-phase components.
Changes in version 1.1.1 (2016-10-26):
Internal Changes
Improved peak finding algorithm
Reduced region searched for the starting bin for model fitting. Was originally 20, now set to 10. Need more data to establish best approach.
Changes in version 1.7.2:
NEW FEATURES
Changes in version 1.7.1:
NEW FEATURES
Version: 1.0.1 Text:
Changes in version 1.19.4:
Changes in version 1.19.1:
BUGFIX: Updated NAMESPACE file to conform with R CMD check.
BUGFIX: reactome2cmap function is available again.
NEW: Added citation information.
Changes in version 1.15.1:
Changes in version 1.12.0:
UTILITIES
update liblzma to v5.2.3
update lz4 to v1.7.5
a new citation
Changes in version 1.10.1:
NEW FEATURES
Changes in version 1.3.2:
BUGFIXES
Circularity (from LOCUS header information) is now applied to all sources in the file.
Improved assertion related to id ordering in makeTxDbFromGenBank which passed in previous R version but was failing in recent ones
Changes in version 1.0.0:
Changes in version 1.17.5:
Changes in version 1.17.4:
Changes in version 1.17.3:
Changes in version 1.17.2:
add function exportNetwork.
add cytoscape-searchbox
update documents.
Changes in version 1.17.1:
Changes in version 1.2.0:
Changes in version 2.6.0:
Major bug fix: assocTestSeq no longer drops some variants from aggregate tests in the case where the same variants are included in more than one aggregate unit.
Added function for analysis of admixture mapping data.
Changes in version 1.15.0:
Changes in version 1.7.3:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.7.2:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.7.1:
NEW FUNCTIONS AND FEATURES
scoreMatrixBin() calculates coverage over windows that are not only GRanges, but also GRangesList. It’s useful for calculating transcript coverage of a set of exons (a short sketch follows this list).
ScoreMatrix-like functions work with bigWig files and supplied weight.col and is.noCovNA=TRUE
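A minimal sketch of the GRangesList case (toy objects; ScoreMatrixBin() is the exported name of the function referred to above):

    library(genomation)
    library(GenomicRanges)
    target <- GRanges("chr1", IRanges(c(100, 400), width = 200),
                      score = c(1, 2))                    # signal with weights
    exons  <- GRangesList(tx1 = GRanges("chr1",
                      IRanges(c(120, 420), width = 50)))  # exons of a transcript
    sm <- ScoreMatrixBin(target = target, windows = exons,
                         bin.num = 10, weight.col = "score")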
IMPROVEMENTS AND BUG FIXES
Added warning for rpm=TRUE and type=’bigWig’
type=’auto’ by default in ScoreMatrix-like functions
narrowPeak() and broadPeak() are 0-based by default (#144 fixed)
Fixed error in readGeneric when reading files with numeric chromosomes (#133 fixed)
Show warning if windows fall off target
Show error if windows have width 1
Changes in version 1.12.0:
NEW FEATURES
Add function standardChromosomes()
Seqlevels() setter now supports “fine” and “tidy” modes on GRangesList and GAlignmentsList objects
Add assembly_accessions dataset
MODIFICATIONS
Updated mapping table between UCSC and Ensembl to include recent builds
Use https instead of http to fetch stuff from NCBI
Replace ‘force=TRUE’ with ‘pruning.mode=”coarse”’ in seqlevels() setter
Add ‘pruning.mode’ argument to the keepSeqlevels(), dropSeqlevels(), and keepStandardChromosomes() functions (see the short example after this list). IMPORTANT NOTE: Like for the seqlevels() setter, the default pruning mode is “error”, which means that now these functions fail when some of the seqlevels to drop from ‘x’ are in use. The old behavior was to silently prune ‘x’ (doing “coarse” pruning)
Update files in data directory
Updated internal functions .lookup_refseq_assembly_accession() and fetch_assembly_report() for speed and efficiency
move some files from GenomeInfoDb/data/ to GenomeInfoDbData annotation package
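A short sketch of the new ‘pruning.mode’ argument (toy GRanges):

    library(GenomicRanges)   # also loads GenomeInfoDb
    gr <- GRanges(c("chr1", "chr2", "chrM"), IRanges(1:3, width = 10))
    ## "error" (the default) fails if in-use seqlevels would be dropped;
    ## "coarse" reproduces the old force=TRUE behaviour:
    gr2 <- keepSeqlevels(gr, c("chr1", "chr2"), pruning.mode = "coarse")
    gr3 <- keepStandardChromosomes(gr, pruning.mode = "coarse")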
BUG FIXES
Changes in version 1.31.1:
Changes in version 1.28:
NEW FEATURES
makeTxDbFromUCSC() supports new composite “NCBI RefSeq” track for hg38.
Add ‘metadata’ argument to makeTxDbFromGFF().
Add exonicParts() as an alternative to disjointExons() (a short usage sketch follows the NEW FEATURES items):
- exonicParts() has a ‘linked.to.single.gene.only’ argument (FALSE by default) that is similar to the ‘aggregateGenes’ argument of disjointExons() but with opposite meaning. More precisely, ‘exonicParts(txdb, linked.to.single.gene.only=TRUE)’ returns the same exonic parts as ‘disjointExons(txdb, aggregateGenes=FALSE)’.
- Unlike ‘disjointExons(txdb, aggregateGenes=TRUE)’, ‘exonicParts(txdb, linked.to.single.gene.only=FALSE)’ does NOT discard exon parts that are not linked to a gene.
- exonicParts() is almost twice as efficient as disjointExons().
Add intronicParts(): similar to exonicParts() but returns intronic parts.
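A minimal sketch of the two new functions, assuming the TxDb.Hsapiens.UCSC.hg19.knownGene annotation package is installed (any TxDb would do):

    library(GenomicFeatures)
    library(TxDb.Hsapiens.UCSC.hg19.knownGene)
    txdb <- TxDb.Hsapiens.UCSC.hg19.knownGene

    ## same exonic parts as disjointExons(txdb, aggregateGenes=FALSE):
    ep <- exonicParts(txdb, linked.to.single.gene.only = TRUE)
    ## intronic counterpart:
    ip <- intronicParts(txdb, linked.to.single.gene.only = TRUE)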
SIGNIFICANT USER-VISIBLE CHANGES
DEPRECATED AND DEFUNCT
Argument ‘force’ of seqlevels() setters is deprecated in favor of new and more flexible ‘pruning.mode’ argument.
Remove the ‘vals’ argument of the “transcripts”, “exons”, “cds”, and “genes” methods for TxDb objects (was defunct in BioC 3.4).
BUG FIXES
Changes in version 1.28.0:
NEW FEATURES
Add coercion from ordinary list to GRangesList. Also the GRangesList() constructor function now accepts a list of GRanges as input (and just calls the new coercion from list to GRangesList internally). See the sketch after this list.
seqlevels() setter now supports “fine” and “tidy” pruning modes on GRangesList objects (in addition to “coarse” mode, which is the default).
“range” methods now have a ‘with.revmap’ argument (like “reduce” and “disjoin” methods).
Add a bunch of range-oriented methods for GenomicRangesList objects.
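A minimal sketch of the new list-to-GRangesList coercion, with toy ranges:

    library(GenomicRanges)

    gr1 <- GRanges("chr1", IRanges(1, 10))
    gr2 <- GRanges("chr2", IRanges(5, 20))
    grl_a <- as(list(a = gr1, b = gr2), "GRangesList")  # new coercion
    grl_b <- GRangesList(list(a = gr1, b = gr2))        # constructor now takes a list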
SIGNIFICANT USER-LEVEL CHANGES
Some changes/improvements to “precede” and “follow” methods for GenomicRanges objects motivated by discussion on support site:
Some changes/improvements to “rank” method for GenomicRanges objects:
DEPRECATED AND DEFUNCT
BUG FIXES
Changes in version 0.99.0:
USER VISIBLE CHANGES
Changes in version 1.4.8:
Changes in version 1.4.7:
Changes in version 1.4.6:
Changes in version 1.4.5:
fixed issue where plots would not render, or took a while to render, for functions requiring a genome and using hg38
Dramatically increased performance of cnFreq()
cnFreq() no longer requires genomic segments to be identical across samples
cnFreq() can now selectively plot chromosomes
cnFreq() can now plot both frequency/proportion regardless of data input type
Changes in version 1.4.4:
Changes in version 1.4.3:
Changes in version 1.4.2:
Changes in version 1.4.1:
Changes in version 1.7.11:
Changes in version 1.7.10:
add message for subview, inset, phylopic, theme_transparent and theme_inset <2017-03-23, Thu> + will be defunct in version >= 1.9.0 + user should use ggimage package to annotate tree with graphic object or image file
update subview to support mainview produced by ggplot() + layers <2017-03-13, Mon>
Changes in version 1.7.9:
fixed geom_range to support height_0.95_HPD <2017-03-03, Fri>
fixed geom_tiplab(geom=’label’) <2017-03-02, Thu> +
Changes in version 1.7.8:
get_taxa_name now sorted by taxa position and also support whole tree <2017-03-01, Wed>
unrooted layout support branch.length=”none”, fixed #114 <2017-03-01, Wed>
remove apeBootstrap and raxml object support as they were removed from treeio <2017-02-28, Tue>
Changes in version 1.7.7:
supports parse=”emoji” in geom_cladelabel, geom_text2, geom_label2, geom_tiplab, geom_tiplab2 <2017-02-16, Thu>
aes(subset) now support logical vector contains NA <2017-02-16, Thu>
add legend transparency to theme_transparent <2017-02-13, Mon> +
update citation info <2017-01-20, Fri>
Changes in version 1.7.6:
Changes in version 1.7.5:
disable labeling collapsed node as tip <2017-01-03, Tue> +
fortify.phylo4d via converting phylo4d to treedata object <2016-12-28, Wed>
improve viewClade function, use coord_cartesian instead of xlim <2016-12-28, Wed>
remove codes that move to treeio and now ggtree depends treeio <2016-12-20, Tue>
Changes in version 1.7.4:
is.ggtree function to test whether object is produced by ggtree <2016-12-06, Tue>
now branch.length can set to feature available in phylo4d@data and yscale is supported for phylo4d object <2016-12-06, Tue>
bug fixed of rm.singleton.newick, remove singleton parent instead of singleton <2016-12-01, Thu>
reorder phylo to postorder before ladderrize <2016-11-28, Mon>
allow yscale to use data stored in phylo4d object <2016-11-24, Thu> +
groupOTU method now accept ‘overlap = c(“overwrite”, “origin”, “abandon”)’ parameter <2016-11-16, Wed> +
Changes in version 1.7.3:
drop.tip method for NHX object <2016-11-11, Fri>
update startup message <2016-11-09, Wed>
reverse timescale x-axis <2016-11-07, Mon> +
Changes in version 1.7.2:
make missing colors in gheatmap invisible (previously use ‘white’) <2016-11-03, Thu>
xlim_expand for setting x axis limits of specific panel <2016-11-01, Tue> + xlim_tree is now a specific case of xlim_expand(xlim, panel=’Tree’)
bug fixed of parsing tree text in beast file <2016-10-31, Mon> +
Changes in version 1.7.1:
xlim_tree layer and test <2016-10-31, Mon> + set x axis limits for Tree panel for facet_plot
update read.nhx <2016-10-30, Sun> + add tip numbers to @nhx_tags and add tests + store nhx_tags$node as numeric values <2016-10-31, Mon>
facet_plot supports ggbio::geom_alignment <2016-10-26, Wed> +
make tree stats available in facet_plot <2016-10-24, Mon>
Changes in version 0.99.0:
Changes in version 1.3.0:
Added highlighting to bars in MDS plot.
Added interaction with table when clicking on points in MD plot.
Changed expressions to default to no transformation.
Changed default colours.
Changed style of table.
Changed size of highlighted points.
Changes in version 1.4.0 (2016-04-25):
Changes in version 1.9.2:
BUG FIXES
Changes in version 1.9.1:
BUG FIXES
GENERAL UPDATES
Version: 1.1.3 Text:
Changes in version 2.1.3:
friendly error message for using IC method without IC computed <2017-02-17, Fri> +
fixed <2016-12-20, Tue>
Changes in version 2.1.2:
use prettydoc for vignette <2016-11-30, Wed>
remove using BiocStyle <2016-11-23, Wed>
Changes in version 2.1.1:
Version: 1.8 Category: NEW FEATURES Text: AllAssoc() now reports a Z-score for HWE
Version: 1.8 Category: NEW FEATURES Text: tqbrowser() facilitates interactive viewing of trans associations
Version: 1.8 Category: SIGNIFICANT USER-VISIBLE CHANGES Text: reliance on GGtools has been eliminated
Changes in version 1.21.1 (2017-04-24):
Changes in version 0.99.8 (2016-10-19):
Added functions
Added help pages
Added vignette
Changes in version 1.24:
BUG FIXES
Changes in version 1.7.1:
sizes of axes and titles are now correctly calculated
add_track supports raster image
Changes in version 1.20.0:
BUG FIXES
Changes in version 1.21.1:
Replace ZIP_RA with LZMA_RA for GDS compression.
Default is no compression for genotypes.
Changes in version 1.9.0:
Changes in version 1.11.1:
change “hg20” to “hg38” according to the UCSC Genome Browser datasets and documentation
add “DRB3” and “DRB4” to the HLA gene list
Version: 0.99.1 Text: 1. added unit tests 2. adjusted spacing in the source code 3. added some missing parts in the documentation
Version: 0.99.0 Category: INITIAL RELEASE Text:
Changes in version 1.5.2:
hc_map supports adding labels in pixel mode
Changes in version 1.17.2:
Changes in version 1.17.1:
Using travis and codecov <2016-12-22 Thu>
Migrate vignette to BiocStyle’s html2 <2016-12-22 Thu>
Changes in version 1.17.0:
Changes in version 4.5.1:
Changes in version 1.14.1:
Changes in version 0.99.0:
NEW FEATURES
Ready for Bioc submission
Completed the news
Changes in version 0.9.1:
NEW FEATURES
Added instructions, taken in full from the rendered version of the vignette, to have them available at runtime
Added support for downloading all plots and tables
Changes in version 0.9.0:
NEW FEATURES
Interactive tours are covering now all tabs, with extensive walkthroughs for the user
Added all screenshots to vignette
Changes in version 0.6.2:
NEW FEATURES
Interactive tours are now available, coded in external files
Travis-CI is now supported
Changes in version 0.6.0:
NEW FEATURES
Added MA plot with extra custom list to avoid manual selection of many genes
MA plot function now automatically supports a subset of genes to be additionally plotted
Added documentation with roxygen to all functions
Heatmap functions for genes annotated to a GO term as signature
Template report also provided
Full draft of vignette now available, working towards bioc submission
Added textual help to all sections, with collapsible element
Added proof of principle to have interactive tours based on rintrojs
Changes in version 0.4.0:
NEW FEATURES
Gene box info added, based on rentrez
New look for MA plots and volcano plots
Changes in version 0.3.0:
NEW FEATURES
Changes in version 0.2.0:
NEW FEATURES
Changes in version 0.1.0:
NEW FEATURES
Changes in version 0.99.0 (2017-03-03):
Changes in version 1.7.5:
Changes in version 1.7.4:
BUG FIXES
Changes in version 1.7.3:
BUG FIXES
Changes in version 1.7.2:
BUG FIXES
Changes in version 1.7.1:
BUG FIXES
Changes in version 1.5.5:
Changes in version 1.5.4:
Changes in version 1.5.3:
Changes in version 1.5.2:
Changes in version 1.5.1:
Changes in version 1.4.0:
Deprecated anchors<- in favour of anchorIds<-, to avoid confusion about ‘value’ type.
Added first(), second() functions for convenience (see the sketch after this list).
Updates to documentation, tests.
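A minimal sketch of the convenience accessors on a toy GInteractions object (the anchor indices below are made up):

    library(InteractionSet)
    library(GenomicRanges)

    regions <- GRanges("chr1", IRanges(c(1, 100, 200), width = 50))
    gi <- GInteractions(anchor1 = c(1, 2), anchor2 = c(2, 3), regions = regions)
    first(gi)   # GRanges of the first anchor of each interaction
    second(gi)  # GRanges of the second anchor
    ## replaces the deprecated anchors<-; value is a list of anchor indices
    anchorIds(gi) <- list(c(1, 1), c(3, 2))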
Changes in version 1.1.2:
vignette updated
plot margins omitted
Changes in version 1.1.1:
use package BiocParallel (via argument BPPARAM) instead of nSlaves to control xcms parallelization
depends on xcms >= 1.50.0
format writeRScript output more beautifully
Changes in version 1.1.0:
Changes in version 2.10.0:
NEW FEATURES
“range” methods now have a ‘with.revmap’ argument (like “reduce” and “disjoin” methods).
Add coercion from list-like objects to IRangesList objects.
Add “table” method for SimpleAtomicList objects.
The “gaps” method for CompressedIRangesList objects now uses a chunk processing strategy if the input object has more than 10 million list elements. The hope is to reduce memory usage on very big input objects.
BUG FIXES
Fix “setdiff” method for CompressedIRangesList for when all ranges are empty.
Fix long standing bug in coercion from Ranges to PartitioningByEnd when the object to coerce has names.
DEPRECATED AND DEFUNCT
Remove the RangedDataList and RDApplyParams classes, rdapply(), and the “split” and “reduce” methods for RangedData objects. All these things were defunct in BioC 3.4.
Remove ‘ignoreSelf’ and ‘ignoreRedundant’ arguments (replaced by ‘drop.self’ and ‘drop.redundant’) from findOverlaps,Vector,missing method (were defunct in BioC 3.4).
Remove GappedRanges class (was defunct in BioC 3.4).
Changes in version 1.3.5:
FEATURES
Add isomiRs naming to documentation
Add design to the object to get better usability
Remove non-template addition with C/G nucleotides by default (canonicalAdd)
Remove sequences with mutations and more than one miRNA hit
Changes in version 1.3.4:
FIXES
FEATURES
Improve code to remove error sequencing from raw data
Improve code to show the raw data with isoSelect
Changes in version 1.3.3:
FIXES
Changes in version 1.3.2:
OTHERS
Changes in version 1.3.1:
OTHERS
Changes in version 1.95.6:
Changes in version 1.95.5:
Changes in version 1.9.0:
Modify RatioFromFPKM, Splicingfinder, and sQTLsFinder
Modify manual and tutorial
Changes in version 1.8.0:
Defunct MsqtlFinder, calSignificant, and sqtlfinder.
Adjust Chr names between GTF and SNP locus data.
Change test of UTR region.
sqtl finder reform
Create new functions (RatioFromFPKM, Splicingfinder, and sQTLsFinder)
Create ASclass object
Changes in version 1.5:
NEW FEATURES
s3_1kg() generates TabixFile references to 1000 genomes VCF in AWS S3 bucket
ldByGene() obtains linkage information using snpStats ld() and erma genemodel() to retrieve focused information from VCF
Changes in version 3.32.0:
New function cameraPR(), which implemented a pre-ranked version of camera().
New function alias2SymbolUsingNCBI(), which converts gene aliases or synonyms into official gene symbols using an NCBI gene-info file.
New function wsva() for weighted surrogate variable analysis.
New function coolmap(). This is essentially a wrapper for the heatmap.2() function in the ggplots package, but with sensible default settings for genomic log-expression data.
decideTests() is now an S3 generic function with a default method and a method for MArrayLM objects. decideTests() now selects all null hypotheses as rejected if p.value=1.
length() methods removed for all limma data objects (objects of class EList, EListRaw, RGList, MAList or MArrayLM). length(x) will now return the number of list components in the object rather than the number of elements in the expression matrix.
New argument ‘style’ for volcanoplot(). The default is now to use -log10(p-value) for the y-axis instead of the B-statistic.
New argument ‘xlab’ for barcodeplot().
New argument ‘col’ for plotSA(). plotSA() no longer plots a lowess curve trend, but if appropriate both high and low outlier variances are highlighted in a different color.
Argument ‘replace.weights’ removed from voomWithQualityWeights(). The function now always produces an EList, similar to voom(). The default behavior of the function is unchanged.
barcodeplot() now ranks statistics from low to high, instead of from high to low, following the usual style of axes in R plots. This means that left and right are now interchanged.
plotSA() now plots quarter-root variances instead of log2(variances).
Default for ‘legend’ argument of plotWithHighlights() changed from “topleft” to “topright”.
fitFDist() now estimates the scale by mean(x) when df2 is estimated to be Inf. This will make the results from eBayes() less conservative than before when df.prior=Inf.
plotSA() now indicates, by way of an open plotting symbol, any points that have low robust df.prior values.
Clearer error message from fitFDistRobustly() when some variances are zero.
C functions are now registered using R_registerRoutines.
Bug fix for contrastAsCoef() when there is more than one contrast. Previously the coefficients for the transformed design matrix were correct only for the first contrast.
Bug fix for kegga() when the universe is explicitly specified.
Bug fix for fitFDistRobustly() when there is an extreme outlier. Previously floating point underflow for the outlier p-value could cause an error.
Bug fix to mroast(), which was ignoring ‘geneid’ argument.
Bug fix to printHead() for arrays with 1 column.
Changes in version 0.99.0:
Changes in version 1.3.0:
Added function alignSeq
Added function phyloSeq
Added function exportFasta
Added function differentialAbundance
Added function clonalRelatedness
Added function commonSeqsBar
Changes in version 1.9.2-3:
Changes in version 1.9.1:
Changes in version 1.2.0:
NEW FUNCTIONS
mafSurvival - Performs survival analysis.
tcgaCompare - Compares mutation load from a given MAF against all 33 TCGA cohorts (see the sketch after this list).
pancanComparision - Performs pan-cancer analysis/comparison.
prepareMutSig - Prepares MAF file for MutSig analysis by fixing discrepancies in gene symbols.
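A minimal sketch, assuming a MAF file and TCGA-style clinical column names; the file name and column names below are hypothetical:

    library(maftools)

    ## laml <- read.maf(maf = "tcga_laml.maf.gz")   # hypothetical input file
    ## mutation load of this cohort against all 33 TCGA cohorts:
    ## tcgaCompare(maf = laml, cohortName = "Example-LAML")
    ## survival analysis stratified by mutation status of a gene:
    ## mafSurvival(maf = laml, genes = "DNMT3A",
    ##             time = "days_to_last_followup",
    ##             Status = "Overall_Survival_Status", isTCGA = TRUE)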
SIGNIFICANT USER-LEVEL IMPROVEMENT
NON SIGNIFICANT CHANGES
Changes in version 1.1.3:
NEW FEATURES
Added class ‘drle’ for delta-run-length encoding vectors
Added ‘+’, ‘-‘, ‘*’, ‘/’, ‘^’, ‘exp’, ‘log’, ‘log2’, and ‘log10’ as possible delayed operations to on-disk atoms
SIGNIFICANT USER-VISIBLE CHANGES
Slots of ‘atoms’ class now use delta-run-length encoding
Reduced metadata size by changing ‘atoms’ class to use groups rather than relying on a ‘list’ of ‘atoms’
The ‘scale’ method for ‘matter_mat’ now matches ‘scale.default’ more correctly when ‘center = FALSE’ and ‘scale = TRUE’
Changes in version 1.1.2:
NEW FEATURES
Added support for char, uchar, ushort, uint, and ulong datamodes
Added support for raw (Rbyte) matter objects
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Changes in version 1.1.1 (2016-11-29):
NEW FEATURES
Added ‘crossprod’ (t(x) %*% y) and ‘tcrossprod’ (x %*% t(y)) methods
Added ‘atomdata’ accessor method, for which ‘adata’ is now an alias
BUG FIXES
Added S3 versions of some S4 methods to fix scoping issues
Removed Cardinal package from Suggests to avoid circular dependency
Reduced memory consumption in bigglm-matter method
Changes in version 1.0.0:
Changes in version 1.9.1:
Changes in version 1.5.2:
MetaboSignal includes a new function: “MS_interactionType()”. This function allows getting the interaction subtype between signaling nodes. The output matrix generated by this function can be used for “MetaboSignal_NetworkCytoscape()” and also for “MS_GetShortestpaths()”.
“MS_GetShortestpaths()” has been modified and now the output shortest path(s) can be represented as a network-table (i.e. 2-column matrix).
Changes in version 1.6.0 (2017-04-14):
Changes in version 1.5.1 (2017-03-20):
Changes in version 1.1.5:
Changes in version 1.1.4:
change MSP class, create slots mz, rt, names, classes, information and adduct [2017-01-28 Sat]
add tabPanels in shinyCircos (Main, Appearance) [2017-01-28 Sat]
rearrange position of legend, implement option to show/hide legend [2017-01-28 Sat]
rescale plot when changing window size, allow for further scaling/descaling of the plot [2017-01-28 Sat]
adjust convertMSP2MSP to new class MSP, create unit tests [2017-01-29 Sun]
include msp2msp matrix, a test data set for convertMSP2MSP [2017-01-29 Sun]
set methods for names, classes, adduct and information [2017-01-29 Sun]
change the interactive shinyCircos such that the user can update the annotation data of an MSP object (name, class, information and adduct ion information) [2017-01-29 Sun]
Changes in version 1.1.3:
use new email adress [2016-12-05 Mon]
use option to calculate MSP-object from msp-file directly [2016-12-05 Mon]
Changes in version 1.1.2:
use absolute masses when calculating similarities in createSimilarityMatrix (bug fix) [2016-11-17 Thu]
add option links in highlight, i.e. should links be plotted or not? [2016-11-17 Thu]
Changes in version 1.1.1:
Changes in version 1.9:
adapted to changes in minfi, such as read.meth.array
now full support for EPIC arrays
Changes in version 1.1.8:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.1.7:
IMPROVEMENTS AND BUG FIXES
fix methCall segmentation fault; added tests and test files from bismark
fixed missing p.value at coercion of methylBase to GRanges
changes to the pool() function: save.db=TRUE for methylBaseDB by default, differing lengths of given sample.ids and unique treatment lead to error, added tests
fix osx related error when reading gzipped files with methRead
change deprecated function names in test files
fixed bug with dataSim() function, updated the manual
Changes in version 1.1.6:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.1.5:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.1.4:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.1.3:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.1.2:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.1.1:
IMPROVEMENTS AND BUG FIXES
Fisher’s exact test now works as described in the manual. It is automatically applied in calculateDiffMeth() when there are only two groups with one replicate each.
During logistic regression modeling, samples without counts are removed from the model, but the same filtering was not applied to the covariates data.frame, which could cause errors if the min.per.group argument was used. Now this is fixed.
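A minimal sketch of when each test is chosen, assuming ‘meth’ is a methylBase object produced by unite(); the covariates data frame below is hypothetical:

    library(methylKit)

    ## one replicate per group: Fisher's exact test is applied automatically
    ## myDiff <- calculateDiffMeth(meth)
    ## with replicates and covariates: logistic regression is used instead
    ## covariates.df <- data.frame(age = c(30, 45, 32, 51))  # hypothetical
    ## myDiff <- calculateDiffMeth(meth, covariates = covariates.df)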
Changes in version 0.99.0:
DOCUMENTATION
NEWS file was added.
First functional version
Changes in version 1.21:
Moving RGChannelSet, MethylSet and RatioSet from building on eSet (from Biobase) to SummarizedExperiment (from SummarizedExperiment). Most important changes are that the constructor functions now uses the argument colData instead of pData; some of them have more arguments. The updateObject methods have been extended to update to the new class backend. While the pData, sampleNames, featureNames methods still work, we recommend (at least for package writers) to move to colData, colnames and rownames.
Reverted the bugfix to preprocessQuantile mentioned under news for version 1.19. Our fix was wrong; the original code did not have a bug. Thanks to users who reported issues with the function (Frederic Fournier and David Martino).
bugfix for getSnpBeta for subsetted (and combined) RGChannelSets (reported and diagnosed by Warren Cheung).
Accessing the manifest or annotation now fails for an ‘unknown’ array.
We now support gzipped IDAT files.
Fixed a bug in read.metharray() which resulted in an error in some situations when running the function with argument force=TRUE to read IDAT files of different length. Reported by Maria Calleja Cervantes.
Changes in version 1.5.1:
Changes in version 1.3.1:
SIGNIFICANT USER-LEVEL CHANGES
Updated citation
Updated vignette
Changes in version 1.1.2:
NEW FEATURES
USER-LEVEL CHANGES
Changes in version 1.0.1:
USER-LEVEL CHANGES
BUG FIXES
Changes in version 2.4.0:
The default expressionFamily is now negbinomial.size, instead of Tobit. If you are using TPM or FPKM data, we urge you to convert it to relative transcript counts with relative2abs and use the negative binomial distribution in your CellDataSet objects (see the sketch after this list).
Revamped clusterCells functionality based on t-SNE and densityPeak
New procedure for selected ordering genes called “dpFeature”. See vignette for details.
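A minimal sketch of the recommended conversion, assuming an existing CellDataSet ‘cds_fpkm’ holding FPKM values (the object name and method choice are assumptions):

    library(monocle)

    ## estimate relative transcript counts from FPKM values:
    ## rpc_matrix <- relative2abs(cds_fpkm, method = "num_genes")
    ## rebuild the CellDataSet with the negative binomial family:
    ## cds <- newCellDataSet(as.matrix(rpc_matrix),
    ##                       phenoData = phenoData(cds_fpkm),
    ##                       featureData = featureData(cds_fpkm),
    ##                       expressionFamily = negbinomial.size())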
Changes in version 0.99.0:
Changes in version 1.19.4:
NEW FEATURES
Changes in version 1.19.3:
NEW FEATURES
Changes in version 1.19.2:
NEW FEATURES
Changes in version 1.19.1:
NEW FEATURES
Changes in version 1.7.2:
adapted to changes in Biostrings (>= 2.40.0) in order to make sure that consensusMatrix() also works correctly for masked alignments
corresponding changes in documentation and vignette
Changes in version 1.7.0:
Changes in version 2.1.18:
Changes in version 2.1.17:
Changes in version 2.1.16:
Changes in version 2.1.15:
Changes in version 2.1.14:
Internal rewrite and speedup of topN; briefly, multiple apply calls are avoided, and getTopIdx and subsetById are replaced by .topIdx. See PR #199 for details. <2017-03-20 Mon>
Fix mz calculation for terminal modifications and z > 1 in calculateFragments; closes #200 <2017-03-22 Wed>
Fix errors and notes <2017-03-30 Thu>
Changes in version 2.1.13:
Changes in version 2.1.12:
Import dist from stats <2017-02-25 Sat>
Fix failing example <2017-02-25 Sat>
Changes in version 2.1.11:
Fix breaks calculation for binning single (closes #191) and multiple (closes #190) spectra. The fix for single spectra (#191) could result in slightly different breaks on the upper end of the m/z values. <2017-02-10 Fri>
New aggvar function, to assess aggregation variability <2017-02-11 Sat>
Changes in version 2.1.10:
New diff.median normalisation for MSnSets. <2017-01-26 Thu>
Fix combineFeatures message <2017-02-01 Wed>
Changes in version 2.1.9:
When fully trimmed, an (empty) spectrum has peaksCount of 0L - see <2017-01-20 Fri>
Add filterEmptySpectra,MSnExp method (see issue #181) <2017-01-20 Fri>
Add a section about notable on-disk and in-memory differences (was issue #165) <2017-01-20 Fri>
Changes in version 2.1.8:
Changes in version 2.1.7:
Setting default sorting using “auto” on R < 3.3 and “radix” otherwise <2017-01-03 Tue>
filterMz returns an empty spectrum when no data is within the mz range (see issue #181) <2017-01-16 Mon>
Performance improvement: a new private .firstMsLevel will efficiently return the first MS level in an MSnExp and OnDiskMSnExp. See issue #183 for details/background <2017-01-18 Wed>
Changes in version 2.1.6:
Migrate io and dev vignettes to BiocStyle’s html_document2 style <2016-12-23 Fri>
Update show method to display class.
Migrated to NEWS.md <2016-12-23 Fri>
Update DESCRIPTION (and README) to reflect wider usage of MSnbase (replaced MS-based proteomics by mass spectrometry and proteomics) <2016-12-23 Fri>
Changes in version 2.1.5:
Changes in version 2.1.4:
Changes in version 2.1.3:
Changes in version 2.1.2:
Update readMSnSet2 to save filename <2016-11-09 Wed>
Ensure that header information is read too if spectra data is loaded for onDiskMSnExp objects (see issue #170) <2016-11-24 Thu>
Changes in version 2.1.1:
Fix typo in impute man page <2016-10-19 Wed>
Cite Lazar 2016 in vignette imputation section <2016-10-28 Fri>
Changes in version 2.1.0:
New version for Bioconductor devel
New version for Bioconductor release version 3.4
Changes in version 1.1.1:
Added pcalc functions to be used by user
Added option to remove isotopes from calculation
Changes in version 1.0.0:
SIGNIFICANT USER-VISIBLE CHANGES
NEW FEATURES
Changes in version 2.9.11:
Changes in version 2.9.10:
Changes in version 2.9.9:
Changes in version 2.9.8:
Changes in version 2.9.7:
Changes in version 2.9.6:
Changes in version 2.9.5:
Changes in version 2.9.4:
Changes in version 2.9.3:
Changes in version 2.9.2:
cleanup CFLAGS and LIBS for libnetcdf
add missing file for oaxaca (Apple clang 3.5svn / 600.0.57)
Changes in version 2.9.1:
Delete RAMPAdapter pointer in pwiz backend (by jotsetung) <2016-11-20 Sun>
Use spectra in addition to peaks (see issue #15) <2016-12-09 Fri>
New pwiz (commit 946d23d75dc70a7a4913d8e05e3d59b9255f278e)
Changes in version 2.9.0:
Changes in version 1.7.1 (2017-04-10):
Changes in version 1.0.0:
Changes in version 0.99.0:
Devel version 0.99.0
edgenet penalization using CCD
Implementation of model selection using Dlib
Changes in version 1.15.1:
Changes in version 1.3.3 (2017-03-27):
More bug fixes to the queries to PubMed.
Implementing a function for combining genes.
Support HTTPS.
Changes in version 2.6.0:
Many additions to the vignette and documentation.
LOD and POM (lines of descent, path of maximum, sensu Szendro et al.).
Diversity of sampled genotypes.
Genotyping error can be added in samplePop.
fixation of a genotype/gene as stopping mechanism.
rfitness: shifting by subtraction and mu of normal distribution.
simOGraph: using proper transitive reduction.
simOGraph can also output rT data frames.
accessible genotypes now done in C++.
Handling of trivial cases in genotFitness.
Clarified McFarland parameterization.
Better (and better explained) estimates of simulation error for McFL.
AND of detectedSizeP and lastMaxDr.
sampledGenotypes in user code.
clonePhylog et al: deal with never any descendant.
samplePop can handle failed simulations graciously.
summary.oncosimulpop can handle failed simulations graciously.
Citation shows Bioinformatics paper.
Changes in version 2.5.14 (2017-04-07):
Changes in version 2.5.13 (2017-04-07):
Changes in version 2.5.12 (2017-02-18):
rfitness: allow simple forcing of wt to 1, shifting by subtraction, and specifying mu of normal distribution.
simOGraph: proper trm comparison.
Citation now shows Bioinformatics reference.
Changes in version 2.5.11 (2017-01-27):
Changes in version 2.5.10 (2017-01-27):
Changes in version 2.5.9 (2017-01-09):
Changes in version 2.5.8 (2016-12-17):
Changes in version 2.5.7 (2016-12-15):
Changes in version 2.5.6 (2016-12-14):
Changes in version 2.5.5 (2016-12-14):
Changes in version 2.5.4 (2016-12-12):
Changes in version 2.5.3 (2016-12-12):
Vignette uses pander in tables.
Typos fixed and other enhancements in vignette.
Changes in version 2.5.2 (2016-12-10):
Lots and lots of addition to vignette including benchmarks.
Diversity of sampled genotypes.
Genotyping error can be added in samplePop.
LOD and POM (lines of descent, path of maximum, sensu Szendro et al.).
simOGraph can also output rT data frames.
Better (and better explained) estimates of simulation error for McFL.
Changes in version 2.5.1 (2016-11-12):
AND of detectedSizeP and lastMaxDr.
fixation as stopping mechanism.
sampledGenotypes in user code.
clonePhylog et al: deal with never any descendant.
samplePop can handle failed simulations graciously.
summary.oncosimulpop can handle failed simulations graciously.
accessible genotypes now done in C++.
OcurringDrivers should not be a factor.
samplePop always returns gene names.
to_Magellan is much faster with rfitness objects.
Several improvements in vignette (English and additional explanations).
Changes in version 1.0.0:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Changes in version 1.18.0:
BUG FIXES
Changes in version 1.15.1:
Changes in version 0.15.3:
Working up new API (see issue #41) <2017-04-09 Sun>
Temporarily remove some vignettes, until the API has stabilised. <2017-04-09 Sun>
Changes in version 0.15.2:
New Proteins class implementation - see issue #38. <2016-12-10 Sat>
Fixed many errors from current rewrite <2017-04-08 Sat>
Changes in version 0.15.1:
Proteins,EnsDb method allowing to fetch a Proteins object from an EnsDb database.
Fixes in the mapToGenome method: - Works also for negative strand encoded proteins (issue #29). - Supports a GRangesList object with arbitrary mcols. - Faster implementation (issue #30).
mapToGenome,Proteins,EnsDb and pmapToGenome,Proteins,EnsDb methods that map peptide features to the genome using annotations fetched from an EnsDb.
The seqnames of the Proteins object are used as names for the resulting GRangesList object from the mapToGenome and pmapToGenome methods (issue #34).
Drop unique seqnames requirement; see #28, #32
Create names as synonym for seqnames; close #32
Changes in version 1.3.2:
CODE
test_subjectReport updated after subjectReport modification.
DEPENDENCIES
Changes in version 1.3.1:
CODE
subjectReport bug over duplicated axis.text.x parameter removed.
Changes in version 1.3.0:
VERSION
Changes in version 2.2.0:
NEW FEATURES
BUG FIXES
OTHER NOTES
Changes in version 1.5.1:
Changes in version 1.1.0:
USER-VISIBLE CHANGES
Inverse ILR ilrpInv is now implemented, as well as the inverse clrp transform (clrpInv) and the inverse shiftp (shiftpInv) function.
In order to invert more generally transformed PhILR data (e.g., with branch length weights), a philrInv function has been created; this is likely the most user-friendly way to invert any transformed data (see the sketch after this list).
Updated documentation
Various Bugfixes
Added updated citation information for package
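A minimal sketch of the round trip, assuming ‘df.philr’ was produced by philr() with tree ‘tr’ (object names and weight choices are assumptions):

    library(philr)

    ## df.philr <- philr(df, tr, part.weights = "enorm.x.gm.counts",
    ##                   ilr.weights = "blw.sqrt")
    ## invert back to relative abundances:
    ## df.prop <- philrInv(df.philr, tree = tr)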
Changes in version 1.3.3:
NEW FEATURES
Changes in version 1.3.2:
NEW FEATURES
Changes in version 1.14.5:
BUG FIXES
Reset plot layout after networkPlot.
Issue warning instead of error in writeFilesForKiwi when gene-level statistics are not p-values. This means that the GLS file will not be generated, but the GSS and GSC files are.
Changes in version 1.14.3:
BUG FIXES
Changes in version 1.1.10 (2017-03-30):
Changes in existing functions
Checking the pigengene input of module.heatmap().
Issues in the balance() function (not exported) where Labels is a factor are resolved. Also, if all samples have the same size, oversampling is automatically turned off.
If Labels is a factor, it is now converted to a character vector in check.pigengene.input().
Changes in version 1.1.6 (2017-03-27):
Changes in existing functions
The module.heatmap() function now has the doAddEigengene and scalePngs arguments.
The compute.pigengene() function now also reports the size of modules in the pigengene_pvalue.csv output file.
Changes in version 1.3:
NEW FEATURES
Added estimated time to complete (ETTC) to the progress line.
New default values for scoring parameters as a result from training on G4-seq experimental data and co-testing on a set of known quadruplexes from literature.
New PQS metadata reported: number of tetrads, bulges, mismatches and loop lengths. To get this data, use the elementMetadata accessor function (see the sketch after this list).
New algorithm option: reporting of all overlapping PQS.
Score distribution is reported. For each sequence position you get maximal score of PQS that was found overlapping the position. For details, see scoreDistribution function.
Novel scoring parameter: exponent of loop length mean in the scoring equation to express non-linear dependency of the PQS propensity to the loop lengths.
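A minimal sketch of the new reporting, on a toy quadruplex-forming sequence:

    library(pqsfinder)
    library(Biostrings)

    seq <- DNAString("TTGGGGAGGGTGGGGAGGGTGGGGAAGGTT")
    pqs <- pqsfinder(seq)
    elementMetadata(pqs)      # tetrads, bulges, mismatches, loop lengths
    ## per-position maxima of overlapping PQS scores:
    ## scoreDistribution(pqs)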
BUG FIXES
Minimal loop length is set back to 0 by default, but only one zero-length loop is allowed by the scoring system.
Fixed unintentional cast of loop length mean factor and loop standard deviation factors from float to integer.
Changes in version 1.15.9:
Changes in version 1.15.8:
Changes in version 1.15.7:
Fix warnings and notes <2017-02-25 Sat>
Add section about dimensionality reduction methods and t-SNE in the tutorial <2017-03-07 Tue>
Fix error due to new uniprot attribute names <2017-04-06 Thu>
Changes in version 1.15.6:
fix (unexported) remap function - see issue #92 <2016-12-15 Thu>
plot2Ds now only adds segments when the featureNames are identical <2017-01-11 Wed>
Import (rather than Suggest) hexbin <2017-01-18 Wed>
Increase margin in QSep plotting (contributed by S. Gibb) <2017-02-07 Tue>
Changes in version 1.15.5:
Update plotDist to use sampleNames() to label x axis ticks and getUnknowncol() as default pcol (see issue #91) <2016-11-08 Tue>
xlab and ylab are args in plotDist <2016-12-04 Sun>
Changes in version 1.15.4:
Changes in version 1.15.3:
Changes in version 1.15.2:
Changes in version 1.15.1:
new plot3D function <2016-10-27 Thu>
update CITATION <2016-10-27 Thu>
use predict:::predict.plsa, as it is not exported anymore <2016-11-01 Tue>
Changes in version 1.9.4:
fixed remap=FALSE bug in compare app <2017-01-12 Thu>
Added mirrorX and mirrorY to the compare app <2017-01-12 Thu>
Changes in version 1.9.3:
Update vignette to use latest BiocStyle::html_document2() with floating table of contents <2016-12-30 Fri>
Change to NEWS.md <2016-12-30 Fri>
mirrorX and mirrorY are now ignored in pRolocVis - see issue #84 <2017-01-11 Wed>
Changes in version 1.9.2:
Changes in version 1.9.1:
Changes in version 1.9.0:
Changes in version 1.7.21:
NEW FEATURES
Changes in version 1.7.19:
NEW FEATURES
Changes in version 1.7.17:
NEW FEATURES
Changes in version 1.7.13:
NEW FEATURES
Changes in version 1.7.11:
NEW FEATURES
Changes in version 1.7.9:
NEW FEATURES
Changes in version 1.7.7:
NEW FEATURES
Changes in version 1.7.5:
BUG FIXES
Changes in version 1.7.3:
NEW FEATURES
Changes in version 1.1.10:
Gene, protein and transcript information:
Fix tooltip text presentation in transcript plot
Fix JavaScript issues when zooming the transcript plot
Fix error when plotting events associated with multiple genes
Fix error when plotting single-exon transcripts
Protein name, length and function are now presented when available
Improved general presentation of the information
Differential splicing analyses:
Click and drag in the plot to zoom in and subsequently filter events shown in the table
Decreased step of sliders
Improve interface of previewed survival curves
When clicking on a table link to navigate to differential splicing analyses of a single event, the appropriate analyses will now be automatically rendered with the respective options, as expected
Settings (renamed to “Help”):
Add links to tutorials and user feedback
Add app information and acknowledgments
Remove unused option for choosing cores (all performed operations are still single-core, given the difficulty of working with multiple processes in Shiny)
Improve dialogs regarding missing data and other minor interface elements
Update documentation with volcano plot
Changes in version 1.1.9:
Differential splicing analyses:
Add volcano plot to represent events through selected attributes, such as p-values and descriptive statistics (e.g. median and variance) between groups of interest
Transform values of the X and Y axis in the plot using log transformed, inverted and absolute values, for instance
Highlight events in the plot based on values of the X and Y axis
Table of differential analyses per alternative splicing event is filtered according to highlighted and selected events in the plot
Gene, protein and transcript information:
Transcript plot is now interactive and zoomable
Proteins are now rendered based on the selected transcript alone
Faster parsing of Uniprot’s web API response
Improve display of article information when data is missing
Principal component analysis:
Improve presentation of available options
When clicking on previews of differential splicing and survival analyses, the appropriate analyses will now be automatically rendered with the respective options
Fix buggy browser history when the user is directed to a different tab
Consistently use Firebrowse and Firehose across the package
Update documentation
Changes in version 1.0.8:
Support GTEx data loading and analysis
Fix clinical data dependency: - Fix error when trying to load a file containing alternative splicing quantification without first loading clinical data - Fix error where samples from junction quantification were matched to clinical information even if clinical data were not loaded - Inform user when clinical data is not loaded while trying to plot survival curves
Improve data grouping: - Create sample groups like patient groups and perform set operations between any created groups - Create groups using patient and sample identifiers - Check number of patients and samples per group - Rename selected groups - Alert user when groups cannot be created due to missing data
Differential splicing analysis: - Analyse all samples as one group
Survival analysis: - Select any clinical attribute for starting/follow up and ending times
Create table containing TCGA sample metadata when calculating or loading alternative splicing quantification
Minor UI improvements
Changes in version 1.0.7:
Changes in version 1.0.6:
Update tutorials with more relevant and complex examples
Update minimum versions required of highcharter (0.5.0) and shiny (1.0.0): - Fix function usage as according to new version of highcharter - More options available when exporting plots (PNG, JPEG, SVG, XLS and CSV)
Faster alternative splicing quantification
Differential splicing analysis: - Fix major bug where samples could be placed in the wrong groups - Shorten the calculation of the optimal PSI cut-off that minimises the survival difference - Fix statistical tests not being performed for two selected sample types while analysing a single event with three or more sample types - Fix differential analysis on one splicing event not working when using diffAnalyses() function - Fix differential analysis not showing for individual events before navigating to the page where the analysis is performed for all events - Improve readability and information of statistical tests for single events
Principal component analysis: - Shorten time taken to calculate principal components and to render the loadings plot - Fix loadings plot error when rendering some principal components
Survival analysis: - Fix incorrect number of patients from the survival groups in the contextual information for the selected cut-off (below the slider) - Improve how alternative splicing quantification is assigned to patients based on their samples
Protein annotation: - Warn user when trying to render proteins with no domains
Changes in version 1.0.5:
Navigate history using the browser forward and back buttons
Fix delay when displaying large data by removing columns containing missing values exclusively
Principal component analysis: - Improve speed when calculating total contribution of each variable to the principal components
Survival analysis: - Shorten calculation of optimal PSI that minimises the survival difference - Improve visual cues of optimal PSI cut-off and present p-value of selected PSI cut-off - Fix ambiguous error messages - Fix incorrect Cox model results for formula-based calculations - Fix null Cox models crashing the program
Differential splicing analysis: - Select sample types for differential splicing analysis - Fix statistical tests not displaying for individual events after differentially analysing all events using the other statistical tests
Changes in version 1.0.4:
Correctly load files and quantify alternative splicing for PRAD, OV and PAAD tumour types from The Cancer Genome Atlas (TCGA)
Fix session disconnecting when exporting plots in Firefox
Improve text and behaviour of fields to select datasets and AS events
Fix author names and add contributor
Changes in version 1.0.3:
Bug fixes regarding gene annotation: - Fix disabled gene selection when choosing a splicing event associated with a single gene after selecting an event related to multiple genes - Fix display of PubMed articles related to previously selected gene when selecting a single-gene-associated event after selecting an event related to multiple genes
Bug fixes regarding groups: - Fix groups by rows not working - Fix group selection not working when only one group exists - Improve argument name of getGroupsFrom()
Other minor improvements
Changes in version 1.0.2:
Changes in version 1.0.1:
Changes in version 1.6.0:
Lots of improvements to command line scripts
Improved somatic vs. germline status calling
Better mapping bias estimation and correction
Better integration into existing copy number pipelines
Support for cell lines
New GC-normalization for smaller gene panels
Added sub-clonal SNV state (SOMATIC.M0)
Polished plots, added new GC-normalization and volcano plots
Better copy number normalization using multiple best normals
Removed automatic curation, since the tuned likelihood model of runAbsoluteCN was hard to beat
More control over homozygous deletions (significant portion of wrong maximum likelihood solutions had many homozygous deletions)
Faster post.optimize=TRUE by not optimizing poor fits or unlikely solutions
Automatic 50bp interval padding
Tweaks to segmentationPSCBS
seg.file can contain multiple samples
Contamination rate estimation (experimental)
Code cleanups (switch from inlinedocs to roxygen, from message/warn to futile.logger)
API CHANGES
runAbsoluteCN output from PureCN 1.2 cannot be analyzed with PureCN 1.6 and needs to be re-run. We hope to avoid this in the future.
Renamed functions: readCoverageGatk to readCoverageFile since future versions will likely support additional third-party tools.
Deprecated functions: createSNPBlacklist, getDiploid, autoCurateResults
Defunct functions: createExonWeightFile
Changed defaults:
min.normals 4 (from 10) in setMappingBiasVcf
max.segments 300 (from 200) in runAbsoluteCN
min.targeted.base 5 (from 4) in filterTargets
max.homozygous.loss is now a double(2) vector, with the first element specifying the maximum fraction of genome deleted (default 5%) and the second the maximum size of a homozygous loss (default 10 Mb).
prior somatic for variants in both dbSNP and COSMIC changed from 0.01 and requiring 3 hits to 0.5 and requiring 4 hits.
Other minor changes:
Renamed some predictSomatic() output column names
Removed “beta.model” from “SNV.posterior” slot since model is now an option
Moved remove.off.target.snvs to filterVcfBasic
Moved normalDB from filterTargets to runAbsoluteCN, since it is now used for more than target filtering
Dropped BED file support in calculateGCContentByInterval; support for GRanges is provided instead
poolCoverage: w argument now used as provided, not normalized so that w1 is 1
Removed … from runAbsoluteCN
min.coverage removed from segmentation function, since this is now done by filterTargets
Added centromeres to segmentation function
Replaced contamination.cutoff with contamination.range in filterVcfBasic
Removed verbose from most functions, since messages are now controlled with futile.logger
Smoothing of log-ratios before segmentation now optionally done by runAbsoluteCN, not segmentation function
setMappingBiasVcf now returns a list with elements bias (the old return value) and pon.count, the number of hits in the PON
PLANNED FEATURES FOR 1.8
Better sample summary statistics, like mutation burden, chromosomal instability
Better performance in low purity samples
Better performance in high purity samples with significant heterogeneity
LOH database
Switch to S4 data structures (maybe)
Whole dataset visualizations (maybe)
Better support for known, small deletions and amplifications (e.g. EGFRvIII, MYC)
Support for GATK4
Better runtime performance by ignoring unlikely solutions early
Changes in version 1.7.1:
include signaling axes identification functions
include phosphorylation information prefiltering in intersection analysis
include direction of regulation prefiltering in intersection analysis
include visualization of temporal correlations between phosphosite expression data and transcriptome data
Changes in version 1.12.0:
RELEASE
IMPROVEMENTS
VCF and SEG file export have been implemented to allow use of downstream analysis tools such as Cartagenia (NGS) Bench.
binReadCounts() now supports parallel computing
calculateBlackListByRegions() has been implemented for convenient bin overlap calculation of any set of regions.
Changes in version 2.10:
BUG FIXES
Changes in version 1.3.2:
Version: 1.8.0 Text: Updates: * Fixed some import issues * Fixed a bug in the visualizeCircos function * updated the documentation
Changes in version 1.0.0:
NEW FEATURES
BUG FIXES
Changes in version 1.19.1:
Changes in version 1.1.27:
NEW FEATURES
Changes in version 1.1.26:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.25:
NEW FEATURES
Changes in version 1.1.24:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.19:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.18:
BUG FIXES
Changes in version 1.1.16:
BUG FIXES
Changes in version 1.1.14:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.13:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.12:
BUG FIXES
Changes in version 1.1.8:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.6:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.5:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.2:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.1:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.24.0:
Changes in version 1.9.1:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 2015-3-27:
Changes in version 2013-4-1:
Minor bug fixes in NAMESPACE and DESCRIPTION and css
Enhanced vignettes to include information on our website and publication
Enhanced vignettes to clarify use of .modifyDF() ‘name’
Changes in version 1.7.1:
Changes in version 1.7.1:
BUG FIXES
Changes in version 2.20.0:
NEW FEATURES
Indexing into spaces with more than .Machine$integer.max elements is supported using numeric (rather than integer) indexing; this provides exact indexing into spaces with about 51 bits of precision (see the sketch after this list).
Zero-length indexing is now supported (returning zero-length slabs).
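A minimal sketch, with a hypothetical file ‘big.h5’ containing a one-dimensional dataset ‘dset’:

    library(rhdf5)

    ## read one element beyond .Machine$integer.max via numeric indexing:
    ## x <- h5read("big.h5", "dset", index = list(3e9))
    ## zero-length indexing now returns a zero-length slab:
    ## h5read("big.h5", "dset", index = list(integer(0)))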
BUG FIXES
Changes in version 1.5.3:
Changes in version 1.5.2:
Down-sampled the ctrlGAlignments.rda object from the data folder
Changed conflicting sweave-knitr in the vignette
Changes in version 1.5.1:
For the vignette the example bam files have been down-sampled and added to the extdata folder
Changed the vignette accordingly
Changes in version 1.7.5:
Several minor bugfixes and performance improvements
added a vignette section on working with RnBSet objects
Changes in version 1.7.4:
Several minor bugfixes and performance improvements
Vignette installation instructions updated
Reduce warnings in R CMD check
Changes in version 1.7.3:
Age predictor (MethylAger) updates and documentation
Support for external tools bedToBigBed and bedGraphToBigWig
Minor bugfixes
Changes in version 1.7.2:
Changes in version 1.7.1:
Added support for the ENmix.oob background subtraction method
Several improvements in age prediction module
Added conversion from minfi raw dataset to RnBeadRawSet
Several minor bug fixes
Changes in version 2.3.4:
Changes in version 2.3.3:
Add ctb to Authors@R <2016-12-28 Wed>
use NEWS.md <2016-12-28 Wed>
Amend failing Ontologies unit test <2017-01-02 Mon>
Changes in version 2.3.2:
Changes in version 2.3.1:
Changes in version 2.3.0:
Changes in version 1.7.2:
INTERNAL MODIFICATION
Changes in version 1.11.2:
Changes in version 1.11.1:
Migrate vignette to BiocStyle::html_document2 <2016-12-22 Thu>
Use NEWS.md <2016-12-28 Wed>
Changes in version 1.11.0:
Changes in version 1.10:
BUG FIXES
Changes in version 1.27:
BUG FIXES
qnameSuffixStart<-(), qnamePrefixEnd<-() accept ‘NA’ (bug report from Peter Hickey).
scanBam() accepts a single tag mixing ‘Z’ and ‘A’ format. See
Changes in version 1.26.0:
NEW FEATURES
Gene annotation can be provided to align() and subjunc() to improve exon junction detection.
Improve sanity checking for input and output data for align(), subjunc() and featureCounts().
Resolve inconsistency between runs for align() and subjunc() when more than one CPU thread is used.
Changes in version 1.14.0:
Concluded the VSE/EVSE pipeline, including examples.
Improved computational performance and RTN workflows.
In order to improve stability and portability, all dependencies have been extensively revised and, when available, replaced with more stable options.
In order to simplify documentation and usability, some pipelines have been revised and should be distributed as separated packages, or ‘on demand’, focused on the main workflow, as for example the new ‘RTNduals’ package.
Changes in version 1.0.0:
Changes in version 0.14.0:
NEW FEATURES
Add Linteger vectors: similar to ordinary integer vectors (int values at the C level) but store “large integers” i.e. long long int values at the C level. These are 64-bit on Intel platforms vs 32-bit for int values. See ?Linteger for more information. This is in preparation for supporting long Vector derivatives (planned for BioC 3.6).
Default “rank” method for Vector objects now supports the same ties method as base::rank() (was only supporting ties methods “first” and “min” until now).
Support x[[i,j]] on DataFrame objects (see the sketch after this list).
Add “transform” methods for DataTable and Vector objects.
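A minimal sketch of the new DataFrame indexing and transform method, with toy data:

    library(S4Vectors)

    DF <- DataFrame(x = 1:3, y = letters[1:3])
    DF[[2, "y"]]              # single element: row 2 of column "y" -> "b"
    transform(DF, z = x * 2)  # new "transform" method for DataTable objects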
SIGNIFICANT USER-VISIBLE CHANGES
Rename union classes characterORNULL, vectorORfactor, DataTableORNULL, and expressionORfunction -> character_OR_NULL, vector_OR_factor, DataTable_OR_NULL, and expression_OR_function, respectively.
Remove default “xtfrm” method for Vector objects. Not needed and introduced infinite recursion when calling order(), sort() or rank() on Vector objects that don’t have specific order/sort/rank methods.
DEPRECATED AND DEFUNCT
Remove compare() (was defunct in BioC 3.4).
Remove elementLengths() (was defunct in BioC 3.4).
BUG FIXES
Make showAsCell() robust to nested lists.
Fix bug where subsetting a List object ‘x’ by a list-like subscript was not always propagating ‘mcols(x)’.
Changes in version 0.99.0:
Changes in version 1.3.49:
plotRLE() function to make relative log expression plots to assess and compare normalizations
Refactored newSCESet() with defined hierarchy of data types
read10XResults() to read in results from 10x Chromium CellRanger output
Refined QC metrics
Bug fixes, efficiency improvements and more tests
Changes in version 1.0.0:
Faster SVD with rARPACK.
New zero mode options in scone.
Newly exported get functions.
Updated scone scaling function defaults.
Error handling and documentation updates.
Bug fixes to sample filtering functions.
Changes in version 0.99.0 (2016-11-14):
Major release: many changes due to porting to S4.
Widespread updates to documentation, including examples.
Compatibility with Bioconductor format, including biocViews.
Updated cell-cycle genes.
Default BPPARAM value, now passed as argument to scone.
Removed zinb function.
Test different parallel back ends.
Added class SconeExperiment based on SummarizedExperiment.
Added constructor sconeExperiment to create SconeExperiment objects.
Added many helper methods to retrieve content of slots.
Wrapper get_normalized() to extract/compute single normalization from scone object.
Wrapper get_design() to extract the design matrix associated with a given normalization.
Method select_methods() to get a smaller SconeExperiment object containing only the requested normalization schemes.
biplot_interactive() now works with SconeExperiment objects.
Dropped date from DESCRIPTION as better practice is to add date to NEWS file.
Added wrapper for scran normalization, removed FQP.
Changes in version 0.0.8 (2016-09-26):
Added sconeReport() function for shiny browser of SCONE results.
scone() outputs are now sorted according to mean rank of scores rather than mean scores. Although this is a relative measure it accounts for differing variability of metrics.
RLE_IQR now quantifies the variability of the IQR RLE across samples, rather than the mean.
Bug Fix: Previously imputation could be applied to the wrong subset of params rows when user passed params arguments. Imputation functions are now indexed by name to avoid this error.
Bug Fix: Negative infinity expected likelihood is temporarily permitted in the ziber loop.
“ezbake” script and scone_easybake() function added for pipelined SCONE commands.
Revised documentation and removed old scripts.
Various other bug fixes.
Changes in version 0.0.6 (2016-07-22):
Added option for restoring zeroes after scaling step.
New argument format for imputation via impute_args.
Simplified ziber fnr estimation function - requires control genes.
Fixed bug when using plot functionality of filtering functions.
“Conditional” pam replaced with “Stratified” pam, clustering each bio-cross-batch condition separately, rather than simply each bio condition.
Simple FNR for filtering is now based on medians of natural log expression in “expressing” cells / robust to convergence issues.
Removed all sufficient thresholds for metric sample filter.
Added option to write normalized matrices to HDF5 file.
Added wrapper function get_normalized() to retrieve normalized data.
New biplot_interactive function to explore the results.
Changes in version 0.0.5:
Modified biplot to handle general coloring schemes.
Limit number of WV and UV factors to eval_pcs in computing WV and UV scores.
Updated dependencies.
Added error-handling to sample filter.
Removed var preserved measure due to length of running time.
Changes in version 0.0.4:
Fixed a few bugs and documentation mismatches.
Removed stability evaluation (redundant with sil width and slow).
Removed clusterExperiment dependency.
Removed RUV correlation score. UV correlation now takes ruv control genes as default.
Added RLE measures to scone evaluation.
Added FQT_FN to implement careful ties handling by FQ.
Better handling of plots in sample filter functions.
Mean score rather than median rank to evaluate normalizations.
Default value for imputation.
Minor optimizations to evaluation functions.
Changes in version 0.0.3:
Fixed various bugs.
Added “compactness” measure for stability evaluation.
Compute RUV factors only when needed.
Fixed Github issues #11, #12, #13, #14, #21, #28.
zinb now works for non-integer whole numbers.
Updated tests.
Added documentation for datasets.
Added biplot_colored function.
Changes in version 1.4.0:
Switched default BPPARAM to SerialParam() in all functions.
Added run argument to selectorPlot(). Bug fix to avoid adding an empty list.
Added exploreData() function for visualization of scRNA-seq data.
Minor bug fix to DM() when extrapolation is required.
Added check for centred size factors in trendVar(), decomposeVar() methods. Refactored trendVar() to include automatic start point estimation, location rescaling and df2 estimation.
Moved spike-in specification to the scater package.
Deprecated isSpike<- to avoid confusion over input/output types.
Generalized sandbag(), cyclone() to work for other classification problems.
Added test=”f” option in testVar() to account for additional scatter.
Added per.gene=FALSE option in correlatePairs(), expanded accepted value types for subset.row. Fixed an integer overflow in correlatePairs(). Also added information on whether the permutation p-value reaches its lower bound.
Added the combineVar() function to combine results from separate decomposeVar() calls.
Added protection against all-zero rows in technicalCV2().
Added the improvedCV2() function as a more stable alternative to technicalCV2().
Added the denoisePCA() function to remove technical noise via selection of early principal components.
Removed warning requiring at least twice the max size in computeSumFactors(). Elaborated on the circumstances surrounding negative size factors. Increased the default number of window sizes to be examined. Refactored C++ code for increased speed.
Allowed quickCluster() to return a matrix of ranks for use in other clustering methods. Added method=”igraph” option to perform graph-based clustering for large numbers of cells (see the sketch after this list).
Added the findMarkers() function to automatically identify potential markers for cell clusters.
Added the overlapExprs() function to compute the overlap in expression distributions between groups.
Added the buildSNNGraph() function to build a SNN graph for cells from their expression profiles.
Added the correctMNN() function to perform batch correction based on mutual nearest neighbors.
Streamlined examples when mocking up data sets.
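A minimal sketch of graph-based clustering feeding pooled size factors, assuming ‘counts’ is a suitable count matrix (the object name is an assumption):

    library(scran)

    ## graph-based clustering for large numbers of cells:
    ## clusters <- quickCluster(counts, method = "igraph")
    ## pooled size factors computed within clusters:
    ## sf <- computeSumFactors(counts, clusters = clusters)
    ## SNN graph for use with igraph community detection:
    ## snn <- buildSNNGraph(counts, k = 10)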
Changes in version 1.0.0 (2016-04-25):
Added functions
Added help pages
Added vignette
Changes in version 1.16.0:
a new argument ‘intersect’ in seqSetFilter() and seqSetFilterChrom()
a new function seqSetFilterCond()
seqVCF2GDS() allows arbitrary numbers of different alleles if REF and ALT in VCF are missing
optimize internal indexing for FORMAT annotations to avoid reloading the indexing from the GDS file
a new CITATION file
‘LZMA_RA’ is the default compression method in seqBED2GDS() and seqSNP2GDS() (see the sketch after this list)
seqVCF_Header() correctly calculates ploidy with missing genotypes
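A minimal sketch of import with LZMA_RA compression and the new ‘intersect’ argument (file names are hypothetical):

    library(SeqArray)

    ## seqVCF2GDS("in.vcf.gz", "out.gds", storage.option = "LZMA_RA")
    ## f <- seqOpen("out.gds")
    ## seqSetFilterChrom(f, "1", intersect = TRUE)  # new 'intersect' argument
    ## seqClose(f)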
Changes in version 1.15.0-1.15.6:
Changes in version 1.14.1:
The default compression setting in seqVCF2GDS() and seqMerge() is changed from “ZIP_RA” to “LZMA_RA”
seqVCF2GDS(): variable-length encoding method is used to store integers in the FORMAT field of VCF files to reduce the file size and compression time
Changes in version 1.9.1:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Changes in version 1.10.0:
Changes in version 1.1.7:
Reorder method was renamed to Reorder_signatures.
New methods Reorder_samples and Reorder_mutations were added.
Changes in version 1.1.2:
Changes in version 1.7.1:
Changes in version 1.10.0:
new functions
snpgdsAdmixPlot() and
snpgdsAdmixTable()
snpgdsPCASNPLoading() and
snpgdsPCASampLoading() support the
eigen results of
snpgdsEIGMIX() allowing projecting new samples to
the existing coordinate
snpgdsFst() provides W&C84 mean Fst together with weighted Fst
a new argument ‘outgds’ in
snpgdsPCACorr() allows exporting
correlations to a gds file
a friendly warning is given when openning a SeqArray file with
snpgdsOpen()
a new option “Corr” in
snpgdsGRM() for scaled GRM
Changes in version 1.9.15 (2017-04-21):
added prozor (>= 0.2.2) to the Suggest list.
added more specific R package version numbers in DESCRIPTION file.
in plot.specLSet (normalized RT versus RT) use pch=16 and color with parameter alpha=0.1.
fixed issue #22 by including the iRTs in the ionlibrary; LIB <- genSwathIonLib(data=peptideStd, data.fit=peptideStd.redundant); [email protected]$iRTpeptides.
fixed issue #19.
removed par command in specLset plot function.
added vignettes/report.Rmd file, see also <URL:>.
Changes in version 0.99.16 (2017-04-23):
Splatter is a package for the simple simulation of single-cell RNA-seq data, including:
Multiple simulation models
Parameter estimation from real data
Functions for comparing simulations and real datasets
Simulation of complex groups and differentiation paths
Changes in version 0.99.0 (2016-12-05):
Changes in version 1.23.5:
Corrected two errors (addMaxEnt function and addGenomeData function) Using the old version, errors in calculation of mxe_ps3, mxe_ms5 and mxe_ms3 have occured. Also erroneus wgis values were calculated. BUG FIXES
(none)
Version: 1.5.7 Category: The result of Fold Change will be calculated accroding to the raw data in Text:
Version: 1.5.6 Category: NEW FEATURES Text: transX and transCode was added to generate statTarget inputs from Mass Spectrometry Data softwares, like XCMS.
Changes in version 1.6.0:
NEW FEATURES
DEPRECATED AND DEFUNCT
Changes in version 1.5.8:
NEW FEATURES
filter_on_max_peptides: add removeDecoyProteins and unifyProteinGroupLabels to function
filter_on_min_peptides: add removeDecoyProteins and unifyProteinGroupLabels to function
BUG FIXES
Changes in version 1.5.7:
NEW FEATURES
Changes in version 1.5.6:
NEW FEATURES
plot.fdr_cube: add option to select mscore levels to plot FDR estimation.
assess_fdr_byrun: add option to select mscore levels to plot FDR estimation.
Changes in version 1.5.5:
NEW FEATURES
BUG FIXES
Changes in version 1.5.4:
BUG FIXES
Changes in version 1.5.3:
NEW FEATURES
plot_variation: make function to work also if comparison contains more than 3 elements
plot_variation_vs_total: make function to work also if comparison contains more than 3 elements
sample_annotation: introduce fail-safe if input data is in the data.table format.
Changes in version 1.5.2:
NEW FEATURES
unifyProteinGroupLabels: unifies different ProteinGroupLabels
removeDecoyProteins, rmDecoyProt: Removes decoy protein labels from protein Group label
Changes in version 1.5.1:
NEW FEATURES
Changes in version 1.4.1:
NEW FEATURES
Changes in version 0.99.0:
Version: 1.99.2 Text: update show,MasterPeptides to check of there’s a fragmentlibrary slot before trying to access it <2017-04-19 Wed>
Version: 1.99.1 Text: update NEWS file <2017-04-11 Tue>
Version: 1.99.0 Category: NEW FEATURES Text:
Version: 1.99.0
Category: ION MOBILITY/GRID SEARCH
Text: Replace 2D grid search (retention time, m/z) of synapter1 by 3D
grid search (retention time, m/z, ion mobility); set argument
imdiff = Inf to get the original 2D grid search; closes #33.
Version: 1.99.0
Category: ION MOBILITY/GRID SEARCH
Text: Add
{set,get}ImDiff methods.
Version: 1.99.0
Category: ION MOBILITY/GRID SEARCH
Text:
getGrid returns an array instead of a matrix (because of the
new 3D grid search) [2014-05-16 Fri].
Version: 1.99.0
Category: ION MOBILITY/GRID SEARCH
Text:
plotFeatures(..., what = "all") gains a new argument:
“ionmobilty” to plot m/z vs ionmobility as well. [2014-05-16
Fri]
Version: 1.99.0
Category: ION MOBILITY/GRID SEARCH
Text:
plotGrid gets a new argument “maindim” to decided which of the
three dimension should be used. [2014-05-16 Fri]
Version: 1.99.0
Category: ION MOBILITY/GRID SEARCH
Text: Add
filterNonUniqueIdentMatches to remove matches of multiple
identification data to a single quantitation entry (see #111
for details) [2016-02-22 Mon].
Version: 1.99.0
Category: FRAGMENT MATCHING
Text: Load identification fragments (
final_fragments.csv) and
quantitation spectra (
Spectrum.xml) via
Synapter
constructor.
Version: 1.99.0
Category: FRAGMENT MATCHING
Text: New functions:
fragmentMatchingPerformance,
filterUniqueMatches,
filterNonUniqueMatches,
filterFragments,
plotCumulativeNumberOfFragments,
plotFragmentMatchingPerformance,
getIdentificationFragments
and
getQuantitationSpectra.
Version: 1.99.0 Category: FRAGMENT MATCHING Text: Integrate a fragment library into master objects; closes #63 and #74.
Version: 1.99.0
Category: MISC
Text: Allow to use an RDS instead of a fasta file as ‘Unique Peptides
Database’, adds
createUniquePeptideDbRds; closes #55
[2014-04-29 Tue].
Version: 1.99.0
Category: MISC
Text: Introduce
IisL argument to
dbUniquePeptideSet which treats
I/L as same aminoacid if
IisL == TRUE (default:
IisL =
FALSE); closes #60 [2014-04-30 Wed].
Version: 1.99.0
Category: MISC
Text: Add
rescueEMRTs functions; replaces the argument
mergedEMRTs
in
findEMRTs; closes #93 [2015-07-26 Sun].
Version: 1.99.0
Category: MISC
Text: Add
synergise2 which combines the integrates the new 3D grid
search, the fragment matching; and uses slightly different
default arguments than
synergise1; closes #119 [2016-10-25
Di].
Version: 1.99.0 Category: MISC Text: Load isotopic distributions from Pep3D data and also export them to MSnSet, to allow the correction of detector saturation; closes #39 [2015-03-29 Sun].
Version: 1.99.0
Category: MISC
Text: Add
synapterPlgsAgreement to find agreement between synapter
and PLGS; closes #73.
Version: 1.99.0
Category: MISC
Text: Introduce
modelIntensity to correct systematic intensity shifts
(similar to
modelRt); closes #116.
Version: 1.99.0
Category: IMPROVEMENTS
Text: Extract the ion that was used for identification (
isFid == 1)
from the Pep3D file instead of the first instance [2014-05-13
Tue].
Version: 1.99.0
Category: IMPROVEMENTS
Text: Add
updateObject and
validObject method [2014-11-16 Sun].
Version: 1.99.0
Category: IMPROVEMENTS
Text: Rename
QuantPep3DData$Function column into
QuantPep3DData$matchedEMRTs; closes #67 [2015-07-26 Sun].
Version: 1.99.0 Category: IMPROVEMENTS Text: Use just unique peptides in master creation (see #107) [2016-01-23 Sat].
Version: 1.99.0
Category: IMPROVEMENTS
Text: New
rmarkdown based reports for
synergise1 (synonym to
synergise) and
synergise2.
Version: 1.99.0 Category: BUGFIXES Text: Use new loess model in master creation (now based on m-estimator instead of least squares, identical to retention time model in classical synergise workflow; see #107 for details) [2016-01-23 Sat]
Version: 1.99.0
Category: BUGFIXES
Text: Fix retention time model calculation in
plotFeatures(...,
what="some") [2014-04-28 Mon].
Version: 1.99.0
Category: INTERNAL CHANGES
Text: Add
testthat to Suggests [2014-04-25 Fri].
Version: 1.99.0 Category: INTERNAL CHANGES Text: Add recommended biocView [2014-06-05 Thu].
Version: 1.99.0
Category: INTERNAL CHANGES
Text: Replace
any(is.na(...) by
anyNA(...); synapter depends on
R >= 3.1.0 now [2014-11-01 Sat].
Version: 1.99.0
Category: INTERNAL CHANGES
Text: Add
ClassVersion field to
Synapter class [2014-11-21 Fri].
Version: 1.99.0
Category: INTERNAL CHANGES
Text: Add
Versioned class as parent class to
MasterPeptides and
MasterFdrResults [2014-11-22 Sat].
Version: 1.99.0
Category: INTERNAL CHANGES
Text: Adapt
synergise to new grid search (closes #81) [2016-10-16
So].
Version: 1.99.0
Category: INTERNAL CHANGES
Text: Replace
hwriter by
rmarkdown report in
synergise; closes
#120. [2016-10-17 Mon]
Version: 1.99.0
Category: REMOVED FUNCTIONS/ARGUMENTS
Text: Remove
synapterGUI.
Version: 1.99.0
Category: REMOVED FUNCTIONS/ARGUMENTS
Text: Remove unused internal functions:
filterCommonSeq,
filterKeepUniqueSeq,
filterKeepUniqueProt [2014-11-27 Thu].
Version: 1.99.0
Category: REMOVED FUNCTIONS/ARGUMENTS
Text: Remove “mergedEMRTs” argument from
findEMRTs. Now
rescueEMRTs
has to be called manually at the end of the processing; close
#93 [2015-07-26 Sun]
Version: 1.99.0
Category: REMOVED FUNCTIONS/ARGUMENTS
Text: Remove “light” version of
writeMergedPeptides and
writeMachtedPeptides (now always the full
data.frame is
saved; see #95) [2016-10-16 Sun]
Version: 1.99.0
Category: REMOVED FUNCTIONS/ARGUMENTS
Text: Update
synapterTiny and
synapterTinyData [2016-10-16 So]
Changes in version 1.5.1:
CODE
Fix error of duplicated parameter in plotRegion function
Changes in TargetExperiment contstructor to avoid errors related to unmapped reads in the alignment BAM files
The checkBedFasta function was added to perform a control of the Bed and Fasta file consistency
Changes in plotRegion methods to allow filter noise SNPs
Changes in plotGeneAttrPerFeat method to incorporate the exploration of overlapped amplicons
VIGNETTE
Version: 0.99.0 Text: FIRST VERSION. Demo:
Changes in version 3.5:
NEW FEATURES
Changes in version 1.30.0:
BUG FIXES
Version: 1.3.1 Category: SIGNIFICANT USER-VISIBLE CHANGES Text: Import of Ulvac-Phi Raw data from WinCadence V1.18.0.22 is now supported
Version: 1.3.1 Category: INTERNALS Text:
Version: 1.3.1 Category: BUGFIXES Text: 3.2.2:
Changes in version 3.2.0:
Changes to the Excel output of TPP-TR experiments: - Plot paths in excel output are stored in separate columns for spline and melting curve fits - Reported p-values for Tm based analysis are now increased in value after removing a bug in their calculation. The thresholds applied for determining significant hits have been updated justed accordingly.
Improvements to the 2D-TPP analysis: - Split rows with multiple identifiers (e.g. separated by ‘|’) into separate rows (was already present in TR- and CCR analysis) - Sort result table rows by temperature for each ID
Changes in version 3.1.3:
Bug fix: ensure correct handling of drug concentrations when they are imported in scientific notation (xx.xxE-x)
Deprecate functions and arguments, primarily those used in the first version of TPP-2D analysis workflow.
Bug fixes: - sort x and y values before DR curve fitting because the functions for initial parameter estimation are dependent on the correct ordering - supress useless VD logfiles when creating VENN diagrams
Changes in version 1.11.8:
Changes in version 1.11.7:
Changes in version 1.11.6:
Changes in version 1.11.5:
Changes in version 1.11.4:
Changes in version 1.11.3:
Changes in version 1.11.2:
Changes in version 1.11.1:
Changes in version 0.99.11:
bug fixed in get.fields method for paml_rst <2017-03-20, Mon>
fixed raxml2nwk for using treedata as output of read.raxml <2017-03-17, Fri>
taxa_rename function <2017-03-15, Wed>
phyPML method moved from ggtree <2017-03-06, Mon>
Changes in version 0.99.10:
remove raxml class, now read.raxml output treedata object <2017-02-28, Tue>
bug fixed of read.beast <2017-02-27, Mon>
Changes in version 0.99.9:
read.newick for parsing node.label as support values <2017-01-03, Tue>
read.beast support MrBayes output <2016-12-30, Fri>
export as.phylo.ggtree <2016-12-30, Fri>
Changes in version 0.99.8:
as.treedata.ggtree <2016-12-30, Fri>
as.treedata.phylo4 & as.treedata.phylo4d <2016-12-28, Wed>
Changes in version 0.99.7:
Changes in version 0.99.6:
Changes in version 0.99.3:
Changes in version 0.99.1:
Changes in version 0.99.0:
add vignette <2016-12-06, Tue>
move parser functions from ggtree <2016-12-06, Tue>
Changes in version 0.0.1:
read.nhx from ggtree <2016-12-06, Tue>
as.phylo.treedata to access phylo from treedata object <2016-12-06, Tue>
as.treedata.phylo to convert phylo to tree data object <2016-12-06, Tue>
treedata class definition <2016-12-06, Tue>
Changes in version 2.7.7:
RNA Seq validation
Random restart on Hill Climbing added to CAPRI algorithm
Minor fixes to algorithms and error model
Changes in version 2.7.3:
Changes in version 2.6.1:
Changes in version 1.1.12 (2017-04-10):
Major changes
Changes in version 1.1.11 (2017-04-08):
Major changes
Changes in version 1.1.10 (2016-11-18):
Bug fix
Major changes
Changes in version 1.1.9 (2016-11-18):
Major changes
Changes in version 1.1.8 (2016-11-14):
New features
New method pairsInfo to visualise a matrix of pairwise plots that displays a metric calculated in levels of a given phenotype, and stored in columns of the info slot of a VCF object.
ggpairs method imported from the GGally package.
Bug fix
Major changes
Internal method .findInfoMetricColumns moved to utils.R, as it is now used in two different user-visible methods.
Updated Introduction vignette to better present usage of the addFrequencies method, better present VCF filter rules, and introduce the new method pairsInfo.
Changes in version 1.1.7 (2016-11-11):
Major changes
Minor changes
Update to README.
AppVeyor caches R library.
Changes in version 1.1.6 (2016-11-10):
New features
plotInfo method to visualise a metric calculated in levels of a given phenotype, and stored in columns of the info slot.
Methods imported from Gviz package.
More methods imported from ensembldb package.
Major changes
Added new VCF file and associated preprocessing script in extdata/ for gene ADH1B.
Introduction vignette updated to change activated filter rules.
Introduction vignette updated to introduce the plotInfo method.
Minor changes
Moved content of the table of motivations to implement VCF filter rules to a CSV file in misc/.
BED file and VCF files for gene ADH1B.
Updated Shell script to preprocess VCF files with the VEP script.
Ignore .svn/.
Changes in version 1.1.5 (2016-11-08):
Bug fix
Updated reference to renamed object in Shiny app.
When no phenotypes are supplied, set phenoData slot to a DataFrame with rownames set to colnames(vcf) and 0 columns, instead of the default behaviour of the VariantAnnotation package which is to create a column named Sample filled with seq_along.
NEWS file closing brackets.
Major changes
autodetectGTimport setting available in tSVE method.
New checkbox in Shiny app to update selected genotypes after importing variants and autodetection of genotypes present in the data.
Minor changes
Do not ignore *.Rproj files.
Removed commented lines in AppVeyor YAML file.
Removed files in misc/.
Display list of error messages in a new session panel of the Shiny app.
Changes in version 1.1.4 (2016-11-04):
Bug fix
Minor changes
Added TVTB.Rproj to tracked files.
Deleted deprecated and misc files in inst/.
Deleted commented lines from AppVeyor settings.
Changes in version 1.1.3 (2016-11-03):
New features
The autodetectGenotypes method creates or updates the genotypes defined in the TVTBparam that is stored in the metadata slot of a VCF object.
The argument autodetectGT of the readVcf method may be used to call the new autodetectGenotypes method immediately after a VCF object is initialised from the parsed VCF file.
Major changes
vepInPhenoLevel returns a GRanges instead of a data.frame; the key advantage is that ranges may have non-unique names.
Genotypes objects can now be initialised without specifying ref, het, and alt genotype vectors (with a warning). A default Genotypes object is created with ref, het, and alt slots set to NA_character_. The new autodetectGenotypes method may be used to populate those slots after variants are imported (see New features section).
TVTBparam objects can now be initialised without supplying a Genotypes object (with a warning). A default Genotypes object is created (see above).
Constructors for classes Genotypes and TVTBparam are now high-level methods, not S4 methods methods anymore.
Default settings of the Shiny app are stored as an environment that can be overriden by arguments of the tSVE method.
Shiny app stores more objects in reactiveValues.
Shiny app stores more error messages in reactiveValues to better deal with optional inputs and better help users to resovle sources of errors.
Minor changes
The show method throws warning messages for TVTBparam and Genotypes objects that have not fully defined all genotypes.
Better layout of badges in README.
Non-reactive settings of the Shiny app stored in hidden objects.
Helper methodS getEdb, tryParsePheno, tryParseBed, tryParseVcfHeader, tryParseMultipleVcf, and tryParseSingleVcf removed and integrated into the server side of the Shiny app.
Massive cleaning of messages in the global.R file of the Shiny app.
GRanges, Genotypes, and Phenotypes panels removed from Session panel of the Shiny app.
Table reporting status of BiocParallel configurations of the Shiny app on various system stored as an RDS file.
Shiny app displays a warning at the top of the screen if the genotypes are not fully defined.
Tab width of Shiny files set to 2.
Branches tracked by Travis CI.
Added a couple of files in inst/badexamples folder.
Added YAML file for AppVeyor.
Added pander in Suggests section of DESCRIPTION, to render vignette tables.
Changes in version 1.1.2 (2016-10-21):
Minor changes
Changes in version 1.1.1 (2016-10-21):
Minor changes
Updates to README: weblinks, installation, unit tests.
Branches tracked by Travis CI.
Coverage: exclude AllClasses.R, tSVE.R.
Four-space indents in DESCRIPTION.
Changes in version 1.3.8:
Changes in version 1.3.6:
Changes in version 1.3.4:
Support for kallisto HDF5 files thanks to Andrew Parker Morgan and Ryan C Thompson
Removing ‘reader’ argument, leaving only ‘importer’ argument. In addition, read_tsv will be used by default if readr package is installed.
Messages from the importing function are captured to avoid screen clutter.
Changes in version 1.5.7:
Changes in version 1.5.6:
Changes in version 1.5.5:
Decrease computing time of effective sample size with ESS() by additional ~10x with sparse solver
fix margins for plotPercentBars()
Fix bug for getVarianceComponents() when correlated continous variables are included
compatibility with ggplot2 2.2.0
center plot titles
fix order of bars in plotPercentBars()
legend background to transparent
set text to be black
include lme4 in foreach .packages
change residuals color to not be transparent
add CITATION information
plotCorrMatrix now shows dendrogram by default
Estimate run time for fitExtractVarPartModel() / fitVarPartModel()
improve warnings for plotPercentBar()
improve warnings for plotCorrStructure()
define ylab for plotVarPart() - add as.matrix.varPartResults() (hidden) - define isVaryingCoefficientModel() (hidden)
Changes in version 1.22.0:
NEW FEATURES
add import() wrapper for VCF files
add support for Number=’R’ in vcf parsing
add indexVcf() and methods for character,VcfFile,VcfFileList
MODIFICATIONS
throw message() instead of warning() when non-nucleotide variations are set to NA
replace ‘force=TRUE’ with ‘pruning.mode=”coarse”’ in seqlevels() setter
add ‘pruning.mode’ argument to keepSeqlevels() in man page example
idempotent VcfFile()
add ‘idType’ arg to IntergenicVariants() constructor
modify locateVariants man page example to work around issue that distance,GRanges,TxDb does not support gene ranges on multiple chromosomes
modify VcfFile() constructor to detect index file if not specified
order vignettes from intro to advanced; use BiocStyle::latex2()
remove unused SNPlocs.Hsapiens.dbSNP.20110815 from the Suggests field
follow rename change in S4Vectors from vector_OR_factor to vector_or_factor
pass classDef to .RsamtoolsFileList; VariantAnnotation may not be on the search path
BUG FIXES
Changes in version 1.12:
USER VISIBLE CHANGES
The VariantFilteringParam constructor is restricted to one (multisample) VCF file.
mafByOverlaps() returns now a GRanges object with minor allele frequency values in the metadata columns.
Changes in version 3.43.10:
Version: 1.19.1 Text: and methods will return ‘MethylSet’ instead of beta matrix.
Changes in version 1.0.0:
Changes in version 1.51.11:
NEW FEATURES
Parameter “filled” for featureValues (issue #157).
Parameters “rt” and “mz” in chromPeaks method allowing to extract chromatographic peaks from the specified ranges (issue #156).
BUG FIXES
Fixed possible memory problem in obiwarp (issue #159).
Update getPeaks to use non-deprecated API (issue #163).
Changes in version 1.51.10:
NEW FEATURES
filterRt for Chromatogram class (issue #142).
adjustRtimePeakGroups function (issue #147).
adjustRtime,XCMSnExp,PeakGroupsParam and do_adjustRtime_peakGroups support use of pre-defined matrix to perform alignment (issue #153).
plotAdjustedRtime to visualize alignment results (issue #141).
USER VISIBLE CHANGES
featureDefinitions and featureValues return DataFrame and matrix with rownames corresponding to arbitrary feature IDs (issue #148).
New peakGroupsMatrix slot for PeakGroupsParam class (issue #153).
BUG FIXES
Changes in version 1.51.9:
NEW FEATURES
fillChromPeaks, dropFilledChromPeaks methods and FillChromPeaksParam class.
featureValues method.
USER VISIBLE CHANGES
Extended new_functionality vignette.
Change default backend for reading mzML files to pwiz.
BUG FIXES
Issue #135: fix peak signal integration for centWave.
Issue #139: problem with expand.mz and expand.rt in fillPeaks.chrom.
Issue #137: Error in findChromPeaks if no peaks are found.
Changes in version 1.51.8:
NEW FEATURES
BUG FIXES
Issue #118: failing unit test on Windows build machine.
Issue #133: error with c() and xcmsSet without peaks.
Issue #134: xcmsSet constructor endless loop.
Changes in version 1.51.7:
USER VISIBLE CHANGES
Major renaming of methods and classes to follow the naming convention:
chromatographic peak (chromPeak): the peaks identified in rt dimension.
feature: mz-rt feature, being the grouped chromatographic peaks within and across samples.
BUG FIXES
Changes in version 1.51.6:
NEW FEATURES
groupFeatures and adjustRtime methods for XCMSnExp objects.
New Param classes for groupFeatures and adjustRtime analysis methods: FeatureDensityParam, MzClustParam, NearestFeaturesParam, FeatureGroupsParam and ObiwarpParam.
BUG FIXES
Changes in version 1.51.5:
NEW FEATURES
MsFeatureData and XCMSnExp objects.
features, features<-, adjustedRtime, adjustedRtime<-, featureGroups, featureGroups<-, hasAlignedFeatures, hasAdjustedRtime and hasDetectedFeatures methods.
dropFeatures, dropFeatureGroups and dropAdjustedRtime methods.
filterMz, filterRt, filterFile etc implemented.
mz, intensity and rtime methods for XCMSnExp allowing to return values grouped by sample.
BUG FIXES
Issue #99 (rtrange outside of retention time range in getEIC,xcmsSet).
Issue #101 (xcmsRaw function returns NULL if mslevel = 1 is specified).
Issue #102 (centWave returns empty matrix if scales not OK). Thanks to J. Stanstrup.
Issue #91 (warning instead of error if no peaks in ROI). Thanks to J. Stanstrup.
Changes in version 1.51.4:
BUG FIXES
Changes in version 1.51.3:
NEW FEATURES
binYonX binning function.
imputeLinInterpol function providing linear interpolation of missing values.
breaks_on_binSize and breaks_on_nBins functions to calculate breaks defining bins.
New vignette “new_functionality.Rmd” describing new and modified functionality in xcms.
Add do_detectFeatures_matchedFilter function.
Add do_detectFeatures_centWave function.
Add do_detectFeatures_centWaveWithPredIsoROIs function and unit test.
Implement a new data import function.
Add do_detectFeatures_MSW function and unit test.
Argument stopOnError in xcmsSet function that allows to perform feature detection on all files without stopping on errors.
Method showError for xcmsSet objects that list all errors during feature detection (if stopOnError = FALSE in the xcmsSet function).
[ method to subset xcmsRaw objects by scans.
profMat method to extract/create the profile matrix from/for an xcmsRaw.
Add new detectFeatures methods for MSnExp and OnDiskMSnExp objects from the MSnbase package.
Add new CentWaveParam, MatchedFilterParam, MassifquantParam, MSWParam and CentWavePredIsoParam parameter class to perform method dispatch in the detectFeatures method.
retcor.obiwarp uses the new binning methods for profile matrix generation.
scanrange,xcmsRaw reports always a scanrange of 1 and length(object@scantime).
scanrange,xcmsSet reports the scanrange eventually specified by the user in the xcmsSet function.
Fixed bug in rawMat (issue #58).
Fix issue #60: findPeaks.massifquant always returns a xcmsPeaks object.
Changes in version 1.51.2:
USER VISIBLE CHANGES
Changes in version 1.51.1:
BUG FIXES
Changes in version 3.3:
VERSION xps-1.35.4
eliminate dependency on ROOTSYS
update configure.in file
update Makefile.arch to add ROOTSYS an update -rpath
VERSION xps-1.35.3
eliminate dependency on DYLD_LIBRARY_PATH and LD_LIBRARY_PATH
update configure.in file
update Makefile.arch to use -rpath with ld
VERSION xps-1.35.2
update Makefile for MacOS Sierra
update README file
VERSION xps-1.35.1
Changes in version 1.1:
Chunk matrix multiplications in density estimation for faster run times.
Change vignette format from PDF to HTML.
Fix sessionInfo format in vignette and triggering of data.table warnings with nomatch.
Update citation.
Fix accent in citation.
One software package (betr) was removed from this release (after being deprecated in BioC 3.4).
Nine software packages (AtlasRDF, coRNAi, saps, MeSHSim, GENE.E, mmnet, CopyNumber450k, GEOsearch, pdmclass) are deprecated in this release and will be removed in BioC 3.6.
Two experimental data packages (encoDnaseI, ggtut) were removed from this release (after being deprecated in BioC 3.4).
One experimental data package (CopyNumber450kData) is deprecated in this release and will be removed in BioC 3.6. | http://bioconductor.org/news/bioc_3_5_release/ | CC-MAIN-2019-18 | refinedweb | 21,782 | 50.73 |
Hi all, I'm working on converting a legacy pipeline to Metaflow and was wondering whether there is any way to do something like the following.
@step def map_step(self): self.vars = ['a', 'b'] self.next(self.do_compute, foreach='vars') @step def do_compute(self): self.var = self.input self.artifact1 = do_something(self.var) self.artifact2 = do_something_else(self.var) self.artifact3 = do_something_else_yet(self.var) self.next(self.join_step) @step def join_step(self, inputs): self.artifact_dict = dict() for inp in inputs: self.artifact_dict[inp.var] = inp
I was hoping that this would give me programmatic, lazily-loading access to the artifacts computed in
do_compute for each value of
var (a la
self.artifact_dict['a'].artifact1), but of course I am getting this error message:
Flows can't be serialized. Maybe you tried to assign self or one of the inputs to an attribute? Instead of serializing the whole flow, you should choose specific attributes, e.g. input.some_var, to be stored.
Is there a recommended way to achieve this programmatic, lazy access? I see a workaround programmatically defining names and calling
setattr and
getattr, and searching through this gitter's history, this approach seems to have been recommended before. Is that still the recommended approach? Thanks!
Hey Metaflow,
I'm having an issue with running one of my flows on AWS Batch. The issue is as follows
mkdir: cannot create directory ‘metaflow’: Permission denied /bin/sh: 1: [: -le: unexpected operator tar: job.tar: Cannot open: No such file or directory tar: Error is not recoverable: exiting now
I run it on a pre-built image hosted on ECR. The
Dockerfile contains a
WORKDIR command which points at
/home/my_proj/code. I can successfully build my image locally,
bash into it and
mkdir metaflow (under the default
/home/my_proj/code/ directory) without an issue.
What I suspect might be happening is that the
WORKDIR statement is somehow ignored and the Metaflow command
["/bin/sh","-c","set -e ... is run from within
/.
It's worth noting that I have several flows running on AWS Batch already with no problem at all. Their
Dockerfiles are almost identical to the one that is having the problem.
Not really sure if it's a Metaflow issue but hoping for somebody to have seen this already.
Thank you.
Hey Metaflow,
I am new to Metaflow, and trying to update a parameter in a
step, but get "AttributeError: can't set attribute".
Here is the snippet:
class TestFlow(FlowSpec): param = Parameter( 'param', type=str, help='test parameter', default='OD' ) @step def start(self): self.param = self.param.lower() ... ....
This seems a common use cases. Probably there's some mistake in my usage. What am I missing?
Hi there,
I have a Flow that is supposed to run completely on batch. I'd like to run one of the steps in a different Docker container than the default one. Is it possible to run something like this:
# pipeline.py from metaflow import FlowSpec, step, batch class TestFlow(FlowSpec): @batch(image='python:3.6.12-buster') @step def start(self): import sys print(sys.version) self.next(self.end) @step def end(self): import sys print(sys.version) if __name__ == "__main__": TestFlow()
with the following command:
python pipeline.py run --with batch
Here the end step should run with the default image and the start step with
python:3.6.12-buster.
Hey Metaflow, I stumbled upon a situation that I was hoping you guys could comment on. I'm trying out Metaflow in a code base that already exists, which provides numerous utility files and functionality.
I was read ingthrough this page on managing external libraries () and was wondering if there is any other way to let a Metaflow flow import these utility files without having to copy them from all over the repository into the same folder the flow is defined in.
I'm aware of symbolic links and file system approaches but was wondering if there was any other Metaflow approach for a scenario like this
Hi, I recently upgraded
metaflow from 2.0.1 to 2.2.3, and when I execute a parameterized flow I got the following error which I haven't seen before (truncated to the last few lines):
... File "/home/ji.xu/.conda/envs/logan_env/lib/python3.7/site-packages/metaflow/includefile.py", line 229, in convert param_ctx = context_proto._replace(parameter_name=self.parameter_name) AttributeError: 'NoneType' object has no attribute '_replace'
Any suggestions?
production:FlowName-<deployment #>-<hash>. This can be worked around by specifying the whole production token for the namespace, but curious what your all's thoughts are for this usage.
[Errno 62] Too many levels of symbolic linkswhen running a job with AWS batch on my mac. It worked fine before but just started today. Any ideas why? Only for environment=conda. Not obvious at the moment-
RetryStrategyat the batch job level, which doesn't support customization, whereas the step functions API allows for step-level retry logic that also supports delay intervals, exponential backoffs, etc. It may not be feasible to switch this over, but wanted to run it by you all and see if it's something you've considered.
Hey guys, I'm aware of the
resources decorator for memory, CPU and GPU requests when running on AWS Batch but was wondering how Metaflow recommends handling the need of more disk space?
I've read that one can modify the AMI deployed on batch to get a larger than 8GB default volume size.
Is there a more friendly way to achieve this? I find myself working with datasets that are bigger than 8GB for some experiments but others use much less than 8GB.
Thanks!
Is there any documentation, procedure or scripts for transferring one Metdata service to another?
Imagine a user stood up all the infra in region A on AWS and wanted to move to region B without data loss.
I can write the S3 and postgres transfer scripts myself but was hoping to not re-invent the wheel.
Thanks!
Hi guys, I created a preprocessing Flow with some (default) parameters. It works well in my local machine and I able to run steps on the AWS. Now I want to integrate AWS Step functions to schedule my preprocessing and I created step function in the AWS with
step-functions create command but when I execute manually from AWS console, I will get
'$.Parameters' Error in AWS. Here is the whole message:
"An error occurred while executing the state 'start' (entered at the event id #2). The JSONPath '$.Parameters' specified for the field 'Value.$' could not be found in the input '{\n \"Comment\": \"Insert your JSON here\"\n}'"
When I checked the state machine generated in AWS, I see that there is item in the Environment section as follow:
{ "Name": "METAFLOW_PARAMETERS", "Value.$": "$.Parameters" }
I checked following AWS resource :
But I couldn't solve my problem. I believe I don't need to provide any parameters because I provided a default value for all parameters.
Do you have any idea? I appreciate your help.
Hey Metaflow, has anyone been able to create a single Batch Job Queue and Compute Environment that handles both CPU and GPU jobs, say with a
p3.2xlarge?
I ask as I've seen others suggest online using two separate Job Queues, one for CPU and one for GPU jobs but Metaflow only supports a single Job Queue.
While my Compute Environment has successfully spun up
p3.2xlarge instances, I have been unable to get a single GPU Step to leave the
RUNNABLE state . I've been exploring if this is related to the AWS Launch Template I created to increase the disk size of my instances.
If anyone has any advice, documentation or examples of running GPU jobs along side CPU jobs in the same Batch Job Queue and Compute Environment with Metaflow, I'd very much appreciate it
Hey guys, nearly forgot to follow up and share some advice when creating Batch compute environments that was especially relevant towards my previous issues when having SFN-executed, highly parallel and short-running jobs being co-located on large instances:
Not using the default Batch ECS-optimized AMIs that are still using the soon-to-be deprecated Amazon Linux 1 AMIs instead of the latest ECS-optimized Amazon Linux 2 AMIs.
The Linux 1 AMI uses the Docker
devicemapper storage driver, and preallocates 10GB of per-container storage. The Linux 2 AMIs use the Docker
overlay2 storage driver, which exposes all unused space on the disk to running containers.
Manually setting my Batch compute environments to use the latest ECS-optimized Linux 2 AMIs seems to be the cleanest approach, rather than playing with custom ECS Agent docker cleanup parameters. I also reached out to AWS support to see if there’s a reason why Batch hasn’t updated their default AMI, even though the Linux 1 AMI is end-of-life in 2 months. No information was given, but mentioned that they have an internal feature request for it without any guarantees or ETA on when it’d be changed.
Sharing in case this is useful for anyone else!
python parameter_flow.py --with retry step-functions createbut I get no such command step function, can some one maybe refer me to a good documntation?
Hey People. I would like to know a way to restrict the amount of parallelization that should be done in my local instance at any point in time. Parallelization meaning amount of cpu-cores used by the program. Say I have a task that has to executed parallelly as 50 threads, each requires 2 core to process and if my machine is a 32 core machine, Metaflow runs ~15-16 threads at a time utilizing all the processing-cores in the machine. I would like to restrict this parallelization to, say 12 threads at any given point of time.
In python's multiprocessing library, there is an option of setting the number of pool workers as a required number. Is there a way to achieve the same with Metaflow?
Hello, I see that Metaflow snapshots the code used in a run
From the docs: "Code package is an immutable snapshot of the relevant code in the working directory, stored in the datastore, at the time when the run was started. A convenient side-effect of the snapshot is that it also works as a code distribution mechanism for runs that happen in the cloud."
How would I access the code from previous runs?
Thanks!
Hey all, I assessed MetaFlow as an alternative to our Kedro + Airflow infra. Thought I'd share my assessment. One blocker for adopting MetaFlow is the inability to separate parameters from pipeline definitions.
For context, we currently use Kedro to generate many "flavors" of the same pipeline for different scenarios. For instance, we use the same template inference pipeline for model validation, active learning, detecting label noise, etc. We do this by defining our parameters separately from our DAGs. It would be nice if MetaFlow had integrations with (say) Facebook's Hydra so that we could easily compose config files and separate parameter definitions from DAG definitions.
Hey all, I have a question about logging. In our project, we are using python standard logging. () When we send a warning, debug vs. logs with it, Metaflow overrides these logs and sends it's to info.
Here is a code example;
import logging.config if __name__ == '__main__': logger = logging.getLogger('DebugFlow') DebugFlow()
When I took a look at how Metaflow handles logging, I realized that Metaflow uses different logging systems. I also tested logging configuration with
--event-logger. it looks like it doesn't work.
import logging.config from metaflow.plugins import LOGGING_SIDECAR, SIDECAR class CustomEventLogger(object): TYPE = 'customEventLogger' def __init__(self): self.logger = logging.getLogger('DebugFlow') def log(self, msg): self.logger.info('event_logger: %s', str(msg)) def process_message(self, msg): # type: (Message) -> None self.log(msg.payload) def shutdown(self): pass def setup_logger(): logger_config = { 'customEventLogger': CustomEventLogger } LOGGING_SIDECAR.update(logger_config) SIDECAR.update(logger_config) logging.config.dictConfig(LOGGING_CONFIG) if __name__ == '__main__': setup_logger() logger = logging.getLogger('DebugFlow') DebugFlow()
python debug_flow.py --event-logger=customEventLogger run
How can I configure the Metaflow logger? if it is not possible, how can I send debug, warning logs with Metaflow logger? Thanks.
Hello everyone ! I am exploring options for my next project implementation. Based on initial documentation metaflow seems to hit all the points my team is looking for in a framework. The only question I have is:
Our team uses Azure and not AWS. Are there going to be issues in deploying and scaling metaflow based solutions on Azure ?
hi all,
I'd like to know what the best way of passing a variable defined in a step that gets split and then use it after joining.
I could do something like use
self.merge_artifacts(inputs,include=[<vars>])? Im sure
inputs[0].<var> also works. These are fine, but Im not sure how efficient it is, or how it will cope with many more splits
Fuller simple example to see what I mean:
from metaflow import FlowSpec, step class Foo(FlowSpec): @step def start(self): self.msg = 'hi %s' self.steps = list(range(0,10)) self.next(self.bar, foreach='steps') @step def bar(self): print (self.input) print (self.msg%(' from bar')) self.next(self.join) @step def join(self,inputs): #to be able to use self.mg in the next step, use merge_artifacts self.merge_artifacts(inputs,include=['msg']) self.next(self.end) @step def end(self): print (self.msg%(' from end')) print ('end') if __name__ == "__main__": Foo()
I want to make sure I'm doing this in the best way
cheers. Loving metaflow btw , top work on all the docs!
Hello Metaflow community! After setting up Airflow for a proof of concept and evaluating the other obvious/recent options, I am trying to decide between Prefect (self-hosted) and Metaflow for next steps.
There seems to be a gap when it comes to monitoring Metaflow jobs (no ui/dashboard). How do you handle this? Am I missing something or do you fall back on AWS monitoring features?
You have reached your pull rate limit.. I believe this is due to the recent (November 2, 2020) change:. What is recommended approach to resolve this? Do you guys have a step by step instruction how we can set up a private account?
I have the following folder structure:
-metaflow project/
- flow_a.py - flow_b.py - helpers.py
Flow a and flow b are separated independent flow, but there some functions that occurs both in a and b,
For avoiding duplicate code I made helper function in
helpers.py which I import in both flow a and b.
My problem is, when I deploy on AWS step function with
python flow_a.py step-functions create
The flow Is uploaded but helpers.py not, therefore when I try to import in my steps function from the helpers.py the code fail,
What is the correct approach to address this problem?
Thx
{"Parameters": "{\"key1\": \"value1\", \"key2\": \"value2\"}"}always with those
"\.... but how do I send a dictionary as a params I tried several method and nothing works..
{"Parameters": "{\"key1\": \"value1\", \"key2\": \"value2\, \"key3\": \"json.dumps(my dict\"}"}this is not working, what is the corrct way?
Is there any reason why one would be unable to store a function in Flows' step and call said function in a subsequent step?
from metaflow import FlowSpec, step, conda_base @conda_base(python="3.8.3") class FunctionStateFlow(FlowSpec): """Explores how one can pass functions through Metaflow's state from one step to another. """ def simple_function(self): """Defines a simple function that we can use to pass throughout Metaflow """ return 42 @step def start(self): """Initial step in DAG.""" self.fun = self.simple_function print(f"is the variable 'fun' available in 'self'? {hasattr(self, 'fun')}") # prints true self.next(self.should_print_forty_two) @step def should_print_forty_two(self): """Prints forty two as it leverages the pickled function from the start step""" print(f"is the variable 'fun' available in 'self'? {hasattr(self, 'fun')}") # prints false print(self.fun()) # AttributeError: Flow FunctionStateFlow has no attribute 'fun' self.next(self.end) @step def end(self): """Does nothing, exists as a formality""" pass if __name__ == "__main__": FunctionStateFlow()
I know Metaflow does not support the storage generators but cannot see why storing this function would not work.
Using SAP HANA in one step
Hi, I need to connect to SAP Hana in one of my steps. I followed official documentation and tested it in sample flow and it works well with batch decorator.
The problem arise when I use @conda (or @conda_base) decorator. The error is
ModuleNotFoundError No module named 'hana_ml'.
I think I need something like
os.system(path/to/conda install hana-ml)
I posted my code inside this thread.
I appreciate your help. | https://gitter.im/metaflow_org/community?at=5fa4708db4283c208a4960bd | CC-MAIN-2021-04 | refinedweb | 2,808 | 56.55 |
Minimum Number of Platforms Required for a Railway/Bus Station
Introduction
In this blog, we will discuss how to find the Minimum Number of Platforms Required for a Railway/Bus Station. Such problems do come in interviews as well as in many contests. Before solving the problem, it’s recommended to have a good understanding of sortings and its applications. In this Blog we will dive deep into each detail to get a firm hold over the application of sorting in problems.
Let’s look at the problem statement.
Given a list of arrival and departure times of trains that reach a railway station. The task is to find the minimum number of railway platforms required such that no train waits.
Let’s understand the problem using an example.
Input: N and arrival and departure times of length N,
arr[] = {9:00, 9:40, 9:50, 11:00, 15:00, 18:00}
dep[] = {9:10, 12:00, 11:20, 11:30, 19:00, 20:00}
Output: 3
Explanation: The minimum number of railway platforms needed is 3 because the maximum number of trains is 3 from 11:00 to 11:20.
Approach(Naive)
First, let’s look at the naive approach that would probably strike anybody’s mind before jumping to an optimal approach.
So, the basic approach is to start from each index and find the number of overlapping intervals. We can maximize this value by computing the number of overlapping intervals from each index and updating it in each iteration.
So the naive/basic approach could be formulated as:
- For each index, compute the number of overlapping intervals.
- Keep updating the maximum value in each iteration.
- Finally, return the maximum value.
Now let’s look at the PseudoCode.
PseudoCode
Algorithm
___________________________________________________________________
procedure findMinimumPlatforms(arr, dep, N):
___________________________________________________________________
1. minPlatforms ← 1 #initially the maximum number of platforms needed is 1.
2. for time_i = 0 to N-1 do # from every time at index i
3. platforms_needed ← 1 # for each index min platform is at least 1.
4. for time_j = time_i+1 to N-1 do
5. if overlap(arr[time_i], dep[time_i], arr[time_j], dep[time_j]) == True do
6. platforms_needed ← platforms_needed + 1
7. end if
8. end for
9. minPlatforms ← max(minPlatforms, platforms_needed)
10. end for
11. return minPlatforms
12. end procedure
___________________________________________________________________
CODE IN C++
//C++ program to find the minimum number of platforms needed #include <bits/stdc++.h> using namespace std; // function which returns a boolean value if 2 intervals overlap or not bool overlap(int arr_i, int dep_i, int arr_j, int dep_j) { return (arr_i >= arr_j and arr_i <= dep_j) or (arr_j >= arr_i and arr_j <= dep_i); } // function that returns minimum number of platforms Needed int minPlatformsNeeded(vector<int> &arr, vector<int> &dep, int N) { int minPlatforms = 0; // minimum number of platforms // from each index find the number of platforms_needed for (int i = 0; i < N - 1; ++i) { int platforms_needed = 1; // at least one platform is needed for (int j = i + 1; j < N; ++j) { // if the time intervals overlap increment the number of platforms needed if (overlap(arr[i], dep[i], arr[j], dep[j])) platforms_needed++; } minPlatforms = max(minPlatforms, platforms_needed); }2)
This is because we iterate through all indices for each index. Hence it’s O(n2).
Space complexity: O(1) at the worst case because no extra space is used.
The above algorithm works in O(n2) time which is pretty slow. This gives us the motivation to improve our algorithm.
So let’s think about an efficient approach to solve the problem.
Approach(Efficient)
If we want to build up an efficient solution, we will need to find out if there is any redundancy or any repetition that we are doing.So the redundancy in our above approach is that we are finding the number of platforms needed from each index such that no train waits and maximize the answer. If we make some observations we can avoid finding the number of platforms needed from each index and maximizing it repetitively.
Carefully observe that we will get the maximum number of platforms when the maximum number of trains arrive and wait during a particular time interval. Once we make this observation, the rest of the solution will become very easy to think about.
So, to find the maximum number of trains at any point of time, we can take the time stamps, i.e, arrival and departure time-stamps one by one in the sorted order. As soon as a train arrives we increment the train counter and once it departs, we can decrement the train counter. At each point, we can maximize the number of platforms needed.
Following this approach will definitely improve the efficiency of the algorithm because we don’t require any repetitive checks.
Now we can formulate our approach:
- Put the arrival and departure times in another list and mark each time-stamp as arrival or departure so that we can decide to increment/decrement the train counter.
- Sort the list of timestamps.
- Iterate over each time-stamp one by one.
- Increment the counter if it’s an arrival time stamp, else decrement the counter.
- At each point maximize the answer.
Let’s look at the Code.
CODE IN C++(Efficient)
//C++ program to find the minimum number of platforms needed #include <bits/stdc++.h> using namespace std; // function that returns minimum number of platforms Needed int minPlatformsNeeded(vector<int> &arr, vector<int> &dep, int N) { int minPlatforms = 0; // minimum number of platforms // vector storing all arrival and departure time_stamps vector<pair<int, char>> time_stamps; // 'a' --> arrival // 'd' --> departure for(int time_s: arr) time_stamps.push_back({time_s, 'a'}); for(int time_s: dep) time_stamps.push_back({time_s, 'd'}); // sort time_stamps by values sort(time_stamps.begin(), time_stamps.end()); // train counter int train_counter = 0; // iterate over each time stamp for(pair<int, char> t: time_stamps) { if(t.second=='a') train_counter++; // increment if it's arrival time stamp else train_counter--; // decrement if it's departure time stamp minPlatforms = max(minPlatforms, train_counter); // maximise the answer }*LogN)
Since we are sorting the set of intervals, which takes most of the time, it takes O(N*LogN) time.
Space complexity: O(N) at the worst case, as we are using auxiliary space to store all arrival and departure times.
Hence we reached an efficient solution from a quadratic solution.
Frequently Asked Questions
Q1. Does a greedy algorithm always work?
Answer) No, a greedy algorithm does not always work. To solve a problem via a greedy algorithm, you need to have intuitive proof in your mind, at least to lead to the correct solution. To show when it doesn’t work, you can think of a counter-example that will help rule out the greedy algorithm.
Q2. Why is sorting important?
Answer) Sorting is a very useful technique because we can reduce a significant number of comparisons. With reference to the question discussed in this blog, we moved from an exponential solution to an O(N*LogN) solution. This is sufficient to explain the power of sorting.
Q3. What is a comparator?
Answer) Comparator is an important concept. It is a function object that helps to achieve custom sorting depending on the problem requirements.
Key Takeaways
This article taught us how to solve the problem of the Minimum Number of Platforms Required for a Railway/Bus Station. We also saw how to approach the problem using a naive approach followed by an efficient solution. We discussed an iterative implementation using examples, pseudocode, and proper code in detail.
We hope you could take away critical techniques like analyzing problems by walking over the execution of the examples and finding out how we can apply sorting to simplify our task and make fewer comparisons. Testing a greedy algorithm as early as possible is a good practice as there is no guarantee that it will always work. Hence, if it does not work, you could rule it out and not follow a similar approach to solve the problem.
Now, we recommend you practice problem sets based on the concepts of Minimum Number of Platforms Required for a Railway/Bus Station to master your fundamentals. You can get a wide range of questions similar to the problem Minimum Number of Platforms Required for a Railway/Bus Station on CodeStudio. | https://www.codingninjas.com/codestudio/library/minimum-number-of-platforms-required-for-a-railway-bus-station | CC-MAIN-2022-27 | refinedweb | 1,373 | 55.03 |
With.
However, there’re two extra requirements for SharePoint remoting. I just list them here, if you want further details, Zach Rosenfield, the Program Manager who owns SharePoint Windows PowerShell support, explained in his blog SharePoint PowerShell “Remoting” Requirements.
If this value is too low, then you may have error messages like:System.Management.Automation.RemoteException: Process is terminated due to StackOverflowException.
Setup.
To enable CredSSP on the server, use the following command:
Enable-WSManCredSSP –Role Server *
Use Get-WSManCredSSP to check if it is enabled correctly.
If\username
When you need to create a credential object, read this password (the secure string) from the file and create the credential with the following command:
$pwd = Get-Content C:\crd-sharepoint.txt | ConvertTo-SecureString
then
Jie.
Excellent walk through.
can I do this for 2007 ... i tried using add-pssnapin and it doesn't work
also, Microsoft.SharePoint is not listed as a get-pssnapin -registered item
SharePoint 2007 doesn't have the PowerShell cmdlets. You have to use the stsadm tool.
I have one question. Do I still have to set the thread options for the shell to "ReuseThread"?
This is great. But why is it that when I am physically I am on the SharePoint box everything works great, but when I remote into the box with powershell it tells me I do not have sufficient access or permissions (simply trying a get-spsite)?
Excellent walk through, however it seems that you must obtain SharePoint objects over PSRemoting using SPSecurity.RunWithElevatedPrivileges, otherwise yo do not have proper access. This renders a lot of the cmdlets useless.
Also for anyone interested in being able to use the RunWithElevatedPrivileges method using PSRemoting, you can run this nice piece of code:
Add-Type -Language CSharpVersion3 -TypeDefinition @"
using System;
using Microsoft.SharePoint;
public class GetElevatedSPSite
{
public static SPSite GetSPSite(String SiteName)
{
SPSite mysite = null;
SPSecurity.RunWithElevatedPrivileges(delegate(){ mysite = new SPSite(SiteName); });
return mysite;
}
}
"@ -ReferencedAssemblies @("C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI\Microsoft.SharePoint.dll")
And then you can run this to get your site object
[GetElevatedSPSite]::GetSPSite($SiteURL)
For me your screenshots do not show! A shame as this is very interesting info!
Very helpful post. Very clear commentary and suggested phrasing are most impressive, as are his and your generosity in sharing this explanation and example.
pfefferspray-discount.de
stackoverflow.com/.../credssp-not-recommended-in-production-environments ?
What ports do you need to open for remote windows powershell if your SharePoint runs in an application vault?
zsharepoint.wordpress.com - new place of zach's blog
Not working here. On the server (a MS Windows 2008 R2 Server running Sharepoint 2010 in Windows Azure.), I've ran Enable-PSRemoting and Enable-WSManCredSPP with server role. Get-WSManCredSPP tells me that "This computer is configured to receive credentials from a remote client computer."
On the client, I'm running Powershell ISE locally on a MS Windows 8 without Sharepoint installed. The command "Enable-WSManCredSSP -Role client -DelegateComputer *" yields an error "The client cannot connect to the destination specified in the request." I've also tried with specifying my server using "myServer" and "myserver.cloudapp.net" but in vain.
Note, that I'm able to send start and stop commands from ISE to virtual machines in Azure, i.e. I should have some settings ok.
Pointers appreciated.
Great walk through. Only thing missing is mention that remote commands require running SharePoint cmdlets within a RunWithElevatedPrivileges command block.
[Microsoft.SharePoint.SPSecurity]::RunWithElevatedPrivileges({
<SharePoint Commands>
})
Thank you for this excellent article! I am testing migrating from Sharepoint 2010 (Windows Server 2008 R2) to Sharepoint 2013 (Windows 2012). Have encountered the problem of "No snap-ins have been registered for Windows PowerShell 3". After weeks of googling, this is the only site who have a real solution! And it works with Sharepoint 2013 running on Windows 2012!
I have save this entire web page in case it goes down!!! Thank you!!!
Hi there, excellent walkthrough, I have totally automated sharepoint administration with guidance from your post and then started this strange behaviours in my computer. Every time I run the script then it gives an exception saying the rpc server is unavailable, meaning there is no connectivity to the remote server.
so tried to ping it and it indeed wasn't reachable. So I called the network team to look into the issue as i assumed it be some network thing, the network guy just restarted my pc and it worked like a charm the next moment. He advised me to restart again if such issue appears as it may be due to dns cache. Now this thing was repeated for a couple of times and every time i have restarted my pc. Then my computer was replaced and surprisingly this issue still continued, it was then i realised it was problem with my script and not with computer or network.Then after terrible research on sharepoint powershell remoting i landed into this page which talks about Start-SPAssignment and Stop-SPAssignment and how it can prevent the DNS cache flooding. Initially i have skipped it as it didn't make much sense to me but now i have realised my mistake.
It would be nice of you add the SP-Assignment thing in your post as i have wasted lot of days not knowing the issue.
source: | http://blogs.msdn.com/b/opal/archive/2010/03/07/sharepoint-2010-with-windows-powershell-remoting-step-by-step.aspx | CC-MAIN-2015-06 | refinedweb | 895 | 56.25 |
SOLVED [Request] Paste special (i.e. from other vector apps)
Can there be a special Paste function that pastes in the middle of the current glyph view regardless of the coordinates of the vector art from which it came in Illustrator? Maybe
Shift+Cmd+V?
I see benefits for the current paste behavior, like if you pay close attention to position and want RF to honor it. But I think that often if vector art is pasted from elsewhere, it's lost off the main work area, and then it becomes a game of hide-and-seek. Video
Would be great to have the option to paste it within view.
TIA,
Ryan
here’s a quick way to move the pasted glyph contours to the origin position:
g = CurrentGlyph()
L, B, R, T = g.bounds
g.moveBy((-L, -B))
hope this helps!
Great thanks @gferreira! Is there a way before that to get the bezier data from the pasteboard and append that to CurrentGlyph? Is it NSPasteboard?
the pasted contours are selected, so you can get the selection bounds and move only the selected contours to the origin:
from mojo.events import addObserver

class MyCustomPaste:

    def __init__(self):
        addObserver(self, "pasteCallback", "paste")

    def pasteCallback(self, notification):
        g = notification['glyph']
        L = min([c.bounds[0] for c in g.selectedContours])
        B = min([c.bounds[1] for c in g.selectedContours])
        for c in g.selectedContours:
            c.moveBy((-L, -B))

MyCustomPaste()
cheers!
RoboFont uses the old aicbTools from @tal.
a small example of getting copied vector data and drawing it in the current glyph, scaled to the x-height:
from lib.contrib.aicbTools import readAICBFromPasteboard, drawAICBOutlines

data = readAICBFromPasteboard()

source = RGlyph()
drawAICBOutlines(data, source.getPen())

dest = CurrentGlyph()
scaleToHeight = dest.font.info.xHeight

bounds = source.bounds
if bounds:
    minx, miny, maxx, maxy = bounds
    h = maxy - miny
    source.moveBy((-minx, -miny))
    scale = scaleToHeight / h
    source.scaleBy(scale)
    dest.clear()
    dest.appendGlyph(source)
Great thanks you two!
I've combined these things into one script that does pretty much what I was looking to do. Here it is:
# menuTitle : Paste to Baseline (Pasteline?)
# shortCut : shift+command+v

from lib.contrib.aicbTools import readAICBFromPasteboard, drawAICBOutlines

g = CurrentGlyph()

data = readAICBFromPasteboard()
source = RGlyph()
drawAICBOutlines(data, source.getPen())

bounds = source.bounds
if bounds:
    L = bounds[0]
    B = bounds[1]
    source.moveBy((-L, -B))
    g.appendGlyph(source)
Pastaline? | https://forum.robofont.com/topic/800/request-paste-special-i-e-from-other-vector-apps | CC-MAIN-2022-40 | refinedweb | 387 | 61.02 |
Re: precompiled headers question
- From: Bruno van Dooren <bruno_nos_pam_van_dooren@xxxxxxxxxxx>
- Date: Mon, 3 Apr 2006 04:39:02 -0700
What does the directory structure of your project look like? In particular,
where are stdafx.h and the file containing the #include in relation to one another?
All my source files are not in the same folder. I have made extra folders to contain sources and headers. For example, there is a folder called Publish, which contains the headers and sources of the publishing functionality. When I added these files to my project as new items, they had their first line as "#include "stdafx.h"". That was the time when my project settings had pch enabled. Then at some point I changed the project settings to not use pch, after which, when I compiled the project, I started getting errors saying that there was no stdafx.h. But I knew that stdafx.h is at the root of the source tree, so I just changed it to "#include "../stdafx.h"" and it compiled fine, which also got me thinking why it (let's say blockpublishing.cpp) was compiling before when it had no stdafx.h on its path. (my lack of cpp knowledge)
That is because the compiler will look for the PCH instead of trying to load
the real include file.
if you separate include and source files you have to add additional include
directories in your project settings. otherwise you have to explicitly tell
the compiler where those files are (by using ../..) and that removes
flexibility from your project layout.
I have other problems as well, related to header file includes. For example, I have a common.h file that has some utility functions and some global variables that I intend to use in more than one place. Now I include common.h in one file, let's say one.h, and use its functions, which works ok. Then in some other place, let's say two.h, I include common.h and the compiler starts giving me errors, while linking the source, saying that "something__proc___here" is already defined in "something.obj". However, if I don't include common.h in two.h, then two.h has no idea of the things (global functions and variables) that I wrote, which I assume will be visible to two.h if they are inside common.h. Now the next thing I'm planning to do is this:
//common.h
extern int g_Global;
extern int MyFunction(void);
//common.cpp
#include "common.h"
int g_Global;
int MyFunction(void)
{
//...
}
that way there is only 1 definition, and you can use them by simply
including common.h
include my common.h in stdafx.h and see what happens. Also I will play with the
#pragma once stuff, which looks interesting, and I hope it's not Microsoft
specific.
another thing you'll see often is this:
#ifndef __SOME_INCLUDE_GUARD__
#define __SOME_INCLUDE_GUARD__
//declarations here
#endif
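the #pragma once mentioned above does the same job on compilers that support it
(it is not part of the standard, but support is common):

// foo.h
#pragma once
//declarations here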
The reason I wrote the second para above is: having a C# background, I think I
really need to read some really good and detailed article on the problems that
a dev can face while working on C++ projects, having to do with #include. It
would be great if you or anyone else has some web links on this, or maybe we
can discuss it here.
to find out if something is MSFT specific, find the topic in the MSDN help
collection. it is always mentioned if something is ANSI or microsoft specific. MSDN is always a good place to start when looking for info.
if you are serious about C++ you should buy a good book. there are several.
my favorite is still 'The C++ Programming Language' by Stroustrup
it can be a bit dull at times, but it is complete.
if you have specific questions you can always ask them here also.
--
Kind regards,
Bruno.
bruno_nos_pam_van_dooren@xxxxxxxxxxx
Remove only "_nos_pam"
Why do I get Unknown Format error when committing my Calendar value to my DB?
I have bound a database field to the selected date property of the calendar component. The date is displayed correctly in the designer and at runtime, but when I try to commit data to the database (e.g. using an Oracle database with a column of type DATE) it gives me an exception that it is an object of unknown format. How do I fix this?
Instead of directly binding to the date column, use a property called Date (see the code below) in the Page1.java and bind that property to the Calendar
public java.util.Date getDate(){
    return (java.util.Date) getValue("#{currentRow.value[TRIP.DEPDATE]}");
}

public void setDate(java.util.Date date){
    setValue("#{currentRow.value[TRIP.DEPDATE]}",
        new java.sql.Date(date.getTime()));
}
For more detail see blog: | http://wiki.netbeans.org/VwpFAQInputCalFormat | CC-MAIN-2019-39 | refinedweb | 146 | 52.36 |
Simulating the motion of an object within a real-time environment with gravity and collision effects may not be a straightforward task in ordinary programming languages; doing such tasks requires a good understanding of using timers and sometimes thread management, and this is why there are separate simulation tools for this and other similar tasks.
In this article, I will demonstrate to you how, using a timer and basic motion and collision equations, we could model the motion of three balls in a gravity enabled environment. You will see how these balls are going to collide each other and reflect from a wall, and even more, you can control their motion by updating some motion variables.
The motion of these balls is controlled and operated under the gravity and collision systems, using Newton's basic motion equations and collision equations. The positions of the three balls are updated every 20ms using a timer which will also take a snapshot of that motion.
Before you see the code, I believe we should review the basic Gravity and Collision equations first.
The position equation:
X = Xi + Vx * t
Motion third equation:
Y = Y0 + Vy0 * t - 0.5 * g * t^2
The velocity equations:
Vy = Vy0 - g * t
Vx = 0.99 * Vx0
Collision: an interaction between two or more bodies, each one exerting a large force on the others in a very short time; the bodies might not even touch!!!
Notes:
Theory (conservation of momentum):
V1×m1 + V2×m2 = V1`×m1 + V2`×m2
Assuming the same mass:
V1 + V2 = V1` + V2`
For equal masses in a head-on elastic collision the balls simply exchange velocities: if V1 = 5 and V2 = -3 before impact, then V1` = -3 and V2` = 5 afterwards.
The code below will clarify these equations in their context.
First of all, we should define the motion variables for each ball in our simulation.
///////////////////////////////////////// ball /////////////////////////////////////
// xspeed: The X axis speed of the ball – //
// it will be calculated based on the mouse movement speed. //
// yspeed: The Y axis speed of the ball – //
// it will be calculated based on the mouse movement speed. //
// newyspeed: The updated Y axis speed of the ball //
// after applying Newton and collision equations. //
// startingypos: The initial Y position of the ball – //
// when you stop dragging the ball. //
// newxpos: The updated X position of the ball //
// newypos: The updated Y position of the ball //
// oldxpos: The previous X position of the ball //
// oldypos: The previous Y position of the ball //
// newx: The new X position of the mouse after dragging //
// oldx: The old X position of the mouse after dragging //
// newy: The new Y position of the mouse after dragging //
// oldy: The old Y position of the mouse after dragging //
// acc: The acceleration = 10 //
// t: The time //
// xmouse: The X axis of the mouse pointer position //
// ymouse: The Y axis of the mouse pointer position //
// dragging: Boolean variable to check whether the ball is being dragged or not. //
// trace: Boolean variable to check if the trace option is on or off. //
// collisiony: Boolean variable to check if the ball hits the ground or not. //
////////////////////////////////////////////////////////////////////////////////////
// ball 1 variables
double xspeed,yspeed,newyspeed,startingypos;
double newxpos,newypos,oldxpos,oldypos;
double newx,oldx,newy,oldy;
double acc,t;
const int ground = 500;
int xmouse,ymouse;
bool dragging=true,trace,collisiony;
int choice = 1;
int numberofballs = 1;
Ballinstance b1 = new Ballinstance();
Next, we will track the ball motion and check for a collision every 20 ms in our timer, and accordingly we will update the balls' positions.
private void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
b1.play(ref xspeed,ref yspeed,
ref newyspeed,ref startingypos,
ref newxpos,ref newypos,ref oldxpos,ref oldypos,
ref newx,ref oldx,ref newy,ref oldy,
ref acc,ref t,
ref xmouse,ref ymouse,
ref dragging,ref trace,ref collisiony);
ball.Left = (int)newxpos;
ball.Top = (int)(ground - newypos);
Collision();
}
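The Collision() helper called by the timer is not shown in this excerpt. Because the balls have equal mass, the preserved-momentum rule above reduces to swapping velocities on contact, so a minimal sketch could look like the following (ball2, xspeed2 and yspeed2 stand for the second ball's control and motion variables, and are hypothetical names here):

// Sketch only: equal-mass, head-on elastic collision between two balls.
// With m1 == m2, conservation of momentum reduces to swapping velocities.
private void Collision()
{
    double dx = ball2.Left - ball.Left;
    double dy = ball2.Top - ball.Top;
    double distance = Math.Sqrt(dx * dx + dy * dy);

    if (distance <= ball.Width)      // the balls are touching
    {
        double temp = xspeed;        // swap X velocities
        xspeed = xspeed2;
        xspeed2 = temp;

        temp = yspeed;               // swap Y velocities
        yspeed = yspeed2;
        yspeed2 = temp;
    }
}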
Below is the Ballinstance class, and the play function where most of the work is done. As you will see, this function is visited every 20 ms (the timer period) and then checks the calling ball's status, which can be one of the following:
If the ball calling the play function was in the drag mode, then the ball position will be updated according to the mouse pointer position, and the ball's initial speed will be calculated by measuring the change of the ball position between two successive calls to the play function; within 20 ms.
If the ball calling the play function wasn't in the drag mode, then the ball position will be updated according to Newton's and projectile motion equations and the Collision preserved momentum equation.
public class Ballinstance
{
int xpos,ypos;
const int ground = 500;
public void play(ref double xspeed,
ref double yspeed,
ref double newyspeed,
ref double startingypos,
ref double newxpos,
ref double newypos,
ref double oldxpos,
ref double oldypos,
ref double newx,
ref double oldx,
ref double newy,
ref double oldy,
ref double acc,
ref double t,
ref int xmouse,
ref int ymouse,
ref bool dragging,
ref bool trace,
ref bool collisiony)
{
xpos = (int)newxpos;
ypos = (int)newypos;
// this code will be visited 50 times per second while dragging
if (dragging)
{
// Grip the center of the ball when dragging
xpos = xmouse;
ypos = ymouse;
// While dragging the starting y-axis position of the ball is ball.Top
startingypos = ground - ypos;
// Calculate the x and y speed based
// on the mouse movement within 20 msec
// speed = distance/time -> time = 20 millisecond
// the speed is the change in the displacement
// with respect to the time which
// is already running (the code is within
// the timer), so we don't have to divide
// by the time
newx = xpos;
newy = ground - ypos;
xspeed = (newx-oldx)/1;
yspeed = (newy-oldy)/1;
oldx = newx;
oldy = newy;
// The time -while dragging- will not start yet
t=0;
}
else
{
// This code will be visited 50 times per second while not dragging
// The ball position is where it's last dragged
oldxpos = xpos;
// X-axis motion
if(xpos < 580 && 0 < xpos)
{
newxpos = oldxpos + xspeed;
}
else
{
// Here the ball hits the wall
// Ball xspeed will decrease every time it hits the wall
// Minus sign: to change the ball direction
// when it collides with the walls
// wall resistance, the ball will
// lose some energy when hitting the wall
xspeed *= -0.9;
newxpos = oldxpos + xspeed;
}
// Y-axis motion
if(0 < newypos || collisiony)
{
// Newton first motion equation
newyspeed = yspeed - (acc*t);
// Newton third motion equation
newypos = startingypos + ((yspeed*t)- 0.5*acc*(t*t));
// no collision happened
collisiony = false;
}
else
{
// Here the ball hits the ground
// Initialize the ball variables again
startingypos = -1;
// Here we set startingypos = -1, not 0, because
// if it were 0, newypos would be 0 every time the ball
// hits the ground, so no bouncing
// would happen to the ball; look at the
// equation of newypos below when t = 0
t = 0;
// Ball yspeed will decrease every time it hits the ground
// 0.75 is the elasticity coefficient
// the initial speed(yspeed)
// is 0.75 of the final speed(newyspeed)
yspeed = newyspeed * -0.75;
newypos = startingypos + ((yspeed*t)- 0.5*acc*(t*t));
collisiony = true;
}
// Always
// Ball xspeed will always decrease, even if it didn't hit the wall
xspeed *= 0.99; // air resistance
#region explanation of xspeed condition
// This is to stop the ball when it is heading
// to the left. You can notice that removing
// this condition will make the ball never
// stop while it is heading left until it
// hits the left wall. To know why,
// run the simulation in debug mode and watch
// the value of newxpos
// newxpos = oldxpos + xspeed
// when 0 < xspeed < 1 (the ball heading right),
// ball.left = (int)newxpos, the casting
// forces the ball left position value
// to be the same as its previous value
// because oldxpos and newxpos are equals,
// and hence the ball will stop.
// but when -1 < xspeed < 0 (the ball heading left),
// ball.left = (int)newxpos, the casting
// here will not work correctly, because
// the value of oldxpos(which is integer value)
// will always be decremented by the xspeed,
// this will force the newxpos also to be
// always decremented by xspeed and
// hence ball.left will always decremented
// by 1 (int) casting, and hence the ball will never stop.
#endregion
if(xspeed > -0.5 && xspeed < 0)
xspeed = 0;
// Update the ball position
xpos = (int)newxpos;
ypos = (int)(ground - newypos);
// Increase the time
t += 0.3;
}
}
}
The project is not completed yet. I was thinking of creating some obstacles to see how the balls will collide with them, seems funny :). You also can improve the way it looks and make it more usable if you write a routine to drag and drop the ball by grabbing it, which I can't find out how to do in C#!!
I'd like to thank Anas Trad and Du3a2 Al-ansari, my friends, for their contributions to help finish this. | https://www.codeproject.com/Articles/22438/Gravity-and-Collision-Simulation-in-C?fid=953330&df=90&mpp=10&sort=Position&spc=None&tid=3189947 | CC-MAIN-2017-26 | refinedweb | 1,447 | 58.05 |
A python wrapper for the Nomics API
Project description
nomics-python
A Python wrapper for the Nomics Crypto Market Data API. For some context, Nomics is a crypto market cap and pricing data provider.
Disclaimer
Although the api call descriptions are from the official documentation, this is an unofficial API wrapper.
Getting Started
Before using the Nomics API, sign up for a free API key here.
To install the wrapper, enter the following into the terminal.
pip install nomics-python
Every api call requires this api key. Make sure to use this key when getting started.
from nomics import Nomics

nomics = Nomics("This-Is-A-Fake-Key-123")
markets = nomics.Markets.get_markets(exchange = 'binance')
Wrapper Wiki
More information on all of the different API methods is available on the Github wiki page here.
Contributing
This project is open for contributions! Teamwork makes the dream work.
When people interact with their computer or phone, often times it looks like this: activate the screen, open the browser, type cryptic strings into an empty search bar, scan the results for a moment, then click on a top result. Search has given tremendous power to Internet users. One can find their desired information or product and they can find it fast. Miraculous, indeed, but with great power comes great expectations.
If you have a website or application, your users will demand similar expediency. The Elastic App Search Service (formerly Swiftype App Search) is a product that can help you streamline the information or product acquisition phase of the users' web experience. You want that, too, because more rewarding interactions will help you accomplish your business goals. No one wants to wade through pages of results! They want magical boxes that transport them to exactly what they are seeking - or something even better. This tutorial will demonstrate how to start building such an experience.
The completed sample application is live here: gemhunt.swiftype.info. Try it out!
The unfinished code is available on GitHub.
You can access the completed branch to see the finished source code.
Requirements
- A recent version of Ruby installed on your device. Need help? See Ruby.
- Half an hour or less.
- An active Elastic App Search Service account or free trial. Sign up here!
Getting Search-y
We are going to build a simple, engaging search experience on top of Ruby on Rails.
In doing so we shall learn how to…
- Set-up a sample search application.
- Create an Engine within App Search.
- Configure the App Search Ruby Client.
- Ingest documents.
- Alter the schema.
- Fine-tune Search relevance.
In the end, we will have a powerful, slick and intuitive, search-based Rails application. As one gains comfort with Search development, they can use their own data and modify the application as they see fit. Perhaps this sample application will become the foundation for something magnificent.
Setup
To get started, clone the tutorial repository and run bin/setup. This will install bundler and the required gems, set up the SQLite database and populate it with seed data. The sample seed data that we will search over is composed of JSON. The JSON contains a set of popular RubyGems. Everyone loves RubyGems! We can examine the raw data within data/rubygems.json.
$ git clone [email protected]:Swiftype/app-search-rails-tutorial.git
$ cd app-search-rails-tutorial
$ bin/setup
To make sure everything is in order, start the app with rails server.
$ rails server
=> Booting Puma
=> Rails 5.2.0 application starting in development
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.11.3 (ruby 2.5.1-p57), codename: Love Song
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
Once the server has started, point your browser at localhost:3000.
It should look something like this:
Looks gooood ~ now, try a query. Hmm. That is not optimal. No matter what we query, a gigantic, linear list of RubyGems comes back. It seems we are just showing the set of data! Although it is paginated and styled in a crisp and tidy way, this is not valuable. It would be better if visitors could search!
Enter App Search
Begin a free trial of the Elastic App Search Service by creating an account. Once we have logged in for the first time, we will be prompted to create an Engine.
An Engine is a repository that houses our indexed documents. The App Search platform interacts with the Engine, providing search analytics and tools to help curate results, manage synonyms and much more. An Engine contains documents; documents are often objects, products, profiles, articles -- they can be many things.
Given that we are going to fill our Engine with RubyGems, how about we keep it simple and call it ruby-gems.
Install & Configure Elastic App Search Client
We provide an official Ruby Client. Through it, we can access the App Search API from within Ruby-based applications. We want to use it!
Open up the Gemfile and add:
gem 'swiftype-app-search', '~> 0.3.0'
Then, run bundle install to install the gems.
$ bundle install
Next, we will need credentials to authorize against the App Search API. We will need the Host Identifier and the Private API Key.
The Host Identifier is a unique value that represents an account. The Private API Key is a standard, all-access key that can manipulate any resource except those dealing with other credentials. Given its powerful nature, we want to keep it secret - and safe.
There are many different ways to keep track of API Keys and other secret information in your development environment. The dotenv gem is a strong choice. However, to keep things nice and clear - albeit, not as secure - we have placed the values within a swiftype.yml file. The swiftype.yml file is included within our .gitignore. Should you want to host your application somewhere, you will need to bring the credentials with you.
The tutorial's setup script created config/swiftype.yml for us. We should now fill in our Host Identifier and Private API Key.
# config/swiftype.yml
app_search_host_identifier: [HOST_IDENTIFIER] # It should start with "host-"
app_search_api_key: [API_KEY] # It should start with "private-"
Initialize ~
With your new Engine, matching key, and Host Identifier, we can create a new initializer within config/initializers so that we may bring App Search to life:
# config/initializers/swiftype.rb
Rails.application.configure do
  swiftype_config = YAML.load_file(Rails.root.join('config', 'swiftype.yml'))

  config.x.swiftype.app_search_host_identifier = swiftype_config['app_search_host_identifier']
  config.x.swiftype.app_search_api_key = swiftype_config['app_search_api_key']
end
The client will be used in several places. We should wrap it in a small class. To do that, we will craft a new lib directory within app/ for our new Search class.
# app/lib/search.rb
class Search
  ENGINE_NAME = 'ruby-gems'

  def self.client
    @client ||= SwiftypeAppSearch::Client.new(
      host_identifier: Rails.configuration.x.swiftype.app_search_host_identifier,
      api_key: Rails.configuration.x.swiftype.app_search_api_key,
    )
  end
end
We are almost ready to ingest some documents. Before we do that, we need to restart Spring so that it will pick-up our new app/lib directory…
$ bundle exec spring stop
Bring on the Documents
For now, our documents exist within our local SQLite database. We need to move these documents into App Search, into our Engine. The act of doing so is known as ingestion. We want to ingest the data, so that it may be indexed - or structured - and searched upon.
If we have documents in two places: the Engine and the database, then we need to establish truth. The application database is our "Source of Truth". As users interact with our application, the state of database items will change. Our Engine needs to be aware of those changes.
We can take advantage of Active Record Lifecycle Callbacks to keep the two data sets in sync. To do so, we will add an after_commit callback to notify App Search of any new or updated documents committed to the database and an after_destroy callback for when a document is removed.
# app/models/ruby_gem.rb
class RubyGem < ApplicationRecord
  validates :name, presence: true, uniqueness: true

  after_commit do |record|
    client = Search.client
    document = record.as_json(only: [:id, :name, :authors, :info, :downloads])
    client.index_document(Search::ENGINE_NAME, document)
  end

  after_destroy do |record|
    client = Search.client
    document = record.as_json(only: [:id])
    client.destroy_documents(Search::ENGINE_NAME, [ document[:id] ])
  end

  # ...
end
As this is an example case, we are calling the Elastic App Search API in a synchronous way. The optimal method when dealing with external services like the App Search API is to use an asynchronous callback to avoid hanging up other application requests. For more information on asynchronous call writing, check out the ActiveJob framework provided by Rails.
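As a rough sketch of that asynchronous approach (the job class and queue name below are illustrative, not part of the tutorial), the indexing call could be moved into an ActiveJob:

# app/jobs/index_document_job.rb (hypothetical)
class IndexDocumentJob < ApplicationJob
  queue_as :default

  def perform(document)
    Search.client.index_document(Search::ENGINE_NAME, document)
  end
end

The model callback would then enqueue work with IndexDocumentJob.perform_later(document) instead of calling the API inline.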
Catchy Hooks
Before we apply more code changes to the application, a demonstration of our callbacks.
Open up a rails console from within the project directory.
$ rails console
...
Running via Spring preloader in process 15983
Loading development environment (Rails 5.2.0)
irb(main):001:0>
Within the console, we can explore our documents. Reveal yourself, puma!
irb(main):008:0> puma = RubyGem.find_by_name('puma') => # ...
Next, we can make a small change to the document named
puma that we have found...
irb(main):009:0> puma.info += ' Also, pumas are fast.' => # ...
... and then save the document.
irb(main):010:0> puma.save => true
The call to
save should trigger the
after_commit callback. Moments later, if we open the documents panel in the App Search Dashboard, we should see a document that corresponds to the
puma gem.
Our first indexed document! Huzzah! Although, we have many documents yet to index...
Mass Ingestion
If we were building an App Search application from scratch, we would not need to worry about ingesting our existing data; the after_commit hook would handle new documents as they are added. However, our example application already has more than 11,000 RubyGem documents.
To ingest them all for indexing without waiting for individual after_commit hooks on each record, we can write a rake task.
We will place our app_search.rake task within the lib/tasks/ directory that lives under the project root. Do note that this is not our app/lib/ directory:
# lib/tasks/app_search.rake
namespace :app_search do
  desc "index every Ruby Gem in batches of 100"
  task seed: [:environment] do |t|
    client = Search.client
    RubyGem.in_batches(of: 100) do |gems|
      Rails.logger.info "Indexing #{gems.count} gems..."
      documents = gems.map { |gem| gem.as_json(only: [:id, :name, :authors, :info, :downloads]) }
      client.index_documents(Search::ENGINE_NAME, documents)
    end
  end
end
The next step is to run this task from the command line. Consider watching the log file in another terminal to see it in action. Seeing documents race into your Engine is fun!
To do so, type tail -F log/development.log within another terminal window, then use rails to initiate the task:
$ rails app_search:seed
Ingestion begins! If you take another look at the documents panel in the App Search Dashboard, you should see that all of your documents are now indexed within your Engine. Check out your schema, too, and perhaps try some sample queries from the dashboard:
Search!
We now have an Engine bubbling with documents. It is time to alter our RubyGemsController#index. This is when search starts to come to life! We will re-construct the controller so that we transform our current return-all-the-things text box into a true search bar.
# app/controllers/ruby_gems_controller.rb
class RubyGemsController < ApplicationController
  PAGE_SIZE = 30

  def index
    if search_params[:q].present?
      @current_page = (search_params[:page] || 1).to_i
      search_client = Search.client
      search_options = {
        page: {
          current: @current_page,
          size: PAGE_SIZE,
        },
      }
      # ... (the excerpt elides running the search and exposing the results to the view)
    end
  end

  def show
    @rubygem = RubyGem.find(params[:id])
  end

  private

  def search_params
    params.permit(:q, :page)
  end
end
Open up localhost:3000 and try it out! Neat! We can search, a little…! But we want to search well.
The Following is Highly Relevant
Results appear, which is good. However, searching with a query string of rake returns the rake gem as the 14th result. This is not ideal! What can we do!?
By default, App Search treats all fields with equal importance. We know from experience that users will search by the name of the gem. We should give that field more importance than the others. If we give the name field a higher weight than info and authors, then it will have the greatest impact on the final document score. The document score, the relevance, is what governs the response order.
# app/controllers/ruby_gems_controller.rb
# ...
  def index
    if search_params[:q].present?
      @current_page = (search_params[:page] || 1).to_i
      search_client = Search.client
      search_options = {
        search_fields: {
          name: { weight: 2.0 },
          info: {},
          authors: {},
        },
# ...
Note: If you provide the search_fields option to the Searching API, you must include every field you would like to be included in the search. This is why we added info and authors, even though we are not passing any search options for those fields.
Weights are powerful. You can read more about them within our App Search searching guide. If we try another search, we can see that the rake gem is first result! Relevance is improved. But there is even more that we can do.
Change the Field
We want to add an option to our search interface that filters out RubyGems that do not have many downloads. We want to see what is Popular.
First, we will need to make a small change to our Engine's schema to filter the downloads field within a numeric range. Our Engine schema displays the type of data that is contained within each document field. By default, App Search assumes every field is Text. Fields can be: Text, Number, Date, or Geo Location.
To address this, we can change the downloads field to type Number from within the Schema tab of the App Search Dashboard.
Before:
downloads: Text
After:
downloads: Number
Be sure to click Change Types after making the change.
Note: Changing these fields begins a reindex of your data. This might take some time, depending on the size of your Engine. You are unable to change Fields during a reindex.
App Search will now consider the downloads field to be a Number.
Chart Topping
Our designer - as usual - is ahead of the game. The application already contains a check-box to emphasize Only Popular results. This form_tag is what will allow us to define a :popular parameter within our controller:
# app/views/ruby_gems/index.html.erb
# ...
<%= form_tag({}, {method: :get}) do %>
  <div class="form-group row">
    <%= text_field_tag(:q, params[:q], class: "form-control", placeholder: "My favorite gem...") %>
  </div>
  <div class="form-check">
    <%= check_box_tag('popular', 1, params[:popular], class: 'form-check-input') %>
    <label class="form-check-label" for="popular">Only include gems with more than a million downloads.</label>
  </div>
  <div class="form-group row">
    <%= submit_tag("Search", class: "btn btn-primary mb-2") %>
  </div>
<% end %>
# ...
Back in our controller, we will do just that. Elastic App Search allows us to pass filters along with our search options. In this case, our filter will restrict results to those that have at least 1,000,000 downloads. When a field is a Number, numerical filtering like this becomes possible.
# app/controllers/ruby_gems_controller.rb
# ...
  def index
    if search_params[:q].present?
      @current_page = (search_params[:page] || 1).to_i
      search_client = Search.client
      search_options = {
        search_fields: {
          name: { weight: 2.0 },
          info: {},
          authors: {},
        },
        page: {
          current: @current_page,
          size: PAGE_SIZE,
        },
      }

      if search_params[:popular].present?
        search_options[:filters] = {
          downloads: { from: 1_000_000 },
        }
      end
      # ...
    end
  end

  private

  def search_params
    params.permit(:q, :page, :popular)
  end
When we venture back to our application, we can try some nifty queries. Let us search for heyzap-authlogic-oauth. With the checkbox un-checked, it is the first result. With the box checked we return more popular gems with a wider audience, like authlogic and oauth. That is more like it! Our sample is complete. But we have just scratched the surface when it comes to building search.
Summary
Excellent search is delightful for users. Whether you want search to help people explore products, find relevant content or helpful documents, or are basing your entire application around robust discovery through geolocation or time frames, Elastic App Search is a wise choice. With all the tools that the App Search APIs present to you, the power to craft imaginative and intuitive search experiences is at your finger-tips. | https://www.elastic.co/blog/how-to-build-application-search-with-ruby-on-rails-and-elastic | CC-MAIN-2021-21 | refinedweb | 2,555 | 68.26 |
C++ name mangling
Posted: November 17, 2011 | Filed under: C++
There is a topic I have referred to several times on this blog, yet in four years I still haven't explained what it is. I plan to correct this by explaining a little bit about C++ name mangling, and although I don't expect to write anything you couldn't learn by reading Wikipedia, I'll try to have a more practical approach.
Whenever you compile and link something, there is a lot of information the compiler deduces that you don’t really care about. Things like calling conventions, overloads or namespaces. Yet this information is crucial for other stages of the compiler (or linker) to work. For this reason, the compiler will create a decorated version of any object’s or function’s name.
In its most simple case, it would be something like this:
void overloaded_function(int);
void overloaded_function(string);
Which would then be translated to something like:
void fastcall_int_overloaded_function(int);
void fastcall_string_overloaded_function(string);
Of course, for more complex functions (like class methods) the mangling is much more complicated. Also, remember that’s just a mangling convention I just invented, and most likely not used by any compiler in existence.
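For comparison, GCC's real (Itanium-ABI) mangling of the first overload above looks roughly like this (the nm output below is illustrative):

$ nm foo.o | grep overloaded
0000000000000000 T _Z19overloaded_functioni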
Although for the most part we can just ignore name mangling, this has a couple of consequences of which we should be aware:
Creating a name for anonymous objects/functions
I will not explain much about this, it might be the topic of another post, but there are certain cases in which you can have a struct or a function defined inside another object anonymously. In these cases, the mangler will assign some sort of denomination for this anonymous object.
Linking with C symbols
C has no mangling. It just doesn't need it. This has a very important consequence: whenever you use C code in C++ you need to specify that you're doing so, by using an extern "C" declaration.
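A minimal sketch of what that looks like ("clib.h" and c_function are hypothetical names):

// Wrap a whole C header so its names are not mangled:
extern "C" {
#include "clib.h"
}

// Or mark a single declaration:
extern "C" int c_function(int);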
Debugging
gdb already takes care of this so it may be transparent to you, but if you are using a debugger not aware of how your compiler mangles names, you may end up with a lot of very difficult to understand names.
Bonus: Demangling C++ names
If you find yourself in the last case, for example when running nm to get the names defined in a (compiled) object, you can use c++filt. Like this:
nm foo.o | c++filt | https://monoinfinito.wordpress.com/2011/11/17/c-name-mangling/ | CC-MAIN-2017-39 | refinedweb | 407 | 55.17 |
#include <wx/socket.h>
Constructs a new server and tries to bind to the specified address.
Before trying to accept new connections, remember to test whether it succeeded with wxSocketBase::IsOk().
Destructor (it doesn't close the accepted connections).
Accepts an incoming connection request, and creates a new wxSocketBase object which represents the server-side of the connection.
If wait is true and there are no pending connections to be accepted, it will wait for the next incoming connection to arrive.
If wait is false, it will try to accept a pending connection if there is one, but it will always return immediately without blocking the GUI. If you want to use Accept() in this way, you can either check for incoming connections with WaitForAccept() or catch wxSOCKET_CONNECTION events, then call Accept() once you know that there is an incoming connection waiting to be accepted.
Accept an incoming connection using the specified socket object.
Wait for an incoming connection.
Use it if you want to call Accept() or AcceptWith() with wait set to false, to detect when an incoming connection is waiting to be accepted. | http://docs.wxwidgets.org/trunk/classwx_socket_server.html | CC-MAIN-2017-43 | refinedweb | 185 | 53.21 |
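Putting these pieces together, a minimal usage sketch (not taken from the reference documentation) might look like:

// Sketch: listen on port 3000 and accept one pending connection
// without blocking the GUI.
wxIPV4address addr;
addr.Service(3000);

wxSocketServer server(addr);
if (server.IsOk() && server.WaitForAccept(10))   // wait up to 10 seconds
{
    wxSocketBase *client = server.Accept(false); // non-blocking accept
    // ... exchange data with client ...
    client->Destroy();
}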
There are many sorting algorithms, often with several variations. Bubble sort works by repeatedly swapping adjacent values, so let's first look at two ways of swapping values in Python.
Swapping Two Values with a Temp Variable
a = 45
b = 99

temp = a   # temp = 45
a = b      # a = 99
b = temp   # b = 45
However, if we are swapping integers we can use a trick with XOR, the Python symbol for which is ^. The following code and comments show how this works. Pay particular attention to the binary values in the comments.
Swapping Two Integers with Exclusive Or
a = 45      # 00101101
b = 99      # 01100011

a = a ^ b   # 01001110
b = a ^ b   # 00101101
a = a ^ b   # 01100011
I think this is a neat and clever bit of code so I'll be using it in this project.
Implementing Bubble Sort in Python
Create a new folder and within it create the following empty files:
- bubblesort.py
- main.py
You can download the files as a zip from the Downloads page, or clone or download the Github repo if you prefer.
Source Code Links
bubblesort.py
def bubblesort(data):
    """
    Takes a list of numbers and runs the bubblesort algorithm
    on them, printing each stage to show progress.
    """
    print("Unsorted...")
    print_data(data, len(data))
    print("Bubble sorting...")
    lastindex = len(data) - 1
    while lastindex > 0:
        for i in range(0, lastindex):
            if data[i] > data[i+1]:
                data[i] = data[i] ^ data[i+1]
                data[i+1] = data[i] ^ data[i+1]
                data[i] = data[i] ^ data[i+1]
        print_data(data, lastindex)
        lastindex -= 1
    print("Sorted!")

def print_data(data, sortedto):
    """
    Prints data on a single line. Takes a sortedto argument,
    printing the sortedto portion in green and the unsorted
    portion in red.
    """
    for i in range(0, len(data)):
        if i >= sortedto:
            print("\x1B[32m{:3d}\x1B[0m".format(data[i]), end="")
        else:
            print("\x1B[31m{:3d}\x1B[0m".format(data[i]), end="")
    print("")
The bubblesort function takes a list to sort and starts off by calling the print_data function to show the state of the list at that point in time, including the colour coding.
The other function in this file is print_data. As well as the list to print it takes a sortedto argument to enable the sorted part of the data to be printed in green and the unsorted part in red. The colours are set using ANSI codes which I'll cover in a later post, but for now just note that this function uses the following three codes.
- \x1B[32m - green
- \x1B[31m - red
- \x1B[0m - reset to default
We now need a main function to call the bubblesort function, which I have included in main.py.
main.py
import bubblesort
import random

def main():
    """
    Create a list of random numbers and pass them
    to the bubblesort function.
    """
    print("-----------------")
    print("| codedrome.com |")
    print("| Bubble Sort   |")
    print("-----------------\n")

    data = []
    for i in range(0, 16):
        data.append(random.randint(1, 99))

    bubblesort.bubblesort(data)

main()
This function simply creates a list and populates it with random numbers, before passing it as an argument to bubblesort. After the function definition the main function is called.
That's the source code finished so we can run it by typing the following in the terminal:
Running the Program
$ python3.7 main.py
...
Unsorted...
 78 25 32 58 44  8 55  9 49  6 92 61 12 64 42 86
Bubble sorting...
 25 32 58 44  8 55  9 49  6 78 61 12 64 42 86 92
 25 32 44  8 55  9 49  6 58 61 12 64 42 78 86 92
 25 32  8 44  9 49  6 55 58 12 61 42 64 78 86 92
 25  8 32  9 44  6 49 55 12 58 42 61 64 78 86 92
  8 25  9 32  6 44 49 12 55 42 58 61 64 78 86 92
  8  9 25  6 32 44 12 49 42 55 58 61 64 78 86 92
  8  9  6 25 32 12 44 42 49 55 58 61 64 78 86 92
  8  6  9 25 12 ...
Sorted!
As I mentioned earlier bubble sort is rather inefficient, and this project is intended only as a demonstration and programming exercise. You might find the xor swap and terminal colours more useful than the algorithm itself! | https://www.codedrome.com/bubble-sort-in-python/ | CC-MAIN-2021-31 | refinedweb | 700 | 67.59 |
Overview
Vinstall
Vinstall is an application toolkit for the Vector installer, implementing a MVCish framework. Currently a work in progress.
Overview
Controller
An application written using vinstall usually consists in just a set of controller classes, implementing a required interface. Each controller class represent a state in the application and they have the following responsibilities:
- Defining the next controller class
- Defining the previous controller class
- Defining the information that will be rendered in the screen
- Reacting to user input
The first two are accomplished by defining the next() and previous() methods. The third is accomplished by implementing the render() method, which needs to return a Render object (see example). The final responsibility can be implemented by providing a process() or a command() method (or both). The first one will be executed after user input is processed (usually after the user clicks on "next"; this of course depends on how you write your views). The last one is scheduled for execution after all the controllers are processed. A controller class could look like this:
from vinstall.core.render import Render
import vinstall.core.model as model


class MyController(object):
    """A controller"""

    def render(self):
        """Return a Render instance.

        The first two args of the Render constructor are a title and an
        intro text. The rest of the arguments are model objects. A few
        model objects representing common form elements are provided by
        the vinstall.core.model module; the BooleanOption below will be
        rendered as a checkbox.
        """
        return Render("Hello world", "This is the intro",
                      model.BooleanOption("This is a boolean option"))

    def next(self):
        """Return the next controller class.

        If this returns None, we assume it is the end of the application.
        """
        return None

    def previous(self):
        """Return the previous controller class."""
        return TheFirstController

    def command(self, boolean):
        """The signature matches the number of model objects in the
        render method. This will be executed later.
        """
        if boolean:
            myapp.do_something()
The Application object
Finally, you just need to start your app by creating an Application instance, passing the first controller class (first as in the one representing the initial state of your app) and the name of a view module as a string. There are two views defined for the provided model objects, "urwid" and "gtk". You can start the app using the run method:
app = Application("urwid", MyFirstController)
app.run()
This should be all you need to know for writing a simple app.
Installing
Run python setup.py install. Depending on the view modules you want to use, you will need to install urwid or gtk plus the required stuff for the backend module. Once the dependencies are clarified this will be improved. | https://bitbucket.org/m0e_lnx/vinstall/overview | CC-MAIN-2015-18 | refinedweb | 465 | 56.35 |
Temasek declares huge profits in first-ever report
Temasek, whose stable includes Singapore Airlines, said net profit rose
to S$7.4 billion (US$4.4 billion) on revenues of $56.5 billion for the
year to March 2004.
That compared with a modest profit of $241 million on revenues of $49.65
billion in the previous financial year, and $4.92 billion on revenues of
$42.56 billion in the year to March 2002.
This was the first annual report made public by the traditionally low-key
Temasek since it was founded 30 years ago. Nearly half of its portfolio
is now invested overseas.
Temasek manages $90 billion in assets, invested in companies ranging
from Asian icons like Singapore Telecommunications and the Raffles Hotel
to the Singapore Zoo and a betting company, Singapore Pools.
Overseas, Temasek -- which means "sea town", Singapore's historical
name -- has stakes in Telekom Malaysia and Indonesia's Bank Danamon, while
Temasek-linked companies have extensive investments across the globe.
Since it was set up in 1974, Temasek's total shareholder's return (TSR)
has averaged 18 percent annually, thanks to the early years when it was
expanding from a low base, the report said.
In the past 10 years, which included the 1997-98 Asian financial crisis,
the impact of terrorism and other problems, TSR averaged only three percent
a year.
In the year to March 2004, however, this zoomed back to 46 percent,
reflecting Singapore's sharp economic rebound.
Right after the Temasek report was made public, ratings agencies Moody's
Investors Service and Standard and Poor's both assigned their highest possible
corporate credit ratings to Temasek and said its outlook was stable.
"The exceptionally strong rating (AAA) on Temasek reflects the
leading or dominant market positions in the key business segments in which
the Temasek group of companies operate and its high degree of investment
diversity," SP credit analyst Greg Pau said.
"The rating also takes into account Temasek's extremely strong
financial flexibility at the holding company level, as well as the group's
solid financial profile on a consolidated basis," he added.
SP noted that Temasek's shareholder, the Singapore government, "is
financially strong" and while it does not guarantee Temasek's obligations,
"the strength of the shareholder and the constitutional arrangements
that protect Temasek's reserves provide comfort."
Temasek Chairman Suppiah Dhanabalan said that despite being an exempt
private company, it decided to make its group review public for the first
time as part of efforts to "institutionalise Temasek's role as a long-term
shareholder and an active investor."
The last five years have been difficult due to problems such as terrorism
and the outbreak of Severe Acute Respiratory Syndrome (SARS) in 2003 but
"new opportunities have emerged in a bustling China and a confident
India, alongside a recovering (Southeast Asia)," he said.
Other major companies in the Temasek stable include the DBS banking
group, subway operator SMRT Corp., port operator PSA International, broadcaster-publisher
MediaCorp, shipping giant Neptune Orient Lines and diversified industrial
group Singapore Technologies.
The city-state's other main investment arm is the Government of Singapore
Investment Corp. (GIC).
The GIC invests over US$100 billion in foreign reserves in such areas
as equities, fixed income, money market instruments and prime real estate
in key financial centres worldwide.
<![if !supportEmptyParas]> <![endif]> | http://www.singapore-window.org/sw04/041012af.htm | crawl-001 | refinedweb | 558 | 50.06 |
init: cannot be run as a PID other than 1
Bug Description
sysvinit supports --init to make it operate as if it's pid 1 even if it's not pid 1. Upstart not only doesn't support this functionality, but it dies if it receives parameters that it doesn't understand.
My use case: The OLPC has a special process that does a bunch of boot and security stuff before kicking off /sbin/init. With sysvinit and --init, this works fine. If Upstart is installed, it errors out with: "init: invalid option: --init"
Thus, if using an Ubuntu image on an OLPC, you must replace Upstart with sysvinit, which has an impact on various levels of Ubuntu integration work. :-)
There are quite a few other use cases, from the clever to the crack-addled, so it would be nice to fix this for Upstart.
Thanks!
The OLPC has a special process that does a bunch of boot and security stuff before kicking off /sbin/init, called oatc.
Right, but if that's pid #1, it should exec /sbin/init and thus give that pid #1, no?
On Nov 12, 2007 4:17 PM, Scott James Remnant <email address hidden> wrote:
> Right, but if that's pid #1, it should exec /sbin/init and thus give
> that pid #1, no?
Nup, oatc runs all the time.
In that case, Upstart cannot work on the OLPC since it must be pid #1 (it's an init daemon).
There is perhaps some small argument for allowing Upstart to run as another pid, but that would necessitate disabling features such as supervision of forking daemons, control-alt-delete and alt-uparrow handling, SIGPWR handling, etc.
Since this is a significant functionality reduction, it makes more sense as a compile-time option than a command-line one.
On Nov 12, 2007 5:32 PM, Scott James Remnant <email address hidden> wrote:
> In that case, Upstart cannot work on the OLPC since it must be pid #1
> (it's an init daemon).
sysvinit works okay -- does it suffer reduced functionality when not
running as pid #1?
It doesn't offer the functionality that Upstart does.
That's a bit like asking why GNOME requires a graphics card when getty doesn't ;-)
Has this been fixed? I'm currently using a Fedora 10 based XO Joyride build and it uses upstart (and it works).
It's been patched in Fedora/OLPC-3: http://
I did not check if the patch makes sense to be included in Ubuntu.
With the move to using netlink, this becomes possible
Is this now fixed with the addition of Session Jobs?
@Bryan: no - when Upstart runs as a Session Init ("init --user"), it is not running as the system init. OLPC should either have oatc fork itself then exec /sbin/init as PID 1 (I see no requirement on this bug that oatc should run as PID 1?), or consider creating a PID namespace such that oatc runs in the main namespace and Upstart can run as PID 1 in the created namespace.
@James oh ok.
The original need is gone because OLPC is based on Fedora and they have switched to systemd. Not sure if there is another use case for this feature.
Confirmed; I hadn't seen this in the sysvinit code.
Though this won't fix your problem, since the only reasonable implementation in Upstart is to error with "not pid #1"
Why isn't OLPC running init as pid 1? | https://bugs.launchpad.net/upstart/+bug/160150 | CC-MAIN-2015-11 | refinedweb | 583 | 76.86 |
Https and Http only on .net webapi v2 action
I am working on a project with Web API 2, using OAuth 2 (OpenID Connect, to be exact) bearer tokens to grant access. Now the whole idea is that bearer tokens are only secure when used with a secure connection.
Until now, I just didn't allow HTTP calls to the webserver, which worked as no one could make an HTTP call with the bearer token.
We now have some endpoints that need to be accessible over http (no token / script required for that) and we're going to enable HTTP of course. Now my question is, what is normal in these situations?
Can I have an attribute that I can use for all actions that only accept HTTPS? Can I make it the default, and only put the attribute on the ones that permit HTTP?
What is the advice: is it our responsibility to ensure that no one is using the OAuth token over an insecure connection, or is it the API user's?
I believe the correct way to do this is to add a global action filter that will force HTTPS for all controllers / actions in your web API. The implementation of this HTTPS action filter could be as follows:
public class ForceHttpsAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        var request = actionContext.Request;

        if (request.RequestUri.Scheme != Uri.UriSchemeHttps)
        {
            var html = "<p>Https is required</p>";

            if (request.Method.Method == "GET")
            {
                actionContext.Response = request.CreateResponse(HttpStatusCode.Found);
                actionContext.Response.Content = new StringContent(html, Encoding.UTF8, "text/html");

                UriBuilder httpsNewUri = new UriBuilder(request.RequestUri);
                httpsNewUri.Scheme = Uri.UriSchemeHttps;
                httpsNewUri.Port = 443;

                actionContext.Response.Headers.Location = httpsNewUri.Uri;
            }
            else
            {
                actionContext.Response = request.CreateResponse(HttpStatusCode.NotFound);
                actionContext.Response.Content = new StringContent(html, Encoding.UTF8, "text/html");
            }
        }
    }
}
Now you want to register this globally in the WebApiConfig class like this:
config.Filters.Add(new ForceHttpsAttribute());
As I understand from your question, the number of controllers you want to call over https is greater than the controllers over http, so you want to add an override attribute for those http controllers.
[OverrideActionFiltersAttribute]
Don't forget to decorate your anonymous controllers with the attribute
[AllowAnonymous]
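A sketch of what such an http-only controller might look like (controller and action names are hypothetical; note that because ForceHttpsAttribute above is an authorization filter, [OverrideAuthorization] may be what actually suppresses it, rather than [OverrideActionFilters]):

[AllowAnonymous]
public class PublicPingController : ApiController
{
    [OverrideAuthorization]   // suppress authorization filters registered at a higher scope
    public IHttpActionResult Get()
    {
        return Ok("pong");    // reachable over plain HTTP
    }
}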
But my recommendation is to keep communicating over HTTPS and just allow anonymous calls. Read more about securing HTTPS on my blog. Hope this helps.
First, I believe that you definitely need to do your best to keep this token secure and therefore the server must enforce SSL.
We are using web api v1 (infrastructure constraints :() and we have a global DelegatingHandler that enforces SSL for all requests except for certain urls that are whitelisted (not the nicest solution, but it works for now).
In web api 2, I believe you can have a global FilterAttribute to enforce SSL connectivity and then use the new attribute override feature to carve out your exceptions - all in theory! :)
Hope this helps, Garrett
While using EF with WCF, I came across the situation where I need to map entities to data contracts and vice versa, because EF objects are burdened with additional data provided by EF. So I tried a few functions for mapping.
[DataContract]
public class WebsitesD
{
[DataMember]
public int Id { get; set; }
[DataMember]
public string Domain { get; set; }
[DataMember]
public string UserId { get; set; }
[DataMember]
public string Title { get; set; }
}
private WebsitesD mapWebsite(Website w)
{
WebsitesD wd = new WebsitesD();
wd.Id = w.Id;
wd.Title = w.Title;
wd.UserId = w.UserId;
wd.Domain = w.Domain;
return wd;
}
public int insertWebsite(WebsitesD d)
{
try
{
using (MyInfoEntities entities = new MyInfoEntities())
{
entities.Websites.Add(mapWebsite(d));
entities.SaveChanges();
return 1;
}
}
catch (Exception e)
{
throw e;
}
}
where WebsitesD is my data contract and Website is the entity object. With this I can achieve my objective, but the problem is that whenever I need to perform any database operation I need to do the mapping, which I think can be a costly operation.
Should I leave Entity Framework and go with ADO.NET, as I don't need to do any mapping over there? Please suggest the pros and cons of each approach.
As with any ORM there is a trade-off between performance and developer productivity. As you said, ADO.NET will be the fastest method to fill your data contracts from a DataReader/DataSet; with EF/NHibernate you will always have this situation. However, mapping is not expensive for solo entities; it becomes expensive when you map lists of entities. If you don't need mapping at all, you can also put [DataContract] on the entity classes and [DataMember] on the members which WCF should send to the client. But since your EF code is auto-generated, when your schema changes that is all wiped out. You can also opt for the EF Code First approach.
Another approach which involves less coding for mapping is to use AutoMapper. Check this out:
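A rough sketch of the AutoMapper approach (the exact API differs slightly between versions):

var config = new MapperConfiguration(cfg => cfg.CreateMap<Website, WebsitesD>());
var mapper = config.CreateMapper();

WebsitesD dto = mapper.Map<WebsitesD>(websiteEntity);                 // single entity
List<WebsitesD> dtos = mapper.Map<List<WebsitesD>>(websiteEntities); // list of entities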
Also there is a good thread on ORM trade-offs here.
Do what's best for the code base. Consider maintainability and productivity. Servers are cheap and developers are expensive. Very few companies in the world are of such a scale that it is worth maintaining more complicated code rather than buying another server.
With EF 6, you can use the code generation item EF 6.x DbContext Generator with WCF Support.
Just right click on the designer of the EDMX and go to Add Code Generation Item...
Click Online on the left and search for DbContext.
Using this will auto generate the DataMember and DataContract Attributes in your classes. Also, you'll probably want to delete the regular template generated with the edmx, or you will have two sets of entities. | http://m.dlxedu.com/m/askdetail/3/4857fed4e9e13a19c18132a0452a9562.html | CC-MAIN-2018-47 | refinedweb | 446 | 63.49 |
If you are like many developers, you are using Windows Communication Foundation (WCF) to provide services to Windows Forms, WPF, Silverlight, ASP.NET and possibly Windows 8 Store applications. Now your boss is asking you to develop some mobile applications using HTML 5 and jQuery. You know you can reuse many of your WCF services, but you are having trouble calling them from jQuery without breaking your existing applications. In this article, I will walk you through the steps for taking a sample WCF service from working just for .NET applications to working with jQuery as well. Don’t worry, your existing applications will still work with the changes you are going to make.
A Typical WCF Service
To start, let’s take a look at a typical WCF service that might be used to interact with a Product table. I am not going to show all of the code, but instead will just focus on the interface class. I am assuming you use one of the many data-access technologies available to retrieve single or multiple products and also to insert, update and delete products. Listing 1 shows a sample interface class used in a WCF Service Application project named ProductServiceHost.
In this WCF service interface, the GetProducts method returns a complete list of Product objects by selecting all rows and columns from the Product table. The GetProduct method returns a single Product object based on the primary key (ProductId) that you pass into this method. The Insert, Update, and Delete methods are used to add, update, and delete data from the Product table.
In Figure 1, you can see the sample solution that comes with this article. The ProductDataLayer class library project contains some reusable classes such as AppSettings and DataConvert. AppSettings retrieves the connection string from your .config file. The DataConvert class helps you work with null values coming from your tables. The Product class contains a single property for each column in the Product table. The ProductManager class performs all of the CRUD logic and uses the Product class as both inputs and outputs from the various methods in this class.
The ProductServiceHost project is a WCF Service Application project that references the ProductDataLayer project. The ProductService class makes calls to the ProductManager class to perform all of the operations defined in the interface class shown in Listing 1. The last project in the solution is a WPF application used to test the functionality of the WCF services.
The Product Class
The WCF service returns a set of Product objects from the GetProducts method, returns a single Product object from the GetProduct method, or accepts a Product object as a parameter when inserting, updating, or deleting. The code snippet below shows the definition of the Product class that will be used for this article.
public partial class Product
{
    public int? ProductId { get; set; }
    public string ProductName { get; set; }
    public DateTime? IntroductionDate { get; set; }
    public decimal? Cost { get; set; }
    public decimal? Price { get; set; }
    public bool? IsDiscontinued { get; set; }
}
Add Attributes for Web Access to Your Interface Class
One method you can use to access your WCF services from a Web page is making Ajax calls. Most developers prefer to use JSON when making Ajax calls with JavaScript and/or jQuery because JSON is easier to process in JavaScript and is smaller to transmit across HTTP than XML. Modifying your WCF service to return JSON instead of the default XML is very easy. First, ensure that you have a reference to the System.ServiceModel.Web.dll if it is not already in your WCF service application. This library is required in order to use the attributes that you add to your interface class to give JSON capabilities. Open your interface file in Visual Studio and add a Using statement at the top of the file, as shown in the following code snippet.
using System.ServiceModel.Web;
Go through each of the methods in your interface that retrieve data and do NOT have parameters passed to them, and add the attribute shown in the following code snippet:
[WebGet(ResponseFormat = WebMessageFormat.Json)]
You call those methods in your WCF service that return data and do not have parameters using a GET call from Ajax. In the interface class, the GetProducts method is the only method that returns data and does not have any parameters, so you decorate that method with the WebGet attribute shown above.
The WebGet attribute tells the WCF Service call that it needs to respond to the caller (in this case a Web caller) by serializing the data in a JSON format. There is nothing more needed to ensure that JSON is returned. WCF takes care of automatically formatting your data as JSON from the generic List that is returned from GetProducts. Your complete interface signature for GetProducts now looks like the following:
[OperationContract]
[WebGet(ResponseFormat = WebMessageFormat.Json)]
List<Product> GetProducts();
To call those methods in your WCF service that accept parameters, you need an Ajax POST. For these methods you decorate them with the following attribute:
[WebInvoke(Method = "POST",
           BodyStyle = WebMessageBodyStyle.WrappedRequest,
           ResponseFormat = WebMessageFormat.Json,
           RequestFormat = WebMessageFormat.Json)]
Calling GetProduct, Insert, Update, or Delete methods in your WCF service from Ajax requires the WebInvoke attribute to be added. Whether you are passing and/or returning a simple data type such as an integer or an object, you still need to decorate your method with this WebInvoke attribute. The WebInvoke attribute has several properties that you can set; I am setting four in this example.
The Method property specifies which protocol this service operation responds to. By setting this property to “POST”, WCF knows to use all of the properties of the WebInvoke attribute only when it receives a POST request. So, for instance, if your WPF application is making the call to the GetProduct method, that is NOT a POST operation and your method runs using the default serialization method (XML). If a jQuery/Ajax call is made using POST, the WebInvoke attribute knows that the Request format will be JSON, the Response format to send back will be JSON, and the body style will be a WrappedRequest.
Add the WebInvoke attribute to all methods in your service interface that accept a parameter. At this point, you are done modifying the code in order to be able to call your service from a Web page. You can re-run your normal Web, WPF, Windows Forms, or Silverlight application to verify that your application still runs. Remember, attributes do not do anything unless some other code is specifically looking for that attribute. So normally, adding attributes does not change how a method runs.
Set Up New Configuration Settings
Now that you have your interface class decorated with the appropriate attributes, you need to configure your Web.Config file to work with Web applications. Again, you want to keep the behavior the same for your existing applications. Depending on when you built your WCF service application, your configuration file may or may not have more than the code snippet shown below:
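For a project created from the Visual Studio 2010 template, that default section looks roughly like this (your generated file may differ slightly):

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceMetadata httpGetEnabled="true"/>
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>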
The section above is the minimum configuration you need if you are running your WCF service application from .NET 4.0. You can now make a couple of minor changes to keep your WCF Service running with your existing applications, but also prepare to call your service from an HTTP Ajax call. First, you add a <services> element within the <system.serviceModel> element. Then, place this new element, shown below, just after the </behaviors> ending tag.
<services>
  <service behaviorConfiguration="ProductServiceBehavior"
           name="ProductServiceHost.ProductService">
    <endpoint address=""
              binding="basicHttpBinding"
              contract="ProductServiceHost.IProductService" />
  </service>
</services>
Next, locate the <behavior> tag under the <serviceBehaviors> element and add a name attribute to it. This name matches the value (“ProductServiceBehavior”) in the behaviorConfiguration attribute in the <service> element that is shown in the above snippet. The snippet that follows is what the change will now look like in the behavior element:
<behaviors>
  <serviceBehaviors>
    <behavior name="ProductServiceBehavior">
      ...
    </behavior>
  </serviceBehaviors>
</behaviors>
You should run your existing application at this point to ensure that everything still works. Adding this <endpoint> for your WCF service explicitly specifies to use basicHttpBinding for all service calls. This is the default WCF behavior for calling services.
Configure for HTTP Ajax Calls
Now it is time to add configuration settings to allow an Ajax call to be made to your services and take advantage of the attributes you added to the interface class earlier. First, add an <endpointBehaviors> element within the <behaviors> element. Add the following element just below the </serviceBehaviors> end tag.
<endpointBehaviors>
  <behavior name="WebBehavior">
    <enableWebScript/>
  </behavior>
</endpointBehaviors>
This end point behavior specifies that Web scripts are allowed to call your WCF Service. The name attribute you added, “WebBehavior”, can be any name you want. Match this name in an end point configuration setting that you add next.
Create a new <endpoint> element below the basicHttpBinding <endpoint> element you added earlier in the <service> element. This new endpoint element is specifically for HTTP binding.
<endpoint address="Web"
          binding="webHttpBinding"
          contract="ProductServiceHost.IProductService"
          behaviorConfiguration="WebBehavior" />
What is very important in the above element is the address attribute of “Web”. Whatever you put into this address attribute is appended to any calls you make from Ajax. There is another example of using this attribute later in this article.
The binding attribute of “webHttpBinding” means that this endpoint is only for Web clients. Notice that the behaviorConfiguration attribute is set to the same name you used in the <endpointBehavior> element.
This next part is optional, but if you will be making service calls from an ASP.NET application and you wish to access Session, Application, or other objects from your ASP.NET application, then add an aspNetCompatibilityEnabled attribute and set it to true in the <serviceHostingEnvironment> element, which should already be in your configuration file. If this element does not exist, add it within the <system.serviceModel> element.
<serviceHostingEnvironment multipleSiteBindingsEnabled="true" aspNetCompatibilityEnabled="true" />
Listing 2 shows the complete listing of what the Web.Config file in your WCF service should now look like. Again, after completing all of these changes, you should run your existing application to ensure that it all still works as it did before.
Modify Your Service Class (Optional)
Up to this point, you have modified the interface and the Web.Config file in your WCF service application project. The last item to modify within your WCF service is the service class itself. This is the class that implements the interface you modified with the attributes. Adding this attribute is optional and goes hand-in-hand with the aspNetCompatibility=”true” that you set in the Web.Config file. Open up the service class and add the following using statement:
using System.ServiceModel.Activation;
Decorate the service class itself with the AspNetCompatibilityRequirements attribute. Remember: In the Web.Config file, you added the attribute for aspNetCompatibilityEnabled. You now add this attribute to your class so this class is allowed to run within the ASP.NET AppDomain and have access to any Session, Application, or other ASP.NET objects.
[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class ProductService : IProductService
{
    ...
}
Run your application again to ensure that all of your WCF services still work with your application as expected.
Get Data via jQuery
Now that your services are all prepared, you are ready to call your WCF service application from an HTML page. Create a new ASP.NET Empty Web Application in Visual Studio 2010 or 2012. Add a Default.html page to this project. Within the <form> tag of your HTML page, add the following code:
<div style="display: inline;">
  <input id="btnProduct" type="button" value="Get Products"
         onclick="GetProducts()" />
</div>
<p>
  <select id="ddlProduct">
  </select>
</p>
You need the current jQuery scripts in order to make the Ajax calls to your WCF service. Go to and download the latest jQuery scripts. Add a new folder to your ASP.NET project in Visual Studio called Scripts and add the downloaded .js file to this new folder. Within the <head> tag in your Default.html page, add the following scripts:
<script type="text/javascript" src="Scripts/jquery-1.9.1.min.js">
</script>
<script type="text/javascript">
</script>
It’s important to note about the above code that you must use a separate closing </script> tag and not close the script tag all on one line, as in <script type=”text/javascript” … />. Note the /> closing tag. If you close your <script> tag all on one line, jQuery will not work.
You are now ready to write a function to call the GetProducts method in your Web service and return a list of product objects. Within the second <script> tag in your Default.html page, create a function called GetProducts that looks like Listing 3.
In the $.ajax call, there are several properties that you need to set. The “url” parameter is set to the http endpoint for your WCF service. The normal syntax for a call to your WCF Service is where you list the name of the method after the .SVC file name. However, you must add the “Web” attribute after the .SVC file name to match the “address” attribute you added in the Web.Config file of your WCF Service. When running your service from Visual Studio, set a specific port in the properties of the WCF service project instead of letting it generate a new one each time. Use that port number in the “url” parameter.
The “type” parameter is set to “GET” because no parameters are posted to the GetProducts method. If you decorate your WCF service method with the WebGet attribute, you always set this property to “GET”. The “cache” parameter can be set to false if you want to explicitly call GetProducts each time, and get new data from your service. By default, “cache” is set to true and the data is cached on the client-side. The “data” parameter can be set to either null or “{}” to specify that no parameters are being passed to the service call. The “contentType” and “dataType” properties need to be set to JSON to specify that the data sent and returned should be in a JSON format.
You do things with the data that is retrieved within the “success” block. Your results come back in a JSON string into the “data” parameter passed to the success function. Within the data parameter, you’ll find a property called “d” where your actual results are located. To access any of your properties returned from your result set, you use the syntax data.d[index].PropertyName. To load a <select>, append a new Option object with the text and value portions filled in, and do the same with the values in the PropertyName and ProductId properties.
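Listing 3 itself ships with the article’s download; assembling the pieces just described, the function looks roughly like this (the host and port are placeholders for wherever your service runs):

function GetProducts() {
    $.ajax({
        // Placeholder host/port; use the fixed port set in project properties
        url: "http://localhost:8000/ProductService.svc/Web/GetProducts",
        type: "GET",
        cache: false,
        data: {},
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (data) {
            // Results come back wrapped in the "d" property
            $("#ddlProduct").empty();
            $.each(data.d, function (index, product) {
                $("#ddlProduct").append(
                    new Option(product.ProductName, product.ProductId));
            });
        }
    });
}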
Run the WCF service from Visual Studio and once it is running, start your ASP.NET Web application to display the Default.html page. Click on the Get Products button and you should see your data loaded into the drop-down list in your Web page.
Get Data via jQuery using .getJSON()
Instead of using $.ajax(), you can also use a shorthand function called $.getJSON(). This method is a little simpler to use than .ajax(), but does not have all of the options available. Listing 4 is an example of calling your GetProducts service using the $.getJSON() function.
In this version of the GetProducts call, you append the price to the product name prior to adding the data to the select list. However, the price may come back as null, so you should check the Price property for a null value before you append the data to the product name.
Pass an Object to a WCF Service
The Insert method in the WCF service expects a Product object to be passed in, as shown in this code snippet:
[OperationContract]
bool Insert(Product entity);
You are not going to be able to create a .NET Product object in JavaScript or jQuery, so there has to be a mechanism to convert a JSON object into a .NET object. All of this is done behind the scenes for you, as part of the WCF service layer. Let’s learn how to create a JSON object in such a way that it will be converted to a .NET Product object when that JSON object is passed to a WCF service.
The first way is to use a typical JSON object notation in the format {“key”:”value”}. You can see an example of creating a Product object to pass to the Insert method below:
var dataToSend = '{"entity":{"ProductId":"999", "ProductName":"A New Product", "IntroductionDate":"\/Date(1357027200000-0800)\/", "Cost":"10", "Price":"20", "IsDiscontinued":"false"}}';
Notice the wrapper above around the “Product” object notation where the outside object is {“entity”:{ … }}”. The name for this outside object needs to be the same name as the parameter you use in the WCF service method. Notice that the date format in the above code is a little strange. The format is “/Date(int)/” where the “int” is the number of milliseconds since midnight January 1, 1970. You will learn how to take a normal date that a user would enter in a format such as mm/dd/yyyy and convert it to this format later in this article.
Another method of creating a JSON object is to use the following syntax:
var product = {
    ProductId: "999",
    ProductName: "A New Product",
    IntroductionDate: new Date("1/1/2013 08:00am").toMSDate(),
    Cost: "10",
    Price: "20",
    IsDiscontinued: "false"
};
What is nice about this syntax is that it is easier to read and create. Notice that on the Date creation, I pass the newly created date to a function called toMSDate(). Let’s take a look at how to convert a normal date into a date that can be serialized and passed to a WCF service.
Convert Date to Microsoft Format
One of the methods you can use to convert a normal date to the date required by the WCF service is to create a prototype function (see Listing 5). This is similar to creating an extension method in .NET. The function you create takes the date and time passed to the function, converts it to a UTC date and, with this new date, converts it to a time in milliseconds using the getTime function. You then wrap the “/Date()/” string around this new time.
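Listing 5 boils down to a few lines; a minimal version consistent with the description above would be (getTime() already returns UTC milliseconds since 1970):

Date.prototype.toMSDate = function () {
    // Wrap the millisecond timestamp in the "/Date()/" form WCF expects
    return '\/Date(' + this.getTime() + ')\/';
};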
Now that you know how to create an object and deal with date data types, let’s write a jQuery function to post this object to the Insert method of your WCF service. Listing 6 shows the complete call to add a new Product object to the Product table by calling your Web service. The data in this sample is hard-coded to keep things simple for the article. However, in the downloaded samples for this article, you will find a complete Web page for adding, editing, and deleting data from fields on an HTML page.
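Listing 6 is also in the download; the essential shape of the call is this (again with a placeholder URL, and the product object wrapped under the service parameter name entity):

function InsertProduct() {
    var product = {
        ProductId: "999",
        ProductName: "A New Product",
        IntroductionDate: new Date("1/1/2013 08:00am").toMSDate(),
        Cost: "10",
        Price: "20",
        IsDiscontinued: "false"
    };
    $.ajax({
        url: "http://localhost:8000/ProductService.svc/Web/Insert", // placeholder
        type: "POST",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        data: JSON.stringify({ entity: product }),
        success: function (data) {
            alert("Product added: " + data.d);
        }
    });
}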
To call the method in Listing 6, add a button to the HTML page with the following button definition:
<input id="btnInsert" type="button" value="Insert Product" onclick="InsertProduct()" />
Open the WCF service and set a breakpoint in the Insert method. Run the Web application and click on the Insert Product button and you should see the data being passed into the Insert method once it hits the breakpoint.
Updating and Deleting Data
Updating data uses almost the exact same code that you used for inserting data. Listing 7 shows a function called UpdateProduct that you might include in your HTML page to update a specific record in your Product table. Building the object is exactly the same, and the only difference in the Ajax call is that you use Update on the end of the URL instead of Insert.
When you build the Delete function, you only need to fill in the ProductId in your JSON object, as shown in Listing 8. When you pass this data into the Delete method of your WCF Service, all of the other properties of the Product object will be null.
Summary
The ability to reuse your existing WCF services with just a few changes is a great time saver since you do not need to write complete new services in order to consume data from jQuery. As our world moves more and more to mobile devices, the need for connecting to existing services will only increase. You don’t want to have to rewrite any existing services if you don’t have to, so with just a few tweaks of your code and your configuration file, you are good to go. There are a few things you need to worry about on the jQuery client-side of things such as date handling, but for the most part, passing data back and forth works very well.
NOTE: You can download the complete sample code at my website. Choose “PDSA Articles”, then "Code Magazine - Reuse WCF Services from jQuery" from the drop-down.
Okay, I’ll keep it simple.
What is ‘the best way’ of activating the backdrop function in the compositor with a python script?
Thanks,
GB
The best way I know:
for area in bpy.context.screen.areas:
    if area.type == 'NODE_EDITOR':
        for space in area.spaces:
            if space.type == 'NODE_EDITOR':
                space.show_backdrop = True
                break
hmm, I’ve seen a similar way of doing it like this:
import bpy

for area in bpy.context.screen.areas:
    if area.type == 'DOPESHEET_EDITOR':
        space_data = area.spaces.active
        break
else:
    space_data = None

if space_data is not None:
    pass
found here by CoDEmanX.
But, the solution that you and he suggested only works if the space I wanna reach is opened on your screen somewhere. I actually found a better solution recently, which is this:
area = bpy.context.area
old_area = area.type
area.type = 'NODE_EDITOR'
space_data = area.spaces.active
space_data.show_backdrop = True
area.type = old_area
found here also by CoDEmanX.
Had no idea you could set the active area that easy. This does work well, but can you access a certain space (like the node editor) without actually changing the space in blender? Because that would be even better than this. And, I wonder, how come you can access other functions in the node editor without actually being in that space but not the backdrop function?
Thanks for your reply anyways!
I would think because of the different data paths. Since the backdrop is part of the space, that space would have to exist somewhere. I’m just assuming though.
Agree, thought so too, anyone know? | https://blenderartists.org/t/compositor-backdrop/619243 | CC-MAIN-2019-30 | refinedweb | 285 | 70.5 |
Ticket #1480 (closed defect: fixed)
send charset with text/javascript JSON replies
Description
Controllers returning JSON data do set the content-type to text/javascript but omit indicating a charset.
This breaks displaying of a page in UTF-8 receiving replies with data whose charset cannot be unambiguously determined by the browser reading the JSON reply.
Thus, setting content-type to (e.g.,) text/javascript; charset=utf-8 fixes this issue.
Change History
comment:2 Changed 8 years ago by Chris Arndt
- Keywords json, expose, jsonify added; JSON, expose removed
comment:4 Changed 8 years ago by chrisz
Can you give an example where the charset matters? I see TurboJson (i.e. simplejson) only creating pure ascii data with non-ascii chars escaped, like in 'k\u00e4se'.
On another note, however, why do we actually use "text/javascript" as the content type by default, and for an Opera browser, we even use "text/plain"?
def get_content_type(self, user_agent):
    if "Opera" in user_agent.browser:
        return "text/plain"
    else:
        return "text/javascript"
It seems "application/json" is the standard now and all modern browsers including Opera 9 should understand that. If we keep that distinction, we should at least check the version of the user_agent. But I don't think we should be considerate of old, broken Opera versions.
comment:5 Changed 8 years ago by Chris Arndt
Why do we use text/javascript? Maybe because one documented way to request a JSON response is to specify accept_format="text/javascript" in expose?
comment:6 Changed 8 years ago by chrisz
I think we should change expose() as well, so that "application/json" is recognized in addition to "text/javascript". Shall we make the switch from "text/javascript" to "application/json" in TG 1.0 or 1.1 then? I think we should fix it in 1.0 already.
Concerning the encoding problem: In fact TurboJson creates only ascii because it uses simplejson with default parameters, which means ensure_ascii=True. And currently, there is no way to use non-default simplejson parameters with TurboJson. Therefore, the charset is currently not a problem.
(As an aside, ensure_ascii=False is ignored for simple strings in simplejson 1.8.1, but this is a bug that should be fixed.)
Shall I enhance TurboJson so that it evaluates a turbojson.ensure_ascii config parameter? (Note: The JSONEncoder instance must then not be created immediately, because the config is only available later.) There are some other simplejson parameters (skipkeys, allow_nan etc.) that could be made configurable similarly.
If we return "application/json" as content type instead of "text/javascript", then I think we can or even must omit the charset="utf-8" since it is implicit. But we need to check that.
comment:7 Changed 8 years ago by Chris Arndt
1) Switching to "application/json" in TG 1.0: I guess you mean sending "application/json" as the content type, or do you only mean allowing to set accept_format="application/json" in expose? The former has the potential to break existing client JavaScript code, so I'm not sure if we should change this in TG 1.0. The current content-type does not cause problems, so we should not make incompatible changes without necessity.
2) JSONEncoder: when we want to use ensure_ascii, we must also provide a possibility to set the input encoding for JSONEncoder, right? How would it convert strings to unicode otherwise?
comment:8 Changed 8 years ago by chrisz
1) Yes, I mean both. I don't think that it can break client code, because the header is interpreted by the browser and the client javascript code doesn't care. So it is more a matter of browsers, not of code, i.e. I consider it an adaption of TG to modern browsers. I have already checked that all standard ajax widgets continue to work.
I think that the get_content_type mechanism with evaluation of the user agent should be improved to consider the Accept header instead of (or in addition to) the User-Agent header. Then we could simply return what the browser wants. If it accepts "application/json", we deliver that, otherwise (for old browsers) we deliver "text/javascript".
Furthermore, I just noticed that the get_content_type mechanism with evaluation of the user agent was broken anyway. There is a bug in this line - instead of getattr, get must be used here. So the case distinction for Opera never worked anyway.
2) The input encoding of JSONEncoder can be set with the encoding parameter (default is utf-8). We could make that configurable as well, but it has nothing to do with ensure_ascii. The output of JSONEncoder if ensure_ascii=False will be unicode, which will be delivered as utf-8 by default with cherrypy, which is fine for application/json (just tested this, works nicely).
You can set the charset in your decorator:
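For example, something along these lines (treating the exact expose parameters as an assumption about the TurboGears 1.x API):

@expose(format='json', content_type='text/javascript; charset=utf-8')
def products(self):
    # get_products() is a placeholder for your own data access
    return dict(items=get_products())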
But I agree, it's more consistent, if the charset would be set automatically. | http://trac.turbogears.org/ticket/1480 | CC-MAIN-2015-48 | refinedweb | 823 | 56.35 |
On Wed, May 12, 2010 at 02:13:05PM +0200, Peter Zijlstra wrote:
> On Wed, 2010-05-12 at 15:55 +0530, Srikar Dronamraju wrote:
> > * Peter Zijlstra <[email protected]> [2010-05-11 22:59:45]:
> > > On Thu, 2010-05-06 at 23:31 +0530, Srikar Dronamraju wrote:
> > > > - Addressed comments from Oleg, including removal of interrupt context
> > > >   handlers, reverting background page replacement in favour of
> > > >   access_process_vm().
> > > >
> > > > +static int write_opcode(struct task_struct *tsk, unsigned long vaddr,
> > > > +                        user_bkpt_opcode_t opcode)
> > > > +{
> > > > +    int ret;
> > > > +
> > > > +    if (!tsk)
> > > > +        return -EINVAL;
> > > > +
> > > > +    ret = access_process_vm(tsk, vaddr, &opcode, user_bkpt_opcode_sz, 1);
> > > > +    return (ret == user_bkpt_opcode_sz ? 0 : -EFAULT);
> > > > +}
> > >
> > > Why!
> > >
> > > That's not not the atomic sequence outlined.
> >
> > Yes, we had moved away from access_process_vm to background page
> > replacement in Version 1 and Version 2.
> >
> > One of the reasons being Mathieu suggesting to Jim in LFCS that
> > for almost all architectures insertion of a breakpoint instruction on a
> > user page is an atomic operation, as far as the CPU is concerned.
>
> That is true, however when restoring the old instruction you do need to
> take care, so your usage from set_orig_insn() is dubious.
>
> > Can you and other VM experts tell me if access_process_vm isnt going to
> > be atomic with respect to inserting/deleting a breakpoint instruction?
>
> Well, clearly not, since it simply does a gup(.force=1), if whatever
> page is there is writable it will simply poke at it.
>
> Now writing the INT3 is special, but restoring the old insn is not and
> might confuse another CPU that is currently trying to execute code near
> there.

Yes, this helps for breakpoint insertion, but...

The question is whether only INT3 is special or single-byte changes are
also guaranteed to be atomic. In Anvin states 'specific case of a more
generic rule'.

For restoring the old instruction, we technically need to put back just
one byte, irrespective of the actual length of the underlying
instruction. Now, as long as we have the housekeeping code to handle the
possibility of a thread hitting the said breakpoint when it's being
removed, is it safe to assume atomicity for replacing one byte of
possibly a longer instruction?

Ananth
#include <stdlib.h>
#include <sys/types.h>
Definition at line 66 of file memrchr.c.
{
  const unsigned char *char_ptr;
  const unsigned long int *longword_ptr;
  unsigned long int longword, magic_bits, charmask;
  unsigned reg_char c;

  c = (unsigned char) c_in;

  /* Handle the last few characters by reading one character at a time.
     Do this until CHAR_PTR is aligned on a longword boundary.  */
  for (char_ptr = (const unsigned char *) s + n;
       n > 0 && ((unsigned long int) char_ptr
                 & (sizeof (longword) - 1)) != 0;
       --n)
    if (*--char_ptr == c)
      return (__ptr_t) char_ptr;

  /* All these elucidatory comments refer to 4-byte longwords,
     but the theory applies equally well to 8-byte longwords.  */

  longword_ptr = (const unsigned long int *) char_ptr;

  /* Set MAGIC_BITS to a pattern with zero "holes" just left of each
     byte, so carries fall into them, and CHARMASK to a longword each
     of whose bytes is C.  (The original source spells this setup out
     in full; it is abbreviated here.)  */
  magic_bits = -1;
  magic_bits = magic_bits / 0xff * 0xfe << 1 >> 1 | 1;

  charmask = c | (c << 8);
  charmask |= charmask << 16;
#if LONG_MAX > 2147483647
  charmask |= (charmask << 16) << 16;
#endif

  /* Test a longword at a time, searching backward for a byte
     equal to C.  */
  while (n >= sizeof (longword))
    {
      longword = *--longword_ptr ^ charmask;

      if ((((longword + magic_bits) ^ ~longword) & ~magic_bits) != 0)
        {
          /* Which of the bytes was C?  If none of them were, it was
             a misfire; continue the search.  */
          const unsigned char *cp = (const unsigned char *) longword_ptr;

#if LONG_MAX > 2147483647
          if (cp[7] == c)
            return (__ptr_t) &cp[7];
          if (cp[6] == c)
            return (__ptr_t) &cp[6];
          if (cp[5] == c)
            return (__ptr_t) &cp[5];
          if (cp[4] == c)
            return (__ptr_t) &cp[4];
#endif
          if (cp[3] == c)
            return (__ptr_t) &cp[3];
          if (cp[2] == c)
            return (__ptr_t) &cp[2];
          if (cp[1] == c)
            return (__ptr_t) &cp[1];
          if (cp[0] == c)
            return (__ptr_t) cp;
        }

      n -= sizeof (longword);
    }

  char_ptr = (const unsigned char *) longword_ptr;

  while (n-- > 0)
    {
      if (*--char_ptr == c)
        return (__ptr_t) char_ptr;
    }

  return 0;
}
The success of any application depends on its quality. For customers to love an app and evangelize it via word-of-mouth advertising, it must provide the highest quality possible and withstand adverse conditions.
Quality assurance plays an important role in addressing an application’s defects before it reaches production. Almost all software teams have some form of QA as part of their development lifecycle, even if there is no dedicated QA team that only does this job.
It’s the nature of software engineering that new features are built on top of the existing codebase. Hence, whoever is responsible for QA will have to test not only the new features, but the existing features as well to ensure the app works nicely with the new features integrated.
Now the problem is: the time spent in QA will increase with every new feature, and there is a very real chance that not everything will be well-tested. Bugs can easily slip into users' hands.
Automation testing really helps here by automating some of the work that QA would do manually. We can write an automation test for those features that QA has already tested so the team can focus on testing new features while the old features will be tested automatically. This saves a lot of time and brings a higher level of confidence in shipping the app to production.
In this tutorial, we’ll introduce automated testing for Flutter and review how to write each type of automation test with an example.
Here are the three types of tests we’ll cover:
- Unit tests
- Widget tests
- Integration tests
Reviewing our example Flutter app
Let’s have a look at the sample app we’ll be testing:
For the purposes of this tutorial, our requirement is that the list of all products should be available on the app homepage. The user can add the product to the cart by clicking the cart icon beside it. Once added, the cart icon should be changed.
Clicking on the Cart text should open up a cart page, which displays a list of all the products added to the cart. The products can be removed from the cart either via the cancel button or a swipe to dismiss.
Writing the tests for our Flutter app
As discussed above, we’ll automate three types of tests for our Flutter app: unit tests, widget tests, and integration tests. An app can have several combinations of these three tests, but it’s up to you to design and implement the tests in a way that provides the most confidence for your use case.
Unit tests
Let’s begin with the unit test for the app. This tests the single method of the class by ensuring the method provides the expected result based on the input given to it. It helps you to write more testable and maintainable code.
Our goal is to write unit tests for the Cart class — to be more specific, we will make sure that the methods for adding and removing products give the correct result.
First, add the test dependency:
dev_dependencies:
  test: ^1.14.4
Here is the Cart class, which has methods to add and remove items:
import 'package:flutter/material.dart';

/// The [Cart] class holds a list of cart items saved by the user.
class Cart extends ChangeNotifier {
  final List<int> _cartItems = [];

  List<int> get items => _cartItems;

  void add(int itemNo) {
    _cartItems.add(itemNo);
    notifyListeners();
  }

  void remove(int itemNo) {
    _cartItems.remove(itemNo);
    notifyListeners();
  }
}
Next, we’ll create a file to write test cases. Inside the
test folder (at the root of the project), create a new file
cart_test.dart. It should look something like this:
Now add the below code inside it:
N.B., make sure to name the test file as (classToTest)_test.dart.
import 'package:flutterdemos/testingdemo/models/cart.dart';
import 'package:test/test.dart';

void main() {
  group('Testing Cart class', () {
    var cart = Cart();

    // Test 1
    test('A new product should be added', () {
      var product = 25;
      cart.add(product);
      expect(cart.items.contains(product), true);
    });

    // Test 2
    test('A product should be removed', () {
      var product = 45;
      cart.add(product);
      expect(cart.items.contains(product), true);
      cart.remove(product);
      expect(cart.items.contains(product), false);
    });
  });
}
Here, Test 1 verifies the added item should exist in the cart list, and Test 2 checks whether the removed item does not exist in the cart. The expect() method is a way to validate our output against an expectation.
Now we’ll run the unit test. Simply hit the play button in the IDE.
You can also try with the terminal using the following command:
flutter test test/cart_test.dart
Widget tests
As its name suggests, the widget test focuses on a single widget. Unlike the unit test, the widget test makes sure that a particular widget is looking and behaving as expected. You should write a widget test for at least all common widgets.
Our goal here is to write a widget test to ensure that the homepage is working as expected.
First, add one more test dependency:
dev_dependencies:
  test: ^1.14.4 # for unit test
  flutter_test:
    sdk: flutter
Similar to the cart_test.dart file we created in the previous section, we'll now create one more file, home_test.dart, inside the test folder. Let's add the below code to it.
// Imports assumed for this snippet (paths follow the sample project):
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:provider/provider.dart';
import 'package:flutterdemos/testingdemo/models/cart.dart';
import 'package:flutterdemos/testingdemo/home.dart'; // hypothetical path to HomePage

Widget createHomeScreen() => ChangeNotifierProvider<Cart>(
      create: (context) => Cart(),
      child: MaterialApp(
        home: HomePage(),
      ),
    );

void main() {
  group('Home Page Widget Tests', () {
    // Test 1
    testWidgets('Title should be visible', (tester) async {
      await tester.pumpWidget(createHomeScreen());
      expect(find.text('Shopping App Testing'), findsOneWidget);
    });
  });
}
The methods below are the building blocks for writing our widget test:
- createHomeScreen() – provides the UI for the home screen that we would normally do in the main.dart file
- testWidgets() – creates the WidgetTester that provides ways to interact with the widget being tested
- await tester.pumpWidget() – renders the provided widget
- find.text() – finds the widget with the given text. Sometimes we may have the same text in the UI, so find.byKey(Key('string')) becomes really helpful
- expect() – takes the found widget and compares it with the expected Matcher, which can be findsOneWidget, findsNothing, etc.
Let’s cover a couple other important test cases that we would otherwise have to perform manually. Here, we test that the product list is visible on the homepage:
testWidgets('Product list should be visible', (tester) async {
  await tester.pumpWidget(createHomeScreen());
  expect(find.byType(ListView), findsOneWidget);
});
And here, we test that the user is able to scroll the product list:
testWidgets('Scroll test', (tester) async {
  await tester.pumpWidget(createHomeScreen());
  expect(find.text('Product 0'), findsOneWidget);

  await tester.fling(find.byType(ListView), Offset(0, -200), 3000);
  await tester.pumpAndSettle();

  expect(find.text('Product 0'), findsNothing);
});
A full list can be found here.
Now run the test.
Integration tests
Integration tests help to achieve end-to-end testing for the app. They enable us to understand whether users are able to complete the full flow of the app. It’s essentially like testing a real application.
Unlike unit tests and widget tests, integration tests run on a real device, so we get a chance to see how tests are being performed. In a perfect world, we’d write and run as many tests as we need. But if we have limited time, we should absolutely write an integration test at the very least.
Our goal here is to test that the user is able to add and remove products to and from the cart. Here’s the dependency required for the integration test:
dev_dependencies:
  test: ^1.14.4 # for unit test
  flutter_test: # for widget test
    sdk: flutter
  flutter_driver:
    sdk: flutter
  integration_test: ^1.0.1
Now we create the integration_test folder at the project root and add a file driver.dart inside it with the following code:
import 'package:integration_test/integration_test_driver.dart';

Future<void> main() => integrationDriver();
Then we’ll create a file
app_test.dart and add the below code:
// Imports assumed for this snippet:
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:integration_test/integration_test.dart';
import 'package:flutterdemos/main.dart'; // hypothetical path to MyApp

void main() {
  group('Testing full app flow', () {
    IntegrationTestWidgetsFlutterBinding.ensureInitialized();

    testWidgets('Add product and remove using cancel button', (tester) async {
      await tester.pumpWidget(MyApp());

      // Add
      await tester.tap(find.byIcon(Icons.shopping_cart_outlined).first);
      await tester.pumpAndSettle(Duration(seconds: 1));
      expect(find.text('Added to cart.'), findsOneWidget);

      // Move to next page
      await tester.tap(find.text('Cart'));
      await tester.pumpAndSettle();

      // Remove via cancel button
      await tester.tap(find.byKey(ValueKey('remove_icon_0')));
      await tester.pumpAndSettle(Duration(seconds: 1));
      expect(find.text('Removed from cart.'), findsOneWidget);
    });
  });
}
As we can see in the code above, there are instructions to perform actions and verify the results, just as we would do manually:
- await tester.tap() – clicks on the specified widget
- await tester.pumpAndSettle() – when users click on a UI element, there might be an animation. This method ensures that the animation has settled down within a specified duration (e.g., if we think the required widget is not yet available), after which period we're good to go for new instructions
We also have a provision for removing products via swipe. The code for achieving this behavior goes here:
//Remove via swipe
await tester.drag(find.byType(Dismissible), Offset(500.0, 0.0));
await tester.pumpAndSettle(Duration(seconds: 1));
expect(find.text('Removed from cart.'), findsOneWidget);
Finally, we’ll run the test on a real device. Run following command in the terminal:
flutter drive — driver integration_test/driver.dart — target integration_test/app_test.dart
Conclusion
In this tutorial, we've introduced automation testing in Flutter and walked through the various types of tests we can write via examples. You can view the complete source code with all the test cases in my GitHub.
The whole blog is very informative, but I want to know: can we discard multiple cards at once?
It depends on your design. For this particular use case you can probably loop the discarding behaviour.
How to handle system default permissions in code?
Terms
First let's introduce terms describing elements of Mork documents. Each term describes an entity used in Mork. Each type of entity has its own style of markup in Mork.
Before we describe the markup, let's define each type of entity.
Each entry will include a short name in italic, a typical representative letter in bold, and a full name in blue. Sometimes we prefer short names, using (e.g.) lit in preference to literal.
- val - (V) value : the value part of a cell; a literal or a reference
- lit - (L) literal : a binary octet sequence; e.g., a text value; e.g., a cell value in a row.
- tab - (T) table : a sparse matrix; a collection of rows; a garbage collection root.
- row - (R) row : a sequence of cells; attributes of one object; a horizontal table slice.
- col - (C) column : a literal used to name an attribute; a vertical table slice.
- cell - (.) cell : a pair (col val) at the intersection of col and row in a table.
- id - (I) identity : a unique hex number in some namespace scope, naming a literal or object.
- oid - (O) object identity : a scoped unique name or id of an object or entity, large or small.
- ref - (^) reference : a value expressed using the id of an object or literal, instead of a literal.
- dict - (D) dictionary : a map of literals associated with assigned ids inside some scope.
- atom - (A) atom : a literal which is not used as a column name; e.g., a text value.
- map - (M) hashmap : a hash table mapping keys to values (e.g. ids to objects).
- scope - (S) scope : a (colon suffixed) namespace within which ids are unique.
- group - (G) group : a set of related changes applied atomically; a markup transaction.
Model
The basic Mork content model is a table (or synonymously, a sparse matrix) composed of rows containing cells, where each cell is a member of exactly one column (col). Each cell is one attribute in a row. The name of the attribute is a literal designating the column, and the content of the attribute is the value. The content value of a cell is either a literal (lit) or a reference (ref). Each ref points to a lit or row or table, so a cell can "contain" another shared object by reference.
Mork content exists as mentioned, without need of forward references or declarations. By default, new content is considered additive to old content if something already exists due to earlier mention. Mork markup is update oriented, with syntax to edit existing objects, or objects newly mentioned. Mork updates can be gathered into groups of related edits that must be applied atomically (all edits or none) as a transaction. A typical Mork file might contain a sequence of groups in a log structured format. Any group that starts but does not end correctly must be ignored by a reader.
Roots
When copying an old Mork store to a freshly written new file, unused content in the old copy can be omitted. But what defines unused? Any row or literal not reachable from at least one table is considered unused, and can be pruned or scavenged to reduce space consumption.
Any table is a garbage collection root making referenced content reachable for preservation. Tables are never collected. To reduce table space consumption, one can only remove all row members.
Naive Mork implementations must avoid a subtle refcounting bug that can arise due to content written in nearly random hash based order. Markup might say to remove a row from one table early on, then add this same row to another table later. In between, the refcount might be zero, but the row's content must not be destroyed eagerly. A zero refcount only means dead content after all markup has been applied.
Parser
You're expected to write a parser by hand. The grammar is only a clue, not a spec. The hard part is not the grammar, which is simple and unambiguous. The hard part is the meaning of the Mork markup.
Start
In version 1.4, a Mork file begins with this on the first line:
// <!-- <mdb:mork:z v="1.4"/> -->
You can ignore this magic header because it's meaningless. All it does is announce the file format. Presumably later versions of Mork will have different values where 1.4 appears.
Even if you did not ignore the first line, this is a C++ style comment which Mork takes as equivalent to whitespace, hence meaningless.
- start ::= magic (items | group)+
Your Mork tokenizer should treat a C++ style comment -- starting with // and ending at line's end -- as a single whitespace byte.
Mork considers any combination of CR or LF (#xD or #xA) to be a line ending, taken singly or in pairs.
Space
Whitespace is optional in most places where Mork allows it to appear.
The grammar is apt to sprinkle space where it's likely to occur.
- space ::= (white)*
- white ::= ( #xA | #xD | #x9 | #x20 | comment | sp )
- sp ::= /*other bytes you might wish to consider whitespace*/
- comment ::= // not-endline endline
Header
This entire section might be obsolete. The 1.4 source code doesn't seem to process the 1.1 grammar showing a file row. So perhaps this was dropped before version 1.4. The parser in morkParser.cpp appears to recognize @ only as part of group syntax.
Content
After the magic first line and the distinguished file row, the rest of the content in a Mork file is a sequence of items: either objects (rows, dicts, tables) or updates.
Items are optionally collected into groups applied as atomic transactions.
- items ::= (object | update)*
- group ::= gather items (commit | abort)
The syntax for groups is most complex, so let's address this one first.
Groups
A Mork group is a related set of top level content applied all together in one atomic unit, either in its entirety or not at all, as a mechanism for handling transactions in a log structured format.
In version 1.4, @ is always followed by $$ as part of the group markup syntax.
- group ::= gather items (commit | abort)
- gather ::= space @$${ id {@
- commit ::= space @$$} id }@
- abort ::= space @$$}~abort~ id }@
In practice, a parser should expect to see $$ after @ is seen. Then { says a transaction group is starting, and id is the hex identity of this transaction. For example, suppose id is equal to FACE.
The same value for id must appear in commit or abort for an end-of-group to be recognized.
How should a parser go about ensuring that content through either commit or abort is applied atomically (all or none)?
Basically, a parser should remember the byte position immediately following gather, so it can reseek this position after locating the group end. Then it should scan forward looking for @$$}FACE}@ or for @$$}~abort~FACE}@. If the first is found, then the parser should seek the saved position after gather and parse all the content inside the group. Otherwise the group should be ignored.
If a parser sees a new group start before the old one ends, this should probably be seen as equivalent to an abort of the first group. This interpretation allows updates to be added on the end of stores that have previously aborted due to truncated file writes.
Note: the choice of group markup syntax seen here and literal escape metachar syntax shown later are related. Well-formed Mork documents should not be able to embed $$ inside a literal, because $ is a metachar which only escapes hex and not itself. This was done on purpose, so content inside a literal value would not accidentally terminate a transaction group, just because @$$ appears inside a literal. A well-formed literal would encode this sequence as @$\$ instead. This is the answer to jwz's question, about why there is more than one escape metachar: because it helps avoid corruption of user data.
As a result, a parser can generally get along with one non-space character lookahead. Seeing @ means group markup is expected. Then a parser looks for $$ which can't appear inside literals when Mork writers follow the rules. This is exactly why $$ was chosen here.
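To make the two-pass idea concrete, here is a rough Python sketch (the function names and callback are hypothetical; real Mork readers like morkParser.cpp work byte-wise rather than with regular expressions):

import re

def apply_groups(buf: bytes, parse_items):
    """Parse top-level Mork content, honoring transaction groups.

    parse_items is a callback that parses a run of non-group content;
    it stands in for the rest of the parser.
    """
    pos = 0
    while pos < len(buf):
        start = buf.find(b"@$${", pos)
        if start < 0:
            parse_items(buf[pos:])              # no more groups
            break
        parse_items(buf[pos:start])             # content before the group
        m = re.match(rb"@\$\$\{([0-9a-fA-F]+)\{@", buf[start:])
        if not m:
            raise ValueError("malformed group start")
        gid = m.group(1)                        # e.g. b"FACE"
        body_start = start + m.end()
        end_commit = b"@$$}" + gid + b"}@"
        end_abort = b"@$$}~abort~" + gid + b"}@"
        commit = buf.find(end_commit, body_start)
        abort = buf.find(end_abort, body_start)
        nxt = buf.find(b"@$${", body_start)
        if commit >= 0 and (abort < 0 or commit < abort) \
                and (nxt < 0 or commit < nxt):
            parse_items(buf[body_start:commit]) # committed: apply the edits
            pos = commit + len(end_commit)
        elif abort >= 0 and (nxt < 0 or abort < nxt):
            pos = abort + len(end_abort)        # aborted: ignore the edits
        elif nxt >= 0:
            pos = nxt                           # new group before end: implicit abort
        else:
            break                               # truncated trailing group: ignore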
Tokens
You were probably expecting to see material about items here, right after groups. But here we switch tactics momentarily and take a bottom-up look at tokens and primitive data representation used to compose the higher level constructs used by items further below.
Your Mork tokenizer might want to see @[ and ]@ as special file row tokens. Most tokens are single bytes, or can be treated as if single bytes had been seen. Some complex multibyte tokens that are not literals have the same meaning as a single #x20 whitespace byte after having whatever parsing effect is needed.
For example, markup for indicating a group is equivalent to whitespace once it serves the purpose of clustering other content inside a group.
Note that a line continuation, consisting of a backslash \ followed by line-end, is not a token at all when it appears inside a literal. Instead, a tokenizer should ignore a line continuation as if it never appeared.
In addition, Mork uses the following set of single byte tokens:
- < - open angle - begins a dict (inside a dict, begins metainfo row)
- > - close angle - ends a dict
- [ - open bracket - begins a row (inside a row, begins metainfo row)
- ] - close bracket - ends a row
- { - open brace - begins a table (inside a table, begins metainfo row)
- } - close brace - ends a table
- ( - open paren - begins a cell
- ) - close paren - ends a cell
- ^ - up arrow - dereference following id for literal value
- r - lower r - dereference following oid for row (by ref) value
- t - lower t - dereference following oid for table (by ref) value
- : - colon - next value is scope namespace for preceding id
- = - equals - begin a literal value inside a cell
- + - plus - add update: insert content
- - - minus - cut update: remove content
- ! - bang - put update: clear and set content
Ids
Under Mork, object id entities are written in hex. So an id token is just a naked sequence of hex bytes, either upper or lowercase. Because the interpretation is an integer, case is not significant.
Some namespace is always understood as the scope for every id, but this does not appear in the id token itself. Whenever a scope is explicitly given, it appears after a colon following the id, as described next for oids.
- id ::= hex+
- hex ::= [0-9a-fA-F]
Oids
A Mork oid is an object id, and includes both the hex id and the namespace scope. When not explicitly stated, the scope is implicitly understood as some default for each context.
- oid ::= id | id:scope
- scope ::= literal | ^id | ^oid
Note the third option for scope might not be supported or used in practice, since it would make oid and scope recursive. You might expect to see ^id:literal and ^id:^id, but probably not ^id:^id:literal.
Literals
A Mork literal is a binary octet sequence encoded as plain text. Every byte becomes literally the next byte of the value, unless certain bytes are seen which have metacharacter status. (Why is there more than one metachar? Because it might shrink markup. Complexity here is worth some compression.)
- ) - close paren - end of literal
- $ - dollar - escape the next two hex bytes which encode a single octet
- \ - backslash - if the next byte is either #xA or #xD, omit linebreak from literal; otherwise escape next byte. For example, \ removes metachar status from an immediately following \, $, or ).
The first metachar is close paren. A literal always appears at the tail end of a cell which is always terminated by a close paren ), so in practice every literal is terminated by ). The only way to get ) inside a literal is by escaping the ) byte one way or other.
The second metachar is dollar $, which allows you to encode any octet as two digits of hex. Some writers might encode all non-ascii octets this way, and the year 2000 version of Mozilla did this, but it's not required. You are never required to use $ to escape bytes when writing, but readers must escape hex following $ when $ itself is not escaped, say using \. (Why did I choose $ for this metachar? Because I thought URLs might be common Mork content, and I wanted to use a byte that might appear less often in URLs.)
The third metachar is backslash \, which was added to allow escaping metachars using C like syntax, and to allow line continuation in a C like manner so very long lines need not be generated unless a writer insists.
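As a rough illustration of the three rules, a reader's unescape step might look like this Python sketch (a real parser would fold this into its cell scanner rather than post-process):

def parse_literal(buf: bytes, pos: int):
    """Decode a Mork literal starting at pos; stop at the unescaped ')'.

    Returns (value, position_after_close_paren).
    """
    out = bytearray()
    while True:
        b = buf[pos]
        if b == ord(')'):                 # unescaped close paren ends the literal
            return bytes(out), pos + 1
        if b == ord('$'):                 # $ escapes two hex digits -> one octet
            out.append(int(buf[pos + 1:pos + 3], 16))
            pos += 3
        elif b == ord('\\'):              # \ escapes next byte, or eats a line end
            nxt = buf[pos + 1]
            if nxt in (0x0A, 0x0D):
                pos += 2
                if nxt == 0x0D and pos < len(buf) and buf[pos] == 0x0A:
                    pos += 1              # swallow a CRLF pair
            else:
                out.append(nxt)
                pos += 2
        else:
            out.append(b)
            pos += 1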
(If I was going to extend Mork for base 64, I'd probably extend the meaning of the $ metachar to understand a following non-hex byte was the start of a base 64 sequence. For example, ${ABC} might mean ABC was base 64 content. I've seen folks online bitch about Mozilla's verbose encoding of unicode under Mork using $ to encode every null byte. Why didn't you speak up during the year I discussed it online? In five years, why did no one tweak Mork so version 1.5 would do better? Why not just write unicode as raw binary, since Mork supports that? Why does Mork suck because no one spends an hour making changes? Whatever.)
Items
Okay, now we'll finally cover items, which is where all interesting Mork content is actually found.
- items ::= (object | update)*
- object ::= (dict | row | table)
- edit ::= (+ | - | !)
- update ::= edit (row | table)
(Note dict does not appear in update only because dicts have no identity, and thus can't be updated.)
When a parser does not see @ for a group at top level, it expects to see one of these first:
- + : the next object adds or inserts content listed to existing content.
- - : the next object cuts or removes content listed from existing content.
- ! : the next object adds content listed, after clearing or deleting old existing content.
- < : begin a dict.
- [ : begin a row.
- { : begin a table.
Because following sections describe dicts, rows, and tables, the rest of this section says more about those three edit bytes. The + for add is really Mork's default state because it is implied when missing before a row or table. If a row or table already exists, then new content listed is simply added to whatever already exists. (Of course, adding a cell with the same column as an existing column will replace the old value.)
When a writer can see that rewriting an object from scratch is more space efficient than incremental adds or cuts, it can use ! to clear all old content and start afresh.
When a writer sees an incremental cut is cheaper than rebuilding a large object from scratch, it can use - before an object to indicate content removal instead of content insertion.
Dicts
The purpose of a dict is to enumerate instances of literals with associated ids. A parser is expected to populate a map (hashmap) associating each id key with each lit value, so later references to any given id inside cells is understood as a reference to the lit value.
A Mork implementation should have more than one map -- one for each scope is needed. But version 1.4 of Mork only uses two scopes: a for atom literals and c for col umn literals. The former, a, is the default scope for ids s in a dict , unless explicitly changed to the latter by a metadict containing a (atomScope=c) cell.
(Note: to simplify this grammar, optional space has been omitted before each literal token, except those inside value . We don't want optional space before value because it encourages space inefficienty, but you might choose to be tolerant here.)
- dict ::= < (metadict | alias)* >
- metadict ::= < (cell)* >
- alias ::= ( id value )
- value ::= ^oid | =literal
- oid ::= id | id:scope
This grammar does not show a constraint on id , which should be a hex value not less than 80. All id s less than 80 have reserved definitions: any single ascii byte lit eral with ascii value 0xNN is defined to have id NN. Mork implementations which assign id s dynamically should use ascending values from 80 (or from the greatest id already assigned).
In principle, metadict might contain cells of any type. But in Mork version 1.4, only cells equal to (atomScope=a) and (atomScope=c) have any meaning. These change the default scope for the id found in subsequent alias definitions. But instead of using (atomScope=a) in a metadict to return the default scope to a, you can simply use >< to close and reopen a new dict with fewer bytes, which also returns the default scope to)>
After the examples above have been parsed, the oid 85:c has value mail, and the oid 96:a has value Hackworth. Using ^ syntax to indicate an oid, these are usually written ^85 when column scope c is the default, and ^96 when atom scope a is the default.
Alias
Each alias inside a dict has syntax almost identical to that of cell s; they both use parens to delimit a pair. But the first element of the pair in an alias is an id , where a cell has a col in the first position. Additionally, a cell allows more variation in the second element of the pair, where an alias is constrained to a lit eral or an oid that references a lit eral.
- alias ::= ( id value )
- value ::= ^oid | =literal
Optional space here is a bad idea for writers, because it decreases space efficiency. But readers might want to tolerate unexpected space .
Should you allow forward references to id s that have not yet been defined by an alias ? Maybe, since it does little harm. It would be consistent with the system of making things exist as soon as they are mentioned. So an oid with no definition might refer to an empty value in general. This is what happens with row s and table s, and you might do this with lit eral oid s as well.
Rows
A row is a logically unordered sequence of cell s. Any physical ordering is not considered semantically meaningful. All that matters is which col umns appear in the cell s, and what lit erals are referenced by values.
(To simplify the grammar, optional space before tokens is not shown.)
- row ::= [ roid (metarow | cell)* ]
- roid ::= oid /*default scope is rowScope from table*/
- metarow ::= [ (cell)* ]
- cell ::= ( col slot )
- col ::= ^oid /*default scope is c*/ | name
- slot ::= ^oid /*default scope is a*/ | =literal
In the col position, the scope of all id s defaults to c, and in the slot position, the scope defaults to a. The default scope for roid depends on context -- a containing table will supply the default. If the scope is itself an oid for a lit eral, the default scope is c; so when roid is 1:^80 this means 1:^80)> [1:^80 (^81^90)(^82^91)(^83^92)(^84^93)(^85^94)(^86^95)(^87^96)]
This row example was written entirely with oid s, which is not very human readable. So let's rewrite this row using all lit erals with identical resulting content. Note that all col s default to scope c and all slot s default to scope a.
[ 1:cards (dn=cn=John Hackworth,[email protected]) (modifytimestamp=19981001014531Z)(cn=John Hackworth)(givenname=John) ([email protected])(^xmozillausehtmlmail=FALSE)(sn=Hackworth)]
This version of the row has identical content, but is much more readable, at the expense of duplicating strings in places when other rows have the same values, which is often the case.
Cells
A cell is basically a named value, where the name is a col umn lit eral constant. If you line up cells with the same col inside a table, the cells will form a column in the sparse table matrix.
- cell ::= ( col slot )
- col ::= ^oid /*default scope is c*/ | name
- name ::= [a-zA-Z:_] [a-zA-Z:_+-?!]*
- slot ::= ^oid /*default scope is a*/ | =literal
- oid ::= id | id:scope
Note cells can also refer to a row or a table by oid , even if this feature is not actually used by specific Mork applications.
In practice, a reader might allow col umn name s to include any byte except ^, =, or ). Notice that an empty literal value in column sn will be written like this: (sn=).
Tables
A table is a collection of row s. I can't recall whether they are ordered; my guess is no, since otherwise you'd plausibly be able to add a twice twice in different positions. I think each table is a map (hashmap) of row s, mapping roid s to row s, in which case member row s might appear in any order.
But I seem to recall the idea of using tables to represent the sorted ordering of other tables. So maybe tables are ordered, in which case an array representation is also a good idea. In any case, the Mork syntax for serialization is immune to such concerns.
- table ::= { toid (metatable | row | roid )* }
- toid ::= oid /*default scope is c*/
- metatable ::= { (cell)* }
{ 1:cards {(rowScope=cards)(tableKind=Johns)} 1 2 }
The metatable contains cell s of metainfo, but Mork only considers a few possible col umns interesting. Here we see the default scope for roid row id s has been set to cards using (rowScope=cards). This means the row id s 1 and 2 are understood the same as if they had been written 1:cards and 2:cards, or alternatively as 1:^80 and 2:^80 (assuming ^80 resolves to cards).
The role (or purpose or type) of the table is specified by (tableKind=Johns). This supports an examination of tables at runtime by kind instead of merely by toid table oid .
{ 1:^80 {(rowScope^80:c)(tableKind=Johns)} 1:^80 2:^80 } // with oids
Here the table has been rewritten using lit eral oid s instead of string lit erals. But this table is considered identity to the last, if oid 80:c has been defined as cards in some dict .
{ 1:^80 {(rowScope^80:c)(tableKind=Johns)} // with explicit row cells [ 1:^80 (^81^90)(^82^91)(^83^92)(^84^93)(^85^94)(^86^95)(^87^96)] [ 2 ([email protected])(cn=John Galt)] }
This last version of the same table actually defines contents of the member row s using explicit [] notation. You need only define a row in one place, and any other use of that same row can include the row as a member by reference, using only the row's oid . The Mork writer for Mozilla in 2000 made an effort to write the actual contents of any entity one time only, by keeping track of whether it was "clean" in the sense of having been written once already in the current printing task.
Identity
Each object has a unique identity called an oid (for o bject id entity), composed of two parts: id and scope .
Note scope is a short synonym for namespace . Please avoid confusing this use of the word namespace with any other. (There is no relation to the meaning of namespace under XML, except in similar function.)
Mork reverses conventional namespace order, with namespace coming last instead of first. Mork uses id : scope instead of namespace : name . But it's all the same.
The purpose of scope namespaces is to allow content from multiple apps to mix in a single Mork store without possibility of identity collision, as long as apps use different namespaces. Very long scope names to ensure uniqueness has no penalty because scopes can appear by id instead of by name.
Atoms
Mork is grounded in primitive interned-string octet sequence values, usually called atom s. (David Bienvenu prefers atomized strings to interned strings.) This terminology makes an assumption about Mork implementations: each value namespace scope should have a map hashing string keys to atom values, so every instance of the same string can be shared as the same atom instance. You needn't do this, but it's less confusing if you do. You also want a map from id to atom, to resolve val hex oid references.
Unicode
Mork doesn't know or care about unicode or wide characters. All Mork content is uninterpreted octet sequences, containing whatever an app likes. All content that looks like binary is escaped, so it won't interfere with Mork's markup, which is basically ascii (or the moral equivalent). Mork's simple syntax for framing embedded content does not constrain the type of content put inside. Mork does not escape binary content very efficiently, typically using three bytes to represent each awkward byte: zero appears in hex as $00. (The choice of $ as metacharacter was arbitrary, and picked to be less common within Mozilla embedded content. Space efficiency for binary was never high priority in whimsical demands from managment.)
Brevity
Why is there more than one namespace for octet string vals? Column names have their own namespace so the ids of columns will usually be no more than two hex digits in length, assuming an app does not have a huge number of distinct attribute names.
If cols and vals were in the same namespace, Mork files would likely be larger if common column names had ids that were longer. So this is a compression strategy, making the format more complex.
Bootstrap
Parts of Mork are self describing, making it reflective. This causes a bootstrap problem where metainfo would describe itself, and well known constants would be desired for scope names. For this reason, Mork defines the ids of single byte scope names specially: the integer value of a single octet name is defined to be the id of that name. This means all scope name ids less than 0x80 are already assigned. A Mork implementation must dynamically assign ids increasing from 0x80.
Default scope names for col 's and ordinary atom val' s are defined as follows:
- col - c (for col) with id 0x63 (equal to 'c')
- val - a (for atom) with id 0x61 (equal to 'a')
Objects
Mork models an app object with a row , where each object attribute is a separate cell in the row. The attribute name is the column. A collection of objects is a sparse matrix in the form of a table enumerating rows.
Mork also represents the attributes as metainfo as a row, so the anotation of a table or row with metainfo appears as another row, distinguished by privileged syntax. Primary parts of metainfo include object identity, kind or type, and default scope for contained objects. | https://developer.mozilla.org/en-US/docs/Mozilla/Tech/Mork/Structure | CC-MAIN-2018-13 | refinedweb | 4,461 | 70.53 |
Giving the Elephant Some Elixir
MapReduce is a common Big Data pattern for analyzing a data set concurrently. This tutorial will introduce you to Elixir and the principals behind Hadoop. We will be building the equivalent of Hello World in MapReduce which is a word count program. Map and Reduce are also common higher order functions in the world of functional programming. Map is a function that takes a list and an anonymous function or lambda as arguments, applies the function to each element in the list, and returns a new list with the output of the lambda on each element. Reduce is a similar function in that it takes the same arguments with one additional argument in Elixir, an accumulator, but returns an accumulated value instead of a list. Elixir is a great language to learn concurrency and MapReduce is both a useful example and shows off many of Elixir’s features.
MapReduce Flow
MapReduce is a pipeline through which data flows and is processed. It can be broken down into roughly 5 steps which correspond to 5 modules we will write in Elixir. Our first step is the Input Reader. This takes in data, splits it into a form that our Map process can read, and concurrently launches Map processes. Our Map process reads the data given to it, runs a function on each piece of data, and outputs a key value pair to a Partition/Compare process. The Partition process accumulates key value pairs from all Map processes, compares the pairs, and spawns Reduce processes for each unique key. Each Reduce process runs a function on each value that adds up all the values for the given key, and emits these values to the Output Writer. Finally, the Output Writer yields your data in a format of your choice.
Step 0: Setup
First make sure you have Elixir installed for your current system. Instructions can be found here. An excellent introduction to the language is available from the main site here and hexdocs.pm provides Elixir module documentation here. Learnxiny also provides an excellent Elixir syntax primer. This tutorial assumes little to no experience with Elixir but familiarity with basic programming concepts and the command line of your operating system. All code for this project is available on Github.
Once you have Elixir installed create a new project using
mix new.
mix new mapreduce --module MapReduce
Next move into your
mapreduce directory and edit
mix.ex with this line inside
def project do [].
escript: [main_module: MapReduce]
Step 1: The Parent Process
Once you have a new project navigate into
lib/ in your project directory and open
mapreduce.ex which should have been autogenerated by
mix . First remove everything from the file that is not
defmodule MapReduce do and
end. Let’s add a couple of imports for modules we will be writing later to the top of the module.
defmodule MapReduce do
require InputReader
require Partition
Next let’s create a main function and pipe its argument into several functions. Pipe,
|>, is an operator that acts very similarly to bash’s pipe.
def main(args) do
args |> parse_args |> pipeline
end
After that let’s write a private function (denoted by
defp instead of
def) to parse our command line arguments. Here we are creating a variable with type tuple which contains the result of
OptionParser.parse . Our command line argument is
--file=example.txt and so we set our switches in the arguments to the parser function accordingly. We only need the first output from the parser which we will be returning from
parse_args the other outputs are represented with an underscore to indicate we won’t need them.
defp parse_args(args) do
{options, _, _} = OptionParser.parse(args,
switches: [file: :string]
)
options
end
end
The final part of our parent process is the pipeline. The first
pipeline function is a pattern match case that checks for an empty file argument. The next
pipeline function launches a Partition process but only stores the process id in a variable. We are using
elem to give us the second element of the tuple returned by starting our process because we don’t need the atom,
:ok, first element. Then the Partition process id and the file name are passed to an Input Reader that we will write in the next step. Finally we use a recursive function,
forever, to keep the parent process alive while the rest of our MapReduce flow executes.
defp pipeline([]) do
IO.puts "No file given"
end
defp pipeline(options) do
partition = elem(Partition.start_link, 1)
InputReader.reader("#{options[:file]}", partition)
forever()
end
defp forever do
forever()
end
Step 2: The Input Reader
This module is fairly simple, it contains one function that takes a filename and process id. This function will attempt to open a file, sending a message to STDERR if this fails. If our file is successfully opened we will execute a function on every line of the file. We use a regular expression to parse lines from our file and return them as a list.
Enum.each is identical to map except that it returns an atom if it completes successfully instead of a list. For every line in our list we will
spawn a Map process with the line and the Partition process id as arguments.
defmodule InputReader do
require Mapper
def reader(file, partition) do
case File.read(file) do
{:ok, body} -> Enum.each(Regex.split( ~r/\r|\n|\r\n/, String.trim(body)), fn line -> spawn(fn -> Mapper.map(line, partition) end) end)
{:error, reason} -> IO.puts :stderr, "File Error: #{reason}"
end
end
end
Step 3: The Mapper
The Mapper module is quite short, first we send the process id of the current Map process to the Partition process. This will enable us to check whether our Map processes are still running later on in Partition. After that we will again use
Enum.each to send individual words from a list generated by splitting each line based on the space character to the Partition process.
defmodule Mapper do
def map(line, partition) do
send(partition, {:process_put, self()})
Enum.each(String.split(line, " "), fn key -> send(partition, {:value_put, key}) end)
end
end
Step 4: The Partition
The Partition module is the most complex module we will be creating. Here we will be using the
Task module instead of just
spawn . We are using
start_link instead of
start because we want the parent process to be killed when this process is killed. Our linked processes are MapReduce, Partition, and OutputWriter in that order. OutputWriter will check if all Reduce processes have finished before exiting its own process. This will exit all linked processes all the way back to the parent. We use a lambda in
start_link to start a recursive function that takes 2 lists as arguments. Note that end is necessary to close all lambdas.
defmodule Partition do
require Reducer
require OutputWriter
def start_link do
Task.start_link(fn -> loop([], []) end)
end
end
Next we will write our recursive
loop function. This will first check the length of the mailbox of messages sent by our Map processes. If it has processed all messages it will launch a check to see if we should launch our Reduce processes yet. We use
Keyword.delete to remove all null or whitespace characters that have snuck into our key value pairs. Note the use of a sigil,
~s(\s), to represent a whitespace character. Next we have some pattern matching code that checks all of our received messages for specific tuples. If we receive the atom
:processor_put we will append the process id of the caller Map process to our process list inside of a recursive
loop call. If we instead receive the atom
:value_put we will append a Keyword containing the key sent to us by Map and a value of 1 for the word count. Any other message will produce an error.
defp loop(processes, values) do
mailbox_length = elem(Process.info(self(), :message_queue_len), 1)
if (mailbox_length == 0), do: (
mapper_check(processes, Keyword.delete(Keyword.delete(values, String.to_atom(~s(\s))), String.to_atom("")))
)
receive do
{:process_put, caller} ->
loop([caller | processes], values)
{:value_put, key} ->
loop(processes, [{String.to_atom(key), 1} | values])
error -> IO.puts :stderr, "Partition Error: #{error}"
end
end
The final piece of Partition is
mapper_check . This function checks if all of our Map processes are dead and launches Reduce processes for each unique word if they are. First we use
Enum.filter to return a list,
check,of any process that is still alive. Then we create a list,
unique, of every unique key/word. If we have a non-zero number of keys and no Map process is alive then we
start_link OutputWriter and pass its process id to every Reduce process we spawn. After this we use
Enum.each on uniques and use
Keyword.take to pull out every instance of each unique and then spawn a Reduce process with a list of all of those instances.
defp mapper_check(processes, values) do
check = Enum.filter(processes, fn process -> Process.alive?(process) == true end)
uniques = Enum.uniq(Keyword.keys(values))
if (length(check) == 0 && length(uniques) != 0), do: (
output_writer = elem(OutputWriter.start_link, 1)
Enum.each(uniques, fn unique -> spawn(fn -> Reducer.reduce(Keyword.to_list(Keyword.take(values, [unique])), output_writer) end) end)
)
end
end
Step 5: The Reducer
The second to last module is our Reduce process. This takes a list of tuples (key value pairs) and the process id of Output Writer. Similar to Map we send Output Writer the process id of Reduce to keep track of its status. Then we check to make sure
tuples is not empty with a case pattern match. If
tuples is not empty we send a string to Output Writer. This string might look a little strange but that’s because we are using Elixir’s string interpolation syntax,
"#{}", to place 2 expressions separated by a space. Let’s break down the two expressions in the string we send to Output Writer. First we use
elem, which we’ve seen before, to get the key from the first tuple in the list using
hd or head. All of the keys in the list should be the same so it doesn’t matter which one we use. Second we use
Enum.reduce to add up all of the values from our key values tuples using the accumulator given as the second argument in both
reduce and the lambda in
reduce.
defmodule Reducer do
def reduce(tuples, output_writer) do
send(output_writer, {:process_put, self()})
case tuples do
[] -> IO.puts :stderr, "Empty List"
tuples ->
send(output_writer, {:value_put, "#{elem(hd(tuples), 0)} #{Enum.reduce(tuples, 0, fn ({_, v}, total) -> v + total end)}"})
end
end
end
Step 6: The Output Writer
This module should look very familiar since it’s a slightly modified version of Partition. The only difference is in
reducer_check where we open a file, write each word and its count to STDOUT and the file we just opened, and then close the file and the whole process chain all the way back to our parent process. One important detail is the use of
Path.join to give us consistent file paths across various operating systems. Another is the use of
<> which is Elixir’s string concatenation operator.
defmodule OutputWriter do
def start_link do
Task.start_link(fn -> loop([], []) end)
end
defp loop(processes, values) do
mailbox_length = elem(Process.info(self(), :message_queue_len), 1)
if (mailbox_length == 0), do: (
reducer_check(processes, values)
)
receive do
{:process_put, caller} ->
loop([caller | processes], values)
{:value_put, value} ->
loop(processes, [value | values])
end
end
defp reducer_check(processes, values) do
check = Enum.filter(processes, fn process -> Process.alive?(process) == true end)
if (length(check) == 0 && length(processes) != 0), do: (
{:ok, file} = File.open(Path.join("test", "output.txt"), [:write])
for value <- values do
IO.puts value
IO.write(file, value <> ~s(\n))
end
File.close(file)
Process.exit(self(), :kill)
)
end
end
Testing and Wrap-up
Congratulations! You’ve successfully written a small but non-trivial Elixir program that does something useful. Let’s do some testing and then we’re finished. First create a directory called test and a file inside it filled with random text. I used randomtextgenerator.com to for my file. Next compile your project with
mix escript.build. Finally run your code using
./mapreduce --file=test/input.txt. You should see several hundred lines of text on your command line like this.
Finally open your
test directory and there should be a new file name
output.txt with output identical to your command line except that the exit message won’t be at the end.
Thanks for reading, please leave a clap or several if this tutorial was helpful to you!
Joe Cieslik is the CEO of Whiteboard Dynamics, a full stack development team specializing in functional programming and Android. You can find out more about us, how we can help you, and our past projects at whiteboarddynamics.co. | https://hackernoon.com/build-a-mapreduce-flow-in-elixir-f97c317e457e | CC-MAIN-2019-35 | refinedweb | 2,156 | 65.62 |
Barcode Recognition
Every so often, I get on to a kick where I start reading up on lisp, erlang and others that have functional, reliable, metaprogrammable, or other ‘able’ traits that indicate a more powerful language. In a way, this is partly responsible for my investigation of ruby on rails, as a metaprogramming language that has one dominant web framework that seems to do things right.
Some of this is because I see people using these tools to create software that is elegant, effective, and done with few resources. The most recent iteration is dabbledb. This is software that I wish that I’d written. I’m pretty sure that it’s written in smalltalk by a small team, who did the work between consulting jobs.
Some of this is because I want things to get easier. However, I’m reminded of Lance Armstrong: “It doesn’t get any easier, you just go faster.” The programming is still going be ‘hard’, but it better be solving bigger problems.
I’ve had a problem at work that I’ve thought about, on and off, for a year or so. We drive several check scanners, one of them with a barcode recognizer. Helpfully enough, that’s the lowest volume one — generally the scanner that one wants to upgrade from. But for people who need that barcode scanning to match a payment coupon to their database, it’s kind of an important feature. There are a few open source packages out there that do barcode recognition, some as part of an ocr engine, some as stand alone. Lots of payware activex stuff, com objects, or other closed source windows stuff. But nothing that gave a quick and easy overview that’s incorporatable or reimplementable. So I need a barcode recognizer from a black and white image. I don’t need to find the barcode, since it’s in the same place every time.
So, Code 39 barcode. There are a lot of barcode symbologies. This is a little different than the upc symbol, it’s an early attempt that’s had wide use, it’s not especially dense, but it’s easy to read.
Each ‘character’ has 9 bars, starting and ending with black, and a narrow white bar between character. 3 of those bars are wide, generally 3x wider than the narrow bars. This means that each character is a fixed width, probably about 16x the smallest unit (3×3 + 7×1). Also, there’s supposed to be a large white space at the beginning and end of the barcode, and start and stop characters at each end. So the total number of bars, white and black, including start/stop codes, but not the end buffer space, will be (chars)*10 + 19 bars.
So now, for some functional goodness. The essential trick here is to encode a string of bits from the image in run length encoding, here represented by tuples of (length, color) in an array. Once you have that, you can figure out if you have a reasonable number of bars (one per run length entry), characters, and from that, the barcode. Everything in the problem can be modeled as a list, and all (save one) of the operations can be a map or filter on the list.
I’m doing this in python, usng the itertools package and a curry implementation that gives me partial function application. (e.g. bar = curry(foo,a), bar(b) == foo(a,b)) Itertools gets me the groupby function, which returns lists of identical(ish) items. Curry, well, curry gets me partial function evaluation, which is really some syntatcic sugar to let me use map instead of an explicit for loop.
First up, we need a row of pixels. I’m assuming that we know where the bar code is in the image, and we just need to interpret it. Finding the barcode is a problem left for the reader. We’re using the python imaging library, so extracting a row is a quick function. Here I’m cropping the barcode region to a 1 pixel high area at a height v in the region, then getting the data from the image.
def extract_row(self, img, v): (w,h) = img.size return img.crop((0,v,w,v+1)).getdata()
For a black and white image, this comes back as a list of integers, either 0 (black) or 255 (white).
Next, we need to take this pixel data and get something that approximates bars. Run length encoding does the trick here, since we would like color and width for each item on our scan line. This is the first use of the itertools.groupby function, which returns a value and an iterator of the items for each identical value. I’m grabbing the length of the value list and the value, mapping the value to a color, and returning it as a tuple, then chopping off the whitespace at each end.
def to_rle(self, row): mp = {0: 'b', 255: 'w'} return [(len(list(g)), mp[k]) for k,g in itertools.groupby(row)][1:-1]
This should return a list of [(len, ‘b’), (len, ‘w’), (len,’b’) …] where the first and last items are black. If they’re not, then we should just discard the line. It’s much easier to just discard lines that don’t make sense, either from a bad scan or missing barcode information than to try to tough it up. There are always more scanlines to try. (Until there aren’t, and then it may be worth some touchup).
Now, we have a list of white an black regions, we need to determine if they are wide or narrow bars, then split them into individual characters. Since there is a rough correspondence between the pixel width of the barcode and the number of narrow bar widths, I’ve set a threshold of 2x the narrow bar width as the divider for wide/narrow. We can calculate the narrow bar spacing by pxLen/(16*characters), and # chars = (len(rle)+1)/10. Or, in code:
def threshold(self, rle): n = (len(rle)+1)/10 pxlen = sum(map(lambda x: x[0], rle)) return 2*(pxlen / (16*n))
And apply it using:
def to_bars(self, rle): return map(curry(lambda x,y: str(int(y[0] > x)), self.sym.threshold(rle)), rle)
In this case, I’m returning it as ‘1’, and ‘0’ for wide and narrow, respectively. I could return bits or booleans or strings, but I found the 1 and 0 easy enough for the character substitution. In the future, I’ll probably make them binary and try to specify the mappings as character set transformations. But not today.
The penultimate step is to chunk into characters, another use of the groupby function, this time with a little stored state. I want to pull off 10 bars at a time (9 + whitespace), so I’m using a helper that will give me a integer div 10 of the number of times that it’s been called.
def _iterkey(self, val): self._iterstate +=1 return (self._iterstate-1) / 10 def chunk(self, bars): self._iterstate = 0 return [''.join(list(g)[:9]) for k,g in itertools.groupby(bars, self._iterkey)]
Finally, we need to turn the chunks into characters. I’ve got a map of the 9 character chunks to the character that they represent, processed from the wikipedia and other documentation above. So it’s a simple matter of subbing into the map and dropping the start and stop characters:
def to_chars(self, chunked): return ''.join(map(lambda x: self.sym.bkw[x], chunked)[1:-1])
Putting this all together with some error checking, retries on the next scan line, we get:
def recognize(self, img): (w,h) = img.size for scan in range(1,h-1): rle = self.to_rle(self.extract_row(img, scan)) if len(rle) < self.sym.min_length : continue if not self.sym.check_len(rle): continue chunks = self.sym.chunk(self.to_bars(rle)) if len(self.sym.invalid_chunks(chunks)) : continue try: ret = self.sym.extract(self.to_chars(chunks)) if len(ret): return ret except: continue return None
On my linux (ubuntu 6.06, amd64, 3600?) box, this does about one barcode per .01 sec, where the barcodes are about 280x100px, extracted from about 1 megapixel images. On average, I'm trying about 40 scanlines before I hit on one that's error free. It's about 2.5x slower on the mac (macbook core duo) for reasons that I haven't figured out yet.
Link to the full source.
Link to an archive with the code, test code and a sample image. | http://www.wiredfool.com/2006/07/04/ | CC-MAIN-2021-43 | refinedweb | 1,444 | 73.27 |
I want to get column names of a matrix to set another one, but if matrix does not have column names (or is set to NULL), the following code crashes my R session.
CharacterVector cn = colnames(x);
The following code is the way how I get column names of a matrix even if it does not have.
#include <Rcpp.h> using namespace Rcpp; // Get column names or empty // [[Rcpp::export]] CharacterVector get_colnames(const NumericMatrix &x) { CharacterVector cn; SEXP cnm = colnames(x); if (!Rf_isNull(cnm)) cn = cnm; return(cn); }
Is there a more elegant way?
I had started this and then got distracted. @coatless covered it, this is simply shorter.
Code
Output | https://techqa.club/v/q/how-to-get-column-names-even-if-it-is-null-in-rcpp-c3RhY2tvdmVyZmxvd3w1NTg1MDUxMA== | CC-MAIN-2021-17 | refinedweb | 111 | 62.38 |
6325/java-static-nested-class
I was going through some examples and found this code with static nested class:
public class LinkedList<E> ... {
...
private static class Entry<E> { ... }
}
But I am not able to understand, why one should go for a static nested class instead of a general inner class? Please explain.
Hi, to understand their usage, you must know the difference between the two.
As you might know that a nested class is by default a member of its enclosing class. An inner class (or the non-static nested class) can access any of the data members of its enclosing class i.e., even if its declared private, it will be accessible by it. Whereas a static nested class can’t do so, rather it can interact with the instances of its outer class. In other words, a static nested class behaves just like a top-level class. In your example, since your LinkedList. Entry class in not accessing any of the LinkedList members, it is kept as static.
Hope this clears your doubt.
A static keyword can be used with ...READ MORE
This program will help you understand the ...READ MORE
A static method has two main purposes:
For ...READ MORE
A subclass inherits all of the public ...READ MORE
Nested classes are divided into two categories: ...READ MORE
You can use Java Runtime.exec() to run python script, ...READ MORE
First, find an XPath which will return ...READ MORE
See, both are used to retrieve something ...READ MORE
Well, Java doesn't allow this because of ...READ MORE
Can’t tell you the exact reason as ...READ MORE
OR | https://www.edureka.co/community/6325/java-static-nested-class | CC-MAIN-2019-30 | refinedweb | 272 | 77.43 |
Introduction to malloc() in C++
Malloc function in C++ is used to allocate a specified size of the block of memory dynamically uninitialized. It allocates the memory to the variable on the heap and returns the void pointer pointing to the beginning address of the memory block. The values in the memory block allocated remain uninitialized and indeterminate. In case the size specified in the function is zero then pointer returned must not be dereferenced as it can be a null pointer, and in this case, behavior depends on particular library implementation. When a memory block is allocated dynamically memory is allocated on the heap but the pointer is allocated to the stack.
Syntax
Malloc function is present in <cstdlib> header file in library of C++. This is used to invoke dynamic memory allocation to the variables where the size of the block is defined at the compile time. Below is the syntax for malloc function:
void* malloc(size_t size);
Parameters
Only one parameter needs to be passed to call the malloc method that is the size of the memory block one needs to allocate. The data type for this parameter is size_t. Memory allocated is initialized with random values and must be initialized again.
Return Type: void* is a return type. This signifies that this method returns a pointer to the address of the first memory block allocated on the heap. This pointer is made on the stack. In case the size specified in the parameters is 0 then the pointer being returned is null and shall not be referenced.
How does the malloc() method work in C++?
Malloc function is present in <cstdlib> header file of C++ library. This method is used to allocate memory block to a variable or array on heap where variables have a better life.
When this method is called for a specified size_t variable, the compiler searches the same memory block size on the heap and returns a pointer to the starting address of that memory block. The pointer returned is a void pointer that means it can be easily converted to a pointer of any datatype. In case the specified size for a memory block is 0 then a NULL pointer is returned working in indeterminate behavior, and shall not be dereferenced.
This function does not call the constructor. Since memory is allocated dynamically thus leads to avoid various segmentation fault errors. Memory allocated using this function cannot be overridden that is no other program will be able to use that memory block until it is freed from that particular pointer. Thus one must free the memory being allocated using the malloc method and thus we can experience good memory management by our system and enhanced performance.
Also, we must note that the size of the block being specified needs to be calculated manually as per the requirement such as in case the array consists of int type values thus memory being allocated must be multiple of memory size of an int variable.
Examples to Implement malloc() in C++
Below are examples mentioned:
Example #1
In our first example we will use malloc function to create an array for 6 number of elements of int type:
Code:
#include <iostream>
#include <cstdlib>
using namespace std;
int main()
{
int *my_ptr;
my_ptr = (int*) malloc(6*sizeof(int));
if(my_ptr)
{
cout << "Lets intilize 6 memory blocks with odd numbers" << endl << endl;
for (int i=0; i<6; i++)
{
my_ptr[i] = (i*2)+1;
}
cout << "Lets see the values" << endl << endl;
for (int i=0; i<6; i++)
{
cout << "Value at position "<<i << " is "<< *(my_ptr+i) << endl;
}
free(my_ptr);
return 0;
}
}
Output:
Example #2
Let’s see the scenario if 0 is specified as size in malloc function:
If the size is 0, then malloc() returns either NULL or a unique pointer value that can later be successfully passed to free(). That means, there is no guarantee that the result of a malloc(0) is either unique or not NULL.
Code:
#include <iostream>
#include <cstdlib>
using namespace std;
int main()
{
size_t size =0;
int *my_ptr = (int *)malloc(size);
if(my_ptr==NULL)
{
cout << "Null pointer has been returned";
}
else
{
cout << "Memory has been allocated at address" << my_ptr << endl;
}
free(my_ptr);
return 0;
}
Output:
Advantages of malloc() in C++
There are a lot of advantages to using the malloc method in one’s application:
Dynamic Memory allocation: Usually we create arrays at compile time in C++, the size of such arrays is fixed. In the case at run time we do not use all the space or extra space is required for more elements to be inserted in the array, then this leads to improper memory management or segmentation fault error.
Heap memory: Local arrays that are defined at compile time are allocated on the stack, which has lagged in memory management in case the number of data increases. Thus one needs to allocate memory out of the stack, thus malloc comes into the picture as it allocates the memory location on the heap and returns a pointer on the stack pointing to the starting address of the array type memory being allocated.
Variable-length array: This function helps to allocate memory for an array whose size can be defined at the runtime. Thus one can create the number of blocks as much as required at run time.
Better lifetime: Variable created using malloc method is proved to have a better life than the local arrays as a lifetime of local arrays depends on the scope they are being defined and cannot access out of their scope. But variables or arrays created using malloc exist till they are freed. This is of great importance for various data structures such as linked list, binary heap, etc.
Conclusion
Malloc method is used to allocate memory to the variables dynamically in the form of an array and returns a void pointer pointing to the starting address of the memory block. The null pointer is returned in case the size of the block specified is 0. This memory is allocated on the heap and the pointer is made on the stack. Memory allocated cannot be overridden and the size must be calculated manually.
Recommended Articles
This is a guide to malloc() in C++. Here we discuss an introduction to malloc() in C++, syntax, how does it work, examples. You can also go through our other related articles to learn more – | https://www.educba.com/malloc-in-c-plus-plus/?source=leftnav | CC-MAIN-2021-21 | refinedweb | 1,067 | 55.27 |
Awesome!
Peter, do you think it would be potentially worthy to put the source
code somewhere where people could add/ modify things without branching
endlessly? SourceForge? Would ant-contrib assign a spot in the CVS? I can
of course expose CVS from my own server, but it would be probably more
natural if such extension were available on ANT's cvs somewhere...
Dawid
P.S. Send me the code, please - I would love to check it out! :)
pr> Excellent....
pr> This is mega cool.
pr> I modified DynamicBSHTask#execute to use createDataType() rather
pr> than createTask() and with ant 1.6 one can use the dynamic bsh
pr> to define conditions, filters, and tasks very easily:
pr> <project name="t">
pr> <taskdef name="bshdef"
pr>
pr> </taskdef>
pr> <taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
pr> <target name="condition">
pr> <bshdef name="all.lower">
pr> /*% @implements org.apache.tools.ant.taskdefs.condition.Condition %*/
pr> String message;
pr> void setString(String message) {
pr> global.message = message;
pr> }
pr> boolean eval() {
pr> return (message.toLowerCase() == message);
pr> }
pr> </bshdef>
pr> <if>
pr> <all.lower
pr> <then>
pr> <echo>the string is all lower</echo>
pr> </then>
pr> </if>
pr> </target>
pr> <target name="path">
pr> <bshdef name="showpath">
pr> /*% @extends org.apache.tools.ant.Task %*/
pr> import org.apache.tools.ant.types.Path;
pr> Path path;
pr> org.apache.tools.ant.types.Path createPath() {
pr> global.path = new Path(self.getProject());
pr> return global.path;
pr> }
pr> void execute() {
pr> self.log("Path is " + global.path);
pr> }
pr> </bshdef>
pr> <showpath>
pr> <path>
pr> <fileset dir="."/>
pr> </path>
pr> </showpath>
pr> </target>
pr> </project>
pr> On Tuesday 10 June 2003 09:43, you wrote:
>> Ah... yeah - this is tricky.... but possible:
>>
>> pr> void execute() { log("hello world"); }
>>
>> not possible because once you're inside the script, you are not anymore
>> aware that you're a subclass of some other method (BeanShell currently
>> doesn't support this).
>>
>> pr> void execute() { this.log("hello world"); }
>>
>> Nope, all BeanShell scripts have a 'this' object which is of type XThis.
>> Again: beanshell archtiecture, which is hard to modify (and it wouldn't
>> make much sense anyway - Pat knows what he's doing :).
>>
>> pr> void execute() { global.log("hello world"); }
>>
>> global is for "instance" scope. This is misleading, but is explained in the
>> BeanShell manual - in my application every script has its own namespace, so
>> it is like class's instance scope.
>>
>> What you need to do is refer to methods of the 'self' object, which I
>> register for every script as a public global variable. This is the
>> reference to your superclass.
>>
>> So: self.log("hello world"); should do the trick.
>>
>> check out the examples (test cases) - there's some code that does access
>> superclass methods and fields.
pr> My bad, I should have looked harder.
pr> Peter.
>>
>> Unfortunately you won't be able to access protected methods/ fields of the
>> superclass - haven't figured out how to overcome this problem yet.
>>
>> Dawid
pr> Ps.
pr> My direct e-mail gets bounced.
pr> ---------------------------------------------------------------------
pr> To unsubscribe, e-mail: [email protected]
pr> For additional commands, e-mail: [email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected] | http://mail-archives.apache.org/mod_mbox/ant-user/200306.mbox/%[email protected]%3E | CC-MAIN-2017-22 | refinedweb | 547 | 60.41 |
In this article you will learn what is new in Prism 5.0.
Are you a WPF, Silverlight or Windows Phone developer and article, I'll be discussing the new assemblies, new objects and deprecated objects that can/can't be used with Prism 4.1 and Prism 5.0.Downloading Prism 5.0Prism 5.0 can be downloaded and installed either from the Patterns and Practices site having the URL or by using the Nugget package inside Visual Studio. The specified link also discusses all the changes that are part of Prism 5.0Supported PlatformsLet's have a quick look at the supported platforms of Prism 5.0. When working with previous versions of Prism (in other words 4.1), one was able to create applications like WPF (.Net 4.0), Silverlight 5 and Windows Phone (7.5). The need to use some tool and make your application a WPF application (if possible) or simply continue with the existing version of Prism.Assembly ChangesThis section discusses about all the assembly-related changes that Prism 5.0 introduces. Please note, in the following table all the assemblies are prefixed with Microsoft.Practices:Note that in the preceding Portable Class Library (PCL).On the MVVM front, there are two more additions. One is Prism.MVVM that is again a PCL and shares a common MVVM functionality across Windows Store Apps and Windows Presentation Foundation applications. Now to get around a few limitations and some necessary enhancements on WPF, a separate assembly is created having a name the Prism.MVVM.Desktop, that is specifically meant to be used on the desktop.One more addition is Prism.PubSubEvents. This is an event aggregator. So, the event aggregator is called out of Prism and has been kept into its own PCL library in Prism 5.0.Deprecated ObjectsDeprecated objects are the objects that are still in assemblies, but we just don't want to use them anymore and if you are currently using them, then you need to move them to a different object instance. The following is the list of such objects:Objects moved to a new locationThere are a few objects that provide the new home in Prism 5.0. If you are using any of the following specified objects, then you need to re-reference your assemblies with the new ones. Apart from assemblies, the namespace is also changed. Please note, the following changes are the breaking changes.Removed ObjectsThere are a few objects in Prism 4.1 that are completely removed from Prism 5.0. This section discusses the objects that are completely gone. This is again considered to be breaking changes.Apart from all these changes, a few changes are made to Quick Starts and help files also.
View All | https://www.c-sharpcorner.com/UploadFile/41e70f/whats-new-in-prism-5-0/ | CC-MAIN-2019-35 | refinedweb | 461 | 66.44 |
This set of pages documents the setup and operation of the GPU bots and try servers, which verify the correctness of Chrome's graphically accelerated rendering pipeline.
The GPU bots run a different set of tests than the majority of the Chromium test machines. The GPU bots specifically focus on tests which exercise the graphics processor, and whose results are likely to vary between graphics card vendors.
Most of the tests on the GPU bots are run via the Telemetry framework. Telemetry was originally conceived as a performance testing framework, but has proven valuable for correctness testing as well. Telemetry directs the browser to perform various operations, like page navigation and test execution, from external scripts written in Python. The GPU bots launch the full Chromium browser via Telemetry for the majority of the tests. Using the full browser to execute tests, rather than smaller test harnesses, has yielded several advantages: testing what is shipped, improved reliability, and improved performance.
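As a concrete illustration, a Telemetry-driven test boils down to a small amount of Python that steers a tab in the running browser. The sketch below is illustrative only: the function name and the page check are made up, and the real harness code lives under `src/content/test/gpu/gpu_tests/`, but the `tab.Navigate` and `tab.EvaluateJavaScript` style of control is how these tests operate.

```python
# Illustrative sketch of Telemetry-style browser control (not actual
# harness code). 'tab' is Telemetry's handle to a tab in the full
# Chromium browser; the URL and helper name are hypothetical.

def RunWebGLSmokeTest(tab, url):
  # Drive the real browser: load the page and wait for it to finish.
  tab.Navigate(url)
  tab.WaitForDocumentReadyStateToBeComplete()
  # Execute JavaScript in the page and pull the result back into Python.
  has_context = tab.EvaluateJavaScript(
      'document.createElement("canvas").getContext("webgl") !== null')
  if not has_context:
    raise Exception('Could not create a WebGL context')
```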
A subset of the tests, called “pixel tests”, grab screen snapshots of the web page in order to validate Chromium's rendering architecture end-to-end. Where necessary, GPU-specific results are maintained for these tests. Some of these tests verify just a few pixels, using handwritten code, in order to use the same validation for all brands of GPUs.
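To make the "verify just a few pixels" idea concrete, here is a hedged sketch of that style of check: sample a handful of locations from a captured screenshot and compare each against an expected color with a small per-channel tolerance, which is what lets one expectation work across GPU vendors. The helper name, the `GetPixelColor` accessor, and the tolerance value are assumptions for illustration, not the actual harness code.

```python
# Illustrative sketch of expected-color pixel validation (not the actual
# harness code). 'screenshot' is assumed to expose GetPixelColor(x, y)
# returning an object with r/g/b channels, as Telemetry's bitmap does.

def CheckExpectedColors(screenshot, expectations, tolerance=2):
  for (x, y), (r, g, b) in expectations:
    actual = screenshot.GetPixelColor(x, y)
    if (abs(actual.r - r) > tolerance or
        abs(actual.g - g) > tolerance or
        abs(actual.b - b) > tolerance):
      raise Exception(
          'Pixel (%d, %d) was (%d, %d, %d); expected (%d, %d, %d)' %
          (x, y, actual.r, actual.g, actual.b, r, g, b))
```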
The GPU bots use the Chrome infrastructure team's recipe framework, and specifically the `chromium` and `chromium_trybot` recipes, to describe what tests to execute. Compared to the legacy master-side buildbot scripts, recipes make it easy to add new steps to the bots, change the bots' configuration, and run the tests locally in the same way that they are run on the bots. Additionally, the `chromium` and `chromium_trybot` recipes make it possible to send try jobs which add new steps to the bots. This single capability is a huge step forward from the previous configuration where new steps were added blindly, and could cause failures on the tryservers. For more details about the configuration of the bots, see the GPU bot details.
The physical hardware for the GPU bots lives in the Swarming pool*. The Swarming infrastructure (new docs, older but currently more complete docs) provides many benefits.
(* All but a few one-off GPU bots are in the swarming pool. The exceptions to the rule are described in the GPU bot details.)
The bots on the chromium.gpu.fyi waterfall are configured to always test top-of-tree ANGLE. This setup is done with a few lines of code in the tools/build workspace; search the code for “angle”.
These aspects of the bots are described in more detail below, and in linked pages. There is a presentation which gives a brief overview of this documentation and links back to various portions.
Please see the GPU Pixel Wrangling instructions for links to dashboards showing the status of various bots in the GPU fleet.
Most Chromium developers interact with the GPU bots in two ways: first, by observing the bots and examining the results of their execution, and second, by sending try jobs to them.
The GPU bots are grouped on the chromium.gpu and chromium.gpu.fyi waterfalls. Their current status can be easily observed there.
To send try jobs, you must first upload your CL to the codereview server. Then either click the "CQ dry run" link or run from the command line:

```
git cl try
```

This sends your job to the default set of try servers.
The GPU tests are part of the default set for Chromium CLs, and are run as part of the following tryservers' jobs:
- the tryserver.chromium.linux waterfall
- the tryserver.chromium.mac waterfall
- the tryserver.chromium.win waterfall
Scan down through the steps looking for the text “GPU”; that identifies those tests run on the GPU bots. For each test the “trigger” step can be ignored; the step further down for the test of the same name contains the results.
It's usually not necessary to explicitly send try jobs just for verifying GPU tests. If you want to, you must invoke “git cl try” separately for each tryserver master you want to reference, for example:
```
git cl try -b linux-rel
git cl try -b mac-rel
git cl try -b win7-rel
```
Alternatively, the Gerrit UI can be used to send a patch set to these try servers.
Three optional tryservers are also available which run additional tests. As of this writing, they run longer-running tests that can't run against all Chromium CLs due to lack of hardware capacity. They are added as part of the included tryservers for code changes to certain sub-directories.
Tryservers for the ANGLE project are also present on the tryserver.chromium.angle waterfall. These are invoked from the Gerrit user interface. They are configured similarly to the tryservers for regular Chromium patches, and run the same tests that are run on the chromium.gpu.fyi waterfall, in the same way (e.g., against ToT ANGLE).
If you find it necessary to try patches against other sub-repositories than Chromium (`src/`) and ANGLE (`src/third_party/angle/`), please file a bug with component Internals>GPU>Testing.
All of the GPU tests running on the bots can be run locally from a Chromium build. Many of the tests are simple executables:
- `angle_unittests`
- `gl_tests`
- `gl_unittests`
- `tab_capture_end2end_tests`
Some run only on the chromium.gpu.fyi waterfall, either because there isn't enough machine capacity at the moment, or because they're closed-source tests which aren't allowed to run on the regular Chromium waterfalls:
- `angle_deqp_gles2_tests`
- `angle_deqp_gles3_tests`
- `angle_end2end_tests`
- `audio_unittests`
The remaining GPU tests are run via Telemetry. In order to run them, just build the `chrome` target and then invoke `src/content/test/gpu/run_gpu_integration_test.py` with the appropriate argument. The tests this script can invoke are in `src/content/test/gpu/gpu_tests/`. For example:
```
run_gpu_integration_test.py context_lost --browser=release
run_gpu_integration_test.py pixel --browser=release
run_gpu_integration_test.py webgl_conformance --browser=release --webgl-conformance-version=1.0.2
run_gpu_integration_test.py maps --browser=release
run_gpu_integration_test.py screenshot_sync --browser=release
run_gpu_integration_test.py trace_test --browser=release
```
If you're testing on Android and have built and deployed `ChromePublic.apk` to the device, use `--browser=android-chromium` to invoke it.
Note: If you are on Linux and see this test harness exit immediately with **Non zero exit code**, it's probably because of some incompatible Python packages being installed. Please uninstall the `python-egenix-mxdatetime` and `python-logilab-common` packages in this case; see Issue 716241. This should not be happening any more since the GPU tests were switched to use the infra team's `vpython` harness.
You can run a subset of tests with this harness:
```
run_gpu_integration_test.py webgl_conformance --browser=release --test-filter=conformance_attribs
```
Figuring out the exact command line that was used to invoke the test on the bots can be a little tricky. The bots all run their tests via Swarming and isolates, meaning that the invocation of a step like `[trigger] webgl_conformance_tests on NVIDIA GPU...` will look like:
```
python -u 'E:\b\build\slave\Win7_Release__NVIDIA_\build\src\tools\swarming_client\swarming.py' trigger --swarming --isolate-server --priority 25 --shards 1 --task-name 'webgl_conformance_tests on NVIDIA GPU...'
```
You can figure out the additional command line arguments that were passed to each test on the bots by examining the trigger step and searching for the argument separator (` -- `). For a recent invocation of `webgl_conformance_tests`, this looked like:
```
webgl_conformance --show-stdout '--browser=release' -v '--extra-browser-args=--enable-logging=stderr --js-flags=--expose-gc' '--isolated-script-test-output=${ISOLATED_OUTDIR}/output.json'
```
You can leave off the `--isolated-script-test-output` argument, because that's used only by wrapper scripts, so this would leave a full command line of:
```
run_gpu_integration_test.py webgl_conformance --show-stdout '--browser=release' -v '--extra-browser-args=--enable-logging=stderr --js-flags=--expose-gc'
```
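If you find yourself doing this often, a few lines of Python can pull the test arguments out of a pasted trigger command line. This is purely a convenience sketch, not a script that exists in the checkout:

```python
# Convenience sketch: print everything after the first standalone '--'
# separator in a swarming trigger command line pasted on stdin.
import shlex
import sys

def ExtractTestArgs(trigger_cmd):
  tokens = shlex.split(trigger_cmd)
  if '--' not in tokens:
    return []
  return tokens[tokens.index('--') + 1:]

if __name__ == '__main__':
  print(' '.join(ExtractTestArgs(sys.stdin.read())))
```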
The Maps test requires you to authenticate to cloud storage in order to access the Web Page Replay archive containing the test. See Cloud Storage Credentials for documentation on setting this up.
The pixel tests
Any binary run remotely on a bot can also be run locally, assuming the local machine loosely matches the architecture and OS of the bot.
The easiest way to do this is to find the ID of the swarming task and use “swarming.py reproduce” to re-run it:
```
./src/tools/swarming_client/swarming.py reproduce -S [task ID]
```
The task ID can be found in the stdio for the "trigger" step for the test. For example, look at a recent build from the Mac Release (Intel) bot, and look at the `gl_unittests` step. You will see something like:
```
Triggered task: gl_unittests on Intel GPU on Mac/Mac-10.12.6/[TRUNCATED_ISOLATE_HASH]/Mac Release (Intel)/83664
To collect results, use:
  swarming.py collect -S --json /var/folders/[PATH_TO_TEMP_FILE].json
Or visit:
  [TASK_ID]
```
There is a difference between the isolate's hash and Swarming's task ID. Make sure you use the task ID and not the isolate's hash.
As of this writing, there seems to be a bug when attempting to re-run the Telemetry based GPU tests in this way. For the time being, this can be worked around by instead downloading the contents of the isolate. To do so, look more deeply into the trigger step's log.
As of this writing, the isolate hash appears twice in the command line. To download the isolate's contents into directory `foo` (note, this is in the "Help" section associated with the page for the isolate's task, but I'm not sure whether that's accessible only to Google employees or all members of the chromium.org organization):
```
python isolateserver.py download -I --namespace default-gzip -s [ISOLATE_HASH] --target foo
```
`isolateserver.py` will tell you the approximate command line to use. You should concatenate the `TEST_ARGS` (the test's arguments from the trigger step, following the ` -- ` separator) with `isolateserver.py`'s recommendation. The `ISOLATED_OUTDIR` variable can be safely replaced with `/tmp`.
Note that `isolateserver.py` downloads a large number of files (everything needed to run the test) and may take a while. There is a way to use `run_isolated.py` to achieve the same result, but as of this writing, there were problems doing so, so this procedure is not documented at this time.
Before attempting to download an isolate, you must ensure you have permission to access the isolate server. Full instructions can be found here. For most cases, you can simply run:
./src/tools/swarming_client/auth.py login --service=
The above link requires that you log in with your @google.com credentials. It‘s not known at the present time whether this works with @chromium.org accounts. Email kbr@ if you try this and find it doesn’t work.
See the Swarming documentation for instructions on how to upload your binaries to the isolate server and trigger execution on Swarming.
To create a zip archive of your personal Chromium build plus all of the Telemetry-based GPU tests' dependencies, which you can then move to another machine for testing:
out/Releasein this example).
python tools/mb/mb.py zip out/Release/ telemetry_gpu_integration_test out/telemetry_gpu_integration_test.zip
Then copy telemetry_gpu_integration_test.zip to another machine. Unzip it, and cd into the resulting directory. Invoke
content/test/gpu/run_gpu_integration_test.py as above.
This workflow has been tested successfully on Windows with a statically-linked Release build of Chrome.
Note: on one macOS machine, this command failed because of a broken
strip-json-comments symlink in
src/third_party/catapult/common/node_runner/node_runner/node_modules/.bin. Deleting that symlink allowed it to proceed.
Note also: on the same macOS machine, with a component build, this command failed to zip up a working Chromium binary. The browser failed to start with the following error:
[0626/180440.571670:FATAL:chrome_main_delegate.cc(1057)] Check failed: service_manifest_data_pack_.
In a pinch, this command could be used to bundle up everything, but the “out” directory could be deleted from the resulting zip archive, and the Chromium binaries moved over to the target machine. Then the command line arguments
--browser=exact --browser-executable=[path] can be used to launch that specific browser.
See the user guide for mb, the meta-build system, for more details.
The goal of the GPU bots is to avoid regressions in Chrome‘s rendering stack. To that end, let’s add as many tests as possible that will help catch regressions in the product. If you see a crazy bug in Chrome's rendering which would be easy to catch with a pixel test running in Chrome and hard to catch in any of the other test harnesses, please, invest the time to add a test!
There are a couple of different ways to add new tests to the bots:
Adding new tests to the GTest-based harnesses is straightforward and essentially requires no explanation.
As of this writing it isn‘t as easy as desired to add a new test to one of the Telemetry based harnesses. See Issue 352807. Let’s collectively work to address that issue. It would be great to reduce the number of steps on the GPU bots, or at least to avoid significantly increasing the number of steps on the bots. The WebGL conformance tests should probably remain a separate step, but some of the smaller Telemetry based tests (
context_lost_tests,
memory_test, etc.) should probably be combined into a single step.
If you are adding a new test to one of the existing tests (e.g.,
pixel_test), all you need to do is make sure that your new test runs correctly via isolates. See the documentation from the GPU bot details on adding new isolated tests for the gn args and authentication needed to upload isolates to the isolate server. Most likely the new test will be Telemetry based, and included in the
telemetry_gpu_test_run isolate. You can then invoke it via:
./src/tools/swarming_client/run_isolated.py -s [HASH] -I -- [TEST_NAME] [TEST_ARGUMENTS]
o## Adding new steps to the GPU Bots
The tests that are run by the GPU bots are described by a couple of JSON files in the Chromium workspace:
chromium.gpu.json
chromium.gpu.fyi.json
These files are autogenerated by the following script:
generate_buildbot_json.py
This script is documented in
testing/buildbot/README.md. The JSON files are parsed by the chromium and chromium_trybot recipes, and describe two basic types of tests:
base/test/launcher/frameworks.
The majority of the GPU tests are however:
A prerequisite of adding a new test to the bots is that that test run via isolates. Once that is done, modify
test_suites.pyl to add the test to the appropriate set of bots. Be careful when adding large new test steps to all of the bots, because the GPU bots are a limited resource and do not currently have the capacity to absorb large new test suites. It is safer to get new tests running on the chromium.gpu.fyi waterfall first, and expand from there to the chromium.gpu waterfall (which will also make them run against every Chromium CL by virtue of the
linux-rel,
mac-rel,
win7-rel and
android-marshmallow-arm64-rel tryservers' mirroring of the bots on this waterfall – so be careful!).
Tryjobs which add new test steps to the chromium.gpu.json file will run those new steps during the tryjob, which helps ensure that the new test won't break once it starts running on the waterfall.
Tryjobs which modify chromium.gpu.fyi.json can be sent to the
win_optional_gpu_tests_rel,
mac_optional_gpu_tests_rel and
linux_optional_gpu_tests_rel tryservers to help ensure that they won't break the FYI bots.
If pixel tests fail on the bots, the.
It‘s critically important to aggressively investigate and eliminate the root cause of any flakiness seen on the GPU bots. The bots have been known to run reliably for days at a time, and any flaky failures that are tolerated on the bots translate directly into instability of the browser experienced by customers. Critical bugs in subsystems like WebGL, affecting high-profile products like Google Maps, have escaped notice in the past because the bots were unreliable. After much re-work, the GPU bots are now among the most reliable automated test machines in the Chromium project. Let’s keep them that way.
Flakiness affecting the GPU tests can come in from highly unexpected sources. Here are some examples:
sem_post/
sem_waitprimitives breaking V8’s parallel garbage collection (Issue 609249).
If you notice flaky test failures either on the GPU waterfalls or try servers, please file bugs right away with the component Internals>GPU>Testing and include links to the failing builds and copies of the logs, since the logs expire after a few days. GPU pixel wranglers should give the highest priority to eliminating flakiness on the tree. | https://chromium.googlesource.com/chromium/src/+/72e981fb0457d7b7ea0c4f77b17b9abd00d2c73e/docs/gpu/gpu_testing.md | CC-MAIN-2019-39 | refinedweb | 2,757 | 55.13 |
Hi all,
Sorry for this, but it's driving me insane now, been at it for hours. Been searching the boards and google for similiar errors or even code examples of returning vectors, but not seeing anything similiar. Building on my last project of generating stars and planets, I decided to use a class to generate a vector filled with star names to associate with the stars I generated.
The problem I'm running into, is that I keep getting the compiler error " 'vector' does not name a type".
I realize(I think) that my problem is that the compiler is looking for a datatype to return from my method, but for the life of me I can't figure out what I should be placing there.
I'm guessing that it doesn't associate vector as a datatype, but rather a container, and wants to know what it contains. Is that right?
Anyways, the code I'm using is this...
If anyone could please toss me a hint how to return my vector?If anyone could please toss me a hint how to return my vector?Code:#include <iostream> #include <vector> using namespace std; class Names { public: Names() { vector<string> name_getter; name_getter.push_back("Acrux"); name_getter.push_back("Alcor"); ........ name_getter.push_back("Yildun"); name_getter.push_back("Zosma"); } ~Names(); } vector Names :: getNames() { return name_getter; }
Again, any help is much appriciated, Hope I'm not being annoying. | http://cboard.cprogramming.com/cplusplus-programming/63519-new-silly-newb-questoin-returning-vector-class.html | CC-MAIN-2015-06 | refinedweb | 232 | 61.97 |
I have to make this program: I have to enter x number of letters in the same line and the program has to count vowels and consonants. So I've made a program that does all that except I don't know how to enter the letters in the same line.
P.S. I know this is a basic c++ but couldn't find a better place to post this.
Code:#include <iostream> using namespace std; int main() { char letter; int vow = 0; int con = 0; cout << "Enter the letters. To end the input insert a: " << endl; do { cin >> letter; if (letter=='a'||letter=='e'||letter=='i'||letter=='o'||letter=='u'||letter=='A'||letter=='E'||letter=='I'||letter=='O'||letter=='U') { vow++; }else { con++; } }while(letter!='a'); cout << "Number of vowels is: " << vow << endl; cout << "Number of consonants is: " << con << endl; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/145188-how-input-char-variables-same-line.html | CC-MAIN-2015-18 | refinedweb | 143 | 68.6 |
explanation
the explanation in internet,but not very clear about it. Thank you
Struts Tell me good struts manual
struts - Struts
struts I want to know clear steps explanation of struts flow..please explain the flow clearly
Struts Books
. The book starts with an explanation of why Struts is a "good thing" and shows how Struts fits into a web architecture. The author then gives an explanation... started really quickly? Get Jakarta Struts Live for free, written by Rick Hightower
Struts - Struts
Struts Can u giva explanation of Struts with annotation withy an example? Hi friend,
For solving the problem visit to :
Thanks
Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good...://
Thanks
Struts 2
Really Simple History (RSH)
Really Simple History (RSH)
The Really Simple History (RSH) framework makes it easy for AJAX applications
to incorporate bookmarking and back and button support. By default, AJAX
example explanation - Java Beginners
example explanation can i have some explanation regarding the program given as serialization xample.... Hi friend,
import java.io.*;
import java.util.*;
import java.util.Date;
import java.io.Serializable
struts ebook
struts ebook please suggest a good ebook for
Struts - Framework
/struts/". Its a very good site to learn struts.
You dont need to be expert...Struts Good day to you Sir/madam,
How can i start struts application ?
Before that what kind of things necessary
Struts
Hibernate Tools Download
Hibernate Tools Download
In this we will show you how to download the Hibernate
Tools for the development. Hibernate Tools is really a good tool that help
servlet not working properly ...pls help me out....its really urgent
servlet not working properly ...pls help me out....its really urgent Hi,
Below is the front page of my project
1)enty.jsp
</form> </body> </html>
</form> </body> </html>> < | http://roseindia.net/tutorialhelp/comment/14494 | CC-MAIN-2014-41 | refinedweb | 318 | 68.06 |
Agenda
See also: IRC log
<Steven> can do
<Steven> :-)
<ShaneM> User Interface Rule #1: never anthropomorphize your software
<Steven> hwo many do we think are attending?
markus, shane, steven, gregory, roland
those that are here - will be here
didn't catch any regrets
<Steven> tina, mark
ah, i had hoped mark could be here, especially on the topic of forms in XHTML2
<Steven> alessio
markus is grabbing a bite to eat and a cup of coffee
<Steven> So that's 8 possibles
how many is the zakim reservation for?
<Steven> Gregory, watch this
User Interface Rule #2: if you do antrhomorphize your software, don't be surprised when it de-humanizes you
<Steven> woh
<Steven> 04 01zakim, room for 8 people at 13:00Z for 240 minutes?
zakim was REALLY flakey yesterday
even for Zakim
<Steven> Note code is CONF2
i can't tell the difference between the last rejected command, and the one zakim acknowledged ;-)
<Steven> nor can I, nor can I
ROTFL
<Steven> Doesn't look overbooked to me....
<ShaneM> low tech crap
<Steven> LOL
<Steven> I *really* must get that button made for you Shane"
<Steven> another 2 mins
<ShaneM> again - low tech crap
<Steven> booked at the hour
<Steven> shane jetlagged???
<Steven> This can't be the real Shane
<Steven> An imposter!
<ShaneM> this is what happens when i dont travel for ages.
<Steven> code CONF2 Roalnd
<Steven> Roland
<Steven>.
<Steven> Scribe: Steven
Steven: Great news about Vivek as CIO
<oedipus> GJR: amen
[discussion of Gov websites, XML, and use of]
<oedipus> second: Modules, Modularization, and the XHTML Family
XHTML2+XForms attribute clashes
<oedipus>
<inserted> ScribeNick: oedipus
SP: have to say 1.1 because fixes
so many things in XForms 1.0
... 1.1 in CR at moment
... we need to get the test suite through to implementations; EMC turned up with a test report on use of XForms; Chiva and Orbeon are fighting to be number 2; ubiquity is coming on strong
... looking good for 2 test suites for XForms 1.1 completion
<Steven> Roland: s/Chiva/Chiba/
SP: confident 1.1 will be out of last call by time XHTML2 goes to LC
RM: what about XML Events 2?
SP: xml events 2 taken things
over from XForms 1.1 - borrowed good ideas - should be in
events really
... not sure too much of a clash between 1.1 and 1.2
... xml events overlap is small
RM: something we should understand
<scribe> ACTION: Steven - investigate overlap between XML Events 2 and XForms 1.1 [recorded in]
<trackbot> Created ACTION-53 - - investigate overlap between XML Events 2 and XForms 1.1 [on Steven Pemberton - due 2009-03-17].
SM: Events 2 now modularized; handler module can flip directly with the builtin handlers in XForms 1.1
SP: doesn't handler module add action
SM: improved it (in air quotes) - don't think consistent or backwards compatible
SP: improvements
SM: MarkB's wish list; actions can use script
SM: third module - SCRIPT (XML Scripting Module)
SP: will investigate
... changes in Events 2 - conditional actions taken straight from XForms 1.1
... some changes in XML Events 2 that are part of XForms 1.0
SM: targetid in 1.1
GJR: only reason for 1.2 was attempt to harmonize forms between XHTML2 and HTML5
SM: conflicts between XForms 1.1
MG: submission
SM: submission element
SP: received email on that
MC: paste list into chat (or URI)
<Roland>
SP: encoding less of problem; target is a pain
SM: not convinced target is a pain - take them one at a time
GJR: target ok if strictly defined - otherwise people will use javascript hacks
MG: fear that have universal
problem -- XForms is the first external grammar trying to
incorporate; Common Attribute Collection growing exponentially;
i would like to think about our Common Attribute
Collection
... MathML and SVG invoked externally - today garuntee collisions; investigate generic solution
SM: in those cases, those
grammars not in XHTML2 namespace, so doesn't matter
... if role in global attributes, do it using namespace modifiers/prefixes
SP: agreed at some point to make
special exceptions with XForms; XHTML2 began with understanding
that XForms an integral part of XHTML2
... offered to import into XHTML2 namespace
... rules for porting XHTML2 onto other elements is that MUST be prefixed
MG: that should be a SHOULD, not a MUST
SP: if import XHTML2 href should namespace qualify with xh2:
MG: RelaxNG point of view - if define attributes, doesn't inherit namespace of parent element -- only belongs to namespace if prefixed
SP: attributes are not in a namespace unless namespace qualified; don't have to look into namespace to find attribute, but can also add namespace attributes to elements
SM: not sure understand MG's question
MG: 2 things: first, if there is
quirkiness in way make schemas so can be qualified and
unqualified depending on context
... second: how XForms editing will appear for users; if swallow all of XForms element set with our common attributes, may be recipie for confusion by authors; strikes me as strange to have @target on every XForms element in XML namespace
RM: take Common and break into smaller chunks so people can take what is most appropriate for their attribute collection needs
MG: looking forward at incorporation of future modules; common collection very large
SM: done what RM suggested - attribute collections and common
SP: common doesn't start big, but
grows in accordance with elements used
... thought every element had "common" on it
SM: not all have common and not all need it
MG: from XHTML2 PoV that is right, XHTML M12n different?
SM: no
RM: need to make this point crystal clear so as to avoid misunderstanding
SM: open to defining which of XHTML2 attribute collections are added to the common collection
MG: [reads from spec] -- no requirement on common; you are correct shane
SM: same thing M12n 1.0 says
MG: either change XHTML2 to
export reduced number of attribute collections; or try to
disambiguate all collections one-by-one
... user point of view, would counsel first suggestion
SP: what does "introduce a
reduced set" mean?
... RDFa - by importing RDFa adds to common because every element can have @property or @about
... not using common as a catch-all -- predicated on what other tech one is integrating
... many modules add attributes that have general effect
... regretable that if introduce @href, it bring @target with it -- problem @target used in XML Events and XForms
... solution not to reduce attribute sets -- doesn't solve problem - just makes certain things impossible
RM: need alternatives, then
SM: 2 points: 1) @target comes along with @href (could split them); 2) disagree with premise that by including whatever modules one is using, one is including every other module in attributes
SP: bit of risk: 1 place have @target and another @target that does something different
SM: not disagreeing with that
SP: one @target for XML Events and @target on submission from XForms
SM: removed @target from XML
Events 2
... @target a very minor issue; would have same name, but in VERY different contexts; don't see as source of confusion
GJR: let commentors decide
SM: @resource is bigger problem; RDFa attributes need to be available for XForms elements; how do we deal with that? no proposal
SP: @resource in XForms i opposed; just a renaming of the @src attribute, which is still there; created child of sumbission, resource, and wanted both to have same name;
SM: proposal?
SP: is possible to drop @resource in XForms and still retain functionality;
SM: can XForms handle that?
SP: no content in world except
for test suites that uses @resource -- everyone uses @src --
@resource added because "looked better" -- wanted to retain
@src
... can ask XForms to drop @resource and reinstate @src
MG: other way is go route of namespace-qualified attributes; would be good if have solution that will work universally; using namespace qualified in XHTML would garuntee that would work forever
SM: that's what we tell language
designers to do -- use MathML or SVG with our attributes, and
when do so MUST do so with namespace qualifiers
... the SHOULD should be a MUST
MG: from use perspective won't be great for authors, but satisfies engineering reqs
RM: for those creating dialects,
avoid clashes so not to have to create new namespace
... namespaces not popular; WGs going out of the way to avoid them
SM: appreciate MG's
proposal
... not sure we can achieve this politically; don't like rolling stuff into namespace
... XForms will be part of XHTML5 namespace
... wrap up - @target discussed - my position is don't include @target in common or @href in Submission
... hypertext attribute collection is not relevant and should not include @target
... as SP pointed out, coding not an issue; in XForms call "string" in our document make data-type more explicit
MG: i agree, but a schema
processor wouldn't
... ask XForms group to specify data-type in the case of encoding more specifically
SM: like that idea
GJR: plus 1
SP: sounds good - checking XForms 1.1
MG: on SUBMISSION element
"This element represents declarative instructions on what to submit, and how. Details of submit processing are described at 11 Submit."
"Common Attributes: Common"
<Steven> encoding
<Steven> Optional attribute specifying an encoding for serialization. The default is "UTF-8".
<ShaneM> ACTION: Shane to write up concrete proposal for dealing with XHTML 2 vs. XForms 1.1 attributes on SUBMISSION element etc. [recorded in]
<trackbot> Created ACTION-54 - Write up concrete proposal for dealing with XHTML 2 vs. XForms 1.1 attributes on SUBMISSION element etc. [on Shane McCarron - due 2009-03-17].
<Steven> XHTML2 - encoding = Encodings
<Steven> This attribute specifies the allowable encoding of the external resource referenced by the @src attribute. At its most general, it is a comma-separated list of encodings, such as "utf-8", "utf8, utf-16", or "utf-8, utf-16, *".
Special Attributes for XForms 1.0 SUBMISSION bind, ref, action, method, version, indent. mediatype, encoding. omit-xml-declaration, standalone, cdata-section-elements, replace, instance, separator, includenamespaceprefixes
MG: i asked to have on agenda, but referring to different post
SP: strongly support this
... long had support for this PoV bar one member - can now get rid of h1 to h6
... like approach of putting them in legacy as long as make clear are legacy
RM: single module/collection
called "legacy"?
... need a definition
SM: don't have legacy module yet - anything legacy and groups should be in own modules
<ShaneM> ACTION: Shane to create a new h1-h6 module marked as legacy. [recorded in]
<trackbot> Created ACTION-55 - Create a new h1-h6 module marked as legacy. [on Shane McCarron - due 2009-03-17].
MG: what goes in there in place of h1 to h6
SM: @target is good example
GJR: as long as author suggests, user accepts or rejects
SM: don't remember this discussion
MG: recap - have caption module
now - available in TABLE, OBJECT and LISTS (label element for
lists gone) - question is, in terms of CAPTIONs are we really
done there -- should it be made part of common element
collection so anything can be CAPTIONed
... second question: what happens with @title - perhaps candidate for legacy module
SM: understand @title causes internationalization problem
MG: can we make CAPTION full replacement for @title and kill @title
SM: CAPTION part of text content module?
GJR: CAPTION used as header in TABLE in HTML
MG: have on TABLE and LISTS,
which is good
... if in text module, could have captions on ABBR etc.
SM: if replacing @title, needs to
be allowed everywhere @title is currently allowed
... or, this could tie back into discussion of the for attribute
SP: will take up when discuss @for
MG: if replace @title needs to be everywhere - assuming that title useful everywhere - is that true
GJR: needed for abbreviated form markup
SM: allowing CAPTION everywhere to replace @title
RM: rule for CAPTION?
GJR: nested header in TABLE context
SP: CSS selectors for
CAPTION
... @title used for hover in HTML4x
<Steven>:
GJR: in audio context state-of-art is either speak @title or speak link text
SP: XForms has HINT element for hover events; title widely used for abbr
SM: CAPTION part of text module, can be child of ABBR
<Zakim> oedipus, you wanted to ask what is content model for CAPTION for ABBR?
SP: i18n wanted CAPTION as well as @title so can markup @title values
GJR: intention of CAPTION in
table is to provide terse descriptor
... what is a caption and what is a description
<Steven> They wanted a TITLE element as child of all elements to allow marked up versions of @title
ABBR<TITLE><STYLE>rich text here</STYLE></TITLE>
ABBR TITLE STYLE to apply speech/audio CSS
GJR: alt and labelledby from
ARIA
... question for ARIA 2.0 is can labelledby and describedby take an IDREF
RM: XForms: has LABEL (rendered user has to do nothing to understand), HINT (implied gesture by user), HELP (available on user demand)
suggested model for HTML5:
<ELEMENT>
<LEGEND> </LEGEND> - required (maps to HTML4's @alt)
<CAPTION> </CAPTION> - required
<DESC> </DESC> - required (maps to HTML4's @longdesc)
<HELP> </HELP>
</ELEMENT>
RM: not clear on how to say CAPTION is alternative to @title -- behavior different
SM: correct
RM: content model for LABEL and HINT the same - rendering instructions different
MG: need to look both at CAPTION and TITLE element, not either or
GJR: yes
MG: for abbreviations, not @title or @caption, but expansion
MG: @title needs to get bug fixed
GJR: also exposition methods are many - show expansion on status line
SM: proposal somewhere for "full" so don't have to repeat expansions
<Steven> <abbr id="bbc" full="British Broadcasting Corporation">BBC</a>
MG: RelaxNG shows full attribute available on ABBR
GJR: thanks!!!!
f u l l
<Steven>.
<Steven> <p>The <span id="w3c">World Wide Web Consortium</span> (<abbr full="#w3c">W3C</abbr>)
<Steven> develops interoperable technologies (specifications, guidelines, software, and tools)
<Steven> to lead the Web to its full potential. <abbr full="#w3c">W3C</abbr> is a forum for
<Steven> information, commerce, communication, and collective understanding.</p>
GJR: doesn't like use of SPAN and
id
... in your example rather SPAN to provide the ID, whata about DFN
<dfn id="w3c">World Wide Web Consortium</dfn> (<abbr full="#w3c">W3C</abbr>)
SM: CAPTION part of text content set?
RM: what is its role and how do we define how it is rendered?
SP: CAPTION a child of TABLE -- what else? IMG?
MG: because anything can be image, leads logically to inclusion in common set
SP: yep
MG: no or yes on moving @title to legacy and introducing TITLE element
SP: not make legacy, but stating
have option to use on or the other
... @title and TITLE have same meaning
... HenryT suggested should be format for attributes that allow them to be children
... for simple use, @title attribute ok, if want something richer, use TITLE
GJR: deprecate @title in favor of TITLE?
SP: if conflict, child wins
<Steven> Not deprecate, just allow both
SM: content of title element the tooltip for the entire enclosed element
RM: yes
SM: RDFa - @title has a property of "DC.title" - only for @title in HEAD or all elements?
SP: attribute and elements; actual reason for title is to provide a DC.title
SM: never captured that in spec - will add to section currently revising
steven, what about @title and @style deprecated in favor of TITLE and STYLE
SM: takes interpretation; need to explictly state what we mean
SP: agreed
SM: different TITLE element than one in draft
SP: not different in meaning
SM: sounds ok
<ShaneM> ACTION: Shane to update the title element so it is clear that it can also be used in the text content set and that its contents become the "tooltip" for the enclosing element. [recorded in]
<trackbot> Created ACTION-56 - Update the title element so it is clear that it can also be used in the text content set and that its contents become the \"tooltip\" for the enclosing element. [on Shane McCarron - due 2009-03-17].
GJR: "tooltip" makes me squirm uncomfortable
SM: have to carefully craft text; content of TITLE element becomes tool-tip for HEAD element
SP: if made visible, should have tool-tip
SM: when TITLE defined in head, applies to document as whole, when used inline, refers to what it encases
SP: since TITLE element says is shorthand for property="title", RDFa already has special rules for children of HEAD which apply to document as whole; shorthand for meta
RM: would be good to restate that explicitly
GJR: replace "tooltip" with "user notification"?
SM: text module defines text content stuff including TITLE element
MG: some elements have structure
- lists and tables
... applies to TABLE, lists, OBJECT, what else?
<Steven> [take 10]
TEN MINUTE BREAK - RECONVENE AT QUARTER TO HOUR
Scribe+ Gregory_Rosmaita
RM: suggest moving on to "TOPIC:
Title Element and meta properties"
... can determine if earlier decision impacts title
RM: related to earlier discussion on title
SM: reading the definition of
TITLE element right now - wanted on agenda because didn't
understand how tied together meta property of title ties to the
TITLE element
... have very casual statement in spec
<ShaneM>
SM: only place addressed in spec,
currently
... thought said that about other things
... removed most of meta-data stuff from XHTML2 because it is available via RDFa
SP: if TITLE equivalent to meta-title, anything meta-title can do TITLE should be able to do
SM: disagree - UA not going to look for TITLE in head, name document, but then change title in window frame when encounters new TITLE
SP: if say TITLE is metadata about document and state that UA has to deal with metadata in uniform way, then we can treat them identically
RM: TITLE is required in
HEAD
... RDF processor would have to disambiguate
SP: if metadata, should have 1 story about metadata -- up to now, saying TITLE in HEAD is shorthand for property="DC.title" -- all metadata, should treat all metadata in same way
RM: as the TITLE element is currently defined is a moot point; don't need processor to look anywhere but HEAD
SM: Steven trying to divorce those issues, which may be a good thing
<Steven> I think they are orthogonal
SM: don't believe that we have
anything in XHTML2 to date othere than following sentence that
implies that UA has to understand anything about RDFa
... lots of properties that have interesting correspondences with existing elements
... more appropriate for us to put requirements on RDFa processors - that they extract semantics from markup; put reqs on UA that interpret properties as markup
SP: what do you mean by
"interpret properties as markup"?
... RDFa generalized method to add meta-data -- want to integrate the 2
SM: should be integrated in
regards metadata
... UA looks at elements for attributes and does things with them; what it does with them is metadata
SP: triples are simply a way of storing metadata; unified how metadata works in XHTML, underlying info the same no matter what format stored in; HTML browser takes part of metadata and puts in window title doesn't change fact that TITLE in HEAD is metadata
RM: UA has to hunt for RDFa
SP: why "hunting"? TITLE element or META property="DC.title" -- both should be stored in same way
SM: UAs don't look at metainfo; for servers, archiving, etc.
SP: my UA supplies bar across the
top that tells me all about the metadata and enables me to use
them
... at top of XHTML2 draft, button to take me home, button to take to previous, etc. -- plucking out metadata from head and doing something with it
RM: not expected to understand arbitrary RDFa properties
SP: no, just doesn't matter origin of metadata, UA should respond in uniform way
SM: have a vocabulary for that
SP: in doc say TITLE element equivalent to title property for document
SM: challanging fact that say that in 1 sentence; think unreasonable requirement on UA
SP: not unreasonable - metadata
is metadata; would be mistake to separate them -- how to
differentiate?
... should be unified
SM: if going to say UAs interpret "well-known metadata in XML vocabulary" as document properties, RDFa processors must extract other metadata?
<Roland>
RM: clarification question: we have metadata defined in vocab document, but title not included in vocab
GJR: vocab needs to be updated with LC aria roles
SM: took out of vocab because
RDFa handles that now
... forgot we had removed from the vocab document
... example in section 7.3 needs to change - no property "title"
... do have defined vocabulary collection; doesn't have to be a title term in it, but there are other terms; do these have corresponding attributes in XML
RM: have to define it;
q_
SM: if turn whole thing around -
there is this mapping, want unified method of metadata, RDFa
processors need to extract as well
... unified view needs both sides of problem described?
SP: definitely
... RDFa is described for XHTML 1.1 + RDFa
<Zakim> oedipus, you wanted to say want to discuss bringing CITE and @cite into harmony
GJR: discussion of bringing CITE element in line with the @cite
SP: CITE and @cite do need to be
unified
... don't know why need cite element
GJR: 2 scenarios - one is as a
pointer to CITE element for that resource
... the other is text string that contains human-parseable information
SP: CITE element gives human
readable text - provides attribute that says where citing
from
... Q and BLOCKQUOTE use @cite as @src
... why need @cite attribute if CITE does same work
: "
<Steven> By the way, I said the opposite - we don't need CITE element, since @cite adds the necessary magic to any element
oops, sorry steven
GJR: i suggested tying CITE element to Q and BLOCKQUOTE, etc. by a for/id mechanism
<ShaneM> What does this mean in terms of document meta data? In terms of the Role module? In terms of assistive technologies? <img src="some.jpg" property="xv:role" about="#this" id="this" content="xv:banner" />
SM: point is "if RDFa style annotation affects way UAs treat metadata globally, UA and AT have to know what RDFa applies to
<Roland>
RM: don't think we finished discussion of
"In XHTML 2 we have meta only taking "Common" as its attributes. That
means we are dropping name, scheme, and http-equiv. I am pretty sure we
<scribe> dropped http-equiv on purpose, but I feel like name and scheme should
still be there?"
SP: for http-equiv, said ought to
be done differently --- http-equiv mixes a lot of things in one
bucket (media type and encoding, just 2)
... @name i'm not sure upon; if attempting to be compatible with old XHTML, leave @name on META in same way left on A (anchor) for legacy content so works in both old and new UAs
... if don't care about legacy, would suggest drop @name and use meta properties
... can do META in body -- that's what RDFa is all about -- don't have scheme there, so why in HEAD because of BODY?
SM: raised issue because wanted
to determine if wanted to be backwards compatible
... don't think @name or @scheme should be retained
GJR: plus 1 to SM
SM: larger issue:
... what it means for document metadata - META element anywhere in document, related issue
<ShaneM>
SM: currently if look at content model for meta and link, link permits active links, meta permits PCDATA (to support RDFa in early stages) -- overcome by events, could put in historical module
SP: functionality of supplying META in body is provided by other means
SM: rich content in META may break support for XHTML2 in current user agents; should not have rich content model for meta or link
SP: can live with that
RM: if have alternative, let's stick with that -- keep as easy as possible
RESOLUTION: remove @name or @scheme from XHTML2; investigate feasibility of historical module
SP: in terms of what META does
now, defined using RDFa techniques, easy to assert -- span in
xhtml; nothing extra needs to be done in body -- can have
nested spans with properties; not functional but purely
syntatic
... doesn't add extra functionality, but does add consistency; removing fences is best; but don't feel strongly either way
... channelling MarkBirbeck -- having a HEAD is an historical artifact; could just have TITLE in BODY; only reason HEAD there was to mark "stuff not presented to user" before CSS; nowadays, distinction between HEAD and BODY less relevant
SM: should we permit nested META
elements
... i say "no" because not supported
... but do understand other side
RM: is nested META only way to achieve what looking for?
SM: in HEAD it is; in BODY can use RDFa
<ShaneM> ACTION: Shane to revert the link and meta content models to their XHTML M12N 1.1 content model but permit nested meta elements in the head. Do not permit @name and @scheme on meta though - they are not needed. [recorded in]
].
SP: if a use case and request, then should do it; those who don't want to use it can ignore it, those who do want to use nested META can
SM: In XHTML 2 we have meta only taking "Common" as its attributes. That
means we are dropping name, scheme, and http-equiv. I am pretty sure we
<scribe> dropped http-equiv on purpose, but I feel like name and scheme should
still be there?
RM: can be satisfied by other approach?
SM: Markus, can you provide me with details on nested META use cases?
MG: will do
timer, stop
<Roland>
... in XHTML2, not possible for LABEL to be separated from control; presentation methodology where appears on screen, so don't need @for for that purpose
... however, i do like the use of for/id to link Q and CITE and other links
GJR: one of the request i fielded about INS and DEL is that there is no way of binding what is being deleted to what has been inserted and i suggested that for/id relationship could fill that need
<INS id="insert13">This is the new text</INS>
<DEL for="insert13">This is the text to be deleted.</DEL>
s/for=insert11"
proposed:
1. That the for/id mechanism, which is already broadly supported in user agents and assistive technologies, be reused and extended in XHTML2 to provide explicit bindings between labelling text and the object or objects that text labels;
2. That the for/id mechanism serve as a means of re-using values for ABBR, D, DFN, Q and CITE;
3. A for/id relationship should also be used to mark the text which has been inserted, contained in an INS, and that which it is intended to replace, contained in a DEL tag, as in the following example:
GJR: value would be IDREF
SP: instead of "full" on ABBR use "for" on ABBR
<dfn full="expansion">
SP: appreciate idea, but if going to generalize @for, have to ensure there is a general meaning - what does a SPAN for="" -- should only be added to common if used in single way
RM: difficulties of common -- already very large
SP: if has general meaning should be in common - principle if attributes are generalizable, then more useful
GJR: @id globally for free,
... @for - purpose to establish explicit bindings and a re-use mechanism
RM: limited set of
elements?
... start with expansion and then consider if should be common element
... references to common definition
<scribe> ACTION: Gregory - investigate use cases for genericizing @for to ascertain if should be added to common/core attributes [recorded in]
<trackbot> Created ACTION-58 - - investigate use cases for genericizing @for to ascertain if should be added to common/core attributes [on Gregory Rosmaita - due 2009-03-17].
<scribe> ACTION: Gregory - is @for useful in specific cases (enumerate) or can it be used generally [recorded in]
<trackbot> Created ACTION-59 - - is @for useful in specific cases (enumerate) or can it be used generally [on Gregory Rosmaita - due 2009-03-17].
GJR: will provide concrete examples as per RM's suggestion
<ShaneM> Trying to remember what we agreed...
SP: why elements over attributes
GJR: to keep the attributes from being attached to SPAN
SM: should have to insert elements for elements sake -- shouldn't need INS or DEL to carry this information; one option is use of semanticless element by adding attributes to annotate a change, the other is to do it declaratively with elements
<ShaneM> Content models of historical INS and DEL are not supportable.
* Roland's straw-man example: @diff, with values of add, chg, del
<ShaneM> So it is possible to have them within the text module as a way of having elements with explicit semantics as opposed to inserting a "span"...
is d<DEL>i</DEL><INS>o</INS> legal?
is d<DEL>i</DEL><INS>o</INS>g legal?
1. a means of marking editorial changes;
2. a means of classifying an editorial change;
3. a means of conveying when and by whom the change was affected;
4.? "
<ShaneM> WOW - wonder if these modification elements/attributes define document properties that need tied in meta data via RDFa?
ould MOD with @src be handled differently than @src on other elements? should @href be used instead?
SM: don't understand how RDFa ties into this
GJR: just tasked to see if RDFa could satisfy use cases
RM: fact of insertion and deletion more than RDFa
GJR: RDFa is useful for marking who made the change - when was made
SM: can make a triple out of
anything, but just because one can doesn't mean one
should
... information interesting to those data-mining; implicit relationship between who, the when and the what of the change should be sloughed off on RDFa
SP: details of who made change is
metadata, and if is metadata, then should be treated as
metadata
... all metadata should be treated the same
SM: follow that to the logical end - every paragraph is metadata -- everything can be tagged metadata
SP: don't consider P metadata, but person who changed contents of P (data about data) is metadata
SM: can argue pretty cogently that everything is metadata
RM: this is data that has been inserted; this has been inserted; don't care who wrote or when or why?
GJR: right but that underlying should be able to provided to a user who wants to know it
1. a means of marking editorial changes;
2. a means of classifying an editorial change;
3. a means of conveying when and by whom the change was affected;
4. a means of binding an insertion with a deletion through for/id
RM: 1 and 2 tied together
GJR: yes
... important new consideration is 4 - binding waht has been deleted to what has been instered when both are in the same document
RM: that is metadata -- have to INS something in several places in document -- all created by same purpose and on same page -- bunch of changes for particular purpose
GJR: one thing we discussed was INS and DEL as inline and MOD as the block element for marking change
SP: attributist - not big fan of reintroducing these elements; use of generic SPAN is frowned upon by some
SM: should i conclude you are in favor of including INS and DEL
GJR: yes
RM: can't get excited over issue
- can live with attributes or elements, as long as elements are
local in scope
... insert a section with an INS shouldn't be allowed
SM: agree
... how opposed are you, steven?
SP: not sure -- very much prefer
attribute solution, but understand long history of element
version;
... on the other hand, point of moving to XHTML2 is removing elements that mark up structures, but semantics
... part of semantics, not structure which is why i prefer attribute solution
<ShaneM> PROPOSAL: introduce the INS and DEL elements as "legacy" in their own module and only in the text content set.
MG: could we put it in legacy module?
SP: could live with that
GJR: so could i
RM: sounds ok to me
<ShaneM> ACTION: Shane to develop a legacy INS and DEL module that adds those elements to the text content set. [recorded in]
<trackbot> Created ACTION-60 - Develop a legacy INS and DEL module that adds those elements to the text content set. [on Shane McCarron - due 2009-03-17].
RESOLUTION: introduce the INS and DEL elements as "legacy" in their own module and only in the text content set
SM: identify granular features and feature collections that can map to the @implement for Script Module
RM: yes
SM: architecture for document:
use RDFa and the annotation conventions from the vocab document
to identify these collections and their granular parts; parts
and selections map essentially to modules in M12n 2.0
... question: do we intend to support @implements in M12n 1.0
... my answer is yes
SP: sooner the better
SM: features should map to M12n
2.0 - M12n 1 and M12n 2 don't overlap - do we need 2 features
modules, one for each?
... don't think can have 2 versions of features, because need to move language forward
... if don't have 2 versions of features document, needs a well known URI (as with the vocab document) -- will need version names
... summary: features doc needs to represent current state-of-the art and a conversion/adaptation guidance; need to organize a heirarchy in RDF
SP: that's a lot to think about
SM: that's why the action is
still outstanding!
... suggestion: can get movement on this by picking core features we care about having in attributes today and call it the features document and say will be added to
SP: couldn't it just be a CURIE?
RM: if feature is a URI, should define URI for each feature
SP: implements=xh1:foo
... implements=xh2:xforms
SM: URI or safe CURIE
... reserved words in single repository
... implement XForms 1.1 would then have meaning -- reserved word mapping available
SP: if do that, CURIEs allows one to say if prefixed use certain namespace
SM: "reserved words" can mean
whatever we want - take out of context of CURIEs
... pre-fix-less CURIEs are problematic; can only have 1 default for each
SP: not going into vocab document, but if can't have key words as appropriate value of a URI
SM: #$@!%
SP: accept what RM suggested - not going to write very often; will be copied-and-pasted
SM: done all the time by developers -- bringing in scripts
RM: cut-and-paste
SP: like namespace or doctype - part of standard template
30 MINUTE WARNING
SM: probably just use URIs for most part in examples to keep simple and clear
+
SM: features we need implements
values for today: Access, Role, XHTML 1.2, client-side
RDFa
... will put up skeletal document for review and hope someone is inspired to help me
<ShaneM> Note - NO RESERVED WORDS FOR @implements
MG: about M12n 1.0, right?
SM: true
MG: those are intertwined heavily - hard to seperate them
GJR: agrees with Markus
SM: tangental issue on how to do fragment announcements - would like to address seperately
SP: difference between language sub-setting and UA sub-setting
SM: yes
SP: should be allowable to sub-set languages, but not user agents
SM: yes
... UAs need to support everything through modules
SP: one reason introduced M12n
was to try and pull the world into a decent standard set of
languages; needed differences to be consistent and
predictable
... m12n allows sub-setting and extension in defineable and controllable ways
... can sub-set language as much as you want provided UA accepts that sub-set as well as super-set
MG: provider of module can mark / designate module by pointing to it
SM: optional
MG: yes
SP: UA still accepts full
language, but some versions of language are checkable in
reduced version
... what is the "win"? if all UAs have to accept whole module, who wins?
SM: language designers - XHTML family markup languages with restriction on content authors can create
SP: think i can live with
that
... didn't distinguish between content sub-setting and UA sub-setting
SM: exactly; mea culpa
SP: while on the topic, reraise RM's complaint about not being able to declare taht content is XHTML Print and XHTML Base compliant
RM: would like to address at some
point
... if written content specifically so will validate by XHTML 1.1 processor and XHTML Basic processor
... when put in declaration i want, mobile processor won't accept
... current limitation is have to declare 1
SM: real problem, agree
RM: recognize this happens and people want to do these things - out of one, many; how to write content for the entire web?
SM: doctype -- TBL doesn't like
use of doctype anymore; but if using doctype, can't use more
than one
... in theory, could define a set of rules for doctype public identifiers that would mean "this document is blah, blah, and blah" but not sure if that scales
... there is the @version attribute - currently only takes single value
MG: what about link rel="profile"
GJR: also thinking along @profile lines
SP: rel="version"
SM: interesting idea
MG: what meaning does @profile have then?
SP: rel="profile" used for profiles that define value of attributes rather than implying content model
SM: how UA should interpret
values of rel, href and class in HTML
... could just add another reserved value for this case
MG: how planning/hoping to do in DAISY with grammars on top of XHTML2 with link rel="version" -- only thinking of having one, but possibility of multiples is tantalizing
SM: XML Schema's location attribute to declare multiple schemas
RM: is a hint -- we need to give locatoin
SM: @location can point to 5 different schemas
MG: solution should be general enough for use in processors
SM: don't want to rely on a
hint
... like idea of using LINK
<Steven> +1
GJR: plus 1
SM: don't think any existing rel values map to case; need new one
MG: "version"
... href cannonical URI - processor can auto-discover resources associated with language
... capable of using RDFa vocabulary - what is at end of namespace URI for DAISY use would be grammar
<ShaneM> rel="version" href="canonical URI for version" - need to create good examples for these.
SM: not sure what canonical URIs for languages
SP: for us to decide
<Steven> the TR URI
SM: should be a fixed string a
language processor can rely on
... what makes for a good identifier -- i.e. one not date-stamped
TEN MINUTE WARNING
SP: been a really good meeting
RM: got through several items
SM: like this format better than the 1 hour meetings -- get more done
RM: will have regular call
tomorrow (11 March 2009)
... schedule another virtual F2F before end of march
SP: dates?
ADJOURNED
RM: XHTML2 call on 11 March 2009 -- will discuss scheduling of next virtual face2face
<scribe> scribe: Gregory_Rosmaita
This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/Orbion/Orbeon/ Succeeded: i/SP: have to say/ScribeNick: oedipus Succeeded: s/coding/encoding/ Succeeded: s/MUSTG/MUST/ Succeeded: s/SM: what/SP: what/ Succeeded: s/aa/a/ Succeeded: s/RM: alt/GJR: alt/ Succeeded: s/what DFN/whata about DFN/ Succeeded: s/RM: deprecate/GJR: deprecate/ Succeeded: s/but not included/but title not included/ Succeeded: s/@// Succeeded: s/role/xv:role/ WARNING: Bad s/// command: s/for=insert11" Succeeded: s/for="insert11"/for="insert13"/ Succeeded: s/suggestin/suggestion/ Succeeded: s/we discuss/we discussed/ Succeeded: s/space/safe/ Found Scribe: Steven Inferring ScribeNick: Steven Found ScribeNick: oedipus Found Scribe: Gregory_Rosmaita Scribes: Steven, Gregory_Rosmaita ScribeNicks: oedipus, Steven Default Present: Steven, Gregory_Rosmaita, Roland, Markus, ShaneM Present: Steven Gregory_Rosmaita Roland Markus ShaneM Regrets: Tina Mark Alessio Agenda: Got date from IRC log name: 10 Mar 2009 Guessing minutes URL: People with action items: - cases for genericizing gregory investigate shane steven use WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2009/03/10-xhtml-minutes.html | CC-MAIN-2015-27 | refinedweb | 6,696 | 54.05 |
On 26th April 2019, the most hyped and most anticipated movie of our generation was released Avengers Endgame, and I managed to grab the best seats in the hall for the first day first show in IMAX, Bangalore.
When the release date for the movie was announced, I found out that I had booked a flight to go home in the evening for the exact same day three months ago. Living in Bangalore, with the airport being almost 3hrs away I knew that If I’m not able to get the tickets for the morning show, the movie I was waiting for almost a year will most probably get spoiled & I won’t be able to view it in IMAX which I was dying for.
I was in a tough spot, this is a city where you need to book tickets at least 3–4 days in advance for the weekends, and with the added hype around Avengers, something had to be done. My friends and I would be regularly refreshing bookmyshow, a popular event ticket booking portal so that we don’t miss out on the tickets, but I knew this strategy would never work.
The only solution was to automate this somehow, write a script that periodically refreshes the site and when the tickets are available to alert us in a way that we don’t miss it.
Finding out when the tickets are out
So every movie which is coming soon on bookmyshow gets a page of its own, for every city it is different. For Avengers Endgame for Bangalore, it was this
The only difference was it didn’t have the Book Tickets button at the time when it was still in the coming soon phase. So all I needed to do was periodically load the page and check when it finally gets the button element. The shell script for it was pretty simple.
# curl the bookmyshow website
output=`curl -s | grep -c 'Book Tickets'`
if [ $output -gt 0 ] then # Alert somehow, will come to that later fi
To periodically run it, I had to run the script as a cron job. But adding it in my own machine meant I had to keep it on at all times which wasn’t feasible. I needed to add this cron job in some remote server which will keep running no matter what. So I added it in a remote AWS server that I had provisioned for testing. You can refer to the first part of my other medium article in case you need to provision an ec2 AWS instance for yourself. And I added the below crontab entry
# Run the alert_ticket script every minute as a scheduled cron */1 * * * * sh /home/ec2-user/check_ticket.sh
Alert when the tickets are available
Now I needed some way in which the script could alert us after the tickets were available. The only thing I have on my person at all times is my mobile, so either sending a text message or triggering an automated call seemed the best option for me. I figured out that I can use Twilio APIs to achieve the same for free since they provide enough credits at the start to make quite a lot of calls & messages.
The first part was to get a Twilio number and to register a few mobile numbers with them which would be getting the calls & messages using the docs provided here. The quickstart guides for both SMS and Voice were simple enough to follow. I ended up with one script which would alert both by calling & messaging different numbers to ensure at least someone in my group is alerted.
#!/usr/bin/python
from twilio.rest import Client
# Your Account SID from twilio.com/console account_sid = "xxxx" # Your Auth Token from twilio.com/console auth_token = "xxxx"
client = Client(account_sid, auth_token)
message = client.messages.create( to="+xxxxxxxxxxxx", from_="+zzzzzzzzzzzz", body="Book Tickets for avengers endgame")
call = client.calls.create( to="+yyyyyyyyyyyy", from_="+zzzzzzzzzzzz", url="")
# Send message print(message.sid)
# Trigger a call print(call.sid)
And my check_ticket.sh script changed to calling the above script when it figures out the tickets are available.
# Curl the bookmyshow website
output=`curl -s | grep -c 'Book Tickets'`
if [ $output -gt 0 ] then /usr/bin/python /home/ec2-user/alert_by_twilio.sh fi
Conclusion
And that’s all folks — two small scripts on a remote server are all it takes to set everything up. When I set it up for Avengers I & my friend got a call & message respectively, we immediately head over to the site. It took some time for the portal to upload the tickets for the various halls in different locations. After multiple failed payments we finally got the best seats in the hall and had an amazing spoiler-free, first time, IMAX experience for the movie. | https://hackernoon.com/how-i-coded-my-way-to-early-tickets-for-avengers-endgame-f2efa3a128a8 | CC-MAIN-2019-43 | refinedweb | 806 | 66.88 |
NAME
Data::Tubes::Util
DESCRIPTION
Helper functions for automatic management of argument lists and other common tasks.
FUNCTIONS
args_array_with_options
   my ($aref, $args) = args_array_with_options(@list, \%defaults);
   # OR
   my ($aref, $args) = args_array_with_options(@list, \%args, \%defaults);
helper function to ease parsing of input parameters. This is mostly useful when your function usually takes a list as input, but you want to be able to provide an optional hash of arguments.
The function returns an array reference with the list of parameters, and a hash reference of arguments for less common things.
When calling this function, you are always supposed to pass a hash reference of options, which will act as a default. If the element immediately before is a hash reference itself, it will be considered the input for overriding arguments. Their combination (a simple overriding at the highest hash level) is then returned as $args.
The typical way to invoke this function is like this:

   sub foo {
      my ($list, $args) = args_array_with_options(@_, {bar => 'baz'});
      ...
   }
so that the function foo can be called with an optional trailing hash reference containing the arguments, like this:

   foo(qw< this and that >, {bar => 'galook!'});
In case your list might actually contain hash references, you will have to take this into consideration.
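For a concrete picture, here is a minimal sketch of what the two forms return; it assumes the function is imported from the module:

   use Data::Tubes::Util qw< args_array_with_options >;

   # defaults only
   my ($list, $args) = args_array_with_options(qw< a b c >, {bar => 'baz'});
   # $list is ['a', 'b', 'c'], $args is {bar => 'baz'}

   # a hash reference right before the defaults overrides them
   ($list, $args) =
      args_array_with_options(qw< a b c >, {bar => 'galook!'}, {bar => 'baz'});
   # $list is ['a', 'b', 'c'], $args is {bar => 'galook!'}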
assert_all_different
$bool = assert_all_different(@strings);
checks that all strings in
@stringsare different. Returns
1if the check is successful, throws an exception otherwise. The exception is a hash reference with a key
messageset to the first string that is found repeated.
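A short usage sketch, catching the exception with eval:

   use Data::Tubes::Util qw< assert_all_different >;

   my $ok = eval { assert_all_different(qw< foo bar baz >) };   # $ok is 1

   eval { assert_all_different(qw< foo bar foo >) };
   if (my $exception = $@) {
      warn "repeated string: $exception->{message}\n";   # prints "foo"
   }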
generalized_hashy
$outcome = generalized_hashy($text, %args); # OR $outcome = generalized_hashy(%args); # OR $outcome = generalized_hashy(\%args);
very generic parsing function that tries to figure out a hash out of an input text.
The default settings are optimezed for whipuptitude and DWIMmery. This means that a lot of strings that you would hardly consider sane are parsed anyway, just to give you something fast. If you need to be precise instead, you can either customize the different
%args, use a different parsing function or... roll your own.
The returned value is a hash with the following keys:
failpos
in case of failure, it reports the
position in the input text where the parsing was unsuccessful. It is absent when the parsing succeeds;
failure
in case of failure, it reports an error message. It is absent when the parsing succeeds;
hash
the parsed hash. It is absent when the parsing fails;
pos
the position at which the parsing ended, because the "close" sequence was found;
res
the number of characters in the input text that were not parsed;
The model is the following:
the string is considered a sequence of chunks, optionally marked at the beginning by an
opensequence, and at the end by a
closesequence. Chunks are separated by a chunk separator;
each chunk can be either a stand-alone value or a key/value pair. In the latter case, key and value are separated by a key-value separator
there is something that defines what a valid key and value looks like.
This gives you the following options via
%args:
capture
the regular expression that dominates all the other ones. You normally don't want to set it directly, but you can if you look at how the code uses it.
You can use this input argument using something that has already been compiled in a previous invocation of
generalized_hashy, because it is returned at every invocation. So, the typical idiom for avoiding the recompilation of this regular expression every time is:
# get the capture, set text to undef to avoid any parsing $args{capture} = generalized_hashy(undef, %args)->{capture};
From now on,
$args{capture}contains the regular expression and
generalized_hashywill not need to compute it again when called with this
%argslist.
It has no default value.
chunks_separator
a regular expression for telling chunks apart. Defaults to:
chunks_separator => qr{(?mxs: \s* [\s,;\|/] \s*)}
i.e. it eats up surrounding spaces, and can be a space, comma, semicolon, pipe or slash character;
a regular expression for stating that the hash ends. Defaults to:
close => qr{(?mxs: \s*\z)}
i.e. it eats up optional trailing whitespace and expects to find the end of the string;
key
a regular expression for valid keys. This allows you to be quite precise as to what you admit for keys, but be sure to take a look at "key_admitted" below for a quicker way to set this parameter.
It does not have a default value as it relies upon "key_admitted"'s one.
key_admitted
a specification for valid, unquoted keys. When specifying this parameter and not setting a "key", the key is computed according to the algorithm explained below for admitted sequences.
This parameter can be either a regular expression, or a plain string containing the admitted characters. Defaults to:
key_admitted => qr{[^\\'":=\s,;\|/]};
i.e. whatever cannot fit in either separator.
key_decoder
a decoding function for a parsed key. You might want to set it when you allow quoting and/or escape sequences in your keys.
By default, it removes quotes and escaping characters related to "key_admitted";
key_default
-
default_key
a default key to use when there is a stand-alone value. The
default_keyvariant is provided for compatibility with "metadata" and "hashy" in Data::Dumper::Plugin::Parser.
When not set and a stand-alone value is found, the parsing fails and an error is returned.
There is no default. Note that this is different from the default setting/behaviour of "ghashy" in Data::Dumper::Plugin::Parser, although that function used
generalized_hashybehind the scenes. Again, this is for similarity with
hashyand backwards compatibility.
key_duplicated
a sub reference that will be called whenever a key is already present in the output hash. This allows you to e.g. complain loudly in case your input has a duplicated key.
By default, when a duplicate key is found for the first time the current value is transformed into an array reference whose first element is the old value and the second one is the new value. Any following value for that key is appended to the array;
key_value_separator
a regular expression for telling a key from a value. Defaults to:
key_value_separator => qr{(?mxs: \s* [:=] \s*)}
i.e. it eats up surrounding spaces, and can be a colon or an equal sign;
open
a regular expression for the hash beginning. Defaults to:
open => qr{(?mxs: \s* )}
i.e. it eats up optional leading whitespace;
pos
an integer value to set the initial position for parsing the input string. Default to 0, i.e. the start of the string;
text
the text to parse. This can also appear as the first unnamed parameter in the argument list;
value
a regular expression for valid values. This allows you to be quite precise as to what you admit for values, but be sure to take a look at "value_admitted" below for a quicker way to set this parameter.
It does not have a default value as it relies upon "value_admitted"'s one.
value_admitted
a specification for valid, unquoted values. When specifying this parameter and not setting a "value", the key is computed according to the algorithm explained below for admitted sequences.
This parameter can be either a regular expression, or a plain string containing the admitted characters. Defaults to:
value_admitted => qr{[^\\'":=\s,;\|/]};
i.e. whatever cannot fit in either separator.
value_decoder
a decoding function for a parsed value. You might want to set it when you allow quoting and/or escape sequences in your values.
By default, it removes quotes and escaping characters related to "value_admitted";
When using either "key_admitted" or "value_admitted", the "key" and "value" regular expressions will be computed automatically allowing for single and double quoted strings. This is what we refer to as admitted sequences. In this case, the admitted regular expression (we will call it
$admitted) is used as follows:
allowed_sequence => qr{(?mxs: (?mxs: (?: "(?: [^\\"]+ | \\. )*") # double quotes | (?: '[^']*') # single quotes ) | (?: (?: $admitted | \\.)+? ) # unquoted sequence, with escapes )}
In case
$admittedis not a regular expression, it is transformed into one like this:
$admitted = qr{[\Q$admitted\E]}
i.e. it is considered a set of valid characters and transformed into a characters class.
One admitted sequence can then be either of the following:
- double-quoted
in this case, it is bound by double quotes characters, and can contain any character, including the double quotes themselves, by escaping using the backslash. As a matter of fact, every sequence of a backslash and a character is accepted whatever the second character is (including the backslash itself and the quoting character);
- single-quoted
in this case, it is bound by single quote characters, and can contain any character except the single quote itself. This differs from what Perl accepts in single-quoted strings, and is more in line with what happens in other languages (e.g. the shell);
- unquoted
in this case, no quotation character is considered, and the
$admittedcharacters are used, with a twist: you can still escape otherwise invalid characters with the backslash.
If you don't like all this DWIMmery you can set "key" and "value" independently, of course.
Some examples are due. The following inputs all produce the same output in the default settings, ranging from mostly OK to definitely weird:
input text -> q< what:ever you:do > input text -> q< what: ever you: do > input text -> q< what: ever you= do | wow: yay > input text -> q< what: ever , you= do | wow: yay > output hash -> {what => 'ever', you => 'do', wow => 'yay'}
This shows you that you can do some escaping in the keys and values:
input text -> q< what: ever\ \"\,\"\ you\=\ do | wow: yay > input text -> q< what: 'ever "," you= do' | wow: yay > input text -> q< what: "ever \",\" you= do" | wow: yay > output hash -> {what => 'ever "," you= do', wow => 'yay'}
load_module
my $module = load_module($locator); # OR my $module = load_module($locator, $prefix);
loads a module automatically. There are a lot of modules on CPAN that do this, probably much better, but this should do for these module's needs.
The
$locatoris resolved into a full module name through "resolve_module"; the resulting name is then
required and the resolved name returned back.
Example:
my $module = load_module('Reader');
loads module Data::Tubes::Plugin::Reader and returns the string
Data::Tubes::Plugin::Reader, while:
my $other_module = load_module('Foo::Bar');
loads module
Foo::Barand returns string
Foo::Bar.
You can optionally pass a
$prefixthat will be passed to "resolve_module", see there for further information.
load_sub
my $sub = load_sub($locator); # OR my $sub = load_sub($locator, $prefix);
loads a sub automatically. There are a lot of modules on CPAN that do this, probably much better, but this should do for these module's needs.
The
$locatoris split into a pair of module and subroutine name. The module is loaded through "load_module"; the subroutine referenc3 is then returned from that module.
Example:
my $sub = load_module('Reader::by_line');
loads subroutine
Data::Tubes::Plugin::Reader::by_lineand returns a reference to it, while:
my $other_sub = load_module('Foo::Bar::baz');
returns a reference to subroutine
Foo::Bar::bazafter loading module
Foo::Bar.
You can optionally pass a
$prefixthat will be passed to "resolve_module", see there for further information.
metadata
my $href = metadata($input, %args); # OR my $href = metadata($input, \%args);
parse input string
$stringaccording to rules exposed below, that can be controlled through
%args.
The string is split on the base of two separators, a chunks separator and a key/value separator. The first one isolates what should be key/value pairs, the second allows separating the key from the value in each of these chunks. Whenever a chunk is not actually a key/value pair, it is considered a value and associated to a default key.
The following items can be set in
%args:
chunks_separator
what allows separating chunks, it MUST be a single character;
default_key
a string used as the key when a chunk cannot be split into a pair;
key_value_separator
what allows separating the key from the value in a chunk, it MUST be a single character.
Examples:
# use defaults my $input = 'foo=bar baz=galook booom!'; my $href = metadata($input); # $href = { # foo => 'bar', # baz => 'galook', # '' => 'booom!' # } # use defaults my $input = 'foo=bar baz=galook booom!'; my $href = metadata($input, default_key => 'name'); # $href = { # foo => 'bar', # baz => 'galook', # name => 'booom!' # } # use alternative separators my $input = 'foo:bar & bar|baz:galook booom!|whatever'; my $href = metadata($input, default_key => 'name', chunks_separator => '|', key_value_separator => ':' ); # $href = { # foo => 'bar & bar', # baz => 'galook booom!', # name => 'whatever' # }
normalize_args
my $args = normalize_args( %args, \%defaults); # OR my $args = normalize_args(\%args, \%defaults); # OR my $args = normalize_args($value, %args, [\%defaults, $key]);
helper function to handle input parameters, with some defaults. Allows accepting both a series of key/value pairs, or a hash reference with these pairs, while at the same time providing default values.
A typical usage is as follows:
sub foo { my $args = normalize_args(@_, {bar => 'baz'}); ... }
The last version allows you to accept an initial
$valuewithout a key in your functions, because you pass the default
$keyduring the call to
normalize_args. A typical usage is as follows:
sub foo { my $args = normalize_args(@_, [{bar => 'baz'}, 'aargh']); ... }
In this case, you can accept calling
foolike this:
foo('some value', salutation => 'aloha');
and
$argswill be populated as follows:
$args = { aargh => 'some value', # thanks to the default $key salutation => 'aloha', # passed as %args bar => 'baz', # from defaults };
normalize_filename
my $name_or_handle = normalize_filename($name, $default_handle);
helper function to normalize a file name according to some rules. In particular, depending on
$filename:
if it is a filehandle, it is returned directly;
if it is the string
$default_handleis returned. This allows you to use
STDINor
STDOUTas input/output handles in case the filename is
if it starts with the string
file:, this prefix is stripped away and the rest is used as a filename. This allows you to actually use
file:, then you should always put this prefix, e.g.:
file:whatever -- should be passed as --> file:file:whatever
if it starts with the string
handle:, this prefix is stripped and the rest is used to get one of the standard filehandles. The allowed remaining parts are (case-insensitive):
in
-
stdin
-
out
-
stdout
-
err
-
stderr
-
Any other remaining part causes an exception to be thrown.
Again, if you actually need to create a file whose name is e.g.
handle:whatever, you have to prefix it with
file::
handle:whatever -- should be passed as --> file:handle:whatever
otherwise, the provided
$filenamewill be returned as-is.
pump
pull($iterator); my $records = pull($iterator); my @records = pull($iterator); pull($iterator, $sink);
exhaust an
$iterator, depending on the conditions;
if a
$sinkis present, it MUST be a sub reference. For each item extracted from the iterator, this sub reference will be called with the items as argument;
otherwise, if called in void context, the iterator is simply exhausted, without any kind of accumulation of the records generated;
otherwise, depending on scalar context or list context, an array reference or a list of generated records is returned.
read_file
my $contents = read_file($filename, %args); # OR my $contents = read_file(%args); # OR my $contents = read_file(\%args);
a slurping facility. The following options are available:
binmode
parameter for
CORE::binmode, defaults to
:encoding(UTF-8);
filename
the filename (or reference to a string, if you really need it) to slurp data from.
You can optionally pass the filename standalone as the first argument without pre-pending it with the string
filename. In this case, it MUST appear as the first item in the argument list.
read_file_maybe
my $text = read_file_maybe(\@aref); my $x = read_file_maybe($x); # where ref($x) ne 'ARRAY'
helper function that expands the input argument with "read_file" if it is an array reference, while returning the input argument unchanged otherwise.
This can be useful if you want to overload an input parameter with either a straight text or something that should be loaded from a file, like a template:
my $template = read_file_maybe($args{template});
In this case, if
$args{template}is a text, it will be returned unchanged. Otherwise, if it is an array reference, it will be expanded in a list passed to "read_file", and the contents of the file returned back.
Examples:
$text = read_file_maybe('this goes straight'); # direct text # $text contains 'this goes straight' now $text = read_file_maybe(['/path/to/text.txt']); # $text has the contents of file /path/to/text.txt now $text = read_file_maybe(['/path/to/text.txt', binmode => ':raw']); # ditto, but read as raw text instead of default utf-8
resolve_module
my $full_module_name = resolve_module($module_name); # OR my $full_module_name = resolve_module($module_name, $prefix);
possibly expand a module's name according to a prefix. These are the rules as of release
0.736:
if
$module_namestarts with either a plus sign character
$prefixwill be ignored in this case;
otherwise,
${prefix}::${module_name}will be returned (where
$prefixdefaults to the string
Data::Tubes::Plugin).
The change is related to simplification of interface and better conformance to what other modules do in similar situations (principle of least surprise).
Examples:
module_name('^SimplePack'); # SimplePack module_name('+Some::Pack'); # Some::Pack module_name('SimplePack'); # Data::Tubes::Plugin::SimplePack module_name('Some::Pack'); # Data::Tubes::Plugin::Some::Pack module_name('Pack', 'Some::Thing'); # Some::Thing::Pack module_name('Some::Pack', 'Some::Thing'); # Some::Thing::Some::Pack
API Versioning Note: behaviour of this function changed between version
0.734and
0.736. The previous behaviour, described below, is still available when
$Data::Tubes::API_VERSION(see "API Versioning" in Data::Tubes) is (lexicographically) less than, or equal to,
0.734. Here's what the function does with the older interface:
if
$module_namestarts with an exclamation point
$prefixwill be ignored in this case;
otherwise, if
$module_namestarts with a plus sign
$prefixwill be used (defaulting to
Data::Tubes::Plugin);
otherwise, if
$module_namedoes not contain sub-packages (i.e. the sequence
$prefixwill be used as in the previous bullet;
otherwise, the provide name is used.
Examples (in the same order as the bullet above):
module_name('!SimplePack'); # SimplePack module_name('+Some::Pack'); # Data::Tubes::Plugin::Some::Pack module_name('SimplePack'); # Data::Tubes::Plugin::SimplePack module_name('Some::Pack'); # Some::Pack module_name('Pack', 'Some::Thing'); # Some::Thing::Pack module_name('Some::Pack', 'Some::Thing'); # Some::Pack
shorter_sub_names
shorter_sub_names($package_name);
this helper is used in plugins to generate alternative versions of the implemented functions, with shorter names.
The basic rationale is that functions are usually named after the area they cover, e.g. the function in Data::Tubes::Plugin::Reader that reads a filehandle line-by-line is called
read_by_line. In this way, when you use e.g.
summonfrom Data::Tubes, you end up with a function
read_by_linethat is much clearer than simply
by_line.
On the other hand, when you rely upon automatic running of factory functions like in
tubeor
pipeline(again, in Data::Tubes), some parts are redundant. In the example, you would end up using
Reader::read_by_line, where
read_is actually redundant as you already have the last part of the plugin package name to tell you what this
by_linething is about.
shorter_sub_namescomes to the rescue to generate alternative names by analysing the current namespace for a package and generating new functions by removing a prefix. In the Data::Tubes::Plugin::Reader case, for example, it is called like this at the end of the module:
shorter_sub_names(__PACKAGE__);
and it generates, among the others,
by_lineand
by_paragraph.
Consider using this if you generate new plugins.
sprintffy
my $string = sprintffy($template, \@substitutions);
expand a
$templatestring a-la
sprintf, based on a list of
@substitutions.
The template targets are
sprintf-like, i.e. sequences that start with a percent sign followed by... something.
Each substitution is supposed to be an array reference with two items inside: a regular expression and a value specifier. The regular expression is used to match what comes after the percent sign, while the value part can be either a straight value, or a subroutine reference that will be run to get the real value for the substitution.
There is always an implicit, high priority substitution that matches a single percent sign and expands to a percent sign, so that the string
sprintf-like.
test_all_equal
my $bool = test_all_equal(@list);
test whether all elements in
@listare equal to one another or not, and return test output as a boolean value (i.e. something that Perl considers true or false).
trim
trim(@strings);
remove leading/trailing whitespaces from input
@strings, in-place.
traverse
my $item = traverse($data, @keys);
Assuming that
$datais an array or hash reference, traverse it using items in
@keysat each step in the descent.
tube
see
tubein Data::Tubes, this is the same function.
unzip
my ($even, $odds) = unzip(@list); # OR my ($even, $odds) = unzip(\@list);
separates even and odd items in the input
@listand returns them as two references to arrays.
SEE ALSO
Data::Tubes is a valid entry point of all of. | https://metacpan.org/pod/Data::Tubes::Util | CC-MAIN-2022-21 | refinedweb | 3,412 | 51.89 |
I need to create a script that automatically inputs a password to OpenSSH
ssh client.
Let's say I need to SSH into
a1234b.
I've already tried...
#~/bin/myssh.sh ssh [email protected] a1234b
...but this does not work.
How can I get this functionality into a script?
Solution 1
First you need to install sshpass.
- Ubuntu/Debian:
apt-get install sshpass
- Fedora/CentOS:
yum install sshpass
- Arch:
pacman -S sshpass
Example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no [email protected]_SITE.COM
Custom port example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no [email protected]_SITE.COM:2400
Notes:
sshpasscan also read a password from a file when the
-fflag is passed.
- Using
-fprevents the password from being visible if the
pscommand is executed.
- The file that the password is stored in should have secure permissions.
Solution 2
After looking for an answer to the question for months, I finally found a better solution: writing a simple script.
set timeout 20 set cmd [lrange $argv 1 end] set password [lindex $argv 0] eval spawn $cmd expect "password:" send "$password\r"; interact
Put it to
/usr/bin/exp, So you can use:
exp <password> ssh <anything>
exp <password> scp <anysrc> <anydst>
Done!
Solution 3
Use public key authentication:
In the source host run this only once:
ssh-keygen -t rsa # ENTER to every field ssh-copy-id [email protected]
That's all, after that you'll be able to do ssh without password.
Solution 4
You could use an expects script. I have not written one in quite some time but it should look like below. You will need to head the script with
#!/usr/bin/expect
"login:" send "username\r" expect "Password:" send "password\r" interactspawn ssh HOSTNAME expect
Solution 5
Variant I
sshpass -p PASSWORD ssh [email protected]
Variant II
Solution 6
sshpass + autossh
One nice bonus of the already-mentioned
sshpass is that you can use it with
autossh, eliminating even more of the interactive inefficiency.
sshpass -p mypassword autossh -M0 -t [email protected]
This will allow autoreconnect if, e.g. your wifi is interrupted by closing your laptop.
With a jump host
sshpass -p `cat ~/.sshpass` autossh -M0 -Y -tt -J [email protected]:22223 -p 222 [email protected]
Solution 7
sshpass with better security
I stumbled on this thread while looking for a way to ssh into a bogged-down server -- it took over a minute to process the SSH connection attempt, and timed out before I could enter a password. In this case, I wanted to be able to supply my password immediately when the prompt was available.
(And if it's not painfully clear: with a server in this state, it's far too late to set up a public key login.)
sshpass to the rescue. However, there are better ways to go about this than
sshpass -p.
My implementation skips directly to the interactive password prompt (no time wasted seeing if public key exchange can happen), and never reveals the password as plain text.
# preempt-ssh.sh # usage: same arguments that you'd pass to ssh normally echo "You're going to run (with our additions) ssh [email protected]" # Read password interactively and save it to the environment read -s -p "Password to use: " SSHPASS export SSHPASS # have sshpass load the password from the environment, and skip public key auth # all other args come directly from the input sshpass -e ssh -o PreferredAuthentications=keyboard-interactive -o PubkeyAuthentication=no "[email protected]" # clear the exported variable containing the password unset SSHPASS
Solution 8
I don't think I saw anyone suggest this and the OP just said "script" so...
I needed to solve the same problem and my most comfortable language is Python.
I used the paramiko library. Furthermore, I also needed to issue commands for which I would need escalated permissions using
sudo. It turns out sudo can accept its password via stdin via the "-S" flag! See below:
import paramiko ssh_client = paramiko.SSHClient() # To avoid an "unknown hosts" error. Solve this differently if you must... ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # This mechanism uses a private key. pkey = paramiko.RSAKey.from_private_key_file(PKEY_PATH) # This mechanism uses a password. # Get it from cli args or a file or hard code it, whatever works best for you password = "password" ssh_client.connect(hostname="my.host.name.com", username="username", # Uncomment one of the following... # password=password # pkey=pkey ) # do something restricted # If you don't need escalated permissions, omit everything before "mkdir" command = "echo {} | sudo -S mkdir /var/log/test_dir 2>/dev/null".format(password) # In order to inspect the exit code # you need go under paramiko's hood a bit # rather than just using "ssh_client.exec_command()" chan = ssh_client.get_transport().open_session() chan.exec_command(command) exit_status = chan.recv_exit_status() if exit_status != 0: stderr = chan.recv_stderr(5000) # Note that sudo's "-S" flag will send the password prompt to stderr # so you will see that string here too, as well as the actual error. # It was because of this behavior that we needed access to the exit code # to assert success. logger.error("Uh oh") logger.error(stderr) else: logger.info("Successful!")
Hope this helps someone. My use case was creating directories, sending and untarring files and starting programs on ~300 servers as a time. As such, automation was paramount. I tried
sshpass,
expect, and then came up with this.
Solution 9
This is how I login to my servers:
ssp <server_ip>
alias ssp='/home/myuser/Documents/ssh_script.sh'
cat /home/myuser/Documents/ssh_script.sh
ssp:
And therefore:
ssp server_ip
Solution 10
# create a file that echo's out your password .. you may need to get crazy with escape chars or for extra credit put ASCII in your password... echo "echo YerPasswordhere" > /tmp/1 chmod 777 /tmp/1 # sets some vars for ssh to play nice with something to do with GUI but here we are using it to pass creds. export SSH_ASKPASS="/tmp/1" export DISPLAY=YOURDOINGITWRONG setsid ssh [email protected] -p 22
reference:
Solution 11
I am using below solution but for that you have to install
sshpass If its not already installed, install it using
sudo apt install sshpass
Now you can do this,
sshpass -p *YourPassword* shh [email protected]
You can create a bash alias as well so that you don't have to run the whole command again and again. Follow below steps
cd ~
sudo nano .bash_profile
at the end of the file add below code
mymachine() { sshpass -p *YourPassword* shh [email protected] }
source .bash_profile
Now just run
mymachine command from terminal and you'll enter your machine without password prompt.
Note:
mymachinecan be any command of your choice.
- If security doesn't matter for you here in this task and you just want to automate the work you can use this method.
Solution 12
This is basically an extension of abbotto's answer, with some additional steps (aimed at beginners) to make starting up your server, from your linux host, very easy:
- Write a simple bash script, e.g.:
- Save the file, e.g. 'startMyServer', then make the file executable by running this in your terminal:
sudo chmod +x startMyServer
- Move the file to a folder which is in your 'PATH' variable (run 'echo $PATH' in your terminal to see those folders). So for example move it to '/usr/bin/'.
And voila, now you are able to get into your server by typing 'startMyServer' into your terminal.
P.S. (1) this is not very secure, look into ssh keys for better security.
P.S. (2) SMshrimant answer is quite similar and might be more elegant to some. But I personally prefer to work in bash scripts.
Solution 13
If you are doing this on a Windows system, you can use Plink (part of PuTTY).
plink [email protected] -pw your_password
Solution 14
I got this working as follows
.ssh/config was modified to eliminate the yes/no prompt - I'm behind a firewall so I'm not worried about spoofed ssh keys
host * StrictHostKeyChecking no
Create a response file for expect i.e. answer.expect
set timeout 20 set node [lindex $argv 0] spawn ssh [email protected] service hadoop-hdfs-datanode restart expect "*?assword { send "password\r" <- your password here. interact
Create your bash script and just call expect in the file
while [$i -lt 129] # a few nodes here expect answer.expect hadoopslave$i i=[$i + 1] sleep 5 donei=1
Gets 128 hadoop datanodes refreshed with new config - assuming you are using a NFS mount for the hadoop/conf files
Hope this helps someone - I'm a Windows numpty and this took me about 5 hours to figure out!
Solution 15
I have a better solution that inclueds login with your account than changing to root user. It is a bash script
Solution 16
The answer of @abbotto did not work for me, had to do some things differently:
- yum install sshpass changed to - rpm -ivh
- the command to use sshpass changed to - sshpass -p "pass" ssh [email protected] -p 2122
Solution 17
I managed to get it working with that:
SSH_ASKPASS="echo \"my-pass-here\"" ssh -tt remotehost -l myusername
Solution 18
In the example bellow I'll write the solution that I used:
The scenario: I want to copy file from a server using sh script:
$PASSWORD=password my_script=$(expect -c "spawn scp [email protected]:path/file.txt /home/Amine/Bureau/trash/test/ expect \"password:\" send \"$PASSWORD\r\" expect \"#\" send \"exit \r\" ") echo "$my_script"
Solution 19
This works:
BUT!!! If you have an error like below, just start your script with expect, but not bash, as shown here:
expect myssh.sh
instead of
bash myssh.sh
/bin/myssh.sh: 2: spawn: not found /bin/myssh.sh: 3: expect: not found /bin/myssh.sh: 4: send: not found /bin/myssh.sh: 5: expect: not found /bin/myssh.sh: 6: send: not found
Solution 20
Use this script tossh within script, First argument is the hostname and second will be the password.
set pass [lindex $argv 1] set host [lindex $argv 0] spawn ssh -t [email protected]$host echo Hello expect "*assword: " send "$pass\n"; interact"
Solution 21
To connect remote machine through shell scripts , use below command:
sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no [email protected]
where
IPADDRESS,
USERNAME and
PASSWORD are input values which need to provide in script, or if we want to provide in runtime use "read" command.
Solution 22
This should help in most of the cases (you need to install sshpass first!):
read -p 'Enter Your Username: ' UserName; read -p 'Enter Your Password: ' Password; read -p 'Enter Your Domain Name: ' Domain; sshpass -p "$Password" ssh -o StrictHostKeyChecking=no $UserName@$Domain
Solution 23
In linux/ubuntu
ssh [email protected]_ip_address -p port_number
Press enter and then enter your server password
if you are not a root user then add sudo in starting of command | https://solutionschecker.com/questions/automatically-enter-ssh-password-with-script/ | CC-MAIN-2022-40 | refinedweb | 1,804 | 61.87 |
On your ColumnConfig object, set a renderer using setRenderer(new MyRenderer).
MyRenderer is a class that implements GridCellRenderer, and returns the HTML.
Ben
Type: Posts; User: bigmountainben
On your ColumnConfig object, set a renderer using setRenderer(new MyRenderer).
MyRenderer is a class that implements GridCellRenderer, and returns the HTML.
Ben
Sure, that would work too:
enum MyEnum {
Admin(Util.getMessages().adminDesc());
User(Util.getMessages().userDesc());
...
}
You can also use the BeanModelMarker method:
@BeanModelMarker.BEAN(MyEnum.class)
public class MyEnumModel implements BeanModelMarker { }
And then you can use the BeanModel in your combo...
I can't seem to find a way to specify a custom sorter for a particular column. Anyone can point me in the right direction?
Thanks!
Ben
The issue issue manifest itself on a version of GXT 2.0-m2 built on May 19th, under MacOS X, GWT 1.6.4 in Hosted Mode (safari agent). From the stacktrace and the nature of the bug, it's likely that...
Hi there,
There are a couple ways to do it:
When you build your Grid, in your ColumnConfig for the relevant column:
ColumnConfig myColumn = new ColumnConfig("my", "My", 35);...
Thanks Sven.
It's even documented in ColumnConfig:
/**
* A column config for a column in a column model.
*
* <p /> The column config is a configuration object that should only be used
...
Calling ColumnConfig's setHidden method does not work once a Grid has been rendered. It works if its called before, but not if it's called after. The documentation does not make mention of the fact...
Thanks Sven!
Safari (MacOS X 10.5.6)
Read the first line.
GWT 1.6, GXT 2.0-m1 (built as of 5/10/09), Hosted mode
When you have a ComboBox field, and:
- you style the label with a margin component
- you set allowBlank to false
The warning image will...
I figured that's what you meant.
That works, actually - but the Window should be listening to its children for resize events and do this for me.
I am seeing this on Safari (and Hosted mode,...
Doesn't fix it.
To restate: the issue is that when you collapse or expand the field set, the shadow for the entire window does not resize along with the window.
public class Test implements EntryPoint {
public void onModuleLoad() {
Window w = new Window();
w.setHeading("my favorite variables");
w.setWidth(500);
...
The problem also manifest itself if I use the default, FlowLayout and a RowLayout. What kind of Layout is one supposed to used with auto-= height?
popup = new Window();
popup.setAutoHeight(true); //(and all the components in it too)
Window has a FitLayout
In the FitLayout, there is a LayoutContainer (call it form) with a FormLayout
form...
Ok thanks.
name.setEmptyText("A short description of the task");
name.setFieldLabel("Some Label");
// we have to do it at this level because otherwise gxt styles take over...
I have a class ContentReference.
If I parameterize it - class ContentReference<T extends ContentType> the BeanModel mechanisms (both the BeanModelMarker interface and BeanModelMarker.BEAN tag)...
Gents,
Users hacking ext-all.css by hand is not a solution, and neither should it be a FAQ. You need to preface all GXT standard tag mods with a contextual selector (i.e. 'gxt' or something like...
Hi,
The title is pretty self-explanatory. Expanding a bit:
1. I am using a ButtonBar to add a submit/cancel pair on a FormPanel (itself in a popup Window). Is this the right thing to do?
2.... | https://www.sencha.com/forum/search.php?s=670446e745e5eb356c8bec65883c119b&searchid=17686687 | CC-MAIN-2016-36 | refinedweb | 580 | 59.7 |
Finding the best poker hand in five-card draw with python
Last Updated on January 2, 2018
I recently took a Hackerrank challenge for a job application that involved poker. I'm not a poker player, so I had a brief moment of panic as I read over the problem the description. In this article I want to do some reflection on how I approached the problem.
The ProblemThe Problem
The hackerrank question asked me to write a program that would determine the best poker hand possible in five-card draw poker. We are given 10 cards, the first 5 are the current hand, and the second 5 are the next five cards in the deck. We assume that we can see the next five cards (they are not hidden). We want to exchange any
n number of cards (where
n <= 5) in our hand for the next
n cards in the deck. For example, we can take out any combination of 2 cards from the hand we are given, but we must replace these two cards with the next two cards from the deck (we can't pick any two cards from the deck).
Evaluating handsEvaluating hands
Suit and value make up the value of playing cards. For example, you can have a 3 of clubs. 3 is the value, clubs is the suit. We can represent this as
3C.
Suits
Clubs C Spades S Heart H Diamonds D
Value (Rank)
2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, King, Ace
values = {"2":2, "3":3, "4":4, "5":5, "6":6, "7":7, "8":8, "9":9, "10":10, "J":11, "Q":12, "K":13, "A":14}
HandsHands
Here are the hands of poker
Royal flush (the problem didn't ask me to consider Royal Flush)
A, K, Q, J, 10, all the same suit.
Straight flush
Five cards in a sequence, all in the same suit. Ace can either come before 2 or come after King..
Evaluating a hand of cardsEvaluating a hand of cards
A hand is five cards. The first thing I did was write out functions to evaluate if a group of 5 cards satisfies the conditions of one of the ten hands.
Here's a sample hand:
hand = ["3S", "JC", "QD", "5D", "AH"]
To write functions, I reached for using 2 important python features:
set and
defaultdict.
Here's an example of a simple function to detect a flush, a hand with cards of all the same suit:
Checking a flushChecking a flush
def check_flush(hand): suits = [h[1] for h in hand] if len(set(suits)) == 1: return True else: return False
This function creates a list of the suits in our hand, and then counts the unique elements in that list by making it a set. If the length of the set is 1, then all the cards in the hand must be of the same suit.
But wait, what if we have a straight flush? Also, a hand that satisfies a flush could also be described as a two pair hand. The problem asked me to find the highest possible hand for a given set of cards, so I tried to keep things simple by writing a
check_hand() function that checks each hand starting from straight flush down to high card. As soon as a condition for a hand was satisfied, I returned a number that corresponded to the strength of the hand (1 for high card up to 10 for straight flush). The problem didn't include Royal flush, so I will not include that here.
Here's the
check_hand function:
def check_hand(hand): if check_straight_flush(hand): return 9 if check_four_of_a_kind(hand): return 8 [...] if check_two_pair(hand): return 3 if check_pair(hand): return 2 return 1
This function starts checking the most valuable hands. After it checks the second to lowest hand (pair), it returns a value of 1. This value of 1 corresponds to the "highest card" hand. Since I'm not comparing the relative value of hands, it doesn't matter what the highest card is, so the number just represents the type of hand that is the strongest.
Other handsOther hands
Here are the all of the functions I used to detect hands:
card_order_dict = {"2":2, "3":3, "4":4, "5":5, "6":6, "7":7, "8":8, "9":9, "T":10,"J":11, "Q":12, "K":13, "A":14} def check_straight_flush(hand): if check_flush(hand) and check_straight(hand): return True else: return False def check_four_of_a_kind(hand): values = [i[0] for i in hand] value_counts = defaultdict(lambda:0) for v in values: value_counts[v]+=1 if sorted(value_counts.values()) == [1,4]: return True return False def check_full_house(hand): values = [i[0] for i in hand] value_counts = defaultdict(lambda:0) for v in values: value_counts[v]+=1 if sorted(value_counts.values()) == [2,3]: return True return False def check_flush(hand): suits = [i[1] for i in hand] if len(set(suits))==1: return True else: return False def check_straight(hand): values = [i[0] for i in hand] value_counts = defaultdict(lambda:0) for v in values: value_counts[v] += 1 rank_values = [card_order_dict[i] for i in values] value_range = max(rank_values) - min(rank_values) if len(set(value_counts.values())) == 1 and (value_range==4): return True else: #check straight with low Ace if set(values) == set(["A", "2", "3", "4", "5"]): return True return False def check_three_of_a_kind(hand): values = [i[0] for i in hand] value_counts = defaultdict(lambda:0) for v in values: value_counts[v]+=1 if set(value_counts.values()) == set([3,1]): return True else: return False def check_two_pairs(hand): values = [i[0] for i in hand] value_counts = defaultdict(lambda:0) for v in values: value_counts[v]+=1 if sorted(value_counts.values())==[1,2,2]: return True else: return False def check_one_pairs(hand): values = [i[0] for i in hand] value_counts = defaultdict(lambda:0) for v in values: value_counts[v]+=1 if 2 in value_counts.values(): return True else: return False
defaultdict is a great built-in that is good to use when you don't know what elements will be in your dictionary, but you know what the initial values of any key that could be added should be. We don't need it here, but the alternative would be to write a very long dictionary where keys are the possible card values and the values of each key is 0.
Finding the best handFinding the best hand
It would certainly be cleaner and more efficient to write out the above functions into one large function, but I wanted to keep things simple as I was under time constraints.
The next step in the problem is to determine the best possible hand we can get given the hand we are dealt and the 5 cards on top of the deck. I decided to first solve this problem with brute force. Here was my logic for this part: use
itertools to get all combinations of groups of 0, 1, 2, 3, 4 and 5 cards from my hand and add the first
5 - n cards from the deck so we get a five card deck. For each combination of cards we can run
check_hand() and keep track of the highest rank hand, and then return that hand as the best hand. Here's the code I wrote for this part of the problem:
from itertools import combinations hand_dict = {9:"straight-flush", 8:"four-of-a-kind", 7:"full-house", 6:"flush", 5:"straight", 4:"three-of-a-kind", 3:"two-pairs", 2:"one-pair", 1:"highest-card"} #exhaustive search using itertools.combinations def play(cards): hand = cards[:5] deck = cards[5:] best_hand = 0 for i in range(6): possible_combos = combinations(hand, 5-i) for c in possible_combos: current_hand = list(c) + deck[:i] hand_value = check_hand(current_hand) if hand_value > best_hand: best_hand = hand_value return hand_dict[best_hand]
Checking test casesChecking test cases
Lastly, I need to check each hand and print out the best hand possible. Here's the loop I wrote to do this:
for i in sys.stdin.readlines(): cards = list(map(lambda x:x, i.split())) hand = cards[:5] deck = cards[5:] print("Hand:", " ".join(hand), "Deck:", " ".join(deck), "Best hand:", play(cards))
This will accept one round of cards per line:
2C 3D 4S 5D 7H KD QH 6C JH 2D
and it will output the following:
Hand: 2C 3D 4S 5D 7H Deck: KD QH 6C JH 2D Best hand: straight
OptimizationOptimization
This was an interesting problem to deal with as the solution contained several parts that worked together. While solving the problem I aimed worked through to the end leaving some parts to come back to that I felt confident in solving. Instead of writing each function to check differnt hands at the beginning, I filled most of these functions with
pass and moved on to write the next part that involves checking each different combination of cards. Recently having worked through python's
itertools exercises on Hackerrank, the
combinations functions was fresh in my mind.
While I was able to arrive at a solution that satisfied the test cases, I did not have time to think about the efficiency or Big O analysis of the problem.
There is obviously some refactoring that I could do to make things cleaner. With more time I would take an object oriented approach by making classes for cards and hands, and adding class methods to evaluate the hands.
For each round, we have to run
check_hand() on each hand combination. Let's think about how many hands we have to evaluate:
We have to consider combinations of cards formed by taking out groups of 0, 1, 2, 3, 4 and 5 cards and adding the next number of cards in the deck that bring the total card count to 5, which means we have to do 5C0 + 5C1 + 5C2 + 5C3 + 5C4 + 5C5 calls to
check_hand(). So the sum of total calls is 1 + 5 + 10 + 10 + 5 + 1 = 32.
For each of these 32 calls that happen when we run
play(),
check_hands() runs through each of the
check_ functions starting with the highest value hand. As soon as it finds a "match",
check_hands() returns a number value (
hand_value) corresponding to straight flush, four of a kind, etc. This value is then compared with the highest value that has been previously found (
best_hand) and replaces that value if the current hand's hand rank has a higher value.
I'm not sure if there is faster way to find the best hand than the brute force method I implemented. | https://briancaffey.github.io/2018/01/02/checking-poker-hands-with-python.html/ | CC-MAIN-2021-49 | refinedweb | 1,759 | 62.31 |
- 0shares
- Facebook0
- Twitter0
- Google+0
- Pinterest0
- LinkedIn0
Plugins Overview
In bootstrap we have 12 jQuery plugins that are used to enhance our websites. There is no need of learning the JavaScript to use plugins in bootstrap. The plugins can be used by simply using the Data API of bootstrap. In this way no coding is required. In our web page we can add plugins in two forms that are individually or by compiling.
The plugins can be individually added by using the *.js files. It should be noted here that some of the plugins are dependent on other plugins. In this way you should ensure that other plugins on which the added ones are dependent must be added to the page.
The plugins can be compiled or added all at once by using the bootstrap.js files. We can also use the bootstrap.min.js files to add compiled plugins to the web page. Both of these files should not be added together as both of them contain all the bootstrap plugins in one file.
JQuery must be included before adding the plugin file as all plugins depend on jQuery.
Data Attributes:
- Plugins are accessible when Data API is used. Because of the Data API, there is no need of JavaScript code for any of the features of plugins.
- There are some cases when we need to disable the bootstrap data API. This can be done by using the line of JavaScript that is $(document).off(‘.data-api’).
- A single plugin can also be turned off by adding the name of plugin in the above JavaScript line as a namespace as: $(document).off(‘.alert.data-api’).
Programmatic API:
The plugins can be used through JavaScript API. These APIs are single and chainable methods that can accept a string which can be used as a target to a particular method. This particular method invokes a plugin and this plugin will have a default behavior.
Consider the following example in which we have initiated the plugins; at first the plugin is initiated with default behavior, then the plugin is initiated having no keywords and finally the plugin is initiated and is shown immediately:
$(“#myModal”).modal()
$(“#myModal”).modal({ keyboard: false })
$(“#myModal”).modal(‘show’)
Every plugin has its own constructor which can be added by $.fn.popover.Constructor. An instance of the plugin can be retrieved from an element by $(‘[rel = popover]’).data(‘popover’).
No Conflict:
We can also use the plugins of bootstrap with other UI frameworks. When the plugins of bootstrap are added with UI frameworks then there are chances of collisions of namespace. To avoid this we can call the instance .noConflict for that very plugin on which there are chances of collisions of namespace. This can be done as follows:
var bootstrapButton = $.fn.button.noConflict()
This is used to return $.fn.button to the previously assigned value.
$.fn.bootstrapBtn = bootstrapButton
This is used to give the bootstrap functionality to $().bootstrapBtn.
Events:
We have a number of custom events for plugins that are provided by bootstrap. We can have the events in two forms that are as follows:
Infinitive form:
The infinitive form comes to action when an event is started. This form is used when the user wants to stop the execution of an action.
Past participle form:
The infinitive form comes to action when an event is ended. | http://www.tutorialology.com/bootstrap/plugins-overview/ | CC-MAIN-2017-47 | refinedweb | 559 | 73.68 |
[
]
Manfred Sattler commented on JCR-2933:
--------------------------------------
Can you check this testcase. It gives me 1 row. This is not correct.
The property "prop1" from the first two nodes have the same value.
def n1 = root.addNode("node1", "test:SamplePage");
n1.setProperty("prop1", "page1");
def n2 = root.addNode("node2", "test:SamplePage");
n2.setProperty("prop1", "page1");
def n3 = n1.addNode("node3", "test:SampleContent");
Select * from [test:SamplePage] as page left outer join [test:SampleContent] as content on
ISDESCENDANTNODE(content,page) where page. SQL2 Left Outer Join
> --------------------
>
> Key: JCR-2933
> URL:
> Project: Jackrabbit Content Repository
> Issue Type: Bug
> Components: jackrabbit-core
> Affects Versions: 2.2.4
> Reporter: Manfred Sattler
> Assignee: Jukka Zitting
> Attachments:: | http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/201104.mbox/%3C1684569427.26845.1301634965723.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2014-52 | refinedweb | 110 | 55.61 |
Building a Progressive Web App with Blazor
Jon.
Creating a new Blazor PWA
In this post, we’ll walk through creating a simple “To Do” application; in a future post we’ll add some more advanced PWA features.
Creating a new Blazor PWA is especially easy now in Visual Studio for Mac using the latest Blazor project template. You’ll need to have version 8.6 Preview 5 and later, and until the full release you’ll need to ensure you have the latest .NET Core SDK installed and to install the latest Blazor templates manually to see them show up in the Visual Studio for Mac new project dialog. After launching Visual Studio for Mac you’ll see the dialog below, click New to begin creating the project. If you already have Visual Studio open, you could also use the ⇧⌘N shortcut to open the new project dialog.
From here we will create a .NET Core Console project by selecting Web and Console > App > Blazor WebAssembly App.
Next, you’ll need to configure the new Blazor WebAssembly (often abbreviated as WASM) application. We’ll name our application “BlazorPwaTodo”. Select “No Authentication”, and check both the “ASP.NET Core Hosted” and “Progressive Web Application” options as shown below.
If you’re using Visual Studio 2019 on Windows, the new project dialog is pretty similar:
If you’re using Visual Studio Code, you can create a Blazor PWA from the command line using the –pwa switch.
dotnet new blazorwasm -o BlazorPwaTodo --pwa --hosted
Note: We’re selecting the “ASP.NET Core Hosted” option for two reasons. First, because we plan to integrate some backend services in the future. Second, using an ASP.NET Core Hosted site makes it easier to run a published version of the app in your local developer environment.
Adding a Todo Razor Component
We’re going to build a simple To Do application, so we’ll start by creating a new page to view our list. Right-click the Pages folder and select Add > New Item > Razor Component. Name the component’s file Todo.razor.
Use the following initial markup for the Todo component:
@page "/todo" <h3>Todo</h3>
The first line of this template defines a route template, so browsing to “/todo” will resolve to this Todo component.
Next, we’ll add the Todo component to the navigation bar. Open Shared/NavMenu.razor and add a new list item with a NavLink element for the Todo component as shown below:
<li class="nav-item px-3"> <NavLink class="nav-link" href="todo"> <span class="oi oi-plus" aria-</span> Todo </NavLink> </li>
Now run the application by selecting Run > Start Without Debugging from the menu. Clicking the Todo link in the navigation shows our new page with the Todo heading, ready for us to start adding some code.
Coding basic Todo functionality
Let’s create a simple class to represent a todo item. Since we’re using an ASP.NET Core hosted template and will probably include some back-end APIs later, we’ll put that class in Shared project to make it available for use in our server application as well. Right-click the Blazor.PwaTodo.Shared project and select Add > New Class. Name the new class file TodoItem.cs and enter the following code:
public class TodoItem { public string Title { get; set; } public bool IsDone { get; set; } }
Return to the Todo.razor component and add the following code:
newTodo; private void AddTodo() { if (!string.IsNullOrWhiteSpace(newTodo)) { todos.Add(new TodoItem { Title = newTodo }); newTodo = string.Empty; } } }
Let’s take a look at what this code is doing:
- In the code block at the bottom, we’d declared a list to hold our TodoItem objects, named todo.
- We’re iterating over the list of items using a foreach loop inside the ul element.
- We have an input element that is bound to a string property named newTodo.
- The input element has a a corresponding button to allow adding new items to the list.
- We’re using the button’s @onclick attribute to call the AddTodo method.
- The AddTodo method checks to make sure that text has been entered, then adds a new item to the list and clears the input to allow adding another item.
Note: This explanation is condensed for brevity. If you’d like a more thorough explanation, there’s an in-depth walkthrough of this Todo example in the Blazor documentation.
Run the application (again, by using the Run > Start Without Debugging command in the menu) and verify that you can add new items to the list.
Marking Todo items complete
So far, our Todo application only allows adding new items to the list with no support for marking them complete. Let’s fix that by adding a checkbox for each item that’s bound to the IsDone property.
Update the list to include a checkbox input element as shown below.
<ul> @foreach (var todo in todos) { <li> <input type="checkbox" @ <input @ </li> } </ul>
And finally, to verify that these values are bound, we’ll update the page header to show the number of items that are not yet complete. Update the h3 element with this code:
<h3>Todo (@todos.Count(todo => !todo.IsDone))</h3>
Run the application to see it in action. Leave the application running when you’re done testing, as we’ll be using the running application in the next section.
Installing the PWA
Now that we’ve built a basic application, we can start taking advantage of the PWA features. Every time you’ve been running this application, it’s actually been running with PWA support due to using the PWA template.
A Progressive Web Application (PWA) is.
We’ll look at two of these features in this post, then dig into some more advanced features in a future post.
First, let’s look at installation. When your application is running, you’ll see an installation prompt in the browser. Here’s how that looks using Edge on macOS:
Installing the application relaunches the application in its own window without an address bar.
Additionally, the application is shown with its own icon in the taskbar.
The window title, color scheme, icon, and other details are all customizable using the project’s manifest.json file. We’ll dig into that customization in the next post.
Testing mobile PWA installation and offline support
By default, apps created using the PWA template option have support for running offline. A user must first visit the app while they’re online. The browser automatically downloads and caches all the resources required to operate offline.
However, offline support would interfere with local development since you might be viewing cached offline files when you make updates. For that reason, offline support is only available for published applications.
To test this out locally, we’ll need to publish the application. Right-click on the BlazorPwaTodo.Server application and select Publish > Publish to Folder. You can leave the default folder name but be sure to note it. The default should be bin/Release/netcoreapp3.1/publish.
Open a terminal and navigate to the publish directory – you can use Visual Studio for Mac’s integrated terminal if you’d like. Launch the server application using this command:
dotnet BlazorPwaTodo.Server.dll
This launches the published application running on localhost port 5001. Now that you’re running the published application, we can test offline support. The easiest way to do this is by using the Network tab in browser tools to simulate offline mode, as shown below.
But let’s make this a bit more of a challenge, and test this out in the mobile emulator! In order to do that, we’ll need our mobile browser to be able to connect to our published site. My favorite way to handle that is by using the popular ngrok tool. After installing ngrok, launch it using the following command:
ngrok http -host-header=”localhost:5001”
You should see ngrok spin up and provide a temporary public https endpoint we can use for testing. For more background on why we’re setting the host header, see this blog post by Jerrie Pelser.
Note: There are several different options for pointing a mobile emulator at your development website, but they all can carry some unexpected complications. The Android emulator does provide a local loopback on 10.0.0.2, but it doesn’t trust the ASP.NET Core HTTPS dev certs, which causes some additional setup. It’s also possible to deploy to Azure and use the wildcard certificate support in the free tier, but you’re no longer testing against localhost and need to publish each time you want to test an update. So, in my experience so far, ngrok is the quickest and simplest way to do a quick PWA / mobile test.
Now that we’ve got our site ready for testing on Using Visual Studio for Mac, we can launch an Android emulator using the Tools > Device Manager menu (assuming you’ve installed the Android workload). If you don’t already have an Android device configured, create one by clicking the New Device button. I’m using the Pixel 2 Pie 9.0 image.
Browse to the public HTTPS endpoint provided by ngrok. You should see the website, as well as a prompt to install the application locally. Install the application and close the browser, then test the PWA application by launching from the home screen.
Finally, let’s test out offline support. Use the settings app and enable Airplane Mode, then launch the application. The application should continue to function with offline support.
Summary & Wrap Up
In this post, we’ve built a basic Todo application using the Blazor PWA template in Visual Studio for Mac. We then installed it locally on our Mac as well as on an Android emulator and tested basic offline support.
Blazor PWA’s offer a lot of advanced features, such as customizing the application appearance, adding push notifications, controlling caching and synchronization policies, leveraging the service worker for background work, and more. We’ll take a look at some of these features in a future blog post.
Documentation links:
- Build your first Blazor app (Todo sample)
- Build Progressive Web Applications with ASP.NET Core Blazor WebAssembly
Give it a try today!
The new Blazor WASM templates with PWA support are is now in Visual Studio 2019 for Mac 8.6 Preview. To start using them,.
We hope you enjoy using Visual Studio 2019 for Mac 8.6 as much as we enjoyed working on it!
Blazor is very cool!
Combine Blazor PWA with and have full control.
Very interesting this example but I had some issues:
1- The option in Visual Studio in Mac to create “Blazor WebAssembly App” never show up.
2- After created the project by the command line, the first time you try to run in VS you get the error:
Conflicting assets with the same path ‘/service-worker.js’ for content root paths ‘/Users/usertest/Workspace/BlazorPwaTest/Client/obj/Debug/netstandard2.1/blazor/serviceworkers/wwwroot/service-worker.js’ and ‘/Users/usertest/Workspace/BlazorPwaTest/Client/wwwroot/service-worker.js’.
The only way to solve this is to Close and Open VS again.
Everything else was working like a charm.
Will Blazor PWA apps be supported on Xbox?
Can we use native device features, like the camera in a Blazor PWA app?
Cool =) | https://devblogs.microsoft.com/visualstudio/building-a-progressive-web-app-with-blazor/ | CC-MAIN-2020-34 | refinedweb | 1,902 | 64.61 |
On 2012-10-02 22:24-0400 Hezekiah M. Carty wrote:
> I've used PLplot to plot a lot of data on maps, including mixing data
> from shapefiles and other sources. Based on my experience I think we
> may be going off in the wrong direction here.
Hi Hez:
In order to be sure we understand the rest of your comments, could you
explain in more detail how you made map data accessible to PLplot so
you could decorate existing maps like you described above? I assume
your applications used the API of libraries like shapelib, MDAH, and
OGR to import the map data you desired and then you called on the
libplplot routines to manipulate those data further, but that
assumption needs confirmation.
If you do confirm that was the approach you took, my vision is quite
similar except my idea was it would be an added convenience for many
map/PLplot users to provide a PLplot API that imports map data so their
applications can manipulate those data with further PLplot commands.
Of course, if some users didn't like that PLplot map importing API,
they would be free to import map data into their applications in any
other way they like, but our job would be to make that PLplot map
importing API reasonably attractive and useful for a large majority of
users' map importing needs without making that API too complicated.
If you respond you cannot simplify the shapelib, MDAH, and OGR API's
any more than they are now for PLplot's _limited_ needs, then
obviously it makes little sense for PLplot to provide duplicates of
those map importing API's when the user can just call those libraries
directly from his application.
However, private map importing functionality _is_ needed behind the scenes for
plmap (as used in example 19), and I think we should support that "hidden"
approach (where the map data is simply plotted and not exposed to the
calling routine) as well as the "exposed" approach (which does
expose the map data to the calling routine) as outlined above.
As we all know, the map importing that plmap does behind the scene
currently uses an obscure map file format with very few maps
associated with it, but I think we should expand that map importation
capability along the lines I described above for the "exposed"
approach.
Would a good compromise be to work exclusively on improving the map
import capability behind plmap for now? If it turns out we like how
we simplify the shapelib, MDAH, and OGR API's in that private case for
plmap's limited map import needs, then we can turn that into a general
public API suitable for the "exposed" case described above. But if
the improved map importing functionality we develop for plmap is not
going to be added value for users (say, because it just duplicates
shapelib, MDAH, and OGR API's) then we should keep it private for the
exclusive use of plmap.
I certainly agree with you that we need examples of the "exposed" case
(whether implemented with direct calls to shapelib, MDAH, or OGR or
via some PLplot map import API if that turns out we develop that). I
haven't thought too much about your call for improvements in PLplot's
functions to deal with additional general plotting needs that would be
useful for the "exposed" map case, but I assume those ideas are well
taken as | http://sourceforge.net/p/plplot/mailman/message/29917195/ | CC-MAIN-2014-23 | refinedweb | 576 | 50.43 |
This is a quick little C program which demonstrates how to feed data to the
Windows clipboard. It also happens to be a useful little utility, if you
use the Windows command line much. It works just fine with Cygwin.
See Reading from the Windows Clipboard for a symmetrical utility which
pastes from the clipboard to standard output.
If you have any problems with this, please let me know. If you actually find
it useful or informative, for God's sake let me know!
/*****************************************************************************
* ccopy.c
* Simple program to copy stdin to Win32 clipboard
*
* You'll have to link this to user32.lib (or libuser32.a with Cygwin)
*
* I've built this with MSVC++6 and EGCS 2.91.57
*
* wharfinger 1/17/01
*
* Version 0.1 1/17/01 Thanks to some comments by Eraser_, I took a second
* look at streamtostr(), and noticed that it could be simplified. I also
* noticed a bug: If realloc() had failed, it would have tried to write
* through a NULL pointer. But it's fixed now.
*****************************************************************************/
#include <stdio.h>
#include <windows.h>
/* Arbitrary */
#define BUFSIZE 1024
#define HASSLE_USER "\
usage: Pipe text through it. The text will be copied to the clipboard.\n\
\n\
This program was written by [email protected]; no warranty blah\n\
blah blah, public domain etc.\n"
/* Copy string to clipboard */
BOOL strtocb( char *s, size_t size );
/* Copy stream to string; allocate as needed */
size_t streamtostr( char **ps, size_t *size, FILE *stream );
/*---------------------------------------------------------------------------*/
int main( int argc, char *argv[] )
{
/* If somebody just types "ccopy", drop science on the poor dumb bastard.
* If there are any command line arguments at all, do the same. Since
* the only argument we could possibly have is "-?", anything else is
* wrong, and we'd respond the same way in any case -- don't bother to
* check.
*/
if ( argc > 1 || isatty( fileno( stdin ) ) ) {
fprintf( stderr, HASSLE_USER );
} else {
char * s = NULL;
size_t size = 0;
/* Try to slurp stdin into a string. */
if ( streamtostr( &s, &size, stdin ) ) {
/* If that worked, try to stuff it in the clipboard. */
int rtn = strtocb( s, size );
free( s );
return ! rtn;
}
}
return 1;
}
/*---------------------------------------------------------------------------*/
BOOL strtocb( char *s, size_t size )
{
HGLOBAL glob = NULL;
char * dest = NULL;
if ( ! s )
return FALSE;
else {
OpenClipboard( NULL ); /* Argument is HWND; we don't have one */
EmptyClipboard(); /* Discard whatever was there. */
/* "Global memory" is a relic of 16-bit Windows; all I did back then
* was DOS, so I really don't grok. In any case, if you use the
* GMEM_MOVEABLE flag, you get a handle back from GlobalAlloc()
* instead of a pointer. You then call GlobalLock() to get an
* pointer that you can write through. SetClipboardData() expects a
* handle, so we have to give it a handle; hence all the bullshit.
*/
if ( NULL != ( glob = GlobalAlloc( GMEM_MOVEABLE, size + 1 ) )
&& NULL != ( dest = (char *)GlobalLock( glob ) ) )
{
strcpy( dest, s );
GlobalUnlock( glob );
/* CF_TEXT is an integer constant which says what kind of data
* we claim to be copying. There are a dozen or so standard
* ones, and you can define more with RegisterClipboardFormat().
* The clipboard can hold different data for each of an arbitrary
* number of different "formats", all at the same time.
*/
SetClipboardData( CF_TEXT, glob );
}
/* We're done with it. When our process exits, it'll be released
* anyway, and you and I both happen to know that we'll be doing that
* really soon. So we're just playing nice not for any practical
* reason, but merely not to make Jesus cry. A normal Windows
* program will keep on executing for an arbitrarily long time, of
* course, and in that case it really would matter.
*
* We never free glob because that belongs to the system now. After
* we exit, it'll still persist. That's the whole point.
*/
CloseClipboard();
return dest != NULL;
}
}
/*---------------------------------------------------------------------------*/
size_t streamtostr( char **ps, size_t *psize, FILE *stream )
{
int c;
size_t copied = 0; /* Bytes copied since last realloc() */
/* Aw, what the hell, start fresh. */
if ( *ps != NULL )
free( *ps );
/* Try to allocate */
if ( ! ( (*ps) = (char *)malloc( BUFSIZE ) ) )
return *psize = 0;
if ( stream ) {
/* Loop, copy, etc. */
for ( *psize = 0; ( c = fgetc( stream ) ) != EOF; /**/ ) {
/* If we've used up all the allocated memory and we're still
* going, allocate some more. If that fucks up, we're doomed.
*/
if ( copied == BUFSIZE - 1 ) {
copied = 0;
if ( ! ( *ps = (char *)realloc( *ps, (*psize) + BUFSIZE ) ) )
return *psize = 0;
}
(*ps)[ (*psize)++ ] = c;
copied++;
}
/* Play nice, now. */
if ( *ps )
(*ps)[ *psize ] = '\0';
}
return *psize;
}
/*****************************************************************************/
/*****************************************************************************/
Log in or register to write something here or to contact authors.
Need help? [email protected] | https://everything2.com/title/Writing+to+the+Windows+Clipboard | CC-MAIN-2018-05 | refinedweb | 746 | 74.59 |
>> display an image in JavaFX?
Advanced Java Using Eclipse IDE: Learn JavaFX & Databases
33 Lectures 7.5 hours
Complete Oracle JavaFX Bootcamp! Build Real Projects In 2021
64 Lectures 12.5 hours
Emenwa Global, Ejike IfeanyiChukwu
The javafx.scene.image.Image class is used to load an image into a JavaFX application. This supports BMP, GIF, JPEG, and, PNG formats.
JavaFX provides a class named javafx.scene.image.ImageView is a node that is used to display, the loaded image.
To display an image in JavaFX −
Create a FileInputStream representing the image you want to load.
Instantiate the Image class bypassing the input stream object created above, as a parameter to its constructor.
Instantiate the ImageView class.
Set the image to it by passing above the image object as a parameter to the setImage() method.
Set the required properties of the image view using the respective setter methods.
Add the image view mode to the group object.
Example
import java.io.FileInputStream; import java.io.IOException; import java.io.InputStream; import javafx.application.Application; import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.image.Image; import javafx.scene.image.ImageView; import javafx.stage.Stage; public class ImageViewExample extends Application { public void start(Stage stage) throws IOException { //creating the image object InputStream stream = new FileInputStream("D:\images\elephant.jpg"); Image image = new Image(stream); //Creating the image view ImageView imageView = new ImageView(); //Setting image to the image view imageView.setImage(image); //Setting the image view parameters imageView.setX(10); imageView.setY(10); imageView.setFitWidth(575); imageView.setPreserveRatio(true); //Setting the Scene object Group root = new Group(imageView); Scene scene = new Scene(root, 595, 370); stage.setTitle("Displaying Image"); stage.setScene(scene); stage.show(); } public static void main(String args[]) { launch(args); } }
Output
- Related Questions & Answers
- How to display an image in HTML?
- How to add scroll bar to an image in JavaFX?
- How to add context menu to an image in JavaFX?
- How to add an image as label using JavaFX?
- How to add an image to a button (action) in JavaFX?
- How to Invert the color of an image using JavaFX?
- How to change the aspect ratio of an image in JavaFX?
- How to set image as hyperlink in JavaFX?
- How to add image patterns to nodes in JavaFX?
- OpenCV JavaFX application to alter the sharpness of an image
- How to add image to the menu item in JavaFX?
- JavaFX example to decrease the brightness of an image using OpenCV.
- How can I display an image using Pillow in Tkinter?
- How can I display an image using cv2 in Python?
- How to display OpenCV Mat object using JavaFX? | https://www.tutorialspoint.com/how-to-display-an-image-in-javafx | CC-MAIN-2022-40 | refinedweb | 440 | 62.34 |
A Flask blueprint that provides a faceted search interface for bibliographies based on Zotero.
Project description
Kerko
Kerko is a web application component for the Flask framework that provides a user-friendly search and browsing interface for sharing a bibliography managed with the Zotero reference manager.
How it works
Kerko is implemented in Python as a Flask blueprint and, as such, cannot do much unless it is incorporated into a Flask application. A sample application is available, KerkoApp, which anyone with basic requirements could deploy directly on a web server. It is common, however, to integrate Kerko into a larger application, either derived from KerkoApp or custom-built for specific needs. The Kerko-powered bibliography might be just one section of a larger website.
Kerko does not provide any tools for managing bibliographic records. Instead, a well-established reference management software, Zotero, is used for that purpose. The Zotero desktop application provides powerful tools to individuals or teams for managing bibliographic data, which it stores in the cloud on zotero.org. Kerko can be configured to automatically synchronize its search index from zotero.org on a regular basis, ensuring that visitors get an up-to-date even if it is changing frequently.
The combination of Kerko and Zotero gives you the best of both worlds: a user-friendly interface for end-users of the bibliography, and a powerful bibliographic reference management tool for working on the bibliography's content.
Features
The following features are implemented in Kerko:
- Faceted search interface: allows exploration of the bibliography both in search mode and in browsing mode, potentially suiting different user needs, behaviors and abilities. For example, users with a prior idea of the topic or expected results are able to enter keywords or a more complex query in a search field, while those who wish to become familiar with the content of the bibliography or discover new topics may choose to navigate along the proposed facets, to refine or broaden their results. Since both modes are integrated into a single interface, it is possible to combine them.
- Search syntax: boolean operators (AND, OR, NOT; AND is implicit between any two terms separated by a whitespace), logical grouping (with parenthesis), sequence of words (with double quotes (")).
- Search is case-insentitive, accents are folded, and punctuation is ignored. To further improve recall (albeit at the cost of precision), stemming is also performed on terms from most text fields (e.g., title, abstract, notes). It relieves the user from having to specify all variants of a word when searching; for example, terms such as "search", "searches", and "searching" all return the same results. The Snowball algorithm is used for that purpose.
- Sort options: by relevance score (only applicable with text search), by publication date, by author, by title.
- Relevance scoring: provided by the Whoosh library and based on a score computed with the BM25F algorithm, which determines how important a term is to a document in the whole collection of documents, while taking into account its relation to document structure (most fields are neutral, but the score is boosted when a term appears in specific fields, e.g., DOI, ISBN, ISSN, title, author/contributor).
- Facets: allow filtering by topic (Zotero tag), by resource type (Zotero item type), by publication year. Application may be configured to add facets based on collections and subcollections; in this case, any top-level collection can represent a facet, and each subcollection a value within the facet. Using Zotero's ability to assign any given item to multiple collections, a faceted classification scheme can be modeled (including hierarchies within facets).
- Language support: the default user interface is in English, but some translations are provided, and it can be translated using standard gettext-compatible tools (see the Translating section). Also to consider: locales supported by Zotero (which provides the names of fields, item types, and author types), and languages supported by Whoosh: ar, da, nl, en, fi, fr, de, hu, it, no, pt, ro, ru, es, sv, tr.
- Responsive design: the simple default implementation works on large monitors as well as on small screens. It is based on Bootstrap.
- Customizable front-end: applications may partly or fully replace the default templates, scripts and stylesheets with their own.
- Semantic markup: users may easily import citations into their own reference manager software, either from search results pages or individual bibliographic record pages, both of which embed bibliographic metadata (using the OpenURL COinS model). Zotero Connector, for example, will automatically detect the metadata present in the page, but that applies to many other reference management software as well.
- Printing: stylesheets are provided for printing individual bibliographic records as well as lists of search results. When printing search results, all results get printed (not just the current page of results).
- Modularity: although a standalone application is available, Kerko is designed not as a standalone application, but to be part of a larger Flask application.
Demo site
A demo KerkoApp installation is available at. You may also visit to view its source data on zotero.org.
Requirements
Kerko requires Python 3.6 or later.
It has only been tested under Linux (so far). If you run it on other platforms, (with or without encountering compatibility issues), please let us know.
Dependencies
The following packages will be automatically installed when installing Kerko:
- Babel: utilities for internationalization and localization.
- Bootstrap-Flask: helper for integrating Bootstrap.
- environs: helper for separating configuration from code.
- Flask: web application framework.
- Flask-BabelEx: allows Kerko to provide its own translations, at the blueprint level.
- Flask-WTF: simple integration of Flask and WTForms.
- Jinja2: template engine.
- Pyzotero: Python client for the Zotero API.
- Werkzeug: WSGI web application library (also required by Flask).
- Whoosh: pure Python full-text indexing and searching library.
- WTForms: web forms validation and rendering library.
The following external resources are loaded from CDNs by Kerko's default templates (but could be completely removed or replaced by your application):
- Bootstrap: front-end component library for web applications.
- FontAwesome: beautiful open source icons.
- jQuery: JavaScript library (required by Bootstrap).
- Popper.js: JavaScript library for handling tooltips, popovers, etc. (used by Bootstrap).
Getting started
This section only applies if you intend to integrate Kerko into your own application. If you are more interested into the standalone KerkoApp application, please refer to its installation instructions.
We'll assume that you have some familiarity with Flask and suggest steps for
building a minimal app, let's call it
hello_kerko.py, to get you started.
The first step is to install Kerko. As with any Python library, it is highly recommended to install Kerko within a virtualenv.
Once the virtualenv is set and active, use the following command:
pip install kerko
In
hello_kerko.py, configure variables required by Kerko and create your
appobject, as in the example below:
from flask import Flask from kerko.composer import Composer app = Flask(__name__) app.config['SECRET_KEY'] = '_5#y2L"F4Q8z\n\xec]/' # Replace this value. app.config['KERKO_ZOTERO_API_KEY'] = 'xxxxxxxxxxxxxxxxxxxxxxxx' # Replace this value. app.config['KERKO_ZOTERO_LIBRARY_ID'] = '9999999' # Replace this value. app.config['KERKO_ZOTERO_LIBRARY_TYPE'] = 'group' # Replace this value. app.config['KERKO_COMPOSER'] = Composer()
The
SECRET_KEYvariable is required for generating secure tokens in web forms. It is usually set in an environment variable rather than in Python code (the latter usually goes in a code repository, making its value not so secret), but here we're taking the minimal route!
The
KERKO_ZOTERO_API_KEY,
KERKO_ZOTERO_LIBRARY_IDand
KERKO_ZOTERO_LIBRARY_TYPEvariables are required for Kerko to be able to access your Zotero library. See the Configuration variables section for details on how to properly set these variables.
The
KERKO_COMPOSERvariable, on the other hand, specifies key elements needed by Kerko, e.g., fields for display and search, facets for filtering. These are defined by instanciating the
Composerclass. Your application may manipulate the resulting object at configuration time to add, remove or alter fields, facets, sort options or search scopes. See the Kerko Recipes section for some examples.
Also configure the Flask-BabelEx and Bootstrap-Flask extensions:
from flask_babelex import Babel from flask_bootstrap import Bootstrap babel = Babel(app) bootstrap = Bootstrap(app)
See the respective docs for Flask-BabelEx and Bootstrap-Flask for more details.
Instanciate the Kerko blueprint and register it in your app:
from kerko import blueprint as kerko_blueprint app.register_blueprint(kerko_blueprint, url_prefix='/bibliography')
The
url_prefixargument defines the base path for every URL provided by Kerko.
In the same directory as
hello_kerko.pywith the virtualenv active, run the following shell command:
export FLASK_APP=hello_kerko.py flask kerko index
Kerko will retrieve your bibliographic data from zotero.org. If you have a large bibliography, this may take a while (and there is no progress indicator). In production use, that command is usually added to the crontab file for regular execution.
To list all commands provided by Kerko:
flask kerko --help
Run your application:
flask run
Open in your browser and explore the bibliography.
You have just built a really minimal application for Kerko. Check KerkoApp for a slightly more complete example.
Configuration variables
The variables below are required and have no default values:
KERKO_ZOTERO_LIBRARY_ID: Your personal userID for API calls, as given on zotero.org (you must be logged-in on zotero.org).
KERKO_ZOTERO_LIBRARY_TYPE: The type of library on zotero.org (either
'user'for your main personal library, or
'group'for a group library).
KERKO_ZOTERO_API_KEY: The API key associated to the library on zotero.org. You have to create that key.
KERKO_COMPOSER: An instance of the
kerko.composer.Composerclass.
Any of the following variables may be added to your configuration if you wish to override their default value:
KERKO_TITLE: The title to display in web pages. Defaults to
'Kerko'.
KERKO_DATA_DIR: The directory where to store the search index. Defaults to
data/kerko.
BABEL_DEFAULT_LOCALE: The default language of the user interface. Defaults to
'en'. Your application may set this variable and/or implement a locale selector function to override it (see the Flask-BabelEx documentation).
KERKO_USE_TRANSLATIONS: Use translations provided by the Kerko package. Defaults to
True. When this is set to
False, translations may be provided by the application's own translation catalog.
KERKO_WHOOSH_LANGUAGE: The language of search requests. Defaults to
'en'. You may refer to Whoosh's source to get the list of supported languages (
whoosh.lang.languages) and the list of languages that support stemming (
whoosh.lang.has_stemmer()).
KERKO_ZOTERO_LOCALE: The locale to use with Zotero API calls. This dictates the language of Zotero item types, fields and creator types. Defaults to
'en-US'. Supported locales are listed at, under "locales".
KERKO_PAGE_LEN: The number of search results per page. Defaults to
20.
KERKO_CSL_STYLE: The citation style to use for formatted references. Can be either the file name (without the
.cslextension) of one of the styles in the Zotero Styles Repository (e.g.,
apa) or the URL of a remote CSL file. Defaults to
'apa'.
KERKO_RESULTS_ABSTRACT: Show abstracts in search result pages. Defaults to
False.
KERKO_PAGER_LINKS: Number of pages to show in the pager (not counting the current page). Defaults to
8.
KERKO_FACET_COLLAPSING: Allow collapsible facets. Defaults to
False.
KERKO_PRINT_ITEM_LINK: Provide a print button on item pages. Defaults to
False.
KERKO_PRINT_CITATIONS_LINK: Provide a print button on search results pages. Defaults to
False.
KERKO_PRINT_CITATIONS_MAX_COUNT: Limit over which the print button should be hidden from search results pages. Defaults to
0(i.e. no limit).
KERKO_ZOTERO_MAX_ATTEMPTS: Maximum number of tries after the Zotero API has returned an error or not responded during indexing. Defaults to
10.
KERKO_ZOTERO_WAIT: Time to wait (in seconds) between failed attempts to call the Zotero API. Defaults to
120.
KERKO_ZOTERO_BATCH_SIZE: Number of items to request on each call to the Zotero API. Defaults to
100(which is the maximum currently allowed by the API).
KERKO_ZOTERO_START: Skip items, start at the specified position. Defaults to
0. Useful only for development/tests.
KERKO_ZOTERO_END: Load items from Zotero until the specified position. Defaults to
0(no limit). Useful only for development/tests.
Kerko Recipes
TODO
Known limitations
- The system can probably handle relatively large bibliographies (it has been tested so far with ~15k entries), but the number of distinct facet values has more impact on response times. For the best response times, it is recommended to limit the number of distinct facet values to a few hundreds.
- Kerko can only manage a single bibliography per application.
- Although Kerko might be integrated in a multilingual web application were the visitor may select a language, Zotero does not provide a way to manage tags or collections in multiple languages. Thus, there is no easy way for Kerko to provide those names in the user's language.
- Whoosh does not provide much out-of-the-box support for non-Western languages. Therefore, search might not work very well with such languages.
- No other referencement management tool than Zotero may serve as a back-end for Kerko.
Design choices
- Do not build a back-end. Let Zotero act as the "content management" system.
- Allow Kerko to integrate into richer web applications.
- Only implement in Kerko features that are related to the exploration of a bibliography. Let other parts of the web application handle all other features that might be needed.
- Use a lightweight framework (Flask) to avoid carrying many features that are not needed.
- Use pure Python dependencies to keep installation and deployment simple. Hence the use of Whoosh for search, for example, instead of Elasticsearch or Solr.
- Use a classic architecture for the front-end. Keep it simple and avoid asset management. Some will want to replace the front-end anyway.
Translating Kerko
Kerko can be translated using Babel's setuptools integration.
The following commands should be executed from the directory that contains
setup.py, and the appropriate virtualenv must have been activated
beforehand.
Create or update the PO template (POT) file:
python setup.py extract_messages
Create a new PO file (for a new locale) based on the POT file:
python setup.py init_catalog --locale <your_locale>
Update an existing PO file based on the POT file:
python setup.py update_catalog --locale <your_locale>
Compile MO files:
python setup.py compile_catalog
You are welcome to contribute your translation. See the Submitting a translation section.
Contributing
Reporting issues
Issues may be submitted on Kerko's issue tracker. Please consider the following guidelines:
- Make sure that the same issue has not already been reported or fixed in the repository.
- Describe what you expected to happen.
- If possible, include a minimal reproducible example to help others identify the issue.
- Describe what actually happened. Include the full traceback if there was an exception.
Submitting code changes
Pull requests may be submitted against Kerko's repository. Please consider the following guidelines:
- Consider using Yapf to autoformat your code (with option
--style='{based_on_style: facebook, column_limit: 100}'). Many editors provide Yapf integration.
- Include a string like "Fixes #123" in your commit message (where 123 is the issue you fixed). See Closing issues using keywords.
Submitting a translation
Some guidelines:
- The PO file encoding must be UTF-8.
- The header of the PO file must be filled out appropriately.
- All messages of the PO file must be translated.
Please submit your translation as a pull request against Kerko's repository, or by e-mail, with the PO file included as an attachment (do not copy the PO file's content into an e-mail's body, since that could introduce formatting or encoding issues).
Supporting the project
Nurturing an open source project such as Kerko, following up on issues and helping others in working with the system is a lot of work, but hiring the original developers of Kerko can do a lot in ensuring continued support and development of the project.
If you need professionnal support related to Kerko, have requirements not currently implemented in Kerko, want to make sure that some Kerko issue important to you gets resolved, or if you just like our work and want to hire us for different project, please e-mail us.
Project background
Kerko was inspired by two prior projects:
- Bibliographie sur l’histoire de Montréal, developed in 2014 by David Lesieur and Patrick Fournier, of Whisky Echo Bravo, for the Laboratoire d'histoire et de patrimoine de Montréal (Université du Québec à Montréal).
- Bibliography on English-speaking Quebec, developed in 2017 by David Lesieur, for the Quebec English-Speaking Communities Research Network (QUESCREN) (Concordia University).
Later on, it became clear that other organizations needed a similar solution. However, software from the prior projects had to be rewritten so it could more easily be configured for different bibliographies from organizations with different needs. That led to Kerko, whose development was made possible through the following projects:
- TODO: list project 1 when it's live.
- TODO: list project 2 when it's live.
Etymology
The name "Zotero" reportedly derives from the Albanian word zotëroj, which means "to learn something extremely well, that is to master or acquire a skill in learning" (Source: Etymology of Zotero).
The name "Kerko" is a nod to Zotero as it takes a similar etymological route: it derives from the Albanian word kërkoj, which means "to ask, to request, to seek, to look for, to demand, to search" and seems fit to describe a search tool.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/Kerko/0.3a0/ | CC-MAIN-2020-10 | refinedweb | 2,863 | 57.77 |
On Fri, 7 Jul 2006 00:10:20 -0700, Yusnel Rojas García <yrojass at gmail.com> wrote: >from twisted.names import client, dns > >def somefunction(somepars): > r = client.Resolver('/etc/resolv.conf') ^ Resolver takes an IP address here, not a filename. > d = r.resolve(dns.Query('', dns.MX, dns.IN)) ^ Resolver does not have a "resolve" method. You can read the API documentation for IResolver here: Perhaps you meant "lookupMailExchange"? > d.Callbacks(somefunct1, somefunctErr2) ^ Deferreds do not have this method. You can read the API documentation for Deferred here: Perhaps you meant "addCallbacks"? > return ? ^ what you want to return here is 'd' That way callers of this function can get the results, but they have to wait for the results as well. The main thing you want to do, e.g. stop your program and wait in-place for results to appear, is completely impossible. I've watched a number of people go through this process, and I can say with some certainty that what you need to do is focus on understanding why it's impossible, not searching for esoteric ways to make it happen. If you _do_ manage to do something that appears to be what you want, it will be wrong in surprising and upsetting ways fairly quickly. If you are just looking for a way to mess around interactively with Deferred objects and their results, run twisted/conch/stdio.py. Here's a screenshot: I've also written a brief example of what you want to do that actually works. Notice the parts of the code where the result does not exist vs. those where it does. Keep in mind that *the reactor needs to do work*, such as sending and receiving traffic, in order to produce a result; it's impossible to produce one immediately. # ---- cut here ---- # We don't need to build queries ourselves, just a resolver. from twisted.names.client import Resolver theResolver = Resolver("/etc/resolv.conf") def getMX(hostname="twistedmatrix.com", r=theResolver): d = r.lookupMailExchange(hostname) # We're going to look up twistedmatrix.com's address, but we don't have a # result *now*... def showAddress((answers, authority, additional)): # but we _do_ have a result *now*. the Deferred calls this function # when it gets a result, because (see below) ... print 'Got', len(answers), 'results.' for resourceRecordHeader in answers: mxRecord = resourceRecordHeader.payload dnsName = mxRecord.name nameString = dnsName.name print "one answer: ", nameString # pass through the values we were given in case callers of this # function want to use them. return (answers, authority, additional) # here we tell the Deferred: when you get a successful result, call that # function. d.addCallback(showAddress) # We return the Deferred so that callers of this function can act upon it. return d from twisted.internet import reactor # Kick off our code: rather than addCallback, we use addBoth here, because we # want to stop the reactor whether it fails or not. If we only wanted to act # in case the DNS operation failed, we could use addErrback instead. getMX().addBoth(lambda ignored: reactor.stop()) # You don't have to do this in most Twisted programs, because they will be # running in the reactor already, and you generally don't want to stop the # reactor if your program is supposed to keep running. reactor.run() | http://twistedmatrix.com/pipermail/twisted-python/2006-July/013611.html | CC-MAIN-2018-05 | refinedweb | 545 | 66.84 |
Results 1 to 1 of 1
Thread: Scroll an image with jQuery
- Join Date
- Aug 2006
- 311
- Thanks
- 0
- Thanked 1 Time in 1 Post
Scroll an image with jQuery
I am trying to scroll an image with jquery from one point to another with jQuery
I am using the following code but cannot seem to get it to work
PHP Code:
var pos = $window.scrollTop(); //position of the scrollbar
var windowHeight = $window.height(); //get the height of the window
function test(startPos, endPos, scrollPos, windowHeight)
{
var obj = $('#ios');
var position = obj.position();
console.log(position.top + (startPos - endPos)/ (scrollPos/100))
return (position.top + (startPos - endPos)/ (scrollPos/100))
}
$ios.css({'top':test(500, 120, pos, windowHeight) }); | http://www.codingforums.com/javascript-frameworks/244762-scroll-image-jquery.html | CC-MAIN-2017-43 | refinedweb | 116 | 62.68 |
This article is a compilation of questions and answers which were asked between 25 April 2005 and 8 June 2005 (45 days) on the Visual C++ forum.
Thanks to everyone for their contributions on the CodeProject forum to help fellow peers in need. I have changed some people's original comments to suit/maintain the look and feel of the article (I beg apology for that). If I have missed something, please feel free to mail me or leave a comment at the bottom.
Q 1.01 How to get application currently activated, Name of Application and Window Caption ?
Q 1.02 How to get drive volume serial number in VC++ ?
Q 1.03 How to get the absolute path of the virtual folder like My Documents, Recycle Bin?
Q 1.04 Is there is any wrapper present for the Xml Parser in Visual C++?
Q 1.05 How i can change my console display mode to full screen mode in a dos based c++ program?
Q 1.06 How do i disable mouse wheel in comboBox (MFC) ?
Q 1.07 How i can run an Application file and then stop it ?
Q 1.08 Is there an "Official C++ rules" (perhaps guidelines is better than rules) web site?
Q 1.09 What is the best way to get the modified date/time of a file ?
Q 1.10 How to monitor Directory and Registry for any Modification or Updation?
Q 1.11 How to Detect Multiple VGA card in Display?
Q 1.12 How To Make Data Arrival to whole application ?
Q 1.13 Can i normalize this array into 0.0 to 0.1 without precision lost (and still maintain it as float)?
Q 1.14 How to run an application when Computer is about to shutdown?
Q 1.15 How to find Unicode FILENAMES from a certain Directory?
Q 1.16 How can I convert a int to a char*?
Q 1.17 How do I add File Version information in project?
Q 1.18 What is the best way to sleep 1 ms?
Q 3.01 How to determine if IE has finished loading?
Q 3.02 how to update edit control text ?
Q 3.03 How to get output of console program?
Q 3.04 How to add a progress bar in status bar using Win32 Api ?
Q 3.05 How to set background image for a dialog in MFC ?
Q 3.06 How to detect if an internet connection is available?
Q 3.07 How to Connect with website and Save Html Page in File ?
Q 3.08 How can you retrieve the HINSTANCE when you only have HWND ?
Q 3.09 How to Set root Folder in CFileDialog?
Q 3.10 How to get list of computers in specified workgroup?
Q 3.11 How to get the PhysicalDrive number which the logical drive located?
Q 3.12 How to programatically change the 'Work Offline' state to the normal 'Online' state?
Q 3.13 How to find the way to obtain the user names (or user id's) of the owners of process ?
Q 3.14 How to catch that user clicked 'x' button ?
Q 3.15 How To Launch email client with attachment ?
Q 3.16 How can I change the foreground color of an edit control?
Q 3.17 Is there a way of determining which OS running?
Q 3.18 How to get notified when user logoff?
Q 3.19 How can I find out how much memory (Private Bytes) my app currently has ?
Q 3.20 How can I detect whether a certain path or directory exists or not?
Q 3.21 How to refresh Explorer window after Registry key changed?
Q 3.22 How can I determine the height in pixels of status bars?
Q 3.23 How do you duplicate a file?
Q 3.24 How to rotate a GDI object in VC++ without using setworldtransform?
Q 3.25 What is #pragma pack()?
Q 5.01 How to remove close button from window?
Q 5.02 How to change foreground and back ground color of Edit Control?
Q 5.03 What is the structure of .RC file ?
Q 5.04 How to create a pop up message near the right hand corner of the task bar?
Q 5.05 how to make setup in visual c++ ?
Q 5.06 How to convert text file to BMP on runtime?
Q 5.07 How can I get the default printer which is connected on a pc directly or on a LAN?
Q 5.08 How to change the HTML currently loaded in IE, programmatically?
Q 5.09 How to use database created in MS Access 2000 in pc where MS Access in not Installed?
Q 5.10 What's the SDK function of the DeflateRect of MFC?
Q 5.11 If I add 100 to current date, how to get that modified date?
Q 5.12 How to send the Image File over the Socket ?
Q 5.13 How my service can impersonate Administrator (the service has access to the login information) while interacting with the Service Manager?
Q 5.14 Is there an SDK func that can convert big buffers from big endian to little endian ?
Q 5.15 How to convert the Hexadecimal into Binary?
Q 5.16 How programatically examine what text in clipboard?
Q 5.17 How to programmatically know that Webcam is attached to USB port?
Q 5.18 How to Send Email from my program without having email account?
Q 5.19 How to send Data/UDT thorugh Window Message?
Q 5.20 How to get the number of bytes/packets being received / sent over an NIC (standard ethernet card)?
Q 5.21 Could anybody refer me to RAW DATABASE Format?
Q 5.22 How to check is current document in closing in MDI structure?
Q 5.23 what is EV_BREAK event detection pre-condition in serial comm program?
Q 5.24 How to get the DPI value from C++ code?
Q 5.25 Is there a simple way of making the Exe for my program get a different icon depending on whether it's a debug or release build?
Q 5.26 Is there anyway to handle the forced exit of an application by task manager?
Q 5.27 how to determine the name of the Computer in VC++?
Q 5.28 how to change the position of a checkbox control during runtime?
Q 5.29 How can I show hidden folders when calling SHBrowseForFolder function?
Q 5.30 How to convert UNICODE data into Ascii ?
Q 1.01 How to get application currently activated, Name of Application and Window Caption ? [top^]
A. ThatsAlok Quoted :-
These APIs will help:
[GetForegroundWindow] - gets the handle of the application window that currently has the keyboard focus.
[GetWindowText] - retrieves the caption text associated with a window handle.
[GetWindowModuleFileName] - returns the path of the application!
Q 1.02 How to get drive volume serial number in VC++ ? [top^]
A. DavidCrow Quoted :-
Either use [GetVolumeInformation] or see here: /library/en-us/fileio/fs/enumerating_mount_points.asp
Q 1.03 How to get the absolute path of the virtual folder like My Documents, Recycle Bin etc.? [top^]
A. Dean Michaud Quoted :-
SHGetSpecialFolderPath()
SHGetFolderPath()
Q 1.04 Is there is any wrapper present for the Xml Parser in Visual C++? [top^]
A. cedric moonen Quoted :-
Maybe you'll find some interesting things
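One commonly used option is the MSXML COM parser via #import; a minimal sketch (assumes MSXML 4 is installed; with MSXML 3 or 6 only the DLL name and version suffix change, and "books.xml" plus the XPath query are placeholders):

#import <msxml4.dll>
#include <iostream>

int main()
{
    CoInitialize(NULL);
    {
        MSXML2::IXMLDOMDocumentPtr doc;
        doc.CreateInstance(__uuidof(MSXML2::DOMDocument40));
        doc->async = VARIANT_FALSE;
        if (doc->load("books.xml") == VARIANT_TRUE)
        {
            MSXML2::IXMLDOMNodePtr node = doc->selectSingleNode("//book/title");
            if (node != NULL)
                std::cout << (const char*)node->text << std::endl;
        }
    }   // smart pointers must release before CoUninitialize()
    CoUninitialize();
    return 0;
}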
Q 1.05 How i can change my console display mode to full screen mode in a dos based c++ program? [top^]
A. stolid_rock Quoted :-
You will have to use the [SetConsoleWindowInfo] function to set the size. But before that you have to use functions like [GetConsoleWindow], [GetConsoleScreenBufferInfo] for the actual effect to take place.
Q 1.06 How do i disable mouse wheel in comboBox (MFC) ? [top^]
A. Rage Quoted :-
A very nasty trick would be to derive a class from the combobox control, use it in place of your combobox control, capture the mousewheel messages WM_MOUSEWHEEL and then don't forward them to the base class...
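A sketch of that approach (the class name is mine):

class CNoWheelComboBox : public CComboBox
{
protected:
    // Swallow the wheel message instead of forwarding it to the base class.
    afx_msg BOOL OnMouseWheel(UINT nFlags, short zDelta, CPoint pt)
    {
        return TRUE;
    }
    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CNoWheelComboBox, CComboBox)
    ON_WM_MOUSEWHEEL()
END_MESSAGE_MAP()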
Q 1.07 How can I run an application file for a while and then stop it? [top^]
A. CodeBeetle Quoted :-
Use [CreateProcess()] to create the process, then call [WaitForSingleObject()] to wait on the process handle (optionally with a timeout), and then call [TerminateProcess()] to make the application terminate. Example:
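A minimal sketch of that sequence (notepad.exe and the 5-second timeout are placeholders):

#include <windows.h>

void RunForAWhile()
{
    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    TCHAR szCmd[] = TEXT("notepad.exe");   // placeholder command line

    if (CreateProcess(NULL, szCmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
    {
        // Wait until the process exits on its own, or 5000 ms elapse.
        if (WaitForSingleObject(pi.hProcess, 5000) == WAIT_TIMEOUT)
            TerminateProcess(pi.hProcess, 0);   // still running: kill it

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
}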
Q 1.08 Is there an "Official C++ rules" (perhaps guidelines is better than rules) web site? [top^]
See the
Q 1.09 What is the best way to get the modified date/time of a file ? [top^]
How about [GetFileAttributesEx()]?
Ravi Bhavnani Quoted :-
or CFile::GetStatus() or _stat() or GetFileTime()
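A small sketch built on GetFileAttributesEx() (converting to local time is optional):

#include <windows.h>

BOOL GetModifiedTime(LPCTSTR szPath, SYSTEMTIME* pst)
{
    WIN32_FILE_ATTRIBUTE_DATA fad;
    if (!GetFileAttributesEx(szPath, GetFileExInfoStandard, &fad))
        return FALSE;

    FILETIME ftLocal;
    FileTimeToLocalFileTime(&fad.ftLastWriteTime, &ftLocal);
    return FileTimeToSystemTime(&ftLocal, pst);
}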
Q 1.10 How to monitor Directory and Registry for any Modification or Updation? [top^]
For monitoring directory/file changes use [ReadDirectoryChangesW], and for registry monitoring, [RegNotifyChangeKeyValue].
Q 1.11 How to Detect Multiple VGA card in Display? [top^]
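A. One way (a suggestion, not from the original thread) is to enumerate the adapters with EnumDisplayDevices():

#include <windows.h>
#include <stdio.h>

int CountDisplayAdapters()   // returns the number of real adapters found
{
    DISPLAY_DEVICE dd;
    dd.cb = sizeof(dd);

    int nAdapters = 0;
    for (DWORD i = 0; EnumDisplayDevices(NULL, i, &dd, 0); ++i)
    {
        // Skip mirroring pseudo-devices; count real adapters only.
        if (!(dd.StateFlags & DISPLAY_DEVICE_MIRRORING_DRIVER))
        {
            printf("Adapter %lu: %s\n", i, dd.DeviceString);   // ANSI build assumed
            ++nAdapters;
        }
    }
    return nAdapters;
}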
Q 1.12 How To Make Data Arrival to whole application / or make it global in whole application? [top^]
A. Gary R. Wheeler Quoted :-
Q 1.13 Can i normalize this array into 0.0 to 0.1 without precision lost (and still maintain it as float)? [top^]
A. Christian Graus Quoted :-
No - quite obviously if you compress the numbers into a smaller range, you will lose precision. Floating point numbers suffer from precision problems anyhow; you should be using double if you want to improve precision, and a fixed point format if you want to be absolutely exact.
Q 1.14 How to run an application when Computer is about to shutdown? [top^]
You would need a separate application running that handles the WM_QUERYENDSESSION message. When that message is received, return 0, start the "dialog based application," and then shut down on your own by using the ExitWindowsEx API.
Q 1.15 How to find Unicode FILENAMES from a certain Directory? [top^]
A. Ryan Binns Quoted :-
Look at [FindFirstFileW()], [FindNextFileW()] and [FindClose()]
Q 1.16 How can I convert a int to a char*? [top^]
A. mkuhac Quoted :-
TCHAR szBuffer[16];
INT iValue = 5;
::wsprintf(szBuffer, TEXT("%d"), iValue);
ThatsAlok Quoted :-
int nNum=10;
char szNum[10];
itoa(nNum,szNum,10);
Q 1.17 How do I add File Version information in project? [top^]
Just add a Version resource to it. Click on the ResourceView tab in the Workspace pane. Use Ctrl+R to insert a new resource. Select Version from the list.
Blake Miller Quoted :-
Add a resource file (*.RC) to your project. Add a version information resource to the resource file.
Q 1.18 What the best way to sleep 1 ms? [top^]
A. Adi Narayana Vemuru Quoted :-
write your own delay routine using high-resolution APIs like [QueryPerformanceCounter()] and [QueryPerformanceFrequency()]. You can achieve timing even in microseconds.
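A sketch of such a routine (it busy-waits, so it burns CPU while spinning):

#include <windows.h>

void SpinWaitMs(double ms)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    const LONGLONG ticks = (LONGLONG)(ms * freq.QuadPart / 1000.0);
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < ticks);
}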
Q 2.01 How to detect the IE/browser version? [top^]
A. Priyank Bolia Quoted :-
look at the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Version. I hope this will work.
DavidCrow Quoted :-
just use [GetFileVersionInfo()] and [VerQueryValue()].
Q 2.02 AfxGetAppName() is an MFC function, Is there is any Equivalent Win32 Api?[top^]
char szPath[_MAX_PATH], szBase[_MAX_FNAME], szExt[_MAX_EXT];
GetModuleFileName(NULL, szPath, sizeof(szPath));
_splitpath(szPath, NULL, NULL, szBase, szExt);
Michael Dunn Quoted :-
Call GetModuleFileName(), and then PathFindFileName() on the full path.
Q 2.03 How to get Remote IP duirng Socket Connection? [top^]
use [getpeername] api
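A sketch (IPv4; assumes Winsock is already initialized and 's' is a connected socket):

#include <winsock2.h>
#include <stdio.h>

void PrintPeer(SOCKET s)
{
    sockaddr_in addr = { 0 };
    int len = sizeof(addr);
    if (getpeername(s, (sockaddr*)&addr, &len) == 0)
        printf("Peer: %s:%d\n", inet_ntoa(addr.sin_addr), ntohs(addr.sin_port));
}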
Q 2.04 How to Restrict my application to one instance only ? [top^]
A. Many Quoted :-
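The classic technique is a named mutex created at startup; a minimal sketch (the mutex name is an example and should be unique to your application):

#include <windows.h>

BOOL IsAnotherInstanceRunning()
{
    // The handle is deliberately not closed: it must live as long as the app.
    CreateMutex(NULL, TRUE, TEXT("MyApp_SingleInstance_Mutex"));
    return (GetLastError() == ERROR_ALREADY_EXISTS);
}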
Q 2.05 How to use nmake? [top^]
A. PJ Arends Quoted :-
Start by reading _asug_overview.3a_.nmake_reference.asp
Q 2.06 What is the differences between command.com and cmd.exe? [top^]
command.com is used for backward compatibility with 16-bit and Windows 9x applications; cmd.exe is not available on those platforms. One obvious difference is how each handles the space character.
For example, the command md this folder has spaces would fail with command.com but would work with cmd.exe, although four folders would get created. To remedy this, the folder name must be surrounded by quotation marks (md "this folder has spaces").
Q 2.07 How will i use unistd.h in VC++ or what the defination of this header file ? [top^]
Here it is, the definition of unistd.h:

/*
 * This file is part of the Mingw32 package.
 *
 * unistd.h maps (roughly) to io.h
 */
#ifndef _UNISTD_H
#define _UNISTD_H
#include <io.h>
#include <process.h>
#endif /* _UNISTD_H */

Just save it as unistd.h and include it in your project.
Q 2.08 How to dynamically change the coloumn name of a list control? [top^]
A. rateep Quoted :-
have you tried CListCtrl::SetColumn(.....)?
Q 2.09 Does anyone know a Wrapper class to get samplerate bitrate and time of a mp3? [top^]
A. Alexander M. Quoted :-
Here you can find all infos you need:
Q 2.10 How can I get the ID of the dialog item which own the focus? [top^]
Use GetFocus() to find the control which currently has the keyboard focus. Pass the HWND or CWnd returned from the call to GetFocus() to GetDlgCtrlID(), which will return the numeric ID of the control!
Jörgen Sigvardsson Quoted:-
What Alok said, with the addition that you can use:

int id = ::GetWindowLong(::GetFocus(), GWL_ID);

if you'd like to stay win32.
Q 2.11 Is it possible to get info between HWND and Process handle? [top^]
A. Full Question:
I need: Get Handle of Main Window from process handle (I know the process handle). Get process info (i.e. process ID or handle) from a Window handle (I know the window handle)?
Answer :- cmk Quoted :-
1. - Use GetProcessId() to get process id from process handle.
2. - Use GetWindowThreadProcessId() to get the thread id and process id that was used to create the window.
Q 2.12 How to Include " in a text string without ending the string? [top^]
A. Rama Krishna Vavilala Quoted :-
BSTR bstRet=SysAllocStringByteLen("Some text with the inch symbol\" and then end",44);
Q 2.13 How to check the string if it is the number ? [top^]
I believe the function you want is called IsDigit. Here's the MSDN reference.
Ted Ferenc Quoted :-
The old fashioned way was to use strtoul and check the 'end pointer'. But be aware if it is a very large number it will fail with an overflow
PJ Arends Quoted :-
Use strtod instead of atof. It has a parameter that points to first character in the string that it does not recognize as part of a number. If that character is not NULL (the end of the string) the string is not a number. It takes into account your current locale settings and exponential numbers.
Q 2.14 How to change attributes for a directory? [top^]
This works for me:
if (! SetFileAttributes("c:\\ResOrg",FILE_ATTRIBUTE_READONLY))
{
DWORD dwError = GetLastError();
}
Q 2.15 How can i find out through code, the amout of processing power my application is using ? [top^]
See if this helps.
Q 2.16 How can i find out the information about my computers processor like - Manufacturer, Clock Speed etc? [top^]
Try the Win32_Processor WMI class.
Q 2.17 Has anyone seen code that finds mersenne prime numbers? [top^]
Google for the GIMPS project. I was going to refer you to the Mersenne site but it is currently down.
John M. Drescher Quoted :-
Q 2.18 how I can convert a DWORD (32 bits) value into a byte array of four bytes (32 bits)? [top^]
A.
DWORD dwLength = (DWORD)file.GetLength();
BYTE length[sizeof(DWORD)];

for (int i = 3; i >= 0; --i)
{
    length[3 - i] = (dwLength & (0xff << (i * 8))) >> (i * 8);
}
Bouli Quoted :-
Actually I've found the following way:

*(DWORD*)&length[0] = dwLength;

Q 2.19 How can I run my application in hidden mode? [top^]
A. Launch it with a command-line switch, as:
c:\\your_app_path\\app.exe -hidden
and have the application check its command line for the switch before showing its main window.
Q 2.20 How to fix the compiler error "cannot convert parameter 4 from 'char [10]' to 'const unsigned short *'"? [top^]
A. RChin Quoted :-
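In general, this error means the function expects a wide (UNICODE) string where a plain char array was passed; wrapping the literal in _T()/TEXT() or L"...", or converting the buffer with mbstowcs(), fixes it.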
Q 3.01 How to determine if IE has finished loading? [top^]
A. Kharfax Quoted :-
You should create a browser helper object (Search BHO in Google) and handle the DOCUMENT_COMPLETE event.
Q 3.02 how to update edit control text ? [top^]
A. Ravi Bhavnani Quoted :-
If m_edit is a control (not data) member (i.e. m_edit is of type CEdit), do the following to set the edit control's text:
m_edit.SetWindowText("Test");
Priyank Bolia Quoted :-
Use DDX_Text(pDX, IDC_EDIT1, m_edit); and after assigning the value use:
m_edit = "Test";
UpdateData(false);
Q 3.03 How to get output of console program? [top^]
This [^] article may help.
You can read here
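The usual technique is to redirect the child's stdout into an anonymous pipe; a sketch (error handling trimmed; "cmd /c dir" is a placeholder):

#include <windows.h>
#include <stdio.h>

void CaptureOutput()
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // inheritable handles
    HANDLE hRead, hWrite;
    CreatePipe(&hRead, &hWrite, &sa, 0);
    SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0);   // keep our end private

    STARTUPINFO si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = hWrite;
    si.hStdError  = hWrite;

    PROCESS_INFORMATION pi = { 0 };
    TCHAR szCmd[] = TEXT("cmd /c dir");
    if (CreateProcess(NULL, szCmd, NULL, NULL, TRUE, CREATE_NO_WINDOW,
                      NULL, NULL, &si, &pi))
    {
        CloseHandle(hWrite);   // so ReadFile sees EOF when the child exits
        char buf[512];
        DWORD read;
        while (ReadFile(hRead, buf, sizeof(buf) - 1, &read, NULL) && read)
        {
            buf[read] = '\0';
            printf("%s", buf);
        }
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(hRead);
}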
Q 3.04 How to add a progress bar in status bar using Win32 Api ? [top^]
A. Jack Squirrel Quoted :-
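One way (a sketch, not the original quoted code): create the progress control as a child of the status bar, sized to one of its parts. Call InitCommonControlsEx() with ICC_PROGRESS_CLASS first; the part index is an assumption:

#include <windows.h>
#include <commctrl.h>

HWND CreateStatusProgress(HWND hStatus)
{
    RECT rc;
    SendMessage(hStatus, SB_GETRECT, 1, (LPARAM)&rc);   // rect of part 1
    return CreateWindowEx(0, PROGRESS_CLASS, NULL,
                          WS_CHILD | WS_VISIBLE,
                          rc.left, rc.top,
                          rc.right - rc.left, rc.bottom - rc.top,
                          hStatus, NULL, GetModuleHandle(NULL), NULL);
}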
Q 3.05 How to set background image for a dialog in MFC ? [top^]
Does this help?
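One common MFC way, sketched here with assumptions (m_bmp is a CBitmap member already loaded from a resource): paint the bitmap in OnEraseBkgnd():

BOOL CMyDlg::OnEraseBkgnd(CDC* pDC)
{
    CDC memDC;
    memDC.CreateCompatibleDC(pDC);
    CBitmap* pOld = memDC.SelectObject(&m_bmp);

    CRect rc;
    GetClientRect(&rc);
    BITMAP bm;
    m_bmp.GetBitmap(&bm);

    // Stretch the bitmap over the whole client area.
    pDC->StretchBlt(0, 0, rc.Width(), rc.Height(),
                    &memDC, 0, 0, bm.bmWidth, bm.bmHeight, SRCCOPY);

    memDC.SelectObject(pOld);
    return TRUE;   // background handled
}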
Q 3.06 How to detect if an internet connection is available? [top^]
this MS resource will help you
Taka Muraoka Quoted :-
You'd think it'd be really easy but it ain't Check out this more info.
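For a quick programmatic check there is also WinInet's InternetGetConnectedState() (with the caveats above, e.g. LAN setups may report connected while the link is actually down):

#include <windows.h>
#include <wininet.h>
#pragma comment(lib, "wininet.lib")

BOOL IsOnline()
{
    DWORD dwFlags = 0;
    return InternetGetConnectedState(&dwFlags, 0);
}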
Q 3.07 How to Connect with website and Save Html Page in File ? [top^]
this article will help you in retrieving HTML pages from an Internet server/domain
As Alok has pointed out, you can use Ander's AmHttpUtilities class. If you need to do more involved parsing of the HTML content, see this[^] article.
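For the "save to file" part specifically, URLMON's URLDownloadToFile() is a one-call option (URL and target path are placeholders):

#include <urlmon.h>
#pragma comment(lib, "urlmon.lib")

HRESULT SavePage()
{
    return URLDownloadToFile(NULL, TEXT("http://www.codeproject.com"),
                             TEXT("c:\\page.html"), 0, NULL);
}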
Q 3.08 How can you retrieve the HINSTANCE when you only have HWND ? [top^]
A. bharadwajgangadhar Quoted :-
To get hInstance you can use the AfxGetInstanceHandle() MFC function. It retrieves the hInstance of the application.
Use GetWindowLong() with the GWL_HINSTANCE flag.
Q 3.09 How to Set root Folder in CFileDialog? [top^]
A. CFileDialog has a member m_ofn, which is an OPENFILENAME structure. In that structure, you have to set the lpstrInitialDir member to point to a string containing the folder in question:

CFileDialog FileDialog(TRUE);
FileDialog.m_ofn.lpstrInitialDir = "C:\\MyDirectory";
FileDialog.DoModal();
Q 3.10 How to get list of computers in specified workgroup? [top^]
How about NetServerEnum(), with the workgroup name passed as the domain parameter?
Q 3.11 How to get the PhysicalDrive number which the logical drive located? [top^]
A. James R. Twine Quoted :-
Q 3.12 How to programatically change the 'Work Offline' state to the normal 'Online' state? [top^]
A. Check if this api helps: [InternetGoOnline]. If using DialUp, you can use these apis too: [InternetAutodial] and [InternetDial].
James R. Twine Quoted :-
I would look into InternetCheckConnection(...), it has a flag that forces connection; and InternetAttemptConnect(...).
Q 3.13 How to find the way to obtain the user names (or user id's) of the owners of process ? [top^]
Have a look at the PSAPI and Tool Help Library in MSDN.
CodeBeetle Quoted :-
Try This...
Q 3.14 How to catch that user clicked 'x' button ? [top^]
Handle WM_SYSCOMMAND and check for the case SC_CLOSE.
Q 3.15 How To Launch email client with attachment ? [top^]
I do not think that you can, you may have to use MAPI (look up "Simple MAPI" in MSDN).
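A Simple MAPI sketch (paths and strings are placeholders; MAPISendMail lives in mapi32.dll, so link mapi32.lib or resolve it with LoadLibrary/GetProcAddress):

#include <windows.h>
#include <mapi.h>

void SendWithAttachment()
{
    MapiFileDesc file = { 0 };
    file.nPosition    = (ULONG)-1;               // let the client place it
    file.lpszPathName = (LPSTR)"c:\\report.txt";

    MapiMessage msg = { 0 };
    msg.lpszSubject  = (LPSTR)"Report";
    msg.lpszNoteText = (LPSTR)"See attached file.";
    msg.nFileCount   = 1;
    msg.lpFiles      = &file;

    // MAPI_DIALOG pops up the mail client with the message ready to send.
    MAPISendMail(0, 0, &msg, MAPI_DIALOG | MAPI_LOGON_UI, 0);
}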
Q 3.16 How can I change the foreground color of an edit control? [top^]
handle WM_CTLCOLOR message and call SetTextColor() on the supplied DC.
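In MFC terms, a sketch (CMyDlg and IDC_MYEDIT are assumptions):

HBRUSH CMyDlg::OnCtlColor(CDC* pDC, CWnd* pWnd, UINT nCtlColor)
{
    HBRUSH hbr = CDialog::OnCtlColor(pDC, pWnd, nCtlColor);
    if (nCtlColor == CTLCOLOR_EDIT && pWnd->GetDlgCtrlID() == IDC_MYEDIT)
        pDC->SetTextColor(RGB(255, 0, 0));   // red foreground
    return hbr;
}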
Q 3.17 Is there a way of determining which OS running? [top^]
A. But like Shog said, if you use unsupported APIs, then you need to dynamically load them, or your app won't run, even if you never run the code that calls them. The alternative is to write your own versions. I did that with TransparentBlt and AlphaBlend many years ago, so I could support W95 and still call them.
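The check itself is typically GetVersionEx(); a minimal sketch:

#include <windows.h>

BOOL IsWindowsNTFamily()
{
    OSVERSIONINFO vi = { sizeof(vi) };
    GetVersionEx(&vi);
    return (vi.dwPlatformId == VER_PLATFORM_WIN32_NT);
}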
Q 3.18 How to get notified when user logoff? [top^]
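A. One approach (offered as a sketch): handle WM_ENDSESSION in your window procedure; the ENDSESSION_LOGOFF bit of lParam distinguishes a logoff from a full shutdown:

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ENDSESSION:
        if (wParam && (lParam & ENDSESSION_LOGOFF))
        {
            // Session is ending because of a logoff (not a shutdown):
            // save state / clean up here.
        }
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}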
Q 3.19 How can I find out how much memory (Private Bytes) my app currently has ? [top^]
A. Use PDH (Performance Data Helper); link with pdh.lib. For your own app's Private Bytes the counter path would be \\Process(YourApp)\\Private Bytes.

#include <pdh.h>

// The declarations and counter paths below are assumptions filled in
// around the original snippet; adjust them to the counters you need.
HQUERY hQuery = NULL;
HCOUNTER hAvailBytes, hCacheBytes, hWorkingSet;
PDH_FMT_COUNTERVALUE pdhfmtAvail, pdhfmtCache, pdhfmtWorking;
TCHAR szBuffer[128];
LPCTSTR szAvailBytes = TEXT("\\Memory\\Available Bytes");
LPCTSTR szCacheBytes = TEXT("\\Memory\\Cache Bytes");
LPCTSTR szWorkingSet = TEXT("\\Process(MyApp)\\Working Set");

PDH_STATUS pdhStatus = PdhOpenQuery(NULL, 0, &hQuery);

pdhStatus = PdhAddCounter(hQuery, szAvailBytes, 0, &hAvailBytes);
pdhStatus = PdhAddCounter(hQuery, szCacheBytes, 0, &hCacheBytes);
pdhStatus = PdhAddCounter(hQuery, szWorkingSet, 0, &hWorkingSet);

// Get the data.
pdhStatus = PdhCollectQueryData(hQuery);

// Format counter values.
pdhStatus = PdhGetFormattedCounterValue(hAvailBytes, PDH_FMT_LONG | PDH_FMT_NOSCALE, NULL, &pdhfmtAvail);
pdhStatus = PdhGetFormattedCounterValue(hCacheBytes, PDH_FMT_LONG | PDH_FMT_NOSCALE, NULL, &pdhfmtCache);
pdhStatus = PdhGetFormattedCounterValue(hWorkingSet, PDH_FMT_LONG | PDH_FMT_NOSCALE, NULL, &pdhfmtWorking);

wsprintf(szBuffer, TEXT("Physical Mem = %ldMB\n"),
    (pdhfmtAvail.longValue + pdhfmtCache.longValue + pdhfmtWorking.longValue) / (1024 * 1024));

pdhStatus = PdhCloseQuery(hQuery);
Q 3.20 How can I detect whether a certain path or directory exists or not? [top^]
A. Blake Miller Quoted :-
Dude, you need to go here first:[^]
if ((DWORD)-1 == GetFileAttributes(szFilePath))
{
    // tells you if it is missing
}
//! Checks whether a directory exists.
//! @param strDirectory Directory
//! @return true if the directory exists, false otherwise.
bool dirExists(CString strDirectory)
{
    // Create full directory specification - return if unable
    TCHAR* fullPath = _tfullpath(NULL, strDirectory, 0);
    if (fullPath == NULL)
        return false;

    // Check if directory exists by trying to make it the default directory
    TCHAR szCurrDir[_MAX_PATH];
    _tgetcwd(szCurrDir, _MAX_PATH - 1);
    long nStatus = _tchdir(fullPath);
    _tchdir(szCurrDir);

    free(fullPath);
    return (nStatus == 0);
}
Try [PathFileExists] api
Q 3.21 How to refresh Explorer window after Registry key changed? [top^]
A. Michael Dunn Quoted :-
Check out [SHChangeNotify()] - there might be a flag that will make Explorer refresh its views.
Q 3.22 How can I determine the height in pixels of status bars? [top^]
Get the window handle to the status bar's window, then call GetWindowRect. Take the height of the rectangle as the height of the status bar.
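A minimal sketch (hwndStatusBar is assumed to be the handle you already obtained):
RECT rc;
GetWindowRect(hwndStatusBar, &rc);
int nHeight = rc.bottom - rc.top; // status bar height in pixels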
Q 3.23 How do you duplicate a file? [top^]
A. Tom Wright Quoted :-
Try CopyFile or CopyFileEx; here is the description:
BOOL CopyFile
( LPCTSTR lpExistingFileName,
LPCTSTR lpNewFileName,
BOOL bFailIfExists
);
Tom Archer Quoted :-
In addition to what Tom said, I would also suggest looking at the MakeSureDirectoryPathExists function if you need to copy the file to a specific folder hierarchy. (This function is in dbghelp.dll, which you'll need to distribute depending on the target OS.)
Q 3.24 How to rotate a GDI object in VC++ without using setworldtransform? [top^]
If it's a DIBSection( i.e. you have access to the bitmap bits ), then you can perform the rotate yourself on a new DIBSection ( seeing as a rotated image is bigger).
Q 3.25 What is #pragma pack()? [top^]
A. Stlan Quoted :-
Perhaps a simple example is better than any explanation. First case:
#pragma pack(1)
struct MyStruct
{
BYTE a;
UINT b;
};
#pragma pack()
sizeof(struct MyStruct) will return 5 bytes (1+4=5). Second case:
#pragma pack(4)
struct MyStruct
{
BYTE a;
UINT b;
};
#pragma pack()
sizeof(struct MyStruct) will now return 8 bytes (1+3+4=8), because the compiler aligns each member of MyStruct to a 4-byte boundary. Concretely, the directive tells the compiler that the address of each member must be divisible by 4. To do that, the compiler inserts the necessary padding bytes between members.
Stay tuned for more articles in this series, as the Visual C++ Forum is getting more popular day by day. I had to read almost 6000 posts to find useful/fine Q&As for this article, and I believe the series will continue in the future too :). It really is a hectic job to read that many posts; now I can feel what the CP editors go through :)
Idea and design are taken from Mr. Michael Dunn's (MS Visual C++ MVP) C++ Forum FAQ article.
Q 4.01 How to get parameters like the HDD ID, motherboard ID, and CD-ROM drive ID without using the Registry or MFC? [top^]
A. Frank K Quoted :-
This is from MSDN:
Q 4.02 How to get notified that the screensaver has become active? [top^]
When a screen saver starts, it posts a WM_SYSCOMMAND message to the foreground window with wParam equal to SC_SCREENSAVE.
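In a window procedure, that check looks something like this (the low four bits of wParam must be masked off, per the WM_SYSCOMMAND documentation):
case WM_SYSCOMMAND:
if ((wParam & 0xFFF0) == SC_SCREENSAVE)
{
// the screen saver is about to start; returning nonzero
// from the handler is documented to veto it
}
return DefWindowProc(hWnd, msg, wParam, lParam);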
Flit Quoted :-
I saw this article in the MSDN, hope it helps. HOWTO: Know When Your Screen Saver Starts ID: Q238882
Q 4.03 How to access the variables declared in Doc.h from another .cpp? [top^]
An oldie, but a goodie: How To Get Current CDocument or CView from Anywhere (KB article Q108587).
Q 4.04 How to get all available time zones? [top^]
These Api will help:-
EnumTimeFormats, EnumDateFormats, EnumCalendarInfo, EnumUILanguages; and for more info visit this link: National Language Support[^]
Q 4.05 How can I set the text of the buttons in the toolbar? [top^]
Send a TB_SETBUTTONINFO message to the toolbar.
SetWindowText(HANDLE_OF_BUTTON, BUTTON_TEXT);
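A hedged sketch of the message-based route (nCmdID and the caption are illustrative):
TBBUTTONINFO tbi = {0};
tbi.cbSize = sizeof(tbi);
tbi.dwMask = TBIF_TEXT;
tbi.pszText = TEXT("Open"); // new button caption
SendMessage(hToolbar, TB_SETBUTTONINFO, (WPARAM)nCmdID, (LPARAM)&tbi);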
Q 4.06 How to get the main icon of an EXE and then change the main icon of another EXE with that one? [top^]
Do you mean by actually modifying the .exe file? If so, check out BeginUpdateResource() and UpdateResource(..., RT_ICON, ...).
Q 4.07 How to make sure someone is logged on to the computer ? [top^]
How about NetWkstaUserGetInfo()?
If you use OpenInputDesktop() and it returns NULL, then the interactive desktop is not available, i.e. no one is currently logged on or "using" the computer.
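A minimal sketch of that check (the access mask is illustrative):
HDESK hDesk = OpenInputDesktop(0, FALSE, DESKTOP_READOBJECTS);
if (hDesk == NULL)
{
// no interactive desktop: most likely nobody is logged on
}
else
{
CloseDesktop(hDesk); // someone is logged on interactively
}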
Q 4.08 How can i access the mapped network drive? [top^]
Once you have mapped a drive letter to a network resource, the functions that use said drive letter are none the wiser. For example:
FindFirstFile("c:\\*.*", ...);
FindFirstFile("n:\\*.*", ...);
NetUseAdd()
WNetAddConnection2()
Q 4.09 I have programmed my application using Visual C++ 6.0; how can I be sure which version of MFCxx.dll I have to supply with the application? [top^]
Have a look at this
M. Wohlers Quoted :-
Start your application with Depends.exe. This tool will show you which DLLs are used by your program. It can be found in a sub-directory of Visual Studio.
Q 4.10 What is the difference between an accelerators and a hotkey? [top^]
A. Iain Clarke Quoted :-.
Some also confuse one or both of those words with a "mnemonic", which is the underlined character you get on dialog controls, allowing you to focus or activate that control using the <ALT> key along with the key of the underlined character. They (mnemonics) are also used on top-level menus and menu items (although you do not need to use the <ALT> key when a menu or menubar is active).
Q 4.11 How to take the integer part of a decimal number; for example, if I have 3.13, how do I take only the 3? [top^]
#include<math.h>
int nFloor = (int)floor(3.13);
Try:
double d = 3.13;
int n = (int)d; // the explicit cast avoids a conversion warning
Q 4.12 How to hit test a line? [top^]
A. cmk Quoted :-
I used this article to hit test a line: Win32: Hit Testing Lines and Curves.
Q 4.13 How to get the TYPE of control? [top^]
Use GetDlgItem to get the window handle/object. Use GetClassName to retrieve the class name/type of the control.
Q 4.14 How to set a timeout value in a program using the CSocket class? [top^]
[SetSockOpt] (CAsyncSocket::SetSockOpt, or the Winsock setsockopt function)
Q 4.15 Is there a way/method to modify the current date value so that it ends up being the last day of the next month, using the C runtime library? [top^]
For that, you have to derive your own logic. Here is a small piece of code to get started.
// Get current date (hedged reconstruction: the original setup lines were lost)
time_t now = time(NULL);
struct tm endDate = *localtime(&now);
// Move to the next month (rolling the year over from December)
endDate.tm_mon = (endDate.tm_mon + 1) % 12;
if (endDate.tm_mon == 0)
endDate.tm_year++;
int year = endDate.tm_year + 1900;
int isLeap = (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
if (endDate.tm_mon == 1) // February
{
if (isLeap)
endDate.tm_mday = 29;
else
endDate.tm_mday = 28;
}
else
{
// Now for the 31-day months (Jan..Jul: even tm_mon; Aug..Dec: odd tm_mon)
if (((!(endDate.tm_mon % 2)) && (endDate.tm_mon <= 6))
|| ((endDate.tm_mon % 2) && (endDate.tm_mon >= 7)))
endDate.tm_mday = 31;
else
// for the thirty-day months
endDate.tm_mday = 30;
}
printf("\nModified Date: %d/%d/%d" ,endDate.tm_mday,endDate.tm_mon+1,endDate.tm_year+1900);
Neville Frank Quoted :-
If you are into Boost then I suggest you use the Boost Date_Time library (boost::gregorian::date), which uses ISO 8601.
Toni78 Quoted :-
difftime could be very useful when you want to jump from one time period to another. I used to use it a lot when I was writing software for room bookings.
Q 4.16 How to establish the Window Dial up connection programmatically? [top^]
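(Editor's hedged suggestion, not from the original thread: the WinInet APIs mentioned in Q 3.12 can do this; RasDial from the RAS API is the lower-level alternative.)
// Forces the default dial-up connection online.
InternetAutodial(INTERNET_AUTODIAL_FORCE_ONLINE, 0);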
Q 4.17 How to crop an image in an application based on dialog? [top^]
The obvious thing to do is to create a new bitmap and use BitBlt to copy just the bit you need. The box part is easy: just draw the box at the mouse position, and keep track of the position in OnMouseMove.
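A hedged sketch of the copy step (hdcSrc and rcCrop are assumed to come from your selection code):
HDC hdcMem = CreateCompatibleDC(hdcSrc);
HBITMAP hbmCrop = CreateCompatibleBitmap(hdcSrc,
rcCrop.right - rcCrop.left, rcCrop.bottom - rcCrop.top);
HBITMAP hbmOld = (HBITMAP)SelectObject(hdcMem, hbmCrop);
// Copy just the selected rectangle into the new bitmap.
BitBlt(hdcMem, 0, 0,
rcCrop.right - rcCrop.left, rcCrop.bottom - rcCrop.top,
hdcSrc, rcCrop.left, rcCrop.top, SRCCOPY);
SelectObject(hdcMem, hbmOld);
DeleteDC(hdcMem); // hbmCrop now holds the cropped image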
Q 4.18 How to load a DLL function whose arguments are only known at runtime? [top^]
A. munawar1968 Quoted :-
Hi. Check out "Smashing the Stack for Fun and Profit" by Aleph One; a great document for understanding stack manipulation, with lots of code snippets.
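For the loading half of the question, the standard pattern is LoadLibrary/GetProcAddress (the DLL name, export name, and signature below are hypothetical):
typedef int (WINAPI *MYFUNC)(int, const char*); // assumed signature
HMODULE hDll = LoadLibrary(TEXT("MyLibrary.dll"));
if (hDll != NULL)
{
MYFUNC pfn = (MYFUNC)GetProcAddress(hDll, "MyFunction");
if (pfn != NULL)
pfn(42, "argument known only at runtime");
FreeLibrary(hDll);
}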
Q 5.01 How to remove close button from window? [top^]
A. Bob Stanneveld Quoted :-
Stlan Quoted :-
Try this:
GetSystemMenu(FALSE)->EnableMenuItem(SC_CLOSE, MF_BYCOMMAND | MF_GRAYED);
Q 5.02 How to change foreground and back ground color of Edit Control? [top^]
Check these api's [SetTextColor],[SetBkColor]
Q 5.03 What is the structure of .RC file ? [top^]
A. Tom Archer Quoted :-
RC Compiler Syntax:-
Q 5.04 How to create a pop up message near the right hand corner of the task bar? [top^]
I know I've seen articles here on that subject, search for "(taskbar OR tray) notification window" or something like that. (BTW, MS calls that style of popup "toast", like toast popping up from the toaster. )
Giorgi Moniava Quoted :-
Maybe this will help you:-
Q 5.05 How to make a setup in Visual C++? [top^]
FreeWare Setup Maker!
or if you are looking for source code:-
Q 5.06 How to convert a text file to BMP at runtime? [top^]
Q 5.07 How to get the list of available printers on a LAN? [top^]
Have a look at CWinApp::GetPrinterDeviceDefaults()
Nilesh K. Quoted :-
Check out the EnumPrinters API; this should be able to get you the available printers.
Q 5.08 How to change the HTML currently loaded in IE, programmatically? [top^]
This link will help:-
Q 5.09 How to use a database created in MS Access 2000 on a PC where MS Access is not installed? [top^]
A. Michael P Butler Quoted :-
Sadly you can't. Access forms require MS Access to be installed (or at least the Access Runtime). If you have the correct licensing you can redistribute the MS Access runtime.
Q 5.10 What's the SDK function of the DeflateRect of MFC? [top^]
A. Stlan Quoted :-
There is no corresponding function in the SDK. Actually, if you look at the definition of CRect::DeflateRect, you will see that it calls ::InflateRect with negated parameters.
Q 5.11 How to add a number of days to a date? [top^]
Say today is "2005-6-2" and I want to add 100 days to it. Something like:
CTime Time(2005,6,2,0,0,0); // Look in the doc for the "daylight savings time"
// that you want to use
CTimeSpan Span(100,0,0,0);
CTime NewTime = Time + Span; // NewTime will hold the time you are looking for
Q 5.12 How to send the Image File over the Socket ? [top^]
A. Trollslayer Quoted :-
CSocketFile may do the job, like reading and writing files at either end.
Q 5.14 How to convert a buffer between big-endian and little-endian? [top^]
A. RichardS Quoted :-
// The BYTE_BREAK struct definition was lost; this layout (a hedged
// reconstruction) is implied by the member names used below.
typedef struct _tagByteBreak
{
unsigned char bLowerByte;
unsigned char bMiddleByte1;
unsigned char bMiddleByte2;
unsigned char bUpperByte;
} BYTE_BREAK;
typedef union _tagEndianSwitch
{
BYTE_BREAK stBreak;
unsigned long int ulFull;
} ENDIAN_SWITCH;
void UTIL_SwitchEndian (unsigned long int *pulIn)
{
ENDIAN_SWITCH Endian;
unsigned char btemp;
Endian.ulFull = *pulIn;
btemp = Endian.stBreak.bLowerByte;
Endian.stBreak.bLowerByte = Endian.stBreak.bUpperByte;
Endian.stBreak.bUpperByte = btemp;
btemp = Endian.stBreak.bMiddleByte1;
Endian.stBreak.bMiddleByte1 = Endian.stBreak.bMiddleByte2;
Endian.stBreak.bMiddleByte2 = btemp;
*pulIn = Endian.ulFull;
}
This code takes an unsigned long int (32 bits) and swaps the endianness (either direction). Now all you need is a loop:
void ConvertEndian (unsigned long int szLen, unsigned long int *pulBuf)
{
for (unsigned long int i = 0; i < szLen; ++i)
{
UTIL_SwitchEndian (&pulBuf[i]);
}
}
This will convert the buffer to the opposite endian system. It should be easy to modify for 16-bits.
Q 5.15 How to convert the Hexadecimal into Binary? [top^]
How do you want to store the binary? Do you want a string that reads "0110 1111 1100 1001", or is it in a 16-bit variable? As for swapping the bytes: store the bytes in an array achOrig,
then just bit-shift them:
short int w16Bits = (achOrig[0] << 8) | achOrig[1];
Now you will have w16Bits = 6F C9 (in memory)
Q 5.16 How to programmatically examine what text is in the clipboard? [top^]
See this CP article and this MSDN article.
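A minimal Win32 sketch (error handling trimmed):
if (IsClipboardFormatAvailable(CF_TEXT) && OpenClipboard(NULL))
{
HGLOBAL hMem = GetClipboardData(CF_TEXT);
if (hMem != NULL)
{
const char* pszText = (const char*)GlobalLock(hMem);
// ... examine pszText here ...
GlobalUnlock(hMem);
}
CloseClipboard();
}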
Q 5.17 How to programmatically know that Webcam is attached to USB port? [top^]
A. FlyingTinman Quoted :-
If the webcam has a valid driver installed you just need to enumerate connected video devices and look for a "friendlyname" that matches that of your webcam.
Look at AMCAP source code (in the DirectShow SDK samples) for details of how to do that. AMCAP enumerates all connected video devices and lists their "friendlyname"s in a menu from which the user can select one.
Q 5.18 How to send email from my program without having an email account? [top^]
A. cadi Quoted :-
Sure it is possible. There are two ways of sending an email:
Again two options:
But as far as I remember there are good articles on both issues, like:
Q 5.19 How to send data/UDT through a Window message? [top^]
When you send the message, cast the values to WPARAM and LPARAM and then cast them back in the message handler:
Jack Squirrel Quoted :-
// sending side
CString* pString = new CString(_T("Text"));
int nNumber = 100;
PostMessage(hMyWnd, MY_WM_MESSAGE1, reinterpret_cast<WPARAM>(pString),
reinterpret_cast<LPARAM>(nNumber));
// receiving side (in the handler for MY_WM_MESSAGE1)
CString* pString = reinterpret_cast<CString*>(wParam);
int nNumber = static_cast<int>(lParam);
// ...
delete pString;
Rouslan Grabar [Russ] Quoted :-
// receiving side: unpack the pointer to your data struct from lParam
MYMESSAGEDATA* pIncomingData = (MYMESSAGEDATA*)lParam;
//do what ever you need ... ...
//deallocate memory
delete pIncomingData;
return 0;
}
Q 5.20 How to get the number of bytes/packets being received / sent over an NIC (standard ethernet card)? [top^]
Either look at the IP Helper APIs (a sketch follows below), or:
S.Gopalakrishnan Quoted :-
Hi,
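Back to the IP Helper route, a hedged sketch (include <iphlpapi.h> and link iphlpapi.lib; buffer handling simplified):
DWORD dwSize = 0;
GetIfTable(NULL, &dwSize, FALSE); // first call: get the required size
MIB_IFTABLE* pTable = (MIB_IFTABLE*)malloc(dwSize);
if (GetIfTable(pTable, &dwSize, FALSE) == NO_ERROR)
{
for (DWORD i = 0; i < pTable->dwNumEntries; i++)
{
// bytes received / sent on this interface so far
DWORD dwIn = pTable->table[i].dwInOctets;
DWORD dwOut = pTable->table[i].dwOutOctets;
}
}
free(pTable);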
Q 5.22 How to check is current document in closing in MDI structure? [top^]
A. S.Gopalakrishnan Quoted :-
Hi, There is no direct mechanism to do this. Create your own class which is derived from CDocument and override this method.
virtual void OnCloseDocument( );
liquid_ Quoted :-
I would suggest doing it by setting a flag as a member of CDocument (e.g. bool closing) at the beginning of your OnCloseDocument override and unsetting it at the end of the function.
Q 5.23 What is the EV_BREAK event detection pre-condition in a serial comm program? [top^]
Hello. The EV_BREAK event is not raised when the connection is lost. You should try the EV_ERR event. Look Here[^] for more info on serial events.
Breaks are some special action that some devices send across the network. It is sometimes used to determine which device is allowed to send data.
Q 5.24 How to get the DPI value from C++ code? [top^]
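(Editor's hedged sketch; the original reply was lost. GetDeviceCaps is the usual route:)
HDC hdc = GetDC(NULL); // screen DC
int dpiX = GetDeviceCaps(hdc, LOGPIXELSX); // horizontal DPI
int dpiY = GetDeviceCaps(hdc, LOGPIXELSY); // vertical DPI
ReleaseDC(NULL, hdc);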
Q 5.25 How to use a different icon for the Debug and Release builds? [top^]
#ifdef _DEBUG
1 ICON "MyDebug.Ico"
#else
1 ICON "MyRelease.Ico"
#endif
Q 5.26 Is there any way to handle the forced exit of an application by Task Manager? [top^]
As far as I know, you cannot stop the Task Manager from killing your app.
Here's some interesting reading for you to understand what's going on:
Q 5.27 How to determine the name of the computer in VC++? [top^]
[GetComputerName]
Another way is gethostname().
Q 5.28 How to change the position of a checkbox control during runtime? [top^]
What about the [MoveWindow] and [SetWindowPos] APIs!
Q 5.29 How can I show hidden folders when calling SHBrowseForFolder function? [top^]
A. Mark Petrik Sosa Quoted :-
From what they say:
SHBrowseForFolder will not show hidden folders.
Q 5.30 How to convert UNICODE data into ASCII? [top^]
Win32:
If you're using MFC 7.1:
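Both code snippets were lost; the usual candidates (editor-supplied, hedged) are WideCharToMultiByte for raw Win32 and the CW2A conversion class for MFC 7.1:
// Win32 (wszUnicode is your source string):
char szAnsi[256];
WideCharToMultiByte(CP_ACP, 0, wszUnicode, -1, szAnsi, sizeof(szAnsi), NULL, NULL);
// MFC 7.1 / ATL 7 conversion classes:
CW2A pszAnsi(wszUnicode);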
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
aniketvardam wrote: I am writing a volume filter driver which starts at boot time.
It reads configuration data from a file which is stored at "c:\mydata.dat".
I am calling "ZwCreateFile" in "FilterCreate", but it fails with 0xC000003A, which says the path was not found.
This may be because all volumes and file systems are yet to mount / start working.
"Opinions are neither right nor wrong. I cannot change your opinion. I can, however, change what influences your opinion." - David CrowNever mind - my own stupidity is the source of every "problem" - Mixture
"Opinions are neither right nor wrong. I cannot change your opinion. I can, however, change what influences your opinion." - David Crow
. | https://www.codeproject.com/Articles/10677/Day-Series-Codeproject-VC-Forum-Q-A-IV | CC-MAIN-2018-39 | refinedweb | 5,996 | 67.55 |
Flutter doesn't natively support rendering SVG, but there are some workarounds. A plugin called flutter_svg provides a solid way of adding SVGs to your Flutter project.
Add svg plugin
First, open pubspec.yaml and add the plugin, with its version, under the dependencies:
dependencies:
  flutter_svg: ^0.17.1
Next, open a terminal and type
flutter pub get
And it will download the required plugin for your project.
Next, you have to import the plugin in the file where you are going to use it.
import 'package:flutter_svg/flutter_svg.dart';
Load SVG from asset
First, create a folder called images in the root of the project and add any SVG image you like.
Then open the pubspec.yaml and specify your file name under the assets.
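For example (the path matches the image used below; treat the exact folder layout as an assumption):
flutter:
  assets:
    - images/doughnut.svg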
Next, you can load the file from the asset as shown below.
SvgPicture.asset("images/doughnut.svg")
Load SVG from network
You can also load an SVG directly from a URL:
SvgPicture.network("
Load SVG from SVG code
If you don't want to add the SVG to your assets, you can render it from its source code as well, using the string method provided by the plugin.
SvgPicture.string('<svg height="512" ..... >');
Color tint the SVG image
You can change the colour of the image using the color property. This changes the colour of the entire image, which is good for icon-like, single-colour images.
SvgPicture.asset("images/doughnut.svg",color: Colors.amber,)
Also you can change the color blend mode to blend the colors and get different effects.
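For example (BlendMode.srcIn is illustrative; other modes give different effects):
SvgPicture.asset(
  "images/doughnut.svg",
  color: Colors.amber,
  colorBlendMode: BlendMode.srcIn,
)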
Add text direction support to SVG
If you want to flip the image horizontally based on the text direction (left-to-right / right-to-left), you have to set the matchTextDirection property to true. By default it's false, and only left-to-right is supported. You can check this by wrapping the Scaffold inside a Directionality widget.
Directionality(
  textDirection: TextDirection.rtl,
  child: Scaffold(
    appBar: AppBar(
      title: Text('Material App Bar'),
    ),
    body: Center(
      child: Column(
        children: <Widget>[
          Container(
            child: SvgPicture.asset(
              "images/doughnut.svg",
              matchTextDirection: true, // set true so the image flips under RTL
            ),
          ),
        ],
      ),
    ),
  ),
)
What is semanticsLabel property in flutter SVG?
The semanticsLabel describes the purpose of the image. The value is not shown in the UI; screen readers use it to identify the purpose of the image. It is best practice to supply this value when the widget needs more accessibility. Flutter also has a Semantics widget with more control over these properties.
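For example (the label text is illustrative):
SvgPicture.asset(
  "images/doughnut.svg",
  semanticsLabel: 'Doughnut chart',
)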
Originally published at mightytechno
Connect with me - Instagram |Blog |Youtube
Discussion (3)
For me the svg is not loading. How do you specify the path?
You have to specify the path in pubspec.yaml if you load it from the project.
How to scale SVG image like fit, cover or fill??? | https://dev.to/mightytechno/render-svg-in-flutter-app-4kan | CC-MAIN-2022-21 | refinedweb | 449 | 58.69 |
Archives
NAnt, NUnit, and NDoc... the hard way
I live and die these days by the holy trinity. NAnt, NUnit, and NDoc. They're lifesavers when it comes to automating builds, doing testing, and quickly generating API docs for your code (which helps if you're passing it off to another group for support).
However I'm screwed right now with these tools and stuck to using the command line (which is ok because I grew up out of that world). For NUnit, I was using the NUnit add-in which was great but then it "evolved" into TestDriven.NET. Great. A nice MSI package that you can install and now you can right click on test and run it. The output window shows you any tests that failed and a double-click on it will take you to the offending test. Only problem is that I run it and get yelled at that my version is old. Checking the Download page the last few weeks I've been getting nothing but an error page with an ICAP Error (whatever that is). The download page just doesn't seem to come up for me no matter where I'm trying it from.
The other bee in my bonnet is NAntRunner. NAnt is a great tool and our build script kicks (as much as a build script can). However I always have a console window open to run NAnt, so I work in the VS.NET IDE, switch to the command window, run NAnt, lather, rinse, repeat. Okay, I could add NAnt to my Tools menu but it just opens a window and runs, so it's hard to see when something goes wrong. Plus I would have to do something funky so I could specify different targets, etc. NAntRunner is a nice looking add-in that is supposed to recognize your .build file and will let you (graphically) pick a target and run it. The results will also be output to the Visual Studio IDE so I can just happily work in my environment. However I have yet to successfully get NAntRunner working. The SourceForge project didn't do a source release but the code is in CVS so I could see what's wrong and maybe fix it, but I don't want to do that. That's like renting a car and filling it with gas but it still doesn't do anything. It's not the end-consumer's problem that the tool doesn't work (and others say they can't get it to work either). Obviously it somehow got working for the guy that wrote it as I can see screenshots of it in action. However, looking at the code reveals some really ugly assumptions like where cmd.exe is installed and where Visual Studio lives. I just want to use something that works!
It would be nice if I could just press a keystroke combination, see the output of my tests, run and deploy my build with NAnt all within the Visual Studio IDE. Is that too much to ask for? Sigh.
Automagically updating themes.
SharePoint Wrappers 0.10:
- Connect to a SharePoint Server, get its properties, and enumerate sites
- Connect to a SharePoint Site, get its properties, and enumerate lists
- Create lists, doclibs, and list items
- Upload documents to a document library
- Get all versions of a document in a library
How do you use them? It's fairly simple:
Who the heck is SPSTaskUser?
And more importantly why is he filling up my Application logs with these:
Windows cannot unload your classes registry file - it is still in use by other applications or services. The file will be unloaded when it is no longer in use.
Perplexed? I was. Until I spent a couple of days with a very fabulous Microsoft support person (Thanks Tracy!) going through a few errors we were having on a couple of our portals.
SharePoint (both SharePoint Portal Server [SPS] and Windows SharePoint Services [WSS]) have a few combinations of installs:
- SharePoint Portal Server using the built-in database engine (plain old MSDE)
- SharePoint Portal Server using SQL Server
- Windows SharePoint Services using the built-in database engine (new and cool WSMSDE)
- Windows SharePoint Services using SQL Server
Each has its merits and downfalls (MSDE is limited to 2GB whereas WSMSDE doesn't have that limitation). Normally in a development setup you might decide to just install option #1, SPS with the built-in database engine. After all, it's easy, right? You don't have to worry about a separate install of SQL and its service packs and all that mumbo-jumbo. With that combo though comes, your friend and mine, SPSTaskUser. I was really perplexed as to who this user was? I certainly didn't create him. After a little Googling, others mentioned him (but usually it was "Who is this SPSTaskUser?"). However, just like why the infant universe did not simply spread out evenly after the Big Bang 14 billion years ago, the answer is here.
Installing SPS in this configuration creates a new local machine account called SPSTaskUser. The default website Application Pool (and all portals created after that on this server) will use a predefined Network Service account to do all the SPS crawling. When you set this up and your crawler kicks off (in my case it was every 10 minutes) you'll get a pair of messages in your Application Event Log like this:
2/18/2005 4:00:05 PM Userenv 1516 NT AUTHORITY\SYSTEM SERVERNAME Windows unloaded user S-1-5-21-2435244326-407298798-4041372769-1009_Classes registry when it received a notification that no other applications or services were using the profile.
2/18/2005 4:00:01 PM Userenv 1524 SERVERNAME\SPSTaskUser SERVERNAME "Windows cannot unload your classes registry file - it is still in use by other applications or services. The file will be unloaded when it is no longer in use.
It's rather annoying and was causing me some grief as I personally don't like anything but Information in my Application logs (and even then I get a little torqued about the amount of those Windows creates). Anyways, we spent some time and I was convinced that we hadn't created that user. SharePoint also creates a new scheduled task in the Windows Task Scheduler for each crawl of some content (luckily WSS only installs don't have this). That task will run as, you guessed it, SPSTaskUser. And we all know that when Scheduled Tasks run as a local account rather than a domain one it causes problems don't we kids?
We didn't find there were performance issues with the server, but it was dang annoying. There's two ways to fix this blip. The easiest answer is to just use SQL instead of MSDE. With SQL as the backend, two things happen. The Application Pools for the websites use a named user (a domain account you create with a non-expiring password) rather than the built-in NETWORK SYSTEM account. Second, the Default Content Access account (or Application Pool account, I can't remember which) that you specify to crawl content will be the account that runs the Scheduled Tasks. Problem gone, Application logs clean. Move along. A second option (although I haven't tried this) is to keep on using MSDE (although why would you?) and manually replace the Scheduled Task account with some domain account that you have for this type of stuff. That should fix it but your mileage may vary so caveat emptor.
One note, I don't know if WSMSDE creates this guy or not so maybe someone can confirm that and post a comment on it. I suspect it might not and the SPSTaskUser only gets created with SharePoint Portal Server and MSDE because it's the one doing the crawling of content (WSS searches content using SQLs Full Text Search engine).
Anyways, hope that helps someone out there and have a great weekend!
SharePoint Pet Peeve #327
Dear Microsoft,
Please enable alerts to fire when an item is submitted to a list that requires approval.
Man, I don't know how many times I keep forgetting to go to a raft of lists that I have to approve content to see if anything new has been added. I know that the EventHandler interface for lists was omitted due to timing, but having to manually check lists where approval is required is rather annoying.
Nintex has a pretty good add-on for Document Libraries but it doesn't work for Lists. There are some Web Parts floating around that will show you what needs you approval which is good too. However I was hoping to minimize some of the extra add-ons that I feel should be part of the base system.
Okay, enough ranting for this morning.
Best Practices for Writing HTML in Web Parts?
Something that's as puzzling as the Cadbury secret (and we all know that us programmers figured that out long ago) is just what's the best way to write out HTML in Web Parts? So call me masochistic but I write Web Parts with code. Yes, it's ugly. Yes, it's painful. Yes, you could use something cool like SmartPart or load the controls yourself (but that brings on a host of other issues like Code Access Security so we won't go into that).
For those of us that hold true to the "old fashioned" way, what's the best way to write all that code out? Consider these two approaches that writes out a label and control in a row for a form:
private void RenderRow(HtmlTextWriter output, Label label, WebControl control)
{
output.Write("<tr><td class=\"ms-formlabel\" valign=\"top\" nowrap=\"true\">");
label.RenderControl(output);
output.Write("</td><td class=\"ms-formbody\">");
control.RenderControl(output);
output.Write("</td></tr>");
}
private void RenderRow(HtmlTextWriter output, Label label, WebControl control)
{
output.RenderBeginTag("tr");
// Note: AddAttribute must be called *before* the RenderBeginTag
// call that the attributes apply to.
output.AddAttribute("class", "ms-formlabel");
output.AddAttribute("valign", "top");
output.AddAttribute("nowrap", "true");
output.RenderBeginTag("td");
label.RenderControl(output);
output.RenderEndTag();
output.AddAttribute("class", "ms-formbody");
output.RenderBeginTag("td");
control.RenderControl(output);
output.RenderEndTag();
output.RenderEndTag();
}
Both output exactly the same HTML. Does it matter? The first approach is less lines but is it any more (or less) readable? Or maybe everything should be built up in a StringBuilder class and slammed out to the HtmlTextWriter? Or is it simply whatever is readable or maintainable works? Looking for your thoughts, ideas, suggestions, rants, assorted concealed lizards of any kind.
Opening Links in new Windows
This question gets asked a lot in the newsgroups. The default behaviour of the links list (which is just a template afterall) is to open the link in the same window. I took a look at Jim Duncan's cBlog site definition recently that had a nice modification that I think should be on everyone's SharePoint box. He added a simple checkbox to set a link to open in a new window.
First you'll need a new site definition so copy STS to another one for this purpose. Then copy the Links list definition to a new one (it's in a folder called FAVORITES). Alternately, you can just work directly on the definition itself but see the note at the end of this blog about that. In the LINKS definition they'll be a SCHEMA.XML file which defines all the fields and how the list behaves.
First let's add the new checkbox field that will hold our new option:
<Field Type="Boolean" DisplayName="Open in New Window" Name="NewWindow"/>
Great. Now it'll show up on the New and Edit forms and be saved with the list. How do you get it to show up in the rendered HTML? This is done in the DisplayPattern tag for the URL field. The default Links list part that we're interested looks like this:
>
<HTML><![CDATA[</A>]]></HTML>
</Default>
So we want to do a check on our NewWindow field and if it's set, add in a "target=_blank" piece of HTML to our HREF tag. So the new DisplayPattern tag should look something like this (changes highlighted in Red):
>
<Switch>
<Expr>
<Field Name="NewWindow"/>
</Expr>
<Case Value="Yes"><HTML><![CDATA[ target="_blank"]]></HTML></Case>
</Switch>
<HTML><![CDATA[</A>]]></HTML>
</Default>
That's it. A simple change to one file and all your links have the capability to open in new windows across all your sites. You can preset this value if you want (using the <ROWS> tag in your sitedef) and hide the field or let your users choose. Of course, modifying your base SharePoint install isn't recommended as the next service pack or update may wipe out those changes however for this change I'm willing to do the file management myself to avoid this. Your mileage may vary.
The other trap is that all the sites that are out there, already created using the STS template might break. Adding a new field to a definition usually is ok but it's something you need to test big time if you have a lot of sites out there already created (NUnitAsp is great for this). It would be nice if it was part of the base system though wouldn't it?
MSOS DevCon, Word XML, and the Mvp.Xml Project
I was invited, but unfortunately due to scheduling conflicts I was unable to attend this year's Microsoft Office System Developer Conference in Redmond this past week. My loss, as Bill Gates gave the keynote about the importance of XML, connectivity, ease of development, lower TCO, better performance, extreme developer tools (VS2005 is just plain freakin' amazing, my words not his) and the importance of reuse and leveraging both 3rd party and MS developers to build interoperable solutions. The great thing (from my perspective) is that SharePoint is being positioned at the core of most of the products, which means lots more development and collaboration to come.
Mike Fitzmaurice gave a presentation about how SharePoint can be used to access backend data services (SAP, Siebel, PeopleSoft, etc.) including some discussion around BizTalk, SSO (Single Sign On), etc.
I think one of the key things is to look at storing Word, Excel, etc. files in XML when saving stuff in SharePoint rather than the traditional DOC, XLS, etc. formats. I noticed the recently released (2/4/2005) Word XML Software Developers Kit which follows the Office XML Reference Schemas that were published awhile ago. Displaying Word XML docs in a browser (via SharePoint and an XSL file) is so much better on performance than launching embedded Word and you've got so much more power like loading it up into an XmlDocument and modifying it rather than the traditional Word COM fiasco.
On the Xml front, you might also want to check out the Mvp.Xml project here which is aimed at supplementing .NET framework functionality available through the System.Xml namespace. Helps a lot when you're working with Xml Web Services coming out of SharePoint.
Stupid SharePoint Fact #327
Ever wonder why all those FrontPage (and subsequently now SPS and WSS) directories were called /vti? VTI was the acronym for a company called Vermeer Technologies Incorporated. Microsoft acquired the company, which had a flagship product, in 1996. This product eventually became FrontPage. Hence all the vti directories all over your servers. You can check out the original press release here on it.
Reading vs. Updating objects in the SharePoint Object Model
A message came up on the newsgroup about someone having a problem with this code:
System.Guid listId=web.Lists.Add(title,description,web.ListTemplates["Document Library"],web.DocTemplates[0]);
web.Lists[listId].OnQuickLaunch=true;
web.Lists[listId].Update();
They were creating a list (a document library in this case) and wanted to display it on the Quick Launch. The code looked okay but the list wasn't being displayed on the Quick Launch. After a little experimenting (going down a couple of turkey trails like web part vs. console app) I found this slight change worked:
System.Guid listId=web.Lists.Add(title,description,web.ListTemplates["Document Library"],web.DocTemplates[0]);
SPList list = web.Lists[listId];
list.OnQuickLaunch=true;
list.Update();
I do remember passing by a document somewhere that said when accessing classes in the SharePoint namespace for updating to use the actual objects rather than an index of a collection. For whatever reason, I just naturally always get the list object myself so never came across this before. For reading purposes (like listing all the lists on a site) it's fine, but updates need to be implemented this way. I can't find that link right now but I did find a blog by Kris Syverstad from July of 2004 here on it to which Peter Provost thought maybe the WSS team implemented a property indexer when they probably should have implemented a method.
Looking through the newsgroups, this is probably one of the biggest "why doesn't my code work" message posted when it comes to updating properties. This tip will be true for any object in a collection (SPSite, SPList, SPField, etc.). Good stuff to know.
AlternateHeaders and WSS sites...
...doesn't work. Yup, plain and simple. The ONET.XML file describes site definitions and can contain an attribute called "AlternateHeader" which you specify as your own aspx page living in the LAYOUTS\1033 directory. By default, all portal areas (everything under the SPSxxx folders) have this set to "PortalHeader.aspx" which just brings in the various CSS files and registers a couple of tag prefixes for accessing SharePoint Web Controls. However, none of the WSS site definitions have this AlternateHeader defined. You might think you're smart (several people have thought this) and set the AlternateHeader attribute to PortalHeader.aspx and magically all your WSS sites will start looking like they actually belong to the portal (i.e. having the same Portal navigation, etc.). Nope. All it does is create chaos and anarchy because SharePoint controls don't necessarily work in WSS sites. And it's only Tuesday...
Console.WindowLeft Property
Gets or sets the leftmost position of the console window area relative to the screen buffer.
Namespace: System
The WindowLeft property determines which column of the buffer area is displayed in the first column of the console window. The value of the WindowLeft property can range from 0 to BufferWidth - WindowWidth. Attempting to set it to a value outside that range throws an ArgumentOutOfRangeException.
When a console window first opens, the default value of the WindowLeft property is zero, which indicates that the first column shown by the console corresponds to the first column (the column at position zero) in the buffer area. The default width of both the console window and the buffer area is 80 columns. This means that the WindowLeft property can be modified only if the console window is made narrower or the buffer area is made wider.
Note that if the width of the buffer area exceeds the width of the console window, the value of the WindowLeft property is automatically adjusted when the user uses the horizontal scroll bar to define the window's relationship to the buffer area.
The following example opens an 80-column console window and defines a buffer area that is 120 columns wide. It displays information on window and buffer size, and then waits for the user to press either the LEFT ARROW key or the RIGHT ARROW key. In the former case, it decrements the value of the WindowLeft property by one if the result is a legal value. In the latter case, it increases the value of the WindowLeft property by one if the result would be legal. Note that the example does not have to handle an ArgumentOutOfRangeException, because it checks that the value to be assigned to the WindowLeft property is not negative and does not cause the sum of the WindowLeft and WindowWidth properties to exceed the BufferWidth property value.
using System;

public class Example
{
   public static void Main()
   {
      ConsoleKeyInfo key;
      bool moved = false;
      Console.BufferWidth = 120;
      Console.Clear();
      ShowConsoleStatistics();
      do
      {
         key = Console.ReadKey(true);
         if (key.Key == ConsoleKey.LeftArrow)
         {
            int pos = Console.WindowLeft - 1;
            if (pos >= 0 && pos + Console.WindowWidth <= Console.BufferWidth)
            {
               Console.WindowLeft = pos;
               moved = true;
            }
         }
         else if (key.Key == ConsoleKey.RightArrow)
         {
            int pos = Console.WindowLeft + 1;
            if (pos + Console.WindowWidth <= Console.BufferWidth)
            {
               Console.WindowLeft = pos;
               moved = true;
            }
         }
         if (moved)
         {
            ShowConsoleStatistics();
            moved = false;
         }
         Console.WriteLine();
      } while (true);
   }

   private static void ShowConsoleStatistics()
   {
      Console.WriteLine("Console statistics:");
      Console.WriteLine(" Buffer: {0} x {1}", Console.BufferHeight, Console.BufferWidth);
      Console.WriteLine(" Window: {0} x {1}", Console.WindowHeight, Console.WindowWidth);
      Console.WriteLine(" Window starts at {0}.", Console.WindowLeft);
      Console.WriteLine("Press <- or -> to move window, Ctrl+C to exit.");
   }
}