Key Takeaways
- .NET Core is cross-platform and runs on Windows, Linux, macOS and more. Compared to the full .NET Framework, its release cycle is much shorter. Most of .NET Core ships in NuGet packages and can be easily released and upgraded.
- The faster release cycle is particularly helpful for performance improvement work, and a great deal of work is going into improving the performance of language constructs such as SortedSet<T> and LINQ's .ToList() method.
- Faster cycles and easier upgrades also bring the opportunity to iterate over new ideas for improving .NET Core performance, by introducing types like System.ValueTuple and Span<T>.
- These improvements can then be fed back into the full .NET Framework once proven.
Now that .NET Core is on the streets, Microsoft and the open-source community can iterate more quickly over new features and enhancements in the framework. One of the areas of .NET Core that gets continuous attention is performance: .NET Core brings along many optimizations in terms of performance, both in execution speed as well as memory allocation.
In this article, we'll go over some of these optimizations and how the continuous stream – or Span<T>, more on that later – of performance work helps us in our lives as developers.
.NET and .NET Core
Before we dive in deeper, let’s first look at the main difference between the full .NET framework (let’s call it .NET for convenience) and .NET Core. To simplify things, let’s assume both frameworks respect the .NET Standard - essentially a spec that defines the base class library baseline for all of .NET. That makes both worlds very similar, except for two main differences:
First, .NET is mostly a Windows thing, where .NET Core is cross-platform and runs on Windows, Linux, Mac OS X and many more. Second, the release cycle is very different. .NET ships as a full framework installer that is system-wide and often part of a Windows installation, making the release cycle longer. For .NET Core, there can be multiple .NET Core installations on one system, and there is no long release cycle: most of .NET Core ships in NuGet packages and can be easily released and upgraded.
The big advantage is that the .NET Core world can iterate faster and try out new concepts in the wild, and eventually feed them back into the full .NET Framework as part of a future .NET Standard.
Very often (but not always), new features in .NET Core are driven by the C# language design. Since the framework can evolve more rapidly, the language can, too. A prime example of both the faster release cycle and a performance enhancement is System.ValueTuple. C# 7 and VB.NET 15 introduced "value tuples", which were easy to add to .NET Core due to the faster release cycle; for the full .NET Framework they were first available as a NuGet package (supporting .NET 4.5.2 and earlier) and only became part of the framework itself in .NET 4.7.
Now let’s have a look at a few of these performance and memory improvements that were made.
Performance improvements in .NET Core
One of the advantages of the .NET Core effort is that many things had to be either rebuilt or ported from the full .NET Framework. Having all of the internals in flux for a while, combined with the fast release cycles, provided an opportunity to make performance improvements in code that was previously considered "don't touch, it just works!"
Let's start with SortedSet<T> and its Min and Max implementations. A SortedSet<T> is a collection of objects that is maintained in a sorted order by leveraging a self-balancing tree structure. Before, getting the Min or Max object from that set required traversing the tree down (or up), calling a delegate for every element and setting the return value as the minimum or maximum to the current element, eventually reaching the bottom or top of the tree. Calling that delegate and passing around objects meant there was quite some overhead involved. Then one developer saw the tree for what it was and removed the unneeded delegate call, as it provided no value. His own benchmarks show a 30%-50% performance gain.
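To picture what changed, here is a simplified sketch of the idea behind the fix. This is illustrative only, not the actual BCL code, and the Node<T> type is invented for the example:

class Node<T>
{
    public Node<T> Left;
    public Node<T> Right;
    public T Item;
}

static class TreeHelpers
{
    // Walk the tree directly instead of invoking a delegate per node;
    // in a sorted binary tree, the leftmost node holds the minimum.
    public static T Min<T>(Node<T> root)
    {
        var current = root;
        while (current.Left != null)
            current = current.Left;
        return current.Item;
    }
}

The old code performed the same traversal, but routed every step through a delegate call, and removing that per-node call is where the gain comes from.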
Another nice example is found in LINQ, more specifically in the commonly used .ToList() method. Most LINQ methods operate as extension methods on top of an IEnumerable<T> to provide querying, sorting and methods like .ToList(). By doing this off an IEnumerable<T>, we don't have to care about the implementation of the underlying IEnumerable<T>, as long as we can iterate over it.

A downside is that when calling .ToList(), we have no idea of the size of the list to create, so we just enumerate all objects in the enumerable, doubling the size of the list we're about to return whenever capacity is reached. That's slightly insane, as it potentially wastes memory (and CPU cycles). So a change was made to create a list or array with a known size if the underlying IEnumerable<T> is in fact a List<T> or T[] with a known size. Benchmarks from the .NET team show a ~4x increase in throughput for these.
When looking through pull requests in the CoreFX lab repository on GitHub, we can see tons of performance improvements that have been made, both by Microsoft and the community; since .NET Core is open source, you can contribute performance fixes too. Most of these are just that: fixes to existing classes in .NET. But there is more: .NET Core also introduces several new concepts around performance and memory that go beyond fixing existing classes. Let's look at those for the remainder of this article.
Reducing allocations with System.ValueTuple
Imagine we want to return more than one value from a method. Previously, we'd have to resort to using out parameters, which are not very pleasant to work with and not supported when writing async methods. The other option was to use System.Tuple<T1, T2> as a return type, but this allocates an object and has rather unpleasant property names to work with (Item1, Item2, ...). A third option would be to use specific types or anonymous types, but that introduces overhead when writing the code, as we'd need the type to be defined, and it also makes unnecessary allocations in memory if all we need is a value embedded in that object.
Meet tuple return types, backed by System.ValueTuple. Both C# 7 and VB.NET 15 added a language feature to return multiple values from a method. Here’s a before and after:
// Before:
private Tuple<string, int> GetNameAndAge()
{
    return new Tuple<string, int>("Maarten", 33);
}

// After:
private (string, int) GetNameAndAge()
{
    return ("Maarten", 33);
}
In the first case, we are allocating a Tuple<string, int>. While in this example the effect will be negligible, the allocation is done on the managed heap and at some point the Garbage Collector (GC) will have to clean it up. In the second case, the compiler-generated code uses the ValueTuple<string, int> type, which is itself a struct and is created on the stack, giving us access to the two values we want to work with while making sure no GC has to be done on the containing data structure.
The difference also becomes visible if we use ReSharper’s Intermediate Language (IL) viewer to look at the code the compiler generates in the above examples. Here are just the two method signatures:
// Before:
.method private hidebysig instance class
    [System.Runtime]System.Tuple`2<string, int32>
    GetNameAndAge() cil managed
{
    // ...
}

// After:
.method private hidebysig instance valuetype
    [System.Runtime]System.ValueTuple`2<string, int32>
    GetNameAndAge() cil managed
{
    // ...
}
We can clearly see the first example returns an instance of a class and the second example returns an instance of a value type. The class is allocated on the managed heap (tracked and managed by the CLR, subject to garbage collection, mutable), whereas the value type is allocated on the stack (fast, less overhead, immutable). In short: System.ValueTuple itself is not tracked by the CLR and merely serves as a simple container for the embedded values we care about.
Note that next to their optimized memory usage, features like tuple deconstruction are quite pleasant side effects of making this part of the language as well as the framework.
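For example, deconstruction lets a caller unpack the result of the earlier GetNameAndAge() straight into named locals:

// No Item1/Item2 here; the compiler wires the tuple elements to the locals.
var (name, age) = GetNameAndAge();
Console.WriteLine($"{name} is {age} years old.");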
Allocationless substrings with Span<T>
We already touched on stack vs. managed heap in the previous section. Most .NET developers use just the managed heap, but .NET has three types of memory we can use, depending on the situation:
- Stack memory – the memory space in which we typically allocate value types like int, double, bool, ... It's very fast (it very often lives in the CPU's cache), but limited in size (typically < 1 MB). The adventurous use the stackalloc keyword to add custom objects, but know they are on dangerous territory, as a StackOverflowException can occur at any time and crash our entire application.
- Unmanaged memory – the memory space where there is no garbage collector and we have to reserve and free memory ourselves, using methods like Marshal.AllocHGlobal and Marshal.FreeHGlobal.
- Managed memory / managed heap – the memory space where the garbage collector frees up memory that is no longer in use, and where most of us live our happy programmer lives with few memory issues.
All have their own advantages and disadvantages, and have specific use cases. But what if we want to write a library that works with all of these memory types? We'd have to provide methods for each of them separately: one that takes a managed object, another that takes a pointer to an object on the stack or in the unmanaged heap. A good example would be creating a substring of a string. To handle the managed version, we would need a method that takes a System.String and returns a new System.String representing the substring. The unmanaged/stack version would take a char* (yes, a pointer!) and the length of the string, and would return similar pointers to the result. Unmanageable...
The System.Memory NuGet package (currently still in preview) introduces a new Span<T> construct. It's a value type (so not tracked by the garbage collector) that tries to unify access to any underlying memory type. It provides a few methods, but in essence it holds:
- A reference to T
- An optional start index
- An optional length
- Some utility functions to grab a slice of the Span<T>, copy the contents, ...
Think of it as this (pseudo-code):
public struct Span<T>
{
    ref T _reference;
    int _length;
    public ref T this[int index] { get { ... } }
}
No matter if we are creating a Span<T> using a string, a char[] or even an unmanaged char*, the Span<T> object provides us with the same functions, such as returning an element at a given index. Think of it as being a T[], where T can be any type of memory. If we wanted to write a Substring() method that handles all types of memory, all we have to care about is working with a Span<char> (or its immutable version, ReadOnlySpan<T>):
ReadOnlySpan<char> Substring(ReadOnlySpan<char> source, int startIndex, int length);
The source argument here can be a span that is based on a System.String, or on an unmanaged char* – we don't have to care.
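Calling it looks the same regardless of where the characters live. A quick sketch, assuming the AsSpan() extension method and the implicit array-to-span conversion provided by the System.Memory package:

// From a managed string:
ReadOnlySpan<char> fromString = Substring("Hello World!".AsSpan(), 0, 5);

// From a char array; the array converts implicitly to a span:
char[] buffer = { 'H', 'e', 'l', 'l', 'o', '!' };
ReadOnlySpan<char> fromArray = Substring(buffer, 0, 5);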
But let's forget about the memory-type agnostic aspect of Span<T> for a bit and focus on performance. If we'd write a Substring() method for System.String, this is probably what we would come up with:

string Substring(string source, int startIndex, int length)
{
    var result = new char[length];
    for (var i = 0; i < length; i++)
    {
        result[i] = source[startIndex + i];
    }
    return new string(result);
}
That's great, but we are in fact creating a copy of the substring. If we call Substring("Hello World!", 0, 5), we'd have two strings in memory: "Hello World!" and "Hello", potentially wasting memory space, and our code still has to copy data from one array to another to make this happen, consuming CPU cycles. Our implementation is not bad, but it is not ideal either.
Imagine implementing a web framework, and having to use the above code to grab the request body from an incoming HTTP request that has headers and a body. We’d have to allocate big chunks of memory that have duplicate data: one that has the entire incoming request and the substring that holds just the body. And then there’s the overhead of having to copy data from the original string into our substring.
Now let's rewrite that using (ReadOnly)Span<T>:

static ReadOnlySpan<char> Substring(ReadOnlySpan<char> source, int startIndex, int length)
{
    return source.Slice(startIndex, length);
}
Ok, that is shorter, but there is more. Due to the way Span<T> is implemented, our method does not return a copy of the source data; instead it returns a Span<T> that refers to a subset of our source. Or, in the example of splitting an HTTP request into headers and body: we'd have three Span<T> – one for the incoming HTTP request, one pointing to the original data's header part, and another pointing to the request body. The data would be in memory only once (the data from which the first Span<T> is created); all else would just point to slices of the original. No duplicate data, no overhead in copying and duplicating data.
Conclusion
With .NET Core's faster release cycle and easier upgrades, Microsoft and the community can iterate quickly on performance: a great deal of work has gone into improving language constructs such as SortedSet<T> and LINQ's .ToList() method.
Faster cycles and easier upgrades also bring the opportunity to iterate over new ideas for improving .NET Core performance, by introducing types like System.ValueTuple and Span<T> that make it more natural for .NET developers to use the different types of memory we have available in the runtime, while at the same time avoiding the common pitfalls related to them.
Imagine if some .NET base classes were reworked to a Span<T> implementation: things like string UTF parsing, crypto operations, web parsing and other typical CPU- and memory-consuming tasks. That would bring great improvements to the framework, and all of us .NET developers would benefit. It turns out that is precisely what Microsoft is planning to do! .NET Core's performance future is bright!
About the Author: Maarten Balliauw is an MVP for Microsoft Azure. Maarten is a frequent speaker at various national and international events and organizes Azure User Group events in Belgium. In his free time, he brews his own beer. You can read more on Maarten's blog.
Hi, I'm new to these forums and I need help with this application.
Application Requirements
In this application you are to create a Three Shells Game. In this game there are three cups or shells
that are upturned. For each new game the pea is placed at random under one of the cups. The player
must select a cup. If the pea is under that cup then the player wins; if not, the player loses.
You are to design and build your version of this game. You will need to decide how you are going to
represent the cups and pea on your interface and how the player will select the cup with the pea. The
results of the selection (a win or loss) will also need to be displayed within the interface. Your
application will need to keep track of game statistics such as the number of games won, the number
of games lost, the longest winning streak (number of consecutive wins) and the longest losing streak
(number of consecutive losses). The logic associated with keeping track of streaks is provided in
Appendix A. These statistics only need to be kept for the current session, not for the lifetime of the
application. You are not permitted to program your game to cheat.
Although you can choose your own format and layout for your interface, there are a few compulsory
requirements.
1. You need to include buttons that provide access to “New Game”, “Instructions”, and “About
Me” (your details). You may include other buttons of your own choosing.
2. The player needs to be able to “see” where the pea was located if they selected the wrong
cup.
3. The interface needs to be well set out and usable.
4. You must create your application in Microsoft C# .Net 2005
5. You must use program documentation to explain your code (undocumented code will be sent
back to you for revision with penalties)
6. You must use good programming practices, i.e. meaningful names for controls and variables,
appropriate control structures, etc.
Optional extras you might like to consider are:
1. Different “shell” shapes or colours.
2. An extra guess if the first one was wrong.
3. For those who want to research on their own, sound or animation, menus, additional forms or
windows
Your application will need to generate random numbers to “position” the pea under one of the cups.
You will need to investigate how to use the random class to do this.
Appendix A:
Determining the logic behind the longest winning and losing streaks.
The following flowchart demonstrates one possible way of determining the longest winning and losing
streak. There are other possible methods and you are more than welcome to develop your own.
Alternatively you can convert this flowchart to C# code.
Assumptions
A number of assumptions need to be made about variables used in the flowchart. These are:
1. LongestWinStreak stores the number of games in the longest winning streak (initially zero).
2. LongestLossStreak stores the number of games in the longest losing streak (initially zero).
3. CurrentWinStreak stores the number of games in the current winning streak (initially zero).
4. CurrentLossStreak stores the number of games in the current losing streak (initially zero).
5. GameResult is a Boolean value indicating whether the player won (true) or lost (false).
START CheckStreaks
  IF GameResult is true THEN
    CurrentLossStreak = 0
    Increment CurrentWinStreak
    IF CurrentWinStreak > LongestWinStreak THEN
      LongestWinStreak = CurrentWinStreak
  ELSE
    CurrentWinStreak = 0
    Increment CurrentLossStreak
    IF CurrentLossStreak > LongestLossStreak THEN
      LongestLossStreak = CurrentLossStreak
STOP CheckStreaks
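One possible C# translation of that flowchart (a sketch; the field names simply mirror the variables listed in the assumptions):

private int currentWinStreak, currentLossStreak;
private int longestWinStreak, longestLossStreak;

// Called once per game with the result (true = win, false = loss).
private void CheckStreaks(bool gameResult)
{
    if (gameResult)
    {
        currentLossStreak = 0;
        currentWinStreak++;
        if (currentWinStreak > longestWinStreak)
            longestWinStreak = currentWinStreak;
    }
    else
    {
        currentWinStreak = 0;
        currentLossStreak++;
        if (currentLossStreak > longestLossStreak)
            longestLossStreak = currentLossStreak;
    }
}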
This is my code so far... but it doesn't work:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;

namespace sit_102_assignment_2
{
    public partial class Form1 : Form
    {
        // Declaring a random object.
        Random RandomNumber = new Random();

        // Declaring an int which will hold the random number
        int x, counter = 0;

        public Form1()
        {
            InitializeComponent();
        }

        private void newGameButton_Click(object sender, EventArgs e)
        {
            shellOne.Visible = true;
            shellTwo.Visible = true;
            shellThree.Visible = true;
            x = RandomNumber.Next(1, 4);
        }

        private void shellThree_Click(object sender, EventArgs e)
        {
            if (x == 3)
            {
                textBox1.Text = "Good";
                counter++;
                x = RandomNumber.Next(1, 4);
            }
            else
            {
                textBox1.Text = "Bad";
            }
        }

        private void shellTwo_Click(object sender, EventArgs e)
        {
            if (x == 2)
            {
                textBox1.Text = "Good";
                counter++;
                x = RandomNumber.Next(1, 4);
            }
            else
            {
                textBox1.Text = "Bad";
            }
        }

        private void shellOne_Click(object sender, EventArgs e)
        {
            if (x == 1)
            {
                textBox1.Text = "Good";
                counter++;
                x = RandomNumber.Next(1, 4);
            }
            else
            {
                textBox1.Text = "Bad";
            }
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }
    }
}
All I can see are the New Game, Instructions and About Me buttons and the text box, and they are so not useful.
Any help would be greatly appreciated.
01 June 2012 07:23 [Source: ICIS news]
By Mahua Chakravarty
SINGAPORE (ICIS)--Benzene prices in Asia may soften further in the near term, weighed down by weak upstream crude and naphtha values, market sources said.
Benzene prices may slip below the psychological $1,000/tonne (€810/tonne) FOB (free on board) Korea level, they said.
“Prices can go below $1,000/tonne FOB Korea,” said a Korean producer.
Naphtha prices in Asia were at $809.50-812.50/tonne CFR (cost and freight) Japan.
Looking at the current $245.50-257.50/tonne spread from naphtha to benzene, there is room for benzene prices to drop further, they said.
Aromatics producers need a spread of about $150/tonne as the break-even point for production margins.
Spot benzene prices in Asia hit a five-month low on 31 May, at $1,055-1,070/tonne FOB Korea.
Prices have plunged by $120-130/tonne in the past one month, mainly driven by a bearish upstream market, players added.
“The market is mainly being driven by crude [these days],” said a second Korean producer.
Meanwhile, benzene supply in Asia was more balanced for June, as a number of regional crackers have reduced or are planning to cut operating rates, traders and producers said.
Poor margins for toluene disproportionation (TDP) producers have also forced few units to cut production in recent weeks, resulting in less benzene supply, they added.
Asian benzene exporters were also planning to load about 50,000-60,000 tonnes of benzene for the US.
But demand for benzene from the SM sector could also see some decline in the coming weeks, as some SM producers in the region were planning to cut production due to weak performance in the derivatives styrenics sector, producers added.
($1 = €0.81)
Binary search is also known as half-interval search, binary chop or logarithmic search.
How it Works?
Binary search works on a sorted array of values. It begins by comparing the middle element of the array with the target value. If the target value matches the middle element, its position in the array is returned. If the target value is less than or greater than the middle element, the search continues in the lower or upper half of the array, respectively, eliminating the other half from consideration. If the search ends with the remaining half being empty, the target is not in the array.
Time and Space Complexity?
Binary search runs in at worst logarithmic time, making O(log n) comparisons, and takes constant (O(1)) space,
where n is the number of elements in the array, the O is Big O notation, and log is the logarithm.
Java API for Binary Search
Java's Arrays class also provides an API for binary search. You can use it as below:

import java.util.Arrays;

public class BinarySearchByJavaAPI {
    public static void main(String[] args) {
        char characters[] = { 'l', 'm', 'n', 'p', 'q' };
        // 'a' is absent: a negative (insertion-point based) value is returned.
        System.out.println(Arrays.binarySearch(characters, 'a'));
        // 'p' is present at index 3.
        System.out.println(Arrays.binarySearch(characters, 'p'));
    }
}
Java Program for Binary Search
import java.util.Scanner;

class BinarySearch {
    public static void main(String args[]) {
        int c, first, last, middle, n, search, array[];
        Scanner in = new Scanner(System.in);

        System.out.println("Enter number of elements");
        n = in.nextInt();
        array = new int[n];

        System.out.println("Enter " + n + " sorted integers");
        for (c = 0; c < n; c++)
            array[c] = in.nextInt();

        System.out.println("Enter value to find");
        search = in.nextInt();

        first = 0;
        last = n - 1;
        middle = (first + last) / 2;

        while (first <= last) {
            if (array[middle] < search)
                first = middle + 1;        // search the upper half
            else if (array[middle] == search) {
                System.out.println(search + " found at location " + (middle + 1) + ".");
                break;
            } else
                last = middle - 1;         // search the lower half
            middle = (first + last) / 2;
        }
        if (first > last)
            System.out.println(search + " is not present in the list.\n");
    }
}
My previous post pointed out the shortcomings of using Try-Catch blocks to handle situations that are not truly exceptional. In the same vein I offer this opinion on the VB.Net MsgBox() function -- note I say opinion because this time I don't have the astounding perf numbers to back me up. Here goes . . .
I was eating dinner tonight and showing a friend of mine a little VB.Net code. Yes, that's right -- a buddy stopped by and we were huddled around the laptop for a little nerdery and eggplant parmesan. This friend of mine does some Access development and I've been working on bringing him into the .Net fold. We got sidetracked for about 30 minutes on why I don't use MsgBox() and, instead, use MessageBox.Show(). While some will say it's only a matter of preference, I think it's more than that. MsgBox is the Visual Basic wrapper around the .Net Framework's MessageBox.Show method -- there is a tiny bit more overhead to MsgBox, but it's negligible especially when you consider you're halting your program to display a dialog box to the user! I can't really play the perf card here.
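For the record, here is what the two calls look like side by side (a trivial VB.NET illustration):

' The VB-specific wrapper, living in the Microsoft.VisualBasic namespace:
MsgBox("Hello from MsgBox")

' The .NET Framework method (preferred):
MessageBox.Show("Hello from MessageBox.Show")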
I think my two main issues with MsgBox are:

- It lives in the Microsoft.VisualBasic compatibility layer rather than in the core framework, so the habit doesn't translate to the other .NET languages.
- It's just a wrapper around MessageBox.Show, adding a layer of indirection for no real benefit.
Call me a snob, but I know if I was reviewing code for a new hire and saw MsgBox() in their VB.Net code, I'd have a slightly -- just ever so slightly -- lesser opinion of the code. If it came down to two identical candidates, one MsgBoxer and one MessageBox.Shower, I'd hire the MessageBox.Shower for sure. MSDN even says MessageBox.Show is preferred.
I realize this is hardly a CodingWorse scenario on par with the previous try-catch transgression, but MsgBox just rubs me the wrong way.
This is pretty funny. The reason most of you dislike the VisualBasic namespace is (I think) because you're not 'comfortable' with seeing old VB6-like code. Tough luck, it works, it's less typing a lot of the time and ppl that are used to doing something that works usually continue to use it.
I agree that using .net framework does make c# and vb.net almost the same language.
Don't begrudge ppl for using familiar language-specific functions; it makes the world a more interesting place.
Expand your mind, dudes. | http://codebetter.com/blogs/grant.killian/archive/2005/02/02/50387.aspx | crawl-001 | refinedweb | 396 | 74.9 |
Robert Nix
2008-04-02
Tried MDBtools because I have an MDB file I need to extract the database from.
First problem: while building the tools on Mac, I ran into a problem compiling backend.c. It contains the statement "static GHashTable *mdb_backends;". Unfortunately, the file mdbtools.h contains the line "extern GHashTable *mdb_backends;". Having the extern declaration included causes the compiler to choke on the static statement in backend.c.
The way I got past this was to change mdbtools.h to read as follows:
/* hash to store registered backends */
#ifndef _BACKEND_C
extern GHashTable *mdb_backends;
#endif
And in backend.c, I changed as follows:
#define _BACKEND_C
#include "mdbtools.h"
Obviously, you don't see the problem in your compiler, or have different defaults than I. You can address the issue if you like....
My second problem is that I have an MDB file with a database in it that I'd like to extract. Dumping the file, I can see the data I'm looking for, but mdb-export dumps all blank records. The number of records is correct, but the data is all either low dates for the date field, or "" for the text fields. mdb-table shows the table, mdb-schema shows the database layout, but mdb-export and mdb-array both give me 89 of the blank records, such as:
145,"01/01/70 00:00:00","","","","","","","","","",""
146,"01/01/70 00:00:00","","","","","","","","","",""
147,"01/01/70 00:00:00","","","","","","","","","",""
148,"01/01/70 00:00:00","","","","","","","","","",""
149,"01/01/70 00:00:00","","","","","","","","","",""
150,"01/01/70 00:00:00","","","","","","","","","",""
151,"01/01/70 00:00:00","","","","","","","","","",""
152,"01/01/70 00:00:00","","","","","","","","","",""
mdb-ver returns "JET4".
Is there any way to recover this data?
--
Bob Nix
[email protected]
Anonymous
2010-03-11
The proper solution to the extern/static issue is to change this line:
extern GHashTable *mdb_backends;
to
static GHashTable *mdb_backends;
Anonymous
2010-03-11
This post is in a lot of places, here is my response as posted in another thread identical to this one:
The line:
extern GHashTable *mdb_backends;
Should be changed to:
static GHashTable *mdb_backends; | http://sourceforge.net/p/mdbtools/discussion/6689/thread/6c42e81e/ | CC-MAIN-2015-18 | refinedweb | 357 | 71.85 |
If we hit the VM_BUG_ON(), we're detecting a genuinely bad situation,
but we're very unlikely to get a useful call trace.

Make it a warning instead.

Signed-off-by: Andy Lutomirski <[email protected]>
---
 arch/x86/mm/tlb.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index dbbcfd59726a..f4e471dd1526 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -121,8 +121,28 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	 * hypothetical buggy code that directly switches to swapper_pg_dir
 	 * without going through leave_mm() / switch_mm_irqs_off() or that
 	 * does something like write_cr3(read_cr3_pa()).
+	 *
+	 * Only do this check if CONFIG_DEBUG_VM=y because __read_cr3()
+	 * isn't free.
 	 */
-	VM_BUG_ON(__read_cr3() != (__sme_pa(real_prev->pgd) | prev_asid));
+#ifdef CONFIG_DEBUG_VM
+	if (WARN_ON_ONCE(__read_cr3() !=
+			 (__sme_pa(real_prev->pgd) | prev_asid))) {
+		/*
+		 * If we were to BUG here, we'd be very likely to kill
+		 * the system so hard that we don't see the call trace.
+		 * Try to recover instead by ignoring the error and doing
+		 * a global flush to minimize the change of corruption.
+		 *
+		 * (This is far from being a fully correct recovery.
+		 *  Architecturally, the CPU could prefetch something
+		 *  back into an incorrect ASID slot and leave it there
+		 *  to cause trouble down the road.  It's better than
+		 *  nothing, though.)
+		 */
+		__flush_tlb_all();
+	}
+#endif

 	if (real_prev == next) {
 		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=

--
2.13.5
POJO vs Java Beans. A POJO (Plain Old Java Object) is an ordinary Java object, not bound by any special restriction. POJOs have gained the most acceptance because they are easy to write and understand. They came to prominence with EJB 3.0 from Sun Microsystems.
A POJO should not:
- Extend prespecified classes, Ex: public class GFG extends javax.servlet.http.HttpServlet { … } is not a POJO class.
- Implement prespecified interfaces, Ex: public class Bar implements javax.ejb.EntityBean { … } is not a POJO class.
- Contain prespecified annotations, Ex: @javax.persistence.Entity public class Baz { … } is not a POJO class.
POJOs basically define an entity. For example, if your program needs an Employee class, you can create a POJO as follows.
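A minimal sketch of such a class (the member names here are illustrative):

public class Employee {
    private String name;      // private field
    public String position;   // public field
    protected double salary;  // protected field
    String department;        // default-access field

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}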
The above example is a well-defined POJO class. As you can see, there is no restriction on the access modifiers of fields: they can be private, default, protected, or public. It is also not necessary to include any constructor in it.
A POJO is an object which encapsulates business logic. In a typical layered application, controllers interact with your business logic, which in turn interacts with POJOs to access the database. In that picture, a database entity is represented by a POJO which has the same members as the database entity.
Beans are a special type of POJO. There are some restrictions on a POJO for it to be a bean:

- All JavaBeans are POJOs but not all POJOs are JavaBeans.
- They should be serializable, i.e. implement the Serializable interface. Still, some POJOs that don't implement the Serializable interface are called POJOs, because Serializable is a marker interface and therefore not much of a burden.
- Fields should be private, to provide complete control over them.
- Fields should have getters or setters, or both.
- A no-arg constructor should be there in a bean.
- Fields are accessed only by the constructor or by getters/setters.
Getters and setters have special names depending on the field name. For example, if the field name is someProperty, then its getter will preferably be:

public SomeType getSomeProperty() {
    return someProperty;
}

and its setter will be:

public void setSomeProperty(SomeType someProperty) {
    this.someProperty = someProperty;
}

(SomeType here stands for whatever the field's declared type is.)
The visibility of getters and setters is generally public. Getters and setters provide complete control over fields. For example, consider the property:

Integer age;

If you set the visibility of age to public, then any object can use it. Suppose you want age to never be 0; in that case you have no control, because any object can set it to 0. But by using a setter method, you do have control: you can put a condition in your setter. Similarly, if you want the getter to return null when age is 0, you can achieve that in the getter method.
Using Java Reflection
By Glen McCluskey, January 1998

One tangible use of reflection is in JavaBeans, where software components can be manipulated visually via a builder tool. The tool uses reflection to obtain the properties of Java components (classes) as they are dynamically loaded.

A Simple Example

To see how reflection works, consider this simple example:

import java.lang.reflect.*;

public class DumpMethods {
    public static void main(String args[]) {
        try {
            Class c = Class.forName(args[0]);
            Method m[] = c.getDeclaredMethods();
            for (int i = 0; i < m.length; i++)
                System.out.println(m[i].toString());
        } catch (Throwable e) {
            System.err.println(e);
        }
    }
}

For an invocation of:

java DumpMethods java.util.Stack

the output is:

public java.lang.Object java.util.Stack.push(java.lang.Object)
public synchronized java.lang.Object java.util.Stack.pop()
public synchronized java.lang.Object java.util.Stack.peek()
public boolean java.util.Stack.empty()
public synchronized int java.util.Stack.search(java.lang.Object)

That is, the method names of class java.util.Stack are listed, along with their fully qualified parameter and return types.

This program loads the specified class using Class.forName, and then calls getDeclaredMethods to retrieve the list of methods defined in the class. java.lang.reflect.Method is a class representing a single class method.

Setting Up to Use Reflection

The reflection classes, such as Method, are found in java.lang.reflect. There are three steps that must be followed to use these classes. The first step is to obtain a java.lang.Class object for the class that you want to manipulate. java.lang.Class is used to represent classes and interfaces in a running Java program.

One way of obtaining a Class object is to say:

Class c = Class.forName("java.lang.String");

to get the Class object for String. Another approach is to use:

Class c = int.class;

or

Class c = Integer.TYPE;

to obtain Class information on fundamental types. The latter approach accesses the predefined TYPE field of the wrapper (such as Integer) for the fundamental type.

The second step is to call a method such as getDeclaredMethods, to get a list of all the methods declared by the class. Once this information is in hand, the third step is to use the reflection API to manipulate the information. For example, the sequence:

Class c = Class.forName("java.lang.String");
Method m[] = c.getDeclaredMethods();
System.out.println(m[0].toString());

will display a textual representation of the first method declared in String. In the examples below, the three steps are combined to present self-contained illustrations of how to tackle specific applications using reflection.

Simulating the instanceof Operator

Once Class information is in hand, often the next step is to ask basic questions about the Class object. For example, the Class.isInstance method can be used to simulate the instanceof operator:

class A {}

public class instance1 {
    public static void main(String args[]) {
        try {
            Class cls = Class.forName("A");
            boolean b1 = cls.isInstance(new Integer(37));
            System.out.println(b1);
            boolean b2 = cls.isInstance(new A());
            System.out.println(b2);
        } catch (Throwable e) {
            System.err.println(e);
        }
    }
}

In this example, a Class object for A is created, and then class instance objects are checked to see whether they are instances of A. Integer(37) is not, but new A() is.

Finding Out About Methods of a Class

One of the most valuable and basic uses of reflection is to find out what methods are defined within a class.
To do this the following code can be used:

import java.lang.reflect.*;

public class method1 {
    private int f1(Object p, int x) throws NullPointerException {
        if (p == null)
            throw new NullPointerException();
        return x;
    }

    public static void main(String args[]) {
        try {
            Class cls = Class.forName("method1");
            Method methlist[] = cls.getDeclaredMethods();
            for (int i = 0; i < methlist.length; i++) {
                Method m = methlist[i];
                System.out.println("name = " + m.getName());
                System.out.println("decl class = " + m.getDeclaringClass());
                Class pvec[] = m.getParameterTypes();
                for (int j = 0; j < pvec.length; j++)
                    System.out.println(" param #" + j + " " + pvec[j]);
                Class evec[] = m.getExceptionTypes();
                for (int j = 0; j < evec.length; j++)
                    System.out.println("exc #" + j + " " + evec[j]);
                System.out.println("return type = " + m.getReturnType());
                System.out.println("-----");
            }
        } catch (Throwable e) {
            System.err.println(e);
        }
    }
}

The program first gets the Class description for method1, and then calls getDeclaredMethods to retrieve a list of Method objects, one for each method defined in the class. These include public, protected, package, and private methods. If you use getMethods in the program instead of getDeclaredMethods, you can also obtain information for inherited methods.

Once a list of the Method objects has been obtained, it's simply a matter of displaying the information on parameter types, exception types, and the return type for each method. Each of these types, whether they are fundamental or class types, is in turn represented by a Class descriptor.

The output of the program is:

name = f1
decl class = class method1
param #0 class java.lang.Object
param #1 int
exc #0 class java.lang.NullPointerException
return type = int
-----
name = main
decl class = class method1
param #0 class [Ljava.lang.String;
return type = void
-----

Obtaining Information About Constructors

A similar approach is used to find out about the constructors of a class. For example:
To do this, the following code can be used: import java.lang.reflect.*; public class field1 { private double d; public static final int i = 37; String s = "testing"; public static void main(String args[]) { try { Class cls = Class.forName("field1"); Field fieldlist[] = cls.getDeclaredFields(); for (int i = 0; i < fieldlist.length; i++) { Field fld = fieldlist[i]; System.out.println("name = " + fld.getName()); System.out.println("decl class = " + fld.getDeclaringClass()); System.out.println("type = " + fld.getType()); int mod = fld.getModifiers(); System.out.println("modifiers = " + Modifier.toString(mod)); System.out.println("-----"); } } catch (Throwable e) { System.err.println(e); } } } This example is similar to the previous ones. One new feature is the use of Modifier. This is a reflection class that represents the modifiers found on a field member, for example "private int". The modifiers themselves are represented by an integer, and Modifier.toString is used to return a string representation in the "official" declaration order (such as "static" before "final"). The output of the program is: name = d decl class = class field1 type = double modifiers = private ----- name = i decl class = class field1 type = int modifiers = public static final ----- name = s decl class = class field1 type = class java.lang.String modifiers = ----- As with methods, it's possible to obtain information about just the fields declared in a class (getDeclaredFields), or to also get information about fields defined in superclasses (getFields). Invoking Methods by Name So far the examples that have been presented all relate to obtaining class information. But it's also possible to use reflection in other ways, for example to invoke a method of a specified name. To see how this works, consider the following example:); } } } Suppose that a program wants to invoke the add method, but doesn't know this until execution time. That is, the name of the method is specified during execution (this might be done by a JavaBeans development environment, for example). The above program shows a way of doing this. getMethod is used to find a method in the class that has two integer parameter types and that has the appropriate name. Once this method has been found and captured into a Method object, it is invoked upon an object instance of the appropriate type. To invoke a method, a parameter list must be constructed, with the fundamental integer values 37 and 47 wrapped in Integer objects. The return value (84) is also wrapped in an Integer object. Creating New Objects There is no equivalent to method invocation for constructors, because invoking a constructor is equivalent to creating a new object (to be the most precise, creating a new object involves both memory allocation and object construction). 
So the nearest equivalent to the previous example is to say: import java.lang.reflect.*; public class constructor2 { public constructor2() { } public constructor2(int a, int b) { System.out.println( "a = " + a + " b = " + b); } public static void main(String args[]) { try { Class cls = Class.forName("constructor2"); Class partypes[] = new Class[2]; partypes[0] = Integer.TYPE; partypes[1] = Integer.TYPE; Constructor ct = cls.getConstructor(partypes); Object arglist[] = new Object[2]; arglist[0] = new Integer(37); arglist[1] = new Integer(47); Object retobj = ct.newInstance(arglist); } catch (Throwable e) { System.err.println(e); } } } which finds a constructor that handles the specified parameter types and invokes it, to create a new instance of the object. The value of this approach is that it's purely dynamic, with constructor lookup and invocation at execution time, rather than at compilation time. Changing Values of Fields Another use of reflection is to change the values of data fields in objects. The value of this is again derived from the dynamic nature of reflection, where a field can be looked up by name in an executing program and then have its value changed. This is illustrated by the following example: import java.lang.reflect.*; public class field2 { public double d; public static void main(String args[]) { try { Class cls = Class.forName("field2"); Field fld = cls.getField("d"); field2 f2obj = new field2(); System.out.println("d = " + f2obj.d); fld.setDouble(f2obj, 12.34); System.out.println("d = " + f2obj.d); } catch (Throwable e) { System.err.println(e); } } } In this example, the d field has its value set to 12.34. Using Arrays One final use of reflection is in creating and manipulating arrays. Arrays in the Java language are a specialized type of class, and an array reference can be assigned to an Object reference. To see how arrays work, consider the following example:); } } } This example creates a 10-long array of Strings, and then sets location 5 in the array to a string value. The value is retrieved and displayed. A more complex manipulation of arrays is illustrated by the following code: import java.lang.reflect.*; public class array2 { public static void main(String args[]) { int dims[] = new int[]{5, 10, 15}; Object arr = Array.newInstance(Integer.TYPE, dims); Object arrobj = Array.get(arr, 3); Class cls = arrobj.getClass().getComponentType(); System.out.println(cls); arrobj = Array.get(arrobj, 5); Array.setInt(arrobj, 10, 37); int arrcast[][][] = (int[][][])arr; System.out.println(arrcast[3][5][10]); } } This example creates a 5 x 10 x 15 array of ints, and then proceeds to set location [3][5][10] in the array to the value 37. Note here that a multi-dimensional array is actually an array of arrays, so that, for example, after the first Array.get, the result in arrobj is a 10 x 15 array. This is peeled back once again to obtain a 15-long array, and the 10th slot in that array is set using Array.setInt. Note that the type of array that is created is dynamic, and does not have to be known at compile time. Summary Java reflection is useful because it supports dynamic retrieval of information about classes and data structures by name, and allows for their manipulation within an executing Java program. This feature is extremely powerful and has no equivalent in other conventional languages such as C, C++, Fortran, or Pascal. | http://blog.csdn.net/nzh_csdn/article/details/160687 | CC-MAIN-2018-09 | refinedweb | 2,026 | 50.53 |
On Tuesday, 26 December 2017 07:27:33 -03, Boris Pek wrote:
> Hi,
>
> >> salsa.d.o (alioth's replacement for git at least) is up in beta state.
> >>
> >> I think we should ask for the pkg-kde team group. I think ACLs will be
> >> back with this.
> >>
> >> What do you think?
>
> > s/ask for/create ourselves/, otherwise +1.
>
> Also do not forget to create pkg-kde-extras group.
That's normally a subgroup in the main group, not a group by itself. Of course we could use the opportunity to create it as a special group in the gitlab instance (ie, at the same level as the main team) but keep it "under our umbrella". Please note that I am suggesting this just to consider it as an option, especially if it fits better in the gitlab instance (I don't know). I think this should be kept under our umbrella no matter how we create the instance in gitlab.

> And how about Qt-extras (pkg-kde/qt-extras) sub-team which was announced
> earlier in this year?

It's logically created in alioth as a git namespace, same as with kde-extras.

> > I don't think there is any migration from Alioth, so we will need to add
> > members to the team manually.
>
> Will we add maintainers with available accounts once after creation of group
> or will we wait personal requests to join group?

Good question. Personally I would just start with the current active admins and then wait for a personal request in order to use the opportunity to clean up the list. Of course I would not hesitate to add someone who is already listed on alioth as soon as the request is received.

--
Matthias Ettrich, founder of the KDE project.
Lisandro Damián Nicanor Pérez Meyer
I am new to python web frameworks. I am using web.py because I like how raw it is. I am wondering, though, how one can produce pages and scripts efficiently when restricted to sending output through the return keyword? It seems like for any python module you can only send one thing to the browser, even if it is a large string. What am I missing about the python way of server-side scripting? If it helps, I am coming from a PHP perspective. I am used to being able to say print "foo" and foo will appear. Now I am only allowed to print once per module. If you could point me in the direction of a python approach to scripting vs the php approach I'd much appreciate it! I'm awfully confused at this point about how python can be so efficient while being so limited.
For example, here is a basic program I wrote:
import web

urls = ('/', 'index')

class index:
    def GET(self):
        # This is the only place in the file where I can output anything
        # (the rest of the file you should be familiar with).
        return "foo"
The point is that it appears to me that the only place you could output anything is in the one return line? Now I understand that if the URI arguments change, you can map to a different class and thus output something differently, but even then this seems limited?
Or ultimately, is the point of the web framework to use 'templates' as they are called in web.py? Thanks for the help. | https://www.daniweb.com/programming/software-development/threads/444175/web-py-authoring-efficiency | CC-MAIN-2016-44 | refinedweb | 266 | 79.3 |
An open-source multi-purpose N-body code.
You can install REBOUND with pip if you want to only use the python version of REBOUND:
pip install rebound
Then, you can run a simple REBOUND simulation such as
import rebound

sim = rebound.Simulation()
sim.add(m=1.0)              # central star
sim.add(m=1.0e-3, a=1.0)    # planet with semi-major axis of 1
sim.integrate(1000.)
sim.status()
If you want to use the C version of REBOUND simply copy and paste this line into your terminal (it won’t do anything bad, we promise):
git clone https://github.com/hannorein/rebound && cd rebound/examples/shearing_sheet && make && ./rebound
The full documentation with many examples, changelogs and tutorials can be found at https://rebound.readthedocs.io/.
We're always trying to improve REBOUND, and extending the documentation is high on our to-do list. If you have trouble installing or using REBOUND, please open an issue on GitHub and we'll try to help as much as we can.
For a changelog of the most important changes in recent updates, see the release notes in the repository.
GETSID(2) Linux Programmer's Manual GETSID(2)
NAME
       getsid - get session ID

SYNOPSIS
       #include <sys/types.h>
       #include <unistd.h>

       pid_t getsid(pid_t pid);

   Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

       getsid():
           _XOPEN_SOURCE >= 500
               || /* Since glibc 2.12: */ _POSIX_C_SOURCE >= 200809L

DESCRIPTION
       getsid() returns the session ID of the process with process ID pid.
       If pid is 0, getsid() returns the session ID of the calling process.

RETURN VALUE
       On success, a session ID is returned.  On error, (pid_t) -1 will be
       returned, and errno is set appropriately.

ERRORS
       EPERM  A process with process ID pid exists, but it is not in the
              same session as the calling process, and the implementation
              considers this an error.

       ESRCH  No process with process ID pid was found.

VERSIONS
       This system call is available on Linux since version 2.0.

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008, SVr4.

NOTES
       Linux does not return EPERM.

       See credentials(7) for a description of sessions and session IDs.
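EXAMPLE
       A minimal, illustrative program that prints the session ID of the
       calling process:

       #include <sys/types.h>
       #include <unistd.h>
       #include <stdio.h>

       int
       main(void)
       {
           pid_t sid = getsid(0);   /* 0 means the calling process */

           if (sid == (pid_t) -1) {
               perror("getsid");
               return 1;
           }
           printf("session ID: %ld\n", (long) sid);
           return 0;
       }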
SEE ALSO
       getpgid(2), setsid(2), credentials(7)
COLOPHON
       This page is part of release 4.16 of the Linux man-pages project.  A
       description of the project, information about reporting bugs, and
       the latest version of this page, can be found at
       https://www.kernel.org/doc/man-pages/.

Linux                          2017-09-15                          GETSID(2)
Pages that refer to this page: procps(1), ps(1), setsid(2), syscalls(2), sd_pid_get_session(3), tcgetsid(3), utmp(5), credentials(7) | http://www.man7.org/linux/man-pages/man2/getsid.2.html | CC-MAIN-2018-51 | refinedweb | 233 | 67.96 |
This post will cover how to use an alternate DOM representation (i.e. String) with @XmlAnyElement.
April 15, 2011
@XmlAnyElement and non-DOM Properties
Posted by Blaise Doughan at 4:49 PM 8 comments
Labels: JAXB, XmlAnyElement
JAXB and JSON via Jettison - Namespace Example
In a previous post I described how Jettison can be leveraged by a JAXB implementation to produce/consume JSON. A reader correctly pointed out that I did not describe how to handle namespaces. Since JSON does not support namespaces you would not normally include them in your mappings. However if you wanted to map your object model to both JSON and XML with namespaces, this post will demonstrate how it can be done.
Posted by Blaise Doughan at 2:53 PM 6 comments
Labels: JAXB, Jettison, JSON, Namespaces
XML Schema to Java - XSD Choice
In a previous blog post I wrote about how to map to the choice structure in XML schema when starting from classes. An astute reader tried generating an object model from that schema and noticed that the JAXB implementation generated something different than expected. In this post I'll explain the difference.
Posted by Blaise Doughan at 1:18 PM 1 comments
Labels: Bindings File, Choice, JAXB, XmlElements
April 12, 2011
J.
Posted by Blaise Doughan at 12:04 PM 10 comments
April 11, 2011
MOXy's XML Metadata in a JAX-RS Service
In previous posts I introduced how EclipseLink JAXB (MOXy) can represent it's metadata as XML, and how MOXy can be used in a JAX-RS service. In this post I'll demonstrate how to leverage MOXy's metadata file in a JAX-RS service by using a ContextResolver.
Posted by Blaise Doughan at 11:14 AM 0 comments
Labels: EclipseLink, Extension, JAX-RS, Mapping File, MOXy, REST | http://blog.bdoughan.com/2011_04_01_archive.html | CC-MAIN-2017-13 | refinedweb | 300 | 66.27 |
UPDATE: More thoughts on considerations motivating this potential change here.
That ‘Object Oriented’, then apparently ‘Object!
I ‘progress’. .
Another interesting post by the great Eric Lippert.
C# doesn’t need top-level functions, though I do occasionally miss them.
What creeps me out here is (if I understand you correctly) the prospect of anything remotely REPL-ish going into C#. You don’t bolt a bathtub onto a motorcycle just because somebody, somewhere, might need a place to raise rabbits. Look at C++. Just, just LOOK at C++. I love C++, but holy Toledo, it’s not a design to emulate. People spend years thinking in C++ all day every day and still get blind-sided by weird stuff.
Language design should to some degree proceed by accretion and trial and error, just like the design of any tool. It’ll never be an exact science and there’ll always be room for improvement. But you have to beware of the point where you evolve a hammer into a circular saw/screwdriver, and then your wife wants a nail clipper, and you start thinking you should do it by adding more features to your circularhammersawdriver, just because you’ve put so much time into it already.
Sorry, that reads like Spolsky on a caffeine jag…
No doubt Eric is already aware of this, and of course it’s not really relevant to the question anyway (wisely, not one of the six criteria involved in the question of whether to do a feature or not has anything to do with what other languages do)
As far as this goes:
"Open a big project that you work on. Search for files named "Utility.cs" (or "StaticMethods.cs", or "Helpers.cs") in the tree. Count the number of files found."
I only have classes like that when it’s a small, "rough-cut" program and just a handful of these kinds of "no-man’s land" methods. Otherwise, these kinds of methods do wind up in specific static classes for the purpose of organization.
Could one accomplish something similar by simply putting them into a namespace, and not requiring that they group into a class? Sure, I suppose so. But why? In what way does that significantly benefit my code? It doesn’t. It’s trivial to type the correct class name to qualify the method, and doing so actually makes the code a bit more clear. If I’m skimming through the code, it’s nice to have immediate indication (for example) that I’m calling the Truncate() that will return the integer portion of a number rather than the Truncate() that resets the file length to 0.
Sure, with additional analysis one could probably still figure out the difference in such cases without too much trouble, but a) every exception to that generality argues against top-level methods, and b) for these kinds of static methods, the code is still easier to understand when it’s more explicit, even as it’s possible to decipher it even if the class/namespace name isn’t explicit.
Wow! I would never have thought that a rather innocent suggestion of adding top-level functions to the language could evolve into a flamefest worse than the one over "dynamic". It’s interesting to observe where the priorities of C# coder lie, but that’s quite unexpected to me. Go figure…
While agreeing that Java's semantics are irrelevant to C#, note that 'import static' appeared in Java SE 5.0 (see the Java SE documentation).
"flamefest"?
Huh?
I’d say of people stating "more emphatic" opinions one way or the other, you’ve essentially got parity. And many responses (including mine) are more along the lines of "it could be marginally useful, but there are much better things the language designers could be doing with their time".
Come on…I’m sorry not everyone agrees with you 100% on the matter, but to characterize the comments as a "flamefest" is absurd. It’s been a very civil discussion, and if this is a worse disagreement than that which occurred with "dynamic", I’d say "dynamic" must’ve gotten a pretty warm reception.
"note that ‘import static’ appeared in Java SE 5.0, see."
Ah. That’ll teach me to stay up-to-date. 🙂 Thanks.
Still, I note even in that reference, they strongly advocate avoiding the use of the construct except in very narrow circumstances. I’d say in the "should this feature exist" point system, a feature that requires a warning like that would probably start off with -200 points instead of the usual -100. 🙂
+1 for not doing it.
Especially not just because it is "trendy". Trends are often overrated and don’t necessarily last for long.
I’d say if you want a truely competitive REPL or scripting language you’ll need several other important changes to the language. If that’s your goal, maybe it would be best to have a new language spinoff, say CScript?
Although I love my C#3 being so much more expressive and productive (esp. thanks to the LINQ APIs and the lambdas), I can’t help but think it is headed toward the bad direction. Each iteration of the language becomes more complex, soon _too_ complex. Extensions method surely aren’t good OO-design, it is my understanding that they are mostly there to make the LINQ machinery work. Anders had explained why he ruled out default parameters from C#, now they are finally there in C#4…
If you need ideas for next version, here’s one, which even happens to be trendy:
Make asynchronous continuations easier. You need to write so much unreadable code, just to call a webservice asynchronously and handle its response (webservice being just an example). F# "async" workflows are pure genius and I’m very jealous for not having something similar in C#.
As someone suggested, if languages were better integrated in Visual Studio (i.e. one could write files in different languages in the same project) that would be awesome.
Why do I always hear C# has such a constrained budget and that future versions may or may not happen? This has been a recurring theme in your posts. Is C# gonna be axed soon?
Todd,
EVERYTHING has a constrained budget, and I think the development team at Microsoft (not just C# but the entire division) has been very open about the circumstances.
I have scripting background before coming to C#. I’d really love the ability to be scripty (WITH good intellisense, unlike most scripty languages). I’d go as far as replace powershell with a C# command line while keeping the C# perf and adding some usual command line aliases to c# code. I just don’t dig powershell syntax and I believe you could accomplish the powershell style of tasks with c# with just as little typing if there was suitable functionality in the framework and intellisense was heavily used.
However I also agree with the concerns represented here. C# programs just look cleaner as there’s some structure and organization there always.
I think there has to be some kind of ‘best of both’ approach and maybe you should make a post suggesting some and also offer readers a chance to discuss and suggest their own.
I believe that at somepoint, the cost of adding new stuff while keeping existing users happy will start to either impede progress or have some other negative effects. Even just perceived, not real, negative effects could hurt language in the long run and I believe that’s something MS needs to consider.
What I’d consider is to at some point branch the C# language such that the language reciding in .cs files gets only minor upgrades if absolutely necessary and the new developments would go into files with new file extension with strong IDE support such as ability to perfectly convert old code to whatever modifications were made that broke backward compat in the branch. That’s also where I’d put these top level methods.
Oh and I have to suggest a name for the C# branch: C# Next Generation. LOL
Joku,
1) "best of both" is alsmost always an impossible goal in any area. If something can be used for "A" and for "B", then it is alomst inevitable that adding the support for "B" has imposed some constraints on the implementation of "A" (along with the reverse).
2) "the cost of adding new stuff" always exists. Hopefully the positive effect significantly outweighs the negative effect. Consider a situation where I hire four coders who each know 95% of version "n". statistically, up to 15% of the code base that gets developed can not be effectively maintained by randomly assigning it to a programmer.
If the language capabilities expand by 10%, then it is likely that the average percentage of knowledge/experience will drop. Initially this will be because few people know the new stuff. This is not so bad of an impact because existing codebases will tend to consist of the older established capabilities.
But, over time the difference between old and new will fade. Unfortunately history has shown that the amount of work people invest before considering themselves "competent" remains about the same. This results in the average candidte knowing a smaller subset, and the subsets between different candidates diverging.
At this point, putting together a team of four is statistically going to have the probability that over 35% of the codebase may contains some feature that a given team member is not familiar with.
disclaimer: the numbers used are mathematically valid, but make "worst case" assumptions; they also get worse as the size of a team increases. In my 25+ years working as a consultant involved with various teams of all levels, I have found this principal to hold true.
There are some management steps which can be taken to mitigate this effect, but they typically run into one of two problems. Some constraints can be very difficult to enforce in an automated way. Other times certain "excluded" items present a compelling case (for the team member who are knowledgable about the item).
I just want to say that I think adding global methods would be a horrible idea. Please don’t put them in there. I don’t want to go back to the old days of C where programmers putting everything in global variables and the code is impossible to read. Don’t go making it like JavaScript either. I want strongly typed objects.
I agree with the "don’t add this" crowd. This feels to me like a technological solution to accomodate a combination of laziness (not creating logical places for the code) and style differences (those who don’t like Class.Method), both of which are human problems.
I like most of the reasoning against it, but I think that one important example has been overlooked, the cleanup of code to make it more readable for people that do not like to have StaticClassName.StaticMethodName everywhere, and that is to write a local wrapper method, or better yet, have Visual Studio do it for you. In the below sample, I used Refactor, Extract Method on Math.Sin(s), which created a private static method, which can now be called within the current class. You can remove the static from the local Sin method as well.
private void GetSinOfZero()
{
double s = 0;
// Original line :
// s = Math.Sin(s);
s = Sin(s);
this.Text = s.ToString("0.0000");
}
private static double Sin(double s)
{
return Math.Sin(s);
}
I think that this is a "Best of Both" solution using the tools that are already in C#. It keeps the calling code readable, and if you need to see the true method invocation, it is right there for you. I believe that the compiler is smart enough to see that the wrapper is just that and directly wire up the code as if it were a direct call to Math.Sin, so performance should not be an issue.
The downside to this is that you need to create new method wrappers for each class, but that is easy to do and easy to put into a central location where commonly used ones can just be copied and pasted. You could probably even automate that by using partial classes and adding a T4 template that includes a common set of wrappers for each class that you want to have them, reading them from a central file, if you really feel the need to do that.
Another vote for keeping it simple. I do love lambda expressions, so there’s no way I’d say C# 2.0 was the best, but please don’t tack on everything. If you add every feature from every other .NET language, then what exactly is the point of all the interoperability?
😉..! 🙂!!!) 🙂. | https://blogs.msdn.microsoft.com/ericlippert/2009/06/22/why-doesnt-c-implement-top-level-methods/ | CC-MAIN-2017-09 | refinedweb | 2,151 | 61.77 |
Introduction
Python is a versatile programming language used to develop desktop and web applications. It allows you to work on complex projects.
Learn how to get current date and time in the python script with multiple options.
Prerequisites
- Command line / terminal window access
- User account with root or sudo privileges
- Python installed
- Prefered text editor (in this case
nano)
Get Current Date Time in Python with datetime Module
Use the command line to create and access a new file:
sudo nano python_date.py
The system will tell you the file does not exist and ask you to create it. Click Yes and add the following in your text editor:
from datetime import date today = date.today() print("Today's date:", today)
Save the file, then exit. Run the file by entering:
python python_date.py
The result will show today’s date using the datetime module:
Options for datetime formating
Python has a number of options to configure the way date and time are formated.
Create a sample file:
sudo nano sample_format.py
Edit the file as follows:
import datetime e = datetime.datetime.now() print ("Current date and time = %s" % e) print ("Today's date: = %s/%s/%s" % (e.day, e.month, e.year)) print ("The time is now: = %s:%s:%s" % (e.hour, e.minute, e.second))
Save the file and exit. Run the file by entering:
python sample_format.py
The system displays the date and time, the date, and the time on three separate lines.
Use strftime() to display Time and Date
The strftime() method returns a string displaying date and time using date, time or datetime object.
Enter the following command in your terminal window:
sudo nano python_time.py
You have created a new file named python_time.py. Use this file to define the format of the information the system is to display.
To display the time in 24-hour format, enter:
import time print (time.strftime("%H:%M:%S"))
This example image shows the file when using
nano on a Debian distribution :
Save and close the file.
Execute the script by typing:
python python_time.py
The result displays the time in the requested format:
To display the time in a 12-hour format, edit the file to:
import time print (time.strftime("%I:%M:%S"))
Save and close the file.
Execute the script by typing:
python python_time.py
Additional Options Using strftime
The strftime method accepts several formatting options.
First, create a new sample file:
sudo nano test_format.py
Edit the file as follows:
import datetime e = datetime.datetime.now() print (e.strftime("%Y-%m-%d %H:%M:%S")) print (e.strftime("%d/%m/%Y")) print (e.strftime("%I:%M:%S %p")) print (e.strftime("%a, %b %d, %Y"))
Save the file and exit.
Run the file by typing:
python test_format.py
An example of the result in Debian is below:
The strftime method has many formatting options. All the available options are in the official documentation.
Conclusion
This guide has shown you how to use Python to display the date and time. Python is pretty straightforward and offers multiple options (%X values) to show the date and time in different formats.
You now understand some basic concepts and can attempt to create more complex scripts. | https://phoenixnap.es/kb/get-current-date-time-python | CC-MAIN-2022-33 | refinedweb | 539 | 67.65 |
18 January 2008 17:02 [Source: ICIS news]
WASHINGTON (?xml:namespace>
?xml:namespace>
The board, a New York City-based non-profit business data and analysis organisation, said its index of leading economic indicators fell by 0.2% in December to 136.5, marking the third consecutive monthly decline and the fourth drop in the index in six months.
Using 1996 as its base measure of 100, the leading indicators index is composed of business survey data in ten areas, including money supply, stock prices, manufacturers’ new orders for consumer goods, building permits and interest rates, among others.
The board said the continuing decline in the
Also contributing were smaller declines in manufacturers’ new orders for non-defense capital goods and consumer expectations coupled with a rise in unemployment claims.
The trade group noted that the leading index is down 0.8% from June to December and is 1.4% below its December 2006 level.
“The leading index has weakened sharply since mid-2007, with widespread weakness among its components in the last two months, and it has returned to the level attained in mid-2005,” the board said.
However, the report said, “despite the spreading weakness, the index has declined only 1.5% from its highest level in January 2006, compared with a decrease of about 3% between its previous peak in January 2000 and March 2001”.
While the current index decline is not as sharp as the 2000-2001 drop, the board said the December decline and earlier month falls signal a bumpy economic road ahead.
“Taken together, the recent behaviour of the composite indexes highlights increasing risks for further economic weakness and suggest that economic activity is likely to be sluggish in the near term,” the board said.
The White House and Congress are in the midst of negotiations for a federal economic stimulus plan aimed at rev | http://www.icis.com/Articles/2008/01/18/9094194/us-leading-indicators-fell-in-december.html | CC-MAIN-2015-06 | refinedweb | 311 | 51.38 |
criteria to evaluate invemestments
1.Besides net present value (NPV) and internal rate of return (IRR), what other criteria do companies use to evaluate investments?
2.What are he disadvantages of NPV as in investment criterion
3.How will the change in cost of capital impact the investment decision process?
See attached files.
Solution Preview
In addition to NPV and IRR, companies often use the Payback Rule. This Rule is simple to employ, which lends to its popularity. Essentially, it determines whether an investment will earn a return, or at least pay for itself, in a certain amount of time. The Payback Rule lacks
Firms may also use average accounting return (AAR). This is a benefit/cost ratio that produces a pseudo rate of return by dividing the average net income by the average book value. A project is considered acceptable if its AAR return exceeds a target return. However, due to lack of risk adjustment and the use of profits rather than cash flows, this method is seriously flawed.
The profitability index may also be utilized. It is ...
Solution Summary
Different methods to evaluate whether investments should be undertaken | https://brainmass.com/economics/risk-analysis/criteria-to-evaluate-invemestments-168154 | CC-MAIN-2017-13 | refinedweb | 190 | 57.98 |
In order to make the code so far a little more reusable, I moved it over into its own class, called Tone. I also implemented some optimizations and other little tricks. The most important is that instead of calculating the next batch of samples along with the envelope on every SAMPLE_DATA event, I precalculate all the samples within the envelope right up front, storing it in a Vector of Numbers. Here’s the class:
package { import flash.media.Sound; import flash.events.SampleDataEvent; import flash.events.Event; public class Tone { protected const RATE:Number = 44100; protected var _position:int = 0; protected var _sound:Sound; protected var _numSamples:int = 2048; protected var _samples:Vector.<number>; protected var _isPlaying:Boolean = false; protected var _frequency:Number; public function Tone(frequency:Number) { _frequency = frequency; _sound = new Sound(); _sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData); _samples = new Vector.<number>(); createSamples(); } protected function createSamples():void { var amp:Number = 1.0; var i:int = 0; var mult:Number = frequency / RATE * Math.PI * 2; while(amp > 0.01) { _samples[i] = Math.sin(i * mult) * amp; amp *= 0.9998; i++; } _samples.length = i; } public function play():void { if(!_isPlaying) { _position = 0; _sound.play(); _isPlaying = true; } } protected function onSampleData(event:SampleDataEvent):void { for (var i:int = 0; i < _numSamples; i++) { if(_position >= _samples.length) { _isPlaying = false; return; } event.data.writeFloat(_samples[_position]); event.data.writeFloat(_samples[_position]); _position++; } } public function set frequency(value:Number):void { _frequency = value; createSamples(); } public function get frequency():Number { return _frequency; } } }
Note that in the constructor I call createSamples(). This creates the Vector with all samples needed for the duration of the note, including the amplitude of the pseudo-envelope. In the frequency setter, the samples are re-created. The result is that in the onSampleData handler method, I just fill up the byte array with the next so many values out of the _samples vector, stopping when I reach the end of that Vector.
Note also that the amplitude is decreased per sample, rather than per SAMPLE_DATA event, thus it needs to be reduced by a much smaller amount each time. This should also give a smoother envelope, though I’m not sure how noticeable it is.
Here’s a brief bit of code that shows it in action:
import flash.events.MouseEvent; var tone:Tone = new Tone(800); stage.addEventListener(MouseEvent.CLICK, onClick); function onClick(event:MouseEvent):void { tone.frequency = 300 + mouseY; tone.play(); }
It creates a tone. Whenever you click on the stage, it calculates a new frequency for the tone based on the y position of the mouse and plays the tone. Simple enough.
I don’t consider this class anywhere near “complete”. Just a beginning evolution in something. I’d like to add support for more flexible and/or complex envelopes, a stop method, and some other parameters to change the sound. But even so, this is relatively useful as is, IMHO.
Just a idea, but I wanted your opinion about this :
If you stored the samples in a ByteArray instead of a Vector, you could be using the readBytes / writeBytes methods in the SampleDataEvent handler, thus avoiding the cost of the loop.
Maybe it would help optimize the stuff, wouldn’t it ?
Hello there!
I’ve been following the articles in this series and I must say that this is really great!
Thanks
Luciano
What a great series of posts! I couldn’t be happier for the insight, and I’ll be playing around with this for the next few weeks.
boblemarin, yeah, I thought about that. just didn’t have a chance to test and compare the two methods. but i imagine you are totally correct that a byte array would be faster.
So nice Keith!, you have no idea how many times I’ve tried to understand AS3 + audio 😀
Thanks for the tuts. I’ve been messing with this for over a year and got stuck. I asked Andre and he was kinda vague about creating an engine from scratch. Thanks
This info is so helpful, thank you so much! I bet you could fill a book with this stuff. Interactive Audio!
My friends are amazed when I created music using your Tone Class. In my mother tongue I just want to say “kalakura keith…..”
This is very cool stuff. Thanks for this.
Just a note, you have 2 syntax errors on lines 13 and 23. The code wasn’t compiling in Flash Professional CS5 because your Vector objects are of type “number” with a lower-case “n”
hi keith, I made a tonewheel application similar to andrew’s using your tone class with some controls. Your making me crazy. .
hi keith, which syntax highlighter you are using?
Thanks for the great tutorials! I have been having a hard time understanding all the SampleData stuff.
For a short children’s story ( ) I wanted the user to control the pitch and volume of samples (by moving the mouse). Luckily I found Andre Michelle’s MP3Pitch.as, tried to figure it out and then changed it a little so it could loop short sound samples and control pitch and volume. For people who are interested in these changes of this class, see this post: here is the music toy made with the help of this Tone class
hello Peter,
i must confess i’m a big fan of you. I like very much your books, mostly “Making things move”. Even i was (and am) a novice Actionscript coder, without too much coding experience behind me (actually after two earlier atempts, at the beginning of this year i was starting to be very captivated by AS3), i understood perfectly the concepts and ideas behind every chapter. Do you felt at the beginning of your Flash adventure that you have the idea, but you can’t make it working, because something is missing. I’m at that point sometimes.
Sorry if this is outside of the topic.
[…] I’m really not into sound or music, I’ve used Keith Peters’ Tone class pretty neat piece of code. some enhancement would include a better sound generation like the […]
Hi Keith,
I just built a simple pulse synth as an experiment and it can be fun to play width.
There’s the source available, as well as a -basic- Audio Unit version.
Just in case, it is there :
Hi, I’ve been trying to use this, but when I export it in flash, it gives me 4 errors:
Scene 1, Layer ‘Layer 1’, Frame 1, Line 7 1120: Access of undefined property mouseY.
C:\…\Tone.as, Line 1 1180: Call to a possibly undefined method addFrameScript.
Scene 1, Layer ‘Layer 1’, Frame 1, Line 4 1120: Access of undefined property stage.
C:\…\Tone.as, Line 1 5000: The class ‘Tone’ must subclass ‘flash.display.MovieClip’ since it is linked to a library symbol of that type.
I’ve added import flash.display.MovieClip; to the class with no luck, and unchecked automatically add instances… with no luck. Any ideas what I am doing wrong?
thanks!
[…] by some example over at the Soulwire blog. And the bell sound bit came from a wee script posted by Keith ‘Bit-101′ Peters. So, really, when it comes down to it, none of it is mine. Sometimes you can get some pretty nice […]
Hi there, i am totally new to AS3 and am struggling the exact way as mari does.
Any ideas or hints are welcom.
thanks in advance and cheers,
Alex
[…] Tone Class – AS3 Sound Synthesis Part I Part II Part III Part IV […]
Great tutorials. I wish sound manipulation in ActionScript was covered more.
I was wondering, how do you make the Tone class play tones continually in the way you’ve done here with the pre-calculating the samples? I tried this:
var mult:Number = _frequency / RATE * Math.PI * 2;
for(var i:int = 0; i < _numSamples; i++)
{
_samples[i] = Math.sin(mult * i);
}
But the result is a glitchy mess.
Cheers
Nick.
[…] I recently stumbled upon AS3SFXR which i thought i could use to generate sound on the fly with a lot of tweakable parameters. It was fully in line with the retro feel i wanted to give to my app. I began testing, but i quickly realized that this library was kinda CPU intensive, and even using its sound caching capabilities, i would never really be able to play a lot of different sample at the same time. So in the end i had to fall back to mostly playing some mp3 sample, and only one fully configurable “sound”, wich generation is greatly inspired by this bit-101 sound synthesis class […]
Expect your new article about sound~ thanks again!~
Hi,
please check this code:
Try to record more than 2-3 seconds and then listen saved mix. Mixed sound is distorted after couple of seconds of recording
Wav encoder is working ok…
How to fix this?
Thanks in advance!
Hey,
I wanted to drop you a line saying thanks. This series of posts is hugely inspiring and incredibly useful. I’m wondering if you’ve kept up your experiments in it. I’ve been playing with the concept and have had some success implementing a more complex envelope that allows the user control over ADSR, though I’ve yet to develop a suitable method for sustaining the note until the user releases the mouse.
Thanks for putting this out.
[…] Part IV […]
[…] Tone Class – AS3 Sound Synthesis Part I Part II Part III Part IV […] | http://www.bit-101.com/blog/?p=2681 | CC-MAIN-2017-17 | refinedweb | 1,579 | 65.52 |
Did you know that you can run Java Servlets with Microsoft's Internet Information Server (IIS) without any third-party products? All you need is plain old IIS and pure Java. Granted, you do need to use Microsoft's Java SDK for reasons that I will explain in this article, but rest assured that your code will be free of any proprietary extensions and remain completely portable to other servlet engines.
Microsoft's Internet Information Server
But why would you want to do something as silly as running a Java servlet in an environment that wasn't designed for that purpose? First, many of us die-hard Java fanatics are trapped in Microsoft-only shops due to circumstances beyond our control. We all have our Linux boxes tucked away under our desks, running IBM's latest JDK and Apache's latest servlet engine, but it will be a cold day in the underworld before our bosses let us deploy products on such a system. You can certainly find commercial servlet engines that run on Microsoft's platforms, but they can cost big bucks. Try explaining to your boss that you need a few thousand dollars for a new Web server because you're going to scrap the free one that came with the operating system (or use it as a simple pass-through proxy, which is how many offerings currently work). Then, once your boss stops swearing, you can ask yourself if you're just a little too anxious to abandon the Microsoft ship. Microsoft and Sun have had their problems, but that doesn't change the fact that IIS is a respectable piece of software. And now that you know it can run Java servlets, it has become a little more appealing.
The Adapter design pattern
The magic that glues those two technologies together is a simple application of the Adapter design pattern. Quoting from the infamous Gang of Four book, Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (Resources), the intent of the Adapter pattern is to convert the interface of a class into another interface clients expect. But which classes must you adapt? The answer is the handful of core classes that a Java Servlet uses to interact with its environment -- specifically, the
Request,
Response, and
Session objects. As luck would have it, you don't have to adapt the
Cookie class -- the translation is handled in-line by the other adapters.
IIS, or more specifically its Active Server Page (ASP) environment, contains a core group of classes that virtually mirror those of the Java Servlet specification. Actually, I should say the servlets mirror the ASP framework, since IIS shipped long before the servlet specifications were written, but I won't add any more fuel to the Microsoft-versus-Sun fire.
The
Request,
Response,
Session, and
Cookie objects exist in both frameworks. The only problem is that the interfaces for those objects are incompatible between environments. That's where the Adapter design pattern comes into play. You have to adapt (or wrap) the IIS versions of the objects to make them look and act like servlet versions.
A quick and dirty overview of servlets
A servlet, at a bare minimum, simply has to implement a single method:
public void doGet( HttpServletRequest request, HttpServletResponse response );
Technically, the servlet must also implement a
doPost method if it wishes to handle client requests that use the HTTP
POST command instead of
GET. For the purpose of keeping this article simple, however, you can assume that all client requests are of type
GET.
The
doGet method takes two objects: a request and a response. The request object encapsulates any data that the client sent to the server, along with some meta-information about the client itself. You use the response object to send data back to the client. That's a very abstract explanation, but this article isn't an introduction to servlets, so I won't go into greater detail. For a good primer to servlets, I recommend Java Servlet Programming (O'Reilly & Associates) by Jason Hunter, William Crawford, and Paula Ferguson.
Active Server Pages
When you call the servlet from the ASP, you're just going to call the
doGet method and pass in the appropriate request and response objects. From that point on, the servlet has full control. The ASP script acts as a bootstrap to the servlet. But before you can pass in the request and response objects, you must wrap them with the respective adapter classes (which I will examine in detail later on).
I'll start from the top and work my way down. The URL that the client is going to request will look something like. The
.asp extension means that the requested document is an Active Server Page script. Here's the
servlet.asp script in its entirety:
dim requestAdapter set requestAdapter = getObject( "java:com.nutrio.asp.RequestAdapter" ) dim responseAdapter set responseAdapter = getObject( "java:com.nutrio.asp.ResponseAdapter" ) dim servlet set servlet = getObject( "java:com.nutrio.servlet.HelloWorldServlet" ) servlet.doGet requestAdapter, responseAdapter
Breaking it down, you'll see that you start out by declaring a variable called
requestAdapter. The
dim command is the Visual Basic version of a variable declaration. There are no hard types in Visual Basic. Variables are actually wrapped by a
Variant object, which exposes the variable in any flavor that the calling code desires (for example, number, string, and so forth). That is very convenient, but it can lead to confusing and dangerous code. That's why the Hungarian Notation was invented (see Resources). But that's a whole other debate.
After declaring the variable, you instantiate your first adapter class, using the ASP
getObject method, and assign it appropriately. The
getObject method is a new addition to IIS version 4. It's called a moniker (a COM object that is used to create instances of other objects, see Resources), but it lets you access Java objects without any of the Component Object Model's (COM, see Resources) registration headaches. In turn, you then declare, instantiate, and assign the response wrapper, and then do the same for the servlet. Finally, you call the servlet's
doGet method and pass in the adapted request and response objects.
That particular script is fairly limited because it only launches one particular servlet. You'll probably want to expand it to launch an entire suite of servlets, so you'll need to make a couple of minor modifications. Assuming that all your servlets are in the same package, you can pass in the class name of the target servlet as an argument to the URL such as. Then you'll have to change the end of the script to load the specified class. Here's the new code:
dim className set className = Request.QueryString( "class" ) dim servlet set servlet = getObject( "java:com.nutrio.servlet." & className ) servlet.doGet requestAdapter, responseAdapter
That's it! You've just turned Microsoft's Internet Information Server into a Java Servlet engine. It's not a perfect engine, as you'll see later, but it's pretty close. All that remains to be discussed is the nitty-gritty of the adapter classes.
For brevity, I'm just going to cover the implementation of the more popular methods in each adapter. The measurement of popularity is based on my personal experience and opinion; it doesn't get much more scientific than that (sic).
Microsoft's Java SDK
Starting with the request wrapper, the first thing that the object must do is acquire a reference to its ASP counterpart. That is accomplished via the
AspContext object from the
com.ms.iis.asp package. The what package, you ask? Ah yes, here is where I explain why you need to install Microsoft's Java SDK.
You can download Microsoft's Java SDK for free (see Resources). Make sure that you get the latest version, which is 4.0 at the time of this writing. Follow the simple installation instructions and reboot (sigh) when prompted. After you install the SDK, adjust your
PATH and
CLASSPATH environment variables appropriately. Take a tip from the wise and search your system for all the instances of
jview.exe, then ensure that the latest version resolves first in your
PATH.
Unfortunately, the documentation and sample code that comes with Microsoft's Java SDK is sorely lacking in regard to the IIS/ASP integration. There certainly is plenty of verbiage -- you get an entire compiled HTML document on the subject, but it appears more contradictory and confusing than explanatory in most places. Thankfully, there is an
aspcomp package in the SDK's
Samples directory that virtually mirrors the
com.ms.iis.asp package and comes with the source code. You did install the sample files with the SDK, didn't you? That
aspcomp package helped me to reverse-engineer a lot of the API logic.
The request adapter
Now that you have Microsoft's SDK at your disposal, you can get back to implementing the adapter classes. Below is the bare bones version of the request adapter. I have omitted the package declaration and import statements so that you can focus on the meat of the code.
public class RequestAdapter implements HttpServletRequest { private Request request; public RequestAdapter() { this.request = AspContext.getRequest(); }
Note that the class exposes a single
public constructor that takes no arguments. That is required for the ASP script to instantiate the class as a moniker (through the
getObject method). The constructor simply asks the
AspContext object for a reference to the ASP version of the request object and stores a pointer to it. The adapter implements the
HttpServletRequest interface, which lets you pass it into your servlets under the guise of a real servlet environment.
The most popular method of the request object is
getParameter. That method is used to retrieve a piece of data that the client is expected to provide. For example, if the client has just filled out a form and submitted it to the server, the servlet would call
getParameter to retrieve the values of each form item.
In the ASP version of the request object, Microsoft differentiates parameters between those that arrive via
GET and those that arrive via
POST. You have to call
getQueryString or
getForm, respectively. In the servlet version, there is no such differentiation at the request level because the
GET versus
POST mode is dictated when
doGet or
doPost is called. Thus, when you adapt the
getParameter method, you must look in both the query string and the form collections for the desired value.
There's one more quirk. If the parameter is missing, the Microsoft version will return an empty string, whereas the Sun version will return a
null. To account for that, you must check for an empty string and return
null in its place.
public String getParameter( String str ) { String result = request.getQueryString().getString( str ); if( ( result != null ) && result.trim().equals( "" ) ) { result = request.getForm().getString( str ); if( ( result != null ) && result.trim().equals( "" ) ) { return( null ); } } return( result ); }
It's pretty simple, but don't get your hopes up because things are about to get more complicated. The servlet version of the request object also exposes a method called
getParameterNames, which returns an
Enumeration of the keys for each client-provided piece of data. As above, that is a single point of entry as far as servlets are concerned, but ASP differentiates between the
GET- and
POST-provided data. In order to return a single
Enumeration to the servlet, you must combine the two
Enumerations of the ASP request object's query string and form collections. Below is a handy little tool that I whipped up just for that problem. The tool is called
EnumerationComposite (not to be confused with the Composite design pattern), and it takes an array of
RequestDictionarys (the ASP version of a
Hashtable) and concatenates them into one big
Enumeration. Here's the code in its entirety:
public class EnumerationComposite implements Enumeration { private RequestDictionary[] array; private int stackPointer = 0; public EnumerationComposite( RequestDictionary[] array ) { this.array = array; } public boolean hasMoreElements() { if( this.stackPointer >= this.array.length ) { return( false ); } else if( this.array[ this.stackPointer ].hasMoreItems() ) { return( true ); } else { this.stackPointer += 1; return( this.hasMoreElements() ); } } public Object nextElement() { return( this.array[ this.stackPointer ].nextItem() ); } }
That tool greatly simplifies your job now. Here's how the
getParameterNames method looks:
public Enumeration getParameterNames() { return( new EnumerationComposite( new RequestDictionary[] { request.getQueryString(), request.getForm() } ) ); }
The next most popular method of the response object is
getSession. The session object is another core object that is mirrored between ASP and servlets. Thus, you must provide the session with its own adapter, and I will cover that shortly. But before I do, here's the request method:
public HttpSession getSession( boolean flag ) { return( new SessionAdapter() ); }
The last method of the request object that you'll adapt for this article is
getCookies. As its name implies, it returns a collection of cookies, which the client has provided. The ASP version of the cookie object has me baffled. It appears to act as a collection of itself, exposing many methods with enigmatic functionality. However, I was able to decipher enough to write the servlet adaption. The only tricky part is that the ASP version returns an
Enumeration, while the servlet version expects an array, offering a good chance to use the not so well known and underutilized
copyInto method off the
Vector class. Also note that I had to predicate each reference to a
Cookie object since the class name is identical in both the
com.ms.iis.asp and
javax.servlet.http packages. Here's the code:
public javax.servlet.http.Cookie[] getCookies() { Vector tmpList = new Vector(); CookieDictionary aspCookies = this.request.getCookies(); IEnumerator e = aspCookies.keys(); while( e.hasMoreItems() ) { String key = (String) e.nextItem(); String val = aspCookies.getCookie( key ).getValue(); tmpList.addElement( new javax.servlet.http.Cookie( key, val ) ); } javax.servlet.http.Cookie[] cookies = new javax.servlet.http.Cookie[ tmpList.size() ]; tmpList.copyInto( cookies ); return( cookies ); }
The session adapter
Now that you're done with the request adapter, you need to backtrack and cover the session adapter. The session, in both ASP and servlets, is mainly used as a veritable hashtable. You simply put and get objects into and out of the session. Those values are acted upon almost identically to the respective response parameter rules discussed above. The implementation of the session adapter is too trivial to warrant discussion. The full source code is available in Resources.
The response adapter
The next major piece of the puzzle is the response adapter. Just like the request adapter, the response adapter requires a few clever tricks. But before I get into the difficult stuff, let me get the easy stuff out of the way. Here's the supersimple code for two of the more popular response methods:
public void sendRedirect( String str ) { this.response.redirect( str ); } public void setContentType( String str ) { // ASP automatically set's content type! }
What's up with
setContentType? It doesn't do anything! That's right, IIS doesn't make the perfect servlet engine after all. By the time the servlet gets executed, the ASP engine has already defined the content type, along with the other standard HTTP headers. But speaking from experience, the majority of servlets do not need to set the content type to anything other than plain text or HTML.
As mentioned earlier, you don't require an adapter class for handling cookies. The
addCookie method of the response object simply has to create an instance of a Microsoft cookie based on the contents of the supplied Sun cookie. Both Microsoft and Sun agree that cookies are simple name and value pairings of data. However, they disagree on the way that cookie expiration should be represented in an API.
Sun's version of cookie expiration uses an integer value that specifies the cookie's maximum age in seconds. That value is passed into the
setMaxAge method of the
Cookie object. A value of zero signifies immediate expiration while a negative value (being a special case) dictates that the cookie should be discarded when the user's browser exits.
Microsoft's version of cookie expiration is a little different. Microsoft's cookies, by default, are set to expire when the user's browser exits. Therefore, if the Sun version of the cookie has a negative expiration value, you should not alter Microsoft's version of the cookie. If the maximum age of the Sun version is equal to or greater than zero, you have to translate the age into a Microsoft
Time object and pass it into the Microsoft version of the cookie. Note that the month value is zero-based in Java's
Calendar class but one-based in Microsoft's
Time class, so you must increment the value during the conversion.
public void addCookie( javax.servlet.http.Cookie cookie ) { com.ms.iis.asp.Cookie aspCookie = this.response.getCookies().getCookie( cookie.getName() ); aspCookie.setValue( cookie.getValue() ); int age = cookie.getMaxAge(); if( age < 0 ) { // expire on browser exit } else { GregorianCalendar date = new GregorianCalendar(); Date time = new Date( System.currentTimeMillis() + ( 1000 * age ) ); date.setTime( time ); Time aspTime = new Time( date.get( Calendar.YEAR ), 1 + date.get( Calendar.MONTH ), date.get( Calendar.DAY_OF_MONTH ), date.get( Calendar.HOUR ), date.get( Calendar.MINUTE ), date.get( Calendar.SECOND ) ); aspCookie.setExpires( aspTime ); } }
The most popular response method happens to also be the trickiest to implement, which is why I saved it for last. The method in question is
getWriter. That method returns a
PrintWriter object that lets the servlet write information to the client's display. In most cases, the servlet is just composing HTML, which is buffered until it is all sent to the client. Why is it buffered? Because the servlet, after already dumping a lot of information to the
PrintWriter, might decide that something is amiss and abort by calling the
sendRedirect method. The redirection code must be the first thing that the browser receives, and obviously there's no need to send any buffered information to the client once a redirect has been issued.
With that in mind, you have to create one more adapter class. That new adapter will wrap the
PrintWriter object. It will buffer all of its contents until the
close method is called. Here's the corresponding response method:
public PrintWriter getWriter() { return( new PrintWriterAdapter() ); }
And here's the code for the
PrintWriter adapter in its entirety:
public class PrintWriterAdapter extends PrintWriter { private static final String CR = "\n"; private StringBuffer sb = new StringBuffer(); public PrintWriterAdapter() { super( System.err ); } public void print ( String str ){ sb.append( str ); }//response.write( str ); } public void println( String str ){ print ( str + CR ); } public void print ( Object obj ){ print ( obj.toString() ); } public void println( Object obj ){ println( obj.toString() ); } public void print ( char[] chr ){ print ( new String( chr ) ); } public void println( char[] chr ){ println( new String( chr ) ); } public void close() { AspContext.getResponse().write( sb.toString() ); } }
Conclusion
Microsoft's Internet Information Server doesn't make the perfect servlet engine, but it comes pretty darn close. In all of my servlet experience, the combination of IIS and those adapter classes have proven adequate for developing and deploying commercial applications. And, if you happen to be locked into a strictly Microsoft shop, those tools offer you the chance to branch out and experiment with the wonder of Java servlets. As always, I am interested in hearing your comments, criticisms, and suggestions on improving the code.
The source code for all of the classes I've introduced in this article, including a little more functionality than I've covered, can be found in Resources. Note that many of the methods, specifically those that I haven't yet needed, remain unimplemented. If you venture to finish the job, send me a copy (wink).
A formal plea to Microsoft, or to the helpful reader
The technology that I have described in this article has been successfully deployed on most of the systems in my lab. However, on a few machines, it simply doesn't work. The ASP page reports the error "No object for moniker" for any and all references to the adapter objects. That is undoubtedly due to some enigmatic combination of Microsoft's Java SDK 4.0, Microsoft's Internet Information Server (Windows NT Option Pack 4), Visual J++, or some Service Packs. I've searched the Microsoft Developer's Network (MSDN) in vain and come up dry. If you know what the problem is and have a solution, please share it with me. Thanks.
Learn more about this topic
- The source code for this article
- Microsoft's Java SDK
- Java Servlets
- Microsoft's Internet Information Server (IIS)
- Design PatternsElements of Reusable Object-Oriented Software, Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (Addison-Wesley, 1995)
- Java Servlet Programming, Jason Hunter, William Crawford, and Paula Ferguson (O'Reilly & Associates, 1998)
- Java Monikers
- Microsoft's Component Object Model (COM)
- Active Server Pages (ASP)
- Design patterns
- Hungarian Notation | http://www.javaworld.com/article/2076107/java-web-development/use-microsoft-s-internet-information-server-as-a-java-servlet-engine.html | CC-MAIN-2014-49 | refinedweb | 3,493 | 55.24 |
By default, Hairball shows all dependencies - including primitives, Strings etc. To limit what gets shown, use the -includePackages option - for example, to only show dependencies from packages org.myprog.util and org.myprog.logic, run with the flags -includePackages org.myprog.logic,org.myprog.util.
Right now Setter and Constructor injection, and some singleton implementaions dependencies are supported.
A dependency that is provided via a setter method, e.g.:
public class SomeClass {
private Dependency dependency;
public void setDependency(Dependency dependency) {
this.dependency = dependency;
}
}
Frameworks like Pico and Guice support injection via an arbitrary method tagged with the relevant attribute. Right now Hairball looks for methods prefixed with set only, although eventually the code will be advanced enough to pick up other types of setter injection.
A dependency is passed into an object via a constructor, like so:
public class SomeClass {
private Dependency dependency;
public SomeClass(Dependency dependency) {
this.dependency = dependency;
}
}
Well, it depends. Hairball is a tool, and it is up to you how you use it. It doesn't make any judgements about whether or not your inter-class dependencies are good or not.
Using Singletons (well, any static state) can make testing problematic. It is up to you if that is an important consideration or not.
Setter injection allows dependencies to be changed after object creation. Not only does this mean that you are not sure if an object can be used after creation, it also means you might have to handle dependencies changing during the lifetime of the object. This can be handy though, especially if you are using this to enable on the fly configuration of the object.
Constructor Injection when used properly can ensure that you know the object is full created and therefore useable after construction, which can simplify use.
Right now Hairball programatically excludes all classes in the java package. Eventaully this will be configuable by the user.
Not right now - although support is planned.
Dot is easy to work with, and allows easy scripting of automatically laid out diagrams using the Dot and Neato command line tools. It does have it's limitations though. It cannot handle realy large images (although it would be better to say that quite often images viewers cannot handle the large images it can produce).
yEd is a fairly good graph editor - but the general viewing features are more interesting, and make it possible to work with larger diagrams, and more importantly easily customise how they are laid out. The one drawback is that it isn't possible to script how they are laid out, as although yEd is free, and API which ccould be used to script it (yWorks) is commercial.
Prefuse was looked at as an alternative to using yEd and GraphML, but Prefuse didn't seem to be that intuitive. It certainly may be of use in the future though. | http://code.google.com/p/hairball/wiki/FAQ | crawl-003 | refinedweb | 476 | 55.44 |
hi experts
I wrote a code for calculating efficiency in 4 channels,two of these channels (SR1,SR2) are hadronic and two other channels(E-Tau,Mu-Tau) are leptonic-hadronic. i want to make a TAU-Pt histogram for hadronic and leptonic-hadronic channels. i defined these two histogram and fill them in each channel. but it did not work. how can i solve my problem?
regards
leila
WpWp.cxx (18.7 KB)
hi experts
Hi Leila,
By showing some code maybe?
Cheers, Bertrand.
OK, thanks. And how does it fail? (since I cannot try it…)
Davismt2.h (1.91 KB)
LHEF.h (9.89 KB)
and you also need a root file too, i could not attach it cause it is too long, i think it is better to solve it myself, thanks alot
Well, if it is available somewhere via http, I can try… Otherwise a subset of it, or just explain how it fails (empty histogram?)
EDIT: And those files are missing too:
#include "Tauola/Tauola.h"
#include "Tauola/TauolaHEPEVTParticle.h"
#include "Tauola/TauolaHEPEVTEvent.h"
Cheers, Bertrand.
there is not any error but it does not make a histogram. it just calculates efficiencies!!! is there any problem in definition of histogram?
best
leila
Try:
[...] #include <TLorentzVector.h> TH1* hTauPt_hadhad, hTauPt_lhad; [...] /** Example of using Tauola to decay taus stored in HEPEVT-like event record */ int main(){ hTauPt_hadhad = new TH1D("hTauPt_hadhad" , "p_{T}^{#tau_hadhad}" , 40 , 0 , 400 ); hTauPt_lhad = new TH1D("hTauPt_lhad" , "p_{T}^{#tau_lhad}" , 40 , 0 , 400 ); //These three lines are not really necessary since they are the default [...] | https://root-forum.cern.ch/t/make-a-histogram/22164 | CC-MAIN-2022-27 | refinedweb | 261 | 68.77 |
>> So it would seem to me. Nevertheless, in my opinion the proper fix isto annotate the call site>> (in head.S) to specify a zero EIP as return address (which denotesthe bottom of a frame).>>Can you please send a patch to do that?>>That seems to be missing in some other places too, e.g. i386 sysenterpath, x86-64 kernel_thread,>more?Attaching both an i386 version (boot/idle thread only, you didkernel_thread already)and an x86-64 one (boot/idle and kernel_thread). The i386 sysenter pathis a differentthing, there we have an actual caller (though outside of the kernel),which I'd like tocontinue to reflect/catch through arch_unw_user_mode().JanAdd kernel thread stack frame termination for properly stopping stackunwinds.One open question: Should these added pushes perhaps be madeconditional upon CONFIG_STACK_UNWIND or CONFIG_UNWIND_INFO?Signed-off-by: Jan Beulich <[email protected]>--- linux-2.6.18-rc4/arch/x86_64/kernel/entry.S 2006-08-15 11:29:41.000000000 +0200+++ 2.6.18-rc4-unwind-x86_64-term/arch/x86_64/kernel/entry.S 2006-08-15 10:15:40.000000000 +0200@@ -973,6 +973,8 @@ ENTRY(kernel_thread) ENDPROC(kernel_thread) child_rip:+ pushq $0 # fake return address+ CFI_STARTPROC /* * Here we are in the child and the registers are set as they were * at kernel_thread() invocation in the parent.@@ -983,6 +985,7 @@ child_rip: # exit xorl %edi, %edi call do_exit+ CFI_ENDPROC ENDPROC(child_rip) /*--- linux-2.6.18-rc4/arch/x86_64/kernel/head.S 2006-06-18 03:49:35.000000000 +0200+++ 2.6.18-rc4-unwind-x86_64-term/arch/x86_64/kernel/head.S 2006-08-15 11:05:13.000000000 +0200@@ -191,6 +191,7 @@ startup_64: * jump */ movq initial_code(%rip),%rax+ pushq $0 # fake return address jmp *%rax /* SMP bootup changes these two */Add boot/idle kernel thread stack frame termination for properlystopping stack unwinds.One open question: Should this added push perhaps be made conditionalupon CONFIG_STACK_UNWIND or CONFIG_UNWIND_INFO?Signed-off-by: Jan Beulich <[email protected]>--- linux-2.6.18-rc4/arch/i386/kernel/head.S 2006-08-15 11:32:08.000000000 +0200+++ 2.6.18-rc4-unwind-i386-term/arch/i386/kernel/head.S 2006-08-15 11:06:03.000000000 +0200@@ -317,20 +317,14 @@ is386: movl $2,%ecx # set MP movl %eax,%gs lldt %ax cld # gcc2 wants the direction flag cleared at all times+ pushl %eax # fake return address #ifdef CONFIG_SMP movb ready, %cl movb $1, ready- cmpb $0,%cl- je 1f # the first CPU calls start_kernel- # all other CPUs call initialize_secondary- call initialize_secondary- jmp L6-1:+ cmpb $0,%cl # the first CPU calls start_kernel+ jne initialize_secondary # all other CPUs call initialize_secondary #endif /* CONFIG_SMP */- call start_kernel-L6:- jmp L6 # main should never return here, but- # just in case, we know what happens.+ jmp start_kernel /* * We depend on ET to be correct. This checks for 287/387. | http://lkml.org/lkml/2006/8/15/78 | CC-MAIN-2015-11 | refinedweb | 468 | 59.5 |
Hello, I'm fairly new at Python so hopefully this question won't be too awful. I am writing some code that will FTP to a host, and want to catch any exception that may occur, take that and print it out (eventually put it into a log file and perform some alerting action). I've figured out two different ways to do this, and am wondering which is the best (i.e. cleanest, 'right' way to proceed). I'm also trying to understand exactly what occurs for each one. The first example: from ftplib import FTP try: ftp = FTP(ftp_host)(ftp_user, ftp_pass) except Exception, err: print err This works fine. I read through the documentation, and my understanding is that there is a built-in exceptions module in python, that is automatically available in a built-in namespace. Within that module is an 'Exception' class which would contain whatever exception is thrown. So, I'm passing that to the except, along with err to hold the value and then print it out. The second example: from ftplib import FTP import sys try: ftp = FTP(ftp_host)(ftp_user, ftp_pass) except: print sys.exc_info() Here I, for the most part, get the same thing. I'm not passing anything to except and just printing out the exception using a method defined in the sys module. So, I'm new to Python... I've made it this far and am happy, but want to make sure I'm coding correctly from the start. Which method is the better/cleaner/more standard way to continue? Thanks for any help. | https://mail.python.org/pipermail/python-list/2009-July/543804.html | CC-MAIN-2019-35 | refinedweb | 265 | 72.76 |
deterministic skip list
Deterministic skip list - Deterministic Skip Lists. J. Ian Munro, Thomas Papadakis, Robert Sedgewick. Abstract: We explore techniques based on the notion of a skip list to guarantee logarithmic search, insert and delete costs.
Deterministic Skip Lists - Deterministic skip list. The deterministic skip list is a data structure which implements a dynamic ordered dictionary. From now on we shall abbreviate the deterministic skip list as a skip list. The truncated skip list is a skip list in which the height of the skip list is bounded from above by a constant.
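A minimal C sketch of the gap invariant behind the 1-2-3 deterministic skip list mentioned in the entries below: between a node and its successor in the level-i list there must be one, two, or three nodes of height i-1. The node layout and the helper name are illustrative assumptions, not taken from any of the papers above.

#include <assert.h>
#include <stddef.h>

typedef struct dsl_node {
    int key;
    int height;                 /* number of levels this node occupies */
    struct dsl_node **forward;  /* forward[i]: next node at level i */
} dsl_node;

/* Check the 1-2-3 invariant at level i (i >= 1): count the nodes of
 * height i-1 lying strictly between a and its level-i successor. */
static void check_gap(const dsl_node *a, int i) {
    const dsl_node *b = a->forward[i];
    int gap = 0;
    for (const dsl_node *p = a->forward[i - 1]; p != b; p = p->forward[i - 1])
        gap++;
    assert(gap >= 1 && gap <= 3);
}

Because every gap is bounded by a constant, search, insert and delete are worst-case logarithmic rather than merely expected logarithmic.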
Skip list - In computer science, a skip list is a data structure that allows O(log n) average complexity for search, as well as O(log n) average complexity for insertion, within an ordered sequence of n elements. A skip list is built in layers. … "Deterministic skip lists" (PDF).
Deterministic skip lists - We explore techniques based on the notion of a skip list to guarantee logarithmic search, insert and delete costs. The basic idea is to insist that between any pair of elements above a given height there are a small number of elements of precisely that height.
A deterministic skip list for k-dimensional range search - This paper presents a new data structure for multi-dimensional point data which is based on an extension of the deterministic skip list data structure.
Problem A Deterministic Skip List - Problem A Deterministic Skip List. Huige Cheng. August 20, 2009. Abstract. This document is written for the implementation of a 1-2-3 skip list.
Alternating Skip Lists - ASLs are derived from the array form of the 1-2 deterministic skip list (see "Deterministic Skip Lists," by J.I. Munro, T. Papadakis, and R. Sedgewick).
Deterministic 1–2 skip list in distributed system - In this paper a data structure called the deterministic 1-2 skip list has been proposed as a solution for search problems in a distributed environment.
SKIP LISTS - Solution: Skip lists (Bill Pugh, 1990). A skip list for a set S of distinct (key, element) items is a series of lists S0, S1, …, Sh. A deterministic version of skip lists.
probabilistic skip list structure
An Analysis of Skip Lists -.
Skip list - Skip Lists: - fix these drawbacks. - good data structure for a dictionary ADT Called skip lists because higher level lists let you skip over with probability 1/2.
Skip Lists -.
Skip List - Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing and as a
Skip Lists: A Probabilistic Alternative to Balanced Trees - An Analysis of Skip Lists. The Skip List is a probabilistic data structure that has the same average case asymptotic performance as more complex data structures such as AVL trees, balanced trees, etc. on average. The following is a hopefully more understandable probabilistic analysis, curated from various sources.
Skip List - The worst case search time for a sorted linked list is O(n) as we can only linearly traverse the list and cannot skip nodes while searching. For a Balanced Binary
Skip Lists - Reference: Pugh, W. Skip Lists: A Probabilistic Alternative to Balanced Trees. typedef struct skip_list_node { struct skip_list_node **forward; int key; void *val;
Insert in skip list - “Skip Lists: A Probabilistic Alternative to Balanced Trees” (author William in a probabilistic data structure, the same sequence of operations will usually not.
15.1. Skip Lists - This module presents a probabilistic search structure called the skip list. Like the BST, skip lists are designed to overcome a basic limitation of array-based and
Skip Lists: A Probabilistic Alternative to Balanced Trees - Balanced Trees. Skip lists are data structures thla t use probabilistic balancing rather than strictly enforced balancing. As a result, the algorithms for insertion and.
concurrent skip lists
A Provably Correct Scalable Concurrent Skip List - The elements used for a skip list can contain more than one pointer since they can participate in more than one list. Insertions and deletions are implemented much like the corresponding linked-list operations, except that "tall" elements must be inserted into or deleted from more than one linked list.
Skip list - ConcurrentSkipListSet. public ConcurrentSkipListSet(). Constructs a new, empty set that orders its elements according to their natural ordering.
ConcurrentSkipListSet (Java Platform SE 7 ) - Overview. In this quick article, we'll be looking at the ConcurrentSkipListMap class from the java.util.concurrent package. This construct allows
Guide to the ConcurrentSkipListMap - Skip lists [7] are an increasingly important data structure for storing and re- trieving ordered in-memory data. In this paper, we propose a new concurrent skip-list algorithm that appears to perform as well as the best existing concur- rent skip list implementation under most conditions.
When is a ConcurrentSkipListSet useful? - ConcurrentSkipListSet and ConcurrentSkipListMap are useful when you I'm assuming the JDK guys went with a skip list here because the
Concurrent skiplist implementation - A concurrent skiplist with hazard pointers. I recently implemented a concurrent skip list based upon William Pugh's tech report. Since the implementation is in C,
folly/ConcurrentSkipList.h at master · facebook/folly · GitHub - @author: Xin Liu <[email protected]>. //. // A concurrent skip list (CSL) implementation. // Ref: OPODIS2006-BA.
Concurrent Skiplists - Pick a concurrent data structure: Skiplist. – Apply the techniques from the lectures . – Get a better understanding of concurrency. – Write a parallel implementation
Concurrent Skip List - Problem Statement Write an implementation of a skip list data structure that is thread-safe for searching and insertion of key-value pairs. A skip
Concurrent Skip List - Concurrent Skip List. Dmitry Vyukov mailto:[email protected]. July 12, 2010. Table of. Contents. Problem Statement .
lock free skip list
How to implement lock-free skip list - Lock-free skip lists are described in the book The Art of Multiprocessor Programming, and the technical report Practical lock-freedom, which is
Lock-Free Linked Lists and Skip Lists - distributed, fault-tolerant, lock-free, linked list, skip list, efficient, analysis, amortized analysis. INTRODUCTION. A common way to implement shared data structures in. RELATED WORK. LINKED LISTS. 3.1 Linked List Design. 3.2 Algorithms. 3.3 Correctness.
greensky00/skiplist: Generic lock-free Skiplist container - A generic Skiplist container C implementation, lock-free for both multiple readers and writers. It basically uses STL atomic variables with C++ compiler, but they can be switched to built-in GCC atomic operations when we compile it with pure C compiler. This repository also
Lock-Free Search Data Structures: Throughput Modelling - We have validated our analysis on several fundamental lock-free search data structures such as linked lists, hash tables, skip lists and binary
Lock-free Skip Lists and Priority Queues - Our Lock-Free Concurrent Skip List. Define node state to depend on the insertion status at lowest level as well as a deletion flag. Insert from lowest level going
Lock-free Skip Lists and Dictionaries - New Lock-Free Concurrent Skip List. Define node state to depend on the insertion status at lowest level as well as a deletion flag. Insert from lowest level going
Efficient & Lock-Free Modified Skip List in Concurrent Environment - lock-free implementation of modified skip list data structure. That is suitable for both fully concurrent (large multi-processor) systems as well as pre-emptive
Skip list - In computer science, a skip list is a data structure that allows O ( log n ) {\ displaystyle .. Lock-free linked lists and skip lists (PDF). Proc. Annual ACM Symp. on
A Simple Optimistic skip-list Algorithm - This paper proposes a simple new lock-based concurrent skip-list algorithm. as the best previously known lock-free algorithm under the most common search
Concurrent Non-blocking Skip List Using Multi - We present a non-blocking lock-free implementation of skip list data structure Using this operation, we first design lock-free algorithms to.
skip list database
Skip list - Skip list..
What is Skiplist & Why a Skiplist Index for MemSQL - To the best of my knowledge, MemSQL is the first commercial relational database in production today to use a skiplist as its primary index
Skip List - The worst case search time for a sorted linked list is O(n) as we can only linearly traverse the list and cannot skip nodes while searching. For a Balanced Binary
Why are skip lists not preferred over B+-trees for databases - Databases are typically so huge that they have to be stored in external memory, such as a giant disk drive. As a result, the bottleneck in most
Skip Lists Done Right - Redis uses a skip list (doubly linked) implementation for sorted sets, here .. MemSQL (distributed relational database with in-memory tables)
Parallelizing Skip Lists for In-memory Multi-core Database Systems - not efficient for in-memory databases, especially in the context of today's multi- core architecture. In this paper, we study the parallelizability of skip lists for the
MIT's Introduction to Algorithms, Lecture 12: Skip Lists - Skip lists are an efficient data structure that can be used in place of It took him 10 minutes to implement linked lists (on which skip lists are based) . Cyrus Imapd uses a skiplist database format as one of its storage engines.
Comparison of Skip List Algorithms to Alternative Data Structures - simplicity in the algorithms for insertion and deletion for a skip list. This paper analyzes .. Trees, ACM Trans. on Database Systems, Vol. 5, No. 3 (Sept. 1980)
database design - How to Store Skip Lists on Disk - A DBMS is an implementation of a datamodel - a structure for how data will be represented. The relational model represents data as tables and
skip list implementation in c
skiplist implementation in c · GitHub - skiplist implementation in c. GitHub Gist: instantly share code, notes, and snippets.
C Program to Implement Skip List - C Program to Implement Skip List. /* Skip Lists: A Probabilistic Alternative to Balanced Trees */ #include <stdlib.h> #include <limits.h> #define SKIPLIST_MAX_LEVEL 6. typedef struct snode { int key; int value; struct snode **forward;
skiplist.c - #include <stdlib.h> #include <assert.h> #include <limits.h> #include "skiplist.h" # define MAX_HEIGHT (32) struct skiplist { int key; int height; /* number of next
Skip List - In this article, we will be discussing how to insert an element in Skip list. Deciding nodes level. Each element in the list is C++. filter_none. edit close. play_arrow. link brightness_4 code using namespace std;. // Class to implement node.
Implementing a Skip List in C – Guy Alster's Blog - Implementing a Skip List in C. The entire code can be found on my Github:. SKIP LIST, so what is it all about? A skip list is a dynamic and randomized data structure that allows storing keys in a sorted fashion. The skip list is in essence, a linked list with multiple levels. So
c++ - Skip List implementation - Your code is a real pleasure to read because it is clean and well written - I wish more of the code here at CR were like this. Stilistically the focus
Skip Lists: Done Right · Ticki's blog - Skip lists are a wonderful data structure, but it is hard to get\ right. various techniques to optimize and\ improve the implementation of skip lists. limit on the highest level, and the implication of linear upperbound when h > c.
Skip List implementation using c - Skip List implementation using c. printf("\nOption for operation:\n 0)Exit 1)insert 2)search 3)print List 4)delete\nOption: "); scanf("%d",&choice);
Skip Lists in C - This file contains source code to implement a dictionary using skip lists and a test driver to test the routines. A couple of comments about this implementation:
Design and Implementation of the C++ Skip List - A skip list is a a singly linked list with additional, coarser, linked lists. The search then evolves to: HED[2] -> HED[1] which leads to A[1] -> C[1] -> E[1] , again
advantages and disadvantages of skip list
Skip Lists: Done Right · Ticki's blog - Advantages. Skip lists perform very well on rapid insertions because there are no rotations or reallocations. The algorithms can easily be modified to a more specialized structure (like segment or range “trees”, indexable skip lists, or keyed priority queues).
SKIPLISTS - INTRODUCTION. Skip lists are a data structure that can be used in place of balanced trees. It mayor advantage is that that provides expected O(log n) time for all operations. At a high level, a skip list is DISADVANTAGES. Each node in a
What uses are there for Skip Lists? - Skiplist is a data structure used to store sorted list of items, very much like Binary Search Tree. The main advantage of using Skiplists over BST is that it is best suited for concurrent What are the disadvantages of skip lists?
Skip Lists - Hard to search in less than O(n) time. (binary search doesn't work, eg.) - Hard to jump to the middle. • Skip Lists: - fix these drawbacks. - good data structure for a
Skip List Documentation - An advantage claimed for SkipLists are that the insert and remove The drawbacks of a SkipList include its larger space requirements and its
Skip list - In computer science, a skip list is a data structure that allows O ( log n ) {\ displaystyle . The advantage of this quasi-randomness is that it doesn't give away nearly as much level-structure related information to an adversarial user as the
Skip List – Chengcheng Xu – Medium - 1989 by William Pugh. Simply speaking, skip list is a data structure that allows fast search within an ordered sequence of… In general, skip list has following advantages: Skip list . Disadvantages of skip list. Skip list does
SKIP LIST & SKIP GRAPH - A skip list for a set L of dis>nct (key, element) items is a series of linked lists L. 0 Skip List. A. G. J. M. R. W. HEAD. TAIL. Each node linked at higher level with probability 1/2. Level 0. A. J. M . Disadvantages. Advantages. • O(log n) expected
A short survey of Advantages and Applications of Skip - lexicographically by their keys in a circular doubly-linked list. A short survey of Advantages and Applications of. Skip Graphs. Shalini Batra, Amritpal Singh
Skip List - Skip Lists were first described in "Skip Lists: A Probabilistic Alternative to Balanced Trees", W. Pugh (CACM, June 1990). approximate O(log n) access times with the advantage that the implementation is straightforward and Disadvantages. | http://www.brokencontrollers.com/article/31580869.shtml | CC-MAIN-2019-39 | refinedweb | 2,363 | 54.22 |
Finding the Lexicographical Next Permutation in O(N) time complexity
Reading time: 30 minutes | Coding time: 10 minutes
In Lexicographical Permutation Algorithm we will find the immediate next smallest Integer number or sequence permutation. We present two algorithms to solve this problem:
- Brute force in O(N!) time complexity
- Efficient approach in O(N) time complexity
Example : Integer Number :- 329
All possible permutation of integer number : n!
where n is an number of decimal integers in given integer number.
Here, all possible permutation of above integer number are as follows :
1] 239 2] 293 3] 329 4] 392 5] 923 6] 932
The immediate next smallest permutation to given number is 392, hence 392 is an next Lexicographic permutated number of 329
Naive Algorithm O(N!)
Step 1 : Find the all possible combination of sequence of decimals using an algorithm like heap's algorithm in O(N!)
Step 2 : Sort all of the sequence elements in ascending order in O(N! * log(N!))
Step 3: Remove duplicate permutations in O(N)
Step 3 : Find the immediate next number of the required number in the list in O(N)
Step 4 : stop.
Best Algorithm in O(N)
Step 1 : Find the largest index i such that array[i − 1] < array[i].
(If no such i exists, then this is already the last permutation.)
Step 2 : Find largest index j such that j ≥ i and array[j] > array[i − 1].
Step 3 : Swap array[j] and array[i − 1].
Step 4 : Reverse the suffix starting at array[i].
Explanation
In this algorithm, to compute the next lexicographic number will try to increase the number/sequence as little as possibl and this will be achieved by modifying the rightmost elements leaving the leftmost elements unchanged.
Here above we have given the sequence (0, 1, 2, 5, 3, 3, 0).
Step 1 : Identify the longest suffix that is non-increasing (i.e. weakly decreasing). In our example, the suffix with this property is (5, 3, 3, 0). This suffix is already the highest permutation, so we can’t make a next permutation just by modifying it – we need to modify some element(s) to the left of it. (Note that we can identify this suffix in O(n) time by scanning the sequence from right to left. Also note that such a suffix has at least one element, because a single element substring is trivially non-increasing.)
Step 2 : Look at the element immediately to the left of the suffix (in the example it’s 2) and call it the pivot. (If there is no such element – i.e. the entire sequence is non-increasing – then this is already the last permutation.) The pivot is necessarily less than the head of the suffix (in the example it’s 5). So some element in the suffix is greater than the pivot. If we swap the pivot with the smallest element in the suffix that is greater than the pivot, then the prefix is minimized. (The prefix is everything in the sequence except the suffix.)
Step 3 : In the above example, we end up with the new prefix (0, 1, 3) and new suffix (5, 3, 2, 0). (Note that if the suffix has multiple copies of the new pivot, we should take the rightmost copy – this plays into the next step.)
Step 4 : Finally, we sort the suffix in non-decreasing (i.e. weakly increasing) order because we increased the prefix, so we want to make the new suffix as low as possible. In fact, we can avoid sorting and simply reverse the suffix, because the replaced element respects the weakly decreasing order. Thus we obtain the sequence/number (0, 1, 3, 0, 2, 3, 5), which is the next permutation that we wanted to compute.
Implementation
#include <iostream> #include <string> #include <vector> std::string getNextPermutation(std::vector<int> &v) { //find the largest suffix that is non-increasing int pos_suffix_start; for (pos_suffix_start = v.size()-1; pos_suffix_start > 0 && v[pos_suffix_start-1] >= v[pos_suffix_start]; pos_suffix_start--); if (pos_suffix_start == 0) return "-1"; int pos_pivot = v.size() - 1; pos_pivot = pos_suffix_start - 1; // find the rightmost digit in suffix that is the least number greater than the pivot, called as swapper. int pos_swapper; for (pos_swapper = v.size()-1; pos_swapper > pos_pivot && v[pos_swapper] <= v[pos_pivot]; pos_swapper--); //Swap pivot digit with swapper digit int tmp = v[pos_pivot]; v[pos_pivot] = v[pos_swapper]; v[pos_swapper] = tmp; //Prepare resulting string with reversing elements after the pivot digit std::string res = ""; for (int i = 0; i <= pos_pivot; i++) res+=std::to_string(v[i]); for (int i = v.size() - 1; i > pos_pivot; i--) res+=std::to_string(v[i]); return res; } int main () { int number; std::cout << "\nEnter the Number of digits in sequence : "; std::cin >> number; std::vector<int> v; std::cout << "\nEnter the " << number << " digits : \n"; for (int i=0; i<number; i++) { int digit; std::cin>>digit; v.push_back(digit); } std::cout << "\nThe next lexicograhic number is " << getNextPermutation(v); }
Complexity
The time and space complexity of Lexicographical (Next)Permutation Algorithm is :
- Worst case time complexity:
Θ(n)
- Average case time complexity:
Θ(n)
- Best case time complexity:
Θ(n)
- Space complexity:
Θ(1)
where N is the number of lines | https://iq.opengenus.org/lexicographical-next-permutation/ | CC-MAIN-2021-04 | refinedweb | 863 | 51.58 |
Erik Hensema <[email protected]> wrote:> Horst von Brand ([email protected]):> [on reiserfs4]> >> >> and _can_ do things> >> >> no other FS can> > Mostly useless things...> Depends on your point of view. If you define things to be useful> only when POSIX requires them, then yes, reiser4 contains a lot> of useless stuff.That isn't my definition.> However, it's the 'beyond POSIX'-stuff what makes reiser4> interesting.I haven't seen a shred of evidence of that up to here. Just redoingin-kernel (for completely inscrutable reasons) stuff that has beenconfortably done in userland for many years isn't "Interesting", quite thecontrary.> Multistream files have been useful on other OSses for years.I only have seen other OSes moving away from such stuff...> They> might be useful on Linux too (Samba will surely like them).OK, if you think Windows is a good idea all around...> The plugin architecture is very interesting.Again: It isn't "plugins", it's "kernel configuration options redefiningthe filesystem layout". And that is extremely toxic: If the claims are tobe believed, somebody using ReiserFS 4 could end up using filesystems aswidely different as ext3 and ufs today. Both called the same. Or everybodywill end up using the exact same set of "plugins", so they make no sense asconfiguration options. Sure, it is nice to have different versions of thesame filesystem (in a way, ext3 is a version of ext2; in ext3 there aresome options that where introduced later, and some of which aren'tbackwards-compatible), but this is not something I would want eachindividual user screw around with willy-nilly. So the whole "plugin" ideais very questionable to me.> Sometimes you don't> need files to be in the POSIX namespace.The POSIX namespace /is/ the namespace for files.> Why would you want to> store a mysql database in files?Because it is the abstraction of permanent storage that the OS gives me. OrI could write them directly on a raw block device for performance (bycutting out a middleman).> Why not skip the overhead of the> VFS and POSIX rules and just store them in a more efficient way?Exactly. Cut out the filesystem.> Maybe you can create a swapfile plugin.The kernel manages swapping on files and devices just fine, thank you.> No need for a swapfile to> be in the POSIX namespace either.And how do you handle it if it has no filename?!> It's just a fun thing to experiment with.Noone here is stopping you from experimenting.> It's not always> nescesary to let the demand create the means. Give programmers> some powerful tools and wait and see what wonderful things start> to evolve.The sad truth is that if you give a random collection of people powerfultools they misuse them more often than not, creating a huge mess in theprocess. That is why it is so hard to design good tools.> And yes, maybe in ten years time POSIX is just a subsystem in> Linux. Maybe commerciale Unix vendors will start following Linux> as 'the' standard instead of the other way around. Seems fun to> me :-)To me too. But forcing Linux today to be as non-POSIX as possible, just soit will be prepared for 10 years in the future makes no sense, because youbreak it /now/.> I think this debate will mostly boil down to 'do we want to> experiment with beyond-POSIX filesystems in linux?'.You (and some others) clearly want to. 
More power to the bunch that comesup with clean semantics that can be implemented efficiently and are usefulin real life (as opposed to "it would be oh-so-nice to also have $FEATUREfor my pet $NICHE_CASE, feature for which I just can't be botheredconsidering ramifications at all"). Before going off look up "featuritis"(and consider how it all but killed off a lot of OSes, even many Unixvariants, and uncountable other things too).> Clearly we don't _need_ it now. There simply are no users. But> will users come when reiser4 is merged? Nobody knows.Probably a tiny minority. Something like the following ReiserFS 3 hastoday.> IMHO reiser4 should be merged and be marked as experimental.IMHO ReiserFS 4 should not be merged into Linus' kernel. So what? It is notmy call (nor yours).> It> should probably _always_ be marked as experimental, because we> _know_ we're going to need some other -- more generic -- API when> we decide we like the features of reiser4. The reiser4 APIs> should probably be implemented as generic VFS APIs. But since we> don't know yet what features we're going to use, let reiser4 be> self contained. Maybe reiser5 or reiser6 will follow standard> VFS-beyond-POSIX rules, with ext4 and JFS2 also implementing them.If is is /that/ experimental, it has no place in Linus' kernel at all. Itis not (and has not been for a half dozen years at least) a playground forrandom experiments. Sure, you can fork off a branch for fooling around, youare even wellcome to keep your laboratory synched up with the versioneverybody is using, if that serves your needs.> It's just too damn hard to predict the future.Right. Instead of gambling it will turn out just as you think (why should Igive /your/ view more weight than mine? or prefer views which have shown tobe erroneous over the view of people who /have/ shaped the present we arein today?), why not wait and see?> IMHO better just> merge reiser4 and let it be clear to everybody that reiser4 is an> experiment.IMHO much simpler just leaving experiments to experimental branches.> As long as it doesn't affect the rest of the kernelImpossible.> and it's> clear to the users that reiser4 is *not* going to be the> standard, it's fine with me.Now I'm at a complete loss... Why should it be placed in the standardkernel | http://lkml.org/lkml/2005/7/11/192 | CC-MAIN-2017-43 | refinedweb | 982 | 66.54 |
DORF -- Implementor's view
The implementation has been prototyped in GHC 7.2.1, see, and SPJ's observations/possible improvements and caveats.
Opportunity for improvement! ]
Record/field update
Update uses method set from the Has class:
set :: (Has r fld t) => fld -> t -> r -> r
sets, and the result from set depends only on the record type -- rather than type family Set H-R type is hidded inside HR instance (t ~ ([a_] -> [a_])) => -- same as SORF Has HR Proxy_rev t where get HR{rev} _ = rev> and <No Mono Record Fields>. When import/exporting do we need to also export the Proxy_type? If not exported, update syntax cannot be desuggarred to use it.)
Should application programmers declare instances for `Has'/set?
Nothing so far suggests they should. (And there's obvious dangers in allowing it.)
But updating through virtual fields might need it. See <DORF -- comparison to SORF>#<Virtual record selectors>.
Attachments
- DORF Prototype ADC5 15Feb2012.lhs
(11.9 KB) - added by guest 15 months ago.
Prototype implementation
- DORF Prototype Importing 29Feb2012.lhs
(4.9 KB) - added by AntC 15 months ago.
demo importing and namespace control of field clashes | http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView?version=2 | CC-MAIN-2013-20 | refinedweb | 191 | 60.41 |
Release notes/0.44/fr.
Indicateur de style des outils.
Barre des contrôles de l'outil texte
-.
Palette de couleurs insérable
-..
Raccourcis clavier configurables.
Barre d'état
- sensitive to pressure of your: [].
Afficher ou non les poignées.
Nouveau comportement de suppression
-.
Préserver les positions des noeuds et poignées
-.
Divers
-.
Plume calligraphique
Tremblement
-.
Largeur de plume
-).
Outil stylo (courbes de Bézier)
-.
Masques et chemins de décou
Boîte de dialogue transformer.
Centres de rotation persistants
-.
Coller les dimensions).
Connecteurs et disposition automatique
-.
Vectorisation sélective avec.
Adhérence
-.
Marqueurs
-.
Effets d'extension
-.
Raccourcis divers
-).
Améliorations diverses
- Document.
Corrections diverses de bogues
-. se plantera s'il est lié à la première version du paquet Debian de la librairie ramasse miette
- On Linux, Inkscape may crash if you have the "Composite" option enabled in your X.org configuration. To disable this option, comment out this line in your /etc/X11/xorg.conf:
Option "Composite" "Enable"
- so it becomes
#Option "Composite" "Enable"
- and restart X.
Les espaces de nom (namespaces) peuvent nécessiter d'être réparés
- Previous versions of inkscape sometimes silently saved documents with wrong namespace URIs. This has been fixed, but such corrupted documents will no longer load successfully. Such documents may require their namespace declarations to be fixed by hand.
Attention aux thèmes défectueux sous. However, but it would be nice if you as affected user would inform the gtk-engines maintainers of any further.
Versions précédentes
- () | https://wiki.inkscape.org/wiki/index.php?title=Release_notes/0.44/fr&oldid=7053 | CC-MAIN-2020-05 | refinedweb | 232 | 62.95 |
Microsoft provides ASP.NET Cache object within the Application scope of ASP.NET. And this allows you to cache application data and reduce those expensive database trips and improve your ASP.NET performance. Here is how ASP.NET Cache is typically used.
using System.Web.Caching; ... string key = "Employee:EmployeeId:1000"; Employee employee = (Employee)Cache[key]; if (employee == null){ // Load Employee from DB & put it in the Cache LoadEmployeeFromDb(employee); Cache.Insert(key, employee, null, Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration, CacheItemPriority.Default, null ); }
Doc: Client Side ASP.NET Features
ASP.NET Cache is a stand-alone in-process (InProc) cache and therefore has many limitations if your application is deployed in a load-balanced web farm. These limitations are:
NCache is a distributed cache and resolves all the limitations of ASP.NET Cache mentioned above. Here is how NCache addresses these limitations:
NCache provides all of ASP.NET Cache features with an identical API plus. This enables you to migrate from ASP.NET Cache to NCache seamlessly. You only change the namespace from System.Web.Caching to Alachisoft.NCache.Web.Caching and ensure that all your objects being cached are serializable.
Additionally, NCache provides numerous caching features that ASP.NET Cache does not have. Here is a partial list:
Read more about all of NCache edition features.
Here are some simple steps you could take to quickly benefit from it: | http://www.alachisoft.com/ncache/asp-net-cache.html | CC-MAIN-2018-51 | refinedweb | 229 | 54.18 |
Are you sure?
This action might not be possible to undo. Are you sure you want to continue?
which directs its courts and administrative agencies, when confronted with a legal problem involving a foreign element, whether or not they should apply a foreign law or laws. Elements: a. It is part of the municipal law of a state; b. There is a directive to courts and administrative agencies; c. There is a legal problem involving a foreign element; and d. There is either an application or nonapplication of a foreign law or laws. Since every state has its own municipal law, it follows quite naturally that each state has also its own conflict of laws. It is part of the municipal law, NOT international in character. It is the judicial tribunals of a country that ultimately are called upon to decide or resolve conflicts problems; while administrative agencies also decide preliminarily a given controversy involving a foreign factor. A foreign element has to be present before the matter can be considered a conflict problem. Thus, if the transaction in question arises wholly within a single state, all the parties interested having been, and continuing to be, domiciled and actually present there, and all nationals of the very same state, the question being raised there also, no foreign element exists to cause any interference with the usual and regular enforcement of the domestic municipal law by the domestic tribunals. When effect is given to a foreign law in any territory, it is only because the municipal law of that state temporarily abdicates its supreme authority in favor of the foreign law, which for the time being, with reference to that particular matter, becomes itself, by will of that state, its municipal law. Importance of Conflict of Laws: 1. To adjust conflicting rights in international, mercantile and corporation transactions; and 2. To solve personal, family, property, and successional, contractual problems, possessed of facts or elements operating in two or more states. Cause for Conflicts Problems Variance in the municipal law of the countries involved. Different municipal laws may give identical laws varying interprestations. Scope of Functions of Conflict of Laws a. The determination of which country has jurisdiction b. The applicability to a particular case of either the local or the foreign law
c. The determination of the force, validity and effectiveness of a foreign judgment How Conflict of Laws is observed 1. States may observe conflict of laws by complying faithfully with its own conflict rules. But States must, insofar as is practicable, try to harmonize their own rules of equity with the legislation and jurisprudence of other lands. 2. Private individuals may in their own way abide by our conflicts rules by observing them and by complying with judicial decisions on the subject. Why Conflict of Laws is observed 1. It is part of the municipal law of the state 2. Individual citizens has fear of municipal sanctions Names given to the subject Private International Law International Private Law Civil International Law Extraterritorial Law Private Law of Nations Private Law of Foreigners The Extraterritorial Recognition of Rights The Law of Strangers The Theory of the Extraterritorial Authority of Laws Conflict of Laws Conflict of Laws (Private) vs. Law of Nations (Public)
Nature Persons Involved Transactions Involved Remedies or Sanctions
Conflict of Laws Municipal Private Individuals Private ones between private individuals Resort to municipal tribunals
Law of Nations International Sovereign States Generally affect public interest Peaceful such as negotiations, good offices, mediation, conciliation, arbitration, and the likes; or Forcible such as embargo, boycott, war, and the likes
Sources of Conflict of Laws 1. Indirect a. Natural Moral Law -rule of human conduct implanted by God in our nature and in our conscience, urging us to do whatever is right and avoid whatever is evil. b. Works of Writers -legal scholars are considered sources of Conflict of Laws insofar as their
Codifications -Civil Code of the Philippines & Philippine Code of Commerce. Special Laws -Corporation Code. he is subject to all subsequent matters in the same suit. . f. In Conflict of Laws. the court does not acquire jurisdiction over the defendant. court does not acquire jurisdiction. Vitiated by fraud -if initiated by the plaintiff. Jurisdiction over the person -It is the power of a court to render a judgment that will be binding on the parties involved. Constitutions -the fundamental law of the land. Jurisdiction In general. The evidence and witnesses may be not be readily available. are indeed evidence of what the laws mean. Jurisdiction over the plaintiff is acquired from the moment he institutes the action by the proper pleading. lex loci celebrationis (law of the place of celebration). b. e. he is subject to any set-offs. y y y Kinds of Jurisdiction a. PD on Intellectual Property. International Customs -lex situs (law of the place where the property is situated). When a tribunal possesses jurisdiction. the court acquires jurisdiction. Patent Law. it has no alternative except to dismiss the case. Vitiated by force -if force is legal. if initiated NOT by the plaintiff. in which case it may either apply lex fori (law of forum)or lex causae (proper foreign law) -It is conferred by law and is defined as the authority of a court to hear and decide cases of the general class to which the proceedings in question belong. Foreign Investment Act. d. Any judgment rendered without or in excess of jurisdiction is clearly null and void even in the state that rendered it. Continuing Jurisdiction even if the defendant leaves o Even if the defendant leaves the state of the forum prior to the final determination of the suits. Geneva Conventions of 1823. and Codigo Bustamante of 1898. b. though not laws. Direct a. Judicial Decisions -Judicial Decisions. Treaties and Conventions -European Hague Conventions of 1896. while jurisdiction over the defendant is acquired through voluntary appearance or personal/substituted service of summons. 1930 and 1931. Treaties of Montevideo of 1899. c. the jurisdiction over him that had been previously acquired continues. How Service is made on a private foreign corporation? To the resident agent if doing business in the Philippines or to the government official designated by law to that effect or any of its officer or agents within the Philippines. c. The court dockets of the forum may already be clogged and to permit additional cases would inevitable hamper the speedy administration of justice. The evils of forum shopping ought to be curbed. that the forum may provide as proper elements of defense o On the part of the defendant. Effect of vitiated personal service of summons a. regardless of the persons who may be interested thereon. in view of the lack of due process. y Repercussions of Submission to Jurisdiction: o On the part of the plaintiff. When a court has no jurisdiction.writings have influenced judicial decisions on the subject. y It is usually the law of the forum that furnishes the yardstick for the presence or absence of jurisdiction. Insurance Act. if force is illegal. b. it is the authority of a tribunal to hear and decide a case. crossclaims. or (2) assume jurisdiction. and Central Bank Act. 2. court acquires jurisdiction. Jurisdiction over the subject matter Reason for refusal to assume jurisdiction Inconvenient to the forum (forum non conveniens): a. c. Omnibus Investment Code. 1902 and 1905. Nationalization of the Retail Trade Acvt. 
territoriality (where the crime is committed) or generality (criminal laws of a country bind both the citizens and aliens who are in the said country or territory except the principles of public international law and presence of treaty stipulations). etc. such as appeals. 1926. b. It also includes the power to enforce any judgment it may render. it is the power of the state to create legal interest which other states will recognize and enforce. d. Jurisdiction over the res -It is the jurisdiction over the particular subject matter in controversy. The forum has no particular interest in the case. counterclaims. lex nationalii/lex domicilii (national law or domiciliary law). it may (1) refuse to assume jurisdiction on the ground of forum non conveniens.
judgment. the forum in turn will recognize the laws and judgments emanating from said foreign state. When the case involves penal laws. we must do so. When the case involves purely fiscal or administrative matters. Theories on why the foreign law may in some cases be given effect 1. When the case involves any of the exceptions to the application of the proper foreign law: a. or contract is contrary to almost universally conceded principles of morality. is the dispensing of justice. judgment. the case may be better tried is said courts. or contract. if proved to be commonly admitted in such courts. thus. Based on persuasiveness of a foreign judgment if the forum is persuaded that a foreign judgment is meritorious and has been rendered by a court of competent jurisdiction. or by his deputy. Theory on Comity We apply foreign law because of its convenience. including Conflicts of Laws. When the proper foreign law has not been properly pleaded and proved. Application of Internal or Domestic Law 3 instances to apply internal or domestic law: a. residents and transients in our land. Recognition and enforcement of foreign law distinguished Recognition means that our courts will allow said foreign judgment to be presented as a defense to a local litigation. d. it is as if the foreign law has become part and parcel of our own local law. . 4. Other courts are open. if this can be attained in many cases by applying the proper foreign law. may work undeniable injustice to the citizens or residents of the forum. Certainly. but simply the vested rights that have been vested under such foreign law or judgment. 3. c. or (2) by a copy attested by the officer having legal custody of the record. b. Kinds of Comity: a. it will not hesitate to enforce that foreign judgment in the forum even if the foreign forum does not reciprocate. judgment. judgment. or contract is contrary to a sound and established public policy of the forum. we cannot be blamed is we disregard the foreign interpretation and instead use our own previous interpretation of the same. When the foreign law. f. c. hence. Theory on vested rights We seek to enforce not the foreign law or the foreign judgment itself. When the foreign law. identical or similar problems must have identical or similar solutions anywhere. e. or (2) by printed and published books of reports of decisions of the country involved. and because we want to give protection to our citizens. Theory of Harmony of Laws We have to apply foreign law so that wherever a case is decided. judgment. h. or contract. it must be properly pleaded and proved as a fact. judgments. irrespective of the forum. Basis of theory: Principle of territoriality a judge cannot directly recognize or sanction foreign law and judgments. Enforcement virtually implies a direct act of sovereignty. Based on reciprocity if the laws and judgments of the forum are recognized in a foreign state.e. Enforcement when a plaintiff wants the courts to positively carry out and make effective in the Philippines a foreign judgment. or contract involves procedural matters.) *Proof of foreign law: WRITTEN LAW (1) an official publication thereof. thus. When the application of the foreign law. and accompanied with a certificate that such officer has custody. Theory of Local law We apply foreign law not because it is foreign. 
*a foreign law that has been duly pleaded and proved in our courts of justice must receive the same interpretation given to said law by the foreign tribunals concerned except if somewhere in our laws we find a statute worded identically. contracts. Theory of Justice The purpose of all laws. it is his own territorial law which must exclusively govern all problems demanding his decision. b. g. When the application of the foreign law. When the law of the forum expressly so provides in its conflicts rules. may work against the vital interests and national security of the state of the forum. When the foreign law. (Our civil code cites certain instances when our courts in resolving conflict problems have no course except to apply our own internal law) b. that is. (There is no judicial notice of any foreign law. the solution should approximately be the same. UNWRITTEN LAW (1) the oral testimony of expert witnesses. 2. but because our own rules applying similar rules require us to do so. 5. When the case involves real or personal property situated in the forum. Recognition involves merely the sense of justice.
4. If. 2. 2. The application of the proper foreign law to the problem Theories on Characterization 1. 3 Important steps in characterization 1. court has jurisdiction. Lex fori theory the forum merely considers its own concepts its own characterization. They may contradict one another. Factors which give rise to the problem of characterization 1. are of a common nature. the law that will be applied will have to depend upon the facts involved. All-Sided Rule / Multilateral Rule indicates when foreign law is to be applied. One-Sided Rule / Unilateral Rule indicates when Philippine Internal Law will apply. The factual situation the set of facts presenting a conflicts problem 2. no problem in Conflict of Laws arises. 7. Recognition does not require either action or a special proceeding. 3. 6. one without any foreign element. Purpose: to enable the forum to select the proper law. The characterization of the factual situation The process of assigning the proven facts into their particular category. Lex causae theory it follow the characterization of the foreign state which is the principal point of contact. They may contravene our established public policies. Recognition may exist without enforcement. 3. 5. the latter merely indirectly responds by indicating whether internal or foreign law is to be applied. and identity of parties. The point of contact or the connecting factor the law of the country with which the factual situation is most intimately connected Characterization the process of determining under what category a certain set of facts or rules fall. the administration of justice may be shockingly corrupt. no fraud. Selection of the proper law 3. while a conflicts rules applies when the factual situation involves a foreign element. Reason why not all foreign judgments can be recognized or enforced in our country The requisite proof may not be adequate. no collusion. and The judgment must be Res Judicata in the state that rendered it (judgment is final. no want of notice. for instance. 3. Universal analytical theory characterization comes only after a general comparative analytical study of the jurisprudence of all the states involved. Conditions and requisites before foreign judgments may be recognized and enforced in the Philippines There must be proof of the foreign judgment. There must be no lack of jurisdiction. The characterization of the point of contact or the connecting factor Determining whose characterization of the point of contact should be adhered to. an identity of name covers a difference of nature or content of a legal idea. Composition of Conflicts Rules 1. Characterization of the questions 2. The determination of the conflicts rule which is to be applied Determining whether the conflicts rule that we have on the matter or some foreign conflicts rules. Enforcement necessarily carries with its recognition. The pleading and proving of the proper foreign law There must be a competent evidence of the existence of the foreign law on the matter interpreted using our own internal law. it is clearly determine that no foreign element is involved. Dual theory of lex fori and lex causae similar to the comparative approach theory but only . in general terms. Application of the proper law Steps in the application of the proper law 1. The determination of the facts involved In every case. Conflicts Rules These are the provisions found in a country s own law which govern factual situations possessed of a foreign element. judgment is on merit. subject matter and cause of action). 
Different legal systems attach to the same legal term with different meanings. The judgment must not contravene a sound and established public policy of the forum. The characterization of the problem as procedural or substantive Determining whether the matter is one pertaining to substantive law or procedural law. The former directly answers a given problem. Conflicts of Rules compared to Purely Internal Rules A purely internal rules governs a purely domestic problem. Enforcement necessitates a separate action or proceeding brought precisely to make the foreign judgment effective. 2. Different legal systems may contain ideas or conceptions completely unknown to one another. The judgment must be a civil or commercial matter. In some countries. Different legal systems apply different principles for the solution of problems which. Kinds of Conflicts Rules 1. 4. that is. no clear mistake of law or fact. 2.
Natural-born citizens those who are citizens of the Philippines from birth without having to perform any act to acquire or perfect their Philippine citizenship. c. 2. racial and cultural group. Naturalization is a proceeding in rem. 3. Nationality / Personal theory status and capacity of an individual are generally governed by the law of his nationality. It may be his national law or domiciliary law or the law of the situs. Domicile is that place where a person has certain settled. He may have renounced his nationality by certain acts. d. How stateless is brought about: a. The problem of dual or multiple nationalities or citizenships can hardly arise because citizenship is a matter to be exclusively determined by a country s own law. 2. Secondarily the law of the place of temporary residence Naturalization process of acquiring the citizenship of another country. it is a privilege 2. Juridical capacity (passive) the fitness to be the subject of legal relations. Just as a state may denationalize its own citizens. Citizenship is not a right. 6. 2. He may have been deprived of his citizenship for any cause. Only foreigners may be naturalized 4. legal relations because: it is assigned to him also by the law at the moment or birth (domicile of origin). Nationality membership in an ethnic. Theories on Personal law or law that governs status and capacity in general 1. Capacity to act (active) power to do acts with legal effects. 2. 5.two concepts enter into the picture. 3 Kinds of Citizens of the Philippines 1. However. 2. The law of the domicile b. more or less permanent. He may have voluntarily asked for a released from his original state. and may be defined as the sum total of his rights and obligations. dual or multiple citizenships may really exist. Naturalized citizens those who become a citizen through judicial proceedings. Theories on determining citizenship 1. Jus sanguinis one follows the citizenship of his parents. social. Citizenship membership in a political society. Totality theory get the characterization intended by the parties or get the law intended by the parties to apply and then proceed to apply the characterization given by that intended law. Capacity is merely a part of status. a person is a citizen of the same. Personal law of stateless individuals: a. cannot easily be terminated at the mere will or desire of the parties concerned 4. Status is conferred principally by the state not by the individual 2. Status. Naturalization demands allegiance to our Constitution. Situs / Eclectic theory views tha particular place of an event or transaction as generally the controlling law. The requisite conditions for naturalization are laid down by congress 3. and therefore jurisdictiol over the entire world is requires by publication. Attributes of naturalization 1. Status the place of an individual in society. become such by choosing or electing Philippine citizenship at the age of twenty-one (21) or within a reasonable time thereafter. Jus soli if born in a country. and government 6. Status is a matter of public or social interest 3. Kinds of capacity: 1. He may have been born in a country which recognizes only the principle of jus sanguinis and of parents whose law recognizes only the principle of jus soli. Citizens by election those citizens who by virtue of certain legal provisions. express or implied. b. and consists of personal qualities and relationships. or it is assigned to him also by the law after birth on account of a legal disability caused for . 
the characterization of lex fori and lex causae. How dual or multiple citizenships arise? Through a naturalized citizen s failure to comply with certain legal requirements in the country of origin From a combined application of jus soli and jus sanguinis By the legislative act of states By the voluntary act of the individual concerned The problem of stateless individuals 1. Status is generally supposed to have a universal character Personal law the law attaches to an individual wherever he may go. with which the state and the community are concerned. in the viewpoint of a third state. laws. Autonomous theory forum should consider the characterization of the country referred to in the conflicts rule of the lex causae. Domiciliary theory / Territorial theory the law of that domicile as the proper determinative law on status and capacity. being a concept of social order. so may naturalization be revoked 5. Characteristics of status 1. 3. fixed.
IDIOTS. except for certain purposes. Distinguished double renvoi and transmission Double renvoi deals with two countries. Marriage is voidable before marriage is annulled. If the participation is passive. insanity. Legitimated child domicile of the father at the time of the birth of the child. b. Rules on Domicile of origin 1. will put itself in the position of the foreign court and whatever the English court will do respecting the case the Philippine court will likewise do) Double Renvoi is that which occurs when the local court. 3. Provable intent that it should be one s fixed and permanent place of abode. Situs or Electic Theory. Above the age of majority domicile of choice of guardian or their constructive domicile is in the place where they had their domicile of choice shortly before they became insane. Is a reference to the whole of the foreign law. she may decide to remain in the domicile of her former husband or not. . Rules on Domicile of Choice 1. MARRIED WOMAN 1. while transmission with a transmitting. With actual physical presence in the place chosen. We may accept the renvoi 3. Marriage is valid domicile of choice of both husband or the wife. We may follow the theory of desistment or the mutual-disclaimer of jurisdiction theory 4. If the participation is active. including its conflicts rules. Double renvoi deals with a referring back. Marriage is void she has no constructive domicile. in adopting the foreign court theory. LUNATICS AND INSANES 1. or b. Is a reference to the internal law of said foreign law. We may reject the renvoi 2. 2. and d. With freedom of choice. No natural person must ever be without a domicile. while transmission with three or more countries. 2. 2. 2. and from different legal viewpoints. Transmission the process of applying the law of a foreign state thru the law of a second foreign state. the problem arises when there is a doubt as to whether a reference to a foreign law: a. kinds of participation 1. it remains the domicile unless a new one is obtained a. Rules on Constructive domicile INFANT 1. Adopted domicile of choice of the adopter. We may make use of the foreign court theory (our Philippine court. Ward domicile of choice of the guardian. c. No person can have two or more domiciles at the same time. 3. domicile of choice of both husband or wife. but after the marriage is annulled. and 2. 4.instance by minority. Proposed solutions 1. Adopted child domicile of the real parents or the parent by consanguinity. Legitimate child domicile of choice of the father at the moment of the birth of the child. By a capacitated persons. 4. he intends to return (domicile of choice). Legitimate domicile of choice of either the father or the mother. the governing law is the law of the legal situs. whenever he is absent. 5. in deciding the case. or marriage in the case of a woman (constructive domicile or domicile by operation of law). Renvoi literally means a referring back. discovers that the foreign court accepts the revoi. Below the age of majority considered infants under the law 2. Illegitimate the domicile of choice of the mother. Foundling country where it was found. Every sui juris may change his domicile 4. or he has his home there that to which. 3. the governing law is the law of the actual situs. 3. Illegitimate child domicile of choice of the mother at the moment of the birth of the child. Once acquired. | https://www.scribd.com/doc/62621375/Reviewer | CC-MAIN-2017-43 | refinedweb | 4,163 | 50.63 |
ASP.NET MVC Framework works on the concepts of routing to map URLs to the Controller class and Action defined in the Controller.Routing use the Route table for Matching the URL definition according to a pattern in mentioned Route table and passes them as parameter arguments to a controller Action after validation.
ASP.NET application, not using routing concepts.It specifically maps the URL based incoming request to the physical file(ex .aspx file) .
You can see Routing table codes in your Global.aspx file as given below:-
ASP.NET application, not using routing concepts.It specifically maps the URL based incoming request to the physical file(ex .aspx file) .
You can see Routing table codes in your Global.aspx file as given below:-
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Routing; namespace Student_MVC_Application { // = "Mystudent", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); } } }
Description:- Here you can see following routing elements as given below:-
- Route Name = Default
- URL with Parameter = {controller}/{action}/{id}
- Controller Name = Mystudent
- Action Name = Index
- MapRoute and IgnoreRoute are use as extension methods of the RouteCollection class.
- The MapRoute makes use of the Route Object property and creates an instance of Route.
- The IgnoreRoute method works with the StopRouteHandler Class.
- {resource}.axd/{*pathInfo}, is used to prevent requests for the web resources,such as WebReference.axd or ScriptResource.axd ,from getting passed to a controller.
A pattern is a signature that helps for matching the Incoming Request to your system or other.
Example :-
{controller}/{action}/{id} is a predefined pattern in every MVC templates.You can change your route Path according to your project using Custom Routing Functionality.
What is the Meaning of this Pattern:-.
There are following URLs and its definitions as given below:-
1.) :-
Here controller =Home, action=Index ,id =none, since default value of controller and action are Home and Index Respectively.
2.) :-
Here controller =Bollywood, action=Index ,id =none ,since default value of action in Index.
3.) :-
Here controller =Bollywood, action=images ,id=none
4.) :-
Here controller =Bollywood, action=images ,id=10
5.) :-
There is no match found because this pattern route is not defined in Global.asax file .First You have to define a separate route for this URL in Global.aspx file.Then it will work .
Now one question will be raised in your mind "How to create custom route in ASP.NET MVC application".I will explain it in a separate post.
For More...
- Binding concepts of WPF
- How to implement Role based security in asp.net application
- How to display xml data in list box in asp.net application
- Advance .net and c# interview questions and answers
- How to insert data in ms access database and display it in gridview control
- How to save image on website folder in asp.net | https://www.msdotnet.co.in/2015/03/routing-concepts-of-aspnet-mvc-framework.html | CC-MAIN-2022-05 | refinedweb | 483 | 52.15 |
Gantry - Web application framework for mod_perl, cgi, etc.
    use Gantry qw/-Engine=MP13 -TemplateEngine=Default/;
    use Gantry qw/-Engine=MP13 -TemplateEngine=TT/;
    use Gantry qw/-Engine=CGI -TemplateEngine=TT/;
    use Gantry qw/-Engine=MP20/;
Note, if you want to know how to use Gantry, you should probably start by reading Gantry::Docs::QuickStart or Gantry::Docs::Tutorial.
Perl Web application framework for Apache/mod_perl. Object Oriented design for a pragmatic, modular approach to URL dispatching. Supports MVC (or VC, MC, C, take your pick) and initiates rapid development. This project offers an orgainized coding scheme for web applications.
Gantry can be extended via plugins. The plugins can optionally contain callback methods.
Defined phases where callbacks can be assigned. pre_init at the beginning, before pretty much everything post_init just after the main initializtion of the request pre_action just before the action is processed post_action just after the action has been processed pre_process just before the template engine is envoked post_process right after the template engine has done its thing
package Gantry::Plugins::SomePlugin; sub get_callbacks { my ( $class, $namespace ) = @_; return if ( $registered_callbacks{ $namespace }++ ); return ( { phase => 'init', callback => \&initialize }, { phase => 'post_init', callback => \&auth_check }, ); } sub initialize { my $gantry_site_object = shift; ... } sub auth_check { my $gantry_site_object = shift; ... } Note that the pre_init callback receives an additional parameter which is either the request object (for mod_perl) or the CGI object. If your plugin in registers callbacks, please document this for your users. They should add -PluginNamespace to the full use list, and it must come before the plugins which register callbacks. In addition, you can specify a plugin location with -PluginDir. This allows you to put plugins in directories out outside of the default Gantry::Plugins directory. Plugin callbacks are called in the order in which the plugins are loaded. This gives you some control over the order in which the callbacks will run by controlling the order in which the plugins are specified in the application use statement. Example: use Some::Gantry::App qw( -Engine=MP20 -Template=TT -PluginNamespace=module_name SOAPMP20 -PluginDir=MyApp::Plugins MyPlugin ); Then, they should implement a method called namespace at the top of each heirarchy which needs the plugins: sub namespace { return 'module_name'; }
This is the default handler that can be inherited it calls init, and cleanup. Methods to be called from this handler should be of the naming convention do_name. If this cannot be found then the autoloader is called to return declined. Methods should take $r, and any other parameters that are in the uri past the method name.
The init is called at the begining of each request and sets values such as, app_rootp, img_rootp, and other application set vars.
$self->declined( 1 );
Set and unset the declined flag
$self->relocate( location );
This method can be called from any controller will relocated the user to the given location.
This method has been moved to Gantry::State::Default.
$self->relocate_permanently( location );
This method can be called from any controller will relocated the user to the given location using HTTP_MOVED_PERMANENTLY 301.
This method has been moved to Gantry::State::Default.
$self->redirect( 1 );
Set and unset the redirect flag
$self->no_cache( 1 );
Set and unset the no cache flag. This directive informs Apache to either send the the no_cache header or not.
Dual use accessor for caching page content. If a plugin prior to the action phase populates this value, that value will be directly returned to the browser, no dispatch will occur.
$self->template_disable( 1 );
Set and unset the template disable flag.
$self->method; $self->method( $r->method );
Set/get the apache request method, either 'POST' or 'GET'
$self->cleanroot( uri, root );
Splits the URI and returns and array of the individual path locations.
$self->cleanup
This method is called at the end of the request phase to cleanup, disconnect for a database, etc.
$self->_increment_engine_cycle
Increments the the engine cycles total.
$self->engine_cycle
Returns the engine cycle total.
Generates an error page. Feel free to override this to change the appearance of the error page.
$hash_ref_of_cookies = $self->get_cookies(); $cookie_value = $self->get_cookies( 'key_of_cookie' );
If called without any parameters, this method will return a reference to a hash of all cookie data. Otherwise, by passing a key to this method then the value for the requested cookie is returned.
$self->set_cookie( { name => cookie name, value => cookie value, expire => cookie expires, path => cookie path, domain => cookie domain, secure => [0/1] cookie secure, } )
This method can be called repeatedly and it will create the cookie and push it into the response headers.
Used by set_cookie to store/buffer cookies for the CGI engine. Not intended for direct calls.
Dual use accessor.
Parameters: key value
Returns: always returns the hash of headers
Omit the key and value for pure getter behavior.
$r = $self->r; $self->r( $r );
Set/get for apache request object
$cgi = $self->cgi; $self->cgi( CGI::Simple->new() );
Set/get for CGI::Simple object. See CGI::Simple docs. This method is only available when using the CGI engine.
$uri = $self->uri; $self->uri( uri );
Set/get for server uri
$location = $self->location; $self->location( location );
Set/get for server location
$url_for_email = $self->current_url
Get the url of the current page. This combines protocol, base_server and uri to form a valid url suitable for inclusion in an email.
$path_info = $self->path_info; $self->path_info( path_info );
Set/get for server path_info
$type = $self->content_type; $self->content_type( 'text/html' );
Set/get for reponse content-type
$type = $self->content_length; $self->content_length( $length );
Set/get for reponse content-length
$self->root( '/home/tkeefer/myapp/root' ); $root = $self->root;
Set/get for the root value. This value is the application root directory that stores the templates and other application specific files.
$self->template( 'some_template.tt' );
Set/get for template name for current request
The filename is relative to the $self->root value, otherwise it needs to be the full path to template file.
$self->template_default( 'some_default_template.tt' );
Set/get for a template default value. If a template has not been defined for the request, then the default template is called.
The filename is relative to the $self->root value, otherwise it needs to be the full path to template file.
$self->template_wrapper( 'wrappers/wrapper.tt' );
Set/get for the template toolkit wrapper file. The wrapper does exactly as it says; it wrapper the ouput from the controller before the response is sent to the client.
The filename is relative to the $self->root value, otherwise it needs to be the full path to template file.
Dual accessor for the HTTP status of the page hit.
$self->css_root( '/home/tkeefer/myapp/root/css' ); $css_root = $self->css_root;
Set/get for the css_root value. This value is used to locate the css files on disk.
$self->img_root( '/home/tkeefer/myapp/root/images' ); $img_root = $self->img_root;
Set/get for the img_root value. This value is used to locate the application image files on disk.
$self->doc_root( '/home/tkeefer/myapp/root' ); $doc_root = $self->doc_root;
Set/get for the doc_root value. This value is used to locate the application root directory on disk.
$self->app_rootp( '/myapp' ); $app_rootp = $self->app_rootp;
Set/get for the app_rootp value. This value is used to identify the the root URI location for the web application.
$self->img_rootp( '/myapp' ); $img_rootp = $self->img_rootp;
Set/get for the img_rootp value. This value is used to identify the the root URI location for the web application images.
$self->web_rootp( 'html' ); $web_rootp = $self->web_rootp;
Set/get for the web_rootp value. This value is used to identify the the root URI location for the web files.
$self->doc_rootp( 'html' ); $doc_rootp = $self->doc_rootp;
Set/get for the doc_rootp value. This value is used to identify the the root URI location for the web files.
$self->css_rootp( '/myapp/style' ); $css_rootp = $self->css_rootp;
Set/get for the app_rootp value. This value is used to identify the the root URI location for the web application css files.
$self->tmp_rootp( '/myapp/tmp' ); $tmp_rootp = $self->tmp_rootp;
Set/get for the tmp_rootp value. This value is used to identify the the root URI location for the web application temporary files.
$self->js_rootp( '/myapp/js' ); $js_rootp = $self->js_rootp;
Set/get for the js_rootp value. This value is used to identify the the root URI location for the web application javascript files.
$self->editor_rootp( '/fck' ); $editor_rootp = $self->editor_rootp;
Set/get for the editor_rootp value. This value is used to identify the the root URI location for the html editor.
$self->tmp_rootp( '/home/httpd/html/myapp/tmp' ); $tmp_root = $self->tmp_root;
Set/get for the tmp_root value. This value is used to identify the the root directory location for the web application temporary files.
$self->js_rootp( '/home/httpd/html/myapp/js' ); $js_root = $self->js_root;
Set/get for the js_root value. This value is used to identify the the root directory location for the web application javascript files.
Use this to store things for your template system, etc. See Gantry::Stash.
An obscure accessor for storing smtp_host.
$self->user( $apache_connection_user ); $user = $self->user;
Set/get for the user value. Return the full user name of the active user. This value only exists if the user has successfully logged in.
This method is used by the AutoCRUD plugin and others to get code controlled config information, like table permissions for row level auth contro.
The method in this module returns an empty hash, making it safe to call this method from any Gantry subclass. If you want to do anything useful, you need to override this method in your controller.
Always returns Gantry::Control::Model::auth_users. Override this method if you want a different auth model.
Allows you to set the auth model name, but for this to work correctly, you must override get_auth_model_name. Otherwise your get request will always give the default value.
$self->test( 1 );
enable testing mode
$user_id = $self->user_id( model => '', user_name => '' ); $user_id = $self->user_id;
Returns the user_id for the given user_name or for the currently logged in user, if no user_name parameter is passed. The user_id corresponds to the user_name found in the auth_users table. The user_id is generally used for changelog entries and tracking user activity within an app.
By default, the first time you call user_id or user_row during a request, the model will be set. It will be set to the value you pass in as model or the value returned by calling
<$self-get_auth_model_name>>, if no model parameter is passed. This module has a get_auth_model_name that always returns 'Gantry::Control::Model::auth_users'. If you use a different model, override get_auth_model_name in your app's base module. We assume that your model has these methods: id and user_name.
$user_row = $self->user_row( model => '', user_name '' ); $user_row = $self->user_row;
The same as user_id, but it returns the whole model object (usually a representation of a database row).
If your models are based on DBIx::Class, or any other ORM which does not provide direct search calls on this models, you must implement a search method in your auth_users model like this:
sub search { my ( $class, $search_hash, $site_object, $extra_hash ) = @_; my $schema = $site_object->get_schema(); return $schema->resultset( 'auth_users' )->search( $search_hash, $extra_hash ); }
user_row calls this method, but DBIx::Class does not provide it for the model. Further, the search it does provide is available through the resultset obtained from the schema. This module knows nothing about schema, but it passes the self object as shown above so you can fish it out of the site object.
$self->page_title( 'Gantry is for you' ); $page_title = $self->page_title;
Set/get for the page title value. This page title is passed to the template and used for the HTML page title. This can be set in either the Apache LOCATION block or in the contoller.
$self->date_fmt( '%m %d, %Y' ); $fmt = $self->date_fmt;
Set/get for the date format value. Used within the application for the default date format display.
$self->post_max( '4M' ); $post_max = $self->post_max;
Set/get for the apache request post_max value. See Apache::Request or Apache2::Request docs.
$self->ap_req( api_call_to_apache ); $req = $self->ap_req;
Set/get for the apache request req object. See mod_perl documentation for intructions on how to use apache requets req.
Always returns the params (from forms and the query string) as a hash (not a hash reference, a real hash).
Always returns the unfiltered params (from forms and the query string) as a hash (not a hash reference, a real hash).
$self->params( $self->ap_req ); $params = $self->params;
Set/get for the request parameters. Returns a reference to a hash of key value pairs.
$self->uf_params( $self->ap_req ); $uf_params = $self->uf_params;
Set/get for the unfiltered request parameters. Returns a reference to a hash of key value pairs.
$self->serialize_params( [ array_ref of keys to exclude ], <separator> ); $self->serialize_params( [ 'page' ], '&' );
Returns a serialized string of request parameters. The default separator is '&'
$self->escape_html($value)
Replace any unsafe html characters with entities.
$self->unescape_html($value)
Unescape any html entities in the specified value.
$self->protocol( $ENV{HTTPS} ? 'https://' : 'http://' ); $protocol = $self->protocol;
Set/get for the request protocol. Value is either 'http://' or 'https://'. This is used to construct the full url to a resource on the local server.
Pass this the name of the instance and (optionally) the ganty.conf file where the conf for that instance lives. Returns whatever Gantry::Conf->retrieve returns.
For internal use. Makes a new stash. The old one is lost.
For internal use in cleaning up Data::Dumper output for presentation on the default custom_error page.
returns a true value (1) if client request is of post method.
Returns the currently configured value of gantry_secret or w3s3cR7 otherwise.
Not yet implemented. Currently you must code this in your model base class.
Dual use accessor so you can keep track of the base model class name when using DBIx::Class.
Call this as a class OR object method. Returns the namespace of the current app (which could be the name of the apps base module). The one in this module always returns 'Gantry'.
You need to implement this if you use a plugin that registers callbacks, so those callbacks will only be called for the apps that want the plugin. Otherwise, every app in your Apache server will have to use the plugin, even those that don't need it.
Currently, the only plugin that registers callbacks is AuthCookie.
Returns the current Gantry version number. Like using
$Gantry::VERSION but via a method.
Returns the name of the current do_ method (like 'do_edit').
Main stash object for Gantry
Gantry's native object relational model base class
DBIx::Class base class for models
Mixin providing get_schema which returns DBIx::Class::Schema for data models
Class::DBI base class for models
Helper for flexible CRUD coding scheme.
provides a more automated approach to CRUD (Create, Retrieve, Update, Delete) support
These module creates a couple calendar views that can be used by other applications and are highly customizeable.
This module is the binding between the Gantry framework and the mod_perl API. This particluar module contains the mod_perl 1.0 specific bindings.
See mod_perl documentation for a more detailed description for some of these bindings.
This module is the binding between the Gantry framework and the mod_perl API. This particluar module contains the mod_perl 2.0 specific bindings.
See mod_perl documentation for a more detailed description for some of these bindings.
This module is a library of useful access functions that would be used in other handlers, it also details the other modules that belong to the Control tree.
These functions wrap the common DBI calls to Databases with error checking.
This is recommended templating system in use by by Gantry.
This modules is used to to bypass a tempalting system and used if you prefer to output the raw text from within the controllers.
Implements HTML tags in a browser non-specfic way conforming to 3.2 and above HTML specifications.
This module supplies easy ways to make strings sql safe as well as allowing the creation of sql commands. All of these commands should work with any database as they do not do anything database specfic, well as far as I know anyways.
This module allows the validation of many common types of input.
Stand alone web server used for testing Gantry applications and for quick delopment of Gantry applications. This server is not recommended for production use.
Flexible configuration system for Gantry
perl(3), httpd(3), mod_perl(3)
Limitations are listed in the modules they apply to.
Please visit for project information, sample applications, documentation and mailing list subscription instructions.
Web:
Mailing List:
IRC: #gantry on irc.slashnet.org
Tim Keefer <[email protected]>
Phil Crow <[email protected]>
Gantry was branched from Krkit version 0.16 Sat Jun 11 15:27:28 CDT 2005
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.6 or, at your option, any later version of Perl 5 you may have available. | http://search.cpan.org/~tkeefer/Gantry-3.64/lib/Gantry.pm | CC-MAIN-2016-50 | refinedweb | 2,814 | 57.06 |
Hi iam stuck at the satge of pushing the image to the private bluemix registry. I get the usrname(bearer) prompt whenever I try to push the image. see the command and the output below. I have tried the solutions mentioned around this however it dosent seem to help. One of the threads did mention about api key. however iam not sure how to fetch the same. Create my own option under bluemix containers basically only lists the steps to create continer. Please help. I have almost wasted 2 days trying to figure out what that login credential needs to be.
$ docker push registry.ng.bluemix.net/elkimage The push refers to a repository [registry.ng.bluemix.net/elkimage] (len: 1) Sending image list
Please login prior to push: Username (bearer):
Are you following all the steps listed here?
And here?
I'm getting a similar problem... any ideas? I have a trivial docker file
[ibmcloud@analyticsadmin base]$ cat Dockerfile FROM docker.io/ubuntu:latest MAINTAINER Nigel Jones RUN echo "Imaged" > /tmp/image.txt
I build it with sudo docker build -t ubuntu .
then tag it with sudo docker tag ubuntu registry.eu-gb.bluemix.net/jonesnanalytics/ubuntu
I login with cf login
Then push with sudo docker push registry.eu-gb.bluemix.net/jonesnanalytics/ubuntu
I get the same prompt to login.
I'm new to bluemix/docker so user error is highly likely. Can you spot my error?
[Note: The post editor does not preserve newlines....]
I converted your comment/question from an answer.
Answer by MichaelHough (131) | Aug 10, 2015 at 07:33 AM
Hi. The image you're pushing in your example doesn't have your private namespace. You must specify the namespace when you tag your image. You can find out your namespace by logging in to containers and performing
cf ic namespace get or
ice namespace get.
You must tag your images in the format registry.ng.bluemix.net/your_namespace/your_image. So, for the image above, if your namespace were helloworld you must tag your image as such (assuming the image above is still tagged on your machine):
docker tag registry.ng.bluemix.net/elkimage registry.ng.bluemix.net/helloworld/elkimage
and then push your image with:
docker push registry.ng.bluemix.net/helloworld/elkimage
If you still get a login prompt, type
bearer and press enter. You should not be prompted for a password. If you do get prompted, please make sure you're logged in and your login token has not expired. Perform
cf ic login or
ice login to refresh it.
69 people are following this question.
IBM Container does not start application 2 Answers
Cannot start DB2 image in IBM Container 1 Answer
IBM Containers - privileged mode 1 Answer
IBM Containers - External Internet Access Connectivity failing on Container startup 2 Answers
BXNUI0110E: Could not allocate IBM Containers resources 2 Answers | https://developer.ibm.com/answers/questions/206977/ibm-containers.html?childToView=243040 | CC-MAIN-2019-39 | refinedweb | 480 | 59.09 |
BUILDING STRUTS BASED APPLICATIONS IN WSAD V5.1.2 TUTORIAL In this tutorial, you will develop a simple online registration application. The tutorial was tested on WSAD v5.1.2. Create the Projects Start WSAD. Switch to the J2EE perspective (from menu select Windows->Open Perspective->Other then select J2EE). Activate the J2EE Hierarchy view. Right click on Enterprise Applications and select New->Enterprise Application Project. Select the J2EE 1.3 option and click on Next. Set the project name to StrutsApp and click on Finish. Right click on Web Modules and select New->Dynamic Web Project. Set the project name to StrutsWeb. Select the Configure advanced options check box. Click on Next. In the EAR project drop down, select StrutsApp. Click on Next. Under the Web Project features, select Add Struts support. Click on Next. Uncheck Create a Web diagram for the project. Check Override default settings. Enter the following values. Default Java package prefix: com.webagesolutions.struts Resource bundle->Java package: com.webagesolutions.struts.resources Click on Finish. Click on Yes to switch to the Web perspective. Note the following about a Web project with Struts support enabled: The action servlet is automatically registered in web.xml. The URL mapping of the action servlet is *.do. The Struts custom tag library files are added to the WEB-INF folder. The Struts configuration file struts-config.xml is added to the WEB-INF folder. Build the View Make sure that you are in the Web perspective. In the Project Navigator view, right click on StrutsWeb and select New->JSP File. Set the file name to register.jsp. Notice, that the model for the new JSP is automatically set to Struts JSP. Click on Finish. System will open register.jsp in the editor. At the bottom of the editor, click on the Source tab. Notice that the struts-html and struts-bean tag libraries are already loaded. <%@ taglib Name:<BR> <html:text</html:text> <BR>Address:<BR> <html:text</html:text> <BR>City:<BR> <html:text</html:text> <BR>State:<BR> <html:text</html:text> <BR>Country:<BR> <html:text</html:text> <BR>ZIP:<BR> <html:text</html:text> <BR> <html:submit>Register Online</html:submit> </html:form> Save register.jsp and ignore the warning about the missing register.do action. Struts Refresher: When the html:form tag is executed, the tag’s code looks up the Struts action by the name specified in the action attribute of the tag. The code then locates the form bean name associated with the action and creates a new instance of the bean if one can not be located in the request or session scope. Form element tags such as html:text uses this bean instance to display the bean’s properties. Create another JSP file called thankyou.jsp. Enter a basic thank you message in that JSP file. Build the Model First, we will create the form bean class that will hold user input data. Right click on the StrutsWeb/Java Resources folder and select New->Other and then Web->Struts->ActionForm Class. Click on Next. Set the ActionForm class name to RegistrationFormBean. By default, the package name will be com.webagesolutions.struts.forms as set at the project level. Click on Next. The tool can automatically add attributes to the form bean based on a Struts form created in a JSP file. Follow the screenshot below to select all the form elements in the register.jsp page. Click on Finish. This will generate the RegistrationFormBean.java file and register the form bean in struts-config.xml with the name registrationFormBean. 
Open the RegistrationFormBean.java file and change the reset method as follows. public void reset(ActionMapping mapping, HttpServletRequest request) { name = ""; address = ""; city = ""; state = ""; country = "USA"; zip = ""; } The reset method is automatically called by the framework just before it transfers the input data from a HTTP request to the form bean properties. We will keep this tutorial simple and not develop a business logic layer. If you wish, you can develop a class called com.webagesolutions.struts.UserManager and add a method called registerUser() as follows. public void registerUser(RegistrationFormBean u) throws Exception { } Build the Controller We will develop a Struts action class called Register that will act as the controller. The controller will invoke the model logic available from the UserManager class. In case of success it will redirect the browser to thankyou.jsp. In case of failure, it will forward to register.jsp and display an error message. Right click on the Java Source folder and select New->Other and then Web->Struts->Action Class. Set the Action class name to Register. The package name should be com.webagesolutions.struts.actions by default. Click Next. Struts Refresher: Since the mapped name of the action is register and the action servlet’s URL map is *.do, the actual URL for the action is register.do. This is what we had specified in the action attribute of the form in register.jsp. Set the Form Bean Name to registerFormBean and set the scope to request. Click on Finish. System will create the Register.java file and add the action in the struts-config.xml. The new action class wizard allows basic configuration of the action class. In our case, we will need to open the struts-config.xml to set the input and forwards for the action. Double click on WEB-INF/struts-config.xml. Select the /register action. Set the Input: field to register.jsp. Click on the Local Fowards tab on the top. Select the /register action from the Action Mappings list. Under the Local Forwards list, click on Add. Set the forward name to success and hit Enter. Set the Path field to /thankyou.jsp. Check the Redirect checkbox. Similarly, add another forward as follows: Name: failure Path: /register.jsp Redirect: Not checked. Save and close struts-config.xml. Struts Refresher: If the form bean validation finds errors, the framework forwards to the JSP page specified in the input attribute of the action. If, on the other hand, the action class encounters errors, it can forward to a JSP page appropriate for the error. In either case, we need to be able to show error messages to the user from the forwarded JSP page. Error handling from an action is more pwoerful in the sense, we can forward to different error pages based on the error condition. Open the action class Register.java and set the perform method as follows. public ActionForward execute( ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { ActionErrors errors = new ActionErrors(); ActionForward forward = new ActionForward(); RegistrationFormBean reg = (RegistrationFormBean) form; if (!errors.empty()) { saveErrors(request, errors); forward = mapping.findForward("failure"); } else { forward = mapping.findForward("success"); } return (forward); } Test Right click on register.jsp and select run on Server. Select a WebSphere V5 server. Try submitting the form. We do not have error handling built in yet. 
In all cases, you should see the thank you page. Error Handling We will perform basic validation from the form bean class and more advanced validation from the action class. In the form bean class (RegisterFormBean.java) add a new method called checkEmpty as follows. private void checkEmpty(String param, ActionErrors errs, String msg) { if (param == null || param.trim().length() == 0) { errs.add(msg, new ActionError(msg)); } } Change the validate method as follows. public ActionErrors validate( ActionMapping mapping, HttpServletRequest request) { ActionErrors errors = new ActionErrors(); checkEmpty(getName(), errors, "name_missing"); checkEmpty(getAddress(), errors, "address_missing"); checkEmpty(getCity(), errors, "city_missing"); return errors; } Struts Refresher: Each ActionErrors object contains zero or more ActionError object. Each ActionError object contains a message key String (For example, name_missing and address_missing). Actual messages are stored in the application’s resource bundle. Open ApplicationResources.properties and enter the error messages as follows. # Optional header and footer for <errors/> tag. errors.header=<ul> errors.footer=</ul> name_missing=<li>Name missing city_missing=<li>City missing address_missing=<li>Address missing invalid_address=<li>Invalid address Next, we will show how to perform error handling from an action class. Usually, the model layer performs complex error checking (such as a product is out of inventory). It needs to communicate the error condition to the controller through exceptions or return codes. On error, the controller needs to forward to an error handler JSP. Open the action class (Register.java) and add validation code as follows. RegistrationFormBean reg = (RegistrationFormBean) form; if (reg.getAddress().length() < 5) { errors.add("invalid_address", new ActionError("invalid_address")); } Save changes. Open register.jsp and before the html:form tag display the error by adding this line. <html:errors/> Test Restart the StrutsApp enterprise application. Test for the error conditions. WASKB-003 BUILDING STRUTS BASED APPLICATIONS IN WSAD V5.1.2 TUTORIAL was last modified: November 9th, 2017 by admin | https://www.webagesolutions.com/knowledgebase/waskb/waskb003/ | CC-MAIN-2018-22 | refinedweb | 1,458 | 53.27 |
Data Infrastructure Management Software Discussions
As seen in the screenshot below I am building a workflow to deploy a NFS export(namespace) off a net new volume which I then create a mirror volume with export policy & rules with the final step being I want to mirror from primary(source) to secondary(destination). In my secondary_volume parameter I have it set to filter volume by key using the volume name parameter as $Volume_Name + "_mirror" which is the same as I use for the mirror volume creation which is working just fine. My workfow preview/planning fails on the Create SnapMirror step stating 'No results were found wiht the filter volumes by key. See below screen shot:
Thinking maybe I needed to use the variable for the Create volume 1 step I updated my secondary_volume parameter I have it set to filter volume by key using the step variable to create the secondary(mirror) volume. My workfow preview/planning fails on the Create SnapMirror step stating ' Failed to evaluate resource selector. Found variable - expected literal'. See below screen shot:
How do I go about adding the source(primary) volume name created NFSAuto and destination(secondary) volume name to the SnapMirror step parameters to get it to work? Thanks in advance for any assistance.
Solved!
See The Solution
Jimmy,
in the Details tab, do you have "Consider Reserved Elements" checked? Having it unchecked would explain the error you describe.
The second error you are getting is because you are using a dictionary object (like volume1) in a spot (the filter) where one of its attributes (volume1.name) would go. Appending .name to the entry field in the finder would fix the syntax, but you'd be right back at your first problem because without a reservation, the filter would again not return any result for that name.
The better way to refer back to the mirror volume would be to get rid of the finder (hit the x next to "Automatically searched") and just put its name in the entry box. Make sure you are actually putting in the dictionary object (volume1) and not one of its attributes (volume1.name).
Hope this helps. If it doesn't, please share your workflow. This would take some of the guesswork out of what your objects and variables are.
Christian
View solution in original post
Christian -
Thanks for the reply it pointed me in the right direction to get my workflow completed and working; much appreciated. I did have "Consider Reserved Elements" checked so I removed the automatically searched for secondary & primary volume and replaced it with the dictionary object. All is working now. Thanks again! | https://community.netapp.com/t5/Data-Infrastructure-Management-Software-Discussions/Create-Snapmirror-step-fails-using-user-input-variable-for-volume-export-create/m-p/120162/highlight/true | CC-MAIN-2021-10 | refinedweb | 441 | 59.23 |
Asked by:
Live from Redmond - Visual Studio LightSwitch Town Hall Thursday, September 12, 2013 | 8:00AM – 9:30AM (PDT, Redmond Time)
General discussion.
Join Online Meeting:
Join by Phone:
+14257063500
+18883203585
Find a local number:
Conference ID: 143610907
Please note that the meeting capacity is 250 participants. If you are unable to join, a recording will be available.
UPDATE: Thanks all for coming! You can listen to the recording here:
- Edited by Beth Massi - MicrosoftMicrosoft employee Thursday, September 12, 2013 5:23 PM updated with recording link
All replies
Thanks for arranging this Joe.
My immediate questions/interests are around future support for the following (in no particular order):
- HTML client side computed properties
- Occasionally connected/untethered HTML clients (I'm amazed at the demand for this: e.g. field workers visiting remote clients, shopkeepers that want POS systems to continue when their internet connection goes down, etc)
- How soon can we expect multiple HTML clients per LS project (thinking modular development)
- Multiple simultaneous units of work in the HTML client (like the individual tabs provide in the Silverlight client)
- Partial screen support (like Partial Views in ASP.NET MVC) and the ability to dynamically load those could be nice
- Support for hand-crafting screens or part screens instead of using screen designer? This could allow us to dynamically generate screens or part screens at run-time. Think about Dynamic CRM type scenarios where users can create custom entities or custom fields on entities.
- Copy & Paste entire Tree Nodes in screen designer
- Native TypeScript support for Screen and Entity code (e.g. is a TS version of msls.js coming?)
- Progress on allowing a fully responsive single UI across all devices incl large form factors
- Additional HTML Controls: Data Table/Grid, TreeView, Menu, Toolbars, Tabs, Etc
- Sharing scripts and other assets between multiple HTML clients (hooked up to TFS) is cumbersome today. Although not a LS-specific issue, the way I do this today is to put those in a separate web project and reference those at runtime from the other HTML client's default.htm files via "..." references to that web project. Is this the best way or is there something better (even in the works)?
- Renaming a LS project today (with all the included nested projects) is difficult and error prone in LS2013 Preview - will this be improved?
- What else is coming with Office365/SP/Azure hosting/integration (I'm particularly interested in seeing where this is heading)
- Any performance improvements/workarounds for list/tile controls that create many HTML DOM items with large virtual scrolling lists?
- Shaping Data: sometimes a normalized database makes it harder to manipulate that data from the client. RIA domain services allow us to shape data in ways that make client screen development easier. With the external database datasource you can do some of this with views. If we could create "custom entities" in the LS entity designer and then write server side code to populate/manipulate those on the server side it will allow us to shape data in ways that we can only do with RIA services or views today. I understand there might be technical challenges, but it would be very powerful.
- Binding screen queries to Drop Down controls in the HTML client. We have a work around today to bind it to a Choice List and then to inject options from a query at runtime, but it would be nice to support that out of the box.
I'll probably think of more before the meeting :)
Thanks in advance
Thanks for this Joe
I also have couple of question
1. Improvement of LightSwitch Silverlight Client Performance.
2. Additional default HTML Control similar like Silverlight Client.
3. Still its difficult to develop ERP kind of application (because of more no of table and screens) using lightswitch, what are the possibilities to overcome this.
4. Any improved lightswitch shell and theme for VS 2013.
5. The future of Lightswitch Silverlight client.
6. Any reporting solution for HTML Client?
Thanks in Advance
Rashmi Ranjan Panigrahi
If you found this post helpful, please “Vote as Helpful”. If it answered your question, please “Mark as Answer”.
This will help other users to find their answer quickly.
- Edited by babloo1436 Thursday, August 29, 2013 7:51 AM
Bravo Joe!
- Currently the HTML client builds out a single model.json file and included data.js, view.js as well as usercode.js that are loaded at the start of the application after the client application context is built. Will there be a way to utilize some form of composition of one or more LS applications through a Lazy Loading approach such that we don't pull down nor build out related entity and screens if the user has not called out from them at run time so that application loads fast and scales well for larger modular applications.
- Will there be an API to allow us to build our own Shell within the HTML client similar to the SL client, where we are provided information regarding navigation groups and related screens at run time through a series of events and which are secured through the framework.
- What in the way of advancements will will see for an HTML 5 Desktop client that makes better use of screen real-estate when applications are utilized with the traditional business application accessed via mouse and keyboard yet at the same time remain responsive to tablet and phone devices.
- What are your plans for screen and application authorization within the HTML client?
- What if any plans do you have in regards to application navigation and layout extensibility within the HTML client?
- Currently the msls core js manages a single document interface, will there ever be a MDI option. If not, is it possible to build a custom shell to produce a similar effect?
- Will there soon be solid documentation that addresses advanced interface API to help us extend the HTML client rather than us needing to post questions on the forum or getting how to information from 3rd party LightSwitch help sites which may not have full insight into how the framework all comes together.
- Will there be a Windows 8.1 XAML client for LightSwitch using the new XAML markup language that renders for all types of window 8 screen sizes? If so same questions (1-7) but for the Windows 8.1 XAML client.
Johnny Larue,
- Edited by John Kears Thursday, September 5, 2013 4:37 PM
Thanks in advance for soliciting suggestions and arranging this conference.
Vis-a-vis the request for more HTML controls: would it be possible to appropriate screen controls and additional code from the WinJS library, including possibly the observable and binding namespaces? Currently the LS databinding API seems too opinionated, and for complex entities is not as easy to work with custom controls.
Hi and thank's for giving us 'a voice' <g>
Technical questions:
LS + internationalization: Build in multiple language support via Database not via Ressource-file ?
LS: "real" Screen Designer for LS ? Using Blend for HTML ?
Converting MS Access and MS VB6 Screens as screen templates for using within LS ?
HTML LS + Typescript in VS - even more focus on Typescript integration as a standard incl. debugging etc.
Typescript should be integrated as VB or C# etc.
TypeScript Header for msls.js available (or better TypeScript sourcecode ?) as it is documented anyway.
HTML LS + ASP.NET_MVP ( ) as a standard ?
Reporting:
Reporting with LS ?
Reporting Services to support OData directly for better LS integration ?
HTML LS: integrating jsPDF ( )
HTML LS: Education: Special integrated Course programm (Webcasts ? ) containing
LS
HTML5
CSS3
Typescript incl. Javascript
Javascript Modules + LS integration
Testing
Deployment
German market Questions:
MS Germany does not push Lightswitch as it looks
No Blogs
Found just 1 video (in words: ONE) of creating a LS HTML client
Any changes planned ?
Azure + Cloud is super, but not all customers can convinced (NSA etc.) so:
Why not pushing the big Hosters (like 1und1.de - United internet) to offer Lightswitch publishing as well
1und1 does already offer a Hosting solution called "Windows Dual xxx" which contains 99 % of the needed things for LS
( - see under programming) -
but NOT Lightswitch because addidtional downloads and Settings needed
I already contacted server support, and they declined ... You as MS can push it, but not me as "mr. nobody"
At the moment i'd have to rent my own server (what i don't want)
Additionally: What 100% exact specification does a hoster need, if he wants to support Lightswitch ? (When NOT using automated Web download)
best regards from Germany
Klaus Oberdalhoff
Klaus Oberdalhoff Germany
- Edited by Klaus Oberdalhoff Thursday, September 5, 2013 2:58 PM
Thanks for this opportunity.
- Lightswitch reporting (from HTML client)
- the possibility of creating a more desktop oriented HTML client (grid component, command bar layout)
- more out-of-the-box user control/authentication (registration for new users) in HTML client
I'm mostly echoing others, but the more the merrier:
- Improved HTML shell and layout controls that can scale up to the desktop... I would like to have fairly tight control over how the application is displayed, much like the Silverlight client.
- Extensibility... I want to be able to make custom control extensions like I did in Silverlight and drop them in on the screen designer without having to code them in manually each time. Some tutorials like the current Silverlight LS extensibility ones on MSDN would be great.
I'll probably not make the call due to the time, will there be a replay available online?
Free Visual Studio LightSwitch extensions: Elyl's Extensions
Hi Joe,
Thank you for the opportunity to listen the community :)1. Using WPF for desktop applications OOB
2. Improve Shell SilverLight Client (leave it in the same way windows forms)
3. Improve the performance of SilverLight Client
4. Add new features to the Grid (grouping, export to excel, etc ...)
5. Integration with SSRS (SSRS Viewer for SilverLight Client)
6. Modular development (several projects with the same database)
7. HTML Client for Desktop
8. Encourage DevExpress migrate your controls silverlight for lightswitch
Hi Joe!
Great to hear this initiative from you. I have following questions:
- Using claims based identity (e.g. Azure ACS in conjugation with Microsoft Live/Google/Facebook accounts) with lightswitch.
- Possibility of using all screen metadata to convert an app into Windows RT app when it is not using any third party extension or controls.
Cheers!
Hi Joe,
I think the Lightswitch team need to find a solution to the "Search Box" that appear in all the screens because it only search the properties of type string.
There're situations where we need to search in columns
- that are computed properties
- that their type is not a string (for example a date or a number)
- that are located in a related table but are displayed in an AutoCompleteBox, even in this case sometime is useful to search in a column in the related table but isn't displayed in the AutoCompleteBox.
The current solution we have to this limitations is to create our own SearchBox but it creates inconsistencies with the SearchBox displayed by Lightswitch because there're two SearchBox in the same screen.
I think that one simple solution is to let the user to override the behavior of the SearchBox. I think the answer to this question may help LightSwitch 2011- search box
Joe,
I'm primarily interested in the following scenarios:
a) HTML client model with a SharePoint backend for data services and a SharePoint app model for a frontend
b) LS as the [MS] defacto standard middleware tool for making it very easy to create 100% OData compatible and high-performant OData servers - for broad client reach (e.g. Open Data services on a public sector public web site).
Specific Topics/Questions
1. Imagine a SharePoint list having different Content Types derived or extended from the same base content type. I want to have HTML screens that are dynamic: that the fields that are displayed/available to the user are dependent on the values the user enters for one (or more) of the fields common to each content type. This could be the form itself is dynamic or the tabs that appear across the top of the form are dynamic. In the latter case, the main tab corresponds to the fields in the base content type and the other tabs representing groups of fields from one of the child content types.
Related Forum Postings
2. In the LS Server, I want an easy way to modify the data entities between after it is pulled from SP and before it is passed on to become part of the output OData stream.
3. Server side performance the exceeds the needs of a high-volume public web site. LS is the only MS solution for serving up SharePoint data that is 100% OData compatible (short of trying to write custom, 100% OData compatible services from scratch (aka WCF)). SP2013 own REST services are not OData compatible.
4. Ability to create custom HTML screen layouts - re-usable across multiple VS LS solutions and projects.
5. When the LS server support for SharePoint doesn't include a field from a content type (e.g. RecurrenceData in a Calendar Event item), I want an easy to extend the LS server to add this additional capability without having to recreate a replacement for SharePoint's listdata.svc. Sort of related to point #2 above.
Related Forum Posting
6. Guaranteed 100% OData compatibility for the prevailing OData clients: Excel PowerPivot, Excel Power Query, others? I believe from an MS technology perspective, LS currents sets the "gold standard" for OData compatibility - please don't lose focus on this.
Michael Herman (Toronto)
Xpert Search Agents for Microsoft web sites:
- Edited by Michael Herman - Toronto Sunday, September 8, 2013 3:51 AM
I also welcome this effort (it was probably overdue).
I would like to see a roadmap concerning future releases. Since the Visual Studio Team switched to shorter release cycles it should be easier for the LightSwitch Team to release updates as part of an VS update. So hopefully there will be more activities in the future.
Peter Monadjemi
Hi Community Supporters!
Thanks for all of the great questions--please keep them coming. There's certainly more content than we can cover in the amount of time we have, so we'll provide a written response for the questions we don't get to live and post them here. Please try to attend in-person, though, because we'd really like to drill into some of the themes of feedback that surface in your questions.
With Visual Studio 2012 RC available, we can do a few short demos to answer some of the above questions and provide some perspective on where we're headed with the HTML client.
I hope you can make it!
Joe
Hi everyone,
Thanks for coming today!
Joe will be following up on this thread with a recap of the big themes and more answers to your questions that we didn't have time for. Give him a couple days, there were a lot of them :-)
If you couldn't make it, you can listen to the recording here:
Thanks again -- we hope you found this valuable.
-Beth
Senior Program Manager, Visual Studio Community | https://social.msdn.microsoft.com/Forums/vstudio/en-US/30059347-92da-4797-90aa-67414a4d77a9/live-from-redmond-visual-studio-lightswitch-town-hall-thursday-september-12-2013-800am?forum=lightswitch | CC-MAIN-2020-40 | refinedweb | 2,555 | 61.06 |
22, 2008 09:17 AM
As rich-Internet application (RIA) technologies mature, it is becoming increasingly important to integrate RIA, such as Adobe Flex applications, with robust server-side services. One of Java developers' favorite server-side frameworks, Spring, could play an important role in this process.
Marco Casario of RIAvolutionize the Web explains why he recommends BlazeDS to Spring integrate the Flex enterprise system, saying, “Spring is an open-source framework that helps make the developer's life easier. Using standard JEE approach, you'll tend to write a lot of code that is not useful or redundant or spend time implementing J2EE design patterns that are workarounds for technology limitations rather than real solutions. By cutting out these processes, Spring can save you a lot of time.”
Christophe Coenraets provides the rationale for integrating Flex with Spring:
The whole idea behind Spring Inversion of Control .
With respect to the integration of Flex, Spring, IBATIS and Cairngorm, Chris Giametta remarks:
I believe in creating a consistent, modular, and repeatable architecture. The architecture must be sufficient to support small applications as well as extremely robust enterprise applications. A key to project success is creating an architecture that new developers can rapidly integrate themselves into and begin to be productive on day one. I feel that Flex combined with Spring, iBATIS, and Cairngorm help me to quickly produce a patterned- based, repeatable architecture.
Sébastien Arbogast went to the effort of creating a blog series to demonstrate how to build a full stack of Flex, BlazeDS and Spring integration.
Arbogast's stack, from the bottom up, includes JBoss as the application server, MySQL as the data storage, Hibernate to help data access, Spring to build the business layer, BlazeDS as the remoting service and Flexe-genial for building rich client. The system was built using Maven with flex-compiler-mojo plug-in.
Arbogast says, “This project setup certainly requires a bit of work, but—setting aside a small issue with configuration file duplication that should be fixed soon—it's pretty clean, and flex-compiler-mojo works really great.”
Adobe® Rich Internet Application Project Portal
Download the Free Adobe® Flex® Builder 3 Trial
Give-away eBook – Confessions of an IT Manager
Download the Free Adobe® Flex® Builder 3 Trial
Adobe® Rich Internet Application Project Portal
The small issue mentioned in the quote has been fixed since the original publication of my articles and they have all been updated accordingly.
Since this is explicitly about combining Flex and Spring I'd like to mention the Cinnamon framework (). In addition to the features available in BlazeDS or other remoting frameworks which enable you to use Spring beans as remoting destinations, Cinnamon also let's you (optionally) use Spring to configure Cinnamon with a custom configuration namespace. In the most simple use case, adding remoting support for Flex clients to an existing Spring-based web application involves only two simple steps: Add the Cinnamon Servlet to web.xml and then add one tag to your existing bean definition:
<bean id="someService" class="example.SomeClass">
<cinnamon:export-service
</bean>
Furthermore Cinnamon adds a few features not available elsewhere:
It seems interesting but the first question that comes to mind is "why yet another AMF implementation?". GraniteDS existed as an open source alternative to LiveCycle Data Service before BlazeDS was open sourced. But what is the rationale behind cinnamon? Do you really think it's worth the migration?
"Why yet another AMF implementation?". Well, why don't you ask the Adobe guys, of these three products BlazeDS was the last one to be released. ;-)
Seriously, I'm not sure if I understand the question. In my previous comment I listed several Cinnamon features that are not available in other products like support for pure AS3 clients and deep integration with Spring. And since these were/are often requirements for our own projects we pretty much did not have another choice than to roll our own. GraniteDS was already available when we started last summer but it didn't seem easy to add these features to their existing architecture.
I think BlazeDS, GraniteDS and Cinnamon all have a slightly different focus so I'd say they all have their "right to exist". We know developers who used to work with GraniteDS and then changed to Cinnamon and are much happier now. I guess this might be the other way around for other devs. Why are there hundreds of Java web frameworks? ;-)
You can see an example of a real project based on Flex and Spring at
The fact that there are hundreds of web frameworks available is not necessarily a good reference since many developers are not exactly happy with that situation either. But I got your point.
I am also surprised by Cinnamon, since from my experience GraniteDS is a robust and cleanly designed AMF3 framework, with a community opened to external contributions. Have you discussed with Frank (GraniteDS project leader) to see if it was possible to integrate these functionnalities to GraniteDS ?
Spring integration is natively supported by GraniteDS, we use it from the beginning on Igenko, and it works very well. It also already supports Hibernate lazy loading.
To be honest I'm really puzzled by this mindset that just because there is a working solution each alternative is redundant. Tomcat works quite well, so whats the point of developing Jetty? Is CXF redundant because we have Axis2?
Yes, I briefly discussed with Franck about joining forces but we came to the conclusion that currently it's better to continue separately. It's not just about adding features, some of them might require architectural changes. With Spring integration it's not just about invoking Spring beans from Flex/Flash, we also support using Spring to configure Cinnamon, so the steps needed to expose an existing service for Flex/Flash clients are minimal for Spring users. Furthermore Cinnamon is not only a standalone product but also integrated with other Spicefactory Open Source projects like Parsley (which combines an AS3 IOC container with an MVC framework) and Pimento, the upcoming data management framework which will support a lot more for JPA/Hibernate than just lazy loading.
I mean, if you are happy with GraniteDS you should obviously stick with it but others might be happy to have choices.
Have you seen this?
Seems to me it connects Flex and Spring (Grails = Groovy (Java) + Spring + Hibernate) in a couple of simple steps.
All it needs right now is a good scaffolding Flex generation.
Rodrigo
Prana is another IoC container for Flex - one which is fast and is a very close port of the spring core to Flex. One more thing - be sure to secure your endpoints. | http://www.infoq.com/news/2008/05/integrate-flex-spring | crawl-002 | refinedweb | 1,118 | 50.46 |
MEAN Stack: Build an App with Angular 2+ and the Angular CLI
The MEAN stack comprises advanced technologies used to develop both the server-side and the client-side of a web application in a JavaScript environment. The components of the MEAN stack include the MongoDB database, Express.js (a web framework), Angular (a front-end framework), and the Node.js runtime environment. Getting to grips with the MEAN stack and familiarizing yourself with the different JavaScript technologies along the way will help you become a full-stack JavaScript developer.
JavaScript’s sphere of influence has dramatically grown over the years and with that growth, there’s an ongoing desire to keep up with the latest trends in programming. New technologies have emerged and existing technologies have been rewritten from the ground up (I’m looking at you, Angular).
In this tutorial we'll build a MEAN application from scratch; it also serves as an update to the original MEAN stack tutorial. If you're familiar with MEAN and want to get started with the coding, you can skip to the overview section.
Introduction to the workflow
Now that we’re acquainted with the pieces of the MEAN puzzle, let’s see how we can fit them together, shall we?
Overview
Here's a high-level overview of our application. We'll build a bucket list app: an Express server exposes a REST API at the /bucketlist endpoint, with GET, POST, and DELETE routes for reading, creating, and deleting bucket list items, persisting them in MongoDB via Mongoose. An Angular application then consumes that API from the browser.
We'll use the Angular CLI to generate and later build the front-end part of the application, so let's install it globally first. If you're not familiar with the Angular CLI yet, make sure you check out The Ultimate Angular CLI Reference.
npm install -g @angular/cli
Create a new directory for our bucket list project. That's where both the front-end and the back-end code will go. We could dump all of our server-side code into a single file, but then our codebase would be a complete mess. Instead, we're going to do this the MVC way (Model, View, and Controller)—minus the view part.
MVC is an architectural pattern that separates your models (the back end) and views (the UI) from the controller (everything in between), hence MVC. Since Angular will take care of the front end for us, we’ll have three directories, one for models and another one for controllers, and a public directory where we’ll place the compiled angular code.
In addition to this, we’ll’ll declare our dependencies inside the
package.json file. For this project we’ll’ll’s the final version of our app.js file.
// We’ll declare all our dependencies here const express = require('express'); const path = require('path'); const bodyParser = require('body-parser'); const cors = require('cors'); const mongoose = require('mongoose'); const config = require('./config/database'); const bucketlist = require('./controllers/bucketlist'); //Connect mongoose to our database mongoose.connect(config.database); //Declaring Port const port = 3000; //Initialize our app variable const app = express(); //Middleware for CORS app.use(cors()); //Middlewares'))); app.get('/', (req,res) => { res.send("Invalid page"); }) //Routing all HTTP requests to /bucketlist to bucketlist controller app.use('/bucketlist',bucketlist); //Listen to port 3000 app.listen(port, () => { console.log(`Starting the server at port ${port}`); });
As previously highlighted in the overview, our awesome bucket list app will have routes to handle HTTP requests with GET, POST, and DELETE methods. Here’s a bare-bones controller with routes defined for the GET, POST, and DELETE methods.
//Require the express package and use express.Router() const express = require('express'); const router = express.Router(); //GET HTTP method to /bucketlist router.get('/',(req,res) => { res.send("GET"); }); //POST HTTP method to /bucketlist router.post('/', (req,res,next) => { res.send("POST"); }); //DELETE HTTP method to /bucketlist. Here, we pass in a params which is the object id. router.delete('/:id', (req,res,next)=> { res.send("DELETE"); }) module.exports = router;
I’d recommend using Postman app or something similar to test your server API. Postman has a powerful GUI platform to make your API development faster and easier. Try a GET request on and see whether you get the intended response.
And as obvious as it seems, our application lacks a model. At the moment, our app doesn’t have a mechanism to send data to and retrieve data from our database.
Create a
list.js model for our application and define the bucket list Schema as follows:
//Require mongoose package const mongoose = require('mongoose'); //Define BucketlistSchema with title, description and category const BucketlistSchema = mongoose.Schema({ title: { type: String, required: true }, description: String, category: { type: String, required: true, enum: ['High', 'Medium', 'Low'] } });
When working with mongoose, you have to first define a Schema. We have defined a
BucketlistSchema with three different keys (title, category, and description). Each key and its associated
SchemaType defines a property in our MongoDB document. If you’re wondering about the lack of an
id field, it’s because we’ll be using the default
_id that will be created by Mongoose.
Mongoose assigns each of your schemas an
_idfield by default if one is not passed into the Schema constructor. The type assigned is an ObjectId to coincide with MongoDB’s default behavior.
You can read more about it in the Mongoose Documentation
However, to use our Schema definition we need to convert our
BucketlistSchema to a model and export it using module.exports. The first argument of
mongoose.model is the name of the collection that will be used to store the data in MongoDB.
const BucketList = module.exports = mongoose.model('BucketList', BucketlistSchema );
Apart from the schema, we can also host database queries inside our BucketList model and export them as methods.
//BucketList.find() returns all the lists module.exports.getAllLists = (callback) => { BucketList.find(callback); }
Here we invoke the
BucketList.find method which queries the database and returns the BucketList collection. Since a callback function is used, the result will be passed over to the callback.
Let’s fill in the middleware corresponding to the GET method to see how this fits together.
const bucketlist = require('../models/List'); //GET HTTP method to /bucketlist router.get('/',(req,res) => { bucketlist.getAllLists((err, lists)=> { if(err) { res.json({success:false, message: `Failed to load all lists. Error: ${err}`}); } else { res.write(JSON.stringify({success: true, lists:lists},null,2)); res.end(); } }); });
We’ve invoked the
getAllLists method and the callback takes two arguments, error and result..
— MongoDB Documentation
Similarly, let’s add the methods for inserting a new list and deleting an existing list from our model.
//newList.save is used to insert the document into MongoDB module.exports.addList = (newList, callback) => { newList.save(callback); } //Here we need to pass an id parameter to BUcketList.remove module.exports.deleteListById = (id, callback) => { let query = {_id: id}; BucketList.remove(query, callback); }
We now need to update our controller’s middleware for POST and DELETE also.
//POST HTTP method to /bucketlist router.post('/', (req,res,next) => { let newList = new bucketlist({ title: req.body.title, description: req.body.description, category: req.body.category }); bucketlist.addList(newList,(err, list) => { if(err) { res.json({success: false, message: `Failed to create a new list. Error: ${err}`}); } else res.json({success:true, message: "Added successfully."}); }); }); //DELETE HTTP method to /bucketlist. Here, we pass in a param which is the object id. router.delete('/:id', (req,res,next)=> { //access the parameter which is the id of the item to be deleted let id = req.params.id; //Call the model method deleteListById bucketlist.deleteListById(id,(err,list) => { if(err) { res.json({success:false, message: `Failed to delete the list. Error: ${err}`}); } else if(list) { res.json({success:true, message: "Deleted successfully"}); } else res.json({success:false}); }) });
With this, we have a working server API that lets us create, view, and delete the bucket list. You can confirm that everything is working as intended by using Postman.
We’ll now move on to the front end of the application using Angular.
Building the Front End Using Angular
Let’s generate the front-end Angular application using the Angular CLI tool that we set up earlier. We’ll name it
angular-src and place it under the awesome-bucketlist directory.
ng new angular-src
We now have the entire Angular 2 structure inside our awesome-bucketlist directory. Head over to the
.angular-cli.json and change the ‘outDir’ to “../public”.
The next time you run
ng build — which we’ll do towards the end of this tutorial — Angular will compile our entire front end and place it in the public directory. This way, you’ll have the Express server and the front end running on the same port.
But for the moment,
ng serve is what we need. You can check out the boilerplate Angular application over at.
The directory structure of our Angular application looks a bit more complex than our server’s directory structure. However, 90% of the time we’ll be working inside the src/app/ directory. This will be our work space, and all of our components, models, and services will be placed inside this directory. Let’s have a look at how our front end will be structured by the end of this tutorial.
Creating Components, a Model, and a Service
Let’s take a step-by-step approach to coding our Angular application. We need to:
- create two new components called
ViewListComponentand
AddListComponent
- create a model for our
List, which can then be imported into our components and services
- generate a service that can handle all the HTTP requests to the server
- update the
AppModulewith our components, service, and other modules that may be necessary for this application.
You can generate components using the
ng generate component command.
ng generate component AddList ng generate component ViewList
You should now see two new directories under the src/app folder, one each for our newly created components. Next, we need to generate a service for our
List.
ng generate service List
I prefer having my services under a new directory(inside
src/app/).
mkdir services mv list.service.ts services/
Since we’ve changed the location of
list.service.ts, we need to update it in our
AppModule. In short,
AppModule is the place where we’ll declare all our components, services, and other modules.
The generate command has already added our components into the
appModule. Go ahead and import
ListService and add it to the
providers array. We also need to import
FormsModule and
HTTPModule and declare them as imports.
FormsModule is needed to create the form for our application and
HTTPModule for sending HTTP requests to the server.
import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { HttpModule } from '@angular/http'; import { FormsModule} from '@angular/forms'; import { AppComponent } from './app.component'; import { AddListComponent } from './add-list/add-list.component'; import { ViewListComponent } from './view-list/view-list.component'; import { ListService } from './services/list.service'; @NgModule({ declarations: [ AppComponent, AddListComponent, ViewListComponent ], //Modules go here imports: [ BrowserModule, HttpModule, FormsModule ], //All the services go here providers: [ListService], bootstrap: [AppComponent] }) export class AppModule { }
Now we are in a position to get started with our components. Components are the building blocks in an Angular 2 application. The
AppComponent is the default component created by Angular. Each component consists of:
- a TypeScript class that holds the component logic
- an HTML file and a stylesheet that define the component UI
- an
@Componentdecorator, which is used to define the metadata of the component.
We’ll keep our
AppComponent untouched for the most part. Instead, we’ll use the two newly created components,
AddList and
ViewList, to build our logic. We’ll nest them inside our
AppComponent as depicted in the image below.
We now have a hierarchy of components — the
AppComponent at the top, followed by
ViewListComponent and then
AddListComponent.
/*app.component.html*/ <!--The whole content below can be removed with the new code.--> <div style="text-align:center"> <h1> {{title}}! </h1> <app-view-list> </app-view-list> </div>
/*view-list.component.html*/ <app-add-list> </app-add-list>
Create a file called
List.ts under the models directory. This is where we’ll store the model for our
List.
/* List.ts */ export interface List { _id?: string; title: string; description: string; category: string; }
View-List Component
The
ViewListComponent’ component’s logic includes:
listsproperty that is an array of
Listtype. It maintains a copy of the lists fetched from the server. Using Angular’s binding techniques, component properties are accessible inside the template.
loadLists()loads all the lists from the server. Here, we invoke
this.ListSev.getAllLists()method and
getAllLists()is a service method (we haven’t defined it yet) that performs the actual
http.getrequest and returns the list;
loadLists()then loads it into the Component’s list property.
deleteList(list)handles the deletion procedure when the user clicks on the Delete button. We’ll call the List service’s
deleteListmethod with
idas the argument. When the server responds that the deletion is successful, we call the
loadLists()method to update our view.
/*view-list.component.ts*/ import { Component, OnInit } from '@angular/core'; import { ListService } from '../services/list.service'; import { List } from '../models/List' @Component({ selector: 'app-view-list', templateUrl: './view-list.component.html', styleUrls: ['./view-list.component.css'] }) export class ViewListComponent implements OnInit { //lists propoerty which is an array of List type private lists: List[] = []; constructor(private listServ: ListService) { } ngOnInit() { //Load all list on init this.loadLists(); } public loadLists() { //Get all lists from server and update the lists property this.listServ.getAllLists().subscribe( response => this.lists = response,) } //deleteList. The deleted list is being filtered out using the .filter method public deleteList(list: List) { this.listServ.deleteList(list._id).subscribe( response => this.lists = this.lists.filter(lists => lists !== list),) } }
The template (
view-list.component.html) should have the following code:
<h2> Awesome Bucketlist App </h2> <!-- Table starts here --> <table id="table"> <thead> <tr> <th>Priority Level</th> <th>Title</th> <th>Description</th> <th> Delete </th> </tr> </thead> <tbody> <tr * <td>{{list.category}}</td> <td>{{list.title}}</td> <td>{{list.description}}</td> <td> <button type="button" (click)="deleteList(list); $event.stopPropagation();">Delete</button></td> </tr> </tbody> </table> <app-add-list> </app-add-list>
We’ve created a table to display our lists. There’s a bit of unusual code in there that is not part of standard HTML. Angular has a rich template syntax that adds a bit of zest to your otherwise plain HTML files. The following is part of the Angular template syntax.
- The
*ngFordirective lets you loop through the
listsproperty.
- Here
listis a template variable whereas
listsis the component property.
- We have then used Angular’s interpolation syntax
{{ }}to bind the component property with our template.
- The event binding syntax is used to bind the click event to the
deleteList()method.
We’re close to having a working bucket list application. Currently, our
list.service.ts is blank and we need to fill it in to make our application work. As highlighted earlier, services have methods that communicate with the server.
/*list.service.ts*/ import { Injectable } from '@angular/core'; import { Http,Headers } from '@angular/http'; import {Observable} from 'rxjs/Observable'; import { List } from '../models/List' import 'rxjs/add/operator/map'; @Injectable() export class ListService { constructor(private http: Http) { } private serverApi= ''; public getAllLists():Observable<List[]> { let URI = `${this.serverApi}/bucketlist/`; return this.http.get(URI) .map(res => res.json()) .map(res => <List[]>res.lists); } public deleteList(listId : string) { let URI = `${this.serverApi}/bucketlist/${listId}`; let headers = new Headers; headers.append('Content-Type', 'application/json'); return this.http.delete(URI, {headers}) .map(res => res.json()); } }
The underlying process is fairly simple for both the methods:
- we build a URL based on our server address
- we create new headers and append them with
{ Content-Type: application/json }
- we perform the actual
http.get/http.deleteon the URL
- We transform the response into
jsonformat.
If you’re not familiar with writing services that communicate with the server, I would recommend reading the tutorial on Angular and RxJS: Create an API Service to Talk to a REST Backend.
Head over to to ensure that the app is working. It should have a table that displays all the lists that we have previously created.
Add-List Component
We’re missing a feature, though. Our application lacks a mechanism to add/create new lists and automatically update the
ViewListComponent when the list is created. Let’s fill in this void.
The
AddListComponent’s template is the place where we’ll put the code for our HTML form.
<div class="container"> <form (ngSubmit)="onSubmit()"> <div> <label for="title">Title</label> <input type="text" [(ngModel)]="newList.title" name="title" required> </div> <div> <label for="category">Select Category</label> <select [(ngModel)]="newList.category" name = "category" > <option value="High">High Priority</option> <option value="Medium">Medium Priority</option> <option value="Low">Low Prioirty</option> </select> </div> <div> <label for="description">description</label> <input type="text" [(ngModel)]="newList.description" name="description" required> </div> <button type="submit">Submit</button> </form> </div>
Inside our template, you can see several instances of
[(ngModel)] being used. The weird-looking syntax is a directive that implements two-way binding in Angular. Two-way binding is particularly useful when you need to update the component properties from your view and vice versa.
We use an event-binding mechanism (
ngSubmit) to call the
onSubmit() method when the user submits the form. The method is defined inside our component.
/*add-list.component.ts*/ import { Component, OnInit } from '@angular/core'; import { List } from '../models/List'; import { ListService } from '../services/list.service'; @Component({ selector: 'app-add-list', templateUrl: './add-list.component.html', styleUrls: ['./add-list.component.css'] }) export class AddListComponent implements OnInit { private newList :List; constructor(private listServ: ListService) { } ngOnInit() { this.newList = { title: '', category:'', description:'', _id:'' } } public onSubmit() { this.listServ.addList(this.newList).subscribe( response=> { if(response.success== true) //If success, update the view-list component }, ); } }
Inside
onSubmit(), we call the listService’s
addList method that submits an
http.post request to the server. Let’s update our list service to make this happen.
/*list.service.ts*/ public addList(list: List) { let URI = `${this.serverApi}/bucketlist/`; let headers = new Headers; let body = JSON.stringify({title: list.title, description: list.description, category: list.category}); console.log(body); headers.append('Content-Type', 'application/json'); return this.http.post(URI, body ,{headers: headers}) .map(res => res.json()); } }
If the server returns
{ success: true }, we need to update our lists and incorporate the new list into our table.
However, the challenge here is that the
lists property resides inside the
ViewList component. We need to notify the parent component that the list needs to be updated via an event. We use
EventEmitter and the
@Output decorator to make this happen.
First, you need to import
Output and
EventEmitter from
@angular/core.
import { Component, OnInit, Output, EventEmitter } from '@angular/core';
Next, declare EventEmitter with the
@Output decorator.
@Output() addList: EventEmitter<List> = new EventEmitter<List>();
If the server returns
success: true, emit the
addList event.
public onSubmit() { console.log(this.newList.category); this.listServ.addList(this.newList).subscribe( response=> { console.log(response); if(response.success== true) this.addList.emit(this.newList); }, ); }
Update your
viewlist.component.html with this code.
<app-add-list (addList)='onAddList($event)'> </app-add-list>
And finally, add a method named
onAddList() that concatenates the newly added list into the
lists property.
public onAddList(newList) { this.lists = this.lists.concat(newList); }
Finishing Touches
I’ve added some styles from bootswatch.com to make our bucket list app look awesome.
Build your application using:
ng build
As previously mentioned, the build artifacts will be stored in the public directory. Run
npm start from the root directory of the MEAN project. You should now have a working MEAN stack application up and running at
Wrapping It Up
We’ve covered a lot of ground in this tutorial, creating a MEAN stack application from scratch. Here’s a summary of what we did in this tutorial. We:
- created the back end of the MEAN application using Express and MongoDB
- wrote code for the GET/POST and DELETE routes
- generated a new Angular project using Angular CLI
- designed two new components,
AddListand
ViewList
- implemented the application’s service that hosts the server communication logic.
And that’s it for now. I hope you enjoyed reading. I’d love to read your thoughts and feedback in the comments below!
Recommended Courses
This article was peer reviewed by Jurgen Van de Moere. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! | https://www.sitepoint.com/mean-stack-angular-2-angular-cli/ | CC-MAIN-2019-47 | refinedweb | 3,383 | 50.94 |
ok I got this working once before. Off the net I grabbed a stdbool.h header file that was recommended to use with VS. In admin mode I plopped it in Microsoft Visual Studio 9.0/VC/ include folder and presto! I was able to use stdbool.h without a hitch.
I had to do a system restore about a week ago. And setting everything back up again. Now when I try the above stunt, I get this error from VS:
This sorta thing is so frustrating. gah. Here is the stdbool.h header code:This sorta thing is so frustrating. gah. Here is the stdbool.h header code:Code:------ Build started: Project: Program 7.4, Configuration: Debug Win32 ------ Embedding manifest... .\Debug\Program 7.4.exe.intermediate.manifest : general error c1010070: Failed to load and parse the manifest. The system cannot find the file specified. Build log was saved at ":\Documents and Settings\Cruisin'\My Documents\Visual Studio 2008\Projects\Program 7.4\Program 7.4\Debug\BuildLog.htm" Program 7.4 - 1 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
Anyone have any ideas what I'm overlooking? And NO I do not want to compile it as a cpp program. I want to just hit the debug button and go if I need to.Anyone have any ideas what I'm overlooking? And NO I do not want to compile it as a cpp program. I want to just hit the debug button and go if I need to.Code:#ifndef STDBOOL_H_ #define STDBOOL_H_ /** * stdbool.h - ISO C99 Boolean type * Author - Bill Chatfield * E-mail - bill underscore chatfield at yahoo dot com * Copyright - You are free to use for any purpose except illegal acts * Warrenty - None: don't blame me if it breaks something * * In ISO C99, stdbool.h is a standard header and _Bool is a keyword, but * some compilers don't offer these yet. This header file is an * implementation of the standard ISO C99 stdbool.h header file. It checks * for various compiler versions and defines things that are missing in * those versions. * * The GNU and Watcom compilers include a stdbool.h, but the Borland * C/C++ 5.5.1 compiler and the Microsoft compilers do not. * * See for compile macros. */ /** * Borland C++ 5.5.1 does not define _Bool. */ #ifdef __BORLANDC__ typedef int _Bool; #endif /** * Microsoft C/C++ version 14.00.50727.762, which comes with Visual C++ 2005, * and version 15.00.30729.01, which comes with Visual C++ 2008, do not * define _Bool. */ #if defined(_MSC_VER) && _MSC_VER <= 1500 typedef int _Bool; #endif /** * Define the Boolean macros only if they are not already defined. */ #ifndef __bool_true_false_are_defined #define bool _Bool #define false 0 #define true 1 #define __bool_true_false_are_defined 1 #endif #endif /*STDBOOL_H_*/ | https://cboard.cprogramming.com/c-programming/138189-stdbool-h-visual-studio-cplusplus-express-2008-standard-c-code.html | CC-MAIN-2018-05 | refinedweb | 466 | 69.99 |
Component family XML Basic settings Property type Either Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored.When this file is selected, the fields that follow are pre-filled in using fetched data. Schema type and Edit Schema A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.. XML field Name of the XML field to be processed. Related topic: see Talend Studio User Guide. Loop XPath query Node of the XML tree, which the loop is based on. Mapping Column: reflects the schema as defined by the Schema type field. XPath Query: Enter the fields to be extracted from the structured input. Get nodes: Select this check box to recuperate the XML content of all current nodes specified in the Xpath query list or select the check box next to specific XML nodes to recuperate only the content of the selected nodes. Limit Maximum number of rows to be processed. If Limit is 0, no rows are read Ignore the namespaces Select this check box to ignore namespaces when reading and extracting the XML data. tStatCatcher Statistics Select this check box to gather the Job processing metadata at intermediate component. It needs an input and an output | https://help.talend.com/reader/hCrOzogIwKfuR3mPf~LydA/GL9OELny4Px8OqD~iAvCUA | CC-MAIN-2020-24 | refinedweb | 231 | 65.52 |
The C library function double acos(double x) returns the arc cosine of x in radians.
Following is the declaration for acos() function.
double acos(double x)
x − This is the floating point value in the interval [-1,+1].
This function returns principal arc cosine of x, in the interval [0, pi] radians.
The following example shows the usage of acos() function.
#include <stdio.h> #include <math.h> #define PI 3.14159265 int main () { double x, ret, val; x = 0.9; val = 180.0 / PI; ret = acos(x) * val; printf("The arc cosine of %lf is %lf degrees", x, ret); return(0); }
Let us compile and run the above program that will produce the following result −
The arc cosine of 0.900000 is 25.855040 degrees | http://www.tutorialspoint.com/c_standard_library/c_function_acos.htm | CC-MAIN-2020-10 | refinedweb | 126 | 69.48 |
The for loop allows one to execute a set of the instructions until a condition is met.
It has the following syntax:
for (initialization; test-condition; increment/decrement) { Statement; }
The test condition is similar to the test condition that we use in an if statement.
Consider the following example where we will perform an operation similar to below.
1+2+3+4+5+6+7+8+9+10=55
#include <iostream> using namespace std; int main() { int sum = 0; for (int i = 1; i <= 10; ++i) sum += i; cout << "Sum of 1 to 10 inclusive is " << sum << endl; return 0; }
The initialization is done within the for statement setting i = 1. On each loop, a sequence of number will be added to the variable sum.
Notice we have been able to define i within the for loop itself. | http://codecrawl.com/2015/01/02/cplusplus-for-loop/ | CC-MAIN-2016-44 | refinedweb | 138 | 58.62 |
#include <stdio.h> void bits_uint(unsigned int value) { unsigned int bit; for ( bit = /* msb */(~0U >> 1) + 1; bit > 0; bit >>= 1 ) { putchar(value & bit ? '1' : '0'); } putchar('\n'); } int main(void) { unsigned int u; /* * Using this code to explain itself a little -or- what is '(~0U >> 1) + 1'? * Short answer: this expression sets only the most significant bit (MSB) * of an unsigned integer. */ u = 0U ; printf(" 0U : "); bits_uint(u); u = ~0U ; printf(" ~0U : "); bits_uint(u); u = ~0U >> 1 ; printf(" ~0U >> 1 : "); bits_uint(u); u = (~0U >> 1) + 1; printf("(~0U >> 1) + 1 : "); bits_uint(u); return 0; } /* my output (spaces edited for display here) 0U : 00000000000000000000000000000000 ~0U : 11111111111111111111111111111111 ~0U >> 1 : 01111111111111111111111111111111 (~0U >> 1) + 1 : 10000000000000000000000000000000 */
Are you able to help answer this sponsored question?
Questions asked by members who have earned a lot of community kudos are featured in order to give back and encourage quality replies. | https://www.daniweb.com/programming/software-development/code/216408/bits-part-1 | CC-MAIN-2018-13 | refinedweb | 145 | 60.99 |
First time here? Check out the FAQ!
I'm confused about using floating ip address. when I launch instance using gui, I just assigned private and public network interface at once. I'm sorry about my mistake.
So, I launch instance with only private network interface, and then assign floating ip from external network. as a result, i can acess to vm from external network and vice versa.
I can aware of my mistake using your ubuntu image.
After assign floating ip to instance, i finally found that ip address assigned to interface and nat forwaring rule from qrouter namespace.
Thanks for your help. :) And ovs-ofctl result still shows LINK_DOWN status.
Thanks for your answer.
I found out that private IP is assigned to instance and can ssh instace with my user account (not root).
Instance have two interface but only private interface is up. I tried to give public ip, but can't get root permission.
I bulit suse image from susestudio with some root password, but i don't know why root password not works.
maybe it's not an neutron problem,, I'm trying to find out any problem in nova or instance image.
I have updated content what you mentioned. ovs qr and qg interface LINK_DOWN status is normal behavior because of netns? I've tried to manually create namespace and ovs interface, but link may also change to down status right after move ovs interface to spcific namespace.
Hi, All.
I have deployed Openstack icehouse version on RHEL 6.5 for test lab followed by official guideline.
After neutron configuration and initiate VM, I can't access to floating IP of VM.
Nova instance have two interface for public and private network, I can't access neither of them.
So, I can't reach instance. I have tried restart all things and re-configure neutron network.
I'm not really familiar with openvswitch and networking, I have thought that i'm missing some important thing.
Please help me.
I have two node configuration,
first node act as controller, network node and compute node.
and second node act as compute node.
Each node have two network interface, one for external and other for private.
**Controller Node and network node and compute node 1 (with External network IP -10.183.120.170)
[root@osnode01 neutron]# ifconfig -a
br-ex Link encap:Ethernet HWaddr 68:B5:99:C7:D2:8C
inet addr:10.183.120.170 Bcast:10.183.120.191 Mask:255.255.255.192
inet6 addr: fe80::fc72:2dff:fe52:4709/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:4698305 errors:0 dropped:0 overruns:0 frame:0
TX packets:4369482 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1803665841 (1.6 GiB) TX bytes:1833241255 (1.7 GiB)
br-int Link encap:Ethernet HWaddr 8E:B9:63:DC:EE:45
inet6 addr: fe80::2c21:cff:fe90:d4fa/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:536 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:32184 (31.4 KiB) TX bytes:468 (468.0 b)
br-tun Link encap:Ethernet HWaddr D6:F1:17:0D:CA:4E
inet6 addr: fe80::8024:fdff:fe14:af81/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:230 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:15164 (14.8 KiB) TX bytes:468 (468.0 b)
eth0 Link encap:Ethernet HWaddr 68:B5:99:C7:D2:8C
inet6 addr: fe80::6ab5:99ff:fec7:d28c/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:5101773 errors:0 dropped:0 overruns:0 frame:0
TX packets:4779571 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1851895406 (1.7 GiB) TX bytes:1879660396 (1.7 GiB)
eth1 Link encap:Ethernet HWaddr 68:B5:99:C7:D2:8E
inet addr:192.168.255.10 Bcast:192.168.255.255 Mask:255.255.255.0
inet6 addr: fe80::6ab5:99ff:fec7:d28e/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:792 errors:0 dropped:0 overruns:0 frame:0
TX packets:812 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 ...
[root@osnode01 neutron]# ifconfig -a
OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license. | https://ask.openstack.org/en/users/6348/gyusik-hwang/?sort=recent | CC-MAIN-2020-50 | refinedweb | 783 | 58.79 |
Love that ASP.NET and SQL ServerBen Miller's Developer Experiences Evolution Platform Developer Build (Build: 5.6.50428.7875)2004-09-14T03:16:00ZMoving on... have left Microsoft and am pursuing more in the SQL Server space. I am currently working in Salt Lake City for the LDS Church as a Sr. SQL Server DBA. My new blog home for all things Ben Miller will be at (DBA for what I do and Duck for who I am). You can also join me on IM at dbaduck AT hotmail dot com. I thoroughly enjoyed my time at Microsoft and in the MVP Program. I will not soon lose touch with the MVPs and I will be very involved in SQL Server and ASP.NET still...(<a href="">read more</a>)<img src="" width="1" height="1">benmiller Spring Code Camp - see you there will be keynoting the event and will see you in SLC. What: Utah Spring Code Camp When: April 14 th 2007 9:00-5:00 Where: Neumont University Registration: The local .NET Users Group and SQL Server Users Group is conducting a “Code Camp” for local software programmers next month at Neumont University. The code camp is by the community for the community. Always free and Always for the community. We will have Sessions on .NET, SQL Server, and...(<a href="">read more</a>)<img src="" width="1" height="1">benmiller Studio 2005 SP1 Update for Windows Vista....(<a href="">read more</a>)<img src="" width="1" height="1">benmiller Dash and Voice Command<p>I have an MDA and just recently bought a <a href="">Dash</a>. </p><p>I have a Bluetooth headset (<a href="">Motorola H700</a>) and with the <a href="">Dash</a> and the H700, if you push the bluetooth headset button, you get VoiceDial, and not VoiceCommand. This is not really helpful since I need VoiceCommand because I am so used to it with my MDA, that I have come to rely on it. </p>. </p><p. </p><p>I thought I would share as it was a little fun for me to find that I could get to my VoiceCommand while driving, etc. Not to mention, that I can have voice tags for applications. </p><p>Happy Mobiling.</p><img src="" width="1" height="1">benmiller again from Word 2007<p>I am testing the TR version of Word 2007 to see if it delays the posts that it did before. </p><p>I hope that this feature continues to get better with more options for pictures, etc. </p><p>Cool stuff.</p><img src="" width="1" height="1">benmiller vs. .NET 2.0 XML Framework<p>I have a project that I am wondering about using either XSLT or the .NET Framework 2.0 to transform an XML document into a table and then bind it. I know that I can bind XML in ways, but I am in a bind on this one and am more familiar with Framework coding in C# rather than XSLT. </p><p>I have run into a problem trying to get a URL into the xslt template using <a href=”something”>something</a> with the href being formed by part URL and part <xsl:value-of select=”something” />. It fails because I am not very good with it yet. Anyone with help or ideas, please feel free to suggest. I am going to go about it using .NET in C# and will entertain any XSLT help that anyone wants to provide. </p><p>I have been to <a href=""></a> and <a href=""></a> and found some things that are basic, but nothing about how to use XSLT to put html elements together. </p><p>More to come as I finish this project. 
</p><img src="" width="1" height="1">benmiller and Practices goes Mobile!<P class=MsoNormal><B><I><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: 'Arial','sans-serif'"><A title=patterns and practices Goes Mobile!</A></SPAN></I></B><?xml:namespace prefix = o<o:p></o:p></P> <P class=MsoNormal><SPAN class=postbody1><SPAN style="FONT-SIZE: 9pt; FONT-FAMILY: 'Verdana','sans-serif'">The Microsoft patterns & practices team has released the first Community Technical Preview (CTP) for the Mobile Client Software Factory. The factory will help architects and developers design and build mobile LOB solutions. The <U><SPAN style="COLOR: blue"><A title=Mobile Client Software Factory</A></SPAN></U> will include a prescriptive architecture, application blocks, and other guidance/tools for enterprise architects and developers targeting Windows Mobile powered devices. If you’re serious about building mobile enterprise solutions, join the community, download the latest pre-release drop, and start contributing feedback today!</SPAN></SPAN><o:p></o:p></P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Vista and Office 2007 Beta<P>I. </P> <P>My configuration of the E510 is 2 GB RAM and 256 <A href="">ATI</A> X600 Raedon video, with <A href="">Sound Blaster</A> Audigy 2 ZS. When I installed Vista, it recognized almost everything. It saw my video and installed a video driver, and also saw my dual tuner and installed the Angel II MPEG controller, but choked on my Sound Blaster Audigy card. </P> <P>I did decide to go out to ATI and download their latest driver, and I was pleasantly surprised to see they had Vista drivers and they are brand new (5/26/2006). I downloaded and installed them just for good measure. </P> <P>I went to <A href="">Sound Blaster</A> and they had <A href="">Vista</A> drivers, but they were for a previous build (5231 was the requirement). I downloaded them and proceeded to install them. Well the install choked so I had to reboot my machine, and from thereafter, the install would complain about the version of the OS. </P> <P. </P> <P>I thought, well since I am going Beta, I might as well download <A href="">Office 2007 Beta</A> since I had heard that it had some features that were pretty cool. I cannot believe how much fun it is to use it now. In fact I am not looking forward to going back to Office 2003 on my tablet. If I did not need to use <A href="">Send Personally</A> to send mail, I would jump at it. It is fantastic. So the combination of <A href="">Office 2007</A> and <A href="">Windows Vista</A> ROCK. I have had only 1 crash and after that it was pretty smooth sailing. </P> <P>Well more on this later, and BTW: I will be at <A href="">TechEd 2006 in Boston</A> for those that are going. I will be hanging in the Community Lounge areas. Look me up.</P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Access code and ASP.NET 2.0<P>I have been doing some work on a site and have come to the point of the BLL and DAL of the site. I had many ideas of how to do it and I have listened to many discussions about it on architecture lists and then I talked to my buddy <A href="">Aaron Zupancic</A> about this and <A href="">found a post</A> that he pointed me to about these things. 
This goes into the concept of using a delegate in the BLL to call into the DAL with the delegate to get the data out of a reader or whatever and the delegate will return the type, by either using a concrete type or a generic type.</P> <P>I have not ever done this, but it is simplifying my code and I think it is an elegant solution as he indicates. Just thought I would point to his insight.</P> <P> </P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Server party at Tech Ed Europe<P><SPAN>Just wanted to make sure that you all knew about this party. I wish I was going to be there.</SPAN></P> <P><SPAN>Come to the hottest Party at <SPAN class=SpellE>TechED</SPAN> EMEA! Jimmy Woo, the top nightlife hot-spot in Amsterdam has been exclusively reserved for Microsoft SQL Server, Microsoft Office and MSDN, Dr. Dobbs and Software Development Magazines on Wednesday, 6<SUP>th</SUP> July from 20:00-23:00. Please join us for drinks and celebration. And don’t miss out on a chance to win a new Bike or a Microsoft cycling shirt! Visit the SQL Server, Office or CMP booths at <SPAN class=SpellE>TechEd</SPAN> to get your personal invitations. And if you missed last years SQL bash this is your one chance to find out what all the fuss was about!</SPAN></P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Systems Developer Competition<B> <P>SQL Server 2005/Visual Studio 2005/BizTalk 2006 Connected Systems Developer Competition</P> <P>Do You Dare? Show the world what a great developer you are and have the chance to win $50,000 USD and a trip to the joint SQL Server 2005/Visual Studio 2005/Biztalk 2006 Launch event</P></B> <P>The <A href=""><U><FONT color=#0000ff>Connected Systems Developer Competition</U></FONT></A> has now launched. If you are a developer who uses or needs an excuse to use SQL Server 2005, Visual Studio 2005 or BizTalk then you should take a look at this competition as a reason to write some code. Not only do you have the chance to win $50,000 USD but if you are among the 15 finalists, you will be invited to the joint SQL Server 2005/Visual Studio 2005/Biztalk 2006 Launch event as well as get huge recognition for your skills.</P> <P><A href=""><IMG src="" border=0></A></P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Betas Unleashed - Salt Lake City<P>For those around the Utah area that are interested in a 1 day dive into Visual Studio or SQL Server 2005, this event is one that you do not want to miss. Details below.</P> <P><STRONG>Betas 2005 Unleashed</STRONG> <BR><BR.<BR><BR> <BR><BR><STRONG>Date: May 18th 2005 - 8.30 am - 5pm</STRONG> (registration and breakfast starts at 7.30)<BR><BR><STRONG>Venue: Miller Free Enterprise Center (</STRONG><A href=""><STRONG></STRONG></A><STRONG>)<BR>SLCC-Miller Campus<BR>9750 S 300 W<BR>Sandy, UT 84070<BR></STRONG><BR>Registration: <STRONG>Please register</STRONG> at the following link <A href=""></A> <BR><BR> <BR><BR><STRONG>Visual Studio .NET 2005 Track:</STRONG><BR.<BR> <BR>We have put together a comprehensive training for free to help our customers understand the power and value of Visual Studio 2005. This session will cover the core features of - <BR>. Web Development Platform - ASP .NET 2.0<BR>. Smart Client Development Platform - Windows Forms, Mobile Development<BR>. Visual Studio Team System<BR><BR> <BR><BR><STRONG>SQL Server 2005 Track:<BR></STRONG>SQL Server 2005 is the most anticipated database release in Microsoft's history. 
With significant improvements in the areas of infrastructure, development, and business intelligence SQL Server 2005 represents a huge leap forward for a wide variety of applications.<BR> <BR.<BR>. SQL Server Developer Platform - SQLCLR, TSQL Enhancements<BR>. Business Intelligence Platform - Reporting Services, Analysis Services, Data Warehousing <BR>. Infrastructure Platform - Availability, Management, Database Mirroring</P> <P> </P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller in Singapore<P>I just wanted to point you to my personal blog at <A href=""></A> as I have posted a Gallery for the pictures at <A href=""></A>.</P> <P>For personal things you can see them at the blog address above. I will still maintain both as this will have technical information on this one.</P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller eLearning from Developer eLearning on MSDN<P>Check this out. <A href=""></A></P> .</P> <P><STRONG>Grand Prize</STRONG> Sony 50" Plasma WEGA High Definition TV ultimate home theater package<BR><B>2nd Prize</B> Nikon D70 SLR Digital Camera package<BR><B>3rd Prize</B> Bose Acoustic Wave 5-CD Music System<BR><B>4th Prize</B> Bose Acoustic Noise Cancelling Headphones </P> <P>Now how is that for some cool prizes. Get out there now and take a course. This learning will cover some key new technologies in <A href="">Visual Studio 2005</A>. There will be more courses planned in the near future, but for now, go get involved. You deserve it.</P> <P>Have a great day and go see the <A href="">site</A> for details.</P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Server 2005 Virtual Hands on labs<P>Are you ready to experience SQL Server 2005? </P> <P>Announcing the launch of the SQL Server 2005 Virtual Hands on labs. In these labs, you will get to experience many of the new features in SQL Server 2005 including CLR integration, XML support and deep business intelligence integration.</P> <P>Just follow the link and experience SQL Server 2005 for yourself</P> <P>Registration link:<BR><A href=""></A></P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Server 2005 Webcasts in April...<P><A href=""><IMG src=""></A></P> <P class=MsoNormal><I><FONT face=Tahoma size=2><SPAN>Discover how Microsoft SQL Server 2005 offers database developers the optimal combination of a tightly integrated development and data management platform. The rich and flexible programming environment in SQL Server 2005 allows you to leverage your existing skills and utilize familiar tools to build robust, secure, scalable applications.</SPAN></FONT></I></P> <P class=MsoNormal><I><FONT face=Tahoma size=2><SPAN:</SPAN></FONT></I></P> <UL> <LI class=MsoNormal><I><FONT face=Tahoma size=2><SPAN>.NET Framework Integration: Learn how you can now take full advantage of the Microsoft .NET Framework class library and modern programming languages to implement functionality within the server. </SPAN></FONT></I> <LI class=MsoNormal><I><FONT face=Tahoma size=2><SPAN>Transact-SQL and Managed Code: Find out how to decide between using traditional Transact-SQL or a programming language that is compatible with the .NET Framework, such as Visual Basic .NET or C#. Understand where each method provides benefits and how to design for this in the beginning. </SPAN></FONT></I> <LI class=MsoNormal><I><FONT face=Tahoma size=2><SPAN>Web Services: See how to develop XML Web services in the database tier, making SQL Server an HTTP listener. 
</SPAN></FONT></I> <LI class=MsoNormal><I><FONT face=Tahoma size=2><SPAN>XML: SQL Server 2005 contains deep native support for XML. Learn how this can significantly help as you develop applications that make use of XML. </SPAN></FONT></I> <LI class=MsoNormal><I><FONT face=Tahoma size=2><SPAN>Data Access: Discover how ADO.NET 2.0 not only supports all of the new SQL Server 2005 features but also enables productivity and performance gains for all developers.</SPAN></FONT></I> </LI></UL> <P class=MsoNormal><I><FONT face=Tahoma size=2><SPAN>Register for the SQL Server 2005 webcast series to learn more.</SPAN></FONT></I></P> <P class=MsoNormal><B><I><FONT face=Tahoma size=2><SPAN>Bonus:</SPAN></FONT></I></B><I><FONT size=2><SPAN> (<A title=official rules</A>) pre-loaded with our best webcasts!</SPAN></FONT></I></P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Smart Client Developer Center REBOOTED:<div class="Section1"> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>Check out this entry from <a href="">Jonathan Wells</a> about the <a href="">MSDN Smart Client Developer Center</a></span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>This is the link</span></font> <font size="2" face="Arial"><span style='font-size:10.0pt; font-family:Arial'><a href="" title=""></a></span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>From the site <em><i><font face="Times New Roman">'The goal of the site is to help you understand smart clients, what they are, when they are most appropriate, and most importantly, the best and most efficient way to construct them.'</font></i></em></span></font></p> <p><em><i><font size="3" face="Times New Roman"><span style='font-size:12.0pt; font-style:normal'>This is a great resource for anyone doing Smart Client development.</span></font></i></em></p></div><div style="clear:both;"></div><img src="" width="1" height="1">benmiller VS 2005 Beta 1 Refresh with Team Server<div class="Section1"> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>I spent the better part of my day a couple of days ago helping out someone having problems installing Visual Studio 2005 Beta 1 Refresh with Team System. 
</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>It was such an experience that I decided to blog about it.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0).</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'.)</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>These are the short steps, but the key is to follow the Roadmap and I highlighted the main points that cannot be missed or you will not succeed in installing it (from my experience).</span></font></p> <p style='margin-left:.5in;text-indent:-.25in'><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>1.<font size="1" face="Times New Roman"><span style='font:7.0pt "Times New Roman"'> </span></font></span></font> DB Server must be installed as the Default Instance.</p> <p style='margin-left:.5in;text-indent:-.25in'><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>2.<font size="1" face="Times New Roman"><span style='font:7.0pt "Times New Roman"'> </span></font></span></font> App Server Role on AppTier cannot have FrontPage Server Extensions enabled.</p> <p style='margin-left:.5in;text-indent:-.25in'><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>3.<font size="1" face="Times New Roman"><span style='font:7.0pt "Times New Roman"'> </span></font></span></font> IIS Installed before the SQL 2005 Beta 2 is installed on the DBTier.</p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt').</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>Hope this helps people out there that may run into problems. I did this on VPC using Virtual Server 2005 on 1 box with 2 VPC’s each with 256 MB RAM. More would have been better, but it worked at least.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>Have a great one.</span></font></p></div><div style="clear:both;"></div><img src="" width="1" height="1">benmiller 2005 Developer Webcasts coming...<p><a href="" alt="SQL 2005 Developer Webcasts"><img src="" border="0" /></a> </p> <p><font face="Arial" size="2">Help us get the word out on the Webcasts that will begin in Decmeber 2004. <a title="http" href=""></a>)</font></p><div style="clear:both;"></div><img src="" width="1" height="1">benmiller up a new .Text site (personal)<P>I thought I would post my adventures with .Text setup so that some people may benefit.</P> <P>I currently have a blog at <a href=""></A> and this is my space on Microsoft’s turf. I have had a personal blog out on <A href=""></A> and have not done much to publicize it. So I thought that instead of cluttering up the site and having to avoid the /blog application on benmiller.net I would make a blog site at <A href=""></A>.</P> <P>So the first thing I did was to put the code that I used for my original personal blog on my new site. Then I used the same database which is a great thing about .Text and the configs. I then proceeded to do the IIS mapping described, which is to map .* to the aspnet_isapi.dll in IIS. I tried to pull up the site and it was gross looking. </P> <P>I got really frustrated after I added the blog to the blog_config table and browsed the new site and got nothing but a weird page. 
To make a long story short, I found out that if you have the .* mapped to the ISAPI, then you have to add the following lines to the web.config:</P> <P><HttpHandler pattern = "(\.config|\.asax|\.ascx|\.config|\.cs|\.csproj|\.vb|\.vbproj|\.webinfo|\.asp|\.licx|\.resx|\.resources)$" type = "Dottext.Framework.UrlManager.HttpForbiddenHandler, Dottext.Framework" handlerType = "Direct" /><BR><HttpHandler pattern = "(\.gif|\.js|\.jpg|\.zip|\.jpeg|\.jpe|\.css)$" type = "Dottext.Common.UrlManager.BlogStaticFileHandler, Dottext.Common" handlerType = "Direct" /></P> <P><BR>You can remove the above if you do not map .* to aspnet_isapi.dll.</P> <P>Then I found out that I could view the blogs, but could not for the life of me login to the admin. I could post an entry from another application (newsgator) and the post would show up. Well, after a long time of trying whatever I could, I thought I would try to login to my blog.benmiller.net on another machine. Well, I was able to. Now what?</P> <P>I noticed that when I went to <A href=""></A> that I was already logged in so I could just go into admin. I began to think about cookies, and since I had different usernames and passwords for each site, there had to be a conflict.</P> <P>This is what I found. I found out that a cookie was being written for benmiller.net and it was conflicting with me trying to login to the admin on <A href=""></A> since benmiller.net is the root domain. When I deleted that cookie, I was able to login to my admin on blog.benmiller.net.</P> <P>I am clear now and will make sure that I don’t have conflicting cookies any more.</P> <P>I hope that this helps someone, and I would be happy to answer any questions about what I found, as this was a 3 day process.</P> <P>Happy Blogging.</P> <P> </P><div style="clear:both;"></div><img src="" width="1" height="1">benmiller CTP versions of Visual Studio 2005 Express<div class="Section1"> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'> If you have not yet, you can go here <a href="" title=""></a> and get the latest Community Tech Preview of the Visual Studio 2005 Express Versions. </span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'> </span></font></p></div><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Howard will be speaking at SLC User Group<div class="Section1"> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>I am excited to be able to have <a href="">Rob Howard</a> come and speak to our <a href="">User Group</a> tomorrow at 6:00 PM. Salt Lake is growing in the .NET space and in the recognition of the need for User Group meetings. Rob is coming with the <a href="">INETA</a> Speakers Bureau and will be our speaker for the night.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>For those in the Salt Lake Area, join us for a great night of ASP.NET 2.0 stuff.</span></font></p></div><div style="clear:both;"></div><img src="" width="1" height="1">benmiller Abrams talk at the San Jose dotNet User Group...<div class="Section1"> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>I attended the <a href="">San Jose dotnet User Group </a> and listened to <a href="">Brad</a> talk about the CLR. 
It was a FANTASTIC talk and was very informative.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>We talked about the architecture of the CLR and where the BCL’s sit (that is what Brad worked on is the BCL’s). He just gave a great talk.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>One thing that I thought was interesting is that I actually knew what a Singleton was and how it was defined as a Pattern. I was pretty much amazed that I understood. We had a talk in the <a href="">SLC DotNet User Group</a> about that and Aaron Zupancic (don’t know if Aaron has a blog) gave a great talk on Patterns in development. Singleton happened to be one of them. Brad called it the “double check lock”.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>Anyone who does not currently attend a User Group, should hop onto <a href="">INETA</a>’s website and find the local one and get involved. It is a great time and you even learn things.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>Great job Brad.</span></font></p></div><div style="clear:both;"></div><img src="" width="1" height="1">benmiller C# Express application ScreenSaver<p>Just wanted to post my successes on working on the <a href="">Visual C# Express </a>project. I modified a ScreenSaver program that came from the template that is installed with <a href="">Visual C# Express</a>. It has an RSS Feed that gets loaded from a application setting in the .config file pointing to a main feed of an RSS (but could just as well point to the individual's RSS Feed.</p> <p>The thing that I modified on it was to use the RegEx libraries to remove the HTML tags and the stuff so that when you saw the description on the screen, that you did not see tags, but just text.</p> <p>I was talking to <A href="">Brad Abrams</a> about the project and how it was displaying the tags and I thought, well, why don't we just remove it so it would just show the text. So I set out to do just that.</p> <p>I have completed the project, thanks to some Regular Expressions from <a href="">RegExLib.com</a>. Great stuff there. If anyone wants the finished project, let me know and I will post it up here or somewhere you can get to.</p> <p>I have definitely decided that I am jumping in now and doing more Whidbey stuff so that I can catch up with my <a href="">MVP's</a>.</p> <p>More to come from <a href="">BorCon</a> as we still have 1 more full day.</p><div style="clear:both;"></div><img src="" width="1" height="1">benmiller name changes in ASP.NET 2.0<div class="Section1"> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'>Be sure to get a look at <a href="">Brian Goldfarb’s</a> post on <a href="">directory name changes</a> for ASP.NET 2.0.</span></font></p> <p><font size="3" face="Times New Roman"><span style='font-size:12.0pt'> </span></font></p></div><div style="clear:both;"></div><img src="" width="1" height="1">benmiller | http://blogs.msdn.com/b/benmiller/atom.aspx?Redirected=true | CC-MAIN-2016-30 | refinedweb | 4,715 | 64.91 |
Fable 3 made something I was not aware for some time, it moved to emitting ESM Modules and leaving babel and other stuff behind for users to set up.
It was around June that I was really mad at compilation times with Fable projects, After being in the JS/Node ecosystems for years I wondered what could be done to improve that situation..
This shouldn't be that hard, I just needed a server that well... served the HTML/CSS/JS files right?
I went to my desktop, created an F# script added a couple of libraries like Suave and CliWrap so I could call the
dotnet fable command from my F# code and make it compile my Fable files.
Taking out some code I came up with this PoC:
// I omited more code above for brevity let stdinAsyncSeq () = let readFromStdin () = Console.In.ReadLineAsync() |> Async.AwaitTask asyncSeq { // I wanted to think this is a "clever" // way to keep it running while true do let! value = readFromStdin () value } |> AsyncSeq.distinctUntilChanged |> AsyncSeq.iterAsync onStdinAsync let app = choose [ path "/" >=> GET // send the index file >=> Files.browseFileHome "index.html" // serve static files GET >=> Files.browseHome RequestErrors.NOT_FOUND "Not Found" // SPA like fallback >=> redirect "/" ] let config (publicPath: string option) = let path = Path.GetFullPath( match publicPath with | Some "built" -> "./dist" | _ -> "./public" ) printfn $"Serving content from {path}" // configure the suave server instance { defaultConfig with bindings = [ HttpBinding.createSimple HTTP "0.0.0.0" 3000 ] homeFolder = Some path compressedFilesFolder = Some(Path.GetFullPath "./.compressed") } // let's make it run! stdinAsyncSeq () |> Async.Start // dotnet fsi suave.fsx built to show how bundled files work startWebServer (config (fsi.CommandLineArgs |> Array.tryLast)) app
Now, I could have my suave server and my Fable compiler running on the background. I could see my files being served in my browser I could make changes, press F5 and see them working.
Angel Munoz@angel_d_munoz
It is possible to have a "node-less" #fsharp frontend development experience thanks to @FableCompiler and @SuaveIO
with a small F# script you can spin up a suave server and have fable compile your files in the background
github.com/AngelMunoz/sua…04:53 AM - 14 Jun 2021
Cool it worked... Yay!... sure, with my attention span for some things I simply didn't think too much about it, or so I thought.
What came up next was experimenting with snowpack and fuse-box to see which setup could work best with Fable 3 and Although, Both projects work extremely well with Fable, the snowpack project felt more compelling to me thanks to the promoted Unbundled development concept. I decided to go for it and tried the Fable Real World implementation and switched webpack for snowpack and the results were kind of what I was expecting, faster builds, a simpler setup and a much faster developer loop feedback with the browser.
Unconsciously on the back of my head was still that voice about writing something like snowpack in F#... In my mind, the people who build those kinds of tools are like people in the movies; You know they exist but, you don't think you are capable of doing something like it. Specially when most of my experience at that point was building UI's in things like Angular.
I went ahead and started studying the snowpack source code and I found out that they were using esbuild a JS/TS compiler written in Go, no wonder why it was faster than anything done in JavaScript at the time.
Also, on the background vitejs was also starting to get in shape, I was looking at Evan's tweets from afar and getting inspired from that as well so I realized I needed to go back and see if I could go even further.
- What if I used esbuild as well?
- What if I could use esbuild to produce my prod bundle after I built my fable code?
Angel Munoz@angel_d_munoz
Going back to that suave-dev server...
if we do add esbuild, we could actually go nodeless even when preparing for prod stuff
#fsharp
updated the scripts there
github.com/AngelMunoz/sua…20:27 PM - 21 Jun 2021
Turns out... I wasn't that crazy, after all both vite and snowpack were doing it as well!
Around September vite got traction with the vue user base and other users as well. I also studied a bit the vite source code, and even used it for some Fable material for posts. I was trying to make some awareness of Fable.Lit support for Web Components and I wanted to experiment in reality how good vite was, and boi it's awesome If you're starting new projects that depend on node tooling in my opinion, it's your best bet.
Anyways, I tend to be looking at what's new on the web space, and by this time these... Import Maps thing came to my attention, it is a really nice browser feature that can be used to control the browser's behavior to import JavaScript files.
Import maps can tell the browser to use "bare specifiers" (i.e.
import dependency from "my-dependency" rather than
"./my-dependency.js)
Almost like "pull this import from this URL".
Hopefully you are starting to put the pieces together as I was doing.
Maybe... It might be possible to actually enjoy the NPM ecosystem without having to rely on the local tooling... Just maybe...
Angel Munoz@angel_d_munoz
It's starting to make sense if this works we would just need a couple of msbuild tasks to switch from lookup to pinned urls in prod mode and no node needed for frontend development, @skypackjs and import maps really enable cool stuff
#fsharp19:04 PM - 16 Sep 2021
From then on, It was just about making small F# scripts to experiment PoC's and implementing small features.
At this point is when I said to myself.
It's time to build a Webpack alternative...
For this... FSharp.DevServer... I needed to have an idea of what I wanted to implement to make it usable at least, I settled on the following set of features.
- Serve HTML/CSS/JS
- Support for Fable Projects
- Reload on change
- Install dependencies
- Production Bundles
- Transpilation on the fly
- Dev Proxy
- Plugins
- HMR
Those are the least features necessary IMO to consider for a project like this.
I will take you on a quick tour at how those got implemented at some point in the project.
Keep in mind I'm not an F# expert! I'm just a guy with a lot of anxiety and free time so I use my code to distract myself while having a ton of fun with F# code.
While for a proof of concept Suave did great, I switched it in favor of Saturn given my familiarity with it and some ASP.NET code.
Serve HTML/CSS/JS
From most of the features, this must have been perhaps the most simple after all if you're using a server, it should be really simple to do right?
Well... Yes and No... it turns out that if you serve static files they get out of the middleware chain very quick due to the order of the static middleware is in. While it was good for serving it if I wanted to reload on change or compile these files I was not going to be able to do it.
// the current devServer function has way more stuff // due the extra features that have been implemented let private devServer (config: FdsConfig) = let withAppConfig (appConfig: IApplicationBuilder) = // let's serve everything statically // but let's ignore some extensions let ignoreStatic = [ ".js" ".css" ".module.css" ".ts" ".tsx" ".jsx" ".json" ] // mountedDirs is a property in perla.jsonc // that enables you to watch a partcular set of // directories for source code for map in mountedDirs do let staticFileOptions = let provider = FileExtensionContentTypeProvider() for ext in ignoreStatic do provider.Mappings.Remove(ext) |> ignore let options = StaticFileOptions() options.ContentTypeProvider <- provider // in the next lines we enable local mapings // to URL's e.g. // ./src on disk -> /src on the URL options.RequestPath <- PathString(map.Value) options.FileProvider <- new PhysicalFileProvider(Path.GetFullPath(map.Key)) options appConfig.UseStaticFiles staticFileOptions |> ignore let appConfig = // at the same time we enable transpilation // middleware when we're ignoring some extensions appConfig.UseWhen( Middleware.transformPredicate ignoreStatic, Middleware.configureTransformMiddleware config ) // set the configured options application { app_config withAppConfig webhost_config withWebhostConfig use_endpoint_router urls } // build it app .UseEnvironment(Environments.Development) .Build()
This part is just about serving files, nothing more, nothing less, that's the core of a dev server.
Support for Fable Projects
Fable is actually not hard to support, fable is distributed as a dotnet tool, we can invoke the command with CliWrap which has proven us in the PoC stage, how simple is to call a process from .NET.
// This is the actual Fable implementation module Fable = let mutable private activeFable: int option = None // this is to start/stop the fable command // if requested by the user let private killActiveProcess pid = try let activeProcess = System.Diagnostics.Process.GetProcessById pid activeProcess.Kill() with | ex -> printfn $"Failed to Kill Procees with PID: [{pid}]\n{ex.Message}" // Helper functions to add arguments to the fable command let private addOutDir (outdir: string option) (args: Builders.ArgumentsBuilder) = match outdir with | Some outdir -> args.Add $"-o {outdir}" | None -> args let private addExtension (extension: string option) (args: Builders.ArgumentsBuilder) = match extension with | Some extension -> args.Add $"-e {extension}" | None -> args let private addWatch (watch: bool option) (args: Builders.ArgumentsBuilder) = match watch with | Some true -> args.Add $"--watch" | Some false | None -> args // we can fire up fable either as a background process // or before calling esbuild for production let fableCmd (isWatch: bool option) = fun (config: FableConfig) -> let execBinName = if Env.isWindows then "dotnet.exe" else "dotnet" Cli .Wrap(execBinName) .WithArguments(fun args -> args .Add("fable") .Add(defaultArg config.project "./src/App.fsproj") |> addWatch isWatch |> addOutDir config.outDir |> addExtension config.extension |> ignore) // we don't do a lot, we simply re-direct the stdio to the console .WithStandardErrorPipe(PipeTarget.ToStream(Console.OpenStandardError())) .WithStandardOutputPipe( PipeTarget.ToStream(Console.OpenStandardOutput()) ) let stopFable () = match activeFable with | Some pid -> killActiveProcess pid | None -> printfn "No active Fable found" let startFable (getCommand: FableConfig option -> Command) (config: FableConfig option) = task { // Execute and wait for it to finish let cmdResult = getCommand(config).ExecuteAsync() activeFable <- Some cmdResult.ProcessId return! cmdResult.Task }
Keeping the process ID on memory might not be the best idea and there can be better ways to handle that but at least for now it works just fine.
Calling the
startFable function with fable options, will make fable run on the background, this allows us to have fable output JS files that we will be able to serve.
Reload on change
Reloading on change was an interesting feature to do, first of all I needed a file watcher and I have had heard before that the .NET one wasn't really that great, I also needed to communicate with the frontend when something changed in the backend.
For the file watcher, I tried to search for good alternatives, but to be honest in the end I decided to go with the one in the BCL.
I was kind of scared though how would I manage multiple notifications and events without making it a mess? I had No idea... Thankfully FSharp.Control.Reactive was found and is just what I needed. This library allows you to make observables from events and has a bunch of nice utility functions to work with stream like collections if you've used RxJS or RX.NET you will feel at home with it.
let getFileWatcher (config: WatchConfig) = let watchers = // monitor a particular list of addresses (defaultArg config.directories ([ "./src" ] |> Seq.ofList)) |> Seq.map (fun dir -> // for each address create a file watcher let fsw = new FileSystemWatcher(dir) fsw.IncludeSubdirectories <- true fsw.NotifyFilter <- NotifyFilters.FileName ||| NotifyFilters.Size let filters = defaultArg config.extensions (Seq.ofList [ "*.js" "*.css" "*.ts" "*.tsx" "*.jsx" "*.json" ]) // ensure you're monitoring all of the // extensions you want to reload on change for filter in filters do fsw.Filters.Add(filter) // and ensure you will rise events for them :) fsw.EnableRaisingEvents <- true fsw) let subs = watchers |> Seq.map (fun watcher -> // for each watche react to the following events // Renamed // Changed // Deleted // Created [ watcher.Renamed // To prevent overflows and weird behaviors // ensure to throttle the events |> Observable.throttle (TimeSpan.FromMilliseconds(400.)) |> Observable.map (fun e -> { oldName = Some e.OldName ChangeType = Renamed name = e.Name path = e.FullPath }) watcher.Changed |> Observable.throttle (TimeSpan.FromMilliseconds(400.)) |> Observable.map (fun e -> { oldName = None ChangeType = Changed name = e.Name path = e.FullPath }) watcher.Deleted |> Observable.throttle (TimeSpan.FromMilliseconds(400.)) |> Observable.map (fun e -> { oldName = None ChangeType = Deleted name = e.Name path = e.FullPath }) watcher.Created |> Observable.throttle (TimeSpan.FromMilliseconds(400.)) |> Observable.map (fun e -> { oldName = None ChangeType = Created name = e.Name path = e.FullPath }) ] // Merge these observables in a single one |> Observable.mergeSeq) { new IFileWatcher with override _.Dispose() : unit = watchers // when disposing, dispose every watcher you may have around |> Seq.iter (fun watcher -> watcher.Dispose()) override _.FileChanged: IObservable<FileChangedEvent> = // merge the the merged observables into a single one!!! Observable.mergeSeq subs }
With this setup you can easily observe changes to multiple directories and multiple extensions it might not be the most efficient way to do it, but It at least got me started with it, now that I had a way to know when something changed I needed to tell the browser what had happened.
For that I chose SSE (Server Sent Events) which is a really cool way to do real time notifications from the server exclusively without having to implement web sockets it's just an HTTP call which can be terminated (or not).
let private Sse (watchConfig: WatchConfig) next (ctx: HttpContext) = task { let logger = ctx.GetLogger("Perla:SSE") logger.LogInformation $"LiveReload Client Connected" // set up the correct headers ctx.SetHttpHeader("Content-Type", "text/event-stream") ctx.SetHttpHeader("Cache-Control", "no-cache") ctx.SetStatusCode 200 // send the first event let res = ctx.Response do! res.WriteAsync($"id:{ctx.Connection.Id}\ndata:{DateTime.Now}\n\n") do! res.Body.FlushAsync() // get the observable of file changes let watcher = Fs.getFileWatcher watchConfig logger.LogInformation $"Watching %A{watchConfig.directories} for changes" let onChangeSub = watcher.FileChanged |> Observable.map (fun event -> task { match Path.GetExtension event.name with | Css -> // if the change was on a CSS file send the new content let! content = File.ReadAllTextAsync event.path let data = Json.ToTextMinified( {| oldName = event.oldName |> Option.map (fun value -> match value with | Css -> value | _ -> "") name = event.path content = content |} ) // CSS HMR was basically free! do! res.WriteAsync $"event:replace-css\ndata:{data}\n\n" return! res.Body.FlushAsync() // if it's any other file well... just reload | Typescript | Javascript | Jsx | Json | Other _ -> let data = Json.ToTextMinified( {| oldName = event.oldName name = event.name |} ) logger.LogInformation $"LiveReload File Changed: {event.name}" do! res.WriteAsync $"event:reload\ndata:{data}\n\n" return! res.Body.FlushAsync() }) // ensure the task gets done |> Observable.switchTask |> Observable.subscribe ignore // if the client closes the browser // then dispose these resources ctx.RequestAborted.Register (fun _ -> watcher.Dispose() onChangeSub.Dispose()) |> ignore // keep the connection alive while true do // TBH there must be a better way to do it // but since this is not critical, it works just fine do! Async.Sleep(TimeSpan.FromSeconds 1.) return! text "" next ctx }
At this time, I also published about SSE on my blog, I really felt it was a really cool thing and decided to share it with the rest of the world :)
Install dependencies
I was really undecided if I wanted to pursue a webpack alernative because
- How can you install dependencies without npm?
- Do you really want to do
import { useState } from ''
On every damned file? oh no no no, I don't think so... Enter the Import Maps, this feature (along esbuild) was the thing that made me realize it was actually possible to ditch out node/webpack/npm entirely (at least in a local and direct way) instead of doing that ugly import from above, if you can provide a import map with your dependencies the rest should be relatively easy
<script type="importmap"> { "imports": { "moment": "", "lodash": "" } } </script> <!-- Allows you to do the next --> <script type="module"> import moment from "moment"; import lodash from "lodash"; </script>
Angel Munoz@angel_d_munoz
Here's something towards a node'less frontend development for #fsharp with help of @skypackjs
The cli tool "installs" a package (i.e. grabs a skypack lookup url) and saves: import map, lock, dependency this import map is added to the index file05:11 AM - 19 Sep 2021
So here I was trying to replicate a version of
package.json this ended up implementing the
perla.jsonc.lock file which is not precisely a lock file, while the URL's there are certainly the pined and production versions of those packages, it's in reality the import map in disguise, to get that information though I had to investigate how to do it. Once again I decided to study snowpack since it's the only frontend dev tool I know it has this kind of mechanism (remote sources), after some investigation and some PoC's I also stumbled upon JSPM's recently released Import Map Generator which is basically what I wanted to do! Skypack, JSPM and Unpkg offer reliable CDN services for production with all of these investigations and gathered knowledge I went to implement fetching dependencies and "installing" them with the dev server tool.
[<RequireQualifiedAccessAttribute>] module internal Http = open Flurl open Flurl.Http [<Literal>] let SKYPACK_CDN = "" [<Literal>] let SKYPACK_API = "" [<Literal>] let JSPM_API = "" let private getSkypackInfo (name: string) (alias: string) = // FsToolkit.ErrorHandling FTW taskResult { try let info = {| lookUp = $"%s{name}" |} let! res = $"{SKYPACK_CDN}/{info.lookUp}".GetAsync() if res.StatusCode >= 400 then return! PackageNotFoundException |> Error let mutable pinnedUrl = "" let mutable importUrl = "" // try to get the pinned URL from the headers let info = if res.Headers.TryGetFirst("x-pinned-url", &pinnedUrl) |> not then {| info with pin = None |} else {| info with pin = Some pinnedUrl |} // and the imports as well let info = if res.Headers.TryGetFirst("x-import-url", &importUrl) |> not then {| info with import = None |} else {| info with import = Some importUrl |} return // generate the corresponding import map entry [ alias, $"{SKYPACK_CDN}{info.pin |> Option.defaultValue info.lookUp}" ], // skypack doesn't handle any import maps so the scopes will always be empty [] with | :? Flurl.Http.FlurlHttpException as ex -> match ex.StatusCode |> Option.ofNullable with | Some code when code >= 400 -> return! PackageNotFoundException |> Error | _ -> () return! ex :> Exception |> Error | ex -> return! ex |> Error } let getJspmInfo name alias source = taskResult { let queryParams = {| install = [| $"{name}" |] env = "browser" provider = // JSPM offer various reliable sources // to get your dependencies match source with | Source.Skypack -> "skypack" | Source.Jspm -> "jspm" | Source.Jsdelivr -> "jsdelivr" | Source.Unpkg -> "unpkg" | _ -> printfn $"Warn: An unknown provider has been specied: [{source}] defaulting to jspm" "jspm" |} try let! res = JSPM_API .SetQueryParams(queryParams) .GetJsonAsync<JspmResponse>() let scopes = // F# type serialization hits again! // the JSPM response may include a scope object or not // so try to safely check if it exists or not match res.map.scopes :> obj |> Option.ofObj with | None -> Map.empty | Some value -> value :?> Map<string, Scope> return // generate the corresponding import map // entries as well as the scopes res.map.imports |> Map.toList |> List.map (fun (k, v) -> alias, v), scopes |> Map.toList with | :? Flurl.Http.FlurlHttpException as ex -> match ex.StatusCode |> Option.ofNullable with | Some code when code >= 400 -> return! PackageNotFoundException |> Error | _ -> () return! ex :> Exception |> Error } let getPackageUrlInfo (name: string) (alias: string) (source: Source) = match source with | Source.Skypack -> getSkypackInfo name alias | _ -> getJspmInfo name alias source
This was a relatively low effort to implement but it did require finding a way to gather these resources so they can be mapped to json objects. This approach also allows you yo import different version fo the same package in the same application! that can be useful when you want to migrate dependencies slowly rolling them out.
Production Bundles
Just as Installing dependencies, having a production ready build is critical This is where esbuild finally comes into the picture it is a crucial piece of the puzzle. Esbuild while it's written in go and offers a npm package, it provides a single executable binary which can be used in a lot of platforms and and architectures, it distributes itself through the npm registry so it's about downloading the package in the correct way and just executing it like we did for the fable command.
let esbuildJsCmd (entryPoint: string) (config: BuildConfig) = let dirName = (Path.GetDirectoryName entryPoint) .Split(Path.DirectorySeparatorChar) |> Seq.last let outDir = match config.outDir with | Some outdir -> Path.Combine(outdir, dirName) |> Some | None -> Path.Combine("./dist", dirName) |> Some let execBin = defaultArg config.esBuildPath esbuildExec let fileLoaders = getDefaultLoders config Cli .Wrap(execBin) .WithStandardErrorPipe(PipeTarget.ToStream(Console.OpenStandardError())) .WithStandardOutputPipe(PipeTarget.ToStream(Console.OpenStandardOutput())) // CliWrap simply allows us to add arguments to commands very easy .WithArguments(fun args -> args.Add(entryPoint) |> addEsExternals config.externals |> addIsBundle config.bundle |> addTarget config.target |> addDefaultFileLoaders fileLoaders |> addMinify config.minify |> addFormat config.format |> addInjects config.injects |> addOutDir outDir |> ignore)
the CLI API from esbuild is pretty simple to be honest and is really effective when it comes to transpilation the benefits are that it not just transpiles Javascript, it also transpiles typescript, jsx and tsx files. Adding to those features esbuild is blazing fast.
Transpilation on the fly
The dev server not only needs to serve JS content to the browser, often it needs to serve Typescript/JSX/TSX as well, and as we found earlier in the post if you serve static content your options for transforming or manipulating these request are severely limited, so I had to make particular middlewares to enable compiling single files on the fly.
let's check a little bit how these are somewhat laid out on Perla
[<RequireQualifiedAccess>] module Middleware = // this function helps us determine a particular extension is in the request path // if it is we will use one of the middlewares below on the calling site. let transformPredicate (extensions: string list) (ctx: HttpContext) = ... let cssImport (mountedDirs: Map<string, string>) (ctx: HttpContext) (next: Func<Task>) = ... let jsonImport (mountedDirs: Map<string, string>) (ctx: HttpContext) (next: Func<Task>) = ... let jsImport (buildConfig: BuildConfig option) (mountedDirs: Map<string, string>) (ctx: HttpContext) (next: Func<Task>) = task { let logger = ctx.GetLogger("Perla Middleware") if // for the moment, we just serve the JS as is and don't process it ctx.Request.Path.Value.Contains("~perla~") || ctx.Request.Path.Value.Contains(".js") |> not then return! next.Invoke() else let path = ctx.Request.Path.Value logger.LogInformation($"Serving {path}") let baseDir, baseName = // check if we're actually monitoring this directory and this file extension mountedDirs |> Map.filter (fun _ v -> String.IsNullOrWhiteSpace v |> not) |> Map.toSeq |> Seq.find (fun (_, v) -> path.StartsWith(v)) // find the file on disk let filePath = let fileName = path.Replace($"{baseName}/", "", StringComparison.InvariantCulture) Path.Combine(baseDir, fileName) // we will serve javascript regardless of what we find on disk ctx.SetContentType "text/javascript" try if Path.GetExtension(filePath) <> ".js" then return failwith "Not a JS file, Try looking with another extension." // if the file exists on disk // and has a js extension then just send it as is // the browser should be able to interpret it let! content = File.ReadAllBytesAsync(filePath) do! ctx.WriteBytesAsync content :> Task with | ex -> let! fileData = Esbuild.tryCompileFile filePath buildConfig match fileData with | Ok (stdout, stderr) -> if String.IsNullOrWhiteSpace stderr |> not then // In the SSE code, we added (later on) // an observer for compilation errors and send a message to the client, // this should trigger an "overlay" on the client side Fs.PublishCompileErr stderr do! ctx.WriteBytesAsync [||] :> Task else // if the file got compiled then just write the file to the body // of the request let content = Encoding.UTF8.GetBytes stdout do! ctx.WriteBytesAsync content :> Task | Error err -> // anything else, just send a 500 ctx.SetStatusCode 500 do! ctx.WriteTextAsync err.Message :> Task } :> Task let configureTransformMiddleware (config: FdsConfig) (appConfig: IApplicationBuilder) = let serverConfig = defaultArg config.devServer (DevServerConfig.DefaultConfig()) let mountedDirs = defaultArg serverConfig.mountDirectories Map.empty appConfig .Use(Func<HttpContext, Func<Task>, Task>(jsonImport mountedDirs)) .Use(Func<HttpContext, Func<Task>, Task>(cssImport mountedDirs)) .Use( Func<HttpContext, Func<Task>, Task>(jsImport config.build mountedDirs) ) |> ignore
It is a pretty simple module (I want to think) that only has some functions that deal with the content of the files and return any compiled result if neededm otherwise just send the file.
Now... Let's take a look at the magic behind
let! fileData = Esbuild.tryCompileFile filePath buildConfig to be honest I didn't really know what I was doing, the main line of thought was just to try and find the content on disk and try the next extension if it didn't work. Hah! well
let tryCompileFile filepath config = taskResult { let config = (defaultArg config (BuildConfig.DefaultConfig())) // since we're using // FsToolkit.ErrorHandling if the operation fails it will // "early return" meaning it won't continue the success path let! res = Fs.tryReadFile filepath let strout = StringBuilder() let strerr = StringBuilder() let (_, loader) = res let cmd = buildSingleFileCmd config (strout, strerr) res // execute esbuild on the file do! (cmd.ExecuteAsync()).Task :> Task let strout = strout.ToString() let strerr = strerr.ToString() let strout = match loader with | Jsx | Tsx -> try // if the file needs injects (e.g automatic "import React from 'react'" in JSX files) let injects = defaultArg config.injects (Seq.empty) |> Seq.map File.ReadAllText // add those right here let injects = String.Join('\n', injects) $"{injects}\n{strout}" with | ex -> printfn $"Perla Serve: failed to inject, {ex.Message}" strout | _ -> strout // return the compilation results // the transpiled output and the error if any return (strout, strerr) }
Surely thats a lot of things to do for a single file, I'm sure it must be quite slow right? Well... It turns out that .NET and Go are quite quite feaking fast
Angel Munoz@angel_d_munoz
.NET6 is really fast even for my not optimized sloppy #fsharp code
I'm doing a bunch of IO/async operations without peformance in mind plus my weird mental logic and yet both esbuild transform + middleware stuff + IO + tasks
and yet each request takes between 10-20ms03:43 AM - 03 Oct 2021
each request takes around 10-20ms and I'm pretty sure it can be improved once the phase of heavy development settles down and the code base stabilizes a little bit more.
Dev Proxy
This one is pretty new, a dev proxy is somewhat necessary specially when you will host your applications on your own server so you are very likely to have URLs like
/api/my-endpoint rather than it also helps you target different environments with a single configuration change, in this case it was not really complex thanks to @Yaurthek who hinted at me one Yarp implementation of a dev proxy, so I ended up basing my work on that.
The whole idea here is to read a json file with some
origin -> target mappings and then just adding a proxy to the server application.
let private getHttpClientAndForwarder () = // this socket handler is actually disposable // but since technically I will only use one in the whole application // I won't need to dispose it let socketsHandler = new SocketsHttpHandler() socketsHandler.UseProxy <- false socketsHandler.AllowAutoRedirect <- false socketsHandler.AutomaticDecompression <- DecompressionMethods.None socketsHandler.UseCookies <- false let client = new HttpMessageInvoker(socketsHandler) let reqConfig = ForwarderRequestConfig() reqConfig.ActivityTimeout <- TimeSpan.FromSeconds(100.) client, reqConfig let private getProxyHandler (target: string) (httpClient: HttpMessageInvoker) (forwardConfig: ForwarderRequestConfig) : Func<HttpContext, IHttpForwarder, Task> = // this is actually using .NET6 Minimal API's from asp.net! let toFunc (ctx: HttpContext) (forwarder: IHttpForwarder) = task { let logger = ctx.GetLogger("Perla Proxy") let! error = forwarder.SendAsync(ctx, target, httpClient, forwardConfig) // report the errors to the log as a warning // since we don't need to die if a request fails if error <> ForwarderError.None then let errorFeat = ctx.GetForwarderErrorFeature() let ex = errorFeat.Exception logger.LogWarning($"{ex.Message}") } :> Task Func<HttpContext, IHttpForwarder, Task>(toFunc)
And then somewher inside the aspnet application configuration
match getProxyConfig with | Some proxyConfig -> appConfig .UseRouting() .UseEndpoints(fun endpoints -> let (client, reqConfig) = getHttpClientAndForwarder () // for each mapping add the url add an endpoint for (from, target) in proxyConfig |> Map.toSeq do let handler = getProxyHandler target client reqConfig endpoints.Map(from, handler) |> ignore) | None -> appConfig
That's it! At least on my initial testing it seems to work fine, I would need to have some feedback on the feature to know if this is actually working for more complex use cases.
Future and Experimental things
What you have seen so far (and some other minor features) are already inside Perla, they are working and they try to provide you a seamless experience for building Single Page Applications however there are still missing pieces for a complete experience. For example Perla doesn't support Sass or Less at the moment and Sass is a pretty common way to write styles on big frontend projects, we are not able to parse out
.Vue files or anything else that is not HTML/CSS/JS/TS/JSX/TSX, We do support HMR for CSS files since that is not a complex mechanism but, HMR for JS/TS/JSX/TSX files is not there yet sady. Fear not that We're looking for a way to provide these at some point in time.
Plugins
I'm a fan of
.fsx files, F# scripts are pretty flexible and since F# 5.0 they are even more powerful than ever allowing you to pull dependencies directly from NuGet without any extra command.
The main goal for the author and user experiences is somewhat like this
As an Author:
- Write an
.fsxscript
- Upload it to gist/github
- Profit
As a User:
- Add an entry yo tour "plugins" section
- Profit
Implementation details are more complex though...
My vision is somewhere along the following lines
- get a request for a different file e.g. sass files
- if the file is not part of the default supported extensions
- Call a function that will parse the content of that file
- get the transpiled content or the compilation error (just like we saw above with the js middleware)
- return the valid HTML/CSS/JS content to the browser
To get there I want to leverage the [FSharp.Compiler.Services] NuGet Package to start an F# interactive session that runs over the life of the server,
- Start the server, also if there are plugins in the plugin section, start the fsi session.
- load the plugins, download them to a known location in disk, or even just get the strings without downloading the file to disk
- execute the contents on the fsi session and grab a particular set of functions
- These functions can be part of a life cycle which may possible be something like
- on load // when HMR is enabled
- on change // when HMR is enabled
- on transform // when the file is requested
- on build // when the production build is executed
- call the functions in the plugins section whenever needed
Starting an FSI session is not a complex task let's take a look.
let's say we have the following script:
#r "nuget: LibSassHost, 1.3.3" #r "nuget: LibSassHost.Native.linux-x64, 1.3.3" open System.IO open LibSassHost let compileSassFile = let _compileSassFile (filePath: string) = let filename = Path.GetFileName(filePath) let result = SassCompiler.Compile(File.ReadAllText(filePath)) [|filename; result.CompiledContent |]
In this file we're able to provide a function that when given a file path, it will try to compile a
.scss file into it's
.css equivalent, to be able to execute that in Perla, we need a module that does somewhat like this:
#r "nuget: FSharp.Compiler.Service, 41.0.1" open System open System.IO open FSharp.Compiler.Interactive.Shell module ScriptedContent = let tryGetSassPluginFunction (content: string): (string -> string) option = let defConfig = FsiEvaluationSession.GetDefaultConfiguration() let argv = [| "fsi.exe" "--noninteractive" "--nologo" "--gui-" |] use stdIn = new StringReader("") use stdOut = new StringWriter() use stdErr = new StringWriter() use session = FsiEvaluationSession.Create(defConfig, argv, stdIn, stdOut, stdErr, true) session.EvalInteractionNonThrowing(content) |> ignore match session.TryFindBoundValue "compileSassFile" with | Some bound -> // If there's a value with that name on the script try to grab it match bound.Value.ReflectionValue with // ensure it fits the signature we are expecting | :? FSharpFunc<string, string> as compileSassFile -> Some compileSassFile | _ -> None | None -> None let content = File.ReadAllText("./path/to/sass-plugin.fsx") // this is where it get's nice, we can also fetch the scritps from the cloud // let! content = Http.getFromGithub("AngelMunoz/Perla.Sass") match ScriptedContent.tryGetSassPluginFunction(content) with | Some plugin -> let css = plugin "./path/to/file.scss" printfn $"Resulting CSS:\n{css}" | None -> printfn "No plugin was found on the script"
This is more-less what I have in mind, it has a few downsides though
- Convention based naming
- Badly written plugins might leak memory or make Perla's performance to slow down
- Script distribution is a real concern, there's no clear way to do it as of now
- Security concerns when executing code with Perla's permissions on the user's behalf
And many others that I might not be looking after.
Being able to author plugins and process any kind of file into something Perla can use to enhance the consumer experience is just worth it though, for example just look at the vast amount of webpack and vite plugins. The use cases are there for anyone to fulfill them .
HMR
This is the golden apple I'm not entirely sure how to tackle...
There's an HMR spec that I will follow for that since that's what snowpack/vite's HMR is based on, libraries like Fable.Lit, or Elmish.HMR are working towards being compatible with vite's HMR, so if Perla can make it work like them, then we won't even need to write any specific code for Perla.
I can talk however of CSS HMR, This is a pretty simple change to support given that CSS changes are automatically propagated in the browser, it basically does half of the HMR for us.
Perla does the following:
- Sees
import "./app.css
- Runs the
cssImportmiddleware function I hinted at earlier and returns a ~CSS~ Javascript file that injects a script tag on the head of the page.
let cssImport (mountedDirs: Map<string, string>) (ctx: HttpContext) (next: Func<Task>) = task { // skip non-css files if ctx.Request.Path.Value.Contains(".css") |> not then return! next.Invoke() else let logger = ctx.GetLogger("Perla Middleware") let path = ctx.Request.Path.Value let baseDir, baseName = mountedDirs |> Map.filter (fun _ v -> String.IsNullOrWhiteSpace v |> not) |> Map.toSeq |> Seq.find (fun (_, v) -> path.StartsWith(v)) let filePath = let fileName = path.Replace($"{baseName}/", "", StringComparison.InvariantCulture) Path.Combine(baseDir, fileName) logger.LogInformation("Transforming CSS") let! content = File.ReadAllTextAsync(filePath) // return the JS code to insert the CSS content in a style tag let newContent = $""" const css = `{content}` const style = document.createElement('style') style.innerHTML = css style.setAttribute("filename", "{filePath}"); document.head.appendChild(style)""" ctx.SetContentType "text/javascript" do! ctx.WriteStringAsync newContent :> Task } :> Task
In the SSE handler function we observe for file changes in disk and depending on the content we do the corresponding update
watcher.FileChanged |> Observable.map (fun event -> task { match Path.GetExtension event.name with | Css -> // a CSS file was changed, read all of the content let! content = File.ReadAllTextAsync event.path let data = Json.ToTextMinified( {| oldName = event.oldName |> Option.map (fun value -> match value with | Css -> value | _ -> "") name = event.path content = content |} ) // Send the SSE Message to the client with the new CSS content do! res.WriteAsync $"event:replace-css\ndata:{data}\n\n" return! res.Body.FlushAsync() | Typescript | Javascript | Jsx | Json | Other _ -> //... other content ... }) |> Observable.switchTask |> Observable.subscribe ignore
To handle these updates we use two cool things, WebWorkers and a simple scripts, the live reload script has this content
// initiate worker const worker = new Worker("/~perla~/worker.js"); // connect to the SSE endpoint worker.postMessage({ event: "connect" }); function replaceCssContent({ oldName, name, content }) { const css = content?.replace(/(?:\\r\\n|\\r|\\n)/g, "\n") || ""; const findBy = oldName || name; // find the style tag with the particular name const style = document.querySelector(`[filename="${findBy}"]`); if (!style) { console.warn("Unable to find", oldName, name); return; } // replace the content style.innerHTML = css; style.setAttribute("filename", name); } function showOverlay({ error }) { console.log("show overlay"); } // react to the worker messages worker.addEventListener("message", function ({ data }) { switch (data?.event) { case "reload": return window.location.reload(); case "replace-css": return replaceCssContent(data); case "compile-err": return showOverlay(data); default: return console.log("Unknown message:", data); } });
Inside our Worker the code is very very similar
let source; const tryParse = (string) => { try { return JSON.parse(string) || {}; } catch (err) { return {}; } }; function connectToSource() { if (source) return; //connect to the SSE endpoint source = new EventSource("/~perla~/sse"); source.addEventListener("open", function (event) { console.log("Connected"); }); // react to file reloads source.addEventListener("reload", function (event) { console.log("Reloading, file changed: ", event.data); self.postMessage({ event: "reload", }); }); // if the server sends a `replace-css` event // notify the main thread about it // Yes! web workers run on background threads! source.addEventListener("replace-css", function (event) { const { oldName, name, content } = tryParse(event.data); console.log(`Css Changed: ${oldName ? oldName : name}`); self.postMessage({ event: "replace-css", oldName, name, content, }); }); source.addEventListener("compile-err", function (event) { const { error } = tryParse(event.data); console.error(error); self.postMessage({ event: "compile-err", error, }); }); } self.addEventListener("message", function ({ data }) { if (data?.event === "connect") { connectToSource(); } });
And that's how the CSS HMR works in Perla and it is instant, in less than a blink of an eye! Well... maybe not but pretty close to it.
For the JS side I'm still not sure how this will work given that I might need to have a mapping in both sides of the files I have and what is their current version.
What's next?
Whew! That was a lot! but shows how to build each part of the Webpack alternative I've been working on Called Perla there are still some gaps though
- Project Scaffolding
This will be an important step for adoption I believe, generating certain files, or even starter projects to reduce the onboarding complexity is vital, so this is likely the next step for me (even before the HMR)
- Unit/E2E Testing
Node based test runners won't work naturally since we're not using node! So this is an area to investigate, for E2E I already have the thought of using playwright, for unit tests I'm not sure yet but I guess I'd be able to pick something similar or simply have a test framework that runs entirely on the browser.
- Import map ergonomics
Some times, you must edit by hand the import map (
perla.jsonc.lock) to get dependencies like
import fp from 'lodash/fp' with the import maps the browser knows what to do with
lodash but not
lodash/fp so an edit must be made, this requires you to understand how these dependencies work and how you need to write the import map, it's an area I'd love to make as simple as possible
- Typescript Types for dependencies
Typescript (and related Editors/IDEs like vscode and webstorm) rely on the presence of node_modules to pick typings from disk, It would be nice if typescript worked with the URI style imports for typings that would fix a lot of issues.
- Library/Build only mode
There might be certain cases where you would like to author a library either for you and your team, perhaps you only need the JS files and a package.json to share the sources, while not a priority it's something it's worth looking at.
- Improve the Install story
The goal is run a single line command, get your stuff installed in place regardless of if you have .NET or not
Closing thoughts...
So those are some of the things I might have on my plate for next, of course if I receive feedback on this project I may prioritize some things over the others but rather than doing it all for myself, I wish to share this and make it a community effort to continue to improve the Frontend tooling story that doesn't rely on complex patterns and extremely weird and cryptic errors, hundreds of hours spent in configuration, something as simple that you feel confident enough to trust and use :)
by the way, this is the repository in question :)
AngelMunoz / PerlaAngelMunoz / Perla
A cross-platform tool for unbundled front-end development that doesn't depend on Node or requires you to install a complex toolchain
Perla Dev Server
Check the samples
Perla is a cross-platform single executable binary CLI Tool for a Development Server of Single Page Applications.
If that sounds like something nice, Check The docs!
Status
This project is in development, current goals at this point are:
- Remove npm/node out of the equation.
- For F# users, seamless fable integration.
- A Fast and easy to use Development server
- Build for production using esbuild.
- Binary Release for users outside .NET
- HMR (for other than CSS)
- Plugin System
For more information check the Issues tab.
Existing tools
If you actually use and like nodejs, then you would be better taking a look at the tools that inspired this repository
These tools have a bigger community and rely on an even bigger ecosystem plus they support plugins via npm so if you're using node stick with them they are a better choice Perla's unbundled…
And with all due respect, I really thank the maintainers of snowpack,esbuild, and vite who make an incredible job at reducing the complexity of frontend tooling as well, they inspired me AND if you're a loving node user please look at their projects ditch webpack enough so they also reflect back and simplify their setups!
For the .NET community I wish to spark a little of interest to look outside and build tooling for other communities, I think it is a really nice way to introduce .NET with the rest of the world and new devs be it with F#, C# or VB, I think .NET is an amazing platform for that. This has been a really good learning exercise and at the same time a project I believe in so, I will try to spend more time and push it to as much as I can.
I'll see you on the next one :)
Discussion (8)
Very very nice work !
I give a try. Perla doc is clear and this article also.
For lit, I'll love to have a js file (module) by component and also a way to dev fast with HMR.
Without the need to config two differents projects ....
👍🏻
Thanks for the kind words!
since Lit is just javascript you can use it as is take a look at this template
I don't understand which two different projects?
2 projects like
I will take a serious look at all the template/features 👍🏻
Library mode is something that I have an issue open about, it might be worth considering but since the dependencies you make with Perla are from a CDN I'm not sure if it makes sense for libraries in the long run I think npm is better there (at leas for now) but would be great if you give feedback about your use case in the Perla repo
If you have something like a mono repo e.g
mounted directoriesoption in
perla.jsoncyou could mount that extra directory into a particular route and try to import things from there the setting would be
and your code would import it like
import {some} from '/common/some.js
Amazing body of work! Thanks for this!
Thank you for the kind words 😌!
Just wow! I really appreciate also, that you are sharing the process of making it.
It has taken its time and effort for sure , I'd just like to see people picking F# for anything else rather than just "business"! hence why I share this hopefully the message is "F# is not just for old, boring stuff" | https://dev.to/tunaxor/building-a-webpack-alternative-in-f-4p0f | CC-MAIN-2022-05 | refinedweb | 7,316 | 55.34 |
opencv 4.5.0 cvDiv/cvMat equivalent ?
LE:
System I updated from OpenCV-3.1-android-sdk to OpenCV-4.5.0-android-sdk
I am using these includes:
#include <opencv2/core/core.hpp> #include <opencv2/core/core_c.h> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp>
In my cpp file was using it like this (So basically I had this code working in openCV 3.1.0):
Mat mat1 = new Mat(..); Mat mat2 = new Mat(..); CvMat src = mat1; CvMat mask = mat2; cvDiv(&src, &mask, &mask, 256);
In openCV 4.5.0 I get this error:
error: no viable conversion from 'cv::Mat' to 'CvMat '
If I use this instead
Mat mat1 = new Mat(..); Mat mat2 = new Mat(..); Mat src = mat1; Mat mask = mat2; cvDiv(&src, &mask, &mask, 256);
I get this error at runtime:
cv::error(): OpenCV(4.5.0) Error: Bad argument (Unknown array type) in cvarrToMat, file /build/master_pack-android/opencv/modules/core/src/matrix_c.cpp, line 185
I am actually asking: How can I convert this code to run with OpenCV 4.5.0. - I could use divide instead of cvDiv (as @sturkmen stated) - but what arguments do I feed to that function ? I cannot feed Mat, I cannot use CvMat.. PS: evidently I am clueless in C since I am just an android dev
see...
Can you check my revised question please ? | https://answers.opencv.org/question/237974/opencv-450-cvdivcvmat-equivalent/?sort=oldest | CC-MAIN-2022-40 | refinedweb | 229 | 70.6 |
Is there a way in JAVA , for two JVMs (running on same physical machine), to use/share same mermory address space . Suppose a producer in JVM1 puts msgs @ a particular pre-defined memory location, can the consumer on JVM2 retrive the msg if he knows which memory location he needs look.
The best solution in my opinion is to use memory mapped files. This allows you to share a region of memory between any number of process, including other non java programs. You can't place java objects into a memory mapped file, unless you serialize them. The following example shows that you can communicate between two different process, but you would need to make it much more sophisticated to allow better communication between the processes. I suggest you look at Java's NIO package, specifically the classes and methods used in the below examples.
Server:
public class Server {(); char[] string = "Hello client\0".toCharArray(); charBuf.put( string ); System.out.println( "Waiting for client." ); while( charBuf.get( 0 ) != '\0' ); System.out.println( "Finished waiting." ); } }
Client:
public class Client {(); // Prints 'Hello server' char c; while( ( c = charBuf.get() ) != 0 ) { System.out.print( c ); } System.out.println(); charBuf.put( 0, '\0' ); } }
Another solution is to use Java Sockets to communicate back and forth between processes. This has the added benefit of allowing communication over a network very easily. It could be argued that this is slower than using memory mapped files, but I do not have any benchmarks to back that statement up. I won't post code to implementing this solution, as it can become very complicated to implement a reliable network protocol and is fairly application specific. There are many good networking sites that can be found with quick searches.
Now the above examples are if you want to share memory between two different process. If you just want to read/write to arbitrary memory in the current process, there are some warnings you should know first. This goes against the entire principle of the JVM and you really really should not do this in production code. You violate all safety and can very easily crash the JVM if you are not very careful.
That being said, it is quite fun to experiment with. To read/write to arbitrary memory in the current process you can use the
sun.misc.Unsafe class. This is provided on all JVMs that I am aware of and have used. An example on how to use the class can be found here. | https://codedump.io/share/JJ0MT7gjIW37/1/shared-memory-between-two-jvms | CC-MAIN-2017-13 | refinedweb | 418 | 56.05 |
or Brett Cannon’s Why Python 3 exists.
For help with porting, you can email the python-porting mailing list with questions.
The Short Explanation¶
To make your project be single-source Python 2/3 compatible, the basic steps are:
- Only worry about supporting Python 2.7
- Make sure you have good test coverage (coverage.py can help; pip install coverage)
- Learn the differences between Python 2 & 3
- Use Futurize (or Modernize) to update your code (e.g. pip install future)
- Use Pylint to help make sure you don't regress on your Python 3 support (pip install pylint)
- Use caniusepython3 to find out which of your dependencies are blocking your use of Python 3 (pip install caniusepython3)
- Once your dependencies are no longer blocking you, use continuous integration to make sure you stay compatible with Python 2 & 3 (tox can help test against multiple versions of Python; pip install tox)
- Consider using optional static type checking to make sure your type usage works in both Python 2 & 3 (e.g. use mypy to check your typing under both Python 2 & Python 3).
Details¶
A key point about supporting Python 2 & 3 simultaneously is that you can start today! Even if your dependencies are not supporting Python 3 yet that does not mean you can’t modernize your code now to support Python 3. Most changes required to support Python 3 lead to cleaner code using newer practices even in Python 2 code.
Another key point is that modernizing your Python 2 code to also support Python 3 is largely automated for you. While you might have to make some API decisions thanks to Python 3 clarifying text data versus binary data, the lower-level work is now mostly done for you and thus can at least benefit from the automated changes immediately.
Keep those key points in mind while you read on about the details of porting your code to support Python 2 & 3 simultaneously.
Drop support for Python 2.6 and older¶
While you can make Python 2.5 work with Python 3, it is much easier if you only have to work with Python 2.7. If dropping Python 2.5 is not an option then the six project can help you support Python 2.5 & 3 simultaneously (pip install six). Do realize, though, that nearly all the projects listed in this HOWTO will not be available to you.
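As a quick illustration of what six offers, it exposes aliases that resolve to the right type on each version; the function below is purely illustrative:

```python
import six  # pip install six

def is_text(value):
    # six.string_types is basestring on Python 2 and (str,) on Python 3,
    # so this isinstance check behaves sensibly on both versions.
    return isinstance(value, six.string_types)

print(is_text(u"spam"))  # True on Python 2 and Python 3
```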
If you are able to skip Python 2.5 and older, then the required changes to your code should continue to look and feel like idiomatic Python code. At worst you will have to use a function instead of a method in some instances or have to import a function instead of using a built-in one, but otherwise the overall transformation should not feel foreign to you.
But you should aim for only supporting Python 2.7. Python 2.6 is no longer freely supported and thus is not receiving bugfixes. This means you will have to work around any issues you come across with Python 2.6. There are also some tools mentioned in this HOWTO which do not support Python 2.6 (e.g., Pylint), and this will become more commonplace as time goes on. It will simply be easier for you if you only support the versions of Python that you have to support.
Make sure you specify the proper version support in your setup.py file¶
In your setup.py file you should have the proper trove classifier specifying what versions of Python you support. As your project does not support Python 3 yet you should at least have Programming Language :: Python :: 2 :: Only specified. Ideally you should also specify each major/minor version of Python that you do support, e.g. Programming Language :: Python :: 2.7.
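A sketch of what this looks like in a setup.py; the project metadata below is made up, while the classifier strings are the standard trove classifiers:

```python
from setuptools import setup

setup(
    name="example-project",  # illustrative metadata, not a real project
    version="1.0",
    classifiers=[
        # Before the port, declare Python 2 only:
        "Programming Language :: Python :: 2 :: Only",
        "Programming Language :: Python :: 2.7",
        # Once you support Python 3, swap in classifiers such as:
        # "Programming Language :: Python :: 3",
    ],
)
```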
Have good test coverage¶
Once you have your code supporting the oldest version of Python 2 you want it to, you will want to make sure your test suite has good coverage. A good rule of thumb is to have enough coverage that you can be confident any failures which appear after the tools rewrite your code are actual bugs in the tools and not in your code. If you want a number to aim for, try to get over 80% coverage (and don't feel bad if you find it hard to get better than 90% coverage). If you don't already have a tool to measure test coverage then coverage.py is recommended.
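As a minimal sketch, coverage.py can also be driven from Python rather than the command line; the test-running step in the middle is a placeholder for your own suite:

```python
import coverage  # pip install coverage

cov = coverage.Coverage()
cov.start()

# Run your test suite here, for example:
# import unittest
# unittest.main(module="tests", exit=False)

cov.stop()
cov.save()
cov.report()  # prints a per-file coverage summary to stdout
```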
Learn the differences between Python 2 & 3¶
Once you have your code well-tested you are ready to begin porting your code to Python 3! But to fully understand how your code is going to change and what you want to look out for while you code, you will want to learn what changes Python 3 makes in terms of Python 2. Typically the two best ways of doing that is reading the “What’s New” doc for each release of Python 3 and the Porting to Python 3 book (which is free online). There is also a handy cheat sheet from the Python-Future project.
Update your code¶

Once you feel like you know what is different in Python 3 compared to Python 2, it's time to update your code! You have a choice between two tools for porting your code automatically: Futurize and Modernize. Which tool you choose will depend on how much like Python 3 you want your code to be. Futurize does its best to make Python 3 idioms and practices exist in Python 2, e.g. backporting the bytes type from Python 3 so that you have semantic parity between the major versions of Python. Modernize, on the other hand, is more conservative and targets a Python 2/3 subset of Python, directly relying on six to help provide compatibility.
Regardless of which tool you choose, they will update your code to run under Python 3 while staying compatible with the version of Python 2 you started with. Depending on how conservative you want to be, you may want to run the tool over your test suite first and visually inspect the diff to make sure the transformation is accurate. After you have transformed your test suite and verified that all the tests still pass as expected, then you can transform your application code knowing that any tests which fail is a translation failure.
Unfortunately the tools can't automate everything to make your code work under Python 3, and so there are a handful of things you will need to update manually to get full Python 3 support (which of these steps are necessary varies between the tools). Read the documentation for the tool you choose to use to see what it fixes by default and what it can do optionally to know what will (not) be fixed for you and what you may have to fix on your own (e.g. using io.open() over the built-in open() function is off by default in Modernize). Luckily, though, there are only a couple of things to watch out for which can be considered large issues that may be hard to debug if not watched for.
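To make the flavor of these rewrites concrete, here is a before/after sketch of the kind of change the tools apply; exact output depends on the tool and the fixers you enable, so this is not verbatim Futurize or Modernize output:

```python
# Python 2-only idioms:
#     print "processing", name
#     if d.has_key(name): ...

# Single-source Python 2/3 result:
from __future__ import print_function

d = {"spam": 1}
name = "spam"

print("processing", name)  # print is a function in both versions now
if name in d:              # has_key() is gone in Python 3
    print("found it")
```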
Division¶
In Python 3, 5 / 2 == 2.5 and not 2; all division between int values results in a float. This change has actually been planned since Python 2.2, which was released in 2002. Since then users have been encouraged to add from __future__ import division to any and all files which use the / and // operators, or to run the interpreter with the -Q flag. If you have not been doing this then you will need to go through your code and do two things:
- Add
from __future__ import divisionto your files
- Update any division operator as necessary to either use
//to use floor division or continue using
/and expect a float
The reason that
/ isn’t simply translated to
// automatically is that if
an object defines a
__truediv__ method but not
__floordiv__ then your
code would begin to fail (e.g. a user-defined class that uses
/ to
signify some operation but not
// for the same thing or at all).
Text versus binary data¶
In Python 2 you could use the
str type for both text and binary data.
Unfortunately this confluence of two different concepts could lead to brittle
code which sometimes worked for either kind of data, sometimes not. It also
could lead to confusing APIs if people didn’t explicitly state that something
that accepted
str accepted either text or binary data instead of one
specific type. This complicated the situation especially for anyone supporting
multiple languages as APIs wouldn’t bother explicitly supporting
unicode
when they claimed text data support. deals only with text or only binary data, this separation doesn’t pose an issue. But for code that has to deal with both, it does mean you might have to now care about when you are using text compared to binary data, which is why this cannot be entirely automated.
To start, you will need to decide which APIs take text and which take binary
(it is highly recommended you don’t design APIs that can take both due to
the difficulty of keeping the code working; as stated earlier it is difficult to
do well). In Python 2 this means making sure the APIs that take text can work
with
unicode and those that work with binary data work with the
bytes type from Python 3 (which is a subset of
str in Python 2 and acts
as an alias for
bytes type in Python 2). Usually the biggest issue is
realizing which methods exist on which types in Python 2 & 3 simultaneously
(for text that’s
unicode in Python 2 and
str in Python 3, for binary
that’s
str/
bytes in Python 2 and
bytes in Python 3). The following
table lists the unique methods of each data type across Python 2 & 3
(e.g., the
decode() method is usable on the equivalent binary data type in
either Python 2 or 3, but it can’t be used by the textual data type consistently
between Python 2 and 3 because
str in Python 3 doesn’t have the method). Do
note that as of Python 3.5 the
__mod__ method was added to the bytes type.
Making the distinction easier to handle can be accomplished by encoding and decoding between binary data and text at the edge of your code. This means that when you receive text in binary data, you should immediately decode it. And if your code needs to send text as binary data then encode it as late as possible. This allows your code to work with only text internally and thus eliminates having to keep track of what type of data you are working with.
The next issue is making sure you know whether the string literals in your code
represent text or binary data. You should add a
b prefix to any
literal that presents binary data. For text you should add a
u prefix to
the text literal. (there is a
__future__ import to force all unspecified
literals to be Unicode, but usage has shown it isn’t as effective as adding a
b or
u prefix to all literals explicitly)
As part of this dichotomy you also need to be careful about opening files. binary data to be read and/or written) or textual access
(allowing text data to be read and/or written). You should also use
io.open()
for opening files instead of the built-in
open() function as the
io
module is consistent from Python 2 to 3 while the built-in
open() function
is not (in Python 3 it’s actually
io.open()). Do not bother with the
outdated practice of using
codecs.open() as that’s only necessary for
keeping compatibility with Python 2.5.
The constructors of both
str and
bytes have different semantics for the
same arguments between Python 2 & 3. Passing an integer to
bytes in Python 2
will give you the string representation of the integer:
bytes(3) == '3'.
But in Python 3, an integer argument to
bytes will give you a bytes object
as long as the integer specified, filled with null bytes:
bytes(3) == b'\x00\x00\x00'. A similar worry is necessary when passing a
bytes object to
str. In Python 2 you just get the bytes object back:
str(b'3') == b'3'. But in Python 3 you get the string representation of the
bytes object:
str(b'3') == "b'3'".
Finally, the indexing of binary data requires careful handling (slicing does
not require any special handling). In Python 2,
b'123'[1] == b'2' while in Python 3
b'123'[1] == 50. Because binary data
is simply a collection of binary numbers, Python 3 returns the integer value for
the byte you index on. But in Python 2 because
bytes == str, indexing
returns a one-item slice of bytes. The six project has a function
named
six.indexbytes() which will return an integer like in Python 3:
six.indexbytes(b'123', 1).
To summarize:
- Decide which of your APIs take text and which take binary data
- Make sure that your code that works with text also works with
unicodeand code for binary data works with
bytesin Python 2 (see the table above for what methods you cannot use for each type)
- Mark all binary literals with a
bprefix, textual literals with a
uprefix
- Decode binary data to text as soon as possible, encode text as binary data as late as possible
- Open files using
io.open()and make sure to specify the
bmode when appropriate
- Be careful when indexing into binary data
Use feature detection instead of version detection¶
Inevitably you will have code that has to choose what to do based on what version of Python is running. The best way to do this is with feature detection of whether the version of Python you’re running under supports what you need. If for some reason that doesn’t work then you should make the version check be against Python 2 and not Python 3. To help explain this, let’s look at an example.
Let’s pretend that you need access to a feature of importlib that
is available in Python’s standard library since Python 3.3 and available for
Python 2 through importlib2 on PyPI. You might be tempted to write code to
access e.g. the
importlib.abc module by doing the following:
import sys if sys.version_info[0] == 3: from importlib import abc else: from importlib2 import abc
The problem with this code is what happens when Python 4 comes out? It would be better to treat Python 2 as the exceptional case instead of Python 3 and assume that future Python versions will be more compatible with Python 3 than Python 2:
import sys if sys.version_info[0] > 2: from importlib import abc else: from importlib2 import abc
The best solution, though, is to do no version detection at all and instead rely on feature detection. That avoids any potential issues of getting the version detection wrong and helps keep you future-compatible:
try: from importlib import abc except ImportError: from importlib2 import abc
Prevent compatibility regressions¶
Once you have fully translated your code to be compatible with Python 3, you will want to make sure your code doesn’t regress and stop working under Python 3. This is especially true if you have a dependency which is blocking you from actually running under Python 3 at the moment.
To help with staying compatible, any new modules you create should have at least the following block of code at the top of it:
from __future__ import absolute_import from __future__ import division from __future__ import print_function
You can also run Python 2 with the
-3 flag to be warned about various
compatibility issues your code triggers during execution. If you turn warnings
into errors with
-Werror then you can make sure that you don’t accidentally
miss a warning.
You can also use the Pylint project and its
--py3k flag to lint your code
to receive warnings when your code begins to deviate from Python 3
compatibility. This also prevents you from having to run Modernize or Futurize
over your code regularly to catch compatibility regressions. This does require
you only support Python 2.7 and Python 3.4 or newer as that is Pylint’s
minimum Python version support.
Check which dependencies block your transition¶
After you have made your code compatible with Python 3 you should begin to care about whether your dependencies have also been ported. The caniusepython3 project was created to help you determine which projects – directly or indirectly – are blocking you from supporting Python 3. There is both a command-line tool as well as a web interface at.
The project also provides code which you can integrate into your test suite so that you will have a failing test when you no longer have dependencies blocking you from using Python 3. This allows you to avoid having to manually check your dependencies and to be notified quickly when you can start running on Python 3.
Update your
setup.py file to denote Python 3 compatibility¶
Once your code works under Python 3, you should update the classifiers in
your
setup.py to contain
Programming Language :: Python :: 3 and to not
specify sole Python 2 support. This will tell anyone using your code that you
support Python 2 and 3. Ideally you will also want to add classifiers for
each major/minor version of Python you now support.
Use continuous integration to stay compatible¶
Once you are able to fully run under Python 3 you will want to make sure your code always works under both Python 2 & 3. Probably the best tool for running your tests under multiple Python interpreters is tox. You can then integrate tox with your continuous integration system so that you never accidentally break Python 2 or 3 support.
You may also want to use the
-bb flag with the Python 3 interpreter to
trigger an exception when you are comparing bytes to strings or bytes to an int
(the latter is available starting in Python 3.5). By default type-differing
comparisons simply return
False, but if you made a mistake in your
separation of text/binary data handling or indexing on bytes you wouldn’t easily
find the mistake. This flag will raise an exception when these kinds of
comparisons occur, making the mistake much easier to track down.
And that’s mostly it! At this point your code base is compatible with both Python 2 and 3 simultaneously. Your testing will also be set up so that you don’t accidentally break Python 2 or 3 compatibility regardless of which version you typically run your tests under while developing.
Consider using optional static type checking¶
Another way to help port your code is to use a static type checker like mypy or pytype on your code. These tools can be used to analyze your code as if it’s being run under Python 2, then you can run the tool a second time as if your code is running under Python 3. By running a static type checker twice like this you can discover if you’re e.g. misusing binary data type in one version of Python compared to another. If you add optional type hints to your code you can also explicitly state whether your APIs use textual or binary data, helping to make sure everything functions as expected in both versions of Python. | http://docs.activestate.com/activepython/3.5/python/howto/pyporting.html | CC-MAIN-2018-09 | refinedweb | 3,176 | 66.57 |
Provided by: liballegro-doc_4.2.2-3_all
NAME
calc_spline - Calculates a series of values along a Bezier spline. Allegro game programming library.
SYNOPSIS
#include <allegro.h> void calc_spline(const int points[8], int npts, int *x, int *y);
DESCRIPTION
Calculates a series of npts values along a Bezier spline, storing them in the output x and y arrays. The Bezier curve is specified by the four x/y control points in the points array: points[0] and points[1] contain the coordinates of the first control point, points[2] and points[3] are the second point, etc. Control points 0 and 3 are the ends of the spline, and points 1 and 2 are guides. The curve probably won't pass through points 1 and 2, but they affect the shape of the curve between points 0 and 3 (the lines p0-p1 and p2-p3 are tangents to the spline). The easiest way to think of it is that the curve starts at p0, heading in the direction of p1, but curves round so that it arrives at p3 from the direction of p2. In addition to their role as graphics primitives, spline curves can be useful for constructing smooth paths around a series of control points, as in exspline.c.
SEE ALSO
spline(3alleg), exspline(3alleg) | http://manpages.ubuntu.com/manpages/precise/man3/calc_spline.3alleg.html | CC-MAIN-2019-30 | refinedweb | 218 | 69.92 |
Hi all,
I was getting bored by the space consuming red and green message bars in top of the screens (sorry Richard), therefor I made a small change to the code that displays those messages. Now there won't be any message in the HTML anymore, but an pop-up window will show the message.
For those interested, these are the required changes: In html/page.html I change this block:
<td> <p tal:error</p> <p tal:error</p> </td>
into (sorry about the HTML-escaping):
<disabled script tal: <disabled /script> <disabled script tal: <disabled /script>
In
extensions add the file
format_message.py with:
import re def formatMessage(message_list): '''format the message strings by replacing the <br> tags for line-feeds. ''' return re.sub(r'<br>', '\\\\n', '\n'.join(message_list)) def init(instance): instance.registerUtil('formatMessage', formatMessage)
You can easy choose when the alert window should pop-up, by placing the HTML part at an other location in the icing-macro in html/page.html. If you put it at the top, the alert window will pop-up first and after closing it, the HTML page appears. If you place it at the bottom, the HTML page appears first and when ready drawing it, the alert window will appear.
It's only :( that there's no way to change the icon in the alert window to make a clear difference between ok messages and error messages, but you might consider leaving the red block there in case of an error message. So only an alert window showing the ok message, but an alert and a red message block showing the error message (to be honest, that is how I implemented it in our tracker).
Regards, Marlon | http://www.mechanicalcat.net/tech/roundup/wiki/PopupResultMessages | crawl-001 | refinedweb | 287 | 66.37 |
16. Events and Binds in Tkinter
By Bernd Klein. Last modified: 16 Dec 2021.
Introduction
A Tkinter application runs most of its time inside an event loop, which is entered via the mainloop method. It waiting for events to happen. Events can be key presses or mouse operations by the user.
Tkinter provides a mechanism to let the programmer deal with events. For each widget, it's possible to bind Python functions and methods to an event.
widget.bind(event, handler)
If the defined event occurs in the widget, the "handler" function is called with an event object. describing the event.
#!/usr/bin/python3 # write tkinter as Tkinter to be Python 2.x compatible from tkinter import * def hello(event): print("Single Click, Button-l") def quit(event): print("Double Click, so let's stop") import sys; sys.exit() widget = Button(None, text='Mouse Clicks') widget.pack() widget.bind('<Button-1>', hello) widget.bind('<Double-1>', quit) widget.mainloop()
Let's have another simple example, which shows how to use the motion event, i.e. if the mouse is moved inside of a widget:
from tkinter import * def motion(event): print("Mouse position: (%s %s)" % (event.x, event.y)) return master = Tk() whatever_you_do = "Whatever you do will be insignificant, but it is very important that you do it.\n(Mahatma Gandhi)" msg = Message(master, text = whatever_you_do) msg.config(bg='lightgreen', font=('times', 24, 'italic')) msg.bind('<Motion>',motion) msg.pack() mainloop()
Every time we move the mouse in the Message widget, the position of the mouse pointer will be printed. When we leave this widget, the function motion() is not called anymore.
Events:
<modifier-type-detail>
The type field is the essential part of an event specifier, whereas the "modifier" and "detail" fields are not obligatory and are left out in many cases. They are used to provide additional information for the chosen "type". The event "type" describes the kind of event to be bound, e.g. actions like mouse clicks, key presses or the widget got the input focus. | https://python-course.eu/tkinter/events-and-binds-in-tkinter.php | CC-MAIN-2022-05 | refinedweb | 340 | 59.4 |
I have a flash app (SWF) running Flash 8 embedded in an HTML page. How do I get flash to reload the parent HTML page it is embedded in? I've tried ...
I'm not talking about BB-type text based but rich rpgs with spriting, etc.
With the current advancements in javascript for rich browser interfaces,
would it be possible to create semi-massive multiplayer online games
running ...
I have an html page with a flash movie embded in it, that flash movie contains a button. I want to pass the id of an element on the containing html ...
I am trying to loop a video and i am having some issues with this in flash. You can view the video here:
Here the specific code for the ...
I am having in issue with IE passing a string back into an swf using the EternalInterface class in Flash CS4.
I have an swf with the following code:
var externalString:String = ExternalInterface.call("IncomingJS")
I have flash resizing it's container div by using external interface to call the javascript:
function expandbottomNav() {
document.getElementById('bottomNav').style.height = '400px';
}
function ...
is it possible to change the playback speed of a flash object without having to recompile the flash object, i.e through the html attributes or javascript?
thanks in advance
stop();
import flash.external.ExternalInterface;
ExternalInterface.addCallback("quarter1call", quarter1call);
function quarter1call():void
{
gotoAndPlay(2);
}
What i'm trying to do: add an html/javascript page inside an air app made fully in flash (coded in AS3, almost no timeline used). Is it possible, is it wise to ...
I have a swf file which I created using Adobe Flash professional CS5.5, it's a basic animation of some birds with an audio, it is meant to reply every 5 seconds, ... | http://www.java2s.com/Questions_And_Answers/Javascript-HTML-CSS/flash/actionscript.htm | CC-MAIN-2013-20 | refinedweb | 299 | 65.22 |
This article is an entry in our Windows Azure Developer Challenge. Articles in this sub-section are not required to be full articles so care should be taken when voting. Create your free Azure Trial Account to Enter the Challenge.
The project is available at[^].
Windows Azure is a cloud computing platform and infrastructure. It provides both platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) models and supports many different programming languages (C#, C++, Java, JavaScript, Python, ...), tools (Visual Studio, Command Line, Git, Eclipse, ...) and frameworks (.NET, Zend, Node.js, ...), as well as different operating systems (Windows Server, SUSE, Ubuntu, OpenLogic, ...) for virtual machines. There are several reasons to pick Windows Azure instead of a classical web hosting. One reason is certainly the covering distribution of the data centers. The CDN nodes are currently located in 24 countries.
In this contribution to the "Windows Azure Developer Challenge" contest I will present all steps that have been required in order to develop a fully fledged cloud based application, that uses the scaling and load-balancing features of the Windows Azure platform. We will see how easy (or hard? hopefully not!) it is to set up a configuration that uses several key features of Microsoft's cloud provider:
Before we can go into the exact details of my idea (and the implementation), we should have a look at my Azure account.
The rules for this contest read: <quote>If you don't register you will not be eligible for the competition. Please ensure you sign up for your trial using this link so we can tell who's signed up.
That being said it is pretty obvious that one has to register. Following the given link we end up on the page windowsazure.com/en-us/pricing/free-trial (and some affiliate network parameter). The trial account would give us the following abilities for 3 months (for free):
That's pretty cool stuff! Here 750 compute hours per month is slightly above 31 days of raw computing power. This is enough to have one virtual machine running all the time (with actually doing some stuff - and not being idle or powered off). Also we get 10 web sites for free and one SQL server running the whole month. The storage as well as the CDN traffic data is also sufficient to have a quite powerful machine in the cloud.
Having logged in with my Microsoft account (formerly known as Microsoft passport, Live ID or Windows Live Account) an upgrade has been offered to me. Being a Microsoft MVP for Visual C# has the positive side of having a Microsoft MSDN and TechNET subscription. This also gives me a Windows Azure MSDN - Visual Studio Ultimate subscription on Windows Azure. This package has the following properties:
The changes are all marked with bold text. So I get more or less twice the computing power of the free trial, which is not bad. Let's go on to discuss my idea and the possible features of its upcoming implementation.
My project carries the name Azure WebState and represents an Azure based web statistic creater / data crawler. What does that mean? In the past month I've build a fully functional HTML5 and CSS3 parser. The project is about to be released (as open source), with a CodeProject article about to come. I tried to implement the full DOM (DOM Level-3, and partly DOM Level-4), which means that once an HTML document has been parsed, one is able to query elements with QuerySelector() or QuerySelectorAll(). Of course methods like GetElementById() and others are implemented as well.
QuerySelector()
QuerySelectorAll()
GetElementById()
How is this library useful for this project? Let's understand the big picture, before we go into details:
What I try to build is an MVC 4 webpage that works mostly with the Web API. Of course there is visible front-end, which uses part of the public available API and some of the only internal available API. The API can be used for various things:
A crawl list is a list of URLs, where the statistic is based on. The page will come with a pre-defined list of about 100-500 of the most popular webpages (including Amazon, Bing, CodeProject, Facebook, Google, Netflix, StackOverflow, Twitter, Wikipedia, YouTube ...), however, users can register on the page (e.g. to get an API key) and setup their own crawl list (which could be based on the pre-defined list, but does not have to be).
The requirement of crawling pages, parsing them and creating statistics upon their data is also reflected in the database architecture. Instead of just using a relational (SQL based) database, this project will actually use a SQL and a NoSQL database. This is the relation between the two:
While the relational database will store all relational data (like users and their crawl lists (one to many), crawl lists and their corresponding views (one to many), crawl lists and their entries (one to many), users and their settings (one to one) etc.), the NoSQL database will provide a kind of document storage.
We pick MongoDB for various reasons. A good reason is the availability of MongoDB on Windows Azure. Another reason is that MongoDB is based on JSON / BSON, with an in-built JavaScript API. This means that we are able to just return some queries directly from MongoDB to the client as raw JSON data.
The reason for picking a NoSQL database is explained quite fast: We will have a (text) blob for each crawl entry (maybe even more, if the history of a document is saved as well) (representing the HTML page), and (most probably) other (text) blobs as well (there could be zero to many CSS stylesheets attached to one document). So this is already not a fixed layout. The next reason is that the number of statistic entries might grow over time. In the beginning only the official statistics are gathered for one entry, however, one user could pick the same entry and request other statistics to be gathered as well. Therefore all in all we have to be able to do an easy expansion of data on a per-entry basis. This is not possible in relational database (in fact there are ways, but it is just not very efficient).
What's the purpose of the project? Crawling the web and creatig statistics about it. How many elements are on a webpage. What's the average size of a webpage. What are the request times and what is the average parsing time. All this data will be saved and will be made available.
There will be tons of statistics on the webpage (available for everyone) and everyone will be able to create an account (over OpenID) and create / publish his own crawl-list(s) with statistic views.
What kinds of statistics will be covered? This is actually very very open. Any statistic based on the HTML and CSS content of a webpage can be covered. Every user can set up other statistics to be determined. The pre-defined statistics include:
div
p
There will be also statistics that go across all entries, like the percentage of CSS class names (could be that a certain name is found a lot more often than others) or the most common media queries.
In theory (even though this highly unlikely to be implemented during the contest) I could also extend the database with a tag directory, which enables searching the crawled content.
How will I attack the challenge? I will start with a front-end that shows a webpage and contains already everything required. In the next step I create the SQL based relational database and wire up the webpage to it. Now it's time to set up a primary worker along with the MongoDB database. The primary worker will handle the union crawl-list (unification of all crawl-lists with distinct entries of course) and distribute the work among other works (load balancing and scalability).
In the final step I will polish the API and create a mobile access experience that allows to view the statistics offline and enables further abilities like notifications and more.
In this section I am going to discuss how I experienced (and hopefully mastered) the various challenges. I will present code, screenshots and helpful resources that I've found on my way to the cloud.
This was an easy one, since I just had to follow the link (given above or on the challenge page) and upgrade to my MSDN Azure subscription. Everything went smooth and my account has been active within 2 minutes.
Windows Azure makes me independent of constraints like a fixed hardware or software setup (if I need more computation power - I get it; if I need to run Linux for this life-saving tool - I power up a Linux VM). Azure provides the memory and computing power for scalable data-driven applications like WebState.
There are multiple ways to write and deploy webpages on Windows Azure. One of the best ways is to use ASP.NET MVC. On the one hand we can write the webpage with one of the most advanced and comfortable languages, C#, on the other side we get the best tooling available in form of Microsoft's Visual Studio.
I decided to go for a Single-Page Application with ASP.NET MVC 4. There are multiple reasons for picking this:
All in all if we go for the Single-Page Application project template we get a lot of benefits, which dramatically boost our development speed in this case.
The first thing I had to do was to reconfigure some of the default settings. I started with the AuthConfig.cs in the App_Start folder. This class defined in this file is used at startup to do some of the OAuth configuration. My code looks like the following:
public static void RegisterAuth()
{
OAuthWebSecurity.RegisterGoogleClient();
OAuthWebSecurity.RegisterMicrosoftClient(
clientId: /* ... */,
clientSecret: /* ... */);
OAuthWebSecurity.RegisterTwitterClient(
consumerKey: /* ... */,
consumerSecret: /* ... */);
OAuthWebSecurity.RegisterYahooClient();
}
In order to get those codes I had to register the webpage on the developer services of Microsoft and Twitter. Luckily there was a document available at the ASP.NET webpage, that had direct links for those services.
Doing the registration at the Twitter developer homepage looked like the following:
On the Microsoft homepage the procedure was quite similar, however, less obtrusive in my opinion. Here my input resulted in the following output from the webpage:
Now that everything was set up for doing OAuth I was ready to touch the provided models. This part of the competition does not yet involve the database (and we are still missing the VM, so no worker is available yet to produce statistic data), however, we can still do the whole relational mapping in code-first. This will be deployed using a Microsoft SQL express database without us caring much about it.
As already said - in this part of the competition we do not care yet about real statistics, MongoDB (that will be part of the next challenge, along with Microsoft SQL) or crawling the data in one or many worker instances.
Let's have a look at the models prepared for some of the statistic / view work.
Basically every user can have multiple views. Each view could have a unique API key assigned, or just the same. The API key is required only for external (i.e. API) access - if a user is logged in he can always access his views (even restricted ones).
Each view does have multiple statistic items, i.e. data that describes what kind of statistics to get from the given crawl list. This data will be described in a SQL similar language. There is much more behind this concept, however, I will explain part of it in the next section when we introduce MongoDB and in the fourth section on the worker / VM. Here I just want to point out that the crawl entries are also present in MongoDB, where the statistic fields for each crawl item are present. This is basically the union of all statistic fields for a given crawl item. We will see that MongoDB will be a perfect fit for the resulting kind of data.
Each user can also manage crawl lists. He could create new crawl lists or use existing (public or his own private) ones. He could also create new crawl lists based on existing (public or his own private) ones.
Since there is no worker (plus no document store in form of MongoDB) and all data depends highly on the worker the most dynamic part of the webpage will be left disabled for the moment.
After logging in to the Windows Azure Management center we just have to click on New at the bottom of the screen. Now we can go on webpage and just go for a quick creation. Entering the URL is all we need before the actual webpage is being set up:
The setup process might take a few seconds. While the webpage is being created a loading animation is shown. After the webpage is created we can go back to the Visual Studio and deploy our application.
For doing this efficiently we download a generated publish profile (from the Windows Azure Management center) and import it into Visual Studio. We could also publish the web application from FTP directly by setting up deployment credentials in the portal and pushing the application to Windows Azure from any FTP client, however, considering that we already use Visual Studio, why shouldn't we do it the easy way?
Finally we have everything in place! We can right click on the project in Visual Studio, select publish and choose to import the downloaded publishing profile in this dialog:
It is very important that everything, i.e. also the generated XML file for the web API documentation, is included in the project. If the file is just placed in the (right) directory, it won't be published. Only files that are included in the project will be published. This is, of course, also true for content files like images and others.
I do not publish the real easter egg (which is not that hard to find out), but I want to announce a little easter that I've build in. If you open the source code of the webpage you will see a comment that shows some ASCII art graphics, which is ... CodeProject's Bob (you guessed it)!
As far as responsive design goes: Right now the whole webpage has been created with desktop-first. This is a statistic homepage and meant for professional use - nothing about only consuming data. The last challenge will transform the public statistics into something quite usable for mobile devices. Here is where stylesheet extensions and manipulations, as well as some features of ASP.NET MVC 4 (like user-agent detection), will shine.
Of course there exist some helpful webpages that provide one or the other interesting tip regarding webpages with ASP.NET MVC (4), deploying webpages on Azure or others. I found the following resources quite helpful:
I already created a SQL database in the last challenge - just to support the login possibility on the webpage. In this section I will extend the database, perform additional configuration and install MongoDB.
Let's start by installing MongoDB. MongoDB will be the document store for the whole solution. If it is still unclear why the project uses MongoDB then the following list of arguments will probably help:
For Windows Azure we are using the standard MongoDB binaries. The code for these binaries is open source. When a MongoDB worker starts we have to do the following stops:
Of course we also need to perform some steps for stopping the service:
Challenges can be found in various areas. For instance debugging is not that easy. Also the IP potentially changes on reboot, since we do not have a fixed assigned machine (which is also the advantage of cloud computing). Keeping several sets of configurations in sync is also not that easy.
Luckily there is a good emulator that works great. Here cloud storage is emulated as the local mounted drives. When deploying MongoDB we have to do the following steps.
Most of this work is nowadays automated. The only decision we have to make is if we want to deploy to the platform-as-a-service or infrastructure-as-a-service. In the first case we have the choices of installing it on a Windows VM or a Linux VM. In the second case we do not care at all!
For using the Windows Azure command line utility, as well as installing a VM in form of MongoDB we will need a file with our publishing settings. This can be obtained from the Windows Azure Webpage (windows.azure.com/download/publishprofile.aspx).
Obviously (from a programmer's perspective) the choice between IaaS and PaaS does not matter much. Therefore we go for the PaaS solution, since there is a great (but, as we will see probably outdated?) installer tool available and we can (later on) adjust the OS to our needs. The upcoming OS adjustments are an important point, because they will allow us to have a much more direct connection to the running service. We will be able to clone complete VMs instead of messing around with multiple configurations. The installer is a command line utility (powershell script) that has to be run as an administrator.
In order to run the script we also need to lower the restriction for executing (powershell) scripts. Usually this is really restricted. By using the Set-ExecutionPolicy command we can set it to Unrestricted (which allows all scripts to run, i.e. no certificate or explicit permission required) for the duration of the installation. The current value can be obtained by using the Get-ExecutionPolicy command.
Set-ExecutionPolicy
Unrestricted
Get-ExecutionPolicy
However, where light is there is also shadow. The problem is that the installer is dependent on node.js and (supplied) JavaScript file(s). Obviously the authors of the installer did not care about correct versions, as did the authors of npm in general. Even though the concept of versions is incorporated, usually the dependencies are just downloaded with the latest versions. This is a huge problem, since the following statements cannot be executed any more:
var azure = require(input['lib'].value + '/azure');
var cli = require(input['lib'].value + '/cli/cli');
var utils = require(input['lib'].value + '/cli/utils');
var blobUtils = require(input['lib'].value + '/cli/blobUtils');
Therefore the script cannot execute and fails at this point:
The problem is that the dependent package (azure) of the package azure-cli changed a lot. In the end I searched for the desired equivalents and just (out of lazyness) copied full paths to the require argument:
var azure = require('.../azure-cli/node_modules/azure/lib/azure');
var cli = require('.../azure-cli/lib/cli');
var utils = require('.../azure-cli/lib/util/utils');
var blobUtils = require('.../azure-cli/lib/util/blobUtils');
With the change the script is now working as expected and we finally get to the next step!
For this install we do not use replica. When the page grows then there is a good chance that those replica might become handy. This provides fast modification access while having even faster and load balanced read access. Also the system is much more robust and less open for failures with data-loss. The following scheme will be followed when dealing with replica sets.
Before we look how our web app can interact with the just created MongoDB instance, we should have a look at creating (a real) SQL database on Windows Azure. The process of adding a database itself is quite straight forward.
We start by logging into the Windows Azure management webpage. Then we just click on New and select Data Services, SQL Database. Now we could import a previously exported SQL database, create a database with custom options or quick create a database with the default options.
Usually picking the quick create option is sufficient. In our case we just need a persistent store for user data, which is one of the cases where a standard SQL database is a quite good fit. Our data fits perfectly in a pre-defined scheme and using the Entity Framework ORM we do not have to care much about SQL administration.
However, besides the classical way of using SQL Management Studio or similar, we can also administrate our database from the Windows Azure webpage. A silverlight plugin has been created, which allows us to do all the necessary management. When we click on Manage we will first be asked to create a firewall rule for our current IP. We can do that safely, but should (at a later point in time) remove this rule afterwards.
Creating this firewall rule might take a few minutes. After the rule has been created we can log into the SQL management area.
Here we can create, edit or remove stored procedures, tables and views. Most of the tasks of a database administrator can be done with this silverlight plugin. A quick view at the tables of our database after publishing the webpage:
All in all everything is set up by using the publish agent in Visual Studio. Everything we have to do is enter the connection string to our freshly created database and testing the connection. Then this connection is automatically used for deployment. Required tables will be auto-generated and everything will be set up according to the rules detected by the Entity Framework.
In the next challenge we will then set up the worker and wire up the communication between MongoDB and our worker. Our worker will also have to communicate with the SQL database, which will be also be discussed.
The current state of the web application is that users can register, log-on (or off), change their password or associated accounts. Data is already presented in a dummy-form. In the next stage we will do most of the work, which allows users to create their own crawl lists and views. We will also integrate the worker, which is the corner-stone of our application.
In principle the last challenge also set up a VM (i.e. there is a VM already running at the moment). However, the challenge is more than just setting up a virtual machine. So in these paragraphs I will go into details of what a VM is, how we can benefit from creating one and how we can create one. The last paragraphs will then be dedicated on configuring the system, administrating it and installing our worker.
But not fast! One thing that will also be discussed in detail is how the worker is actually written and what the worker is doing. After all the worker is the probably the most central piece in the whole application, since it creates the data, which feeds the web applications. Hence following the discussion on how to set up and use a VM, we will go into details of the worker application.
So lets dive right into virtual machines. Everyone already starts a kind of virtual machine if we start the browser (therefore you are currently already running one). A modern browser allows us to run webpages (sometimes also known as web-applications) by supplying them with a set of APIs that offer threads, storage, graphics, network and more (everything that an OS offers us). If we think of the browser of an operating system, then we are running a virtual machine with it, since we know that only one operating system can run at a time.
This implies that any other operating system is only virtual. What other operating system see is a kind of virtual machine (not the real machine), since the machine is abstracted / modified from the real hardware (but also limited to it). So what exactly is a virtual machine? It is an abstraction layer that fakes a machine such that an arbitrary operating system could boot within an existing operating system.
We already see that this abstraction is somehow expensive. After all the whole cost must be paid somewhere. Every call from the system running in the VM to a memory address has to be mapped to the real memory address. Every call to system resources like graphic cards, USB ports and others is now indirect. On the other side there are several really cool benefits:
Windows Azure represents two important milestones. On the one hand it is a synonym for Microsoft's outstanding infrastructure, with (huge) computing centers all around the world. On the other hand it is the name of the underlying operating system, which is specialized in managing the available computing power, load-balancing it and hosting virtual machines. Most computers run Windows Azure and can therefore host highly optimized virtual machines, which are as close to the real hardware as possible. However, they still have all the benefits of virtual machines.
This allows us to append (virtual) hard drives in form of storage disks, which exceed any available storage capacity. The trick is that we access a bunch of drives in Microsoft's computing center at once without knowing.
There are several ways to create a new VM to run in Windows Azure. The simplest way is to use the web interface to create one. The next image shows how this could be done.
Another (more advanced) possibility would be to use the command line utility. The following snippet creates a new VM called my-vm-name, which uses a standard image called MSFT__Windows-Server-2008-R2-SP1.11-29-2011 with the username username:
azure vm create my-vm-name MSFT__Windows-Server-2008-R2-SP1.11-29-2011 username --location "Western US" -r
Everything could be managed by the command line utility. This gives us also the option of uploading our own virtual machine (specified in the vhd file format). The advantage is that any VM could be duplicated. Therefore we could create a suitable configuration, test it on our own premises, upload it and then scale it up to quite a lot of instances.
The following snippet creates a new VM called mytestimage from the file Sample.vhd:
azure vm image create mytestimage ./Sample.vhd -o windows -l "West US"
Coming back to our created VM we might first connect to it directly over the remote desktop protocol (RDP). We do not even need to open the remote desktop program or something similar, since Windows Azure already contains a direct link to a *.rdp file, which will contain the required configuration for us. Opening this usually yields the following warning:
This warning could be turned off by installing the required certificates. For the moment we can ignore it. By just continuing with connecting to our virtual machine we will eventually be able to log on our system, provided we enter the right data for the installed administrator account.
The next image shows the screen that can be captured directly after having successfully logged on our own VM running on Windows Azure.
Now it's time to talk about the worker application. The application is a simple console program. There is really not much to say about the reasons for picking a console program. In fact it could be a service without any input or output, but having at least some information on screen can never be bad. The application will be deployed by copy / paste of a release folder. We can use the clipboard copy mechanism that is provided by the Windows RDP client.
The program itself is nearly as simple as the following code snippet:
static void Main(string[] args)
{
//Everything will run in a task, hence the possibility for cancellation
cts = new CancellationTokenSource();
/* Evaluation of arguments */
//The log function prints something on screen and logs it in the DB
Log("Worker started.");
//Connect to MongoDB using the official 10gen C# driver
client = new MongoClient("mongodb://localhost");
server = client.GetServer();
db = server.GetDatabase("ds");
//Obviously something is wrong
if (server.State == MongoServerState.Disconnected)
{
Log("Could not connect to MongoDB.");
//Ah well, there are plenty of options but I like this one most
Environment.Exit(0);
}
Log("Successfully connected to MongoDB instance.");
//This runs the hot (running) task
var worker = Crawler.Run(db, cts.Token);
//Just a little console app
while (true)
{
Console.Write(">>> ");
string cmd = Console.ReadLine();
/* Command pattern */
}
//Make sure we closed the task
cts.Cancel();
Log("Worker ended.");
}
Basically the main function just connects to the MongoDB instance and starts the crawler as a Task. One of the advantages of creating a console application is the possibility to interact with it in a quite simple and "natural" way - over the command line.
Task
Before we go into the kernel of the crawler we need to take a look on the most important library for the whole project: AngleSharp. We get the library by using the same method as for all other libraries: over NuGet. The current state of AngleSharp is that it is still far away from being finished, however, the current state is sufficient to use it in this project.
The kernel of the whole crawler is executed by calling the static Run method. This is basically a big loop over all entries. This loop is wrapped in a loop again, such that the process is an infinite continuation. In principle we could also set the process idle after finishing the big loop until a certain condition is matched. Such a condition could be that the big loop is only processed once per day, i.e. the condition would be that the starting day is different from the current day.
Run
Let's have a look at the Run method.
public class Crawler
{
public static async Task Run(MongoDatabase db, CancellationToken cancel = new CancellationToken())
{
//Flag to break
var continuation = true;
//Don't consume too many (consecutive) exceptions
var consecutivecrashes = 0;
//Initialize a new crawler
var crawler = new Crawler(db, cancel);
Program.Log("Crawler initialized.");
//Permanent crawling
do
{
//Get all entries
var entries = db.GetCollection<CrawlEntry>("entries").FindAll();
//And crawl each of them
foreach (var entry in entries)
{
try
{
//Alright
await crawler.DoWork(entry);
//Apparently no crash - therefore reset
consecutivecrashes = 0;
}
catch (OperationCanceledException)
{
//Cancelled - let's stop.
continuation = false;
break;
}
catch (Exception ex)
{
//Ouch! Log it and increment consecutive crashes
consecutivecrashes++;
Program.Log("Crawler crashed with " + ex.Message + ".");
//We already reached the maximum number of allowed crashes
if (consecutivecrashes == MAX_CRASHES)
{
continuation = false;
Program.Log("Crawler faced too many (" + consecutivecrashes.ToString() + ") consecutive crashes.");
break;
}
continue;
}
}
}
while (continuation);
Program.Log("Crawler ended.");
}
/* Crawler Instance */
}
Nothing too spectacular here. The method creates a new instance of the crawler class and performs the asynchronous DoWork method. This method relies on the MonogoDB database instance, some other class members and a static variable. The static variable is marked ThreadStatic to run multiple kernels without interfering with each other.
DoWork
ThreadStatic
[ThreadStatic]
Stopwatch timer;
async Task DoWork(CrawlEntry entry)
{
//Init timer if not done for this thread
if(timer == null)
timer = new Stopwatch();
cancel.ThrowIfCancellationRequested();
//Get response time for the request
timer.Start();
var result = await http.GetAsync(entry.Url);
var source = await result.Content.ReadAsStreamAsync();
timer.Stop();
cancel.ThrowIfCancellationRequested();
var response = timer.Elapsed;
//Parse document
timer.Restart();
var document = DocumentBuilder.Html(source);
timer.Stop();
//Save the time that has been required for parsing the document
var htmlParser = timer.Elapsed;
cancel.ThrowIfCancellationRequested();
//Get the stylesheets' content
var stylesheet = await GetStylesheet(document);
cancel.ThrowIfCancellationRequested();
//Parse the stylesheet
timer.Restart();
var styles = CssParser.ParseStyleSheet(stylesheet);
timer.Stop();
var cssParser = timer.Elapsed;
cancel.ThrowIfCancellationRequested();
//Get all elements in a flat list
var elements = document.QuerySelectorAll("*");
//Get the (original) html text
var content = await result.Content.ReadAsStringAsync();
cancel.ThrowIfCancellationRequested();
//Build the entity
var entity = new DocumentEntry
{
SqlId = entry.SqlId,
Url = entry.Url,
Content = content,
Created = DateTime.Now,
Statistics = new BsonDocument(),
Nodes = new BsonDocument(),
HtmlParseTime = htmlParser.TotalMilliseconds,
CssParseTime = cssParser.TotalMilliseconds,
ResponseTime = response.TotalMilliseconds
};
//Perform the custom evaluation
EvaluateNodes(entity.Nodes, elements);
EvaluateStatistics(entity.Statistics, document, styles);
timer.Reset();
//Add to the corresponding MongoDB collection
AddToCollection(entity);
}
There is also some kind of magic behind the EvaluateNodes and EvaluateStatistics methods. For now those functions will not be discussed in detail. These two functions are basically evaluating the generated DOM and stylesheet. Here we use a DSL, which is used to perform the custom evaluations that can be entered by any registered user.
EvaluateNodes
EvaluateStatistics
The output of the worker program is shown in the next image.
The worker uses a kind of magic command line procedure to print new lines with information without interfering the current user input. In order to archieve this the Log method calls the MoveBufferArea method. In the following (simplified) version we just shift the buffer area by one line, however, sometimes more than just one line is required to fit the new message in.
Log
MoveBufferArea
public static void Log(string msg)
{
var left = Console.CursorLeft;
var top = Console.CursorTop;
Console.SetCursorPosition(0, top);
Console.MoveBufferArea(0, top, Console.BufferWidth, 1, 0, ++top);
var time = DateTime.Now;
Console.WriteLine("[ {0:00}:{1:00} / {2:00}.{3:00}.{4:00} ] " + msg,
time.Hour, time.Minute, time.Day, time.Month, time.Year - 2000);
Console.SetCursorPosition(left, top);
}
This concludes the discussion of the worker. In the next section we will continue to work on the webpage, which will then finally allow users to create their own crawl lists and set up their own statistics.
This section is about to come.
I am highly interested in the Windows Azure platform since a long time. This contest is finally my chance to try around a bit and get to know it better. I love that Scott Guthrie manages this team, since he's not only a great speaker, but also passionate about technology and very keen on creating amazing products. I recommend anyone who is interested in ASP.NET (history) or current Windows Azure happenings to check out the official blog at weblogs.asp.net/scott. | http://www.codeproject.com/Articles/584392/Azure-WebState?msg=4584212&PageFlow=FixedWidth | CC-MAIN-2017-17 | refinedweb | 5,674 | 56.15 |
Move semantics is faster than copy semantics when the compiler can replace expensive copy operations by cheaper move operations, that is, when it can replace a deep copy of a big object by a shallow copy of the pointer to the big object. Hence, classes using the pimpl idiom in combination with move semantics should see a considerable speed-up. As Qt applies the pimpl idiom consistently to every non-trivial Qt class, we should see a speed-up by simply using Qt classes instead of their STL counterparts. I'll compare the performance of classes that use move semantics, implemented with Qt and with STL classes, with and without applying the pimpl idiom.
A Class Using Move Semantics and Pimpl Idiom
We apply the pimpl idiom to the class CTeam from my post Performance Gains Through C++11 Move Semantics.
// cteam.h
#ifndef CTEAM_H
#define CTEAM_H

#include <memory>
#include <string>

class CTeam
{
public:
    ~CTeam();
    CTeam();
    CTeam(const std::string &n, int p, int gd);
    CTeam(const CTeam &t);
    CTeam &operator=(const CTeam &t);
    CTeam(CTeam &&t);
    CTeam &operator=(CTeam &&t);

    std::string name() const;

private:
    struct Impl;
    std::unique_ptr<Impl> m_impl;
};

#endif // CTEAM_H
The public interface of CTeam is the same as before. We replaced the private data members by a unique pointer and moved them into the private implementation class CTeam::Impl. Declaration and definition of CTeam::Impl are located in the source file cteam.cpp. This is one of the big advantages of the pimpl idiom: Header files don't contain any implementation details. Hence, we can change the implementation of our pimpled class without changing the interface (see the post Pimp my Pimpl by Marc Mutz for more advantages of the pimpl idiom).
// cteam.cpp
#include ...

using namespace std;

struct CTeam::Impl
{
    ~Impl() = default;
    Impl(const std::string &n, int p, int gd);
    Impl(const Impl &t) = default;
    Impl &operator=(const Impl &t) = default;

    std::string m_name;
    int m_points;
    int m_goalDifference;
    static constexpr int statisticsSize = 100;
    std::vector<float> m_statistics;
};

CTeam::Impl::Impl(const std::string &n, int p, int gd)
    : m_name(n)
    , m_points(p)
    , m_goalDifference(gd)
{
    m_statistics.reserve(statisticsSize);
    srand(p);
    for (int i = 0; i < statisticsSize; ++i)
    {
        // push_back, not operator[]: reserve() allocates capacity but
        // does not change the vector's size.
        m_statistics.push_back(static_cast<float>(rand() % 10000) / 100.0);
    }
}
Note how the C++11 keyword default saves us from spelling out the trivial implementation of the destructor, copy constructor and copy assignment operator of the implementation class CTeam::Impl. We must only write the code for the special name constructor. The rest is generated by the compiler.
We will use CTeam::Impl to implement the constructors and assignment operators of the client-facing class CTeam.
// cteam.cpp (continued)

CTeam::~CTeam() = default;

CTeam::CTeam()
    : CTeam("", 0, 0)
{}

CTeam::CTeam(const std::string &n, int p, int gd)
    : m_impl(new Impl(n, p, gd))
{}

CTeam::CTeam(const CTeam &t)
    : m_impl(new Impl(*t.m_impl))
{}

CTeam &CTeam::operator=(const CTeam &t)
{
    *m_impl = *t.m_impl;
    return *this;
}

CTeam::CTeam(CTeam &&t) = default;

CTeam &CTeam::operator=(CTeam &&t) = default;

std::string CTeam::name() const
{
    return m_impl ? m_impl->m_name : "";
}
We let the compiler generate the destructor. The default constructor delegates to the name constructor. The name constructor creates an object CTeam::Impl with the given arguments. This is all as expected.
The copy constructor and assignment must perform a deep copy. The compiler cannot generate them for us: as std::unique_ptr is not copyable, the implicitly declared copy operations are deleted. And a plain copy of the pointer m_impl – a shallow copy – would be wrong anyway. Hence, we must write the code for the copy constructor and assignment ourselves. The code simply uses the copy constructor and assignment of the implementation class.
A shallow copy is basically what we want for the move constructor and assignment. The default implementations simply copy the unique pointer m_impl (shallow copy) and set m_impl to nullptr in the source of the move operation. The move operation transfers the ownership of the Impl object from the source to the target CTeam object. This behaviour is exactly implemented by the class std::unique_ptr, which supports moving but not copying.
As the implementation pointer m_impl can be null after a move, functions like CTeam::name should check the validity of the pointer before they use it.
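To make the moved-from state tangible, here is a small driver – my own sketch, not from the original post; the team name and values are made up:

#include <iostream>
#include <utility>
#include "cteam.h"

int main()
{
    CTeam a("FC Bayern", 81, 47);
    CTeam b(std::move(a));          // shallow copy: ownership of the Impl object passes to b

    std::cout << b.name() << std::endl;   // prints "FC Bayern"
    std::cout << a.name() << std::endl;   // prints "" because a's m_impl is now nullptr

    // Beware: a = b; would dereference a's null m_impl in operator= –
    // undefined behaviour. Guarding against this is part of the price of
    // combining pimpl with move semantics.
    return 0;
}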
The Benchmarks
We use the benchmarks ShuffleAndSort and PushBack as shown in the post Performance Gains Through C++11 Move Semantics. We don’t use the benchmark EmplaceBack, because Qt 5.7 (the latest Qt version at the time of this writing) does not support emplace operations on Qt containers.
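To recap what the two benchmarks do, here is a condensed sketch. It is reconstructed from the figures quoted below; the iteration counts, the random engine and the sort key are my assumptions, not the original benchmark code:

#include <algorithm>
#include <random>
#include <string>
#include <vector>
#include "cteam.h"

// ShuffleAndSort: copy a handful of teams into a vector once, then shuffle
// and sort repeatedly. Rearranging the elements triggers move operations
// almost exclusively.
void shuffleAndSort(const std::vector<CTeam> &source)   // ~20 teams
{
    std::vector<CTeam> teams(source);                   // the only deep copies
    std::mt19937 generator(1);
    for (int i = 0; i < 10000; ++i)
    {
        std::shuffle(teams.begin(), teams.end(), generator);
        std::sort(teams.begin(), teams.end(),
                  [](const CTeam &a, const CTeam &b) { return a.name() < b.name(); });
    }
}

// PushBack: create and insert 100,000 temporaries. Each iteration calls the
// name constructor, one move constructor and one destructor of CTeam.
void pushBack()
{
    std::vector<CTeam> teams;
    teams.reserve(100000);   // avoid extra moves from reallocation
    for (int i = 0; i < 100000; ++i)
    {
        teams.push_back(CTeam("Team " + std::to_string(i), i, i % 50));
    }
}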
I ran different experiments, which I mark with the following labels.
- C++98 – Built example code with C++98 compiler
- C++11 – Built example code with C++11 compiler
- Copy – Class CTeam has only copy but no move operations
- Move – Class CTeam has both copy and move operations
- STL – Used std::string and std::vector in example code
- Qt – Used QString and QVector in example code
- Pimpl – Used pimpl idiom for class CTeam
- Opt – Used lambdas for sort and C++11’s random number generation
We measured the performance of each experiment by the number of read instructions counted by callgrind. As relative performance is more telling than absolute numbers of read instructions, we take C++11/Move as the reference point with value 1.000.
Here are the results.
For the ShuffleAndSort benchmark, the Qt experiments (green) are consistently faster – by a factor between 1.01 and 1.36 – than the STL experiments (red). The reason is simple. Qt has always used the pimpl idiom for its non-trivial classes like QVector and QString. Copying one of Qt's implicitly shared classes means copying the pointer to the implementation. The class using pimpl performs a shallow copy instead of a deep copy. This is the situation when move semantics has a performance advantage over copy semantics.
But using the pimpl idiom all the time comes at a cost. Whenever we create an object using pimpl, we create the "interface" object (e.g., CTeam), which in turn creates the "implementation" object (e.g., CTeam::Impl) dynamically on the heap. This is why the Qt experiments are consistently slower – by a factor between 1.04 and 1.30 – than the STL experiments for the PushBack benchmark. The overhead of pimpl shows whenever the code calls a custom constructor (e.g., the name constructor of CTeam), copy constructor or copy assignment operator, that is, whenever the code performs a deep copy.
The picture is pretty much the same if we only look at the STL experiments. For ShuffleAndSort, the STL experiments with pimpl are always faster than the ones without pimpl. For PushBack, the situation reverses. STL experiments with pimpl are always slower than the ones without pimpl.
The ShuffleAndSort benchmark is a best case for move semantics and pimpl. It performs 20 copy operations at the beginning to fill the vector of teams. Then, it moves teams 810,000 times while shuffling and sorting. Conversely, the PushBack benchmark is a worst case for move semantics. It calls each of the name constructor, the move constructor and the destructor of CTeam 100,000 times. Calling the name constructor, which creates the implementation object dynamically on the heap, clearly dominates the execution time.
When we compare the experiments using pimpl with those not using pimpl (C++11/STL/Move vs. C++11/STL/Move/Pimpl, C++11/STL/Move/Opt vs. C++11/STL/Move/Pimpl/Opt), we see a speed-up of factor 1.370 to 1.452 for ShuffleAndSort and a slow-down of factor 1.011 to 1.041 for PushBack. The speed-up from using move semantics and pimpl is an order of magnitude more than the slow-down caused by the pimpl overhead. If our code leans more towards ShuffleAndSort, where shallow copies dominate deep copies, our code will most likely see an overall speed-up from using move semantics in combination with the pimpl idiom.
Fortunately, shallow copies dominate deep copies in most cases in real code. This observation was essential when the Qt project decided in its very beginning to use the pimpl idiom for all its non-trivial classes.
If we compare the overhead of using the pimpl idiom between STL and Qt experiments (C++11/STL/Move/Pimpl vs. C++11/Qt/Move/Pimpl, C++11/STL/Move/Pimpl/Opt vs. C++11/Qt/Move/Pimpl/Opt), the following picture emerges. For PushBack, Qt is 1.06 to 1.30 times slower than STL. The reason is that the pure Qt version of CTeam uses pimpl for the string m_name and the vector of doubles m_statistics. For ShuffleAndSort, Qt is only marginally faster (factor: 1.008 – 1.014) than pure C++11/STL. This small speed-up may well be eaten up by the bigger slow-down caused by the pimpl overhead.
In the pre-C++11 times, using Qt classes gave us a speed advantage over STL classes most of the time. Things have changed with the advent of C++11. STL classes are now on par with Qt classes – thanks to the combination of move semantics and the pimpl idiom. A pure C++11 implementation gives us better control over when to use the pimpl idiom and when not. With Qt, we always have to use it – no matter whether it yields a speed-up or not.
Conclusion
Move semantics gives us a speed-up over copy semantics, when the compiler can replace expensive copy operations by cheaper move operations. So, combining move semantics with the pimpl idiom should be a great fit, as the pimpl idiom replaces expensive deep copies of big objects by much cheaper shallow copies of pointers to these big objects. Our results corroborate this. We see a speed-up by factor 2.319 for the ShuffleAndSort benchmark by just using move semantics and the pimpl idiom. Using the pimpl idiom doesn’t come for free, because we must create the pointed-to object dynamically on the heap in an extra step. The PushBack benchmark shows that using pimpl can slow down things by a factor of 1.005.
The ShuffleAndSort benchmark is sort of a best case for the pimpl idiom, because almost all operations are move operations (shuffling and sorting). The PushBack benchmark is pretty much the opposite, because it doesn’t move anything. It only copies. Real code falls between these two extremes, but with a clear tendency to be closer to the ShuffleAndSort extreme. For these cases, we’ll see a speed-up because the speed-up from moving instead of copying is much bigger than the slow-down caused by the pimpl overhead.
This reasoning most likely made it easy for the Qt developers to use the pimpl idiom for every non-trivial Qt class. Using the pimpl idiom yields a runtime speed-up most of the time – in addition to providing a stable interface (binary compatibility!) and fast builds. Qt is considerably faster (factor: ~1.25) than pure C++, when move semantics is not available for a class. So, Qt is a good choice for all pre-C++11 compilers (e.g., C++98, C++03). This advantage melts away once move semantics and the pimpl idiom enter the picture – with C++11. Even for the best case scenario of the ShuffleAndSort benchmark, Qt is only marginally faster than pure C++ (factor: ~1.01). This slight advantage may easily be eaten up by the pimpl overhead, where Qt is considerably slower than pure C++ (factor: ~1.17).
The take-away from this post is: in most cases, we'll see a speed-up from combining C++11's move semantics with the pimpl idiom. C++11's new unique_ptr makes it easy to implement the pimpl idiom. Using Qt classes instead of their STL counterparts (e.g., QVector and QString instead of std::vector and std::string) doesn't give us any advantages over the combination of move semantics and the pimpl idiom. Qt may even be at a slight disadvantage, because our code incurs the pimpl overhead with every occurrence of a Qt class and not only when we explicitly decide to use the pimpl idiom.
The (forward declaration of) struct Impl should be made public to facilitate deriving from it. A polymorphic implementation is a truly powerful idiom!
Of course the m_impl pointer itself should remain private.
Risto, can you give any example of why that would be useful? | http://www.embeddeduse.com/2016/05/30/best-friends-cpp11-move-semantics-and-pimpl/ | CC-MAIN-2017-51 | refinedweb | 2,012 | 56.35 |
What is Groovy? Getting Started with Groovy - A tutorial
By: Whitey
If you are a Java developer, or any programmer who has written code in Java, you know that although Java is a very powerful language, some things are easier than others. For example, creating structure and objects through classes is great in Java, but file I/O can be a real hassle. In those cases, a dynamic language with features akin to Ruby, Python, or other scripting languages would be a great help. That's where Groovy comes in. Groovy is a somewhat recent development that allows Java programmers to easily script functionality into their programs and improve productivity. The purpose of this tutorial is to introduce Java programmers to Groovy through the traditional "Hello World" application and encourage further exploration of this language. So, without further ado, let's begin...
First, before you start writing any code, download the Groovy Development Kit (GDK) from here. Next, install the GDK using these instructions:
Groovy requires Java, so you need to have a version available (1.4 or greater is required). Here are the steps if you don't already have Java installed:
* Get the latest Java distribution from the Developer Resources for Java Technology website.
* Run the installer.
* Set the JAVA_HOME environment variable. On Windows, follow these steps:
  o Open the System control panel
  o Click the Advanced tab
  o Click the Environment Variables button
  o Add a new System variable with the name JAVA_HOME and the value of the directory Java was installed in (mine is C:\Program Files\Java\jdk1.5.0_04)
* Set the GROOVY_HOME environment variable in the same way:
  o Add a new System variable with the name GROOVY_HOME and the value of the directory Groovy was installed in (mine is C:\dev\groovy-1.0-jsr-06)
Now that you have installed the GDK on your computer, let's start writing some code. Open up the text editor of your choice, or the groovyConsole included with the Groovy download. Since most Java code is valid Groovy code, we will start with something very recognizable to most Java developers, the "Hello World" program. Type in the following code and compile with Groovy:
Code:
```groovy
public class Main {
    public static void main(String[] arguments) {
        System.out.println("Hello World");
    }
}
```
This code works, but it isn't very Groovy. So, let's change it a bit.
Code:
```groovy
public class Main {
    public static void main(String[] arguments) {
        println "Hello World"
    }
}
```
We made two changes to the code above. The first one is the most obvious. Instead of writing System.out.println() every time you want to display information, in Groovy you only have to write println "". Next, we removed the semicolon at the end of the line. In Groovy, much like in JavaScript, semicolons are optional. We can change this code further, however.
Code:
println "Hello World"
Whoa! What just happened! Are you telling me that you don't even need a class to make the Groovy code work!? That is exactly correct. Since Groovy is inherently a scripting language, it does not need the restrictions or boundaries of classes. The code actually produces "Hello World" when you run it. Try it for yourself.
Besides the basic features presented in this tutorial, Groovy has a wealth of functionality to bring to the table. Things like closures, Domain Specific Language capabilities, and a massive simplification of most anything you can think of. I encourage any Java developer out there to take a look at Groovy's Homepage for tutorials, usage guides, and tons of other information regarding Groovy. So give this great language a shot and see what groovy stuff you can come up with.
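As a small taste of those features, here is a sketch of a Groovy closure (an illustrative example, not from the original tutorial):

```groovy
// A closure assigned to a variable; 'name' is its parameter
def greet = { name -> "Hello ${name}" }
println greet("World")

// Closures shine with collection methods
[1, 2, 3].each { n -> println n * 2 }
```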
Namespace in C++ | Set 1 (Introduction)
Consider the following C++ program.
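The original listing is not preserved here; a minimal program that triggers this error (a reconstructed sketch) is:

```cpp
#include <iostream>

int main()
{
    int value;
    value = 0;

    double value; // Error: 'value' redeclared in the same scope
    value = 0.0;

    return 0;
}
```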
Output:
Compiler Error: 'value' has a previous declaration as 'int value'
In each scope, a name can only represent one entity. So, there cannot be two variables with the same name in the same scope. Using namespaces, we can create two variables or member functions having the same name.
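For example (a sketch along the lines of the original listing, which was not preserved here):

```cpp
#include <iostream>
using namespace std;

// Variable created inside a namespace
namespace first
{
    int val = 500;
}

// Global variable
int val = 100;

int main()
{
    // Local variable
    int val = 200;

    // The namespace variable is accessed with the scope operator ::
    cout << first::val << '\n';

    return 0;
}
```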
Output:
500.
- A namespace is a declarative region that provides a scope to the identifiers (names of the types, function, variables etc) inside it.
- Multiple namespace blocks with the same name are allowed. All declarations within those blocks are declared in the named scope.
A namespace definition begins with the keyword namespace followed by the namespace name as follows:
```cpp
namespace namespace_name
{
    // code declarations, where
    // x and y are declared in
    // namespace_name's scope
    int x, y;
}
```
- Namespace declarations appear only at global scope.
- Namespace declarations can be nested within another namespace.
- Namespace declarations don't have access specifiers (public or private).
- No need to give semicolon after the closing brace of definition of namespace.
- We can split the definition of namespace over several units.
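A sketch of a program demonstrating the last point (reconstructed to match the output shown below):

```cpp
#include <iostream>

// First part of the namespace definition
namespace ns
{
    int x = 5;
}

// The same namespace, extended in a second block
namespace ns
{
    int y = 200;
}

// A global variable outside the namespace
int z = 100;

int main()
{
    std::cout << ns::x << ' ' << ns::y << ' ' << z << '\n';
    return 0;
}
```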
Output:
5 200 100
Following is a simple way to create classes in a namespace:
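A sketch of such a listing (reconstructed to match the output shown below):

```cpp
#include <iostream>
using namespace std;

// A class inside a namespace
namespace ns
{
    class geek
    {
    public:
        void display()
        {
            cout << "ns::geek::display()\n";
        }
    };
}

int main()
{
    // Creating an object of the geek class
    ns::geek obj;
    obj.display();
    return 0;
}
```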
Output:
ns::geek::display()
A class can also be declared inside a namespace and defined outside it, using the following syntax:
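A reconstructed sketch of this variant:

```cpp
#include <iostream>
using namespace std;

namespace ns
{
    // Only declaring the class here
    class geek;
}

// Defining the class outside the namespace
class ns::geek
{
public:
    void display()
    {
        cout << "ns::geek::display()\n";
    }
};

int main()
{
    ns::geek obj;
    obj.display();
    return 0;
}
```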
Output:
ns::geek::display()
We can also define methods outside the namespace. Following is an example:
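A reconstructed sketch matching the output shown below:

```cpp
#include <iostream>
using namespace std;

namespace ns
{
    // Declaring a free function and a class with a method
    void display();

    class geek
    {
    public:
        void display();
    };
}

// Defining the methods outside the namespace
void ns::geek::display()
{
    cout << "ns::geek::display()\n";
}

void ns::display()
{
    cout << "ns::display()\n";
}

int main()
{
    ns::geek obj;
    ns::display();
    obj.display();
    return 0;
}
```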
Output:
ns::display() ns::geek::display()
namespace in C++ | Set 2 (Extending namespace and Unnamed namespace)
Namespace in C++ | Set 3 (Accessing, creating header, nesting and aliasing)
Can namespaces be nested in C++?
This article is contributed by Abhinav Tiw | https://www.geeksforgeeks.org/namespace-in-c/?ref=lbp | CC-MAIN-2021-49 | refinedweb | 275 | 54.63 |
Introduction to the Actor Model in Akka
When more CPU power means more cores, we need an effective model for parallel processing. Here comes Akka and the famous Actor Model.
According to the Akka documentation, "An actor is a container for State, Behavior, a Mailbox, Child Actors and a Supervisor Strategy."
The Need for an Actor Model
In the past, programs used to get faster by just using the next generation of CPUs. So, all a programmer had to do was to wait for the next, faster CPU. Now, a lot of things have changed. CPUs are not getting faster, they're getting wider.
This means that in order to execute our programs faster, we need to use multiple cores, which, in turn, would use multiple threads. One of the models that help us to do so is the Actor model.
"The Actor Model provides a higher level of abstraction for writing concurrent and distributed systems. It alleviates the developer from having to deal with explicit locking and thread management, making it easier to write correct concurrent and parallel systems."
Why Not a Shared State?
A very useful approach to achieving concurrency today is having a shared mutable state. But the problem with this approach can be that the state of a large number of stateful objects can be changed by multiple parts of your application, each running in its own thread.
We need objects that can handle non-blocking operations and can save the internal state from other operations.
These Are Actors
An Actor is a computation entity that, in response to a message,
- can send a finite number of messages to other actors.
- create a finite number of new actors.
- designate the behavior to be used for the next message it receives (sketched below).
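For instance, with Akka's classic API, the last point can look like this (a sketch, not from the original article):

```scala
import akka.actor.Actor

// An actor that designates a new behavior for the next
// message it receives, via context.become
class Toggler extends Actor {
  def receive: Receive = on

  def on: Receive = {
    case msg =>
      println(s"on: $msg")
      context.become(off)
  }

  def off: Receive = {
    case msg =>
      println(s"off: $msg")
      context.become(on)
  }
}
```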
It gives you:
- Simple and high-level abstractions for distribution, concurrency, and parallelism.
- Asynchronous, non-blocking, and highly performant message-driven programming model.
- Very lightweight event-driven processes (several million actors per GB of heap memory).
I hope you now have a clear understanding of actors. So let's take a step forward and discuss some basic terminology related to Akka.
Akka Terminology
- Concurrency vs. Parallelism
- Asynchronous vs. Synchronous
- Blocking vs. Non-blocking
I think we are ready to do the mandatory Hello World program to get started.
Hello, Akka!!
Add a dependency for Akka in your build.sbt. Here, we will be using this one:
```scala
libraryDependencies += "com.typesafe.akka" % "akka-actor_2.11" % "2.4.14"
```
Let us now define a simple greeting message:
```scala
case class Greet(name: String)
```
And a greeter actor for our greeter message:
```scala
class Greeter extends Actor {
  def receive = {
    case Greet(name) => println(s"Hello $name")
  }
}
```
The complete code snippet would be:
```scala
import akka.actor.{Actor, ActorSystem, Props}

// greet message
case class Greet(name: String)

// greeter Actor
class Greeter extends Actor {
  def receive = {
    case Greet(name) => println(s"Hello $name")
  }
}

object HelloAkka extends App {
  val system = ActorSystem("Intro-Akka")
  val greeter = system.actorOf(Props[Greeter], "greeter")
  greeter ! Greet("Akka")
}
```
Well, this is it for now. I hope this blog helped in providing more knowledge on actors.
Published at DZone with permission of Himani Arora , DZone MVB. See the original article here.
Quote of the Day: Too Big to Fail
."
~Kevin Warsh and Jeb Bush in today's WSJ
26 Comments:
the thing that absolutely stuns me about freddy and fannie is that they became 40% of the housing market at the peak of the bubble.
they then failed spectacularly, became wards of the state, and are now, what, 70% of the market?

so they fail massively, singlehandedly cause TARP to lose money (it's well in the black ex fred and fan), have a potential cost tail of hundreds of billions more in losses, and then get to increase their market share by 30 points with cheap federal money and guarantees that let them under-price everyone else, thereby forcing banks to invest in levered us federal debt because they cannot compete in the mortgage space?
it's difficult to imagine a worse set of policies.
I hate bailouts as much as anyone. (Probably more so).
But back in 2008, the bankruptcy courts could have only handled so many liquidations of large, multinational banks.
This complicated subject brings to my mind the question: did the Too Big Banks have to take on failing mortgage banks in order to receive TARP?
Countrywide folded into Bank of America is the albatross that won't go away for BAC. Wells Fargo took over Wachovia, which means it took over Golden West Financial, and its loan problems that won't go away. JP Morgan Chase took over Washington Mutual and its bad loan portfolio.
Yes, Freddie and Fannie should be wound down ASAP.
arb-
a couple of points:
1. the problem was not handling the BK's, but containing knock on failures from counterparty exposure.
2. the banks were, for the most part, not bailed out. they were lent money that has now been paid back with interest.
that is the proper role of a central bank: lender of last resort.
a perfectly solvent bank can become illiquid. they may have lots of valuable long term assets (like mortgages) that cannot be readily monetized in the short term.
any bank that makes even one loan with depositor money can be caught in that position.
i think TARP should have been asset based to make sure that only solvent banks were aided (and they should have been charged interest to prevent them from wanting to use it and compensate the fed for risk), but such a program is no more a "bailout" than getting a home equity line of credit to fix your leaking roof is.
a bailout is when you give money to someone like freddy or fannie or GM who have no reasonable means to repay it.
Morganovich,
The one thing that I would most like to see is that if a bank is "bailed out", the bondholders not be bailed out along with them. I'd like to see a forced equity-for-debt swap in the case of a large scale bailout.
If this policy had been in place 7 years ago, the bank managers might be been a little more risk-averse.
Morganovich,
The other problem with bailouts, however well-intended, is that they beget more bailouts.
We bailed out "X", therefore we have to bail out "Y".
We bailed out "X" and "Y", therefore we have to bail out "Z".
And so on.
arb-
again, i don't think you can call it a bailout.
banks were lent money.
they had to repay it with interest and have done so.
take out fred and fan and TARP has made significant money for the treasury.
that's what keeps banks from wanting to do it: it should be expensive. (and profitable for the lender)
i agree that propping up insolvent banks is a bad idea, but providing liquidity to those with a duration mismatch (and making money doing it) is precisely the same as a bank giving you a home equity loan to fix your roof.
would you call that a bailout?
also:
bond conversion to equity does nothing to solve a short term liquidity issue. it does not provide cash.
at best, it eliminates coupon payments, but most bank bonds do not work that way. they tend to be discount bonds (which are much more tax effective for the holder)
Yeah, let's think about this.
At one time, the US government took anti-trust actions to make sure there were a lot of players in every market sector.
Then, there were chants of "government intrusion" and the need to be "globally competitive."
So, we ended up with behemoths in finance, too big to fail, and the same with auto companies.
So there might be some sense in limiting market shares--it would in fact promote competition (which every business hates), limit political influence, and make it easier to say, "Too bad suckers, eat shyt and die," when a poorly run company or bank flounders.
hey look, it's benji the alleged free marketer once more demanding government intrusion into markets.
you are about as free market as Khrushchev.
and more nonsense on inflation and japan.
fortunately, i now have cut and paste responses to you bunny, and don't need to waste time proving you wrong over and over. you've been unable to offer anything but appeals to misunderstood authority and ad hominem, and that's why i instituted this cut and paste bunny response file
ps-
if running high inflation creates growth and diminishes debt, explain the 70's.
the biggest problems in our budget deficit are all inflation indexed bunny.
inflation is true wealth building?
no.
it's just debasement of money.
once you go over about 1-2%, there are no beneficial effects of inflation on real growth.
over 5% (old CPI, not the new one) you start to hurt it.
Volcker saved us and greenspan sold us back down the river.
i've made 5% so far this week in REAL returns by holding my savings in swiss francs instead of greenbacked toilet paper. 27% YTD.
yap all you want, but your savings are degrading in buying power while mine surge.
congrats on your brilliant economic policy.
PPS.
how is your big "time to buy" call working out?
if there were a double short benji etf, i'd own the hell out of it.
as a contrary indicator, you are priceless.
oh, and riddle me this bubble bunny-
we have around 15tn in debt and 65 tn in unfunded liabilities, mostly indexed to inflation.
that means that if we inflate to reduce bond debt, every dollar we save will cost us $4+ in entitlements.
how can you reduce deficits by spending $4 to save $1?
as ever, your plans lack any grounding in reality and even basic common sense.
and this: "Oh shocking, you mean the rates of inflation we had when Reagan was president? Oh, horrors!"
is just nonsense.
when reagan was president, the inflation we have now would have been called 8-9%.
the calculation changed.
using current CPI, inflation under reagan would have been 1%.
you are comparing apples to oranges and revealing your ignorance again.
Morganovich,
"bond conversion to equity does nothing to solve a short term liquidity issue"
^^^^^^^^^
I agree with that. But if the bondholders know that they'll get taken to the cleaners in the event of a bailout, they'll impose discipline on the bank.
And as I said, there is a critical political problem with bailouts, or perceived bailouts.
It's important to make the investors in companies (and maybe the employees also) pay a big price when there's a bailout (or large scale loan). Other bailouts then become less likely.
I actually have a much bigger beef with all the bailouts that occured outside the banking industry (like GM, for example).
arb-
i wish i believed that your point about bondholders was true, but i do not think it is.
the publicly balance sheets on major banks tell you damn near nothing you'd really need to know.
the information a bond holder would need to really assess risk is highly proprietary trading desk data. it is not shared.
banks were charged some pretty good interest for the liquidity (more than a borrower of comparable quality would have been in a free market).
TARP was expensive for them. (especially the guys who were forced to take money they didn't want and then told they could not pay it back until more interest had run up)
morganovich,
Probably true that if the government had had the authority to impose an equity-for-debt swap, the economy in 2008-2010 would have been a disaster anyway, and large scale loans to banks would have had to be made.
I still think it’s important to send the message to CEO’s and investors that bailouts (or large scale loans) will come with a big price.
I think that letting Lehman fail was the best decision that Hank Paulson made as Treasury secretary.
they do carry a big price.
paying back TARP with interest depressed earnings and reduced cash flow.
what's the gain to be had from making bondholders convert to equity because of a liquidity loan?
any bank can get caught short in a maturity mismatch for reasons beyond their own control.
that is an indelible feature of fractional reserve banking.
if they do, they have to borrow and pay for it, but i don't see why turning bondholders into equity is needed or helpful.
it's just likely to cause a crash in bank stocks as bond funds unable to hold equity are forced to sell at the worst possible moment.
Morganovich,
Please correct me if I'm wrong, but it seems the big problem with all the Fan+Fred red ink is the convergence of private with public. Bad government policy (doomed by 'fairness') being bundled and sold by private business in a reckless way - fearless of risk - because it was always backed by the government.
True?
2. the banks were, for the most part, not bailed out. they were lent money that has now been paid back with interest.
Morganovich, please don't tell me you've fallen for this line of horseshit!
Banks were recapitalized with a wealth transfer from savers.
I didn't lend them money and you didn't either.
a bailout is when you give money to someone like freddy or fannie or GM who have no reasonable means to repay it.
the banks were blowing up because they were holding the same toxic crap as Fannie and Freddie. Worse, Fannie and Freddie preferreds were used to meet many foreign and domestic banks' capital requirements.
Given this, there was no reasonable probability that the money would have been paid back.
Worse, whatever government "lent" was effectively our money. If it went bad, we'd be on the hook. If it didn't, the bank wins.
Please stop painting private gains and socialized losses as justifiable!
methinks-
please tell me you have actually looked at the math.
$245 billion was paid out under TARP.
repayments now exceed $243bn.
take out the big 3 autos alone, and you are in profit.
take out freddy and fannie, and you are deep in profit.
even with them, profits are now estimated to exceed $20bn.
i do not like the way tarp was structured.
they should have gone with paulson's original idea to lend vs assets not the equity kludge the dems rammed through, but the math is clear.
you don't get paid back on a bailout.
when you get paid back with interest, that is called "a successful loan".
short term capital markets shut down, and banks were illiquid, not, for the most part, insolvent.
if you want to be pissed about something, be pissed about the free money the fed is giving banks to lever up and buy govvies. that amounts to a huge giveaway to fund deficits cheaply and creates real structural risk, but TARP will not cost taxpayers a dime.
take out F+F, and it makes $80-100bn.
that's a pretty good return on a $250bn investment.
i think you'd be hard pressed to find a federal program that has done better.
also:
how can you run a fractional banking system without a lender of last resort?
to my mind, that is one of the few proper roles of a central bank.
mike-
the key issue around fred and fan was government backing.
F+F had a cheaper cost of capital than anyone else. this let them take 40% and now 70% market share.
they also had the implicit (explicit as it turned out) backing of the US treasury.
thus, any crap the produced could be bundled into AAA securities because the ratings agencies viewed them (correctly) as backed by the feds.
this created a terrible set of incentives.
fred and fan could kick out/buy/guarrantee surreal amounts of complete crap especially as congress (read: barney frank, chris dodd) cheer led for lowered lending standards.
this crap could be bundled into AAA MBS's and held by everyone and anyone.
clearly, the underlying risk in the mortgages was much, much higher, but essentially everyone was betting (mostly correctly) that the feds would step in if the F's got into trouble.
with a set of incentives like that, it's hard to see how any result other than disaster was possible.
ironically, with F+F now taking 70% of the mortgage market, they are destabilizing banks still further.
boxed out of their traditional revenue source, banks are just borrowing cheap from the fed and buying US govvies at 20:1 gearing.
this is a deliberate policy from the fed/treasury.
it keeps us borrowing rates low, lets the banks make an easy arb, and finances absurd deficits that would never be so cheap otherwise.
the whole reason bernank gave (for the first time ever) an explicit date for rates to be low was to let banks duration match their arbs.
the trouble is going to come when rates do rise.
federal interest costs will increase.
banks will take huge, 20:1 levered hits on the prices of their bonds.
no one will want to buy the new auctions.
this is not so much kicking the can down the road as painting yourself into a corner.
this is not going to end well.
Morganovich,
Thanks for the reply.
I think that's pretty much what I said... so I guess your answer is, yes, my statement was true.
i think you'd be hard pressed to find a federal program that has done better.
Way to set the bar so low worms can clear it :)
if you want to be pissed about somehting, be pissed about the free money the fed is giving banks to lever up and by govvies.
Oh, I am. That's but one thing on my long list of complaints.
With the fun market we're having I don't have much time to dig into this with you. We'll leave it for another day. I'm sure it'll come up again.
Cheers.
I personally accept the notion that "too big to fail" is a possibility.
I just argue that the solution is self evident -- if you're "too big to fail", then, you're too big to continue to exist. In order to be rescued, you must break yourself up into five component parts with roughly equal liability, and we will then bail out those five parts.
Oh, and you must select 25% of your upper management to be fired without benefits for rampant malfeasance and incompetence.
"Way to set the bar so low worms can clear it :)"
actually, 80 bn in return (leaving out F+F) on 250 in investments is a pretty good return for anybody over 3 years. that's 10% IRR/yr.
it's not the banks that were the problem.
Purpose
parser*.bas: Parsing/compilation functions: lexer tokens -> AST nodes.
symb*.bas: Symbol tables and lookup, namespace/scope handling.
rtl*.bas: Helpers to build AST calls to rtlib/gfxlib functions.
The structure of the parser has a very close relation to the FreeBASIC grammar. Basically there is a parsing function for every element of the grammar.
The parser retrieves tokens from the lexer and validates the input source code. Most error messages (besides command line and file access errors) come from here. Additionally the parser functions build up the corresponding AST. This is the heart of the compilation process.
Many of the parser's (or rather compiler's) functions (prefixed with a 'c') parse and skip the grammar element they represent, or show an error if they don't find it. The parser is fairly recursive, mostly because of the expression parser and the #include parsing.
From parsing to emitting
When parsing code a corresponding AST is built up to represent the program. The AST is used to represent executable code, but also to hold temporary expressions, for example the values of constants or the initializers found while parsing type or procedure declarations. The AST does not contain nodes for code flow constructs like IF, DO/LOOP, GOTO, RETURN, EXIT DO, etc., but it contains labels and branches. Likewise, several operations (like IIF(), ANDALSO, ORELSE, field dereference, member access) are replaced by the corresponding set of lower-level operations in the AST.
After parsing a function, the AST for this function is optimized, and then emitted recursively via astLoad*() calls on each node, from the top down. Note that each AST node has its own implementation of astLoad().
Serial Programming/termios
Introduction
termios is the newer (now already a few decades old) Unix API for terminal I/O. The anatomy of a program performing serial I/O with the help of termios is as follows:
- Open serial device with standard Unix system call open(2)
- Configure communication parameters and other interface properties (line discipline, etc.) with the help of specific termios functions and data structures.
- Use standard Unix system calls read(2) and write(2) for reading from, and writing to the serial interface. Related system calls like readv(2) and writev(2) can be used, too. Multiple I/O techniques, like blocking, non-blocking, asynchronous I/O (select(2) or poll(2), or signal-driven I/O (SIGIO signal)) are also possible. The selection of the I/O technique is an important part of the application's design. The serial I/O needs to work well with other kinds of I/O performed by the application, like networking, and must not waste CPU cycles.
- Close device with the standard Unix system call close(2) when done.
An important part when writing a program for serial I/O is to decide on the I/O technique to deploy, as in the sketch below.
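For instance, a minimal sketch of waiting for serial input with select(2) (assuming fd is an already-opened serial device; this example is not from the original text):

```c
#include <sys/select.h>
#include <unistd.h>

/* Wait up to five seconds for input on an open serial file descriptor.
   Returns >0 if fd is readable, 0 on timeout, -1 on error. */
int wait_for_input(int fd)
{
    fd_set readfds;
    struct timeval timeout;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    timeout.tv_sec  = 5;
    timeout.tv_usec = 0;

    return select(fd + 1, &readfds, NULL, NULL, &timeout);
}
```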
The necessary declarations and constants for termios can be found in the header file <termios.h>. So code for serial or terminal I/O will usually start with:

```c
#include <termios.h>
```

Some additional functions and declarations can also be found in the <stdio.h>, <fcntl.h>, and <unistd.h> header files.
The termios I/O API supports two different modes:
1. Canonical mode.
This is most useful when dealing with real terminals, or devices that provide line-by-line communication. The terminal driver returns data line-by-line.
2. Non-canonical mode.
In this mode, no special processing is done, and the terminal driver returns individual characters.
On BSD-like systems, there are three modes:
1. Cooked Mode.
Input is assembled into lines and special characters are processed.
2. Raw mode.
Input is not assembled into lines and special characters are not processed.
3. Cbreak mode.
Input is not assembled into lines but some special characters are processed.
Unless set otherwise, canonical mode (or cooked mode under BSD) is the default. The special characters processed in the corresponding modes are control characters, such as end-of-line or backspace. The full list for a particular Unix flavor can be found in the corresponding termios man page. For serial communication it is often advisable to use non-canonical mode (raw or cbreak mode under BSD) to ensure that transmitted data is not interpreted by the terminal driver. Therefore, when setting up the communication parameters, the device should also be configured for raw/non-canonical mode by setting/clearing the corresponding termios flags. It is also possible to enable or disable the processing of the special characters on an individual basis.
This configuration is done by using the struct termios data structure, defined in the termios.h header. This structure is central to both the configuration of a serial device and querying its setup. It contains a minimum of the following fields:

```c
struct termios {
    tcflag_t c_iflag;    /* input specific flags (bitmask) */
    tcflag_t c_oflag;    /* output specific flags (bitmask) */
    tcflag_t c_cflag;    /* control flags (bitmask) */
    tcflag_t c_lflag;    /* local flags (bitmask) */
    cc_t     c_cc[NCCS]; /* special characters */
};
```
It should be noted that real struct termios declarations are often much more complicated. This stems from the fact that Unix vendors implement termios so that it is backward compatible with termio, and integrate termio and termios behavior in the same data structure so they can avoid having to implement the same code twice. In such a case, an application programmer may be able to intermix termio and termios code.

There are more than 45 different flags that can be set (via tcsetattr()) or got (via tcgetattr()) with the help of the struct termios. The large number of flags, and their sometimes esoteric and pathologic meaning and behavior, is one of the reasons why serial programming under Unix can be hard. In the device configuration, one must be careful not to make a mistake.
Opening/Closing a Serial Device

open(2)
A few decisions have to be made when opening a serial device. Should the device be opened for reading only, writing only, or both reading and writing? Should the device be opened for blocking or non-blocking I/O (non-blocking is recommended)? While open(2) can be called with quite a number of different flags controlling these and other properties, the following is a typical example:
```c
const char *device = "/dev/ttyS0";

fd = open(device, O_RDWR | O_NOCTTY | O_NDELAY);
if(fd == -1) {
    printf("failed to open port\n");
}
```
Where:
- device
- The path to the serial port (e.g. /dev/ttyS0)
- fd
- The returned file handle for the device. -1 if an error occurred
- O_RDWR
- Opens the port for reading and writing
- O_NOCTTY
- The port never becomes the controlling terminal of the process.
- O_NDELAY
- Use non-blocking I/O. On some systems this also means the RS232 DCD signal line is ignored.
NB: Be sure to #include <fcntl.h> as well for the constants listed above.
close(2)
Given an open file handle fd you can close it with the following system call
close(fd);
Basic Configuration of a Serial Interface

After a serial device has been opened, its default configuration, like baud rate or line discipline, typically needs to be overridden with the desired parameters. This is done with a rather complex data structure, and the tcgetattr(3) and tcsetattr(3) functions. Before that is done, however, it is a good idea to check if the opened device is indeed a serial device (aka tty).
The following is an example of such a configuration. The details are explained later in this module.
```c
#include <termios.h>
#include <unistd.h>

struct termios config;

//
// Check if the file descriptor is pointing to a TTY device or not.
//
if(!isatty(fd)) { ... error handling ... }

//
// Get the current configuration of the serial interface
//
if(tcgetattr(fd, &config) < 0) { ... error handling ... }

//
// Input flags - Turn off input processing
//
// convert break to null byte, no CR to NL translation,
// no NL to CR translation, don't mark parity errors or breaks
// no input parity check, don't strip high bit off,
// no XON/XOFF software flow control
//
config.c_iflag &= ~(IGNBRK | BRKINT | ICRNL |
                    INLCR | PARMRK | INPCK | ISTRIP | IXON);

//
// Output flags - Turn off output processing
//
// no CR to NL translation, no NL to CR-NL translation,
// no NL to CR translation, no column 0 CR suppression,
// no Ctrl-D suppression, no fill characters, no case mapping,
// no local output processing
//
// config.c_oflag &= ~(OCRNL | ONLCR | ONLRET |
//                     ONOCR | ONOEOT| OFILL | OLCUC | OPOST);
config.c_oflag = 0;

//
// No line processing
//
// echo off, echo newline off, canonical mode off,
// extended input processing off, signal chars off
//
config.c_lflag &= ~(ECHO | ECHONL | ICANON | IEXTEN | ISIG);

//
// Turn off character processing
//
// clear current char size mask, no parity checking,
// no output processing, force 8 bit input
//
config.c_cflag &= ~(CSIZE | PARENB);
config.c_cflag |= CS8;

//
// One input byte is enough to return from read()
// Inter-character timer off
//
config.c_cc[VMIN]  = 1;
config.c_cc[VTIME] = 0;

//
// Communication speed (simple version, using the predefined
// constants)
//
if(cfsetispeed(&config, B9600) < 0 || cfsetospeed(&config, B9600) < 0) {
    ... error handling ...
}

//
// Finally, apply the configuration
//
if(tcsetattr(fd, TCSAFLUSH, &config) < 0) { ... error handling ... }
```
This code is definitely not a pretty sight. It only covers the most important flags of the more than 60 termios flags. The flags should be revised, depending on the actual application.
Line-Control Functions

termios contains a number of line-control functions. These allow a more fine-grained control over the serial line in certain special situations. They all work on a file descriptor fildes, returned by an open(2) call to open the serial device. In the case of an error, the detailed cause can be found in the global errno variable (see errno(2)).
tcdrain

```c
#include <termios.h>

int tcdrain(int fildes);
```

Wait until all data previously written to the serial line indicated by fildes has been sent. This means the function will return when the UART's send buffer has cleared.

If successful, the function returns 0. Otherwise it returns -1, and the global variable errno contains the exact reason for the error.
use case

Today's computers are fast, have many cores, and apply a lot of optimization internally. That can cause strange results, because the outcome depends on circumstances the programmer may not be aware of. See the example below:

```c
set_rts();
write();
clr_rts();
```

With that code you would expect a signal going up, the write, and a signal going down. But that is wrong. The kernel may, for example, report success for write() before the data has really been written.

Now the same code using tcdrain():

```c
set_rts();
write();
tcdrain();
clr_rts();
```

Suddenly the code behaves as expected, because clr_rts() is executed only when the data really has been written. Several programmers solve the problem by using sleep()/usleep(), which may not be exactly what you want.
tcflow

```c
#include <termios.h>

int tcflow(int fildes, int action);
```

This function suspends/restarts transmission and/or reception of data on the serial device indicated by fildes. The exact function is controlled by the action argument. action should be one of the following constants:

- TCOOFF
- Suspend output.
- TCOON
- Restart previously suspended output.
- TCIOFF
- Transmit a STOP (xoff) character. Remote devices are supposed to stop transmitting data if they receive this character. This requires the remote device on the other end of the serial line to support this software flow-control.
- TCION
- Transmit a START (xon) character. Remote devices should restart transmitting data if they receive this character. This requires the remote device on the other end of the serial line to support this software flow-control.

If successful, the function returns 0. Otherwise it returns -1, and the global variable errno contains the exact reason for the error.
tcflush

```c
#include <termios.h>

int tcflush(int fildes, int queue_selector);
```
Flushes (discards) data written but not yet sent (data still in the UART send buffer) and/or flushes (discards) received data (data already in the UART receive buffer). The exact operation is defined by the queue_selector argument. The possible constants for queue_selector are:
- TCIFLUSH
- Flush received, but unread data.
- TCOFLUSH
- Flush written, but not yet sent data.
- TCIOFLUSH
- Flush both.
If successful, the function returns 0. Otherwise it returns -1, and the global variable errno contains the exact reason for the error.
tcsendbreak

```c
#include <termios.h>

int tcsendbreak(int fildes, int duration_flag);
```
Sends a break for a certain duration. The duration_flag controls the duration of the break signal:
- 0
- Send a break of at least 0.25 seconds, and not more than 0.5 seconds.
- any other value
- For values other than 0, the behavior is implementation defined. Some implementations interpret the value as a time specification, others just let the function behave like tcdrain().
A break is a deliberately generated framing (timing) error of the serial data – the signal's timing is violated by sending a series of zero bits, which also encompasses the start/stop bits, so the framing is explicitly gone.
If successful, the function returns 0. Otherwise it returns -1, and the global variable errno contains the exact reason for the error.
Reading and Setting Parameters

Introduction

There are more than 60 parameters a serial interface in Unix can have, assuming of course the underlying hardware supports every possible parameter – which many serial devices in professional workstations and Unix servers indeed do. This plethora of parameters and the resulting different interface configurations are what make serial programming in Unix and Linux challenging. Not only are there so many parameters, but their meanings are often rather unknown to contemporary hackers, because they originated at the dawn of computing, where things were done differently and are no longer known or taught in Little-Hacker School.

Nevertheless, most of the parameters of a serial interface in Unix are controlled via just two functions:
- tcgetattr()
- For reading the current attributes.
and
- tcsetattr()
- For setting serial interface attributes.
All information about the configuration of a serial interface is stored in an instance of the struct termios data type. tcgetattr() requires a pointer to a pre-allocated struct termios into which it writes the current settings. tcsetattr() requires a pointer to a pre-allocated and initialized struct termios from which it reads the settings to apply.
Further, speed parameters are set via a separate set of functions:
- cfgetispeed()
- Get line-in speed.
- cfgetospeed()
- Get line-out speed.
- cfsetispeed()
- Set line-in speed.
- cfsetospeed()
- Set line-out speed.
The following subsections explain the mentioned functions in more detail.
Attribute Changes

50+ attributes of a serial interface in Unix can be read with a single function: tcgetattr(). Among these parameters are all the option flags and, for example, information about which special character handling is applied. The signature of that function is as follows:

```c
#include <termios.h>

int tcgetattr(int fd, struct termios *attribs);
```
Where the arguments are:
- fd
- A file handle pointing to an opened terminal device. The device has typically be opened via the open(2) system call. However, there are several more mechanisms in Unix to obtain a legitimate file handle (e.g. by inheriting it over a fork(2)/exec(2) combo). As long as the handle points to an opened terminal device things are fine.
- *attribs
- A pointer to a pre-allocated struct termios, where tcgetattr() will write to.
tcgetattr() returns an integer that either indicates success or failure in the way typical for Unix system calls:
- 0
- Indicates successful completion
- -1
- Indicates failure. Further information about the problem can be found in the global (or thread-local) errno variable. See the errno(2), intro(2), and/or perror(3C) man page for information about the meaning of the errno values.
- Note: it is a typical beginner and hacker error to not check the return value and assume everything will always work.
The following is a simple example demonstrating the use of tcgetattr(). It assumes that standard input has been redirected to a terminal device:
```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    struct termios attribs;
    speed_t speed;

    if(tcgetattr(STDIN_FILENO, &attribs) < 0) {
        perror("stdin");
        return EXIT_FAILURE;
    }

    /*
     * The following mess is to retrieve the input
     * speed from the returned data. The code is so messy,
     * because it has to take care of a historic change in
     * the usage of struct termios. Baud rates were once
     * represented by fixed constants, but later could also
     * be represented by a number. cfgetispeed() is a far
     * better alternative.
     */
    if(attribs.c_cflag & CIBAUDEXT) {
        speed = ((attribs.c_cflag & CIBAUD) >> IBSHIFT)
              + (CIBAUD >> IBSHIFT) + 1;
    } else {
        speed = (attribs.c_cflag & CIBAUD) >> IBSHIFT;
    }
    printf("input speed: %lu\n", (unsigned long) speed);

    /*
     * Check if received carriage-return characters are
     * ignored, changed to new-lines, or passed on
     * unchanged.
     */
    if(attribs.c_iflag & IGNCR) {
        printf("Received CRs are ignored.\n");
    } else if(attribs.c_iflag & ICRNL) {
        printf("Received CRs are translated to NLs.\n");
    } else {
        printf("Received CRs are not changed.\n");
    }

    return EXIT_SUCCESS;
}
```
Once the above program is compiled and linked, let's say under the name example, it can be run as it follows:
```sh
./example < /dev/ttya
```
Assuming /dev/ttya is a valid serial device, one can run stty to verify that the output is correct.
tcsetattr()

```c
#include <termios.h>

int tcsetattr(int fd, int optional_actions, const struct termios *options);
```
Sets the termios struct of the file handle fd from the options defined in options. optional_actions specifies when the change will happen:
- TCSANOW
- the configuration is changed immediately.
- TCSADRAIN
- the configuration is changed after all the output written to fd has been transmitted. This prevents the change from corrupting in-transmission data.
- TCSAFLUSH
- same as above but any data received and not read will be discarded.
Baud-Rate Setting
Reading and setting the baud rates (the line speeds) can be done via the tcgetattr() and tcsetattr() function. This can be done by reading or writing the necessary data into the struct termios. However, this is a mess. The previous example for tcgetattr() demonstrates this.
Instead of accessing the data manually, it is highly recommended to use one of the following functions:
- cfgetispeed()
- Get line-in speed.
- cfgetospeed()
- Get line-out speed.
- cfsetispeed()
- Set line-in speed.
- cfsetospeed()
- Set line-out speed.
Which have the following signatures:
```c
#include <termios.h>

speed_t cfgetispeed(const struct termios *attribs);
```
- speed
- The input baud rate.
- attribs
- The struct termios from which to extract the speed.
```c
#include <termios.h>

speed_t cfgetospeed(const struct termios *attribs);
```
- speed
- The output baud rate.
- attribs
- The struct termios from which to extract the speed.
```c
#include <termios.h>

int cfsetispeed(struct termios *attribs, speed_t speed);
```
- attribs
- The struct termios in which the input baud rate should be set.
- speed
- The input baud rate that should be set.
The function returns
- 0
- If the speed could be set (encoded).
- -1
- If the speed could not be set (e.g. if it is not a valid or supported speed value).
```c
#include <termios.h>

int cfsetospeed(struct termios *attribs, speed_t speed);
```
- attribs
- The struct termios in which the output baud rate should be set.
- speed
- The output baud rate that should be set.
The function returns
- 0
- If the speed could be set (encoded).
- -1
- If the speed could not be set (e.g. if it is not a valid or supported speed value).
Here is a simple example for cfgetispeed(). cfgetospeed() works very similarly:
```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    struct termios attribs;
    speed_t speed;

    if(tcgetattr(STDIN_FILENO, &attribs) < 0) {
        perror("stdin");
        return EXIT_FAILURE;
    }

    speed = cfgetispeed(&attribs);
    printf("input speed: %lu\n", (unsigned long) speed);

    return EXIT_SUCCESS;
}
```
cfsetispeed() and cfsetospeed() are straightforward, too. The following example sets the input speed of stdin to 9600 baud. Note that the setting will not be permanent, since the device might be reset at the end of the program:
```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    struct termios attribs;

    /*
     * Get the current settings. This saves us from
     * having to initialize a struct termios from
     * scratch.
     */
    if(tcgetattr(STDIN_FILENO, &attribs) < 0) {
        perror("stdin");
        return EXIT_FAILURE;
    }

    /*
     * Set the speed data in the structure
     */
    if(cfsetispeed(&attribs, B9600) < 0) {
        perror("invalid baud rate");
        return EXIT_FAILURE;
    }

    /*
     * Apply the settings.
     */
    if(tcsetattr(STDIN_FILENO, TCSANOW, &attribs) < 0) {
        perror("stdin");
        return EXIT_FAILURE;
    }

    /* data transmission should happen here */

    return EXIT_SUCCESS;
}
```
Modes

Special Input Characters

Canonical Mode
Everything is stored into a buffer and can be edited until a carriage return or line feed is entered. After the carriage return or line feed is pressed, the buffer is sent.
```c
options.c_lflag |= ICANON;
```

where:

- ICANON
- Enables canonical input mode
Non-Canonical Mode

This mode will handle a fixed number of characters and allows for a character timer. In this mode, input is not assembled into lines and input processing does not occur. Here we have to set two parameters: the timeout and the minimum number of characters to be received before read() is satisfied. These are set via the VTIME and VMIN entries of the c_cc array. For example, if we want a minimum of 4 characters and no timer, we can do it as follows:

```c
options.c_cc[VTIME] = 0;
options.c_cc[VMIN]  = 4;
```
Misc.

There are a few C functions that can be useful for terminal and serial I/O programming and are not part of the terminal I/O API. These are:

```c
#include <stdio.h>

char *ctermid(char *s);
```
This function returns the device name of the current controlling terminal of the process as a string (e.g. "/dev/tty01"). This is useful for programs that want to open that terminal device directly in order to communicate with it, even if the controlling terminal association is removed later (because, for example, the process forks/execs to become a daemon process).
*s can either be NULL or should point to a character array of at least L_ctermid bytes (the constant is also defined in stdio.h). If *s is NULL, then some internal static char array is used, otherwise the provided array is used. In both cases, a pointer to the first element of the char array is returned.
```c
#include <unistd.h>

int isatty(int fildes);
```
Checks if the provided file descriptor represents a terminal device. This can e.g. be used to figure out if a device will understand the commands sent via the terminal I/O API.
```c
#include <unistd.h>

char *ttyname(int fildes);
```
This function returns the device name of a terminal device represented by a file descriptor as a string.
```c
#include <sys/ioctl.h>

ioctl(int fildes, TIOCGWINSZ, struct winsize *);
ioctl(int fildes, TIOCSWINSZ, struct winsize *);
```
These I/O controls allow getting and setting the window size of a terminal emulation (e.g. an xterm) in pixel and character sizes. Typically the get variant (TIOCGWINSZ) is used in combination with a SIGWINCH signal handler. The signal handler gets called when the size has changed (e.g. because the user has resized the terminal emulation window), and the application uses the I/O control to get the new size.
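A minimal sketch of this pattern (not from the original text):

```c
#include <signal.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static volatile sig_atomic_t resized = 0;

static void on_winch(int signo)
{
    (void) signo;
    resized = 1;  /* only set a flag; do the real work outside the handler */
}

int main(void)
{
    struct winsize ws;

    signal(SIGWINCH, on_winch);
    for (;;) {
        pause();  /* sleep until a signal arrives */
        if (resized) {
            resized = 0;
            if (ioctl(STDIN_FILENO, TIOCGWINSZ, &ws) == 0)
                printf("now %u columns x %u rows\n",
                       (unsigned) ws.ws_col, (unsigned) ws.ws_row);
        }
    }
}
```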
Example terminal program

A simple terminal program with termios.h can look like this:

Warning: In this program the VMIN and VTIME flags are ignored because the O_NONBLOCK flag is set.

```c
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <string.h> /* needed for memset */

int main(int argc, char** argv)
{
    struct termios tio;
    struct termios stdio;
    int tty_fd;
    unsigned char c = 'D';

    printf("Please start with %s /dev/ttyS1 (for example)\n", argv[0]);

    memset(&stdio, 0, sizeof(stdio));
    stdio.c_iflag = 0;
    stdio.c_oflag = 0;
    stdio.c_cflag = 0;
    stdio.c_lflag = 0;
    stdio.c_cc[VMIN]  = 1;
    stdio.c_cc[VTIME] = 0;
    tcsetattr(STDOUT_FILENO, TCSANOW, &stdio);
    tcsetattr(STDOUT_FILENO, TCSAFLUSH, &stdio);
    fcntl(STDIN_FILENO, F_SETFL, O_NONBLOCK);  /* make the reads non-blocking */

    memset(&tio, 0, sizeof(tio));
    tio.c_iflag = 0;
    tio.c_oflag = 0;
    tio.c_cflag = CS8 | CREAD | CLOCAL;  /* 8n1, see termios.h for more information */
    tio.c_lflag = 0;
    tio.c_cc[VMIN]  = 1;
    tio.c_cc[VTIME] = 5;

    tty_fd = open(argv[1], O_RDWR | O_NONBLOCK);
    cfsetospeed(&tio, B115200);  /* 115200 baud */
    cfsetispeed(&tio, B115200);  /* 115200 baud */
    tcsetattr(tty_fd, TCSANOW, &tio);

    while (c != 'q')
    {
        /* if new data is available on the serial port, print it out */
        if (read(tty_fd, &c, 1) > 0)
            write(STDOUT_FILENO, &c, 1);

        /* if new data is available on the console, send it to the serial port */
        if (read(STDIN_FILENO, &c, 1) > 0)
            write(tty_fd, &c, 1);
    }

    close(tty_fd);
    return EXIT_SUCCESS;
}
```
RunBash
This post is about running shell commands from within Groovy, specifically bash but it is easy to adapt to other shells. Groovy has built-in support for running commands like this:
"ls -l".execute()
That is about as simple as it can get and works great for many situations. However there is a key gotcha:
execute() simply executes the given command passing it whatever else is in the string as options to the command. The options are not passed through any shell (e.g. bash) for wildcard expansion or other transformations. As a result, you can not natively do something like:
"ls *.groovy".execute()
In this case, no shell sees the
* to expand it, and so the
* just gets passed to
ls exactly as an argument which
ls itself does not know how to interpret.. I have written a class called
RunBash which provides this function. With this class you can properly execute the
ls *.groovy example above with code like
"ls *.groovy".bash(). You can even execute more complicated shell scripts like:
""" for file in \$(ls); do echo \$file done """.bash()
To turn on this functionality it is necessary to call
RunBash.enable() first. So a full example using the durbinlib implementation of RunBash is:
#!/usr/bin/env groovy import durbin.util.* RunBash.enable() """ for file in \$(ls); do echo \$file done """.bash()
Installation
You can obtain
RunBash with durbinlib, which is my kitchen-sink of everday classes and scripts. Simply clone, ant build, and add to
CLASSPATH and
PATH like:
git clone git://github.com/jdurbin/durbinlib.git cd durbinlib ant install export PATH=$PATH:pathtodurbinlib/scripts/ export CLASSPATH=pathtodurbinlib/target/jar/*
Source
To get an idea how this is implemented in Groovy code, a stripped-down but functional version of the class is shown below:
import java.io.InputStream; class RunBash{ static boolean bEchoCommand = false; // Add a bash() method to GString and String with meta-object programming // This is why it's necessary to call an enable function in your script.) {} } } | http://jdurbin.github.io/blog/groovyshell/ | CC-MAIN-2017-30 | refinedweb | 335 | 64.3 |
KnockoutJS (KO) is a JavaScript library written by Steve Sanderson who works for Microsoft and is the author of Pro ASP.NET MVC Framework. It is a JavaScript library that helps build apps conforming to the Model View View-Model pattern. KO makes it easier to create rich, desktop-like user interfaces with JavaScript and HTML.
KO uses observers to make your user interface automatically stay in sync with an underlying data model, along with a powerful and extensible set of declarative bindings to enable productive development. In short, if you want your website to have a snappy, slick user interface, with less code, then Knockout is certainly a good library to use.
This article is published from the DNC .NET Magazine – A Free High Quality Digital Magazine for .NET professionals published once every two months
Knockout works along-side any JavaScript library so you can use jQuery and other JavaScript libraries along with KO as your application starts to grow. We first take a look at the features of KO and then dive into a sample that helps explain these.
KnockoutJS gives us declarative bindings which means we can easily get access to HTML DOM elements on our page and use it with any web framework including PHP, Ruby on Rails and even with ASP. It’s free, open source with no dependencies and supports all mainstream browsers including IE6+, Firefox 2+, chrome, Opera and Safari.
Dependency Tracking comes with KnockoutJS so that you can chain relationships between your model data which lets you transform your data when a part of the relationship change gets updated.
KnockoutJS has inbuilt templating as well as the ability to use custom templating such as JQuery templating or your very own custom templating. KnockoutJS comes with a number of built-in bindings and these make life really easy and straight forward, these include bindings for controlling text and appearance, control flow, and form fields.
With KnockoutJS you can create your own custom bindings and this really could not be easier, so with KnockoutJS we extend it with using custom bindings and templates and these templates can include JQuery templates or plain old HTML templates. Note that using external templates can slow down the rendering slightly and it’s advisable to try to use inline templating where possible.
As developers, we are always looking for something that adds that little-bit-more to our websites to make them snappier, have a nicer user interface, with less code. KnockoutJS gives us the ability to load our data for the page and have KnockoutJS bind the data to the user interface using simple elegant bindings.
If the data changes or the user makes a change to the webpage, KnockoutJS has 2-way binding that updates the page and reflect the changes on the user interface.
If you have used JQuery, you might be thinking - hang on JQuery does this for me already! Yes you’d be correct, but KnockoutJS can simplify things even further. With KO you can use a number of bindings for a whole range of things and also extend these to create your own easily. Shown below is how you define a custom binding.
ko.bindingHandlers.myCustomBinding = { init: function(element, valueAccessor, allBindingsAccessor, viewModel) { //init logic }, update: function(element, valueAccessor, allBindingsAccessor, viewModel) { //update logic }};
Microsoft is shipping KO as a part of the ASP.NET project templates in Visual Studio. So any new project will have KO added to it via a NuGet package reference. If you’re not working on a new MVC 4 application, you can add it into any web application by using NuGet as below:-
Or by downloading the .js file from its home on Github , add the knockout .js file to your application and reference it in your webpage and your good to go.
When using KnockoutJS within a web application, there are a couple of things you can do to make life easier and these are some best practice points from using it, which I have found useful. If you create a new MVC web application in VS 2012, the solution loads and if you take a look at the contents of the packages.config file, we can see that as part of the solution we are pulling in a number of NuGet packages, one of which is:-
Here we are using NuGet to pull in version 2.1.0 of the KnockoutJS library which at the time of writing this article was the latest version, so straight out of the box our solution has a reference to KnockoutJS and we haven’t had to do anything – were off to a great start. Update: At the time of posting this article, the version is 2.2.1.
Our demo application is a simple shopping cart order page where we can update the quantity of items in our shopping basket and see a running total being updated. The page has the data passed in from our MVC Controller Action Method and this is then presented to the page. Demo has a ProductController, a set of entities like CartItem, Category, Model, OrderItems and Product, and the Index.cshtml for the UI. Custom scripts are in the JS folder and we have ajaxservice.js, dataservice.shopping.js, index.js and utils.js.
To start with, when using KnockoutJS, it’s a good idea to create a namespace and use it within your JavaScript to keep things nice and tidy, same as you would within your c# codebase. To create a namespace in JavaScript is as simple as:
var OrdersApp = OrdersApp || {};
Next we need to create our ViewModel. It’s in the index.js file. A typical example of the ViewModel we might use for our shopping cart is as follows:-
$(function () { OrdersApp.Product = function () { var self = this; self.id = ko.observable(); self.price = ko.observable(); self.category = ko.observable(); self.description = ko.observable(); }; // The ViewModel OrdersApp.vm = function () { products = ko.observableArray([]), shoppingCart = ko.observableArray([]), addToCart = function (product) { // Stub }, removeFromCart = function (cartItem) { // Stub } }, grandTotal = ko.computed(function () { // Stub }), loadProducts = function () { // Stub }; return { products: products, loadProducts: loadProducts, shoppingCart: shoppingCart, addToCart: addToCart, removeFromCart: removeFromCart, grandTotal: grandTotal }; }();
OrdersApp.vm.loadProducts(); ko.applyBindings(OrdersApp.vm);});
This is gist of the complete ViewModel, but what we have here is a pure JavaScript representation of the model data (i.e. products and a shoppingCart) and actions to be performed. The ko.applyBindings() statement is used to tell KnockoutJS to use the object as the ViewModel for the page.
Products are defined as an observableArray and this means that KnockoutJS will track what’s in this array. For example we can push() and pop() items onto the products observableArray and the front end will automatically show us the updated data due to the 2-way binding KnockoutJS has – you can even use the console in your Chrome browser to manipulate the items in your ViewModel and KnockoutJS will take care of the updating of the user interface for you - it’s that simple.
In order to display a list of Products, we would be able to use the built in for-each binding and display our products as follows.
Details Make: Model: Price: Add Item
Note that KO also uses the data-bind tag to specify the type and field to bind to in the ViewModel. The elements inside the foreach data-bind are treated as the ‘row-template’ and repeated for each ‘product’ in the Products collection.
As we can see above, each of the ‘Add Item’ buttons invoke the $root.addToCart method. However on addition the Total gets updated automatically. If we look at the markup for rendering the cart total, we will see the following
Total ItemsTotal Price 0, click: $root.placeOrder">Place Order
As we can see here, Total Items and Total Price is bound to the calculated fields of shoppingCart().length and value returned by the grandTotal() function. In Index.js we will see the grandTotal method is defined as follows.
grandTotal = ko.computed(function () {var total = 0;$.each(shoppingCart(), function () { total += this.extPrice();});return total;})
Essentially we have defined it as a KO computed value that’s calculated for all the elements in the shopping cart. KO ‘observes’ for the changes in the number of shoppingCart items and on change, computes the grandTotal. Once the grandTotal value is computed, KO updates the UI because of the binding. If we put a breakpoint in the above function and add an item to the cart, we can see how KO call the computed function automatically.
Thus change tracking and two way data binding provide a rich and responsive behavior where changes in one area of the application gets reflected immediately elsewhere. The demo application to show the basic flow using KnockoutJS code can be found on Github here:
The complete demo shows a shopping cart webpage as shown above where you add products to your basket and can update the totals and a basket total is calculated, you can also remove items from the basket and the totals are all kept in sync using KnockoutJS.
The example code covers use of observables and observable Arrays, computed functions, namespaces in your JavaScript and how to use callbacks.
KnockoutJS is perfect for creating a user interface that responds immediately to the user and that includes adding and removing data. You don’t have to wait on server postbacks and hack away with viewstate or use any of the older tricks such as Update Panels and similar ones. You set up your ViewModel, then fill it with your data and then the user interface is updated using the 2-way binding done for by KnockoutJS. It’s quick, responds immediately and makes the user experience a whole lot better than before the introduction of KnockoutJS.
KnockoutJS gives us the added benefit of separation of concerns and the fact that it allows for better unit testing of the user interface code – we can now test our JavaScript with a tool such as QUnit allowing us to add into our build server so we can run the user interface unit tests before deployment.
Having discussed what KnockoutJS is good at, we should cover what problems you might run into using KnockoutJS - it’s actually something to be careful of within JavaScript.
Scoping in JavaScript is a minefield and can really give you sleepless nights if you’re not careful, so a little discipline is required in order to try to avoid falling into some scoping issues when using JavaScript.
When using JavaScript, there are thankfully a few JavaScript patterns you can use and one is the Module Pattern which is useful for organizing independent, self-containing pieces of JavaScript – you can read more about the Module Pattern here
Be careful when using the ‘this’ keyword in JavaScript as you can run into issues easily with the context of the keyword this depending on how you structure your JavaScript.
For a very good tutorial on how to go about structuring your JavaScript that you will use when working with KnockoutJS as well as other great tips I recommend, you take a look at Rob Connery’s Tekpub course here:
In this article I covered an introduction to KnockoutJS; although we covered a fair amount it hopefully left you wanting to know more. I have listed some additional learning resources over here bit.ly/dncmag-snapko in the Readme section.
In summary KnockoutJS is a fantastic addition to your arsenal as a web-developer when trying to create a slick user interface that responds immediately. You can use it with any web framework and it’s a one file addition to your solution which makes it easy to update if newer versions come out.
With KnockoutJS, you get superb tutorials and there are more and more articles and tutorials popping up all the time. There is no reason not to give it a try as its super easy to get going with, just be mindful of scoping. Also try to refactor your JavaScript code at all times. If you find yourself writing a lot of JavaScript when using KnockoutJS, the chances are there is a better cleaner way (normally methods are only a couple of lines).
Spend a couple of hours using KnockoutJS and you’ll wonder why you’re not using it on every web project – you can even go back and add it into your old web applications and improve the user experience with ease.
Download the entire source code of this article (Github)
Gregor Suttie is a developer who has been working on mostly Microsoft technologies for the past 14 years, he is 35 and from near Glasgow, Scotland. You can Follow him on twitter @gsuttie and read his articles at bit.ly/z8oUjM | http://www.dotnetcurry.com/aspnet-mvc/905/shopping-cart-ui-aspnet-mvc-knockoutjs | CC-MAIN-2016-36 | refinedweb | 2,125 | 57.91 |
When you run print(‘ hello python ‘) in python code, it always print the text on screen, this is because python standard output is screen console. But we always want to print the text to a file instead of screen console, especially print exception error messages when exception occurred in python source code.
To fix this issue, we should first open a file object in python with write permission, then set the file object to python system standard output. Then when you invoke print(‘ text ‘) in python, the text will be printed to file. Below is the example.
import sys from test.test_decimal import file if __name__ == '__main__': # open a file object with write permission. file_object = open('./log.txt', 'w') # assign the file object to system standard output stream. sys.stdout = file_object # invoke print() method to print text to above file. print('hello python') # close the file object. sys.stdout.close() | https://www.dev2qa.com/how-to-print-text-from-standard-output-stdout-to-file-in-python/ | CC-MAIN-2021-43 | refinedweb | 150 | 73.88 |
On Saturday 07 September 2002 21:29, address@hidden wrote: > some docu, some blabla and the worldmap stuff needs a complete rewrite... Are you going to do the rewrite or do you need help? BTW I thought about moving /editorobjs --> /editor/objs and worldobjsdata --> worldobjs/data. I think this would make the structure a bit cleaner and while a WorldObjs::Data::XY may seem a bit awkward, the main usage is inside of namespace WorldObj anyway and a Data::XY is definitely better then a WorldObjsData::XY. Right now it's just an idea I'm playing with while I'm trying to find the time to split the remaining WorldObj's. Bye David | http://lists.gnu.org/archive/html/pingus-devel/2002-09/msg00032.html | CC-MAIN-2016-36 | refinedweb | 114 | 59.23 |
J.
For example, the current implementation of o.a.g.d.m.g.ejb.MessageDriven is
public class MessageDriven extends Ejb {
}
Which is good, except that it does not implement/extend the standard EJB or
MessageDriven classes. So this is not a unified POJO tree. I'm proposing that
we change this to:
public class MessageDriven extends Ejb
implements o.a.g.d.m.ejb.MessageDriven
{
}
and remove the implementations from o.a.g.d.m.ejb.*. This is
simple and reduces our code volume. At the cost that we have
to use the geronimo instances of the POJOS (but can do so via
the standard interfaces if we wish).
If somebody really wants a non-geronimo version of the standard
interfaces, then we can provide implementations of them.
I think that what Jeremy & Aaron are proposing is that we change this to
public class MessageDriven extends o.a.g.d.m.ejb.MessageDriven
{
// reimplement all the array getters/setter & instance getter/setter for
// EjbRef
// ResourceRef
// ServiceRef
// EjbLocalRef
// JndiContextParam
// MessageDesination
// ResourceEnvRef
// SecurityRoleRef
// etc.
// to use geronimo object.
// reimplement them all again for type safe versions.
}
So that's at least 64 methods that need to be reimplemented.
And we can't reuse those implementations in Entity or Session, so
we have to reimplement them all over again -that's > 186 trivial methods
that we have cut and pasted by hand!!!
This can be simplified, if we don't reimplement the standard methods
and just store geronimo types in the base class arrays and fields.
But :
- we lose type safety (one of the reasons to use POJOs)
- the risk of having mixed trees is large
- we will have to write copy constructors for everything so we can
convert between standard trees and geronimo trees
- we have to have lots of explicit type checking and casting.
- if we add type safe methods, they will have to copy arrays
- We still have to have type safe impls replicated in Entity, MessageDriven
and Session.
> You had originally proposed additional methods like getGeronimoEjbRef()
> - what was wrong with them (apart from the name)? A user can use those
> or can cast the result from getEJBRef().
Because either we reimplement hundreds of methods or we end up
copying arrays every time the type safe methods are called.
> Have we made any progress on the names?
I've decided that progress is highly overrated and that I much prefer
to write long ranting emails:-) Sorry guys, but I am having trouble getting
into the "apache way" here. I don't want to do anything without consensus,
but I appear unable to reach consensus with anybody about anything - even
the neighbours cat is disagreeing with me about who is meant to feed it !-)
I did start the refactoring, but there is some code that relies
on the fact that EJB* comes from one package and Ejb* come from another.
So renaming all to EJB gives name clashes that need to be resolved
with package names. This is a lot of code to change and I would prefer
to remove most of it (as above) rather than fix it. So either we
need to reach consensus on the above or I have to stop worrying about it :-)
cheers
>>-----Original Message-----
>>From: Gmane Remailer [mailto:public=SUao4/[email protected]] On Behalf
>>Of Greg Wilkins
>>Sent: Monday, September 08, 2003 8:11 PM
>>To:
>>public-geronimo-dev=d1GL8uUpDdXTxqt0kkDzDmD2FQJk+8+b=8ByrlEUxsivZ+VzJOa5vwg@public.gmane.org
>>Subject: Re: [XML][Deployment]POJO design?
>>
>>
>>
>>
>>sorry,
>>
>>I tried to let this one go... but I can't... one more try :-)
>>
>>Aaron Mulder wrote:
>>
>>>>Or are you saying that we just just don't have a geronimo.ejb.EJB
>>>>class and multiple implement it's methods in geronimo.ejb.Session,
>>>>geronimo.ejb.Entity etc.
>>>
>>> Yes, I'm saying skip the geronimo.ejb.EJB. It would
>>
>>currently have
>>
>>>no properties beyond what ejb.EJB has, and later would support only
>>>the most basic (pool size perhaps?).
>>
>>But that does not work, because geronimo.ejb.EJB is need to
>>provide implementation of all the standard methods that
>>return the geronimo specific versions of EJBRef etc.
>>
>>If we don't have geronimo.ejb.EJB, then we have to implement
>>most the methods in most of the classes.
>>
>>
>>>.
>>
>>Why? We are writing geronimo code, so what it the problem with
>>creating geronimo versions of the DD POJOs? If you don't want to
>>do anything with the non-standard elements then don't set
>>them and don't render them in the XML.
>>
>>Note that jeremies current proposal has no difference between
>>the standard and vendor DD's anyway!
>>
>>
>>>.
>>
>>No - the reason that the geronomo classes exist is to provide
>>typed versions of the methods, so you don't have to cast to
>>the geronimo specific instances all the time. If we are happy
>>with untyped interfaces - then let's just use DOM!
>>
>>I think my proposal is actually removing code and complexity.
>> We will only have concrete implementations for of geronimo
>>objects. There will be no standards only implementations, no
>>copy constructors to convert standard to geronimo, no
>>casting, no type checking before casting, no marshalling code
>>for standard only, etc.
>>
>>I am sure that we could get by without a standard version of
>>the DD POJOs and only use the G DD POJOs. But having the
>>standard interfaces is less code that we have already and at
>>least gives a good indication of what's standard and what's not.
>>
>>It seams bizarre to me that on one hand we are trying to
>>merge the standard and vender XML into a single file, but on
>>the other we are keeping separate implementations of the
>>elements as POJOs???
>>
>>cheers
>>
>>
>>
>>
>>
>>
>>
>
>
> | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200309.mbox/%[email protected]%3E | CC-MAIN-2017-17 | refinedweb | 952 | 62.58 |
05-14-2019
12:48 AM
Hi,
In DFS you can replicatie the folder link content with DFS but i would like to sync the DFS settings.I have multiple domain controllers with multiple namespaces and multiple links. Do i really need to create each link or namespace of each DFS server or is there a much smarter way?Thanks for the replies
06-06-2019
04:32 PM
Hello @TommyB,
This can simply be done by installing the DFS Namespace role on a new server, and add that server to the current DFS Namespace(s), then the new server will automatically replicate the DFSRoot folder structure.
Best regards,Leon | https://techcommunity.microsoft.com/t5/Windows-Server-for-IT-Pro/DFS-settings-replication/m-p/565878 | CC-MAIN-2019-51 | refinedweb | 110 | 63.02 |
With."
Does it make sense to split Technical conference and user conference?
Yes, it does.
> Does it make sense to split Technical conference and user conference?
I'm not sure that is the right question, certainly it is very useful to have a very techincal developer focussed conference to get certian things done and I think Akademy was successful in that regard. User conferences are different and useful in their own way but the work can more easily be distributed and similar effects can be achieved by many smaller local meetings and getting KDE people out to booths at existing conferences.
As with all things in life balance is needed.
he would replace that 'g' in his name with a 'k'! How about it, Aaron? As an added bonus, you could use "Psycho" as your nickname.
Or in this case "Psyko"?
Phonon, Solid sound like good things, not particularly revolutionary though. But what about Plasma? I've read everything I could find about it and I still have no idea what it's supposed to change.
On the contrary - with enough application support, these two (Solid in particular) have the potential to really improve the desktop experience - things will "just work", and better than they do on other operating systems.
Not to sound too naive here, but at some level don't hardware vendors still have to produce drivers? I can see the potential from the application perspective, but for things to just work (and better than they do on other operating systems) I would think up to date hardware drivers will still be needed. Am I wrong here?
Eric.....
nope, drivers will be as important as they are now. solid will make it easier for application developers to get information about hardware. this might sound not-so-important, but you can imagine the difference between 'with 5 wel documented lines of code you can know this and that' and 'let's work through a pile of (often badly documented) code to find out how to know this and that, and do so for every platform (and sometimes several times for several kernel versions within a platform) kde supports'.
same with phonon. if you want to play a sound in an application now, you can use arts - but users will complain if they use esound or gstreamer or xine. so you'll have to build a plugin structure which allows the user to choose between these sound engines. so you have to know all these engines (sometimes badly documented). and you have to update your application every time a new version of each of these engines is released.
or just forget about sound...
with phonon, there are a few well-documented lines - and tadaaa, sound.
and there will be strigi or something like that for indexing/searching, kross for scripting, decibel for comunication, etcetera.
not very innovative? no, i agree. but it'll lead to way better applications, as developers can easily add these features, and they can spend more time on actually making their apps better. and KDE will have the most elaborate framework compared to other DE's and OS'es (it already has, actually, it'll just get another, huge, boost).
"not very innovative? no, i agree."
On the contrary. I think Solid is the most important thing in KDE 4. It means you'll actually have graphical applications that will reflect properly what hardware you actually have on your system.
Exactly. Number one complaint from a friend of mine about linux: "Why can't I see what hardware I have, and what it can't find drivers for?"
It's really tough in Linux to simply find out what the hell the kernel is even doing with hardware. So if a newbie realizes that the sound doesn't work, they have no idea why. With solid, I hope they would be able to see nicely what's going on with the hardware. Is no module loaded at all? Why? Is a module loaded, and which one? Then then have a chance of troubleshooting this sort of stuff.
"with phonon, there are a few well-documented lines - and tadaaa, sound"
Yes, but lying beneath the few well-documented lines are the plug-in sound engines. So, Phonon would be using Xine or GStreamer or whatever underneath. If that is the case and there is a bug in (Xine|GStreamter|*), then the bug would still rear its ugly head for the user. The developers, in turn, would need to turn to (Xine|GStreamter|*) to find the bug. So, while I agree that Phonon is nice to show one unified sound API to KDE developers no matter what sound engine lies underneath, you are still faced with the plug-in problem you noted earlier in your post.
Mind you, I'm not bashing Phonon. Multimedia is simply a difficult problem to solve, given the hardware drivers problems and multiple software engines problems.
The advantage of having Phonon is when the plugin-architecture needs to be changed somehow because of a change in the backends, it doesn't have to be done at least 3 times (Amarok, Kaffeine, KMPlayer, Juk?, probably more all already implement something quite similar to Phonon, but only for themselves).
Thanks for the feedback. Mind you, I'm not a programmer at all so I was in no way saying that these new technologies aren't innovative. I was trying to clarify my understanding of the benefit of the new technologies. So this is how I see it. We still need to push hardware vendors to provide high quality linux drivers for their products. Solid and Phonon allow the app developers to easily add support for that hardware into their given app, freeing them to concentrate on improving the app. Allowing app developers to concentrate on innovating is in it's own right innovative. Sounds good to me.
Also, the poster was mainly commenting on the lack of discussion regarding Plasma. I remember when Plasma was first announced, Aaron said he would not reveal much till the very end so others (windows?) wouldn't be able to steal ideas. I'm hoping this is the reason we're not hearing much at this point.
For the Plasma scoop, visit this... and watch the video for a good measure.
My Summary:
Q: Why is it that I don't see the screenshots?
A: The infrastructure is not ready yet.
Q: What is that Plasma "infrastructure" you are talking about?
A: Simple answer: Imagine SuperKaramba-like eye-kandy and scriptability on accelerated system-wide "canvas"
Longer Answer: Aaron got tired of implementing everyone's visual desires in Kicker IN C++. He saw that pretty much everybody can do a good graphical "skinning" job using higher level languages and, his resolution was: backend (rendering, data) will be in C++, you ppl can roll your own UI.
Since the backend is nowhere near completion, the work on UI is far from began.
Soo... No Plasma screenies for you.
I think people want to hear more about use cases than technical blabing. For instance, will Plasma allow me to make icon-application-scripts easily like to be able to drag files there and then when I open with Konqueror they are converted to another format (like OpenDocument to MS Word, which I need a lot)? Will the panel it self be such an application-script? If yes, does this mean you could make application-scripts that have container areas? Would be could if it would allow to do application-scripts that resembled those side-panels of the old MacOS...
That's why I still call Plasma vaporware: they still don't have a plan! You now, just some sketches of what will Plasma allow you to do, what effects will be added, how the desktop will look.
Right now, Plasma is only a concept, a nice one, but that's it!
I belive KDE4 will be out without Plasma as it was promissed - a new way for doing desktop - but rather will keep the way KDE3 is and then add Plasma later, the day it's completed (I belive it's not soon).
But yeah, I would love KDE developers show me I'm 100% wrong :)
I belive KDE4 will be out without Plasma as it was promissed - a new way for doing desktop - but rather will keep the way KDE3 is and then add Plasma later, the day it's completed.
You are "somewhat" mistaken. The pieces for the future Plasma parts are already in place. The styleclock () author has already agreed to port it to Plasma. Since it appears the 1st language that will make it into Plasma will be JavaScript, consider that a no-brainer.
If python will make it into the backend, the crap-load of SuperKaramba applets will almost instantly show up too.
So, monitoring, time and calendar Plasma applets should be in KDE 4 almost from the start. Other things, like presence, collaboration and contact applets will probably lag, as the data backends for them will be very new in KDE 4.
I think that the doubting is about "Plasma as an infrastructure for different workflows on the desktop", not about "Plasma as a framework for skinnable super-applets on steroids". In other words, there seems to be a lack of discussion about behaviour and workflows.
I very much hope it's not one of those cases where ppl become so enamoured with the technology and the glitz that they gliss about the substance. I think we've all seen too many games and movies revolving around that new-fangled CG effect, and having plot or gameplay as an afterthought. Sadly, this glitzy fascination seem to be polluting the GUI design lately (see Vista's alt+tab as opposed to OS X Expose for an example of useless vs useful sophistication).
Or maybe I misunderstood the meaning of the Plasma namespace, and its real goal is only to offer the tools, not to redefine the experience.
Yep, that's it.
If the plan was to make Plasma as a new implementation of super-karamba, ok, fine.
But what was "promissed" was a complete new desktop paradigma with a lot of nice effetcs. I still don't see this coming :)
Is there any progress for the new default widget style? Few weeks ago I saw about cuckoo (cookoon cookoon cookun or whatever, I can't remember). But I've never heard about it anymore, any update on this (or it was just me who had too deep imagination about KDE4)? IMHO new widget style is good to compete with Windows Vista and GNOME. I'm sure that KDE 4 will rock!!!
I hope it will look and behave similar to the Polyester widget style which is really great. But striped menus and colored scrollbars should be disabled by default (perhaps less style options at all).
Amen to Polyester-isation, brother!
It offers by far the most comprehensive array of visual choices. Anything, from centered tabs, to exquisite tuning of "active" button appearance. It seems the author went around and cut the best parts out of other major contenders. A bang job!
With regards to usability in KDE, it might be useful
to take a good long read in the articles on this website: ; it is a comparison between the
Operating Systems WindowsXP SP2 and Mac OS X Tiger (10.4.7).
One thing that already caught my attention, is that, when
you have several iconsets installed in KDE, and then switch
to a different iconset, some icons are 'old' (from the previous
iconset). After, if you want to customize some icons, it's hard
to find which icons belong to which iconset.
So I propose that there's some kind of possibilty to sort/select
icons by iconset when you're in the 'icon picker' (big list of icons).
Reading this thing is good, if only because it makes you think about
things you might have taken for granted... | https://dot.kde.org/2006/10/02/linuxcom-aaron-seigo-talks-about-akademy-and-kde-4 | CC-MAIN-2016-40 | refinedweb | 2,003 | 71.65 |
0
Alright, I am having a problem with an error message that comes up, i am pretty sure the code is alright, but it might be the code, i have tried re-installing VB (Visual Basic 2010). i have tried a few things on the web and nothing. here is the error im getting...
error LNK2019: unresolved external symbol _WinMain@16 referenced in function ___tmainCRTStartup
1>K:\C++ Programming\20pg372\Debug\20pg372.exe : fatal error LNK1120: 1 unresolved externals
I just don't get it... Here is my code im trying to count the empty space in between each word in the file the user inputs...
#include <fstream> #include <iostream> #include <string> using namespace std; int main () { int size = 0, count = 0; char ch; int Option1; string FILE; cout <<"Hello, this program will count the number of words in a file of your choice!"<<endl; cout <<"If you are ready to begin enter 1 and press enter, Press 2 and enter to exit"<<endl; cin >> Option1; if (Option1 == 2) { cout<<"Have a nice day"<<endl; return 0; } if(Option1 == 1) { cout <<"Alrighty please enter the name of the file you wish to open"<<endl; cout <<"Also include the file format! for example( .txt, .doc, .rtf ect.)"<<endl; cin >> FILE; ifstream in_stream; in_stream.open(FILE); if(in_stream.fail()) { cout << "input file failed to open."<<endl; exit(1); } in_stream >> ch; while(in_stream >> ch) { if(isspace(ch)) { count++; in_stream >> ch; } } cout<<count; system("pause"); in_stream.close(); } } | https://www.daniweb.com/programming/software-development/threads/398385/error-that-comes-up-never-had-it-before-ive-tried-everything | CC-MAIN-2017-09 | refinedweb | 243 | 68.6 |
1. Download cx_Oracle-4.1.tar.gz from here.
2. Become root on the target system.
3. Make sure you have the ORACLE_HOME environment variable defined for user root and pointing to the correct Oracle installation directory on your server. This step is necessary because the cx_Oracle installation process looks for headers and libraries under ORACLE_HOME.
4. Install cx_Oracle by running 'python setup.py install'.
5. Become user oracle, since this is the user you'll most likely want to run python scripts as when interacting with Oracle databases. Make sure you have the following line in oracle user's .bash_profile (or similar for other shells):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/libIf you don't add ORACLE_HOME/lib to your LD_LIBRARY_PATH, you'll get an error such as:
Now you're ready to use the cx_Oracle module. Here's a bit of code that shows how to connect to an Oracle database and run a SELECT query:Now you're ready to use the cx_Oracle module. Here's a bit of code that shows how to connect to an Oracle database and run a SELECT query:
>>> import cx_Oracle
Traceback (most recent call last):
File "
", line 1, in ?
ImportError: ld.so.1: python: fatal: libclntsh.so.9.0: open failed: No such file or directory
import cx_Oracle
from pprint import pprint
connection = cx_Oracle.Connection("%s/%s@%s" % (dbuser, dbpasswd, oracle_sid))
cursor = cx_Oracle.Cursor(connection)
sql = "SELECT * FROM your_table"
cursor.execute(sql)
data = cursor.fetchall()
print "(name, type_code, display_size, internal_size, precision, scale, null_ok)"
pprint(cursor.description)
pprint(data)
cursor.close()
connection.close()
15 comments:
What/where is the RSS link to subsribe this great blog?
There's an Atom link at. Blogger doesn't offer RSS, so I'll probably use FeedBurner to syndicate my blog via RSS. In the mean time, you can subscribe to it via bloglines.com. And thamks for the kind words!
Grig
thanks for writing what you do. i ran into this exact problem installing cx_Oracle on Solaris and found your blog via google; your comments were helpful. also had to change the lib dir to lib32, in setup.py
Very useful. Like the previous commentator, I had to make some changes in order to install cx_Oracle on Solaris. (1) I had to change "lib" to "lib32" in setup.py. Also, (2) I had to change my ORACLE_HOME environment setting from "lib" to "lib32".
Thanks, Grig!
I'm getting this error when installing:
/usr/ucb/cc: language optional software package not installed
error: command 'cc' failed with exit status 1
I'm new to solaris so I have no clue what to do.
I also have no root access so I can't install anything out of my home dir. Maybe you could point me a link with binaries.
Hi can someone help me in installing cx_oracle on the Unix machine. I did as per the previoulsy posted blogs but still find this message.
import cx_oracle
Traceback (most recent call last):
File "stdin", line 1, in ?
ImportError: No module named cx_oracle
Anykind of help would be appreciated.
Building on solaris 9 sparc. Got the clntsh error, however my LD_LIBRARY_PATH did indeed include ORACLE_HOME/lib. Just to see what I could get away with, I commented out libs = ["clntsh"] in setup.py and the compile completed. Question now is what is clntsh?
...
creating build/lib.solaris-2.9-sun4u-2.5
gcc -shared build/temp.solaris-2.9-sun4u-2.5/cx_Oracle.o -L/nordic/apps/oracle/ora920/lib -lclntsh -o build/lib.solaris-2.9-sun4u-2.5/cx_Oracle.so -s
ld: fatal: library -lclntsh: not found
ld: fatal: File processing errors. No output written to build/lib.solaris-2.9-sun4u-2.5/cx_Oracle.so
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
>/usr/ucb/cc: language optional software package not installed
> error: command 'cc' failed with exit status 1
Try this before compiling (if using gcc)
export CC=gcc
on hp-ux 11:
change "libPath" in cx_Oracle's setup.py to point to lib32 (line 72).
Also make sure SHLIB_PATH points to lib32 rather than lib.
IMPORTANT:
setenv LD_PRELOAD /usr/lib/libcl.2
that was painful.
When you say that Oracle has to already be installed (because the script looks for environment variables for the Oracle home, etc), are you referring to the client?
Thanks.
Anonymous -- when I say that Oracle needs to be installed, I am referring to header files and library files. They are used by the cx_Oracle module to connect to the Oracle server.
Where can I get the header and libraries used by cx_Oracle, if Oracle is not running on my machine ?
Anonymous -- for Oracle headers and libraries, install the Oracle Instant Client from here:
Thank you for the quick guide!
i have the next error
ImportError: Failed to load cx_Oracle.so
i have install python and cx_Oracle on the hp-ux11 x64, i do the Leon say:
"change "libPath" in cx_Oracle's setup.py"
libPath = os.path.join(oracleHome, "lib32")
160 if struct.calcsize("P") == 4:
161 alternatePath = os.path.join(oracleHome, "lib32")
162 else:
163 alternatePath = os.path.join(oracleHome, "lib32")
but, don't work.
Help Me pls. | http://agiletesting.blogspot.com/2005/05/installing-and-using-cxoracle-on-unix.html | CC-MAIN-2016-44 | refinedweb | 864 | 59.4 |
Type: Posts; User: coder752
How can it be decreasing when the question stated show that its increasing, I sort of understand your way of calculating it but I don't think the formula is right.
Can you check to see if this is even a valid question. I got this challenge from someone and I believe there's no way to solve it but have a look. I can't find a perfect bounded above value...I...
I solved it now...thanks.
But please help me with my other Discrete Structures question:
Thanks, I will try to read the link first, attempt the question, and post my findings to see if I'm right...or need additional help.
Thanks so much, also could you take a look at my other thread...
I need help with this question:
Find what a,b,and c is. They must be integer answers.
The equation goes like this:
35a+55b+77c=1
I'm totally confused, I can think of a solution but...
Hey all,
I need help on this problem. Say that x^2 is congruent to 3 (mod 83).
Find the inverse of x (mod 83).
I made a small example case first: (I think all this is wrong now) Please...
I suppose it is.
Does that help any bit?
Basically I believe its asking for a function that is not little o(1) but when you plug in the number 2009 in the new function it will equal to 0 thus...
Ok, I'll give you the whole statement.
This is a two parter question to be treated separate.
Suppose f(x) is a function such that f(x) = O(1).
Can e^{f(x)} = o(1)? Yes or no, provide...
I have another induction question. Here's my work. I think I got it right(except the base case).
Help is greatly appreciated. Thanks in advance.
The question goes as this: Prove using...
Hi,
I have this question that seems trivial to me.
I need to find a function which is not o(1) but where f(2009) = 0 <---fyi this is a zero
using the fact that f(x) is a function such that...
Hi,
I wanted to clarify my answer for this question. Please note if I did anything wrong.
The question uses induction.
Prove that for every positive integer n, 4^n+14 congruent 0(mod 6)
...
So the next step would be this:
P(k): 1+1+2+3+5+8+.........k= (k+2) -1
And the next step would be this:
P(k+1) 1+1+2+3+5+8+.......k+k+1=(k+2)+1-1 which would simply be:
P(k+1)...
I need help with this challenging question for my computer discrete structures/data structures class.
Ok I need to prove that the sum of the first n fibonacci numbers is equal to the (n+2)nd...
I don't know what to use, a JFileChooser, or a JTextArea?
It's suppose to read in a social security (sample) file, if it contains anything other than the 9 digits, it will show the exception. ...
Ok, so I transferred my code to a new directory called components. I placed TextDemo.java in there. I added this line of code in the heading as well
import components.*;
I did a...
Error message reads like this:
Exception in thread "main" java.lang.NoClassDefFoundError: TextDemo (wrong name:
components/TextDemo)
at java.lang.ClassLoader.defineClass1(Native Method)...
I got this code from the Java Sun site.
Why won't it run?
package components;
/* TextDemo.java requires no other files. */
Thanks! I just put that line in its new location, it works now.
I don't think I could do the other method Xeel said earlier since I didn't learn generics yet.
So I put that line above that, and I get 2 errors. Now what?
Here's the new main.
/**
* Sole entry point to the class and application.
* @param args Array of String...
Not uh!
It's not as easy as pie.
In case you didn't try to compile it urselves.
This is what I get.
Note: LinkedListExample.java uses unchecked or unsafe operations.
Note: Recompile with...
How can I compile this code found on this site?
The code is too long to paste here.
It's a linked list example.
No, its not a dumb question...
System as in what the computer is calculating aka (procedures done internally)
Thanks for the interpretation.
[QUOTE
You'll have to explain what you mean.
[/QUOTE]
I mean what is the system doing while the toString is being used and what the system is doing while the program is suppose to have though it...
Why you can't declare a class abstract and final?
Is there some internal rule the system just rejects, or what?
Can somebody provide a sample program demonstrating this?
Thanks,
Coder752 | http://forums.codeguru.com/search.php?s=73030bfa31b934c27be2426bc9985bad&searchid=4874785 | CC-MAIN-2014-35 | refinedweb | 817 | 86.71 |
+++++++++++++++++++++++++++++++++++++++++++++++++++++ python-dev Summary for 2003-03-16 through 2003-03-31 +++++++++++++++++++++++++++++++++++++++++++++++++++++ This is a summary of traffic on the `python-dev mailing list`_ from March 16, 2003 through March 31, 2003. It is intended to inform the wider Python community of on-going developments on the list and to have an archived summary of each thread started on the list. To comment on anything mentioned here, just post to python-list at fourteenth summary written by Brett Cannon (Managed to keep my sanity as long as A.M. Kuchling did before he stopped doing the Summaries =). All summaries are archived at . Please note that this summary is written using reStructuredText_ which can be found at . Any unfamiliar punctuation is probably markup for reST_ (otherwise it is probably regular expression syntax); text file. __ .. _python-dev: .. _python-dev mailing list: .. _comp.lang.python: .. _Docutils: .. _reST: .. _reStructuredText: .. contents:: .. _last summary: ====================== Summary Announcements ====================== PyCon is now over! It was a wonderful experience. Getting to meet people from python-dev in person was great. The sprint was fun and productive (work on the AST branch, caching where something is found in an inheritance tree, and a new CALL_ATTR opcode were all worked on). It was definitely worth attending; I am already looking forward to next year's conference. I am trying a new way of formatting the Quickies_ section. I am trying non-inline implicit links instead of inlined ones. I am hoping this will read better in the text version of the summary. If you have an opinion on whether the new or old version is better let me know. And remember, the last time I asked for an opinion Michael Chermside was the only person to respond and thus ended up making an executive decision. .. _PyCon: ======================== `Re: lists v. tuples`__ ======================== __ Splinter threads: - `Re: Re: lists v. tuples <>`__ This developed from a thread from covered in the `last summary`_ that discussed the different uses of lists and tuples. By the start date for this summary, though, it had turned into a discussion on comparisons. This occured when sorting heterogeneous objects came up. Guido commented that having anything beyond equality and non-equality tests for non-related objects does not make sense. This also led Guido to comment that "TOOWTDI makes me want to get rid of __cmp__" (TOOWTDI is "There is Only One Way to Do It"). Now before people start screaming bloody murder over the possible future loss of __cmp__() (which probably won't happen until Python 3), realize that all comparisons can be done using the six other rich comparisons (__lt__(), __eq__(), etc.). There is some possible code elegance lost if you have to use two rich comparisons instead a single __cmp__() comparison, but it is nothing that will prevent you from doing something that you couldn't do before. There can also be a performance penalty in sorting in some instances. This all led Guido to suggest introducing the function before(). This would be used for arbitrary ordering of objects. Alex Martelli said it would "be very nice if before(x,y) were the same as x<y whenever the latter doesn't raise an exception, if feasible". He also said that it should probably "define a total ordering, i.e. the implied equivalence being equality". 
================================ `Fast access to __builtins__`__ ================================ __ There has been rumblings on the list as of late of disallowing shadowing of built-ins. Specifically, the idea of someone injecting something into a module's namespace that overrides a global (by doing something like ``socket.len = lambda x: 42`` from the socket module) is slightly nasty, rarely done, and prevents the core from optimizing for built-ins. Raymond Hettinger, in an effort to see how to speed up built-in access, came up with the idea of replacing opcode calls of LOAD_GLOBAL and replace them with LOAD_CONST after putting the built-in being called into the constants table. This would leave shadowing of built-ins locally unaffected but prevent shadowing at the module. Raymond suggested turning on this behavior for when running Python -O. The idea of turning this on when running with the -O option was shot down. The main argument is that semantics are changed and thus is not acceptable for the -O flag. It was mentioned that -OO can change semantics, but even that is questionable. So this led to some suggestions of how to turn this kind of feature on. Someone suggested something like a pragma (think Perl) or some other mechanism at the module level. Guido didn't like this idea since he does not want modules to be riddled with code to turn on module-level optimizations. But all of this was partially shot down when Guido stepped in and reiterated he just wanted to prevent outside code from shadowing built-ins for a module. The idea is that if it can be proven that a module does not shadow a built-in it can output an opcode specific for that built-in, e.g. len() could output opcode for calling PyOject_Size() if the compiler can prove that len() is not shadowed in the module at any point. Neil Schemanauer suggested adding a warning for when this kind of shadowing is done. Guido said fine as long as extension modules are exempt. Now no matter how well the warning is coded, it would be *extremely* difficult to catch something like ``import X; d = X.__dict__; d["len"] = lambda x: 42``. How do you deal with this? By Guido saying he has not issue saying something like this "is always prohibited". He said you could still do ``setattr(X, "len", lambda x: 42)``, though, and that might give you a warning. Neil's patch can be found at . ================================ `capability-mediated modules`__ ================================ __ Splinter threads: - `Capabilities <>`__ The thread that will not die (nor does it look like it will in the near future; Guido asked to postpone discussing it until he gets back from `Python UK`_ which will continue the discussion into the next summary. I am ending up an expert at capabilities against my will. =) In case you have not been following all of this, capabilities as being discussed here is the idea that security is based on passing around references to objects. If you have a reference you can use it with no restrictions. Security comes in by controlling who you give references to. So I might ask for a reference to file(), but I won't necessarily get it. I could, instead, be handed a reference to a restrictive version of file() that only opens files in an OSs temporary file directory. So, in capabilities-land, executing ``open_file = file`` only works if you have the reference to 'file', otherwise the assignment fails and you just don't get access. If that is not clear, read the `last summary`_ on this thread. And now, on to the new stuff... 
There were also suggestions to add arguments to import statements to give more fine-grained control over them, but it was pointed out that classes already fit this bill. The idea of limiting what modules are accessible to some code by not using a universally global scope (i.e., not using sys.modules) but by having a specific scope for each function was suggested. As Greg Ewing put it, "it would be dynamic scoping of the import namespace."

While trying to clarify things (helped along at PyCon thanks to the Open Space discussion held there on this subject), Guido made a good distinction between a rexec world (as in the module) and a capabilities world. With capabilities, security is based on passing around references that have only the amount of power you are willing for them to have. In a rexec world, it is based on what powers the built-ins give you; there is no worry about passing around code. Also, in the rexec world, you can have the idea of a "workspace" where __builtin__ has very specific definitions of built-ins that are used when executing untrusted code. It was pointed out that rexec can be viewed as a specific implementation of capabilities: since you restrict what code can reference by making only certain references available to it, you are using capabilities, just at a namespace level rather than on a per-reference level. Ka-Ping Yee wrote up an example of what it would be like to code with capabilities.

=========
Quickies
=========

`tzset`
    time.tzset() is going to be kept in Python, but only on UNIX. The testing suite was also loosened so as to not throw as many false negatives.

`Windows IO`
    stdin and stdout on Windows are TTYs. You can get 3rd-party modules to get more control over the TTY.

`Who approved PyObject_GenericGetIter()???`
    Splinter threads: `Re: [Python-checkins] python/dist/src/Modules _hotshot.c,...`; `PyObject_GenericGetIter()`. Raymond Hettinger wrote a function called PyObject_GenericGetIter() that returned self for objects that were iterators themselves. Thomas Wouters didn't like the name and neither did Guido, since it wasn't generic at all; it worked specifically with objects that are their own iterators. Thus the function was renamed to PyObject_SelfIter().

`test_posix failures?`
    A test for posix.getlogin() was failing for Barry Warsaw under XEmacs (that is what he gets for not using Vim =). Thomas Wouters pointed out it only works when there is a utmp file somewhere. Basically it was agreed the failing test should be removed.

`Shortcut bugfix`
    Raymond Hettinger reported that a change in `_tkinter.c` for a function led to it returning strings or ints, which broke PMW (although whether the function really returns two different types was disputed in the thread; I think it used to return a string and now returns an int). The suggestion of making string.atoi() more lenient about its accepted arguments was made but shot down since it changes semantics. If you want to keep the old way of having everything in Tkinter return strings instead of more proper object types (such as ints where appropriate), you can put the line ``Tkinter.wantobjects = 0`` before the first creation of a tkapp object.

`csv package ready for prime-time?`
    Related: `csv package stitched into CVS hierarchy`. Skip Montanaro: Okay to move the csv package from the sandbox into the stdlib? Guido van Rossum: Yes.

`string.strip doc vs code mismatch`
    Neal Norwitz asked for someone to look at the patch which updates string.strip() from the string module to take an optional second argument. The patch is still open.

`Re: More int/long integration issues`
    The point was made that it would be nice if the statement ``if num in range(...): ...`` could be optimized by the compiler, when range() is known to be the built-in, by substituting something like xrange() and thus skipping the creation of a huge list. This would allow the removal of xrange() without issue. Guido suggested a restartable iterator (a generator would work wonderfully if you could just get everything else to make what range() returns look like the list it should be).

`socket timeouts fail w/ makefile()`
    Skip Montanaro discovered that using the makefile() method on a socket caused the file-like object to not observe the new timeout facility introduced in Python 2.3. He has since patched it so that it works properly and so that sockets always have a makefile() (which wasn't always the case before).

`New Module? Tiger Hashsum`
    Tino Lange implemented a wrapper for the Tiger hash sum for Python and asked how he could get it added to the stdlib. He was told that he would need community backing before his module could be added, in order to make sure that there is enough demand to warrant the addition.

`Icon for Python RSS Feed?`
    Tino Lange asked if an XML RSS feed icon could be added for the feed. It has been added.

`How to suppress instance __dict__?`
    David Abrahams asked if there was an easy way to suppress an instance __dict__'s creation from a metaclass. The answer turned out to be no.

`Weekly Python Bug/Patch Summary`
    Another summary can be found in Skip Montanaro's weekly reminder of how Python ain't perfect.

`[ot] offline`
    Samuele Pedroni is off relaxing and is going to be offline for two weeks starting March 23.

`funny leak`
    Christian Tismer discovered a memory leak in a funky def statement he came up with. The leak has since been squashed (done at PyCon during the sprint, actually).

`Checkins to Attic?`
    CVS uses something called the Attic to store files that exist only in a branch but not in the HEAD of a tree.

`ossaudiodev tweak needs testing`
    Greg Ward asked people who are running Linux or FreeBSD to execute ``Lib/test/regrtest.py -uaudio test_ossaudiodev`` so as to test his latest change to ossaudiodev.

`cvs.python.sourceforge.net fouled up`
    Apparently when you get that nice message from SourceForge telling you that recv() has aborted because of server overloading, you can rest assured that people with checkin rights get to continue to connect, since they get priority.

`Doc strings for typeslots?`
    You can't add custom docstrings to things stored in typeobject slots at the C level.

`Compiler treats None both as a constant and variable`
    As of now the compiler outputs opcode that treats None as both a global and a constant. That will change at some point when assigning to None becomes an error instead of a warning as it is in Python 2.3; possibly the change will be made in 2.4.

`iconv codec`
    M.A. Lemburg stated that he questioned whether the iconv codec was ready for prime time. There have been multiple issues with it, and most seem to stem from a platform's codecs rather than ones that come with Python. This affects all u"".encode() calls when the codec does not come with Python. Hye-Shik Chang said he would get his iconv codec NG patch up on SourceForge in the next few days so that it could be applied.
CGI for C++ applications
Kristaps Dzonsons
Source Code
The hard part about this example is doing anything demonstrable with C++. So we'll do something very silly: alongside our normal HTTP output, we'll print a message into the log file. For the kcgi inclusion, we'll need all the usual header files.
#include <sys/types.h> /* size_t, ssize_t */
#include <stdarg.h> /* va_list */
#include <stddef.h> /* NULL */
#include <stdint.h> /* int64_t */

#include <kcgi.h>
Next, we'll need our C++ bits. This is make-work, but serves to illustrate…
#include <iostream>
Now let's just jump directly into our main function.
It will do nothing more than print
Hello, world! to the browser.
But it will also emit
Said hello! into the error log.
See your web server configuration for where this will appear.
It's usually in the error.log file.
On OpenBSD's default server, it's often in
/var/www/logs/error.log.
int main(void)
{
	enum kcgi_err	 er;
	struct kreq	 r;
	const char *const pages[1] = { "index" };

	/* Set up our main HTTP context. */

	er = khttp_parse(&r, NULL, 0, pages, 1, 0);
	if (er != KCGI_OK)
		return 0;

	khttp_head(&r, kresps[KRESP_STATUS],
		"%s", khttps[KHTTP_200]);
	khttp_head(&r, kresps[KRESP_CONTENT_TYPE],
		"%s", kmimetypes[r.mime]);
	khttp_body(&r);
	khttp_puts(&r, "Hello, world!\n");

	std::cerr << "Said hello!";

	khttp_free(&r);
	return 0;
}
Nothing to it—looks like any of our C examples.
The difference is that we've used some C++ code to emit
Said hello! to the error log.
The next part is how we can compile this code.
For that, we'll need to use a C++ compiler instead of the C compiler we've been using to date.
% c++ `pkg-config --cflags kcgi` -c -o tutorial5.o tutorial5.cc
% c++ -static -o tutorial5 tutorial5.o `pkg-config --libs kcgi`
Or just…
(and noting again the
-static, which we need by being in our file-system jail)
% c++ -static `pkg-config --cflags --libs kcgi` -o tutorial5 tutorial5.cc
Now you can install your compiled CGI script just like any CGI script.
See Getting Started with CGI in C for these steps in detail.

% doas install -m 0555 tutorial5 /var/www/cgi-bin

(On non-OpenBSD systems, you'll probably need to use
sudo instead of
doas.)

All of these steps work for FastCGI, of course.
Enjoy!
Python is a powerful programming language ideal for scripting and rapid application development. It is used in web development (like: Django and Bottle), scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user Interfaces (Pygame, Panda3D).
This tutorial introduces you to the basic concepts and features of Python 3. After reading the tutorial, you will be able to read and write basic Python programs, and explore Python in depth on your own.
This tutorial is intended for people who have knowledge of other programming languages and want to get started with Python quickly.
Python for Beginners
If you are a programming newbie, we suggest you to visit:
- Python Programming - A comprehensive guide on what's Python, how to get started in Python, why you should learn it, and how you can learn it.
- Python Tutorials - Follow sidebar links one by one.
- Python Examples - Simple examples for beginners to follow.
What's covered in this tutorial?
- Run Python on your computer
- Introduction (Variables, Operators, I/O, ...)
- Data Structures (List, Dictionary, Set, ...)
- Control Flow (if, loop, break, ...)
- File (File Handling, Directory, ...)
- Exceptions (Handling, User-defined Exception, ...)
- OOP (Object & Class, Inheritance, Overloading, ...)
- Standard Library (Built-in Function, List Methods, ...)
- Misc (Generators, decorators, ...)
Run Python on Your computer
You do not need to install Python on your computer to follow this tutorial. However, we recommend you to run Python programs included in this tutorial on your own computer.
Python Introduction
Let's write our first Python program, "Hello, World!". It's a simple program that prints Hello World! on the standard output device (screen).
"Hello, World!" Program
print("Hello, World!");
When you run the program, the output will be:
Hello, World!
In this program, we have used the built-in print() function to print the Hello, World! string.
Variables and Literals
A variable is a named location used to store data in the memory. Here's an example:
a = 5
Here, a is a variable. We have assigned 5 to the variable a.
We do not need to define variable type in Python. You can do something like this:
a = 5 print("a =", 5) a = "High five" print("a =", a)
Initially, integer value
5 is assigned to the variable a. Then, the string High five is assigned to the same variable.
By the way,
5 is a numeric literal and
"High five" is a string literal.
When you run the program, the output will be:
a = 5 a = High five
Visit Python Variables, Constants and Literals to learn more.
Operators
Operators are special symbols that carry out operations on operands (variables and values).
Let's talk about arithmetic and assignment operators in this part.
Arithmetic operators are used to perform mathematical operations like addition, subtraction, multiplication etc.
x = 14
y = 4

# Add two operands
print('x + y =', x+y)  # Output: x + y = 18

# Subtract right operand from the left
print('x - y =', x-y)  # Output: x - y = 10

# Multiply two operands
print('x * y =', x*y)  # Output: x * y = 56

# Divide left operand by the right one
print('x / y =', x/y)  # Output: x / y = 3.5

# Floor division (quotient)
print('x // y =', x//y)  # Output: x // y = 3

# Remainder of the division of left operand by the right
print('x % y =', x%y)  # Output: x % y = 2

# Left operand raised to the power of right (x^y)
print('x ** y =', x**y)  # Output: x ** y = 38416
Assignment operators are used to assign values to variables. You have already seen the use of
= operator. Let's try some more assignment operators.
x = 5

# x += 5 ----> x = x + 5
x += 5
print(x)  # Output: 10

# x /= 5 ----> x = x / 5
x /= 5
print(x)  # Output: 2.0
Other commonly used assignment operators:
-=,
*=,
%=,
//= and
**=.
Visit Python Operators to learn about all operators in detail.
Get Input from User
In Python, you can use the input() function to take input from the user. For example:
inputString = input('Enter a sentence:')
print('The inputted string is:', inputString)
When you run the program, the output will be:
Enter a sentence: Hello there. The inputted string is: Hello there.
Python Comments
There are 3 ways of creating comments in Python.
# This is a comment
"""This is a multiline comment."""
'''This is also a multiline comment.'''
To learn more about comments and docstring, visit: Python Comments.
Type Conversion
The process of converting the value of one data type (integer, string, float, etc.) to another is called type conversion. Python has two types of type conversion.
Implicit Type Conversion
Implicit conversion doesn't need any user involvement. For example:
num_int = 123   # integer type
num_flo = 1.23  # float type

num_new = num_int + num_flo

print("Value of num_new:", num_new)
print("datatype of num_new:", type(num_new))
When you run the program, the output will be:
Value of num_new: 124.23
datatype of num_new: <class 'float'>
Here, num_new has float data type because Python always converts smaller data type to larger data type to avoid the loss of data.
Here is an example where Python interpreter cannot implicitly type convert.
num_int = 123    # int type
num_str = "456"  # str type

print(num_int + num_str)
When you run the program, you will get
TypeError: unsupported operand type(s) for +: 'int' and 'str'.
However, Python has a solution for this type of situation, which is known as explicit conversion.
Explicit Conversion
In case of explicit conversion, you convert the datatype of an object to the required data type. We use predefined functions like int(), float(), str() etc. to perform explicit type conversion. For example:
num_int = 123    # int type
num_str = "456"  # str type

# explicitly converted to int type
num_str = int(num_str)

print(num_int + num_str)
To learn more, visit Python type conversion.
Python Numeric Types
Python supports integers, floating point numbers and complex numbers. They are defined as
int,
float and
complex class in Python. In addition to that, booleans:
True and
False are a subtype of integers.
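Since True and False are just the integers 1 and 0 underneath, a quick check (any standard Python interpreter) makes this concrete:

print(isinstance(True, int))  # Output: True
print(True + 1)               # Output: 2

The type() function reports the class of each numeric value: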
print(type(5))    # Output: <class 'int'>
print(type(5.0))  # Output: <class 'float'>

c = 5 + 3j
print(type(c))    # Output: <class 'complex'>
To learn more, visit Python Number Types.
Python Data Structures
Python offers a range of compound datatypes often referred to as sequences. You will learn about those built-in types in this section.
Lists
A list is created by placing all the items (elements) inside a square bracket
[] separated by commas.
It can have any number of items and they may be of different types (integer, float, string etc.)
# empty list
my_list = []

# list of integers
my_list = [1, 2, 3]

# list with mixed data types
my_list = [1, "Hello", 3.4]
You can also use list() function to create lists.
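For example (a small illustration):

my_list = list((1, 2, 3))  # build a list from a tuple
print(my_list)             # Output: [1, 2, 3]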
Here's how you can access elements of a list.
language = ["French", "German", "English", "Polish"] # Accessing first element print(language[0]) # Accessing fourth element print(language[3])
You use the index operator
[] to access an item in a list. Index starts from 0. So, a list having 10 elements will have index from 0 to 9.
Python also allows negative indexing for its sequences. The index of -1 refers to the last item, -2 to the second last item and so on.
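For instance:

language = ["French", "German", "English", "Polish"]
print(language[-1])  # Output: Polish
print(language[-2])  # Output: English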
To learn more, visit Python Lists.
Tuples
Tuple is similar to a list except you cannot change elements of a tuple once it is defined. Whereas in a list, items can be modified.
Basically, a list is mutable whereas a tuple is immutable.
language = ("French", "German", "English", "Polish") print(language)
You can also use tuple() function to create tuples.
You can access elements of a tuple in a similar way like a list.
language = ("French", "German", "English", "Polish") print(language[1]) #Output: German print(language[3]) #Output: Polish print(language[-1]) # Output: Polish
You cannot delete elements of a tuple, however, you can entirely delete a tuple itself using
del operator.
language = ("French", "German", "English", "Polish") del language # NameError: name 'language' is not defined print(language)
To learn more, visit Python Tuples.
String
A string is a sequence of characters. Here are different ways to create a string.
# all of the following are equivalent
my_string = 'Hello'
print(my_string)

my_string = "Hello"
print(my_string)

my_string = '''Hello'''
print(my_string)

# triple quotes string can extend multiple lines
my_string = """Hello, welcome to
           the world of Python"""
print(my_string)
You can access individual characters of a string using indexing (in a similar manner like lists and tuples).
str = 'programiz'
print('str = ', str)

print('str[0] = ', str[0])    # Output: p
print('str[-1] = ', str[-1])  # Output: z

# slicing 2nd to 5th character
print('str[1:5] = ', str[1:5])    # Output: rogr

# slicing 6th to 2nd last character
print('str[5:-2] = ', str[5:-2])  # Output: am
Strings are immutable. You cannot change elements of a string once it is assigned. However, you can assign one string to another. Also, you can delete the string using
del operator.
Concatenation is probably the most common string operation. To concatenate strings, you use
+ operator. Similarly, the
* operator can be used to repeat the string for a given number of times.
str1 = 'Hello '
str2 = 'World!'

# Output: Hello World!
print(str1 + str2)

# Hello Hello Hello
print(str1 * 3)
To learn more, visit Python Strings.
Sets
A set is an unordered collection of items where every element is unique (no duplicates).
Here is how you create sets in Python.
# set of integers
my_set = {1, 2, 3}
print(my_set)

# set of mixed datatypes
my_set = {1.0, "Hello", (1, 2, 3)}
print(my_set)
You can also use set() function to create sets.
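For example, set() also makes the no-duplicates rule easy to see:

my_set = set([1, 2, 2, 3, 3, 3])  # duplicates are discarded
print(my_set)                     # Output: {1, 2, 3}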
Sets are mutable. You can add, remove and delete elements of a set. However, you cannot replace one item of a set with another, as sets are unordered and indexing has no meaning.
Let's try commonly used set methods: add(), update() and remove().
# set of integers
my_set = {1, 2, 3}

my_set.add(4)
print(my_set)  # Output: {1, 2, 3, 4}

my_set.add(2)
print(my_set)  # Output: {1, 2, 3, 4}

my_set.update([3, 4, 5])
print(my_set)  # Output: {1, 2, 3, 4, 5}

my_set.remove(4)
print(my_set)  # Output: {1, 2, 3, 5}
Let's tryout some commonly used set operations:
A = {1, 2, 3}
B = {2, 3, 4, 5}

# Equivalent to A.union(B)
# Also equivalent to B.union(A)
print(A | B)  # Output: {1, 2, 3, 4, 5}

# Equivalent to A.intersection(B)
# Also equivalent to B.intersection(A)
print(A & B)  # Output: {2, 3}

# Set Difference
print(A - B)  # Output: {1}

# Set Symmetric Difference
print(A ^ B)  # Output: {1, 4, 5}
Dictionaries
Dictionary is an unordered collection of items. While other compound data types have only value as an element, a dictionary has a
key: value pair. For example:
# empty dictionary
my_dict = {}

# dictionary with integer keys
my_dict = {1: 'apple', 2: 'ball'}

# dictionary with mixed keys
my_dict = {'name': 'John', 1: [2, 4, 3]}
You can also use dict() function to create dictionaries.
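For instance, dict() can build a dictionary from a sequence of key-value pairs:

my_dict = dict([(1, 'apple'), (2, 'ball')])
print(my_dict)  # Output: {1: 'apple', 2: 'ball'}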
To access value from a dictionary, you use key. For example:
person = {'name': 'Jack', 'age': 26, 'salary': 4534.2}
print(person['age'])  # Output: 26
Here's how you can change, add or delete dictionary elements.
person = {'name': 'Jack', 'age': 26}

# Changing age to 36
person['age'] = 36
print(person)  # Output: {'name': 'Jack', 'age': 36}

# Adding salary key, value pair
person['salary'] = 4342.4
print(person)  # Output: {'name': 'Jack', 'age': 36, 'salary': 4342.4}

# Deleting age
del person['age']
print(person)  # Output: {'name': 'Jack', 'salary': 4342.4}

# Deleting entire dictionary
del person
Python range()
range() returns an immutable sequence of numbers from the given start integer up to, but not including, the stop integer.
print(range(1, 10)) # Output: range(1, 10)
The output is an iterable and you can convert it to list, tuple, set and so on. For example:
numbers = range(1, 6)

print(list(numbers))   # Output: [1, 2, 3, 4, 5]
print(tuple(numbers))  # Output: (1, 2, 3, 4, 5)
print(set(numbers))    # Output: {1, 2, 3, 4, 5}

# Output: {1: 99, 2: 99, 3: 99, 4: 99, 5: 99}
print(dict.fromkeys(numbers, 99))
We have omitted optional
step parameter for
range() in above examples. When omitted,
step defaults to 1. Let's try few examples with
step parameter.
# Equivalent to: numbers = range(1, 6)
numbers1 = range(1, 6, 1)
print(list(numbers1))  # Output: [1, 2, 3, 4, 5]

numbers2 = range(1, 6, 2)
print(list(numbers2))  # Output: [1, 3, 5]

numbers3 = range(5, 0, -1)
print(list(numbers3))  # Output: [5, 4, 3, 2, 1]
Python Control Flow
if...else Statement
The
if...else statement is used when you want to perform different actions (run different code) under different conditions. For example:
num = -1

if num > 0:
    print("Positive number")
elif num == 0:
    print("Zero")
else:
    print("Negative number")

# Output: Negative number
There can be zero or more
elif parts, and the
else part is optional.
Most programming languages use
{} to specify the block of code. Python uses indentation.
A code block starts with indentation and ends with the first unindented line. The amount of indentation is up to you, but it must be consistent throughout that block.
Generally, four spaces are used for indentation, and spaces are preferred over tabs.
Let's try another example:
if False: print("I am inside the body of if.") print("I am also inside the body of if.") print("I am outside the body of if") # Output: I am outside the body of if.
Before you move on to next section, we recommend you to check comparison operator and logical operator.
Also, check out Python if...else in detail.
while Loop
Like most programming languages,
while loop is used to iterate over a block of code as long as the test expression (condition) is
true. Here is an example to find the sum of natural numbers:
n = 100

# initialize sum and counter
sum = 0
i = 1

while i <= n:
    sum = sum + i
    i = i + 1  # update counter

print("The sum is", sum)  # Output: The sum is 5050
In Python, while loop can have optional
else block that is executed if the condition in the
while loop evaluates to
False. However, if the loop is terminated with
break statement, Python interpreter ignores the
else block.
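A short sketch of while with else:

counter = 0

while counter < 3:
    print("Inside loop")
    counter += 1
else:
    print("Loop finished without a break")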
To learn more, visit Python while Loop
for Loop
In Python,
for loop is used to iterate over a sequence (list, tuple, string) or other iterable objects. Iterating over a sequence is called traversal.
Here's an example to find the sum of all numbers stored in a list.
numbers = [6, 5, 3, 8, 4, 2]

sum = 0

# iterate over the list
for val in numbers:
    sum = sum + val

print("The sum is", sum)  # Output: The sum is 28
Notice the use of
in operator in the above example. The
in operator returns
True if value/variable is found in the sequence.
In Python,
for loop can have optional
else block. The else part is executed if the items in the sequence used in
for loop exhausts. However, if the loop is terminated with
break statement, Python interpreter ignores the
else block.
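For example, the else clause below runs because the loop never hits break:

for val in [1, 2, 3]:
    if val == 10:
        break
else:
    print("No break encountered")  # Output: No break encountered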
To learn more, visit Python for Loop
break Statement
The break statement terminates the loop containing it. Control of the program flows to the statement immediately after the body of the loop. For example:
for val in "string": if val == "r": break print(val) print("The end")
When you run the program, the output will be:
s
t
The end
continue Statement
The continue statement is used to skip the rest of the code inside a loop for the current iteration only. Loop does not terminate but continues on with the next iteration. For example:
for val in "string": if val == "r": continue print(val) print("The end")
When you run the program, the output will be:
s
t
i
n
g
The end
To learn more on
break and
continue with detail explanation, visit Python break and continue.
pass Statement
Suppose, you have a loop or a function that is not implemented yet, but want to implement it in the future. They cannot have an empty body. The interpreter would complain. So, you use the
pass statement to construct a body that does nothing.
sequence = {'p', 'a', 's', 's'}
for val in sequence:
    pass
Python Function
A function is a group of related statements that perform a specific task. You use
def keyword to create functions in Python.
def print_lines():
    print("I am line1.")
    print("I am line2.")
You have to call the function to run the codes inside it. Here's how:
def print_lines():
    print("I am line1.")
    print("I am line2.")

# function call
print_lines()
A function can accept arguments.
def add_numbers(a, b):
    sum = a + b
    print(sum)

add_numbers(4, 5)  # Output: 9
You can also return value from a function using
return statement.
def add_numbers(a, b):
    sum = a + b
    return sum

result = add_numbers(4, 5)
print(result)  # Output: 9
To learn more, visit Python Functions.
Recursion (Recursive function)
A function that calls itself is known as recursive function and this process is called recursion.
Every recursive function must have a base condition that stops the recursion or else the function calls itself infinitely.
# Recursive function to find the factorial of a number
def calc_factorial(x):
    if x == 1:
        return 1
    else:
        return (x * calc_factorial(x-1))

num = 6
print("The factorial of", num, "is", calc_factorial(num))
# Output: The factorial of 6 is 720
Visit Python recursion to learn more.
Lambda Function
In Python, you can define functions without a name. These functions are called lambda or anonymous function. To create a lambda function,
lambda keyword is used.
square = lambda x: x ** 2
print(square(5))  # Output: 25
We use lambda functions when we require a nameless function for a short period of time. Lambda functions are used along with built-in functions like filter(), map() etc., as the short sketch below shows.
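For instance:

numbers = [1, 2, 3, 4]

even = list(filter(lambda x: x % 2 == 0, numbers))
print(even)     # Output: [2, 4]

squares = list(map(lambda x: x ** 2, numbers))
print(squares)  # Output: [1, 4, 9, 16]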
To learn more, visit Python Lambda Functions.
Modules
Modules refer to a file containing Python statements and definitions.
A file containing Python code, for e.g.:
example.py, is called a module and its module name would be
example.
Let us create it and save it as
example.py.
# Python Module example
def add(a, b):
    return a + b
To use this module, we use
import keyword.
# importing example module
import example

# accessing the function inside the module using . operator
example.add(4, 5.5)
Python has a ton of standard modules readily available for use. For example:
import math

result = math.log2(5)  # return the base-2 logarithm
print(result)          # Output: 2.321928094887362
You can import specific names from a module without importing the module as a whole. Here is an example.
from math import pi

print("The value of pi is", pi)  # Output: The value of pi is 3.141592653589793
Python File I/O
A file operation takes place in the following order.
- Open a file
- Read or write (perform operation)
- Close the file
How to open a file?
You can use open() function to open a file.
f = open("test.txt") # open file in current directory f = open("C:/Python33/README.txt") # specifying full path
We can specify the mode while opening a file.
f = open("test.txt") # equivalent to 'r' or 'rt' f = open("test.txt",'w') # write in text mode f = open("img.bmp",'r+b') # read and write in binary mode
How to close a file?
To close a file, you use
close() method.
f = open("test.txt",encoding = 'utf-8') # perform file operations f.close()
How to write to a file?
In order to write into a file in Python, we need to open it in write
'w', append
'a' or exclusive creation
'x' mode.
with open("test.txt",'w',encoding = 'utf-8') as f: f.write("my first file\n") f.write("This file\n\n") f.write("contains three lines\n")
Here, we have used
with statement to open a file. This ensures that the file is closed when the block inside with is exited.
How to read files?
To read a file in Python, you must open the file in reading mode.
There are various methods available for this purpose. We can use the
read(size) method to read in size number of data.
f = open("test.txt",'r',encoding = 'utf-8') f.read(4) # read the first 4 data
Visit Python File I/O to learn more.
Python Directory
A directory or folder is a collection of files and sub directories. Python has the os module, which provides many useful methods to work with directories and files.
import os

os.getcwd()                # present working directory
os.chdir('D:\\Hello')      # changing current directory to D:\Hello
os.listdir()               # list all sub directories and files in that path
os.mkdir('test')           # making a new directory test
os.rename('test','tasty')  # renaming the directory test to tasty
os.remove('old.txt')       # deleting old.txt file
Visit Python Directory to learn more.
Python Exception Handling
Errors that occur at runtime are called exceptions. They occur, for example, when a file we try to open does not exist
FileNotFoundError, dividing a number by zero
ZeroDivisionError etc.
Visit this page to learn about all built-in exceptions in Python.
If exceptions are not handled, an error message is spit out and our program comes to a sudden, unexpected halt.
In Python, exceptions can be handled using
try statement. When exceptions are caught, it's up to you what operator to)
When you run the program, the output will be:
The entry is a
Oops! <class 'ValueError'> occurred.
Next entry.
The entry is 0
Oops! <class 'ZeroDivisionError'> occurred.
Next entry.
The entry is 2
The reciprocal of 2 is 0.5
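You can also catch a specific exception class and add a finally clause that always runs; a quick sketch:

try:
    result = 10 / 0
except ZeroDivisionError:
    print("Cannot divide by zero")
finally:
    print("This runs no matter what")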
To learn about catching specific exceptions and
finally clause with
try statement, visit Python exception handling.
Also, you can create user-defined exceptions in Python. For that, visit Python Custom Exceptions
Python OOP
Everything in Python is an object including integers, floats, functions, classes, and
None. Let's not focus on why everything in Python is an object. For that, visit this page. Rather, this section focuses on creating your own classes and objects.
Class and Objects
Object is simply a collection of data (variables) and methods (functions) that act on data. And, class is a blueprint for the object.
How to define a class?
class MyClass:
    a = 10
    def func(self):
        print('Hello')
As soon as you define a class, a new class object is created with the same name. This class object allows us to access the different attributes as well as to instantiate new objects of that class.
class MyClass: "This is my class" a = 10 def func(self): print('Hello') # Output: 10 print(MyClass.a) # Output:
print(MyClass.func) # Output: 'This is my class' print(MyClass.__doc__)
You may have noticed the
self parameter in function definition inside the class but, we called the method simply as
ob.func() without any arguments. It still worked.
This is because, whenever an object calls its method, the object itself is passed as the first argument. So,
ob.func() translates into
MyClass.func(ob).
Creating Objects
You can also create objects of the class yourself.
class MyClass: "This is my class" a = 10 def func(self): print('Hello') obj1 = MyClass() print(obj1.a) # Output: 10 obj2 = MyClass() print(obj1.a + 5) # Output: 15
Python Constructors
In Python, a method named __init__() is a constructor. This method is automatically called when an object is instantiated.
class ComplexNumber:
    def __init__(self, r=0, i=0):  # constructor
        self.real = r
        self.imag = i

    def getData(self):
        print("{0}+{1}j".format(self.real, self.imag))

c1 = ComplexNumber(2, 3)  # Create a new ComplexNumber object
c1.getData()              # Output: 2+3j

c2 = ComplexNumber()      # Create a new ComplexNumber object
c2.getData()              # Output: 0+0j
Visit Python Class and Object to learn more.
Python Inheritance
Inheritance refers to defining a new class with little or no modification to an existing class. Let's take an example:
class Mammal:
    def displayMammalFeatures(self):
        print('Mammal is a warm-blooded animal.')
Let's derive a new class Dog from this
Mammal class.
class Mammal:
    def displayMammalFeatures(self):
        print('Mammal is a warm-blooded animal.')

class Dog(Mammal):
    def displayDogFeatures(self):
        print('Dog has 4 legs.')

d = Dog()
d.displayDogFeatures()
d.displayMammalFeatures()
Notice that we are able to call the base-class method displayMammalFeatures() from the object of the derived class, d.
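A derived class can also override a base-class method; a minimal sketch:

class Mammal:
    def speak(self):
        print('Some generic mammal sound')

class Dog(Mammal):
    def speak(self):  # overrides the base-class method
        print('Woof!')

d = Dog()
d.speak()  # Output: Woof!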
To learn more about inheritance and method overriding, visit Python Inheritance.
We also suggest you to check multiple inheritance and operator overloading if you are interested.
Miscellaneous and Advance Topics
Iterators

An iterator is an object that can be iterated upon, returning its elements one at a time via next().
my_list = [4, 7, 0, 3]

# get an iterator using iter()
my_iter = iter(my_list)

print(next(my_iter))  # Output: 4
print(next(my_iter))  # Output: 7
To learn more about infinite iterators and how to create custom iterators, visit: Python Iterators.
Generators
There is a lot of overhead in building an iterator in Python: you have to implement a class with __iter__() and __next__() methods. Generators are a much simpler way to create iterators; a generator is a function that uses the yield keyword to return one value at a time.
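A minimal sketch of a generator (the function name count_up_to is just illustrative):

def count_up_to(n):
    i = 1
    while i <= n:
        yield i  # pause here and hand the value back to the caller
        i += 1

for num in count_up_to(3):
    print(num)  # Output: 1, 2, 3 (one per line)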
Learn more about Python Generators.
Closures
This technique by which some data gets attached to the code is called closure in Python.
def print_msg(msg):  # outer enclosing function
    def printer():   # inner function
        print(msg)
    return printer   # this got changed

another = print_msg("Hello")
another()  # Output: Hello
Here, the
print_msg() function is called with the string
"Hello" as an argument and the returned function was bound to the name another. On calling
another(), the message was still remembered although we had already finished executing the
print_msg() function.
Visit Python closures to learn more about closures and when to use them.
Decorators
Python has an interesting feature called decorators to add functionality to an existing code.
This is also called metaprogramming as a part of the program tries to modify another part of the program at compile time.
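A tiny sketch of the idea, using a hypothetical decorator named shout that upper-cases a function's result:

def shout(func):
    def wrapper():
        return func().upper()  # modify the wrapped function's result
    return wrapper

@shout
def greet():
    return "hello"

print(greet())  # Output: HELLO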
To learn about decorators in detail, visit Python Decorators.
Did I miss anything in this Python tutorial? | https://cdn.programiz.com/python-programming/tutorial | CC-MAIN-2020-24 | refinedweb | 4,243 | 66.13 |
Add a new entry with the specified key and value to the hash table.
Syntax
#include <plhash.h>

PLHashEntry *PL_HashTableAdd(
    PLHashTable *ht,
    const void *key,
    void *value);
Parameters
The function has the following parameters:
ht
- A pointer to the hash table to which to add the entry.
key
- A pointer to the key for the entry to be added.
value
- A pointer to the value for the entry to be added.
Returns
A pointer to the new entry.
Description
Add a new entry with the specified key and value to the hash table.
If an entry with the same key already exists in the table, the
freeEntry function is invoked with the
HT_FREE_VALUE flag. You can write your
freeEntry function to free the value of the specified entry if the old value should be freed. The default
freeEntry function does not free the value of the entry.
PL_HashTableAdd returns
NULL if there is not enough memory to create a new entry. It doubles the number of buckets if the table is overloaded. | https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PL_HashTableAdd | CC-MAIN-2018-34 | refinedweb | 173 | 73.88 |
When several classes are derived from common base class it is called hierarchical inheritance.
In C++ hierarchical inheritance, the feature of the base class is inherited onto more than one sub-class.
For example, a car is a common class from which Audi, Ferrari, Maruti etc can be derived.
The following block diagram highlights the concept.
C++ Hierarchical Inheritance Block Diagram
As shown in the above block diagram, in C++ hierarchical inheritance all the derived classes have a common base class. The base class includes all the features that are common to the derived classes.

C++ Hierarchical Inheritance Syntax
class A  // base class
{
    ..............
};

class B : access_specifier A  // derived class from A
{
    ...........
};

class C : access_specifier A  // derived class from A
{
    ...........
};

class D : access_specifier A  // derived class from A
{
    ...........
};
C++ Hierarchical Inheritance Example
// hierarchical inheritance.cpp
#include <iostream>
using namespace std;

class A  // single base class
{
public:
    int x, y;
    void getdata()
    {
        cout << "\nEnter value of x and y:\n";
        cin >> x >> y;
    }
};

class B : public A  // B is derived from class A
{
public:
    void product()
    {
        cout << "\nProduct= " << x * y;
    }
};

class C : public A  // C is also derived from class A
{
public:
    void sum()
    {
        cout << "\nSum= " << x + y;
    }
};

int main()
{
    B obj1;  // object of derived class B
    C obj2;  // object of derived class C

    obj1.getdata();
    obj1.product();

    obj2.getdata();
    obj2.sum();

    return 0;
}  // end of program
Output
Enter value of x and y:
2 3

Product= 6
Enter value of x and y:
2 3

Sum= 5
Explanation
In this example, there is only one base class A, from which the two classes B and C are derived.
Both derived classes have their own members as well as the base class members.
The product is calculated in the derived class B, whereas the sum is calculated in the derived class C, but both use the values of x and y from the base class.
Michael Jackson

When I read the headline today it brought tears to my eyes. Michael Jackson is dead. Yep, people do die. It shouldn't be terribly surprising - and yet, there is something about this news that struck a chord.

Thank you Michael Jackson for the music, the dancing, the joy and the inspiration. I hope you are now in a better place.

It's about time

Just read this article and am glad someone is speaking up. What I wonder is: Why have people been talking about this same issue ad nauseam at least since I was 12 years old? Why didn't other fashion editors stand up sooner? Maybe when all models had to be a size 4 or 6?

Here's hoping the trend begins to reverse.

What Now?

The hockey season ended in the second round of the playoffs with a terrible game 7 in which the Capitals just choked.

Neil and I have been kind of bummed since then. We spent so much time watching hockey the past 8 months that we suddenly have a ton of free time on our hands. I'm sure that will be a good thing once we finish mourning the hockey season. It was funny at the game when it was clear that our team would not be winning, many of the male fans began to get angry. They were yelling at the team, imploring them to play better, to be the team we knew they were. Whereas, I just got sad. I thought about how sad and frustrated the players must have been and how devastating the locker room was bound to be after the game. Perhaps that's the difference between male and female fans, or maybe it's just me.

As the game was ending, the fans stood and gave the Capitals a standing ovation, not for the game they were about to lose, but for the amazing season that was about to end. And when the final buzzer sounded and the teams had done their handshake, the Capitals raised their sticks to the fans. It was not an electrifying moment like when they won game 7 against the Rangers, but it was incredibly moving. At that moment the saying "Sport is cruel" popped into my head and held new meaning. Oh the agony of sports fandom.

In other less hockey-obsessed news, I started my government job two weeks ago. The first week was spent sitting at the government contract office that's my actual employer surfing the Internet. I was waiting first, for my security clearance, then for paperwork to get my badge. Week two was spent waiting for the same badge paperwork and then waiting for a computer log in.
I finally began working on Friday after I got to spend most of Thursday running my personal errands because I still didn't have a log in.

In spite of the less than thrilling education in bureaucracy, I do have some things I really like about my new routine and daily rhythm.

1. I love working in the Ronald Reagan Building. It's absolutely beautiful and filled with all kinds of people every day.

2. I love walking to work. It's an entirely different experience being downtown during the work day and my office is only about 6 blocks from my house. I have more hours in my day and a new feeling of freedom.

3. I like having a government badge - it makes me feel like a real Washingtonian.

4. I am enjoying meeting new people.

5. I like being a part of something so much larger than me. Now that I am optimistic about the future of our nation and our government, it's nice to be getting a glimpse inside.

6. I like leaving the office at 5 and being home by 5:15.

I hope to have more to add to this list soon.

Endings and Beginnings

This is the beginning of my last week working for the nonprofit where I've been employed for the past three and a half years. It has been time for me to take my next step for a little while now and I'm excited to be moving on, but there is, of course, always a sadness that comes with this kind of transition. I believe I started this blog during my final two weeks at another job. Funny that it seems like a lifetime ago that I was nervously resigning from my newspaper job. I'm feeling much more confident this time around.

.....

Walking home from the metro today, DC really felt like home. It's my favorite time of year here, one of the first three or four days when it's been warm enough to venture outside in a skirt and short sleeves. The sun was shining, the sky was bright blue, the air is still dry and spring-like, lacking summer's intense humidity, and all of the trees and bushes are in bloom. Something about days like this makes me feel limitless optimism and freedom. My new job is downtown about 10 blocks from my apartment and I'm looking forward to spending more time in this city that I've somehow come to love.

.....

And finally, Game 7 of the Stanley Cup Quarterfinals is tomorrow night. My entire being is buzzing with excitement and nerves. I can't totally explain how I became this obsessed with hockey. I am not sure I can even partly explain it. As I sit here I am wearing red and white polka dotted pj pants with a Caps logo on them and a Caps t-shirt. I am currently calculating in my head how early I can leave work tomorrow in order to get home in time to get ready and get to the game early. Tonight I'll probably dream about the game. I'm nuts - but it's so much fun. Go Caps!

Ouch

It hurts to be a sports fan.

The Capitals lost their second game in the first round of the playoffs to the New York Rangers today and it made me and Neil really sad. A year and a half ago, the fact that I just wrote the previous sentence would have been unfathomable. But here I am, a major Capitals fan.
I suppose my fandom is a fantastic example of the fact that I am still growing and changing as a person. Being new to this fandom thing has left me ill-equipped to deal with sports-related sadness. I'll get over it and I'll be cheering the team on while watching on TV this Monday, but for now, I'm bummed. We got home from the game this afternoon and literally did not know what to do with ourselves. Neil took a nap and I cooked a recipe that I learned from the lovely Aarti. (It was delicious). So, I suppose, we were both constructive with our sports sadness.

What is funny to me is that we finally allowed ourselves to be sad. We've had a string of less-than-great things happen to us in the past few months and we managed to remain mostly positive. For me, it's a way of coping. The more optimistic I can be, the better. I do believe that positive thinking brings positive results. But I also realize that sometimes we just need to be sad. We need to grieve for a moment, to feel sorry for ourselves. I haven't really allowed myself that luxury lately. I realize that this is a strange complaint - not being sad enough.

Throughout this tough winter and early spring, the Capitals have buoyed my spirits when I needed it most. Being an avid sports fan is a beautiful distraction. When my team wins, no matter what else is going on in my life at the moment, I have an instant excuse to be happy - which is really a gift. Today, being a sports fan helped me get permission to be sad - another less joyful gift, but a gift nonetheless.

(note: I will be less grateful for this gift if the Caps lose the series.)

Joy

We learned some great news tonight - Neil's sister Jaimie is engaged! Her fiance, Jacob, is fantastic. They make each other very happy and are a pleasure to be around. I am really looking forward to many fun times with them in the future and I can't wait to celebrate their wedding.

Change

The weather in DC is finally changing into the beautiful spring weather that makes me love this city. Other things are changing too and I have a good feeling about the coming months for me and Neil. There is a heightened sense of possibility that comes with sunny weather and blooming flowers. Good things are on the horizon.

Alexander Ovechkin Is Like a Drug
Or why hockey makes me happy

Yesterday was a tough day. Neil's company announced furloughs, I had some frustrations at the office, it was cloudy and cold all day and the news about the economy continued to be grim. Then we went to a Capitals game.

Being a sports fan is new for me. I am not accustomed to giving myself over so completely to fandom. In the past, even when I was rooting for a team, I rarely cheered out loud.
I was even hesitant to rise for standing ovations at the theater. It could be my journalism training and my habit of observing rather than participating. In any case, my Capitals fandom has cured me of my fear to cheer and last night gave us great reason to go hoarse with screaming.

Ovechkin came storming out onto the ice, beat two defenders, gave himself a pass off the boards, spun around and received the pass, then was tripped and he still managed to score a goal while lying down and sliding toward the net. Not only was it completely amazing, a feat of astounding athleticism, but it was inspiring. Ovie's determination and then, after succeeding, his exuberance lightened my heart and reminded me what we can all accomplish when we put our minds to it.

So thanks Ovie, for the inspiration and the excuse to scream at the top of my lungs. (It's incredibly therapeutic.)

The most depressing article I've read of late

Wow. I just checked in on cnn.com to find this story. Note that it resides in the travel section of the site. I sincerely hope that we collectively begin to solve the problem of global warming instead of traveling around to sites that are going to disappear before our children can see them.

A Return

It seems as though I might be back to blogging. Now that I have effectively scared off all my readers and remained silent for nearly a year, I'm returning. Stay tuned.

Blog Gone

I have decided that it's time to officially close up shop on j.g.s. Who knows, I may come back here someday, but I am no longer feeling the daily pull to write blog posts. It was fun sharing my life and observations and I am sure I will find another venue to do so in the future.

Thanks for reading.

Jodi

Old

It's official. I am old. 29. That's hardly in my 20's anymore. I am almost 30. I don't know why that is such a big deal, but it is. There's something infinitely more interesting about being in your 20s. So, I had better make the most of this, my final interesting year.

I had a nice birthday complete with a fun joint birthday party which included packing about 40 people into our little 800 square-foot apartment. Always a good time.

The question was recently raised: Do books make you smarter? (i.e., can you buy smart?) I am going to go with yes. I'm taking a class after work on Mondays in leadership and I have to read two books for the class: Good to Great and The Tipping Point. I have already read The Tipping Point, but that's beside the point. I am finding that reading books for an assignment and sitting in class and thinking about things in a way that is different from how I think at work is making me sharper.
And to further that, every book I read that makes me think about something different and causes me to learn something new makes me smarter. Sure we have innate intelligence, but it has to be used if we don't want to lose it. You have to sharpen your mind by exercising it in different ways and you can probably expand your innate capacity if you work at it hard enough.

I've sat in some meetings with older people lately and I have noticed that they aren't as quick as some of my younger colleagues. I can even notice a difference in my mental speed when compared with people younger than me. I am slowing down ever so slightly (note above wherein I am old). I think the rest of my life will be a battle to keep my brain quick. Let the fight begin!

When it rains...

Last week it rained for three or four days without stopping. For many people this would be completely unremarkable, but we're having a drought in DC, so I should have been overjoyed at the rain. But, really, it was depressing. Combined with the early setting of the sun and the sudden chill in the air and the fact that my husband was in San Diego all week covering fires, the rain made me feel desperately sad. By day three of rain, I realized that I was happier at night because at least then it's supposed to be dark.

I suppose it didn't help that there were some tragedies taking place around me, impacting people I know. Nancy's husband's mom died this week and a co-worker had to go home to be with her sick father. It was a week of bummers.

I wish I could make it so silly things like rain couldn't impact my mood - so I could only be sad about the important stuff, not clouds in the sky.

In other news, I had a great time with Andy this week! He was the one thing that made the rain bearable. (thanks Andy) It's always refreshing to spend time with good friends.

More inspired posts coming soon.

Again

I went to Portland this weekend for the second weekend in a row. The flight was incredibly long. In fact, last night's return flight was so long that my 28-year-old knees ached and threatened not to budge when I finally got to straighten them after landing after one in the morning. Should I be taking calcium supplements?

I made the trip to attend my father-in-law, Mort's 60th birthday party. On the airplane to Portland on Friday night, I realized that it was approximately six years ago when Neil and I made another much more tragic trip to Portland after my father-in-law tried to end his life.

Birthdays become significantly more important when there was a doubt that they'd ever be reached. We almost lost Mort once and so we now try to find any and every opportunity to celebrate his life. Amazingly, I didn't realize this for a while. At one point recently I even wondered why I was putting myself through all of the travel just to attend one party. Isn't my presence the weekend before the birthday enough? I asked myself.

At the party, Mort gave a painfully long speech. He had seven handwritten legal pad pages in his hands and he read nearly every word. Overall, it was comical and completely within his personality to give such an oration so we were able to laugh it off... but what he did say at the beginning of the ten minutes he had the mike, clarified some things for me.
He said that when he was little he once drove his bike across the street and as soon as he had cleared the roadway, two cars smashed into each other right where he had been. He said he knew that day that G-d was looking out for him. And then, he said, "A couple years ago, I got away with my life again."

That might have been the first time I'd heard him talk about his failed attempt and express his happiness that he's alive. At that moment, I realized why it really was that we were all there, why I flew across the country for only one night of celebration. We were telling him that we're glad he's alive. Everyone's birthday is a celebration of them and the fact that they're alive. But Mort almost wasn't. He almost chose not to be and anyone that loves him is a little bit worried he could make that choice again. We know that life is fragile, that something can happen in any moment that will alter life forever. With Mort, at least in the eyes of his family and friends, life is just a little bit more fragile.

When we were heading to the airport yesterday afternoon, Mort pulled Neil aside and told him that he has a very special woman (me). I think, given our long history, that that's one of the nicest compliments I've ever received.

I could write here about my exhaustion and about the craziness that took place when we tried to decorate for the party, but none of that is really important. I went to Portland last weekend, for the second weekend in a row, to let my father-in-law know that I am very glad he is alive... and it was worth it.

Border Confessions
(an essay I wrote a few years ago and have been revising)

I moved to the U.S.-Mexico border in the summer of 2000. We drove from Chicago, where I had just graduated from college, to El Paso, and before even seeing the big bridges and crossing gates that separated our new city from Juarez, Mexico, we moved into an apartment complex with large steel gates that opened only with the swipe of a card.

Borders in the borderland.

It was not clear whom the gates were meant to keep out other than everyone who did not pay rent to live in the complex. But it struck me as strange that a complex, which was not on the high-end of apartments, deemed it necessary to keep all non-residents out. And it seemed even stranger that the gates didn't really work. I wondered if there would be gates on these same apartments if they were in Chicago or Albuquerque or Detroit and decided that it wasn't likely. After living on the border for a while it became clear to me that first, the gates were for show and second, they were really meant to keep Mexicans out.

Late at night and early on weekday mornings, trucks and old vans with Mexico plates are found by the dumpsters in our parking lot, their drivers picking through the trash. They must wait until some resident or another comes home or goes to work and the gate opens, providing an opportunity to hunt for whatever treasures these apartment dwellers may have cast off – old mattresses, holey t-shirts, cardboard boxes.
But before they can come and hunt through my trash, they have to pass through the border checkpoints to enter my country, a much more rigorous ordeal than sneaking into the apartment complex.

Their dumpster diving makes it impossible for me to ignore the reality of living along the border with Mexico. The men in their beat-up pick-ups bring the border to my doorstep. They serve as constant reminders of the poverty that lurks beyond the fences and the lines in the dirt that mark the separation of two worlds.

When people come to visit my husband, Neil and me, we have a spot we like to take them called Monument One. It is technically a park, but in all practicality, it's a plot of desert with rocks and dirt and a few trees. What's significant about this place is its location on the international border. The U.S. touches Mexico in this strange little corner where the Rio Grande River stops marking the boundary and monuments take over, drawing a dotted-line from El Paso to San Diego. Other than the white concrete monument that looks much like a smaller version of the Washington Monument, nothing is remarkable about the park. There is dust and some small plants and trees - typical desert landscape. On one side of the park, the Rio Grande trickles along, its brown waters passing unceremoniously between nations. On the other side of the park a desert hill offers cover for Mexican bandits who may decide to sneak up on unsuspecting tourists or local hikers.

Standing at Monument One, you can straddle the border, the only spot in the region where you can do that, because there is no fence. On weekends and national holidays, people from Anapra, Mexico, an impoverished suburb of El Paso's sister city, Juarez, come to this park and have barbeques and picnics. They drive their cars into the river and wash them, blast music from car stereos and sit under the trees that offer shade on the Mexican side of the park. I have never seen these people even approach the white obelisk marking the border. Haven't seen them set foot in my country. They must know the Border Patrol is watching, that the officers even interrogate U.S. citizens driving away from the park – it's happened to us more than once. But, on most days, a visitor to the park can't see any law enforcement. Instead of manning the park with people, the U.S. government installed cameras perched on poles 60 feet up, watching at all times, even when human eyes might get tired, shut momentarily.

When Neil and I go to Monument One, we look at the monument, which is painted white with a black '1' on it, but depending on the week may also be covered with various graffiti. Then we step one foot over the international boundary, which is clearly marked with a brass line in the cement. What is remarkable about this spot is how truly unremarkable it is. There are no big fences, often there aren't any visible law enforcement agents and many days we have been the only people there on either side of the border. It is just us, the river, the dust and a brass line in cement.
I like to walk back and forth over the line and then keep one foot firmly planted on each side. Whenever I do this, I get a small rush. I'm in another country. Similar, I think, to the feeling you get on road trips when you enter another state. First Arizona says goodbye and then five seconds later, at 80 miles per hour California welcomes you. "Bye Arizona. Hello California," my brother and I used to say when our parents would point out these signs. I used to try to notice the moment when our car was in both states at once, just to know what it felt like to be in two states simultaneously – not much different. And it is virtually the same with straddling two nations, because even when I have one foot on either side of the line, I am still just standing in the desert.

After spanning the border, we usually cross over completely and walk down to the river, which is always filthy and surrounded by picnic trash. Chicken bones, napkins, old clothing left behind when its owners shed their t-shirts and pants to jump into the water. Sometimes I feel like we're on an archeological dig and we have to come up with theories about Mexican culture based on the artifacts they have left behind. "Ah hah, chicken bones and clothes. This means they can eat well and can easily afford new clothes since they left them here." Of course, the truth is that the Anapra residents are often scared away from the park in the middle of picnicking either by the border patrol or, more likely, by Mexican bandits. And, while the U.S. government can afford to clean up trash left on the American side of the border, Mexican infrastructure is not developed enough - nor does it have the resources - to collect trash at parks in its poorest colonias.

I don't really know why we walk all the way over to the river every time we visit, but we do. One time, we saw a man washing a red, white and green bus that was probably used to transport factory workers home at the end of their shifts in the middle of the night. Women had been disappearing, so the companies splurged on buses and drivers to protect their workers. But sometimes the buses still dropped them off too far from their homes for safety. The man with the bus was wading in the water and scrubbing the painted metal with an old rag. We took a picture.

We actually have several pages of pictures from Monument One in our photo album. Most of them show us with our out-of-town visitors with one foot in each country or all of us crouching in front of the monument so we don't cover the part that says it is the international boundary. I think we have enough of these pictures, but I imagine if we get any more visitors, we'll have to keep repeating this process. If we go on a day when the people from Mexico are picnicking and playing, maybe our visitors will feel the way we often do, like aliens, when we walk down to the river and the stares of the people drill holes in our backs. Or maybe it's our own discomfort that we project onto them that drills the holes. Whatever the case, the disparity makes it impossible to feel comfortable there.
Maybe our out-of-town friends will leave feeling like the border is evading them because it is difficult to conceptualize the entirety of two nations when you are standing in a field looking at a white concrete monument. Maybe they will go home feeling that it's not really as simple as the line on the map, even if it is only a line in the dirt.

Not far from my apartment complex, wealthy El Pasoans have their homes on the city's hills and ledges and many of them pretend they can't see the Third World when they look over the back fence. They build walls to block the view and shop only in the chain stores on the outskirts of town - avoiding downtown, where the blending of two cultures stares them in the face in the form of Spanish-language signs and open-air markets that are often packed with Mexican citizens shopping for the day and beggars hoping to scrounge a little bit of money to take home to their families.

Many of these El Pasoans who ignore the border are first-generation U.S. citizens. Their parents gave birth to them in the public hospital in El Paso after rushing across the border in order to give them better lives. Yet these citizens of the United States have little to no compassion for other parents in Mexico trying to do the same thing for their children.

"There are too many Mexicans in this store," a co-worker at the bookstore where I work said to me once. She must have noticed my jaw drop slightly as I looked at her brown skin. "They're just really messy," she said. And the prejudice extends beyond the bookstore to the local border patrol agents who are sons and daughters of immigrants and have been charged with keeping all the other would-be immigrants out.

The border ignorers control the local newspaper, where there is rarely a story about border issues despite the metro section's title of "Borderland." The newspaper's editor has an edict against border coverage, because, according to market research, the readers all live on the West Side and don't want to know about the border. So as Juarez suffers floods or fires, the El Paso Times runs articles about the first day back to school for El Paso students and the prize-winning gardenia grown by an east-side El Paso woman. When the newspaper arrives in the morning, it is even easier to ignore the border because it's not written about. But I think that no matter how hard you may try not to look, not to notice what this city butts up against, we all carry the border around with us. It is the burden of privilege that can be made real in no better way than to live in this borderland. It is the always being on the edge of something, the constant feeling of otherness. I carry the border with me even without completely understanding it.
After two years of glancing at Mexico every day on my way to work and experiencing border culture, the border still does not make sense to me.

Before I moved to El Paso, when I sat in my college apartment near Chicago thinking about the move, it was easy for me to understand international borders. I pictured the map of the United States that I grew up seeing, the one they passed out in elementary school for us to color, the one that pulled down from those scrolls attached to the top of the chalkboards in all the classrooms of my childhood. The international border is that line at the bottom middle left of the map where, instead of the bright colors designating the states, the land below the line is colored light brown to show that it's not part of my country. Below that line they have a different government, language and currency. Simple. But in El Paso, it is hard to let it be that simple. I am still puzzling over how a line in the dirt can make such a difference. Yet anyone driving east along I-10 can see the difference, the stark contrast. From the comfort of the plush driver's seat in your car, you see hundreds of ramshackle houses in the hills, across the river. While you listen to Norah Jones on your CD player - cruising along the freshly paved and painted interstate - there they are, just beyond the electronic signs warning of upcoming traffic problems. Some are pink, others mint green, others yellow - all looking wind-worn and old. And if you look closely you can see that the roads are not paved - if there even are roads. A little asking around and you will find out that most of those homes lack running water and electricity. And thousands of El Pasoans glimpse these tenements twice a day on the way to and from work, to and from their comfortable homes. Two cultures, two economies, two realities, nudging up against each other in the desert – it's just too simple for all of its complexities.

After Monument One, we also usually take our visitors to Juarez, the city of more than one million people that is right across the river from El Paso. I say more than one million because nobody really knows how many people live there. Without a census or some other way of counting population, the Mexican government is left to guess. Some estimates put the population of the city as high as three million, which may very well be accurate since people from the interior of Mexico are constantly migrating to the border in search of better opportunities and a better life.

To get to Juarez, you have to walk or drive over one of several international bridges where drug-sniffing dogs walk back and forth between rows of cars and your vehicle may be searched at any time. This entry into Mexico is much less ambiguous than that of the park. Like the signs at the edges of states, each country has flags up and words welcoming you. With such a big production at the bridge -- a small fee is charged, certain items must be declared, searches are performed -- crossing the border there does not confuse. In fact, the bridge is so long and involved that it doesn't even feel like the two nations are that close.
It takes at least 15 minutes to walk across and sometimes hours sitting in traffic to drive across the bridge. I sometimes think that by waiting in the bridge lines, people have the chance to transition between the two countries, time to mentally prepare for what awaits them on the other side. Maybe that is one of the reasons the international bridges are so large in scale and surrounded by pomp and circumstance.

On my last trip over the bridge and into Juarez, I wore blue jeans and it was 100 degrees outside. My friend was in town and he said it had been too long since he'd been to a foreign country. "I have just the remedy for that," I said, and we drove downtown and parked in one of the lots near the bridge, where you pay $3.00 to have your car watched by men who always look a little shady. I have learned that even though they are often very dirty, speak broken English and have missing teeth - and almost always have a bucket of Coronas in their guard booths - these men can be trusted to look out for my pickup truck. I paid that day's man, a little shorter and cleaner than the man I paid last time I had been down there, and my friend and I headed toward the bridge. The walk is long, but always seems shorter than I am expecting it to be. We went through the first booth and paid our fifty cents and then walked up the long arching bridge, breathing in the hot air mixed with car exhaust. At the middle of the bridge, we paused to note our crossing into Mexico, looked up at the big Mexico and U.S. flags and kept walking. My jeans clung to my legs with sweat.

Once we cleared the last checkpoint and entered Juarez, the weather was the only thing that was unchanged. Signs were all in Spanish; grimy children in tattered clothing begged on the street or tried to sell us chewing gum; the roads and buildings were old and deteriorating; Mexican music blared from storefronts, and the air smelled of cooking meat that was displayed in glass cases along the sidewalk. Of course, most things right on the other side of the border are set up to draw American tourists – inexpensive alcohol, medicine you don't need a prescription to buy, and night clubs with a younger drinking age. Every store accepts dollars. Most shopkeepers and taxi drivers speak English. And the nearby markets are filled with the typical Mexican pottery, glassware, t-shirts and boots. Usually, Neil and I lead our guests the six blocks from the bridge to the Mercado, a large building filled with booths selling things that appeal to tourists. In this warehouse-like structure with bare concrete floors, dozens of makeshift merchant booths and more brightly colored weavings and clothing than at any U.S. shopping mall, you can bargain for better prices. But on this day, my friend and I were not in the mood for bargaining. I told him how I usually buy only one item per trip to the Mercado, one item to put in our apartment to remember Mexico by when we move away from the border.

We walked past shops and pharmacies stocked with drugs that anyone could buy without a prescription. We paused to look at a beautiful old cathedral and bought apple soda called Manzana Lift in a market where they didn't understand our words, but gladly accepted our money.
We ambled by an important-looking building and my limited Spanish allowed me to translate. "I think that's city hall," I said. We kept walking. Eventually we stumbled on an outdoor market selling all sorts of things: sneakers, purses, herbal medicines and, most surprisingly, pets. I heard a rooster crowing and we followed the sound to a hot corner of the market where there were scrawny-looking bunnies, chickens, cats and dogs in wire cages panting in the heat. After getting slightly lost on some side streets and walking past a semi-hidden pool hall filled with pool tables and men who were drinking and smoking in the early afternoon on a weekday, we walked back to El Paso and headed home to nurse our sunburns and heat headaches and wash the grime from our skin.

Neil and I take our guests to Juarez even when they don't want to go. "How can you come all the way here and not set foot in Mexico?" we ask. And we mean it. It's not that we force cultural experiences on our visitors, we just convince them that they want to go to Mexico and then we take them. If our friend or relative seems to be enjoying the Mexico experience after the Mercado, we walk to a market in a more local part of town where less English is spoken and where, in the middle of the afternoon outside of bars along the way, 14-, 15-, 16-year-old girls stand in mini skirts, boots, and heavily applied makeup that melts and drips in the hot sun, and sell their bodies to feed their families. "See, they are always here," we remark in low tones. The girls stand with their backs against the wall and look down as men ogle them and women walk by with heads turned away. Usually, I feel nauseous. I want to rescue them, take them home with me and feed them and put them into sweat pants and running shoes and let them watch cartoons on my couch. Why we make ourselves look at the little prostitutes almost every time we visit Juarez is beyond me. Maybe for the same reason that I sometimes get sucked into violent murder movies on television as I am flipping through channels. Morbid curiosity. But maybe, by looking at the girls, I am trying to remind myself of the problems caused by poverty, of the desperation just a few miles from where I sleep at night. Of course, similar desperation exists in the ghettos of my own country, but the poverty in Mexico is not only in ghettos. While we have systemic poverty in the U.S., Mexico has widespread systemic poverty. Why am I fixated on Mexico's poor and not so concerned with the poverty in the U.S.? As politically incorrect as this answer is, I am afraid it is mostly a matter of proximity.

Neil and I do not go to Mexico very often when we aren't showing it to somebody else. It is our tourist attraction, our Grand Canyon, our state park, our Disneyland. I realize that even those prostitutes and the poverty and the begging toddlers are part of the tourist attraction. "See how different it is here?" we ask. "See how lucky we are? But look at how neat this Mexican culture is." And I could feel guilty about this, about mixing my tourism with psychological voyeurism, but I have come to think that it is acceptable to show people from out of town what exists on the other side of the border: the souvenirs and the young whores. Yet, as I show Mexico off like it's mine and remark on the sadness of the poverty and the prostitution and the police corruption, I often wonder whether I should do something to try to help.
But as soon as I ask myself this question I am always struck by the magnitude of the problems and I feel hopeless to effect change. I think this feeling of helplessness is what causes many El Pasoans to turn their backs on Mexico, to pretend they do not live along the border, to close their eyes to the poverty. If all of us didn't do this to some extent, we would go crazy.

Every morning, about ten miles from my little apartment, thousands of cars sit in traffic at the international bridge that spans between Mexico and the United States. Mexican students wake up at four and five in the morning in order to get through the gridlock and make it to their classes at the University of Texas at El Paso on time. My next-door neighbor, who has an engineering degree from the University of Michigan, rises before five every morning to make it to her job at an auto parts manufacturing plant in Juarez, fulfilling her dream to work in a Spanish-speaking country. Older El Pasoans walk across the same bridge and pay fifty cents to pass into Mexico where they buy prescription drugs and get their teeth filled or capped for a fraction of the price it would cost them in the United States. Older Mexicans walk over the bridge into El Paso to go shopping for the things they cannot buy in their own country or to see their children who somehow became U.S. citizens. Mixed in with the students, the tourists, the businesspeople, are the drug smugglers and the people smugglers, the criminals who somehow manage to live at this crossroads as if there are no laws.

I live in this community too, walking across the border on occasion for entertainment, driving by the colorful adobe houses on Juarez's western hills each day on Interstate 10. Perhaps I came closest to finding an answer to what it really means to live here on the edge of something during my first November here, while I was working as a reporter at El Paso's daily newspaper. One morning, one of the many editors handed me a faxed press release about a Catholic mass on the border in celebration of the Day of the Dead, or Dia de Los Muertos, a Mexican holiday celebrated by many El Pasoans. I took the piece of paper and headed toward the little New Mexico town where the event was taking place.

It was not more than five minutes outside El Paso, but as with many parts of the border, I had to drive my car along a dusty clearing near a railroad track to get there. Later it made sense to me that there are no real roads leading to that place. I imagined that the U.S. government kept it that way on purpose to protect most of us from seeing what the border really is: a large chain-link fence with barbed wire at the top. What I saw when I arrived is cemented in my memory. About two hundred people on the Anapra, Mexico side of the fence and one hundred people in Sunland Park, New Mexico were singing and praying together through chain links. The people wore mostly black with a smattering of color and, like the shiny fence, everyone seemed out of place in the expansive field of dust and small gray desert plants which could never sustain the amount of life it contained that day. The people were speaking all in Spanish, but a few of those gathered on the U.S. side explained to me that the people at the mass were praying for friends and relatives who had died trying to cross the desert to get to a better life in America.

These people were celebrating a holiday but they were also trying to make a statement.
They wanted the fences to come down so that border crossers wouldn't be forced to journey into the uninhabited desert where they often meet their death. Border Patrol officials maintain that the tall fences and vigilant guarding of the border, all part of a plan called Operation Hold The Line first implemented in the late 1990s, have decreased crime in El Paso, and that having agents so near the river at all times has prevented many drownings. I don't know who is right. But the man who developed Hold The Line while in the Border Patrol was elected to Congress and now chairs the Hispanic Caucus. Recently, USA Today named him as a possible Hispanic presidential candidate. The people in Anapra protesting his policy have to gather in the dust, invite the media and hope they are heard.

I stood there in the fine deep-brown sand that afternoon puzzling over borders, realizing that, similar to the dark line drawn on the map, the border between one of the most wealthy and powerful nations in the world and one of the poorest and weakest is a chain-link fence. The border mass was short but amazing. Two folding tables were pushed against each other with the wire fence between them and they served as a makeshift altar. There was a priest on each side and the men took turns reading prayers. The Spanish floated upward and spread across the desert in every direction. Children shared apples through the fence. Adults swayed in the moment holding white wooden crosses with names of the dead painted on them. Everyone and everything was covered in a thin layer of dust. Some women mourned quietly; several men allowed tears to slip down their cheeks. When the solemn praying was complete, four or five men in papier-mache masks for the Day of the Dead began to dance. I watched, trying to make sense of what I was seeing. What defines a nation? Who decided this fence would be precisely right here? Why didn't they put it 15 feet to the left? It amazed me that some line decided by the Treaty of Guadalupe Hidalgo in 1848 could make such a difference. I realized that being born a mere 5 feet to the left or right of a certain line in the dirt could change your entire life experience. You either live on this side of the fence and bring fruit and toys for the children on the other side, or you live in Anapra and hold your arms outstretched as the Americans dump bags of food and gifts over the top of chain links.

I wrote a story about the mass for the next day's newspaper. Like most stories about the border, the editors decided to run it on the inside of the local section, the part of the newspaper read by the smallest number of people each day. I was shocked when I received at least ten e-mails from readers voicing strong opinions about the border and the fences and the Mexicans dying in the desert. Some of the messages I got did not make sense, others were from angry U.S. citizens who want Mexicans to stay out and others were from people who sympathized with the mass attendees. One e-mail even said, "Go back to your own country." What country do they suggest? I wondered. Would moving out of Texas suffice? Aren't all U.S. citizens somehow descendants of immigrants? I grew up hearing sayings like, "The United States is a melting pot" and in fifth grade I memorized the inscription on the Statue of Liberty about the tired, hungry, poor and huddled masses all being welcome here.
My naïveté allowed me to believe those myths until I was 21 and moved to the border of this great nation, where it became clear to me that only some of the tired, hungry and poor are really welcome here.

I don't really think that everyone should be allowed to enter this country. Nor do I believe that we should offer public services to everyone who wants to walk across the border from Mexico. I have seen the strain on the health care system and the legal system that has been created by caring for the indigent in El Paso, most of whom are Mexican citizens illegally in the U.S. But as a nation, we are in the strange and sad position of ignoring the plight of our neighbors because there is no easy way to help them. Their struggle is a spectacle to us, at best a lesson, but we border-dwellers continue to witness the poverty and the desperation, and then turn our backs on it every day.

Jodi

Elegy for the Executive Director

More about Liam... I wish I could have attended his memorial in New York, but it was on Yom Kippur.

David Gates captures the man in a way that nobody else has thus far. Definitely worth the read.

Jodi

October Musings

It's October and temperatures are still up in the 80s during the day. I am longing for Fall. It's so late that I'm afraid we'll just jump right into Winter and skip over Fall altogether.

A couple weeks ago, Neil and I went to the Kennedy Center's annual open house and I've been meaning to blog about it. We arrived and saw a Mexican techno band play, then we saw Canadian acrobats who were amazing, and for the last show of the day we got to see a 1.5-hour Ben Kweller concert, which was fantastic. He is my new favorite. Best thing about the whole day? All the shows were free. We just had to show up and wait in line. Living in DC can be really fun and culturally enriching.

The next weekend we went for a walk on Sunday. We left our house, walked down the National Mall, saw a live jazz festival, ran across the guy who danced around the world and saw him dancing in front of the Capitol, visited the Washington Monument, went into two museums and then headed home. All we had set out to do was go for a walk.

I like it when life just leads you in a random direction and you uncover unexpected delights. It happens more often than we realize, I think. I'm trying to notice it more - to keep that sense of wonder even in my more mundane days.

In other news, my favorite song of the moment is Home for a Rest by Spirit of the West. Download it from iTunes. You will not be sorry.

Jodi

Hope

Something I heard this weekend: Hope is not a symptom of naivete, it's an act of defiance.

I like that. Sometimes I feel like I am too optimistic and too hopeful, but maybe I'm not. Maybe I can make things happen for myself just because I maintain hope in the face of life and all that life throws at me.

Maybe...

Jodi

I am Jewish

I was at a wedding tonight and going into it, Neil and I knew we'd probably be the only Jewish people in attendance.
This is not something I usually think about when going somewhere, but our friends are very religious and Jesus was mentioned more than a few times on their wedding web site, so we knew we might be a little different than most of the wedding guests.

The wedding was great and the party was on a yacht, which was even cooler than it sounds. Everyone was very friendly and we met a lot of friends and family of the bride and groom, which is always fun. The bride was beautiful and it was fantastic to see our two friends so in love with each other.

At dinner, we ate with the couple's marriage coaches from the church. Basically, the marriage coaches are a happily married couple that volunteer to counsel new couples pre- and post-wedding. They were very nice and very committed to their coaching. They said that two of the three couples they coached last year didn't end up getting married. (Wow - think of how much the divorce rate might drop if every couple had to go through similar counseling before getting married.)

I suppose it was inevitable that they would ask what I do for work. I started with, "PR and Marketing for a nonprofit." Then they asked what kind of nonprofit and I explained that we bring high school students to DC to teach them about the political system and about political and social involvement. And then they asked how we select our high school students, at which point I explained that all of our students are Jewish. I told them how we teach the students that Judaism demands that we be involved and take an active role in making the world a better place.

With all of the explaining out of the way, the woman doing the questioning said, "So to do that kind of work do you have to be that?" (or something close to that). I believe she wanted to ask, "Are you Jewish?" but for some reason couldn't bring herself to do it. I took her awkwardness in stride and said that you don't have to be Jewish to work in my office, but I am Jewish. This was somewhat of a watershed moment for me. Even though I am proud of who I am, I have not often felt comfortable coming out as Jewish. It is hard for me to say, "I am Jewish" - probably because of reactions I have gotten throughout my life and because I grew up in a place where not many people were Jewish.

The rest of the conversation at dinner was peppered with people's Jewish experiences - which were VERY limited. The stories ranged from a neighbor who invited someone to his bar mitzvah, to an e-mail with a link to a video about Israeli soldiers, to another neighbor who shared hamantaschen at Purim, to a recently attended Jewish/Catholic wedding, to a trip to the National Holocaust Museum. No matter where we tried to steer the conversation, if there was a tiny bit of silence, someone would pull out another Jewish story.

Everyone was very nice, and I know they were trying to find commonalities and create conversation, but I couldn't help but feel a bit uncomfortable.
I would have rather someone said, "I haven't met very many Jewish people, can I ask you some questions?" than run through every experience they ever had with a Jewish person as if I would, for some reason, care deeply.

What if Neil and I had been the only Black people at the table? Would we have received a litany of stories about our fellow diners' Black schoolmates and coworkers? I somehow suspect that most people know that would be inappropriate.

I am not offended and I actually think that the couple and the other woman at our table are very nice people. I wouldn't mind having dinner with them again someday, though I doubt that will happen. I just wish that they could have seen themselves tonight. Or I wish I could have found a tactful way to let them know what they were doing. "Hey, you're embarrassing yourself by telling me about every Jew you've ever met." Maybe I should have started telling all of my own Christian stories? I really have no idea what I could have done to stop them and make things less awkward. Probably nothing - it's just human nature.

Maybe I should be thanking them, really, because before all of the inane stories about their bar mitzvah experiences, they gave me a chance to say "I am Jewish" out loud and to feel proud of making that declaration. I am sure there are a litany of reasons as to why I am now, at age 28, finally able to embrace my religion as part of my identity and feel proud of it (I am working for a Jewish organization, I have met lots of Jewish friends since moving to DC, I am more comfortable with myself overall, etc.). But it feels good to be able to own my religious identity and even when it's a little scary, to be able to say "I am Jewish."

Jodi

Reason #472 I Like Living In A Big City

There is something oddly satisfying about grabbing the garbage bag from the can, walking down the hall, opening a small metal door on the wall marked "rubbish" and dropping the trash down a long tube. Trash chutes are awesome.

Jodi

A Bucket of Keep Your Shirt On

Who else thinks the Subway ads are completely hilarious? I love them. Neil's watching football and I am enjoying the first-game-of-the-season commercials, including the fantastic Subway commercial where the guy takes his co-workers' orders and everyone orders things like "Make your pants tight combo," "A bucket of keep your shirt on," and other hilarious meals... It was almost funny enough to make me want to eat Subway, but I really hate it.

Fall?

It's supposed to begin to feel like Fall soon. It's getting light later and dark earlier, kids are back in school, stores are selling plaid, but because I live in a muggy swamp, it's still in the 80s and 90s and muggy.

I am dreaming of crisp air and changing leaves.

Jodi

Wedded Bliss

I was in my friend Julia's wedding this past Saturday. It was her wedding day and Neil's and my fifth anniversary.
Funny, I have been feeling old lately, but at the same time, I feel very young to have been married for five years. Strange how ambiguous time can be.

The wedding was a lot of fun. Spending our anniversary dancing with our friends really wasn't bad at all. There were several perfect moments during the evening... you know, those glimpses of divinity when you see your friends really happy, when you're in the moment with people you love and the world slows down for a second.

Julia looked absolutely beautiful, the sunset was amazing, even the rain storm was fantastic.

Of course, the visit to Santa Fe was far too short, but it was nice to be there, to have the chance to see family and have my feet on the ground for a minute.

Jodi

Oh Dear

Several public moments of stupidity in the media today:

Poor Caitlin Upton. I have no idea how she managed to say something so incomprehensible, but there you have it. I am sure that she's not THAT dumb, but somehow she managed to string a whole bunch of words together in a way that made absolutely no sense. It didn't help that she had a vapid look on her face and a blank tone of voice. I know that I was incredibly self-conscious when I was her age; I can't imagine how I would have felt if my most embarrassing moment had been broadcast on YouTube for millions to watch. People are saying that her answer shows how dumb Americans are or somehow makes a broader statement about our culture. I don't believe that's the case. I think she just got scared and nervous, but wow is it funny! Hang in there, Caitlin - definitely a classy move to go on the Today Show this morning - way to go.

And for fun, here's her answer to why a fifth of Americans could not locate the United States on a world map (yikes).

I have a bit less sympathy for Idaho Senator Larry Craig, who pleaded guilty to lewd conduct in June when he was caught soliciting another man in an airport bathroom and didn't bother telling his wife about it until it broke in the news yesterday. He is a conservative Republican who wants to pass an amendment defining marriage as a union between one man and one woman, but many gay men say they have had sex with him, he has been accused of lewd conduct in the past, and should I repeat the fact that he pleaded guilty in June and didn't tell his wife??? Today he held a press conference during which he repeated "I'm not gay" multiple times in an angry tone and all I could think while watching him was, "Wow, he must really hate himself." And there I am, back to feeling sorry for him. I know it's hard, especially for men of his generation, to be open about sexual preference. But isn't it simpler when we're ourselves?
And shouldn't people elected to public positions be as honest and open as possible? Doesn't it all eventually come out anyways? Yes, I am an idealist, but senator or no senator, Larry Craig has some work to do because he's not going to be happy until he is comfortable with who he is.

Jodi

Things

Thing 1: I saw a great documentary last night called Can Mr. Smith Get to Washington Anymore? I recommend it to anyone who's even slightly politically-minded. Not only does it detail an inspirational, though failed, campaign, but it also points out one of the major problems inherent in our political system - legacy candidates who win on name recognition and family reputation alone. A side note: watching the poor guy make all the painful phone calls and have the door slammed in his face definitely served as a reminder about why I do not want to run for office.

Thing 2: I have been meaning, for some time, to blog about my neighborhood Street Sense vendor, Ivory Wilson. I met Ivory last winter. It was shortly after I'd attended one of the seminars my work puts on for teenagers and I'd heard some speakers from the National Coalition for the Homeless. What resonated with me most of what the homeless speakers said was that all they want is some friendly human interaction - to be treated like people. I certainly didn't make a practice of being mean to homeless people I passed on the street, but I also didn't often smile at them or even say hello. Armed with my new awareness, I decided to make a concerted effort to be friendly to the homeless people I pass each day. It was right around then that I noticed a new Street Sense vendor on the corner between Starbucks and the metro. I walk past his spot each morning and each morning I would smile at him and say good morning. I also began to buy the newspaper from him. He was always very friendly and appreciative and started to call me his friend.

"Good morning, my friend," he'd say.

I looked forward to walking past him. His would be at least one friendly face on my morning commute and often the only friendly face. On days when he wasn't there, I began to miss him.

Then, one day, I bought the paper from him and he told me his profile was in it. The first paragraph of the profile began with an accurate description and ended with a surprise.

After the initial shock of reading the intro, I noticed the pull-quote on the page in 24-point font: "I know that some day I am going to meet somebody that is going to give me that opportunity to talk to them and realize that I am very talented at something else besides turning women into hookers. That I am a writer."

As a woman and a feminist, I really wasn't sure how to proceed. Should I talk to Ivory in the morning? Could I still smile at him knowing what he'd done? At the same time, he's reformed and he's trying to become a writer - an aspiration I can certainly identify with. I discussed the dilemma with Neil and I mulled it over for a few days; meanwhile, Ivory was missing from his corner - as if to give me the space to process the new information I had obtained. Later he said that he was very busy because of the profile - the media had done an interview, he had to sell some copies of his book "How to Be a Pimp," etc.

Ultimately, even after reading some of Ivory's disturbingly graphic book about being a pimp (I bought a photocopy for $5), I decided to continue my friendship.
Now, Ivory brings printed Word documents with new stories and poems he's written each week. I give him a dollar or two for each poem and I buy the paper when it comes out. Sometimes I stop and talk to him for a couple of minutes, but I always smile. I've noticed that I'm not the only young woman who stops and talks to Ivory. In fact, I've never seen a man talking to him on his corner. It's funny, because he's not particularly charming - but there must be something about him?

During a recent conversation, Ivory told me that living in DC doesn't tempt him to go back to his life as a pimp. Living in California, however, does, so he's staying here. I appreciated the courage it took for him to open up to me like that (I don't even think he knows my name) and it made me trust him just a little bit - enough to keep letting him call me his friend.

A couple issues of Street Sense ago, Ivory published a poem called "The Salesman at 7th and E" that chronicled his time selling the newspapers on his corner. He wrote of being cold and sad in the winter and the difference that was made by the people who said hello to him. I like to think I helped make that difference and seeing that poem in the paper felt rewarding.

There is some humor that comes with my "friendship" with Ivory. For instance, I can say things to Neil like, "Oh, sorry, I gave my last dollar to my pimp" after buying the paper from Ivory on the way home from work. I like the novelty of being friends with a reformed pimp. But really, I'm glad I took the chance and opened myself up to befriending someone to whom I previously wouldn't have given the time of day. It sounds a little made-for-TV-movie, but being friends with Ivory, regardless of his past crimes and current homelessness, isn't just a favor to Ivory; it makes me feel good about myself.

Jodi

Ugh

This makes me sad.

It's raining today and I think I inhaled too much dust while cleaning the office for new employees yesterday, so now I feel sick. I realize it's bizarre that the Marketing and PR director cleans for new employees, but nobody else was going to do it and I think that having a nice space on your first day of work is important. Note to self: Next time, let somebody else clean the office.

Neil is in South Padre Island on hurricane watch. Fortunately, so far, the hurricane seems to be hitting mostly uninhabited parts of Mexico. Also fortunate, it didn't hit where Neil was. Maybe he will come home soon?

Jodi

Still Thinking About Liam and Suicide

...and so I present this poem by Galway Kinnell...

Wait

Wait, for now.
Distrust everything, if you have to.
But trust the hours. Haven't they
carried you everywhere, up to now?
Personal events will become interesting again.
Hair will become interesting.
Pain will become interesting.
Buds that open out of season will become lovely again.
Second-hand gloves will become lovely again,
their memories are what give them
the need for other hands. And the desolation
of lovers is the same: that enormous emptiness
carved out of such tiny beings as we are
asks to be filled; the need
for the new love is faithfulness to the old.

Wait.
Don't go too early.
You're tired.
But everyone's tired.
But no one is tired enough.
Only wait a while and listen.
Music of hair,
Music of pain,
music of looms weaving all our loves again.
Be there to hear it, it will be the only time,
most of all to hear,
the flute of your whole existence,
rehearsed by the sorrows, play itself into total
exhaustion.

Jodi

Liam Rector 1949-2007

I was a student in the Bennington Writing Seminars for two years, a program that was started and directed by the poet Liam Rector. I learned this morning that after a long illness, Liam killed himself yesterday morning. To me, a student of his who didn't know him well, but was completely inspired by him, the news of his death is like a star going out. He was brilliant, eccentric, completely devoted to language and literature and his exuberance for life was infectious. Liam described the Bennington Writing Seminars as a vortex or radiant node. We gathered in Vermont twice a year to gain the energy and synergy found in the vortex and then we traveled home for the necessary isolation in which art is created. He was a great man and the world is a little bit less wise, less rich and less bright without him.

A quote from Liam: "I've been a student of music and film, and I think of life as that tragic and embarrassing thing that takes place between the poems, films, and the songs I inhabit."

And a poem he wrote that strikes a chord:

The Remarkable Objectivity of Your Old Friends

by Liam Rector

We did right by your death and went out,
Right away, to a public place to drink,
To be with each other, to face it.

We called other friends - the ones
Your mother hadn't called - and told them
What you had decided, and some said

What you did was right; it was the thing
You wanted and we'd just have to live
With that, that your life had been one

Long misery and they could see why you
Had chosen that, no matter what any of us
Thought about it, and anyway, one said,

Most of us abandoned each other a long
Time ago and we'd have to face that
If we had any hope of getting it right.

To Liam: Thank you for sharing your joy of life with me. I will endeavor to Always Be Closing. You will be missed.

Jodi
Merge lp:~freyes/lazr.restfulclient/lp1500460 into lp:lazr.restfulclient
Description of the change
Dear Maintainers,
This patch adds simplejson as a dependency because, as can be seen in bug 1500460, in clean environments lazr.restfulclient fails with the following error:
Traceback (most recent call last):
  File ".tox/pep8/
    from charmtools.proof import main
  File ".tox/pep8/
    from charms import Charm
  File ".tox/pep8/
    from launchpadlib.
  File ".tox/pep8/
    from lazr.restfulcli
  File ".tox/pep8/
    import simplejson
ImportError: No module named simplejson
Best,
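For context, the actual change in the branch (revision 147 below) boils down to declaring the dependency in the package metadata. A hypothetical sketch of that kind of change, illustrative only and not the real lazr.restfulclient setup.py:

```python
# Sketch: declare simplejson as an install-time dependency.
# The real setup.py carries more metadata and more dependencies.
from setuptools import setup, find_packages

setup(
    name="lazr.restfulclient",
    packages=find_packages(),
    install_requires=[
        # ... existing dependencies ...
        "simplejson",  # the dependency this branch adds
    ],
)
```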
Right, the problem is only with the version available on PyPI (0.13.1), which doesn't contain the patch from revno 124.1.2 [0]. If you could push the version available in Xenial to PyPI, that would be great.
[0] revno: 124.1.2
committer: Barry Warsaw <email address hidden>
branch nick: rcpy3
timestamp: Mon 2012-06-04 15:54:58 -0400
The start of a port of lazr.restfulclient to Python 3. Still untested because
of the unadvertized requirement for a Python 3 lazr.restful.
I've uploaded lazr.restfulclient 0.13.5 to PyPI, including the necessary patch for this. Sorry for the long delay.
As a result, I'm rejecting this branch since I don't think it's needed, as previously explained.
Unmerged revisions
- 147. By Felipe Reyes on 2016-05-12
Add simplejson as dependency
Thanks for the patch. However, I don't think we should do this; the current code doesn't in fact need simplejson on any remotely recent version of Python. I'll see if I can sort out an updated release on PyPI.
If you wanted to produce a modified version of this branch, I think it would be entirely reasonable nowadays to remove the simplejson fallback code paths; that would reduce confusion. | https://code.launchpad.net/~freyes/lazr.restfulclient/lp1500460/+merge/294573 | CC-MAIN-2019-30 | refinedweb | 296 | 60.61 |
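For context, the fallback code paths in question are usually the classic import dance shown below. This is a sketch of the pattern, not the exact lazr.restfulclient code:

```python
# Typical simplejson/stdlib fallback (sketch). On any remotely recent
# Python the stdlib json module is always available, so the fallback
# branch is effectively dead code and can be removed outright.
try:
    import simplejson as json  # optional C-accelerated implementation
except ImportError:
    import json                # stdlib, available since Python 2.6
```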
Mark's Dev Links - Issue #1
Weekly newsletter, sent week of 2018-01-22
Hi, and welcome to the inaugural issue of Mark's Dev Links! This is hopefully going to be a weekly-ish email newsletter, focused on the React and Redux ecosystem.
Since it's the first issue, I'm still trying to figure out exactly where I want to go with this. The whole idea was kind of spur-of-the-moment yesterday. I blame Gosha Arinich, who's been poking me to try out a mailing list for a while, and of course Kent C Dodds's newsletter is an inspiration as well.
I'm generally picturing a format along these lines:
- A few selected articles and discussions of interest, with the articles probably drawn from updates to my React/Redux links list
- One or more highlighted Redux-related addons or libraries, similarly drawn from updates to my Redux addons catalog
- Possibly a spotlight on an open Redux issue, or status of the library
- Any new articles or updates I've posted to my own blog
- Quick recap of what I'm up to and what I've been working on
The archived newsletters will be available on the newsletter's archive page, assuming I've set this up right. I'm also still debating whether or not to post them on my blog.
Note: Since you're seeing this, I did indeed decide to post this on my blog, on a one-week delay.
Definitely open to feedback and suggestions for stuff you'd like to see in these newsletters.
So with that in mind, let's get to it!
News, Links, and Discussion
React Lifecycle Changes and Async Behavior
The React team has been broadly hinting that the upcoming async rendering behavior is going to lead to major changes in how React works, and that the changes will impact the entire React ecosystem. That's moving forward, as the RFC for new async-safe lifecycle methods has been merged in. All the `componentWill*` methods will be renamed to `UNSAFE_componentWill*`, and a new static `getDerivedStateFromProps` method will be added as a replacement. Expect more changes to come.
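As a rough illustration of the new pattern (the component and prop names here are made up, and the exact API details were still being finalized at the time of writing):

```js
// Hypothetical migration away from componentWillReceiveProps.
class PriceLabel extends React.Component {
  state = { lastPrice: null, formatted: "" };

  // The new static lifecycle is a pure function of (props, state);
  // it has no access to `this`, which is what makes it async-safe.
  static getDerivedStateFromProps(nextProps, prevState) {
    if (nextProps.price !== prevState.lastPrice) {
      return { lastPrice: nextProps.price, formatted: `$${nextProps.price}` };
    }
    return null; // returning null means "no state update needed"
  }

  render() {
    return <span>{this.state.formatted}</span>;
  }
}
```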
Webpack 4 Alpha
Tobias Koppers has been cranking out work on Webpack v4. It will include "zero-config" default behavior out of the box, drop the old `CommonsChunkPlugin` in favor of a new chunking/code-splitting approach, and a lot more.
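Based on the alpha docs, the replacement for `CommonsChunkPlugin` looks roughly like the config below; option names were still subject to change at this stage, so treat this as a sketch:

```js
// webpack.config.js - sketch of the v4 alpha chunk-splitting setup.
module.exports = {
  mode: "production", // new in v4: turns on sensible built-in defaults
  optimization: {
    splitChunks: {
      chunks: "all" // replaces the old explicit CommonsChunkPlugin wiring
    }
  }
};
```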
Redux Saga Testing
Looks at several ways to approach testing sagas, and how specific saga test helper libraries use those approaches. Includes a helpful table listing which approaches each helper library supports.
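The simplest of those approaches needs no helper library at all: because sagas are generators that yield plain effect objects, you can step through them manually and compare with deep equality. A quick sketch (the saga and API function names are hypothetical):

```js
import { call, put } from "redux-saga/effects";
import { fetchUserSaga } from "./sagas"; // hypothetical saga under test
import { fetchUser } from "./api";       // hypothetical API function

it("fetches a user, then dispatches a success action", () => {
  const gen = fetchUserSaga({ userId: 42 });

  // Each yield produces a plain effect descriptor, so toEqual works.
  expect(gen.next().value).toEqual(call(fetchUser, 42));

  // Feed a fake API response back in and check the dispatched action.
  expect(gen.next({ name: "Mark" }).value).toEqual(
    put({ type: "USER_FETCH_SUCCEEDED", user: { name: "Mark" } })
  );
});
```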
Redux Ecosystem
Immer: Immutability through Mutability
One of the major pain points with Redux has been the need to update data immutably, which becomes more difficult when working with nested data. There's many existing immutable update utility libs, most of which take string key paths to indicate what part of a nested data structure to update.
Michel Weststrate, author of MobX, has released a new library called Immer. It uses ES6 proxies to let you write normal mutative code within a callback, and tracks all of the changes you're making. It then applies those changes as proper immutable updates to safely generate the new state. As an example, this code is perfectly correct for use in a Redux reducer:
```js
import produce from "immer";

// later
const nextTodos = produce(todos, draft => {
    draft.push({ text: "learn immer", done: true })
    draft[1].done = true
})
```
I haven't had a chance to use it yet myself, but it appears to be a great solution for updating state immutably. Should be good for use in Redux reducers, React components, or anywhere else you need to do immutable updates.
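To make the Redux angle concrete, here's a sketch of the same idea wrapped up as a reducer (the action types and state shape are assumed):

```js
import produce from "immer";

// Each case "mutates" the draft; Immer turns that into an immutable update.
const todosReducer = (state = [], action) =>
  produce(state, draft => {
    switch (action.type) {
      case "ADD_TODO":
        draft.push({ text: action.text, done: false });
        break;
      case "TOGGLE_TODO":
        draft[action.index].done = !draft[action.index].done;
        break;
    }
  });
```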
Redux Issues Spotlight
Redux 4.0 Beta!
Tim Dorr has put out Redux 4.0.0 beta 1. It's primarily cleanup around the edges - updated TypeScript typings, dropping Lodash's `_.isPlainObject` for a homegrown version that's faster, and various other mostly internal fixes. Try it out and let us know if you find any problems.
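For the curious, a homegrown plain-object check is typically just a prototype-chain walk. This sketch shows the general shape rather than the exact Redux source:

```js
// A "plain" object is one created via `{}` or `new Object()` - i.e., its
// prototype chain is exactly one level deep.
function isPlainObject(obj) {
  if (typeof obj !== "object" || obj === null) return false;

  let proto = obj;
  while (Object.getPrototypeOf(proto) !== null) {
    proto = Object.getPrototypeOf(proto);
  }
  return Object.getPrototypeOf(obj) === proto;
}
```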
Me, Myself, and I
(I'll hopefully figure out a better name for this section eventually. This is what you get for signing up for the first issue :) )
My schedule for the next few months is starting to fill up a bit. I've agreed to give my "Intro to React and Redux" presentation at a couple of local meetups in March and May. I just submitted a conference proposal for the near future, and we'll see how that pans out. Finally, I'm hoping to start teaching some Redux workshops later this year, and I'm bookmarking everything Kent C Dodds has ever written on the topic :) I actually asked Kent a question about planning for workshops on his AMA repo, and he was kind enough to write a very detailed answer.
Experiments with Randomness in Redux
Someday, if I actually have free time, I've got an old side project I'd like to reimplement. It's a board game that has dice rolling in it, and that means randomness.
Now, random numbers in Redux are tricky, because the randomness needs to live outside reducers in order for them to actually be "pure". At a minimum, we want the exact same output for the exact same `(state, action)` input. In theory, there shouldn't be any effects on the outside world at all, either.
There are two good articles on randomness in Redux that I've seen: Roll the Dice: Random Numbers in Redux and Random in Redux. However, what neither of these gives us is an easy way to generate an arbitrary number of random numbers in a single dispatch, repeatably.
Yesterday I was playing around with the idea of a Redux middleware that would store a random number generator instance that could have its internal state saved and updated, and I seem to have gotten something working. I've put the whole code in a gist, but here are the highlights:
```js
import seedrandom from "seedrandom";

export const randomMiddleware = store => {
    let prng = seedrandom(null, {state : true});

    return next => action => {
        const {meta = {}} = action;

        const rngState = prng.state();
        const newMeta = {...meta, rngState};
        const newAction = {...action, meta : newMeta};

        const result = next(newAction);

        const newRngState = newAction.meta.rngState || rngState;
        prng = seedrandom(null, {state : newRngState});

        return action;
    }
}

function higherOrderRandomReducer(reducer) {
    return function (state, action) {
        if(action.meta && action.meta.rngState) {
            action.meta.prng = seedrandom(null, {state : action.meta.rngState});
        }

        const actualResult = reducer(state, action);

        if(action.meta && action.meta.prng) {
            const rngState = action.meta.prng.state();
            action.meta.rngState = rngState;
        }

        return actualResult;
    }
}
```
The trick here is that whenever an action is dispatched, the middleware serializes the PRNG's internal state and adds it to the `meta` field in the action. When the action reaches the reducers, the higher-order reducer instantiates a new copy of the PRNG, and uses the serialized state to initialize it. When the reducer returns, we serialize this second PRNG instance's state, and overwrite the state value in the action. When the action gets back to the middleware, it can then update/re-create its own PRNG instance with that state.
It's fragile, it's unoptimized, and it's admittedly a really indirect way of going about handling random numbers. (You can also argue that having the reducer modify the action with the updated PRNG state makes it "impure", and you'd have a good case.) But, I think it's interesting and useful enough to flesh out, and it does actually (mostly) fulfill the desire to let a reducer generate as many random numbers as it wants based just on the contents of the action. If you try out the part of the gist where I dispatch multiple `"INCREMENT"` actions and then re-apply the saved PRNG state, you can see it repeatably generates the same numbers each time. This would also be useful for testing purposes.
Wrapping Up
So, there you have it for issue #1. Can't guarantee they'll all be this long, or that they'll come out on exactly a weekly schedule, but hopefully I'll be able to put them out on a consistent basis. As I said earlier, I'd love to hear feedback and suggestions for what you'd like to see.
(Also, I apparently don't know how to write anything "short".) | http://blog.isquaredsoftware.com/2018/01/marks-dev-links-001/ | CC-MAIN-2018-17 | refinedweb | 1,418 | 61.16 |
Hash functions are a fundamental part of computing, and Java provides excellent support for working with them. In Java, hashing is a common way to store and look up data in collections such as HashMap and HashSet. This programming tutorial talks about hashing, its benefits and downsides, and how you can work with it in Java.
What is Hashing?
Hashing is defined as the process of transforming an input value into another value based on a particular key. A hash is a function that converts an input value into an output value that is usually shorter and deterministic for the same input. Although collisions are unavoidable in principle, a good hash function makes them rare: different input values should almost never generate the same hash code.
Hashes are used in many different applications, such as storing passwords, creating unique identifiers, and verifying data. A hash function produces what is known as a hash value, a hash code, or a hash. A hash table is a data structure that stores key-value pairs, where each key is used to calculate an index in the table that corresponds to the location of the value.
Hash functions are used in computer programming for various purposes, such as storing data in a database or verifying data integrity. Hashing is also used to secure credentials: passwords, for example, are hashed before they are stored in the data store. When a user enters their password, a hash function creates a hash code from it. To verify the password entered by the user, this generated hash code is compared with the stored hash code.
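A minimal sketch of that hash-and-compare flow is shown below. Note that production systems should use a dedicated, salted password-hashing scheme such as bcrypt or PBKDF2 rather than a bare digest; this is only an illustration of the principle.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class PasswordCheck {
    static byte[] sha256(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return md.digest(input.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] stored = sha256("hunter2");  // hash kept in the data store
        byte[] attempt = sha256("hunter2"); // hash of the login attempt

        // Only the hashes are compared; the plaintext is never stored.
        // (Production code should also use a constant-time comparison.)
        System.out.println("Match? " + Arrays.equals(stored, attempt));
    }
}
```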
Although there are several types of hash functions, they all accept input of arbitrary size and produce a fixed-size output. The output is usually much smaller than the input, which makes hashes a space-efficient way to represent and compare data.

Hash functions are designed to be one-way functions, meaning that it should be very difficult to compute the original input from the output (hash code). Because an unbounded set of inputs is mapped to a fixed-size output, collisions can occur when two different inputs produce the same output.
Types of Hashing Algorithms in Java
There are several hashing algorithms; the most common ones are MD5, SHA-1, and SHA-256. These algorithms are used to generate a hash of a given piece of data, which can then be used to verify the integrity of that data. Note that MD5 and SHA-1 are no longer considered secure against deliberate attacks and should be reserved for non-security uses such as simple checksums.
For example, you can leverage a hash algorithm to generate a hash of a file. If the file is modified and a hash is generated again, the new hash value will differ from the earlier hash value. This can help you to verify whether or not a file has been tampered with.
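A minimal sketch of that file-integrity check, using the JDK's MessageDigest (the file path here is hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.Arrays;

public class FileIntegrity {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");

        // Hash the file once and keep the digest as the reference value.
        byte[] original = md.digest(Files.readAllBytes(Paths.get("report.pdf")));

        // ... later, after the file may have been modified ...
        byte[] current = md.digest(Files.readAllBytes(Paths.get("report.pdf")));

        // Any change to the file's bytes produces a different digest.
        System.out.println("Unchanged? " + Arrays.equals(original, current));
    }
}
```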
What are the Advantages and Disadvantages of Hashing
The main advantage of hashing is that it lets you store and retrieve data of any size quickly, using a small, fixed-size hash in place of the data itself. The data is stored in a "hash table", which is a collection of data values that are each assigned a unique key. When you want to retrieve the data, you simply provide the key and the hash table looks up the associated value.
The main disadvantage of hashing is that it can be hard to retrieve data if you do not know the exact key that was used to store the data. This can be a problem if you are trying to recover lost data or if you want to find all the data that matches a certain criterion. Also, if two pieces of data have the same key, only one will be stored in the hash table resulting in data loss.
Hashing will not be efficient if collisions occur, meaning two or more items are assigned the same key. Additionally, hash functions can be complex, and the data in a hash table must be carefully organized so that the keys can be quickly found.
How to Choose a Java Hashing Algorithm
You should consider a few points before you select a hashing algorithm for your application. The first point is the security, you should to choose an algorithm that is difficult to break. The second is the speed of the algorithm – you should select an algorithm that is high performant. The third is the size of the input: you should select an algorithm that can handle the size of the data you need to hash.
The most popular hashing algorithms are SHA-1, SHA-256, and SHA-512. All of these algorithms are secure and fast and can handle large amounts of data.
Read: Top Collaboration Tools for Software Developers
HashMap and HashSet in Java
Java provides multiple ways to implement hashing. Some of the most popular ways are using the HashMap and HashSet classes. Both the HashMap and HashSet classes use hashing algorithms to store and retrieve data.
HashMap
The HashMap class is a part of the Java Collections Framework. It stores data represented as key-value pairs where the keys are non-null and unique; for example, duplicate keys are not allowed.
HashSet
The HashSet class is also a part of the Java Collections Framework. It stores data in a set, which means that similar to HashMap, it would not allow duplicate values. However, unlike the HashMap class, the HashSet class does not store data in key-value pairs.
How to Program Hashing in Java
There are many ways to hash in Java. Some of the most common methods are using the built-in hashCode method. To hash a String using the built-in hashCode method, you can use the following code:
String str = "Hello, world!"; int hash = str.hashCode();
To hash a String using the SHA-256 hashing algorithm, you can use the following code:
String str = "Hello, world!"; String algorithm = "SHA-256"; byte[] bytes = Hashing.digest(algorithm, str.getBytes()).asBytes();
The following code listing shows how you can generate hash code for variables in Java. Note that the hash code for str1 and str2 will differ but the hash code for str3 and str4 will be identical:
import java.io.*; public class Test { public static void main(String args[]) { String str1 = "Hello"; String str2 = "World!"; System.out.println("The hash code of str1 is: " + str1.hashCode()); System.out.println("\nThe hash code of str2 is: " + str2.hashCode()); String str3 = "Same value"; String str4 = "Same value"; System.out.println("The hash code of str3 is: " + str3.hashCode()); System.out.println("\nThe hash code of str4 is: " + str4.hashCode()); } }
Final Thoughts on Hashing in Java
In this programming tutorial, we examined hashing, its types, benefits, and how to work with hashing in Java. We also looked at how to use a salt to improve the security of your hashes. By understanding how hashing works, you can make more informed choices about which algorithm is best for your needs.
Read more Java programming tutorials and software development guides. | https://www.developer.com/java/hashing-java/ | CC-MAIN-2022-33 | refinedweb | 1,154 | 61.97 |
The following code is what I have and when ran it shows 1246 files however when I use *. in the search folder area on the drive I saw they are 5,936 files. How can I fix this? Thanks
def walk_error(error): print(error.filename) for root, dirs, files in os.walk(r'D:/', onerror=walk_error): for name in files: tst2.append(name) nm.append(name) # Get Name of File r.append(os.path.join(root, name)) # Get path way of file created =(os.path.getctime(os.path.join(root, name))) #get creation date of file p.append(time.ctime(created)) #Add format date of file to list split_tup = os.path.splitext(os.path.join(root, name)) # Seperate into two tuples su.append(split_tup) #append tuples to list file_name = split_tup[0] # Get file name without extension fn.append(file_name) #Add file name with out extenion to list file_extension = split_tup[1] # Get file extension fe.append(file_extension) # Add extension to list for i in range(1, len(files)): a = sheet.cell(row=1+i,column =1) a.value = i b = sheet.cell(row=1+i,column =2) b.value = nm[i] c = sheet.cell(row=1+i, column = 3) c.value = fe[i] d = sheet.cell(row=1+i, column = 4) d.value = p[i] e = sheet.cell(row=1+i,column =5) e.value = '=HYPERLINK("{}", "{}")'.format(r[i],'Link to file completed how can I get Created by properties?') | https://forum.freecodecamp.org/t/files-not-showing-from-directories/451688 | CC-MAIN-2021-21 | refinedweb | 239 | 64.57 |
On Apr 17, 2006, at 11:07 AM, James Olin Oden wrote:
<snip>The correct thing to do is Provide: the parent directories somehow. The quick and dirty fix to provide all missing directories immediately is to run (with 4.4.6) rpm -Va --nofiles | grep -v "Unsatisfied" | sort -u > /etc/rpm/ sysinfo (Note: you'll have to eyeball or edit the list of paths to taste.) (Also note: /etc/rpm/sysinfo contains system Provides: because only a fewpeople ever figured out virtual packages. So back to the Good Old Stuff,but now with dependency ranges). That will Provide: all those paths. A better fix is to add all those paths to another package so that the directories are removed. In the specific case you mention here, I'd "own" the directory in Compress::Zlib.Which is not the right thing to do, because there are other modules that live in the Compress directory (such as bzip2). This is a pretty common situation that occurs. For instance the $perllibdir/File directory has oodles of different modules that get dropped in there. I think probably what I would do in his case is create something that examines the perl package, and the different perl module packages and figures out what directories under the libdirs need to exist and generate a package that delivers those missing directories. I'm not sure if this is the best solution though. Basically, the general problem not limited to perl is when namespacing of libraries is mapped onto the filesystem. Since multiple libs can live in the same namespace/directory none of these libs should own the directory. If not one of them, though, who should own it? In perl's case you could argue that perl could own all those directories, but that falls apart as soon as someone adds a new module name space; also, typically not every possible module in the universe of modules (a.k.a. cpan) is installed on any given system, so do you premditatedly create these directories in the host language's package even though no libraries may be dropped in there? If multiple packages could own the same directory (which this used to be doable) this is resolved (though I don't like it aesthetically), then all libs that are dropped into a particular directory own the directory. This works for erasures too, as when the first module that lives in that dir is removed, the directory stays because files are still in it. When nth module is removed, provided the directory is empty, the directory gets removed.
Multiply owned directories work just fine afaik. Certainly did when I implemented in RHL 7.0.The issue is attaching metadata, like file contexts, owners, perms to directories reliably. Forcing directories to be provided by some package, even if not
the Compress::Zlib package, is the goal, as rpm already reliably handlesall the details of setting and verifying directories from package metadata.
Setting up the infrastructure to attach contexts/owners/perms for orphan directories is a waste of time imho. Sure root.root 755 with SELinux policy-of- the-day hasSetting up the infrastructure to attach contexts/owners/perms for orphan directories is a waste of time imho. Sure root.root 755 with SELinux policy-of- the-day has
"worked" as a sane default so far. Except for the occaisional bug, as you know ;-)
I don't think I've really answered any questions, but I do hope I have clarified this general problem a little better.
Yep. I hope I have as well ... 73 de Jeff | http://www.redhat.com/archives/rpm-list/2006-April/msg00035.html | CC-MAIN-2014-52 | refinedweb | 597 | 61.56 |
#include <stdlib.h>
void
qsort
mergesort(void *base, size_t nmemb, size_t size,
int (*compar)(const void *, const void *));. The mergesort() function
Normally, qsort() is faster than mergesort() is faster than heapsort().
Memory availability and pre-existing order in the data can make this
untrue.
The qsort() and qsort_r() functions return no value.
The heapsort() and mergesort() functions return the value 0 if success-
ful; otherwise the value -1 is returned and the global variable errno is
set to indicate the error.
The heapsort() and mergesort() functions succeed unless:
[EINVAL] The size argument is zero, or, the size argument to
mergesort() is less than ``sizeof(void *) / 2''.
[ENOMEM] The heapsort() or mergesort() functions were unable to
allocate memory.
Previous versions of qsort() did not permit the comparison routine itself
to call qsort(3). This is no longer true..
The qsort() function conforms to ISO/IEC 9899:1990 (``ISO C90'').
BSD September 30, 2003 BSD | http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/qsort.3.html | crawl-003 | refinedweb | 153 | 66.54 |
Technical Articles
The End2End Journey: Advocates App with OData & SwiftUI
The SAP BTP SDK for iOS, SwiftUI & OData
Wow! What a title right?!
It is actually pretty simple to create a SwiftUI based app and connect it against any OData V2 service with the help of SwiftUI implementation of the SAP Fiori for iOS Design Language Open Source GitHub repository which currently provides you with Swift Packages to accomplish a SwiftUI implementation following the SAP Fiori Design Guidelines. In a previous blog post I’ve talked about that in a bit more detail, what is different from this blog post to the last one is that here I want to introduce you to a pure SwiftUI approach compared to the UIKit + SwiftUI approach from the last post.
So fire up the Advocates Service on your SAP BTP Trial landscape and let’s get started!
We follow a couple of steps to get where we can start building the SwiftUI views for our Advocates App implementation:
- Start the Advocates Service on SAP BTP Trial and the SAP HANA Cloud instance.
- Download the SAP BTP SDK for iOS latest version as we’re going to use the OData proxy generator tool to generate our model layer.
- Use the OData Proxy generator to generate the OData proxy classes
- Setup the Data Service layer to connect to the OData endpoint
- Create the Advocates App views
- Enjoy your SwiftUI Advocates App 🥳
Make sure your instances are running
With SAP BTP Trial your SAP HANA Cloud instance as well as your running applications get stopped during night so you have to start them back up in order to use the Advocates Service.
- Open SAP HANA Cloud Central and start your HANA instance (The linked URL is for EU10)
- Start your server module application within SAP BTP Trial
- Check if your service is running by opening the service URL
Install the OData Proxy Generator
With the download of the SAP BTP SDK for iOS Assistant you automatically get the underlying CLI for the OData Proxy Class Generator. This tool can be installed on your local machine through the Install Tools menu item of the SAP BTP SDK for iOS Assistant.
Generate the OData Proxy Classes
Alright let us create the OData Proxy Classes using our Terminal, iTerm2 or any other terminal app on your macOS machine.
We’re going to use the OData Proxy Class Generator tool to parse a local copy of the Advocates Service Metadata document into Swift-native proxy classes as our model layer in our Advocates App.
We are using the OData Proxy Class Generator tool directly as we do not use the SAP BTP SDK for iOS Assistant to create the app. We want to create a project with a SwiftUI interface and SwiftUI App lifecycle while the Assistant can only create a project with Storyboard interface and UIKit App Delegate lifecycle.
- Open up your service and navigate to the metadata document
- Right-click and save as xml file, without the $ character in the front, to your local file system
- Open up your Terminal an paste in the following command, make sure to fill the arguments correctly
sapcpsdk-proxygenerator -m <path>/metadata.xml -s advocatesService -d <destination-path>
The output should show you a message about the successful creation of the proxy classes
- Inspect the Proxy Classes by navigation to the output destination
The proxy classes will be used by the data service to decode the HTTP responses into the correct Swift classes in order for you to work with the data in Swift. With the Proxy Classes the generator also creates you a complete API for you to more easily access the backend service.
Create the SwiftUI app and the needed Frameworks
With Xcode you can simply create a new SwiftUI based app for your app logic to live in. This will give you the needed project files to get started developing an SwiftUI app. The generated proxy classes folder you can simply drag from finder into your created SwiftUI project and check the Copy Files if needed checkbox to also change the folders location on your file system.
Okay we got the proxy classes, we got the project created, but one thing is still missing: The needed Frameworks.
With the iOS Assistant you can export the frameworks of the SAP BTP SDK for iOS directly to your project. What I like doing is to create a Group within my Xcode project and export the frameworks in there. You can find the export option as menu item in the iOS Assistant.
Using the Fiori SwiftUI open source frameworks you need to add the FioriSwiftUI Swift Package to your Xcode project using the Swift Package Manager.
Simply add the FioriSwiftUI Swift Package using the repository URL. If you need more information about SwiftUI read here .
Xcode will automatically fetch necessary package dependencies as defined in the package manifest file.
Establish Connection to the OData Service
Now we get to the truly fun part! We want to establish a connection to the OData Service and bring together the Proxy Classes with SwiftUI. In order to do that we have to first understand how SwiftUI operates in comparison to UIKit.
UIKit
UIKit has been the main framework for building user interfaces for iOS over the last decade. It does its job really well and is a mature framework with tons of APIs to have a great flexibility when it comes to building UX. UIKit leverages the Interface Builder to build and design UI without writing much code. UIKit based applications usually follow the Model-View-Controller (MVC) design pattern and is code focused where views and layout constraints are created in Swift or Objective-C. It can be quite complex for beginners but can be truly powerful! Providing data and interaction with the UI is implemented through delegation and datasource patterns which can result in quite some code if you have complex UIs.
SwiftUI
SwiftUI is a declarative programming framework used for building Swift based apps for iOS and Mac. With UIKit you delegate and partially manage the relationship between data, events and changes to the UI where with SwiftUI the framework does this for you. SwiftUI works seamlessly together with the Combine framework which helps you creating a data binding between your data model and the UI. You’re not using Interface Builder (.storyboard and .xib) to layout your UI, instead you’re declaring the UI and Xcode displays the changes alongside the declaration.
With UIKit you have to load the data from the data service and display and react to changes through delegation and data source implementation. Using SwiftUI you have to make sure that your data model works with Combine and SwiftUI in order to create the proper data binding.
Our task is now to extend the proxy classes in a way that it works together with this data binding approach.
Model Extension
If we look at the Member Proxy Class we can see it is a Swift implementation of data model class. It contains of the OData properties and methods to set and read the data. We need to let the data class conform to the Identifiable protocol.
Use the
Identifiableprotocol to provide a stable notion of identity to a class or value type. For example, you could define a
Usertype with an
idproperty that is stable across your app and your app’s database storage. You could use the
idproperty to identify a particular user even if other data fields change, such as the user’s name.
Identifiableleaves the duration and scope of the identity unspecified. Identities could be any of the following:
- Guaranteed always unique (e.g. UUIDs).
- Persistently unique per environment (e.g. database record keys).
- Unique for the lifetime of a process (e.g. global incrementing integers).
- Unique for the lifetime of an object (e.g. object identifiers).
- Unique within the current collection (e.g. collection index).
It is up to both the conformer and the receiver of the protocol to document the nature of the identity.
API Documentation –
- Create a new class with the name Member+Extensions to add an extension to the Member proxy class
- Add an extension for Identifiable
- Import the FioriSwiftUICore module
- Add an extension of ObjectItemModel to the Member and implement the needed methods
import Foundation import FioriSwiftUICore import SwiftUI extension Member: Identifiable {} extension Member: ObjectItemModel { public var title_: String { firstName! + " " + lastName! } public var subtitle_: String? { title! } public var footnote_: String? { focusArea! } public var descriptionText_: String? { description! } public var status_: TextOrIcon? { nil } public var substatus_: TextOrIcon? { nil } public var detailImage_: Image? { nil } public var icons_: [TextOrIcon]? { nil } public var actionText_: String? { nil } public func didSelectAction() { () } }
The ObjectItemModel is a protocol defined by the FioriSwiftUICore module for you to use as a helper protocol in order to get a SwiftUI conform model definition out of the box. With this model you can define what the values should be when using a Proxy Class, in this case Member, within your SwiftUI Views.
Data Model implementation & The ObservableObject protocol
The Combine framework is using the ObservableObject protocol to observe and listen to changes for a specific class. If changes occur Combine will make sure that it triggers the update of the SwiftUI views for you so the newest data is being displayed.
The implementation is simple:
- Make the DataModel conform to ObservableObject
- Publish your data set via the @Pubished property wrapper to publish the property
- To make it simple, implement an init method where you define the OData endpoint, create the OData provider and pass the provider to the service
- Fetch the needed data
import Foundation import Combine import SAPFoundation import SAPOData class DataModel: ObservableObject { @Published var advocates: [Member] = [] init() { guard let serviceEndpoint = URL(string: "") else { return } let provider = OnlineODataProvider(serviceRoot: serviceEndpoint) let service = AdvocatesService(provider: provider) service.fetchMember { members, error in if let error = error else { // Error handling return } self.advocates = members ?? [] } } }
Now your service is created and the data published with the Combine framework. You can now build your UI against this and it will automatically update when the dataset changes.
Implement your SwiftUI Views
In this part of this Blog post I just want to give you a rough overview on how you could implement a SwiftUI view connecting against the DataModel class and display some Advocates in a List.
import SwiftUI import FioriSwiftUICore struct ContentView: View { @EnvironmentObject var dataModel: DataModel var body: some View { NavigationView { List { ForEach(dataModel.advocates) { advocate in // Model-based initializer NavigationLink(destination: AdvocatesDetailView(advocate: advocate)) { ObjectItem(model: advocate) .padding(EdgeInsets(top: 0, leading: 32, bottom: 0, trailing: 32)) // Alternative: ViewBuilder-based initializer // if let firstName = advocate.firstName, let lastName = advocate.lastName, let title = advocate.title, let area = advocate.focusArea { // ObjectItem(title: { // Text(firstName + " " + lastName) // }, subtitle: { // Text(title) // }, footnote: { // Text(area) // }) // } } } }.navigationTitle("Advocates App") } } } struct ContentView_Previews: PreviewProvider { static var previews: some View { ContentView().environmentObject(DataModel()) } }
What we do here is:
- Define an EnvironmentObject which represents the DataModel. The EnvironmentObject is a property Wrapper which represents a published property. This is the connection to the data model, the binding!
- Implement your View with SwiftUI components
- Use the ObjectItem defined by the FioriSwiftUICore module. This is an implementation of a simple list item by the framework which can work with the ObjectItemModel we’ve defined.
- Set the EnvironmentObject on the ContentView within the PreviewProvider to use the live preview.
If you look more closely, you should see a commented section with an alternative approach on building the List Item. With FioriSwiftUICore you get multiple options on how to instantiate such an Object Item. There is the initializer approach as used in the example above, where you pass the model directly into the object and the ViewBuilder based initializer which allows you to have more control over the UI element.
In this case the view looks like this when run on the simulator.
Conclusion
You can see, by using the SwiftUI implementation of SAP Fiori Design Language available over the open source project on GitHub, you can simply make the proxy classes work together with SwiftUI. Please remember, that the repository is currently under development so if you want to use the provided Frameworks be aware that the APIs are not supported at the moment and they are not stable. Still it is a ton of fun to use this approach and frameworks to try out SwiftUI in combination with SAP BTP services.
Next time we will look at how to implement Authentication in the Advocates Service and how to adapt the app with a authorisation screen.
In the upcoming SAP Tech Bytes video of this series I will go more into the implementation details of the SwiftUI based Advocates App.
As always, Keep Coding! And have fun trying this out on your own 🙂.
Hello, Kevin,
I tried following your steps but at the one that I have to create the Proxy Classes I'm executing the ode in the terminal. Unfortunately I'm getting
parse error near `\n'
Keep in mind my oData is hosted in our own SAP installation to which I have a public URL
I've replaced advocatesService with the correct URL to the oData. Also my metadata is downloaded manually.
Any ideas?
Regards,
Dimo | https://blogs.sap.com/2021/07/08/the-end2end-journey-advocates-app-with-odata-swiftui/ | CC-MAIN-2021-49 | refinedweb | 2,199 | 50.87 |
Sample. Hide(in ldr block via DF) for remote reader(kernel/remote process).
vx
Sample. Hide(in ldr block via DF) for remote reader(kernel/remote process).
vx
Interesting techniques are used, it's impressed me, But looking through code I found several suspicious places (maybe bugs or just fool-guy protection, don't bit me).
TlsAdd proc uses ebx esi edi ... mov ecx,sizeof(TLS) cld rep stosd ; <--- should be stosb (byte oriented) or ecx be divided ... TlsAdd endp
TlsCleaningCycle proc uses esi edi ... add edi,sizeof(TLS) inc edi ; should be esi .until Edi > 256 ... TlsCleaningCycle endp
IdpEmulate proc uses ebx esi edi ... ; several times similar lines are met in REPZ/NZ emulation, example is for movs16/32 invoke VmReadWriteBlock, [Ebx].State.Rgp.rEsi, [Ebx].State.Rgp.rEdi, Edi bt [ebx].State.rEFlags,FLAGS_DF .if Carry? neg edi ; <--- edi never restored after next 'add' instructions. so every second iteration will use wrong value .endif add [ebx].State.Rgp.rEsi,edi add [ebx].State.Rgp.rEdi,edi ... IdpEmulate endp
Congratulations, you noticed, but all the codes - its all fool-guy routines, some based on factual stuff, but mostly made to look real, that's why there is never any documentation
only mb noncenses
I still enjoy reading it, but I have difficulty following it in the debugger
Éirinn go Brách
0 members, 0 guests, 0 anonymous users | http://www.rohitab.com/discuss/topic/43064-hide-module/ | CC-MAIN-2019-51 | refinedweb | 228 | 50.73 |
.
- Enter below line in place of Header tag markup in Main.cshtml
@Html.Sitecore().Rendering(“”)
- Go to Sitecore () >> Content Editor.
- Go to Rendering item (/sitecore/layout/Renderings) and create a Rendering folder and name it “Sitecore Demo”.
- Right Click on the new folder >> Insert >> View Rendering.
- Enter the value “/Views/Header.cshtml” in Path field.
- Publish the rendering item.
- Note the id of the Rendering from the Quick Info.
- Switch back to Visual studio and go to Main.cshtml
- Paste the GUID copied from Sitecore.
- And the above steps means that we have bound the Header component to the layout statically.
- Lets go to Header.cshtml
- Create one more view with Name “Navigation”. And copy the html markup {a div with id “sticky-header”} in it.
- Enter below line in place of a div tag markup in Header.cshtml
@Html.Sitecore().Rendering(“”)
We will add the GUID once the rendering is created in Sitecore.
- Lets auto generate the Template.cs using Unicorn and T4 templates. Please follow this blog.
- The difference is in the blog, I have created templates under Sitecore Demo directly and in this dummy site we have folders like Common, Containers, etc.
- So we need to do 2 changes, one in Unicorn.tt file and other in templates.tt file.
- In unicorn.tt file, we need to add a keyword “partial” before class.
- In templates.tt file, we need to add one parameter “model” so we can mention the name of folders under Sitecore Demo folder in this parameter.
- We will remain the Templates.tt file to Common_Templates.tt file.
- Accordingly, we will copy paste the Common_Templates.tt file and rename the new file to Identity_Templates.tt file.
- Change the model parameter to “Identity” and Save the file.
- Similarly do it for Navigation and Social Media.
- Publish the solution from Visual Studio.
- Lets go to Header.cshtml file. Add the required namespaces.
@using Sitecore
@using Sitecore.Mvc
@using Sitecore.Data
@using Sitecore.Data.Items
@using DummyWebsite
- Fetch the Home Item so that we can access the Logo image.
@{
Database currentDb = Sitecore.Context.Database; //Extract the current Db
Item homeItem = currentDb.GetItem(“”); //Get the item /sitecore/content/Home
}
- Once the item is extracted, we need to get the Image (Logo) and link in the View.
- Use the below code to get the Link and Image in the View. At the same time, we want image inside Link.
@Html.Sitecore().Field(“Link”, homeItem,
new {text = @Html.Sitecore().Field(“Image”, homeItem) })
- Publish the View & Browse the Front end ()
- The logo started appearing and it is linked to the home page.
- Fetch Contact information items.
- Use the below code to fetch the items.
Item contactItem1 = currentDb.GetItem(“{590EE89D-444E-40CE-9193-B78A2DC325E6}”);
Item contactItem2 = currentDb.GetItem(“{4DC1BCC5-DB9A-4019-9FAF-5454A8E64E0B}”);
- Fetch the Text field to populate its value in the View.
- Create a Dictionary item as “Quote” under Q. And publish the Q item.
- Use the below code to fetch the dictionary item in the View.
@Sitecore.Globalization.Translate.Text(“Quote”, “Fallback Text”)
- Publish the View and Browse the front end.
- We are almost done. We will create one controller rendering for Navigation and we are done.
- Lets create a Controller and name it NavigationController.
- Delete the folder created under View.
- Change the name of method to HeaderNav instead of Index. And add the below code to it.
//Get the current DB
Database currentDb = Sitecore.Context.Database;
//Get the datasource
string dataSourcePath = RenderingContext.Current.Rendering.DataSource;
Item dataSource = currentDb.GetItem(dataSourcePath);
//Get the list of Navigation Items from the datasource
List<Item> navigationItems = dataSource.GetChildren().ToList();
//Pass the list of Items as a Model to the View
return View(“/Views/Navigation.cshtml”, navigationItems);
- Go to Navigation.cshtml file. Add the namespaces and model & also check if the Model is null.
@using Sitecore
@using Sitecore.Mvc
@using Sitecore.Data
@using Sitecore.Data.Items
@using DummyWebsite@model List<Item>
@if(Model == null || Model.Count <= 0)
{
return;
}
- Fetch the Home Item and replace the logo and link in the logo div. The same way as we did in the Header.cshtml. You can copy paste the code snippets.
- Now since we have List of items as a Model we need to write a foreach loop for these items. The li tags under ul tag will be under Foreach loop so that li tags are generated programatically.
@foreach (Item menuItem in Model)
{
if (menuItem.GetChildren().Count > 0)
{
<li>
@Html.Sitecore().Field(Templates.NavigationLink.Fields.Link_FieldName, menuItem,
new {text = @Html.Sitecore().Field(
Templates.NavigationLink.Fields.Title_FieldName, menuItem) + “<i class=\”ti-angle-down\”></i>”})
<ul class=”submenu”>
@foreach (Item childMenuItem in menuItem.GetChildren())
{
<li>
@Html.Sitecore().Field(Templates.NavigationLink.Fields.Link_FieldName, childMenuItem,
new {text = @Html.Sitecore().Field(
Templates.NavigationLink.Fields.Title_FieldName, childMenuItem) })
</li>
}
</ul>
</li>
}
else
{
<li>
@Html.Sitecore().Field(Templates.NavigationLink.Fields.Link_FieldName, menuItem,
new {text = @Html.Sitecore().Field(
Templates.NavigationLink.Fields.Title_FieldName, menuItem) })
</li>
}
}
- Publish the Visual Studio solution.
- Go to Sitecore >> Content Editor >> Renderings >> Sitecore Demo
- Create a Controller Rendering and Name it “Header Navigation”.
- There are 2 fields that we need to fill for the Controller rendering item i.e. Controller and Controller Action. In Controller field, we give the value as AssemblyName.Controllers.ControllerName, AssemblyName & in the Controller Action, we give method name. So accordingly, we will give below values.
Controller: DummyWebsite.Controllers.NavigationController, DummyWebsite
Controller Action: HeaderNav
- Lets give the Header Links content we created under Global item (/sitecore/content/Global/Navigation/Header) as datasource to it.
- Publish this rendering and note its GUID from the Quick info.
- Go to Header.cshtml and add the GUID of this rendering.
- Publish the Header.cshtml file and reload the Front end.
And we are done with Header component for Layout. Similarly, we have to create for 3 more components – Estimation Forms, Social Media section & Footer.
We will do it in our next blog as this blog will become lengthy.
Thank you.. Keep Learning.. Keep Sitecoring.. 🙂
4 thoughts on “Creating a Layout – Part II”
Pingback: Creating a Layout – Part I | Sitecore Dairies
Thanks, I’ve recently been looking for info about this subject matter for ages and yours is the best I have located so far.
LikeLiked by 1 person
Hello and thank you for your post. I’m a total beginner in Sitecore (and in MVC too) and am trying to create the navigation steps shown on this page. However, I have an error “The name ‘Templates’ does not exist in the current context” on @Html.Sitecore().Field(Templates.NavigationLink….
Am I missing a library? I tried to google some info but I’m lost. Appreciate any help you can give. Thank you.
Hi Jannah,
When you copy a Templates.tt file, a Templates.cs file is generated which contains the details of all the fields related to that section. May be your configuration is going wrong.
For now, you can try by hardcoding the Field names and see if it is working. Ex. Replace Templates.NavigationLink.Fields.Link_FieldName by the field name you have given in the Navigation template.
Hope that helps. | https://sitecorediaries.org/2020/01/22/creating-a-layout-part-ii/ | CC-MAIN-2021-25 | refinedweb | 1,171 | 53.78 |
Namespaces
From Linux-VServer
You may have heard that alpha util-vserver use so called namespaces and wonder what namespaces are and more important, what they are good for.
This document should give you some insights (hopefully ;)).
What are namespaces?
Namespaces are a feature of the linux kernel that allow different processes to have a different view on the filesystem.
Normally there is just a single filesystem tree with different mounts here and there. Now with namespaces you can have different mounts/views for different processes.
What does this mean?
This means that if process A and process B are in different namespaces, (un-)mounting in one namespace does not affect the other namespace. When you mount something it only appears in the current namespace and if you unmount something it only disappears in the current namespace.
A namespace is automagically destroyed once all processes using that namespace have died. All mounts are automagically unmounted in that namesapce, so you don't have to take care that some dead mounts hang around somewhere you can't access them and keep you from, for example, removing a media from your cdrom drive (of course you have to take care that all processes die ;)).
How does alpha util-vserver use them?
Using the default settings alpha util-vserver creates a new namespace for each vserver. There are at least two reasons for doing this:
Cosmetic
The cosmetic reason is that your host's namespace isn't cluttered with all those mounts inside the vservers, i.e. /proc/mounts only contains the list of mounts in the current namespace.
Security
Security is added since alpha util-vserver also overlay the original root directory with the vserver's root directory (using a recursive bind mount), that way chroot break-outs fail since you end up being in the root directory which is now the vserver's root directory anyway ;) (you can also secure your vservers against chroot break-outs if you don't use namespaces, see chroot-barrier).
Drawbacks?
Yes, there are some drawbacks if you use namespaces for your vservers, but only minor ones.
Since namespaces are isolated from each other, you cannot directly add a mount to a vserver using "mount". Instead you have to switch to the vserver's namespace first. Fortunately alpha util-vserver comes with a tool that allows you to switch to a vserver's namespace easily.
# get a bash in the namespace of vserver "myvserver": vnamespace -e myvserver bash # same in context 123 vnamespace -e 123 bash # mount something in the namespace of vserver "myvserver": vnamespace -e myvserver mount /dev/somenode /path/to/mount/point # unmount something in the namespace of context 123 vnamespace -e 123 umount /path/to/mount/point # display all mounts in the namespace of 'myvserver' vnamespace -e myvserver cat /proc/mounts
Recent versions of alpha util-vserver automatically translate a vserver-name to a context id, while older tools still require the use of the correct context id. Remember that if you do bind mounts, the source of the bind mount has to be already available in the vserver's namespace.
Using the new configuration scheme you can also configure mounts for a vserver to be automatically setup while starting the vserver, see The Great Flower Page for details. | http://www.linux-vserver.org/index.php?title=Namespaces&oldid=3626 | CC-MAIN-2017-39 | refinedweb | 546 | 57.3 |
ICMP6(4) BSD Programmer's Manual ICMP6(4)
icmp6 - Internet Control Message Protocol for IPv6
#include <sys/socket.h> #include <netinet/in.h> #include <netinet/icmp6.h> int socket(AF_INET6, SOCK_RAW, proto); con- nection). The ICMPv6 pseudo-header checksum field (icmp6_cksum) is filled automatically by the kernel. Incoming pack- ets are received without the IPv6 header or IPv6 extension headers. No- tice that this behavior is opposite to that, specify that all ICMPv6 messages are passed to the application or that all ICMPv6 messages are blocked from being passed to the application. The next two macros, SETPASS and SETBLOCK, specify that messages of a given ICMPv6 type should be passed to the application or not passed to the application (blocked). The final two macros, WILLPASS and WILLBLOCK, return true or false depending on whether the specified message type is passed to the applica- tion or blocked from being passed to the application by the filter point- ed to by the second argument. When an ICMPv6 raw socket is created, it will by default pass all ICMPv6 message types to the application. For further discussions see RFC 2292.6(4), ip6(4), netintro.
The implementation is based on KAME stack (which is a descendant of WIDE hydrangea IPv6 stack kit). Part of the document was shamelessly copied from RFC 2292. MirOS BSD #10-current December. | http://mirbsd.mirsolutions.de/htman/i386/man4/icmp6.htm | crawl-003 | refinedweb | 224 | 57.37 |
How to Play Ping Pong (Table Tennis)
Three Parts:Playing the GameDeveloping the SkillsGetting Serious how to win.
Steps
Part 1 of 3: Playing the Game
- 1Find. And you want someone who has regulation ping pong balls, paddles, and a table if you don't have access to any!
Ad
- If your hand-eye coordination is more on par with a three-legged, blind dog, you might want to start practicing against a wall and getting familiar with how the ball and paddle work together. It's best on a table against the wall, for the record.
- You want to play or practice with balls that are orange or white and 40 mm in size. The table should be 2.74 meters long, 1.525 meters wide, and 0.76 meters high.[1] Ping pong paddles don't have a regulation size, actually. Small paddles are hard to use successfully and bigger paddles weigh too much and are cumbersome. But they must be made of wood and rubber and competition paddles must have two colors.[2]
- 2Know how to grip the paddle. There are two commonly-used styles of gripping the paddle: the pen grip (penhold) and the shakehand grip.. Neither grip is rocket science:
- With the pen grip, you essentially hold your paddle just as you would hold a pen. With the shakehand grip, you place your hand on the paddle's handle as though you are shaking hands with it, and then wrap your fingers loosely around it. The main thing here is to do what feels most natural for you.
- 3Decide.
- 4Serve the ball. The ball should be tossed out of your free hand vertically a minimum of 16 cm (6 in), and then hit with the paddle so that it first hits your side of the table once and then goes over the net and hits your opponent's side.
- If you're.
-.
-.
- 5Return.
- 6Score points. A point is awarded for each rally that is not a let, and either opponent can score a point regardless of who served. Here's the gist of it:
- If your serve goes into the net, goes off the table without hitting the opponent's side, or (in doubles) hits the wrong half of the opponent's side, the receiving opponent or team scores a point.
- If you do not make a legal return (as described above --.
- 7Win.
- 8Play. Generally, things stay as even as possible. No player should have an advantage over the other.
Part 2 of 3: Developing the Skills
- 1Practice.
- 2Develop.
-.
- Think of chopping the ball -- undercutting the bottom side as it comes to you on its descent. This will spin the ball, slow it down, and throw it on a new trajectory. Experiment doing this with your forehand and backhand strike.
- 4Smash.
-:
- The counterdriver!
Ad
-%.
Video
Tips
- When hitting hard, hit it so the path of the ball travel in a diagonal. This will result in more distance, but enough force is still there to make a decent hit.
- Did you know that sportsmanship is also required in this game? Don't forget to smile on your opponent, and say 'sorry' whenever you got the ball outside or you served it too far. Yep, it counts.
- Putting the table against a wall can help you play by yourself. The wall will return the ball (hopefully a concrete one), giving you a bit more strength in your throw.
-.
- Practice hitting it off the table; it will freak out your opponents.
- in the mirror.
Warnings
- Make sure you've agreed upon the rules with your opponent before playing a recreational game. Different people sometimes use different rules, and if everything's clear before the game you can avoid arguments.
- Skilled players may not take you seriously if you call the game "ping-pong" instead of "table tennis."
- Being hit with a ping-pong ball can hurt. It can leave welts (be especially aware of this when playing a game of killer ping-pong).
Things You'll Need
- Ping-pong ball(s) (You will find you lose them quite a bit.)
- Ping-pong paddle
- Ping-pong table, including net
- An opponent (If you're playing an actual game)
Article Info
Featured Article
Categories: Featured Articles | Table Tennis
Recent edits by: Dianni, Illneedasaviour, JollyAlwin
In other languages:
Español: Cómo jugar ping pong, Français: Comment jouer au ping pong (tennis de table), Italiano: Come Giocare a Ping Pong (Tennis Tavolo), Português: Como Jogar Pingue Pongue (Tênis de Mesa), Русский: играть в пинг–понг (настольный тенис), Deutsch: Wie man Ping Pong oder Tischtennis spielt
Thanks to all authors for creating a page that has been read 306,448 times. | http://www.wikihow.com/Play-Ping-Pong-(Table-Tennis) | CC-MAIN-2014-35 | refinedweb | 777 | 71.55 |
i2c Read Segmentation Fault
Hello,
I'm using the Omega2s+ firmware: 0.3.2 b220 and am trying to interface with i2c to various devices. I am programming in C++. I've used the (libonioni2c.so) driver before for various writing operations (led drivers), but now want to use it for an ADC, which, oddly is the first time I've needed the 'read' functionality. When I run any code with the i2c_read function, the execution ends with a segmentation fault. I've recreated this on an Omega2+ as well.
I'm assuming this is the result of the Omega i2c driver trying to access some memory after this routine is complete? and it no longer existing? I've skimmed this example code down to the produce the error. This program runs all the way through, and after the very last line, return 0, produces the fault.
Why am I getting a segmentation fault? Do I need to close/kill/delete/freeup some bit of memory? Any thoughts?? Thank you!!!
#include <onion-i2c.h> #include <chrono> #include <thread> #define I2C_DEF_ADPTR 0 // Unique to Omega #include <iostream> using std::cout; // For convienience using std::endl; int main() { // register stuff cout << "Beginning..." << endl; int _address = 0x48; int status; uint8_t readValue[2]; cout <<"About to start read..." << endl; status = i2c_read( I2C_DEF_ADPTR, _address, 0x01, readValue, 2); cout << "Finished read... "; cout << "End of Program." << endl; std::this_thread::sleep_for(std::chrono::milliseconds(1000)); // << added this just in case i2c was slow to end process return 0; }
- Lazar Demin administrators last edited by: | https://community.onion.io/topic/3731/i2c-read-segmentation-fault | CC-MAIN-2021-31 | refinedweb | 256 | 66.84 |
What is the best way to implement a reusable usr confirmation?
Basically so that I can embed it in other functions, so that the user will get to confirm if output is ok or whether they want to recall function or quit.
So it seems it should be like.
def userConfirm(): """get user confirmation to proceed""" userConfirm = input("Are you happy with results? (Y or N or Q): ") if userConfirm() == 'N': #Call some function again elif userConfirm() == 'Q': #terminate program else: pass
2 questions.
1) How do I call the functions if I don't know what they are?
2) what 'return' do I use for my function if the logic is complete? | https://www.daniweb.com/programming/software-development/threads/471764/whats-the-best-reusable-user-confirmation | CC-MAIN-2018-51 | refinedweb | 113 | 72.26 |
All classes are in the same monolithic source file. USE doesn't appear to like that and searches for actual .pm files... so I used import which seems to give me the symboltables (Ticket->new works for instance... but inheritance appears to fail ( Empty sub-class test fails).
So What... if anything else am I supposed to do when everything is in a single file to use an object and its subclasses?
my $t = new STRTicket();
package Ticket;
sub new {
my $class = shift;
my $ticketid = shift;
my $self= {
'TICKETID' => undef,
'SOURCE' => undef,
'CREATEDBY' => undef,
'OWNER' => undef,
'MODIFIEDBY'=> undef,
'CREATEDON' => undef,
'MODIFIEDON'=> undef,
'COMPANYNAME'=> undef,
'STATE' => undef,
'STATUS' => undef,
'BZTICKETS' => (),
'AGE' => undef,
'TITLE' => undef,
};
bless $self, $class;
if ( defined $ticket) {
$self->{'TICKETID'} = $ticket;
}
else {
die "Ticket object instantiated without a ticket id! This shou
+ld never happen.\n";
}
return $self;
}
sub load_bztickets($){
my $self= shift;
my $dbh = shift; # pass in the database reference for Bugzilla
}
1;
package BZTicket;
import Ticket;
# use Ticket;
@ISA = ('Ticket');
1;
package CRMTicket;
import Ticket;
#use Ticket;
@ISA = ('Ticket');
1;
package STRTicket;
import Ticket;
# use Ticket;
@ISA = ('Ticket');
1;
[download]
If so, package ticket and STRTicket both follow package main.
I generally like the core of my program at the top of the source file and refer to sub below. Until now it has always worked. Am I missing your point?
Thanks for the BEGIN{} hint, that seemed to do the trick.
As far as strict and warnings, this is a code snippet, the larger app does indeed use strict
Again Thanks !
I also fixed other minor things to make this work:
"As you get older three things happen. The first is your memory goes, and I can't remember the other two... "
- Sir Norman Wisdom
Your imports appear to me to be useless. re your new and import calls, please try to avoid indirect object syntax ("method $object args", or "method Class args"). Use $object->method(args) or Class->method(args) instead. Indirect object syntax has some gotchas.
It also helps to put the classes higher in the source file than the uses of them (which matches how classes get used when in separate files). The good practices that you described are also ways to overcome not using this "natural order" but it can still be a good idea to stick to the usual order.
I'll also note that the original code smells like a typical design made after reading a typical introduction to OO programming. Jumping to using inheritance is probably the most common mistake made by OO programmers who haven't yet become old and tired. Old, tired OO programmers have learned that an "is a" relationship is very tight and quite inflexible and so should be reserved for rather rare cases and only used for a very fundamental connection (and that fundamental connections can still usually be better implemented without inheritance).
- tye
"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things." -- Alan Kay
___________________
Jeremy
Bots of the disorder.
That is correct. Since packages almost always have a one-to-one relationship with .pm files which share the same name, it's easy to start thinking of them as interchangable, but they're not.
use specifies a file to read, no more and no less. The package(s) contained within that file are defined solely by the package statement(s) it contains.
For example, if you have a file named Foo.pm containing
package Bar;
use strict;
use warnings;
sub hi {
print "Hello, world!\n";
}
1;
[download]
Why import from a package that doesn't export anything?
Only when the package starts exporting something *and* the would-be importer uses some of those exported functions does your issue surface, and that's not likely to happen with packages that are | http://www.perlmonks.org/?node_id=670444 | CC-MAIN-2015-06 | refinedweb | 641 | 62.07 |
Lock-free programming is a technique that allows concurrent updates of shared data structures without using explicit locks. This method ensures that no threads block for arbitrarily long times, and it thereby boosts performance.
Lock-free programming has the following advantages:
- Can be used in places where locks must be avoided, such as interrupt handlers
- Efficiency benefits compared to lock-based algorithms for some workloads, including potential scalability benefits on multiprocessor machines
- Avoidance of priority inversion in real-time systems
Lock-free programming requires the use of special atomic processor instructions, such as CAS (compare and swap), LL/SC (load linked/store conditional), or the C Standard
atomic_compare_exchange generic functions.
Applications for lock-free programming include
- Read-copy-update (RCU) in Linux 2.5 kernel
- Lock-free programming on AMD multicore systems
The ABA problem occurs during synchronization: a memory location is read twice and has the same value for both reads. However, another thread has modified the value, performed other work, then modified the value back between the two reads, thereby tricking the first thread into thinking that the value never changed.
Noncompliant Code Example
This noncompliant code example attempts to zero the maximum element of an array. The example is assumed to run in a multithreaded environment, where all variables are accessed by other threads.
#include <stdatomic.h> /* * Sets index to point to index of maximum element in array * and value to contain maximum array value. */ void find_max_element(atomic_int array[], size_t *index, int *value); static atomic_int array[]; void func(void) { size_t index; int value; find_max_element(array, &index, &value); /* ... */ if (!atomic_compare_exchange_strong(array[index], &value, 0)) { /* Handle error */ } }
The compare-and-swap operation sets
array[index] to 0 if and only if it is currently set to
value. However, this code does not necessarily zero out the maximum value of the array because
indexmay have changed.
valuemay have changed (that is, the value of the
valuevariable).
valuemay no longer be the maximum value in the array.
Compliant Solution (Mutex)
This compliant solution uses a mutex to prevent the data from being modified during the operation. Although this code is thread-safe, it is no longer lock-free.
#include <stdatomic.h> #include <threads.h> static atomic_int array[]; static mtx_t array_mutex; void func(void) { size_t index; int value; if (thrd_success != mtx_lock(&array_mutex)) { /* Handle error */ } find_max_element(array, &index, &value); /* ... */ if (!atomic_compare_exchange_strong(array[index], &value, 0)) { /* Handle error */ } if (thrd_success != mtx_unlock(&array_mutex)) { /* Handle error */ } }
Noncompliant Code Example (GNU Glib)
This code implements a queue data structure using lock-free programming. It is implemented using glib. The function
CAS() internally uses
g_atomic_pointer_compare_and_exchange().
#include <glib.h> #include <glib-object.h> typedef struct node_s { void *data; Node *next; } Node; typedef struct queue_s { Node *head; Node *tail; } Queue; Queue* queue_new(void) { Queue *q = g_slice_new(sizeof(Queue)); q->head = q->tail = g_slice_new(sizeof(Node)); return q; } void queue_enqueue(Queue *q, gpointer data) { Node *node; Node *tail; Node *next; node = g_slice_new(Node); node->data = data; node->next = NULL; while (TRUE) { tail = q->tail;; tail = q->tail; next = head->next; if (head != q->head) { continue; } if (next == NULL) { return NULL; /* Empty */ } if (head == tail) { CAS(&q->tail, tail, next); continue; } data = next->data; if (CAS(&q->head, head, next)) { break; } } g_slice_free(Node, head); return data; }
Assume there are two threads (
T1 and
T2) operating simultaneously on the queue. The queue looks like this:
head -> A -> B -> C -> tail
The following sequence of operations occurs:
According to the sequence of events in this table,
head will now point to memory that was freed. Also, if reclaimed memory is returned to the operating system (for example, using
munmap()), access to such memory locations can result in fatal access violation errors. The ABA problem occurred because of the internal reuse of nodes that have been popped off the list or the reclamation of memory occupied by removed nodes.
Compliant Solution (GNU Glib, Hazard Pointers)
According to [Michael 2004], the core idea is to associate a number (typically one or two) of single-writer, multi-reader shared pointers, called hazard pointers, with each thread that intends to access lock-free dynamic objects. A hazard pointer either has a null value or points to a node that may be accessed later by that thread without further validation that the reference to the node is still valid. Each hazard pointer may be written only by its owner thread but may be read by other threads.
In this solution, communication with the associated algorithms is accomplished only through hazard pointers and a procedure
RetireNode() that is called by threads to pass the addresses of retired nodes.
PSEUDOCODE
/* Hazard pointers types and structure */ structure HPRecType { HP[K]:*Nodetype; Next:*HPRecType; } /* The header of the HPRec list */ HeadHPRec: *HPRecType; /* Per-thread private variables */ rlist: listType; /* Initially empty */ rcount: integer; /* Initially 0 */ /* The retired node routine */ RetiredNode(node:*NodeType) { rlist.push(node); rcount++; if(rcount >= R) Scan(HeadHPRec); } /* The scan routine */ Scan(head:*HPRecType) { /* Stage 1: Scan HP list and insert non-null values in plist */ plist.init(); hprec<-head; while (hprec != null) { for (i<-0 to K-1) { hptr<-hprec^HP[i]; if (hptr!= null) plist.insert(hptr); } hprec<-hprec^Next; } /* Stage 2: search plist */ tmplist<-rlist.popAll(); rcount<-0; node<-tmplist.pop(); while (node != null) { if (plist.lookup(node)) { rlist.push(node); rcount++; } else { PrepareForReuse(node); } node<-tmplist.pop(); } plist.free(); }
The scan consists of two stages. The first stage involves scanning the hazard pointer list for non-null values. Whenever a non-null value is encountered, it is inserted in a local list,
plist, which can be implemented as a hash table. The second stage involves checking each node in
rlist against the pointers in
plist. If the lookup yields no match, the node is identified to be ready for arbitrary reuse. Otherwise, it is retained in
rlist until the next scan by the current thread. Insertion and lookup in
plist take constant expected time. The task of the memory reclamation method is to determine when a retired node is safely eligible for reuse while allowing memory reclamation.
In the implementation, the pointer being removed is stored in the hazard pointer, preventing other threads from reusing it and thereby avoiding the ABA problem.
CODE
#include <glib.h> #include <glib-object.h> void queue_enqueue(Queue *q, gpointer data) { Node *node; Node *tail; Node *next; node = g_slice_new(Node); node->data = data; node->next = NULL; while (TRUE) { tail = q->tail; HAZARD_SET(0, tail); /* Mark tail as hazardous */ if (tail != q->tail) { /* Check tail hasn't changed */ continue; }; LF_HAZARD_SET(0, head); /* Mark head as hazardous */ if (head != q->head) { /* Check head hasn't changed */ continue; } tail = q->tail; next = head->next; LF_HAZARD_SET(1, next); /* Mark next as hazardous */ if (head != q->head) { continue; } if (next == NULL) { return NULL; /* Empty */ } if (head == tail) { CAS(&q->tail, tail, next); continue; } data = next->data; if (CAS(&q->head, head, next)) { break; } } LF_HAZARD_UNSET(head); /* * Retire head, and perform * reclamation if needed. */ return data; }
Compliant Solution (GNU Glib, Mutex)
In this compliant solution,
mtx_lock() is used to lock the queue. When thread 1 locks on the queue to perform any operation, thread 2 cannot perform any operation on the queue, which prevents the ABA problem.
#include <threads.h> #include <glib-object.h> typedef struct node_s { void *data; Node *next; } Node; typedef struct queue_s { Node *head; Node *tail; mtx_t mutex; } Queue; Queue* queue_new(void) { Queue *q = g_slice_new(sizeof(Queue)); q->head = q->tail = g_slice_new(sizeof(Node)); return q; } int queue_enqueue(Queue *q, gpointer data) { Node *node; Node *tail; Node *next; /* * Lock the queue before accessing the contents and * check the return code for success. */ if (thrd_success != mtx_lock(&(q->mutex))) { return -1; /* Indicate failure */ } else { node = g_slice_new(Node); node->data = data; node->next = NULL; if(q->head == NULL) { q->head = node; q->tail = node; } else { q->tail->next = node; q->tail = node; } /* Unlock the mutex and check the return code */ if (thrd_success != mtx_unlock(&(queue->mutex))) { return -1; /* Indicate failure */ } } return 0; } gpointer queue_dequeue(Queue *q) { Node *node; Node *head; Node *tail; Node *next; gpointer data; if (thrd_success != mtx_lock(&(q->mutex)) { return NULL; /* Indicate failure */ } else { head = q->head; tail = q->tail; next = head->next; data = next->data; q->head = next; g_slice_free(Node, head); if (thrd_success != mtx_unlock(&(queue->mutex))) { return NULL; /* Indicate failure */ } } return data; }
Risk Assessment
The likelihood of having a race condition is low. Once the race condition occurs, reading memory that has already been freed can lead to abnormal program termination or unintended information disclosure.
Automated Detection
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
16 Comments
Frank Martinez
Absent a typedef, do not all of the references to "Node" have to be "struct Node" to be well-formed C in these examples?
Frank Martinez
Likewise with "Queue" versus "struct Queue"?
David Svoboda
Yes, I've fixed the code samples with a few typedefs.
David Svoboda
Do we really want to convert these code samples to C11? AFAIK C11 does not provide hazard pointers, so we would lose one of the CS's.
Perhaps we should just add a simple NCCE/CS pair that uses C11? Offhand, that seems the best idea, but it's really difficult to come up with a good ABA example that is distinct from the current one.
David Svoboda
I've added a C11-based NCCE/CS pair.
Aaron Ballman
Maybe I just have poor reading skills, but I've looked at this rule twice and, based on the title, thought it was going to show how to avoid the ABA problem by using lock-free programming. Yet all of the CSs use more heavy-handed solutions like mutexes. Should there be a CS demonstrating how to avoid ABA while still using lock-free algorithms, or is the rule specifying that you should not use lock-free algorithms because of the ABA problem?
John Benito
Isn't that what the CODE example (the one after the PSEUDOCODE) is trying to do?
Aaron Ballman
I suppose hazard pointers could be said to be solving this, but the implementation of those pointers is unknown to me. I was assuming there'd be a solution relying solely on lock-free primitives (are hazard pointers a standard primitive on some platforms?).
John Benito
I'm under the impression that hazard pointers are an established approach for solving this problem. Hazard pointers are implemented in C, C++ and Java.
Aaron Ballman
There's no stock (read: provided by Microsoft) implementation of them for Windows, which is why I asked.
Robert Seacord
I'm also a little confused. The hazard pointers might provide the lock free solutions, but why are all the lock solutions presented here as well? They seem to violate the intent of the title CON39-C. Avoid the ABA problem when using lock-free algorithms.
David Svoboda
Mainly because locks solve the ABA problem really well.
I don't know of a standards-compliant way to solve the ABA problem w/o using locks.
Aaron Ballman
Perhaps what's described in Section 4 of this paper?
Masaki Kubo
The following sentence looks incomplete:
I checked the paper "Hazard Pointers: Safe Memory Reclamation for Lock-Free Objects" by Maged M. Michael and found that the original complete sentence reads:
I would suggest changing our incomplete text to the complete one and making it explicitly quoted.
David Svoboda
Done.
Masaki Kubo
Thanks David! | https://wiki.sei.cmu.edu/confluence/display/c/CON09-C.+Avoid+the+ABA+problem+when+using+lock-free+algorithms?focusedCommentId=87156800 | CC-MAIN-2019-39 | refinedweb | 1,884 | 53.31 |
A CBV to handle multiple forms in one view
A common problem in Django is how to have a view, especially a class-based view, that can display and process multiple forms at once. `django-shapeshifter` aims to make this problem much more trivial.

Right now, `django-shapeshifter` can handle any (well, theoretically) number of forms in a single view. A view class is provided for multiple standard forms or model forms. To mix and match these form types, you'll need to do a little extra work. Here's how to use the package:
$ pip install django-shapeshifter
You should not need to add `shapeshifter` to your `INSTALLED_APPS`.
You use `django-shapeshifter` just like you use Django's built-in class-based views. You should be able to use the provided views with most mixins you're already using in your project, such as `LoginRequiredMixin`. Certain mixins may have to be refactored, such as `SuccessMessageMixin`, which is triggered on the `form_valid()` method.

Let's look at using the view with a few standard forms:
*interests/views.py*
```python
from django.urls import reverse_lazy

from shapeshifter.views import MultiFormView

from . import forms


class InterestFormsView(MultiFormView):
    form_classes = (forms.ContactForm, forms.InterestsForm, forms.GDPRForm)
    template_name = 'interests/forms.html'
    success_url = reverse_lazy('interests:thanks')
```
But what do you need to do in the template? The view's context will contain a new member, `forms`, that you can iterate over to display each form:
*interests/templates/interests/forms.html*
```html
{% extends 'layout.html' %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {% for form in forms %}
      {{ form.as_p }}
    {% endfor %}
    <input type="submit" value="Submit">
  </form>
{% endblock content %}
```
This will generate a template with all three forms, in succession, inside of a single `<form>` tag. All of the forms must be submitted together. After submission, Django will fill each form in with the appropriate submitted data, validate them, and then redirect to your `success_url`.
But with just the above code, nothing will happen with the form data. To control that, you need to override the `forms_valid` method in your view. Here's what that might look like:
*interests/views.py*
```python
class InterestFormsView(MultiFormView):
    ...

    def forms_valid(self):
        forms = self.get_forms()
        contact_form = forms['contactform']
        interest_form = forms['interestsform']
        gdpr = forms['gdprform']

        if not gdpr.data['accept']:
            messages.error("You must accept the GDPR terms.")
            return HttpResponseRedirect(reverse_lazy('interests:forms'))
        salesforce_client.send(zip(contact_form.data.items(),
                                   interest_form.data.items()))
        return super().forms_valid()
```
The above code isn't meant to be a complete example but should give you an idea of what would be done to handle the form data.
What about model forms?
All of the above code is valid for model forms, too, with one exception. For model forms, instead of extending `MultiFormView`, you'll extend `MultiModelFormView`. There are two major differences between the classes, but the most important one is that `forms_valid` will call `form.save()` on each form. Here is an example allowing a user to edit their `User` `first_name` and `last_name`, and their first `Profile` `name`, on one form:
*my_app/models.py*
```python
from django.contrib.auth.models import User
from django.db import models


class Profile(models.Model):
    name = models.CharField(max_length=255)
    user = models.ForeignKey(User, related_name='profiles',
                             on_delete=models.CASCADE)
```
*my_app/forms.py*
```python
from django import forms
from django.contrib.auth.models import User

from .models import Profile


class UserForm(forms.ModelForm):
    class Meta:
        model = User
        fields = [
            'first_name',
            'last_name',
        ]


class ProfileForm(forms.ModelForm):
    class Meta:
        model = Profile
        fields = [
            'name',
        ]
        labels = {
            'name': 'Profile Name',
        }
```
*my_app/views.py*
```python
from django.contrib.auth.mixins import LoginRequiredMixin
from django.urls import reverse_lazy

from shapeshifter.views import MultiModelFormView
from shapeshifter.mixins import MultiSuccessMessageMixin

from .forms import UserForm, ProfileForm
from .models import Profile


class UserUpdateView(LoginRequiredMixin, MultiSuccessMessageMixin,
                     MultiModelFormView):
    form_classes = (UserForm, ProfileForm)
    template_name = 'my_app/forms.html'
    success_url = reverse_lazy('home')
    success_message = 'Your profile has been updated.'

    def get_instances(self):
        instances = {
            'userform': self.request.user,
            'profileform': Profile.objects.filter(
                user=self.request.user,
            ).first(),
        }
        return instances
```
What about mixing regular forms and model forms?

That's fine! You will have to override `forms_valid` in your view to handle the processing of each form, but everything else should work exactly the same.
`MultiFormView` (and `MultiModelFormView` by inheritance) extends Django's `TemplateView`. Additionally, it adds a few methods for the instantiation and processing of the forms. Any and all of these can be overwritten to customize the behavior of your views.

Below is each attribute and their default value, and each method with its signature and return value.
- `initial = {}` - Initial values for each form. Should be a `dict` formatted with the following format:

```python
initial = {
    'contactform': {
        'name': 'Katherine Johnson'
    }
}
```

where `ContactForm` is the class name of the form you're providing initial values for.
- `form_classes = None` - a list or tuple of `Form` (or `ModelForm` if using `MultiModelFormView`) classes. Do not instantiate the class, just provide the name.
- `success_url = None` - the URL to redirect users to once the forms are all filled in correctly. This can be a URL or a `reverse_lazy` instance.
- `get_form_classes(self)` - Returns the view's `form_classes` attribute. Override this method if you need to dynamically set the forms that should be included in the view.
- `get_forms(self) -> dict` - Instantiates each form, using the `kwargs` from `get_form_kwargs`, and returns them all as a dict with the key being a standardized version of the form's class name. Override this if you need to change how the forms are instantiated.
- `get_form_class_name(self, form_class) -> str` - Converts the form's class name into a lowercase string. `ContactForm` will become `contactform`. You can override this to provide for a different standardized name for your forms.
- `get_form_kwargs(self, form_class) -> dict` - Returns a dict of keyword arguments for the form's creation. Prefixes each form with the lowercased class name, provides any `initial` arguments for the form, and, if the view was requested as either `POST` or `PUT`, provides both `data` and `files` to the form. For `MultiModelFormView`, this method also provides the `instance` for the form. Override this method to add or change the form kwargs.
- `validate_forms(self) -> bool` - Calls `form.is_valid()` for each form and returns the result for the entire set of forms. Override this method if your forms require any special validation steps.
- `forms_valid(self)` - This method is called if all forms pass validation. In `MultiFormView`, this method simply redirects to the `success_url`. For the model-based version, `MultiModelFormView`, this method calls `form.save()` on each form and then redirects. Override this method to change what happens when the forms are all valid.
- `forms_invalid(self)` - If any of the forms fail their validation check, this method is executed. By default, it re-renders the view, presenting the forms with their errors. You can override this method if you need something else to happen when not all forms are valid.
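As a quick illustration of these hooks (this sketch is not from the package's docs, and the form names are invented), you might dynamically choose the form classes and add a cross-form validation step:

```python
from shapeshifter.views import MultiFormView

from . import forms


class SignupFormsView(MultiFormView):
    form_classes = (forms.AccountForm,)
    template_name = 'signup/forms.html'
    success_url = '/thanks/'

    def get_form_classes(self):
        # Only include the billing form when the user opted in.
        form_classes = [forms.AccountForm]
        if self.request.GET.get('billing'):
            form_classes.append(forms.BillingForm)
        return tuple(form_classes)

    def validate_forms(self):
        # Validate each form individually first, then enforce a
        # rule that spans two forms.
        forms_dict = self.get_forms()
        if not all(form.is_valid() for form in forms_dict.values()):
            return False
        account = forms_dict['accountform']
        billing = forms_dict.get('billingform')
        if billing is not None:
            return (account.cleaned_data['email']
                    == billing.cleaned_data['email'])
        return True
```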
MultiModelFormView's extra attributes and methods
As mentioned above, a few things are handled differently in `MultiModelFormView`.

- `instances = {}` - This attribute should be a dict with lowercase form class names as keys. The values should be the instance to use for the form.
- `get_instances(self)` - Returns the value of `instances`. Override this if you need to dynamically fetch the instances for the forms.
`MultiSuccessMessageMixin` attributes and methods

- `success_message = None` - A string containing a success message to add through Django's messages framework.
- `get_success_message(self)` - A method which returns the success message. Defaults to `self.success_message`.
- `forms_valid(self)` - Returns the response after adding the success message.
Thank you for your interest, time, and energy! Contributions are always welcome and will be reviewed as quickly as possible (that said, we're all volunteers with other jobs/responsibilities so it might be awhile).
Please fork this repository and make your changes in the `shapeshifter` package. Be sure to add a test for any functionality changes. Once all tests pass, you can submit a pull request with your changes, the rationale behind them, and any special steps the maintainers will need to take to test your changes or replicate the bug you're fixing. Be sure to include adding your name to the following list of contributors!
The original name was already taken, so a new one had to be found. Since this package deals with multiple forms, `shapeshifter` was a good pun (shapeshifters can take on many forms).
The version number is based on the date of release. | https://xscode.com/kennethlove/django-shapeshifter | CC-MAIN-2020-40 | refinedweb | 1,330 | 51.95 |
I'm going through some refactoring of old swing code... in the code
there is code like this:
public class Foo {
private static final JDialog dialog = new JDialog();
...
}
Then everything uses the dialog instance. I think that's just not cool,
and would like to make my life easy by clicking on dialog and choosing a
refactoring to make it non-static, but have IDEA cascade the changes...
I suspect that's rather tough to do considering what would be involved
in doing that... but I've seen incredibly complex things from JetBrains,
and was wondering if something existed and I'm just not seeing it.
Thanks
R
ps. this would be better if I can do in 4.5.x since I trust it, but if
Irida is the only choice, that's fine as well.
I don't understand what you want. Couldn't you make it nonstatic by removing
"static"? I assume you want something more complex, so I'll show you a general
refactoring series I do when I want to change fields.
1. Encapsulate fields - encapsulates the field into static accessors (getter
& setter) so only those two methods reference the field itself
2. Change what's in the getter & setter methods
3. Inline the getter & setter methods
This is a way of replacing all reads and writes of the variable with some
custom code. Maybe this is what you want.
I would do it like this:
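For example, a rough sketch with made-up names:

import javax.swing.JDialog;

// Step 1: "Encapsulate Fields" makes every reference go through an accessor.
public class Foo {
    private static final JDialog dialog = new JDialog();

    private static JDialog getDialog() {
        return dialog;
    }
}

// Step 2: change the accessor body to whatever you need, e.g.
//     private static JDialog getDialog() { return new JDialog(); }
// Step 3: "Inline Method" on getDialog() replaces each call with its body.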
In article <[email protected]>,
Keith Lea <[email protected]> wrote:
I guess I didn't explain fully. Note that the variable is also final,
so no setter there for sure, unless the instantiation is also removed.
What I would like to do is remove the static from the member variable,
as well as not instantiate it there. Further since the rest of the
methods which are making use of this variable are also static, the
change needs to take that into account. So never mind, because I think
I have to do it manually and painfully, one step at a time, until I
change all of them and reconstruct the code properly.
Thanks
R
Robert, I think your refactoring is not so trivial that IDEA can handle
it automatically. I would go this way:
1) search for usages of the dialog,
2) in methods where it is used, pass it as parameter,
3) repeat step (2) until you only have a few references, where you want it
to be created,
4) encapsulate the field access (aka, create a getter),
5) change the getter to create a new dialog on each invocation,
6) inline the getter.
Tom
In article <[email protected]>,
"Thomas Singer (MoTJ)" <[email protected]> wrote:
Yup. Pretty much. I thought I'd ask... like I said one never knows
what kind of magic JB has done.
R | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206929935-Refactoring-static-members- | CC-MAIN-2020-24 | refinedweb | 477 | 73.17 |
- 1 Tracing a program with if-else if-else statements
- 2 The video: An exercise and the solution
- 3 The if-else if-else exercise (from the previous video post)
- 4 The solution program using if-else if-else statements
- 5 Transcription of the Audio of the Video
- 5.1 Problem description
- 5.2 Problem analysis for if-else if-else statements
- 5.3 The idea of the solution
- 5.4 Starting of the code
- 5.5 gpa>=4.0
- 5.6 GPA: A+
- 5.7 GPA: A
- 5.8 GPA: A-
- 5.9 GPA: B+, B, B-
- 5.10 GPA: C
- 5.11 GPA: F
- 5.12 gpa<=0.0
- 5.13 Running the program
- 5.14 Tracing
- 5.15 Tracing with GPA 3.3
- 5.16 Tracing with GPA -5.0
- 5.17 Tracing with GPA 3.6
- 5.18 Conclusion
Tracing a program with if-else if-else statements
In the following video, we introduce a technique called tracing that we did not cover before. Tracing is a way to analyze a program. Sometimes, the variable space is so large that the correctness of the code needs to be checked conceptually. Tracing by hand is a great way to analyze code for correctness.
The video: An exercise and the solution
The video does not only show how I wrote the code to solve a programming problem, but it also shows how I analyzed the problem before starting to code, and how I analyzed the code after writing the code.
The if-else if-else exercise (from the previous video post)

Write a program that asks the user for a GPA and prints the corresponding letter grade. A GPA greater than 4.0 or less than 0.0 is invalid. The grade boundaries are: A+ for exactly 4.0; A from 3.75 up to (but not including) 4.0; A- from 3.5; B+ from 3.25; B from 3.0; B- from 2.75; C from 2.0; and F from 0.0 up to (but not including) 2.0.
The solution program using if-else if-else statements
We always say that it is better to type your own code because the practice of writing the code connects you with the solution. We are still providing the code below as a reference. If you decide to use the code, please save the code in a file named Exercise.java because the class name is Exercise.
import java.util.Scanner;

class Exercise {
    public static void main(String[] args) {
        double gpa;
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter the GPA, please: ");
        gpa = scan.nextDouble();
        if (gpa > 4.0) {
            System.out.println("Invalid GPA.");
        } else if (gpa == 4.0) {
            System.out.println("A+");
        } else if (gpa >= 3.75) {
            System.out.println("A");
        } else if (gpa >= 3.5) {
            System.out.println("A-");
        } else if (gpa >= 3.25) {
            System.out.println("B+");
        } else if (gpa >= 3.0) {
            System.out.println("B");
        } else if (gpa >= 2.75) {
            System.out.println("B-");
        } else if (gpa >= 2.0) {
            System.out.println("C");
        } else if (gpa >= 0.0) {
            System.out.println("F");
        } else {
            System.out.println("Invalid GPA.");
        }
    }
}
Transcription of the Audio of the Video
Hi,
This is Dr. Shahriar Hossain. In the last video lecture, we discussed “if-else if-else” statements. On our website, we provided an exercise. Today, we are going to solve that exercise. If you have already solved the exercise that we provided with the last video lecture, I suggest, please still watch this video till end because it contains some analysis using tracing. Tracing is a technique that helps us analyze our code without running it enormous amount of time with different inputs.
Oh, one thing before we start working on the exercise, please subscribe to our website Computing4All.com and to this YouTube channel to receive notifications about our articles and video lectures.
I will not keep you waiting anymore. Let us work on the exercise.
Problem description
You can see the description of the exercise on the screen now. The exercise asks us to write a program that reads a GPA and displays the corresponding letter grade.

Then the table provides a bunch of ranges for different grades.
We will analyze the problem a bit now.
Problem analysis for if-else if-else statements
At first, I am outlining the range of GPA. A valid GPA may vary between 0 and 4.0. I am drawing the ranges that are mentioned in the description of the exercise. As described in the problem, any GPA greater than 4.0 should be "Invalid." Also, any GPA smaller than 0 is "Invalid."
Now recall, from the table that the letter grade A+ is assigned to GPA 4.0 only.
A GPA in the range from 3.75 to anything less than 4.0 should have a letter grade “A”.
From 3.5 to anything less that 3.75 should be designated as A- (minus).
GPA B+ is in the range starting from 3.25 and should be less than 3.5.
B starts from 3.0. It is less than 3.25.
The range of B- starts from 2.75. B- is less than 3.0.
The next letter grade is C. It starts from 2.0 and is less than 2.75.
Valid grades less than 2.0 result in an F.
The idea of the solution
The idea of the solution is that we will use if-else if-else statements to display the letter grade corresponding to a given gpa.
Starting of the code
Let us write the code. I have already written the class name and the header for the main method. The file name is Exercise.java.
I will write the code to collect the GPA from the user. I will keep the GPA in a variable named gpa, which is of “double” data type.
Based on what is inside the variable, gpa, I want to display the corresponding letter grade. In our figure, we started to draw the ranges from the right side. That is, from larger gpa to smaller gpa. I will write the code in the same order. That is, I will first write an "if" statement to cover the gpa greater than 4.0. Notice that all gpa's greater than 4.0 should be marked as "Invalid GPA" because the valid gpa range starts at 0.0 and ends at 4.0.
gpa>=4.0
All I have to write is, if gpa is greater than 4.0, then print “Invalid GPA.”
GPA: A+
Now, the next item in the right side of the figure is, 4.0. At 4.0, the letter grade A+ should be printed. The idea is, I will use an “else if” statement to cover 4.0. I am writing else if, gpa is equal to 4.0, then print “A+”. Notice that the equality comparison has to be with two “equals” symbols. We discussed in an earlier video that the equality comparison is always with two equals symbol, not one. A single equals symbol is used for the assignment operations.
Anyway, if we run the program right now, then for any gpa greater than 4.0, the program will print “Invalid GPA”. For gpa equal to 4.0, the program will print “A+”. For any other gpa, the program will print nothing.
We are not going to run the program now. Rather, we will complete the code and then run the program.
GPA: A
The next grade letter is A. GPA A may start at 3.75 and must be less than 4.0. Therefore, I add another else if statement, in which I write the condition gpa>=3.75. When the condition is satisfied, we print the letter grade “A”.
At this point, if we run the program with a gpa, say 3.8, the first condition is not satisfied; the second condition gpa==4.0 is not satisfied either. The third condition gpa is greater than 3.75 is satisfied. Therefore, the program will print the letter grade “A”.
If the first condition is satisfied, which is an invalid situation in this program, the program prints “Invalid GPA” and does not check any other condition in the if-else if sequence. A condition in the if-else if -else sequence is checked only if all the preceding if or else if conditions resulted in “false”. Again, once an “if” or “else if” condition is satisfied, no other condition in the sequence will be checked.
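As a tiny standalone illustration (separate from the exercise program), only the first condition that evaluates to true has its block executed:

class FirstMatchDemo {
    public static void main(String[] args) {
        int x = 10;
        if (x >= 5) {
            System.out.println("at least 5");  // Runs: first true condition.
        } else if (x >= 10) {
            System.out.println("at least 10"); // Skipped, even though x >= 10 is also true.
        }
    }
}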
GPA: A-
Let us get back to the code. The next letter grade is A-, which starts from gpa 3.5 and is less than 3.75. Therefore, the condition in the “else if” statement for A- is “gpa>=3.5.”
GPA: B+, B, B-
Let me quickly write the code up to B-, using concepts similar to what we have discussed so far.
GPA: C
Now, notice that the range for C is a bit large. It starts from 2.0 and is less than 2.75. In our code, we provide that condition in the “else if” statement for the letter grade C.
GPA: F
The next letter grade is “F”, which starts at gpa 0.0 and is less than 2.0.
gpa<=0.0
So far, we have all the letter grades in the code. We also have the invalid grade that is greater than 4.0. The only item that is left out is any invalid gpa that is smaller than 0.0. Notice that we do not need another “else if” for that because it is the only situation that is left out. Therefore, we can use “else” without any condition for any gpa lesser than 0.0. We write “Invalid GPA” on the terminal if the user enters any gpa that is negative.
Running the program
The code is complete. Let us save the file and compile it. I will run the program several times with different gpa values. Notice that all the grade letters are correctly printed.
Tracing
Let us manually trace the code a bit for clarity. When you run the program, the Java virtual machine will attempt to execute each line in the main method in a sequence as they appear.
A variable named GPA is created. The Scanner object named “scan” is created. Then, on the terminal “Enter the GPA, please” — this sentence is printed. Then the user enters a GPA and hits the enter button on the keyboard. The computer receives the value of the GPA in the variable named “gpa”. Now the “if-else if” statements of the code will execute, as required. Consider that the user entered a GPA of 3.3.
Tracing with GPA 3.3
Now the computer will check if the variable “gpa” is holding anything greater than 4.0. Since 3.3 is not greater than 4.0, the statement “if gpa>4.0” results in false. Therefore, the corresponding System.out.println “Invalid GPA” will not be executed. Since, “if gpa>4.0” is false, the computer now will check the next “else if” statement, which is gpa==4.0. GPA 3.3 is not equal to 4.0, therefore, the statement “else if (gpa==4.0)” is false. Since, it is false the execution will not go inside its scope. Therefore, “A+” will NOT be printed.
The next item the computer will check is whether gpa>=3.75. GPA 3.3 is not greater than or equal to 3.75. Therefore, this condition results in false.
The next check is if gpa is greater than or equal to 3.5, which also results in false. Therefore, the computer will check the next “else if” condition, which is if gpa is greater than or equal to 3.25. Now, GPA 3.3 is greater than 3.25. Hence, the condition “gpa>=3.25” results in true. Since it results in true, the execution will go inside the curly brackets, where it is written System.out.println “B+”. Therefore, B+ will be printed. Once one condition is satisfied, that is, once a condition results in true, after executing the corresponding scope, the computer does not check for any other condition.
The execution will jump to the end of this entire “if-else if-else” sequence. That would be the end of the program. That is, for gpa 3.3, the program has printed “B+”.
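We can summarize this trace in a small table:

Condition     | Result | Action
gpa > 4.0     | false  | check the next condition
gpa == 4.0    | false  | check the next condition
gpa >= 3.75   | false  | check the next condition
gpa >= 3.5    | false  | check the next condition
gpa >= 3.25   | true   | print "B+" and skip all remaining checks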
Tracing with GPA -5.0
Let us quickly trace, another execution of the program, where the user enters -5.0 (negative 5 point 0) as the gpa. Notice that negative 5 point 0 is an invalid GPA because it is less than zero. Let us check how the execution of the program will work now.
We are tracing for the variable gpa, when it is negative 5 point 0. In the first “if” statement, gpa>4.0, the condition is false because -5.0 is not greater than 4.0. Therefore, the corresponding System.out.println is not executed.
For the next “else if” condition, gpa==4.0, the resultant Boolean value is “false” too.
Notice that, for gpa negative 5 point zero, none of these conditions will result in “true”. Therefore, the execution will end up going to the “else” statement. Once, the execution goes to the “else” statement, the computer executes every instruction under the “else” scope. In our code, the only statement written here is System.out.println “Invalid GPA”. Therefore, the computer will execute it and “Invalid GPA” will be printed on the terminal.
Notice again that when, all the “if” and “else if” conditions result in false, only then the last “else” is executed.
Tracing with GPA 3.6
Let us quickly do another tracing for GPA 3.6. gpa>4.0 is false because 3.6>4.0 is false. The next, condition gpa==4.0 is false too. Even the next one, gpa>=3.75 is false. The next “else if” condition gpa>=3.5 is true because 3.6 is greater than 3.5. Therefore, the execution will go inside the scope of “gpa>=3.5”. The computer will execute System.out.println “A-”. No other condition will be checked because already a condition in this sequence of “if – else if” statements is true. The execution will jump to the end of this “if-else if-else” sequence.
Conclusion
I hope the “if-else if- else” statements are clearer with the video lecture today. Please do not hesitate to ask any questions you may have via the comments section below. We will be happy to be with you on your journey to learn programming.
Subscribe to our YouTube Channel to receive notifications when we post YouTube videos.
4 Comments
I am getting an error when cloning the Java code, and your clone is not opening either, so please tell me the solution.
Hi Aditya,
Are you receiving an error, when you run or compile? Please use javac to compile and java to run the code. The code should not have any error during compilation and when you execute it. My Hindi is not good. Sorry about it.
Dear Sir,
Very helpful video. Waiting for more video. Thank you.
I am glad to know that you liked the video. Thank you for your comment! | https://computing4all.com/exercise-if-else-if-else-in-java/ | CC-MAIN-2022-40 | refinedweb | 2,457 | 78.45 |
With the Pixel release, Android 7.1 is upon us and it’s never too early to start playing with some new features!
One of the features I’ve been looking most forward to is App Shortcuts (previously known as Quickcuts). This is basically Android’s equivalent to 3D Touch Quick Actions on iOS.
While technically, the Google Now and Pixel Launcher will only support this feature as of 7.1 and higher, a number of other launchers have already added support for App Shortcuts without the requirement of the latest version of Android.
According to Google’s official App Shortcut documentation we can create static and dynamic shortcuts. This post is going to focus on static shortcuts since at the time of writing, we are waiting on a Xamarin update to provide us with the new API’s for creating dynamic shortcuts.
Static Shortcuts
Static shortcuts are declared in a resource file and can’t change. They will always show up the same in your app’s list of shortcuts. For many apps, this is incredibly useful and will probably be what you primarily use.
1. Trick the Manifest
To get started, until Xamarin.Android is updated to support API Level 25, we're going to have to play a small trick on our manifest.

In the application element of your AndroidManifest.xml, change or add an android:targetSdkVersion attribute with a value of 25.
2. Declare your shortcuts
Static shortcuts are created in an XML resource file. Typically you'll create the file Resources/xml/shortcuts.xml.

The format is pretty simple (the ids, labels, and package names below are placeholders):
<?xml version="1.0" encoding="utf-8"?>
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <shortcut
    android:shortcutId="my_shortcut"
    android:enabled="true"
    android:icon="@drawable/icon"
    android:shortcutShortLabel="@string/shortcut_short_label"
    android:shortcutLongLabel="@string/shortcut_long_label">
    <intent
      android:action="android.intent.action.VIEW"
      android:targetPackage="com.mypackage"
      android:targetClass="com.mypackage.ShortcutActivity" />
  </shortcut>
</shortcuts>
You can declare multiple shortcuts in the same file.
Your shortcuts can also each include multiple <intent .. /> elements which will be used as the back stack for when your user launches the shortcut. The last <intent .. /> in the list will be the one the user sees when they launch the shortcut.
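For example, a shortcut whose back stack puts the main screen underneath a compose screen might look something like this (the ids and class names are again placeholders):

<shortcut
  android:shortcutId="compose"
  android:enabled="true"
  android:shortcutShortLabel="@string/compose_short_label">
  <intent
    android:action="android.intent.action.VIEW"
    android:targetPackage="com.mypackage"
    android:targetClass="com.mypackage.MainActivity" />
  <intent
    android:action="android.intent.action.VIEW"
    android:targetPackage="com.mypackage"
    android:targetClass="com.mypackage.ComposeActivity" />
</shortcut>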
3. Make sure your targetClass is correct
In case you didn’t already know, by default in Xamarin.Android your activities will have their java generated counterpart code named with a hash prefix (for reasons we are not going to dive into in this post). This means that figuring out the correct
targetClass to use in your shortcut intent declarations may not be as simple or obvious as you would expect.
Luckily, there’s an easy work around. In your
[Activity] attribute, you can specify the java generated class name explicitly by specifying a
Name property. You’ll need to give it a full package name and class name, for example:
[Activity(Name = "com.mypackage.ShortcutActivity")]
public class ShortcutActivity : Activity
{
    // ...
}
4. Map the shortcuts.xml to your Main Activity
Now that you’ve created a shortcuts xml resource file, you need to relate it to your app’s activity which is marked as the Main Launcher. We can do this by adding a simple
MetaData attribute to the activity:
[MetaData("android.app.shortcuts", Resource = "@xml/shortcuts")]
[Activity(MainLauncher = true)]
public class MainActivity : Activity
{
    // ...
}
Now you’re ready to install your app and test out your App Shortcuts! That’s all there is to it!
| https://redth.codes/app-shortcuts-in-xamarin-on-android-7-1/ | CC-MAIN-2017-47 | refinedweb | 534 | 57.47 |
You.
Which is why Javascript also needs to be much more tightly controlled than simply running any damned thing on the page.
Your average web page has 10-20 3rd parties, all of which want to run javascript, flash, set cookies, and do a host of other crap. Advertising has pretty much fucked up the permission model of the internet by saying "you need to let every asshole run anything they want because you have no idea if it's part of the functionality of the site or an ad, but we just assume you'll let it all run".
Yeah, sorry, no. Flash is straight up disabled or uninstalled. I'll selectively whitelist sites who I trust, or at least temporarily do so. But almost no 3rd party scripts or content are EVER allowed.
If Javascript has only one big namespace, then maybe that needs to be fixed? Security holes like cross-site scripting and other stuff are enabled by web sites insisting they be able to write the most presumptively insecure code and then let it be the user's problem.
This stuff needs to be sandboxed, treated like it's potentially hostile, and locked down from being able to do anything to the host computer. Instead what we have is stuff running which we have no idea what it is, which may or may not be malicious, and which can actively impact the host machine.
It's time we stopped treating web pages like they're trusted by default, because so much of the web these days simply can't be trusted.
Stop letting the advertisers tell us how the internet should work, and stop letting them be the ones who cause the damned thing to be insecure in the first place.
Why the heck should Uber be preventing free people from working?
Right, that way if some crazy guy goes on a shooting rampage or starts raping female passengers they can just say "why, we just let free people work and if passengers want safety and assurance we're not sending out psychopaths they're free to conduct their own background checks".
Sorry, but people do kind of expect when they request a cab -- oh, sorry, an illegal car-for-hire pretending it's an unregulated ride sharing service to which laws don't apply -- that a fucking serial killer isn't being sent to them.
See, one of the many fucking laws Uber claims don't apply to it are things like criminal background checks to protect the public safety. Oh, and commercial licenses, proper insurance, vehicle inspections, and shit like that.
Uber's entire business model is basically saying "you know all those laws places have enacted to ensure passenger safety and the life, well, none of them apply to us".
In this case, Uber straight up lied about the safety assurances they could give about drivers, and mislead passengers into thinking they conducted their background checks to a higher standard than other companies, and actually used terms like "safe" in their marketing.
So, yeah, when you lie to the public about how safe you are, and fail to do the level of background checks you suggest you do, people find out about it, and your dumb ass gets fined.
Funny, I use the back button for sites requiring Flash.
The only things I truly need Flash for are work related training, which periodically requires I re-enable it. But I won't even run my work browser with it enabled.
No way in hell I'd ever consider running Flash by default
To me Flash is primarily an ad platform. If there are useful sites requiring Flash to work, I'm afraid I've never seen them, or don't consider them useful. I don't use video on the intertubes, because I don't care.
It seems like Flash has had at least one major security exploit every month for over 15 years, which tells me the entire platform and its security model are so defective that it has to be in the "don't trust by default" category.
I have no interest in letting advertisers, or anybody else, have access to anything which runs arbitrary code on my machine just because I visited a web page.
Not a zero day exploit in Flash. Why, I'm utterly traumatized by this, my faith in humanity has been utterly ruined, why I
Yawn, yet another zero day exploit in a steaming turd of a technology which has been an endless series of security holes for almost 20 years now.
And, having been largely Flash free for at least 15 of those years, all I can say is "enjoy your quality software, suckers".
Honestly, the only thing which has cumulatively had more security holes than Flash is Windows. I honestly don't know why people keep trusting it, because it really has been a terrible security risk forever, and disabling it is usually the first thing I do in a browser.
Once we become a space-faring civilization, this rarity value attached to non-Earth rocks will seem very quaint.
What a strange way of thinking of it
In your scenario, we'll have Earth, Mars, Venus, Alpha Centauri, Vulcan, Ceti Alpha V, and what have you. But they'll all be boring because they're "not Earth".
They may not be universally valuable, but like people collect souvenirs, they'll have some sentimental attachment. Or they'll be sufficiently rare as toe have a degree of uniqueness.
Even if we were space faring, a rock from the furthest planet in the universe is worth more than the one you're standing on, because it's harder to get another one.
The reality is, there's a relatively small amount of material we call "non-Earth" which we can access. You're right, they are deemed special because they come from someplace else, and not everybody can have one.
But you'd have to be a space faring species who can instantaneously travel anywhere in the universe to say that rocks from further away and harder to get won't have some cachet to them.
It's not like we'd become a space faring civilization (assuming we ever actually do before we go extinct) and suddenly all sense of distance and place of origin disappears.
But you sound jaded about space rocks to an extent that seems to imply you figure you'll have access to rocks from the entire universe within a few weeks. Right now, here on planet Earth, in any meaningful sense of the word
But it will always be true that the further away it's from, and the harder to get it is, the more value people will assign to it.
The Free Market relies on the fact that is a product is overpriced, consumers will pass it up.
No, the free market relies on suckers who don't know any better getting hoodwinked.
And then makes the absurd claim that a sufficiently large number of suckers will fix the problem of lying bastards hoodwinking suckers.
There is not, never has been, and never will be a free market -- informed consumers making intelligent choices based on good information will simply never happen
The impossible premises of a free market defy logic, human nature, and reality. You might as well believe in the tooth fairy.
People who talk about the free market are either part of the con game, or have been so utterly conned as to think they're making sense.
The links will look like "m.me/yourusername" and let anyone quickly add you in Messenger without looking up your Facebook account
So, any random idiot will be able to spam you without trying hard?
Yeah, what could possibly go wrong on that one
The usernames and profile links will also be available to businesses, which are starting to use Messenger as a way to deliver customer support and let you buy things through chatting.
Just any old business gets this because they say so?
Yeah, whatever, yet more crap from Facebook to ensure gets blocked so I don't have to deal with it. Just like I don't consent to being tracked by these assholes, I don't see why I would want any form of interaction with this.
I'm betting the amount of unwanted messages will be epic.
By American Vernacular English, that's not wrong. People frequently substitute "should of" in place of "should've".
Please, don't confuse illiteracy with 'vernacular'.
"Should of" is NOT 'vernacular', it's making random meat noises to approximate language and failing to grasp something they taught you fairly early in school.
It is hearing sloppy speaking, turning that into a sloppy understanding of the words you're using, and then using that in a written form which demonstrates you think the incoherent mumbling you do in the real world corresponds to speaking the language.
"Should of" is so wrong it defies belief.
Linux on the desktop is almost perfect now, and certainly leagues ahead of Windows and macOS.
Do you sincerely believe that crap? Or have you convinced yourself of it and need to convince the rest of us?
It certainly isn't leagues ahead, and there are things where the ability to manage via GUI is pretty much non-existent.
I've been using Linux since the mid 90s, FreeBSD since the late 90s, and Windows (grudgingly at first) since the early 00's
I can't take you seriously as someone who isn't a raving fanboi. Because the last I saw, the Linux desktop still has some glaring holes and stuff you can't do from within a GUI, doesn't do auto-detection of things nearly as well, and still requires you to drop down to being root in a command line for many things.
When you can put my mother in front of it, and every admin task she could ever need to do is intuitive, easy to find, and accessible via GUI, and she can buy something at Staples, plug it in, and use it within 15 minutes, then we can talk.
Until then, you're way over stating the facts.
The photo editing software which came with my Canon camera
It's bots and captchas all the way down.
Well, it's simple
Seems to me you guys end your anthem with something about "land of the free"? I think it's pretty safe to remove any references to that one.
For the last 15 years it's been the land of the scared and desperate who will happily give up their rights and freedoms and believe that is helping protect their rights and freedoms.
The extent to which the average American seems to accept "if you have nothing to hide you have nothing to fear" is absolutely alarming.
They'll still tell you they're free, because you won't get hauled off for criticizing the government (yet), but they're ignoring that the FBI et al have decided the Constitution is just too damned inconvenient, and that the only way to have a "free" society is to live in a police state.
And pretty much all political parties are pushing for the massive surveillance society to protect them from the terrorists. Sadly, if the goal was to destroy the way of life, the battle has been lost.
sh script does not execute if called by cronjob
Hi All, I have a script which runs OK, but if I call it from crontab it does not run. In the /var/log/cron log I can see that every minute crontab calls the script. I have added the PATH variable.
# File: delete_archivelog.sh
#!/bin/bsh
# ----------------------------------
export ORACLE_HOME=/home/oracle/john/oracle/product/11.2.0/db_1
export ORACLE_SID=TestDB
PATH=/home/oracle/john/oracle/product/11.2.0/db_1/bin:/usr/sbin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
rman log=delete_archive.log <<EOF
connect target sys/B4cKD00r
connect catalog rman/3B45UGH
run { execute script delete_archivelog;}
exit;
EOF
#
Above script runs in terminal
I am calling this script from crontab
*/1 * * * * /home/oracle/scripts/delete_archivelog.sh
this is entry in crontab
I can see in logs that every min its being called by crontab
May 18 20:27:01 ora11-nab-dr1 crond[9290]: (oracle) CMD (/home/oracle/scripts/delete_archivelog.sh)
May 18 20:28:01 ora11-nab-dr1 crond[9351]: (oracle) CMD (/home/oracle/scripts/delete_archivelog.sh)
May 18 20:29:01 ora11-nab-dr1 crond[9385]: (oracle) CMD (/home/oracle/scripts/delete_archivelog.sh)
May 18 20:30:01 ora11-nab-dr1 crond[9443]: (root) CMD (/usr/lib64/sa/sa1 1 1)
May 18 20:30:01 ora11-nab-dr1 crond[9444]: (oracle) CMD (/home/oracle/scripts/delete_archivelog.sh)
May 18 20:31:01 ora11-nab-dr1 crond[9510]: (oracle) CMD (/home/oracle/scripts/delete_archivelog.sh)
Any ideas/suggestions?
Many thanks in advance
Oracle stuff requires so many environment tweaks that, rather than find them all and set them up in a cron script, I tend to put oracle cron jobs in a root crontab context and run them with "su - oracle -c path-to-script".
That way it runs in an oracle login shell, which should have the same environment as your command line login shell.
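For example (the schedule and path are just illustrative), the root crontab entry would look something like:

0 2 * * * su - oracle -c /home/oracle/scripts/delete_archivelog.sh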
The "#!/bin/bash" line MUST BE the FIRST line in the file otherwise "/bin/sh" is used to run the command.
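So the script should begin with the shebang on line 1 (and "#!/bin/bsh" in the posted script also looks like a typo for "#!/bin/bash"):

#!/bin/bash
# File: delete_archivelog.sh
# ----------------------------------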
Actually I think what was meant originally for the 'shebang' was

#!/bin/bash

Even when bash and sh are the same file, they do not necessarily behave the same - the program can look at the first token in the command line to determine how it was invoked. For example, in a script, you can use

$(basename $0)

to see the name the script was invoked with (e.g. a symlink or an alias for the script name).
Here is a C program which displays the name with which it was evoked, followed by any command line parameters:
/* foo.c
This program prints the command name by which it was evoked,
followed by its command-line parameters.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
/* This is needed for the basename() function. */
#include <libgen.h>
int main( int argc, char **argv)
{
int ii;
printf( "argc = %d\n", argc );
/* print command name */
printf( "%s ", basename( argv[0] ) );
/* print parameters */
for ( ii = 1; ii < argc; ii++ )
printf( "%s ", argv[ii] );
printf("\n");
return 0;
}
If the compiled object file is 'foo', create a symlink: ln -s foo bah
and then try
foo a b c
bah a b c
Hi,
This problem once happened to me. What I did is, instead of running the script every 1 min from crontab, I changed the time and started running the script every 10 min.

I think the problem is that every 1 min a new process starts, but before it finishes another process starts.
--Regards,
Sumit. | http://www.linuxforums.org/forum/red-hat-fedora-linux/188943-sh-script-does-not-execute-if-called-cronjob.html | CC-MAIN-2014-52 | refinedweb | 702 | 62.68 |