Trying to be a bit more timely about releases, especially since some
people couldn't use 2.5.37 due to the X lockup that should hopefully
be fixed (no idea _why_ that old bug only started to matter recently, the
bug itself was several months old).
ia64 updates, a vm86 mode bug that bit XFree86 startup (and must have
bitten dosemu too, but maybe people aren't using DOS much any more), PCI
driver attach fixes, JFS, ACPI, net drivers etc.
Linus
---
Summary of changes from v2.5.37 to v2.5.38
============================================
David Mosberger <[email protected]>:
o ia64: Minor cleanups
o ia64: First draft of perfmon sampling-interval randomization
support
o ia64: Make sure register-backing store gets mapped with the right
PTE protection bits
o ia64: Fix return path of signal delivery for sigaltstack() case
o ia64: Fix narrow window during which signal could be delivered with
only the memory stack switched over to the alternate signal stack.
o Fix edge-triggered IRQ handling. See Linus's cset 1.611 for
details
o ia64: Fix comment in arch/ia64/kernel/signal.c
o ia64: Fix x86 struct ipc_kludge (reported by R Sreelatha, fix
proposed by Dave Miller).
o ia64: Preserve f11-f15 around calls into firmware. Patch by John
Marvin
o ia64: Don't execute srlz.d needlessly (reported by Chris Ruemmler)
o ia64: Fix typo in perfmon.c. (Patch by Stephane Eranian.)
o ia64: Sync up with 2.5.35+. Add ia64-specific huge page support
(by Rohit Seth)
o ia64: Make include/asm-ia64/suspend.h non-empty (suggested by Keith
Owens)
o ia64: Add arch/ia64/mm/hugetlbpage.c by Rohit Seth
o ia64: A few huge page fixes (patch by Rohit Seth)
o ia64: Fix zx1-platform support
o ia64: Sync with 2.5.37
o ia64: Reorganize initialization sequence a bit
o ia64: Switch over to using ACPI PCI support routines. This gets
rid of much of the code-duplication that existed between ACPI and
the ia64 IOSAPIC code.
<[email protected]>:
o Add support for get-MII-data ioctls in 8139cp net driver
<[email protected]>:
o Optimize __ia64_save_fpu() and __ia64_load_fpu() for Itanium 2
<[email protected]>:
o free_area_init_node fix (for non discontigmem direct use)
Alexander Viro <[email protected]>:
o gendisk for pcd, cdu31a, cm206, mcd, mcdx, sbpcd, jsflash, mtdblock_ro,
pf, swim3, loop, aztcd, gscd, optcd, sjcd, sonycd, stram, rd, nbd, xpram,
acorn floppy, swim_iop
o devfs handling for cdroms moved to register_disk()
o misc cleanups
o crapectomy and Lindent pf.c
o switch to add_disk()
o removal of bogus exports
o beginning of probe_disk() and gendisks for floppy
Andy Grover <[email protected]>:
o ACPI: change a non-critical debug message to a lower output level
o ACPI: Add include to provide PREFIX (Adrian Bunk)
o ACPI: Re-enable compilation of ACPI subordinate drivers as modules
(Bjoern A. Zeeb)
Dave Kleikamp <[email protected]>:
o JFS: Avoid parallel allocations within the same allocation group
o JFS: Slightly relax allocation group reservation
o JFS: swsusp support
o JFS: Put legacy OS/2 extended attributes in "os2." namespace
o JFS: Fix compiler errors in xattr.c
David S. Miller <[email protected]>:
o missing unlock_kernel
Erich Focht <[email protected]>:
o Remove global semaphore_lock for ia64, similar to i386 change for
2.5.25
Jean Tourrilhes <[email protected]>:
o Fix wavelan_cs net driver build
o update irda nsc-ircc driver
o More __FUNCTION__ cleanups for IrDA
Jeff Garzik <[email protected]>:
o Add new MII lib functions mii_check_link, mii_check_media
o Fix more IrDA __FUNCTION__ breakage. It now builds, yay
Jens Axboe <[email protected]>:
o IDE fixes
Linus Torvalds <[email protected]>:
o Don't try to attach a driver to a pci device that already has one
o Don't do a 64-bit divide when a simple shift will do
o Avoid confusing "mount" and "fsck" - don't show things like
floppies and CDs in /proc/partitions.
o Fix vm86 system call interface to entry.S. This has been broken
since the thread_info support went in (early July), and can cause
lockups at X startup etc.
Patrick Mochel <[email protected]>:
o Adding driver model support in IDE
Petr Vandrovec <[email protected]>:
o Fix NCP_IOC_SETOBJECTNAME ioctl in ncpfs
o Fix bigendian problems in ncpfs
o Add support for text mount option string to ncpfs
o ncpfs: Proper handling of watchdog packets
o ncpfs: Verify packet signatures on replies
o ncpfs: Pass unknown packets from server to userspace daemon. Now we
can deliver server messages to logged-in users even with UDP or TCP
transport.
Robert Love <[email protected]>:
o schedule() in_atomic() check
Steven Cole <[email protected]>:
o Link eepro100 net driver with mii module, fixing static build
Stéphane Eranian <[email protected]>:
o ia64: perfmon update
o perfmon cleanup patch
o Fix bug in pfm_write_pmds()
Source: https://www.linuxtoday.com/developer/2002092200126NWKNDV
WSDL 2.0 support for Axis2
Interested in knowing how I spend my time working on my Google Summer of Code project?
Well, here I present some snapshots of my journey. Hope you will enjoy reading about the road map, milestones, scenery, pot-holes and road blocks I encounter!
You can find my original proposal here, and the abstract I've written for Google here.
Got accepted
26.05.2006
Yey!
The short-term solution
27-31.05.2006
Studied the Axis2 source in order to understand the changes that need to be made for WSDL 2.0 support. Also, I did a bit of experimentation by plugging in the Woden API. However, Woden does not parse WSDL 1.1, and WSDL4J (which Axis2 uses now) does not parse WSDL 2.0! Therefore it's not just a matter of unplugging WSDL4J and plugging in Woden.
My mentor, Deepal Jayasingha, helped me identify the 2 classes WSDL11ToAxisServiceBuilder and WSDL20ToAxisServiceBuilder. Apparently these classes need to be used to populate the AxisService from the respective WSDL elements. The possible solutions suggested were:
1. Parse the WSDL 1.1 documents with WSDL4J and WSDL 2.0 documents with Woden. PRO: None of the rest of the Axis2 code needs to be changed. CON: Validating and recognizing WSDLs are not fully implemented yet in Woden, WSDL4J, and Axis2.
2. Join the Woden team and provide WSDL 1.1 capability there. PRO: Would provide the exact programming model needed. CON: Requires a lot of effort from Woden.
3. Switch over to Woden without the WSDL 1.1 parsing functionality. There is a converter util to convert 1.1s to 2.0s, so with the help of this tool WSDL4J can be completely eliminated. PRO: Minimum amount of work required! CON: Loss of information during the conversion may adversely affect the quality of the AxisService.
I hit a road block, as there was no direct way of mapping the messages required in the AxisService from WSDL 2.0, and the MEPs defined in Axis2 and Woden were totally different. So, further study and experience with the WSDL 2.0 specification was necessary.
Reading a WSDL from the archive
01-07.06.2006
I explored how a WSDL is read into the Axis2 engine through the ArchiveReader and the DeploymentEngine. Wrote a little utility class called WSDLVersionDeterminer which will determine the version of the WSDL. This is independent of Woden or WSDL4J APIs and was intended to be included in the Axis2 source. The version is to be determined by checking the name of the first element.
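The gist of the check is simple: a WSDL 1.1 document has <definitions> as its root element, while a WSDL 2.0 document has <description>. A minimal sketch of that idea (illustrative only; the class and method names below are my assumptions, not the actual patch):

import java.io.InputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class WSDLVersionDeterminer {
    // Returns "1.1" or "2.0" based on the local name of the root element.
    public static String determineVersion(InputStream in) throws Exception {
        XMLStreamReader reader =
                XMLInputFactory.newInstance().createXMLStreamReader(in);
        try {
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    String root = reader.getLocalName();
                    if ("definitions".equals(root)) return "1.1"; // WSDL 1.1 root
                    if ("description".equals(root)) return "2.0"; // WSDL 2.0 root
                    throw new IllegalArgumentException(
                            "Unrecognized root element: " + root);
                }
            }
        } finally {
            reader.close();
        }
        throw new IllegalArgumentException("Document has no root element");
    }
}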
The patch is attached at.
readWSDL() in particular...
08-11.06.2006
The Woden API only facilitates one readWSDL method, which accepts only the URI of the WSDL document. Since service deployment in Axis2 is done as an AAR (Axis ARchive) file, this method signature is of no use. Since other overloaded readWSDL methods are not currently provided in the WSDLReader interface, the only solution which came to my mind was to extract the WSDL from the archive, write it as a file to a particular location on the local file system, and then give that location as the argument to readWSDL. Obviously this is NOT the way to go!
Therefore, taking the WSDL4J API as an example, I implemented a method which will take a DOM document element as an additional argument to the readWSDL method and return the DescriptionElement.
The patch is attached at.
The solution I had in mind for this problem is this:
WSDLReader reader = WSDLFactory.newInstance().newWSDLReader();

// "in" is an InputStream; in Axis2's case it could be the zipped input stream from the .aar
Document doc = org.apache.axis2.util.XMLUtils.newDocument(in);
Element element = doc.getDocumentElement();

DescriptionElement desc = reader.readWSDL(null, element); // Note: the WSDL URI could be null
The developers from Woden who examined the patch agreed that the readWSDL method's signature should be extended. The existing readWSDL(URI string) method was only the starting point, to let them focus on developing the object model and parsing/validation logic. However, there were a few concerns about adding DOM dependencies directly to the WSDLReader API via methods like readWSDL(uri, Element) or readWSDL(uri, Document). So they emphasized avoiding introducing into the Woden API any dependencies on a particular XML parser or XML object model if possible, or at least capturing any required dependencies in some way that lets the API still reflect a choice of XML parsing strategy.
You can find their suggested solution here.
Some findings about validation, etc
11-14.06.2006
So far, from the work I have done, one major bottleneck was the pull-parsing model (hmm.. guess that's what makes StAX so special) and the loss of information about the other elements as the cursor moves through the document. With DOM, parsing the WSDL is very easy and can be done without much of a problem, since the object model is held in memory.
StAX also poses huge problems when it comes to validating. Some of the validation assertions require information from other elements. So, if I create an interface and validate it, the assertion may need access to the rest of the interfaces in the model. This may cause more of the model to be loaded than expected. There may be ways to work around this by refactoring the validation model. Definitely it is clear that the validator won't necessarily be able to play nice for all assertions. I am starting to think that validation may not make sense for a pull parsing model!
I also learnt that the validators are broken down in a way that they can be invoked on a certain element, but they currently assume that the rest of the model is present. With StAX, we may want to create additional validation options that disable the assertions that perform deep checks. This should perform better for pull parsing with the cost of not having complete validation.
In the current implementation there are logically 4 phases to a WSDL 2.0 validation.
XML well formedness check
Schema validation
WSDL 2.0 document validation
WSDL 2.0 component validation
Phases 1 and 2 are performed by the DOM parser. A StAX parser will need to perform these as well. Phases 3 and 4 are run after the Woden model is created, and only rely on the Woden and XmlSchema models. The schema validation depends on the parser. For DOM, Woden uses Xerces, which contains a schema validator. They have used this validator to avoid having to rewrite all of the rules that are defined by schema. But, as far as I can see, I would have to write my own schema validator for the StAX implementation, or else put up an ugly hack to make use of the current validator.
So, for the moment I will leave validation aside and concentrate on the prototype which will provide basic functionality to read the hotel-reservation.wsdl (the one given with the WSDL2.0 spec) into a StAX model.
Implementing a pure StAX parser was problematic
14-19.06.2006
I initially started parsing a WSDL purely with a StAX XMLStreamReader to build the Woden element model. The idea I had, was to cache the XMLStreamReader at each and every top level element every time as they are accessed. I wanted to use this cached parser in cases where the later elements needed information from previously accessed elements.
However, I realized that when there are so many nested elements, this approach created many parser instances even when it was not required (i.e. when those elements could have been accessed with the current parser). And this was a major problem when it came to the schema validation.
The obvious solution - Use Axiom!
19-21.06.2006
Since AXIOM is based on StAX, the resulting implementation would be fast and efficient, as is expected from a StAX parser. If one of the objectives of Woden is to be used in Axis2, I suppose using AXIOM in Woden would not be much of a problem :).
I implemented a prototype OMWSDLReader as an alternative to the DOMWSDLReader. In the case of schema, I used XMLSchema as in the current DOM impl. However, the arguments to the XMLSchemaCollection's read method posed a problem, and I could only come up with the following work-around.
// omElement is an OMElement which contains the <xs:schema> element
String elementString = omElement.toStringWithConsume();
byte[] bytes = elementString.getBytes();

// Deserialize from the byte array
InputStream inputStream = new ByteArrayInputStream(bytes);
InputSource inputSource = new InputSource(inputStream);

XmlSchemaCollection xsc = new XmlSchemaCollection();
XmlSchema schemaDef = xsc.read(inputSource, null);
This returned the correct XML schema as it was in the WSDL. However, unlike in the DOM impl, apart from the targetNamespace, the other namespaces were not there as attributes on <xs:schema>. I wonder whether this could lead to a bug later in the model for schema in Woden!
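One workaround that comes to mind (an untested sketch; getAllDeclaredNamespaces() and declareNamespace() are from the Axiom OMElement API, though exact method names may differ between Axiom versions) is to copy the in-scope namespace declarations from the enclosing WSDL element onto the schema element before serializing it, so that toStringWithConsume() writes them out as attributes:

// Fragment only: wsdlElement is assumed to be the enclosing element whose
// namespace declarations should also be visible on the <xs:schema> element.
Iterator nsIter = wsdlElement.getAllDeclaredNamespaces();
while (nsIter.hasNext()) {
    OMNamespace ns = (OMNamespace) nsIter.next();
    omElement.declareNamespace(ns.getNamespaceURI(), ns.getPrefix());
}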
More discussions
There were several discussions on the Woden-dev list about how Axiom should be plugged into the Woden interface. The possible solutions were:
To implement the Woden object model by extending Axiom elements.
Build the Axiom object model from the parser and to use that to populate the Woden model.
The first approach is preferred as it won't create two object models. But it requires someone to re-implement the Woden object model, so the best short-term option is to head for the second one.
The plan I had and More problems
22.06.2006
I tried the 2nd option, where the AXIOM object model is built from the WSDL and the Woden interface implemented from it. My idea is to have an OMWSDLReader which does not have any DOM dependencies and uses AXIOM to get whatever elements are in the WSDL and parse them into the Woden-specific objects. This approach is quite easy, and the current implementation in Woden seems to support it.
However, when I was trying to handle the extension elements and attributes, I came across several classes such as ExtensionDeserializer, XMLAttrImpl (and in fact a whole bunch of xxxAttrImpl classes) which seem to be heavily dependent on DOM.
This was a blocker for me, and I wanted to know whether it's possible to work around these classes, or whether it would be possible for the init and convert methods to take in OMElements. I really couldn't grok the logic behind handling the extension attributes and elements in Woden, so I was also seeking any insight on that as well.
The most significant part of large WSDLs will be the schema part. Since Woden relies on ws-commons XmlSchema for schema parsing, and that relies on DOM, this could be a bottleneck even if the WSDL is read using AXIOM. So, to boost overall performance, there was a need to look at XmlSchema using AXIOM/StAX instead of DOM.
Initial implementation
23-26.06.2006
I opened up a JIRA at [ ], and attached a patch that will provide some initial StAX based parsing through the OMWSDLReader and several util classes. This follows the same structure as in the DOM model. However, several parsing methods are yet to be added awaiting the abstraction of the Woden object model.
Woden telecon outcome
28.06.2006
I told them about the patch I've sent and inquired whether anybody has had a chance to review it. Since they were all getting ready for the interop, nobody had had the time to do so.
I also clarified several problems I had, especially about downloading all of the schema for schema in the current model. They assured me that it is done to import the implicitly available XML Schema simple types. This will be removed for a more elegant and performant solution.
They also asked me about the need to remove all the DOM dependencies. The point they raised is that some WSDL elements can contain arbitrary XML, and Woden doesn't provide a solution for this yet. They asked me the following questions, and I need to find the answers in order to convince them that AXIOM is better than DOM!
What is so bad about having a DOM element?
Can it represent arbitrary XML?
Is it possible when using StAX to parse the content into DOM elements? (Since DOM is just an interface.)
What are my ideas for how to represent mixed-type elements in Woden?
And the meeting ended with the conclusion that they may want to preserve arbitrary unknown extensions as well and to use the DOM API instead of reinventing the same functionality.
Test failures?
03.07.2006
Before applying my patch to the Woden SVN, they ran it against the AllWodenTests test suite and got 46 JUnit failures, which mostly seem to indicate that a null DescriptionElement has been returned!
This is expected as I have not implemented the readWSDL(uri, errorhandler) method, and all the tests are based on that.
So, now that the patch is committed, I guess I would have to fine tune the code and make sure all the test cases are handled. :).
Source: http://wiki.apache.org/general/OshaniSeneviratne/GSoC/progress
Can someone kindly provide me with the URL that explains how I can enhance
my program below to
1. Turn on and off various levels of logging (eg: turn on info logging
but turn off debug logging)
2. Display the logs somewhere besides standard output?
I have read and it appears this can be done on Tomcat using a log4j configuration file. That is what I want to do with my sample program below. Where do I put such a file and what is it called? Can I make it write to a socket instead of a file? How would I read it then?
Now with the other projects like xalan and xerces, you can download them and
you get some sample source code. OK, so I am supposed to use maven. How do I
download the examples? The FAQ references "examples/Sort.java example and
associated configuration files" but I cannot find them. I found a reference
to some example source code for XML at but the link was bad!
Do I have to buy the $20 manual just to make a trivial example work?
Thanks,
Siegfried
package world;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class HelloJavaWorld {

    private static final Log log = LogFactory.getLog(HelloJavaWorld.class);

    public HelloJavaWorld() {
        log.debug("begin debug HelloJavaWorld ctor");
        log.info("begin info HelloJavaWorld ctor");
        // TODO Auto-generated constructor stub
        log.info("end HelloJavaWorld ctor");
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        log.debug("begin main");
        System.out.println("hello world;");
        log.info("end main");
    }
}
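(For reference: with log4j 1.x, which backs commons-logging here, the configuration file is typically a log4j.properties placed at the root of the classpath. A minimal illustrative sketch, with made-up file and logger names, that keeps INFO on, filters DEBUG out, and writes somewhere other than standard output:)

# Root logger: INFO and above only (DEBUG is filtered out), sent to FILE
log4j.rootLogger=INFO, FILE

# File appender instead of standard output
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=hello.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n

# Per-package levels can be enabled selectively, e.g.:
# log4j.logger.world=DEBUG

# To log to a socket instead, log4j 1.x ships org.apache.log4j.net.SocketAppender
# (configured with RemoteHost and Port), and org.apache.log4j.net.SimpleSocketServer
# can be run on the receiving end to read the events.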
Source: http://mail-archives.apache.org/mod_mbox/logging-log4j-user/200908.mbox/%3CAFDC556040E148A2BEC7FDC19BBE10F7@kingmark%3E
Combo boxes - dynamic font color change
Hello,
I'm new to Qt, but I know C++ pretty well. I was searching about my problem and know that there is a couple ways of doing that (custom model, custom delegate, stylesheets).
Details about my goal:
- several combo boxes (up to 4) with the same items (serial ports names)
- no filtering (any port can be set on any combo box)
- only indication that we chose the same ports in 2 or more boxes
- indication as red, bolded ports names in combo boxes (both selected and on expanded list)
- slot that checks current indexes and sets desired formatting
I need the simplest way of doing that. I'm trying, but as I said I'm new and have gotten mixed up a lot by now. I can't manage to write working code using the methods mentioned above. I really don't care if it's single items or model/view, I just need to achieve my goal.
P.S. Sorry for my english, it's not my native language.
- ambershark Moderators
You could use a delegate to control this but it can be even easier if you have a custom combo box you can just set the foreground role to the color/boldness you want. Here is a quick color example:
@
#include <QComboBox>
#include <QApplication>
#include <QBrush>
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
    QComboBox box;
    box.addItems(QStringList() << "Red Text" << "Blue Text" << "Green Text");
    box.setItemData(0, QBrush(QColor(Qt::red)), Qt::ForegroundRole);
    box.setItemData(1, QBrush(QColor(Qt::blue)), Qt::ForegroundRole);
    box.setItemData(2, QBrush(QColor(Qt::green)), Qt::ForegroundRole);
    box.show();

    return app.exec();
}
@
You can set all kinds of roles to whatever you want, check the Qt docs for role information.
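For the bold-plus-red indication described in the question, the same setItemData() mechanism should work with Qt::FontRole (a sketch along the same lines as above; not tested):

@
QFont f = box.font();
f.setBold(true);
box.setItemData(0, f, Qt::FontRole);                              // bold item text
box.setItemData(0, QBrush(QColor(Qt::red)), Qt::ForegroundRole);  // red item text

// to clear the indication later, reset the roles:
box.setItemData(0, QVariant(), Qt::FontRole);
box.setItemData(0, QVariant(), Qt::ForegroundRole);
@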
Source: https://forum.qt.io/topic/43941/combo-boxes-dynamic-font-color-change
Quote:
>Using a switch (or, equivalently, a series of if (blah == foo)
>statements) ties the meaning of the enumerator (i.e. what code path
>will be executed) directly to the name of the enumerator. Easier to
>get right, harder to get wrong, IMHO.

And much harder to maintain. Actually I do use switch statements and
enums for multi-way choices but only where almost all the implementation
is tucked away in an unnamed namespace (including the enum which is only
used for inter-function communication).
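A small sketch of the arrangement I mean (illustrative, not from any real
project):

namespace {                  // unnamed namespace: internal linkage only
    enum Action { read_it, write_it, append_it };  // used purely for
                                                   // inter-function communication
    void dispatch(Action a) {
        switch (a) {
        case read_it:   /* ... */ break;
        case write_it:  /* ... */ break;
        case append_it: /* ... */ break;
        }
    }
}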
--
ACCU Spring Conference 2003 April 2-5
The Conference you should not have missed
ACCU Spring Conference 2004 Late April
Francis Glassborow ACCU
[ See for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
Source: http://www.verycomputer.com/41_81f037f2967e6e55_2.htm
DESIGN GUIDELINE

Prepared By:
Dr. Robert Hammon
Building Industry Institute
Sacramento, CA
Contract No. 400-00-037

Prepared For:
Martha Brook, Contract Manager
Ann Peterson, PIER Buildings Program Manager
Nancy Jenkins, Office Manager, ENERGY EFFICIENCY RESEARCH OFFICE
Martha Krebs, Ph.D., Deputy Director, ENERGY RESEARCH AND DEVELOPMENT DIVISION
B. B. Blevins, Executive Director

DISCLAIMER

This report was prepared as the result of work sponsored.

This design guide is an attachment (Attachment 2) to the final report for the Profitability, Quality, and Risk Reduction through Energy Efficiency program, contract number 400-00-037, conducted by the Building Industry Institute. This project contributes to the PIER Building End-Use Energy Efficiency program. This attachment, "California Residential New Construction HVAC Design Guide," provides supplemental information to the program final report. For more information on the PIER Program, please visit the Commission's Web site at: or contact the Commission's Publications Unit at 916-654-5200.
Table of Contents

Abstract
1.0 Introduction
    1.1 Purpose
    1.2 Target Audience
    1.3 Limitations
2.0 The Design Process
    2.1 Designing the house around the HVAC System
    2.2 Matrix of Trades
3.0 (3.1, 3.2)
4.0 (4.1 through 4.7)
5.0 (5.1 through 5.6)
    5.7 Combustion air supply
    5.8 Thermostat location
    5.9 Ventilation and Indoor Air Quality
        5.9.1 Indoor Air Quality
        5.9.2 Ventilation Systems
        5.9.3 Ventilation and Indoor Air Quality Standard
Appendix A
Appendix B
Table of Figures

Figure 1: Ceiling Register Locations
Figure 2: Example House Plan
Figure 3: Example HVAC Design
Figure 4: Example Void in Interior Stair Chase
Figure 5: Example Void in Dead Space
Figure 6: Example Exterior Chase
Figure 7: Walk-In Closet with Interior Chase
Figure 8: Closet Chase Example
Figure 9: Media Chase
Figure 10: Water Closet Chase
Figure 11: Chimney Chase
Figure 12: Riser Can Installation
Figure 13: Riser Can Detail
Figure 14: Floor Joist Detail
Figure 15: Floor Truss
Figure 16: Duct-to-Register Connections
Figure 17: Soffit Chase
Figure 18: ON/OFF run times for three cooling configurations with ceiling returns: supply register interior ceiling; ceiling over windows; and in-wall
Figure 19: Sample Site Plan with Varying Orientation
Figure 20: Comparison of HVAC Cycle Time for Cases 1, 2 and 3
Figure 21: FAU Clearance
Table of Tables

Table 1: Matrix of Trades
Table 2: Orientation Effect on Heat Transfer Multiplier
Table 3: Subdivision Site Plan Orientation
Table 4: Plan 1 Loads and Equipment Sizing
Table 5: Plan 2 Loads and Equipment Sizing
Table 6: Plan 3 Loads and Equipment Sizing
Table 7: Branch duct diameters under multiple orientations
Abstract
Adequate tools and methods now exist to design energy-efficient HVAC systems. Failure to correctly apply them in production homes costs California homeowners. This major missed opportunity is a function of both a faulty design process and inaccessibility of the design methods. The cost-centric design-build process commonly employed by production builders rarely includes a skilled HVAC designer early in the development phase where they can most effectively integrate HVAC requirements with the house design. Currently available HVAC design tools and methods require time and high levels of skill, which negatively affects the cost/profit agenda. A more integrated design process and simplified design methods are essential to improve usage, increase HVAC design quality, and reduce HVAC energy consumption. This design guide is not intended to be a step-by-step instruction book on how to design an HVAC system because adequate methodologies already exist for that. Rather, it is intended to be a step-by-step guide for clarifying those methodologies and integrating them into the overall design process for an entire house. It also addresses important topics particularly important to California, and specific to new-construction production homes.
1.0 Introduction
1.1 Purpose
The purpose of this Design Guide is:

1. To be a useful tool for the planning and implementation of a good residential HVAC design process and to assist during that process.
2. To encourage coordination between key players such as the architect, builder, structural engineer, framer, HVAC designer, HVAC installer, energy consultant, electrical designer, and plumber to minimize conflicts during the installation of a properly designed system.
3. To help identify how all of the designers, consultants, and trades people are impacted by the process and how they need to communicate in order to further minimize conflicts.
4. To explain and simplify current HVAC design methodologies so that they are more applicable to California production homes, more useful, and more widely used.
5. To address topics not well covered by existing HVAC design methodologies and provide guidance on issues that have been of particular concern in production homes.
1.2 Target Audience
The target audience for this design guide is:

1. HVAC designers, whether they work for the design-build contractor who will eventually be installing an HVAC system or a consulting engineering firm hired to provide a detailed design for others to follow.
2. Architects desiring to better incorporate the HVAC system into their house designs.
3. Builders desiring to better coordinate the installation of the HVAC system into their houses.
4. Related trades or consultants interested in better coordinating their work with that of the HVAC designer and installer.
1.3 Limitations
This design guide is not intended to walk you through all of the steps necessary to design an HVAC system. There are some very sophisticated design methodologies currently available which are well supported by trade and professional organizations (e.g., ACCA's Manuals J, S, and D). Unfortunately, they tend to be complex and overly precise. Also, the time necessary to properly use them (not to mention the time needed to learn them) does not fit well within the current design process. They tend to be slanted toward issues related to custom houses and retrofitting older houses. They also devote much time and text to building practices atypical of California residential new construction, such as basements and sheet metal ducting. This design guide is intended to supplement those methodologies and encourage wider use by making them more consistent with current practices in the construction of California production homes.
2.0 The Design Process

2.1 Designing the house around the HVAC System
Wouldn't it be nice if houses were designed around the HVAC system? If special consideration was given to the architectural design for making the HVAC system easy to design and install? If adequate space was provided for the furnace and all of the duct work? If the house was designed with thermodynamics in mind, to minimize stratification, cross-zone interference and other problems that are difficult and/or expensive to remedy with standard HVAC practices?

This is unlikely to happen without the input of a qualified HVAC designer, and the designer's involvement needs to happen early in the design process. More typically, a house is almost completely designed before an HVAC designer ever sees it, and the HVAC system is designed with an emphasis on fitting into the house rather than efficiently conditioning the house. Unfortunately, HVAC installers have become quite proficient at getting systems to fit into houses (whether they will work or not!). The result has been undersized and inefficient ducts that are difficult to balance and create unnecessary operating pressure on the fan motor. To compensate for the shortcomings of such duct systems, many installers have increased the size of the furnace, coil and condenser. This is the same logic as putting a larger engine in your car because the tires are too small. The car might go faster, but it sure wouldn't perform well or get very good gas mileage.
Often the reason given for a particular size duct being installed is, "that's the largest that would fit." If adequate space is a critical impediment to the installation of a properly designed system, then adequate space and clearance must be designed into the home by the architect and built into the home by the framer. No matter how well an HVAC system is designed on paper, the design efforts are wasted if the system cannot be installed in the field.

Typically a house goes through the following design process:

Conceptual Development: Determines price range, square footage, number of stories, lot sizes, general features and styles.

Preliminary Design: Develops floor plan sketches, number of bedrooms, major options, basic circulation and function locations, as well as some elevation concepts. Some early Value Engineering (VE) meetings.

Design Development: Preliminary structural, mechanical, electrical, plumbing and Title 24 energy compliance. Some VE meetings.
Construction Documents: final working drawings ready for bidding, submittal. Back checking and coordination by consultants. Some late VE meetings.
The HVAC designers need to provide input as early as possible. They need to tell the architect which architectural features cause comfort issues and are difficult or impossible to overcome with typical HVAC practices. They also need to make sure the architect allows adequate space to run ducts. Many architects have had to re-design plans enough times due to HVAC issues that they know fairly well how to accommodate HVAC items. Still, many problems commonly arise that could be avoided through earlier input and better coordination.
2.2 Matrix of Trades

The following matrix shows the main trades and consultants who are affected by the HVAC system. The first column lists the item or issue, and each subsequent column shows how each trade is affected by it.
Table 1: Matrix of Trades (summary)

Trades and consultants (columns): Architect; Builder/Framer/Structural Engineer; HVAC Installer; Energy Consultant; Electrical; Plumber; Drywall or insulation.

Items and issues (rows): FAU location; equipment size and load calculations; supply register locations; return grille locations; condenser locations and line set; attic access; routing B-vent; chases, soffits, and drops; thermostat location; equipment efficiency; combustion air; ducting (if any).

Representative entries: the FAU location involves roof pitch, furnace closets, and garage clearance for the architect; truss design, platform, clearance, closets, bollards, and attic access framing (plus structural impacts such as weight) for the builder/framer/structural engineer; the type of FAU (upflow or horizontal), clearance, and timing of installation for the HVAC installer; modeling the correct duct locations for the energy consultant; and possibly different insulation under the platform for drywall/insulation. Equipment sizing affects electrical loads and is affected by energy features; the condenser location raises clearance, yard accessibility (set-back), power, and service-disconnect issues; attic access requires a framed opening (with truss issues); routing the B-vent requires framed chases and a roof cap; register and grille locations raise aesthetics and noise issues; the thermostat requires wiring; and most items carry materials, labor, installation, and serviceability impacts for the HVAC installer.
The calculated heat gain and heat loss rates (load calculations) are just two of the criteria for sizing and selecting equipment. The load calculations may be prepared by: (1) the [Title 24] documentation author, and submitted to the mechanical contractor for signature; (2) a mechanical engineer; or (3) the mechanical contractor who is installing the equipment.

Title 24 does not specifically state how cooling loads should be considered when sizing an air conditioner. It doesn't even state that an air conditioner has to be installed at all. Most jurisdictions treat the Title 24 cooling loads as minimum sizing criteria. In other words, a system must be installed that has a cooling capacity that at least meets the Title 24 cooling load. In some climate zones, it is common practice to offer air conditioning as an option, so apparently the sizing criteria only apply if air conditioning is to be installed. [Note: 2005 amendments to Title 24 will offer an alternate sizing method.]

The following link will direct you to an on-line copy of the Title 24 Residential Energy Manual, Appendix C, California Design Location Data. A map of the California climate zones can be found in this appendix along with information on California climate zone requirements. Or, if you are connected to the internet, you can click on the link below:

Title 24 Residential Manual, Appendix C -- California Design Location Data
3.2
Energy Efficiency Standards for Residential and Nonresidential Buildings Publication Number: 400-01-024, August 2001
As homes get more and more efficient, especially in regard to window technologies, larger and larger homes can be served by a single 5-ton system. At some point, other considerations need to be taken into account. Things such as adequate airflow (air changes) need to be considered. Does a single 5-ton system at approximately 2,000 cfm have enough air-moving capability to adequately distribute air throughout a very large house, even if it can meet the steady-state cooling load? Also, how susceptible is the house to non-steady-state conditions? In other words, what happens if, in cooling mode, the temperature is inadvertently allowed to substantially exceed the comfort temperature? Will the system be able to catch up in a reasonable amount of time? This can be a critical customer service issue in production homes and is a topic that needs further research.

If the house can be served by a large single system (i.e., 5 tons) but has distinct zones (e.g., upstairs and downstairs), it is recommended that those zones be controlled independently (separate thermostats). This can be accomplished by multiple systems or by a single system with zonal controls. See Section 4.4 for more on zonal control.
In California residential new construction the following conditions are typical:

1. Outdoor temperature: This is the temperature of the air that is blowing through the condenser to cool the refrigerant. It is usually the same outdoor temperature that is used for the cooling load calculations, unless it is known that the condenser will be located in a hotter location such as on a roof.

2. Indoor entering wet bulb and dry bulb: These describe the condition of the air blowing across the coil and are usually assumed to be the same as the indoor conditions used in the load calculations. Title 24 cooling loads are calculated using an indoor temperature (dry bulb) of 78 deg F. Some designers use a lower temperature, such as 75 degrees, to be safe. (Note: lower indoor temperatures drive up the cooling load and decrease the calculated capacity, potentially requiring a larger system.) Except for some coastal areas, California is considered a dry climate. A safe indoor wet bulb temperature is 65 degrees F. This corresponds to 78 degrees F and 50% relative humidity on the psychrometric table. (Note: The higher the humidity, the higher the wet bulb temperature, and the lower the cooling capacity will be.)
The wet bulb temperature (WBT) relates relative humidity to the ambient air or dry bulb temperature. When moisture evaporates, it absorbs heat energy from its environment in order to change phase (via latent heat of vaporization), thus reducing the temperature slightly. The WBT will vary with relative humidity. If the relative humidity is low and the temperature is high, moisture will evaporate very quickly so its cooling effect will be more significant than if the relative humidity were already high, in which case the evaporation rate would be much lower. The difference between the wet bulb and dry bulb temperature therefore gives a measure of atmospheric humidity.
Dry bulb temperature refers basically to the ambient air temperature. It is called dry bulb because it is measured with a standard thermometer whose bulb is not wet - if it were wet, the evaporation of moisture from its surface would affect the reading and give something closer to the wet bulb temperature. In weather data terms, dry bulb temperature refers to the outdoor air temperature.
3. Airflow across the coil: This is typically the same as the design airflow for the system. It comes from the furnace airflow tables at the design static pressure (usually between 0.5 and 0.7 inches water column; 0.6 is a reasonable number to use, but it depends on the specific design criteria) and ranges from 350-425 cfm per ton of the furnace.
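To put numbers on that rule of thumb (illustrative figures only): a nominal 4-ton system at 400 cfm per ton is designed around

$$4\ \text{tons} \times 400\ \text{cfm/ton} = 1600\ \text{cfm},$$

and across the full 350-425 cfm-per-ton range a 4-ton system would be expected to move roughly 1,400 to 1,700 cfm at the design static pressure.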
The following basic concepts are good things to keep in mind when designing (or evaluating the performance of) a system:

1. As the outdoor design temperature goes up, the cooling capacity of the AC unit goes down (and the load on the house goes up). This is because the outdoor air is the heat sink that the air conditioner dumps the heat extracted from the indoor air into. As the outside air gets warmer, it is harder for the air conditioner to dump heat into it.

2. As the indoor dry bulb temperature goes down, the cooling capacity goes down. This is because it is harder to extract heat from colder air.

3. As the indoor wet bulb temperature goes down, the cooling capacity goes down. This is because part of the coil's rated capacity is used up condensing moisture out of the air; when the entering air is drier (lower wet bulb), less of that latent capacity can be realized.

4. As the airflow across the coil goes down, the cooling capacity goes down. This is because with less air passing across the coil, there is less opportunity for the coil to extract heat from the air stream.
3-way, etc.), pressure drop, face velocity, noise criteria, and throw distance. In residential new construction, grilles are often sized based on the size of the duct serving them, which is altogether inadequate. Similarly, grille types are often selected based on personal preference and sometimes faulty reasoning. Much more thought should go into this process.

In a typical, square-ish room such as a secondary bedroom, there are four basic locations for supply registers (five if you count floor registers, which are almost always located under a window). The four main locations are shown in Figure 1.
A study on the impacts of energy consumption, comfort and supply register location was performed as part of the research project that included the writing of this design guide. This study evaluated and compared the most common of these locations: 2-way over a window, 3-way near an interior wall, and high sidewall opposite a window. See Section 4.2 for details on this study.

Given a choice, the results of this study provide important considerations. Sometimes, however, the geometry of the room dictates where you must place registers. For example, in a long narrow room where the exterior wall is on the narrow dimension, you may be forced to put a register over the window because the interior wall is too far away. Also, structural and architectural constraints such as locations of chases, floor joist directions and beams may dictate register locations. Any of the locations mentioned above can be made to work adequately well if certain considerations are made. Whatever the register location, the following considerations should be emphasized:

1. Register over window or on exterior wall. Use a 2-way register oriented parallel to the window/exterior wall. This will create a curtain or sheet of supply air parallel to the exterior wall, and the air will naturally move away from the wall and mix with the air in the room. Using a 3-way register pointed away from the window/exterior wall will throw the air back into the room too quickly and may not adequately condition the area directly in front of the window. It may also short-circuit the airflow by throwing it back into the natural return path before it has a chance to mix with the room air. A 3-way register located near a window but pointed directly at it will blow air directly on the window. This will heat and cool the window, which serves little benefit when the purpose is to heat and cool the air inside the room. In fact, this most likely wastes substantial energy.

2. Register near an interior wall. Use a 1-way or 3-way register with the primary direction toward the window/exterior wall. It is important to ensure that the register's throw distance is adequate to reach near the window/exterior wall.

3. Register centered in a room. Use a 4-way register. 4-way registers deliver the air equally in all four directions. Consideration must be given to interference with light fixtures or ceiling fans. If this is the case, then locate the register an aesthetically appropriate distance away from the fixture, but toward the exterior wall.

4. High sidewall registers. Use a bar-type register that throws air perpendicular to the face of the register. Point the register toward the window/exterior wall. As with a register near an interior wall, it is important to ensure that the register's throw distance is adequate to reach near the window/exterior wall. Bar-type registers located in a vertical wall typically have much greater horizontal throw distances than 3-way or 1-way ceiling registers, and better overall airflow characteristics in general (more cfm per square inch, quieter, etc.).
The basic things to keep in mind when selecting and locating a register are:

1. Good air mixing: you want the supply air to mix in with the room air as much as possible. This is aided by directing the air in the opposite direction of the natural path back to the return (e.g., out the door).

2. Good air distribution and no stagnant areas: you want the supply air to reach all of the occupied areas of a room, especially areas close to a load (e.g., a window). Throw distance is an important consideration for this.

Determining sub-zones (trunks) and the use of balancing dampers

In production building, a designer is typically designing the system for a home that may be built in several different orientations. (See Section 4.3 for discussion on designing for multiple orientations.) The system is typically designed for the worst-case orientation, with consideration for the airflows needed in other orientations. The system must at least be able to be easily balanced to work in all orientations. A strategy that helps accomplish this is to divide the main zones of the house into sub-zones. These sub-zones are areas in the main zone that will be affected similarly when the house is in an orientation other than worst case. For example, Figure 2 shows a basic single-story, single-zone house in its worst-case orientation.
If the house is rotated 180 degrees, bedrooms 2 and 3 will go from the south side of the house to the north side of the house and probably need much less air. If these two rooms are on the same trunk, this can be accomplished easily by using a manual balancing damper located right at the supply plenum. The family/kitchen area, living/dining area, and master bedroom may be treated similarly.
Figure 3 shows a reasonable layout and approach to accomplish orientationdependent balancing using manual balancing dampers that are easily accessible.
Routing ducts

The actual routing of ducts is a function of the number and location of supply registers (and to a lesser extent return grilles), architectural and structural constraints, duct size, duct length, and other practical issues such as preferred types of fittings (t-wyes vs. duct-board transition boxes). In a single-story house with ample attic space this is pretty straightforward: you can locate the registers first and then simply sketch the ducts in. In a multiple-story house, this is a much greater challenge, at least for all but the top floor. Assuming the system serving the first floor is located in the attic (a typical scenario), the ducts serving the first floor must pass vertically through the upper floor(s), and then horizontally (unless you are lucky) to the ceiling registers on the first floor. There is usually a great deal of framing (such as trusses, blocks, joists, beams, headers, and top/bottom plates) between the furnace and the register. In fact, very often the framing is the deciding factor in determining where registers are ultimately placed. The following are some ideas for getting ducts from one point to another.

Vertical Duct Runs

Chases and voids

These are shafts between walls, either created intentionally (chases) or incidentally (voids), that can be used to run ducts from the attic, through the upper floor(s), to the lower floor(s).
Figure 4: Example Void in Interior Stair Chase which often occurs adjacent to round room or stairways
Figure 5: Example Void in Dead Space (where spaces of unequal size or shape are adjacent to each other)
Samples of Chases
Figure 6: Example Exterior Chase Voids can be found in the bump outs of exterior architectural details, but care must be taken to ensure that that particular architectural detail occurs in all elevation styles
Figure 7: Walk-In Closet with Interior Chase Chases can be created in corners of closets. The dead corner of a walk-in closet is an ideal place because it has minimal impact on hanging space and it provides a convenient way for the shelf and pole to be supported.
Figure 8: Closet Chase Example Chases may also be added to either end of a flat closet. If given the choice, it is preferable not to have a chase adjacent to an exterior wall when the roof slopes down to that wall (i.e., hip roof), because the roof can interfere with the duct getting down through the top of the chase. If this cannot be avoided there are various ways to drop the ceiling in the closet to better accommodate the duct.
Figure 9: Media Chase A good location for creating chases is in a media niche
Figure 10: Water Closet Chase Another good location for creating chases is in a water closet
Figure 11: Chimney Chase Chases can also be in chimneys, even as false chimneys
Riser cans

These are rectangular ducts, usually sheet metal, which fit in a wall cavity between the studs. They are relatively common, but due to potential noise problems, high resistance to airflow (high equivalent length), structural constraints, and installation costs, they are typically used only as a last resort. If care is taken in their design and construction, they can however be a viable solution to many routing problems. You should keep the following things in mind if considering riser cans:

1. Noise. Thermal expansion and contraction can cause sheet metal riser cans to make substantial amounts of noise. This is called "oil canning" and can manifest itself in clicking, popping, clanking, squeaking and other annoying noises. Many contractors have had to tear out riser cans due to customer service complaints. This is a very expensive and messy retrofit. Some contractors will flat-out refuse to install them. Avoid putting riser cans in bedroom walls if at all possible. Some precautions for preventing noise are using heavier-gauge metal, caulking between all metal-to-metal seams, and using lead tape as a sound dampener. You might also consider using duct board rather than sheet metal. It requires a larger cross-sectional area than sheet metal but is virtually silent and has much better insulation properties.

2. High resistance to airflow. The available space in a typical (16" on center) 2x4 or 2x6 stud wall is 3"x14" or 5"x14". The typical size riser cans used in these walls are 3"x14" and 5"x14", which correlate to round flex duct equivalent sizes of 8" and 9", respectively. The high resistance to airflow comes not so much from the riser can itself, but from the round-to-rectangular and rectangular-to-round transitions. It is highly recommended that smooth, rounded transitions be used where possible. It is highly discouraged to simply cut a round hole in the side face of the riser can.
3. Structural constraints. Because the riser can takes up the entire stud bay in a wall, it is necessary to cut out a 3"x14" or 5"x14" piece of the top and bottom plates. This is never allowed in a structural shear wall and rarely allowed on an exterior wall (not to mention the requirement for at least R-13 insulation in the wall and R-4.2 insulation on the riser can itself, if not located within the conditioned shell). One solution is to double the wall, install the riser can in one side, and leave the other intact.
Care must be taken to ensure that no truss sits on top of the stud bay that you intend to use, and the stud bay must line up with the floor joists below. The use of riser cans requires careful coordination between the HVAC subcontractor, the architect, the structural engineer, and the framer.

Horizontal Duct Runs

Floor Joist Bays

These are the spaces between the parallel floor joists. California builders often use wooden I-beam type floor joists.
Common sizes (heights) are 12", 14", and sometimes 16". While it is possible to cut holes in floor joists as big as the height of the web, there are strict limitations on this, and joist penetrations must always be approved by the structural engineer. Even if you do cut the I-joists, it can be difficult to pull flex duct through the holes. The other coordination that must take place is with the trades that will be sharing this space, especially plumbers. Gas piping, sanitary drains and water piping can all be run either perpendicular to or parallel with the I-joists, and can interfere with ducts. Some builders use floor trusses rather than I-joists. These consist of diagonal framing members, similar to a roof truss, rather than solid webbing.
These are much more accommodating of ducts without cutting holes, but similar coordination must take place with the plumbers. One important thing to keep in mind when running ducts in floor joist bays is that the best practice for connecting to a ceiling register may require a special transition fitting rather than simply making a 90-degree bend in the duct.
Dropped Ceilings and Soffits
Sometimes the only way to get past a beam, wall or floor joists is to create a dropped or false ceiling below the obstruction that provides room to run a duct. When considering these as an option, realize that they can be relatively expensive to build and often have aesthetic disadvantages because they lower the ceiling height. Usually lowering the ceiling in a small room such as a bathroom, laundry room, or hallway is not a big problem. The total drop required to run ducts is the outer diameter of the duct plus 3" for the framing. In smaller rooms the dropped ceiling can be flat studded (with the 2x4s turned sideways), and then you only need to add 1" to the outer diameter of the duct. Most builders and architects do not like to go below an 8' ceiling height, but may sometimes allow a 7'-6" ceiling height if absolutely necessary.
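Where the drop arithmetic matters, it is easy to check. The sketch below is a minimal illustration of the rules of thumb just given; the duct outer diameter used is a hypothetical figure:

def required_drop_inches(duct_od_in, flat_studded=False):
    # Conventional framing adds ~3"; flat-studded (2x4s turned sideways) adds ~1".
    framing_in = 1.0 if flat_studded else 3.0
    return duct_od_in + framing_in

# Assume a flex duct that is roughly 12" across its outer jacket:
print(required_drop_inches(12.0))        # 15.0" drop: a 9' ceiling becomes about 7'-9"
print(required_drop_inches(12.0, True))  # 13.0" drop with flat-studded framing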
Soffits are similar to dropped ceilings except that they are localized and resemble a horizontal chase. Soffits provide a boxed-in area where a wall meets a ceiling, as an alternative to dropping the entire ceiling. They are common in garages. When building a soffit in a garage, care must be taken to maintain the integrity of the 1-hour fire separation between the garage (Group U occupancy) and the house (Group R occupancy).
recommended range, resulting in poor performance and premature equipment failure. In addition, the airflow will be too low, decreasing the performance of the system and possibly reducing cooling capacity to below the cooling load (in effect making the air conditioner too small). A design static pressure that gives good airflow and results in reasonably sized ducts is 0.6 iwc. ACCA utilizes a value called Available Static Pressure in its important equations. It is the operating static pressure across the furnace less the static pressure drops of various items such as the coil, filters, heat exchangers (external to the furnace), registers, grilles, etc. The values for all of these pressure losses are given in Manual D.
- Total CFM. Total Cubic Feet per Minute (CFM) can be determined by picking the design static pressure and referring to the furnace manufacturer's airflow table for the airflow at that static pressure. Use high speed for cooling. The total CFM is used to determine actual design cooling capacity. This airflow is distributed to each room in proportion to that room's load. As long as the ducts are sized properly, this total airflow will be met or exceeded in the field.
- Equivalent lengths. The pressure drops of duct and duct fittings are accounted for using equivalent lengths. They are expressed in units of feet, which makes sense for a length of duct but is a bit unusual for a fitting such as a t-wye or elbow. It is simply a way of accounting for the pressure drop of a fitting by equating it to an equivalent length of duct. Equivalent lengths are used in the calculation of friction rate.
- Friction rate. The friction rate is the critical factor for determining what size duct is needed to provide a certain amount of CFM. The units are inches of water per 100 feet; it describes the pressure loss for every 100 feet of duct. The equation for friction rate is fairly simple:
FR = (Available Static Pressure x 100) / Total Equivalent Length
cfm, the area of chart 7 that is commonly used is very small and the accuracy is questionable. It is recommended that a designer not using the software use a good quality duct slide rule, such as the wheel-type duct-sizing calculator published by ACCA. Several duct slide rule manufacturers recommend that you use a friction rate of 0.1. This only works if you can design the system to ensure the correct available static pressure and total equivalent length. However, simply using a friction rate of 0.1 and the room-by-room airflows generated by Manual J would still be better for a new residential construction home than most rules of thumb currently in use. Here are some examples using the friction rate equation and friction chart:
Example 1: The available static pressure (ASP) is calculated to be about 0.25 iwc. The total equivalent length (TEL) is estimated to be about 250 feet. The equation for friction rate (FR) yields a value of 0.1. If 130 cfm are required, the duct calculator shows that a 7" flex duct is not adequate, so an 8" must be used. In the field, it is determined that the duct cannot be run as expected and a new route is determined, which adds 30 feet of extra length to the duct. Will this affect the duct sizing? In this case, no. Adding 30 feet changes the friction rate to 0.09, and using the duct calculator, an 8" duct is still adequate. In fact, an 8" duct would work as long as the friction rate was 0.065 or higher. This means that up to 130 feet of extra length (actual or equivalent) could be added and the duct would still supply at least 130 cfm. This is not always the case, however. Each duct diameter can handle a range of airflows, and it depends on how close you are to the upper limit of that range. Theoretically, adding just one foot of extra length could require increasing the duct size.
Example 2: Using the same starting point as Example 1 (ASP = 0.25, TEL = 250 and FR = 0.1), the builder wants to offer electronic filters and needs to know if they would affect the duct sizing. The filter manufacturer lists a static pressure drop of 0.10 iwc. This changes the friction rate from 0.1 to (0.25 - 0.10) x 100 / 250 = 0.06, which would require that a 9" duct be used to deliver 130 cfm; and because the filter affects the entire system, many other ducts may be affected as well. This scenario assumes that the designer intends to maintain the operating static pressure of 0.6 iwc in order to maintain a certain total airflow. A different approach would be to keep the ducts the same size and let the static pressure change. For the ducts to stay the same size, the friction rate must not change. For this to be true, the available static pressure needs to stay the same (assuming that the equivalent lengths are not going to change, in other words that the basic duct layout does not change), which means that the starting static pressure
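The arithmetic in both examples follows directly from the friction rate equation. Here is a minimal sketch; the duct diameters themselves still come from a friction chart or duct calculator, which this does not replicate:

def friction_rate(asp_iwc, tel_ft):
    # Friction rate in inches of water column per 100 feet of duct.
    return asp_iwc * 100.0 / tel_ft

print(friction_rate(0.25, 250))         # 0.1   (Example 1 base case)
print(friction_rate(0.25, 250 + 30))    # ~0.09 (30 extra feet of duct)
print(friction_rate(0.25 - 0.10, 250))  # 0.06  (electronic filter uses 0.10 iwc of ASP)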
across the fan has to go up by the same amount that the electronic filter will use up. If we assume an operating static pressure across the fan of 0.7 iwc (0.6 originally + 0.10 for the filter), the most obvious impact will be that the airflow will go down. This can be quantified using the furnace fan flow table. What needs to be confirmed is that the airflow is still adequate to meet the sensible cooling capacity (remember that as airflow goes down, so does cooling capacity). Maximum air velocities must also be confirmed, as must the furnace manufacturer's recommended operating range for static pressure.
As part of the task of developing this design guide, a case study was conducted to evaluate the impact of furnace and register placement on energy, comfort, and quality. The results of that study, as related to furnace location, are:
- Furnace location has little impact on energy consumption and effectiveness of the HVAC system;
- One difference between an attic and a garage location is that the furnace in the garage tends to have somewhat longer ducts, resulting in more conductive losses/gains and more resistance to air flow; and
- More fan power consumption is required due to the longer duct runs, but this can be compensated for by using larger ducts, if they can be accommodated.
Detailed information on this study is available from the California Energy Commission as Attachment 2 to the Final Report for the Profitability, Quality, and Risk Reduction through Energy Efficiency program. The report is also available through the Building Industry Institute (BII) or ConSol.
4.2 Register Location
As part of the task of developing this design guide, a study was conducted to evaluate the impact of furnace and register placement on energy, comfort, and quality. Three supply register configurations were evaluated using a computational fluid dynamics (CFD) model for both heating and cooling. These three configurations represent the most common practice in California production homebuilding: register centered in the ceiling, register over window, and high sidewall. Two return locations, ceiling and low-wall, were also evaluated. This study used a computer simulation and is not a perfect model of reality; for example, interior furnishings were not included in the model. However, the results do provide a reasonable picture that matches well with real-world experience. Detailed information on this study is available from the California Energy Commission as Attachment 2 to the Final Report for the Profitability, Quality, and Risk Reduction through Energy Efficiency program. The report is also available through the Building Industry Institute (BII) or ConSol.
The studies indicate that the most energy efficient location, with no negative impact on comfort, is to place the supply register on a high sidewall. The study results show that this location provides the best mixing and is the preferred location. In general, high wall registers are a good idea since they allow the air stream to mix with room air above the heads of the occupants and minimize air velocity and temperature non-uniformities in the occupied part of the room. There are other considerations in selecting the supply register location, and these are covered in Step 4 of the Overall Design Method. The figure below is an example of the information generated by this study. This example shows the duty cycle for the three supply configurations with a ceiling return under cooling conditions. The duration of the HVAC ON time is notably shorter for the in-wall supply. Also note that the total duty cycle time for the in-wall configuration is nearly 25% longer than in the other cases.
[Figure: Temperature at Thermostat (F) vs. Time (mins); series show AC ON/OFF cycles for the ceiling-interior, over-windows, and in-wall supply configurations]
Figure 18: ON/OFF run times for three cooling configurations with ceiling returns: supply register interior ceiling; ceiling over windows; and in-wall
4.3 Orientation
In a cooling dominated climate, which includes most of California, orientation has a dramatic impact on equipment sizing because most homes, especially new production homes, have the largest concentration of glazing on the back of the home. The required cooling equipment of a typical 2300 square foot home can change from 3.5 tons to 5 tons, roughly a 40% increase in capacity, just by rotating the house from south-facing to east-facing. The orientation of a home, or more precisely its windows, is what determines the majority of its heat gain. East- and west-facing windows have the greatest heat gain because the sun is lower in the sky and shines through the window at an angle closer to perpendicular, increasing the amount of radiation entering the home. Sun angle and window orientation are accounted for in the heat transfer multipliers used in the load calculation methods. Heat transfer multipliers (HTM) are values that, when multiplied by the area of the window, produce the heat gain of that window, including conductive as well as radiative heat gains. The units are Btuh/sf. The following HTMs for a dual-pane, low-e, aluminum-framed window illustrate the impact of orientation on heat gain.
Orientation   HTM (Btuh/sf)
North         21.4
East/West     61.0
South         32.8
SE/SW         53.1
NE/NW         44.3
As this shows, each square foot of east- or west-facing glass has nearly twice the heat gain of south-facing glass and nearly triple that of north-facing glass. Most typical homes tend to have the majority of the glass on the back of the house; this is where most of the sliding glass doors and large family room/great room windows are typically located. When so much of the glass is loaded on one side of the house, the variation in total cooling load is much greater between orientations. Conversely, if the glazing area of a house were exactly evenly distributed on all four sides of the home, the total cooling load would be equal in all orientations. This is rarely, if ever, the case in typical production home design. Because the majority of homes built in California are production homes using the master plan concept (several plan types used over and over, and built multiple times in various orientations), the variation between best- and worst-case orientation must be considered. Standard practice is to design for worst-case orientation. This is an acceptable practice for the vast majority of plans. The risk of this approach is that the equipment in the best-case orientation is oversized to a degree that can negatively impact effectiveness and efficiency. Not only does orientation impact the total cooling load of a home, it has an even greater impact on an individual room's load. The key to a good duct design is even distribution of air in amounts proportional to the load of each room. If a house is built in multiple orientations, then each of its rooms can and will face any orientation. This means that an individual room's calculated cooling load can change by a factor of nearly three (recall the difference between the North HTM and the East/West HTM), which in turn means that a room's airflow requirement can nearly triple. The net result is that duct sizing requirements for a given room change with orientation, but it is extremely impractical to require different duct layouts for a single master plan depending on what orientation it is to be built in. Thus, the worst-case orientation is used even though it may not provide the best layout for all orientations.
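A quick worked example of the HTM arithmetic, using the table above and a hypothetical 40 sf window:

HTM = {'N': 21.4, 'NE': 44.3, 'E': 61.0, 'SE': 53.1,
       'S': 32.8, 'SW': 53.1, 'W': 61.0, 'NW': 44.3}  # Btuh/sf, dual-pane low-e

area_sf = 40.0  # hypothetical window
for orientation in ('N', 'S', 'E'):
    print(orientation, area_sf * HTM[orientation])  # 856, 1312, 2440 Btuh

print(HTM['E'] / HTM['N'])  # ~2.85: east glass gains nearly three times north glass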
Best Practices
The best practice for evaluating and implementing orientation-dependent features in a residential HVAC design is to assess the potential equipment and duct-sizing impacts for all eight of the cardinal and semi-cardinal orientations that may be built for a given plan. To do this, the designer should obtain a site/plot map of the subdivision and create a list of all possible orientations (to the nearest 45 degrees) for the project. It is possible that even in a large project the worst-case orientation may not be plotted for one or more plan types. Once this information has been determined, the loads can be calculated for just the orientations to be built. If the loads result in a very high variation in equipment sizing (1 ton or more per system), then the designer should confer with the builder/developer to see if it would be cost-effective to vary the equipment size by orientation. It is recommended that only the condenser tonnage be varied, and not the furnace or coil. Leaving the furnace and coil the same for all orientations allows the system airflow to remain essentially the same and reduces the potential need for varying duct sizes.
Most manufacturers allow a 1-ton or greater variation between condenser and furnace/coil. In other words, it is not uncommon to match a 4-ton condenser with a 5-ton furnace and coil, or a 3-ton condenser with a 4-ton furnace and coil. This allows the designer to have up to three levels of cooling capacity for a given duct layout. For example, a single plan could utilize a 3/4/4, a 3.5/4/4 or a 4/4/4 system (condenser/coil/furnace) with sensible cooling capacities of around 26,000 Btuh, 30,000 Btuh and 34,000 Btuh. All of these systems would deliver approximately 1600 cfm. Once the system airflow is determined, the duct sizes can be determined and evaluated for all orientations. Currently this is a very tedious exercise because it must be done manually: eight duct tables must be printed out and each trunk and branch evaluated for the maximum size.
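One way to mechanize that selection is sketched below. The tiers and capacities are those from the example above; the cutoff logic is an assumption, and actual selection must come from manufacturer performance data:

# (condenser/coil/furnace, sensible cooling capacity in Btuh)
TIERS = [('3/4/4', 26000), ('3.5/4/4', 30000), ('4/4/4', 34000)]

def pick_system(sensible_load_btuh):
    # Smallest tier whose sensible capacity covers the load.
    for name, capacity in TIERS:
        if capacity >= sensible_load_btuh:
            return name
    return None  # load exceeds the largest tier

print(pick_system(25500))  # 3/4/4
print(pick_system(31000))  # 4/4/4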
Example:
The following example is for a 30-lot subdivision with three plan types. Plan 1 is a 2000 square foot single-story home. Plan 2 is a 2400 square foot two-story home. Plan 3 is a 2850 square foot two-story home. Each plan is to be built 10 times as shown below.
Lot                1   2   3   4   5   6   7   8   9   10  11  12  13  14  15
Plan               1   2   3   1   2   3   1   2   3   1   2   3   1   2   3
Front Orientation  N   N   NE  NE  NE  E   E   NE  NE  N   NW  NW  NW  W   W

Lot                16  17  18  19  20  21  22  23  24  25  26  27  28  29  30
Plan               1   2   3   1   2   3   1   2   3   1   2   3   1   2   3
Front Orientation  SW  SW  SW  S   S   S   SE  SE  SE  E   NE  NE  N   N   E
The loads and equipment sizing can be tabulated as shown below.
Table 4: Plan 1 Loads and Equipment Sizing
Orientation   Lots        Sensible Load (Btuh)   Cond/Coil/Furnace (tons)
N             1, 10, 28
NE            4
E             7, 25
SE            22
S             19
SW            16
W             (none)
NW            13
Plan 2
Orientation   Lots        Sensible Load (Btuh)   Cond/Coil/Furnace (tons)
N             2, 29
NE            5, 8, 26
E             (none)
SE            23
S             20
SW            17
W             14
NW            11
Plan 3 (Downstairs and Upstairs Systems)
Orientation   Lots        Downstairs Sensible Load (Btuh) / Cond/Coil/Furnace (tons)   Upstairs Sensible Load (Btuh) / Cond/Coil/Furnace (tons)
N             (none)
NE            3, 9, 27
E             6, 30
SE            24
S             21
SW            18
W             15
NW            12
Plan 1: Since only lot 19 had a load low enough to make it a 3/4/4, it is recommended that a 3.5/4/4 be used there and on the other lots where appropriate; the remaining lots would get 4/4/4 systems. Plan 2: The sizing shown is a reasonable breakdown. Note that there is no such thing as a 4.5-ton system; if there were, there would be three sizes of systems. Plan 3: The sizing shown is a reasonable breakdown. Note that all of the lots had the same equipment sizing upstairs. This is because the second floor typically has a more even window distribution.
Note that this approach would result in the opportunity to downsize 10 out of 40 condensers by at least one-half ton, at a substantial cost savings. An example of how the front orientation of the house affects the duct layout for an example house is tabulated below. The numbers are the diameters (in inches) of the branch ducts serving the rooms shown. The numbers vary because as the house turns, the orientation of each room changes, which changes each room's load and, subsequently, its airflow. Trunk ducts are not shown but are affected similarly.
Room:  Living Dining Living Family Family Kitchen Nook Den Bath3 Laundry Mbed Mbath Mwic Bed2 Bath2 Bed3 Bed4
N      7      7      7      7      7      7       7    6   4     5       8    6     4    6    4     6    6
NE     6      6      6      7      7      7       7    6   4     5       8    6     4    6    4     6    6
E      6      6      6      7      7      7       7    6   4     5       8    6     4    6    4     6    6
SE     7      7      7      7      7      7       7    6   4     5       8    6     4    6    4     6    6
S      7      7      7      7      7      7       7    5   4     5       7    6     4    6    4     6    6
SW     7      7      7      7      7      7       7    6   4     5       8    6     4    6    4     6    6
W      6      6      6      7      7      7       7    6   4     5       8    6     4    5    4     6    6
NW     7      7      7      7      7      7       7    6   4     5       8    6     4    6    4     6    6
Max    7      7      7      7      7      7       7    6   4     5       8    6     4    6    4     6    6
As one can see, the required duct sizes never vary by more than one size for any particular room, and many rooms are unaffected by orientation. This particular house had a fairly good fenestration distribution; as glazing gets more loaded onto any single side, the variation in duct sizes gets greater. Designing to the maximum size for each room does not result in a large amount of change for most homes, but it does ensure that every room will have ducting large enough to provide its fair share of air in all orientations.
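The Max row is simply a per-room maximum across the eight orientation runs. If the orientation-by-orientation duct tables can be exported from the sizing software, the tedious manual comparison could be scripted along these lines (three orientations shown; data from the table above):

rooms = ['Living', 'Dining', 'Living', 'Family', 'Family', 'Kitchen', 'Nook',
         'Den', 'Bath3', 'Laundry', 'Mbed', 'Mbath', 'Mwic', 'Bed2', 'Bath2',
         'Bed3', 'Bed4']
sizes = {  # branch duct diameters in inches, one per room, in table order
    'N': [7, 7, 7, 7, 7, 7, 7, 6, 4, 5, 8, 6, 4, 6, 4, 6, 6],
    'S': [7, 7, 7, 7, 7, 7, 7, 5, 4, 5, 7, 6, 4, 6, 4, 6, 6],
    'W': [6, 6, 6, 7, 7, 7, 7, 6, 4, 5, 8, 6, 4, 5, 4, 6, 6],
    # ...remaining orientations omitted for brevity
}
max_sizes = [max(column) for column in zip(*sizes.values())]
print(list(zip(rooms, max_sizes)))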
Balancing
Once the home is built according to the mechanical plans, the next challenge is to properly balance the system. Because the system is designed to accommodate any and all orientations, some adjustment will be necessary for each and every home by means of in-line manual balancing dampers. In most cases, these adjustments will be very small. The number of manual balancing dampers can be reduced, and their locations made more accessible, if the duct system is laid out carefully. A simple four-trunk system can work adequately for most homes: the house is divided into four sub-zones, where sub-zones are one or more adjacent rooms whose loads are impacted in a similar fashion as the house rotates and are otherwise thermodynamically similar. Each sub-zone is served by a supply trunk that is controlled by a single balancing damper. The more complex a home's floor plan is, the more sub-zones it will need. It is common practice to leave all of the manual balancing dampers fully open until the homeowner has lived in the home for a while. If areas of excess airflow (over-conditioning) occur, the dampers controlling those areas can be closed down. It is usually not necessary to precisely balance a home to the exact design flows, because individual homeowner preferences and use patterns sometimes outweigh the design assumptions.
4.4 Zonal Control
Zonal control typically refers to a single HVAC system with two or more independent zones. This independence is accomplished through a control panel and motorized dampers that send air to the zones that require it and limit, or stop altogether, the air going to zones that do not. Each zone has its own thermostat. As homes get more and more efficient, the size of a home served by a single system gets larger and larger. The larger a house is, the more difficult it can be to adequately control the indoor temperature with a single thermostat. Zonal control is an effective way to add zones without the expense of multiple systems. Zonal control should be used for comfort only; it will not reduce the envelope load, nor will it increase the total capacity of the system at peak conditions. In deciding whether zonal control is needed, the designer must consider the diversity of the home. For example, a 3000 square foot one-story house that is sprawling and spread out, with many wings and appendages, would be more likely to need zonal control than a larger but more compact house with the exact same cooling load. The designer must also consider the relative airflow requirements between the two zones as they change between heating and cooling modes. For example, a two-story house may require more air downstairs than upstairs in heating mode, but that may reverse in cooling mode. Because the ducts are sized for cooling airflow (due to the higher fan speed), the home may need to be balanced seasonally, by closing dampers and/or registers, in order to get adequate comfort distribution between the upstairs and downstairs in heating mode. This is not an unreasonable expectation, but a zonal control system would help alleviate this effort. If zonal control is not installed in this situation, the occupants should be informed of the seasonal balancing requirement and educated on how to perform it. For more discussion on zonal control, see Section 3.2.1, The Overall Design Method, Step 1.
4.5 Window Loads
Windows account for a very large fraction of cooling and heating loads in a building. The glazing type, the amount of glazing, and the insulation and shading devices used all contribute to a significant portion of the overall cooling loads (mainly solar gains) and heating loads (conductive heat losses) in a building. As an example, a 1940 square foot home with an 18.6% window-to-wall ratio was analyzed in four climate zones (zones 7, 10, 12, and 14) and four orientations using Micropas (Enercomp, Inc.). Heating loads attributed to glazed surfaces remained approximately equal (16.5% - 18.0%, depending on climate zone). Cooling loads varied between 32.0% and 41.3%, depending on both orientation and climate zone. Because windows represent such a high percentage of heating and cooling loads, it is important that their impact be accurately quantified. The conductive heat loss through a window is given by:
q = U x A x ΔT = (A x ΔT) / R
In this equation, U is the overall window U-value, including glass and frame; A is the area of the rough opening of the window; and ΔT is simply the difference between the indoor and outdoor winter design temperatures. The ability of the UAΔT formula to predict actual heat losses is limited by the accuracy of the input parameters. Area is not a problem, since it is a fixed value. U-value is limited by the ability of generic window descriptions to accurately reflect the actual U-values of all the different brands of windows that may meet the generic definition. If the make and model of the window to be installed is known, and it is a window that has been tested to National Fenestration Rating Council (NFRC) standards, there will be a reasonably accurate U-value that can be used for that window. Even tested values have their limitations: the U-value within a particular make and model of window will vary by window size because the frame-to-glass ratio changes. As a reasonable simplification, and to keep the cost of testing windows down, only a single common-size window is tested, and that tested U-value is used for all windows in that product line. The actual ΔT (difference between the indoor and outdoor winter design temperatures) can vary somewhat from the number used in the calculations. Of course, outdoor temperature varies with season and time of day, but the ΔT used in the calculation can be wrong even at the times when it is supposed to be correct. To understand this, it is important to understand how these temperatures are selected. The indoor design temperature is the desired indoor temperature; it can be thought of as the thermostat set point. However, even when a thermostat reads a certain temperature, 70 degrees for example, it will not be 70 degrees everywhere in the house. There can be places where the temperature is substantially higher or lower than 70 degrees. For example, supply air registers are commonly placed directly above or below windows. When the heater is operating, hot air of up to 150 degrees is blowing on or near the window. With an
outdoor temperature of 30 degrees, this yields a real ΔT of 120 degrees. If the design temperatures were assumed to be 70 degrees indoors and 30 degrees outdoors, the real ΔT is three times the design ΔT of 40 degrees, tripling the heat loss. The outdoor design temperature is a statistically derived temperature based on historical temperature data collected at a nearby data collection point; there are hundreds of these throughout the state. Because it is a statistically derived value, rather than, for example, the coldest temperature on record, it is understood that this temperature will, by definition, be exceeded a certain number of hours per year. The statistical value used is chosen to make these excessive temperatures (i.e., temperatures colder than the assumed outdoor design temperature) an acceptably rare occurrence. Variations from this data can be caused by microclimates or by normal (or abnormal) macroclimatic changes, and will throw off the statistical accuracy of the load calculations, but problems with the indoor temperature as described above have an even greater impact on the statistical accuracy of the loads. In other words, the actual number of hours that the real heat load exceeds the calculated heat load may be dangerously high; the heater may be unable to maintain a comfortable indoor temperature during long periods of extreme cold, when reality exceeds the design margin.
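A small sketch of the q = U x A x ΔT arithmetic from this passage; the U-value and window area are assumed for illustration:

def window_heat_loss_btuh(u_value, area_sf, delta_t_f):
    # Conductive loss: q = U * A * deltaT
    return u_value * area_sf * delta_t_f

U, A = 0.40, 15.0  # hypothetical: U = 0.40 Btuh/sf-F, 15 sf window
print(window_heat_loss_btuh(U, A, 70 - 30))   # design deltaT of 40 F -> 240 Btuh
print(window_heat_loss_btuh(U, A, 150 - 30))  # supply air on the glass -> 720 Btuh, triple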
thickness of the frame, mullions and other details. SHGC can be dramatically improved through the use of special coatings that block certain wavelengths of light, particularly those responsible for heat gain.
U-value. The U-value describes a window assembly's ability to transmit heat conductively and is a function of the properties of both the frame and glass panes. Like the SHGC, it can either be a generic number based on the general description of the window or a National Fenestration Rating Council (NFRC) tested value.
Emissivity. This number describes the amount of heat that is emitted from a window due to its being warmer than the surroundings. The lower the emissivity, the more efficient the window. Emissivity generally ranges from 0 to 1 and can be dramatically improved through the use of special coatings. Emissivity is usually accounted for in load calculations by adjusting the window U-value.
Shading. Shading devices are either interior or exterior. They can be further subdivided into removable (or otherwise controllable) and fixed. This controllability is important because shading devices can assist in reducing heat gain in cooling mode, but they can also reduce heat gain in heating mode, when heat gain may be desired (i.e., on a cold but sunny day). An additional type of exterior shading includes those that are not necessarily integral to the building and are categorized as adjacent structures.
Interior shading devices. Curtains, blinds, roller shades and other such interior window treatments, though often aesthetic in purpose, can have a substantial impact on heat gains when used correctly. The more opaque and reflective the material, the more it will reduce solar heat gain. For example, a white, opaque roller shade will reduce solar gains better than a dark drape. One disadvantage of interior shading devices is that solar gains have already entered the space by the time they are intercepted by the interior shading device. This heat is trapped between the shading device and the window; some of it is reflected or radiated back out of the window, but much of it remains inside.
Exterior shading devices. These are devices that are part of the building or window assembly and include overhangs, bug screens, solar screens, and awnings. Overhangs are often overlooked as very efficient devices for reducing loads and energy consumption; architectural fashion typically outweighs their practicality. Though a permanent component of the building, they can be designed to maximize their benefit in the summer and minimize their impact in the winter. Bug screens are not considered an energy device but can have a noticeable impact on the SHGC of a window assembly. Sun screens (a.k.a. solar screens) can be a very cost-effective means of reducing heat gain; because they are removable, their impact in the heating season can be minimized. Awnings behave as an overhang and are also seasonally removable.
Adjacent structures. These can include buildings, trees, fences, and terrain such as hills. They may have a substantial impact on actual loads but are rarely accounted for in the calculations. They most commonly shade a window but can have the opposite effect of reflecting light into a window. In this regard, the ground adjacent to a building is considered an adjacent structure because it can reflect additional light into a window. Imagine the difference in solar gains between a house surrounded by lush lawn and a house surrounded by a bright white concrete surface.
Best Practices
Best practice for new construction loads is to model no internal or external shades in the load calculations, but to model overhangs, because they are fixed architectural features of the building that are unlikely to be removed. Internal and external shades are frequently left open, left off or otherwise removed; to assume that they are in place when calculating cooling loads is risky. Some designers believe that interior shades should be assumed closed. This results in dramatically lower solar gains and cooling loads. However, if the cooling equipment is sized under these assumptions, the home will not cool properly on hot days when the homeowner does not close the drapes. While closing drapes on a hot day is praiseworthy behavior, this design philosophy is not consistent with the expectations of most homebuyers.
The approach used for modeling features in Title 24 compliance is usually appropriate for load calculations in new construction. In Manual J, Version 8, the designer should always assume that NFRC-rated windows will be used in new construction. If non-rated windows are used, default performance values consistent with Title 24 calculations can be used, entered in the load calculations as though they were rated windows. Assume the same minimum features necessary for compliance; if slightly better features get installed, fine. If, however, better features get installed than were assumed in the load calculations, there is a small risk of oversizing the equipment to a point of reduced energy efficiency and conditioning performance. Even so, the potential expense to a builder of undersizing equipment is far greater than that of oversizing. Performance values used in the load calculations (U-value, SHGC, and shading coefficient of screens and other shading devices) should be consistent with those used in the Title 24 calculations. The current computerized versions of Manual J, Version 8, for room-by-room loads and the current methodology used by Micropas for whole-house loads do a very adequate job of accounting for loads associated with windows. It is a useful exercise to compare the Micropas load to the total of the room-by-room Manual J loads; this provides a trustworthy check to help ensure that no calculation errors have been made, and is another reason why it is important to use the same window performance values in both calculations. For duct sizing, it is appropriate to assume worst-case window conditions. For example, a home may have a window that could be replaced by an optional sliding glass door, which substantially increases the glazing area and the subsequent load on that room. Sizing the duct for the worst case (with the sliding glass door) ensures that the duct serving the room will accommodate the amount of air required for the higher load. When the higher load does not occur, it is a simple matter to damper down the airflow if it is excessive. Again, the potential cost of underestimating the load is far greater than that of overestimating it.
4.6 Duct Loads
Duct leakage rates of up to 45% were not uncommon in new homes built and tested prior to the late 1990s. This is a direct loss of concentrated energy; the heated or cooled air is dumped directly into unconditioned spaces (e.g., supply leaks into attics), or conditioned air is displaced by unconditioned air (return leaks in attics or garages). Manual J does a reasonable job of accounting for duct leakage loads, given a known leakage. The problem lies not in quantifying a known leakage rate but in estimating the actual leakage amount. Prior to construction, and without actually testing the system, leakage is very difficult to predict. Field testing has shown that using very similar installation protocols on two similar houses can still result in vastly different leakage rates. Even the brand of furnace can affect the leakage rate by one-third or more. Title 24 software assumes that the system is tight if it is known that the home will be tested, and repaired if the leakage is greater than 6%. If the home is subsequently tested and the leakage is indeed less than 6%, the designer can rest assured that the load calculations are valid. However, if the system is not tested and the leakage is significantly more than 6%, the equipment may be undersized. Commonly, if the system is not going to be tested, current practice is to assume that the system is guilty until proven innocent, i.e., that it leaks more than 6%; the system is assumed to be typical, with a leakage of 22%. If the designer assumes this higher leakage and the installer does an excellent job of installing the system, the system may end up oversized. Even testing a system using common procedures such as a duct blaster test does not guarantee that the actual load of the duct leakage will be accurately estimated. Limitations of current duct leakage tests result in substantial variances between tested leakage and actual leakage. These limitations include the inability of the test, using common practices, to distinguish between supply and return leaks, and the inability to identify the location of a leak, which may be in a very high pressure part of the system (near the fan) or in a very low pressure part of the system (near a register or grille). Note: the duct blaster test pressurizes the entire system to the same pressure level and thereby treats all leaks equally.
Best Practices
The best way to minimize variances between estimated and actual leakage is to assume that the leakage is attainably low and then make the appropriate effort to ensure that the system is installed that way. More sophisticated test methods may improve the accuracy of measuring leakage, but as systems become tighter, the law of diminishing returns makes additional testing expensive and unnecessary.
4.7 Two-story Considerations
As homes become more and more efficient, their heating and cooling loads decrease. The result is that larger and larger homes are being served by single HVAC systems. In a typical California subdivision that offers four floor plans, three will be two-story homes. Many of those are served by a single system, a very common design in California new construction and one that tends to generate many customer service complaints related to temperature variations (stratification) in the home. Many HVAC subcontractors believe that a two-story home with a single system must have a substantial amount of the return air taken from the first floor. While there is no evidence to support this, HVAC subcontractors will insist that architects and builders go to great effort and expense to accommodate a relatively large return duct and grille to the first floor. Some designers believe that a return in the ceiling of the second floor is adequate as long as the downstairs supply ducts are properly sized. There is also much debate and disagreement over the proper location of a thermostat in a two-story home served by a single system. Some designers locate it upstairs because heat rises and that is where the most cooling is needed (cooling emphasized). Others locate it downstairs because in the winter the first floor tends to be colder and that is where the most heating is needed (heating emphasized). As part of the task of developing this design guide, a study was conducted to evaluate the impact of the number and locations of returns and the placement of the thermostat in a two-story home served by a single HVAC system. Three return configurations were evaluated for cooling using a computational fluid dynamics (CFD) model. These three configurations were designed to address the common practices in California production homebuilding:
Case 1: split returns upstairs and downstairs; thermostat upstairs
Case 2: return upstairs; thermostat upstairs
Case 3: return upstairs; thermostat downstairs
The figure below is an example of the information generated by this study, showing the temperatures and duty cycles for the three configurations. Case 2 (return upstairs/thermostat upstairs) and Case 3 (return upstairs/thermostat downstairs) cycle twice as often as Case 1 (returns upstairs and downstairs/thermostat upstairs). Case 1, with split returns upstairs and downstairs, provides better mixing of the air, delaying the return to ambient temperature.
[Figure: thermostat temperature vs. Time (mins); series show AC ON/OFF cycles for Cases 1, 2 and 3]
Recommendations
For the two-story application, installing returns both upstairs and downstairs provides the longest duty cycles with good comfort and air quality. While the total ON times are nearly equal for all cases, the two-return design causes the least system cycling, less startup demand, and less wear on the HVAC equipment. The thermostat located downstairs, farthest from the return, has the most negative effect on duty cycle. Not only does it generate more startup demand for each cycle, this configuration requires frequent system cycling, causing additional equipment wear, and should be avoided.
Other Mechanical Design Related Issues
5.1 Condenser Locations and Refrigerant Lines
From a design/performance standpoint, condensers and refrigerant lines are a simple concept: obey the minimum clearances and the maximum line lengths and the design should work fine. From an installation/practical standpoint, they can be a real headache. The noise they generate can be a real problem; bedroom walls should be avoided when running lines and locating condensers. Some manufacturers make special noise reduction kits that can help avoid or resolve noise problems. Vibrations from the compressor can be transferred through the refrigerant lines and magnified by walls. Care should be taken not to let the lines come in direct contact with framing; always use some sort of gasket or cushion. With the higher insulation requirements for refrigerant lines (Title 24 requires R-3 minimum insulation on the suction line; see section 2.5.5 of the Residential Manual), it is recommended that a 2x6 wall or some sort of chase be provided to run the lines. Some builders have been known to run a 6" x 6" framed chase down the exterior of the house. Minimum clearances for condensers vary by manufacturer, but they are typically 6" on one side, 30" on the service access side, 12" on the other two sides, and 48" above (consult the specific manufacturer's specifications). Condensers should also be 24" apart if more than one is used. These clearances can sometimes cause problems in narrow side yards. Minimum access requirements must be verified with the builder and can sometimes vary by lot. A condenser works best in a cool, shady spot with good air circulation, but this is usually an impractical request in production homes. Typically, most manufacturers do not recommend exceeding refrigerant line lengths of 75 feet; some even say 50 feet. Some allow lengths up to 175 feet using a special kit. The impact on capacity and efficiency must be taken into account; always refer to the specific manufacturer's requirements. The electrical contractor also needs to know exactly where the condensers are located so the power and disconnect can be properly located.
5.2 Furnace Locations
Most single-family detached homes in California are designed with the furnace(s) located in the attic. The attic provides a good central location with good clearance and direct access for getting ducts to most rooms, which reduces overall duct length. Garages are the next most common furnace location. Furnaces in closets are rare because of the restrictive clearances and service access requirements, plus the valuable floor area they take up; even if a furnace has a minimum clearance rating of 0", code requires at least 3" for removal and service. Occasionally, homes with very low-pitched roofs, or floors that are difficult to access, will have furnaces in a closet. Closet furnaces are most common in attached and multi-family projects. The popularity of low-pitched roofs in current architecture has made it more of a challenge to locate furnaces in attics. Clearance must be verified if it appears that it will be a tight fit; there are always unexpected items that will use up whatever clearance you thought you had, and careful coordination in the field is critical. <UBC/UMC access and clearance>
The truss designer and structural engineer need to know where the furnace platform will be located and how big it needs to be (how many units, upflow or horizontal, etc.) so the trusses can be properly designed and the weight of the furnaces accounted for. The electrical contractor will need to provide electricity, a disconnect, a light and a light switch per the Uniform Building Code.
5.3 Attic Access
The location of the attic access is especially important if the furnace is located in the attic. Section 908.0 of the UMC requires a minimum 30" x 30" opening and passageway, but allows an opening as small as 22" x 30" as long as the largest piece of equipment can be removed through it. Sometimes this is not easy to determine, because more than just the dimensions of the opening and the dimensions of the furnace must be considered. Notice that it does not say "as long as the largest piece of equipment can fit through the opening." Just because a furnace has a dimension of 21" x 29" does not mean that it can be removed through a 22" x 30" opening. You have to consider the length of the furnace, the proximity of the attic access to trusses and the roof decking, and the angle the furnace must take to be removed. In the case of a hip roof, the attic access must also be located far enough away from the exterior of the building that there is a full 30" of clearance above it. There should be a 30" x 30" passageway all the way to the furnace, and then a 30" x 30" work area in front of the furnace. The way it is sometimes described is that you need to be able to push a 30" x 30" x 30" cardboard box from directly above the access all the way to the furnace (but not more than 20 feet) and park it right in front of the furnace. It is permissible to locate the furnace immediately next to the attic access as long as the 30" cube is provided and the unit can be serviced from the access (e.g., standing on a ladder). <UBC attic access locations, UMC 908.0 and 304.1 (clearances)>
5.4 Flue (B-vent) Locations and Routing
Furnaces located in an attic can usually be vented straight up through the roof easily, unless the aesthetics of the vent termination are an issue. B-vents can angle 60 degrees from vertical one time, or 45 degrees from vertical more than one time, and must run in a generally vertical direction. Clearance from framing is very important. <UMC chapter 8> The vent termination must also be at least 8 feet from any vertical wall, including a turret, tower, upper floor, etc.; if not, it must extend above that wall. A 90% (condensing) furnace may provide a suitable alternative to a B-vent. Condensing furnaces and boilers are the most energy efficient units on the market today, potentially 10-15% more efficient than conventional units. The extracted heat lowers the temperature of the combustion products to a point where any of the approved types of pipe can be used for venting the combustion products outside the structure. The combustion-air and vent pipes can terminate through a sidewall or through the roof when using an approved vent termination kit, consistent with local codes.
5.5 Duct Sizes and Locations (Soffits, Joist Bays, Chases and Drops)
Two-story homes with the furnace in the attic pose a special challenge: how do you get ducts from the attic down past the second floor rooms to rooms on the first floor? Sometimes it is easy, and sometimes it is impossible. Typically, in a two-story house the upstairs is predominantly bedrooms, and bedrooms have closets. Despite the protests of the architect, closets are a good place to locate a vertical chase that cuts through the second floor. The dead corners of walk-in closets work very well because they don't use up too much hanging space and they provide a nice wall for the shelves and poles to die into. Care must be taken when using vertical chases adjacent to an exterior wall: the slope of the roof can severely restrict access to the top of the chase in the attic. It may be necessary to drop the ceiling adjacent to the chase and low-frame the interior wall(s) of the chase. See Section 4, Chases and Voids, for more discussion on chases. It is recommended that chase locations be conveyed to the architect so they can be put on the official floor plans and coordinated with the framer. Nothing ruins a good chase faster than dissecting it with a roof truss or floor joist. It may be useful to explain to the framer that two 6" ducts are not the same as one 12" duct! Soffits and dropped ceilings are often necessary evils for getting ducts to a particular location if it cannot be accomplished using floor joist bays alone. The total depth of a drop (reduction in ceiling height) is typically the diameter of the duct to be run plus 4-6 inches to allow for framing and duct insulation. Sometimes this can be reduced if flat framing is allowed and the insulation can be compressed, which is permitted if the drop is between conditioned spaces. Generally speaking, the amount of clear space required for a duct of a given diameter is the nominal diameter plus two inches. Less is feasible if the insulation can be compressed, but it makes the duct much harder to install.
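The "two 6-inch ducts" point is simple geometry, worth showing to a skeptical framer:

import math

def duct_area_sq_in(diameter_in):
    return math.pi * (diameter_in / 2.0) ** 2

print(2 * duct_area_sq_in(6))  # two 6" ducts: ~56.5 sq in of cross section
print(duct_area_sq_in(12))     # one 12" duct: ~113.1 sq in, twice the area
# (The airflow gap is larger still once surface friction is considered.)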
5.6 Duct Installation, Insulation, and Location
Ducts carry air from the central heater or air conditioner to each part of the home and back again. Unfortunately, ducts can waste a significant amount of energy and money due to improper installation and poor materials. A number of factors can affect the functioning of ducts, including:
5.7 Combustion Air
Furnaces (and all gas-burning appliances) need to be provided with combustion air: air that provides the oxygen for combustion of the gas. If a typical furnace is located in a closet, that combustion air should be ducted. Chapter 7 provides some options for providing these ducts and openings. This can be quite a challenge if the furnace closet is deep within the building, because two ducts are required and they can be 6 or even 8 inches in diameter and made of sheet metal. Some higher efficiency condensing furnaces can solve a lot of combustion air problems because they provide their own combustion air through PVC piping as small as 2" in diameter and as long as 70-80 feet. They also vent through a similar pipe, and the vent and combustion air can terminate through the same concentric terminal. Furnaces located in a garage may not need special combustion air vents if the volume of the garage is adequate to meet the definition of an unconfined space; be sure to count all gas-burning appliances when making this determination. Furnaces located in attics are typically assumed to have adequate combustion air as long as the attic is adequately ventilated per the attic ventilation requirements of section 1505.3 of the UBC. This is because the venting area required for attic ventilation is much greater than that required for combustion air. However, despite the logic that if combustion air can be ducted from an attic to a closet (section 703.1.2 of the UMC) then you should be able to locate the furnace in that attic, some building departments require that the attic meet the high/low requirements for combustion air. Some building departments go even further and require that combustion air venting be installed in addition to the normal attic venting; they do not understand that the air that ventilates the attic can do double duty as combustion air.
5.8 Thermostat Location
Properly locating a thermostat can be as much a Zen art as a science. There are 10,000 bad places to put a thermostat in a house; your job is to choose the least bad of them. Some places to definitely avoid are exterior walls, locations that get direct sun, locations that a supply register will blow on, locations near an exterior door or window, walls adjacent to or near a fireplace, etc. Remember that a thermostat does two basic things: it turns the system ON and it turns the system OFF. The best location for turning the system on may not be the best location for turning it off. The best place for turning the system off is usually under or near the main return grille, because when the system is running, the return is pulling air from all over the house, providing a good sample of the average temperature in the house. When the system shuts off, however, this may not be a very good place to sense the average house temperature. As part of the task of developing this design guide, a study was conducted that included evaluating thermostat locations in a two-story home served by a single HVAC system; see Section 4.7, Two-story Considerations, for recommendations on thermostat placement. Detailed information on this study is available from the California Energy Commission as Appendix C of Attachment 2 to the Final Report for the Profitability, Quality, and Risk Reduction through Energy Efficiency program. The report is also available through the Building Industry Institute (BII) or ConSol.
5.9 Ventilation and Indoor Air Quality
In the old days, the wind and other uncontrolled forms of air leakage ventilated buildings. Today, people no longer accept such cold, drafty houses; houses are now expected to be cozy, draft-free and energy efficient, and a tight home is fine as long as it comes with good ventilation and indoor air quality. Modern building materials tend to make newly constructed homes much tighter than old ones. Plywood, house wrap, better windows, caulk and expanding foam are a few examples of common products that tighten a house. Research has shown that some builders inadvertently build houses much tighter than intended. In any home, uncontrolled air leakage is an unreliable ventilator. The best way to ensure adequate ventilation is to install some type of automatically controlled ventilation system, and there are several choices for the builder to consider, depending on local codes and costs.
Why is the boolean field not responding to the onchange function?
Hello friends!!!
I am using an onchange function to modify a boolean field.
here is the code:
Python:
def on_change_valid_id(self, cr, uid, ids, is_valid, context=None):
    res = {'value': {'is_valid': self.get_inputs(cr, uid, ids, is_valid, context=context),
                     }
           }
    return res

def get_inputs(self, cr, uid, ids, is_valid, context=None):
    ret = []
    present = datetime.now()
    a = str(present.year)+'-'+str(present.month)+'-'+str(present.day)
    for obj in self.browse(cr, uid, ids, context=context):
        matricule = obj.employee_id.id
        obj = self.pool.get('hr.contract')
        obj_ids = obj.search(cr, uid, [('employee_id', '=', matricule)])
        res = obj.read(cr, uid, obj_ids, ['id', 'employee_id', 'date_end'], context)
        for r in res:
            b = str(r['date_end'])
            compare = a > b
            print compare
            if compare == False:
                inputs = {
                    'is_valid': True,
                }
                ret += [inputs]
            else:
                inputs = {
                    'is_valid': True,
                }
                ret += [inputs]
    print ret
    return ret
Can any one help please.
Best Regards.
Hello Drees,
I can see that the get_inputs function returns [{'is_valid': True}], and you call that function in your onchange, so your onchange returns something like this:
{'value': {'is_valid': [{'is_valid': True}]}}
The return must be something like this,
{'value': {'is_valid': True}}
Also, you must check that the field is not readonly!
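For reference, a minimal corrected sketch (it assumes the old Odoo API used above, and that the else branch was meant to set is_valid to False; get_inputs is simplified to return a single boolean):

def on_change_valid_id(self, cr, uid, ids, is_valid, context=None):
    # the onchange must return {'value': {field: scalar}}, not a list of dicts
    return {'value': {'is_valid': self.get_inputs(cr, uid, ids, is_valid, context=context)}}

def get_inputs(self, cr, uid, ids, is_valid, context=None):
    today = datetime.now().strftime('%Y-%m-%d')
    contract_obj = self.pool.get('hr.contract')
    for obj in self.browse(cr, uid, ids, context=context):
        contract_ids = contract_obj.search(cr, uid, [('employee_id', '=', obj.employee_id.id)])
        for r in contract_obj.read(cr, uid, contract_ids, ['date_end'], context):
            if today <= str(r['date_end']):
                return True   # at least one contract has not yet ended
    return False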
Although somewhat unrelated, this reminds me of a VS feature I wish existed. It would be nice if VS had an option to generate a using statement when tab-completing a namespace via Intellisense. At least I don't think this functionality exists...
I'd like a way to automatically remove all unused usings throughout a project - but it may already exist and I just haven't found it yet.
The thing I'd definitely like using-wise, that VS definitely doesn't already do, is find potential "using"s that would provide an extension method that you are trying to use - the same way it provides potential "using"s that would match a class name if you type one in. This applies both to extension methods I've written myself, and to the extension methods in System.Linq that provide the query pattern for IEnumerable and company if you try to start typing a from iEnumerable select ... statement.
I realize that doing that would be a fair amount of work for the intellisense engine - it's a lot easier to do some work on every unmatched classname than to do it on every unmatched method, I guess. But extension methods are rare - you could build a name->method mapping up front of every extension method defined in every assembly that the compiler is aware of, same as the way it must currently build a dictionary of name->class for every assembly to enable the 'add using' feature that already works. The dictionary of extension methods would be much smaller than the one that already exists for classes...
Yes, other IDEs handle this kind of thing fairly elegantly. I'm thinking specifically of editing Java in Eclipse, where the code completion knows about types in packages you haven't yet imported and adds them for you when copy and pasting, that sort of thing. I think some people write programs that NEVER call Console.WriteLine!
@Stuart: it's not there out of the box, but it's fairly trivial to script one. E.g. see this:
And it would be nice if it were easier to configure which warnings were errors in VS too...
Guys - here are a couple of tips for the requested functionality mentioned above in VS 2008.
To automatically add required "using" statements, click on the offending type name in the source code, watch for the small red line that appears under the last character of the type name, and press <Ctrl>-<Period>. This brings up a context menu that will either add the required "using" statement or spell out the full namespace definition.
To remove unused "using" statements, just right-click one of the using statements in your source code and click "Organize Usings" | "Remove Unused Usings".
Hope that helps someone! Best wishes to all.
So the problem here is really that “warnings are errors” doesn't make sense if the warning is about "useless" code. Could that feature be updated to understand which warnings are just about "useless" code and not treat those warnings as errors?
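(For what it's worth, later .NET tooling added an opt-in path for exactly this: the IDE0005 "remove unnecessary usings" analyzer can be promoted to a warning or error per project. A sketch, assuming an SDK-style project with analyzers enabled:

# .editorconfig
[*.cs]
dotnet_diagnostic.IDE0005.severity = warning

With that in place, unused using directives surface as ordinary diagnostics without making every warning an error.)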
I've never worried about any of this since I started using ReSharper.
It has:
* Generate a using statement when tab-completing a namespace via Intellisense
* Find potential "using"s that would provide an extension method that you are trying to use
* Knows about types in [assemblies] you haven't yet imported and adds them for you when copy and pasting (OK, not quite seamless: requires some Alt+Enter keypresses)
Perhaps another type of compiler output is needed, something like Recommendations?
@stuart & Pavel: There's also the VS 2008 Power Commands that adds Organize&Remove using among other useful extensions:
I think this is another example of "good design is the art of making good compromises", and this is the right compromise to make. For me, the difference between unused using directives and other types of compiler warnings is that (AFAIK) unused using directives have no impact on the generated IL, whereas other warnings serve to tell you that the compiler may ignore some of your code because it isn't reachable, unused, etc...
@Eric,
Were using statements such as "using IntFunc = Func<int>;" also included for this? I could see this being treated differently than the "you didn't actually use types from the System namespace" case. For me, the aliasing using case sits somewhere between this case and the used variable case.
It sounds like the problem in your example is that the sample code includes System.Linq and System.Text for some reason, even though I'm not using either. If I have set 'warnings are errors', then I WANT warnings to hurt.
One example where you would need them, that tools seem unable to comprehend, is when using conditional compilation.
#if ALLOW_FILEIO
using System.IO;
#else
using MyApp.FakeIO;
#endif
@Danyel is spot on in my view.
<rant>
Why do the Visual Studio templates feel the need to include so many using directives? In my experience it's extremely easy to add a using directive automatically - I can't remember the last time I wrote one manually... probably back in VS2003.
This is just one example of Visual Studio being annoying, in my opinion. Other examples:
- Defaulting to creating "Form1.cs" - who actually *wants* a type called Form1? Ask me for a better name, or use a better default.
- Defaulting to copying a file as "OriginalName - Copy.cs" - again, I'm bound to want to give it a different name, so ask me at the point of copying... and then rename the class within, if it matched the original name.
- Defaulting to creating variables such as "textBox1" via the designer
- Defaulting to creating event handlers such as "button1_Click"
- Defaulting to adding references to various assemblies I rarely use (System.Data and System.Xml). I suppose this is somewhat justified by earlier versions of Visual Studio taking an age to bring up the "Add reference" dialog, which is much impoved in VS2010.
All of these make it really easy to create an app which is ugly (in terms of code) at the expense of making it easy to create an app with meaningful names. Why not guide developers towards using meaningful names to start with, instead of it being a manual extra step?
</rant>
I can see that the decision not to produce warnings for useless code makes some sense in the light of Visual Studio *encouraging* you to have useless code... and with so much code already written, it's probably too late to fix it now. It does make me sad though... oh well, at least Resharper lets me remove unused using directives fairly easily.
Sorry to be such a curmudgeon on this score. I promise to revert to my normal happy self soon.
I agree with Todd Wilder above.
It would be nice to have another type of compiler message that pointed out if a "using" or anything else were not actually necessary.
Maybe we can call the type of message "random musings of the compiler..."!
Pelican theme, first used for Minchin.ca.
Minchin dot CA is a theme for Pelican, a static site generator written in Python.
The Minchin dot CA theme is based on Bootstrap 3, and was first used at Minchin.ca.
Installation
The easiest way to install the Minchin dot CA theme is through the use of pip. This will also install the required dependencies automatically.
pip install minchin.pelican.themes.minchindotca
Then, in your pelicanconf.py file, import the module, use the built-in function to specify your theme location, set the default colour scheme, set the image processing patterns used, and add some Jinja filters that the theme uses:

from minchin.pelican.themes import minchindotca

THEME = minchindotca.get_path()
BOOTSTRAP_THEME = 'minchindotca'
# (the theme's image processing patterns and Jinja filter settings
#  follow here; see the theme documentation)
You may also need to configure the theme through the use of additional settings (see below).
Requirements
Minchin dot CA requires Pelican and the image_process plugin. These can be manually installed with pip:
pip install pelican minchin.pelican.plugins.image_process
Additional Settings
Details coming. In the meantime, refer to the settings on the Bootstrap 3 theme.
Credits
Original theme developed by Daan Debie.
The idea that a theme could be installed as a Python package by Jeff Forcier’s Alabaster theme for Sphinx.
Hello!
I am looking for a function that would automatically compute location coordinates for X items (X being an argument of the function) presented at X iso-eccentric locations equally spaced around the central fixation point (or equally spaced around a point Y that would also be an argument of the function). Ideally, the function should also take an argument Z to specify the eccentricity, and stimulus type (I am planning to present both words and pictures).
Many thanks for your help guys!
Hi @jeanne_lusiot, how about this:
import numpy as np
from numpy import (sin, cos, pi)

def isoEccentric(nPoints, radius, yLocs, xLocs):
    """Draw nPoints around a circle with radius and xLocs/yLocs given"""
    degreePoints = 360/nPoints
    angles = np.arange(0, 360, degreePoints)
    angles = [convert*pi/180 for convert in angles]  # points of circle in radians
    xyss = [[radius*cos(deg)+xLocs, radius*sin(deg)+yLocs] for deg in angles]
    return xyss

xyss = isoEccentric(nPoints=4, radius=100, yLocs=0, xLocs=0)
This function will provide locations around coordinates defined by xLocs and yLocs, at a distance specified by the radius. You can see the points drawn using ElementArrayStim components - see attached example (if you run, use ‘q’ to quit).
isoEccentric.py (1.4 KB)
@dvbridges That is awesome, thanks!! This will work for shapes and pictures. Will this function also work for words as well?
It should work, it will just be a case of defining the positions for each text component. If you create 4 locations in xyss, then you could have four text components that index the position of those coordinates in a list. E.g.,

text1.pos = xyss[0]
text2.pos = xyss[1]
# etc...
Or, you could define the positions when you initiate the text component object. E.g., see attached
isoEccentricText.py (1.5 KB)
Can this code be reworked so that it does not need to import numpy? and can be used on Pavlovia as an online experiment? It works great in builder but currently not able to initalize online
Hi @Emma1, yes here is the converted Python function that is compatible with the auto-JS translate. You will need to add Array.prototype.append = [].push; to a separate code component used for JS code only (see Wakefield's crib sheet).

def isoEccentric(nPoints=4, radius=100, yLocs=0, xLocs=0):
    degreePoints = 360/nPoints
    angles = []
    radianPoints = []
    xyss = []
    for p in range(0, nPoints):
        angles.append(p * degreePoints)
    for angle in angles:
        radianPoints.append(angle * pi / 180)
    for angle in radianPoints:
        xyss.append([radius*cos(angle)+xLocs, radius*sin(angle)+yLocs])
    return xyss

xyss = isoEccentric(4, 100, 0, 0)
Hello,
I am trying to achieve something similar, but I am having some troubles adjusting the code.
I am working in the builder mode and I have a visual search task with a target letter and many distractors letters. I want them to be randomly located at two eccentricities (4.30 deg and 9.07 deg) around the center of the screen (and sometimes in the centre) in different locations, without being overlapped.
Do I just need to create and then specify as many locations in xyss as my stimuli are?
I am not sure that my approach is correct though (as I am not that expert in PsychoPy or Python), so any suggestions would be really appreciated.
Thank you in advance for your help
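One possible sketch building on the isoEccentric function above (the item counts are illustrative, and the radii assume your window units are set to deg):

import random

# candidate positions: two iso-eccentric rings plus the centre
locations = (isoEccentric(nPoints=8, radius=4.30, yLocs=0, xLocs=0)
             + isoEccentric(nPoints=8, radius=9.07, yLocs=0, xLocs=0)
             + [[0, 0]])

# sampling without replacement guarantees no two stimuli share a location
nStimuli = 10
positions = random.sample(locations, nStimuli)
target_pos, distractor_pos = positions[0], positions[1:]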
Messaging with RabbitMQ and .NET C# part 1: foundations and setup
April 28, 2014
Introduction
Messaging is a technique to solve communication between disparate systems in a reliable and maintainable manner. You can have various platforms that need to communicate with each other: a Windows service, a Java servlet based web service, an MVC web application etc. Messaging aims to integrate these systems so that they can exchange information in a decoupled fashion.
A message bus is probably the most important component in a messaging infrastructure. It is the mechanism that co-ordinates sending and receiving messages in a message queue.
There have been numerous ways to solve messaging in the past: Java Message Service (JMS), MSMQ, IBM MQ, but they never really became widespread. Messaging systems based on those technologies were complex, expensive, difficult to connect to and in general difficult to work with. Also, they didn't follow any particular messaging standard; each vendor had their own standards that the customers had to adhere to.
RabbitMQ is a high-availability messaging framework which implements the Advanced Message Queuing Protocol (AMQP). AMQP is an open standard wire-level protocol similar to HTTP. It is also independent of any particular vendor. Here are some key concepts of AMQP:
- Message broker: the messaging server which applications connect to
- Exchange: there will be a number of exchanges on the broker which are message routers. A client submits a message to an exchange which will be routed to one or more queues
- Queue: a store for messages which normally implements the first-in-first-out pattern
- Binding: a rule that connects the exchange to a queue. The rule also determines which queue the message will be routed to
There are 4 different exchange types:
- Direct: a client sends a message to a queue for a particular recipient
- Fan-out: a message is sent to an exchange. The message is then sent to a number of queues which could be bound to that exchange
- Topic: a message is sent to a number of queues based on some rules
- Headers: the message headers are inspected and the message is routed based on those headers.
Installation
RabbitMQ is based on Erlang. There are client libraries for a number of frameworks such as .NET, Java, Ruby etc. We’ll of course be looking at the .NET variant. I’m going to run the installation on Windows 7. By the time you read this post the exact versions of Erlang and RabbitMQ server may be different. Hopefully there won’t be any breaking changes and you’ll be able to complete this tutorial.
Open a web browser and navigate to the RabbitMQ home page. We’ll need to install Erlang first. Click Installation:
…then Windows…:
Look for the following link:
This will get you to the Erlang page. Select either the 32 or 64 bit installation package depending on your system…:
This will download an installation package. Go through the installation process accepting the defaults. Then go back to the Windows installation page on the RabbitMQ page and click the following link:
Again, go through the installation process and accept the defaults.
RabbitMQ is now available among the installed applications:
Run the top item, i.e. the RabbitMQ command prompt.:
As the message says we’ll need to restart the server. The following command will stop the server:
rabbitmqctl stop
…and the following will start it:
rabbitmq-service start
If the command prompt complains that access was denied, then you'll need to run the command prompt as an administrator: right-click, and select Run As Administrator from the context menu.
Open a web browser and navigate to the RabbitMQ management UI, which listens on http://localhost:15672 by default (if the page does not load, enable the UI first with rabbitmq-plugins enable rabbitmq_management):
This will open the RabbitMQ management login page. The default username and password is ‘guest’. Click around in the menu a bit. You won’t see much happening yet as there are no queues, no messages, no exchanges etc. Under the Exchanges link you’ll find the 4 exchange types we listed in the introduction.
We’re done with the RabbitMQ server setup.
RabbitMQ in .NET
There are two sets of APIs to interact with RabbitMQ in .NET: the general .NET client library and the WCF-specific bindings. The WCF binding allows the programmer to interact with the RabbitMQ service as if it were a WCF service.
Open Visual Studio 2012/2013 and create a new Console application. Import the following NuGet package:
Add the following using statement to Program.cs:
using RabbitMQ.Client;
Let’s create a connection to the RabbitMQ server in Main. The ConnectionFactory object will help build an IConnection:
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.HostName = "localhost";
connectionFactory.UserName = "guest";
connectionFactory.Password = "guest";
IConnection connection = connectionFactory.CreateConnection();
An IModel represents a channel to the AMQP server:
IModel model = connection.CreateModel();
From IModel we can access methods to send and receive messages and much more. As we have no channels yet there’s no point in trying to run the available methods on IModel. Let’s return to RabbitMQ and create some queues!
Back in RabbitMQ
There are a couple of ways to create queues and exchanges in RabbitMQ:
- During run-time: directly in code
- After deploy: through the administration UI or PowerShell
We’ll look at creating queues and exchanges in the UI and in code. I’ll skip PowerShell as I’m not a big fan of writing commands in command prompts.
Let’s look at the RabbitMQ management console first. Navigate to the admin UI we tested above and log in. Click on the Exchanges tab. Below the table of default exchanges click the Add a new exchange link. Insert the following values:
- Name: newexchange
- Type: fanout
- Durability: durable (meaning messages can be recovered)
Keep the rest unchanged and click Add exchange. The new exchange has been added to the table above.
Next go to the Queues link and click on Add a new queue. Add the following values:
- Name: newqueue
- Durability: durable
Keep the rest of the options unchanged and press Add queue. The queue has been added to the list of queues on top. Click on its name in the list, scroll down to “Bindings” and click on it. We’ll bind newexchange to newqueue. Insert ‘newexchange’ in the “From exchange” text box. We’ll keep it as a straight binding so we’ll not provide any routing key. Press ‘Bind’. The new binding will show up in the list of bindings for this queue.
Open a new tab in the web browser and log onto the RabbitMQ management console there as well. Go to the Exchanges tab and click on the name of the exchange we’ve just created, i.e. newexchange. Open the ‘Publish message’ section. We have no routing key, we only want to send a first message. Enter some message in the Payload text box and press Publish message. You should see a popup saying Message published:
Go back to the other window where we set up the queue. You should see that there’s 1 message waiting:
Click on the name of the queue in the table and scroll down to the Get messages section. Open it and press Get Message(s). You should see the message payload you entered in the other browser tab.
Creating queues at runtime
You can achieve all this dynamically in code. Go back to the Console app we started working with. Add the following code to create a new queue:
model.QueueDeclare("queueFromVisualStudio", true, false, false, null); //queue name, durable, exclusive, autoDelete, arguments
As you type in the parameters you’ll recognise their names from the form we saw in the management UI.
We create an exchange of type topic:
model.ExchangeDeclare("exchangeFromVisualStudio", ExchangeType.Topic);
…and finally bind them specifying a routing key of “superstars”:
model.QueueBind("queueFromVisualStudio", "exchangeFromVisualStudio", "superstars");
The routing key means that if the message routing key contains the word “superstars” then it will be routed from exchangeFromVisualStudio to queueFromVisualStudio.
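To see that binding in action from code, you can publish directly to the exchange (a quick sketch; BasicPublish here uses the older RabbitMQ.Client signature taking a byte array, and the message text is arbitrary):

byte[] body = System.Text.Encoding.UTF8.GetBytes("Hello superstars!");
//exchange, routing key, properties (null means defaults), payload
model.BasicPublish("exchangeFromVisualStudio", "superstars", null, body);

Since the routing key contains "superstars", the message is routed to queueFromVisualStudio.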
Run the Console app. It should run without exceptions. Go back to the admin UI and check if the exchange and queue have been created. I can see them here:
The binding has also succeeded:
Let’s test if this setup works. In the UI navigate to the exchange created through VS and publish a message:
Then go to the queue we set up through VS to check if the message has arrived. And it has indeed:
You can perform the same test with a different routing key, such as “music”. The message should not be delivered. Indeed, the popup message should say that the message has not been routed. This means that there’s no queue listening to messages with that routing key.
This concludes our discussion on the basics of messaging with RabbitMQ. We’ll continue with some C# code in the next installment of the series.
View the list of posts on Messaging here.
Very nice and usefull article for me, Andras.
Andras, you do a great and helpfull job. Really nice! Thank you!
Wonderful Article! Thanks a lot ! Kindly let me know where I can find remaining series .
Hello, you can find a link to all posts on this page:
//Andras
Wonderful job. Very clear, simple and didactic. Thank you very much, your article is very helpful.
Ident Service
Real Windows Ident Service written in C#. 1 weekly downloads

.Net MCI Wrapper Class
.Net MCI Wrapper Class, to allow developers to add multimedia abilities to their .NET project. 1 weekly downloads

3Demon
An "upgrade" of the 1983 classic: 3Demon. Written in Visual Basic .NET and using Truevision3D for 3D rendering. 0 weekly downloads

A flexible .NET Plugin architecture
The TaskPluginInterface namespace is a set of classes, interfaces, enumerations, and events to create a "Plug-in" architecture for .NET applications. 4 weekly downloads

ADAL - Auto Database Access Layer
ADAL (Automatic Database Access Layer) creates VB.NET classes and SQL Server 2000 stored procedures (optional) that remove a lot of the redundant data access code required when building a new .NET application. 3 weekly downloads

ADODB-mysql for Mono
Simplified ADODB interface library for MySQL on Mono/.NET. The library can be used to port MS ADODB projects to a Mono environment. 2 weekly downloads

AIBO Pal
A speech recognition application. It uses Microsoft Speech SDK to recognize and speak words. It can play music, read the news, tell the time, open apps and many other cool things only with voice commands. 5 weekly downloads

AIM PC control
This is a program that uses AIM to somewhat control your computer. I can add in functionality if needed. This program will control WMP through instant messages. 27 weekly downloads
If I run the following program:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    unsigned char c = -1;
    unsigned i = -1;

    printf("c: %d\n", c);
    printf("i: %d\n", i);
    return 0;
}

I get as output:
c: 255
i: -1
I am assuming that on my machine (Pentium 4) a twos complement representation is used.
An int is 4 bytes long, so I presume the constant -1 is represented internally as a signed integer 11111111 11111111 11111111 11111111.
I also presume that when the unsigned char c is set equal to -1, the eight bits of c are set equal to the 8 lowest order bits of the above signed integer constant, i.e. 11111111.
This would be consistent with the value of 255 output for c.
So what is going on with i????? I would have expected the same logic to apply and a value of 65535 to be output. Even if the above assumptions are incorrect (in particular I am not certain if -1 is stored as an signed int and whether it is true that a signed int is converted to an unsigned char simply by discarding the 3 highest order bytes), I can't see why printf should output a value of -1 for a variable that has been defined as unsigned.
Any suggestions would be most appreciated.
Incidentally, if I try the same thing in C++
#include <iostream>
using namespace std;

int main()
{
    unsigned i = -1;
    cout << "i: " << i << endl;
    return 0;
}

I get the output:
i: 4294967295
which is FFFFFFFF, or eight bytes with all bits set to 1. How you get that from a 4-byte unsigned int I cannot imagine.
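For reference: printf interprets the bits it receives according to the conversion specifier, not the variable's declared type. %d reads them as a signed int (strictly speaking, passing an out-of-range unsigned value to %d is undefined behaviour, though it commonly prints -1), while %u reads them as unsigned. Also note that 0xFFFFFFFF is eight hex digits but only four bytes, which is exactly a 4-byte unsigned int with all 32 bits set. A minimal illustration:

#include <stdio.h>

int main(void)
{
    unsigned i = -1;            /* wraps around to UINT_MAX */
    printf("as d: %d\n", i);    /* bits reinterpreted as signed: -1 */
    printf("as u: %u\n", i);    /* printed as unsigned: 4294967295 */
    return 0;
}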
ZFS was first publicly released in the 6/2006 distribution of
Solaris 10. Previous versions of Solaris 10 did not include ZFS.
ZFS is flexible, scalable and reliable. It is a POSIX-compliant
filesystem with several important features:
No separate filesystem creation step is required. The mount of the
filesystem is automatic and does not require vfstab maintenance.
Mounts are controlled via the mountpoint
attribute of each file system.
Members of a storage pool may either be hard drives
or slices of at least 128MB in size.
To create a mirrored pool:
zpool create -f pool-name
mirror c#t#d# c#t#d#
To check a pool's status, run:
zpool status -v pool-name
To list existing pools:
zpool list
To remove a pool and free its resources:
zpool destroy pool-name
A destroyed pool can sometimes be recovered as follows:
zpool import -D
Additional disks can be added to an existing
pool. When this happens in a mirrored or RAID Z
pool, the ZFS is resilvered to redistribute the data.
To add storage to an existing mirrored pool:
zpool add -f pool-name
mirror c#t#d# c#t#d#
Pools can be exported and imported to transfer
them between hosts.
zpool export pool-name
zpool import pool-name
Without a specified pool, the import
command lists available pools.
To clear a pool's error count, run:
zpool clear pool-name
Although virtual volumes (such as those from DiskSuite
or VxVM) can be used as base devices,
it is not recommended for performance reasons.
Similar filesystems should be grouped together
in hierarchies to make management easier. Naming
schemes should be thought out as well to make
it easier to group administrative commands for
similarly managed filesystems.
When a new pool is created, a new filesystem is
mounted at /pool-name.
To create another filesystem:
zfs create pool-name/fs-name
To delete a filesystem:
zfs destroy filesystem-name
To rename a ZFS filesystem:
zfs rename old-name new-name
Properties are set via the zfs set
command.
To turn on compression:
zfs set compression=on
pool-name/filesystem-name
To share the filesystem via NFS:
zfs set sharenfs=on
pool-name/fs-name
zfs set sharenfs="mount-options
" pool-name/fs-name
Rather than editing the /etc/vfstab:
zfs set mountpoint=
mountpoint-name pool-name/filesystem-name
Quotas are also set via the same command:
zfs set quota=#gigG
pool-name/filesystem-name
ZFS filesystems automatically stripe across all
top-level disk devices. (Mirrors and RAID-Z
devices are considered to be top-level devices.)
It is not recommended that RAID types be mixed
in a pool. (zpool tries to prevent
this, but it can be forced with the -f
flag.)
The following RAID levels are supported: striping (RAID 0), mirroring (RAID 1), and RAID-Z (single parity, similar to RAID 5).
The zfs man page recommends 3-9 disks for RAID-Z
pools.
ZFS performance management is handled differently
than with older generation file systems. In ZFS,
I/Os are scheduled similarly to
how jobs are scheduled on CPUs.
The ZFS I/O scheduler tracks a priority and a deadline for
each I/O. Within each deadline group, the I/Os are scheduled
in order of logical block address.
Writes are assigned lower priorities than reads,
which can help to avoid traffic jams where reads
are unable to be serviced because they are queued
behind writes. (If a read is issued for a write
that is still underway, the read will be executed
against the in-memory image and will not hit the
hard drive.)
In addition to scheduling, ZFS attempts to intelligently
prefetch information into memory. The algorithm tries
to pick information that is likely to be needed.
Any forward or backward linear access patterns are
picked up and used to perform the prefetch.
The zpool iostat command can monitor
performance on ZFS objects:
zpool iostat
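An interval in seconds (and optionally a count) can be appended to watch activity over time, and the -v flag breaks the numbers out per device. For example (an illustrative invocation):

zpool iostat -v pool-name 5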
The health of an object can be monitored with
zpool status
To create a snapshot:
zfs snapshot pool-name/filesystem-name@
snapshot-name
To clone a snapshot:
zfs clone snapshot-name filesystem-name
To roll back to a snapshot:
zfs rollback pool-name/filesystem-name@snapshot-name
zfs send
and zfs receive allow
clones of filesystems to be sent to a development environment.
The difference between a snapshot and a clone is that a
clone is a writable, mountable copy of the file system.
This capability allows us to store multiple copies of
mostly-shared data in a very space-efficient way.
Each snapshot is accessible through the
.zfs/snapshot in the /pool-name
directory. This can allow end users to recover their files
without system administrator intervention.
If the filesystem is created in the global zone
and added to the local zone via
zonecfg,
it may be assigned to more than one zone unless
the mountpoint is set to legacy.
zfs set mountpoint=legacy
pool-name/filesystem-name
To import a ZFS filesystem within a zone:
zonecfg -z zone-name
add fs
set dir=mount-point
set special=pool-name/filesystem-name
set type=zfs
end
verify
commit
exit
Administrative rights for a filesystem can be granted
to a local zone:
zonecfg -z zone-name
add dataset
set name=pool-name/filesystem-name
end
commit
exit
ZFS is a transactional file system.
Data consistency is protected via
Copy-On-Write (COW). For each write request, a copy is
made of the specified block. All changes are made to
the copy. When the write is complete, all pointers are
changed to point to the new block.
Checksums are used to validate data during reads and writes.
The checksum algorithm is user-selectable. Checksumming
and data recovery is done at a filesystem level; it is not
visible to applications. If a block becomes corrupted on
a pool protected by mirroring or RAID, ZFS will identify the
correct data value and fix the corrupted value.
RAID protections are also part of ZFS.
Scrubbing is an additional type
of data protection available on ZFS. This is a
mechanism that performs regular validation of
all data. Manual scrubbing
can be performed by:
zpool scrub pool-name
The results can be viewed via:
zpool status
Any issues should be cleared with:
zpool clear pool-name
The scrubbing operation walks through the pool
metadata to read each copy of each block. Each copy
is validated against its checksum and corrected if
it has become corrupted.
To replace a hard drive with another device, run:
zpool replace pool-name old-disk new-disk
To offline a failing drive, run:
zpool offline pool-name disk-name
(A -t flag allows the disk to come back
online after a reboot.)
Once the drive has been physically replaced,
run the replace command against the device:
zpool replace pool-name device-name
After an offlined drive has been replaced, it can be
brought back online:
zpool online pool-name disk-name
Firmware upgrades may cause the disk device ID to change.
ZFS should be able to update the device ID automatically,
assuming that the disk was not physically moved during the update.
If necessary, the pool can be exported and re-imported to
update the device IDs.
The three categories of errors experienced by ZFS are missing devices, damaged devices, and corrupted data.
It is important to check for all three categories of errors.
One type of problem is often connected to a problem from a different
family. Fixing a single problem is usually not sufficient.
Data integrity can be checked by running a manual scrubbing:
zpool scrub pool-name
zpool status -v pool-name
checks the status after the scrubbing is complete.
The status command also reports on
recovery suggestions for any errors it finds. These
are reported in the action section.
To diagnose a problem, use the output of
the status command and the fmd
messages in /var/adm/messages.
The config section of the status output reports the state of each device. The state can be ONLINE, DEGRADED, FAULTED, OFFLINE, or UNAVAIL.
config
The status command also reports
READ, WRITE
or CHKSUM errors.
To check if any problem pools exist, use
zpool status -x
This command only reports problem pools.
If a ZFS configuration becomes damaged, it can be
fixed by running export and
import.
Devices can fail for any of several reasons.
Once the problems have been fixed, transient errors
should be cleared:
zpool clear pool-name
In the event of a panic-reboot loop caused by a
ZFS software bug, the system can be instructed to
boot without the ZFS filesystems:
boot -m milestone=none
When the system is up, remount / as rw and remove
the file /etc/zfs/zpool.cache.
The remainder of the boot can proceed with the
svcadm milestone all command. At that
point import the good pools. The damaged pools may
need to be re-initialized.
The filesystem is 128-bit. 256 quadrillion zettabytes of
information is addressable. Directories can have up to
256 trillion entries. No limit exists on the number of
filesystems or files within a filesystem.
Because ZFS uses kernel addressable memory, we need to
make sure to allow enough system resources to take advantage
of its capabilities. We should run on a system with a
64-bit kernel, at least 1GB of physical memory, and adequate
swap space.
While slices are supported for creating storage pools, their
performance will not be adequate for production uses.
Mirrored configurations should be set up across multiple
controllers where possible to maximize performance and
redundancy.
Scrubbing should be
scheduled on a regular basis to identify problems before they
become serious.
When latency or other requirements are important, it makes
sense to separate them onto different pools with distinct
hard drives. For example, database log files should be
on separate pools from the data files.
Root pools are not yet supported in the Solaris 10
6/2006 release, though they are anticipated in a future release.
When they are used, it is best to put them on separate
pools from the other filesystems.
On filesystems with many file creations and deletions,
utilization should be kept under 80% to protect performance.
The recordsize parameter can be tuned on
ZFS filesystems. When it is changed, it only affects
new files. zfs set recordsize=size
tuning can help where large files (like database files) are accessed
via small, random reads and writes. The default is 128KB;
it can be set to any power of two between 512B and
128KB. Where the database uses a fixed block or record
size, the recordsize should be set to match.
This should only be done for the filesystems actually
containing heavily-used database files.
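For example, for a database engine that does 8KB page I/O, one might run (values illustrative):

zfs set recordsize=8K pool-name/dbfiles
zfs get recordsize pool-name/dbfiles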
In general, recordsize should be reduced when
iostat regularly shows a throughput near
the maximum for the I/O channel. As with any tuning,
make a minimal change to a working system, monitor it
for long enough to understand the impact of the change,
and repeat the process if the improvement was not good
enough or reverse it if the effects were bad.
The
ZFS Evil Tuning Guide contains a number of tuning
methods that may or may not be appropriate to a particular installation.
As the document suggests, these tuning mechanisms will have to be
used carefully, since they are not appropriate to all installations.
Disabling or selecting checksums: zfs set checksum=off filesystem, or zfs set checksum='on | fletcher2 | fletcher4 | sha256' filesystem
Capping the ARC size: set zfs:zfs_arc_max in /etc/system
Disabling prefetch: zfs:zfs_prefetch_disable
Tuning the vdev cache: set zfs:zfs_vdev_cache_bshift = 13
Limiting queued I/Os per device: set zfs:zfs_vdev_max_pending = 10
Disabling cache flushes: set zfs:zfs_nocacheflush = 1
Max Bruning wrote an
excellent paper on how to examine the internals of a ZFS data structure.
(Look for the article on the ZFS On-Disk Data Walk.) The structure is defined in
ZFS On-Disk Specification.
Some key structures:
uberblock_t: uts/common/fs/zfs/sys/uberblock_impl.h (the active uberblock can be dumped with zdb -uuu zpool-name)
blkptr_t: uts/common/fs/zfs/sys/spa.h
dnode_phys_t: uts/common/fs/zfs/sys/dmu.h
objset_phys_t: uts/common/fs/zfs/sys/dmu_objset.h
ZAP leaf structures: uts/common/fs/zfs/sys/zap_leaf.h
dsl_dir_phys_t
dsl_dataset_phys_t (contains a blkptr_t)
znode_phys_t
Solaris Troubleshooting and Performance Tuning Home Page
ZFS Best Practices Guide
ZFS Evil Tuning Guide
OpenSolaris ZFS Documentation Page
Solaris ZFS Administration Guide
Brune, Corey, ZFS Administration, SysAdmin Magazine Jan 2007
ZFS
On-Disk Data Walk in the OpenSolaris Developer Conference Proceedings.
ZFS On-Disk Specification
Programming Questions & Answers
You should practice these quizzes to improve your C programming skills needed for various interviews (campus interviews, walk-in interviews, company interviews), placements, entrance exams and other competitive exams.
a) int my_num = 100,000;
b) int my_num = 100000;
c) int my num = 1000;
d) int $my_num = 10000;
Explanation: space, comma and $ cannot be used in a variable name.
#include <stdio.h>

int main()
{
    printf("Hello World! %d \n", x);
    return 0;
}
a) Hello World! x;
b) Hello World! followed by a junk value
c) Compile time error
d) Hello World!
Explanation: It results in an error since x is used without declaring the variable x.
#include <stdio.h>

int main()
{
    int main = 3;
    printf("%d", main);
    return 0;
}
a) It will cause a compile-time error
b) It will cause a run-time error
c) It will run without any error and prints 3
d) It will experience infinite looping
Explanation: A C program can have a variable with the same name as a function (here main); the local variable is what gets printed, so the output is 3.
#include <stdio.h>

int main()
{
    char chr;
    chr = 128;
    printf("%d\n", chr);
    return 0;
}
a) 128
b) -128
c) Depends on the compiler
d) None of the mentioned
Explanation: Whether plain char is signed or unsigned is implementation-defined. Where char is signed (the common case), assigning 128 overflows and chr ends up holding -128.
#include <stdio.h>

int main()
{
    char *p[1] = {"hello"};
    printf("%s", (p)[0]);
    return 0;
}
a) Compile time error
b) Undefined behaviour
c) hello
d) None of the mentioned
Explanation: None
#include <stdio.h>

int main()
{
    printf("crazyfor\code\n");
    return 0;
}
a) crazyforcode
b) crazyfor
code
c) codeyfor
d) crazyfor
Explanation: The intended string is "crazyfor\rcode\n". \r is a carriage return that moves the cursor back to the start of the line, so "code" overwrites the first four characters of "crazyfor" and the output is "codeyfor". (As printed above, "\c" is not a valid escape sequence.)
a) !=
b) ==
c) ||
d) =
Explanation: None
#include <stdio.h>

int main()
{
    int a = 10;
    if (a == a--)
        printf("TRUE 1\t");
    a = 10;
    if (a == --a)
        printf("TRUE 2\t");
}
a) TRUE 1
b) TRUE 2
c) TRUE 1 TRUE 2
d) No output
Explanation: None
a) Within the block it appears
b) Within the blocks of the block it appears
c) Until the end of program
d) Both (a) and (b)
Explanation: None
a) true
b) false
c) Depends on the standard
d) None of the mentioned
Explanation: None
Can anybody please explain me why is the answer for q#8
#include <stdio.h>

int main()
{
    int a = 10;
    if (a == a--)
        printf("TRUE 1\t");
    a = 10;
    if (a == --a)
        printf("TRUE 2\t");
}
is TRUE1 TRUE2
Its True 2.
the answer shold be TRUE 1
there is a term called "sequence point" in c programming. Until the sequence point is reached in the expression, the side effect of the operation may not have been applied, and standard C doesn't guarantee the output.
Here, the statement
if(a == a--) has a side effect and the compiler doesn't guarantee its result.
Similar for if(a == --a).
In gcc compiler the output is TRUE2 which is not matching with the output given in the solution of the question.
why
See TRUE1 should be clear to you. And TRUE2 is also true because –a is evaluated before the whole expression so a becomes 9 before comparison starts hence on both side its 9 == 9 that is true.
thanks a lot! now i am clear about the question and its answer.
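As a point of reference, compilers can flag these expressions: building a reduced version with gcc -Wall typically produces a -Wsequence-point warning (a sketch; exact warning wording varies by version):

#include <stdio.h>

int main(void)
{
    int a = 10;
    if (a == a--)   /* gcc -Wall: operation on 'a' may be undefined */
        printf("TRUE 1\t");
    return 0;
}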
step 1: a = 10;
step 2: a-- (a = a-1 => a = 9);
step 3: 9 == 9;
step 4: display TRUE 1

step 1: a = 10;
step 2: --a (a = a-1 => a = 9);
step 3: 9 == 9;
step 4: display TRUE 2

OUTPUT IS TRUE 1 TRUE 2
Operator precedence
#6 is wrong, there is no \c escape sequence…
To,
surendra maharajan
1st let me tell u that -- before a variable is known as the prefix operator and the other one is the postfix operator.
In the prefix operator, first the value of the variable is changed and then it is assigned to the left variable.
In the case of if(a == a--),
the value of a is defined to be 10, and when a-- is executed it first decreases the value of a by 1 below the (==) operator, so the value of a is 9; now the work of (a--) comes, that is, it has to decrease by 1, and since the value of a is now 9 the value will become 8 for (a--).
Now what happens in
if(a == --a):
value of a = 10;
first it will become 9 due to (--a) and then 9 is assigned to the left variable, so the value of a is 9 on both sides.
for #8 the given answer is correct, a becomes 9 when --a and the comparison results in if(9==9) ok, what happens if (--a == a)? in this case also TRUE2 is coming
answer is option c. TRUE 1 TRUE 2!
because the (--) decrement operator has higher priority than the (==) equality operator!! so in both if conditions the decrement is done before comparing, hence option C is the correct answer
Robot Operating System (ROS) is an open source robotics platform that helps your robot visualize the world, map and navigate it, and perform physical interactions using state-of-the-art algorithms. If you want to build a complex robot, chances are there is some ROS code already available to help you. You can use as little of ROS as you like, and it installs on machines from the Raspberry Pi level upwards.
Let’s consider how to control a servo as an introduction to ROS. One drawback of servomotors is that they will often run as fast as they can to obey your command. This can result in your robot falling over because it suddenly started to rotate at top speed. Once we get ROS to control the servo, we can add sinusoidal-like control to keep your robot steady. You can do this in ROS without changing the controlling code, or the code that exposes the servo to ROS, or the servo hardware itself. And you can easily reuse the code for other projects in the future!
ROS has very good support for installation on Ubuntu or Debian, so you won’t have to compile to get going. This build uses a Linux machine running Ubuntu, a hobby servo, an Arduino, and a few bits of common cables like hookup wires. ROS will be running on the Ubuntu machine and its messages will be sent over USB to the Arduino. Once you have installed the binary ROS packages, let your Arduino environment know about the ROS libraries by entering the following commands in a console program (such as gnome-terminal or konsole):
cd ~/sketchbook/libraries
rm -rf ros_lib
rosrun rosserial_arduino make_libraries.py .
Program the Arduino
Photo by Hep Svadja
Now we can upload a sketch to an Arduino to perform the low-level servo control and control it from the Linux machine. This will move a servo to a location specified as a percentage (0.0 to 1.0) of the full motion we want to allow. Using a percentage instead of an explicit angle lets the Arduino code limit the exact angle that can be set, to explicitly avoid angles that you know will cause a collision.
As you can see, the normal setup and loop functions become quite sparse when using ROS. The loop function can be the same for any Arduino code that’s just subscribing to data. In the setup you have to initialize ROS and then call subscribe for each ROS message subscriber you have. Each subscriber takes up RAM on your Arduino, so you might only have 6-12 of them depending on what else your sketch needs to do.
#include <Arduino.h>
#include <Servo.h>
#include <ros.h>
#include <std_msgs/Float32.h>

#define SERVOPIN 3

Servo servo;

// map an incoming 0.0-1.0 value onto a 45-135 degree servo angle
void servo_cb( const std_msgs::Float32& msg )
{
  const float min = 45;
  const float range = 90;
  float v = msg.data;
  if( v > 1 ) v = 1;
  if( v < 0 ) v = 0;
  float angle = min + (range * v);
  servo.write(angle);
}

ros::Subscriber<std_msgs::Float32> sub( "/head/tilt", servo_cb );
ros::NodeHandle nh;

void setup()
{
  servo.attach(SERVOPIN);
  nh.initNode();
  nh.subscribe(sub);
}

void loop()
{
  nh.spinOnce();
  delay(1);
}
Now you need to be able to talk to the Arduino from the ROS world. The simplest way to do that is with a robot launch file. While the below file is very simple, these can include other launch files so you can eventually start a very complex robot with a single command.
$ cat rosservo.launch
<launch>
  <node pkg="rosserial_python" type="serial_node.py" name="serial_node">
    <param name="port" value="/dev/ttyUSB0" />
  </node>
</launch>
$ roslaunch ./rosservo.launch
The rostopic command lets you see where you can send ROS messages on your robot. As you can see below, the /head/tilt is available from the Arduino. A message can be sent using rostopic pub, the -1 option means to only publish the message once and we want to talk to /head/tilt sending a single floating point number.
$ rostopic list
/diagnostics
/head/tilt
/rosout
/rosout_agg
$ rostopic pub -1 /head/tilt std_msgs/Float32 0.4
$ rostopic pub -1 /head/tilt std_msgs/Float32 0.9
At this stage, anything that knows how to publish a number in ROS can be used to control the servo. If we move from 0 to 1 then the servo will run at full speed, which in itself is fine, but we might like the motor to accelerate to full speed and then slow down when it gets near the destination position. Less sudden motion, less jerky robot movement, less surprise to the humans in the area.
Smooth with Another Node
- Houndbot
- Terry. Both Terry and the Houndbot are ROS robots made primarily out of 6061 alloy parts. My goal is to have both be as autonomous as possible.
The below Python script listens to messages on /head/tilt/smooth and publishes many messages to /head/tilt to move the servo with a slow ramp up and a ramp down when getting close to the desired position. The moveServo_cb is called whenever a message arrives on /head/tilt/smooth. The callback then generates a number for every 10 degrees from -90 to +90 into the angles array. The sin() is taken on those angles which gives values ranging slowly from -1 to +1. Adding 1 to that makes the range 0 to +2, so a divide by 2 makes our array ramp up from 0 to +1. It’s then a matter of walking through the m array and publishing a message each time, moving slightly further through the range r each time, ending up at 1*r or the full range.
#!/usr/bin/env python
from time import sleep
import numpy as np
import rospy
from std_msgs.msg import Float32

currentPosition = 0.5
pub = None

def moveServo_cb(data):
    global currentPosition, pub
    targetPosition = data.data
    r = targetPosition - currentPosition
    angles = np.array(range(190)[0::10]) - 90
    m = ( np.sin( angles * np.pi / 180. ) + 1 ) / 2
    for mi in np.nditer(m):
        pos = currentPosition + mi*r
        print "pos: ", pos
        pub.publish(pos)
        sleep(0.05)
    currentPosition = targetPosition
    print "pos-e: ", currentPosition
    pub.publish(currentPosition)

def listener():
    global pub
    rospy.init_node('servoencoder', anonymous=True)
    rospy.Subscriber('/head/tilt/smooth', Float32, moveServo_cb)
    pub = rospy.Publisher('/head/tilt', Float32, queue_size=10)
    rospy.spin()

if __name__ == '__main__':
    listener()
To test out smooth servo motion, start the Python script and publish your messages to /head/tilt/smooth and you should see a smoother movement.
$ ./servoencoder.py
$ rostopic pub -1 /head/tilt/smooth std_msgs/Float32 1
$ rostopic pub -1 /head/tilt/smooth std_msgs/Float32 0
You can also remap the name of things in ROS. This way you can remap /head/tilt/smooth to be /head/tilt and the program commanding the servo will not even know that the sinusoidal motion is being used.
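A sketch of what that remapping might look like in a launch file (the commanding node shown here is hypothetical):

<node pkg="mycontroller" type="controller_node.py" name="controller">
  <!-- the node publishes to /head/tilt, but its messages are actually
       delivered to /head/tilt/smooth, where the smoothing script listens -->
  <remap from="/head/tilt" to="/head/tilt/smooth" />
</node>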
Going Further
I’ve focused on simple servo control here but ROS has support for much more. If you want to know what is blocking your robot from moving, there is already support for using a Kinect in ROS. Even if the navigation stack is using that data to do mapping, you can also feed a little Python script that moves a servo to track the closest object to the robot. Yes, the eyes really are following you.
Two ROS projects of mine are Terry and Houndbot. Terry is an indoor robot with two Kinects, one used exclusively for navigation, the other for depth mapping as I see fit. With its six Arduinos, Terry can be controlled via a ROS-backed web interface or directly via PS3 remote.
I designed the Houndbot for outdoor use. It has an RC remote, GPS, compass, and ROS controlled ears. I am working on getting it to use a PS4 eye twin camera for navigation. It cannot use a Kinect because the sun stops that from working. Since the hound is about 20kg I have upgraded the suspension recently, leading me to make custom alloy parts.
Robot Operating System Resources
Installation on Ubuntu
Delve into the world of navigation with ROS
ROS Q&A
Grab one of the many books on ROS
Get your robot arm on the move with ROS & MoveIt!
Run the NASA-GM Robonaut2 in a simulator. ROS is up there!
sqlite3_exec: the 3rd argument
(1) By anonymous on 2021-01-17 09:01:45 [link]
The 3rd argument of this API is the callback function.

1. The callback function is executed as many times as there are rows in the result. Correct?
2. The callback function returns the column names <b>every</b> time it is called? Correct? If affirmative, isn't this superfluous?
3. How does the callback function handle the different types in the result columns? (Int, Float etc)?
4. My callback function consistently fails on the second call: any ideas on how to overcome this? (I'm using C#)
(2) By Larry Brasfield (LarryBrasfield) on 2021-01-17 14:30:40 in reply to 1 [link]
> The callback function is executed as many times as there are rowsin [sic] the result. Correct? That's what the docs for sqlite3_exec() claim, and I have found it to be true. > The callback function returns the column names every time it is called? Correct? If affirmative, isn't this superfluous? Yes, and no. For some applications, it is a great convenience that the column names are available to the callback. > How does the callback function handle the different types in the result columns? (Int, Float etc)? That would be up to the library user. The sqlite3_column_type() function is likely to be useful if the column types are not known or thought to be coerceable. > My callback function consistently fails on the second call: any ideas on how to overcome this? (I'm using C#) It is time to learn how to use a debugger. Also, interfaces between the .Net CLR execution context and native code are generally non-trivial. More study is indicated. My guess is that you have botched a memory ownership issue. However, this subtopic is truly off-topic in this forum.
(3) By anonymous on 2021-01-17 16:26:53 in reply to 1 [link]
I figured out this one:

>\4. My callback function consistently fails on the second call: any ideas on how to overcome this? (I'm using C#)

The reason is:

>If an sqlite3_exec() callback returns non-zero, the sqlite3_exec() routine returns SQLITE_ABORT without invoking the callback again and without running any subsequent SQL statements.

My callback was returning 1; however, returning 0, I encounter the same error on the 6th iteration irrespective of the number of columns in the result. <sup>(... more to do with my code).</sup>
(4) By anonymous on 2021-01-22 10:57:50 in reply to 1 [link]
Is there an sqlite3 API that the callback function should invoke before returning 0? Irrespective of the number of columns in my query, I am getting >Attempted to read or write protected memory. after some records i.e. sqlite3_exec is failing with that error <b>before</b> reaching the last record that my query returns.
(5) By anonymous on 2021-01-22 15:06:38 in reply to 4 [link]
> Irrespective of the number of columns in my query, I am getting > Attempted to read or write protected memory. It may be time to use the debugger to get a backtrace of where this error happens. Speculating on numbers of `sqlite3_exec` callbacks isn't likely to get useful results.
(6) By anonymous on 2021-01-22 16:02:12 in reply to 5 [link]
>It may be time to use the debugger to get a backtrace of where this error happens. I did think of that but I have this feedback upon hitting the error: >Source=<Cannot evaluate the exception source> I'm using the pre-compiled binary (rather than compiling my own version).
(7) By Larry Brasfield (LarryBrasfield) on 2021-01-22 17:30:08 in reply to 6 [link]
You will generally need a debug build to conveniently debug code at the source level. But that is unlikely to be of much help because this is what you will find: The SQLite library is suffering an address fault because it has been given a trashed heap and is susceptible to [the GIGO principle](). The heap is almost certainly being taken from its pristine (not-garbage) initial condition to a trashed condition by effects flowing from your code or possibly other portions of the application using the SQLite library. I suggest you get your callback scheme working with some simple data type that does not require heap allocation (such as floats or integers), and only once you have that working make the transition to passing strings or blobs through your native-mode / coddled-execution interface. There is far less opportunity to be using bad pointers with the simple data types.
(8) By Keith Medcalf (kmedcalf) on 2021-01-22 18:28:52 in reply to 1 [link]
> The callback function is executed as many times as there are rows in the result. Correct?

Yes.

> The callback function returns the column names every time it is called? Correct?

Yes.

> If affirmative, isn't this superfluous?

No.

> How does the callback function handle the different types in the result columns? (Int, Float etc)?

It does not. All values are converted to text strings. This interface is designed for "primitive applications".

> My callback function consistently fails on the second call: any ideas on how to overcome this? (I'm using C#)

Are you attempting to "hang onto a pointer" after your callback returns? You cannot do this. The pointer arrays received for the data and colnames are only valid for the duration of the execution of the callback. They are invalid once you return from the callback function. If you want to access the data after the callback is complete or after sqlite3_exec is complete, you need to copy it somewhere else that you control. The same thing applies to the contents of the array of pointers and the data itself. You should not be attempting to modify that which you do not own.
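As a point of reference, a correct callback copies anything it needs before returning; a minimal sketch in plain C (names are illustrative):

```
#include <sqlite3.h>
#include <stdio.h>

/* invoked once per result row; argv and colnames are only valid
   for the duration of this call, so copy anything you need to keep */
static int row_cb(void *user, int argc, char **argv, char **colnames)
{
    for (int i = 0; i < argc; i++) {
        /* argv[i] is NULL for SQL NULL */
        printf("%s=%s\n", colnames[i], argv[i] ? argv[i] : "(null)");
    }
    return 0;  /* non-zero would make sqlite3_exec return SQLITE_ABORT */
}
```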
(9) By anonymous on 2021-01-22 21:15:55 in reply to 8 [link]
>Are you attempting to "hang onto a pointer" after your callback returns?

No. My SQL is:

>select * from employees; //chinook.db; table **employees** has 15 columns and 8 rows

My callback function gets the data correctly <i>to start with, as follows (showing first row below)</i>

|**names** |**values** |
|{string[15]} |{string[15]} |
| [0]: "EmployeeId" | [0]: "1" |
| [1]: "LastName" | [1]: "Adams" |
| [2]: "FirstName" | [2]: "Andrew" |
| [3]: "Title" | [3]: "General Manager" |
| [4]: "ReportsTo" | [4]: null |
| [5]: "BirthDate" | [5]: "1962-02-18 00:00:00" |
| [6]: "HireDate" | [6]: "2002-08-14 00:00:00" |
| [7]: "Address" | [7]: "11120 Jasper Ave NW" |
| [8]: "City" | [8]: "Edmonton" |
| [9]: "State" | [9]: "AB" |
| [10]: "Country" | [10]: "Canada" |
| [11]: "PostalCode"| [11]: "T5K 2N1" |
| [12]: "Phone" | [12]: "+1 (780) 428-9482" |
| [13]: "Fax" | [13]: "+1 (780) 428-3457" |
| [14]: "Email" | [14]: "[email protected]"|

I've got the 15 column names and values, including the <i>null</i> value for <i>ReportsTo</i>. The values are literals, <i>as you've pointed out.</i> My callback function writes the data it receives to a file, <i>its content as follows:</i>

```
1-0:EmployeeId=1
1-1:LastName=Adams
1-2:FirstName=Andrew
1-3:Title=General Manager
1-4:ReportsTo=
1-5:BirthDate=1962-02-18 00:00:00
1-6:HireDate=2002-08-14 00:00:00
1-7:Address=11120 Jasper Ave NW
1-8:City=Edmonton
1-9:State=AB
1-10:Country=Canada
1-11:PostalCode=T5K 2N1
1-12:Phone=+1 (780) 428-9482
1-13:Fax=+1 (780) 428-3457
1-14:[email protected]
2-0:EmployeeId=2
2-1:LastName=Edwards
2-2:FirstName=Nancy
2-3:Title=Sales Manager
2-4:ReportsTo=1
2-5:BirthDate=1958-12-08 00:00:00
2-6:HireDate=2002-05-01 00:00:00
2-7:Address=825 8 Ave SW
2-8:City=Calgary
2-9:State=AB
2-10:Country=Canada
2-11:PostalCode=T2P 2T3
2-12:Phone=+1 (403) 262-3443
2-13:Fax=+1 (403) 262-3322
2-14:[email protected]
3-0:EmployeeId=3
3-1:LastName=Peacock
3-2:FirstName=Jane
3-3:Title=Sales Support Agent
3-4:ReportsTo=2
3-5:BirthDate=1973-08-29 00:00:00
3-6:HireDate=2002-04-01 00:00:00
3-7:Address=1111 6 Ave SW
3-8:City=Calgary
3-9:State=AB
3-10:Country=Canada
3-11:PostalCode=T2P 5M5
3-12:Phone=+1 (403) 262-3443
3-13:Fax=+1 (403) 262-6712
3-14:[email protected]
```

The first number is the record number (index 1), the second number is the column number (index 0), followed by the column name and column value. <b>Only 3 of 8 rows come to the callback function</b>. Then I hit this error:

>Message=Attempted to read or write protected memory. This is often an indication that other memory is corrupt. Source=<Cannot evaluate the exception source>

1. I've tried with other tables with fewer and more columns and records (in case it was a buffer issue) and meet the same fatal error.
2. I've also experimented with adding delays (in case there was a timing issue) in the callback function but it makes no difference.
3. I am using the pre-compiled 32-bit 3.34 version. I tried with 3.26 - makes no difference.

Since my code works for 3 records, it is probably 'correct'. <i>I've run out of options for debugging the reason for the error I am getting after 3 rows.</i> Hopefully, someone can share their insight so I can solve this problem. When I call sqlite3_exec from my code, on which line in sqlite3.c does it land?
(10) By Larry Brasfield (LarryBrasfield) on 2021-01-23 02:15:53 in reply to 9 [link]
> Since my code works for 3 records, it is probably 'correct'.

That is nowhere close to true. However, it does suggest a debugging strategy. After each record, do a heap integrity check. I must also submit that your definition of "works" is far too narrow. Maybe your code produces the results you expect 3 times, but it clearly causes some degradation of **something**, such that successive calls have a lower probability of satisfying even your weak kind of "works".

> I've run out of options for debugging the reason for the error I am getting after 3 rows. Hopefully, someone can share their insight so I can solve this problem.

Well, I already suggested an approach. How did that work out? The result may very well show you that whether or not "it works" depends on something in your code. You did not answer Keith's question, "Are you attempting to 'hang onto a pointer' after your callback returns?" This leads me to suspect you do not know what he means or why that programming sin should be among your chief suspects. (The values you cite in your not-quite-an-answer to him are **not** literals.)

> When I call sqlite3_exec from my code, on which line in sqlite3.c does it land?

If you can find "sqlite3_exec(" in sqlite3.c, you will find instances of that text in 4 categories: (1) Inside of comments; (2) A forward declaration; (3) In other code calling into sqlite3_exec(); and (4) A definition of sqlite3_exec(...). When you call it, that last is where your call "lands". I feel compelled to note that somebody who cannot look at sqlite3.c to find a function definition is unlikely to be able, absent extremely compelling evidence, to correctly diagnose that a bug lies outside his own code and hence must be in somebody else's code. The sqlite3_exec() API is used successfully by many thousands of SQLite library users, and the whole library API is tested extensively for every release and intervening code drop. The overwhelming likelihood is that your callback is doing something that cannot work reliably. If it passes around soon-to-be-stale pointers to dynamically allocated memory, to be used after they are in fact stale (such as Keith and I suspect, leading to our suggestions), that is what you must stop doing. Your debugging effort is only going to be hindered by your wish to absolve your own code. That is an attitude you would do better to shed. Your goal during debugging is to figure out what is going wrong, with enough detail that it is reasonable to decide what code is violating its (explicit or implicit) contract. Only then can you proclaim the location of the bug. You are nowhere close to that point, and having counted loops before a crash is not a sign of being close.
(11) By anonymous on 2021-01-23 07:43:07 in reply to 10 [link]
Your very long response does <b>nothing</b>, absolutely <b>nothing</b>, to help me search for a resolution. I am not absolving (not even attempting to absolve) my code. That's why I put correct in quotes. Clearly there is an issue, <i>somewhere</i>. You asserted:

>You did not answer Keith's question, "Are you attempting to 'hang onto a pointer' after your callback returns?"

I suggest you re-read the opening paragraph in [my response](). My question was specific:

>When I call sqlite3_exec from my code, <b>on which line</b> in sqlite3.c does it land?

Your response does not help. I am looking for hints to help me rule out the following:

1. That it is not my DLLImport code (my investigation is still ongoing)
2. That it is not my Callback code (my investigation is still ongoing)

that is the source of the problem. On balance, the fact that I am getting 3 records back with what I have suggests that the problem is more subtle. <i>And I am not even thinking that it is anything to do with SQLite yet; that is why I've tried with several versions thereof.</i>
(16) By Larry Brasfield (LarryBrasfield) on 2021-01-23 17:22:44 in reply to 11 [link]
> Your very long response does nothing absolutely nothing to help me search for a resolution.

I can see that. The "search" you intend must exclude using heap checking tools or changing your code to see whether use of simpler data types correlates with your problem. Good luck with that "search"; you will need it.

> > You did not answer Keith's question, "Are you attempting to 'hang onto a pointer' after your callback returns?"
> I suggest you re-read the opening paragraph in my response.

I did read it and found a simple "No." followed by an elaboration of facts unrelated to the question. It was just as if you answered a different question.

> > When I call sqlite3_exec from my code, on which line in sqlite3.c does it land?
> Your response does not help.

The line number might be 116761 if your SQLite version is 3.25. Does that help? Or do you expect some shoemaker's elf to provide a table with the line number you demand for every version of SQLite you might be using? I provided guidance for finding that line in **whatever version** of sqlite3.c you happen to have. To say that does not help shows that your notion of what will be useful in your quest is extremely limited -- so limited that I doubt you will find it here or at stackoverflow.

> On balance, since I am getting 3 records back with what I have suggests that the problem is more subtle

More subtle than what? More subtle than the effects of heap corruption? To any experienced programmer, the fact that you see 3 (or 6 or 2) "successful" callback executions followed by an address fault when sqlite3_exec() is called suggests that the SQLite library code has been asked to use a corrupted heap, and that corruption has led to an address fault when the heap manager attempts to use the corrupted heap data structure. As many thousands of test cases show and many thousands of the library's users know, the library is pretty good about not corrupting the heap. Hence, it is reasonable to suspect that your code is corrupting the heap. Yet you are immune to doing the simple work needed to ascertain whether or not that is happening. Too much work, or beyond your ken, I suppose. Better to see if somebody else has a solution.

> *And I am not even thinking that it is anything to do with SQLite yet; that is why I've tried with several versions thereof.*

Strange. Several versions of your failing code would be a better experiment. For example, if your callback (which you have not revealed, even at the stackoverflow site where your plea for help was also made) does nothing with the data passed to it (by indirect reference), does the address fault still occur? Does it still occur if the callback makes only deep copies of the data? Is the heap intact across your sqlite3_exec() calls? Is it intact across execution of your callback? Inserting some diagnostics would be much more fruitful than trying different versions of sqlite3.c and hoping that matters.

Ryan has given you good advice for helping others to help you. And as Kees noted (and I confirm), you tried but failed to do that. Is this because you do not know what code is actually running when your callback is called? Or are you simply hopeful that, with a few seconds more work posting that link, somebody is going to spot your bug?

Given what I see of your programming skill outside of the nice, managed execution environment that C# provides, I think you would be way ahead to use System.Data.SQLite and its SQLiteDataReader class to pull data from your SQLite database.
Clearly, you are not yet knowledgeable enough about using C to be using the Native Code interfacing capability. (If you were, you would not be asking others to find a function definition for you in sqlite3.c .)
(17) By anonymous on 2021-01-23 17:59:22 in reply to 16 [link]
>I think you would be way ahead to use System.Data.SQLite

We've discussed this. System.Data.SQLite is way behind in terms of SQLite3 releases & it has a very large footprint (code- and dependency-wise). I want to be in control of which version of SQLite3 I use; that way,

- when 3.35 comes along, I'll have all the new SQL functions available.
- I am in control of what functionality I deliver and can choose how I do that.
(18) By anonymous on 2021-01-23 18:10:28 in reply to 16 [link]
> Clearly, you are not yet knowledgeable enough about using C Very true. One lifetime is not enough to learn everything. Besides, the whole point of SQLite3.DLL is that I do NOT need to know its inner workings (however much that might help) when using it via its exposed interface i.e. its published APIs. (I wish the SQLite3 documentation was a little less terse & provided worked examples but I imagine that that might be impossible given the huge number of clients that use it on all diverse platforms.) It is mostly difficult UNTIL you know how. It is easy when you know how but then beginners' questions appear <i>tiresome</i> as hinted by forum responses.
(12) By Ryan Smith (cuz) on 2021-01-23 12:09:06 in reply to 9 [link]
Please just show the actual code of the callback and any other code you have in that project that ever touches the DB connection. So far you've been receiving some frustrated feedback because your posts have all been: "I have a black box, when I put this SQL into it, an egg comes out. Sometimes the egg is broken, specifically, the fourth egg - what is the problem?"

There are three possibilities -

1. You are lying (unintentionally for sure) and your code does something that some C# programmer on here will recognize as the error.
2. Your code is perfectly fine but C# or the Wrapper does something weird, which someone on here can test easily when they can reproduce your code on their machines.
3. SQLite is broken since not many people use that specific interface and it may have gone unnoticed and your code has finally shown the error, in which case being able to set up your code on our side would help in letting us debug the problem and fix SQLite.

Can you see the general theme here? We need to see the code. Your explanations may have felt complete to you because you are privy to your own code, but for us it is all staring at a black box and playing twenty questions with you to find the problem. Fun as that is, the result will be much quicker when we can just see the code.

PS: It would be "nice" if you can whittle down the code to just a few lines that still produces the error, but we will be happy to look at it either way.
(13) By anonymous on 2021-01-23 12:29:18 in reply to 12 [link]
>PS: It would be "nice" if you can whittle down the code to just a few lines that still produces the error, but we will be happy to look at it either way.

Please [refer here]() for my code (also, note the response/advice: <i>the callback interface is rarely the best way to do something even in C</i>).
(14) By Kees Nuyt (knu) on 2021-01-23 15:23:35 in reply to 13 [link]
> Please refer [here]() for my code (also, note the response/advice the callback interface is rarely the best way to do something even in C. That stackoverflow post only shows the function headers, not the actual processing code.
(15) By anonymous on 2021-01-23 16:28:03 in reply to 14 [link]
That you ask for the actual code raises doubts in my mind; however, to save time:

```
using System;
using System.Runtime.InteropServices;
using System.Text;

namespace ConsoleApp1
{
    class Program
    {
        // Callback delegate for sqlite3_exec (assumed; the post uses the
        // Callback type without showing its declaration)
        public delegate int Callback(IntPtr arg, int n, string[] values, string[] names);

        [DllImport("sqlite3.dll", EntryPoint = "sqlite3_exec", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
        static extern int sqlite3_exec(IntPtr dbHandle, byte[] sql, Callback callback, string args, out IntPtr errmsg);

        [DllImport("sqlite3.dll", EntryPoint = "sqlite3_open", CallingConvention = CallingConvention.Cdecl)]
        static extern int sqlite3_open(string filename, out IntPtr dbPtr);

        //[MarshalAs(UnmanagedType.LPTStr)]
        [DllImport("sqlite3.dll", EntryPoint = "sqlite3_close_v2", CallingConvention = CallingConvention.Cdecl)]
        static extern int sqlite3_close_v2(IntPtr dbHandle); // int sqlite3_close_v2(sqlite3*);

        static void Main(string[] args)
        {
            Callback IR = new Callback(IterateResults);
            IntPtr dbHandle;
            IntPtr errMsg = IntPtr.Zero;
            if (0 == sqlite3_open(@"d:\sqlite32\db\chinook.db", out dbHandle))
            {
                int rc = sqlite3_exec(dbHandle, Encoding.Default.GetBytes(@"select * from employees;"), IR, "First Argument of callback", out errMsg);
                // Callback on previous line fails!!!
            }
            else
            {
                Console.WriteLine("failed to open database");
            }
        }

        public static int IterateResults(IntPtr unused, int n, string[] values, string[] names)
        {
            for (int i = 0; i < n; i++)
            {
                Console.WriteLine($"{names[i]}={values[i]}");
            }
            return 0;
        }
    }
}
```

This is a Console Application; using Visual Studio 2017, tested the build with framework 4.7.2, Platform target <i>Any CPU</i> and <i>Prefer 32-bit</i> enabled. If you need to know how to reference SQLite3.DLL version 3.34, [see thread [3] here]().
(19) By Larry Brasfield (LarryBrasfield) on 2021-01-23 21:26:28 in reply to 15 [link]
> That you ask for the actual code raises doubts in my mind; I cannot imagine why, so it would be educational for you to say why. You might be interested to know that when I substitute your code into a demo .Net Core console application targeting 'Any CPU', change the DB and table name literals to match some databases I have laying around, and put a 64-bit sqlite3.dll next to the .exe, then I can build the app and run it without any address faults, including blatting out a table with 22588 rows. Is that code in your post 15 what is actually producing address faults for you?
(20) By anonymous on 2021-01-23 22:43:51 in reply to 19 [link]
>so it would be educational for you to say why.

I thought that anyone willing or capable of troubleshooting code would be able to [create the code]() without any problems. On the other hand, it would make sense to use the same code as myself; therefore, the request for code is valid.

> I can build the app and run it without any address faults

<b>Thank you very much</b> for doing this and relaying the outcome. <sup>As I've mentioned somewhere above, I've run out of options for fixing the error I'm encountering.</sup> I am using <b>Console App (.Net Framework)</b> and <b>32-bit SQLite3.DLL.</b> I'll try to re-create your setup (.Net Core & 64-bit SQLite3) and report back if I can also re-create your outcome, namely, success! Switching to .Net Core is a sound idea, but I need to be working with 32-bit SQLite3; my reasons will be clearer when I provide feedback ... more soon.
(23) By Larry Brasfield (LarryBrasfield) on 2021-01-24 01:54:20 in reply to 20 [link]
> > so it would be educational for you to say why [asking for code "raises doubts"].
> I thought that anyone willing or capable of troubleshooting code will be able to create the code without any problems.
> On the other hand, it would make sense to use the same code as myself; therefore, the request for code is valid.

Someone capable of seeing what is wrong with some code would likely create code that did not fail, unless they intended to create a specific bug. However, creating an address fault can be done in so many different ways that it would be pure chance if such creation happened to match how some unseen code did so. This is why "show the code" is so much preferred over summary descriptions.

> > I can build the app and run it without any address faults
> **Thank you** ... I am using Console App(.Net Framework) and 32-bit SQLite3.DLL

This might be a parameter marshaling problem, or it could be simply that you have not yet told the auto-magic marshaling builder enough that it knows what to expect. [a] A diligent perusal of the Native Code interfacing docs is indicated. I doubt that your problem is a simple bug in the .Net marshaling code or C# compiler. Debugging at the assembler level would likely be revealing as to what is going wrong, but not how to fix it. Debugging at that level for the working and failing versions, with cross-comparison, would be more interesting.

[a. I was surprised at how convenient marshaling setup has become since I last had to do it. Maybe it is not quite as easy as it looks in that post 15 code except when certain defaults are correct. ]

> Switching to .Net Core is a sound idea but I need to be working with 32-bit SQLite3 my reasons will be clearer when I provide feedback ... more soon.

I am not advocating .Net Core or use of a 64-bit DLL, at least not here. I was trying to get defaults more likely to be favorable because, while I did not see anything wrong popping out of the Delegate declaration, I was less sure as to what might be missing. To me, it seemed too easy.
(24) By anonymous on 2021-01-24 08:04:47 in reply to 23 [link]
>or it could be simply that you have not yet told the auto-magic marshaling builder

The 32-bit problem manifests itself even when I am not using marshalling to get the string values - I pass a single pointer to the callback and (minimally) use something like this:

```
string[] values = new string[n];
string[] names = new string[n];
int ptr_size = Marshal.SizeOf(typeof(IntPtr));
for (int i = 0; i < n; i++)
{
    IntPtr vp;
    vp = Marshal.ReadIntPtr(values_ptr, i * ptr_size);
    values[i] = util.from_utf8(vp);
    vp = Marshal.ReadIntPtr(names_ptr, i * ptr_size);
    names[i] = util.from_utf8(vp);
}
```

where values_ptr is a single pointer for the values & the corresponding pointer for column names is names_ptr; n is the number of columns.
(25.1) By Larry Brasfield (LarryBrasfield) on 2021-01-25 23:11:16 edited from 25.0 in reply to 24
Please examine the following code, then say whether you want to continue asserting that there is something amiss worth anybody else's investigation. You may also want to read about the [UnmanagedFunctionPointer]()Attribute.

<code>
using System;
using System.Runtime.InteropServices;

namespace ConsoleApp2
{
    class Program
    {
        const string dbLib = "sqlite3.dll";

        [DllImport(dbLib, EntryPoint = "sqlite3_exec", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
        static extern int sqlite3_exec(IntPtr dbHandle,
            [In][MarshalAs(UnmanagedType.LPStr)] string sql,
            Callback callback, IntPtr arbArg, ref IntPtr errmsg);

        [DllImport(dbLib, EntryPoint = "sqlite3_open", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
        static extern int sqlite3_open([In][MarshalAs(UnmanagedType.LPStr)] string filename, out IntPtr dbPtr);

        [DllImport(dbLib, EntryPoint = "sqlite3_close_v2", CallingConvention = CallingConvention.Cdecl)]
        static extern int sqlite3_close_v2(IntPtr dbHandle);

        // The attribute belongs on the callback delegate declaration:
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        public delegate int Callback(IntPtr arbArg, int n, string[] values, string[] names);

        static int callCount = 0;

        static void Main(string[] args)
        {
            Callback IR = new Callback(IterateResults);
            IntPtr dbHandle;
            IntPtr errMsg = IntPtr.Zero;
            if (args.Length < 2)
            {
                Console.WriteLine("Provide DB filename and a table name as arguments.");
            }
            else if (0 == sqlite3_open(args[0], out dbHandle))
            {
                int rc = sqlite3_exec(dbHandle, @"select * from " + args[1], IR, dbHandle, ref errMsg);
                Console.WriteLine($"exec return: {rc}");
                sqlite3_close_v2(dbHandle);
            }
            else
            {
                Console.WriteLine("failed to open database");
            }
        }

        public static int IterateResults(IntPtr unused, int n, string[] values, string[] names)
        {
            ++callCount;
            for (int i = 0; i < n; i++)
            {
                Console.WriteLine($"{names[i]}[{callCount}]={values[i]}");
            }
            return 0;
        }
    }
}
</code>
(21) By anonymous on 2021-01-23 23:19:01 in reply to 19 [link]
Larry, I switched to 64-bit SQLite3.DLL (still using .Net Framework) and was able to replicate your success with a table containing 15,000 records. (And it feels good!) I've raised a new topic <b>SQLite3 v3.34 ANOMALY - Precompiled Binaries for Windows</b> for investigating this anomaly.
(22) By anonymous on 2021-01-23 23:24:17 in reply to 19 [link]
I omitted to respond to this earlier: >Is that code in your post 15 what is actually producing address faults for you? Yes (with the 32-bit SQLite3). No fault with 64-bit SQLite3.
|
https://sqlite.org/forum/forumpost/12932135cd9f72a4?t=h&unf
|
CC-MAIN-2022-40
|
refinedweb
| 5,085 | 63.9 |
1, Title or a paragraph of lyrics or poetry.
2, SPI introduction
SPI stands for Serial Peripheral Interface. The SPI interface is mainly used between EEPROMs, FLASH memories, real-time clocks, A/D converters, and between digital signal processors and digital signal decoders.
SPI is a high-speed, full-duplex, synchronous communication bus that occupies only four pins on the chip, which saves pins and board space and simplifies PCB layout. Because it is so simple and easy to use, more and more chips integrate this protocol, and the STM32 also provides an SPI interface.
The SPI interface generally uses four lines for communication:
MISO: master data input, slave data output.
MOSI: master data output, slave data input.
SCLK: clock signal, generated by the master device.
CS: chip-select signal for the slave device, controlled by the master device.
SPI's main features are: it can send and receive serial data at the same time; it can work as a master or a slave; it provides a frequency-programmable clock; a transmit-end interrupt flag; write-collision protection; bus-contention protection; and so on.
The SPI bus has four working modes. To exchange data with a peripheral, the SPI module can configure the polarity and phase of its output serial synchronization clock to match the peripheral's requirements. Clock polarity (CPOL) has no significant impact on the transmission protocol.
If CPOL=0, the idle state of the serial synchronization clock is low; if CPOL=1, the idle state is high.
The clock phase (CPHA) selects one of two different transmission protocols. If CPHA=0, data is sampled on the first transition edge (rising or falling) of the serial synchronization clock; if CPHA=1, data is sampled on the second transition edge.
The clock phase and polarity of SPI main module and external equipment communicating with it shall be consistent.
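As a concrete illustration, here is a minimal sketch of configuring an STM32 SPI peripheral as a mode-0 master (CPOL = 0, CPHA = 0), assuming the standard peripheral library; the GPIO/alternate-function setup is omitted and the prescaler value is arbitrary:

void SPI1_Mode0_Init(void)
{
    SPI_InitTypeDef SPI_InitStructure;

    RCC_APB2PeriphClockCmd(RCC_APB2Periph_SPI1, ENABLE);

    SPI_InitStructure.SPI_Direction = SPI_Direction_2Lines_FullDuplex;
    SPI_InitStructure.SPI_Mode = SPI_Mode_Master;
    SPI_InitStructure.SPI_DataSize = SPI_DataSize_8b;
    SPI_InitStructure.SPI_CPOL = SPI_CPOL_Low;    /* CPOL = 0: clock idles low */
    SPI_InitStructure.SPI_CPHA = SPI_CPHA_1Edge;  /* CPHA = 0: sample on the first edge */
    SPI_InitStructure.SPI_NSS = SPI_NSS_Soft;     /* CS driven by software */
    SPI_InitStructure.SPI_BaudRatePrescaler = SPI_BaudRatePrescaler_8;
    SPI_InitStructure.SPI_FirstBit = SPI_FirstBit_MSB;
    SPI_InitStructure.SPI_CRCPolynomial = 7;
    SPI_Init(SPI1, &SPI_InitStructure);
    SPI_Cmd(SPI1, ENABLE);
}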
SPI communication process
The MOSI and MISO signals are valid only while NSS is low. MOSI and MISO transfer one bit of data in each SCK clock cycle.
3, OLED introduction
OLED stands for organic light-emitting diode, also known as organic electroluminescent display (OELD). OLED is considered an emerging technology for the next generation of flat-panel displays because of excellent characteristics such as self-emission, no backlight, high contrast, thin profile, wide viewing angle, fast response, flexible panels, a wide operating temperature range, and a simple structure and manufacturing process.
An LCD needs a backlight, but an OLED does not because it is self-emitting. At the same size, an OLED display looks better. With current technology it is still hard to make OLED panels in large sizes, but the resolution can be very high.
The module connects to the outside world through a 2x8 header of 2.54 mm pitch pins, 16 pins in total. Of the 16 lines, only 15 are used; one is left floating. Of the 15 lines, power and ground take 2, leaving 13 signal lines. The number of signal lines needed differs by mode: 8080 mode needs all 13 lines, while IIC mode needs only 2! One line is common to all modes: the reset line RST (RES). A low level on RST resets the OLED, and the module should be reset before each initialization.
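As a sketch, the reset pulse could look like this (the OLED_RST_Clr/OLED_RST_Set macro names follow typical vendor demo code and are an assumption):

OLED_RST_Clr();  /* drive RES low: the OLED enters reset */
delay_ms(100);   /* hold reset briefly */
OLED_RST_Set();  /* release RES: ready to run the init sequence */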
The following experiments use the seven-wire (SPI) OLED.
Refer to the Demo program given by the manufacturer: 0.96 inch SPI_OLED module supporting data package
For the introduction of 0.96 inch OLED display, please refer to the link:
4, STM32+OLED displays individual student number and name
1. Text modeling method
The theory is introduced in [Embedded 14].
Use the font-generation ("modeling") software to express the text to be displayed in hexadecimal; the software is included in the data link at the end of this article.
Software initial settings
Enter the target text in the text input area and press Ctrl+Enter to get the display preview
Click C51 format to generate dot matrix
2. Code writing
Content display: TEST_MainPage function -> test.c file
void TEST_MainPage(void)
{
    // GUI_ShowString(28,0,"abc",16,1);        // English name
    GUI_ShowCHinese(28,20,16,"Yao Yier",1);    // Chinese name
    GUI_ShowString(4,48,"12345678910",16,1);   // Student number (digits)
    delay_ms(1500);
    delay_ms(1500);
}
Text storage (example) -> oledfont.h file
const typFNT_GB16 cfont16[] =
{
    "system",0x00,0xF8,0x3F,0x00,0x04,0x00,0x08,0x20,0x10,0x40,0x3F,0x80,0x01,0x00,0x06,0x10,
    0x18,0x08,0x7F,0xFC,0x01,0x04,0x09,0x20,0x11,0x10,0x21,0x08,0x45,0x04,0x02,0x00,/*"System", 0*/
    "Unified",0x10,0x40,0x10,0x20,0x20,0x20,0x23,0xFE,0x48,0x40,0xF8,0x88,0x11,0x04,0x23,0xFE,
    0x40,0x92,0xF8,0x90,0x40,0x90,0x00,0x90,0x19,0x12,0xE1,0x12,0x42,0x0E,0x04,0x00,/*"Unified", 1*/
    "set up",0x00,0x00,0x21,0xF0,0x11,0x10,0x11,0x10,0x01,0x10,0x02,0x0E,0xF4,0x00,0x13,0xF8,
    0x11,0x08,0x11,0x10,0x10,0x90,0x14,0xA0,0x18,0x40,0x10,0xA0,0x03,0x18,0x0C,0x06,/*"Set up", 2*/
    "Set",0x7F,0xFC,0x44,0x44,0x7F,0xFC,0x01,0x00,0x7F,0xFC,0x01,0x00,0x1F,0xF0,0x10,0x10,
    0x1F,0xF0,0x10,0x10,0x1F,0xF0,0x10,0x10,0x1F,0xF0,0x10,0x10,0xFF,0xFE,0x00,0x00,/*"Set", 3*/
};
Main function -> main.c file
int main(void)
{
    delay_init();    // Delay function initialization
    OLED_Init();     // Initialize OLED
    OLED_Clear(0);   // Clear screen (all black)
    while(1)
    {
        TEST_MainPage();  // Interface display
    }
}
3. Effect display
5, STM32+OLED displays the temperature and humidity of AHT20
1. Code writing
Temperature and humidity display: read_AHT20 function -> bsp_i2c.c file
void read_AHT20(void)
{
    uint8_t i;

    for(i=0; i<6; i++)
    {
        readByte[i]=0;
    }

    //-------------
    I2C_Start();
    I2C_WriteByte(0x71);
    ack_status = Receive_ACK();
    readByte[0]= I2C_ReadByte();
    Send_ACK();
    readByte[1]= I2C_ReadByte();
    Send_ACK();
    readByte[2]= I2C_ReadByte();
    Send_ACK();
    readByte[3]= I2C_ReadByte();
    Send_ACK();
    readByte[4]= I2C_ReadByte();
    Send_ACK();
    readByte[5]= I2C_ReadByte();
    SendNot_Ack();
    //Send_ACK();
    I2C_Stop();
    //--------------

    if( (readByte[0] & 0x68) == 0x08 )
    {
        H1 = readByte[1];
        H1 = (H1<<8) | readByte[2];
        H1 = (H1<<8) | readByte[3];
        H1 = H1>>4;
        H1 = (H1*1000)/1024/1024;

        T1 = readByte[3];
        T1 = T1 & 0x0000000F;
        T1 = (T1<<8) | readByte[4];
        T1 = (T1<<8) | readByte[5];
        T1 = (T1*2000)/1024/1024 - 500;

        AHT20_OutData[0] = (H1>>8) & 0x000000FF;
        AHT20_OutData[1] = H1 & 0x000000FF;
        AHT20_OutData[2] = (T1>>8) & 0x000000FF;
        AHT20_OutData[3] = T1 & 0x000000FF;
    }
    else
    {
        AHT20_OutData[0] = 0xFF;
        AHT20_OutData[1] = 0xFF;
        AHT20_OutData[2] = 0xFF;
        AHT20_OutData[3] = 0xFF;
        printf("lyy");
    }

    /* Display the collected temperature and humidity through the serial port
    printf("\r\n");
    printf("Temperature: %d%d.%d", T1/100, (T1/10)%10, T1%10);
    printf("Humidity: %d%d.%d", H1/100, (H1/10)%10, H1%10);
    printf("\r\n");
    */

    t = T1/10;
    t1 = T1%10;
    a = (float)(t + t1*0.1);
    h = H1/10;
    h1 = H1%10;
    b = (float)(h + h1*0.1);
    sprintf(strTemp, "%.1f", a);  // Format the AHT20 temperature into the string array strTemp
    sprintf(strHumi, "%.1f", b);  // Format the AHT20 humidity into the string array strHumi
    GUI_ShowCHinese(16,00,16,"Temperature and humidity display",1);
    GUI_ShowCHinese(16,20,16,"temperature",1);
    GUI_ShowString(53,20,strTemp,16,1);
    GUI_ShowCHinese(16,38,16,"humidity",1);
    GUI_ShowString(53,38,strHumi,16,1);
    delay_ms(1500);
    delay_ms(1500);
}
Dot matrix display text
"temperature",0x00,0x00,0x23,0xF8,0x12,0x08,0x12,0x08,0x83,0xF8,0x42,0x08,0x42,0x08,0x13,0xF8, 0x10,0x00,0x27,0xFC,0xE4,0xA4,0x24,0xA4,0x24,0xA4,0x24,0xA4,0x2F,0xFE,0x00,0x00,/*"Temperature ", 0*/ "degree",0x01,0x00,0x00,0x80,0x3F,0xFE,0x22,0x20,0x22,0x20,0x3F,0xFC,0x22,0x20,0x22,0x20, 0x23,0xE0,0x20,0x00,0x2F,0xF0,0x24,0x10,0x42,0x20,0x41,0xC0,0x86,0x30,0x38,0x0E,/*"Degrees ", 0*/ "wet",0x00,0x00,0x27,0xF8,0x14,0x08,0x14,0x08,0x87,0xF8,0x44,0x08,0x44,0x08,0x17,0xF8, 0x11,0x20,0x21,0x20,0xE9,0x24,0x25,0x28,0x23,0x30,0x21,0x20,0x2F,0xFE,0x00,0x00,/*"Wet ", 0*/ "display",0x00,0x00,0x1F,0xF0,0x10,0x10,0x10,0x10,0x1F,0xF0,0x10,0x10,0x10,0x10,0x1F,0xF0, 0x04,0x40,0x44,0x44,0x24,0x44,0x14,0x48,0x14,0x50,0x04,0x40,0xFF,0xFE,0x00,0x00,/*"Display ", 0*/ "show",0x00,0x00,0x3F,0xF8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFE,0x01,0x00, 0x01,0x00,0x11,0x10,0x11,0x08,0x21,0x04,0x41,0x02,0x81,0x02,0x05,0x00,0x02,0x00,/*"Display ", 0*/
Main function -> main.c file

#include "delay.h"
#include "usart.h"
#include "bsp_i2c.h"
#include "sys.h"
#include "oled.h"
#include "gui.h"
#include "test.h"

int main(void)
{
    delay_init();         // Delay function initialization
    uart_init(115200);
    IIC_Init();
    NVIC_Configuration(); // Set NVIC interrupt group 2: 2-bit preemption priority and 2-bit response priority
    OLED_Init();          // Initialize OLED
    OLED_Clear(0);
    while(1)
    {
        //printf("temperature and humidity display");
        read_AHT20_once();
        OLED_Clear(0);
        delay_ms(1500);
    }
}
2. Effect display
The acquisition speed can be modified.
6, STM32+OLED up and down or left and right sliding display long characters
1. Scroll settings
Horizontal left-right movement
OLED_WR_Byte(0x2E,OLED_CMD); // Turn off scrolling
OLED_WR_Byte(0x26,OLED_CMD); // Horizontal scroll setup (0x26 = right, 0x27 = left)
/* The scroll setup parameters (dummy byte, start page, time interval,
   end page, dummy bytes) go here; see the SSD1306 datasheet */
OLED_WR_Byte(0x2F,OLED_CMD); // Turn on scrolling
The display data must be written before the scroll is started. If display data is transmitted while scrolling, the contents of RAM may be corrupted and the display will not work correctly.
2. Code writing
Add the text font code -> oledfont.h file
OLED display function test.c
void TEST_MainPage(void)
{
    GUI_ShowCHinese(10,20,16,"We have a bright future",1);
    delay_ms(1500);
    delay_ms(1500);
}
Main function main.c file
}
3. Effect display
7, Summary
Through these three experiments I have become basically proficient at driving an OLED from the STM32. Completing the three applications is not difficult as long as there are no problems in the code and pin configuration.
Note that the character size must be set correctly when displaying on the OLED, otherwise text cannot be displayed completely.
Pay attention to the difference between horizontal byte order, vertical byte order, and reversed bit order when generating the font data, otherwise you will get a patch of fuzzy dots instead of normal, clear Chinese characters.
OLED is an interesting peripheral. As more hardware projects come along, an OLED can be used for debugging and display, which is a great help. It is therefore worth mastering OLED usage and practicing more; it will pay off.
other
Data link (including all codes)
Link:
Extraction code: v5ti
|
https://programmer.help/blogs/embedded-16-stm32-oled-screen-display-application-example.html
|
CC-MAIN-2022-21
|
refinedweb
| 1,724 | 54.42 |
53118/query-regarding-arraybuffer
Hi,
Why we cant use new keyword for creating ArrayBuffer object?
import scala.collection.mutable.ArrayBuffer
val MyArray = new ArrayBuffer(1,2,3,4) ==>> Showing me error
val MyArray = ArrayBuffer(1,2,3,4) ==>> Working Fine
It's because that is the syntax. This is the syntax for creating a Scala ArrayBuffer:
import scala.collection.mutable.ArrayBuffer
var fruits = ArrayBuffer[String]()
var ints = ArrayBuffer[Int]()
The key thing to know is that the keyword new is not required before the ArrayBuffer. (This is because ArrayBuffer is either defined as a case class, or because it has an apply method defined. I haven’t looked at its source code to know which approach is taken.)
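To make the apply-method mechanism concrete, here is a rough, simplified sketch (not the real library source) of how a companion object's apply enables construction without new:

class Bag[A] {
  private var items = List.empty[A]
  def +=(elem: A): this.type = { items = elem :: items; this }
}

object Bag {
  // Bag(1, 2, 3) is sugar for Bag.apply(1, 2, 3)
  def apply[A](elems: A*): Bag[A] = {
    val b = new Bag[A]
    elems.foreach(b += _)
    b
  }
}

val nums = Bag(1, 2, 3, 4)  // no 'new' needed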
While I’m in the neighborhood, here are some other ways you can work with ArrayBuffer:
val x = ArrayBuffer('a', 'b', 'c', 'd', 'e')
var characters = ArrayBuffer[String]()
characters += "Ben"
characters += "Jerry"
characters += "Dale" ...READ MORE
|
https://www.edureka.co/community/53118/query-regarding-arraybuffer?show=53119
|
CC-MAIN-2020-10
|
refinedweb
| 195 | 56.96 |
I'm trying to play around with hbase in python and I am using the cloudera repository to install the hadoop/hbase packages. It seems to work, as I can access and work on the database using the shell, but it's not fully working within python.
I know that to communicate with hbase I need thrift, so I downloaded and compiled it from source. I can import thrift into python, but it fails when I do
from hbase import Hbase
I also tried: sudo aptitude install python-hbase
Okay, I figured it out. If anyone else is having problems with this in the future, it's actually pretty easy. In the step where you run
thrift --gen py Hbase.thrift, it creates an hbase folder in the location where you ran that command. Simply take that folder and copy it to your default module folder (or to the folder where you run your program) and it should work.
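For later readers, a rough sketch of using the generated module (the host, port 9090, and the classic Thrift-tutorial layout are assumptions, not taken from the question):

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase  # the folder generated by `thrift --gen py Hbase.thrift`

transport = TTransport.TBufferedTransport(TSocket.TSocket('localhost', 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)

transport.open()
print(client.getTableNames())  # e.g. list the existing tables
transport.close()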
|
https://codedump.io/share/dWhZ1Ak3IFd0/1/how-can-i-import-hbase-in-python
|
CC-MAIN-2016-44
|
refinedweb
| 150 | 75.84 |
Shader renders completely black? Posted Wednesday, 28 November, 2012 - 23:51 by Evinyl
Hello, I'm trying to make a rather basic shader that colors a fragment relative to its position in the window.
So far, i've come up with this
uniform vec3 LowColor;
uniform vec3 HighColor;
uniform float windowHeight;

float GetAverage(float scaler, float lowVal, float highVal)
{
    return (lowVal + highVal) * scaler;
}

void main()
{
    float scaler = gl_FragCoord.y / windowHeight;
    float r = GetAverage(scaler, LowColor[0], HighColor[0]);
    float g = GetAverage(scaler, LowColor[1], HighColor[1]);
    float b = GetAverage(scaler, LowColor[2], HighColor[2]);
    gl_FragColor = vec4(r, g, b, 1.0);
}
The uniforms are being found,
Location for Uniform windowHeight = 2 Location for Uniform LowColor = 1 Location for Uniform HighColor = 0
so I don't suspect that to be a problem.
Two things are a problem though, the bars that are rendered with the shader are completely black, ex.
When using fixed values for LowColor and HighColor, the shading seems way off, since there is only a small line across the bottom that is properly shaded, and the only way to fix it is to use a very high value for "windowHeight", such as 500000.
I'm assigning the uniforms of the shader as so,
BarShader.SetUniform("windowHeight", (float)mainGL.Height);
BarShader.SetUniform("LowColor", new Vector3(fromColor[2], fromColor[1], fromColor[0]));
BarShader.SetUniform("HighColor", new Vector3(toColor[2], toColor[1], toColor[0]));
public void SetUniform(string name, Vector3 value)
{
    int loc = GL.GetUniformLocation(iProgram, name);
    Console.WriteLine("Location for Uniform " + name + " = " + loc + ", value = " + value);
    GL.Uniform3(loc, ref value);
}
That is contained within a shader class.
Re: Shader renders completely black?
Whoops, fixed one part, I wasn't using the program before setting the uniforms...
Now, the other part, what is the scale of the gl_FragCoord.y variable? I tried sending in the height of the GlControl, but that just shades a very small, ~10 pixel wide line at the bottom, with the remaining height of the bar being a solid color.
Re: Shader renders completely black?
I guess there's a few things that could be odd here. I'm not 100% sure what you are trying to accomplish, but I guess you're trying to make a vertical gradient that goes from LowColor to HighColor. So I'll work from that assumption.
GetAverage is not interpolating between those two values.
Let's say you want to go between black and white (doing this for each channel)
You want to linearly interpolate between two values... or, as the cool kids call it, "Lerp"ing.
And in fact, it's so handy, it's built in as mix(start, end, alpha)
(mix / lerp / blend ) are sometimes used interchangeably.
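For example, the whole per-channel computation collapses to something like this sketch (reusing the names from the original shader):

float scaler = gl_FragCoord.y / windowHeight;
// equivalent to LowColor + (HighColor - LowColor) * scaler
gl_FragColor = vec4(mix(LowColor, HighColor, scaler), 1.0);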
Okay, so after that, I'd take a really close look to make sure all your uniforms are correct. You didn't really mention their values, but the Vec3s probably should all have values in them between 0.0 and 1.0.
The scale of gl_fragCoord starts at 0.0 at the bottom and goes up to the screen height, I guess... I guess if I was you, I'd draw a screensize quad with UVs and then grab the gl_MultiTexCoord0 so it's ranged between 0 and 1.. But that's probably a matter of personal preference.
Consider simplifying your shader for testing purposes.
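Something like this minimal fragment shader would do (a sketch, assuming windowHeight is still being set):

uniform float windowHeight;
void main()
{
    float scaler = gl_FragCoord.y / windowHeight;
    gl_FragColor = vec4(scaler, scaler, scaler, 1.0);
}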
That will let you know if your scaler variable is ranged between zero and one like you expect. It should draw a white to black gradient.
Once you know that's working, add on the rest of it.
Re: Shader renders completely black?
Okay, so I figured out what I was doing wrong. I was passing the colors as floats between 0-255. The GetAverage function was also doing nothing to help, but it was late at night, so what can you expect. Regardless, I fixed it, and thanks for the help.
Re: Shader renders completely black?
Cool! Glad to hear it works now.
|
http://www.opentk.com/node/3224
|
CC-MAIN-2014-42
|
refinedweb
| 660 | 66.03 |
I discovered this pattern (or anti-pattern) and I am very happy with it.
I feel it is very agile:
def example():
    age = ...
    name = ...
    print "hello %(name)s you are %(age)s years old" % locals()
Sometimes I use its cousin:
def example2(obj):
    print "The file at %(path)s has %(length)s bytes" % obj.__dict__
I don't need to create an artificial tuple and count parameters and keep the %s matching positions inside the tuple.
Do you like it? Do/Would you use it? Yes/No, please explain.
It's OK for small applications and allegedly "one-off" scripts, especially with the
vars enhancement mentioned by @kaizer.se and the
.format version mentioned by @RedGlyph.
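For reference, rough sketches of those two variants (the values and attributes are hypothetical): vars() with no argument behaves like locals() here, and vars(obj) is equivalent to obj.__dict__.

def example3():
    age = 30
    name = "Bob"
    print "hello %(name)s you are %(age)s years old" % vars()

def example4(obj):
    # new-style formatting driven by the object's attribute dict
    print "The file at {path} has {length} bytes".format(**vars(obj))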
However, for large applications with a long maintenance life and many maintainers this practice can lead to maintenance headaches, and I think that's where @S.Lott's answer is coming from. Let me explain some of the issues involved, as they may not be obvious to anybody who doesn't have the scars from developing and maintaining large applications (or reusable components for such beasts).
In a "serious" application, you would not have your format string hard-coded -- or, if you had, it would be in some form such as
_('Hello {name}.'), where the
_ comes from gettext or similar i18n / L10n frameworks. The point is that such an application (or reusable modules that can happen to be used in such applications) must support internationalization (AKA i18n) and locatization (AKA L10n): you want your application to be able to emit "Hello Paul" in certain countries and cultures, "Hola Paul" in some others, "Ciao Paul" in others yet, and so forth. So, the format string gets more or less automatically substituted with another at runtime, depending on the current localization settings; instead of being hardcoded, it lives in some sort of database. For all intents and purposes, imagine that format string always being a variable, not a string literal.
So, what you have is essentially
formatstring.format(**locals())
and you can't trivially check exactly what local names the formatting is going to be using. You'd have to open and peruse the L10N database, identify the format strings that are going to be used here in different settings, verify all of them.
So in practice you don't know what local names are going to get used -- which horribly crimps the maintenance of the function. You dare not rename or remove any local variable, as it might horribly break the user experience for users with some (to you) obscure combination of language, locale and preferences.
If you have superb integration / regression testing, the breakage will be caught before the beta release -- but QA will scream at you and the release will be delayed... and, let's be honest, while aiming for 100% coverage with unit tests is reasonable, it really isn't with integration tests, once you consider the combinatorial explosion of settings [[for L10N and for many more reasons]] and supported versions of all dependencies. So, you just don't blithely go around risking breakages because "they'll be caught in QA" (if you do, you may not last long in an environment that develops large apps or reusable components;-).
So, in practice, you'll never remove the "name" local variable even though the User Experience folks have long switched that greeting to a more appropriate "Welcome, Dread Overlord!" (and suitably L10n'ed versions thereof). All because you went for
locals()...
So you're accumulating cruft because of the way you've crimped your ability to maintain and edit your code -- and maybe that "name" local variable only exists because it's been fetched from a DB or the like, so keeping it (or some other local) around is not just cruft, it's reducing your performance too. Is the surface convenience of
locals() worth that?-)
But wait, there's worse! Among the many useful services a
lint-like program (like, for example, pylint) can do for you, is to warn you about unused local variables (wish it could do it for unused globals as well, but, for reusable components, that's just a tad too hard;-). This way you'll catch most occasional misspellings such as
if ...: nmae = ... very rapidly and cheaply, rather than by seeing a unit-test break and doing sleuth work to find out why it broke (you do have obsessive, pervasive unit tests that would catch this eventually, right?-) -- lint will tell you about an unused local variable
nmae and you will immediately fix it.
But if you have in your code a
blah.format(**locals()), or equivalently a
blah % locals()... you're SOL, pal!-) How is poor lint going to know whether
nmae is in fact an unused variable, or actually it does get used by whatever external function or method you're passing
locals() to? It can't -- either it's going to warn anyway (causing a "cry wolf" effect that eventually leads you to ignore or disable such warnings), or it's never going to warn (with the same final effect: no warnings;-).
Compare this to the "explicit is better than implicit" alternative...:
blah.format(name=name)
There -- none of the maintenance, performance, and am-I-hampering-lint worries, applies any more; bliss! You make it immediately clear to everybody concerned (lint included;-) exactly what local variables are being used, and exactly for what purposes.
I could go on, but I think this post is already pretty long;-).
So, summarizing: "γνῶθι σεαυτόν!" Hmm, I mean, "know thyself!". And by "thyself" I actually mean "the purpose and scope of your code". If it's a 1-off-or-thereabouts thingy, never going to be i18n'd and L10n'd, will hardly need future maintenance, will never be reused in a broader context, etc, etc, then go ahead and use
locals() for its small but neat convenience; if you know otherwise, or even if you're not entirely certain, err on the side of caution, and make things more explicit -- suffer the small inconvenience of spelling out exactly what you're going, and enjoy all the resulting advantages.
BTW, this is just one of the examples where Python is striving to support both "small, one-off, exploratory, maybe interactive" programming (by allowing and supporting risky conveniences that extend well beyond
locals() -- think of
import *,
eval,
exec, and several other ways you can mush up namespaces and risk maintenance impacts for the sake of convenience), as well as "large, reusable, enterprise-y" apps and components. It can do a pretty good job at both, but only if you "know thyself" and avoid using the "convenience" parts except when you're absolutely certain you can in fact afford them. More often than not, the key consideration is, "what does this do to my namespaces, and awareness of their formation and use by the compiler, lint &c, human readers and maintainers, and so on?".
Remember, "Namespaces are one honking great idea -- let's do more of those!" is how the Zen of Python concludes... but Python, as a "language for consenting adults", lets you define the boundaries of what that implies, as a consequence of your development environment, targets, and practices. Use this power responsibly!-)
Regarding the "cousin", instead of
obj.__dict__, it looks a lot better with new string formatting:
def example2(obj):
    print "The file at {o.path} has {o.length} bytes".format(o=obj)
I use this a lot for repr methods, e.g.
def __repr__(self):
    return "{s.time}/{s.place}/{s.warning}".format(s=self)
|
https://pythonpedia.com/en/knowledge-base/1550479/python--is-using------var-s------locals---a-good-practice-
|
CC-MAIN-2020-29
|
refinedweb
| 1,262 | 59.94 |
From Devoxx: JavaFX on show, JDK 7 News !Filed under: devoxx javafx javase jdk7 on Friday Dec 12, 2008
Closing out a busy week here at Devoxx, the release of JavaFX and JDK 7 news have been the talk of the town !
Meanwhile, Devoxx attendees have been busily expressing their wants and needs on the whiteboards between sessions. Questions like Which language ?, Thinking of JavaFX ?, Which VM ?, Which JDK 7 language feature ? and even Worst Blog ? attracted attention all week.
The next day, Josh gave an entertaining talk about the next generation Java puzzlers, followed by Mark Reinhold who keynoted on Java Modularity and JDK 7. As you read here on the Planetarium, Project Jigsaw will modularize the JDK, you will get a new low pause garbage collector, better performance, language changes (see Joe's blog), and there will be new APIs too: NIO2, Swing App Framework, Annotations on Java types and a host of smaller features like the XRender graphics pipeline, SCTP support, unicode 5 and so on. JDK 7 will preview at JavaOne 2009 (don't forget to file a talk) and ship in a little over a year's time.
Sessions galore. Like Alex and Brian on the work in the VM to support multiple languages, and Richard and Josh on JavaFX in Practice: with a surprising number of questions about developing JavaFX on mobile. Hanging with half the JavaPosse off mike (though they have been broadcasting all week), Belgian beer, twittering, bumping into old friends, a Devoxx movie and suddenly today is the last day. What a week !
The Planetarium will be taking a break for the next week, but back right before the holidays with more news about what's been going on in the world of Java SE, Java ME, JavaFX and JavaCard (which, by the way, all Belgians have at least one of).
It seems that JavaFX is getting more and more news & developer attention...
That's pretty good, and I just want to congratulate all the guys that made JavaFX 1.0 possible!
Anyway, why not all the demos from the JavaFX launch, JavaOne? Posted on December 12, 2008 at 02:07 AM PST #
"Reactions to seeing JavaFX 1.0 for the first time have been very good."
Interesting.. reactions from everyone I've talked to indicate it isn't ready and feels rushed. Everybody I've talked to has experienced browser crashes trying the demos at javafx.com.
The general consensus is Sun better get it fixed with the next update - and fast, or it's dead in the water.
I personally have found a few key things missing, but perhaps I'm just using the wrong tool. I have a UI that would really benefit from being implemented in JavaFX, but I can't do it since I need a single heavyweight component in my UI for realtime video preview of a live source.
Posted by swpalmer on December 15, 2008 at 07:00 AM PST #
Very good to hear Java in general is getting more exposure. I too am looking forward to more capability and stability in JavaFX. I have not downloaded the latest build for JRE 1.7, but since that is a year away from release I will wait 6 more months.
I know a lot of hard work has gone into JavaFX and I want to thank the team! I will wait for the video capability to mature a little more before I use it. I already support heavyweight/lightweight mixing, and have for years, and also video for years, both HD and many formats, so I will watch JavaFX for a little while before I consider it for my portal project for kids.
Many Congrads to the team!
Tony Anecito
Founder,
MyUniPortal
Posted by Tony Anecito on December 15, 2008 at 12:01 PM PST #
The JavaFX SDK looks a lot more impressive than I thought it would. Some of the samples at javafx.com are great. Congrats!
Posted by Ian Yardley on December 16, 2008 at 12:51 AM PST #
> It seams that JavaFX is getting more and more news & developers attention...
It is massively hyped, dragged from conference to conference to feed it down developer throats. It is made to impress, aka "demo driven design", and made to milk as many book deals as possible for its developers out of it. It is not made to be useful, and not made to solve real problems.
Posted by Alf Igel on December 16, 2008 at 05:17 AM PST #
That's the end of Java. Uhmm what... Java being dead again?
Yes it is. Killed by Sun Microsystems. JavaFX definitely marks the end of Java. All of Sun's technological efforts are going into the new JavaFX framework, but JavaFX is Sun-only. It was created outside of the JCP and the JavaFX namespace will not be part of the official JRE7.
But that's ok for me. I guess the future of "Java-technology" is outside of the sinking Sun ship. Talk about Google, IBM, Adobe, Red Hat, Intel, Eclipse and Apache.
Posted by Ulrich Weber on December 16, 2008 at 06:00 AM PST #
the end of java? nope, i don't think so. there are millions of java programmers and hundreds of technologies that use java. javafx is just an additional... something that is to be integrated to existing projects
Posted by ely on December 16, 2008 at 08:27 PM PST #
The video puzzle looks nice but it doesn't work correctly on my mac :(
Posted by Emmanuel Puybaret on December 20, 2008 at 12:35 PM PST #
"the end of java? nope, i don't think so. there are millions of java programmers and hundreds of technologies that use java."
That sounds like what Perl developers were saying in 2000 when Perl 6 was announced.
Posted by miguel on December 30, 2008 at 05:26 PM PST #
|
http://blogs.sun.com/theplanetarium/entry/from_devoxx_javafx_on_show
|
crawl-002
|
refinedweb
| 980 | 71.85 |
A little break from my "LINQ to SQL tips" series of posts. A recent vote of no confidence on a related component orchestrated by community activists reminded me of many questions I have fielded and how the design team approached the design of LINQ to SQL (and also core LINQ APIs and C# language changes for LINQ). Nah, that’s for another day when it is cloudy and raining. Instead, let’s talk about my recent dream. Or rather, a nightmare!
But first turn off your flame throwers, grab a cup of coffee and don't take this too seriously ...
I had this Q&A nightmare about the component I worked on - LINQ to SQL. I am the "expert" providing the non-answers.
Q: How do I use blah pattern with LINQ to SQL (e.g. blah = ActiveRecord if you don't like abstract concepts) A: You don't!
Q: I think I wasn't sufficiently clear. I stood on one leg and when the phase of the moon was 64% of full, it worked but now that the moon is waxing further, your foo method throws bar exception when I do baz. How do I just get that bit working with LINQ to SQL? A: You don't
Q: (By now quite upset) Do you even understand blah?A: (Forced to be less terse) Yes. We considered blah and decided against it for a set of reasons listed below. That pattern is not consistent with the core design assumptions and recommended usage patterns with LINQ to SQL.
Q: (Now a full-force verdict) I hereby find you guilty of violating the implicit agreement to solve the world hunger problem using blah methodology. Hence, what you produced is useless, evil and must be stopped at once. Any software built using your component will accelerate global warming and cause all glaciers to melt at once. And of course, it will irreparably damage the young and impressionable minds of generations of developers leaving them utterly useless for anything except writing some old fashioned code. A: Thank you for your interest in LINQ to SQL err blah.
Nightmare aside, (what) were we thinking? Stay tuned for that ...
P.S.
1. This release includes backward-looking statements intended to qualify for the safe harbor from liability established by the Public Flagellation by Community Act of 2008. These backward-looking statements generally can be identified by phrases such as "did", "was", "thought" ... and by the absence of "will" "fix" "in future release".
2. I just wanted to get you objects from the table. I swear. Nothing more than that!
3. Scott B., if you are reading this, peace! I won't let you drag me on to the stage at another PDC BoF and I won't fix anything either. I can't. I don't drive LINQ to SQL anymore. I just drive righteous developers crazy J
The question was whether stored procedures returning multiple results of different shapes can be used with LINQ to SQL.
Here is the overall answer.
Yes, you can use sprocs returning multiple results of different shapes. Here is an example:
This should be added to your partial class that is derived from DataContext:
[Function(Name="dbo.MultipleResultTypesSequentially")]
[ResultType(typeof(Product))]
[ResultType(typeof(Customer))]
public IMultipleResults MultipleResultTypesSequentially()
{
IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())));
return ((IMultipleResults)(result.ReturnValue));
}
In consuming code, here is how it can be used
using(IMultipleResults sprocResults = db.MultipleResultTypesSequentially()) {
List<Product> prods = sprocResults.GetResult<Product>().ToList();
List<Customer> custs = sprocResults.GetResult<Customer>().ToList();
…
And no, the designer does not support this feature. So you have to add the method in your partial class. SqlMetal does however extract the sproc. The reason for that is an implementation detail: the two use the same code generator but different database schema extractors.
Anyway, with or without SqlMetal, you can use the feature as described above.
|
http://blogs.msdn.com/dinesh.kulkarni/
|
crawl-002
|
refinedweb
| 632 | 58.18 |
Current Role Using BLCLI - Steven Scarborough, Jun 9, 2008 2:14 PM
How do you get your current role using BLCLI?
1. Re: Current Role Using BLCLI - Greg Kullberg, Jun 10, 2008 7:31 AM (in response to Steven Scarborough)
That's a good question. Are you using a user_info.dat file?
You can specify a Role either by using the -r switch in your BLCLI call, or by echoing the BL_RBAC_ROLE environment variable (if it's been set).
Outside of the BLCLI you can always use the 'blid' command to get your current Role.
2. Re: Current Role Using BLCLI - Steven Scarborough, Jun 10, 2008 8:54 AM (in response to Greg Kullberg)
Thanks Greg for responding. The info was helpful. Can you give me the correct syntax to assign the Role value to BL_RBAC_ROLE using BLCLI? If I "chrole role_name" the variable does get set.
Thanks again.
Steve
3. Re: Current Role Using BLCLI - Greg Kullberg, Jun 10, 2008 9:55 AM (in response to Steven Scarborough)
Because it's an environment variable, you wouldn't set BL_RBAC_ROLE using the BLCLI. Take a look at the BladeLogicAdministration.pdf in the "Environment Variables" section under "Administering Security" for info on setting it.
4. Re: Current Role Using BLCLI - Bill Robinson, Jun 11, 2008 10:01 AM (in response to Greg Kullberg)
There's an 'unreleased' BLCLI command, RBACUser getCurrentUser, which should do the trick.
5. Re: Current Role Using BLCLI - James Andrews, Jun 12, 2008 11:37 AM (in response to Bill Robinson)
Hi,
If you do import BRProfile:
from com.bladelogic.client import BRProfile
You can use:
BRProfile.getCurrentRole()
To output the current role in script.
Don't think you need to enable unreleased commands to use this method.
Thanks,
James
6. Re: Current Role Using BLCLI - Steven Scarborough, Jun 12, 2008 11:49 AM (in response to Steven Scarborough)
Thanks for all of the feedback, it has been helpful. I'm just starting to work with scripting BL and have a lot to learn. I have a question about the last two responses:
1) How do you enable 'unreleased' commands?
2) How do you import BRProfile?
If you could just point me in the right direction I would appreciate it.
Best regards,
Steve Scarborough
UMB
7. Re: Current Role Using BLCLI - Bill Robinson, Jun 12, 2008 1:04 PM (in response to Steven Scarborough)
on unix run:
blcli -Dcom.bladelogic.cli.debug.release-only="false"
on windows run:
blcli2 -Dcom.bladelogic.cli.debug.release-only="false"
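Combining that flag with the unreleased command Bill mentioned earlier, a full invocation might look like this (a sketch only - exact option placement can vary between versions):

blcli -Dcom.bladelogic.cli.debug.release-only="false" RBACUser getCurrentUser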
8. Re: Current Role Using BLCLI - Bill Robinson, Jun 12, 2008 1:13 PM (in response to Bill Robinson)
9. Re: Current Role Using BLCLI - James Andrews, Jun 13, 2008 4:15 AM (in response to Steven Scarborough)
Hi Steve,
If you are using jython / jli you just need to include the import statement at the top of your script. Use the line below:
from com.bladelogic.client import BRProfile
Then call BRProfile.getCurrentRole() to get the current role.
I am not sure how you do it in NSH script to be honest!
Thanks,
James
10. Re: Current Role Using BLCLI - Siddu Angadi, Apr 3, 2012 4:24 AM (in response to Bill Robinson)
Hi Bill,
How do I use 'from com.bladelogic.client import BRProfile'?
When I add this line to my code I get the following error. Can you please help me with this?
Error Apr 3, 2012 2:49:16 PM ImportError: cannot import name BRProfile
Error Apr 3, 2012 2:49:16 PM from com.bladelogic.client import BRProfile
Error Apr 3, 2012 2:49:16 PM File "Program Files\BMC Software\BladeLogic\NSH\share\sensors\changeProp.jli", line 11, in <module>
Error Apr 3, 2012 2:49:16 PM Traceback (most recent call last):
My jli code is as below and executing through NSH JOB:
"
import sys
import re
import string as s
import bladelogic.cli.CLI as blcli
from com.bladelogic.client import BRProfile
BRProfile.getCurrentRole()
JOB_GROUP = sys.argv[1]
JOB_NAME = sys.argv[2]
jli = blcli.CLI()
set = jli.setServiceProfileName()
jli.setRoleName()
jli.connect()
cmd =["DeployJob","getDBKeyByGroupAndName",JOB_GROUP,JOB_NAME]
JOB_KEY1 = jli.run(cmd)
JOB_KEY = JOB_KEY1.returnValue
cmd2 =["Job","setPropertyValue",JOB_KEY,"QUALITY_APPROVAL","true"]
changeProp=jli.run(cmd2)
print changeProp.returnValue
"
11. Current Role Using BLCLI - Rohit Nayyar, Apr 3, 2012 4:57 AM (in response to Siddu Angadi)
Which version of BBSA (BladeLogic) are you using?
If this is 8.x, please note that the internal class hierarchies in BL have changed, so 'com.bladelogic.client' becomes 'com.bladelogic.om.infra.client.ui'. Use this import statement:
from com.bladelogic.om.infra.client.ui import BRProfile
12. Re: Current Role Using BLCLI - Siddu Angadi, Apr 4, 2012 4:25 AM (in response to Rohit Nayyar)
Yes Rohit. I am using 8.2.
Let me test with this import statement.
One more thing: is there any document on the internal classes? How do we come to know when BMC changes the internal class hierarchies?
Thanks
Siddu
13. Re: Current Role Using BLCLI - Rohit Nayyar, Apr 4, 2012 4:32 AM (in response to Siddu Angadi)
I am going to upload a document today, will update you.
This does not happen often; it has happened only in 8.x. I have not seen this since 7.x.
here is the document:
|
https://communities.bmc.com/message/235961?tstart=0
|
CC-MAIN-2016-07
|
refinedweb
| 867 | 59.7 |
In parts one to six of this weblog series I discussed mostly simple transformations. Now it is time to write about the most powerful XML transformation technique in ABAP: XSLT, which is integrated into ABAP by the CALL TRANSFORMATION command.
XSLT 2.0 – What’s new?
There are problems which are very difficult to solve in XSLT 1.0 - think of grouping, for example. There are even intractable tasks:
- You can't access nodesets stored in variables via XPath because they are a result tree fragment.
- An XSLT program can only create one output document. To create multiple documents you have to do postprocessing or apply multiple transformations but this can be very slow because the DOM tree of the document will be generated a few times.
To overcome these difficulties we use XSLT 2.0. In one of my last projects those transformations were supposed to run under Java on a non-SAP platform, but later it became clear that they should work in ABAP, too. So we had to develop in a way that let us migrate the transformations to the ABAP XSLT processor with only a few changes. This turned out to be a challenging task. So let's start to talk about the differences between XSLT 1.0 and XSLT 2.0.
When you start programming you will recognize that XSLT 2.0 is strongly typed; in fact you even have node typing via W3C Schema integration. XSLT 2.0 is based on sequences. Sequences are similar to nodesets, but they are ordered and allow duplicates. If you select nodes with an XPath 2.0 expression you will get a sequence of all matching nodes and not just the first one like in XSLT 1.0. In XPath 2.0 you have lots of new functions including regular expression support, and there is even more text-processing support in XSLT 2.0, although in my opinion, from a conceptual point of view, STX has better text-processing functions. If you look at these features of XSLT 2.0 you will soon recognize that they are not supported in ABAP. So let's have a look at the XSLT 2.0 features we can use under ABAP.
XSLT 2.0 Compliance
When the XSLT processor was implemented, the current XSLT 2.0 specification was still under discussion. For this reason, the version which was implemented was a W3C Working Draft from 2002. Still, it offers a lot of XSLT 2.0 features. Let me mention a few:
- We can define XPath functions.
- XPath contains an if-then-else construct.
- Grouping is supported with the xsl:for-each-group command.
- We have temporary trees.
- We have multiple result documents.
I will give an example for these features besides multiple output documents. If you are interested in a deeper investigation I suggest you read my SAP Heft XML-Datenaustausch in ABAP. The English version is coming soon.
An Example: Normalization of Test Data
The following transformation performs a kind of normalization of an XML document. I used it to post-process asXML documents - serialized ABAP data - that I had to store in a filesystem as test cases. Unfortunately those data contain unique identifiers in ID elements, which I had to map according to their lexicographic order to consecutive integers. And in fact this proves the power of the XSLT 2.0 features, because it would be much more difficult to solve this task in XSLT 1.0.
If you look at this program you will recognize that this might not be the best solution. But I chose it for some reasons: it is quite easy to understand, it contains important XSLT 2.0 techniques you can use in ABAP and, last but not least, there is always more than one way to do it in XSLT. In fact I will present a better solution for this problem at the end of this blog.
The transformation is quite easy: there is one template that matches all nodes and copies the content of each one. Then there is a second template only for elements called ID. Within this template I calculate a number for each alphanumeric ID. In the following sections I will show you how it works.
Temporary Trees
At the beginning of the transformation I copy the elements ID into a variable, sort these elements and delete adjacent duplicates. I will show this in detail in a later section.
We use this variable in the template for elements called ID. In this template we evaluate the content and assign a number to it using the following XPath expression:
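Something along these lines, where $ids is assumed to be the variable holding the sorted ID elements and util:cmp is the compare function defined below (a reconstruction - the exact expression may have differed):

count($ids/ID[util:cmp(., current()) < 0]) + 1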
We count all ID elements that have smaller text content compared to the one that is just being processed. We stored those elements in a variable for the reason of speed, but I will come to that point later. Please note that this XPath expression is not possible in XPath 1.0 because we can't access a variable. And we have to use XSLT 2.0 to define our own XPath functions. This is what the following section is about.
User defined XPath Functions
In XSLT 2.0 we can compare two strings lexicographically, but we can't in either XSLT 1.0 or the ABAP XSLT processor. So we have to define an XPath function similar to strcmp (just remember C) on our own. In fact user-defined functions are a great benefit to XSLT! We define this function util:cmp in its own namespace:
The function works in a recursive way and uses the proprietary function sap:find-first() to assign a number to an alphanumeric character. Of course the list of alphanumeric characters in the variable alphabet is far from being complete, so it might be better to solve this comparison using ABAP integration. To stop the recursion we need the sap:if command, which differs from the corresponding command in the current XSLT 2.0 specification.
Please note that the syntax used here for user-defined functions differs from the current XSLT 2.0 version.
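In final XSLT 2.0 syntax the same idea would look roughly like this - a sketch only, with substring-before standing in for the proprietary sap:find-first, and assuming the util and xs namespace prefixes are declared:

<xsl:function name="util:cmp" as="xs:integer">
  <xsl:param name="a" as="xs:string"/>
  <xsl:param name="b" as="xs:string"/>
  <xsl:variable name="alphabet" select="'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'"/>
  <!-- position of the first character of each string within the alphabet -->
  <xsl:variable name="pa" select="string-length(substring-before($alphabet, substring($a, 1, 1)))"/>
  <xsl:variable name="pb" select="string-length(substring-before($alphabet, substring($b, 1, 1)))"/>
  <xsl:sequence select="
    if ($a = '' and $b = '') then 0
    else if ($a = '') then -1
    else if ($b = '') then 1
    else if ($pa ne $pb) then $pa - $pb
    else util:cmp(substring($a, 2), substring($b, 2))"/>
</xsl:function>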
Good Bye Muenchian Grouping
There are people who claim that the only way to learn XSLT is to study special techniques like Muenchian grouping or Oliver Becker’s intersection method. This may be right but why should we choose the hard way if there is a simple one?
In the SDN blog Grouping XML with XSLT – From Muenchian Method To XSLT 2.0 you could read how to group with XSLT 2.0. Under ABAP there is the same statement, but some possibilities are not supported. There is no function current-grouping-key(), which allows access to the grouping criteria within the template block, for instance.
In our example we group the ID elements, sort them according to their value and delete adjacent duplicates with a second xsl:for-each-group command:
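A single pass in final XSLT 2.0 syntax achieves the same net effect (a sketch - the original used a separate pass for the duplicates):

<xsl:variable name="ids">
  <xsl:for-each-group select="//ID" group-by=".">
    <xsl:sort select="."/>
    <!-- one element per distinct value, in ascending order -->
    <ID><xsl:value-of select="."/></ID>
  </xsl:for-each-group>
</xsl:variable>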
We can do better!
I already mentioned that the solution above is far from being good. Let me tell you the reasons. In fact we don't need to define an XPath function for string comparison, because we have already stored a list of sorted IDs in a variable. To map each ID to a consecutive number we can use XSLT 2.0:
Here we use the fact that the ID elements in the variable are sorted. We can use XPath to query the number of preceding elements for a given ID value. Here is the complete example:
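The key template would be along these lines, with $ids the sorted temporary tree from above (a sketch of the idea rather than the exact original listing):

<xsl:template match="ID">
  <ID>
    <!-- rank = number of smaller IDs in the sorted, deduplicated list, plus one -->
    <xsl:value-of select="count($ids/ID[. = current()]/preceding-sibling::ID) + 1"/>
  </ID>
</xsl:template>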
With this approach we don’t need to call a compare function that supports only a constant number of alphanumeric characters. Moreover, our transformation will be much faster.
Summary
This was an introduction to the XSLT 2.0 features of the ABAP XSLT processor. I recommend using them!
Thanks for the nice overview.
It seems there is an XSLT day in SDN today:-)
Can you provide some info which ABAP version supports XSLT 2.0?
Thanks,
Peter
You need 6.20; 6.10 is not enough for XSLT 2.0, as far as I know.
Regards,
Tobias
Thanks for the info.
I was just wondering as in our SAP R/3 Enterprise 4.70*200 (BC 6.20 system) there is no XSLT 2.0 in the SE80 Tag Browser, only XSLT 1.0.
Best regards,
Peter
You are right about that. I just checked the Tag Browser in a 7.00 system and found no XSLT 2.0 tags either.
In fact you can switch the XSLT version to "2.0" within a transformation, but this is forbidden by the SAP library and, as far as I know, it has no effect anyway.
So if you want to write XSLT transformations just do as you know from XSLT 1.0 and apply some of the techniques I mentioned in the blog: temporary trees, grouping, user-defined XPath functions and so on.
If you are interested in further details of the XSLT 2.0 features I suggest you read my SAP Heft 😉
Regards,
Tobias
Thanks for the info.
Best regards,
Peter
|
https://blogs.sap.com/2006/07/26/xml-processing-in-abap-part-7-the-power-of-xslt-20/
|
CC-MAIN-2017-47
|
refinedweb
| 1,499 | 75.4 |
I was never really into games, but here's a blackjack game that I wrote today for practice. It's not meant to be really high tech or fully featured, just a little game for fun. What do you guys think and where can I improve it?
Bebop

Code:
//
// BLACKJACK!
// by Dr.Bebop
//
// Rules: Each player gets at most 5 cards. If the total
//        value of the cards is over 21 the player busts.
//        If a player gets a total of 21 with the first
//        two cards then they get a Blackjack, the highest
//        score in the game. Otherwise the winner is the
//        one with the highest total less than or equal to
//        21. In the event of a tie, the dealer wins the
//        hand.
//
#include <algorithm> // For random_shuffle().
#include <iostream>  // For cout and cin.
#include <cstdlib>   // For srand()
#include <cctype>    // For tolower().
#include <ctime>     // For time().

using namespace std;

const int N_CARDS = 52;  // Total number of cards.
int curr_card = 0;       // Current card in the deck, from 0 to 51
int deck[N_CARDS] = {    // Card values. Suit is ignored.
    2,3,4,5,6,7,8,9, 10,10,10,10,11,
    2,3,4,5,6,7,8,9, 10,10,10,10,11,
    2,3,4,5,6,7,8,9, 10,10,10,10,11,
    2,3,4,5,6,7,8,9, 10,10,10,10,11,
};

struct Part {        // Stuff for each player, number of cards,
    int n;           // the running total, and the individual
    int total;       // card values (for program upgrades).
    int cards[5];
};

// Take the deck of card values and mix them up randomly.
void shuffle_deck();
// Deal the first two cards.
Part first_deal( Part player );
// Deal out one card from the top of the deck (determined by curr_card)
int deal_one();
// After the first two cards are dealt, add another if the player wants.
Part hit_player( Part player );
// Test each player's hand and print who wins.
void check_score( Part player, Part dealer );

int main()
{
    char play_again = 'y';
    char hit = 0;

    // Seed the random number generator.
    srand( (unsigned int)time( (time_t *)NULL ) );
    while( tolower( play_again ) == 'y' ) {
        Part dealer = {0}, player = {0};
        cout<< "Shuffling..." <<endl;
        shuffle_deck();
        cout<< "First deal..." <<endl;
        // Deal to players.
        player = first_deal( player );
        dealer = first_deal( dealer );
        // Begin play.
        while( true ) {
            cout<< "Your total is: " << player.total <<endl;
            cout<< "Hit? (y/n): "<<flush;
            cin.get( hit ).ignore();
            // Dealer AI. Very simple, based on casino rules.
            while( dealer.total < 16 ) {
                dealer = hit_player( dealer );
                if( dealer.n > 4 )
                    break;
            }
            // Player options.
            hit = tolower( hit );
            if( hit == 'y' ) {
                player = hit_player( player );
                if( player.total > 21 || player.n > 4 )
                    break;
            }
            else if ( hit == 'n' )
                break;
            else
                cerr<< "Unknown selection" <<endl;
        }
        // Print the winner.
        check_score( player, dealer );
        cout<< "Play again? (y/n): "<<flush;
        cin.get( play_again ).ignore();
    }
    return 0;
}

void shuffle_deck()
{
    random_shuffle( deck, (deck + N_CARDS) );
    // Reset curr_card to the top of the deck.
    curr_card = 0;
}

Part first_deal( Part player )
{
    player.cards[player.n++] = deal_one();
    player.cards[player.n++] = deal_one();
    // n = 2 after the previous two lines. It makes for easier blackjack tests.
    player.total += (player.cards[0] + player.cards[1]);
    return player;
}

int deal_one()
{
    if( curr_card == 52 )  // If there are no more cards.
        shuffle_deck();    // Reshuffle the deck, start over at the top.
    return deck[curr_card++];
}

Part hit_player( Part player )
{
    int card = deal_one();
    if( card == 11 && (player.total + card) > 21 )  // Check for an ace.
        card = 1;
    player.cards[player.n++] = card;
    player.total += card;
    return player;
}

void check_score( Part player, Part dealer )
{
    if( dealer.total == 21 && dealer.n == 2 )       // Dealer blackjack.
        cout<< "Dealer blackjack. Dealer wins." <<endl;
    else if( player.total == 21 && player.n == 2 )  // Player blackjack.
        cout<< "BLACKJACK! You win!" <<endl;
    else if( player.total > 21 )                    // Player bust.
        cout<< "You busted. Dealer wins." <<endl;
    else if( dealer.total > 21 )                    // Dealer bust.
        cout<< "Dealer busts. You win!" <<endl;
    else if( dealer.total < player.total )          // Player high value.
        cout<< "You win!" <<endl;
    else if( player.total < dealer.total )          // Dealer high value.
        cout<< "Dealer wins." <<endl;
    else                                            // House wins in a tie.
        cout<< "Tie. Dealer wins." <<endl;
}
|
http://cboard.cprogramming.com/game-programming/24490-blackjack.html
|
CC-MAIN-2014-52
|
refinedweb
| 662 | 79.77 |
Below is the test program, including a Chinese character:
# -*- coding: utf-8 -*-
import json
j = {"d":"中", "e":"a"}
json = json.dumps(j, encoding="utf-8")
print json
{"e": "a", "d": "\u4e2d"}
You should read json.org. The complete JSON specification is in the white box on the right.
There is nothing wrong with the generated JSON. Generators are allowed to generate either UTF-8 strings or plain ASCII strings, where characters are escaped with the \uXXXX notation. In your case, the Python json module decided for escaping, and 中 has the escaped notation \u4e2d.
By the way: Any conforming JSON interpreter will correctly unescape this sequence again and give you back the actual character.
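For instance (Python 2, as in the question; ensure_ascii is the standard json.dumps flag that controls this behavior):

# -*- coding: utf-8 -*-
import json

j = {"d": u"中"}
print json.loads(json.dumps(j))["d"] == u"中"              # True: the escape round-trips
print json.dumps(j, ensure_ascii=False).encode("utf-8")    # {"d": "中"} -- raw UTF-8 output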
|
https://codedump.io/share/Gn6XV4Ye8R9J/1/python-jsondumps-can39t-handle-utf-8
|
CC-MAIN-2017-26
|
refinedweb
| 115 | 64.61 |
Let’s take another look at the Kolakoski sequence (part 1, part 2) which, by definition, is the sequence of 1s and 2s in which the nth term is equal to the length of the nth run of consecutive equal numbers in the same sequence. When a sequence has only two distinct entries, it can be visualized with the help of a turtle that turns left (when the entry is 1) or right (when the entry is 2). This visualization method seems particularly appropriate for the Kolakoski sequence since there are no runs of 3 equal entries, meaning the turtle will never move around a square of sidelength equal to its step. In particular, this leaves open the possibility of getting a simple curve… Here are the first 300 terms; the turtle makes its first move down and then goes left-right-right-left-left-right-left-… according to the terms 1,2,2,1,1,2,1,…
No self-intersections yet… alas, at the 366th term it finally happens.
Self-intersections keep occurring after that:
again and again…
Okay, the curve obviously doesn’t mind intersecting self. But it can’t be periodic since the Kolakoski sequence isn’t. This leaves two questions unresolved:
- Does the turtle ever get above its initial position? Probably… I haven’t tried more than 5000 terms
- Is the curve bounded? Unlikely, but I’ve no idea how one would dis/prove that. For example, there cannot be a long diagonal run (left-right-left-right-left) because having 1,2,1,2,1 in the sequence implies that elsewhere, there are three consecutive 1s, and that doesn’t happen.
Here’s the Python code used for the above. I represented the sequence as a Boolean array with 1 = False, 2 = True.
import numpy as np
import turtle

n = 500                          # number of terms to compute
a = np.zeros(n, dtype=np.bool_)
j = 0                            # the index to look back at
same = False                     # will next term be same as current one?
for i in range(1, n):
    if same:
        a[i] = a[i-1]            # current run continues
        same = False
    else:
        a[i] = not a[i-1]        # the run is over
        j += 1                   # another run begins
        same = a[j]              # a[j] determines its length

turtle.hideturtle()
turtle.right(90)
for i in range(n):
    turtle.forward(10)           # used steps of 10 or 5 pixels
    if a[i]:
        turtle.right(90)
    else:
        turtle.left(90)
|
https://calculus7.org/tag/turtle/
|
CC-MAIN-2018-05
|
refinedweb
| 408 | 68.5 |
Passing Data From QT To C++
- physicsguy
I was working through one of the examples on signals and slots and ive found that there is one line of code that is causing compilation issues. Here is my code
@#include <iostream>
#include <QObject>
using namespace std;
class Counter : public QObject{
Q_OBJECT
signals:
void valueChanged(int newValue);
public:
Counter(){
m_value = 1;
}
int value() const {
return m_value;
}
public slots:
void setValue(int value);
private:
int m_value;
};
void Counter::setValue(int value){
if(value != m_value){
m_value = value;
emit valueChanged(value); // This line is causing errors
}
}
int main(void){
Counter a, b;
QObject :: connect(&a, SIGNAL(valueChanged(int)), &b, SLOT(setValue(int)));
a.setValue(7);
cout << "a.setValue(7) " << endl;
cout << "(a.value, b.value) = " << a.value() << b.value() << endl;
return 0;
}@
Particularly, the error it's causing is
john:~/Desktop/QT/gui1$ make
g++ -m64 -Wl,-O1 -o gui1 example1.o -L/usr/X11R6/lib64 -lQt5Gui -L/usr/lib/x86_64-linux-gnu -lQt5Core -lGL -lpthread
example1.o: In function `Counter::setValue(int)':
example1.cpp:(.text+0x9): undefined reference to `Counter::valueChanged(int)'
collect2: error: ld returned 1 exit status
make: *** [gui1] Error 1
- SGaist Lifetime Qt Champion
Hi and welcome to devnet,
Since you declared your QObject class in main.cpp, moc is not run. Just move it to its own header and you should be good to go.
- physicsguy
Hi,
thanks for the quick response, that was indeed the issue!
SGaist - you can declare QObject derived classes in main.cpp, after your class throw in @#include "main.moc"@
So it seems the problem is not about the moc not running.
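For reference, a minimal self-contained sketch of that trick (qmake spots the "main.moc" include, which must come after the class definition, and runs moc over main.cpp; names mirror the example above):
@#include <QObject>
#include <iostream>

class Counter : public QObject {
    Q_OBJECT
signals:
    void valueChanged(int newValue);
public slots:
    void setValue(int value) {
        if (value != m_value) {
            m_value = value;
            emit valueChanged(value); // resolves because the moc output is included below
        }
    }
public:
    int value() const { return m_value; }
private:
    int m_value = 1;
};

int main() {
    Counter a, b;
    QObject::connect(&a, SIGNAL(valueChanged(int)), &b, SLOT(setValue(int)));
    a.setValue(7);
    std::cout << "(a.value, b.value) = " << a.value() << b.value() << std::endl; // 77
    return 0;
}

#include "main.moc"@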
|
https://forum.qt.io/topic/36950/passing-data-from-qt-to-c
|
CC-MAIN-2017-39
|
refinedweb
| 266 | 55.44 |
Closed Bug 689924 Opened 10 years ago Closed 10 years ago
Change "Inspect" shortcut to Cmd+Opt+I, Web Console to Cmd+Opt+K (Mac Only)
Categories
(DevTools :: General, defect)
Tracking
(firefox10+ verified)
Firefox 11
People
(Reporter: rik, Unassigned)
Details
(Keywords: verified-beta, Whiteboard: [fixed-in-fx-team][qa!])
Attachments
(1 file, 1 obsolete file)
Safari, Chrome and Opera use Cmd+Opt+I to launch their devtools. I think we should match that. Might be worth changing other shortcuts to replace Shift by Opt also.
> Might be worth changing other shortcuts to replace Shift by Opt also.

Which ones?
I'm gonna reopen this one because it's only about Mac shortcuts. The other bug says it can't be done because of Windows bindings.

(In reply to Paul Rouget [:paul] from comment #1)
> > Might be worth changing other shortcuts to replace Shift by Opt also.
> Which ones?

Web Console and Error Console currently use Shift.
Status: RESOLVED → REOPENED
Resolution: DUPLICATE → ---
This changes the Web Console and Inspect tool to use Cmd+Alt shortcuts. I left out the Error console cause I think it's not gonna be a tool for web developers.
Assignee: nobody → anthony
Status: REOPENED → ASSIGNED
Attachment #574111 - Flags: review?(dcamp)
Comment on attachment 574111 [details] [diff] [review]
Patch

>+#ifdef XP_MACOSX
>+ <key id="key_webConsole" key="&webConsoleCmd.commandkey;" oncommand="HUDConsoleUI.toggleHUD();" modifiers="accel,alt"/>
>+#else
> <key id="key_webConsole" key="&webConsoleCmd.commandkey;" oncommand="HUDConsoleUI.toggleHUD();" modifiers="accel,shift"/>
>+#endif

<key id="key_webConsole" key="&webConsoleCmd.commandkey;"
#ifdef XP_MACOSX
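Spelled out, Dao's suggested shape would be something like this (a reconstruction of the idea, not the exact landed diff) - share the common attributes and #ifdef only the modifiers:

<key id="key_webConsole" key="&webConsoleCmd.commandkey;" oncommand="HUDConsoleUI.toggleHUD();"
#ifdef XP_MACOSX
     modifiers="accel,alt"/>
#else
     modifiers="accel,shift"/>
#endif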
Thanks for the comment Dao, fixed. I've put Dave for the review but I have no idea who I should ask to review.
Attachment #574200 - Flags: review?(dcamp)
(In reply to Anthony Ricaud (:rik) from comment #6)
> Created attachment 574200 [details] [diff] [review]
> Patch v2
>
> Thanks for the comment Dao, fixed.
>
> I've put Dave for the review but I have no idea who I should ask to review.

You could ask any of the fine gentlemen listed in this page:
Attachment #574111 - Attachment is obsolete: true
Attachment #574111 - Flags: review?(dcamp)
Comment on attachment 574200 [details] [diff] [review]
Patch v2

Thanks Panos. Then I guess I should ask Dao.
Attachment #574200 - Flags: review?(dcamp) → review?(dao)
Whiteboard: [fixed-in-fx-team]
Updating the summary to reflect reality. This may severely impact people's muscle memory.
Summary: Change "Inspect" shortcut to Cmd+Opt+I → Change "Inspect" shortcut to Cmd+Opt+I, Web Console to Cmd+Opt+K (Mac Only)
Status: ASSIGNED → RESOLVED
Closed: 10 years ago → 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 11
Comment on attachment 574200 [details] [diff] [review]
Patch v2

Since this changes the key for a new feature, we should land this in aurora to get people used to it. The webconsole key-binding should be considered a companion setting. Low-risk. No code changes.
Attachment #574200 - Flags: approval-mozilla-aurora?
[triage comment]
Is there any downside to supporting both the current and this shortcut for web console? Hasn't web console shipped with that shortcut in multiple releases, and wouldn't we be breaking muscle memory that way?
Keywords: #relman/triage/needs-info
(In reply to Christian Legnitto [:LegNeato] from comment #13)
> [triage comment]
> Is there any downside to supporting both the current and this shortcut for
> web console? Hasn't web console shipped with that shortcut in multiple
> releases, and wouldn't we be breaking muscle memory that way?

Like, having two shortcuts assigned for the web console? Only downside is some untested xul. It does break muscle memory, and in some cases, interferes with other shortcuts requiring users to reconfigure hotkeys outside of Firefox (in my case, OmniFocus).
Did you test that this doesn't break existing localized shortcuts in localizations? (I don't think we have anything better than tweaking a mozmill l10n test and run through all localized builds.)
uh, no, I have no idea how to test that. I filed a follow-up bug to this: bug 706204 to restore the original web console shortcut as per legneato's comment 13. I'd like to get that into aurora at the same time.
I just received an email asking me to land this on Aurora. Since I cannot do that, de-assigning myself. Can someone take it?
Assignee: anthony → nobody
I will land that tomorrow.
Whiteboard: [fixed-in-fx-team] → [fixed-in-fx-team][qa+]
I have tried this on:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:10.0) Gecko/20100101 Firefox/10.0 beta 2
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:10.0) Gecko/20100101 Firefox/10.0 beta 2
Cmd+Opt+I opens "Inspect" and Cmd+Opt+K opens "Web Console".
Setting resolution to Verified Fixed.
Status: RESOLVED → VERIFIED
Keywords: verified-beta
Whiteboard: [fixed-in-fx-team][qa+] → [fixed-in-fx-team][qa!]
Product: Firefox → DevTools
|
https://bugzilla.mozilla.org/show_bug.cgi?id=689924
|
CC-MAIN-2021-49
|
refinedweb
| 810 | 58.18 |
0018(0000) 0C 16 BC 04 |         doNEXT  rdword  instr,IP        'read word code instruction
001C(0001) 02 18 FC 81 |                 add     IP,#2 wc        'advance IP to next wordcode (clears the carry too!)
0020(0002) 09 16 7C 2A |                 shr     instr,#9 nr,wz  ' cog or hub?
0024(0003) 0B 00 28 5C | if_z            jmp     instr           'execute the code by directly indexing the first 512 longs in cog
0028(0004) 0F 16 7C 2A |                 shr     instr,#15 nr,wz ' embedded 15-bit literal?
002C(0005) 09 00 54 5C | if_nz           jmp     #PUSH15         ' push this literal without having to do a call/return
0030(0006) 0A 14 FC 5C |                 call    #SAVEIP         ' otherwise this is an address of high-level word code
0034(0007) 0B 18 BC A0 |                 mov     IP,instr        ' so after saving the IP, load it with new address
0038(0008) 00 00 7C 5C |                 jmp     #doNEXT
Sounds very interesting, keep us posted!
Jim
I think the Proptool max'd out because it couldn't handle all the DAT symbols which have been greatly reduced now since I don't need an indirect vector etc so I guess ol' Proptool should work again.
I'm really chuffed to come up with this simple word encoding that still allows code direct address as it makes all the difference plus cog code does not suffer any real penalty. Even just having cog space to implement a 32 level hybrid data stack makes a lot of difference too. This should really fly and still be compact. There won't be many changes that I should need to make to all the extensions so once I sort out the compiled kernel I should be ripping.
how long will it take to bring this to life?
could it be the preferred version for P][
can we create a git repository?
At this rate, within the week, maybe just days assuming I get to spend some time on it which I haven't really been able to yet. When it's all done and baked then maybe we can look at a git repository although unnecessary complexities just get in my way, but anyone is welcome to implement it if they so desire.
P2 already uses 16-bit addresses although they are not encoded and I could mix hubexec into it too. Even though the TF2 opcode is only 16-bits to make it more compact it did have the full 64k available just for code, with the dictionary and data in other areas.
BTW, one of the reasons I came up with this new version is because I need a morale booster, I just see the Propeller chip itself fading further and further into the background of forum chatter and dreams.
You can see the memory it ends up saving even if it looks like it uses twice as much memory as bytecode. For instance, in bytecode if I wanted a value of 1,000 this would take 3 code bytes plus the time to read them as well. With wordcode this is just one word and one read, and faster. Another saving is not having to have the call vector table, as all I need to do to call, for example, BLINK (with the Spin tool) is write @BLINK+s, where s is simply the 16 byte Spin header offset that we have to factor in. But if I didn't want to return I can just as easily say @BLINK+t, where t is s+1, since bit 0 is redundant for word addressing and is used instead to indicate that the IP should not be pushed onto the return stack; it effectively becomes a jump and thus removes an otherwise required EXIT instruction.
Wordcode compiles nicely; have a look at a startup test demo which dumps some RAM and then sits in a loop incrementing and printing a number while blinking two LEDs. Notice how DEMO terminated with a @BLINK+t vs @BLINK+s,EXIT to save two bytes and some time.
The doNEXT wordcode loop is a bit more complicated than it is in bytecode Tachyon, but this same loop handles literals up to $7FFF and calls to wordcode as well. BTW, cog PASM codes which directly jump to that address in the cog only require four doNEXT instructions, which is only one more than bytecode Tachyon.
the new model sounds really promising for some speedup & code shrinkage then ...
You haven't looked close enough, in the testing section perhaps? Can you sync the Tachyon folder to your PC so it is always up-to-date?
V4 is not yet interactive as I'm starting afresh with how input is processed and I may allow for line input as well so corrections can be made while typing.
BTW, there are some sections of code, especially early on, which definitely consume almost twice as much memory. But the more that code is added, the more that savings are seen. There is no real penalty for factoring a similar snippet out and calling it from a from a routines which in the past always required at least another vector. The other saving is being able to jump instead of call which saves on the enter/exit overhead.
For instance, I noticed that I had the sequence SWAP DROP EXIT in a lot of places, as well as DROP EXIT. So I just define that sequence once as its own word, and then I jump to it easily.
I share your sentiments on the prop fading. Once upon a time you couldn't leave the forum for a day without heaps of posts. Now I can go away for a week and not miss much. Rarely are more than a couple of people are logged in.
I find other things to do now, rather than work on the prop, which is a shame.
V4 fibo results, where fibo(6) takes 14.2us and fibo(46) takes 54.2us.
V3 fibo results
Wot! It's 200ns slower
I will do some further tests with less optimizable (real world) code to measure the gains which I expect to see.
V3 Primes = 198.922ms
V4 Primes = 121.58ms
To be fair I replaced the "2 +" with 1+ 1+ which V4 had and the result was Primes = 182.542ms
However 121.58ms vs 182.542ms shows that V4 is 50% faster based on V3 = 547.8 primes/100sec vs V4 = 822.5 primes/100sec
It seems that two Tachyons bound together can travel faster than one Tachyon alone!
V4 embeds constants as wordcode literals so this also makes it faster.
btw, the earlier benchmark times were skewed by 200ns so they were not in fact 200ns slower.
What you call "prop fading," I believe, has been a function of three factors:
2. The introduction and emphasis of C language programming vs. Spin has split Prop enthusiasts into two camps. This has hindered cross-fertilization between the two, since they seem mutually incompatible.
3. The new (now not-so-new) forum software has alienated a lot of forumistas due to issues that remain unfixed and features that did not survive from the older version.
-Phil
I think the richness of the microcomputer / microcontroller market is also a factor. When I go into a MicroCenter store or look on-line, there are so many different Arduino-compatible devices plus ESPxxxx variations with WiFi built-in plus RaspberryPi's and C.H.I.P.s also with WiFi and Bluetooth. The latter are getting much better at doing the sorts of things the Basic Stamp does with development on-chip with a Bluetooth keyboard and either a composite video or HDMI/VGA display.
I'm sure you're right. The micro world has gotten a lot bigger. Perhaps it's a case of myopia, but I'm still a huge fan of the Propeller.
Now, back to Peter's topic and the amazing work he's doing with Forth!
-Phil
Still, all these chips don't have what the Prop has which for actual embedded control work is far more important than playing pico-8 games. But I'm about to squeeze a whole lot more out of what we've got just when I thought I couldn't squeeze no more, because it's all we've really got, so we can either dream or do.
I'm on a R&D project I can't share. It involves control of a pile of inductive loads.
We used the P1 for some early tests, was simple. Complexity went up, and people thought that exceeded the bounds of the Propeller. "Hobby toy"
There is a complex controller in development, GUI, C, etc... so far that has been stalled as a lot of stuff needs to be debugged, written and so forth. I got told all the amazing features, interrupts, timers, peripherals, integrated development tools, libraries...
"Real pro grade rapid development system."
Okie Dokie, well how come it takes so long? I got a bunch of answers all centered on features, tests and complexity.
"Can't you just write a loop, set the bits, etc....?"
"Yes but we need to integrate the PC client side tools, setup interrupts, comms, and, and, and...
Well, one evening a few days ago, I decided to connect my proto board to the controllers. Took an hour to solder up the connects, another 30 minutes for a quick systems check, and about an hour to code up motion in SPIN. This board was used to help characterize the inductive loads. So it was just sitting there along with a little POS netbook.
"Interpreted is too slow"
"Concurrency is hard"
LOL, while they fight with a heirachy of interrupts, missing brackets, API and library oddities, I wrote the few methods I needed and rolled that all up into a nice repeat loop that demonstrated the proof of viability nicely.
Sent a video out and the answer I got back was hilarious!
Basically, it was a bunch of, "when we get done..." Yeah, tons of setup, so the real work is easy.
Well, on the P1, just doing the real work was easy, almost no real setup needed.
That evening moved the project forward about a month. A lot of basic science and testing to characterize the device and it's physics needs to be done no matter how it's controlled on a high level in the end.
They keep building in features anticipating tests, while I just wrote them as needed quickly and easily. I've got the tech doing the same thing.
Now, the tech wants to get onto the science, so I set him up. Ran through my setup, SPIN basics, and after a couple hiccups, he's off and running with the odd programming question I can answer easily.
Over the years here, I have learned a ton! Thanks.
The best part was need for some debug and status output. Took me 10 minutes to merge the test code with one of the serial demos.
"Do you want keyboard, mouse and a video display or serial?"
"On that thing?"
"Yup, will take me an hour, maybe two..."
"How?"
"Concurrency is kind of easy" Damn right it is.
"I'll take serial."
"OK, that's 10 minutes."
So far, SPIN is quick enough for this application. I may need a PASM helper COG, depending on where the science takes things. I've got it done, and showed it off.
"Assembly is scary"
Well, after showing them how to just drop it in and do things the way we do...
"That assembly is stupid easy" Again, damn right it is.
"Won't scale"
Wait, until I show them just how much one of these little chips can do with basically zero support libraries, fancy multitasking, interrupt based kernels... my limit is pins, not any real speed bottleneck.
"Can't do anything in 32K RAM"
Already blew this one out of the water.
We need that P2 done. It's got similar potential, and a lot more basic room to work in.
But, I just wanted to share a positive. This hobby level guy just kicked a lot of arse, and did so with how you all have helped me to think and Parallax made a chip that is lean, mean, and effective.
I can write a few lines, hit F10 and see it all happen. They have to know a ton more and do much more, all of which gets one far away from the problem or task.
And this is just the bog standard stuff. Tachyon can do so much! Wish I got along with Forth better, but I don't.
What I do know is Peter is sharing pure gold here for those who can run with it.
And I also know the P1, and how we think here, and why we do that is very seriously potent stuff.
It's hard to get others to believe or understand. Nothing beats, "oh, you did that in an evening?"
And It's not even my area of expertise. I do this for fun and my own enlightenment too.
Last chat, "I might have to get one of these, it's simple and effective and fast."
Yup. It is all of that.
-Phil
Best part is they were talking about when and if. My quick setup is doing the task nicely.
That's looking like a significant gain.
I like the sound of that too..
When this is tuned and running, what about looking at a P1V V4 Core ?
A few choices there
** Do a COG that uses only the V4 forth opcodes, and see how much smaller that is
** Do a COG with helper opcodes, to see how much faster that can be
Had a laugh on the CHIP forum as they were trying to interface a DHT11 type sensor which they eventually worked out a solution for using SPI, a big buffer, and several I/O lines
EDIT: As for Spin even if I don't love the speed, I really love the simplicity and ease of use of this language. Along with PASM it just works or is easy enough to get working. It got me started with the Prop.
The head space is the worst part about all of it. For a lot of things, it's already detailed. Adding all that other stuff makes it hard to get at the problem.
150 bytes! Nice.
... another idea, is what would this look like, coded to read HyperRAM as XIP ? - allows 2Mx8 Memory, with low pin code.
The 16b opcode could fit well there, as that's just 2 clocks for sequential read, leaving a lot of idle time for the hidden refresh.
What about a P1 module, with two HyperRam ? - one for XIP Code, and the other as a video buffer.
Turns out the V4 wordcode version at 130 code bytes takes less code space which is what I anticipated once I started coding in higher level functions.
@jmg - not sure what you mean but I try to keep within the hardware limits and I'd rather marry an ARM to a Prop than try to mutilate it with memory expansion schemes
btw, ignore some of the generated code I'm massaging the memory map (wordcode needs to start >$01FF)
A P1 also pairs well with smaller MCUs like EFM8LB1, where higher performance ADCs/ DACs are needed.
Parts like the EFM8LB1 are now comparable/lower in price with the equivalent ADC/DACs so you get the MCU for free.
HyperRAM has a good trade off, in pins-vs-bandwidth for memory expansion.
Yes, it adds another chip, but the jump is to go from 32kB to 2MB, which is quite a gain - plus it proves that 32k is less of a hard ceiling.
Otherwise, many would avoid a P1, in order to avoid hitting that limit.
It's such a pity that the Prop doesn't have more I/O pins (or maybe Address/Data bus) which is why I object to memory expansion schemes since they try to turn a microcontroller into a microcomputer but end up cannibalizing and crippling the Prop. P2 was supposed to be a reality many many many moons ago but when it eventually is then all this that we discuss will be moot.
I've added another special opcode for task variables so that each cog may have its own but share common routines that use these task variables. This means that all word codes are only one word long except for a LONG literal which of course requires 32-bit operand and 16-bit word code.
So far so good, says the eternal optimist.
Now the dilemma I face is what to do with the dictionary as I need to mix word aligned code with byte aligned characters. Currently Tachyon stores each record in the dictionary in this format:
That works well enough and the count also helps to search faster and to skip to the next record in ascending memory since the dictionary build down towards code memory which builds up.
Now to code a dictionary entry for V4 wordcode in the Prop tool requires an entry like this: Since I'm lazy I don't bother working out the count byte as Tachyon will fix up the counts on a cold start.
But that looks messy so I could just enter @WORDS+s as a word like this: So that is a lot cleaner but depending upon the byte alignment it might add one extra byte between the atr and the wordcode which has to be factored in when skipping to this field or the previous name.
Now the other way to format this is to bite the bullet and make each dictionary record a fixed length, given that it is rare that Forth names exceed 12 characters since_we_don't_use_long_names as they are a pain to interactively type, plus they take up memory. One advantage of a fixed format is that it lends itself to being accessed more easily in slow memory such as I2C EEPROM. At present the dictionary can take up a lot of precious hub RAM, but I have a scheme which drops these names into 1 of 64 hashed index blocks of EEPROM/SD. However, if I use fixed length records I could just store them as is in EEPROM without any special tricks or hashed index blocks, except perhaps sorting them. Using a binary search it becomes quick and easy to locate a name without having to read a whole block of 384 bytes in.
The fixed record looks like this:
So the main reason for the fixed length record is that the dictionary, or most of it really, belongs somewhere other than hub RAM, but normally that somewhere else is slow. The hashed index block approach uses around 24kB as some blocks are only half full, yet the 16-byte fixed record approach would use less memory overall.
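For what it's worth, the binary search over fixed 16-byte records is straightforward; a C sketch under assumed details (12-byte zero-padded name field with the wordcode in the record tail, and readRecord standing in for whatever EEPROM read primitive is used):

#include <stdint.h>
#include <string.h>

#define REC_SIZE 16
#define NAME_LEN 12  /* assumed: zero-padded name, then attribute + wordcode bytes */

/* Assumed primitive: read one fixed-length record out of EEPROM. */
extern void readRecord(uint32_t index, uint8_t buf[REC_SIZE]);

/* Binary search over 'count' records sorted by name; returns index or -1. */
int32_t findName(const char *name, uint32_t count)
{
    int32_t lo = 0, hi = (int32_t)count - 1;
    uint8_t rec[REC_SIZE];
    while (lo <= hi) {
        int32_t mid = (lo + hi) / 2;
        readRecord((uint32_t)mid, rec);
        int cmp = strncmp(name, (const char *)rec, NAME_LEN);
        if (cmp == 0)
            return mid;   /* hit: the wordcode lives in the record tail */
        if (cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return -1;
}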
Thoughts?
Hi Peter,
- I would completely reject a fixed-length name scheme, although I try to keep names short. I think newcomers would never use Tachyon again.
- I remember that in Commodore 64 BASIC there was a way to shorten longer names, for example "POKE" as P shift O ("pO"); this worked for many commands. I can't remember the details - maybe you had such a box too, so you know what I'm speaking of.
- I would even think about something like a namespace in C++; this would free some names for application usage. This would be cool.
My wife is calling from upstairs "... where is the cremant".
Cheers,
proplem
Personally, I think the interactive aspect of Forth is overrated. I would much rather use a good off-line editor/pre-compiler and just upload all the resulting threaded code at once when I want to try something. With such a system, dictionary entries can be as long as you want them, since the target doesn't have to include the dictionary.
-Phil
|
https://forums.parallax.com/discussion/165490/tachyon-v4-dawn-exploring-new-worlds
|
CC-MAIN-2021-04
|
refinedweb
| 3,344 | 78.69 |
What is currying?
Currying is the fine art of transforming a function with arity n into n functions with arity 1.
This means: Given a function that takes X parameters, generate X functions that take only 1 parameter.
Using the wikipedia example:
- Given x = ƒ(a, b, c) it becomes:
  - h = g(a)
  - i = h(b)
  - x = i(c)
- Or in a single sequence call: x = g(a)(b)(c)
The name comes from Haskell Curry, a famous mathematician who developed a lot of concepts in modern math.
What does it mean in modern programming?
It simply means that you have a way to reduce the complexity of a function call by creating intermediate functions, which in turn return newer functions.
It will be clearer with examples...
A taste of currying
Let's start with a "simple function": it takes 2 numbers and returns the sum of them:
sum = { a: Int, b: Int } => a + b
Let's imagine for a second that we have a curry function that will give back the curried version of our function: a call of curry(sum) will generate curriedSum, which can be invoked like curriedSum(a)(b) instead of the original call sum(a, b).
There's also the possibility of actually setting some of its values and doing PARTIAL APPLICATION, which means setting a value for one (or more) of the parameters.
On our sum example:
val curriedSum = curry(sum) // return value is: { a -> { b -> a + b } }
// Or also expressed as: (Int) -> (Int) -> Int

val add5 = curriedSum(5) // returns: (Int) -> Int = { b -> 5 + b }
val add7 = curriedSum(7) // returns: (Int) -> Int = { b -> 7 + b }

add5(42) // 47
add7(13) // 20
add5(add7(10)) // 22 :: add5(add7(10)) -> add5(17)
curriedSum(17)(7) // 24
Why would I want that?
There are multiple reasons for wanting to curry a function:
- We don't have all the values that will be passed right now; we usually have to create callbacks, proxies, or mechanisms to obtain these values before actually calling a function.
- We need to pass a function as parameter to another function (callbacks) and we already have a function defined to do the work.
- We want to partially apply a function to pass it along to other places while keeping our data contextualized in there. Sounds weird, but imagine you have to pass a callback or a filtering function, and you already have a function that does the job, but with additional flags or parameters.
- We need a simple way to provide a complex API shared between parts at different moments.
Truth is, you maybe don't need it or have other approaches that can do the work as well; still, it is worth the shot to understand how it works in case you need it some day.
Let's see it in action
For this example, let's imagine we have a function that can do a POST to a web service and returns information.
fun <T> postCall(
    domain: String,
    port: Int,
    path: String,
    queryParams: QueryParams,
): T {
    //... Here happens the magic call
}
Each call will be complex:
postCall<MovieResponse>(
    "moviedb.com",
    8090,
    "/movies/scott-pilgrim",
    QueryParams(
        "order" to "asc",
        "type" to Types.JSON,
        "comments" to false
    )
)
//...
postCall<MovieResponse>(
    "moviedb.com",
    8090,
    "/movies/lego-movie",
    QueryParams(
        "order" to "desc",
        "type" to Types.JSON,
        "comments" to true
    )
)
//...
This is error-prone and can be improved:
- Using a builder:
createMovieCall()
    .forMovie("lego-movie")
    .withParams("order" to "desc", "type" to Types.JSON, "comments" to true)
    .call()
- Using a class:
val imdbService = MovieService(
    params = "default",
    domain = Domains.IMDB
)
imdbService.getMovie("scott-pilgrim")
- Other cooler ways that are not as cool as currying.
But with currying we can do something like:
val movieService = curry(postCall)("moviedb.com")(8090)

val scottPilgrimCall = movieService("/movies/scott-pilgrim")
val scottPilgrimWithComments = scottPilgrimCall("comments" to true)
val scottPilgrimNoComments = scottPilgrimCall("comments" to false)

val legoMovieCall = movieService("/movies/lego-movie")
val legoMovieYAML = legoMovieCall("type" to Types.YAML)
val legoMovieXML = legoMovieCall("type" to Types.XML)
val legoMovieJSON = legoMovieCall("type" to Types.JSON)
This approach allows us to create the intermediate calls and keep previous parameters without problems. We can even create a "movie service generator":
fun movieServiceGenerator(movieWeb: String): (String) -> (QueryParams) -> MovieResponse =
    curry(postCall)(movieWeb)(8090) // Assuming both sites use this port

val imdbService = movieServiceGenerator("imdb.com")
val cuevanaService = movieServiceGenerator("cuevana3.me")

// And the calls will be similar:
val jumanjiIMDB = imdbService("/movie/jumanji")(QueryParams())
val jumanjiCuevana = cuevanaService("/92345/jumanji")(QueryParams("server" to "webfree2"))
The syntax is different, but the way we pass data evolves to return functions with partially applied data, so we can build functions with simpler calls.
How to implement it
Depending on what your language allows with functions it can be easy or hard or even unreadable (but easy to use).
For example, in Haskell all functions with multiple parameters are auto-curried, which means that a function:
postcall :: String -> Int -> String -> [(String, String)] -> a
Is the same as:
postcall :: String -> (Int -> (String -> ([(String, String)] -> a)))
So we can do partial application if needed:
imdbservice = postcall "imdb.com" 8090
scott_pilgrim = imdbservice "/movies/scott-pilgrim"
scott_pilgrim_comments = scott_pilgrim [("comments", "true")]
In JS defining a currying function gets interesting:
function curry(func) {
  return function curried(...args) {
    if (args.length >= func.length) {
      return func.apply(this, args)
    } else {
      return function (...args2) {
        return curried.apply(this, args.concat(args2))
      }
    }
  }
}
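A quick check of the hand-rolled version (postcall here is a stand-in stub, not a real HTTP call):

const postcall = (domain, port, path, params) =>
  `POST ${domain}:${port}${path}`

const curried = curry(postcall)
const imdbService = curried("imdb.com")(8090)
console.log(imdbService("/movies/lego-movie")({}))             // "POST imdb.com:8090/movies/lego-movie"
console.log(curried("imdb.com", 8090, "/movies/jumanji", {}))  // mixed arity works too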
But there are a lot of solutions out there, like lodash:
const curried = _.curry(postcall)
const imdbService = curried("imdb.com")(8090)
const legoMovie = imdbService("/movies/lego-movie")({})
Other languages can get complicated, as we need to know the number of arguments for our application. For example, in Kotlin:
fun <A, B, R> curry(f: (A, B) -> R): (A) -> (B) -> R {
    return { a: A -> { b: B -> f(a, b) } }
}

fun <A, B, C, D, R> curry(f: (A, B, C, D) -> R): (A) -> (B) -> (C) -> (D) -> R =
    { a: A -> { b: B -> { c: C -> { d: D -> f(a, b, c, d) } } } }

fun postCall(web: String, port: Int, path: String, params: Map<String, String>): MovieResponse {
    //...
}

val curried = curry(::postCall)
println(curried("imdb")(9090)("/lego")(mapOf("format" to "json")))
// And so on...
Depending on how much your platform (language) allows, it can be messy...
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

@FunctionalInterface
interface TetraFunction<A, B, C, D, R> {
    R apply(A a, B b, C c, D d);
}

class Curry {
    public static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return (a) -> (b) -> f.apply(a, b);
    }

    public static <A, B, C, D, R> Function<A, Function<B, Function<C, Function<D, R>>>> curry(TetraFunction<A, B, C, D, R> f) {
        return (a) -> (b) -> (c) -> (d) -> f.apply(a, b, c, d);
    }
}

// Usage:
class Example {
    static <T> T postCall(String movieWeb, int port, String path, Map<String, String> params) {
        return null; // stub -- the real call would go here
    }

    public static void main(String[] args) {
        Function<String, Function<Integer, Function<String, Function<Map<String, String>, MovieResponse>>>> curriedPostCall = Curry.curry(Example::postCall);
        MovieResponse movieResponse = curriedPostCall.apply("imdb.com").apply(9890).apply("/lego-movie").apply(new HashMap());
    }
}
The concept and application stay almost the same; we only adapt it to each language/platform.
And I did not show a complete example this time, as we already covered a lot; but the ideal scenario for using currying is when you have functions taking functions and returning other functions.
Conclusions
- Functional Programming creates a lot of intermediate data
- Currying is cool if you need to simplify your function calls
- Currying is cool if you need to partially apply your functions
- Currying is not a silver bullet but can help you create a simpler API
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/sierisimo/the-beauty-of-currying-2poh
|
CC-MAIN-2022-40
|
refinedweb
| 1,243 | 50.57 |
On Chromium, we have to transcode a web font before handing it over to rendering libraries such as FreeType2 and t2embed.dll, for security reasons.
Chromium bug:
Created attachment 42480 [details]
transcode_webfonts_by_ots_v1
[+cc: [email protected], [email protected], [email protected]]
David, Adam,
Could you please review this change?
This change depends on a patch on Chromium side which is under review (). So please do not cq+ for a while even if it's r+.
How is this chromium only? Would this not be useful for other ports?
Yes, it should be useful for other ports. However, since the transcoder is kind of experimental, I would like to test it with Chromium's dev channel first.
(In reply to comment #3)
> How is this chromium only? Would this not be useful for other ports?
I haven't been following this since the initial exploratory patch, but as I recall the transcoding involves stripping important font information like hinting tables. So even post-experiment this code may be a regression on other ports that already offer fonts.
There's no reason why hinting code cannot be sanitised and passed through, but it's not in V1. The current thinking is that it's a lot of code and @font-face is mostly used for large, heading text rather than for the main body.
Actually the transcoder now supports font hinting in glyf and CFF tables, but it still does not support glyph substitution tables. As a result of the lack of the GSUB support, web browsers that use the transcoder can't handle web fonts for some complex scripts.
> this code may be a regression on other ports
So it is correct at present.
Comment on attachment 42480 [details]
transcode_webfonts_by_ots_v1
Hum... Maybe eventually we would want to wrap this in some sort of platform abstraction and have the OpenTypeSanitizer be a back-end only used by chromium at the moment. All others would just have a pass-through filter for now?
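(For illustration, a hypothetical shape of such a pass-through wrapper - not the actual patch; transcodeWithOTS is an assumed back-end hook:)

class OpenTypeSanitizer {
public:
    explicit OpenTypeSanitizer(SharedBuffer* buffer) : m_buffer(buffer) { }

    PassRefPtr<SharedBuffer> sanitize()
    {
#if PLATFORM(CHROMIUM)
        return transcodeWithOTS(m_buffer); // assumed hook into the OTS back-end
#else
        return m_buffer; // pass-through for ports without OTS
#endif
    }

private:
    SharedBuffer* m_buffer;
};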
Created attachment 42542 [details]
transcode_webfonts_by_ots_v2
Created attachment 42543 [details]
transcode_webfonts_by_ots_v3
I've created a new patch that introduces platform/graphics/opentype/OpenTypeSanitiser.{cpp,h} class. Can you please take another look?
(and please ignore the v2 patch. I forgot to add ChangeLog to v2.)
Comment on attachment 42543 [details]
transcode_webfonts_by_ots_v3
LGTM. (I am not a WebKit reviewer. You need a real review also.)
> + handle web fonts in a secure manner
This ChangeLog entry should be more descriptive:
Add support for OpenType Sanitiser (OTS). This is experimental code that is Chromium only for the moment. It parses OpenType files (from @font-face) and attempts to validate and sanitise them. We hope this reduces the attack surface of the system font libraries.
> + // This is the largest web font size which we'll try to transcode.
> + static const size_t maxWebFontSize = 30 * 1024 * 1024; // 30 MB
This is pretty huge, but looking around it does seem that some fonts are nearly this large!
Created attachment 42610 [details]
transcode_webfonts_by_ots_v4
Created attachment 42611 [details]
transcode_webfonts_by_ots_v5
Uploaded v5 patch (please ignore v4. it's accidentially uploaded, sorry.).
- Revised WebCore/ChangeLog as agl suggested.
- Added a patch for WebKit/chromium/DEPS following Yaar's comment in .
The code is not changed.
Comment on attachment 42611 [details]
transcode_webfonts_by_ots_v5
LGTM.
(I am not a WebKit reviewer. You also need a real review.)
Comment on attachment 42611 [details]
transcode_webfonts_by_ots_v5
Minor quibble, but you should use American English spelling rather than British English spelling, e.g., we don't call Colors Colours. :)
This means changing sanitise to sanitize, and OpenTypeSanitiser to OpenTypeSanitizer.
Can you explain more the motivation of this patch? Have you run into specific attacks/exploits? How do other browsers like Firefox fare?
.
Comment on attachment 42611 [details]
transcode_webfonts_by_ots_v5
This is a feature, and it should be queried off an ENABLE(), not a PLATFORM(). There is nothing platform-specific about this.
Created attachment 42691 [details]
transcode_webfonts_by_ots_v6
Thanks for the review! Uploaded v6 patch. Changes are as follows:
- Fixed class and method names: s to z.
- Removed #if PLATFORM()s. Use #if ENABLE(OPENTYPE_SANITIZER) instead.
(In reply to comment #18)
> .
Adam, please email webkit-security with the details if you don't feel comfortable discussing them here. Thanks.
One small nit: if you're basically reconstructing the font, you really should remove the DSIG table, as it will no longer be valid for the reconstructed font.
(In reply to comment #23)
Yes, the sanitizer always drops the DSIG table from a reconstructed font. It recalculates the checksums for each table as well. Thanks.
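(For reference: an OpenType table checksum is simply the sum, modulo 2^32, of the table data read as big-endian 32-bit words, with the trailing partial word zero-padded. Below is a minimal C++ sketch of that calculation per the spec; it is an illustration, not the code OTS actually uses.)

    #include <cstdint>
    #include <cstddef>

    // Checksum of one OpenType table: sum of the data interpreted as
    // big-endian uint32 words, with the last partial word zero-padded.
    uint32_t calcTableChecksum(const uint8_t* data, size_t length)
    {
        uint32_t sum = 0;
        size_t i = 0;
        for (; i + 4 <= length; i += 4) {
            sum += (uint32_t(data[i]) << 24) | (uint32_t(data[i + 1]) << 16)
                 | (uint32_t(data[i + 2]) << 8) | uint32_t(data[i + 3]);
        }
        uint32_t last = 0;  // zero-padded trailing word, if any
        for (size_t j = 0; i + j < length; ++j)
            last |= uint32_t(data[i + j]) << (24 - 8 * j);
        return sum + last;
    }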
(In reply to comment #22)
I believe there was a similar discussion about half a year ago. Could you please check bug 25245, which is marked as security-sensitive?
Comment on attachment 42691 [details]
transcode_webfonts_by_ots_v6
> diff --git a/WebCore/ChangeLog b/WebCore/ChangeLog
> +2009-11-07 Yusuke Sato <[email protected]>
> +
> + Reviewed by NOBODY (OOPS!).
> +
> + handle web fonts in a secure manner
> +
> +
> + Add support for OpenType sanitiser (OTS). This is experimental code that is
> + Chromium only for the moment.
It isn't Chromium-only anymore.
> diff --git a/WebCore/platform/graphics/chromium/FontCustomPlatformData.cpp b/WebCore/platform/graphics/chromium/FontCustomPlatformData.cpp
> #include "FontPlatformData.h"
> #include "NotImplemented.h"
> +#if ENABLE(OPENTYPE_SANITIZER)
> +#include "OpenTypeSanitizer.h"
> +#endif
The "if enable" should be in the header, and then the include should be done with no "if enable".
> #include "SharedBuffer.h"
>
> #if PLATFORM(WIN_OS)
> @@ -245,6 +248,14 @@ FontCustomPlatformData* createFontCustomPlatformData(SharedBuffer* buffer)
> {
> ASSERT_ARG(buffer, buffer);
>
> diff --git a/WebCore/platform/graphics/mac/FontCustomPlatformData.cpp b/WebCore/platform/graphics/mac/FontCustomPlatformData.cpp
> diff --git a/WebCore/platform/graphics/opentype/OpenTypeSanitizer.cpp b/WebCore/platform/graphics/opentype/OpenTypeSanitizer.cpp
> @@ -0,0 +1,80 @@
> +/*
> + * Copyright (c) 2009, Google Inc. All rights reserved.
Use a capital C.
No comma after the year.
Add "#if ENABLE(OPENTYPE_SANITIZER)" right before "#include "config.h"" in the file. Put the #endif at the end of the file: "#endif // ENABLE(OPENTYPE_SANITIZER)".
> +#include "config.h"
> +namespace WebCore {
> +
> +PassRefPtr<SharedBuffer> OpenTypeSanitizer::sanitize()
> +{
> + if (!m_buffer)
> + return 0;
> +
> +#if PLATFORM(CHROMIUM)
I know you did this if PLATFORM at Eric's suggestion, but I think it turned out in a form that no one would use.
So I would just get rid of *all* "if PLATFORM(CHROMIUM)" in this file. If others want to use it in the future, it can be adjusted then to meet their needs.
> + // This is the largest web font size which we'll try to transcode.
> + static const size_t maxWebFontSize = 30 * 1024 * 1024; // 30 MB
One space before end of line comments.
> + if (m_buffer->size() > maxWebFontSize)
> + return 0;
> +
> + // A transcoded font is usually smaller than an original font.
> + // However, it can be slightly bigger than the original one due to
> + // name table replacement and/or padding for glyf table.
I've typically seen glyph instead of glyf but I did see glyf in one place on the web.
> + static const size_t padLen = 20 * 1024; // 20kB
One space before end of line comments.
> +
> + unsigned char* transcodeRawBuffer = new unsigned char[m_buffer->size() + padLen];
Use OwnArrayPtr.
> + ots::MemoryStream output(transcodeRawBuffer, m_buffer->size() + padLen);
> + if (!ots::Process(&output, (const uint8_t *) m_buffer->data(), m_buffer->size())) {
Use a C++-style cast (reinterpret_cast) instead of "(const uint8_t *)".
> + delete[] transcodeRawBuffer;
This goes away when you use OwnArrayPtr (and then the "if" body will be one line, so the {} on the if clause will go away).
> + return 0;
> + }
> + const size_t transcodeLen = output.Tell();
> + return SharedBuffer::create(transcodeRawBuffer, transcodeLen);
I think this leaks but once you switch to OwnArrayPtr, it won't.
> diff --git a/WebCore/platform/graphics/opentype/OpenTypeSanitizer.h b/WebCore/platform/graphics/opentype/OpenTypeSanitizer.h
> + * Copyright (c) 2009, Google Inc. All rights reserved.
Use a capital C.
No comma after the year.
> +#ifndef OpenTypeSanitizer_h
> +#define OpenTypeSanitizer_h
Add the if enable here.
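(To make the requested changes concrete, here is a rough sketch of what the sanitize() body could look like with the review comments applied: OwnArrayPtr, a C++-style cast, tightened comments. This is an illustration of the reviewer's points, not the exact code that landed.)

    PassRefPtr<SharedBuffer> OpenTypeSanitizer::sanitize()
    {
        if (!m_buffer)
            return 0;

        // This is the largest web font size which we'll try to transcode.
        static const size_t maxWebFontSize = 30 * 1024 * 1024; // 30 MB
        if (m_buffer->size() > maxWebFontSize)
            return 0;

        // A transcoded font is usually smaller than the original, but it can
        // grow slightly due to name table replacement and glyf table padding.
        static const size_t padLen = 20 * 1024; // 20 kB

        // OwnArrayPtr releases the buffer on every return path, fixing both
        // the early-return leak and the missing delete[] on success.
        OwnArrayPtr<unsigned char> transcodeRawBuffer(new unsigned char[m_buffer->size() + padLen]);
        ots::MemoryStream output(transcodeRawBuffer.get(), m_buffer->size() + padLen);
        if (!ots::Process(&output, reinterpret_cast<const uint8_t*>(m_buffer->data()), m_buffer->size()))
            return 0;

        const size_t transcodeLen = output.Tell();
        return SharedBuffer::create(transcodeRawBuffer.get(), transcodeLen);
    }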
Created attachment 42738 [details]
transcode_webfonts_by_ots_v7
Thanks for the review. Uploaded v7 patch which addresses all the comments except this one:
> > + // name table replacement and/or padding for glyf table.
>
> I've typically seen glyph instead of glyf but I did see glyf in one place on
> the web.
OpenType fonts can have two glyph tables, "CFF" and "glyf", and the comment refers to the latter.
Please let me leave the comment as is to make it less ambiguous.
Thanks,
Yusuke
Comment on attachment 42738 [details]
transcode_webfonts_by_ots_v7
> diff --git a/WebCore/platform/graphics/chromium/FontCustomPlatformData.cpp b/WebCore/platform/graphics/chromium/FontCustomPlatformData.cpp
Why did the include move to the header file?
Just put it here as you did in your previous patch.
> diff --git a/WebCore/platform/graphics/chromium/FontCustomPlatformData.h b/WebCore/platform/graphics/chromium/FontCustomPlatformData.h
> #include "FontRenderingMode.h"
> #include <wtf/Noncopyable.h>
>
> +#if ENABLE(OPENTYPE_SANITIZER)
Now that you have the ENABLE guards in the header file, you don't need them around the include.
> +#include "OpenTypeSanitizer.h"
I don't understand why this include is here (instead of being in the cpp file as in the previous patch).
> diff --git a/WebCore/platform/graphics/mac/FontCustomPlatformData.cpp b/WebCore/platform/graphics/mac/FontCustomPlatformData.cpp
> diff --git a/WebCore/platform/graphics/mac/FontCustomPlatformData.h b/WebCore/platform/graphics/mac/FontCustomPlatformData.h
Same comments about the #include file for these two files as well.
> diff --git a/WebCore/platform/graphics/opentype/OpenTypeSanitizer.cpp b/WebCore/platform/graphics/opentype/OpenTypeSanitizer.cpp
> +
> +#include "OwnArrayPtr.h"
Include wtf files like this:
#include <wtf/OwnArrayPtr.h>
and put it after the other includes.
> +#include "SharedBuffer.h"
> +#include "opentype-sanitiser.h"
> +#include "ots-memory-stream.h"
> + if (!ots::Process(&output, reinterpret_cast<const uint8_t *>(m_buffer->data()), m_buffer->size()))
No space after the uint8_t: "reinterpret_cast<const uint8_t*>(".
Created attachment 42788 [details]
transcode_webfonts_by_ots_v8
Uploaded v8 patch.
(In reply to comment #29)
> Why did the the include move to the header file?
As I told you offline, I misunderstood your review comment, sorry.
Created attachment 42807 [details]
transcode_webfonts_by_ots_v9
Created attachment 42808 [details]
transcode_webfonts_by_ots_v10
WebCore.xcodeproj was missing from the previous patch. Added the file.
Other files are not changed.
David, sorry for bothering you.
(In reply to comment #33)
> Created an attachment (id=42808) [details]
> transcode_webfonts_by_ots_v10
Comment on attachment 42808 [details]
transcode_webfonts_by_ots_v10
Looks good.
It would be good to check other build files (for example other files in WebCore that list HTMLDataListElement.h ) and see if you need to change them.
I would rather this not get committed until we have more time to discuss it. If it is in fact a good idea to have this sanitizer, then I believe a copy should live in the WebKit tree (just as image decoders live in the tree).
That said, I am not sure it is a good idea. What makes one parser (the sanitizer) less prone to security bugs than the actual font parser? Won't this increase the attack surface for a certain class of bug?
Let's not push this through until there has been more discussion on this.
> What makes one parser (the sanitizer) less prone to security bugs than the actual font parser?
The reasons behind using a sanitiser:
* Should bugs in popular parsers be found or known, we can render them irrelevant with the sanitiser. Since we don't control the system font parser, the time to patch may be unbounded and it's almost certainly a lot longer than the patch time of Chrome. For Safari on OS X this isn't an issue, since Apple controls both. But for any WebKit browser on Windows, this is a concern.
* The system font parser may not be open source. In this case, security folks can review the sanitiser, but not the system font parser. The sanitiser is also a lot smaller than a full parser/renderer, thus it's easier to review and to see where attacker-controlled values are going.
So, you're correct that adding the sanitiser introduces the possibility of exploiting a bug in the sanitiser itself. However, given the two points above, I believe it's a worthwhile tradeoff.
> If it is in fact a good idea to have this sanitizer, then I believe a copy
> should live in the webkit tree (just as image decoders live in the tree).
At this point, I believe that's premature. But, if people feel strongly, I will concede as I don't feel that strongly.
Comment on attachment 42808 [details]
transcode_webfonts_by_ots_v10
The patch still looks fine but I'll switch this back to r? pending Sam's request being answered so that it isn't accidentally committed.
I'm curious why OpenType layout (e.g. GPOS/GSUB) and AAT (e.g. morx) tables are omitted from the sanitizer. My experience fixing font bugs in Firefox makes me think that these are actually more susceptible to attack than many of the base-level TrueType tables, since the complexity of these tables easily hides underlying bugs.
One other thing to note here is that WebKit code currently uses the t2embed library for loading TTF fonts. There have been known problems with this library in the past:
KB 961371 Vulnerabilities in the Embedded OpenType Font Engine could allow remote code execution
If you're implementing a sanitizer, it seems like you really should be skipping calls to t2embed and instead using the low-level font loading APIs, as is done for CFF fonts currently.
Yes, the OTS library currently does not support GPOS/GSUB/morx tables. However, "does not support" means that OTS does not parse these tables and does not put them in a reconstructed font. As a result, attackers are not able to abuse these tables.
Though we might add parsers for these tables in the future if needed, it's unlikely for the first release.
(In reply to comment #40)
> Yes, the OTS library currently does not support GPOS/GSUB/morx tables. However,
> "does not support" means that OTS does not parse these tables and does not put
> them in a reconstructed font. As a result, attackers are not able to abuse
> these tables.
This means that fonts for any language that requires shaping (Arabic, Hindi, etc.) will effectively be neutered by the sanitize process. Also looks like this effectively disables kerning, a recently added feature:
Yes, the v1 sanitiser will kill complex text and kerning. Probably other things too.
@Sam - Any update? was made public today, so hopefully some of the reasoning behind this work is clearer.
> This change depends on a patch on Chromium side which is under review
> (). So please do not cq+ for a while even
> if it's r+.
The Chromium patch has been landed. Now the WebKit change is cq+ safe.
.
Anyway, we're tracking them at and
(In reply to comment #45)
> .
Clarification: When I wrote 'not an issue, yet', I meant 'not an issue, yet for Chromium or other webkit ports'.
Because Firefox already supports kerning, for OTS to be used by Firefox soon, kern and other related tables would have to be sanitized and passed through. Contributions are welcome :-) ( )
For complex scripts, GPOS, GSUB and other tables must be sanitized even for WebKit.
Comment on attachment 42808 [details]
transcode_webfonts_by_ots_v10
54 // name table replacement and/or padding for glyf table.
"glyph" not "glyf"
36 #include "opentype-sanitiser.h"
and
9 and attempts to validate and sanitise them. We hope this reduces the attack surface
sanitizer instead of sanitiser.
I just read through the entire patch for the first time and it otherwise looks OK, those typos can be corrected on landing, but this cannot be cq+'d as is.
As far as I can tell this change is non-harmful to WebKit and helps Chromium feel more secure. I am certain there are real, exploitable bugs in system font-parsers on the various OSes WebKit supports.
Adding a sanitization step allows WebKit to trade any known system font parser bugs for possible WebKit or OpenTypeSanitizer bugs which can be contained in a sandbox instead of being exploit-your-machine bugs. To the best of my knowledge, WebKit has historically done the same for CG and Skia graphics libraries. I see this as a similar approach.
I've not spoken with any of the other Chromium folks in person about this, but from a WebKit perspective, this seems like a good change to make.
I think other ports are going to want to adopt this type of sanitization. Whether that means we eventually need to move this "OTS" code into WebKit or not, I'm not sure. We don't include libxslt in WebKit, and I see this as similar.
Obviously we should not commit this with objections remaining, but I'll mark this with r+ representing my support of this patch.
> "glyph" not "glyf"
glyf isn't a typo: it's the name of the table:
Created attachment 44018 [details]
transcode_webfonts_by_ots_v11
Created attachment 44019 [details]
transcode_webfonts_by_ots_v12
(In reply to comment #50)
> Created an attachment (id=44019) [details]
> transcode_webfonts_by_ots_v12
Attached a new patch (v12) which can be applied to the latest tree.
Replaced sanitiser with sanitizer as well. Thanks, Eric.
Comment on attachment 44019 [details]
transcode_webfonts_by_ots_v12
Now that I understand that "glyf" is a special name, I guess I would have provided a URL in the comment like AGL did in the comment above, but that's really not a big issue.
I still think this is a good change to make. Let's leave this for at least a day for Sam to make any final comments on before committing.
Comment on attachment 44019 [details]
transcode_webfonts_by_ots_v12
Not having heard from Sam, I'm going to commit this. We can always roll it out if there are further objections.
Comment on attachment 44019 [details]
transcode_webfonts_by_ots_v12
patching file WebCore/platform/graphics/chromium/FontCustomPlatformData.cpp
patching file WebCore/platform/graphics/mac/FontCustomPlatformData.cpp
patching file WebCore/platform/graphics/opentype/OpenTypeSanitizer.cpp
patching file WebCore/platform/graphics/opentype/OpenTypeSanitizer.h
patching file WebKit/chromium/ChangeLog
Hunk #1 succeeded at 1 with fuzz 3.
patching file WebKit/chromium/DEPS
patching file WebKit/chromium/features.gypi
Created attachment 44200 [details]
transcode_webfonts_by_ots_v13
(In reply to comment #54)
> (From update of attachment 44019 [details])
>
Hmm...
Uploaded v13 which should resolve the project.pbxproj conflict.
Comment on attachment 44200 [details]
transcode_webfonts_by_ots_v13
Exiting early after 1 failures. 9385 tests run.
275.25s total testing time
9384 test cases (99%) succeeded
1 test case (<1%) had incorrect layout
6 test cases (<1%) had stderr output
(In reply to comment #57)
> (From update of attachment 44200 [details])
>
I've run run-webkit-tests locally again, and confirmed that all test cases succeed:
yusukes-macpro:WebKit yusukes$ ./WebKitTools/Scripts/run-webkit-tests
...
all 11703 test cases succeeded
Changing cq- to cq?. Thanks.
Comment on attachment 44200 [details]
transcode_webfonts_by_ots_v13
Ok. Let's try again.
Comment on attachment 44200 [details]
transcode_webfonts_by_ots_v13
Clearing flags on attachment: 44200
Committed r51623: <>
All reviewed patches have been landed. Closing bug.
I believe the bogus inspector failure was just bug 30098.
Section 8.1
Exceptions, try, and catch
GETTING A PROGRAM TO WORK UNDER IDEAL circumstances is usually a lot easier than making the program robust. A robust program is one that can survive unusual or "exceptional" circumstances without crashing. For example, a program that does a calculation that involves taking a square root will crash if it tries to take the square root of a negative number. A robust program must anticipate the possibility of a negative number and guard against it. This could be done with an if statement:

   if (disc >= 0) {
      r = Math.sqrt(disc);   // Since disc >= 0, this must be OK
   }
   else {
      ...  // Do something to handle the case where disc < 0
   }
We would say that the statement "r = Math.sqrt(disc);" has the precondition that disc >= 0. A precondition is a condition that must be true at a certain point in the execution of a program, if that program is to continue without error and give a correct result. One approach to writing robust programs is to rigorously apply the rule, "At each point in the program identify any preconditions and make sure that they are true" -- either by using if statements to check whether they are true, or by verifying that the required preconditions are consequences of what the program has already done. An example of the latter case would be the sequence of statements

   x = Math.abs(x);    // At this point, we know that x >= 0, since the
                       //    absolute value of any number is defined to be >= 0
   y = Math.sqrt(x);   // Since x >= 0, this must be OK
There are some problems with this approach. It is difficult and sometimes impossible to anticipate all the possible things that might go wrong. Furthermore, trying to anticipate all the possible problems can turn what would otherwise be a straightforward program into a messy tangle of if statements.
Java (like its cousin, C++) provides a neater, more structured alternative method for dealing with possible errors that can occur while a program is running. The method is referred to as exception-handling. The word "exception" is meant to be more general than "error." It includes any circumstance that arises as the program is executed which is meant to be treated as an exception to the normal flow of control of the program. An exception might be an error, or it might just be a special case that you would rather not have clutter up your elegant algorithm.
When an exception occurs during the execution of a program, we say that the exception is thrown. When this happens, the normal flow of the program is thrown off-track, and the program is in danger of crashing. However, the crash can be avoided if the exception is caught and handled in some way. An exception can be thrown in one part of a program and caught in a completely different part. An exception that is not caught will generally cause the program to crash.
By the way, since Java programs are executed by a Java interpreter, having a program crash simply means that it terminates abnormally and prematurely. It doesn't mean that the Java interpreter will crash. In effect, the interpreter catches any exceptions that are not caught by the program. The interpreter responds by terminating the program. In many other languages, a crashed program will often crash the entire system and freeze the computer until it is restarted. With Java, such system crashes should be impossible -- which means that when they happen, you have the satisfaction of blaming the system rather than your own program.
When an exception occurs, it is actually an object that is thrown. This object can carry information (in its instance variables) from the point where the exception occurred to the point where it is caught and handled. This information typically includes an error message describing what happened to cause the exception, but it could also include other data. The object thrown by an exception must be an instance of the class Throwable or of one of its subclasses. In general, each different type of exception is represented by its own subclass of Throwable. Throwable has two direct subclasses, Error and Exception. These two subclasses in turn have many other predefined subclasses. In addition, a programmer can create new exception classes to represent new types of exceptions.
Most of the subclasses of the class Error represent serious errors within the Java virtual machine that should ordinarily cause program termination because there is no reasonable way to handle them. You should not try to catch and handle such errors. An example is the ClassFormatError, which occurs when the Java virtual machine finds some kind of illegal data in a file that is supposed to contain a compiled Java class. If that class was supposed to be part of the program, then there is really no way for the program to proceed.
Subclasses of Exception represent exceptions that are meant to be caught. In many cases, these are exceptions that might naturally be called "errors," but they are errors in the program, or in input data, that a programmer can anticipate and possibly respond to in some reasonable way. (You have to avoid the temptation of saying, "Well, I'll just put a thing here to catch all the errors that might occur, so my program won't crash." If you don't have a reasonable way to respond to the error, it's usually best just to terminate the program, because trying to go on will probably only lead to worse things down the road -- in the worst case, a program that gives an incorrect answer without giving you any indication that the answer might be wrong!)
The class Exception has its own subclass, RuntimeException. This class groups together many common exceptions such as ArithmeticException, which occurs for example when there is an attempt to take the square root of a negative number, and NullPointerException, which occurs when there is an attempt to use a null reference in a context when an actual object reference is required. RuntimeExceptions and Errors share the property that a program can simply ignore the possibility that they might occur. ("Ignoring" here means that you are content to let your program crash if the exception occurs.) For example, a program does this every time it uses Math.sqrt() without making arrangements to catch a possible ArithmeticException. For all other exception classes besides Error, RuntimeException, and their subclasses, exception-handling is "mandatory" in a sense that I'll discuss below.
The following diagram is a class hierarchy showing the class Throwable and just a few of its subclasses. Classes that require mandatory exception-handling are shown in red.
To handle exceptions in a Java program, you need a try statement. The idea is that you tell the computer to "try" to execute some commands. If it succeeds, all well and good. But if an exception is thrown during the execution of those commands, you can catch the exception and handle it. For example,

   try {
      d = Math.sqrt(b*b - 4*a*c);
      r1 = (-b + d) / (2*a);
      r2 = (-b - d) / (2*a);
      console.putln("The roots are " + r1 + " and " + r2);
   }
   catch ( ArithmeticException e ) {
      console.putln("There are no real roots.");
   }
The computer tries to execute the block of statements following the word "try". If no exception occurs during the execution of this block, then the "catch" part of the statement is simply ignored. However, if an ArithmeticException occurs, then the computer jumps immediately to the block of statements labeled "catch (ArithmeticException e)". This block of statements is said to be an exception handler for ArithmeticExceptions. By handling the exception in this way, you prevent it from crashing the program.
You might notice that there are some other possible sources of error in this try statement. For example, if the value of the variable a is zero, you would probably expect the division by zero to produce an error. The reality is a bit surprising: If the numbers that are being divided are of type int, then division by zero will indeed throw an ArithmeticException. However, no arithmetic operations with floating-point numbers will ever produce an exception. Instead, the double type includes a special value called not-a-number to represent the result of an illegal operation. When this value is printed out, it is written as "NaN" -- which is hardly what you would like to see in the output!
Another possible error in this example is even more subtle: If the value of the variable console is null, then a NullPointerException will be thrown when the console is referenced in the last line of the try block. You could catch such an exception by adding another catch clause to the try statement:

   try {
      d = Math.sqrt(b*b - 4*a*c);
      r1 = (-b + d) / (2*a);
      r2 = (-b - d) / (2*a);
      console.putln("The roots are " + r1 + " and " + r2);
   }
   catch ( ArithmeticException e ) {
      console.putln("There are no real roots.");
   }
   catch ( NullPointerException e ) {
      System.out.println("Programming error! " + e.getMessage());
   }
I haven't tried to use the console in the handler for NullPointerExceptions, because it's likely that the value of console is itself the problem. In fact, it would almost surely be better in this case just to let the program crash! This is a case where careful programming is better than exception handling: Just be sure that your program assigns a non-null value to console before it is used. However, this example does show how multiple catch clauses can be used with one try block. This example also shows what that little "e" is doing in the catch clauses. The e is actually a variable name. (You can use any name you like.) Recall that when an exception occurs, it is actually an object that is thrown. Before executing a catch clause, the computer sets this variable to refer to the exception object that is being caught. This object contains information about the exception. In particular, every exception object includes an error message, which can be retrieved using the object's getMessage() method, as is done in the above example.
The example I've given here is not particularly realistic. You are more likely to use an if statement than to use exception-handling to guard against taking the square root of a negative number. You would certainly resent it if the designers of Java forced you to set up a try...catch statement every time you wanted to take a square root. This is why handling of potential RuntimeExceptions is not mandatory. There are just too many things that might go wrong! (This also shows that exception-handling does not solve the problem of program robustness. It just gives you a tool that will in many cases let you approach the problem in a more organized way.)
The syntax of a try statement is a little more complicated than I've indicated so far. The syntax can be described as

   try
      statement
   optional-catch-clauses
   optional-finally-clause
where, as usual, the statement can be a block of statements enclosed between { and }. The try statement can include zero or more catch clauses and, optionally, a finally clause. (The statement must include either a finally clause or at least one catch clause.) The syntax for a catch clause is

   catch ( exception-class-name variable-name )
      statement
and the syntax for a finally clause is

   finally
      statement
The semantics of the finally clause is that the statement or block of statements in the finally clause is guaranteed to be executed as the last step in the execution of the try statement, whether or not any exception occurs and whether or not any exception that does occur is caught and handled. The finally clause is meant for doing essential cleanup that under no circumstances should be omitted. You'll see an example of this later in the chapter.
There are times when it makes sense for a program to deliberately throw an exception. This is the case when the program discovers some sort of exceptional or error condition, but there is no reasonable way to handle the error at the point where the problem is discovered. The program can throw an exception in the hope that some other part of the program will catch and handle the exception.
To throw an exception, use a throw statement. The syntax of the throw statement is

   throw exception-object ;
The exception-object must be an object belonging to one of the subclasses of Throwable. Usually, it will in fact belong to one of the subclasses of Exception. In most cases, it will be a newly constructed object created with the new operator. For example:

   throw new ArithmeticException("Division by zero");
The parameter in the constructor becomes the error message in the exception object.
When an exception is thrown during the execution of a subroutine and the exception is not handled in the same subroutine, then the subroutine is terminated (after the execution of any pending finally clauses). Then the routine that called that subroutine gets a chance to handle that exception. If it doesn't do so, then it also is terminated and the routine that called it gets the next shot at the exception. The exception will crash the program only if it passes up through the entire chain of subroutine calls without being handled.
A subroutine that can throw an exception can announce this fact by adding the phrase "throws exception-class-name" to the specification of the routine. For example:

   static double root(double A, double B, double C) throws ArithmeticException {
         // Returns the larger of the two roots of
         // the quadratic equation A*x*x + B*x + C = 0.
      double d = Math.sqrt(B*B - 4*A*C);  // might throw an exception!
      if (A == 0)
         throw new ArithmeticException("Division by zero.");
      return (-B + d) / (2*A);
   }
In this case, declaring that root() can throw an ArithmeticException is just a courtesy to potential users of this routine. This is because handling of ArithmeticExceptions is not mandatory. A routine can throw an ArithmeticException without announcing the possibility. And the user of such a routine is free either to catch or ignore such an exception.
For those exception classes that require mandatory handling, the situation is different. If a routine can throw such an exception, that fact must be announced in a throws clause in the routine definition. Failing to do so is a syntax error that will be reported by the compiler.
Suppose you call a routine that can possibly throw an exception of a type that requires mandatory handling. In that case, you must handle the exception in one of two ways. You can call the routine inside a try statement, and use a catch clause to catch the exception if it occurs. If you don't do this, then, since calling the offending routine might implicitly throw the same exception in your own routine, you have to add an appropriate throws clause to your own routine. If you don't handle the exception one way or another, it will be considered a syntax error, and the compiler will not accept your program. Exception-handling is mandatory in this sense for any exception class that is not a subclass of either Error or RuntimeException.
Among the exceptions that require mandatory handling are several that can occur when using Java's input/output routines. This means that you can't even use these routines unless you understand something about exception-handling. The rest of this chapter deals with input/output and uses exception-handling extensively.
You can create a series of Web pages, called a code comment Web report, that allow you to browse the code structure within files in your current project or solution, such as objects and interfaces defined in a project, and members. Using special code comment syntax, you can also include notes or other information. By default, code comment Web pages are stored in a directory called CodeCommentReport, and this directory is placed in the directory containing the solution file.
Note: Currently, only C# supports the code comment syntax required for code comment Web reports. Additional languages might also support code comment syntax at a later date.
The home page for the code comment Web report lists summary information for the solution. This page is created regardless of whether or not you choose to build a Web report for a solution or just selected projects. Selecting a project from the home page takes you to a summary page for the project and displays a navigation pane. You can then browse the code structure for the project by selecting links from either the project summary page or from the navigation pane. The navigation pane lists all the namespaces within the project. These namespaces include classes, structures, interfaces, enumerators, and delegates.
See also: Adding Comments in C# Code | Viewing Code Structure with Comments | Build Comment Web Pages Dialog Box | Tags for Documentation Comments | Creating Code Reports
AS3, Dictionary & Weak Method Closures
This is going to be a technical post so those of you not of the code persuasion look away now..
Okay great, now those guys have gone I can get down to it.
Some of my recent work on the SWFt project has revolved around the use of Robert Penner's AS3Signals. If you don't know what Signals are, I strongly recommend that you check out Robert's blog for more info. In brief, they are an alternative to the Events system found in Flash; based on the Signal/Slot pattern of Qt and C#, they are much faster and more elegant (my opinion) than native events.
I have been trying to incorporate Signals in SWFt for both the elegance and performance gains that they bring; however, there is an issue that was brought to my attention by Shaun Smith on the mailing list. The issue is that my current use of them will cause memory leaks.
I realised that this would also apply to the work I had been doing using the RobotLegs and Signals libraries. RobotLegs (for those of you that don't know) is an excellent Dependency Injection framework inspired by the very popular PureMVC framework. I have blogged before about its excellence. Signals have been incorporated into RobotLegs as a separate 'plugin' by Joel Hooks in the form of the SignalCommandMap. The SignalCommandMap does as the name implies: it allows you to map signals to commands so that whenever a mapped signal is dispatched, the corresponding command is executed.
It's a very nice, elegant solution to RIA development. However there is one catch. I have so far been using signals such as:
[codesyntax lang="actionscript3" lines="normal"]
public class MyMediator extends Mediator
{
    // View
    [Inject] public var view : MyView;

    // Signals
    [Inject] public var eventOccured : ViewEventOccuredSignal;
    [Inject] public var modelChanged : ModelChangedSignal;

    override public function onRegister():void
    {
        view.someSignal.add(eventOccured.dispatch);
        modelChanged.add(onModelChanged);
    }

    protected function onModelChanged()
    {
        view.updateView();
    }
}
[/codesyntax]
So here we can see a typical use of Signals in a mediator. There are two things going on here that are of concern, so let's break them down.
Firstly, on line 12 we are listening to a signal on the view, then passing on the event directly as an app-level event; notice how nice and clean this is, this is what I love about using RL & Signals. On line 13 we are listening for an app-level signal for a change on the model, then updating the view to reflect this.
It all looks well and good, but unfortunately in its current state it could cause a memory leak. This is because we are listening to signals without then removing the listeners. For example, we are listening to the app-level signal on line 13, "modelChanged.add(onModelChanged);", so now the "modelChanged" signal has a reference to this Mediator. This will cause a leak when the View is removed from the display list. Normally the Mediator would also be made available for garbage collection; however, because the singleton Signal has a reference to the Mediator, it cannot be removed.
The same goes for line 12. Suppose the "ViewEventOccuredSignal" that is injected is not a singleton and is swapped out for another instance: it could not be garbage collected, as "view.someSignal" has a reference to its dispatch function.
Realising this problem, I knew that the solution was simply to be careful and add an "onRemoved" override function in my Mediator, then clean up by removing the signal listeners. However, I like the simplicity and beauty of the current way of doing things, so I started to wonder if there was another way.
I started thinking about whether I could use weak references with the Signal. If I could, then I wouldn't have to worry about cleaning up, as the Signal wouldn't store any hard references to the functions, and so the listener would be free for collection. After some digging, however, I realised that there was no option for weak listening in Robert Penner's AS3Signals.
I thought to myself: why the hell not? I knew that the Dictionary object in AS3 has an option to store its contents weakly, so I thought that as long as you don't require order-dependent execution of your listeners, it should be possible to store the listener functions in a weakly referenced Dictionary.
It was at this point that I noticed Robert's post on the subject of weakly referenced Signals:. In it he references Grant Skinner's post concerning a bug with storing functions in a weakly referenced Dictionary.
From Grant’s post:.
This was starting to look bad for my idea. Me being me, however, I thought I knew better; that post was written pre Flash 10, so I thought to myself: perhaps it's been fixed in Flash 10. So I set to work coding a simple example.
I created a very simple Signal dispatcher:
[codesyntax lang="actionscript3" lines="normal"]
package
{
    import flash.events.EventDispatcher;
    import flash.utils.Dictionary;

    public class SimpleDispatcher
    {
        protected var _listeners : Dictionary;

        public function SimpleDispatcher(useWeak:Boolean)
        {
            _listeners = new Dictionary(useWeak);
        }

        public function add(f:Function) : void
        {
            _listeners[f] = true;
        }

        public function dispatch() : void
        {
            for (var o:* in _listeners)
            {
                o();
            }
        }
    }
}
[/codesyntax]
And a very simple listening object:
[codesyntax lang="actionscript3" lines="normal"]
package
{
    public class SimpleListener
    {
        public function listen(d:SimpleDispatcher) : void
        {
            d.add(onPing);
        }

        protected function onPing() : void
        {
            trace(this + " - ping");
        }
    }
}
[/codesyntax]
And then a simple Application to hook it all together:
[codesyntax lang="mxml" lines="normal"]
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx">

    <fx:Script>
        <![CDATA[
            import mx.controls.List;

            protected var _dispatcher : SimpleDispatcher = new SimpleDispatcher(true);
            protected var _listener : SimpleListener;

            protected function onAddListenerClicked(event:MouseEvent):void
            {
                _listener = new SimpleListener();
                _listener.listen(_dispatcher);
            }

            protected function onRunGCClicked(event:MouseEvent):void
            {
                try
                {
                    new LocalConnection().connect('foo');
                    new LocalConnection().connect('foo');
                }
                catch (e:*) {}
            }

            protected function onDispatchClicked(event:MouseEvent):void
            {
                _dispatcher.dispatch();
            }
        ]]>
    </fx:Script>

    <s:VGroup>
        <s:Button label="Add Listener" click="onAddListenerClicked(event)"/>
        <s:Button label="Run GC" click="onRunGCClicked(event)"/>
        <s:Button label="Dispatch" click="onDispatchClicked(event)"/>
    </s:VGroup>
</s:Application>
[/codesyntax]
So what I should expect to see from this example is that when I click "Add Listener" it should create a listener, which will then listen for the signal being dispatched and trace out a "ping".
What actually happens is you get nothing. No trace out, despite the fact that there is clearly still a reference to the listener in the Application file.
So whats happening here? If you break into the debugger at the point that the listener is added then you get the following:
You can see that the type "MethodClosure" is added as the key to the dictionary rather than the Function which is passed in. MethodClosure is a special native Flash type that you don't have access to. It exists to resolve the issues we used to have in AS2, where passing a method of a class to a listener would cause the listener to go out of scope, and other nasties. From the Adobe docs:
Event handling is simplified in ActionScript 3.0 thanks to method closures, which provide built-in event delegation. In ActionScript 2.0, a closure would not remember what object instance it was extracted from, leading to unexpected behavior when the closure was invoked.
..
This class is no longer needed because in ActionScript 3.0, a method closure will be generated when someMethod is referenced. The method closure will automatically remember its original object instance.
The only problem is that it seems that using a MethodClosure as a key in a weak Dictionary causes the MethodClosure to have no references, and hence be free for garbage collection as soon as it's added to the Dictionary, which is not good :(
So that's about as far as I got. I have spent a few evenings on this one now and I think I'm about ready to call it quits. I had a few ideas about creating Delegate handlers to make functions very much in the same way as was done in AS2, but then I read this post: and the subsequent comments and realised it probably wasn't going to work.
I also had an idea about using the only other method of holding weak references: the EventDispatcher class. I thought perhaps somehow I could get it to hold the weak references, then loop through the listeners in there, calling dispatch manually. Despite the "listeners" property showing up in the Flex debugger for an EventDispatcher, you don't actually have access to that property, and hence can't get access to the listening functions. Interestingly, however, the EventDispatcher uses a "WeakMethodClosure" object instead of the "MethodClosure" object, according to the debugger.
Well, I guess for now I'll have to make sure I code more carefully and unlisten from my Signals ;)
rename variables in release mode (like C++ #define)
Fil replied to Fil's topic in General and Gameplay Programming
@jbadams: Thank you, your words are very encouraging.
@Bacterius: I don't have proprietary algorithms, but it's business software with (I think) its unique selling points... so I hate the idea that my various classes can be reused by others in their programs (that's the source of my little regret). I know my (unique) features can be redone from scratch by the competition; I just don't want to help them too much, something like "hey, here's the source code!". With respect to the Minecraft example, I'd say that because it's based on a community, its source code is important only to mod developers: I imagined a clone of the game would be easy to spot for everyone... but I understand from Bacterius' reply that my reasoning is wrong. Anyway, you're right: obfuscation is not so effective... even a friend of mine told me not to spend too much time on obfuscation, since it's far easier to understand a program by analysing the database schema than by trying to unobfuscate or (in the case of native code) decompile an exe. Thanks for your suggestions.
rename variables in release mode (like C++ #define)
Fil replied to Fil's topic in General and Gameplay Programming
@wack: I hope it will be extremely popular. About decompilation, I agree with you. I'll try to spend the least time possible doing renaming.
@antheus: You're right. I regret a little having used this kind of technology to develop my program. I think I'll rewrite it using VC++ to have native code, but at present I need to complete the first release as soon as possible, and I'd like to discourage reading of my exe/source code by casual observers. Thanks for sharing your thoughts.
rename variables in release mode (like C++ #define)
Fil replied to Fil's topic in General and Gameplay Programming
Thank you for your replies. I'm already using an obfuscator (since the exe in .NET is just all the source code without comments), but a lot of classes dealing with WCF services cannot be renamed without compromising communication between clients and server (in fact the obfuscator leaves them unchanged). That's why I'd like to change these names manually. So I can have the class CCustomerInfoService in debug mode and something named C0945 in release mode (the same for model classes, all the methods and members). I know obfuscators are not a (great) protection, but since it's not an open source project I don't want others to see so easily how I wrote my program, nor do I want to give them too many insights about the class diagram I implemented. "Replace All", as Bacterius suggests, maybe is not a bad idea (or am I desperate? ;-) ): I'm thinking about a String.Replace done on my source code by a little utility as a pre-compilation step (a sketch of the idea is below). That will keep the release and debug versions of the source code in sync (so there is no change in how I write programs), and names are changed automagically before compiling the release version. If I forget to rename something it will become a compiler error in release mode, so it should be easy to fix, since I'll find a debug name not replaced with its release counterpart. I think I'll try this way. Thank you again, your comments have been useful.
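(A rough sketch of such a pre-build rename pass, shown in C++ purely for illustration; the utility, its name table and both identifiers are invented for the example, and a real tool would need to match whole identifiers rather than raw substrings.)

[CODE]
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Reads a source file, replaces every debug identifier with its release
// counterpart, and writes the result; meant to run before the release build.
int main(int argc, char* argv[])
{
    if (argc != 3) {
        std::cerr << "usage: rename <input> <output>\n";
        return 1;
    }

    // Hypothetical mapping table: debug name -> release name.
    const std::map<std::string, std::string> names = {
        { "CCustomerInfoService", "C0945" },
        { "myDebugVarName",       "v1" }
    };

    std::ifstream in(argv[1]);
    std::stringstream buffer;
    buffer << in.rdbuf();
    std::string text = buffer.str();

    // Naive textual replacement of each mapped name.
    for (const auto& pair : names) {
        std::string::size_type pos = 0;
        while ((pos = text.find(pair.first, pos)) != std::string::npos) {
            text.replace(pos, pair.first.size(), pair.second);
            pos += pair.second.size();
        }
    }

    std::ofstream(argv[2]) << text;
    return 0;
}
[/CODE]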
rename variables in release mode (like C++ #define)
Fil posted a topic in General and Gameplay Programming
I'm using VB.NET 2008. I'd like to change variable names from debug to release mode. I know I can't use a #define preprocessor directive (like in C++):

[CODE]
#if DEBUG
# define MyVarName myDebugVarName
#else
# define MyVarName myReleaseVarName
#endif
[/CODE]

and I wonder if there's something I can do to have this kind of result. I know I can write something like:

[CODE]
#If DEBUG Then
    Dim myDebugVarName As Integer = 0
#Else
    Dim myReleaseVarName As Integer = 0
#End If
[/CODE]

but this is very ugly, since now every instruction (or block of instructions) dealing with that var has to be written two times, once for debug mode and once for release mode. E.g.:

[CODE]
#If DEBUG Then
    myDebugVarName += 1
#Else
    myReleaseVarName += 1
#End If
[/CODE]

or

[CODE]
#If DEBUG Then
    If myDebugVarName > 2 Then
        DoSomething(myDebugVarName)
    Else
        DoSomethingElse(myDebugVarName)
    End If
#Else
    If myReleaseVarName > 2 Then
        DoSomething(myReleaseVarName)
    Else
        DoSomethingElse(myReleaseVarName)
    End If
#End If
[/CODE]

The code soon becomes unreadable, and it is easy to forget to keep the two versions in sync (a violation of the DRY principle). Renaming variables in MSIL might be an option, but I've not found an easy way to do that. For example, I imagine having a table with debug names and release names for each var (maybe a Dictionary) and calling a translation of all the names in the release version (so I write only myDebugVarName in my source and in MSIL it changes to myReleaseVarName). I think I'll apply your suggested methods to change the names of some classes and methods. I hope you see what I mean. Thank you in advance for any help.
Convolution not working
Fil replied to TheMadScientist's topic in For Beginners
Quote: Original post by TheMadScientist: "If one of my coefficients calls for y[n-5] and n=0 I would assume this would be zero."
I think I've found your problem: your array indexes are negative, so the values they point to are meaningless and have random values. When i=0 you are using these elements:
- InputSignal[0] ok
- InputSignal[-1] <<-- bad
- InputSignal[-2] <<-- bad
- OutputSignal[0] ok
- OutputSignal[-1] <<-- bad
Remember arrays' first index is 0 and there is nothing before it. You should do something like this. Suppose you need nNeg elements before index zero:

[CODE]
float *InputSignal = new float[N + nNeg + 1];
float *OutputSignal = new float[N + nNeg + 1];
InputSignal += nNeg + 1;
OutputSignal += nNeg + 1;
// now InputSignal[-nNeg] is a valid element of the allocated block
...
NotchBandPass(InputSignal, FC, BW, OutputSignal, N);
...
// dispose mem (delete must use the original pointers, so shift them back)
InputSignal -= nNeg + 1;
OutputSignal -= nNeg + 1;
delete [] InputSignal;
delete [] OutputSignal;
[/CODE]

Now your NotchBandPass should work... if there are no other mistakes [grin]
How i can insert a link on a dialog can open a web page in IE?
Fil replied to pepeland's topic in General and Gameplay Programming
Maybe I've not understood your question [grin]. My solution is:

[CODE]
system("explorer");
[/CODE]

That will open it in IE. I hope it can be helpful. [grin]
Invoking menu of other applications
Fil posted a topic in General and Gameplay Programming
I'm trying to open the menus of other applications programmatically, sending the messages a window receives when I click the menu items or when I press ALT+F then 'O' to open a file, for example. I've tried with:

[CODE]
CWnd * pWnd = ...; // the window I want to control (I read the handle with Spy++ then use CWnd::FromHandle)
CMenu * pMenu = pWnd->GetMenu();
CMenu * pSubMenu = pMenu->GetSubMenu(0);   // 0 corresponds to the "File" menu
int MenuID = pSubMenu->GetMenuItemID(5);   // 5 corresponds to the "Save as..." item
pWnd->SendMessage(WM_COMMAND, MenuID, NULL);
[/CODE]

While this can work for Notepad, for other applications it doesn't work (e.g. with an instance of Visual C++). I searched on Google and yesterday I found a source in VB that makes use of CommandBar objects to invoke menus, and I promised myself to give it a try (after installing the .NET 2003 Toolkit I've not downloaded yet [grin]). (I think it is easier to write a program like that in VB instead of VC++.) Can you suggest a link or a tutorial on using CommandBar and/or invoking the menus of another application, since I cannot find the VB source code I found yesterday? Any other suggestion (i.e. open source project, other methods, etc.) will surely be helpful. Thank you very much in advance. :)
I want to write some chinese words here!
Fil replied to linux_game_dev's topic in GDNet Lounge
Quote: Original post by SamLowry: "I once asked some person (which I met during the "Projectwerk verdediging") how Japanese keyboards looked like, because I just couldn't imagine keyboards with thousands of keys on them. He told me that it worked phonetically."
Chinese also works in a phonetic way. For example (with the right programs), if you type "bak" you'll see the symbol 白, which means "white", and you use a normal keyboard :) "bak" is how Chinese people read the symbol 白.
delete operator
Fil replied to mike74's topic in For Beginners
You must use "delete []"; otherwise you don't completely deallocate the allocated block. The compiler says nothing to you, but if you do this many times you can run out of memory :O
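(A minimal illustration of the pairing rules; the variable names are just for the example, and strictly speaking mismatching the forms is undefined behaviour rather than a guaranteed partial leak.)

[CODE]
int main()
{
    int* one  = new int(5);    // single object: allocated with new
    int* many = new int[100];  // array: allocated with new[]

    delete one;     // single object: released with delete
    delete[] many;  // array: must be released with delete[]
    // Mixing the forms (e.g. "delete many;") is undefined behaviour.
    return 0;
}
[/CODE]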
[java] How to know if I should use / or \ ?
Fil replied to gcsaba2's topic in General and Gameplay Programming
The property you are looking for is "file.separator", to be used as:

[CODE]
String FileSeparator = System.getProperty("file.separator");
[/CODE]

See for reference.
pre or post multiplication for matrices?
Fil replied to PolyVox's topic in Math and Physics
You have these cases:

transformed vertex = vertex x Transformation Matrix
transformed vertices (matrix with 1 vertex per row) = vertices (1 vertex per row) x Transformation Matrix

so you always use post-multiplication.
P.S. - Remember that the product is always a row x a column.
Hope that helps. [Edit: nmi has been faster than me [grin]]
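(A quick sketch of the row-vector convention in C++, with hypothetical names, for the 4x4 homogeneous case: output component c is the dot product of the row vector with column c of the matrix.)

[CODE]
// out = v * M, with v a 1x4 row vector and M a 4x4 matrix.
void transform(const float v[4], const float M[4][4], float out[4])
{
    for (int c = 0; c < 4; ++c) {
        out[c] = 0.0f;
        for (int r = 0; r < 4; ++r)
            out[c] += v[r] * M[r][c];  // row vector times column c
    }
}
[/CODE]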
Thread questions...
Fil replied to mrmrcoleman's topic in General and Gameplay ProgrammingYou can read my answer on flipcode.
Memory questions...
Fil replied to mrmrcoleman's topic in General and Gameplay ProgrammingYou can find other replies here [grin]
line slope formula
Fil replied to emprog's topic in Math and Physics
The slope formula is OK. You are reinventing the Bresenham algorithm. [grin]
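(For reference, a bare-bones integer Bresenham line for the first octant, i.e. 0 <= dy <= dx; other octants follow by symmetry. This is a generic sketch, not code from the thread, and plot() stands in for the actual pixel write.)

[CODE]
#include <cstdio>

void plot(int x, int y) { std::printf("(%d, %d)\n", x, y); } // stub pixel write

// Line from (x0, y0) to (x1, y1), assuming 0 <= y1 - y0 <= x1 - x0.
void bresenhamLine(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int error = 2 * dy - dx;  // scaled decision variable: no floats needed
    for (int x = x0, y = y0; x <= x1; ++x) {
        plot(x, y);
        if (error > 0) { ++y; error -= 2 * dx; }
        error += 2 * dy;
    }
}
[/CODE]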
2D fourier spectrum rotation problems
Fil posted a topic in Math and PhysicsI.
Yes, I also learned programming using BASIC when I was 11 years old or something around that age.
It's really made for beginners, nothing beats
PRINT "hello world"
and I taught my girlfriend basic programming before she learned a bit of C++.
The great thing about FreeBASIC is that it can do simple things, run QuickBasic programs, and also supports (limited) OOP and pointers, and C(++) libraries, so it can be used from simple things up to bigger projects.
I think many of us started that way. I'm still trying to recover, over 2 decades later.
PRINT "hello world"
Yes. But the problem arises when you start with BASIC because it's fun and simple and next thing you know, it has shaped the way you think about programming.
BASIC is not bad because it's not as "fancy" as C or Java. It's bad because it's a dead-end, a destroyer of mind flexibility. C or Java have their redeeming qualities because they allow and even require the mind to work in other ways too. BASIC has none of those qualities. It is a stupid, mean, mind-grinding machine.
Edsger Dijkstra said this about BASIC: "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
That person is locked into thinking like the lowest form of dumb machine there is: one single instruction at a time. These examples I've given require such a person to take their mind out, rinse it and turn it inside out, so to speak. It's a terrible effort; many of them fail and for many of them it takes years and years to accomplish that. That's because they should have started flexing their mind right from the start, and instead they were dumbed down with BASIC. Perhaps you can now understand why Dijkstra calls this "criminal".
The context of that quote is much larger, and that entire paper is worth a read:...
If you read it you will see that he warns about larger issues, of which this BASIC thing is just one symptom. There's an entire unhealthy current in today's society that tends to regard computers as magic machines, expect them to do what they cannot, blindly trust them for all the wrong things, or, worst of all, trying to emulate their inner working in the human mind, when the exact opposite should be true.
Computers are dumb tools meant for specific tasks. Once you've learned how to use a hammer you use it when you have nails to hammer and that's it. But when it comes to computers and things like BASIC we basically have lots of people who try to be a hammer and bang their heads on nails. It's a horrifyingly wrong and dangerous way of non-thinking.
Low-level languages have their place, sure. At some point in computing history they were the only way to do things, sure. But technology has evolved. We don't use rocks and live in caves anymore, we have specialised tools. And imperative programming languages should be just one tool among many, not the start and end of it all. In no way should any person should be subjected to the horror of breaking their mind to only think like a computer, one instruction at a time. The human mind is capable of much, much more.
If you want to teach your kid programming, for the love of God, please, start with a balanced combination of several approaches:...
Off the top of my head, I'd go with Alice or Lego Mindstorms, Logo and then BASIC. But under no circumstances start with BASIC or let them learn only BASIC. Carrots are good but you wouldn't feed your kid only carrots or wean him on carrots, would you?
I learnt to code in BBC Basic, then ZX Basic and QL SuperBASIC. I had absolutely no issues learning SQL, nor OOP for that matter:
I am a certified SQL Server 2005 technology specialist and MySQL 5.0 Certified Developer. And I have used Access SQL for too many years to remember. I also know my way around Oracle SQL (including PL/SQL), OpenBase, SQLBase, SQLite etc
As for OOP, my first OOP language was Smalltalk (under OS/2). About as OOP as you can get. I wrote financial transaction systems for a major European investment bank back in the '90s with it - took me about 2 weeks to 'get it'. Not that bad, given that there was no Web to speak of to look things up back then, and precious few books.
Nowadays I mainly code in PHP 5 with Zend Framework.
I'm not saying that BASIC is the greatest first language (although modern BASICs are fine and do OOP, recursion etc with no problems), and I tend to agree with you about Logo being more suitable for young kids (my daughters can write Logo programs, and are not even aware that they are programming). It's just that statements like the above are too general in nature and tend to come from people brought up on a diet of C and other low level languages.
Yes, *some* people who learn BASIC may have issues with OOP, but I suspect that most people have problems with OOP when they first encounter it anyway. When you get it though, it's like learning to ride a bike - it's easy from then on.
I suspect that the main block to people learning new technologies/methodologies is jargon, poor teaching/books and lack of time!
Sadly I have to agree with this completely. I cut my teeth on BASIC and then Pascal and coming to terms with OOP was, and to some degree still is brain-draining. While I loved those times of tinkering with the learning languages, they were counterproductive in the long term I feel.
Unfortunately there wasn't much else around on mainstream desktop systems back then - Sinclair Spectrum, Apple ][ and /// series (including GS), original IBM PC and the clones, TRS-80, BBC, Commodore machines, they all had BASIC "built-in", and languages like Pascal and Fortran were readily available for most, so that's where we started. I regularly find myself cursing those "fun times" as I try to wrap my ageing grey matter around some modern code snippet, only to end up taking a handful of Paracetamol and Skyping a friend for an explanation...
I kind of have to agree with this. My guess is that 95% of people just could never be programmers. And that is not a bad thing. 95% of programmers could never write novels, or be car mechanics, or teachers, or doctors.
Each person's mind works in different ways. Personally, I think I was lucky enough to have the sort of mind that understands programming very easily. I started with Logo, then BASIC, then Java, then C++, and now Haskell, and I've never had any trouble with the new concepts (except maybe monads... that took a few weeks). But I could probably never do any of those other things.
And I'd agree with that as well. I've always been fascinated with technology, but spent a lot of time in the arts rather than the sciences because I felt a lot more driven by intuition and emotion than I did by sequence and logic.
I learned a few languages or so, maybe half dozen at most, either on my own or through my schooling, and I studied programming/coding methods and concepts. But I never felt that I had the patience or diligence to do such things for hours and hours.
I've always felt much more affinity for the hardware side than the software side. If circumstances had gone differently, I might have gone deep into the music applications: acoustics, sound engineering, etc. My other hobby *was* home sound systems, after all.
As generic advice it's probably a good one. People should be aware of their limitations and make the best of what they're good at.
If you meant it as defence for BASIC, not so much. Because it's been criticised by people who aren't exactly dumb. Dijkstra wasn't taking it out on BASIC because he wasn't capable of programming in it. He was condemning it for what it is: a completely unnatural way of thinking, blindly borrowed by humans from computers, instead of the opposite.
LOL. BASIC is a simple language to introduce computer programming and do some basic work in a dozen lines. Anyone who expects OOP or some high-end functionality from BASIC is completely missing the point.
I started with ZX Basic, BTW. And I completely agree with the author's point.
While your criticisms of BASIC are not unfounded (and I did not start on it myself) I don't think the author of the article is wrong, either. BASIC serves an important role in teaching programming: it reaches out to people who might otherwise not begin to program.
Where BASIC serves is as a gateway drug. To give the novice a taste of power and to drive his maniacal impulses. Once hooked it is possible to wean someone away from BASIC and on to harder drugs^H^H^H^H^Hlanguages, like LISP.
The low barrier to entry is precisely BASIC's virtue, and its broken-ness is directly related to this simplistic virtue. Any other language would serve as well if it could be as simple, but if it could then I think it would likely be as broken.
I couldn't disagree more. The thing is, most people are bad programmers. Even those doing it for a living. Great programmers are, above all, great architects, being able to have both a highlevel view and drill down to the core. BASIC is so simple, everyone can do it. Everyone understands it, as it is the closest thing to giving the computer instructions the way you give instructions in real life to people. This means that really bad programmers (or must I say, people who are really bad at programming and will never become "true programmers") can do BASIC, and nothing more. Sure, you can teach them C, but they'll turn it in a nightmare spaghetti dish C has become infamous for. So although it may seem that teaching someone BASIC destroys his chances of ever grasping anything else, that's just statistics, not fact. In reality, people who are born with the ability to become good programmers will be good programmers, even if they are taught BASIC (or worse, assembly!) first. I know many of them who started with line-oriented BASIC, then went to procedural BASIC, and from there to Pascal, C, C++/C#/Java whatever. The mind is really flexible, a single bad (or rather, limited) experience doesn't prevent its expression.
When's the last time you spoon-fed a real person step-by-step instructions? You tell them "go to the store and pick up some eggs" (perfect example of declarative programming, by the way). Even if they'd never been to that store, it's easy, as far as instructions go. Because most humans are not as dumb as computers.
To a computer you'd have to tell in painstaking detail how to reach the store, work the main door, pick a basket, work the turnstile, recognize the eggs, pick up the eggs, read the price label, check out, verify payment, return home.
And that's exactly the problem with BASIC. It's a fundamental fallacy that because computers work like that (them being dumb pieces of metal) we, humans, should also think like that. That's NOT how the human mind works. ANY programming paradigm other than imperative is much closer to the way humans think: declarative, functional, OOP.
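To make the contrast concrete, here is a minimal sketch in JavaScript (chosen only because it supports both styles; the grocery list is invented for the example):

// Imperative: spell out every step, the way BASIC forces you to.
var groceries = ["milk", "eggs", "bread", "eggs"];
var eggs = [];
for (var i = 0; i < groceries.length; i++) {
    if (groceries[i] === "eggs") {
        eggs.push(groceries[i]);
    }
}

// Declarative: say *what* you want, not *how* to walk the list.
var eggsToo = groceries.filter(function (item) {
    return item === "eggs";
});

Both produce the same result; the second reads much closer to "pick up some eggs".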
So why did we settle on step-by-step as the best way to approach programming? Because at some point in time that's all we had: early programming "languages" were very rudimentary. But that's not the case anymore!
You could also argue that even people who suffered massive physical or psychological trauma as children have a chance of leading functional lives later on. Doesn't mean the trauma was a good thing. Their lives would have been so much better without it.
Yesterday. I have kids, you see.
Well, if they're not familiar with the details, I'd also tell them which store to go to, what shelves to find the eggs on, what eggs I want (eko, 4-corn; small, medium, large; chicken, quail; white, brown) and what the maximum price is that I want to pay for them. I may also tell them to take the car or go by bike, if that's faster.
I'm not sure what you're arguing here. Do you imply that when programming in C or Java, your computer instantly gets smarter, and understands more? The downside of standard BASIC is the limited possibilities of procedural programming and extendability in general, but it works pretty much like C or Java. In fact, it is more like your ideal, as in BASIC I can say "draw circle" or "play music" and it works like a charm. In C or Java I have to code all that (or include libraries, that usually make me "initialize" them and set all kinds of parameters and so on).
Nobody is claiming humans should think like computers, you are putting up strawmen here.
Really? I always liked to think that "draw circle" is much more intuitive than, say "circle.draw", since in real life, the circle doesn't draw itself. And I must be very unhuman for not being able to grasp functional programming, since hey, that's supposedly how my mind works. Well, it doesn't.
Yes, you could argue that, but that's not what we're arguing about here, again putting up strawmen. I am arguing against the notion that using BASIC as a first programming language would "damage the brain" or whatever wording the OP used. I am not arguing in favour of the notion that BASIC should be taught as first language, which is what you seem to imply.
I dunno. But I hardly remember any BASIC from when I started programming (which was just 10-12 or so years ago). I am not saying it's a good language, but comparing this to real traumata seems exaggerated to me. And I'd say I am not that bad of a C++-, C#- (and some more) programmer (some of my projects include picogen () and metatrace, a C++ compile time ray tracer ()).
Or are you telling me next that I should visit a psychologist, so he pulls out this untreated soul wound, makes me whine and cry and lie in his arms against his hairy chest, so I can finally forget those abuses that I hardly remember?
You claim "That person is locked into thinking like the lowest form of dumb machine there is: one single instruction at a time."
And you think that's a bad thing?
That's what a computer DOES! And even when it doesn't, the CPU goes to some effort to make it seem that instructions are happening sequentially.
Programmers who only learn Java or Haskell or Ocaml often end up creating software with awful performance and/or memory use. Worse, they don't know why it sucks!
I'm not going to claim BASIC is a great programming language, but I don't believe it's harmful. It doesn't stop people from learning declarative and functional techniques. After all, I learned both and I started with BASIC and 6502 assembly on a Vic20 and C-128.
Personally, my first programming experience (that I can remember) was writing BASIC code which used a for loop with "poke" to blow out an Apple IIe. I think we did some other small stuff.
IMHO programming in and of itself isn't such a big deal. The bigger deal is "logic": being able to look at a problem and break it down into logical sub-problems. Having exposure to these basic languages helps in that it removes all the distracting *crap* and reduces problem solving to its elemental environment: sitting at a terminal and writing some code to solve a simple problem.
Apple and Microsoft have some issues. Their primary interest isn't in helping you solve your problem, it's to make sure that whatever you do, you use their tools and their platform first and foremost. It's kind of bass ackwards.
I didn't say you did, I asked you NOT to. Because that's what people do.
Also, without a framework, your code will be crap. Just like in any other language.
You cannot create a nice application using pure C/C++ (without frameworks like GTK, Qt and others). Why do you think it's a good idea to write a web application in pure PHP?
You also don't create websites using pure Ruby, Python or ASP.
And tell me why on earth you need EVERYTHING to be an object when you are not going to use advanced APIs and libraries?
If people of my generation didn't get OOP earlier, it's because of all the hype and the stupid lemmings and inheritance examples.
If OOP had been explained like it was conceived - as a way to separate parts of the program in a way that makes reuse and replacement easier - we would have got it a lot faster. It didn't help that C++ (which at the time allowed for nil reuse and focused on half-inheriting things from other things) was the poster child of the movement.
Encapsulation is the real deal, inheritance is a complement, by no means necessary, and most importantly, it certainly has nothing to do with lemmings.
The real damaging languages are the likes of Java which make understanding of the "primitive" ones completely impossible all while promoting dubious OOP practices.
On the OT, if I had been told to #import some stuff and type boilerplate code in order to print HELLO in an infinite loop, I wouldn't be programming right now. BASIC, like machine code for the microcomputers of the time, is simple enough to fit within a kid's attention span. VB did that for GUI apps.
It is amazing that some kids still learn to program with some hellish popular language, but I think it is clear that the success rate (defined as programmer-kids/computer-using-kids) was far higher back then.
Functional is the new OOP. It is an interesting paradigm to learn and apply, but how do they promote it? Stupid recursion examples that don't make sense because they run infinitely slower than a sound algorithm, written in algebra-like languages where you are not allowed to do anything lest you unleash side-effects. The ultimate entertainment for children who like playing soccer without a ball.
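For what it's worth, the classic offender is the naive Fibonacci example; here is a hedged JavaScript sketch of the gap being described (the function names are invented):

// Naive recursion, straight from the tutorials: roughly O(2^n) calls.
function fibSlow(n) {
    return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2);
}

// A sound algorithm for the same thing: a simple loop, O(n).
function fibFast(n) {
    var a = 0, b = 1;
    for (var i = 0; i < n; i++) {
        var next = a + b;
        a = b;
        b = next;
    }
    return a;
}

// fibSlow(35) takes seconds; fibFast(35) returns instantly.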
Oh wait, reminded me of this!
Reminds me of a "feature" of a product from Digital Equipment Corporation around 1972. The GT40, a vector mode graphics system with a PDP-11 as its computer, had a "programmable fuse blow" function. Well, they figured out it was a "bug", but the fellow in the group who found the problem was quite impressed... Definitely "Goodbye World".
Also reminds me of the "Printer on Fire" error message:
edit: What you describe sounds similar to this: (noticed at the bottom of the other wiki page).
Or why not Python? It's free, runs on pretty much everything, and has a basic-like syntax to it, and you can either code procedurally or use OOP if you want.
Note: I'm not a Python fanboy and don't claim it to be the 'end-all' of languages, but seems like it would be a nice choice to start out on. Much more so than any language that forces you into OOP from the beginning, or one so complex that you have to study it for 6 months to do anything useful with it.
I wouldn't say Python syntax is BASIC-like, but it *does* favour words over symbols, which would probably help someone to get into it. It also has lists and maps as native types, so there's no need to drag in library functions too early. Supports classes for teaching OO, and a variety of constructs suitable for teaching functional programming. And has plenty of libraries available when they start looking at web stuff, or UI, or writing games.
All in all, I think Python would be one of the better choices for teaching programming to kids....
It also supports turtle graphics very well in the base install on just about any computer, which makes drawing very accessible. And if you create a program on a Mac at school, it'll draw correctly on the PC at home.
And it's free in both senses. And includes a good editor (Idle) to help get started.
Not perfect, but certainly an attractive candidate.
Kroc: you hinted at this quite heavily, but didn't explicitly say it:
The important thing wasn't that the language was available, even free. It was that it gave the impression of being an integral part of the computer. You can actually get some very good BASICs nowadays. But they just seem like any other program. Surely the real programming bug was given to you by the thought that that's what computers are for: for you to bend to your will.
Now, you load up BBC BASIC for Windows (or YAB or any other) and there's no feeling of magic - no "I'm in absolute control of this machine" feeling. We're taught in schools (I'm a 22 year old Briton, so I was at least) to regard PCs as glorified secretarial tools. If anything stymies interest in programming then it's got to be that.
P.S. Oh, and now kids are also taught to regard computers as communication devices and big libraries. This only further removes the sense of creative potential.
I was going to post something about how unix shell scripting does all the same good stuff...
until I got to the part about playing music, and manipulating dots on the screen.
I remember BASIC, but honestly never did anything cool with it. It was LOGO, which was graphical, and not nearly as powerful that really lit the fire in me.
I now have 6 kids of my own and NONE of them know as much about computers as I did at their ages. It's sad and I'd like to do something about it, but the problem is as much about connecting with them as it is teaching. Most of them are apathetic.
and UCBLOGO is available for every major OS, and has a friendly front-end for Windows available.
I can't code for crap, mainly because I haven't really tried in over a decade, but I understand the basics of what programming is, and what it takes to get <foo> to do <bar>.
I started with Apple LOGO in an Apple //c.
I remember the enormous feeling of power when I realised that anything I could tell the computer to do - provided I told it properly - it would do for me. Then I was massively disillusioned in High School when the only programming course was in C++ and spent 2 chapters on things like Univac and Eniac and the history of mainframes and military computing.
The teacher was over 70, had never written so much as a line of C, and spent most of the class period taking smoke breaks. I spent that class playing video games, like 90% of the students (though I also intentionally broke the computer's Win9X install, repeatedly). He graded it on a curve, and I, who had done literally no homework, got a D. This was in 1998 or so, before the bubble had burst, and they wanted to churn out C++ and Java programmers.
C++ as the first language for high school students? My first thought was "How do I make something happen??? This is all moving bits around! How much of this crap do I have to write before I can do... anything?"
Do low-level languages have their place? Yes, but not for kids who want to learn the _principles_ of programming.
Hell, I'd have been more entertained by some 6502 box and pure, raw, unadulterated assembly, because you actually _are_ moving bits around! Stuff is happening, exactly as I say.
Then again, the fact that they didn't try Python (or is that too high level?) or some Lisp/Scheme (PLT is everywhere, and has everything but a turtle in its standard library, really) is surprising to me.
When I was a little kid my father wrote a small BASIC program that would do multiplications. When I saw how simple that was I knew I wanted to learn how to write computer programs, and now years later I am very happy that I saw that little BASIC program. Of course I have moved on to proper languages now (C of course), but I think BASIC is just fine to get kids into programming.
I completely agree with you.
When I try to explain their first program to people who are learning Java... Their Hello World program looks like:
package MyPackage;
public class MyHelloWorld {
public static void main(String... args) {
System.out.println("Hello world");
}
}
How can I explain to them the meaning of package, public, MyPackage, class, MyHelloWorld, static, void, main, String, ..., args, curly braces, System and out, when the real thing is just executed by the println method and they do not know anything about the rest?
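For contrast, the entire equivalent program in a scripting language (JavaScript here, as one arbitrary example) is a single line:

// The whole program: no package, class, or main required.
console.log("Hello world");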
I disagree with people who think that starting with some OOP language is good. The beginner needs to know more basic things first: programming logic, what a variable is, what a subroutine is, how loops work, if-then clauses, etc... The constructions on top of this are... constructions on top of this. You should not build a skyscraper with no foundations.
If that is your only issue with introducing new programmers to Java as their first language, then look into J0 ( ) which exists to solve that problem by not requiring all of that for simple Java programs.
I've been teaching my 9 year old brother Ruby on Rails to get him to understand the idea of object oriented programming and the basic ideas of programming in general. So far I'm impressed by how well he seems to be able to grasp the concepts, and especially by how well he seems to be able to understand the same concepts when presented with a code sample that does the same thing in another language.
Why not just get the kid an OLPC? Or, well, any computer running Sugar on a Stick:
The 'Turtle' activity in Sugar is essentially a very basic programming language. And yes, I note the similarity to the Microsoft thing mentioned here. I dunno which came first.
I guess Logo came first (1967), but there may even be something before that...
You are right about Apple. I too worry about the direction Apple is pushing computers. That is why I repeatedly tell people not to buy Apple stuff. I guess Jobs is trying to prevent another younger and better Jobs from coming up, by limiting the ways you can use Apple devices.
Sorry Steve, but one will come... most likely from the open-source guys.
RealBasic is clearly an easy choice, tiered pricing and all, and shouldn't take 3 years to get running or change icons on.
Gnu bash is another easy choice, though at some point you have to be offended at the scope selection and overall design, and maybe pick tcsh or something else handy, from a POSIX, gnome or kde toolchain to an IDE that lets you program arduinos.
Maybe one needs a program that takes beleaguered Fathers' Day stories and makes some recommendations on how your search terms can be improved....
I've got one better: Scratch. The propellerheads over at MIT came up with this one. IIRC it's based on Squeak and the visual editing eliminates syntax errors so kids can concentrate on the programming logic. I've used it with my students at school and it has been easy to write whiz-bang-BOOM projects with a minimum of effort while teaching real programming skills.
scratch.mit.edu
I haven't tried it, but it's from MIT and supposedly is awesome....
Alice is cool, but the main problem is that it is really only useful for animation. The only aspects of programming that it teaches are sequencing of actions and loops.
I tried making some simple games with it, but it gets incredibly complicated just trying to do the simplest things.
I've also used Scratch, which is basically a 2D version of Alice, and I think it's much better. It's relatively easy to do cool things with it.
I want to join the fun. Why not start with brainf--k. It is also a good way to understand computer, but from a different perspective.
No, seriously. If you want to learn how a computer works, use assembly language or machine code.
If not, choose ANY language. Everything is abstract anyway. Find a language that suits YOU and I mean YOU and don't care about everyone telling you which language paradigm you should use. Being the most popular doesn't mean anything.
Everyone tells you that you should use OO. If you like it, that's great, but it's only one way of many. If you want to use true OO you have to use Smalltalk anyway, but that's another story.
You can program everything with any language and any paradigm. And there is way more than imperative, object oriented and functional.
Last but not least, you can use nearly any programming language to program in any paradigm, either by using it in a specific way or by creating your own OO or functional framework on top of it.
There is always a new hype coming out. Most programming languages never die. See Smalltalk, Tcl and Perl. They aren't hyped and are often declared dead, but they are very healthy and have many new developments. New means cooler/hyped (for a while) and older means that you have more resources. See Perl's CPAN, it can easily beat any cool new language.
I know what the author meant and I mostly agree, but in the end the developer has to choose the tools on their own. You will learn multiple languages and most of them don't die. There are always new BASIC programmers, like there are always new programmers in all the other (currently) not-so-hyped languages. BASIC, Perl, Ada, LISP, Smalltalk, ...
All these languages are very healthy and there are tons of good reasons to use them instead of any other. Maturity has benefits, like youth has. All of them are good for learning, understanding and using in everything from for-fun programming to mission-critical, productive stuff. All of them can be and are used from small scripts to huge projects.
It's the same thing with Ruby and Erlang, which had quite a lot of hype in the last few years.
So don't waste your time. Choose a first language. You can always learn different ones later and knowing multiple languages gives a big benefit using any other language because you learn multiple ways to think about a problem and how to solve it.
One thing that I think makes the learning curve so steep is that there's not a lot of good resources for kids to learn programming with.
I had RUN Magazine, Compute's books, Assembly Language for Kids (assembly language for kids! Just imagine trying to get that into a school library these days!), could read about TRS-80 Basic in Tandy comic books, and could find Basic programs as an integral part of choose-your-own-adventure games. It doesn't hurt that, as a quasi-monopoly, having BASIC as the "only" choice for computers meant you could count on someone being able to run a program.
These days? Nothing like that. Programming is, at the earliest, a senior-level elective in high school, not part of the elementary school math department. (God, I could tell horror stories about my Visual Basic class.)
Heck, look at the front page of Google results for "Learn programming." The top article tells you that you can't learn it even in ten years (a wonderful way to quash hopes of kids - give up, even adults think it's too hard). One uniquely worthless Wikihow article. Two resources for experienced programmers. Two articles about what's the best language to learn. Two commercial sites promoting some company's special snowflake. Only one (Cprogramming.com) has actual content about learning to program for the first time, and even then it's not the core thrust of the site.
You want to make programming safe for the next generation of nerdy kids? Write, for god's sake. We could have the objectively best programming language that can ever be made, and it wouldn't matter if they don't have a good clean way to go from complete ignorance of programming to the point where they're informed enough to explore on their own. Write what you know - and write it clearly. Collaborate with others to make your text no more obfuscated than it has to be - and don't be afraid to use a silly picture of a dragon to illustrate key concepts; I still think about assembly language in that way, thanks to Compute. If you feel really ambitious, bribe some elementary school teachers and try to expand your tutorial into a full-fledged set of lesson plans to go along with your well-written, well-illustrated tutorial.
Fill Google's front page with well-made resources instead of articles that are worse than useless, and that's how you're going to keep kids coming into programming.
Very well said. I don’t claim to know what’s right for the kids of today to learn though. I learnt on a C64 which was right for the time, but I wouldn’t suggest that now.
It’s not programming in the same sense as this article, but I am filling the first page of Google with content that teaches what I’m interested in.
If it's not a joke, then this is the most appalling thing I've ever heard.
No need to sightread, or even read music before being a true musician (e.g. Pavarotti) but playing an instrument is mandatory and in that respect, Pavarotti buried any living singer of his time.
@ unclefester: could you please confirm what you said? you're really frightening me with that.
I do believe that line-by-line interpreted languages like BASIC or Pascal are a good way to learn programming... the only problem is that these classical languages are becoming useless. It might be a good idea to put a free version of Visual Studio on little Johnny's computer and start him on Visual Basic.NET. This way Johnny is learning a more modern way of thinking about programming (object oriented) now, instead of having to force himself to change later. You can still learn algorithms and programming practice... but learning this in the framework of an object oriented environment is much more powerful and future-proof. Unfortunately, if you have a Mac, I don't know what programming environment would be the equivalent... a C++ (Xcode?) compiler I guess... perhaps dual boot Windows to fool around with VB.NET.
Come to think of it, one of the first problem solving I did with computers was a robot combat sim. Basically you write the AI for a robot in an arena with other bots; it can move, scan and fire. It also received damage events. All in all a pretty good way to learn. I honestly don't remember the name of it but likely it still exists in some form or another.
Have to say, this is not really true for Windows. VBScript is a pretty unholy thing in modern language terms. But if you're young and just want your computer to do something, all you need is Notepad.
Using VBScript, you can perform a pretty wide array of stuff. Of course I mainly use it for planting practical jokes on my mates' computers.
Here's one I prepared earlier:
1. Start Notepad.
2. Enter:
Dim szYou
szYou = inputbox("What's your name?")
If Trim(szYou) <> "" Then
    msgbox "Hello " & szYou
Else
    msgbox "Fine, be that way"
End If
3. Save the file, and then double-click it.
Man, back in '88, I first coded in GW-BASIC using my Epson Equity II, with dual 5¼ inch floppy drives, no hard drive, and a monochrome monitor. Yeah, it was great. It had nothing useful on it, so I learned to make it myself.
With the computer came a huge three ring binder book with GW-BASIC reference material. More or less a corporatey textbook which, at 8 years old, I read, studied, did the examples from, learned how to do things and made programs I thought were cool.
I made a text adventure game set at a carnival where, usually, the person died several times, at which point it would play the funeral march - which, thinking about it, is pretty impressive for an 8 year old. I used the chart of notes to Hz in the book, found the notes by humming the tune a thousand times and picking out the correct notes on a piano (no midi files to look up, yo!). It ruled.
Unfortunately, my love for it waned as I got a Nintendo soon after. Though as an adult I do still program in C++, I wish I had kept up with that love (other than at a hobbyist level; I made programs every so often but they weren't terribly complex).
The author is right: if I had started with C or C++ I would have been flustered and would have quit. I waited until 1992 to learn C - then C++.
I think learning GW-BASIC first - learning by example, playing, hacking code away - is what did it for me. It was still a full featured BASIC language set. While Small Basic seems cool when I played with it, it feels stripped down, especially because learning to code in DOS mode is much easier than in a GUI. Small Basic does it right, but stripping down .NET might not have been the best approach.
It is a shame Pascal isn't used much anymore as a structured language to teach students, as I learned C++ much better because of it.
Johnny can't code because public schooling in the USA is only geared towards socializing children into productive worker bees for the large corporations, NOT for teaching independent thought. Kinda hard to learn programming when school only teaches you how to OPERATE MS-Word, Excel, etc.
As others have pointed out, what matters is thinking about a problem in a competent and ingenious way, not the particular programming language. I am of the school that the programming language should aid in developing this faculty; a choice of a pragmatic language (e.g., C++) can always come later.
Start with Haskell and Smalltalk; learn assembly when learning about hardware.
Maybe I missed the point of this article, but wow, what a huge UI design problem we face. The save icon has always relied on a subconscious connection with a file being placed on a floppy disk. What does that mean to a 5-10 year old who has just entered the tech field? What universal icon could possibly exist? How can you accurately convey the idea or concept of 'saving' in a world of cloud computing? Wow. Just wow.
I have not read all posts yet, so I react to the article itself, rather than to some concrete post here.
The author is kind of right. I learned BASIC on some old Czech communist-era computer called the PMD-85 :-) And it was really fun - we started exactly by "moving pixels" around, and we produced very simple and primitive games.
Then I moved to the ZX Spectrum, and later on, the Amiga. I still remember AMOS Basic. Man - you could do sprites, sounds, simply complete games, but still in a very simple to understand BASIC.
Well, some educated ppl (which I am not, I am a self-taught IT person) might claim that it gave you some bad habits. But you know what? I don't have much respect for such an opinion. And why? Because I still think that ppl should learn very basic things about computing.
I am still sorry I didn't learn ASM/C a long time ago, because it would have given me a much better understanding of higher level languages. Now you might ask, how is BASIC related to C? It is not, but it is good enough for the first steps in computing. Anyone trying to introduce kids to OOP or web programming as a first step in understanding programming technique is imo doing a very bad job.
I am a REBOL addict. But I know exactly where my influence comes from: BASICs (ZX Spectrum, Amiga AMOS) and Matlab - you know what those share? Simplicity. I was amazed by Matlab - you could do cool things with just a basic understanding. You know - you don't need OOP or any other modern crap to make useful things, right?
So - my final word is that there actually might be some good "first languages", and unlike the author, I would not discount scripting languages, be it REBOL, Python, whatever - simply put: start a console, and write some useful little scripts which automate the system for you. That is how the Amiga grew - via ARexx...
For some reason, nobody else before this post has mentioned the most important thing: choice. Which one? All of them!
Back in the 80's, not many choices existed: you had about the same BASIC language available on every computer, with minor variations, with the Macintosh being perhaps the first computer sold without a BASIC interpreter included. The operating systems were relatively primitive, and fairly simple, but also limited: again, choice, or the lack thereof. In addition, getting any truly "useful" software or major game was more likely to cost a lot, compared to relative costs of today: there simply weren't free alternatives, beyond a sneaker net of pirated apps and games. If you wanted something, you had to do it yourself, if it didn't exist already, and/or it cost more than you were willing to spend: you had to choose to buy it, make it, or steal it: you had to make a choice.
Software back then wasn't nearly as complicated as it is now, because, frankly, the microcomputers of the time couldn't handle anything all that complicated, due to CPU speed and RAM and other resource limitations: you had a choice of doing very little at a time.
Software wasn't nearly as readily available back then, even via sneaker net: there's only so much bandwidth you can do via sneaker net, and dial up modem (very few people had anything better, if they even had that) was painfully slow. Of course, once again, software was much smaller and simpler then. But, it wasn't very well-publicized, and to access such things back then, you had to learn a lot more just to get online: you had to make a choice.
Now, with open source software so readily available, with it doing so much of what people want, scratching their itch for functionality, people can still make a choice: get the proprietary solution via paying for it or stealing it (always will remain as options, I believe) or... go honest and get the open source software: it's EASY compared to what it used to be, as bandwidth is rarely the limiting factor for getting it and installing it: it's all a matter of spending the time and energy to understand what you're doing, possibly grokking the instructions, maybe even looking at the code. Which language is it in? Most people... choose not to care. Those that do, well, there's so much.... choice as to which language(s) a project may be implemented in. If you choose to start developing or just even learning how to code, which language is best? You have to make a .... choice. Back in the 80's, for the most part, unless you spent a large amount of money, you had a native version of BASIC, or a native version of machine language: not much other... choice available. These days, there are umpteen different languages available, even simple ones: the only problem is... choice, and having so many to choose from.
Now, couple all those issues, together with the fact that what people expect out of computers is so far removed in terms of complexity from what people expected of computers in the 80's: the barrier to writing code to do something that's beyond hello world is much higher, not because there's nothing available for learning it (there are lots of great books for beginner programmers, really!) or for having hardware (adjusted for inflation, likely far cheaper now than then, for a basic system) or for software tools (how many people these days buy basic development software, say, compilers, or IDEs? Not many!) or for what languages can do in their environments (quick, think of all the things you can do within Visual Basic for Applications within Microsoft Office: it's far more than I could have ever done in Applesoft BASIC, even without leveraging all the nifty features of the system) but a matter of choosing to learn how things work at a low enough level, and then how they fit together with programming constructs.
In essence, to boil it all down: because so much is available now, compared to then, people don't need to imagine what computers can do, they can see what they can do, and chances are, the solution already exists: they choose not to go into it, because they're not motivated enough to do work that everyone else and their dog has already done, when they can easily get that work done for them. Those that really find the whole thing interesting, and have any aptitude, they'll make the choice to find the way: that's far easier now, in some respects, than it was then, and far harder in some respects, too, but it's a double-edged sword, and cuts both ways. On the one hand, you need to learn a lot more about doing such things as GUI design, on the other hand, most any programming language these days will allow you to do far more complicated data structures and algorithms with the base language and libraries, or write your own, because the available languages are so much more capable and expressive, so someone with a fairly small amount of time spent, and self-study of basic data structure runtime and space complexity and knowing what they're called, can do far more interesting applications in a short time than they could have done with any BASIC from the 80's or before.
I learnt with Basic and Logo. I think Python and Awk are also good starting points. What is interesting is writing a single line of code and seeing immediately a result on the screen, unlike many programming environments where you have a lot to do before you get anything out.
When I was a child, I learned programming with Delphi. Even now, I still think that if the language was still taken care of, it'd be a very good start to learn programming. You can make shiny GUI applications with buttons right away to impress your friends, yet you have the power of a serious language under the hood.
Now, I'd rather suggest python, like others. It's not as good as Delphi because you keep wrestling with the command line for tasks where it's not suited. But it's the best compromise in terms of general-purpose language nowadays, especially since it teaches good things like indentation right away.
However, when you look at the computing world today, it's clear that programming is quite kid-unfriendly. That's because children are now taught that computers are a sort of typewriter-video game console mix. The "programmable" aspect of the thing is not taught anymore; instead people are encouraged to let other people code for them and buy the resulting product. The consequence of this is that people just see computers as buggy modular appliances and don't understand why they have to use such a boring thing. If this way of thinking carries on, real computing will definitely leave the home market and be replaced by iDevice-like content consumption devices. Which is sad, because those things are just overpriced castrated computers...
Actually, it is. I received, 4 hours ago, an email from Embarcadero about Delphi 2010.
Two of my most used apps are written in Delphi: Subrip (a few years old) and FreeCommander (1 year old).
The IDE is still taken care of, true, but who, outside of the professional world, still uses Object Pascal? Most of the freely available Object Pascal libraries for Delphi, last time I checked, were crappy ports of C/C++ libraries. I say crappy because it wasn't true Delphi code, but rather C code with a Pascal syntax. You had to use C-style strings (GetMem + StrPCopy madness ensues), pointers, and so on.
The beauty of Object Pascal is that it manages to be a clean and complete language. Bolting C-style functions onto it destroys its cleanness...
The second thing that made me move from Delphi to Python, some years ago (before I started learning C), was the lack of a good Linux IDE. Sure, there was Lazarus, but I don't call software which makes 6 MB binaries a good IDE... And Kylix was a joke to begin with; I managed to get it running properly exactly once.
Check this out:
EDIT: Oh, and this:
I know about that, but do they now have libraries that are properly ported to Pascal instead of C libraries with a Pascal syntax? And does Lazarus finally manage to make binaries that don't weigh several MB for a hello world application on Linux and Windows now?
Please read the parent post fully.
Well, have you tried it lately? And, of course there is also this:?
I loved playing with QBasic in elementary school. I'm not much of a programmer, but I've been recently playing with Processing (). It's an open source derivative of Java created to allow non-programmers (primarily artists and designers) a straightforward but powerful way to get into programming. It allows for the kind of instant gratification that BASIC offered.
I'm not sure the comparison of a guitar to BASIC fully lines up. The guitar varies WILDLY in construction: although the primary distinction is between acoustic and electric, individual components and construction materials vary greatly, and even the method by which they are played differs. Much of this stems from how different cultures have interpreted it, and many guitarists are quite flexible in the sort of music they play, since music written or transcribed for it (especially compositions written for lute) is so prolific.
The cello, as a member of the viol family, has changed very little in construction and method of playing for hundreds of years. To my knowledge, it has not even been electrified/amplified the way the violin and the double bass (standup double bass, not guitar) have. Furthermore, those instruments have been adapted to other musical genres (think violin as "country fiddle" and plucking standup bass in jazz), while I sincerely doubt that the typical cellist plays much more than classical genres (Baroque, Classical, Romantic, etc.) on a cello. Even its appearance in instrumentation for other genres usually takes on a very classical tone.
The comparison is limited to the "instant gratification" aspect.
Guitar is hard and takes years for the average person, just like any other instrument; 6 years after starting I still have problems with fast chord switching and some barrés, I still miss and mute strings, etc.
However, strumming a chord is easy and a C chord makes five strings (!) sound at the same time, which is a rich sound. Switching to an Am requires moving only one finger. Another switch to Em is straightforward. C, Am, Em, Am is not a common chord progression but it's easy and beautiful, even more when played with arpeggios. For me it took the opening ceremony of the Athens Olympics to have the C and Am arpeggio.
Another example, sounding a string is putting one finger on the fretboard and plucking the string with a finger of the other hand.
I don't see any non-keyboard or non-fretted instrument competing with that in terms of "instant gratification", although the "instant" is of course in comparison with bowed instruments, where before playing a sequence of two notes you need to learn the correct posture, correct handling of the bow (both in pressure and position), and have an ear (which I've never had) to know whether you've hit the right frequency for the note you want to play. Even tablature is more easily read by a total newbie, like I was, than standard notation.
That's where the comparison started and ended. Other than that, I agree with you.
I wish people would actually read the context of a comment before responding to it. Nobody was talking about how good or bad VB6 is; what we were talking about was better tools than BASIC for getting immediate results in programming. Your comment added nothing to the discussion, and makes you look like an idiot.
What do you think sounds more likely to capture the imagination of an 8 year old?
"A function pointer is a variable that stores the address of a function that can later be called through that function pointer. This is useful because functions encapsulate behavior."
or
"Would you like to draw some crazy dots and lines that wiggle around the screen at your command?"
My parents got their first computer, a Mac Classic, in 1991 when I was 9 years old. It came bundled with a copy of HyperCard 2.1. My father noticed my interest in the computer and found an article in a magazine with an example HyperTalk script. We tried it out together and a few more examples from the user manual. Then I continued exploring on my own.
This is before I had learned any English. Fortunately both the application and the manual were in Swedish and thus the script language itself was all the English I had to deal with.
Most of the time I was content making simple point-and-click adventure games where you'd explore old castles and fight the skeleton from the clip art gallery.
The HyperTalk scripts seldom did more than transition to another 'card' or display a sequence of cards to create an animation. Some of my friends got inspired and started making HyperCard games of their own.
Unfortunately I got lost in Klik & Play. While I did have a lot of fun making games, Klik & Play proved to be quite limited and very buggy and ultimately it felt like a dead end.
I returned to HyperCard and began writing more complex adventure games (that never got finished) plus some other things like e.g. a Mandelbrot renderer. I also taught myself HTML, CSS and TI-83 Basic.
Then Apple released Mac OS X and with it came Project Builder (now Xcode). Finally I had the "real tools" I had been waiting for. Sadly it seemed all tutorials on Objective-C assumed prior knowledge of C or C++, something I didn't have. Thus I progressed slowly and not very far – not until I started studying computer science at the University. But that's an entirely different chapter of my life.
Abandoning HyperCard is in my eyes one of the worst things Apple have done.
While things have gotten more complicated I do think kids today have one advantage – the internet! When I was a kid I had to connect through a modem, pay for every minute that I was connected and then disconnect as soon as my parents needed to use the telephone. I didn't prioritize asking for advice from random strangers on IRC.
Well, that's my story. Hope I didn't bore you too much.
If you like "Small Basic" then you might also like to try SmallBASIC
Runs on Linux, Windows, PalmOS and even the Franklin eBookMan. Comes with tonnes of example programs, all in a very small footprint.
In my opinion, it is not about the language or IDE. It is about the gap between those people who know their problems and have an idea how to solve them, and those who can code. The bad thing is, there are too few people in both crowds.
The solution is not to teach people an academic language and the whole shebang of different coding models. Give them a tool where they start with an already functional GUI application. Something to _play_ with, and an easy, graphical way to handle it, yet able to use an editor to code when you mature in the field.
And it should be free in both ways.
I think that Adobe Flash offers one way to do it. And as I know FileMaker, I look at FileMaker more as a RAD tool, not a database. But those are not free and don't run everywhere.
We need an IDE which is already a functional application and the foundation of new apps, that has everything we need, like database access, HTTP connectivity, ... I don't mind which "script language" there will be, as long as there is one.
I don't know Revolutions. Does that work that way?
At least I did. It has everything. Sure, I started with BASIC on my C64, but I quickly dropped that like a brick when I had access to MONITOR and could mess with assembler. But that's offtopic.
Later forays into coding were with Turbo Pascal. Pascal has given me habits (neat indenting, declaring everything, good procedural coding, thinking things out before you implement them, etc.) that still appear in my non-Pascal code today. And the compiler was fast.
Heck, Turbo Pascal had pointers, object oriented programming, inline assembler, pass by value/reference, and it was easy to create your own units (libraries) so you could easily reuse your code. What more do you want for a language to start out with?
Screw BASIC. Here's to Pascal!
Basic is probably not the best choice. You can do line by line structured programming in Pascal using the Free Pascal compiler. Pascal is as easy to learn as Basic, yet is closer in form to C or Java than Basic, not to mention you get natively compiled executables with Pascal that don't need any framework or interpreter to run.
I started off with BASIC on 8 bit computers. Yes, Basic is crude but it taught me some basics:
- Decision making
- Using loops
- Use subroutines and functions where possible
- Minimise use of goto where possible
- Economic use of variables
- Syntax and command formats
Fortunately, without having to use advanced languages (which can be learnt later), I suggest QBasic (QB64) from:
I know it's Microsoft endorsed but has anyone tried Small Basic?
It looks pretty kid friendly.
The majority of people are non-programmers, but still need to automate some tasks for daily use. Applications like OOCalc and MS Excel, for example, offer quick and easy scripting in BASIC. BASIC offers quick and dirty coding, and that is what the casual coder needs. I didn't use the term "programmer" intentionally. They won't get their creative abilities tainted, because their creativity is directed towards quite different areas.
So, teaching BASIC might help perhaps 10 or more people for each one who might (or might not) encounter difficulties in a programming career.
I am trying to find some embeddable BASIC interpreter for my own Java applications, to offer some level of customization to customers. Most of them employ someone with rudimentary knowledge and experience in coding. Unfortunately, I've found either GPL-ed interpreters or tools that compile BASIC instructions into Java Bytecode, both of which are unsuitable for my needs.
I'm in a situation where I am not a developer, but from time to time I need to extract data and generate reports from massive logs, or automate tasks or do math.
I grew up, like many here, with a Commodore 64. I ran a BBS for awhile back in the 80s on C-Net, which most Commodore people will remember if they dialed out. The BBS software was written with the lower level routines in some kind of compiled format (maybe assembler?) and whatever could be done in BASIC was done in BASIC.
This allowed for extraordinary extensibility and modification of the code. As such, almost everyone who wanted to run a BBS learned Commodore BASIC as a matter of course.
I think what I learned is still applicable today. If I decided to be a professional programmer with my lack of formal training and skills, professionals would be right to scoff, but you know what? My code *works*.
It may not be the fastest code. But it is well commented, even if amateurish. I have several dozen scripts (mostly Perl) I've written that I use daily, and all do the job they were written to do.
I think if you're going to be a professional developer, you have a whole host of separate concerns. But I think one thing we have lost is the idea that you can be a non-technical person - hell, you could be out of the workforce entirely - and still benefit from knowing how to automate tasks in whatever language works for you.
And I say this because I was maybe 12 years old when I learned Commodore BASIC from the manual that actually came with the machine. And I've been automating tedious tasks ever since. You could own a floral shop and probably find ways to save yourself time if you just knew a bit of programming or scripting or whatever you want to call it.
I've long observed this: suppose you encounter an inefficient, repetitive user behavior which could be automated away, and you do some math with someone who is not computer savvy, calculating that:
8 hours to learn enough of PHP or Perl or Python to automate a task, plus another 8 to actually write it = 16 hours.
If over the course of the year, the person wastes 100 hours manually repeating the task, they will still look at the 16 hours it would take to automate it as an unreasonable, insurmountable time investment, and choose to waste the 100 hours instead.
This isn't always true of course but it is much of the time.
This has never made sense to me, especially when you consider the value of the 8 hours spent learning a language, which will be directly usable in the future (by itself, or to be built on) to automate other tasks away.
I just kind of wish basic programming was part of any office's universal skill set, especially when I note that in any company I've worked for, there just aren't people employed for the purpose of automating away individual employees' annoyances. Developers anywhere I've worked are involved in building and maintaining foundational enterprise applications. If you're lucky and you make nice, maybe you can get one to do you a favor, but that's about it.
There is too much criticism of entry-level languages by professionals. It's like being a Formula One driver and deriding Vespa scooters as "underpowered." Languages serve not only different needs in a programmer's toolset, but they also appeal to different populations.
As much as people freak out every ten minutes about PHP, there is nothing wrong with an amateur with a personal website dropping a simple static include("footer.html") statement in the bottom of an HTML page. That's what it's there for.
I would love to see a return to the idea that every kid should learn to program, and I see no point in BASIC anymore. Is basic Python or even Perl much more complicated than BASIC? I don't think so. It's also relevant, and portable, and cross-platform, which the BASICs I grew up on were not.
At the company I work, the only language I can be positive will be either installed or available on all machines is Perl.
In the 80s, we had to take computer programming classes, in labs full of Apple //es. It would be nice to see something similar, maybe with Python, integrated into math classes, for all students.
Choosing a script language
I had many choices of languages I could support. The Search API runs on the JVM, so I could simply load additional jars. But that requires compilation, and I wanted something users can simply edit, save and reload to test. Some kind of script language would work better. The Java ecosystem offers many options (Jython, JRuby, Groovy, several JavaScript runtimes, etc.). In the end, I settled on JavaScript, mainly because working with the JS UI already requires JS skills.
There are several options for running JavaScript code on the JVM. At the time I started working on that, Java 8 was not yet released and thus the standard way to go was to use the Rhino engine that comes standard with the JDK (vs the new Nashorn engine in Java 8). So I started with Rhino.
Pretty quickly I had the basics working: a query pipeline folder could contain a main.js file that would be loaded at initialization, and to which I could expose ways to interact with the query pipeline when queries are executed. Whenever the JavaScript code changes, it’s automatically reloaded so that the next query uses the latest code.
I could now do stuff like this:
Coveo.onResolveIdentity(function (authenticated) {
    var windowsDomainUser = new UserId();
    windowsDomainUser.user = 'DOMAIN\\someone';
    windowsDomainUser.provider = 'Active Directory';
    return authenticated.concat(windowsDomainUser);
});
This code adds an additional identity to use when executing the query.
Coveo.onResolveIdentity is a method I implemented on the Scala Search API code. It gets passed some kind of handle to the JavaScript function, and later on I’m able to call this and collect the result (with lots of marshalling).
Runtime libraries
I was pretty happy at this point, and then I started thinking: “How do I allow the JavaScript code to perform more sophisticated stuff, like reading files, calling stuff over the network, etc?” Code running under Rhino does have access to the Java libraries in the classpath, but it felt weird using Java libs in JavaScript. People won’t be expecting that.
Another option was to provide 100% custom APIs for performing the common tasks. This would require quite a large amount of work, would be completely specific to our environment, and I’d also need to document all this. Hmm. Honestly, I prefer writing code.
Then Greg on my team mentioned it'd be nice if we could make use of libs from NodeJS. As a matter of fact, I had previously investigated whether I could somehow embed the real NodeJS runtime inside my process, but it turns out it's not quite possible yet (Node being essentially a single-threaded system). It's also not very convenient to call Node as a child process — I wanted to be able to expose a Java object and have my script code call its methods. So Node was out. But then Greg asked, isn't there some Java implementation of Node, like there is for Python, Ruby, etc.?
Well, it turns out, there is.
The Nodyn project aims to implement a NodeJS runtime on the JVM. It's being built by some nice folks working for Red Hat. It hasn't reached its first release yet, but they already have quite a lot of the core APIs working. Also, they support using packages from NPM, so any package that doesn't depend on APIs that aren't implemented yet should work fine.
But there was one problem: they don’t use Rhino. Instead, they use the DynJS JavaScript runtime (built by the same folks). So before I could try using Nodyn in my stuff, I had to port all my JS runtime code to work with DynJS. In the end, it wasn’t very hard, and in fact, I find the DynJS API to be much nicer than Rhino’s, so even without Nodyn the switch was a plus.
Then, from there loading the Nodyn environment into my JS environment was very easy:
runtime.global.defineGlobalProperty("__dirname", System.getProperty("user.dir"))
runtime.global.defineGlobalProperty("__filename", "v1")

val nodeJs = classOf[org.projectodd.nodyn.Main].getClassLoader.getResourceAsStream("node.js")
runtime.dynJs.newRunner().withSource(new InputStreamReader(nodeJs)).evaluate()
That’s it. Well, of course I’ve added a Maven dependency to the Nodyn lib. But then I only need to arrange for the embedded
node.js file to execute whenever I’m initializing a JavaScript context and then I’m good to go. Suddenly, I could do stuff like this in my JS code and everything would just work:
Coveo.registerQueryExtension('stock', function (query, callback) {
    http.request('', function (res) {
        var json = '';
        res.on('data', function (chunk) {
            json += chunk;
        });
        res.on('end', function () {
            var data = eval(('' + json).replace('YAHOO.Finance.SymbolSuggest.ssCallback', ''));
            // Very secure, please do that at home kids.
            var tickers = _.map(data.ResultSet.Result, function (result) {
                return result.symbol;
            });
            console.log(tickers);
        });
    }).end();
});
Yay! No need for me to provide custom calls to perform everything. Customers can simply use stock Node APIs or use packages from NPM. My empty sandbox suddenly had thousands of libraries available.
Handling callbacks
One thing about Node: it never blocks threads. Well, mostly. That's why it scales so well. So, pretty much all code written for Node uses callbacks for about everything. This means I had to handle that as well in my code.
In my first implementation, when a JS function returned, it was expected that the return value was definitive and fully computed. It made it impossible to use Node APIs that use callbacks to signal that an async operation (like an HTTP request) completed.
Right now, the implementation for the Search API’s REST service doesn’t use an async model. This means a thread is allocated to each request and is kept in use until the processing is finished. I want to change that at some point, but for now any remote call will simply block the thread.
I needed a way for my main request thread to block if the JS code I’ve invoked is executing some asynchronous process. Also, I wanted to keep support for synchronous usage (e.g. returning a value directly), because that’s often just simpler and simple is good.
To address this, I arranged for all my calls to JS code to go through a single call point that checks the function being invoked from Scala code. If the function has one more formal parameter than what's expected (based on the params I'm passing it), I assume the additional parameter is a callback, and I pass it a special object. Then, if the function doesn't return anything meaningful (e.g. undefined), I block the main request thread until the callback has been called or a specific timeout expires.
Here’s the code (well, part of it):
def call(runtime: JsRuntime, function: JSFunction, theThis: JSFunction, args: Seq[Object]): Object = {
  runtime.executeInEventLoopThread(defaultTimeout)(cb => {
    // If the function takes one more parameter than what we've been provided, assume
    // it's a callback and enable waiting on it. Looks like a hack, but hey why not?
    val jsCallback = if (function.getFormalParameters.length > args.length) {
      Some(new JsCallback(runtime.global, cb))
    } else {
      None
    }

    val result = runtime.context.call(function, theThis, args :+ jsCallback.getOrElse(Types.UNDEFINED): _*)

    result match {
      case Types.UNDEFINED if jsCallback.isDefined =>
        // When a function returns undefined, we assume it'll call the callback
        // function that was provided to it automatically. There is no need to
        // wait in this thread because executeInEventLoop will do that for us.
      case other =>
        // Otherwise, when a function returns a value we call the callback ourselves
        cb(Right(other))
    }
  })
}
Under the hood, Nodyn uses Vert.x for providing event loops usable for async operation (among many other things). So, every time I make a JS call, I arrange for it to happen in a Vert.x event loop. Per design, all subsequent callbacks are invoked in the same event loop (e.g. no parallel execution). So I only have to wait in my main request thread for the result to be available.
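From the script author's side, the resulting contract looks like this (an illustration only; someAsyncLookup is a made-up stand-in for any Node-style async call):

// Synchronous style: return a value and it's used as-is.
Coveo.onResolveIdentity(function (authenticated) {
    return authenticated;
});

// Asynchronous style: declare one extra formal parameter. The Search API
// sees the extra parameter, passes its special callback object in it, and
// blocks the request thread until the callback fires (or the timeout hits).
Coveo.onResolveIdentity(function (authenticated, done) {
    someAsyncLookup(function (extra) { // someAsyncLookup is a made-up helper
        done(authenticated.concat(extra));
    });
});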
Using NPM packages
At this point I started showing this to some coworkers and PS guys (professional services consultants). One of those conveniently had a need to override some stuff based on data retrieved from a SOAP service ಠ_ಠ, and he was willing to beta test my stuff. There are many libs in NPM to call SOAP services, and in the end it boils down to making an HTTP request somewhere. Should be a piece of cake, right? I mean, I did that at least once.
Well, not so fast here cowboy.
As I said previously, Nodyn hasn’t reached a first release yet, and this means there are some rough corners. In particular, it had issues with the request NPM package, which the SOAP library used under the hood. I had to fix a couple of glitches in the Nodyn and DynJS code to get the package to work as expected. I’ve submitted those changes to the maintainers, and the fixes are now merged in the official code.
A more annoying thing was that request seems to access undocumented fields of Node's HTTP request, which for obvious reasons aren't present in Nodyn's implementation. For now I worked around this by "enhancing" the Nodyn objects with some stubs (this only when running in my environment, since it's too ugly for a pull request). Still, I'd like to find a better solution to this. The Nodyn devs are currently rewriting the HTTP stack directly on top of Netty, so I'll wait a little and then check if there's something better to do.
UPDATE: I just learned that the Nodyn devs switched to a different approach for implementing Node’s core APIs. Instead of replicating the user facing APIs with their own implementation, they now only implement the native APIs from which Node’s JS runtime depends. This means they are now using Node’s own JS code as-is, effectively eliminating this family of issues once and for all. Great!
With those changes, I was able to build a client from the service’s WSDL and use it to call some methods. The only problem remaining is performance: the parser that processes the WSDL runs pretty slowly on DynJS. Right now I’m not using the JIT feature of the interpreter, because I had a weird error when I tried it, so that might explain the performance issue.
In any case, I've seen mentions that Nodyn might also support Nashorn as a JS engine in the future, which should take care of performance issues if I can't get DynJS to run faster. Also, the problem really happens when CPU-intensive work is being done in JavaScript, which often ain't the case anyway.
Of course, I expect other issues to appear with other NPM packages. I’ll try to address those as they come. Still, what’s already working is a pretty interesting addition.
As everyone knows, unit testing is a great way of ensuring
that code actually does what it claims it does, and that over time, as
the system changes, it carries on doing the same thing. Formal processes
like Extreme Programming (XP) depend heavily on consistent unit testing.
When used together, object-oriented design and comprehensive unit tests can lead to a very clean design, as the test-first methodology implies a user-based interface design, resulting in a public interface that is simple yet efficient.
However, when it comes to testing, sometimes these clean interfaces are not
as good as they could be. There are often member variables that the test
suite would like to access but that have been scoped private or
protected, and making these members public would
expose the internals, ruining the clean design. C++ has a way of working
around this: by declaring the test suite as a friend class,
the access protection is sidelined. In Java, a similar approach can be
used by making certain members package scope and putting the
test classes into the same package. However, this leads to an
unsatisfactory design, as some members are private or
protected for good reasons, and then an arbitrary set of
members are package scope solely for the current test suite.
However, there is a third option available to Java programmers...
Let's begin by proving to ourselves that a private field is
really a private field, and this is not just something we've all been told. First
we need a class with some private members; this is the test
class I shall use for the following examples.
class FieldTest {
public String publicString = "Foobar";
private String privateString = "Hello, World!";
}
Here we have a simple class with two fields: one private
field and one public field. We would assume that arbitrary
code can access the public field, but not the
private field.
public class Test1 {
public static void main(String args[]) {
System.out.println(new FieldTest().publicString);
System.out.println(new FieldTest().privateString);
}
}
When we compile this we get:
Test1.java:4: privateString has private access in FieldTest
System.out.println(new FieldTest().privateString);
^
1 error
This shows that private fields really are private fields, as
the Java compiler won't allow access to them.
In Java 1.0, the java.lang.Class object was fairly
trivial. However, in Java 1.1, the Reflection API was added. A cursory
glance at the documentation reveals several interesting methods:
getField()
getFields()
getDeclaredFields()
These methods return Field instances (arrays, in the latter
two) that allow us to see the name of the field and its type, and more
importantly, get its value.
It seems that if we can get a Class instance, we can call
getField() to get a Field instance, which we can
use to get at the value we want. And the easiest way to get a
Class object for the class Foo is to write
Foo.class, a construct called a class literal.
Calling getField() looks like this:
import java.lang.reflect.Field;
public class Test2 {
public static void main(String args[])
throws Exception {
Field f;
f = FieldTest.class.getField("publicString");
System.out.println("Public Field: " + f);
f = FieldTest.class.getField("privateString");
System.out.println("Private Field: " + f);
}
}
Running this gives us:
Public Field: public java.lang.String FieldTest.publicString
Exception in thread "main"
java.lang.NoSuchFieldException: privateString
at java.lang.Class.getField0(Class.java:1735)
at java.lang.Class.getField(Class.java:900)
at Test2.main(Test2.java:10)
So what happened here? The code managed to get a reference to
publicString but failed when trying to get a reference to
privateString. The exception thrown was
NoSuchFieldException, but I know it does exist, since I created
it. However, the fine print in the API documentation for
Class.getField() clearly states, "... the specified
public member field ...". Time to try
getFields() and getDeclaredFields().
At first glance, getFields() and
getDeclaredFields() seem very similar. Nevertheless, a closer
read of the API documentation reveals that they are very different.
The method getFields() reflects (no pun intended) what the
Java programmer conceptually sees when programming: it enumerates all
publicly accessible fields in the class and all of its superclasses.
On the other hand, getDeclaredFields() reveals how the class
is constructed. It enumerates fields, but only if they are actually
declared in that class; any inherited fields are ignored.
The reason for the existence of two methods (instead of a single method that returns
all fields, including inherited ones) seems to be so that simple dynamic
lookup of public fields can be achieved easily (using
getField() and getFields()) and generally does
the right thing. If a program wants to see the private fields,
it will probably want to handle inherited fields specially (for example,
an object-oriented debugger).
Exercising these methods is always a good thing, to check they do what we
expect. Here is a test for getFields().
import java.lang.reflect.Field;
public class Test3 {
public static void main(String args[]) {
final Field fields[] =
FieldTest.class.getFields();
for (int i = 0; i < fields.length; ++i) {
System.out.println("Field: " + fields[i]);
}
}
}
The output is rather predictable:
Field: public java.lang.String FieldTest.publicString
Now let's try the same test, but use getDeclaredFields() instead.
import java.lang.reflect.Field;
public class Test4 {
public static void main(String args[]) {
final Field fields[] =
FieldTest.class.getDeclaredFields();
for (int i = 0; i < fields.length; ++i) {
System.out.println("Field: " + fields[i]);
}
}
}
We hope that this will yield all of the fields, both public and private:
Field: public java.lang.String FieldTest.publicString
Field: private java.lang.String FieldTest.privateString
Life is indeed good. Now that we can enumerate all of the fields in a
class, we can get the specific field we are after and hopefully manipulate
it however we want.
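The article continues on a second page, not reproduced here. As a hedged preview of where this leads, the standard route (using only documented Reflection API calls — getDeclaredField(), setAccessible() and get(); the class name Test5 just continues this article's naming) looks like this:

import java.lang.reflect.Field;

public class Test5 {
    public static void main(String args[]) throws Exception {
        // Look up the private field on the class, not just the public ones.
        Field f = FieldTest.class.getDeclaredField("privateString");
        // Suppress the usual language access check (this may throw a
        // SecurityException if a restrictive security manager is installed).
        f.setAccessible(true);
        // Read the value from a fresh instance.
        System.out.println("Private value: " + f.get(new FieldTest()));
    }
}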
# input function
# NOTE: the opening lines of this function were missing from the post; the
# header and the choice/subtotal initializations are reconstructed from how
# it is called below. (The name shadows the built-in input(), but only
# raw_input is used throughout.)
def input():
    """get the customer's order"""
    choice = ""
    subtotal = 0
    htotal = 0
    ototal = 0
    ctotal = 0
    stotal = 0
    ftotal = 0
    ltotal = 0
    while choice.upper() != "A" and choice.upper() != "R":
        choice = raw_input("\nEnter a letter that corresponds to what you would like to order: ")
        if choice.upper() == "H":
            print "Hamburger\t$1.29"
            subtotal = subtotal + 1.29
            htotal = htotal + 1
        elif choice.upper() == "O":
            print "Onion Rings\t$1.09"
            subtotal = subtotal + 1.09
            ototal = ototal + 1
        elif choice.upper() == "C":
            print "Cheeseburger\t$1.49"
            subtotal = subtotal + 1.49
            ctotal = ctotal + 1
        elif choice.upper() == "S":
            print "Small Drink\t$0.79"
            subtotal = subtotal + .79
            stotal = stotal + 1
        elif choice.upper() == "F":
            print "Fries\t$0.99"
            subtotal = subtotal + .99
            ftotal = ftotal + 1
        elif choice.upper() == "L":
            print "Large Drink\t$1.19"
            subtotal = subtotal + 1.19
            ltotal = ltotal + 1
        elif choice.upper() == "A":
            # "A" ends the order; nothing more to add
            subtotal = subtotal
        else:
            print "Please enter a correct choice."
    # grandtotal counts items ordered, not dollars
    grandtotal = htotal + ototal + ctotal + stotal + ftotal + ltotal
    return choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal

# calc function
def calc(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal):
    """calculate"""
    tax = subtotal * .05
    total = subtotal + tax
    print "Subtotal: ", subtotal
    print "Tax: ", tax
    print "Total: $", total
    amount = float(raw_input("\nEnter the amount collected: "))
    if amount >= total:
        change = amount - total
        print "Change: $", change
    elif amount < total:
        amount = float(raw_input("That is not enough money. Please reenter the amount collected: "))
        change = amount - total
        print "Change: $", change
    return tax

# report function
def report(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal, tax):
    """displays the report of sales"""
    print "Item\t\t\tQuantity\tSales"
    print "Hamburgers\t\t", htotal, "\t\t", htotal * 1.29
    print "Cheeseburgers\t\t", ctotal, "\t\t", ctotal * 1.49
    print "Fries\t\t\t", ftotal, "\t\t", ftotal * .99
    print "Onion Rings\t\t", ototal, "\t\t", ototal * 1.09
    print "Small Drink\t\t", stotal, "\t\t", stotal * .79
    print "Large Drink\t\t", ltotal, "\t\t", ltotal * 1.19
    print
    print "Total Sales for Day:\t\t\t", grandtotal
    print "Total Tax for Day:\t\t\t", grandtotal * tax
    print "Total:\t\t\t\t\t", (grandtotal * tax) + grandtotal

# main scope
choice = "yes"
tax = 0  # initialized so report() can run even before a sale is completed
while choice == "yes":
    choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal = input()
    if choice.upper() == "A":
        tax = calc(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal)
    if choice.upper() == "R":
        report(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal, tax)
    choice = raw_input("\nWould you like to enter a new customer(yes/no)? ")
Slashback: Palace, Perl, Coastalism
007 would prefer not to be required to go reinstall Linux. You may recall that in addition to various other pieces of head-adornment, the members of Britain's Royal Family rely on Red Hat, by way of their webmaster, Mick Morgan.
Brian writes: "Looks to me as if the Queen's webmaster is pulling out. See the letter at. Let's hope the new owners keep Linux eh?"
Yet another yet another. Pittsburgher Kevin Lenzo wants you to know that June 13-15 in Montreal marks yet another Yet Another Perl Conference. They're also looking for sponsors worthy enough to fund their deeds of derring-do. Suggested reading includes parent organization the Yet Another Society and YAPC Europe (which will be terrorizing Amsterdam sometime in early August, CFP soon), and darn-near required reading includes the (and I quote) "kick-ass" Damien diary going at the Joy of YAS.
Do you ever suspect that some people code Perl strictly for the interesting conferences? Maddog Hall is sure to be there, so play hookey from work or school to go visit. The announcement reads, in part:
The ALS 2001 Program Committee invites you to contribute your ideas, proposals, and papers for tutorials, invited talks program, refereed papers track, workshops, work-in-progress reports, and symposia tracks. We welcome submissions that address any and all issues relating to Linux and the Open Source world.
The Call for Papers with submission guidelines and suggested topics is now available at.
Submissions are due June 5, 2001
Revenge of the -- oh, I won't say it. A coward who failed to sign his name writes: "DirecTV struck on Sunday a week before the Superbowl and wiped out 98% of hacked DSS cards. Supposedly DirecTV wrote to an area that is write once thereby making the cards go into an infinite loop. Now the hackers have found a way to bypass that sequence in the ROM in the form of a DPBB (Dead Processor Blocker Board). The board has a simple Atmel ROM that glitches past the looped part of the ASIC on the DSS cards. DSS hacking is back."
Re:Sheesh (Score:1)
For all the time and effort spent into developing a hack for DSS, they could buy a thousand subscriptions to the service.
Not if they're Canadian, they can't.
Confession. (Score:1)
Just my take on the matter...
Re:What about... (Score:1)
1st Law Of Networking: Loose ends are bad, termination is good.
Re:This is incorrect (Score:1)
So the cards malfunctioned. They'll be replaced as quickly as possible, free of charge. Presumably, these cards are licensed, not sold, and they are supported by DirecTV.
What about... (Score:1)
Re:why? (Score:1)
Re:Yes it should! (Score:1)
Ah, but you don't really own the card, at least not from a legal standpoint. The card's software was created by DirectTV and as such is licensed proprietary code. As with any software license, they have the right to terminate it at any time. (Read the EULA on your latest game, it's in there.) And since you are not the rightful licensee of the code, they are not required to offer any form of reimbursement.
Of course this is all a moot point since the software code on the card had already been illegally reverse-engineered and altered, and because of that violation of the license agreement is illegal to operate in the first place.
Re:There is such thing as due process (Score:1)
You make a good argument, except for this... The tactics used by DirectTV only affected cards that had been illegally hacked. Anyone who fell victim to the attack by DirectTV had already been "proven guilty" because had they not been breaking the law, no damage would have occurred.
I know it sounds like a cheap argument, but in this day and age it would probably stand up in court.
Re:DirecTV (Score:1)
(The original law was to keep people from "giving" you something you hadn't asked for and then charging you later.)
Re:Going back to Cali (Score:1)
Re:God Bless the CRTC! (Score:1)
I have a river on my property. I can certainly drink it, bottle it, use it to water my garden. But if I am polluting and that pollution is unmanaged, then I am in the wrong.
Similarly, a radio signal that is on my property. I can certainly listen to it, record it, and show it to my friends and comrades. But if I am "polluting" that radio signal, such as to affect my neighbors' reception, then I am in the wrong.
Because a company decided to broadcast its content onto the open airwaves... doesn't mean that I don't have fair use!!! Just because they throw a little bit of scrambling into it doesn't make it any more or less of the content. It is just a format. But our American politicians shield a company from loss by PASSING a law to say that it IS illegal?!
Pan
Your re-reasoning is also flawed... (Score:1)
Unscrambling a signal has no effect on that signal in general, so it isn't even the equivalent of a trash can, more like a stove (utilizes the material present). If, however, you "pollute" the airwaves, by emitting a signal of your own which effectively blocks or prohibits others from receiving the original signal (much like dumping pollution into a stream running through your property), then you are definitely breaking laws against jamming public airwaves.
There is such thing as due process (Score:1)
This is the case because in the USA, we're supposedly Innocent Until Proven Guilty.
This is incorrect (Score:1)
The point I make is that it is not legal in any manner you try to justify it as to deliberately break property not owned by another person/entity. It's called Malicious Destruction of Property, and is a definite criminal act.
I want to stress, I do not, have not, and will not use hacked cards for such a service. I don't watch enough TV to justify the cost of the hacked card. In fact, my TV hasn't even been turned on in over 8 months using plain old broadcast airwave local stations. I just have a problem with a company intentionally destroying property that doesn't belong to them because they feel, despite applicable laws (ie: Canada), that people using hacked cards are criminal and as such fair game for this type of destruction.
Re:DirecTV (Score:1)
Fine with me, if those Airport/pager/cell phone users wanted privacy, they would use strong crypto on the radio waves they're beaming to all and sundry. I reserve the right to do anything I want with elecromagnetic energy that people send me for free, as long as I don't generate any harmful interference for others. I have a right to be an RF receiver!
I'm sure someone will point out that decrypting a cell phone call is illegal, and I agree that this is currently the case. However, I don't agree with this law because it only provides the appearance of privacy protection, not the fact. I could sit in my shed and listen to my neighbors on the phone for years and no one would ever know. Thus, relying on such a law to keep your phone conversations private is a mistake, and I see no reason why I should support such a law.
Not that I've actually done this, but I would have no qualms in doing so if I wanted or needed to.
Re:Congrats to DirecTV (Score:1)
If Jack Valenti sent you a free movie on a VHS cassette, would you watch it? Of course you would.
If Jack Valenti sent you a free movie on DVD, would you watch it? You might have to buy a DVD player, but sure you would.
If he sent it to you over the free radio waves, why wouldn't you watch it? You might have to get some equipment, but there's nothing wrong with watching it. Trust me, somebody already paid to shoot the movie.
Re:DirecTV (Score:1)
OT: Late food [Was Re:Best barbeque in bay area] (Score:1)
For anyone else in the east bay, where do you go late at night? I've spent two years in the area and still have yet to find anything decent/close open late at night (And you can forget about food.com, everything they list closed at 9pm).
And for anyone worried about how seedy oakland looks, it's really not that seedy. Sure, you have no troubles finding someone to sell you crack in a church's chicken box, or a hooker about any time of the night, but it's still a safe neighborhood. I've never had any qualms about walking through about any neighborhood at night, even when my skin is pasty white after 3 days of sitting in front of a terminal with the shades drawn.
Re:DirecTV (Score:1)
I don't think you can make an argument that you are not permitting the signal onto your property if you're making use of the signal.
All your base are belong to us.
Re:DSS is back! (Score:1)
you brought ww2 into this. really though to compare the loss of your precious cable to the lives lost at pearl harbor is sad. thats the point when any rational discussion ended.
Re:DSS is back! (Score:1)
Re:God Bless the CRTC! (Score:1)
lets extend this a bit...
i'm a company who uses river water in a process. hey the company didnt ask the river to flow through the earth and if they want to take that water and add polutants to it then who are we to stop them.
well until we pass laws they can do whatever they want. i realize you live in canada and our laws dont apply, hell i'm surprised that canada doesnt declare direct tv a natural resource. they could harvest it and resell it to the people in the us at a lower rate. they could even use nafta to strongarm the us govt into letting them do it. what a tangent... until laws are passed you can continue to live off of the direct tv tit if you will. just dont pretend you are doing something you're not. you are taking advantage of a service oriented business and not paying for the product you are consuming. the arguments here are mainly justification for taking something...
the truth of the matter is that people in the us are breaking the dmca i believe when they use these cards to decrypt the signals. do i agree with the dmca? not really. do i break the law? yes. am i going to pretend that it's ok to steal software, tv signals, etc because the people i am stealing from are wealthy? NO! people stop trying to justify what you are doing and admit that it is stealing. at the very least admit you are taking advantage of a company.
Re:why? (Score:1)
i realize that the dmca was passed by lobbying by companies. if our sheep like population is happy to sit back and let their rights be taken away, what can we do?
my point is not to blame the companies. they are operating in their interests within the law. if you want to blame someone look at the corruption in washington (i like to imagine a huge fire personally). smile and feel that warm fuzzy feeling your senators leave in your belly when they pass bills in the wee hours of the night. dont say "i have the right to do..." you lost that right at 3:40 in the morning.
why? (Score:1)
lets apply this to your property: either you should lock your door and protect your property or rely on the law and not bother with the lock.
but wait this is information... its different than physical property...
really though. whos to tell a person/company what they can and cannot do with their copyrighted information?
Re:God Bless the CRTC! (Score:1)
i also agree with you the dmca is seriously flawed, and i also believe it is unconstitutional. if it is repealed does it make intercepting and decoding tv signals right? i dont believe so. i still feel you are using a service that you havent paid for. alot of people think that just because you are screwing over a big corporation it is ok. it would be nice to see people protesting by not watching tv instead of stealing it.
kinda off topic: there really isnt that much on tv thats worth watching-imagine what would happen if people started reading again instead of watching tv. you would see nike sending books out with their swoosh surrounding the page numbers. people have turned into walking billboards. it's sad when people will pay more for clothing with advertisements.
Re:why? (Score:1)
Re:God Bless the CRTC! (Score:1)
Re:DSS is back! (Score:1)
Re:DSS is back! (Score:1)
Re:God Bless the CRTC! (Score:1)
I think it is foolish for the DirecTV people to stop a fringe minority of people from enjoying free service, but it is their right. I'm fine with it as long as they don't use the law as a heavy blunt instrument, like the DVD CCA did. Then, it's just a war of brains against brains, hackers versus DirecTV programmers. No one gets hurt, and it's fun.
Re:Confession. (Score:1)
Re:Best barbeque in bay area (Score:1)
Have you tried:
Rudy's Bar-B-Que Pit
4712 3rd St
San Francisco, CA 94124
(415) 282-4539
Best BBQ I've had in California.
The usual warnings about the neighborhood. This one makes me a little nervous, but I haven't had a problem beyond things getting thrown at my car. Probably because of the amount of foot traffic.
I'll be sure to try Bobby's. What makes BBQ Cajun?
Dan
Re:DSS is back! (Score:1)
Re:DirecTV (Score:1)
Re:DMCA? (Score:1)
The story (better to read the earlier stories referenced in several comments) is about the DirecTV company sending out signals in their broadcast stream that mess with hacked receivers. They have done this several times, but shortly before the Super Bowl, they sent out a signal which was physically damaging to hacked receivers. The "smart cards" these systems use to decode the signals have a write-once area on them, and if this area contains certain coding, the card just won't work.
The current story is about the users of hacked systems finding a way to use some sort of emulation intermediate to make their otherwise damaged-broken-dead cards work anyway.
This particular thread points out that a large number of the hackers using hacked receivers/cards to get DirecTV are in Canada, where Canadian laws (probably the "Canadian content" laws) prohibit DirecTV from selling the service. The signal reaches into parts of Canada near the border, but DirecTV is prohibited from providing legit systems for decoding and viewing it. So these people are taking things into their own hand, and we have a bit of a hack war.
ALS (Score:1)
Re:DirecTV (Score:1)
It's not, at least according to them. On each card it states that it is owned by someone else (I forgot who) and is merely loaned to you, subject to recall at any time. It's not illegal to destroy property you own. I don't know how this would stand up to a court challenge, but that's the way things stand right now.
Re:Going back to Cali (Score:1)
Re:DSS hacking makes my head hurt (Score:1)
DirecTV vs. Canuck hackers (Score:1)
I like to imagine some DirecTV technicians and engineers in a dark corner of DirecTV headquarters watching and waiting thinking "damn they got that one quick. what should we do now." and someone suggesting death rays to peals of laughter.
Re:God Bless the CRTC! (Score:1)
Don't forget Microsoft too [crtc.gc.ca]. That's right, if you want to listen to CRTC hearings over the net, you'll need to have a WiMP-supported OS. So much for a commission whose mandate [crtc.gc.ca] includes evaluating and approving open standards (NTSC, FM, etc) to "ensure that all Canadians have access to a wide variety of high quality Canadian programming."
I guess that doesn't include their programming and the internet (ok, hearings aren't "high quality", but still, there's some hypocrisy here, no?).
Re:bbq (Score:1)
Because we sold it to them. (Score:1)
Re:DirecTV (Score:1)
Why not? "I am giving no permission for or even acceptance of their sending this signal onto my property, but since they did it anyway, I am going to use it as I wish. Whatever use I make of the signal, my top preference is that they keep their signal off of my property." What is so hard to understand about that?
Edward Burr
i cant wait for an ALS in redmond ,WA (Score:1)
you think Bill would come out w the shot gun and start bussin'?
DSS is what? (Score:1)
Re:DSS is back! (Score:1)
Re:Confession. (Score:1)
Congrats to DirecTV (Score:1)
This is in contrast to most other companies and organizations, such as Microsoft and the RIAA/MPAA... even companies such as Rambus, Apple, Fraunhofer, etc., who attempt to enforce what they feel is 'their' property not by any technical means, but by patents and lawsuits.
Now, I'm disgusted by, for example, what the MPAA is trying to do with DeCSS; however, I would have a lot more respect for them if they took a more DirecTV-type approach, and tried to figure out technical means to 'throw a wrench in the works'. If they think some of their.. property (? content? data? I'm not even sure exactly...) is being violated, they should be able to protect it themselves. They don't have the luxury of being able to update the embedded software on all existing DVD players; maybe they could just declare that they screwed up, and that DVDs can't offer the kind of content control that they want. (Of course, it's a bit late to propose any kind of alternative, but still.)
In any case, DirecTV's being a good guy throughout this, I would think -- and I might even be buying their systems in the future. (This in contrast to other satellite networks that will be locking out some HD content from being viewed with HDTV equipment that doesn't have media access control capabilities!)
Re:DSS is back! (Score:1)
DSS is back! (Score:1)
Re:Best barbeque in bay area (Score:2)
Re:Best barbeque in bay area (Score:2)
There's also Everett and Jones.
If you want excellent Korean wood-charcoal BBQ, try Koryo.
bbq (Score:2)
flint's barbecue
Ugh. (Score:2)
Yeah, and those of us on the east coast are jumping for joy that there are now 141 Linux shows on the other side of the continent instead of 140; we can rest easy knowing California finally has a Linux expo, at the expense of Georgia. I felt bad they were being left out. Now I can rest easy.
1st Law Of Networking: Loose ends are bad, termination is good.
In California, it *is* legal. (Score:2)
Unfortunately, by the tenets of the DMCA, decrypting the DirecTV signal is illegal, at least for now.
- A.P.
--
* CmdrTaco is an idiot.
Why ALS moved (Score:2)
Here is some info on why ALS moved, directly from Marc Torres himself.
Going back to Cali (Score:2)
Just a wee bit bitter that ALS is moving and it's no longer just a 1.5hr drive.
Re:Confession. (Score:2)
And I personally question the fine print on the cards. If you don't sign any contracts at the time of purchase, and there is nothing on the outer box indicating otherwise, then I OWN the contents of the box. Doesn't mean I can redistribute the IP in it, but it is then my physical property; I can hack (either with an axe or a computer) it, spin, fold, mutilate, or remove tags all I want.
#include <stddisclaimers.h>
Re:DSS hacking makes my head hurt (Score:2)
It's Pierre Litre in Canada who cannot possibly
get the service any other way.
The DSS broadcasters are not allowed to do business in Canada. The signal reaches them
just fine. So there is a *huge* motivation
to unscramble the signal. I believe if it hits
your house, you own it, whether it's an orange from the neighbor's tree, or a tv signal from
another country.
Re:DSS is what? (Score:2)
Re:No Room (Score:2)
I'd agree that a lot of people are turned off by NYC. My personal opinion is that it is a noisy, crowded, filth ridden cesspool with an overabundance of rude and/or dangerous people.
My favorite city for big shows is Las Vegas. San Francisco is also most excellent. Other places that have had a pretty good track record include Chicago and Atlanta. Other places I'd rather visit than NYC... just about any other large US City... San Diego, Orlando, Dallas, Denver... Heck I think even Los Angeles would rank higher in my view than NYC.
Re:DSS Hacking (Score:2)
I'm not advocating theft in any way, but I found this to be amazing, that rogue codewarriors had enough diligence to be able to figure a way around what everyone (Hughes included) thought was permanent.
That's not amazing. If anything, it's extremely sad. Would these same people expend as much effort getting and retaining a job as they do stealing DSS, they'd have more than enough money to be able to PAY for DSS. People like this make me sick.
Re:DirecTV (Score:2)
Re:why? (Score:2)
In order to address this problem, we, as a society, have made a deal with content producers: we created a concept of ownership for imaginary things, like novels, and movies, and songs, and so on, that allow the content producers to profit from their creation. In return, though, the content producer agrees that society retains certain rights, like the rule of first sale (after someone buys a book, they can do what they will with it), various sorts of fair use, and the idea that copyrights expire.
Until now, the content producers had to agree to the deal, as they really couldn't effectively limit any of the rights society kept for itself, and they had to take what they were given from a legal point of view. Now, though, they want the protection of copyright without upholding their side of the deal; in fact, they want to set things up so that there's no way for society to make them agree to the deal we had before. I think they should be forced to choose: make the deal, and we'll protect your works under law (and remember that anything digital can be copied; CSS doesn't stop copying, just viewing), or protect your works with technology, and you don't have to accept the fair use provisions you would otherwise, but if your technology is broken, well, we're sorry, but you chose not to accept the deal. Content producers shouldn't be able to have it both ways.
In fact, we already have such a distinction when it comes to ideas. There's patents, where a company agrees to reveal the invention to the world, and for some period of time, no other company can use said invention, even if they come up with the idea on their own, and then there's trade secrets, where they can keep something secret for as long as they wish, and competitors can't do unfair things to learn about the invention, but if a competitor discovers the idea independantly, well, too bad. A company isn't allowed to claim exclusive use of ideas they won't reveal to everyone else.
My apologies for the quotes, incidentally. Words in quotes usually annoy me, but words like property, theft, steal, and so on don't mean the same thing when used in reference to IP as opposed to physical property, but we have no substitutes, so I use the quotes to emphasize that they need to be looked at differently.
Re:Best barbeque in bay area (Score:2)
OnTopicPost: Is anyone going to this linux conference? Here in europe, we've just had a couple back to back, in Paris and Brussels. Good stuff, but no late night barbeque
the AC
Re:God Bless the CRTC! (Score:2)
Just saying "descrambling dss signals is illegal" doesn't *explain* anything. We know it's illegal. But why?? These signals are already passing through our very bodies. It's ridiculous.
No Room (Score:2)
As for NYC...I was at LWE and it was *DEAD*. People don't like to go there for shows..maybe it's better in the summer but it was about 1/3 as busy as LWE in San Jose.
They're transmitting bits. You're interpreting. (Score:2)
buy the real thing (ok, smuggle it from your US Mailboxes Etc. box
Re:Sheesh (Score:2)
When I first saw this, my mind translated it into this:
For all the time and effort spent into developing Linux, they could buy a thousand copies of Windows.
Re:DSS Hacking (Score:2)
Aw, come on. Even presuming that all the people that were using hacked cards were doing so in the US, where it is illegal, as opposed to some/most being in Canada, where it's (according to previous comments) perfectly legal, your argument is still flawed.
Why do you climb a mountain? Because it's there. Half of the purpose of hacking like this [and it is hacking, not cracking, when you actually develop a new workaround like this] is the fun. Sure, these people are probably spending more in development than they'd spend on a full subscription, but that's like driving around the mountain.
Offtopic, -1 (Score:2)
Re:Confession. (Score:2)
There is one huge return: they get to keep broadcasting high demand content.
The contracts that they sign with their content providers no doubt stipulate that DirecTV has to make every effort to keep the signal from being viewed by anyone who is not paying the subscription fee.
If they don't try to stop the hackers (and succeed every once in a while), one of two things will happen - CNN won't license the feed to them anymore (reducing the quality of the service they offer, losing them subscribers and reducing their profits) or CNN will charge them more to make up for the extra viewers who aren't paying the fee (reducing their profits outright, or losing them future subscribers who go to a cheaper system).
Should you have the right to decode radio waves that come through your property? IMO, yes. But there's nothing that says that they have to make it easy.
Cable theft, tapping lines and frying for it (Score:2)
The local cable company here (Las Vegas, NV) alludes to that danger in its anti-cable theft ads.
You cant buy the service you can get (Score:2)
Amber Yuan 2k A.D
Re:DSS hacking makes my head hurt (Score:2)
Here's a Related Link [freshmeat.net] for "learning" about watching illegal Cable TV on your linux box. Haven't tried it yet mostly because My TV tuner card sucks.
Also, I could be wrong, but suppose a guy (not me I swear!) wanted to steal cable signals. If they wanted to go with the DSS method, they wouldn't have to pay ANY monthly service fee. The guy stealing Signals from his local cable company would likely have to pay for the "basic" package while his little black box would be considered the "upgrade".
Sheesh (Score:2)
You understand that does not promote nor condone signal theft of any kind and you do not hold responsible for the actions of any of their users as it is the users' responsibility to comply with all local and State laws of their territory and country.
This site is for educational and informational purposes ONLY. It is not our intention to assist you in committing fraud or performing any illegal acts
The news page [hackhu.com] mentions how much traffic the site will be getting this month. I'm sure everyone who's downloading these programs is using them completely lawfully. Yeah, right. Ah well, I bet the site just loves these
Re:why? (Score:2)
You could just imagine the conversation at some Washington bar:
Entertainment Type: Senator, let me ask you a hypothetical question. If we were to offer some service, that people were supposed to pay for, and someone invented a device that allowed them to receive the service for free, should that be illegal in some way?
Senator: Well, of course.
ET: Well, let me tell you something. It isn't under current law.
Senator: [Should be asking why, but isn't, and ET wouldn't exactly want to explain Fair Use law anyway] Well, we could do something about that, what do you have in mind.
ET: In the new "Digital Millenium", our goal is to protect content with access devices blah blah blah blah.
Senator: Huh? OK. I need another drink.
(Some time later) Senator 2: But wouldn't this bill abridge people's fair use rights? My constituents like to record things off the TeeVee with the VeeCeeArr. Those things are really a wonder.
Senator 1: Well we'll just put a provision in there saying this bill doesn't do that (possibly never aware that that was the entire point of the bill to begin with...)
Re:DirecTV (Score:2)
YAPC: Who needs a room? (Score:2)
I've got 2 futons and 120 ft of CAT-5 waiting for anyone coming to my home town of Montreal for the YAPC!
Click here [mailto] to become buddies with a budding perl lover up north! (yes Cam, you can come too!)
Re:why? (Score:2)
Sure, Locke interpreted intellectual property as a subset of royal monopolies, but plenty of other natural rights theorists instead argue it's more properly interpreted in the light of his theories on physical property.
Locke's theory was that someone becomes the owner of an unowned resource by applying his labor to it. Which arguably applies even more firmly to intellectual property, because it is a pure product of work, instead of the transformation by work of a limited physical resource to which no one has an inherent right.
There's a reason most advocates of eliminating intellectual property use Hobbes-authoritarian (government has all power), Burke-conservative (common law tradition is right), Marx-socialist (property is theft), or FDR-progressive (property is merely necessary to society) theories to bolster their claims, instead of natural rights (classical) liberal theories.
Re:DirecTV (Score:2)
Was it cost effective? Probably, most of the people with affected boxes probably weren't 'hackers', they just bought hacked boxes. A lot of them would gladly purchase a DirecTV to get their football fix.
I still think the DirectTV hack was beautiful, even if I would've been pissed off (sort of, but laughing) had I been one of the ones affected by it.
DirecTV (Score:2)
Interestingly, DirecTV's method of defeating the hackers seems at least as ingenious as the hacker's methods of circumvention.
On the matters of royalty (Score:2)
Re:DirecTV (Score:3)
Even if it weren't illegal, if you're using one of these unlicensed SmartCards, and DirectTV figures out a way to send a signal that will prevent them from working, that shouldn't be illegal either.
Re:DSS is back! (Score:3)
Oh wait, I *pay* for mine. When exactly did Slashdot become "Elite Script Kiddie Central"?
Re:DirecTV (Score:3)
ultimate object (Score:3)
Here, in Calgary, AB, Canada, dealers offered credits for turning in your grey market system (dish, receiver, AND smart card) towards a locally sold system.
Re:Best barbeque in bay area (Score:3)
anticypher didn't mention the BBQ goat and turkey at Doug's, both of which are excellent.
Best barbeque in bay area (Score:3)
I couldn't have made it through school without Doug's Barbeque, open until 3:00 AM most nights, 3600 San Pablo Blvd, Oakland. Not recommended for pasty white solitary geeks at 3:00 AM, due to its location under the freeway on the north edge of the seedier part of Oaktown. But worth it for the best ribs, fried chicken, roast lamb and slabs o'beef around.
the AC
Re:God Bless the CRTC! (Score:3)
The DMCA makes it illegal to make devices to decrypt these transmissions. So yes, dss cards are illegal under the DMCA.
Now, I personally believe that the DMCA is *wrong* (never confuse the law with what is right), and possibly unconstitutional (which would mean it was not merely unjust, but illegal as well). But until it is demonstrated unconstitutional by the Supreme Court, or otherwise repealed, descrambling
dss signals is illegal.
Someone mentioned wireless LANs. This falls into the same category. It is NOT illegal to intercept wireless LAN traffic on your own property. However, what you do with the information gained may or may not be illegal.
Re:DSS is what? (Score:3)
Some hackers then created a boot-strap-loader, which mimics the normal boot process of a normal card, then once the boot-up process gets past the point where it checks for that 1 in the PROM, it then hands over the remainder of the boot-up sequence to the DirecTV smartcard, and it can be used again to steal signals.
Note: This is a very watered-down version of what happened, so don't flame me
Re:If they wanted to be bastards (Score:3)
If they wanted to be bastards (Score:3)
DSS Hacking (Score:3)
I'm not advocating theft in any way, but I found this to be amazing, that rogue codewarriors had enough diligence to be able to figure a way around what everyone (Hughes included) thought was permanent.
If you ask me, the main goal of wiping out the H cards was because it simply became too easy to pirate the service - my estimate is at least 100,000+ people were pirating DirecTV this way. It is still impossible to use these cards as they were before, but they can now be used in emulation set-ups. Most people don't want to be bothered to do that though, and the population of people who will do that is a small enough number for Hughes to be able to call their H card strike a success, because at most there will be 5,000-10,000 people using said emulation setups.
Re:DirecTV (Score:3)
Yes, Minister! (Score:4)
I assume that's British Civil Service Speak for "You're Out of the Loop, Sucker!" One of my favorite TV characters is Sir Humphrey Appleby, who once said: Why can't American bureaucrats be that entertaining?
God Bless the CRTC! (Score:4)
The whole DirecTV thing, I say more power to the hackers out there. The broadcast monopoly in Canada is ridiculous, and anyone who circumvents the absolute garbage CRTC regulations deserves a pat on the back, a hearty handshake, and a nice beer.
BTW, the signals that are broadcast are penetrating my body and passing through me with no permission. Why should it be illegal to decrypt something that is physically passing through me as I write this? I never asked them to broadcast their signal through me. Same with cellphones and all that. If the signal is passing through my body, then IMO I have every right to do what I want with that signal.
DSS hacking makes my head hurt (Score:5)
Compare that to cable theft...you buy a box and it works and it always works. Cable companies can't change encryption schemes overnight. In truth, in the five years I've been in my home location we are still using the same Jerrold/GE boxes. A one time fee of $200 for five years of unlimited cable seems like a worthy temptation.
I am honestly surprised that there isn't a bigger market for these digital cable black boxes. Almost as many channels as DSS plus the local stuff plus many people feel they can rationalize it by paying for the basic cable connection.
So I think that part of the effort that goes into the DSS hacking scene must truly be the hacking spirit, the doing of something difficult to see if it can be done. I can see that motivation but at best that could only be a couple thousand dedicated souls. Where the other 98,000 customers are coming from I just can't understand.
-JoeShmoe
DirecTV (Score:5)
A question, though. If the airwaves are public, what's illegal about using a signal that you didn't permit someone to send onto your property? I think that DirecTV is spending far too much money trying to stop the fraction of a percent of their viewers from stealing service. Is it really cost effective?
If you haven't done so already, read the installation instructions. This
document gives you a quick overview of the basic tasks many people will be interested
in doing.
Pylons uses Paste to create and deploy projects as well as create new controllers and their tests.
Create a new project named helloworld with this command:
$ paster create -t pylons helloworld
Note: Windows users must configure their PATH as described in Windows Notes, otherwise they must specify the full path name to the paster command.
This creates a new Pylons project which you can use as a basis for your own project. The directory structure is as follows:
- helloworld
- README.txt
- data
- docs
- development.ini
- helloworld
- helloworld.egg-info
- Various files including paste_deploy_config.ini_tmpl
- setup.cfg
- setup.py
- test.ini
The setup.py file is used to create a re-distributable Python package of your project called an egg. Eggs can be thought of as similar to .jar files in Java. The setup.cfg file contains extra information about your project and the helloworld.egg-info directory contains information about the egg including a paste_deploy_config.ini_tmpl file which is used as a template for the config file when users of your project issue a paster make-config command. Distributing and deploying your egg is covered in the Distributing Your Project documentation and end user configuration is described in Application Setup.
You may also notice a data directory which is created the first time you run the code. You can configure the location of the data directory by editing your development.ini file. This directory will hold cached data and sessions used by your app while its running.
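For instance, the generated development.ini points the cache at that directory with a setting along these lines (a sketch only; the exact keys vary between Pylons versions):

[app:main]
use = egg:helloworld
# Cached data and session files end up under this directory.
cache_dir = %(here)s/data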
The helloworld directory within the helloworld directory is where all your application specific code and files are placed. The directory looks like this:
- helloworld
- helloworld
- config
- controllers
- lib
- model
- public
- templates
- tests
- __init__.py
- websetup.py
The config directory contains the configuration options for your web application.
The controllers directory is where your application controllers are written. Controllers are the core of your application where the decision is made on what data to load, and how to view it.
The lib directory is where you can put code that is used between different controllers, third party code, or any other code that doesn't fit in well elsewhere.
The model directory is for your model objects, if you're using an ORM this is where the classes for them should go. Objects defined in model/__init__.py will be loaded and present as model.YourObject inside your controllers. The database configuration string can be set in your development.ini file.
The public directory is where you put all your HTML, images, Javascript, CSS and other static files. It is similar to the htdocs directory in Apache.
The tests directory is where you can put controller and other tests. The controller testing functionality uses Nose and paste.fixture.
The templates directory is where templates are stored. Templates contain a mixture of plain text and Python code and are used for creating HTML and other documents in a way that is easy for designers to tweak without them needing to see all the code that goes on behind the scenes. Pylons uses Mako templates by default but also supports Cheetah, Kid and others through a system called Buffet. See the documentation on how to change template languages.
The __init__.py file is present so that the helloworld directory can be used as a Python module within the egg.
The websetup.py should contain any code that should be executed when an end user of your application runs the paster setup-app command described in Application Setup. If you're looking for where to put code that should be run before your application is, this is the place.
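As a rough sketch of what goes there (treat the exact function signature as an assumption and check the websetup.py your project template generated), the file typically defines a single setup hook:

def setup_app(command, conf, vars):
    """Set up helloworld here: create database tables, insert
    default rows, and any other one-time initialization."""
    pass

End users then trigger it by running paster setup-app against their config file.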
We can test the template project like this:
$ cd helloworld
$ paster serve --reload development.ini
The command loads our project server configuration file in development.ini and serves the Pylons application.
The --reload option ensures that the server is automatically reloaded if
you make any changes to Python files or the development.ini config file.
This is very useful during development. To stop the server you can press
Ctrl+c or your platform's equivalent.
If you visit the server in your browser while it is running you will see the welcome page (127.0.0.1 is a special IP address that references your own computer, but you can change the hostname by editing the development.ini file).
Try creating a new file named test.html in the helloworld/public directory with the following content:
<html>
<body>
Hello World!
</body>
</html>
If you visit /test.html on the running server you will see the message Hello World!. Any files in the public directory are served in the same way they would be by any webserver, but with built-in caching, and if Pylons has a choice of whether to serve a file from the public directory or from code in a controller it will always choose the file in public. This behavior can be changed by altering the order of the Cascade in config/middleware.py.
The interactive debugger is a powerful tool for use during application development. It is enabled by default in the development environment's development.ini. When enabled, it allows debugging of the application through a web page after an exception is raised. On production environments the debugger poses a major security risk; so production ini files generated from the paster make-config command will have debugging disabled.
To disable debugging, uncomment the following line in the [app:main] section of your development.ini, changing:
#set debug = false
to:
set debug = false
Again: debug must be set to false in production environments, as the interactive debugger poses a MAJOR SECURITY RISK.
More information is available in the Interactive Debugger documentation.
You're now ready to start creating your own web application. First, let's create a basic Hello World controller:
$ paster controller hello
This paster command will create the controllers/hello.py file for you with a basic layout as well as a helloworld/tests/functional/test_hello.py that is used for running functional tests of the controller.
Here's what a basic controller looks like that prints out 'Hello World' in response to a request to /hello/index. Put the following text in the file helloworld/controllers/hello.py:
import logging

from helloworld.lib.base import *

log = logging.getLogger(__name__)

class HelloController(BaseController):

    def index(self):
        # Return a rendered template
        #   return render('/some/template.mako')
        # or, Return a response
        return 'Hello World'
Pylons uses a powerful and flexible system for routing URLs to the appropriate piece of code and back.
We would like the hello controller to also be displayed for both the URL /hello/index and the site root URL /. We need to add a line to the routes config in helloworld/config/routing.py so it looks like this:
map.connect('', controller='hello', action='index')
map.connect(':controller/:action/:id')
map.connect('*url', controller='template', action='view')
This means that an empty URL is matched to the index action of the hello controller. Otherwise, the route mapper looks for URLs in the form controller/action/id; anything that still doesn't match is routed to the view action of the template controller (created by the Pylons template), which raises a 404 error by default.
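For instance, a hypothetical extra route (the path name greet is made up purely for illustration) could map a friendlier URL onto the same action:

map.connect('greet', controller='hello', action='index')

With that line added above the generic routes, visiting /greet would invoke the index action of the hello controller.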
Since we have made changes to our routes we must restart the server. If you are using the --reload option this will happen automatically, otherwise close the old server and start it again using the same command as before. (Note: Mako template changes do not require restarting the server even without the --reload option.)
Visit both and and you will find that although the first URL produces the expected Hello World, the second URL produces the welcome page as before. This is because, as mentioned earlier, static files in the public directory are served before looking for code.
Delete the file public/index.html and the application works as expected.
More information on routes can be found in the Routes manual.
When your controller's action is called it is expected to either call a WSGI application or return a response. In the previous section we saw a basic example which returned a string response. To render templates, use the render command.
Note: For the gory details on the available options to render, look at the Pylons templating API.
Here's an example template, using Mako, that prints some request information.
Create a template file helloworld/templates/serverinfo.mako containing the following:
<h2>
Server info for ${request.host}
</h2>
<p>
The URL you called: ${h.url_for()}
</p>
To use this template add a new method to your HelloController in helloworld/controllers/hello.py with the following function at the end of the class:
def serverinfo(self):
    return render('/serverinfo.mako')
The render('/serverinfo.mako') function will render your template using the default template engine (Mako).
If your server is still running you can view the page at /hello/serverinfo. If not, simply restart the server with paster serve --reload development.ini from the helloworld directory.
Sessions come enabled for your application and are handled by the Beaker middleware. This provides robust and powerful sessions as well as caching abilities.
Using a session is very easy, here's what using and saving a session in the above function would look like:
def serverinfo(self):
    session['name'] = 'George'
    session.save()
    return render('/serverinfo.mako')
Session options can be customized via your development.ini file.
Remember to always call session.save() before returning a response to ensure that the session is
saved.
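To show the round trip, here is a small sketch (the action name greet_back is made up) that reads the value back out of the session on a later request:

def greet_back(self):
    # Returns 'Hello George' once the serverinfo action above has run
    name = session.get('name', 'stranger')
    return 'Hello %s' % name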
For convenience, there are several globals (imported from lib.base) available for use in your controllers:

- session: Acts as a dict to store session data.
- request: The request object.
- response: The response object; headers, cookies and status code can be set on this which will be used for the response.
- abort: Function to abort the request immediately by raising an HTTPException according to the specified status code.
- redirect_to: Function to redirect the browser to a different location via the HTTP 302 status code (by raising an HTTPException).
- render: Function to render a template and return a string.
- h: Your project's lib/helpers.py module. By default, this module exposes all functions available from the WebHelpers package. Keep in mind when reading the WebHelpers docs that all the functions listed should be prefixed by h. when used under Pylons.
- c: (described in Passing Variables to Templates)
- g: (described in Application Globals and Persistent Objects)
- config: Dictionary-like object for accessing .ini file directives and other Pylons configuration options.
- cache, etag_cache: Caching objects and functions. (described in the Caching in Templates and Controllers doc)
- jsonify: Action decorator to format output as JavaScript Object Notation.
- validate: Decorator for convenient form validation with FormEncode. (further described in the Form Handling document)
- _, N_, ungettext: Internationalization / localization functions. (described in the Internationalization and Localization doc)
- model: Access to your model package, however you choose to define it.
Pylons controllers are created for each request. This means you can attach variables to self if you want them passed around. However, it can be very inconvenient to keep track of all the variables and methods attached to self, especially if you want to pass them to a template.
To make it easier to set up your data for use by the template, the variable c is made available and is also available in all Mako templates as the c global. Let's take a look at using it:
def serverinfo(self):
    import cgi
    import pprint
    c.pretty_environ = cgi.escape(pprint.pformat(request.environ))
    c.name = 'The Black Knight'
    return render('/serverinfo.mako')
and modify the serverinfo.mako file in the templates directory to look like this:
<h2>
Server info for ${request.host}
</h2>
<p>
The URL you called: ${h.url_for()}
</p>
<p>
The name you set: ${c.name}
</p>
<p>The WSGI environ:<br />
<pre>${c.pretty_environ}</pre>
</p>
The c object is available in the other template languages.
The pprint.pformat function creates a pretty representation of the object passed to it, and the cgi.escape function HTML-escapes a string. You should now see 'The name you set: The Black Knight' printed out on the page, as well as the WSGI environ dictionary.
If you ask for an attribute on c that does not exist, an empty string is returned rather than an AttributeError being raised. This makes it easy to provide fallbacks for values that may not have been set. For example:
<p>Hi there ${c.name or c.full_name or "Joe Smith"}
Warning
Be careful when setting c attributes that begin with an _ character. c and other global variables are really StackedObjectProxy objects, which reserve the attribute names _current_obj, _push_object and _pop_object for their internal methods.
The c global is also reset on each request so that you don't need to worry about a controller still having old values set from a previous request.
There are occasions where you might want information to be available to all controllers and not reset on each request. For example you might want to initiate a TCP connection that is made when the application is loaded. You can do this through the g variable.
The g variable is an instance of your Globals class in your application's lib/app_globals.py file. Any attributes set in the __init__() method will be available as attributes of g throughout your Pylons application. Any attributes set on g during one request will remain changed for all the other requests. You have to be very careful when setting global variables in requests.
Here is an example of using the g variable. First modify your lib/app_globals.py Globals class so that the __init__() method looks like this:
def __init__(self):
    self.message = 'Hello'
Then add this new method to the end of the helloworld/controllers/hello.py:
def app_globals_test(self):
    if g.message == 'Hello':
        content = g.message
        g.message = 'Hello World!'
        return content
    else:
        return g.message
This time if you run the server and visit /hello/app_globals_test you should see the message Hello. If you visit the page again the message will be changed to Hello World! and it will remain changed for all subsequent requests because the application global variable was modified on the first request.
The Globals object is initialized when the application is loaded; the c, cache and other request-scoped variables are not initialized/available at that point.
Keep in mind that the Globals object can be utilized by all threads in your application. Thread-safety should be considered when using it, especially when modifying attributes of the object.
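Since g is shared across all threads, one common approach (sketched here under the assumption that a simple lock fits your use case; the attribute names are made up) is to guard mutations with a standard library lock created in __init__():

import threading

class Globals(object):
    def __init__(self):
        self.message = 'Hello'
        self.message_lock = threading.Lock()

A controller action can then serialize updates:

def safe_update(self):
    g.message_lock.acquire()
    try:
        g.message = 'Hello World!'
    finally:
        g.message_lock.release()
    return g.message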
If you're looking for where to put that should be run before your application is, this is the place.
The syntax seems odd. How about "where to put CODE that should be run"...
James
If you're trying to figure out why none of the paste commands work on Windows - it's because you have to run with admin privileges.
I suspect that these instructions assume that the user is installing on his/her local system. I am trying to install on my VPS host, and I am having some difficulties.
I think the filenames are given with inconsistent working directories. Sometimes they include 'helloworld' and sometimes they omit it. I think you should use a consistent base directory. Thus, in section 8.3, 'lib/app_globals.py' would be 'helloworld/lib/app_globals.py'. The same comment applies to any other file not prefixed with 'helloworld/'.
http://wiki.pylonshq.com/display/pylonsdocs/Getting+Started
To merge two files in Java, ask the user to enter the names (with extension) of the two files whose content should be merged, and the name of a third file in which to store the merged content, as shown in the following program.
Following Java Program ask to the user to enter the first and second file name with extension to merge its content and then ask to the user to enter the third file name with extension to store the merged content inside it:
/* Java Program Example - Merge Two Files */

import java.io.*;
import java.util.Scanner;

public class JavaProgram
{
    public static void main(String args[])
    {
        String srcy, srcz, merge;
        Scanner scan = new Scanner(System.in);

        /* enter the file names with extension like file.txt */
        System.out.print("Enter First File Name : ");
        srcy = scan.nextLine();
        System.out.print("Enter Second File Name : ");
        srcz = scan.nextLine();
        System.out.print("Enter FileName to Store merged content of First and Second File : ");
        merge = scan.nextLine();

        File[] files = new File[2];
        files[0] = new File(srcy);
        files[1] = new File(srcz);

        BufferedWriter out = null;
        try
        {
            out = new BufferedWriter(new FileWriter(merge));
        }
        catch(IOException e1)
        {
            e1.printStackTrace();
            return;
        }

        System.out.print("Merging Both File...\n");

        /* copy each source file, line by line, into the merged file */
        for(File f : files)
        {
            try
            {
                BufferedReader in = new BufferedReader(new FileReader(f));
                String line;
                while((line = in.readLine()) != null)
                {
                    out.write(line);
                    out.newLine();
                }
                in.close();
            }
            catch(IOException e)
            {
                e.printStackTrace();
            }
        }

        System.out.print("\nMerged Successfully..!!");

        try
        {
            out.close();
        }
        catch(IOException e)
        {
            e.printStackTrace();
        }
    }
}
When the above Java program is compiled and executed, it will produce output like the following:
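(A representative run; the file names a.txt, b.txt and c.txt here are illustrative, assuming the first two files already exist.)

Enter First File Name : a.txt
Enter Second File Name : b.txt
Enter FileName to Store merged content of First and Second File : c.txt
Merging Both File...

Merged Successfully..!!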
https://codescracker.com/java/program/java-program-merge-two-files.htm
Whatever your programs are doing, they often have to deal with vast amounts of data. This data is usually represented and manipulated in the form of strings. However, handling such a large quantity of input in strings can be very ineffective once you start manipulating them by copying, slicing, and modifying. Why?
Let's consider a small program which reads a large file of binary data and copies it partially into another file. To examine the memory usage of this program, we will use memory_profiler, an excellent Python package that allows us to see the memory usage of a program line by line.
@profile
def read_random():
    with open("/dev/urandom", "rb") as source:
        content = source.read(1024 * 10000)
    content_to_write = content[1024:]
    print("Content length: %d, content to write length %d" %
          (len(content), len(content_to_write)))
    with open("/dev/null", "wb") as target:
        target.write(content_to_write)

if __name__ == '__main__':
    read_random()
Running the above program using memory_profiler produces the following:
$ python -m memory_profiler memoryview/copy.py
Content length: 10240000, content to write length 10238976
Filename: memoryview/copy.py

Mem usage    Increment   Line Contents
======================================
                         @profile
 9.883 MB     0.000 MB   def read_random():
 9.887 MB     0.004 MB       with open("/dev/urandom", "rb") as source:
19.656 MB     9.770 MB           content = source.read(1024 * 10000)
29.422 MB     9.766 MB       content_to_write = content[1024:]
29.422 MB     0.000 MB       print("Content length: %d, content to write length %d" %
29.434 MB     0.012 MB             (len(content), len(content_to_write)))
29.434 MB     0.000 MB       with open("/dev/null", "wb") as target:
29.434 MB     0.000 MB           target.write(content_to_write)
The call to source.read reads 10 MB from /dev/urandom. Python needs to allocate around 10 MB of memory to store this data as a string. The instruction on the line just after, content[1024:], copies the entire block of data minus the first KB, allocating 10 more megabytes.

So what's interesting here is to notice that the memory usage of the program increased by about 10 MB when building the variable content_to_write. The slice operator is copying the entirety of content, minus the first KB, into a new string object.
When dealing with extensive data, performing this kind of operation on large byte arrays is going to be a disaster. If you have already written C code, you know that using memcpy() has a significant cost, both in terms of memory usage and general performance: copying memory is slow.
However, as a C programmer, you also know that strings are arrays of characters and that nothing stops you from looking at only part of this array without copying it, through the use of basic pointer arithmetic – assuming that the entire string is in a contiguous memory area.
This is possible in Python using objects which implement the buffer protocol. The buffer protocol is defined in PEP 3118, which explains the C API used to provide this protocol to various types, such as strings.
When an object implements this protocol, you can use the memoryview class constructor on it to build a new memoryview object that references the original object's memory.
>>> s = b"abcdefgh"
>>> view = memoryview(s)
>>> view[1]
98
>>> limited = view[1:3]
>>> limited
<memory at 0x7fca18b8d460>
>>> bytes(view[1:3])
b'bc'
Note: 98 is the ASCII code for the letter b.
In the example above, we use the fact that the memoryview object's slice operator itself returns a memoryview object. That means it does not copy any data but merely references a particular slice of it.
The graph below illustrates what happens:
Therefore, it is possible to rewrite the program above in a more efficient manner. We need to reference the data that we want to write using a memoryview object, rather than allocating a new string.
@profile
def read_random():
    with open("/dev/urandom", "rb") as source:
        content = source.read(1024 * 10000)
    content_to_write = memoryview(content)[1024:]
    print("Content length: %d, content to write length %d" %
          (len(content), len(content_to_write)))
    with open("/dev/null", "wb") as target:
        target.write(content_to_write)

if __name__ == '__main__':
    read_random()
Let's run the program above with the memory profiler:
$ python -m memory_profiler memoryview/copy-memoryview.py
Content length: 10240000, content to write length 10238976
Filename: memoryview/copy-memoryview.py

Mem usage    Increment   Line Contents
======================================
                         @profile
 9.887 MB     0.000 MB   def read_random():
 9.891 MB     0.004 MB       with open("/dev/urandom", "rb") as source:
19.660 MB     9.770 MB           content = source.read(1024 * 10000)
19.660 MB     0.000 MB       content_to_write = memoryview(content)[1024:]
19.660 MB     0.000 MB       print("Content length: %d, content to write length %d" %
19.672 MB     0.012 MB             (len(content), len(content_to_write)))
19.672 MB     0.000 MB       with open("/dev/null", "wb") as target:
19.672 MB     0.000 MB           target.write(content_to_write)
In that case, the source.read call still allocates 10 MB of memory to read the content of the file. However, when using memoryview to refer to the offset content, no more memory is allocated.

This version of the program ends up allocating 50% less memory than the original version!
This kind of trick is especially useful when dealing with sockets. When sending data over a socket, all the data might not be sent in a single call.
import socket

s = socket.socket(…)
s.connect(…)

# Build a bytes object with more than 100 millions times the letter `a`
data = b"a" * (1024 * 100000)

while data:
    sent = s.send(data)
    # Remove the first `sent` bytes sent
    data = data[sent:]
Using a mechanism as implemented above, the program copies the data over and over until the socket has sent everything. By using memoryview, it is possible to achieve the same functionality with zero-copy, and therefore higher performance:
import socket

s = socket.socket(…)
s.connect(…)

# Build a bytes object with more than 100 millions times the letter `a`
data = b"a" * (1024 * 100000)
mv = memoryview(data)

while mv:
    sent = s.send(mv)
    # Build a new memoryview object pointing to the data which remains to be sent
    mv = mv[sent:]
As this won't copy anything, it won't use any more memory than the 100 MB initially needed for the data variable.
So far we've used memoryview objects to write data efficiently, but the same method can also be used to read data. Most I/O operations in Python know how to deal with objects implementing the buffer protocol. They can read from it, but also write to it. In this case, we don't need memoryview objects; we can ask an I/O function to write into our pre-allocated object:
>>> ba = bytearray(8)
>>> ba
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00')
>>> with open("/dev/urandom", "rb") as source:
...     source.readinto(ba)
...
8
>>> ba
bytearray(b'`m.z\x8d\x0fp\xa1')
With such techniques, it's easy to pre-allocate a buffer (as you would do in C to mitigate the number of calls to malloc()) and fill it at your convenience.
Using memoryview, you can even place data at any point in the memory area:

>>> ba = bytearray(8)
>>> # Reference the _bytearray_ from offset 4 to its end
>>> ba_at_4 = memoryview(ba)[4:]
>>> with open("/dev/urandom", "rb") as source:
...     # Write the content of /dev/urandom from offset 4 to the end of the
...     # bytearray, effectively reading 4 bytes only
...     source.readinto(ba_at_4)
...
4
>>> ba
bytearray(b'\x00\x00\x00\x00\x0b\x19\xae\xb2')
The buffer protocol is fundamental to achieve low memory overhead and great performances. As Python hides all the memory allocations, developers tend to forget what happens under the hood, at a high cost for the speed of their programs!
It's also good to know that both the objects in the array module and the functions in the struct module can handle the buffer protocol correctly, and can, therefore, efficiently perform when targeting zero copy.
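As a quick illustration of that last point, here is a small sketch (the file name and buffer size are arbitrary) that fills a pre-allocated array.array directly from a file, without any intermediate string:

import array

numbers = array.array('i', [0] * 4)    # pre-allocated buffer of 4 C ints
with open("/dev/urandom", "rb") as source:
    source.readinto(numbers)           # fills the array in place, no copy
print(list(numbers))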
https://julien.danjou.info/high-performance-in-python-with-zero-copy-and-the-buffer-protocol/
I've got the basics of my code working but I can't for the life of me figure out how to turn the lights into an array so the user can drag in whatever lights they want in the Inspector.

I gather I need to turn the whole thing into a GameObject or something along those lines, but I'm really not a coder and am working mostly off copypasta. The problem with turning them into a GameObject is Unity doesn't seem to like that for lights, so I'm at an impasse 🙂
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class headlight : MonoBehaviour
{
    public Light headlights;

    // Start is called before the first frame update
    void Start()
    {
        headlights = GetComponent<Light>();
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.L))
        {
            headlights.enabled = !headlights.enabled;
        }
    }
}
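For anyone landing here with the same question, a minimal sketch of one way to do it (the class and field names below are placeholders, not from the original post): expose a Light[] field, which shows up in the Inspector as a resizable list you can drag lights into, then toggle every entry.

using UnityEngine;

public class Headlights : MonoBehaviour
{
    // Set the size of this array in the Inspector and drag any lights into it.
    public Light[] headlights;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.L))
        {
            foreach (Light l in headlights)
            {
                l.enabled = !l.enabled;
            }
        }
    }
}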
https://proxieslive.com/making-a-user-configurable-light-array-to-change-functions/
Read pixels from screen into the saved texture data.
This will copy a rectangular pixel area from the currently active RenderTexture or the view (specified by the source parameter) into the position defined by destX and destY. Both coordinates use pixel space - (0,0) is lower left.

If recalculateMipMaps is set to true, the mip maps of the texture will also be updated. If recalculateMipMaps is set to false, you must call Apply to recalculate them.
This function works on RGBA32, ARGB32 and RGB24 texture formats, when the render target is of a similar format too (e.g. usual 32 or 16 bit render texture). Reading from a HDR render target (ARGBFloat or ARGBHalf render texture formats) into HDR texture formats (RGBAFloat or RGBAHalf) is supported too.
The texture also has to have read/write enabled flag set in the texture import settings.
// Attach this script to a Camera
// Also attach a GameObject that has a Renderer (e.g. a cube) in the Display field
// Press the space key in Play mode to capture

using UnityEngine;

public class Example : MonoBehaviour
{
    // Grab the camera's view when this variable is true.
    bool grab;

    // The "m_Display" is the GameObject whose Texture will be set to the captured image.
    public Renderer m_Display;

    private void Update()
    {
        // Press space to start the screen grab
        if (Input.GetKeyDown(KeyCode.Space))
            grab = true;
    }

    private void OnPostRender()
    {
        if (grab)
        {
            // Create a new texture with the width and height of the screen
            Texture2D texture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
            // Read the pixels in the Rect starting at 0,0 and ending at the screen's width and height
            texture.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false);
            texture.Apply();
            // Check that the display field has been assigned in the Inspector
            if (m_Display != null)
                // Give your GameObject with the renderer this texture
                m_Display.material.mainTexture = texture;
            // Reset the grab state
            grab = false;
        }
    }
}
See Also: EncodeToPNG.
https://docs.unity3d.com/ru/2020.2/ScriptReference/Texture2D.ReadPixels.html
ps2io – Support for PS/2 protocol

The ps2io module contains classes to provide PS/2 communication.

class ps2io.Ps2(data_pin: microcontroller.Pin, clock_pin: microcontroller.Pin)

Communicate with a PS/2 keyboard or mouse.

Ps2 implements the PS/2 keyboard/mouse serial protocol, used in legacy devices. It is similar to UART but there are only two lines (Data and Clock). PS/2 devices are 5V, so bidirectional level converters must be used to connect the I/O lines to pins of 3.3V boards.

Create a Ps2 object associated with the given pins.
Read one byte from PS/2 keyboard and turn on Scroll Lock LED:
import ps2io
import board

kbd = ps2io.Ps2(board.D10, board.D11)

while len(kbd) == 0:
    pass

print(kbd.popleft())
print(kbd.sendcmd(0xed))
print(kbd.sendcmd(0x01))
__exit__(self)

Automatically deinitializes the hardware when exiting a context. See Lifetime and ContextManagers for more info.

popleft(self)

Removes and returns the oldest received byte. When the buffer is empty, raises an IndexError exception.

sendcmd(self, byte: int)

Sends a command byte to PS/2. Returns the response byte, typically the general ack value (0xFA). Some commands return additional data which is available through popleft().

Raises a RuntimeError in case of failure. The root cause can be found by calling clear_errors(). It is advisable to call clear_errors() before sendcmd() to flush any previous errors.
clear_errors(self)

Returns and clears a bitmap with latest recorded communication errors.
Reception errors (arise asynchronously, as data is received):
0x01: start bit not 0
0x02: timeout
0x04: parity bit error
0x08: stop bit not 1
0x10: buffer overflow, newest data discarded
Transmission errors (can only arise in the course of sendcmd()):
0x100: clock pin didn’t go to LO in time
0x200: clock pin didn’t go to HI in time
0x400: data pin didn’t ACK
0x800: clock pin didn’t ACK
0x1000: device didn’t respond to RTS
0x2000: device didn’t send a response byte in time
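A small usage sketch of that error-handling flow (the command byte 0xf4, "enable scanning" on PS/2 keyboards, is just an example):

kbd.clear_errors()              # flush any stale errors first
try:
    ack = kbd.sendcmd(0xf4)     # returns the response byte, typically 0xFA
except RuntimeError:
    errors = kbd.clear_errors()
    print("PS/2 command failed, error bitmap: 0x%x" % errors)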
https://circuitpython.readthedocs.io/en/6.0.x/shared-bindings/ps2io/index.html
Ok, I was studying for my examination and was directed to watch a video to understand how Diffie-Hellman (DH) key exchange works mathematically. The video was really good, concise, and easy to understand, so to add some spice to my study (studying for an examination is the world's most boring thing to do) I wrote a simple python script to derive the secret keys for both Alice and Bob based on the video.

The result is printed in the python console.

This is the code in python. I am using the secrets module, which is a built-in module of python.
from secrets import SystemRandom

# See video demonstration of DH key exchange
# in

# pseudo random number generator
prng = SystemRandom()

# g and p values agreed by both Alice and Bob.
g = prng.randint(1, 100)
p = prng.randint(1, 100)
print(f"Agreed g value:{g}, agreed modulo:{p}\n")

# Alice's private random number chose between 1 and 100
A = prng.randint(1, 100)
print(f"Alice's random number is {A}.\n")

# Bob's private random number chose between 1 and 100
B = prng.randint(1, 100)
print(f"Bob's random number is {B}.\n")

# public value of Alice's to be sent over to Bob.
a = g**A % p
print(f"Alice's calculated public value is {a}, which will be sent to Bob publicly.\n")

# public value of Bob's to be sent over to Alice.
b = g**B % p
print(f"Bob's calculated public value is {b}, which will be sent over to Alice publicly.\n")

# Alice uses Bob's public value and her private value to compute the secret key.
secret_key1 = b**A % p

# Bob uses Alice's public value and his private value to compute the secret key.
secret_key2 = a**B % p

if secret_key1 == secret_key2:
    print("Secret key has been successfully derived! See below...\n")
    print(f"Bob uses Alice's public value which is {a}, and his own private value which is {B}, "
          f"the secret key is {secret_key2}.\n")
    print(f"Alice uses Bob's public value which is {b}, and her own private random value which is {A}, "
          f"the secret key is {secret_key1}.")
else:
    print("Alice and Bob have different secret keys, which is wrong! Try again!")
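To sanity-check the math with small fixed numbers instead of random ones (these particular values are mine, not from the post or the video):

g, p = 5, 23
A, B = 6, 15
a = g**A % p                       # 8, sent publicly to Bob
b = g**B % p                       # 19, sent publicly to Alice
assert b**A % p == a**B % p == 2   # both sides derive the same secret, 2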
https://cyruslab.net/2019/12/01/python-diffie-hellman-key-exchange-demo/
It's like upvotes, except they don't count for anything.
Here's the idea...
I send this app out to the community. The users will click the button. They will get hooked. They will continue to hit the button. We all work together to reach the goal of 1M (1,000,000) votes!
But I'm bored of clicker games!
Well, this isn't your basic clicker game. Instead of clicking a button and increasing points by yourself, this allows you to collaborate with the entire community to get to the goal of 1M votes.
No, the votes aren't for me. I just called them "votes" because why not!
no, why is it privated?
@TaylorLiang In order to prevent people from checking and bypassing the security code (which happened yesterday)
@TaylorLiang I tried to use cookies, but socketio was not reading them correctly. It's fine now tho, botting has stopped. Now it's just the user's job to click.
@TaylorLiang Yeah. Just to hide some of the captcha and other security stuff so it doesn't get botted so easily anymore.
You didn't said we can't do this, so I'm simply not breaking the rules :)
I split windows: the big one on the right I can put anything in, another tab to watch videos or something, while the simple script I wrote automatically puts the mouse on the vote button on the screen on the left, clicks every 5 and a half seconds, and after clicking returns the cursor to where it was before. It's very fast, you don't even notice that it moved; you just see a red cooldown.
Python Script:
from pynput.mouse import Controller, Button as btn

cur = Controller()
pos = (309, 611)

while True:
    pos2 = cur.position
    cur.position = pos
    cur.click(btn.left, 1)
    cur.position = pos2
    __import__('time').sleep(5.5)
Must have pynput module. And position the windows as I did or change the variable "pos" to match yours.
The only problem would be when you have to type 😔
@Invizibo ive been botting vote on my own for 2 hours thats how there are 66k
with a simple console script:
const num_clients = 275
const clients = []

const init_interval = setInterval(() => {
    clients.push(io())
    if (clients.length >= num_clients) {
        clearInterval(init_interval)
    }
}, 20)

setTimeout(() => {
    let c = 0
    setInterval(() => {
        clients[c].emit("vote")
        c++
        if (c >= clients.length) {
            c = 0
        }
    }, 20)
}, 1000)
this script also works with multiple vote windows and a customizable amount of clients (I found over 1000 craps the server out)
Come on....
data.toLocaleString()
much more yummy :)
also:
while(true){ cycles ++; ++ votes; }
nice work!
I see you have used my suggestion of url shortening/taking advantage :)
no the
/socket.io/socket.io.js
.toLocaleString() is so useful!
1000 -> "1,000"
e?
@MrEconomical e
@Vandesm14 :eyes:
@MrEconomical why do you comments e?
@TaylorLiang much e
@MrEconomical scared confusion
@MrEconomical E indeeed
@MrEconomical e.
@EvanSkyberg
@MrEconomical d
https://repl.it/talk/share/Vote/28181?order=votes
Networking in Unity is a large and probably confusing topic, so this lesson is designed to help introduce some of the basic requirements that will be needed to complete our muliplayer game. We will create a few specialized assets and scripts and make use of new classes and tags you may not have seen before. By the time we are done, you should understand how to get two players joined in a match, as well as determine who is who.
Quick Overview
I made a lot of incorrect assumptions while learning Networking with Unity that I hope to spare my readers from. I created the following diagram to help summarize the important stuff we will need for this project:
In this diagram I have created two boxes, one marked as “Host” and one marked as “Client”. These boxes represent separate running instances of our game. In the future this could be instances running on different computers or phones, etc, but while we are testing one will be a built executable and the other will be the Unity editor in play mode. Having both running instances on a single machine helps greatly in learning what’s going on behind the scenes and being able to debug issues.
Unity provides a script that configures a HUD and will allow you to determine which running instance is registered as the Host and which is registered as the Client. It is important to note though that Unity allows a Host to be both a server and a client simultaneously, and this makes a lot of things easier and cheaper to operate.
The two running game instances communicate with each other via specially marked methods. “Rpc” methods are called from the server and run on all the clients (this can include itself). “Cmd” methods are called from any client but run on the server (this can also include itself). The great thing is that you don't have to know if you are the client or the server, you just call the method and it will handle the rest. For this project I will use pairs of these messages whenever I want to keep game state in sync between them. For example, if a move is being made on the board, then I would use a Cmd which would then call an Rpc so that the move would be made on all game instances at the same time.
Each running game instance will have a special “Player” GameObject that represents it and all of these player objects will load for each game instance. In other words, because there are two game instances joined in a match, there will be two players loaded in each. The “Player” GameObjects are a new type of class that has fields such as “isServer”, “isClient”, and “isLocalPlayer” which are designed to help differentiate what is what in your code.
One of the fields, “isLocalPlayer”, will only be true on the Player instance which represents the game instance it was instantiated on. In the diagram I show that code running on the Host's game will see that Player1 is the local player, and code running on the Client's game will see that Player2 is the local player.
The other two fields, “isServer” and “isClient”, I repeatedly made wrong assumptions about. Unlike “isLocalPlayer”, these fields relate to the game instance that a player is running on, and NOT the player object itself. For example, I originally had believed that Player1 would return true for “isServer” regardless of whether it was accessed on the Host or Client game instance. Instead, I discovered that both Player objects return true for the “isServer” property if checked on the Host game, and both Players return false if checked on the Client game.
To make it more confusing, in my project, all players on all game instances return true for “isClient” because my Host also acted as a Client. Some of my early mistakes included logic where I would check if the player was “not” the client, but that code would never execute.
Scene Setup
There is more setup we will need to do in order to add networking to our game. First, we will need to create a prefab to represent the players of our game:
- Create an empty GameObject named “Player”
- Add the “Network Identity” component
- Create and add a new C# script named “PlayerController” – we will implement it later. Save the script in the “Scripts/Controller” folder
- Create a prefab from the Player
- Delete the instance of the Player in the scene
Unity provides scripts which manage the network for us, so let's create an object for our manager:
- Create an empty GameObject named “Network Manager”
- Add the “Network Manager” component
- Expand the “Spawn Info” group in the Inspector for this component
- Set the “Player Prefab” to use the “Player” prefab we created earlier
- Add the “Network Manager HUD” component
- Save the scene and project
Player Controller
The creation and destruction of player objects is pretty important. In our finished implementation we will need to make sure two players have joined before we begin a game. Likewise it would be important to have a means for responding to events such as when a player loses their connection. Go ahead and open the “PlayerController” script for editing and replace the template code with the following:
using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using TicTacToe;

public class PlayerController : NetworkBehaviour
{
    // Add Code Here
}
Take a moment and notice that this is not inheriting directly from “MonoBehaviour” but is a subclass of a new type called “NetworkBehaviour” instead. This class will provide us a means of communication between player’s over the network.
When I created this class initially, I chose the name PlayerController partially because it was intuitive, but probably also because the Unity Networking Tutorial I followed along with did the same. After I spent a little more time in the networking docs I noticed that Unity has a class by the same name in its Networking namespace. Technically this is still not a problem, because the different namespace allows us a way to work around conflicts, although I still am not happy about it because it might lead to confusion later.
public const string Started = "PlayerController.Start";
public const string StartedLocal = "PlayerController.StartedLocal";
public const string Destroyed = "PlayerController.Destroyed";
public const string CoinToss = "PlayerController.CoinToss";
public const string RequestMarkSquare = "PlayerController.RequestMarkSquare";
Our class will post a variety of notifications. The first three are related to its object life-cycle. The coin toss notification is used to indicate when I have decided which player will go first, and the request mark square notification indicates when a player is attempting to take a turn by clicking on the game board.
public int score;
public Mark mark;
I will also go ahead and add two public fields. I want each player to keep track of its own score – which is the number of times they have won a game. I also want to keep track of which kind of mark the player is using – either an ‘X’ or an ‘O’. Unity provides something called a SyncVar attribute which could automatically keep values synchronized, but for some reason I found it very confusing to work with and ended up managing state myself.
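For comparison, a minimal sketch of the SyncVar alternative mentioned above (the class name is hypothetical, and this is not how this project ends up doing it):

using UnityEngine.Networking;

public class SyncedScore : NetworkBehaviour
{
    // UNet automatically replicates this value from the server to all
    // clients whenever the server changes it.
    [SyncVar]
    public int score;
}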
public override void OnStartClient ()
{
    base.OnStartClient ();
    this.PostNotification(Started);
}

public override void OnStartLocalPlayer ()
{
    base.OnStartLocalPlayer ();
    this.PostNotification(StartedLocal);
}

void OnDestroy ()
{
    this.PostNotification(Destroyed);
}
All NetworkBehaviour scripts will invoke “OnStartClient” when they become active – in our case this includes both player obejcts which are instantiated by the Network Manager. Another method exists named “OnStartLocalPlayer” to help differentiate the player which represents the “local” player. Finally we use the MonoBehaviour method “OnDestroy” to be able to post a notification when a player has left the game.
[Command]
public void CmdCoinToss ()
{
    RpcCoinToss(Random.value < 0.5);
}

[ClientRpc]
void RpcCoinToss (bool coinToss)
{
    this.PostNotification(CoinToss, coinToss);
}
Here is the first example of a paired “Cmd” to “Rpc” call which I use to make sure that both Games stay in sync. You can tell when a method is a “Command” method because it has the [Command] tag as well as a prefix of “Cmd” on the method name itself. Likewise a “ClientRpc” method uses the [ClientRpc] tag and an “Rpc” prefix on its own method name. According to Unity’s documentation, both the tags and the prefixes are required.
In this example, my game will use a command on a certain player to “flip a coin” to decide who goes first. The command method is actually executed on the equivalent player on the Host game regardless of whether the method was actually invoked on the Client or the Host. There, I generate a random true or false value and pass that same value to all clients via an Rpc call. Only the Host (server) can call Rpc methods, and the Rpc method is applied to all client instances. The Rpc method in this example takes the result of the coin flip and posts a notification of the result.
[Command]
public void CmdMarkSquare (int index)
{
    RpcMarkSquare(index);
}

[ClientRpc]
void RpcMarkSquare (int index)
{
    this.PostNotification(RequestMarkSquare, index);
}
Here we have another example of the Cmd to Rpc logic which will be used when a player needs to take a turn. The player uses a command so that the server can keep all the client game instances synched with the Rpc method.
Match Controller
You’ve already gotten a hint on how to differentiate one player from another. However, this by itself isn’t sufficient for our needs. I want to have a single place which can keep track of all of my players, and which can provide convenient references for me of which player is local and which is remote, as well as which player is the host of the match and which player joined (the client). Some of this was a little tricky because the Host is actually both a server and client simultaneously.
When I first created this class I had not encountered the ClientScene class provided by Unity. I still haven’t worked with this class, but it does appear that it would handle at least some of my issues like keeping track of all the players.
- Create a new GameObject in the scene called “Match Controller”
- Create and add a new C# script named “MatchController”. Save the script in the “Scripts/Controller” folder
- Open the script for editing and replace the template code with the following
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class MatchController : MonoBehaviour
{
    // Add Code Here
}
I’m not doing anything special with this class regarding networking, I am simply providing a place to keep track of networked items. Therefore, I was able to make it a MonoBehaviour.
public const string MatchReady = "MatchController.Ready";
The class will only post a single notification which let’s listeners know when a match is ready. This will be fired when it becomes aware of both Players and can differentiate which is which regarding who is local, who is remote, etc.
public bool IsReady { get { return localPlayer != null && remotePlayer != null; }}
public PlayerController localPlayer;
public PlayerController remotePlayer;
public PlayerController hostPlayer;
public PlayerController clientPlayer;
public List<PlayerController> players = new List<PlayerController>();
For classes which miss the notification, or don’t want to store some sort of state regarding it, they will be able to check the “IsReady” property of the MatchController. This will be true as long as we have a player assigned to both the local and remote player fields.
For convenience I have four fields regarding the players: local, remote, host and client. These four fields will be filled out using the two actual players, which means that two of the fields will be duplicates of another reference. Sometimes it is convenient to think of the players in a different way, and that is why I cache it.
Before I have both players registered, I may not know which player is the local player and which is the remote player. Anytime I get an event that a player was created I store it in a list of players so I can check it later and update the conveniently named fields to access them by.
void OnEnable ()
{
    this.AddObserver(OnPlayerStarted, PlayerController.Started);
    this.AddObserver(OnPlayerStartedLocal, PlayerController.StartedLocal);
    this.AddObserver(OnPlayerDestroyed, PlayerController.Destroyed);
}

void OnDisable ()
{
    this.RemoveObserver(OnPlayerStarted, PlayerController.Started);
    this.RemoveObserver(OnPlayerStartedLocal, PlayerController.StartedLocal);
    this.RemoveObserver(OnPlayerDestroyed, PlayerController.Destroyed);
}
Rather than making the “PlayerController” be tightly coupled to the “MatchController” and notify it upon creation and destruction, I simply caused it to post notifications. Here, we register as an observer for each of the relevant notifications.
void OnPlayerStarted (object sender, object args)
{
    players.Add((PlayerController)sender);
    Configure();
}
Both “PlayerController” instances will trigger the “OnPlayerStarted” method, so I use that handler to add the sender to my list of players. Afterwards I call Configure – which tries to sort the list of players into the named fields.
void OnPlayerStartedLocal (object sender, object args)
{
    localPlayer = (PlayerController)sender;
    Configure();
}
Only one of the “PlayerController” instances will trigger the “OnPlayerStartedLocal” method, so with this I can go ahead and assign one of my cached convenience fields. On the Host game, the local player method will be invoked before the other player connects, and I won't be able to finish configuration, but on the Client game it won't be invoked until both players have started. In this case I need to call “Configure” because I will finally have all of the data needed to finish setup.
void OnPlayerDestroyed (object sender, object args)
{
    PlayerController pc = (PlayerController)sender;
    if (localPlayer == pc)
        localPlayer = null;
    if (remotePlayer == pc)
        remotePlayer = null;
    if (hostPlayer == pc)
        hostPlayer = null;
    if (clientPlayer == pc)
        clientPlayer = null;
    if (players.Contains(pc))
        players.Remove(pc);
}
When a player disconnects or a match is quit, the players will be destroyed. I listen for this notification so that I can clear out any references I might have had, and will know that the match is no longer playable.
void Configure ()
{
    if (localPlayer == null || players.Count < 2)
        return;

    for (int i = 0; i < players.Count; ++i)
    {
        if (players[i] != localPlayer)
        {
            remotePlayer = players[i];
            break;
        }
    }

    hostPlayer = (localPlayer.isServer) ? localPlayer : remotePlayer;
    clientPlayer = (localPlayer.isServer) ? remotePlayer : localPlayer;

    this.PostNotification(MatchReady);
}
In order to finish setup, we need to have two players and know which of the players is the “local” player. If I don't have the needed bits of information I just quit early. Otherwise I loop through the list of players to try to find the player which isn't the local player and then I will mark that player as the remote player.
Next I want to be able to think of the local and remote players in a slightly different way. I want to know which player is local on the Host game, and which player is local on the Client game. I can determine this value by looking at the “isServer” field. If my local player’s “isServer” is true, then I know that I am on the Host game, and therefore the host player is the local player, otherwise it will be the remote player. To find the client player I used the same check but flipped the order of the players.
Testing Pipeline
The network manager will handle the creation and destruction of “Player” instances based on real players connecting to our game. You can test this out using a process which you will need to get in the habit of doing so you can test your networked game:
- Choose the menu bar option “File -> Build & Run”. Choose a location to save your file, and then on the configuration screen choose a small resolution and check the box for “Windowed” so you can still see Unity’s IDE simultaneously.
- Play the built game and choose “LAN Host (H)” from the HUD on screen.
- Hit play on Unity’s editor window as well, and this time choose “LAN Client (C)” from the HUD screen.
While the match is connected you can look at the hierarchy pane in the Unity Editor and see that two “Player” prefabs have been instantiated. If you use the HUD to “Stop” the game then the two “Player” prefabs will be destroyed. Before you quit, select the MatchController and verify that all four player references have been filled in. In case you didn’t know, you can left click the player references in the inspector and it will temporarily highlight the same object in the hierarchy pane. One of the references should be local, and the other should be remote. Now you have a convenient way to find out which is which.
Note that it doesn’t actually matter which running instance is the Host and which is the Client, but you do need exactly one of each to test with. You probably should try it both ways, particularly if you are experiencing a bug that appears exclusively on either type of connection. Make the unity editor use whichever connection type experiences the bug so you can access information in the inspector and utilize Debug Logs etc.
Summary
In this lesson I provided a quick primer on Networking in Unity to help you avoid making some of the wrong assumptions and mistakes that I struggled with. Afterward, we dove right in and began creating player and network manager assets. We created a player controller script to manage communication across the match and make sure that both game instances remained in synch, and we also created a match controller to keep track of the players and identify them.
With all of the new setup in place, you are now able to create and join a match and see “player” objects get created in the scene as players join. We discussed how this workflow could be achieved on a single machine in order to help speed up development and testing.
Don’t forget that if you get stuck on something, you can always check the repository for a working version here.
25 thoughts on “Turn Based Multiplayer – Part 4”
Can you help me convert my turn based game into a multiplayer? Thank you.
Hey Kim, can you be a bit more specific? What kinds of problems are you encountering? Did you follow along with this series and something still isn’t clear?
my game is a like snake and ladder . I’m having problem converting it into a multiplayer.
Hey Kim, yeah I’m familiar with a variant on that – Chutes and Ladders (by Milton Bradley). This should definitely be able to be implemented using Turn Based Multiplayer if you follow the example in my tutorial. Unfortunately, I still don’t know what exactly you need help with. It might be as simple as passing the dice roll along instead of passing a square index that a user clicked on.
Also, I would prefer to keep the comments here to be something that is relevant to everyone, so if you want to start a more specific thread on my Forum then I would be happy to continue to work with you there.
Okay sir, I’m having a problem on how can I set a player and how can I choose who is next to roll the dice. I got 4 players to play with. I followed your tutorial and I got some idea but I don’t know where to start. Sorry I’m not being specific. Thank you.
Okay, so the challenge you are facing is how to make a turn based game with more than 2 players. Here are my thoughts (though I haven’t actually created such a game so you may have to experiment):
The match controller is currently responsible for making sure all of the players have joined the match. Currently I only have the concept of a host player and a client player which wont be as helpful because your scenario will have a host player and client players (plural). I think what I would do is to have the match controller expect a certain number of players to join. As each player connects, I could then take some sort of ID, such as the “NetworkBehaviour.netID” property as a way to identify the different players. I would add each player’s id to an array of player ids in the match controller as they join until I have the number of players I want. When the game begins, I would pass along the array of player ids from the host to all of the clients so they all know the order of players. You can of course shuffle the array before passing it along so that the player order is random.
During gameplay, I would have the game store an index for the player whose turn it should be. Each client could compare the player id at that index against their own id to see if it is their turn or not. Does that help?
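A rough sketch of that comparison (all names here are hypothetical; NetworkInstanceId is UNet's per-object id type):

// Stored on every client after the host shuffles and broadcasts the order.
uint[] playerOrder;
int turnIndex;

bool IsLocalPlayersTurn (NetworkInstanceId localPlayerId)
{
    return playerOrder[turnIndex] == localPlayerId.Value;
}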
Wow! thank you. It helps a lot! it gives me an idea where to start. Thank you again!
Hey, thank you so much for your tutorial! It helped me a lot!
Though, I still have a problem. In the Show() method in the Board.cs file, instead of using a Collection, I am instantiating the GameObjects. It also works, that I can see the Objects on both clients. The thing is however, when I am trying to access those GameObjects in script, it tells me that there is no GameObject component attached to it. I can’t even change the name of the instantiated GameObject. Can you help me?
Thanks!!
You’re very welcome, and glad you managed to solve the question on your own before it even got approved 🙂
Great tutorial. Thank you so much! I am currently working on an implementation of 9 Man Morris, and found this tutorial very helpful. However, I would need to send RPCs containing two integers (move from and move to). This would also mean I'd have to edit your NotificationExtensions. I have yet to go through the first few parts of the tutorial, but I was wondering if you would recommend that I edit the NotificationExtensions, or rework the project with only RPCs.
Hey Russell, you’re welcome – I’m glad you enjoyed it. You don’t actually need to modify anything. Rather than sending two parameters, simply send a structure which is made up of the two integers you need. In C# everything can be passed as an object, even a simple value type like an integer, so you would also be able to reuse the structure with the notification system.
You can see what parameter types are allowed on networking calls here at the bottom (Arguments to Remote Actions):
I’m following along with the Pokemon Tutorial while implementing a personal project, and jumping back to this one, because I’m trying to put in the initial networking components. I hit a wall, and I’m hoping you might have some insight. I’m currently just trying to create the equivalent of pressing the LAN Host(H) or LAN Client (C)-localhost buttons.
I have have the Network Manager HUD displayed for visual feedback, but I’m trying to handle the actual network connection completely via code.
Getting the server up and running seems as simple as:
NetworkServer.Listen(7777);
but I’m having a devil of a time getting the client-side up and running.
From what I’ve read elsewhere it should be as simple as:
NetworkClient myClient;
…
myClient = ClientScene.ConnectLocalServer();
sadly, this seems to have no effect. Any ideas?
Okay, found my issue, or at least the proper way to handle it for where I am in the process right now.
I was going about it all wrong. What I needed was to grab a reference to the NetworkManager component via:
manager = GetComponent<NetworkManager>();
Then use that to trigger the NetworkManager methods:
manager.StartHost();
or
manager.StartClient();
Thanks for sharing what you found out on the issue!
Would this model be safe for a multiplayer card game? I initially thought I’d have to use an external game server for all the business logic to protect against cheating.
I am not a security expert, but from what I hear, an authoritative server would be needed if you wanted something to be competitive or had any requirement of safety. I would only use this sort of solution for a casual or hobbyist project.
Hi, I love your tic-tac-toe multiplayer tutorial. I’m trying to implement a turn-based multiplayer board game like Words with Friends (just 2 players.) The challenge is that the players may not be online at the same time, so the state of the board needs to be stored while we wait for the next player to connect. I don’t know whether to try Unet or a different 3rd party like GameSparks. I’d really appreciate your advice on how to achieve this goal. Thank you much.
Glad you enjoyed it! Unfortunately I don’t have any experience doing turn-based multiplayer in Unity like this. I have done it with other things like GameKit on iOS, and I would expect Google Play Game Services would be great as well (Android or cross platform). I don’t know much about GameSparks but it might also be just as good. Good luck!
You spelled “disucessed” wrong. 😉
Very helpful tutorial, though. Thank you for putting so much effort into this.
Hi, great tutorial!
I was following along until I got to the network part (which of course is what I am most interested in learning), but right at the beginning I can't find the "Network Manager" component. Apparently, Unity is in the middle of big changes in how they handle networking, and some of the old code is now deprecated.
Do you plan on updating the tutorial with the new UNet library once it’s out of alpha and fully stable? Is it still possible to use the old network manager option (I couldn’t find it but maybe was hidden behind some sort of option)?
Great questions. I found this blog post which shows how long the various components will be supported. You can always download older versions of Unity if you still want to follow along, but I can imagine that may not sound very desirable for learning something that is already deprecated.
I would definitely be interested in updating the tutorial when the new version is out. I feel the same way about upgrading all my tutorials for when their new ECS is finalized too. Hope that helps!
Hi,
I am a big fan; I have been following several pieces of your project and learned a lot, namely the design pattern, the notification system, etc. So I want to say thank you before throwing out the question! I hope you have time to make more of these tutorials, even videos. Your work deserves to be known by more people.
Currently working in Unity 2019.3, I found the following code in MatchController:
void OnPlayerStarted (object sender, object args)
{
    players.Add((PlayerController)sender);
    Configure();
}
The IDE told me it cannot cast sender as a PlayerController. I figured out I had to rewrite the PlayerController code like this:
public override void OnStartClient()
{
    base.OnStartClient();
    this.PostNotification(Started);
}
and change the last line to:
this.PostNotification(Started, this);
then rework the MatchController to cast the args, not the sender, as the PlayerController. Problem fixed, but I wonder why? I still haven't figured out your Notification code all the way through, but I have been learning really hard.
Thank you very much.
Thanks for the kind words, and I am glad you’ve been enjoying my content so much. 🙂
Regarding the question, I’m not sure what was going on. I have on rare occasions gotten errors that were misleading, usually they appear lower in the list of errors than something else that is causing the “real” problem. Did you see other errors at the same time?
If you consider your changes, both `sender` and `args` are passed as `object` types and so it wouldn’t make sense that the compiler could cast one as a `PlayerController` but not the other. The only thing I can imagine is that maybe you had some sort of typo or something and it was giving an unhelpful message. Good luck!
You were right – it was a typo!
I looked thoroughly at my NotificationCenter script:
if (subTable.ContainsKey(this))
{
    List<Handler> handlers = subTable[this];
    _invoking.Add(handlers);
    for (int i = 0; i < handlers.Count; i++)
        handlers[i](this, e); // this is the typo
    _invoking.Remove(handlers);
}
I mistakenly put this instead of sender in handlers[i](sender, e); that's the reason all the sender Debug info showed "NotificationCenter" instead of the real sender.
Problem solved! I didn't fully understand these lines of code until now, obviously. Another lesson learned.
Thank you so much!
http://theliquidfire.com/2016/05/05/turn-based-multiplayer-part-4/
Things used in this project
Story
Introduction
This topic will teach you how to control an RGB LED on an Arduino 101 board with an Android device. We used App Inventor to make the Android app because it is graphical (just like Scratch) and can easily build an .apk installation file for almost any Android device. For those who have no Android device, and for schools, App Inventor also has an emulator for basic usage (hardware-related functions like sensors and Bluetooth are not available on the emulator).
After this topic, you can attach more RGB LEDs or other display modules to achieve more astonishing effects. Try to build your own Philips Hue light bulb!
Note: This topic uses the Arduino 101 board for its onboard BLE communication ability. If you are using an earlier Bluetooth module like the HC-05 or HC-06, you should use App Inventor's BluetoothClient component instead of the BluetoothLE component used in this topic.
Video
Let’s start
This topic will show you how to get the RGB intensity of the pixel where you touched, then send that data to the Arduino 101 to control an attached RGB LED.
Hardware
We are using an RGB LED (common cathode); please attach its R, G and B terminals to the Arduino 101's pins 9, 6 and 3 (~PWM, of course), as shown below. If yours is common anode, please refer to its datasheet and modify the circuit.
Software
Software can be divided into App Inventor and Arduino 101, please see sections below:
App Inventor
App Inventor is a graphical, browser-based IDE for building Android apps; please log in with your Gmail account. More tutorials are available on the App Inventor site.
Designer
Please add the components below to your project; the numbers in parentheses mean how many of each you have to add. For example, Button(2) means there are two button components in your project. You don't have to rename the components as we did, but for better readability we suggest giving components of the same type distinct names.
Canvas(1): get touch point’s coordinates for RGB color intensity.
Button(2):
- Btn_Connect: Asks the BluetoothLE component to connect to the specified BLE device when clicked.
- Btn_DisConnect: Asks the BluetoothLE component to disconnect from the specified BLE device when clicked.
Sliders(3): Three sliders to represent the RGB value of touch point.
BluetoothLE(1): Send/receive data through the Bluetooth Low Energy protocol.
Clock(1): Ask BluetoothLE component to send data to Arduino 101 periodically.
Blocks
1. Initialize
Declare variables. addr is the BLE device's (the Arduino 101's) hardware address, which is labeled on the back of your Arduino 101. r, g and b are numerical variables that hold the RGB value of the touch point.
When your app initializes (the Screen1.Initialize event), we ask the BluetoothLE component to StartScanning for available BLE devices and show a related message in the screen title.
2. Connect / Disconnect
- When Btn_Connect is clicked, we ask the BluetoothLE component to connect to the specified BLE device (the addr variable). Then set Btn_DisConnect to be enabled and Btn_Connect to be disabled. The reason is simple: you cannot disconnect when nothing is connected, and vice versa. Finally, show a "Connected" message in the screen title.
- On the other hand, when Btn_DisConnect is clicked, we ask the BluetoothLE component to disconnect and set the two buttons back to their original states, ready for another connection.
3. Drag your finger to get the RGB value of the touch point
In the Ball.Dragged event, whenever the ball is dragged (it tracks your fingertip's location), we do the following:
A. Clear canvas.
B. Use GetPixelColor to get the color intensity of the canvas point you touched. This value is a rather large negative integer; in step D we will extract the real color values from it.
C. Move the Ball to where we’ve touched.
D. Use select list item with split color to extract items #1, #2 and #3 of the touched point's color (its Red, Green and Blue values), then show the final R, G, B values on Label1.
4. Update slider
Next, still in the Ball.Dragged event, we update each Slider's ThumbPosition and the three variables r, g and b with the color intensity at the canvas location the user touched. We are ready to send data to Arduino 101!
If you feel that the code here is too tedious, you can make them into a procedure, which can make your screen more simplified and readable.
5. Send data to Arduino 101
The Clock1 component fires the Clock.Timer event every second; it first combines the red, green and blue values of the touch point, then sends the combined result with BluetoothLE.WriteIntValue. For example, (128, 34, 255) is combined as 128034255; the Arduino will split it back into three independent integers. You can modify the TimerInterval value of the Clock1 component to achieve a better operating experience.
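In plain code, the pack/unpack round trip looks like this (a standalone C sketch of the same arithmetic; note that g and b must stay below 1000 for the scheme to be reversible):

#include <stdio.h>

int main(void) {
    int r = 128, g = 34, b = 255;
    int packed = r * 1000000 + g * 1000 + b;        // 128034255
    int r2 = packed / 1000000;                      // 128
    int g2 = packed / 1000 - r2 * 1000;             // 34
    int b2 = packed - r2 * 1000000 - g2 * 1000;     // 255
    printf("%d -> (%d, %d, %d)\n", packed, r2, g2, b2);
    return 0;
}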
Arduino 101
[source code] First, in the setup() function, a special library is included: <CurieBLE.h> (line 1), which is specially designed for the Arduino 101's Intel Curie chip. We have prepared all the settings for BLE (lines 2-31). As for the UUID, you can use websites like a UUID generator to create your own UUID. Please notice that the UUIDs of the service and the characteristic differ by one digit.
In the loop() function, within the if (LEDStatus.written()) condition (lines 53 to 67), we use incom = LEDStatus.value(); to get the integer value sent from App Inventor. The Arduino separates this integer into three integers representing the RGB LED's light intensity. Finally we use analogWrite() to control the corresponding pins (lines 63~65).
References
1. Arduino 101 product page
2. New $30 Sensor-Packed, Curie-Powered Arduino 101 | Make
3. Arduino 101 LED Blink with Android tutorial
Code
App Inventor (Scratch)
No preview (download only).
Arduino 101 (Arduino)
#include <CurieBLE.h>
#include <stdlib.h>

#define LEDr 9
#define LEDg 6
#define LEDb 3

BLEPeripheral blePeripheral; // BLE Peripheral Device (the board you're programming)
BLEService ControlLED("19B10010-E8F2-537E-4F6C-D104768A1214"); // initialize a BLE Service

// BLE LED Switch Characteristic - custom 128-bit UUID, read and writable by central
BLEUnsignedIntCharacteristic LEDStatus("19B10011-E8F2-537E-4F6C-D104768A1214", BLERead | BLEWrite);

int incom = 0;
int r, g, b;

void setup() {
  Serial.begin(9600);

  // set advertised local name and service UUID:
  blePeripheral.setLocalName("ControlLED");
  blePeripheral.setAdvertisedServiceUuid(ControlLED.uuid());

  // add service and characteristic:
  blePeripheral.addAttribute(ControlLED);
  blePeripheral.addAttribute(LEDStatus);

  // begin advertising BLE Light service:
  blePeripheral.begin();
  Serial.println("BLE RGBLED control.");

  // set LED pins to output mode
  pinMode(LEDr, OUTPUT);
  pinMode(LEDg, OUTPUT);
  pinMode(LEDb, OUTPUT);
}

void loop() {
  // listen for BLE centrals to connect:
  BLECentral central = blePeripheral.central();

  if (central) {
    while (central.connected()) {
      //Serial.println(LEDStatus.written());
      if (LEDStatus.written()) {
        incom = LEDStatus.value(); // take 110225101 for example
        r = incom / 1000000;                  // 110
        g = (incom / 1000 - r * 1000);        // 110225 - 110000 = 225
        b = (incom - r * 1000000 - g * 1000); // 110225101 - 110000000 - 225000 = 101
        Serial.println(incom);
        Serial.println(r);
        Serial.println(g);
        Serial.println(b); // show RGB data on serial monitor
        analogWrite(LEDr, r);
        analogWrite(LEDg, g);
        analogWrite(LEDb, b); // light up LED
        delay(10);
      }
    }

    digitalWrite(LEDr, LOW);
    digitalWrite(LEDg, LOW);
    digitalWrite(LEDb, LOW); // LED turn off
    delay(100);

    // when the central disconnects, print it out:
    Serial.print(F("Disconnected from central: "));
    Serial.println(central.address());
  }
}
https://www.hackster.io/45349/control-rgb-led-by-dragging-arduino-101-app-inventor-98ab0b
Generating typed business foundation classes
This topic describes how to generate a typed Business Foundation (BF) class. It also describes code generation and templates to facilitate access to and modification of business foundation data.
BF is an Object Relation Modeling tool built into Optimizely Commerce that lets you store custom data that does not fit into the Optimizely Commerce data model. You almost always need to store custom data, such as data related to customer pricing, gift cards, and authentication SSO tickets.
Background
Here is a summary of BF entities:
- BF entities are not typed, with string names for fields everywhere in the code. This is error-prone. Use string constants to make sure they are defined only once.
- Caching is not built into BF, unlike the catalog, customer, and order subsystems.
- The interface to retrieve BF entities is generic, which causes low code readability.
- BF provides most of what is needed for paging but lacks a returned record count. When paging is set up, you need to know the following:
- what first record is needed on a page
- the number of records to be returned
- the total number of records available to the paging control (so the number of pages can be calculated)
BF lets you input only the first two parameters and does not give you a count of the total number of records matching the query.
The following example shows entities using the existing BF objects:
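Untyped access generally looks something like the following sketch (the GiftCard class and its field names are invented for illustration, and the namespaces assume the classic Mediachase BF API):

using Mediachase.BusinessFoundation.Data;
using Mediachase.BusinessFoundation.Data.Business;

// Field names are plain strings: a typo compiles fine and only fails at runtime.
EntityObject[] cards = BusinessManager.List("GiftCard",
    new FilterElement[] { FilterElement.EqualElement("Status", "Active") });

foreach (EntityObject card in cards)
{
    decimal balance = (decimal)card["Balance"];
}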
Sample templates and McCodeGen
McCodeGen.exe is a free tool for BF code generation. See the examples below to create a set of typed business foundation classes, based on example templates and the McCodeGen.exe (provided as a download below).
One example of a template is a simple, typed BF class that generates a class file for an existing BF class. It inherits from EntityObject and simply exposes the fields for an EntityObject as typed fields. You can also create templates to create context classes that provide an API to retrieve, update, create, and delete these typed BF objects.
One of the context class templates provides access to the Business Manager methods used to retrieve EntityObject but returns the typed objects instead. The other context class template has this functionality plus caching and full paging support. The latter context class lets you configure the cache timeout.
The typed Business Foundation class template is called EntityObjectSimple.aspx. The non-cached typed BF context class template is EntityObjectAccessTemplateNoCaching.aspx. The cached typed BF context class template is EntityObjectAccessTemplate.aspx.
The following example shows the previous example's code using generated classes:
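With generated classes, the same query reads more like this sketch (again with invented names; the Instance member reflects the singleton design noted below):

// Typed access: field typos now fail at compile time.
GiftCard[] cards = GiftCardAccess.Instance.List(
    new FilterElement[] { FilterElement.EqualElement("Status", "Active") });

foreach (GiftCard card in cards)
{
    decimal balance = card.Balance;
}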
Exploring the classes created
When you explore the classes created in this example, consider:
- Classes generated by the class template have an ExtendedProperties property to provide access to fields that are added to the business foundation entity definition but not defined in the class (for example, because the class was not updated after a field was added).
- Classes generated by the cached context class provide overloads to use a member Boolean property value to determine whether to use the cache, or override the member Boolean indicator with your own specific caching indicator. You can set a default value for whether records are cached or retrieved from the cache and override it when needed.
- The cached context class template explicitly defines cache timeout period. You can change the template to your desired timeout or use config settings to set them.
- Both context classes are singletons.
- The record count takes the FilterElements to build a dynamic SQL query and returns a count of the number of records matching the List(/Search) conditions.
Implementing the example code
- Unzip the download with code and templates.
- Add the following .dlls to the bin directory.
-
- Copy a .mcgen file (it does not matter which one) to the root folder. Rename the .mcgen file so that it is clear which BF class it is associated with and the class' purpose.
- Open the newly-copied version of the .mcgen file in the root folder.
a. Set the connectionString element value to the connection string for the Optimizely Commerce database containing the BF class definition.
b. Set the MetaClass element value to the name of the BF class you are typing.
c. In the mcgen element, set the template for the class you want to use, such as Templates\EntityObjectAccessTemplate.aspx; see the descriptions of the templates above.
d. In the params definition, set the namespace param value to the namespace you would like the class to contain, such as EPiServer.Training.BusinessObjects.
- Open a command line in the root folder. Run the following command: mccodegen -mcgen:<mcgen file> -out:<output file>
For example: mccodegen -mcgen:GiftCard.mcgen -out:GiftCardAccess.cs
A new class is created based on the specified template. Use the typed class template to generate the typed BF class. Use one of the context class templates to generate the context class to access the typed BF class.
- For each typed class or context class you want to create, iterate through steps 2-5.
https://docs.developers.optimizely.com/commerce/v14.0.0-commerce-cloud/docs/generating-typed-business-foundation-classes
Maslakh: a look at Afghanistan's humanitarian disaster
It means "slaughterhouse," an apt enough term as it reflects and links the subjects of food and death. Maslakh is the largest refugee camp in Afghanistan. The largest refugee camp in the world. It's named after a nearby slaughterhouse that has not functioned in quite a long timethere are no animals to process. Now it stands as a source for the name one of the worst catastrophic humanitarian tragedies seen in the decades surrounding the year 2000.
Background
Afghanistan is a poor country, one of the poorest in the world. Much of its population is made up of subsistence farmers and their families. In what (as of the beginning of 2002) is going into its fourth year, severe drought continues to plague these people who already live in a precarious balance due to the conditions of the land, soil, and climate. What crops have grown are quickly depleted, little seems to be growing (and with somewhat mild winters without a lot of snow, spring 2002 does not look very promising), and the animals, which were also suffering from hunger and malnutrition, are long since eaten.
If the food shortage was not enough of a problem, the country has been racked with conflict. There was the Soviet invasion (1979-1989), followed by a period of tribal/civil war which continued even after the Taleban rose to power. Then the United States came in, attempting to bomb the Taleban into submission/extinction.
The long-standing refugee problem (begun by the various conflicts and added to by attempts to escape the Taleban regime) was made far worse with the onset of the current drought. When the US "war on terrorism" began, it exacerbated the problem a great deal, making more flee their homes (occasionally the country, Afghans have long sought refuge in neighboring Iran and Pakistanalmost four million since the Soviet invasion). Additionally, many lost their homes (some their lives) as a result. Further, the conflict disrupted many of the (already inadequate) lines of supply for food and other aid, as well as temporarily caused many relief workers to leave the country or areas most in need.
As a result of the combined problems and situations, Afghanistan, with a population of about 26 million, has an estimated seven million or more people in need of some kind of aid (in September 2001, Human Rights Watch issued a press release saying at least 3.8 million people were reliant on food aid at that time). It also has the largest refugee population in the world (outside its borders) and the largest population of Internally Displaced People (IDPs, basically refugees within their own country).
Before September 2001
Maslakh is near the town of Herat in northwestern Afghanistan. It was set up about four years ago (as of late 2001). Even before the events following the start of the US "war on terrorism" it was brimming with people; it was (is) not alone: there are six camps near Herat. According to a United Nations report from 6 April 2001, there had been as many as 700,000 Afghans who had to leave their homes because of the civil war or the drought since June 2000. Most remained IDPs and many swelled the camps near Herat where, according to Omiad Weekly (the "most widely read Afghan publication in the world"), "120,000 IDPs [live] in just one of the camps" (), and at the time that wasn't even Maslakh, which the article notes had only 80,000.
In early April, the camps were averaging 225 families a day and by the time of the article (15 and 16 April) it was up to between 300 and 400, with an estimated thousand new people a day. At the time, the "aid community" was able to cope, but according to the UN regional coordinator for western Afghanistan "we are dependent on continuous support from donor countries."
Sanitary conditions were already atrocious, Maslakh being singled out as the worst, where even the 1,200 latrines being built would leave them about 2000 short. This, of course, leads to the spread of disease. And for the camp of 80,000 there were only one hundred wellsone for each 800 plus persons (the reader can do the math for the latrines).
The UN's World Food Programme (WFP) was able to keep up with the need, but even thenabout five months before the attacks on the World Trade Center and even longer before the American bombs began to fallthey were estimating that it would have to "continue providing assistance at current or higher levels for at least 12 months."
The US was aware of the problem as can be seen from an Islamic Republic News Agency article from 20 April (the information was provided by the US ambassador to Pakistan), where it is announced that a three-man "humanitarian assessment team" was finishing a seven-day mission that "has highlighted the US government's ongoing concern for the humanitarian crisis." The ambassador reported that "we needed an eyes-on assessment of which way the situation was going. Our preliminary reports say that things are bad and going to get worse up there" ().
By May, CNN reported that Maslakh has 100,000 IDPs with about 2000 arriving daily. It also noted that the UN aid workers no longer had tents to give outthere was no real "place" to put the refugees and they end up in enormous tent cities (in some cases made out of plastic sheeting) or sleeping in the open. The previous "coping" seems to be gone as CNN notes that "there is not enough food to go around" ().
A little over a month later (22 June) the WFP released its Emergency report. It mentions the registration process and says that it needs to be improved. Relief workers register all new arrivals (in theory, it is much more difficult to accomplish, in practice) so that those who are really IDPs can be given the aid they need. This has been a problem all along, growing much worse since September 2001, with people who are not refugees getting aid that should not be going to them or people registering more than once under different names in order to get more food. More often than not the latter is a means to make money on the black market (or often out in the open in Herat) rather than a way to better feed one's family. That this makes the crisis worse by depriving others should be unnecessary to note.
According to the report, 2,079 families (about 10,000 people) were registered that week in Maslakh. The report also states that "it has been decided that Maslakh camp would be closed within the next two months" (), which it was not. It would only grow and the disaster along with it.
In the 4 July Assistance for Afghanistan Weekly Update, a date was given (28 June) when the registration center was to be shut down so it could be shifted to another camp. This has yet to happen (as of early January 2002). In the Update, it reports a survey of 157 families. Of these, 21% were without potable water and 99% believed the upcoming "harvest would be a disaster" ().
After September 2001
Following the World Trade Center attacks and the beginning of the US "campaign" in Afghanistan, more stories began to filter through the media. Still few and far between (some commentators have noted just how few "humanitarian" stories compared to Bosnia or Kosovo), and the most informative tending to come from non-US sources. Still, the public awareness of the severity of the crisis was (is) low, even though "Maslakh, a name that should be on every newspaper front page, is the biggest refugee camp in the world" (Madeleine Bunting in a column for The Guardian on 17 December). But if one searches, one can find out the sad details of the disaster. It only gets worse.
Late November 2001
The New York Times printed an article on Maslakh. Now, it had grown to between 150,000 and 300,000the numbers will always be estimated as the registration process is slow and cheaters exist; also the death rates in the camp with the numbers of incoming refugees (many who aren't or can't be registered) make the population too fluid to pin down. The camp had now been open long enough that primitive shelters made of sun-baked mud are being used in addition to the inadequate supplies of tents.
Interestingly, the article claims that the refugees have been "pushed from their homes, and pulled here, not by war but by drought," something that would surprise many of the refugees and aid workers (as noted, war isn't the sole or even main cause, but to assert that it isn't a cause seems almost deliberately sloppy). On the other hand, later, almost in passing, it's noted that the "delivery of food aid here was interrupted by American airstrikes" (college4.nytimes.com). When the fighting began most aid in the country did stop, as it became too dangerous for workers there. Even following the fall of the Taleban regime, workers would be harassed and aid convoys forced to pay high tolls to cross bridges and pass crossings by United Front soldiers.
Of course, a more clear explanation for the cutoff at the Herat camps came from a spokesman for General Ismail Khan ("the region's top warlord") and was reported in an article in the Detroit Free Press on 3 December: the airport had been closed down because it had been bombed out by US planes.
Even though there are some somewhat "permanent" structures for shelter (though far too few to go around), the NYT article quotes a doctor there as saying that fifteen or twenty people can be found sleeping in one room. It mentions one woman who has none of the precious few blankets available and has her twelve children huddle under her chador to sleep at night.
In addition to the poor sanitation that cannot keep up with the growing masses, this arrangement of many people in small confined spaces helps facilitate the spread of disease and infection. The laundry list of diseases and maladies includes "malaria, diarrhea, tuberculosis, dehydration in summer, respiratory ailments in winter, parasites, hunger and cold," the doctor at the camp estimating that around 30% of the camp inhabitants have recurrent malaria and almost as many have tuberculosis. There are women, as young as thirty, whose bodies can no longer produce milk for their children. Children who are malnourished and in many cases starving. Signs of the protein deficiency disease kwashiorkor becoming more common as evinced by children with swollen hands and feet and sometimes bellies. Marasmus, a deficiency of fats and carbohydrates, is also common; these children emaciated and remaining quiet, no energy to cry.
Medical equipment for testing was almost nonexistent (little sign of improvement forthcoming) and with the patient to doctor ratio, it became difficult to treat in anything more than the most cursory way. All that was worse while the Taleban was in power. The doctor from the article saying that "religious police" would come and threaten to beat him for examining women (some of whom were uncomfortable and/or reluctant to allow themselves to be examined anyway, according to a relief worker in an interview for National Public Radio; I regret I cannot recall when I heard this but it was December 2001). They also made it next to impossible to properly make counts of the refugees, even disallowing graveyard counts of the deceased (where an alarming number of child-sized mounds continues to increase).
As of the date of the article (26 November), foreign aid was finally becoming to trickle back into a flow, with trucks from the International Committee of the Red Cross, the UN, and the Red Crescent Society.1 Aid that would be desparately needed, though almost certainly short of what would be needed to "cope." Since 11 September, the camp had registered two thousand more families.
Stampede
A few days later (29 November), reports from Maslakh offered grim testimony to the state of things in the camp. An Associated Press story reported that four refugees had died in a "stampede" for food near the camp. One of the deceased was a four year old girl whose mother said ("wailing as mourners threw handfuls of dirt in the grave") "I'm glad she died quickly. What is the use of living like this" (). According to camp officials, this is not the first time this had happened. Because of the crowded camp and its inability to keep up with the influx of people (both registering and giving aid) many were gathering around the camp "in a field ringed with human waste. Many were coughing, a sign that communicable diseases are spreading. Few had even a thin blanket to ward off near-freezing temperatures."
The question of "why" presents itself. Why go to places of such human misery? Because the people have no other alternative. At least there is some promise of food, shelter, and medical aidin the mountain villages there is nothing, the food is gone, wood for fuel is gone. A driver (who profits from taking truckloads of refugees to the camps) was quoted as saying "if you saw that area, you would prefer this to that. It is cold there already. The snow has not come yet, but the snow will come and kill everybody." An exaggeration, to be sure, but not enough of one to offer much comfortthe report goes on to add that UNICEF has warned that as many as a 100,000 children could die of "cold, disease and hunger if essential relief supplies are not made available in the next few weeks."
It should be noted that some families remain in their villages because they cannot make the long trek (those that do often lose family members along the way) or pay to be taken to the camps. Once winter has set in and the snows come, many of these villages may become inaccessible. In or out.
November ends
With the Taleban mostly out of power and aid returning to the camps, things were still not looking positive. In a story from the Newark Star-Ledger, Farnaz Fassihi writes of a man tallying the numbers of dead. He had found forty-two who had died from cold and hunger in the previous twenty-four hours, alone. A wife and mother was being buried after having frozen to death the night before (exposure made worse by the lack of tent). Her husband adds that she hadn't eaten in ten days.
The day before the article, thousands of people rushed the aid vehicles that were bringing blankets and water. Trying to avoid a situation like the earlier "stampede," the guards beat away people with sticks. It's really come to this. And if you aren't registered, you aren't eligible for food, and the food packages dropped by the US mostly end up for sale outside the camps, where those most in need cannot afford them at 50¢ apiece (some have ended up crashing into the houses of Afghan people).
Sanitation is still an ever-present problem. The children have no winter clothing or socks and few even have shoes. The coming winter was concerning all the aid workers as it would not only create more problems with those in the camps already, but would almost certainly continue to cause the population to grow.
A twenty year old mother of four (no tent, only a blanket spread on the ground) is quoted, saying "No one cares about us. We are dying of hunger and thirst. We are sleeping in the cold night after night" (). She explains that she feeds her children weeds and grass. Her husband collects garbage to burn for warmth. She asks "Do you know if anywhere in the world there are people suffering like us?"
No, I don't.
December 2001
As December opened, the outlook remained bleak. At least 200,000 are accounted for, with the understanding that the number is surely low. New arrivals continued unabated, and the prospect of many, many more concerned aid workers. In addition to IDPs, thousands of refugees were arriving from nearby Iran; in some cases voluntarily, in others because they were forced to return.
This deportation is a violation of international law; according to the UN: "Countries may not forcibly return (refoulement) refugees to a territory where they face danger or discriminate between groups of refugees" and "states have an obligation to cooperate with UNHCR" (Iran having been a UN member state since 1945). The United Nations High Commissioner for Refugees (UNHCR) has estimated that between November and the end of 2001, about 80,000 refugees returned to Afghanistan.
The mud "houses" are being continually built by Maslakh's "citizens" and the camp, according to a 3 December story in the Detroit Free Press, is now three miles long (4.8 km). Considering the number of inhabitants, the extent of the overcrowding becomes more clear. Each family can only be given about a bowl of sugar and flour (mixed with some oil into a thin gruel) each day ("typically five or six people" according the article's source).
A journalist visits Maslakh
On 9 December, The Telegraph published a story detailing the horrible conditions in Maslakh. It introduced a woman who, along with her five children, had last eaten over a week previous. The meal? A bowl of rice they had to beg for. A couple days before the story was filed, her two year old froze to death during the night. Winter has arrived and it's taking its toll.
At that time it was estimated that as many as forty people were dying of cold and starvation each night as temperatures fell "well below zero" (given the source, I assume it means Celsius). Water freezes, and so do the people of Maslakh, many still without tents or the mud brick shelters. But there are six cemeteries. One wonders how long before they become inadequate to "serve" the people in the camp.
The conflicting estimates arise when the camp administrator claims the camp has a population of 800,000 and a survey by Médecins Sans Frontières (Doctors without Borders) says it is 300,000. Probably somewhere in between but it's difficult to know for sure.
Most harrowing are the stories and pleas of the people.
The journalist (supposedly the first western one to actually visit the camp itself) was given a nine-month-old baby to hold. She writes that she almost dropped it, shocked that it "weighed so little, less than my notebook."
The woman who was introduced earlier told of traveling there with her family (four children and her husband, who is blind) and five other families. How, upon arrival, the camphaving trouble coping with the masses already therewould not allow them to register. According to her, the camp authorities "just tell us to get out and beat us and even the children if we do not move from the registration office." Three of the children in their traveling group had already died, "when we woke they were all wrapped around each other."
There is frustration and even anger (and despair) over the situation. The governor of Herat is quoted as saying "the world has made us a lot of promises. Now people are dying and it has no excuse to act." People are upset that the world is focusing on the Taleban and Osama bin Laden "rather than tackling the conditions which led to them taking over the country."
One woman (the mother of the tiny baby) said "when the Taliban fell we thought the international community would help us. I'm so angry and depressed I even dream of leaving my children here and walking away. If you are a mother can you imagine ever saying that?" The other woman from the beginning of the piece has the final word for the article.
December continues
It's not that no aid is arriving. According to a report from the United Kingdom's Department for International Development, "UNICEF has distributed winter emergency relief items to Maslakh IDP camp in Herat: 10,000 blankets, 10,000 mattresses, 6,000 children's sweaters and 6,000 pairs of winter shoes" (). It also notes (not specifically to Maslakh) more shipments of food from the WFP, medical supplies furnished from the World Health Organization (WHO), and more supplies from the International Organization for Migration (IOM).
With the "security problem" becoming less of a concern, aid groups were able to move more freely and distribute more food. And more donations were coming in. But the problem remained the same. Not enough food and supplies, too many people in need. Of course sometimes reports assure that not to be the case as in a story by Australian journalist Hamish McDonald that was dated the day after the Telegraph piece. In it he writes that those who register can "get access to food, shelter and medical aidof which there is no shortage, according to aid workers" (). That this seems contrary to most every other report, the refugees, themselves, and the rest of the article seems odd.
One of the main subjects of the article is the registration process and how it isn't doing its job (largely because of the huge crush of humanity at the camp). It explains that "the difference between life and death for possibly hundreds of children is a sheet of paper" (the registration). And that those who are able to "muscle" their way to the front of the line (so to speak, though sometimes literally) are more likely to get those registrations. According to the story, there have been "reports and visual verification of people being trampled to death trying to register." And despite the IOM registering as many 1,300 a day, it cannot keep up with the number of people there and those arriving.
But the registration process seems to be a necessary evil and without it, the situation would be worse. Those that can get registered gain access to food and medical care. Once registered, a medical screening is given and children are vaccinated for measlesin December, UNICEF and WHO began a widespread measles vaccination program for the children of Afghanistan (about 35,000 die yearly of the disease which makes up about 40% of "vaccine-preventable childhood deaths," according to UN figures).
But none of it is possible in Maslakh without the registration. And those not able to get registered are simply turned away. A refugee tells a similar story to the one in the Telegraph: "we can't register, and when we go the food kitchens or the clinics say, 'You are not registered, go away' and drive us away with sticks." The man adds that "we are beggars. We beg for money to buy food, and we are sleeping without shelter in the rain. Every day someone died. Yesterday we lost five children."
Officials hope that they can re-register the people at the camps near Herat. They are waiting at Maslakh until a new camp can be opened (and Maslakh closed, presumably). The story ran 10 December, but as of 9 January 2002, this is still the plan but it hasn't yet been accomplished. Recall that the camp was to be shut down prior to September, according to the WFP report.
A new year: January 2002
The promise of a new year doesn't seem to pervade Maslakh, which is described on 1 January by a writer for the Denver Post as "a teeming city of tents and mud huts and ground covered by plastic sheeting" (). He then introduces the reader to a family who is living in a pit dug in the hard, cold ground. They try to keep warm at night: "we lie near each other," said the mother of two, punctuating her statements with a raspy cough. Another family sleeps thirty under a burlap tent held up by tree branches.
The story reports that there was a shortfall of about 5,000 tents (enough for 30,000 people, though given known information they would probably hold far more). And even with increasing food aid, "nearly everyone complains of being hungry." What firewood can be had consists of brambles gathered from a mountainside, which requires two days of travel on foot. The article adds that "for food, residents fight it out." Residents who have nothing else they can do: "we don't want to be here. But we have no choice."
On 3 January, The Guardian quoted a man with Feed the Children who has fifteen years experience in humanitarian disasters: "I always judge everything by what I have seen in Africa. And this is on the scale of Africa. I was shocked at the living conditions of the new arrivals" (). It also estimated the population at 350,000, with about one hundred dying daily of exposure and starvation. At that time, there were four bakeries working to feed the peoplewith only 8000 loaves of bread a day. Plans to get sixty built and working are in the works. Meanwhile the people wait.
The reporter was mistaken for an aid worker on numerous occasions with people rushing him for help. When they found he was not, they were more than disappointed. Said one woman, "you are just taking pictures. You are not here to help. We can't eat pictures. We are dying. We need food and medicine."
"U.S. Says Helped Avert Wide Famine in Afghanistan"
That was the headline for a Reuters report put out the same day as the above story. The definition of "wide famine" seems in dispute. Especially for the people dying of sickness and starvation. In a bit of self-congratulation, an administrator for the US Agency for International Development (USAID) stated that "it appears from the data we have collected and the reporting we are getting from the field that we have averted widespread famine in Afghanistan. This is a major accomplishment" (dailynews.yahoo.com).
He did concede, on the other hand, that "there are areas, remote areas in the Hazarajat, up in the mountains, and maybe in the Hindu Kush in some valleys, where there could be pockets of need" and that "we don't know because no one's been there. ...We are not assured that every single person is being fed now" (ellipsis in original). Interestingly, the report also noted the Guardian article and its contrary assessment (including the Feed the Children organizer's quote) without comment.
This is not to say the US has not made an effort (an incredible effort in many respects) at getting food to Afghans in need, but that it seems to ignore the reality that it isn't nearly enough, patting itself on the back rather than looking at what is needed to even begin to ameliorate some of the crisis.
The following day, in The Independent, the director for the IOM stated that, following its taking over of running much of the camp, "what we discovered was shocking. It was a complete disaster. It was the worst example of a bad situation. The international agencies had basically given up on Afghanistan, it was a lost cause" (). He added that "Maslakh is not a camp, it's a city. We are trying to get these people to go back home, with support."
The problem is that many have nowhere to return. No livestock, no crops, little potential for either andmore than some might care to admitmany have no homes waiting for them.
A woman who had traveled to Maslakh, who had lost her three children, explained how one died on the trip: "it was our baby. I was so weak I couldn't suckle my baby, and she went away."
A director of WHO is quoted, saying "they should close the camp. It is out of control." But then the people would have nowhere to go and almost no hope of food, shelter, clothing, and medical care (as little as they receive now).
9 January 2002
Today is 9 January 2002. The camp is estimated to have 324,000 people with as many as a thousand arriving each day (no update on the number of deaths). Between the camp and the city of Herat, there are close to 700,000 people in need of aid. The UN assures that it has enough food in the country to feed six million Afghans (for how long?) but not enough workers and trucks to distribute it. Some areas are still a security risk. Maslakh is still open and receiving refugees/IDPs. There are now more stories concerning the situation coming out. Maybe that will help. Maybe not. Even if it does it won't be before thousands more die.
Maybe one day I can write a happy ending to the story. Maybe not.
1The Red Crescent Society is similar to the Red Cross, though from predominantly Muslim countries. During the war between Russia and Turkey in 1876-78, the Ottoman Empire used a flag similar to the Red Cross' except with a red crescent to designate its medical and aid personnel. A coalition of the many national societies of both organizations was established in 1919, under the name the "League of Red Cross Societies" and since 1991 has been known as the "International Federation of Red Cross and Red Crescent Societies."
(Sources: dozens of sources were consulted for this, including multiple pages on the UN site and its related sites; college4.nytimes.com/guests/articles/2001/11/26/886703.xml; ...story=112688; and dailynews.yahoo.com/h/nm/20020103/pl/attack_afghan_famine_dc_1.html)
https://everything2.com/user/sid/writeups/Maslakh
Hey, Scripting Guy! How can I retrieve a list of the System DSNs on a computer?-- RT
Hey, RT. You know, there’s an old Hollywood superstition that suggests that famous movie stars always die in threes: if a famous star dies today, then according to legend two more famous stars are doomed to die in the next week as well. We don’t know if that’s true or not, but we know about an eerily-similar scripting corollary that is true: questions about ODBC Data Sources always come in pairs.
Scoff if you will, but you can’t argue with the facts: two weeks ago we answered a question about retrieving the set of ODBC drivers installed on a computer. And now, out of the blue, we get a question about retrieving System DSNs!
Listen, don’t feel bad: that is spooky.
Note. Yes, we know: you thought this thing about System DSNs was just another urban legend, akin to the old scare story about people eating Pop Rocks, drinking a pop, and then having their stomachs explode. Better think again, huh?
If you have no idea what we’re talking about (something which seems to occur more and more often with this column) System DSNs are simply a shortcut method for connecting to databases and other data sources. You can view a list of the System DSNs available on a computer by bringing up the ODBC Data Source Administrator dialog box and looking on the System DSN tab:
That’s fine if you’re working on the local machine. But what if you’re interested in retrieving a list of the System DSNs on a remote machine, or what if you’d like to inventory the System DSNs on a whole bunch of computers? How do you do something like that?
Why, you use a script, of course:
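Something along these lines (each piece of which is explained below):

Const HKEY_LOCAL_MACHINE = &H80000002

strComputer = "."
Set objRegistry = GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv")

strKeyPath = "SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources"
objRegistry.EnumValues HKEY_LOCAL_MACHINE, strKeyPath, arrValueNames, arrValueTypes

For i = 0 To UBound(arrValueNames)
    strValueName = arrValueNames(i)
    objRegistry.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strValue
    Wscript.Echo strValueName & " -- " & strValue
Next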
For some reason there's no WMI class or other COM object designed to retrieve System DSNs. But that's OK: because this information is stored in the registry we can still write a script to grab and return the DSNs. As you might expect, that's exactly what the preceding script does: it opens the registry, zips down to HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC DATA SOURCES, and then returns the names and values of all the registry entries found there. Each entry will consist of a name (representing the DSN name) and a value (representing the DSN driver). We'll return and echo both the name and the value, thus replicating the information found in the dialog box.
Our script begins by defining a constant named HKEY_LOCAL_MACHINE and setting the value to &H80000002; we’ll use this constant to indicate the registry hive we want to work with. We then bind to the WMI service, connecting to the StdRegProv class. (Which, as we always hasten to add, is found in the root\default namespace, not root\cimv2. In fact, this was the subject of our first column ever.)
Following that, we assign the registry path within HKEY_LOCAL_MACHINE to a variable named strKeyPath. With that done we can then use this line of code to call the EnumValues method and return a list of all the registry values stored in HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC DATA SOURCES:
objRegistry.EnumValues HKEY_LOCAL_MACHINE, strKeyPath, arrValueNames, arrValueTypes
As you can see, we pass EnumValues four parameters. The first two - HKEY_LOCAL_MACHINE and strKeyPath - are “in parameters” that represent the registry hive and registry path. The second two - arrValueNames and arrValueTypes - are “out parameters;” that means they represent information that the EnumValues method returns to us. After EnumValues runs, arrValueNames will be populated with the names of all the registry values found in HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC DATA SOURCES; arrValueTypes, meanwhile, will be populated with the registry data type for all those values.
Note. In this particular script we won’t actually use the data types; that’s because all the values will be string values of type REG_SZ.
At this point in time we have the name of each registry entry; if all we wanted to do was echo back the name we’d practically be done. However, we also wanted to echo back the value (that is, the driver name) for each DSN. To do that, we need to connect to each individual registry entry and return the value. And to do that we need to set up a For Next loop that walks through the array of registry entries. For each item in that array we assign the DSN name to a variable named strValueName. We then call the GetStringValue method to return the value assigned to that registry entry:
objRegistry.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strValue
In this script, strValue is an out parameter that contains the registry value. We now have the DSN name in one variable (strValueName) and the DSN driver in another variable (strValue). All that’s left is to display that information onscreen:
Wscript.Echo strValueName & " -- " & strValue
When we run the script we should get back information similar to this (depending on the DSNs available on the computer):
Northwind -- SQL Server
Scripting Content -- SQL Server
Events -- Microsoft Access Driver (*.mdb)
Cool, huh? Bear in mind, though, that you should never run this script while eating Pop Rocks. No sense taking any chances, right?
P.S. No need to ask: now you want to know if it’s possible to create and delete System DSNs using a script. Well, for once we’re way ahead of you.
Great script, thank you
Can you tell me how I could run this script against a list of computers?
thanks,
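One common pattern for that (a sketch, not from the column; the computer names are placeholders) is to loop the same code over an array of machine names, guarding against machines that have no System DSNs at all:

Const HKEY_LOCAL_MACHINE = &H80000002
arrComputers = Array("atl-ws-01", "atl-ws-02")

For Each strComputer In arrComputers
    Wscript.Echo strComputer
    Set objRegistry = GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv")
    strKeyPath = "SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources"
    objRegistry.EnumValues HKEY_LOCAL_MACHINE, strKeyPath, arrValueNames, arrValueTypes
    If IsArray(arrValueNames) Then ' skip machines with no System DSNs
        For i = 0 To UBound(arrValueNames)
            strValueName = arrValueNames(i)
            objRegistry.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strValue
            Wscript.Echo "  " & strValueName & " -- " & strValue
        Next
    End If
Next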
Of course, you can retrieve the list of DSN via some registry functions. However, there is a much simpler method available in Windows 8 (Release Preview version) and Windows Server 2012 (Release Candidate version).
You can use the Get-OdbcDsn to retrieve the list of all ODBC DSN installed in the system.
See the blog article for more detail: blogs.msdn.com/.../odbc-dsn-management-in-the-next-release-of-windows-code-named-windows-8-and-windows-server-8.aspx
Thanks,
Ming.
WDAC Team, Microsoft.
(This post include information about a pre-release windows and is subject to change in future releases.)
To retrieve the list of ODBC System DSN, you can use the command:
Get-OdbcDsn -DsnType System
To retrieve the list of ODBC User DSN, you can use the command:
Get-OdbcDsn -DsnType User
Good morning all!
How can I run the script against a list of computers and store the results in a text file or Excel file, listing the DSNs and the computer names? How about if I want to include the last user who logged into the computer?
Thank you,
Your script does not work: "C:\scripts\systemdsn.vbs(11, 1) Microsoft VBScript runtime error: Type mismatch: 'Ubound'". Either you forgot a 'dim' statement or... something?
Can't believe this is helping me out 10 years after it was written.
http://blogs.technet.com/b/heyscriptingguy/archive/2005/07/25/how-can-i-retrieve-a-list-of-the-system-dsns-on-a-computer.aspx
4 Replies to ““Unable to load one or more of the requested types.” when using EntityDataSource with Entity Framework in ASP.NET”
Thanks for this post. I am not sure yet whether it fixes the issue (will know soon), but your post makes it easy to understand why this error occurs, and that is always some comfort (not knowing and just trying stuff to fix it drives me crazy).
Man you totally saved me! I had this headache all morning! thanks a lot!
How do I find out what my full namespace and class are? Mind you, when you add an edmx, it creates all that shit for you.
The namespace and class for the models are sitting in the generated classes under the EDMX. Hit the Plus sign in the solution explorer to see all your generated files. You can then just look at the top of the class definitions for namespace and class names.
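For example, the top of a generated designer file looks something like this (all names here are hypothetical):

namespace MyShop.DataModel // <-- the namespace you need
{
    public partial class MyShopEntities : System.Data.Objects.ObjectContext // <-- the class name
    {
        // ... generated ObjectSets, constructors, etc.
    }
}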
http://www.brianseekford.com/2011/08/01/unable-to-load-one-or-more-of-the-requested-types-when-using-entitydatasource-with-entity-framework-in-asp-net/
Follow the steps below to get things up and running.
This example assumes you have ASP.NET AJAX and the Web Application project templates already installed.
1. Create a new project of type "ASP.NET AJAX-Enabled Web Application"
This will create a new Web Application with a Default.aspx that already has a script manager on in, and a preconfigured web.config. Call this project WebServiceDemo
2. Add a web service
Right-click on your web application, click Add/New Item and then Web Service. Call this web service "DemoService".
3. Make a web service callable from script
Open code file "Default.aspx.cs". Notice your class "DemoService" sits in a namespace "WebServiceDemo". You will need this knowlegde later.
Add to the top:
using System.Web.Script.Services;

Decorate the class with the attribute [ScriptService] and modify the standard hello world method so it looks like this:
[WebService(Namespace = "")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ToolboxItem(false)]
[ScriptService]
public class DemoService : System.Web.Services.WebService
{
[WebMethod]
public string HelloWorld( string ToSomeone )
{
return "Hello World" + ToSomeone;
}
}

4. Register the service on the page where you want to call it from
Open Default.aspx in design view and select the ScriptManager1
Select the "Services" property, and click the button that appears
Click "Add", and enter "DemoService.asmx" for the path property
Click OK. The result should look like this:
<asp:ScriptManager ID="ScriptManager1" runat="server">
  <Services>
    <asp:ServiceReference Path="DemoService.asmx" />
  </Services>
</asp:ScriptManager>

5. Create a client side script to perform the call
Open Default.aspx in source view and enter the following code just before the closing </head> tag:
<script type="text/javascript">6. Create a button to start the web service calling function
function CallService()
{
WebServiceDemo.DemoService.HelloWorld( "Yourself",
Callback );
}
function Callback( result )
{
var outDiv = document.getElementById("outputDiv");
outDiv.innerText = result;
}
</script>

6. Create a button to start the web service calling function
Drag a button onto the form
Set the property "OnClientClick" to "CallService();return false;"
7. Create div for the output data
Drag a div (from the HTML tab) onto the form.
Set its id to "outputDiv";
If you did everything correctly, the text "Hello World Yourself" should appear beneath your button, without a visible postback.
Notice the following things:
- You always need the fully qualified name of the class to call the webservice: WebServiceDemo.DemoService.HelloWorld
- Calling a webservice from javascript always needs two methods: one to make the actual call, and one to receive the results
- The callback method is added as an extra parameter to the web service method parameters. In C#, the method has one parameter, so in javascript two (an optional failure callback can be added as well; see the sketch below).
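The generated proxies accept an optional error callback after the success callback; something like this (the FailedCallback name is mine):

function CallService()
{
    WebServiceDemo.DemoService.HelloWorld( "Yourself",
                                           Callback,
                                           FailedCallback );
}
function FailedCallback( error )
{
    alert( error.get_message() );
}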
Complete code downloadable here.
20 comments:
That's great! Now how can I do this in a user control(ascx file)?
I tried it, but in the user control, IntelliSense does not pick up the web service (as it would in a regular aspx page). I registered the service with the script manager and decorated it with the ScriptService attribute, so help me god!
Hello:
It seems as if the Callback function is not being called (no message box is shown). Here's my script:
function logoutCurrentUser()
{
if (event.clientY < 0)
{
Authentication.LogoutCurrentUser(Callback);
}
}
function Callback(result)
{
alert(result);
}
onbeforeunload="logoutCurrentUser()"
@joseph @anonymous: you might want to be more specific.
How to call a WebService when using "Web Site Project" and service's .asmx file is part of the same project?
@George: the same way as when using a web application, I guess. Maybe you want to specify a full path, starting with a "~"?
Excellent demo -- thanks a lot! Simple yet comprehensive. Just a quick note about a couple of typos elsewhere in the page: "(and apperantly and avarage of some 90 people per day do)" should read "(and apparently some 90 people per day do on average)". (Feel free to edit this bit out of my comment once you've fixed it!)
When you say 'Open code file "Default.aspx.cs"', you mean to say 'open demoservice.asmx.vb'
@zhi in all my samples I use C#. If you use visual basic, the code behind file will indeed be called demoservice.asmx.vb
Thank you so much... i just spent the last hour pulling my hair out trying to figure out why it wasn't working until i found this article and realized i needed the scriptmanager to initialize it.
Hi, this code works if i have the javascript code in an usercontrol???
Thanks in advance
@Gera I think it should. Can't think of a reason why it should not.
What if the web service is developed and hosted by a third party? As you said, we have to add the [ScriptService] tag, which we can't do in that case.
Please advise.
@Rupen: I think you will need to provide a proxy for that. Unless you have control over the remote server, then you might try fiddling with a crossdomain.xml, but I don't know if that will work.
Hi, I didn't get the proxy part; if possible, please explain (or share a link). Secondly, what about creating my own WCF service which will call the ASMX web service?
@Rupen, that is exactly what I mean with a proxy - create a WCF service in your own site, that calls the ASMX service in the other site. I don't have any samples of that, sorry
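For readers wondering what such a proxy might look like: a minimal sketch, assuming you have added a normal web reference RemoteService to the third-party ASMX (all names here are hypothetical):

[ScriptService]
public class ProxyService : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld( string toSomeone )
    {
        // Forward the call server-side; the remote service needs no [ScriptService]
        RemoteService remote = new RemoteService();
        return remote.HelloWorld( toSomeone );
    }
}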
very nice explanation
Thanks a lot for the timely help :)
But please disable that comment moderation. It makes posting a comment quite irritating.
@PURNA I completely agree with your comment on the comment moderation and captcha - but I suggest you try to run a blog that get moderate attention and weed out a ginormous amount of Chinese and p*rn spam link comments every single day ;-)
Hi,
it's not working for me. I have downloaded your code and run it. When I click the button, nothing happens. Am I doing anything wrong? I can't understand it. Please help me run it; I am a newbie in web-service-related work.
thanks with regards.
Razib
@Razib, this is very old code. I'd suggest you look at my wcf post. Maybe that works better for you.
I am using Visual Studio 2010 and it has no project of type "ASP.NET AJAX-Enabled Web Application". I'll try to figure out what to do instead but if anyone has gone down this path already an answer would save me some time. TIA.
https://dotnetbyexample.blogspot.com/2007/10/calling-asmx-web-services-directly-from.html
Feb 11, 2020 learning C++ Makefile ROOT
main()
{
    int a = 2;
    int b = 3;
    int c = a*b;
}
Let's start with this short and not-quite-correct program. Save it to a file called test.cc. test.C or test.cpp works just as well.
Your computer does not understand this at all. You need to convert it to a format that the machine can understand using a program called a compiler. A standard C++ compiler in Linux is called g++. So you can run

$ g++ test.cc   # compile test.cc using g++
$ ls            # check what is created by g++
a.out  test.cc
to create a.out from test.cc using g++. a.out is an executable; you can run it this way:

$ ./a.out   # run a.out in the current directory (./)

Of course, nothing will show up in your terminal. To print out the result of the calculation, you need to modify your program a bit:

#include <iostream>

main()
{
    int a = 2;
    int b = 3;
    int c = a*b;
    std::cout<<c<<std::endl; // print the value of c on screen
}
<< indicates the flow of data. std::cout is something declared in iostream.h. It means the standard output, or your terminal screen. std::endl is declared in iostream.h as well. It means the end of a line, or an Enter to start a new line. Everything after // is a comment and will be ignored by the compiler. #include <iostream> tells the compiler where to search for the declarations of std::cout and std::endl.
Compile the modified test.cc again and run it:

$ g++ test.cc
$ ./a.out
6
Now the calculation result is printed in your terminal screen.
a.out is not a very good name. You can change the name of the compiled file using the -o (output) option of g++:

$ g++ test.cc -o test.exe
$ ls
test.exe  test.cc

As I mentioned at the beginning, test.cc is not quite correct. To see what's wrong, you can ask g++ to print out useful warning messages to help us debug:

$ g++ -Wall test.cc   # turn on "all" warnings
test.cc:3:7: warning: ISO C++ forbids declaration of 'main' with no type [-Wreturn-type]
 main ()
      ^

The warning message is quite clear. The C++ standard forbids declaring the main function without any return type. To get rid of this warning, you need to add int in front of main:

int main()
Try to compile again. The warning message should disappear.
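Putting the pieces together, the corrected program now reads:

#include <iostream>

int main()
{
    int a = 2;
    int b = 3;
    int c = a*b;
    std::cout<<c<<std::endl;
}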
From the simple program above we learned that you can call functions written by others in your program, for example, std::cout in iostream. Now we are going to call a random number generating function provided by ROOT:

#include <iostream>
#include <TRandom.h>

int main()
{
    TRandom generator;                      // create an object of the class TRandom
    std::cout<<generator.Rndm()<<std::endl; // call TRandom's public member function Rndm()
}

Run g++ test.cc and you will get

test.cc:2:22: fatal error: TRandom.h: No such file or directory
 #include <TRandom.h>
                      ^
compilation terminated.

The second include causes the problem while the first does not. This is because the second is not as standard as the first one. You need to tell g++ the location of TRandom.h:

$ g++ -I /path/to/include/ test.cc

This will fix the previous error, but creates more error messages, which are so long that you'd better save them to a log file so that you can check the start of them easily:

$ g++ -I /path/to/include/ test.cc > log 2>&1
Search on Google for "bash error redirect" if you don't understand the meaning of 2>&1 (in short, it sends stderr to wherever stdout points, so both end up in the log file). Open log and you should be able to locate the following two lines close to the beginning of the output:

# error "ROOT requires support for C++11 or higher."
# error "Pass `-std=c++11` as compiler argument."

which say that ROOT-related programs need to be compiled with the C++11 standard. Follow this instruction:

$ g++ -std=c++11 -I /path/to/include/ test.cc

You will get some new error messages:

/tmp/ccmm4zlO.o: In function `main':
test.cc:(.text+0x1c): undefined reference to `TRandom::TRandom(unsigned int)'
test.cc:(.text+0x2b): undefined reference to `TRandom::Rndm()'
...
This is because TRandom.h only declares the class TRandom and the function Rndm(), but their real definitions are saved in a separate file called libMathCore.so, which is a shared object (.so), or a shared library file. You need to tell g++ to link your executable with this library:
$ g++ -std=c++11 -I /path/to/include/ test.cc -L /path/to/ROOT/lib -lCore -lMathCore
Now that you have your test.cc compiled to test.exe, you can try to run it as ./test.exe. ./ means the current directory. This is a way to tell your shell where to find an executable named test.exe. You can add . to your PATH so that you don't have to type ./ all the time:

$ export PATH=.:$PATH # add . to the list of folders that contain executables

Check Linux 101 if you don't know about the environment variable PATH.
Run test.exe and you will get yet again an error message:

test.exe: error while loading shared libraries: libCore.so: cannot open shared object file: No such file or directory

This is because test.exe uses the ROOT library libCore.so and your shell does not know where to find it. Yes, we already told g++ where to find it. But shell and g++ are two different things. We need to instruct them individually. To tell the shell where to find some libraries, you need
export LD_LIBRARY_PATH=/path/to/ROOT/lib:$LD_LIBRARY_PATH
For Mac users, replace LD_LIBRARY_PATH with DYLD_LIBRARY_PATH.
It is too much to type such a long command, g++ -Wall -std=c++11 -I..., just to compile a simple program test.cc. You'd better save this command somewhere so that you can use it later. A standard way to do this is to create a Makefile in the same directory as your test.cc, which contains the following two lines of code:

test.exe: test.cc
	g++ -std=c++11 -I /path/to/include/ test.cc -o test.exe -L /path/to/ROOT/lib -lCore -lMathCore
This is called a rule. Check the make manual to understand the structure of a Makefile rule. Basically, test.exe is the target. test.cc is the prerequisite of this target. If the target is older than its prerequisite, the command in the second line, or the recipe, will be used to update the target. Otherwise, no action will be taken. Be aware that a recipe must start with a real Tab instead of a few spaces.
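If you indent the recipe with spaces by accident, GNU make typically stops with an error like:

$ make
Makefile:2: *** missing separator.  Stop.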
Now you can run
$ make
to compile test.cc to test.exe.
We typed test.cc and test.exe in the recipe of the rule above. But they are simply the prerequisite and the target of that rule. In principle, you have already told make all the information it needs. Indeed, make remembers them. You can use two automatic variables in your recipe to refer to them without defining the two variables (that's why they are called automatic ones):
test.exe: test.cc
	g++ -std=c++11 -I /path/to/include/ $? -o $@ -L /path/to/ROOT/lib -lCore -lMathCore

where $? refers to test.cc and $@ refers to test.exe. There is a full list of automatic variables in the make manual.
There is a standard name for each part of the recipe. For example, g++ is called the compiler, and -std=c++11 -I... are flags. Make maintains a list of standard names (implicit variables) to refer to the individual parts of a recipe. If we use these standard names, the Makefile can be rewritten as

CXXFLAGS = -std=c++11 -I/path/to/include/
LDLIBS = -L/path/to/ROOT/lib -lCore -lMathCore

test.exe: test.cc
	$(CXX) $(CXXFLAGS) $? -o $@ $(LDFLAGS) $(LDLIBS)

You don't have to define $(CXX), since it has a default value of g++.
ROOT provides a command root-config for you to figure out the contents of these standard parts. Run root-config --help to learn more about it. Our Makefile can be modified to work with any machine that has ROOT properly installed:

CXXFLAGS = $(shell root-config --cflags)
LDLIBS = $(shell root-config --libs)

$(shell a-shell-command) is how you call a-shell-command in a Makefile and get the output of it.
Let's add another C++ source file, gaus.cc, into this directory. We need to add another rule in our Makefile to compile it:
test.exe: test.cc
	$(CXX) $(CXXFLAGS) $? -o $@ $(LDFLAGS) $(LDLIBS)
gaus.exe: gaus.cc
	$(CXX) $(CXXFLAGS) $? -o $@ $(LDFLAGS) $(LDLIBS)

However, if you run make in your terminal, only test.cc will be compiled. This is because make without any argument will only run the first rule. To run a specific rule, you need to pass the target name of that rule to make:
$ make gaus.exe
What if you want to compile both of them? You need to add a special rule that depends on both exe files:
all: test.exe gaus.exe
Keep this as the first rule so that you can run it when you call make without any argument. This rule does not have any recipe. The sole purpose of it is to call the rules related to test.exe and gaus.exe.
Unlike test.exe and gaus.exe, all is not really a file. It is just the name of a target. We call it a phony target. To make this point clear, so that make won't do something for a file named all in the directory, we need to add the following line to your Makefile:
.PHONY: all
There are some standard phony targets you may consider to add in your Makefile:
.PHONY: all clean install debug

clean:
	$(RM) *.exe

install:
	install *.exe ~/bin

debug:
	@echo $(CXXFLAGS)
	@echo $(LDLIBS)
where $(RM) is another implicit variable. It has a default value of rm -f. The @ in front of echo is to suppress the printout of the recipe itself in the terminal window.
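With these targets in place, a session could look like the following (the output lines follow from the recipes and the variable values defined above):

$ make clean
rm -f *.exe
$ make debug
-std=c++11 -I/path/to/include/
-L/path/to/ROOT/lib -lCore -lMathCore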
The two rules for test.exe and gaus.exe are very similar. It would be nice if we could combine them into one rule. You can achieve this with a pattern rule:
all: test.exe gaus.exe

%.exe: %.cc
	$(CXX) $(CXXFLAGS) $? -o $@ $(LDFLAGS) $(LDLIBS)

This rule can be applied to any pair of .exe and .cc files.
If we want to add another C++ source file, rndm.cc, to the directory, we just need to add rndm.exe to the list after all:

all: test.exe gaus.exe rndm.exe

It would be nice if we could automate this step. You can achieve this using a make function called wildcard:
SRC = $(wildcard *.cc)
EXE = $(SRC:.cc=.exe)

all: $(EXE)

The first line creates a list of all files that end with .cc and saves it in $(SRC). The second line changes the suffix of every entry in $(SRC) from .cc to .exe and saves the new list in $(EXE). The third line uses this list.
When you add a new .cc file to this directory, you can compile it with make, without modifying the Makefile.
A command in Linux normally does not end with .exe. We can remove it from our final executables:

%: %.cc
	$(CXX) $(CXXFLAGS) $? -o $@ $(LDFLAGS) $(LDLIBS)

This way, test.cc will be compiled to test instead of test.exe. This rule is so commonly used that make includes it as an implicit rule. You don't even need to write it in your Makefile. (Run make -np | less to get a list of all implicit rules.) Remove this rule and your Makefile still works! It seems crazy that we spent so much effort improving our rules, only to get rid of them at last!
As a final touch, we’d like to add some protections, print out some instructions here and there. Your final Makefile would look like this:
# variables used by implicit rules to allocate ROOT headers and libs
CXXFLAGS = $(shell root-config --cflags)
LDLIBS = $(shell root-config --libs)

SRC = $(wildcard *.cc) # list all files that end with .cc
EXE = $(SRC:.cc=)      # remove .cc from those file names

all: $(EXE)
	@echo make install: copy $(EXE) to ~/bin
	@echo make clean: delete $(EXE)
	@echo make debug: check contents of Makefile variables

clean:
	$(RM) $(EXE)

install:
	mkdir -p ~/bin
	install $(EXE) ~/bin
	@echo Please add $(shell root-config --libdir)
	@echo to your LD_LIBRARY_PATH before you run any executable

debug:
	@echo CXXFLAGS = $(CXXFLAGS)
	@echo LDLIBS = $(LDLIBS)
	@echo SRC = $(SRC)
	@echo EXE = $(EXE)

.PHONY: all clean install debug
References for "Final touch":
http://physino.xyz/learning/2020/02/11/automate-compilation-of-root-based-c++-program-with-makefile/
Python doesn't have a dedicated array data type. We can use a list, which has all the characteristics of an array.
The Python array module can be used to create an array of integers and floating-point numbers.
If you want to do some mathematical operations on an array, you should use the NumPy module.
Table of Contents
1. Python add to Array
- If you are using a list as an array, you can use its append(), insert(), and extend() functions (a quick sketch follows this list).
- Using + operator: a new array is returned with the elements from both the arrays.
- append(): adds the element to the end of the array.
- insert(): inserts the element before the given index of the array.
- extend(): used to append the given array elements to this array.
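Here is a minimal sketch of those list operations, before the array-module demo below:

nums = [1, 2, 3]
nums.append(4)        # [1, 2, 3, 4]
nums.insert(0, 10)    # [10, 1, 2, 3, 4]
nums.extend([5, 6])   # [10, 1, 2, 3, 4, 5, 6]
print(nums + [7])     # + returns a new list, leaving nums unchanged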
import array

arr1 = array.array('i', [1, 2, 3])
arr2 = array.array('i', [4, 5, 6])

print(arr1)  # array('i', [1, 2, 3])
print(arr2)  # array('i', [4, 5, 6])

arr3 = arr1 + arr2
print(arr3)  # array('i', [1, 2, 3, 4, 5, 6])

arr1.append(4)
print(arr1)  # array('i', [1, 2, 3, 4])

arr1.insert(0, 10)
print(arr1)  # array('i', [10, 1, 2, 3, 4])

arr1.extend(arr2)
print(arr1)  # array('i', [10, 1, 2, 3, 4, 4, 5, 6])
3. Adding elements to the NumPy Array
- append(): the given values are added to the end of the array. If the axis is not provided, then the arrays are flattened before appending.
- insert(): used to insert values at the given index. We can insert elements based on the axis, otherwise, the elements will be flattened before the insert operation.
>>> import numpy as np
>>> np_arr1 = np.array([[1, 2], [3, 4]])
>>> np_arr2 = np.array([[10, 20], [30, 40]])
>>>
>>> np.append(np_arr1, np_arr2)
array([ 1,  2,  3,  4, 10, 20, 30, 40])
>>>
>>> np.append(np_arr1, np_arr2, axis=0)
array([[ 1,  2],
       [ 3,  4],
       [10, 20],
       [30, 40]])
>>>
>>> np.append(np_arr1, np_arr2, axis=1)
array([[ 1,  2, 10, 20],
       [ 3,  4, 30, 40]])
>>>
>>> np.insert(np_arr1, 1, np_arr2, axis=0)
array([[ 1,  2],
       [10, 20],
       [30, 40],
       [ 3,  4]])
>>>
>>> np.insert(np_arr1, 1, np_arr2, axis=1)
array([[ 1, 10, 30,  2],
       [ 3, 20, 40,  4]])
>>>
all we want to do is something really simple like
somarray = [0] * 256
somearray[5] = 100
but it will throw list index out of range, even after it was created with the right size
why can't you just write a simple solution to this problem
How can I insert an element at a given position using the array module, without using any built-in function, in Python?
Thank you! very helpful
https://www.journaldev.com/33185/python-add-to-array
Discover and use UARTs and serial ports in Elixir
`Circuits.UART` allows you to use UARTs, serial ports, Bluetooth virtual serial port connections and more in Elixir. Some highlights:
Looking for `Nerves.UART`? `Circuits.UART` is the new name. Everything else is the same. Update your project by replacing all references to `nerves_uart` and `Nerves.UART` with `circuits_uart` and `Circuits.UART` and you should be good.
Something doesn't work for you? Check out below and the docs. Post a question on the Elixir Forum or file an issue or PR.
Discover what serial ports are attached:
iex> Circuits.UART.enumerate
%{"COM14" => %{...}}
Start the UART GenServer:
iex> {:ok, pid} = Circuits.UART.start_link
{:ok, #PID<0.132.0>}

The GenServer doesn't open a port automatically, so open up a serial port or UART now. See the results from your call to `Circuits.UART.enumerate/0` for what's available on your system.

iex> Circuits.UART.open(pid, "COM14", speed: 115200, active: false)
:ok

This opens the serial port up at 115200 baud and turns off active mode. This means that you'll have to manually call `Circuits.UART.read` to receive input. In active mode, input from the serial port will be sent as messages. See the docs for all options.
Write something to the serial port:

iex> Circuits.UART.write(pid, "Hello there\r\n")
:ok

See if anyone responds in the next 60 seconds:

iex> Circuits.UART.read(pid, 60000)
{:ok, "Hi"}

Input is reported as soon as it is received, so you may need multiple calls to `read/2` to get everything you want. If you have flow control enabled and stop calling `read/2`, the port will push back to the sender when its buffers fill up.
Enough with passive mode, let's switch to active mode:

iex> Circuits.UART.configure(pid, active: true)
:ok

iex> flush
{:circuits_uart, "COM14", "a"}
{:circuits_uart, "COM14", "b"}
{:circuits_uart, "COM14", "c"}
{:circuits_uart, "COM14", "\r"}
{:circuits_uart, "COM14", "\n"}
:ok
It turns out that `COM14` is a USB-to-serial port. Let's unplug it and see what happens:

iex> flush
{:circuits_uart, "COM14", {:error, :eio}}

Oops. Well, when it appears again, it can be reopened. In passive mode, errors get reported on the calls to `Circuits.UART.read/2` and `Circuits.UART.write/3`.
Back to receiving data, it's a little annoying that characters arrive one by one. That's because our computer is really fast compared to the serial port, but if something slows it down, we could receive two or more characters at a time. Rather than reassemble the characters into lines, we can ask `circuits_uart` to do it for us:

iex> Circuits.UART.configure(pid, framing: {Circuits.UART.Framing.Line, separator: "\r\n"})
:ok
This tells `circuits_uart` to append a `\r\n` to each call to `write/2` and to report each line separately in active and passive mode. You can set this configuration in the call to `open/3` as well. Here's what we get now:

iex> flush
{:circuits_uart, "COM14", "abc"} # Note that the "\r\n" is trimmed
:ok

If your serial data is framed differently, check out the `Circuits.UART.Framing` behaviour and implement your own. `Circuits.UART.Framing.FourByte` is a particularly simple example of a framer.
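As a rough illustration, a custom framer that splits on NUL bytes might look like the sketch below (the callback names follow the Framing behaviour docs; double-check the exact signatures against the version of circuits_uart you use):

defmodule MyApp.Framing.ZeroDelimited do
  @behaviour Circuits.UART.Framing

  # No state beyond the partially received frame
  def init(_args), do: {:ok, <<>>}

  # Append a NUL to every outgoing write
  def add_framing(data, state), do: {:ok, data <> <<0>>, state}

  # Buffer incoming bytes and emit one message per NUL-terminated frame
  def remove_framing(data, partial) do
    parts = :binary.split(partial <> data, <<0>>, [:global])
    {frames, [rest]} = Enum.split(parts, -1)
    status = if rest == <<>>, do: :ok, else: :in_frame
    {status, frames, rest}
  end

  # On rx_framing_timeout, report whatever we have as a partial frame
  def frame_timeout(partial), do: {:ok, [{:partial, partial}], <<>>}

  def flush(_direction, _state), do: <<>>
end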
You can also set a timeout so that a partial line doesn't hang around in the receive buffer forever:

iex> Circuits.UART.configure(pid, rx_framing_timeout: 500)
:ok

Assume that the sender sent the letter "A" without sending anything else for 500 ms.

iex> flush
{:circuits_uart, "COM14", {:partial, "A"}}
Sometimes it's easier to operate with the `pid` of the UART GenServer rather than using the name of the port in active mode. An example of this is when you want to send an acknowledgment back after a receive and you are using more than one serial port at a time. You can do this with the `id: :pid` option to `open/1` or `configure/1`.

iex> Circuits.UART.configure(pid, id: :pid)
:ok

Assume some data was received:

iex> receive do
...>   {:circuits_uart, pid, _} ->
...>     Circuits.UART.write(pid, "ack")
...> end
:ok

To install `circuits_uart`:
1. Add `circuits_uart` to your list of dependencies in `mix.exs`:

def deps do
  [{:circuits_uart, "~> 1.3"}]
end
2. Check that the C compiler dependencies are satisfied (see below)
3. Run `mix deps.get` and `mix compile`

Since this library includes C code, `make`, `gcc`, and Erlang header and development libraries are required.
On Linux systems, this usually requires you to install the `build-essential` and `erlang-dev` packages. For example:

sudo apt-get install build-essential erlang-dev

On Macs, run `gcc --version` or `make --version`. If they're not installed, you will be given instructions.
On Windows, if you're obtaining `circuits_uart` from hex.pm, you'll need MinGW to compile the C code. I use Chocolatey and install MinGW by running the following in an administrative command prompt:

choco install mingw

On Nerves, you're set - just add `circuits_uart` to your `mix.exs`. Nerves contains everything needed by default. If you do use Nerves, though, keep in mind that the C code is cross-compiled for your target hardware and will not work on your host (the port will crash when you call `start_link` or `enumerate`). If you want to try out `circuits_uart` on your host machine, the easiest way is to either clone the source or add `circuits_uart` as a dependency to a regular (non-Nerves) Elixir project.
The standard Elixir build process applies. Clone `circuits_uart` or download a source release and run:

mix deps.get
mix compile

The unit tests require two serial ports connected via a NULL modem cable to run. Define the names of the serial ports in the environment before running the tests. For example,

export CIRCUITS_UART_PORT1=ttyS0
export CIRCUITS_UART_PORT2=ttyS1

If you're on Windows or Linux, you don't need real serial ports. For Linux, download and install tty0tty. Load the kernel module and specify `tnt0` and `tnt1` for the serial ports. Check the tty0tty README.md, but this should look something like:

cd tty0tty/module
make
sudo cp tty0tty.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
sudo depmod
sudo modprobe tty0tty
sudo chmod 666 /dev/tnt*

export CIRCUITS_UART_PORT1=tnt0
export CIRCUITS_UART_PORT2=tnt1
On Windows, download and install com0com (look for version 2.2.2.0 if the latest hasn't been signed). The ports on Windows are `CNCA0` and `CNCB0`.
Then run:

mix test

If you're using tty0tty, the tests will run at full speed. Real serial ports seem to take a fraction of a second to close and re-open. I added a gratuitous delay to each test to work around this. It likely can be much shorter.
On MacOS, download and install socat. You can install it via Homebrew. Once you have it installed and ready to go, run the following command. You will need to change the `user=` value to your current system username:

sudo socat -d -d -d -d -lf /tmp/socat pty,link=/dev/dummy1,raw,echo=0,user=,group=staff link=/dev/dummy2,raw,echo=0,user=,group=staff

Once that opens, in a separate terminal emulator, set the Circuits environment variables and go about your testing:

export CIRCUITS_UART_PORT1=/dev/dummy1
export CIRCUITS_UART_PORT2=/dev/dummy2
mix test
No, this project doesn't have any dependencies on any Nerves components. The desire for some serial port library features on Nerves drove us to create it, but we also have host-based use cases. To be useful for us, the library must remain cross-platform and have few dependencies. We're just developing it under the Nerves umbrella.
Serial port files are almost always owned by the `dialout` group. Add yourself to the `dialout` group by running `sudo adduser yourusername dialout`. Then log out and back in again, and you should be able to access the serial port.
If you're having trouble and suspect the C code, edit the `Makefile` to enable debug logging. See the `Makefile` for instructions on how to do this. Debug logging is appended to a file by default, but can be sent to `stderr` or another location by editing `src/circuits_uart.c`.
If you're on Linux, the tty0tty emulated null modem removes the flakiness of real serial port drivers if that's the problem. The serial port monitor jpnevulator is useful for monitoring the hardware signals and dumping data as hex byte values.
On OSX and Windows, I've found that PL2303-based serial ports can be flaky. First, make sure that you don't have a counterfeit PL2303. On Windows, they show up in device manager with a warning symbol. On OSX, they seem to hang when closing the port. Non-counterfeit PL2303-based serial ports can pass the unit tests on Windows 10, but I have not been able to get them to pass on OSX. FTDI-based serial ports appear to work better on both operating systems.
You may have noticed Erlang's `erl_interface` code copy/pasted into `src/ei_copy`. This is only used on Windows to work around issues linking to the distributed version of `erl_interface`. That was compiled with Visual Studio. This project uses MinGW, and even though the C ABIs are the same between the compilers, Visual Studio adds stack protection calls that I couldn't figure out how to work around.
Circuits.UART uses a Port and C code. Elixir/Erlang ports have nothing to do with the serial ports of the operating system. They share the same name but are different concepts.
By default Nerves is configured so that the Linux console and the Elixir console are redirected to the serial0 interface. As a result, while using this interface, the buffer might be full of debug logs from your application, which could cause the port to time out when you are writing to it, or attempting to drain it (`:port_timed_out`).
To disable this "pollution" you will have to edit:
- `erlinit.config` and comment out `-c ttyAMA0`
- `cmdline.txt` and comment out `console=serial0,115200`
To learn how to edit those files in your Nerves setup you can check the advanced configuration documentation of Nerves.
When building this library, node-serialport and QtSerialPort were incredibly helpful in helping to define APIs and point out subtleties with platform-specific serial port code. Sadly, I couldn't reuse their code, but I feel indebted to the authors and maintainers of these libraries, since they undoubtedly saved me hours of time debugging corner cases. I have tried to acknowledge them in the comments where I have used strategies that I learned from them.
https://xscode.com/elixir-circuits/circuits_uart
Basic window source code
Here is the smallest FLTK application, it shows an empty window:
#include <fltk/run.h>
#include <fltk/Window.h>

int main (int argc, char** argv)
{
    // build a square window (with a side of 300 pixels)
    fltk::Window window (300, 300, "FLTK test");
    // show it
    window.show (argc, argv);
    // enter the FLTK event loop
    return fltk::run();
}
Compiling and running this piece of code yields the following kind of window:
Basic application compilation
To build the above code, first save it to a file named test.cxx. You can also use test.cpp, but the cxx extension is the one used by FLTK source code.
We assume GCC is installed, since it is required to build FLTK. Depending on the environment, windowing system libraries need to be linked against the executable. Basically, the following command is the minimum required to build test.cxx:
g++ -o test test.cxx -lfltk2
Using the X Window system (GNU Linux for example), you usually need to add the following ones:
g++ -o test test.cxx -lfltk2 -lXi -lXinerama -lXft
Under Mac OS X, you'll have to link with the Carbon framework:
g++ -o test test.cxx -lfltk2 -framework Carbon
http://en.m.wikibooks.org/wiki/FLTK/Basic_applications
lp:~tracker-team/tracker/tracker-0.8.karmic
- Get this branch:
- bzr branch lp:~tracker-team/tracker/tracker-0.8.karmic
Branch information
- Owner: Tracker Team
- Status: Mature
Recent revisions
- 36. By Chris Coulson on 2010-05-20
releasing version 0.8.7-0ubuntu0.9.10.1~ppa1
- 35. By Chris Coulson on 2010-05-20
New upstream release (0.8.7)
- 34. By Chris Coulson on 2010-05-13
releasing version 0.8.6-0ubuntu0.9.10.1
- 33. By Chris Coulson on 2010-05-13
* New upstream release (0.8.6)
* debian/tracker-miner-evolution.install:
- rename desktop file
- 32. By Chris Coulson on 2010-05-07
releasing version 0.8.5-0ubuntu0ppa1~karmic1
- 31. By Chris Coulson on 2010-05-07
* New upstream release
* General:
- Removed many unused variables from coverity reports
- Various other fixes picked up from coverity reports
* Ontology:
- Fixed typo for nfo:softwareCmdLine comment
* Data Generators:
- Fixed %u use since it is deprecated in favour of %d in Python
* libtracker-common:
- Fixed compilation error in validating ints for tracker-keyfile-object
* libtracker-db:
- Don't fsync/close already open databases if g_open() succeeds
- Avoid strstr in uri_is_{parent|descendant} functions
- Performance improvement for tracker:uri-is-parent function
* libtracker-extract:
- Don't modify setlocale() return value, as it's statically stored
- Protect against invalid values in tracker_date_guess()
* libtracker-miner:
- Added missing .deps file for Vala bindings
- Fixed memory leak in VAPI file
- Fixed includes for libtracker-client in VAPI file
* libtracker-client:
- Fixed typo in documentation for tracker_resources_sparql_query()
* tracker-store:
- Handle commit transaction error when importing turtle files and rollback
* tracker-extract:
- Don't run past genre array in mp3 extractor
- Use nfo:HtmlDocument instead of nfo:Document in html extractor
- Fixed compilation warning for msoffice extractor, use G_GSIZE_FORMAT not %d
* tracker-search:
- Added --emails and list subjects/dates
- Added --contacts and list names/addresses
- Updated --detailed so we only report URNs if this is supplied
* tracker-tag:
- Fixed possible use of uninitialised memory
* tracker-info:
- Show results in shortened form, added --full-namespaces for old behaviour
* tracker-search-tool:
- Fixed segmentation fault when there are no results
- Added "Folders" category
- Renamed "Office Documents" category to "Documents"
- 30. By Chris Coulson on 2010-05-03
releasing version 0.8.4-0ubuntu0ppa1~karmic1
- 29. By Chris Coulson on 2010-05-03
* Merge with Debian, remaining changes:
- debian/30-tracker.conf: increase number of watches to 524288
- debian/control: don't build-depend on libunac-dev
- debian/rules: don't build with --enable-unac
* New upstream release
*
- Added videos to the data generation
- Added test set configuration for maximum values
- Added full text queries
- Added basic file operations for miner-fs and desktop environments
* libtracker-db:
- Added tracker:uri-is-parent SQLite functions (for crawling improvements)
- Added tracker:uri-is-descendant SQLite functions (for crawling improvements)
- Support O_LARGEFILE when using g_open for the journal
* libtracker-data:
- Fixed memory leak on journal replay
* libtracker-miner:
- Improve crawling queries (3693 dirs, 27678 files, was 651s, now 166s)
- Don't translate statuses
* libtracker-client:
- Added initial test cases
* tracker-extract:
- Fixed man page for -d
- Fixed double free in Vorbis extractor, caused timeouts in miner-fs logs
- Set nfo:isContentEncrypted for encrypted docs
- Improve <script> bypassing.
* libtracker-extract:
- Don't run past an array in XMP tests
* evolution:
- Fixed race condition
* tracker-search-tool:
- Removed --service (old 0.6 option which is unused)
- Added support for starting queries using command line arguments
* Ontology:
- nfo:isContentEncrypted was defined in nmm, not nfo
* tracker-sparql:
- Fixed typo in man page for command line args
* tracker-status:
- Added --list-common-statuses option
* tracker-control:
- Added --reindex-mime-type and --start options to man page docs
* New upstream release.
* debian/control
- Improve package descriptions. Thanks to Tshepang Lekhonkhobe for the
patch.
- Add Build-Depends on gtk-doc-tools and graphviz.
* debian/rules
- Add --enable-gtk-doc to DEB_CONFIGURE_EXTRA_FLAGS.
* debian/libtracker-{client,extractor,miner}-dev.install
- Install API documentation.
* debian/patches/30-vfat-hidden-attribute-build-fix.patch
- Removed, merged upstream.
* debian/tracker.install
- Install D-Bus interface description file tracker-miner-web.xml.
* New upstream release.
* Remove patches
- debian/patches/10-improve-library-dependencies.patch (merged upstream)
- debian/patches/20-am-maintainer-mode.patch (obsolete)
- debian/patches/99-autoreconf.patch (obsolete)
* Upload to unstable.
* debian/control
- Add Breaks: rygel-tracker (<< 0.5) as the D-Bus API has changed between
0.6 and 0.8.
- Add Conflicts/Replaces: tracker (<< 0.8.1-1) to tracker-gui. The icons
were moved between those two packages.
- Remove useless Conflicts from tracker-miner-fs and tracker-miner-evolution. Both packages already have a strict dependency on tracker.
* debian/patches/10-improve-library-dependencies.patch
- Only link against libraries when actually needed to get rid of
unnecessary library dependencies.
* debian/patches/99-autoreconf.patch
- Run autoreconf to update the build system.
* New major upstream release. (Closes: #549695)
- The changes are too numerous to list them all, as it is basically a
rewrite. Some relevant changes:
- QDBM is gone, and with it its limitations. (Closes: #452657, #525393)
- The metadata store has been split from the file system crawler and can
be used independently. There are separate "miners" which can be
installed to feed data to tracker.
The new packaging layout accounts for that change.
- Support for Nepomuk with SPARQL which are used to query and update the
data.
- tracker-meta-folder is gone. (Closes: #430623)
* Remove patches:
- debian/patches/10-binutils-gold.patch (fixed upstream)
- debian/patches/15-am-maintainer-mode.patch (obsolete)
- debian/patches/20-tracker-search-man-page-typo-fix.patch (merged upstream)
- debian/patches/25-trackerd-man-page-typo-fix.patch (merged upstream)
- debian/patches/99-autoreconf.patch (obsolete)
* debian/patches/30-vfat-hidden-attribute-build-fix.patch
- Refresh to apply cleanly.
* Update Build-Depends:
- Bump libglib2.0-dev to (>= 2.20.0).
- Bump libdbus-1-dev to (>= 1.0).
- Bump libdbus-glib-1-dev to (>= 0.78).
- Bump libsqlite3-dev to (>= 3.6.16).
- Bump libgtk2.0-dev to (>= 2.18.0).
- Bump libexempi-dev to (>= 2.1.0).
- Drop libgmime-2.4-dev, libgnome2-dev, libgnomeui-dev, libgnome-desktop-dev, libglade2-dev, libraptor1-dev, libqdbm-dev, libhal-dev, libhal-storage-dev.
- Add libflac-dev (>= 1.2.1) for FLAC extractor support.
- Add evolution-dev (>= 2.25.5) and evolution-data-server-dev (>= 2.25.5) for the evolution email miner.
- Add libpanel-applet2-dev for the tracker-search-bar applet.
- Add libnautilus-extension-dev for the tracker-tag widget integration in nautilus.
- Add libdevkit-power-gobject-dev (>= 007) for AC power detection.
- Add libenca-dev (>= 1.9) for detecting defect Russian or Cyrillic language specifics in MP3s.
- Add libiptcdata0-dev for extracting IPTC metadata from images.
- Add libxml2-dev (>= 2.6), uuid-dev, libgee-dev (>= 0.3), valac.
* New package layout:
- Drop libdeskbar-applet, libtracker-gtk0, libtracker-gtk-dev.
- Add libtracker-miner-0.8-0, libtracker-miner-0.8-dev, libtracker-extract-0.8-0, libtracker-extract-0.8-dev, tracker-extract, tracker-miner-fs, tracker-miner-evolution, tracker-explorer, tracker-gui.
- Make tracker-search-tool a transitional package which depends on
tracker-gui.
- Rename libtrackerclient0 → libtracker-client-0.8-0, libtrackerclient-dev → libtracker-client-0.8-dev.
* Add symbols files for all shared libraries:
- Add debian/libtracker-client-0.8-0.symbols.
- Add debian/libtracker-miner-0.8-0.symbols.
- Add debian/libtracker-extract-0.8-0.symbols.
* Rename tracker.postinst → tracker-miner-fs.postinst as tracker-miner-fs needs the increased fs.inotify.max_user_watches, not tracker-store.
* Add lintian overrides for binary-or-shlib-defines-rpath for packages linking against private libraries in /usr/lib/tracker-0.8:
- Add debian/libtracker-extract-0.8-0.lintian-overrides.
- Add debian/libtracker-miner-0.8-0.lintian-overrides.
- Add debian/tracker-gui.lintian-overrides.
- Add debian/tracker-miner-evolution.lintian-overrides.
- Add debian/tracker-miner-fs.lintian-overrides.
- Add debian/tracker-utils.lintian-overrides.
* Remove the old xdg autostart files for trackerd and tracker-applet
on upgrades:
- Add debian/tracker-gui.preinst.
- Add debian/tracker.preinst.
* debian/rules:
- Update configure switches, enable FLAC extractor support.
- Update DEB_DH_MAKESHLIBS_ARGS_ALL arguments.
* Review and update debian/copyright.
* debian/patches/30-vfat-hidden-attribute-build-fix.patch
- Don't build VFAT check for hidden attributes on non-linux platforms.
Thanks Petr Salinger for the patch. (Closes: #576938)
* New upstream release.
* debian/tracker.manpages
- Remove tracker-thumbnailer.1, no longer installed upstream.
* Remove patches:
- debian/patches/10-drop-bogus-version-info.patch (merged upstream)
- debian/patches/20-tracker-defaults.patch (merged upstream)
- debian/patches/30-gmime-2.4.patch (merged upstream)
* debian/control
- Fix small typo in tracker-dbg's package description. (Closes: #550771)
- Bump Standards-Version to 3.8.4. No further changes.
- Add Depends on procps.
* debian/patches/10-binutils-gold.patch
- Add missing libraries to fix FTBFS with binutils-gold. (Closes: #556499)
* debian/patches/15-am-maintainer-mode.patch
- Set AM_MAINTAINER_MODE to make patching the build system less painful.
* debian/patches/99-autoreconf.patch
- Rerun autoreconf -i to update the build system.
* debian/*.lintian-overrides
- Add lintian overrides for tracker-search-tool and tracker-utils. Those
binaries encode an rpath for /usr/lib/tracker. As they are built from
the same source package and have a strict dependency on the tracker
binary package, it is acceptable to define an rpath.
* debian/patches/20-tracker-search-man-page-typo-fix.patch
- Fix typo in the tracker-search.1 man page detected by lintian.
* debian/patches/25-trackerd-man-page-typo-fix.patch
- Fix typo in the trackerd.1 man page spotted by Hans Spaans.
(Closes: #549868)
* debian/rules
- Update configure flags.
- Don't generate ldconfig calls in postinst/postrm for the libraries
shipped in /usr/lib/tracker.
* debian/tracker.postinst
- Start procps to apply "sysctl.d/30-tracker.conf".
* Port to GMime 2.4. (Closes: #549052)
* debian/control
- Update Build-Depends libgmime-2.0-2-dev → libgmime-2.4-dev.
- Bump Standards-Version to 3.8.3. No further changes.
* debian/patches/30-gmime-2.4.patch
- Pull patch from https://bugzilla.gnome.org/show_bug.cgi?id=564640 to make tracker compile against GMime 2.4.
* debian/patches/99-autoreconf.patch
- Run autoreconf as the gmime-2.4 patch requires changes to the build
system.
* Bump Standards-Version to 3.8.2. No further changes.
* libdeskbar-tracker: Change Depends on python-gnome2-desktop to python-gnomedesktop, as python-gnome2-desktop is going away. (Closes: #541565)
- 28. By Chris Coulson on 2010-04-20
releasing version 0.8.2-0ubuntu0ppa1~karmic1
- 27. By Chris Coulson on 2010-04-20
* New upstream release
* General:
- Set functional tests to be enabled by default
- Set libunac to be enabled if available by default
- Fixed erroneous linking where GdkPixbuf, HAL, DeviceKit, Pango and UNAC
were involved
* Functional Tests:
- Improved performance-tc.py, mostly whitespace changes
* libtracker-common:
- Fixed use of timegm on BSD and use it for __GLIBC__, it's faster
* libtracker-db:
- Avoid type checking for TrackerDBInterface and TrackerDBResultSet
* libtracker-data:
- Added tests for more than one regex query
- Fixed SPARQL regex, don't use bound strings, use literals
- Fixed memory leak due to reference cycle
- Avoid type checking for TrackerProperty, TrackerClass and TrackerNamespace
* libtracker-client:
- Added properties to Vala bindings (allowing property = value)
* libtracker-miner:
- Fixed debian builds, don't use $(builddir) in Makefile.am
* tracker-miner-fs:
- Fixed build failures on non-Linux systems (FAT filesystem operations)
* tracker-preferences:
- Added file chooser button for ignored directories
- Remove separator in patterns dialog
- Make OK button default action in patterns dialog
- Set throttle/low disk space only when Apply button is clicked
* evolution:
- Avoid e-d-s 2.28 #define causing compilation errors, see GB#613199
* Drop debian/patches/01_eds_workaround.patch - not needed now
* debian/control:
- Don't build-depend on quilt
* debian/rules:
- Drop quilt includes
Branch metadata
- Branch format: Branch format 7
- Repository format: Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on: lp:tracker
https://code.launchpad.net/~tracker-team/tracker/tracker-0.8.karmic
A stream-like object for creating an entry in a log file. More...
#include <Wt/WLogger.h>
A stream-like object for creating an entry in a log file.
This class is returned by WLogger::entry() and creates a log entry using a stream-like interface.
Move constructor.
This is mostly for returning a (newly constructed) WLogEntry from a function.
Appending to the from object after move construction has no effect.
Writes a field separator.
You must separate fields in a single entry using the WLogger::sep constant.
Writes a time stamp in the current field.
Formats a timestamp (date+time) to the current field.
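Putting the pieces above together, a minimal usage sketch could look like this (the field layout, the "info" entry type, and setStream() are my assumptions here; WLogger::sep and WLogger::timestamp are the constants documented above):

#include <iostream>
#include <Wt/WLogger.h>

int main()
{
    Wt::WLogger logger;
    logger.setStream(std::cerr);         // assumed: write entries to stderr
    logger.addField("datetime", false);  // hypothetical field layout
    logger.addField("message", true);

    Wt::WLogEntry entry = logger.entry("info");
    entry << Wt::WLogger::timestamp      // format a timestamp into the first field
          << Wt::WLogger::sep            // field separator
          << "application started";      // the message field
}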
https://webtoolkit.eu/wt/doc/reference/html/classWt_1_1WLogEntry.html
Introducing Donald Trump...software?
Could this be considered a Business Lesson? It could.
For those who are interested in learning more about Real Estate through Donald Trump, his university web site and personal blog are incredible resources of information.
In addition to the courses he has available, he is now offering the Real Estate Wealth Builder. If you are deeply involved with real estate, this software helps you look deals over "like the big boys" in the real estate industry with expert advice.
At a price of $1,499, I imagine that if you sell one house, this will have paid for itself many times over. The software has a 30-day guarantee as well.
But an initial cost of $1,500? Is this software worth the high price?
My reasoning is:
It takes money to make money.
If you buy something that costs a lot of money, you better make sure you get one heck of a return (ROI) from that mega-purchase.
I, personally, wouldn't purchase the software. I'm not into real estate. :-)
https://www.danylkoweb.com/Blog/introducing-donald-trumpsoftware-FY
Hi all, I'm new to the site and to Java. I'm having some trouble with my current assignment. I was supposed to make an isosceles triangle with asterisks based on a number entered by the user. I got the first half correct but I cannot figure out my loops for the second half. Do I have to start a new main to get it to descend? Here is what I have done so far. It compiles fine but will not run the descending half of the pyramid. I'm not sure it makes a difference, but I use JCreator as my IDE.
Thanks for any assistance.
import java.util.*;

public class homeWork6
{
    public static void main(String[] args)
    {
        Scanner keyboard = new Scanner(System.in);
        System.out.println("Enter an interger from 1 to 50:");
        int number = keyboard.nextInt();
        System.out.println();

        if (number < 1)
            System.out.println("Invalid number: must be at least one.");
        else if (number > 50)
            System.out.println("Invalid number: cannot exceed 50.");
        else // valid input print triangle of asterisks
        {
            // print first half
            int lineCount;
            int asteriskCount;
            for (lineCount = 1; lineCount <= number; lineCount++)
            {
                for (asteriskCount = 1; asteriskCount <= lineCount; asteriskCount++)
                {
                    System.out.print("*");
                }//end inner for loop
                System.out.println();
            }//end outer for loop

            for (lineCount = 1; lineCount >= number; lineCount--)
            {
                for (asteriskCount = number; asteriskCount >= lineCount; asteriskCount--)
                    System.out.print("*");
            }
            System.out.println();
        } //end else
    }//end main
}//end class
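The second pair of loops above is the culprit: lineCount starts at 1 and the condition lineCount >= number is immediately false for any number greater than 1, so the loop body never runs; no second main is needed. A minimal sketch of descending loops that print the second half:

// second half: rows of number-1 asterisks down to 1
for (lineCount = number - 1; lineCount >= 1; lineCount--)
{
    for (asteriskCount = 1; asteriskCount <= lineCount; asteriskCount++)
    {
        System.out.print("*");
    }
    System.out.println();
}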
https://www.daniweb.com/programming/software-development/threads/28009/cant-figure-out-second-half-of-piramid
build: backport the --skip-unused-stages flag.
run: add container gid to additional groups.
- build: support filtering cache by duration using `--cache-ttl`.
- build: support building from commit when using git repo as build context.
- build: clean up git repos correctly when using subdirs.
- build: add support for distributing cache to remote sources using `--cache-to` and `--cache-from`.
- imagebuildah: optimize cache hits for `COPY` and `ADD` instructions.
- build: support OCI hooks for ephemeral build containers.
- build: add support for `--userns=auto`.
- copier: add NoOverwriteNonDirDir option.
- add initial support for building images using Buildah on FreeBSD.
- multistage: this now skips the computing of unwanted stages to improve performance.
- multiarch: support splitting build logs for `--platform` using `--logsplit`.
- build: add support for building images where the base image has no history.
- commit: allow disabling image history with `--omit-history`.
- build: add support for renaming a device in rootless setups.
- build: now supports additionalBuildContext in builds via the `--build-context` option.
- build: `--output` produces artifacts even if the build container is not committed.
- build: now accepts `--cpp-flag`, allowing users to pass in CPP flags when processing a Containerfile with C Preprocessor-like syntax.
- build: now accepts a branch and a subdirectory when the build context is a git repository.
- build: output now shows a progress bar while pushing and pulling images.
- build: now errors out if the path to Containerfile is a directory.
- build: support building container images on environments that are rootless and without any valid login sessions.
- fix: `--output` now generates artifacts even if the entire build is cached.
- fix: `--output` generates artifacts only for the target stage in multi-stage builds.
- fix,add: now fails on a bad HTTP response instead of writing to container.
- fix,squash: never use build cache when computing the last step of the last stage.
- fix,build,run: allow reusing secret more than once in different RUN steps.
- fix: compatibility with Docker build by making its --label and --annotate options set empty labels and annotations when given a name but no `=` or label value.
- imagebuildah,build: move deepcopy of args before we spawn goroutine
- Vendor in containers/storage v1.40.2
- buildah.BuilderOptions.DefaultEnv is ignored, so mark it as deprecated
- help output: get more consistent about option usage text
- Handle OS version and features flags
- buildah build: --annotation and --label should remove values
- buildah build: add a --env
- buildah: deep copy options.Args before performing concurrent build/stage
- test: inline platform and builtinargs behaviour
- vendor: bump imagebuilder to master/009dbc6
- build: automatically set correct TARGETPLATFORM where expected
- build(deps): bump github.com/fsouza/go-dockerclient
- Vendor in containers/(common, storage, image)
- imagebuildah, executor: process arg variables while populating baseMap
- buildkit: add support for custom build output with --output
- Cirrus: Update CI VMs to F36
- fix staticcheck linter warning for deprecated function
- Fix docs build on FreeBSD
- build(deps): bump github.com/containernetworking/cni from 1.0.1 to 1.1.0
- copier.unwrapError(): update for Go 1.16
- copier.PutOptions: add StripSetuidBit/StripSetgidBit/StripStickyBit
- copier.Put(): write to read-only directories
- build(deps): bump github.com/cpuguy83/go-md2man/v2 in /tests/tools
- Rename $TESTSDIR (the plural one), step 4 of 3
- Rename $TESTSDIR (the plural one), step 3 of 3
- Rename $TESTSDIR (the plural one), step 2 of 3
- Rename $TESTSDIR (the plural one), step 1 of 3
- build(deps): bump github.com/containerd/containerd from 1.6.2 to 1.6.3
- Ed's periodic test cleanup
- using consistent lowercase 'invalid' word in returned err msg
- Update vendor of containers/(common,storage,image)
- use etchosts package from c/common
- run: set actual hostname in /etc/hostname to match docker parity
- update c/common to latest main
- Update vendor of containers/(common,storage,image)
- Stop littering
- manifest-create: allow creating manifest list from local image
- Update vendor of storage,common,image
- Bump golang.org/x/crypto to 7b82a4e
- Initialize network backend before first pull
- oci spec: change special mount points for namespaces
- tests/helpers.bash: assert handle corner cases correctly
- buildah: actually use containers.conf settings
- integration tests: learn to start a dummy registry
- Fix error check to work on Podman
- buildah build should accept at most one arg
- tests: reduce concurrency for flaky bud-multiple-platform-no-run
- vendor in latest containers/common,image,storage
- manifest-add: allow override arch,variant while adding image
- Remove a stray `\` from .containerenv
- Vendor in latest opencontainers/selinux v1.10.1
- build, commit: allow removing default identity labels
- Create shorter names for containers based on image IDs
- test: skip rootless on cgroupv2 in root env
- fix hang when oci runtime fails
- Set permissions for GitHub actions
- copier test: use correct UID/GID in test archives
- run: set parent-death signals and forward SIGHUP/SIGINT/SIGTERM
- Bump back to v1.26.0-dev
- build(deps): bump github.com/opencontainers/runc from 1.1.0 to 1.1.1
- Included the URL to check the SHA
- buildah: create WORKDIR with USER permissions
- vendor: update github.com/openshift/imagebuilder
- copier: attempt to open the dir before adding it
- Updated dependabot to get updates for GitHub actions.
- Switch most calls to filepath.Walk to filepath.WalkDir
- build: allow --no-cache and --layers so build cache can be overridden
- build(deps): bump github.com/onsi/gomega from 1.18.1 to 1.19.0
- Bump to v1.26.0-dev
- build(deps): bump github.com/golangci/golangci-lint in /tests/tools
- install: drop RHEL/CentOS 7 doc
- build(deps): bump github.com/containers/common from 0.47.4 to 0.47.5
- Bump c/storage to v1.39.0 in main
- Add a test for CVE-2022-27651
- build(deps): bump github.com/docker/docker
- Bump github.com/prometheus/client_golang to v1.11.1
- [CI:DOCS] man pages: sort flags, and keep them that way
- build(deps): bump github.com/containerd/containerd from 1.6.1 to 1.6.2
- Don't pollute
- network setup: increase timeout to 4 minutes
- do not set the inheritable capabilities
- build(deps): bump github.com/golangci/golangci-lint in /tests/tools
- build(deps): bump github.com/containers/ocicrypt from 1.1.2 to 1.1.3
- parse: convert exposed GetVolumes to internal only
- buildkit: mount=type=cache support locking external cache store
- .in support: improve error message when cpp is not installed
- buildah image: install cpp
- build(deps): bump github.com/stretchr/testify from 1.7.0 to 1.7.1
- build(deps): bump github.com/spf13/cobra from 1.3.0 to 1.4.0
- build(deps): bump github.com/docker/docker
- Add --no-hosts flag to eliminate use of /etc/hosts within containers
- test: remove skips for rootless users
- test: unshare mount/umount if test is_rootless
- tests/copy: read correct containers.conf
- build(deps): bump github.com/docker/distribution
- cirrus: add separate task and matrix for rootless
- tests: skip tests for rootless which need unshare
- buildah: test rootless integration
- vendor: bump c/storage to main/93ce26691863
- build(deps): bump github.com/fsouza/go-dockerclient from 1.7.9 to 1.7.10
- tests/copy: initialize the network, too
- [CI:DOCS] remove references to Kubic for CentOS and Ubuntu
- build(deps): bump github.com/containerd/containerd from 1.6.0 to 1.6.1
- use c/image/pkg/blobcache
- vendor c/image/[email protected]
- add: ensure the context directory is an absolute path
- executor: docker builds must inherit healthconfig from base if any
- docs: Remove Containerfile and containerignore
- build(deps): bump github.com/fsouza/go-dockerclient from 1.7.8 to 1.7.9
- helpers.bash: Use correct syntax
- speed up combination-namespaces test
- build(deps): bump github.com/golangci/golangci-lint in /tests/tools
- Bump back to 1.25.0-dev
- build(deps): bump github.com/containerd/containerd from 1.5.9 to 1.6.0
- Increase subuid/subgid to 65535
- history: only add proxy vars to history if specified
- run_linux: use --systemd-cgroup
- buildah: new global option --cgroup-manager
- Makefile: build with systemd when available
- build(deps): bump github.com/fsouza/go-dockerclient from 1.7.7 to 1.7.8
- Bump c/common to v0.47.4
- Cirrus: Use updated VM images
- conformance: add a few "replace-directory-with-symlink" tests
- Bump back to v1.25.0-dev
- executor: Add support for inline --platform within Dockerfile
- caps: fix buildah run --cap-add=all
- Update vendor of openshift/imagebuilder
- Bump version of containers/image and containers/common
- Update vendor of containers/common
- System tests: fix accidental vandalism of source dir
- build(deps): bump github.com/containers/storage from 1.38.1 to 1.38.2
- imagebuildah.BuildDockerfiles(): create the jobs semaphore
- build(deps): bump github.com/onsi/gomega from 1.18.0 to 1.18.1
- overlay: always honor mountProgram
- overlay: move mount program invocation to separate function
- overlay: move mount program lookup to separate function
- Bump to v1.25.0-dev [NO TESTS NEEDED]
- Update vendor of containers/common
- build(deps): bump github.com/golangci/golangci-lint in /tests/tools
- Github-workflow: Report both failures and errors.
- build(deps): bump github.com/containers/image/v5 from 5.18.0 to 5.19.0
- Update docs/buildah-build.1.md
- [CI:DOCS] Fix typos and improve language
- buildah bud --network add support for custom networks
- Make pull commands be consistent
- docs/buildah-build.1.md: don't imply that -v isn't just a RUN thing
- build(deps): bump github.com/onsi/gomega from 1.17.0 to 1.18.0
- Vendor in latest containers/image
- Run codespell on code
- .github/dependabot.yml: add tests/tools go.mod
- CI: rm git-validation, add GHA job to validate PRs
- tests/tools: bump go-md2man to v2.0.1
- tests/tools/Makefile: simplify
- tests/tools: bump onsi/ginkgo to v1.16.5
- vendor: bump c/common and others
- mount: add support for custom upper and workdir with overlay mounts
- linux: fix lookup for runtime
- overlay: add MountWithOptions to API which extends support for advanced overlay
- Allow processing of SystemContext from FlagSet
- .golangci.yml: enable unparam linter
- util/resolveName: rm bool return
- tests/tools: bump golangci-lint
- .gitignore: fixups
- all: fix capabilities.NewPid deprecation warnings
- bind/mount.go: fix linter comment
- all: fix gosimple warning S1039
- tests/e2e/buildah_suite_test.go: fix gosimple warnings
- imagebuildah/executor.go: fix gosimple warning
- util.go: fix gosimple warning
- build(deps): bump github.com/opencontainers/runc from 1.0.3 to 1.1.0
- Enable git-daemon tests
- Allow processing of id options from FlagSet
- Cirrus: Re-order tasks for more parallelism
- Cirrus: Freshen VM images
- Fix platform handling for empty os/arch values
- Allow processing of network options from FlagSet
- Fix permissions on secrets directory
- Update containers/image and containers/common
- bud.bats: use a local git daemon for the git protocol test
- Allow processing of common options from FlagSet
- Cirrus: Run int. tests in parallel with unit
- vendor c/common
- Fix default CNI paths
- build(deps): bump github.com/fsouza/go-dockerclient from 1.7.6 to 1.7.7
- multi-stage: enable mounting stages across each other with selinux enabled
- executor: Share selinux label of first stage with other stages in a build
- buildkit: add from field to bind and cache mounts so images can be used as source
- Use config.ProxyEnv from containers/common
- use libnetwork from c/common for networking
- setup the netns in the buildah parent process
- build(deps): bump github.com/containerd/containerd from 1.5.8 to 1.5.9
- build(deps): bump github.com/fsouza/go-dockerclient from 1.7.4 to 1.7.6
- build: fix libsubid test
- Allow callers to replace the ContainerSuffix
- parse: allow parsing anomaly non-human value for memory control group
- .cirrus: remove static_build from ci
- stage_executor: re-use all possible layers from cache for squashed builds
- build(deps): bump github.com/spf13/cobra from 1.2.1 to 1.3.0
- Allow rootless buildah to set resource limits on cgroup V2
- build(deps): bump github.com/docker/docker
- tests: move buildkit mount tests files from TESTSDIR to TESTDIR before modification
- build(deps): bump github.com/opencontainers/runc from 1.0.2 to 1.0.3
- Wire logger through to config
- copier.Put: check for is-not-a-directory using lstat, not stat
- Turn on rootless cgroupv2 tests
- Grab all of the containers.conf settings for namespaces.
o image: set MediaType in OCI manifests
o copier: RemoveAll possibly-directories
o Simple README fix
o images: accept multiple filter with logical AND
o build(deps): bump github.com/containernetworking/cni from 0.8.1 to 1.0.1
o Update vendor of containers/storage
o build(deps): bump github.com/onsi/gomega from 1.16.0 to 1.17.0
o build(deps): bump github.com/containers/image/v5 from 5.16.1 to 5.17.0
o Make LocalIP public function so Podman can use it
o Fix UnsetEnv for buildah bud
o Tests should rely only on static/unchanging images
o run: ensure that stdio pipes are labeled correctly
o build(deps): bump github.com/docker/docker
o Cirrus: Bump up to Fedora 35 & Ubuntu 21.10
o chroot: don't use the generated default seccomp filter for unit tests
o build(deps): bump github.com/containerd/containerd from 1.5.7 to 1.5.8
o ssh-agent: Increase timeout before we explicitly close connection
o docs/tutorials: update
o Clarify that manifest defaults to localhost as the registry name
o "config": remove a stray bit of debug output
o "commit": fix a flag typo
o Fix an error message: unlocking vs locking
o Expand the godoc for CommonBuildOptions.Secrets
o chroot: accept an "rw" option
o Add --unsetenv option to buildah commit and build
o define.TempDirForURL(): show CombinedOutput when a command fails
o config: support the variant field
o rootless: do not bind mount /sys if not needed
o Fix tutorial to specify command on buildah run line
o build: history should not contain ARG values
o docs: Use guaranteed path for go-md2man
o run: honor --network=none from builder if nothing specified
o networkpolicy: Should be enabled instead of default when explicitly set
o Add support for env var secret sources
o build(deps): bump github.com/docker/docker
o fix: another non-portable shebang
o Rootless containers users should use additional groups
o Support overlayfs paths that contain a colon
o Report ignorefile location when no content added
o Add support for host.containers.internal in the /etc/hosts
o build(deps): bump github.com/onsi/ginkgo from 1.16.4 to 1.16.5
o imagebuildah: fix nil deref
o buildkit: add support for mount=type=cache (example below)
o Default secret mode to 400
o [CI:DOCS] Include manifest example usage
o docs: update buildah-from, buildah-pull 'platform' option compatibility notes
o docs: update buildah-build 'platform' option compatibility notes
o De-dockerize the man page as much as possible
o [CI:DOCS] Touch up Containerfile man page to show ARG can be 1st
o docs: Fix and Update Containerfile man page with supported mount types
o mount: add tmpcopyup to tmpfs mount option
o buildkit: Add support for --mount=type=tmpfs
o build(deps): bump github.com/opencontainers/selinux from 1.8.5 to 1.9.1
o Fix command doc links in README.md
o build(deps): bump github.com/containers/image/v5 from 5.16.0 to 5.16.1
o build: Add support for buildkit like --mount=type=bind
o Bump containerd to v1.5.7
o build(deps): bump github.com/docker/docker
o tests: stop pulling php, composer
o Fix .containerignore link file
o Cirrus: Fix defunct package metadata breaking cache
o build(deps): bump github.com/containers/storage from 1.36.0 to 1.37.0
o buildah build: add --all-platforms
o Add man page for Containerfile and .containerignore
o Plumb the remote logger throughout Buildah
o Replace fmt.Sprintf("%d", x) with strconv.Itoa(x)
o Run: Cleanup run directory after every RUN step
o build(deps): bump github.com/containers/common from 0.45.0 to 0.46.0
o Makefile: adjust -ldflags/-gcflags/-gccgoflags depending on the go implementation
o Makefile: check for `-race` using `-mod=vendor`
o imagebuildah: fix an attempt to write to a nil map
o push: support to specify the compression format
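The buildkit-style RUN mounts added above (type=bind, type=cache, type=tmpfs) are declared inside the Containerfile itself; the targets, sources, and package names below are illustrative only:

    RUN --mount=type=cache,target=/var/cache/dnf dnf install -y gcc
    RUN --mount=type=tmpfs,target=/scratch ./generate.sh /scratch
    RUN --mount=type=bind,source=vendor,target=/src/vendor make -C /src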
o conformance: allow test cases to specify dockerUseBuildKit
o build(deps): bump github.com/containers/common from 0.44.1 to 0.45.0
o build(deps): bump github.com/containers/common from 0.44.0 to 0.44.1
o unmarshalConvertedConfig(): handle zstd compression
o tests/copy/copy: wire up compression options
o Update to github.com/vbauerster/mpb v7.1.5
o Add flouthoc to OWNERS
o build: Add additional step nodes when labels are modified
o Makefile: turn on race detection whenever it's available
o conformance: add more tests for exclusion short-circuiting
o Update VM Images + Drop prior-ubuntu testing
o Bump to v1.24.0-dev
o Vendor in containers/common v0.44.0
o build(deps): bump github.com/containers/storage from 1.35.0 to 1.36.0
o Update 05-openshift-rootless-build.md
o build(deps): bump github.com/opencontainers/selinux from 1.8.4 to 1.8.5
o .cirrus.yml: run cross_build_task on Big Sur
o Makefile: update cross targets
o Add support for rootless overlay mounts
o Cirrus: Increase unit-test timeout
o Docs: Clarify rmi w/ manifest/index use
o build: mirror --authfile to filesystem if pointing to FD instead of file
o Fix build with .git url with branch
o manifest: rm should remove only manifests, not referenced images
o vendor: bump c/common to v0.43.3-0.20210902095222-a7acc160fb25
o Avoid rehashing and noop compression writer
o corrected man page section; .conf file to mention its man page
o copy: add --max-parallel-downloads to tune that copy option
o copier.Get(): try to avoid descending into directories
o tag: Support tagging manifest list instead of resolving to images
o Install new manpages to correct sections
o conformance: tighten up exception specifications
o Add support for libsubid
o Add epoch time field to buildah images
o Fix ownership of /home/build/.local/share/containers
o build(deps): bump github.com/containers/image/v5 from 5.15.2 to 5.16.0
o Rename bud to build, while keeping an alias for bud (example below)
o Replace golang.org/x/crypto/ssh/terminal with golang.org/x/term
o build(deps): bump github.com/opencontainers/runc from 1.0.1 to 1.0.2
o build(deps): bump github.com/onsi/gomega from 1.15.0 to 1.16.0
o build(deps): bump github.com/fsouza/go-dockerclient from 1.7.3 to 1.7.4
o build(deps): bump github.com/containers/common from 0.43.1 to 0.43.2
o Move DiscoverContainerfile to pkg/util directory
o build(deps): bump github.com/containers/image/v5 from 5.15.1 to 5.15.2
o Remove some references to Docker
o build(deps): bump github.com/containers/image/v5 from 5.15.0 to 5.15.1
o imagebuildah: handle --manifest directly
o build(deps): bump github.com/containers/common from 0.42.1 to 0.43.1
o build(deps): bump github.com/opencontainers/selinux from 1.8.3 to 1.8.4
o executor: make sure imageMap is updated with terminatedStage
o tests/serve/serve.go: use a kernel-assigned port
o Bump go for vendor-in-container from 1.13 to 1.16
o imagebuildah: move multiple-platform building internal
o Adds GenerateStructure helper function to support rootfs-overlay.
o Run codespell to fix spelling
o Implement SSH RUN mount
o build(deps): bump github.com/onsi/gomega from 1.14.0 to 1.15.0
o Fix resolv.conf content with run --net=private
o run: fix nil deref using the option's logger
o build(deps): bump github.com/containerd/containerd from 1.5.1 to 1.5.5
o make vendor-in-container
o bud: teach --platform to take a list
o set base-image annotations
o build(deps): bump github.com/opencontainers/selinux from 1.8.2 to 1.8.3
o [CI:DOCS] Fix CHANGELOG.md
o Bump to v1.23.0-dev [NO TESTS NEEDED]
o Accept repositories on login/logout
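Following the rename noted above, `build` and `bud` should behave identically, and the list-valued --platform can be combined with --manifest to produce a multi-arch result in one invocation. The image and list names here are illustrative:

    $ buildah bud -t myimage .
    $ buildah build -t myimage .
    $ buildah build --platform linux/amd64,linux/arm64 --manifest myimage-list .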
o c/image, c/storage, c/common vendor before Podman 3.3 release
o WIP: tests: new assert()
o Proposed patch for 3399 (shadowutils)
o Fix handling of --restore shadow-utils
o build(deps): bump github.com/containers/image/v5 from 5.13.2 to 5.14.0
o runtime-flag (debug) test: handle old & new runc
o build(deps): bump github.com/containers/storage from 1.32.6 to 1.33.0
o Allow dst and destination for target in secret mounts
o Multi-arch: Always push updated version-tagged img
o Add a few tests on cgroups V2
o imagebuildah.stageExecutor.prepare(): remove pseudonym check
o refine dangling filter
o Chown with environment variables not set should fail
o Just restore protections of shadow-utils
o build(deps): bump github.com/opencontainers/runc from 1.0.0 to 1.0.1
o Remove specific kernel version number requirement from install.md
o Multi-arch image workflow: Make steps generic
o chroot: fix environment value leakage to intermediate processes
o Update nix pin with `make nixpkgs`
o buildah source - create and manage source images
o Update cirrus-cron notification GH workflow
o Reuse code from containers/common/pkg/parse
o Cirrus: Freshen VM images
o build(deps): bump github.com/containers/storage from 1.32.5 to 1.32.6
o Fix excludes exception beginning with / or ./
o Fix syntax for --manifest example
o build(deps): bump github.com/onsi/gomega from 1.13.0 to 1.14.0
o vendor containers/common@main
o Cirrus: Drop dependence on fedora-minimal
o Adjust conformance-test error-message regex
o Workaround appearance of differing debug messages
o Cirrus: Install docker from package cache
o build(deps): bump github.com/containers/ocicrypt from 1.1.1 to 1.1.2
o Switch rusagelogfile to use options.Out
o build(deps): bump github.com/containers/storage from 1.32.4 to 1.32.5
o Turn stdio back to blocking when command finishes
o Add support for default network creation
o Cirrus: Updates for master->main rename
o Change references from master to main
o Add `--env` and `--workingdir` flags to run command (example below)
o build(deps): bump github.com/opencontainers/runc
o [CI:DOCS] buildah bud: spelling --ignore-file requires parameter
o [CI:DOCS] push/pull: clarify supported transports
o Remove unused function arguments
o Create mountOptions for mount command flags
o Extract version command implementation to function
o Add --json flags to `mount` and `version` commands
o build(deps): bump github.com/containers/storage from 1.32.2 to 1.32.3
o build(deps): bump github.com/containers/common from 0.40.0 to 0.40.1
o copier.Put(): set xattrs after ownership
o buildah add/copy: spelling
o build(deps): bump github.com/containers/common from 0.39.0 to 0.40.0
o buildah copy and buildah add should support .containerignore
o Remove unused util.StartsWithValidTransport
o Fix documentation of the --format option of buildah push
o Don't use alltransports.ParseImageName with known transports
o build(deps): bump github.com/containers/image/v5 from 5.13.0 to 5.13.1
o man pages: clarify `rmi` removes dangling parents
o tests: make it easier to override the location of the copy helper
o build(deps): bump github.com/containers/image/v5 from 5.12.0 to 5.13.0
o [CI:DOCS] Fix links to c/image master branch
o imagebuildah: use the specified logger for logging preprocessing warnings
o Fix copy into workdir for a single file
o Fix docs links due to branch rename
o Update nix pin with `make nixpkgs`
o build(deps): bump github.com/fsouza/go-dockerclient from 1.7.2 to 1.7.3
o build(deps): bump github.com/opencontainers/selinux from 1.8.1 to 1.8.2
o build(deps): bump go.etcd.io/bbolt from 1.3.5 to 1.3.6
o build(deps): bump github.com/containers/storage from 1.32.1 to 1.32.2
o build(deps): bump github.com/mattn/go-shellwords from 1.0.11 to 1.0.12
o build(deps): bump github.com/onsi/ginkgo from 1.16.3 to 1.16.4
o fix(docs): typo
o Move to v1.22.0-dev
o Fix handling of auth.json file while in a user namespace
o Add rusage-logfile flag to optionally send rusage to a file
o imagebuildah: redo step logging
o build(deps): bump github.com/onsi/ginkgo from 1.16.2 to 1.16.3
o build(deps): bump github.com/containers/storage from 1.32.0 to 1.32.1
o Add volumes to make running buildah within a container easier
o build(deps): bump github.com/onsi/gomega from 1.12.0 to 1.13.0
o Add and use a "copy" helper instead of podman load/save
o Bump github.com/containers/common from 0.38.4 to 0.39.0
o containerImageRef/containerImageSource: don't buffer uncompressed layers
o containerImageRef(): squashed images have no parent images
o Sync. workflow across skopeo, buildah, and podman
o Bump github.com/containers/storage from 1.31.1 to 1.31.2
o Bump github.com/opencontainers/runc from 1.0.0-rc94 to 1.0.0-rc95
o Bump to v1.21.1-dev [NO TESTS NEEDED]
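The `--env` and `--workingdir` flags for `buildah run` noted in the list above allow one-off overrides without committing configuration changes to the working container. A minimal sketch:

    $ ctr=$(buildah from alpine)
    $ buildah run --env FOO=bar --workingdir /tmp "$ctr" -- sh -c 'pwd; echo "$FOO"'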
o Don't blow up if cpp detects errors
o Vendor in containers/common v0.38.4
o Remove 'buildah run --security-opt' from completion
o update c/common
o Fix handling of --default-mounts-file
o update vendor of containers/storage v1.31.1
o Bump github.com/containers/storage from 1.30.3 to 1.31.0
o Send logrus messages back to caller when building
o github: Fix bad repo. ref in workflow config
o Check earlier for bad image tag names
o buildah bud: fix containers/podman/issues/10307
o Bump github.com/containers/storage from 1.30.1 to 1.30.3
o Cirrus: Support [CI:DOCS] test skipping
o Notification email for cirrus-cron build failures
o Bump github.com/opencontainers/runc from 1.0.0-rc93 to 1.0.0-rc94
o Fix race condition
o Fix copy race while walking paths
o Preserve ownership of lower directory when doing an overlay mount
o Bump github.com/onsi/gomega from 1.11.0 to 1.12.0
o Update nix pin with `make nixpkgs`
o codespell cleanup
o Multi-arch github-action workflow unification
o Bump github.com/containers/image/v5 from 5.11.1 to 5.12.0
o Bump github.com/onsi/ginkgo from 1.16.1 to 1.16.2
o imagebuildah: ignore signatures when tagging images
o update to latest libimage
o Bump github.com/containers/common from 0.37.0 to 0.37.1
o Bump github.com/containers/storage from 1.30.0 to 1.30.1
o Upgrade to GitHub-native Dependabot
o Document location of auth.json file if XDG_RUNTIME_DIR is not set
o run.bats: fix flake in run-user test
o Cirrus: Update F34beta -> F34
o pr-should-include-tests: try to make work in buildah
o runUsingRuntime: when relaying error from the runtime, mention that
o Run(): avoid Mkdir() into the rootfs
o imagebuildah: replace archive with chrootarchive
o imagebuildah.StageExecutor.volumeCacheSaveVFS(): set up bind mounts
o conformance: use :Z with transient mounts when SELinux is enabled
o bud.bats: fix a bats warning
o imagebuildah: create volume directories when using overlays
o imagebuildah: drop resolveSymlink()
o namespaces test - refactoring and cleanup
o Refactor 'idmapping' system test
o Cirrus: Update Ubuntu images to 21.04
o Tiny fixes in bud system tests
o Add compatibility wrappers for removed packages
o Fix expected message at pulling image
o Fix system tests of 'bud' subcommand
o [CI:DOCS] Update steps for CentOS runc users
o Add support for secret mounts
o Add buildah manifest rm command (example below)
o restore push/pull and util API
o [CI:DOCS] Remove older distro docs
o Rename rhel secrets to subscriptions
o vendor in openshift/imagebuilder
o Remove buildah bud --loglevel ...
o use new containers/common/libimage package
o Fix copier when using globs
o Test namespace flags of 'bud' subcommand
o Add system test of 'bud' subcommand
o Output names of multiple tags in buildah bud
o push to docker test: don't get fooled by podman
o copier: add Remove()
o build(deps): bump github.com/containers/image/v5 from 5.10.5 to 5.11.1
o Restore log timestamps
o Add system test of 'buildah help' with a tiny fix
o tests: copy.bats: fix infinite hang
o Do not force hard code to crun in rootless mode
o build(deps): bump github.com/openshift/imagebuilder from 1.2.0 to 1.2.1
o build(deps): bump github.com/containers/ocicrypt from 1.1.0 to 1.1.1
o build(deps): bump github.com/containers/common from 0.35.4 to 0.36.0
o Fix arg missing warning in bud
o Check without flag in 'from --cgroup-parent' test
o Minor fixes to Buildah as a library tutorial documentation
o Add system test of 'buildah version' for packaged buildah
o Add a few system tests of 'buildah from'
o Log the final error with %+v at logging level "trace"
o copier: add GetOptions.NoCrossDevice
o Update nix pin with `make nixpkgs`
o Bump to v1.20.2-dev
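The `buildah manifest rm` command noted above rounds out the manifest-list lifecycle: a list can be created, populated, and discarded without removing the referenced images. The list name and image reference below are illustrative:

    $ buildah manifest create mylist
    $ buildah manifest add mylist docker://quay.io/libpod/testimage:latest
    $ buildah manifest rm mylist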
o Run container with isolation type set at 'from'
o bats helpers.bash - minor refactoring
o Bump containers/storage vendor to v1.29.0
o build(deps): bump github.com/onsi/ginkgo from 1.16.0 to 1.16.1
o Cirrus: Update VMs w/ F34beta
o CLI add/copy: add a --from option
o build(deps): bump github.com/onsi/ginkgo from 1.15.2 to 1.16.0
o Add authentication system tests for 'commit' and 'bud'
o fix local image lookup for custom platform
o Double-check existence of OCI runtimes
o Cirrus: Make use of shared get_ci_vm container
o Add system tests of "buildah run"
o Update nix pin with `make nixpkgs`
o Remove some stuttering when returning errors
o Setup alias for --tty to --terminal
o Add conformance tests for COPY /...
o Put a few more minutes on the clock for the CI conformance test
o Add a conformance test for COPY --from $symlink
o Add conformance tests for COPY ""
o Check for symlink in builtin volume
o Sort all mounts by destination directory
o System-test cleanup
o Export parse.Platform string to be used by podman-remote
o blobcache: fix sequencing error
o build(deps): bump github.com/containers/common from 0.35.3 to 0.35.4
o Fix URL in demos/buildah_multi_stage.sh
o Add a few system tests
o [NO TESTS NEEDED] Use --recurse-modules when building git context
o Bump to v1.20.1-dev
o make nixpkgs

o make nixpkgs
o Update vendor of containers/storage and containers/common
o Buildah inspect should be able to inspect manifests
o Make buildah push support pushing manifest lists and digests
o Fix handling of TMPDIR environment variable
o Add support for --manifest flags
o Upper directory should match mode of destination directory
o Only grab the OS, Arch if the user actually specified them
o Use --arch and --os and --variant options to select architecture and os (example below)
o Cirrus: Track libseccomp and golang version
o copier.PutOptions: add an "IgnoreDevices" flag
o fix: `rmi --prune` when parent image is in store.
o build(deps): bump github.com/containers/storage from 1.24.3 to 1.24.4
o build(deps): bump github.com/containers/common from 0.31.1 to 0.31.2
o Allow users to specify stdin into containers
o Drop log message on failure to mount on /sys file systems to info
o Spelling
o SELinux no longer requires a tag.
o build(deps): bump github.com/opencontainers/selinux from 1.6.0 to 1.8.0
o build(deps): bump github.com/containers/common from 0.31.0 to 0.31.1
o Update nix pin with `make nixpkgs`
o Switch references of /var/run -> /run
o Allow FROM to be overridden with from option
o copier: don't assume we can chroot() on Unixy systems
o copier: add PutOptions.NoOverwriteDirNonDir, Get/PutOptions.Rename
o copier: handle replacing directories with not-directories
o copier: Put: skip entries with zero-length names
o build(deps): bump github.com/containers/storage from 1.24.2 to 1.24.3
o Add U volume flag to chown source volumes
o Turn off PRIOR_UBUNTU Test until vm is updated
o pkg, cli: rootless uses correct isolation
o build(deps): bump github.com/onsi/gomega from 1.10.3 to 1.10.4
o update installation doc to reflect current status
o Move away from using docker.io
o enable short-name aliasing
o build(deps): bump github.com/containers/storage from 1.24.1 to 1.24.2
o build(deps): bump github.com/containers/common from 0.30.0 to 0.31.0
o Throw errors when using bogus --network flags
o pkg/supplemented test: replace our null blobinfocache
o build(deps): bump github.com/containers/common from 0.29.0 to 0.30.0
o inserts forgotten quotation mark
o manifest create/add: do not prefer local images
o Add container information to .containerenv
o Add --ignorefile flag to use alternate .dockerignore files
o Add a source debug build
o Fix crash on invalid filter commands
o build(deps): bump github.com/containers/common from 0.27.0 to 0.29.0
o Switch to using containers/common pkg's
o fix: non-portable shebang #2812
o Remove copy/paste errors that leaked `Podman` into man pages.
o Add suggests cpp to spec file
o Apply suggestions from code review
o update docs for debian testing and unstable
o imagebuildah: disable pseudo-terminals for RUN
o Compute diffID for mapped-layer at creating image source
o intermediateImageExists: ignore images whose history we can't read
o Bump to v1.19.0-dev
o build(deps): bump github.com/containers/common from 0.26.3 to 0.27.0
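The --os/--arch/--variant selection noted above applies when pulling a base image, so a working container for a foreign platform can be requested explicitly. The values below are illustrative:

    $ buildah from --os linux --arch arm64 --variant v8 alpine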
o Fix testing error caused by simultaneous merge
o Vendor in containers/storage v1.24.0
o short-names aliasing
o Add --policy flag to buildah pull
o Stop overwrapping and stuttering
o copier.Get(): ignore ENOTSUP/ENOSYS when listing xattrs
o Run: don't forcibly disable UTS namespaces in rootless mode
o test: ensure non-directory in a Dockerfile path is handled correctly
o Add a few tests for `pull` command
o Fix buildah config --cmd to handle array
o build(deps): bump github.com/containers/storage from 1.23.8 to 1.23.9
o Fix NPE when Dockerfile path contains non-directory entries
o Update buildah bud man page from podman build man page
o Move declaration of decryption-keys to common cli
o Run: correctly call copier.Mkdir
o util: digging UID/GID out of os.FileInfo should work on Unix
o imagebuildah.getImageTypeAndHistoryAndDiffIDs: cache results
o Verify userns-uid-map and userns-gid-map input
o Use CPP, CC and flags in dep check scripts
o Avoid overriding LDFLAGS in Makefile
o ADD: handle --chown on URLs
o Update nix pin with `make nixpkgs`
o (*Builder).Run: MkdirAll: handle EEXIST error
o copier: try to force loading of nsswitch modules before chroot()
o fix MkdirAll usage
o build(deps): bump github.com/containers/common from 0.26.2 to 0.26.3
o build(deps): bump github.com/containers/storage from 1.23.7 to 1.23.8
o Use osusergo build tag for static build
o imagebuildah: cache should take image format into account
o Bump to v1.18.0-dev
o Handle cases where other tools mount/unmount containers
o overlay.MountReadOnly: support RO overlay mounts
o overlay: use fusermount for rootless umounts
o overlay: fix umount
o Switch default log level of Buildah to Warn. Users need to see these messages
o Drop error messages about OCI/Docker format to Warning level
o build(deps): bump github.com/containers/common from 0.26.0 to 0.26.2
o tests/testreport: adjust for API break in storage v1.23.6
o build(deps): bump github.com/containers/storage from 1.23.5 to 1.23.7
o build(deps): bump github.com/fsouza/go-dockerclient from 1.6.5 to 1.6.6
o copier: put: ignore Typeflag="g"
o Use curl to get repo file (fix #2714)
o build(deps): bump github.com/containers/common from 0.25.0 to 0.26.0
o build(deps): bump github.com/spf13/cobra from 1.0.0 to 1.1.1
o Remove docs that refer to bors, since we're not using it
o Buildah bud should not use stdin by default
o bump containerd, docker, and golang.org/x/sys
o Makefile: cross: remove windows.386 target
o copier.copierHandlerPut: don't check length when there are errors
o Stop excessive wrapping
o CI: require that conformance tests pass
o bump(github.com/openshift/imagebuilder) to v1.1.8
o Skip tlsVerify insecure BUILD_REGISTRY_SOURCES
o Fix wrong build path
o refactor pullpolicy to avoid deps
o build(deps): bump github.com/containers/common from 0.24.0 to 0.25.0
o CI: run gating tasks with a lot more memory
o ADD and COPY: descend into excluded directories, sometimes
o copier: add more context to a couple of error messages
o copier: check an error earlier
o copier: log stderr output as debug on success
o Update nix pin with `make nixpkgs`
o Set directory ownership when copied with ID mapping
o build(deps): bump github.com/sirupsen/logrus from 1.6.0 to 1.7.0
o build(deps): bump github.com/containers/common from 0.23.0 to 0.24.0
o Cirrus: Remove bors artifacts
o Sort build flag definitions alphabetically
o ADD: only expand archives at the right time
o Remove configuration for bors
o Shell Completion for podman build flags
o Bump c/common to v0.24.0
o New CI check: xref --help vs man pages
o CI: re-enable several linters
o Move --userns-uid-map/--userns-gid-map description into buildah man page
o add: preserve ownerships and permissions on ADDed archives
o Makefile: tweak the cross-compile target
o Bump containers/common to v0.23.0
o chroot: create bind mount targets 0755 instead of 0700
o Change call to Split() to safer SplitN()
o chroot: fix handling of errno seccomp rules
o build(deps): bump github.com/containers/image/v5 from 5.5.2 to 5.6.0
o Add In Progress section to contributing
o integration tests: make sure tests run in ${topdir}/tests
o Run(): ignore containers.conf's environment configuration
o Warn when setting healthcheck in OCI format
o Cirrus: Skip git-validate on branches
o tools: update git-validation to the latest commit
o tools: update golangci-lint to v1.18.0
o Add a few tests of push command
o Add(): fix handling of relative paths with no ContextDir
o build(deps): bump github.com/containers/common from 0.21.0 to 0.22.0
o Lint: Use same linters as podman
o Validate: reference HEAD
o Fix buildah mount to display container names not ids
o Update nix pin with `make nixpkgs`
o Add missing --format option in buildah from man page
o Fix up code based on codespell
o build(deps): bump github.com/openshift/imagebuilder from 1.1.6 to 1.1.7
o build(deps): bump github.com/containers/storage from 1.23.4 to 1.23.5
o Improve buildah completions
o Cirrus: Fix validate commit epoch
o Fix bash completion of manifest flags
o Make some man pages uniform
o Update Buildah Tutorial to address BZ1867426
o Update bash completion of `manifest add` sub command
o copier.Get(): hard link targets shouldn't be relative paths
o build(deps): bump github.com/onsi/gomega from 1.10.1 to 1.10.2
o Pass timestamp down to history lines
o Timestamp gets updated every time you inspect an image
o bud.bats: use absolute paths in newly-added tests
o contrib/cirrus/lib.sh: don't use CN for the hostname
o tests: Add some tests
o Update `manifest add` man page
o Extend flags of `manifest add`
o build(deps): bump github.com/containers/storage from 1.23.3 to 1.23.4
o build(deps): bump github.com/onsi/ginkgo from 1.14.0 to 1.14.1
o Bump to v1.17.0-dev
o CI: expand cross-compile checks
o fix build on 32bit arches
o containerImageRef.NewImageSource(): don't always force timestamps
o Add fuse module warning to image readme
o Heed our retry delay option values when retrying commit/pull/push
o Switch to containers/common for seccomp
o Use --timestamp rather than --omit-timestamp (example below)
o docs: remove outdated notice
o docs: remove outdated notice
o build-using-dockerfile: add a hidden --log-rusage flag
o build(deps): bump github.com/containers/image/v5 from 5.5.1 to 5.5.2
o Discard ReportWriter if user sets options.Quiet
o build(deps): bump github.com/containers/common from 0.19.0 to 0.20.3
o Fix ownership of content copied using COPY --from
o newTarDigester: zero out timestamps in tar headers
o Update nix pin with `make nixpkgs`
o bud.bats: correct .dockerignore integration tests
o Use pipes for copying
o run: include stdout in error message
o run: use the correct error for errors.Wrapf
o copier: un-export internal types
o copier: add Mkdir()
o in_podman: don't get tripped up by $CIRRUS_CHANGE_TITLE
o docs/buildah-commit.md: tweak some wording, add a --rm example
o imagebuildah: don't blank out destination names when COPYing
o Replace retry functions with common/pkg/retry
o StageExecutor.historyMatches: compare timestamps using .Equal
o Update vendor of containers/common
o Fix errors found in coverity scan
o Change namespace handling flags to better match podman commands
o conformance testing: ignore buildah.BuilderIdentityAnnotation labels
o Vendor in containers/storage v1.23.0
o Add buildah.IsContainer interface
o Avoid feeding run_buildah to pipe
o fix(buildahimage): add xz dependency in buildah image
o Bump github.com/containers/common from 0.15.2 to 0.18.0
o Howto for rootless image building from OpenShift
o Add --omit-timestamp flag to buildah bud
o Update nix pin with `make nixpkgs`
o Shutdown storage on failures
o Handle COPY --from when an argument is used
o Bump github.com/seccomp/containers-golang from 0.5.0 to 0.6.0
o Cirrus: Use newly built VM images
o Bump github.com/opencontainers/runc from 1.0.0-rc91 to 1.0.0-rc92
o Enhance the .dockerignore man pages
o conformance: add a test for COPY from subdirectory
o fix bug manifest inspect
o Add documentation for .dockerignore
o Add BuilderIdentityAnnotation to identify buildah version
o DOC: Add quay.io/containers/buildah image to README.md
o Update buildahimages readme
o fix spelling mistake in "info" command result display
o Don't bind /etc/hosts and /etc/resolv.conf if network is not present
o blobcache: avoid an unnecessary NewImage()
o Build static binary with `buildGoModule`
o copier: split StripSetidBits into StripSetuidBit/StripSetgidBit/StripStickyBit
o tarFilterer: handle multiple archives
o Fix a race we hit during conformance tests
o Rework conformance testing
o Update 02-registries-repositories.md
o test-unit: invoke cmd/buildah tests with --flags
o parse: fix a type mismatch in a test
o Fix compilation of tests/testreport/testreport
o build.sh: log the version of Go that we're using
o test-unit: increase the test timeout to 40/45 minutes
o Add the "copier" package
o Fix & add notes regarding problematic language in codebase
o Add dependency on github.com/stretchr/testify/require
o CompositeDigester: add the ability to filter tar streams
o BATS tests: make more robust
o vendor golang.org/x/[email protected]
o Switch golang 1.12 to golang 1.13
o imagebuildah: wait for stages that might not have even started yet
o chroot, run: not fail on bind mounts from /sys
o chroot: do not use setgroups if it is blocked
o Set engine env from containers.conf
o imagebuildah: return the right stage's image as the "final" image
o Fix a help string
o Deduplicate environment variables
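The --timestamp flag mentioned above takes seconds since the Unix epoch and pins the timestamps recorded in the image and its history, which is the usual route to reproducible builds; 0 is just an example value:

    $ buildah bud --timestamp 0 -t reproducible .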
o switch containers/libpod to containers/podman
o Bump github.com/containers/ocicrypt from 1.0.2 to 1.0.3
o Bump github.com/opencontainers/selinux from 1.5.2 to 1.6.0
o Mask out /sys/dev to prevent information leak
o linux: skip errors from the runtime kill
o Mask over the /sys/fs/selinux in mask branch
o Add VFS additional image store to container
o tests: add auth tests
o Allow "readonly" as alias to "ro" in mount options
o Ignore OS X specific consistency mount option
o Bump github.com/onsi/ginkgo from 1.13.0 to 1.14.0
o Bump github.com/containers/common from 0.14.0 to 0.15.2
o Rootless Buildah should default to IsolationOCIRootless
o imagebuildah: fix inheriting multi-stage builds
o Make imagebuildah.BuildOptions.Architecture/OS optional
o Make imagebuildah.BuildOptions.Jobs optional
o Resolve a possible race in imagebuildah.Executor.startStage()
o Switch scripts to use containers.conf
o Bump openshift/imagebuilder to v1.1.6
o Bump go.etcd.io/bbolt from 1.3.4 to 1.3.5
o buildah, bud: support --jobs=N for parallel execution (example below)
o executor: refactor build code inside new function
o Add bud regression tests
o Cirrus: Fix missing htpasswd in registry img
o docs: clarify the 'triples' format
o CHANGELOG.md: Fix markdown formatting
o Add nix derivation for static builds
o Bump to v1.16.0-dev
o version centos7 for compatible
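The --jobs flag noted above bounds how many stages of a multi-stage build may execute in parallel; 4 is an arbitrary choice:

    $ buildah bud --jobs 4 -t myimage .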
o Bump github.com/containers/common from 0.12.0 to 0.13.1
o Bump github.com/containers/storage from 1.20.1 to 1.20.2
o Bump github.com/seccomp/containers-golang from 0.4.1 to 0.5.0
o Bump github.com/stretchr/testify from 1.6.0 to 1.6.1
o Bump github.com/opencontainers/runc from 1.0.0-rc9 to 1.0.0-rc90
o Add CVE-2020-10696 to CHANGELOG.md and changelog.txt
o Bump github.com/stretchr/testify from 1.5.1 to 1.6.0
o Bump github.com/onsi/ginkgo from 1.12.2 to 1.12.3
o Vendor in containers/common v0.12.0
o fix lighttpd example
o Vendor in new go.etcd.io/bbolt
o Bump github.com/onsi/ginkgo from 1.12.1 to 1.12.2
o Bump imagebuilder for ARG fix
o Bump github.com/containers/common from 0.11.2 to 0.11.4
o remove dependency on openshift struct
o Warn on unset build arguments
o vendor: update seccomp/containers-golang to v0.4.1
o Amended docs
o Updated docs
o clean up comments
o update exit code for tests
o Implement commit for encryption
o implementation of encrypt/decrypt push/pull/bud/from
o fix resolve docker image name as transport
o Bump github.com/opencontainers/go-digest from 1.0.0-rc1 to 1.0.0
o Bump github.com/onsi/ginkgo from 1.12.0 to 1.12.1
o Bump github.com/containers/storage from 1.19.1 to 1.19.2
o Bump github.com/containers/image/v5 from 5.4.3 to 5.4.4
o Add preliminary profiling support to the CLI
o Bump github.com/containers/common from 0.10.0 to 0.11.2
o Evaluate symlinks in build context directory
o fix error info about get signatures for containerImageSource
o Add Security Policy
o Cirrus: Fixes from review feedback
o Bump github.com/containers/storage from 1.19.0 to 1.19.1
o Bump github.com/sirupsen/logrus from 1.5.0 to 1.6.0
o imagebuildah: stages shouldn't count as their base images
o Update containers/common v0.10.0
o Bump github.com/fsouza/go-dockerclient from 1.6.4 to 1.6.5
o Add registry to buildahimage Dockerfiles
o Cirrus: Use pre-installed VM packages + F32
o Cirrus: Re-enable all distro versions
o Cirrus: Update to F31 + Use cache images
o golangci-lint: Disable gosimple
o Lower number of golangci-lint threads
o Fix permissions on containers.conf
o Don't force tests to use runc
o Bump github.com/containers/common from 0.9.1 to 0.9.5
o Return exit code from failed containers
o Bump github.com/containers/storage from 1.18.2 to 1.19.0
o Bump github.com/containers/common from 0.9.0 to 0.9.1
o cgroup_manager should be under [engine]
o Use c/common/pkg/auth in login/logout
o Cirrus: Temporarily disable Ubuntu 19 testing
o Add containers.conf to stablebyhand build
o Update gitignore to exclude test Dockerfiles
o Bump github.com/fsouza/go-dockerclient from 1.6.3 to 1.6.4
o Bump github.com/containers/common from 0.8.1 to 0.9.0
o Bump back to v1.15.0-dev
o Remove warning for systemd inside of container
o Run (make vendor)
o Run (make -C tests/tools vendor)
o Run (go mod tidy) before (go mod vendor) again
o Fix (make vendor)
o Bump validation
o Bump back to v1.15.0-dev
o Bump github.com/containers/image/v5 from 5.3.1 to 5.4.3
o make vendor: run `tidy` after `vendor`
o Do not skip the directory when the ignore pattern matches
o Bump github.com/containers/common from 0.7.0 to 0.8.1
o Downgrade sirupsen/logrus from 1.4.2
o Fix errorf conventions
o dockerignore tests: remove symlinks, rework
o Bump back to v1.15.0-dev
o bud.bats - cleanup, refactoring
o vendor in latest containers/storage 1.18.0 and containers/common v0.7.0
o Bump github.com/spf13/cobra from 0.0.6 to 0.0.7
o Bump github.com/containers/storage from 1.16.5 to 1.17.0
o Bump github.com/containers/image/v5 from 5.2.1 to 5.3.1
o Fix Amazon install step
o Bump back to v1.15.0-dev
o Fix bud-build-arg-cache test
o Make image history work correctly with new args handling
o Don't add args to the RUN environment from the Builder
o Update github.com/openshift/imagebuilder to v1.1.4
o Add .swp files to .gitignore
o revert #2246 FIPS mode change
o Bump back to v1.15.0-dev
o image with dup layers: we now have one on quay
o digest test: make more robust
o Fix fips-mode check for RHEL8 boxes
o Fix potential CVE in tarfile w/ symlink (Edit 02-Jun-2020: Addresses CVE-2020-10696)
o Fix .dockerignore with globs and ! commands
o update install steps for Amazon Linux 2
o Bump github.com/openshift/imagebuilder from 1.1.2 to 1.1.3
o Add comment for RUN command in volume ownership test
o Run stat command directly for volume ownership test
o vendor in containers/common v0.6.1
o Cleanup go.sum
o Bump back to v1.15.0-dev
o Update containers/storage to v1.16.5
o Bump github.com/containers/storage from 1.16.2 to 1.16.4
o Bump github.com/openshift/imagebuilder from 1.1.1 to 1.1.2
o Update github.com/openshift/imagebuilder vendoring
o Update unshare man page to fix script example
o Fix compilation errors on non-Linux platforms
o Bump containers/common and opencontainers/selinux versions
o Add tests for volume ownership
o Preserve volume uid and gid through subsequent commands
o Fix FORWARD_NULL errors found by Coverity
o Bump github.com/containers/storage from 1.16.1 to 1.16.2
o Fix errors found by codespell
o Bump back to v1.15.0-dev
o Add Pull Request Template
o Add Buildah pull request template
o Bump to containers/storage v1.16.1
o run_linux: fix tight loop if file is not pollable
o Bump github.com/opencontainers/selinux from 1.3.2 to 1.3.3
o Bump github.com/containers/common from 0.4.1 to 0.4.2
o Bump back to v1.15.0-dev
o Add Containerfile to build a versioned stable image on quay.io
o Search for local runtime per values in containers.conf
o Set correct ownership on working directory
o BATS: in teardown, umount stale mounts
o Bump github.com/spf13/cobra from 0.0.5 to 0.0.6
o Bump github.com/fsouza/go-dockerclient from 1.6.1 to 1.6.3
o Bump github.com/stretchr/testify from 1.4.0 to 1.5.1
o Replace unix with syscall to allow vendoring into libpod
o Update to containers/common v0.4.1
o Improve remote manifest retrieval
o Fix minor spelling errors in containertools README
o Clear the right variable in buildahimage
o Correct a couple of incorrect format specifiers
o Update to containers/common v0.3.0
o manifest push --format: force an image type, not a list type
o run: adjust the order in which elements are added to $PATH
o getDateAndDigestAndSize(): handle creation time not being set
o Bump github.com/containers/common from 0.2.0 to 0.2.1
o include installation steps for CentOS 8 and Stream
o include installation steps for CentOS 7 and forks
o Adjust Ubuntu install info to also work on Pop!_OS
o Make the commit id clear like Docker
o Show error on copied file above context directory in build
o Bump github.com/containers/image/v5 from 5.2.0 to 5.2.1
o pull/from/commit/push: retry on most failures
o Makefile: fix install.cni.sudo
o Repair buildah so it can use containers.conf on the server side
o Bump github.com/mattn/go-shellwords from 1.0.9 to 1.0.10
o Bump github.com/fsouza/go-dockerclient from 1.6.0 to 1.6.1
o Fixing formatting & build instructions
o Add Code of Conduct
o Bors: Fix no. req. github reviews
o Cirrus+Bors: Simplify temp branch skipping
o Bors-ng: Add documentation and status-icon
o Bump github.com/onsi/ginkgo from 1.11.0 to 1.12.0
o fix XDG_RUNTIME_DIR for authfile
o Cirrus: Disable F29 testing
o Cirrus: Add jq package
o Cirrus: Fix lint + validation using wrong epoch
o Stop using fedoraproject registry
o Bors: Workaround ineffective required statuses
o Bors: Enable app + Disable Travis
o Cirrus: Add standardized log-collection
o Cirrus: Improve automated lint + validation
o Allow passing options to golangci-lint
o Cirrus: Fixes from review feedback
o Cirrus: Temporarily ignore VM testing failures
o Cirrus: Migrate off papr + implement VM testing
o Cirrus: Update packages + fixes for get_ci_vm.sh
o Show validation command-line
o Skip overlay test w/ vfs driver
o use alpine, not centos, for various tests
o Flake handling: cache and prefetch images
o Bump to v1.15.0-dev
o bump github.com/mtrmac/gpgme
o Update containers/common to v0.1.4
o manifest push: add --format option
o Bump github.com/onsi/gomega from 1.8.1 to 1.9.0
o vendor github.com/containers/image/[email protected]
o info test: deal with random key order
o Bump back to v1.14.0-dev
o sign.bats: set GPG_TTY=/dev/null
o Fix parse_unsupported.go
o getDateAndDigestAndSize(): use manifest.Digest
o Bump github.com/opencontainers/selinux from 1.3.0 to 1.3.1
o Bump github.com/containers/common from 0.1.0 to 0.1.2
o Touch up os/arch doc
o chroot: handle slightly broken seccomp defaults
o buildahimage: specify fuse-overlayfs mount options
o Bump github.com/mattn/go-shellwords from 1.0.7 to 1.0.9
o copy.bats: make sure we detect failures due to missing source
o parse: don't complain about not being able to rename something to itself
o Makefile: use a $(GO_TEST) macro, fix a typo
o manifests: unit test fix
o Fix build for 32bit platforms
o Allow users to set OS and architecture on bud
o Fix COPY in containerfile with envvar
o Bump c/storage to v1.15.7
o add --sign-by to bud/commit/push, --remove-signatures for pull/push (example below)
o Remove cut/paste error in CHANGELOG.md
o Update vendor of containers/common to v0.1.0
o update install instructions for Debian, Raspbian and Ubuntu
o Add support for containers.conf
o Bump back to v1.14.0-dev
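The signing flags noted above hang off push/pull (and bud/commit for --sign-by). A sketch, where the key identity and registry are placeholders:

    $ buildah push --sign-by [email protected] myimage docker://registry.example.com/myimage
    $ buildah pull --remove-signatures docker://registry.example.com/myimage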
o Bump github.com/containers/common from 0.0.5 to 0.0.7
o Bump github.com/onsi/ginkgo from 1.10.3 to 1.11.0
o Bump github.com/pkg/errors from 0.8.1 to 0.9.0
o Bump github.com/onsi/gomega from 1.7.1 to 1.8.1
o Add codespell support
o copyFileWithTar: close source files at the right time
o copy: don't digest files that we ignore
o Check for .dockerignore specifically
o Travis: rm go 1.12.x
o Don't setup excludes, if there is only one pattern to match
o set HOME env to /root on chroot-isolation by default
o docs: fix references to containers-*.5
o update openshift/api
o fix bug Add check .dockerignore COPY file
o buildah bud --volume: run from tmpdir, not source dir
o Fix imageNamePrefix to give consistent names in buildah-from
o cpp: use -traditional and -undef flags
o Fix image reference in tutorial 4
o discard outputs coming from onbuild command on buildah-from --quiet
o make --format columnizing consistent with buildah images
o Bump to v1.14.0-dev
o Bump to c/storage v1.15.5
o Update container/storage to v1.15.4
o Fix option handling for volumes in build
o Rework overlay pkg for use with libpod
o Fix buildahimage builds for buildah
o Add support for FIPS-Mode backends
o Set the TMPDIR for pulling/pushing image to $TMPDIR
o WIP: safer test for pull --all-tags
o BATS major cleanup: blobcache.bats: refactor
o BATS major cleanup: part 4: manual stuff
o BATS major cleanup, step 3: yet more run_buildah
o BATS major cleanup, part 2: use more run_buildah
o BATS major cleanup, part 1: log-level
o Bump github.com/containers/image/v5 from 5.0.0 to 5.1.0
o Bump github.com/containers/common from 0.0.3 to 0.0.5
o Bump to v1.13.0-dev
o Allow ADD to use http src
o Bump to c/storage v1.15.3
o install.md: update golang dependency
o imgtype: reset storage opts if driver overridden
o Start using containers/common
o overlay.bats typo: fuse-overlays should be fuse-overlayfs
o chroot: Unmount with MNT_DETACH instead of UnmountMountpoints()
o bind: don't complain about missing mountpoints
o imgtype: check earlier for expected manifest type
o Vendor containers/storage fix
o Vendor containers/storage v1.15.1
o Add history names support
o PR takeover of #1966
o Tests: Add inspect test check steps
o Tests: Add container name and id check in containers test steps
o Test: Get permission in add test
o Tests: Add a test for tag by id
o Tests: Add test cases for push test
o Tests: Add image digest test
o Tests: Add some buildah from tests
o Tests: Add two commit tests
o Tests: Add buildah bud with --quiet test
o Tests: Add two tests for buildah add
o Bump back to v1.12.0-dev
o Handle missing equal sign in --from and --chown flags for COPY/ADD
o bud COPY does not download URL
o Bump github.com/onsi/gomega from 1.7.0 to 1.7.1
o Fix .dockerignore exclude regression
o Ran buildah through codespell
o commit(docker): always set ContainerID and ContainerConfig
o Touch up commit man page image parameter
o Add builder identity annotations.
o info: use util.Runtime()
o Bump github.com/onsi/ginkgo from 1.10.2 to 1.10.3
o Bump back to v1.12.0-dev
o Enhance error on unsafe symbolic link targets
o Add OCIRuntime to info
o Check nonexistent authfile
o Only output image id if running buildah bud --quiet
o Fix --pull=true||false and add --pull-never to bud and from (retry) (example below)
o cgroups v2: tweak or skip tests
o Prepwork: new 'skip' helpers for tests
o Handle configuration blobs for manifest lists
o unmarshalConvertedConfig: avoid using the updated image's ref
o Add completions for Manifest commands
o Add disableFips option to secrets pkg
o Update bud.bats test archive test
o Add test for caching based on content digest
o Builder.untarPath(): always evaluate b.ContentDigester.Hash()
o Bump github.com/onsi/ginkgo from 1.10.1 to 1.10.2
o Fix another broken test: copy-url-mtime
o yet more fixes
o Actual bug fix for 'add' test: fix the expected mode
o BATS tests - lots of mostly minor cleanup
o build: drop support for ostree
o Add support for make vendor-in-container
o imgtype: exit with error if storage fails
o remove XDG_RUNTIME_DIR from default authfile path
o fix troubleshooting redirect instructions
o Bump back to v1.12.0-dev
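As I read the --pull-never addition above, it makes the build fail rather than pull when the base image is absent from local storage:

    $ buildah bud --pull-never -t myimage .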
o buildah: add a "manifest" command
o manifests: add the module
o pkg/supplemented: add a package for grouping images together
o pkg/manifests: add a manifest list build/manipulation API
o Update for ErrUnauthorizedForCredentials API change in containers/image
o Update for manifest-lists API changes in containers/image
o version: also note the version of containers/image
o Move to containers/image v5.0.0
o Enable --device directory as src device
o Fix git build with branch specified
o Bump github.com/openshift/imagebuilder from 1.1.0 to 1.1.1
o Bump github.com/fsouza/go-dockerclient from 1.4.4 to 1.5.0
o Add clarification to the Tutorial for new users
o Silence "using cache" to ensure -q is fully quiet
o Add OWNERS File to Buildah
o Bump github.com/containers/storage from 1.13.4 to 1.13.5
o Move runtime flag to bud from common
o Commit: check for storage.ErrImageUnknown using errors.Cause()
o Fix crash when invalid COPY --from flag is specified.
o Bump back to v1.12.0-dev
o Update c/image to v4.0.1
o Bump github.com/spf13/pflag from 1.0.3 to 1.0.5
o Fix --build-args handling
o Bump github.com/spf13/cobra from 0.0.3 to 0.0.5
o Bump github.com/cyphar/filepath-securejoin from 0.2.1 to 0.2.2
o Bump github.com/onsi/ginkgo from 1.8.0 to 1.10.1
o Bump github.com/fsouza/go-dockerclient from 1.3.0 to 1.4.4
o Add support for retrieving context from stdin "-"
o Ensure bud remote context cleans up on error
o info: add cgroups2
o Bump github.com/seccomp/libseccomp-golang from 0.9.0 to 0.9.1
o Bump github.com/mattn/go-shellwords from 1.0.5 to 1.0.6
o Bump github.com/stretchr/testify from 1.3.0 to 1.4.0
o Bump github.com/opencontainers/selinux from 1.2.2 to 1.3.0
o Bump github.com/etcd-io/bbolt from 1.3.2 to 1.3.3
o Bump github.com/onsi/gomega from 1.5.0 to 1.7.0
o update c/storage to v1.13.4
o Print build 'STEP' line to stdout, not stderr
o Fix travis-ci on forks
o Vendor c/storage v1.13.3
o Use Containerfile by default
o Added tutorial on how to include Buildah as library
o util/util: Fix "configuraitno" -> "configuration" log typo
o Bump back to v1.12.0-dev
o Add some cleanup code
o Move devices code to unit specific directory.
o Bump back to v1.12.0-dev
o Add --devices flag to bud and from
o Downgrade .papr to highest atomic version
o Add support for /run/.containerenv
o Truncate output of too long image names
o Preserve file and directory mount permissions
o Bump fedora version from 28 to 30
o makeImageRef: ignore EmptyLayer if Squash is set
o Set TMPDIR to /var/tmp by default
o replace --debug=false with --log-level=error
o Allow mounts.conf entries for equal source and destination paths
o fix label and annotation for 1-line Dockerfiles
o Enable interfacer linter and fix lints
o install.md: mention goproxy
o Makefile: use go proxy
o Bump to v1.12.0-dev
o tests/bud.bats: add --signature-policy to some tests
o Vendor github.com/openshift/api
o pull/commit/push: pay attention to $BUILD_REGISTRY_SOURCES
o Add `--log-level` command line option and deprecate `--debug` (example below)
o add support for cgroupsV2
o Correctly detect ExitError values from Run()
o Disable empty logrus timestamps to reduce logger noise
o Remove outdated deps Makefile target
o Remove gofmt.sh in favor of golangci-lint
o Remove govet.sh in favor of golangci-lint
o Allow to override build date with SOURCE_DATE_EPOCH
o Update shebangs to take env into consideration
o Fix directory pull image names
o Add --digestfile and Re-add push statement as debug
o README: mention that Podman uses Buildah's API
o Use content digests in ADD/COPY history entries
o add: add a DryRun flag to AddAndCopyOptions
o Fix possible runtime panic on bud
o Add security-related volume options to validator
o use correct path for ginkgo
o Add bud 'without arguments' integration tests
o Update documentation about bud
o add: handle hard links when copying with .dockerignore
o add: teach copyFileWithTar() about symlinks and directories
o Allow buildah bud to be called without arguments
o imagebuilder: fix detection of referenced stage roots
o Touch up go mod instructions in install
o run_linux: fix mounting /sys in a userns
o Vendor Storage v1.13.2
o Cirrus: Update VM images
o Fix handling of /dev/null masked devices
o Update `bud`/`from` help to contain indicator for `--dns=none`
o Bump back to v1.11.0-dev
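The --log-level option noted above replaces --debug as the way to tune verbosity, and --digestfile records the pushed image's digest for later scripting; the paths and names here are illustrative:

    $ buildah --log-level debug bud -t myimage .
    $ buildah push --digestfile /tmp/myimage.digest myimage docker://registry.example.com/myimage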
o Bump containers/image to v3.0.2 to fix keyring issue
o Bug fix for volume minus syntax
o Bump container/storage v1.13.1 and containers/image v3.0.1
o bump github.com/containernetworking/cni to v0.7.1
o Add overlayfs to fuse-overlayfs tip
o Add automatic apparmor tag discovery
o Fix bug whereby --get-login has no effect
o Bump to v1.11.0-dev
o vendor github.com/containers/[email protected]
o Remove GO111MODULE in favor of `-mod=vendor`
o Vendor in containers/storage v1.12.16
o Add '-' minus syntax for removal of config values
o tests: enable overlay tests for rootless
o rootless, overlay: use fuse-overlayfs
o vendor github.com/containers/[email protected]
o Added '-' syntax to remove volume config option
o delete `successfully pushed` message
o Add golint linter and apply fixes
o vendor github.com/containers/[email protected]
o Change wait to sleep in buildahimage readme
o Handle ReadOnly images when deleting images
o Add support for listing read/only images
o from/import: record the base image's digest, if it has one
o Fix CNI version retrieval to not require network connection
o Add misspell linter and apply fixes
o Add goimports linter and apply fixes
o Add stylecheck linter and apply fixes
o Add unconvert linter and apply fixes
o image: make sure we don't try to use zstd compression
o run.bats: skip the "z" flag when testing --mount
o Update to runc v1.0.0-rc8
o Update to match updated runtime-tools API
o bump github.com/opencontainers/runtime-tools to v0.9.0
o Build e2e tests using the proper build tags
o Add unparam linter and apply fixes
o Run: correct a typo in the --cap-add help text
o unshare: add a --mount flag (example below)
o fix push check image name is not empty
o Bump to v1.9.2-dev
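The `unshare --mount` flag noted above mounts a container inside the new user namespace and exposes the mountpoint through an environment variable, which makes rootless inspection of a container's filesystem straightforward. A sketch, assuming the VARIABLE=container form of the flag:

    $ ctr=$(buildah from alpine)
    $ buildah unshare --mount MNT="$ctr" sh -c 'ls "$MNT"'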
o add: fix slow copy with no excludes
o Add errcheck linter and fix missing error check
o Improve tests/tools/Makefile parallelism and abstraction
o Fix response body not closed resource leak
o Switch to golangci-lint
o Add gomod instructions and mailing list links
o On Masked path, check if /dev/null already mounted before mounting
o Update to containers/storage v1.12.13
o Refactor code in package imagebuildah
o Add rootless podman with NFS issue in documentation
o Add --mount for buildah run
o import method ValidateVolumeOpts from libpod
o Fix typo
o Makefile: set GO111MODULE=off
o rootless: add the built-in slirp DNS server
o Update docker/libnetwork to get rid of outdated sctp package
o Update buildah-login.md
o migrate to go modules
o install.md: mention go modules
o tests/tools: go module for test binaries
o fix --volume splits comma delimited option
o Add bud test for RUN with a priv'd command
o vendor logrus v1.4.2
o pkg/cli: panic when flags can't be hidden
o pkg/unshare: check all errors
o pull: check error during report write
o run_linux.go: ignore unchecked errors
o conformance test: catch copy error
o chroot/run_test.go: export funcs to actually be executed
o tests/imgtype: ignore error when shutting down the store
o testreport: check json error
o bind/util.go: remove unused func
o rm chroot/util.go
o imagebuildah: remove unused `dedupeStringSlice`
o StageExecutor: EnsureContainerPath: catch error from SecureJoin()
o imagebuildah/build.go: return <expr> instead of branching
o rmi: avoid redundant branching
o conformance tests: nilness: allocate map
o imagebuildah/build.go: avoid redundant `filepath.Join()`
o imagebuildah/build.go: avoid redundant `os.Stat()`
o imagebuildah: omit comparison to bool
o fix "ineffectual assignment" lint errors
o docker: ignore "repeats json tag" lint error
o pkg/unshare: use `...` instead of iterating a slice
o conformance: bud test: use raw strings for regexes
o conformance suite: remove unused func/var
o buildah test suite: remove unused vars/funcs
o testreport: fix golangci-lint errors
o util: remove redundant `return` statement
o chroot: only log clean-up errors
o images_test: ignore golangci-lint error
o blobcache: log error when draining the pipe
o imagebuildah: check errors in deferred calls
o chroot: fix error handling in deferred funcs
o cmd: check all errors
o chroot/run_test.go: check errors
o chroot/run.go: check errors in deferred calls
o imagebuildah.Executor: remove unused onbuild field
o docker/types.go: remove unused struct fields
o util: use strings.ContainsRune instead of index check
o Cirrus: Initial implementation
o Bump to v1.9.1-dev
o buildah-run: fix-out-of-range panic (2)
o Bump back to v1.9.0-dev
o Update containers/image to v2.0.0
o run: fix hang with run and --isolation=chroot
o run: fix hang when using run
o chroot: drop unused function call
o remove --> before imageID on build
o Always close stdin pipe
o Write deny to setgroups when doing single user mapping
o Avoid including linux/memfd.h
o Add a test for the symlink pointing to a directory
o Add missing continue
o Fix the handling of symlinks to absolute paths
o Only set default network sysctls if not rootless
o Support --dns=none like podman
o fix bug --cpu-shares parsing typo
o Fix validate complaint
o Update vendor on containers/storage to v1.12.10
o Create directory paths for COPY thereby ensuring correct perms
o imagebuildah: use a stable sort for comparing build args
o imagebuildah: tighten up cache checking
o bud.bats: add a test verifying the order of --build-args
o add -t to podman run
o imagebuildah: simplify screening by top layers
o imagebuildah: handle ID mappings for COPY --from
o imagebuildah: apply additionalTags ourselves
o bud.bats: test additional tags with cached images
o bud.bats: add a test for WORKDIR and COPY with absolute destinations
o Cleanup Overlay Mounts content
o Add support for file secret mounts
o Add ability to skip secrets in mounts file
o allow 32bit builds
o fix tutorial instructions
o imagebuilder: pass the right contextDir to Add()
o add: use fileutils.PatternMatcher for .dockerignore
o bud.bats: add another .dockerignore test
o unshare: fallback to single usermapping
o addHelperSymlink: clear the destination on os.IsExist errors
o bud.bats: test replacing symbolic links
o imagebuildah: fix handling of destinations that end with '/'
o bud.bats: test COPY with a final "/" in the destination
o linux: add check for sysctl before using it
o unshare: set _CONTAINERS_ROOTLESS_GID
o Rework buildahimages
o build context: support https git repos
o Add a test for ENV special chars behaviour
o Check in new Dockerfiles
o Apply custom SHELL during build time
o config: expand variables only at the command line
o SetEnv: we only need to expand v once
o Add default /root if empty on chroot isolation
o Add support for Overlay volumes into the container (example below)
o Export buildah validate volume functions so it can share code with libpod
o Bump baseline test to F30
o Fix rootless handling of /dev/shm size
o Avoid fmt.Printf() in the library
o imagebuildah: tighten cache checking back up
o Handle WORKDIR with dangling target
o Default Authfile to proper path
o Make buildah run --isolation follow BUILDAH_ISOLATION environment
o Vendor in latest containers/storage and containers/image
o getParent/getChildren: handle layerless images
o imagebuildah: recognize cache images for layerless images
o bud.bats: test scratch images with --layers caching
o Get CHANGELOG.md updates
o Add some symlinks to test our .dockerignore logic
o imagebuildah: addHelper: handle symbolic links
o commit/push: use an everything-allowed policy
o Correct manpage formatting in files section
o Remove must be root statement from buildah doc
o Change image names to stable, testing and upstream
o Bump back to v1.9.0-dev
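The overlay-volume support noted above is requested with the O flag on -v, so RUN steps can write into the mounted tree while the host directory stays untouched; the paths are illustrative:

    $ buildah bud -v /host/cache:/cache:O -t myimage .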
o Vendor Storage 1.12.6
o Create scratch file in TESTDIR
o Test bud-copy-dot with --layers picks up changed file
o Bump back to 1.9.0-dev
o Don't create directory on container
o Replace kubernetes/pause in tests with k8s.gcr.io/pause
o imagebuildah: don't remove intermediate images if we need them
o Rework buildahimagegit to buildahimageupstream
o Fix Transient Mounts
o Handle WORKDIRs that are symlinks
o allow podman to build a client for windows
o Touch up 1.9-dev to 1.9.0-dev
o Bump to 1.9-dev
o Resolve symlink when checking container path
o commit: commit on every instruction, but not always with layers
o CommitOptions: drop the unused OnBuild field
o makeImageRef: pass in the whole CommitOptions structure
o cmd: API cleanup: stores before images
o run: check if SELinux is enabled
o Fix buildahimages Dockerfiles to include support for additionalimages mounted from host.
o Detect changes in rootdir
o Fix typo in buildah-pull(1)
o Vendor in latest containers/storage
o Keep track of any build-args used during buildah bud --layers
o commit: always set a parent ID
o imagebuildah: rework unused-argument detection
o fix bug dest path when COPY .dockerignore
o Move Host IDMappings code from util to unshare
o Add BUILDAH_ISOLATION rootless back
o Travis CI: fail fast, upon error in any step
o imagebuildah: only commit images for intermediate stages if we have to
o Use errors.Cause() when checking for IsNotExist errors
o auto pass http_proxy to container
o Bump back to 1.8-dev
o imagebuildah: don't leak image structs
o Add Dockerfiles for buildahimages
o Bump to Replace golang 1.10 with 1.12
o add --dns* flags to buildah bud
o Add hack/build_speed.sh test speeds on building container images
o Create buildahimage Dockerfile for Quay
o rename 'is' to 'expect_output'
o squash.bats: test squashing in multi-layered builds
o bud.bats: test COPY --from in a Dockerfile while using the cache
o commit: make target image names optional
o Fix bud-args to allow comma separation
o oops, missed some tests in commit.bats
o new helper: expect_line_count
o New tests for #1467 (string slices in cmdline opts)
o Workarounds for dealing with travis; review feedback
o BATS tests - extensive but minor cleanup
o imagebuildah: defer pulling images for COPY --from
o imagebuildah: centralize COMMIT and image ID output
o Travis: do not use traviswait
o imagebuildah: only initialize imagebuilder configuration once per stage
o Make cleaner error on Dockerfile build errors
o unshare: move to pkg/
o unshare: move some code from cmd/buildah/unshare
o Fix handling of Slices versus Arrays
o imagebuildah: reorganize stage and per-stage logic
o imagebuildah: add empty layers for instructions
o Add missing step in installing into Ubuntu
o fix bug in .dockerignore support
o imagebuildah: deduplicate prepended "FROM" instructions
o Touch up intro
o commit: set created-by to the shell if it isn't set
o commit: check that we always set a "created-by"
o docs/buildah.md: add "containers-" prefixes under "SEE ALSO"
o Bump back to 1.8-dev
o mount: do not automatically create a namespace
o buildah: correctly create the userns if euid!=0
o imagebuildah.Build: consolidate cleanup logic
o CommitOptions: drop the redundant Store field
o Move pkg/chrootuser from libpod to buildah.
o imagebuildah: record image IDs and references more often
o vendor imagebuilder v1.1.0
o imagebuildah: fix requiresStart/noRunsRemaining confusion
o imagebuildah: check for unused args across stages
o bump github.com/containernetworking/cni to v0.7.0-rc2
o imagebuildah: use "useCache" instead of "noCache"
o imagebuildah.resolveNameToImageRef(): take name as a parameter
o Export fields of the DockerIgnore struct
o imagebuildah: drop the duplicate containerIDs list
o rootless: by default use the host network namespace
o imagebuildah: split Executor and per-stage execution
o imagebuildah: move some fields around
o golint: make golint happy
o docs: 01-intro.md: add missing . in Dockerfile examples
o fix bug using .dockerignore
o Do not create empty mounts.conf file
o images: suppress a spurious blank line with no images
o from: distinguish between ADD and COPY
o fix bug to not separate each --label value with comma
o buildah-bud.md: correct a typo, note a default
o Remove mistaken code that got merged in other PR
o add sample registries.conf to docs
o escape shell variables in README example
o slirp4netns: set mtu to 65520
o images: imageReposToMap() already adds <none>:<none>
o imagebuildah.ReposToMap: move to cmd
o Build: resolve copyFrom references earlier
o Allow rootless users to use the cache directory in homedir
o bud.bats: use the per-test temp directory
o bud.bats: log output before counting length
o Simplify checks for leftover args
o Print commitID with --layers
o fix bug images use the template to print results
o rootless: honor --net host
o onsi/gomega add missing files
o vendor latest openshift/imagebuilder
o Remove noop from squash help
o Prepend a comment to files setup in container
o imagebuildah resolveSymlink: fix handling of relative links
o Errors should be printed to stderr
o Add recommends for slirp4netns and fuse-overlayfs
o Update pull and pull-always flags
o Hide from users command options that we don't want them to use.
o Update secrets fipsmode patch to work on rootless containers
o fix unshare option handling and documentation
o Vendor in latest containers/storage
o Hard-code docker.Transport use in pull --all-tags
o Use a types.ImageReference instead of (transport, name) strings in pullImage etc.
o Move the computation of srcRef before first pullAndFindImage
o Don't throw away user-specified tag for pull --all-tags
o CHANGES BEHAVIOR: Remove the string format input to localImageNameForReference
o Don't try to parse imageName as transport:image in pullImage
o Use reference.WithTag instead of manual string manipulation in Pull
o Don't pass image = transport:repo:tag, transport=transport to pullImage
o Fix confusing variable naming in Pull
o Don't try to parse image name as a transport:image
o Fix error reporting when parsing trans+image
o Remove 'transport == ""' handling from the pull path
o Clean up "pulls" of local image IDs / ID prefixes
o Simplify ExpandNames
o Document the semantics of transport+name returned by ResolveName
o Update gitvalidation epoch
o Bump back to 1.8-dev
* vendor containers/image v1.5
* Move secrets code from libpod into buildah
* Update CHANGELOG.md with the past changes
* README.md: fix typo
* Fix a few issues found by tests/validate/gometalinter.sh
* Neutralize buildah/unshare on non-Linux platforms
* Explicitly specify a directory to find(1)
* README.md: rephrase Buildah description
* Stop printing default twice in cli --help
* install.md: add section about vendoring
* Bump to 1.8-dev
vendor containers/image v1.4 Make "images --all" faster Remove a misleading comment Remove quiet option from pull options Make sure buildah pull --all-tags only works with docker transport Support oci layout format Fix pulling of images within buildah Fix tls-verify polarity Travis: execute make vendor and hack/tree_status.sh vendor.conf: remove unused dependencies add missing vendor/github.com/containers/libpod/vendor.conf vendor.conf: remove github.com/inconshreveable/mousetrap make vendor: always fetch the latest vndr add hack/tree_status.sh script Bump c/Storage to 1.10 Add --all-tags test to pull mount: make error clearer Remove global flags from cli help Set --disable-compression to true as documented Help document using buildah mount in rootless mode healthcheck start-period: update documentation Vendor in latest c/storage and c/image dumpbolt: handle nested buckets Fix buildah commit compress by default Test on xenial, not trusty unshare: reexec using a memfd copy instead of the binary Add --target to bud command Fix example for setting multiple environment variables main: fix rootless mode buildah: force umask 022 pull.bats: specify registry config when using registries pull.bats: use the temporary directory, not /tmp unshare: do not set rootless mode if euid=0 Touch up cli help examples and a few nits Add an undocumented dumpbolt command Move tar commands into containers/storage Fix bud issue with 2 line Dockerfile Add package install descriptions Note configuration file requirements Replace urfave/cli with cobra cleanup vendor.conf Vendor in latest containers/storage Add Quiet to PullOptions and PushOptions cmd/commit: add flag omit-timestamp to allow for deterministic builds Add options for empty-layer history entries Make CLI help descriptions and usage a bit more consistent vndr opencontainers/selinux Bump baseline test Fedora to 29 Bump to v1.7-dev-1 Bump to v1.6-1 Add support for ADD --chown imagebuildah: make EnsureContainerPath() check/create the right one Bump 1.7-dev Fix contrib/rpm/bulidah.spec changelog date
Add support for ADD --chown imagebuildah: make EnsureContainerPath() check/create the right one Fix contrib/rpm/bulidah.spec changelog date Vendor in latest containers/storage Revendor everything Revendor in latest code by release unshare: do not set USER=root run: ignore EIO when flushing at the end, avoid double log build-using-dockerfile,commit: disable compression by default Update some comments Make rootless work under no_pivot_root Add CreatedAtRaw date field for use with Format Properly format images JSON output pull: add all-tags option Fix support for multiple Short options pkg/blobcache: add synchronization Skip empty files in file check of conformance test Use NoPivot also for RUN, not only for run Remove no longer used isReferenceInsecure / isRegistryInsecure Do not set OCIInsecureSkipTLSVerify based on registries.conf Remove duplicate entries from images JSON output vendor parallel-copy from containers/image blobcache.bats: adjust explicit push tests Handle one line Dockerfile with layers We should only warn if user actually requests Hostname be set in image Fix compiler Warning about comparing different size types imagebuildah: don't walk if rootdir and path are equal Add aliases for buildah containers, so buildah list, ls and ps work vendor: use faster version instead compress/gzip vendor: update libpod Properly handle Hostname inside of RUN command docs: mention how to mount in rootless mode tests: use fully qualified name for centos image travis.yml: use the fully qualified name for alpine mount: allow mount only when using vfs Add some tests for buildah pull Touch up images -q processing Refactor: Use library shared idtools.ParseIDMap() instead of bundling it bump GITVALIDATE_EPOCH cli.BudFlags: add `--platform` nop Makefile: allow packagers to more easily add tags Makefile: soften the requirement on git tests: add containers json test Inline blobCache.putBlob into blobCacheDestination.PutBlob Move saveStream and putBlob near blobCacheDestination.PutBlob Remove BlobCache.PutBlob Update for API changes Vendor c/image after merging c/image#536 Handle 'COPY --from' in Dockerfile Vendor in latest content from github.com/containers/storage Clarify docker.io default in push with docker-daemon Test blob caching Wire in a hidden --blob-cache option Use a blob cache when we're asked to use one Add --disable-compression to 'build-using-dockerfile' Add a blob cache implementation vendor: update containers/storage Update for sysregistriesv2 API changes Update containers/image to 63a1cbdc5e6537056695cf0d627c0a33b334df53 clean up makefile variables Fix file permission Complete the instructions for the command Show warning when a build arg not used Assume user 0 group 0, if /etc/passwd file in container. Add buildah info command Enable -q when --filter is used for images command Add v1.5 Release Announcement Fix dangling filter for images command Fix completions to print Names as well as IDs tests: Fix file permissions Bump 1.6-dev
Bump min go to 1.10 in install.md vendor: update ostree-go Update docker build command line in conformance test Print command in SystemExec as debug information Add some skip word for inspect check in conformance test Update regex for multi stage base test Sort CLI flags vendor: update containers/storage Add note to install about non-root on RHEL/CentOS Update imagebuild depdency to support heading ARGs in Dockerfile rootless: do not specify --rootless to the OCI runtime Export resolvesymlink function Exclude --force-rm from common bud cli flags run: bind mount /etc/hosts and /etc/resolv.conf if not in a volume rootless: use slirp4netns to setup the network namespace Instructions for completing the pull command Fix travis to not run environment variable patch rootless: only discard network configuration names run: only set up /etc/hosts or /etc/resolv.conf with network common: getFormat: match entire string not only the prefix vendor: update libpod Change validation EPOCH Fixing broken link for container-registries.conf Restore rootless isolation test for from volume ro test ostree: fix tag for build constraint Handle directories better in bud -f vndr in latest containers/storage Fix unshare gofmt issue runSetupBuiltinVolumes(): break up volume setup common: support a per-user registries conf file unshare: do not override the configuration common: honor the rootless configuration file unshare: create a new mount namespace unshare: support libpod rootless pkg Use libpod GetDefaultStorage to report proper storage config Allow container storage to manage the SELinux labels Resolve image names with default transport in from command run: When the value of isolation is set, use the set value instead of the default value. Vendor in latest containers/storage and opencontainers/selinux Remove no longer valid todo Check for empty buildTime in version Change gofmt so it runs on all but 1.10 Run gofmt only on Go 1.11 Walk symlinks when checking cached images for copied/added files ReserveSELinuxLabels(): handle wrapped errors from OpenBuilder Set WorkingDir to empty, not / for conformance Update calls in e2e to addres 1101 imagebuilder.BuildDockerfiles: return the image ID Update for changes in the containers/image API bump(github.com/containers/image) Allow setting --no-pivot default with an env var Add man page and bash completion, for --no-pivot Add the --no-pivot flag to the run command Improve reporting about individual pull failures Move the "short name but no search registries" error handling to resolveImage Return a "search registries were needed but empty" indication in util.ResolveName Simplify handling of the "tried to pull an image but found nothing" case in newBuilder Don't even invoke the pull loop if options.FromImage == "" Eliminate the long-running ref and img variables in resolveImage In resolveImage, return immediately on success Fix From As in Dockerfile Vendor latest containers/image Vendor in latest libpod Sort CLI flags of buildah bud Change from testing with golang 1.9 to 1.11. 
unshare: detect when unprivileged userns are disabled Optimize redundant code fix missing format param chroot: fix the args check imagebuildah: make ResolveSymLink public Update copy chown test buildah: use the same logic for XDG_RUNTIME_DIR as podman V1.4 Release Announcement Podman --privileged selinux is broken papr: mount source at gopath parse: Modify the return value parse: modify the verification of the isolation value Make sure we log or return every error pullImage(): when completing an image name, try docker:// Fix up Tutorial 3 to account for format Vendor in latest containers/storage and containers/image docs/tutorials/01-intro.md: enhanced installation instructions Enforce "blocked" for registries for the "docker" transport Correctly set DockerInsecureSkipTLSVerify when pulling images chroot: set up seccomp and capabilities after supplemental groups chroot: fix capabilities list setup and application .papr.yml: log the podman version namespaces.bats: fix handling of uidmap/gidmap options in pairs chroot: only create user namespaces when we know we need them Check /proc/sys/user/max_user_namespaces on unshare(NEWUSERNS) bash/buildah: add isolation option to the from command
from: fix isolation option Touchup pull manpage Export buildah ReserveSELinuxLables so podman can use it Add buildah.io to README.md and doc fixes Update rmi man for prune changes Ignore file not found removal error in bud bump(github.com/containers/{storage,image}) NewImageSource(): only create one Diff() at a time Copy ExposedPorts from base image into the config tests: run conformance test suite in Travis Change rmi --prune to not accept an imageID Clear intermediate container IDs after each stage Request podman version for build issues unshare: keep the additional groups of the user Builtin volumes should be owned by the UID/GID of the container Get rid of dangling whitespace in markdown files Move buildah from projecatatomic/buildah to containers/buildah nitpick: parse.validateFlags loop in bud cli bash: Completion options Add signature policy to push tests vendor in latest containers/image Fix grammar in Container Tools Guide Don't build btrfs if it is not installed new: Return image-pulling errors from resolveImage pull: Return image-pulling errors from pullImage Add more volume mount tests chroot: create missing parent directories for volume mounts Push: Allow an empty destination Add Podman relationship to readme, create container tools guide Fix arg usage in buildah-tag Add flags/arguments order verification to other commands Handle ErrDuplicateName errors from store.CreateContainer() Evaluate symbolic links on Add/Copy Commands Vendor in latest containers/image and containers/storage Retain bounding set when running containers as non root run container-diff tests in Travis buildah-images.md: Fix option contents push: show image digest after push succeed Vendor in latest containers/storage,image,libpod and runc Change references to cri-o to point at new repository Exclude --layers from the common bug cli flags demos: Increase the executable permissions run: clear default seccomp filter if not enabled Bump maximum cyclomatic complexity to 45 stdin: on HUP, read everything nitpick: use tabs in tests/helpers.bash Add flags/arguments order verification to one arg commands nitpick: decrease cognitive complexity in buildah-bud rename: Avoid renaming the same name as other containers chroot isolation: chroot() before setting up seccomp Small nitpick at the "if" condition in tag.go cmd/images: Modify json option cmd/images: Disallow the input of image when using the -a option Fix examples to include context directory Update containers/image to fix commit layer issue cmd/containers: End loop early when using the json option Make buildah-from error message clear when flags are after arg Touch up README.md for conformance tests Update container/storage for lock fix cmd/rm: restore the correct containerID display Remove debug lines Remove docker build image after each test Add README for conformance test Update the MakeOptions to accept all command options for buildah Update regrex to fit the docker output in test "run with JSON" cmd/buildah: Remove redundant variable declarations Warn about using Commands in Dockerfile that are not supported by OCI. Add buildah bud conformance test Fix rename to also change container name in builder Makefile: use $(GO) env-var everywhere Cleanup code to more closely match Docker Build images Document BUILDAH_* environment variables in buildah bud --help output Return error immediately if error occurs in Prepare step Fix --layers ADD from url issue Add "Sign your PRs" TOC item to contributing.md. 
* Display the correct ID after deleting image
* rmi: Modify the handling of errors
* Let util.ResolveName() return parsing errors
* Explain Open Container Initiative (OCI) acronym, add link
* Update vendor for urfave/cli back to master
* Handle COPY --chown in Dockerfile
* Switch to Recommends container-selinux
* Update vendor for containernetworking, imagebuildah and podman
* Document STORAGE_DRIVER and STORAGE_OPTS environment variable
* Change references to projectatomic/libpod to containers/libpod
* Add container PATH retrieval example
* Expand variables names for --env
* imagebuildah: provide a way to provide stdin for RUN
* Remove an unused srcRef.NewImageSource in pullImage
* chroot: correct a comment
* chroot: bind mount an empty directory for masking
* Don't bother with --no-pivot for rootless isolation
* CentOS need EPEL repo
* Export a Pull() function
* Remove stream options, since docker build does not have it
* release v1.3: mention openSUSE
* Add Release Announcements directory
* Bump to v1.4-dev
Revert pull error handling from 881 bud should not search context directory for Dockerfile Set BUILDAH_ISOLATION=rootless when running unprivileged .papr.sh: Also test with BUILDAH_ISOLATION=rootless Skip certain tests when we're using "rootless" isolation .travis.yml: run integration tests with BUILDAH_ISOLATION=chroot Add and implement IsolationOCIRootless Add a value for IsolationOCIRootless Fix rmi to remove intermediate images associated with an image Return policy error on pull Update containers/image to 216acb1bcd2c1abef736ee322e17147ee2b7d76c Switch to github.com/containers/image/pkg/sysregistriesv2 unshare: make adjusting the OOM score optional Add flags validation chroot: handle raising process limits chroot: make the resource limits name map module-global Remove rpm.bats, we need to run this manually Set the default ulimits to match Docker buildah: no args is out of bounds unshare: error message missed the pid preprocess ".in" suffixed Dockerfiles Fix the the in buildah-config man page Only test rpmbuild on latest fedora Add support for multiple Short options Update to latest urvave/cli Add additional SELinux tests Vendor in latest github.com/containers/{image;storage} Stop testing with golang 1.8 Fix volume cache issue with buildah bud --layers Create buildah pull command Increase the deadline for gometalinter during 'make validate' .papr.sh: Also test with BUILDAH_ISOLATION=chroot .travis.yml: run integration tests with BUILDAH_ISOLATION=chroot Add a Dockerfile Set BUILDAH_ISOLATION=chroot when running unprivileged Add and implement IsolationChroot Update github.com/opencontainers/runc maybeReexecUsingUserNamespace: add a default for root Allow ping command without NET_RAW Capabilities rmi.storageImageID: fix Wrapf format warning Allow Dockerfile content to come from stdin Vendor latest container/storage to fix overlay mountopt userns: assign additional IDs sequentially Remove default dev/pts Add OnBuild test to baseline test tests/run.bats(volumes): use :z when SELinux is enabled Avoid a stall in runCollectOutput() Use manifest from container/image Vendor in latest containers/image and containers/storage add rename command Completion command Update CHANGELOG.md Update vendor for runc to fix 32 bit builds bash completion: remove shebang Update vendor for runc to fix 32 bit builds
Vendor in lates containers/image build-using-dockerfile: let -t include transports again Block use of /proc/acpi and /proc/keys from inside containers Fix handling of --registries-conf Fix becoming a maintainer link add optional CI test fo darwin Don't pass a nil error to errors.Wrapf() image filter test: use kubernetes/pause as a "since" Add --cidfile option to from vendor: update containers/storage Contributors need to find the CONTRIBUTOR.md file easier Add a --loglevel option to build-with-dockerfile Create Development plan cmd: Code improvement allow buildah cross compile for a darwin target Add unused function param lint check docs: Follow man-pages(7) suggestions for SYNOPSIS Start using github.com/seccomp/containers-golang umount: add all option to umount all mounted containers runConfigureNetwork(): remove an unused parameter Update github.com/opencontainers/selinux Fix buildah bud --layers Force ownership of /etc/hosts and /etc/resolv.conf to 0:0 main: if unprivileged, reexec in a user namespace Vendor in latest imagebuilder Reduce the complexity of the buildah.Run function mount: output it before replacing lastError Vendor in latest selinux-go code Implement basic recognition of the "--isolation" option Run(): try to resolve non-absolute paths using $PATH Run(): don't include any default environment variables build without seccomp vendor in latest runtime-tools bind/mount_unsupported.go: remove import errors Update github.com/opencontainers/runc Add Capabilities lists to BuilderInfo Tweaks for commit tests commit: recognize committing to second storage locations Fix ARGS parsing for run commands Add info on registries.conf to from manpage Switch from using docker to podman for testing in .papr buildah: set the HTTP User-Agent ONBUILD tutorial Add information about the configuration files to the install docs Makefile: add uninstall Add tilde info for push to troubleshooting mount: support multiple inputs Use the right formatting when adding entries to /etc/hosts Vendor in latest go-selinux bindings Allow --userns-uid-map/--userns-gid-map to be global options bind: factor out UnmountMountpoints Run(): simplify runCopyStdio() Run(): handle POLLNVAL results Run(): tweak terminal mode handling Run(): rename 'copyStdio' to 'copyPipes' Run(): don't set a Pdeathsig for the runtime Run(): add options for adding and removing capabilities Run(): don't use a callback when a slice will do setupSeccomp(): refactor Change RunOptions.Stdin/Stdout/Stderr to just be Reader/Writers Escape use of '_' in .md docs Break out getProcIDMappings() Break out SetupIntermediateMountNamespace() Add Multi From Demo Use the c/image conversion code instead of converting configs manually Don't throw away the manifest MIME type and guess again Consolidate loading manifest and config in initConfig Pass a types.Image to Builder.initConfig Require an image ID in importBuilderDataFromImage Use c/image/manifest.GuessMIMEType instead of a custom heuristic Do not ignore any parsing errors in initConfig Explicitly handle "from scratch" images in Builder.initConfig Fix parsing of OCI images Simplify dead but dangerous-looking error handling Don't ignore v2s1 history if docker_version is not set Add --rm and --force-rm to buildah bud Add --all,-a flag to buildah images Separate stdio buffering from writing Remove tty check from images --format Add environment variable BUILDAH_RUNTIME Add --layers and --no-cache to buildah bud Touch up images man version.md: fix DESCRIPTION tests: add containers test tests: add images test 
* images: fix usage
* fix make clean error
* Change 'registries' to 'container registries' in man
* add commit test
* Add(): learn to record hashes of what we add
* Minor update to buildah config documentation for entrypoint
* Bump to v1.2-dev
* Add registries.conf link to a few man pages
Drop capabilities if running container processes as non root Print Warning message if cmd will not be used based on entrypoint Update 01-intro.md Shouldn't add insecure registries to list of search registries Report errors on bad transports specification when pushing images Move parsing code out of common for namespaces and into pkg/parse.go Add disable-content-trust noop flag to bud Change freenode chan to buildah runCopyStdio(): don't close stdin unless we saw POLLHUP Add registry errors for pull runCollectOutput(): just read until the pipes are closed on us Run(): provide redirection for stdio rmi, rm: add test add mount test Add parameter judgment for commands that do not require parameters Add context dir to bud command in baseline test run.bats: check that we can run with symlinks in the bundle path Give better messages to users when image can not be found use absolute path for bundlePath Add environment variable to buildah --format rm: add validation to args and all option Accept json array input for config entrypoint Run(): process RunOptions.Mounts, and its flags Run(): only collect error output from stdio pipes if we created some Add OnBuild support for Dockerfiles Quick fix on demo readme run: fix validate flags buildah bud should require a context directory or URL Touchup tutorial for run changes Validate common bud and from flags images: Error if the specified imagename does not exist inspect: Increase err judgments to avoid panic add test to inspect buildah bud picks up ENV from base image Extend the amount of time travis_wait should wait Add a make target for Installing CNI plugins Add tests for namespace control flags copy.bats: check ownerships in the container Fix SELinux test errors when SELinux is enabled Add example CNI configurations Run: set supplemental group IDs Run: use a temporary mount namespace Use CNI to configure container networks add/secrets/commit: Use mappings when setting permissions on added content Add CLI options for specifying namespace and cgroup setup Always set mappings when using user namespaces Run(): break out creation of stdio pipe descriptors Read UID/GID mapping information from containers and images Additional bud CI tests Run integration tests under travis_wait in Travis build-using-dockerfile: add --annotation Implement --squash for build-using-dockerfile and commit Vendor in latest container/storage for devicemapper support add test to inspect Vendor github.com/onsi/ginkgo and github.com/onsi/gomega Test with Go 1.10, too Add console syntax highlighting to troubleshooting page bud.bats: print "$output" before checking its contents Manage "Run" containers more closely Break Builder.Run()'s "run runc" bits out util.ResolveName(): handle completion for tagged/digested image names Handle /etc/hosts and /etc/resolv.conf properly in container Documentation fixes Make it easier to parse our temporary directory as an image name Makefile: list new pkg/ subdirectoris as dependencies for buildah containerImageSource: return more-correct errors API cleanup: PullPolicy and TerminalPolicy should be types Make "run --terminal" and "run -t" aliases for "run --tty" Vendor github.com/containernetworking/cni v0.6.0 Update github.com/containers/storage Update github.com/containers/libpod Add support for buildah bud --label buildah push/from can push and pull images with no reference Vendor in latest containers/image Update gometalinter to fix install.tools error Update troubleshooting with new run workaround Added a bud demo and tidied up Attempt to 
download file from url, if fails assume Dockerfile Add buildah bud CI tests for ENV variables Re-enable rpm .spec version check and new commit test Update buildah scratch demo to support el7 Added Docker compatibility demo Update to F28 and new run format in baseline test Touchup man page short options across man pages Added demo dir and a demo. chged distrorlease builder-inspect: fix format option Add cpu-shares short flag (-c) and cpu-shares CI tests Minor fixes to formatting in rpm spec changelog Fix rpm .spec changelog formatting CI tests and minor fix for cache related noop flags buildah-from: add effective value to mount propagation
Declare Buildah 1.0 Add cache-from and no-cache noops, and fix doco Update option and documentation for --force-rm Adding noop for --force-rm to match --rm Add buildah bud ENTRYPOINT,CMD,RUN tests Adding buildah bud RUN test scenarios Extend tests for empty buildah run command Fix formatting error in run.go Update buildah run to make command required Expanding buildah run cmd/entrypoint tests Update test cases for buildah run behaviour Remove buildah run cmd and entrypoint execution Add Files section with registries.conf to pertinent man pages tests/config: perfect test tests/from: add name test Do not print directly to stdout in Commit() Touch up auth test commands Force "localhost" as a default registry Drop util.GetLocalTime() Vendor in latest containers/image Validate host and container paths passed to --volume test/from: add add-host test Add --compress, --rm, --squash flags as a noop for bud Add FIPS mode secret to buildah run and bud Add config --comment/--domainname/--history-comment/--hostname 'buildah config': stop replacing Created-By whenever it's not specified Modify man pages so they compile correctly in mandb Add description on how to do --isolation to buildah-bud man page Add support for --iidfile to bud and commit Refactor buildah bud for vendoring Fail if date or git not installed Revert update of entrypoint behaviour to match docker Vendor in latest imagebuilder code to fix multiple stage builds Add /bin/sh -c to entrypoint in config image_test: Improve the test Fix README example of buildah config buildah-image: add validation to 'format' Simple changes to allow buildah to pass make validate Clarify the use of buildah config options containers_test: Perfect testing buildah images and podman images are listing different sizes buildah-containers: add tests and example to the man page buildah-containers: add validation to 'format' Clarify the use of buildah config options Minor fix for lighttpd example in README Add tls-verification to troubleshooting Modify buildah rmi to account for changes in containers/storage Vendor in latest containers/image and containers/storage addcopy: add src validation Remove tarball as an option from buildah push --help Fix secrets patch Update entrypoint behaviour to match docker Display imageId after commit config: add support for StopSignal Fix docker login issue in travis.yml Allow referencing stages as index and names Add multi-stage builds tests Add multi-stage builds support Add accessor functions for comment and stop signal Vendor in latest imagebuilder, to get mixed case AS support Allow umount to have multi-containers Update buildah push doc buildah bud walks symlinks Imagename is required for commit atm, update manpage
Bump to v0.16.0 Remove requires for ostree-lib in rpm spec file Add support for shell buildah.spec should require ostree-libs Vendor in latest containers/image bash: prefer options Change image time to locale, add troubleshooting.md, add logo to other mds buildah-run.md: fix error SYNOPSIS docs: fix error example Allow --cmd parameter to have commands as values Touchup README to re-enable logo Clean up README.md Make default-mounts-file a hidden option Document the mounts.conf file Fix man pages to format correctly Add various transport support to buildah from Add unit tests to run.go If the user overrides the storage driver, the options should be dropped Show Config/Manifest as JSON string in inspect when format is not set Switch which for that in README.md Remove COPR Fix wrong order of parameters Vendor in latest containers/image Remove shallowCopy(), which shouldn't be saving us time any more shallowCopy: avoid a second read of the container's layer
* Update buildah spec file to match new version
* Bump to version 0.4
* Add default transport to push if not provided
* Add authentication to commit and push
* Remove --transport flag
* Run: don't complain about missing volume locations
* Add credentials to buildah from
* Remove export command
* Bump containers/storage and containers/image
* Vendor in latest containers/image and containers/storage
* Update image-spec and runtime-spec to v1.0.0
* Add support for -- ending options parsing to buildah run
* Add/Copy need to support glob syntax
* Add flag to remove containers on commit
* Add buildah export support
* update 'buildah images' and 'buildah rmi' commands
* buildah containers/image: Add JSON output option
* Add 'buildah version' command
* Handle "run" without an explicit command correctly
* Ensure volume points get created, and with perms
* Add a -a/--all option to "buildah containers"
* Vendor in latest container/storage container/image
* Add a "push" command
* Add an option to specify a Create date for images
* Allow building a source image from another image
* Improve buildah commit performance
* Add a --volume flag to "buildah run"
* Fix inspect/tag-by-truncated-image-ID
* Include image-spec and runtime-spec versions
* buildah mount command should list mounts when no arguments are given.
* Make the output image format selectable
* commit images in multiple formats
* Also import configurations from V2S1 images
* Add a "tag" command
* Add an "inspect" command
* Update reference comments for docker types origins
* Improve configuration preservation in imagebuildah
* Report pull/commit progress by default
* Contribute buildah.spec
* Remove --mount from buildah-from
* Add a build-using-dockerfile command (alias: bud)
* Create manpages for the buildah project
* Add installation for buildah and bash completions
* Rename "list"/"delete" to "containers"/"rm"
* Switch `buildah list quiet` option to only list container id's
* buildah delete should be able to delete multiple containers
* Correctly set tags on the names of pulled images
* Don't mix "config" in with "run" and "commit"
* Add a "list" command, for listing active builders
* Add "add" and "copy" commands
* Add a "run" command, using runc
* Massive refactoring
* Make a note to distinguish compression of layers
Initial version, needs work
|
https://fossies.org/linux/buildah/CHANGELOG.md
|
CC-MAIN-2022-40
|
refinedweb
| 17,095 | 50.33 |
Note: This post was originally going to go up on the 30th of March, but PicoCTF requested that writeups be held until winners were verified.
As stated in my last post, a group of friends and I participated in the 2021 PicoCTF challenge. Having just come to an end, this year’s contest was certainly the most enjoyable one for me because of the truly awesome people I got to work with, leading us to a second place finish in Canada (7th globally). One of the most interesting, though not particularly challenging, cryptography problems I came across and solved was titled “Double DES.” It began with this description and hint:
I wanted an encryption service that’s more secure than regular DES, but not as slow as 3DES… The flag is not in standard format.
Hint: How large is the keyspace?
It contained the following Python code (modified to be shorter) which was also running on a remote server:
#!/usr/bin/python3 -u
from Crypto.Cipher import DES
import binascii, itertools, random, string

def pad(msg):
    block_len = 8
    over = len(msg) % block_len
    pad = block_len - over
    return (msg + " " * pad).encode()

def generate_key():
    return pad("".join(random.choice(string.digits) for _ in range(6)))

FLAG = open("flag").read().rstrip()
KEY1 = generate_key()
KEY2 = generate_key()

def get_input():
    return binascii.unhexlify(input("What data would you like to encrypt? ").rstrip()).decode()

def double_encrypt(m):
    msg = pad(m)
    cipher1 = DES.new(KEY1, DES.MODE_ECB)
    enc_msg = cipher1.encrypt(msg)
    cipher2 = DES.new(KEY2, DES.MODE_ECB)
    return binascii.hexlify(cipher2.encrypt(enc_msg)).decode()

print("Here is the flag:")
print(double_encrypt(FLAG))

while True:
    try:
        print(double_encrypt(get_input()))
    except:
        print("Invalid input.")
A quick look at the code makes it clear that two random keys made up of 6 digits each are generated and then used to encrypt the flag with DES which is provided to the user. From there, the user can input any plaintext and get the ciphertext using the same keys. This last part is important because it means the challenge is open to “known-plaintext attacks.”
A naive examination of the encryption would suggest that one could bruteforce the correct key pair out of the 10^12 possibilities, made up from every possible first key (10^6) for every possible second key (10^6). However, while certainly possible to do in the timespan of the competition, there exists a significantly better attack called Meet-in-the-Middle (MITM). To perform such an attack, we first choose a known plaintext, say the hex string "0123456701234567", and encrypt it with every possible first key:
def single_encrypt(k, m):
    msg = pad(m)
    cipher1 = DES.new(k, DES.MODE_ECB)
    return cipher1.encrypt(msg)

# The 'singles' file is a good save point since this first
# generation step can take a bit.
with open("singles", "w") as f:
    for i in range(1000000):
        ciphertext = single_encrypt(pad(str(i).zfill(6)), binascii.unhexlify('0123456701234567').decode())
        f.write(f"{i}: {ciphertext}\n")
Then, we can connect to the remote and receive the flag ciphertext (PASS) and our known plaintext ciphertext (GIVEN):

GIVEN = b"\xe0\xdd\xb8\x90\x74\xc3\x64\x8a\x13\x0f\x14\x8b\x12\x91\x01\x5a"
Finally, we can take the GIVEN ciphertext, decrypt it with every possible second key, and check if the result matches any of our single-encrypted plaintexts. If it does, we've just found the correct first and second keys, which we can use to decrypt the encrypted flag; wrap it with picoCTF{} and submit! So there you have it, an attack that only takes 2 × 10^6 (2N) tries instead of the naive 10^12 (N^2) attack, and a reason not to use Double DES.[1] Full source code for my solution is provided below:
#!/usr/bin/python3 -u
from Crypto.Cipher import DES
import binascii

def pad(msg):
    block_len = 8
    over = len(msg) % block_len
    pad = block_len - over
    return (msg + " " * pad).encode()

def single_encrypt(k, m):
    msg = pad(m)
    cipher1 = DES.new(k, DES.MODE_ECB)
    return cipher1.encrypt(msg)

def single_decrypt(k, enc_msg):
    cipher1 = DES.new(k, DES.MODE_ECB)
    return cipher1.decrypt(enc_msg)

# Generates single encrypted versions of known plaintext.
# The 'singles' file is a good save point since this first
# generation step takes a bit.
with open("singles", "w") as f:
    for i in range(1000000):
        if i % 10000 == 0:
            print(pad(str(i).zfill(6)))
        f.write(f"{i}: {single_encrypt(pad(str(i).zfill(6)), binascii.unhexlify('0123456701234567').decode())}\n")

KEYS = {}
with open("singles", "r") as f:
    lines = f.read().splitlines()
    for line in lines:
        key, enc = line.split(": ", 1)
        exec(f"enc={enc}")
        KEYS[enc] = pad(key)

# From challenge (double-encrypted flag; the value was elided in the original post)
PASS = b"..."
# Encrypted known plaintext
PAIRS = []
GIVEN = b"\xe0\xdd\xb8\x90\x74\xc3\x64\x8a\x13\x0f\x14\x8b\x12\x91\x01\x5a"

for i in range(1000000):
    d = single_decrypt(pad(str(i).zfill(6)), GIVEN)
    if d in KEYS:
        PAIRS.append((KEYS[d], pad(str(i).zfill(6))))

# You will get many correct pairs, in this case all of them worked
for a, b in PAIRS:
    print(single_decrypt(a, single_decrypt(b, PASS)))
More writeups of other challenges coming soon.
[1] Not that you should use DES at all, nowadays.
|
https://fluix.one/blog/picoctf-2021-ddes/
|
CC-MAIN-2021-25
|
refinedweb
| 848 | 56.05 |
To create the simple Web Form that will be used in the next example, start up Visual Studio .NET and open a New Project named ProgrammingCSharpWeb. Select the Visual C# Projects folder (because C# is your language of choice), select ASP.NET Web Application as the project type, and type in its name, ProgrammingCSharpWeb. Visual Studio .NET will display the default location, as shown in Figure 15-1.
Notice that the code-behind file does not appear in Figure 15-1.
<%@"> </form> </body> </html>
What you see is typical boilerplate HTML except for the first line, which contains the following ASP.NET code:
<%@ Page language="c#" Codebehind="HelloWeb.aspx.cs" AutoEventWireup="false" Inherits="ProgrammingCSharpWeb.WebForm1" %>
The language attribute indicates that the language used on the code-behind page is C#. The Codebehind attribute designates that the filename of that page is HelloWeb.aspx.cs, and the Inherits attribute indicates that this page derives from WebForm1. WebForm1 is a class declared in HelloWeb.aspx.cs.
public class WebForm1 : System.Web.UI.Page
As the C# code makes clear, WebForm1 inherits from System.Web.UI.Page, which is the class that defines the properties, methods, and events common to all server-side pages.
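For context, a minimal sketch of what that code-behind class might contain is shown below. The class name and base class come from the line above; the using directive, namespace layout, and Page_Load stub are assumed boilerplate, not quoted from the book.

using System;

namespace ProgrammingCSharpWeb
{
    // The page class that HelloWeb.aspx inherits from (see the @ Page directive).
    public class WebForm1 : System.Web.UI.Page
    {
        private void Page_Load(object sender, EventArgs e)
        {
            // Runs on the server each time the page is requested.
        }
    }
}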
Returning to the HTML view of HelloWeb.aspx, you see that a form has been specified in the body of the page using the standard HTML form tag:
<form id="Form1" method="post" runat="server">
Web Forms assumes that you need at least one form to manage the user interaction, and creates one when you open a project. The attribute runat="server" is the key to the server-side magic. Any tag that includes this attribute is considered a server-side control to be executed by the ASP.NET framework on the server. For example, adding the following line to the body of HelloWeb.aspx will cause it to display a greeting and the current local time:
Hello World! It is now <% = DateTime.Now.ToString( ) %>
The <% and %> marks work just as they did in classic ASP, indicating that code falls between them (in this case, C#). The = sign immediately following the opening tag causes ASP.NET to display the value, just like a call to Response.Write( ). You could just as easily write the line as:
Hello World! It is now <% Response.Write(DateTime.Now.ToString( )); %>
Run the page by pressing Ctrl-F5 (or save it and navigate to it in your browser). You should see the string printed to the browser, as in Figure 15-2.
|
https://etutorials.org/Programming/Programming+C.Sharp/Part+II+Programming+with+C/Chapter+15.+Programming+Web+Forms+and+Web+Services/15.2+Creating+a+Web+Form/
|
CC-MAIN-2022-21
|
refinedweb
| 405 | 67.65 |
Best Practices for Testing Vue 3 Components (Part 2)
This article is available as a screencast!
In this article, we continue developing the TodoApp from the previous post.
Find it here if you have not read it for the context. We will start by moving the new todo form to a separate component, and seeing what a test might look like if the component was to persist a todo to a backend server via an API call.
Creating a TodoForm component
Since the form is about to get a whole lot more complex, let's create a new component for it, TodoForm.vue, and move the markup from TodoApp.vue to it:

<template>
  <form @submit.prevent="createTodo">
    <input type="text" v-model="newTodo" data-test="todo-input" />
  </form>
</template>
In TodoApp.vue, we just import and render a <TodoForm />:

<template>
  <div>
    <div v-for="todo in todos" :key="todo.id">
      {{ todo.text }}
      <input type="checkbox" v-model="todo.completed" />
    </div>
    <TodoForm />
  </div>
</template>
All the tests are now failing, since we need to move the business logic from the TodoApp to the TodoForm. Let's do that.
TodoApp.vue now looks like this:
<template>
  <div>
    <div v-for="todo in todos" :key="todo.id">
      {{ todo.text }}
      <input type="checkbox" v-model="todo.completed" />
    </div>
    <TodoForm @createTodo="createTodo" />
  </div>
</template>

<script lang="ts">
import { ref } from 'vue'
import TodoForm from './TodoForm.vue'

interface Todo {
  id: number
  text: string
  completed: boolean
}

export default {
  components: { TodoForm },

  setup() {
    const todos = ref<Todo[]>([
      { id: 1, text: 'do some work', completed: false }
    ])

    const createTodo = (todo: Todo) => {
      todos.value.push(todo)
    }

    return {
      todos,
      createTodo,
    }
  }
}
</script>
And TodoForm.vue:

<template>
  <form @submit.prevent="createTodo">
    <input type="text" v-model="newTodo" data-test="todo-input" />
  </form>
</template>

<script lang="ts">
import { ref } from 'vue'

export default {
  setup(props, ctx) {
    const newTodo = ref('')

    const createTodo = () => {
      const todo: Todo = {
        id: 2,
        text: newTodo.value,
        completed: false
      }
      ctx.emit('createTodo', todo)
    }

    return {
      createTodo,
      newTodo
    }
  }
}
</script>
The main change is createTodo in TodoForm now emits a createTodo event, which TodoApp then responds to. Now our tests are passing again.
A review of our tests
Notice we did not change the tests at any point - this is a VERY GOOD THING. Good tests operate on behavior, not implementation details. If you find yourself having to change your tests when you refactor your code, chances are, you are testing implementation details (aka, how the code works) rather than the public API and behavior (what the code actually does).
Since we did not change any features, just performed a refactor, we should not, and did not, change our tests. Great!
Adding a test for TodoForm
Let's add a basic test for TodoForm. It will be similar to the TodoApp test, where we fill out the form and submit it.
import { mount } from '../src'
import TodoForm from './TodoForm.vue'

const mockTodo = {
  id: 2,
  text: 'Test todo',
  completed: false
}

test('creates a todo', async (done) => {
  const wrapper = mount(TodoForm)
  wrapper.find<HTMLInputElement>('[data-test="todo-input"]').element.value = 'My new todo'

  await wrapper.trigger('submit')

  expect(wrapper.emitted().createTodo).toHaveLength(1)
  expect(wrapper.emitted().createTodo[0]).toEqual([ mockTodo ])
})
emitted is an object that maps all the events a component has emitted during its lifetime. Each key is an array, where each entry represents a single emitted event and the parameters it emitted. We assert that one createTodo event was emitted, with a mockTodo parameter.
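To make that shape concrete, here is roughly what the emitted map contains once the assertions above pass (illustrative values based on the test, not output copied from a real run):

// Rough shape of wrapper.emitted() after the submit in the test above:
// {
//   createTodo: [
//     [ { id: 2, text: 'Test todo', completed: false } ] // one event, with one argument
//   ]
// }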
Adding an API to TodoForm
Instead of just emitting a new todo, let's make an API call, persist it to a database, then emit the new todo. We can use axios to make the API call. The updated <script> tag in TodoForm.vue looks like this:
<script lang="ts"> import { ref } from 'vue' import axios from 'axios' import { Todo } from './App.vue' export default { setup(props, ctx) { const newTodo = ref('') const createTodo = async () => { const todo: Todo = { id: 2, text: newTodo.value, completed: false } const response = await axios.post<Todo>('/api/todos', { todo }) ctx.emit('createTodo', response.data) } return { createTodo, newTodo } } } </script>
Now the test will fail - we are making an API call in a test, which is not ideal. The API call is of course failing with a network error, so the event is not emitted. We can use jest.mock to fake out axios, and return our mockTodo:
jest.mock('axios', () => {
  return {
    default: {
      post: (url: string) => {
        return {
          data: mockTodo
        }
      }
    }
  }
})
And great! Our test now passes.
Reflecting on the TodoForm spec
At this point, some developers may feel proud of their TodoForm component - fully tested, all green. Our TodoApp tests, on the other hand, are all failing - since we are now using axios, we need to mock it out there as well. So… we go ahead and move the jest.mock to the other test. We are now repeating ourselves - we have the same mock to test the same behavior in multiple tests! We got so excited about using emitted and jest.mock that we did not stop and ask ourselves, does this make sense?
In my opinion, no. The TodoForm component is as much an implementation detail as how we emit events, or how we add todos to an array. Of course, if it does get sufficiently complex, we may need to have lots of fine-grained tests around it. Either way, we will still need to mock axios out in the TodoApp test, to make sure we are not triggering an API call in our test. Ideally, I would not bother with a dedicated test file for TodoForm until it gets too complex to test as part of the TodoApp component - which it is not, at least at the moment.
Another approach we could consider is, instead of jest.mock('axios') in the TodoApp test to prevent the API call, we could also mock out TodoForm; something like this would work:
jest.mock('./TodoForm.vue', () => ({
  render() {
    return h('div')
  }
}))
I don't find this as ideal though; I like to be able to test components and their interactions, which means a full mount and as little stubbing as possible.
Conclusion
We refactored the TodoApp component's form and wrote some unit tests for it. Although they may not be strictly necessary now, we saw how to use emitted and jest.mock to test TodoForm.vue. We also talked about the merits of mocking and stubbing, and how each has trade-offs. As always, the best decision will depend on your app and the problem you are solving. I'll be diving more into these topics, and look forward to discussing mocking vs stubbing, shallowMount vs mount and other opinions in a future article.
|
https://vuejs-course.com/blog/best-practices-for-testing-vue-3-components-part-2
|
CC-MAIN-2021-04
|
refinedweb
| 1,070 | 64.71 |
import "github.com/golang/go/src/cmd/internal/src"
const (
	// It is expected that the front end or a phase in SSA will usually generate positions tagged with
	// PosDefaultStmt, but note statement boundaries with PosIsStmt. Simple statements will have a single
	// boundary; for loops with initialization may have one for their entry and one for their back edge
	// (this depends on exactly how the loop is compiled; the intent is to provide a good experience to a
	// user debugging a program; the goal is that a breakpoint set on the loop line fires both on entry
	// and on iteration). Proper treatment of non-gofmt input with multiple simple statements on a single
	// line is TBD.
	//
	// Optimizing compilation will move instructions around, and some of these will become known-bad as
	// step targets for debugging purposes (examples: register spills and reloads; code generated into
	// the entry block; invariant code hoisted out of loops) but those instructions will still have interesting
	// positions for profiling purposes. To reflect this these positions will be changed to PosNotStmt.
	//
	// When the optimizer removes an instruction marked PosIsStmt; it should attempt to find a nearby
	// instruction with the same line marked PosDefaultStmt to be the new statement boundary. I.e., the
	// optimizer should make a best-effort to conserve statement boundary positions, and might be enhanced
	// to note when a statement boundary is not conserved.
	//
	// Code cloning, e.g. loop unrolling or loop unswitching, is an exception to the conservation rule
	// because a user running a debugger would expect to see breakpoints active in the copies of the code.
	//
	// In non-optimizing compilation there is still a role for PosNotStmt because of code generation
	// into the entry block. PosIsStmt statement positions should be conserved.
	//
	// When code generation occurs any remaining default-marked positions are replaced with not-statement
	// positions.
	PosDefaultStmt uint = iota // Default; position is not a statement boundary, but might be if optimization removes the designated statement boundary
	PosIsStmt                  // Position is a statement boundary; if optimization removes the corresponding instruction, it should attempt to find a new instruction to be the boundary.
	PosNotStmt                 // Position should not be a statement boundary, but line should be preserved for profiling and low-level debugging purposes.
)
A Pos encodes a source position consisting of a (line, column) number pair and a position base. A zero Pos is a ready to use "unknown" position (nil position base and zero line number).
The (line, column) values refer to a position in a file independent of any position base ("absolute" file position).
The position base is used to determine the "relative" position, that is the filename and line number relative to the position base. If the base refers to the current file, there is no difference between absolute and relative positions. If it refers to a //line directive, a relative position is relative to that directive. A position base in turn contains the position at which it was introduced in the current file.
NoPos is a valid unknown position.
MakePos creates a new Pos value with the given base, and (file-absolute) line and column.
AbsFilename() returns the absolute filename recorded with the position's base.
After reports whether the position p comes after q in the source. For positions in different files, ordering is by filename.
Base returns the position base.
Before reports whether the position p comes before q in the source. For positions in different files, ordering is by filename.
Filename returns the name of the actual file containing this position.
Format formats a position as "filename:line" or "filename:line:column", controlled by the showCol flag and if the column is known (!= 0). For positions relative to line directives, the original position is shown as well, as in "filename:line[origfile:origline:origcolumn] if showOrig is set.
IsKnown reports whether the position p is known. A position is known if it either has a non-nil position base, or a non-zero line number.
RelCol returns the column number relative to the position's base.
RelFilename returns the filename recorded with the position's base.
RelLine returns the line number relative to the position's base.
SetBase sets the position base.
SymFilename() returns the absolute filename recorded with the position's base, prefixed by FileSymPrefix to make it appropriate for use as a linker symbol.
A PosBase encodes a filename and base position. Typically, each file and line directive introduce a PosBase.
NewFileBase returns a new *PosBase for a file with the given (relative and absolute) filenames.
NewInliningBase returns a copy of the old PosBase with the given inlining index. If old == nil, the resulting PosBase has no filename.
NewLinePragmaBase returns a new *PosBase for a line directive of the form
//line filename:line:col
/*line filename:line:col*/
at position pos.
AbsFilename returns the absolute filename recorded with the base. If b == nil, the result is the empty string.
Col returns the column number recorded with the base. If b == nil, the result is 0.
Filename returns the filename recorded with the base. If b == nil, the result is the empty string.
InliningIndex returns the index into the global inlining tree recorded with the base. If b == nil or the base has not been inlined, the result is < 0.
Line returns the line number recorded with the base. If b == nil, the result is 0.
Pos returns the position at which base is located. If b == nil, the result is the zero position.
SymFilename returns the absolute filename recorded with the base, prefixed by FileSymPrefix to make it appropriate for use as a linker symbol. If b is nil, SymFilename returns FileSymPrefix + "??".
A PosTable tracks Pos -> XPos conversions and vice versa. Its zero value is a ready-to-use PosTable.
Pos returns the corresponding Pos for the given p. If p cannot be translated via t, the function panics.
XPos returns the corresponding XPos for the given pos, adding pos to t if necessary.
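Since cmd/internal packages can only be imported from inside the Go source tree itself, the following is just an illustrative sketch of how the pieces documented here fit together; the call sites mirror the signatures implied by these docs.

package main

import (
	"fmt"

	"cmd/internal/src" // only importable from within the Go tree
)

func main() {
	// A base for a file; here the relative and absolute names differ.
	base := src.NewFileBase("hello.go", "/tmp/hello.go")

	// An absolute position at line 10, column 5 of that file.
	pos := src.MakePos(base, 10, 5)
	fmt.Println(pos.Format(true, false)) // e.g. "hello.go:10:5"

	// A PosTable compresses Pos values into compact XPos handles.
	var tab src.PosTable
	xpos := tab.XPos(pos)
	fmt.Println(tab.Pos(xpos).RelLine()) // 10
}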
XPos is a more compact representation of Pos.
NoXPos is a valid unknown position.
After reports whether the position p comes after q in the source. For positions with different bases, ordering is by base index.
AtColumn1 returns the same location but shifted to column 1.
Before reports whether the position p comes before q in the source. For positions with different bases, ordering is by base index.
FileIndex returns a smallish non-negative integer corresponding to the file for this source position. Smallish is relative; it can be thousands large, but not millions.
IsKnown reports whether the position p is known. XPos.IsKnown() matches Pos.IsKnown() for corresponding positions.
LineNumber returns a string for the line number, "?" if it is not known.
SameFile reports whether p and q are positions in the same file.
WithBogusLine returns a bogus line that won't match any recorded for the source code. Its use is to disrupt the statements within an infinite loop so that the debugger will not itself loop infinitely waiting for the line number to change. gdb chooses not to display the bogus line; delve shows it with a complaint, but the alternative behavior is to hang.
WithDefaultStmt returns the same location with undetermined is_stmt
WithIsStmt returns the same location to be marked with DWARF is_stmt=1
WithNotStmt returns the same location to be marked with DWARF is_stmt=0
WithXlogue returns the same location but marked with DWARF function prologue/epilogue
Package src imports 2 packages. Updated 2019-06-13.
|
https://godoc.org/github.com/golang/go/src/cmd/internal/src
|
CC-MAIN-2019-35
|
refinedweb
| 1,244 | 56.96 |
When editing long/wide YAML files, you can use this to see the YAML namespace that you are currently in. It only works when the grammar is YAML and refreshes every 100ms.
Activate with CTRL+ALT+Y
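For example (an illustrative YAML file, not one shipped with the package), with the cursor on the image: line below, the displayed path would be something like services.web.image:

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"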
Disclaimer: I had no idea what I was doing when I was writing this, it's the first time I ever saw CoffeeScript/JavaScript. So any bugs/features might get ignored. Shameless rip of the status-bar-clock package.
TODO: subscribe to event, such as grammar change or cursor move.
|
https://api.atom.io/packages/yaml-path
|
CC-MAIN-2021-39
|
refinedweb
| 106 | 74.29 |
I don't think that's a good idea, since as soon as '*pair' is deleted, IMO ~string() will be called again for 'pair->second'. This might be OK for strings, but for other objects maybe not.
Why don't you simply empty the string?
ZOPPO
#include <iostream>
#include <utility>
#include <new>     // for placement new
#include <cstdlib> // for std::malloc/std::free

class Test
{
public:
    Test() { std::cout << "Test::Test()" << std::endl; }
    Test(const Test &test) { std::cout << "Test::Test(test)" << std::endl; }
    ~Test() { std::cout << "Test::~Test()" << std::endl; }
};

int main(void)
{
    std::cout << "Allocating memory ..." << std::endl;
    void* buf = std::malloc(sizeof(std::pair<int, Test>));

    std::cout << "Placement new ..." << std::endl;
    std::pair<int, Test>* pair = new (buf) std::pair<int, Test>(555, Test());

    std::cout << "Cleaning up ..." << std::endl;
    pair->~pair();  // explicit destructor call to match the placement new
    std::free(buf);

    std::cout << "All done !" << std::endl;
    return 0;
}
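To illustrate the alternative suggested above (emptying one member rather than destroying the whole pair), here is a small sketch; it is not from the original thread:

#include <string>
#include <utility>

int main()
{
    std::pair<int, std::string> p(42, "hello");

    // The safe option: just empty the member you no longer need.
    p.second.clear();

    // Explicitly calling p.second.~basic_string() here would be dangerous:
    // the pair's own destructor would later run ~string() a second time,
    // which is exactly the double-destruction problem discussed above.

    return 0;
}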
|
https://www.experts-exchange.com/questions/23832642/Deleting-one-member-of-std-pair.html
|
CC-MAIN-2017-04
|
refinedweb
| 171 | 64.91 |
It is possible to use the LLVM IR generator API to programmatically build the IR for sum.ll (created at the -O0 optimization level, that is, without optimizations). In this section, you will see how to do it step by step. First, take a look at which header files are needed:
#include <llvm/ADT/SmallVector.h>: This is used to make the SmallVector<> template available, a data structure to aid us in building efficient vectors when the number of elements is not large. Check the LLVM documentation for help on LLVM data structures.
#include <llvm/Analysis/Verifier.h>: The verifier pass is an important analysis that checks whether your LLVM module is well formed with respect to the IR rules.
#include ...
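To give a feel for where this section is heading, below is a compact sketch of building a sum function with the IR builder API. This is illustrative code, not the book's listing; the book targets an older LLVM 3.x API, so header paths and details may differ.

#include <llvm/IR/LLVMContext.h>
#include <llvm/IR/Module.h>
#include <llvm/IR/Function.h>
#include <llvm/IR/DerivedTypes.h>
#include <llvm/IR/IRBuilder.h>
#include <llvm/Support/raw_ostream.h>

using namespace llvm;

int main() {
  LLVMContext ctx;
  Module mod("sum", ctx);

  // Build the prototype: int sum(int a, int b)
  Type *i32 = Type::getInt32Ty(ctx);
  FunctionType *fty = FunctionType::get(i32, {i32, i32}, false);
  Function *sum =
      Function::Create(fty, Function::ExternalLinkage, "sum", &mod);

  // A single basic block that returns a + b.
  BasicBlock *entry = BasicBlock::Create(ctx, "entry", sum);
  IRBuilder<> builder(entry);
  auto args = sum->arg_begin();
  Value *a = &*args++;
  Value *b = &*args;
  builder.CreateRet(builder.CreateAdd(a, b, "sum.tmp"));

  mod.print(outs(), nullptr); // dump the textual IR, similar to sum.ll
  return 0;
}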
|
https://www.safaribooksonline.com/library/view/getting-started-with/9781782166924/ch05s04.html
|
CC-MAIN-2018-17
|
refinedweb
| 122 | 56.96 |
22 February 2010 14:37 [Source: ICIS news]
Correction: In the ICIS news story headlined "Total declares force majeure on propylene from French refineries" dated 22 February 2010, please read in the 10th paragraph … Workers at INEOS' Lavera refinery in southern France have not voted to strike, according a statement by Richard Longden, group communications manager for INEOS - contrary to an earlier Reuters report citing the CGT Union.… instead of … Workers at INEOS' Lavera refinery in southern France have voted to strike from Wednesday, according to a Reuters report citing the CGT Union.… A corrected story follows.
LONDON (ICIS news)--Total has declared forces majeures on propylene at some of its French refineries following a continuing strike by workers and sub-contractors, a market source said on Monday.
"Things are getting worse," said a major consumer. "There is no indication when things will return to normal," it added.
A Total spokesperson refused to comment on the forces majeures and no other details have been given.
The continuing strike by workers at Total's refineries in France ...
Employees last week voted for an unlimited strike at all six refineries in France, prompting some market sources to suggest that force majeure could be called on propylene in the near future due to a lack of supply.
Around 7,000 direct employees, suppliers and subcontractors of the refineries went on strike for 48 hours last Wednesday to demand a restart of the petrochemical giant's ...
Talks between Total and French union CGT collapsed on Sunday.
The CGT has called for the strike to spread to the two French oil refineries owned by ExxonMobil.
ExxonMobil would not comment on the possible strike and said at present all of its refineries were running as normal.
Workers at INEOS' Lavera refinery in southern France have not voted to strike, according a statement by Richard Longden, group communications manager for INEOS - contrary to an earlier Reuters report citing the CGT Union.
In a radio interview, the French industry minister Christian Estrosi pledged to prevent any shortages of petrol and diesel supplies.
Total has proposed a roundtable discussion on 4 March on the future of its refining operations in France.
Franco Capaldo contributed to this article.
|
http://www.icis.com/Articles/2010/02/22/9336765/corrected-total-declares-force-majeure-on-propylene-from-french-refineries.html
|
CC-MAIN-2014-42
|
refinedweb
| 366 | 53.95 |
html:select in struts - Struts
html select in struts What is the HTML select in struts default value
ls: cannot access >: No such file or directory
ls: cannot access >: No such file or directory import java.io.BufferedReader;
import java.io.InputStreamReader;
public class Example...: cannot access >: No such file or directory error is displayed .No f.t
Proplem with select data - Struts
Proplem with select data Hi , Please can u give me a example for display all data from the database (Access or MySql) using Struts
struts
struts <p>hi here is my code can you please help me to solve...=st.executeQuery("select type from userdetails1 where username=\'"+uname...;
<h1></h1>
<p>struts-config.xml</p>
<p>
struts
;!--
This file contains the default Struts Validator pluggable validator... in this file.
# Struts Validator Error Messages
errors.required={0...struts <p>hi here is my code in struts i want to validate my
Cannot find tag library descriptor
Cannot find tag library descriptor Cannot find tag library descriptor...? How to resolve in struts in eclipse
combobox cannot be resolved in JavaFX
combobox cannot be resolved in JavaFX I want to design one... {
try
{
AnchorPane...);
primaryStage.setTitle("Product");
//getting external .css file
cannot do the additional operator
cannot do the additional operator i got problem with additional and multiplication operator...please anyone help me
<html>
<head>... math operator menu -->
<select name="operator">
<option selected
java - Struts
file.
function validate(objForm...;
}
if(objForm.usertype.selectedIndex == 0 ){
alert("Please Select User Type!");
objForm.usertype.focus... [Servlet Error]-[/loginpage.jsp]: javax.servlet.jsp.JspException: Cannot find
Cannot assign an ArrayList to an empty ArrayList
Cannot assign an ArrayList to an empty ArrayList I have a java file, in which a method returns an ArrayList. This ArrayList is supposed to contain all the Student object which are in X year.
StudentsManager.java
public
image cannot be saved - Java Beginners
image cannot be saved In the following program when we click... BufferedImage image;
static File file=null;
JMenuBar menubar = new JMenuBar();
JMenu... = new JMenu("File");
open =new JMenuItem("Import");
save=new JMenuItem("save file uploading - Struts
Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm newDocumentForm = (NewDocumentForm) form;
FormFile file
Struts Tutorials
struts-config.xml file for an existing Struts application into multiple... to struts-config.xml file. Then, open an editor for the struts-config.xml file... of the struts-config.xml file.
4. Struts Application Wizard - Turns your existing project
Select Tag (Form Tag) Example
Select Tag (Form Tag) Example
In this section, we are going to describe the select
tag. The select tag is a UI tag that is used to render an HTML input tag of type
Add
struts
struts how to write Dao to select data from the database
Select from select list + display
Select from select list + display i have a select list containing... select EmpCode from the select list,
the corresponding EmpName and DeptName should be displayed automatically in empty text fields.
I am using struts 1 why doc type is not manditory in struts configuration file
Select Tag<html:select>:
;
}
}
Defining form Bean in struts-config.xml file :
Add the following entry in the struts-config.xml file for Form Bean.
Defining the form... in the struts-config.xml :
Here, Action mapping helps to select the From
Struts
Struts What is called properties file in struts? How you call the properties message to the View (Front End) JSP Pages
cannot open .jar files by double click
cannot open .jar files by double click I'm having a problem i create a .jar file in net beans and yesterday it work right b y and double click but i... the error as " main class cannot be found " plz help i'm stucked now
still i
how to access the messagresource.proprties filevalues in normal class file using struts - Struts
how to access the messagresource.proprties filevalues in normal class file...
password=system
My class file is
import java.io.PrintStream;
import...();
ResultSet rs=st.executeQuery("Select * from data");
while(rs
java [ cannot retrive date from sql ] why??
java [ cannot retrive date from sql ] why??
import...=st.executeQuery("select amt from add where dte= "+sdf.format(da)+" "); // why cannot I retrive data from bd even if I put correct data in it
if( rs - Overrite of Validate method - Struts
Struts - Overrite of Validate method i am trying to display error... it's not compiling ...pls send what is the resion..and iam using struts 1.3.8..it's is some jar file missing or suppose i want ActionMessages clss in place
Struts upload file - Framework
Struts upload file Hi,
I have upload a file from struts... and send to file upload struts..how to get the sheets and data in that sheets
Thanks. Hi friend,
For upload a file in struts visit to :
http Suppose if you write label message with in your JSP page. But that "add.title" key name was not added in ApplicationResources.properties file? What happens when you run that JSP? What error shows? If it is run
if there is an invalid entry in html:file control then form is not submitting. - Struts
select any file and submit the form - form will submit without any problem
3... friend,
Code to fileupload :
Struts File Upload Example...if there is an invalid entry in html:file control then form is not submitting
struts validation
;%@ include file="../common/header.jsp"%>
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix...struts validation I want to apply validation on my program.But i am
Multiple file upload - Struts
Multiple file upload HI all,
I m trying to upload multiple files using struts and jsp.
I m using enctype="multipart". and the number of files...
Multipale file Uploade
Specify file
struts - Struts
.
Struts only reads the struts.config.xml file upon start up.
Struts...struts hi,
what is meant by struts-config.xml and wht are the tags are used in this xml file
and could u plz explain abt that tags indetail
java vertual machine launcher error; cannot access jarfiles
java vertual machine launcher error; cannot access jarfiles Hi I am getting a error while running Dos bat file
Error showing as error; cannot access jarfiles D\Documents
Please help
Regards
GP
Struts - Struts
Struts Hi,
I m getting Error when runing struts application.
i...
/WEB-INF/struts-config.xml
1... resolve this.
Hi friend,
Create the web.xml file
Multiple select box
Multiple select box Hi, I need help in code for multiple select box. The multiple select box should be populated with the db values.The selection done in the multiple select box is to be moved to the text area provided a add
DynaActionForm |
Struts File
Upload |
Struts
file upload and save on server...
configuration file |
Struts
2 Actions |
Struts 2 Redirect Action...
Format |
Struts 2 File Upload |
Struts 2 Resources |
Static Parameter
cannot insert data into ms access database - Java Server Faces Questions
cannot insert data into ms access database
go back...->Data Sources(ODBC)
2. Open User DSN tab
3. Add a user DSN
4. Select Microsoft Access Driver(*.mdb)
5. Select database name and Create the DSN
Struts 2 File Upload error
Struts 2 File Upload error Hi! I am trying implement a file upload using Struts 2, I use this article, but now the server response the error... solve this?
Hi Friend,
Please visit the following link:
File
select Query result display problem
Hibernate: select cc0.id as col00 from cc cc0_
Cc$$EnhancerByCGLIB$$2235676a cannot...select Query result display problem Hi,
String SQL_QUERY ="from Cc";
Query query = session.createQuery(SQL_QUERY);
for(Iterator it=query.iterate
The server encountered internal error() - Struts
the problem in struts application.
Here is my web.xml
MYAPP...
org.apache.struts.action.ActionServlet
config
/WEB-INF/struts-config.xml...: The absolute uri: cannot be resolved
cannot find symbol class array queue--plzz somebody help..
cannot find symbol class array queue--plzz somebody help.. import java.util.*;
public class Test {
public static void main(String[] args... import that package through the jar file or implement the class ArrayQueue
How to select only .txt file to be zipped using java?
How to select only .txt file to be zipped using java? Hello, I'm trying to zip .txt files from a folder, but I want to know how to select only the .txt files. I tried my code but it's not working; could anyone help me please.
[CODE
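The poster's code snippet was lost in extraction (the "[CODE" marker above). As a point of comparison, here is a minimal, hypothetical sketch of one way to zip only the .txt files in a folder; the folder and archive names are made up:

import java.io.*;
import java.nio.file.*;
import java.util.zip.*;

public class ZipTxtFiles {
    public static void main(String[] args) throws IOException {
        Path folder = Paths.get("input");  // hypothetical source folder
        try (ZipOutputStream zos = new ZipOutputStream(
                 new FileOutputStream("texts.zip"));
             DirectoryStream<Path> files =
                 Files.newDirectoryStream(folder, "*.txt")) {  // glob keeps only .txt
            for (Path file : files) {
                zos.putNextEntry(new ZipEntry(file.getFileName().toString()));
                Files.copy(file, zos);  // write the file bytes into the entry
                zos.closeEntry();
            }
        }
    }
}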
how to forward select query result to jsp page using struts action class
how to forward select query result to jsp page using struts action class how to forward select query result to jsp page using struts action class
more than one struts-config.xml file
more than one struts-config.xml file Can we have more than one struts-config.xml file for a single Struts application Layout Examples - Struts
Struts Layout Examples Hi,
Iam trying to create tabbed pages using the struts layout tag.
I see the tab names on the page but they cannot...://
Thanks.
Amardeep
Struts Articles
and add some lines to the struts-config.xml file to get this going... mapping definitions in the struts-config file.
2. Servlet creates... Struts configuration file(s) mappings and the JAAS security framework policy file
Labels in Struts 2.0 - Struts
Labels in Struts 2.0 Hello, how to get the Label name from properties file
STRUTS INTERNATIONALIZATION
introduction we shall see how to implement
i18n in a Simple JSP file of Struts.
g... in the application.properties file
index.info=STRUTS TUTORIAL.
Now we have to add entry in the
struts-config.xml file for all the properties files. The entry and its
Based on struts Upload - Struts
Based on struts Upload hi,
i can upload the file in struts but i want the example how to delete uploaded file.Can you please give the code
SELECT query using executeUpdate() instread of executeQuery()
SELECT query using executeUpdate() instread of executeQuery() can we execute SQL SELECT query using executeUpdate() instread of executeQuery() method
Hi Friend,
No, You cannot.
Thanks
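To make the distinction concrete, here is a small, hedged JDBC sketch (the in-memory H2 URL and table are made up, and assume the H2 driver on the classpath): SELECT goes through executeQuery(), while executeUpdate() is for statements that modify data:

import java.sql.*;

public class QueryVsUpdate {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = con.createStatement()) {
            st.executeUpdate("CREATE TABLE users(name VARCHAR(20))");
            int rows = st.executeUpdate("INSERT INTO users VALUES('alice')"); // returns a row count
            try (ResultSet rs = st.executeQuery("SELECT name FROM users")) {  // returns a ResultSet
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}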
Hibernate Select Clause
is the code of our java file which
shows how select HQL can be used...
Hibernate Select Clause
In this lesson we will write example code to select the data
Understanding Struts Action Class
Understanding Struts Action Class
In this lesson I will show you how to use Struts Action Class and forward a
jsp file through it.
What is Action Class?
An Action
Method Invocation
;
Without methods, an object cannot do anything. When an object calls... a
value, write to a file or to provide some functionality required. .... In
the class method it select the reference of the object at compile time.
ii
|
http://roseindia.net/tutorialhelp/comment/94299
|
CC-MAIN-2014-15
|
refinedweb
| 1,807 | 58.28 |
Is there any efficient way to check, and report in a log file or on the console maybe, whenever the VPN is disconnected?
import time
print time.asctime( time.localtime(time.time()) )
This solution is system dependent. I know it works on Linux because I've done something similar, but I don't know if it works on Windows. I don't know if you want a solution not involving ping, but I think this is a good solution.
import logging, os, time

PING_HOST = '10.10.10.10'  # some host on the other side of the VPN

while True:
    retcode = os.system('ping -c 1 %s' % PING_HOST)
    if retcode:
        # perform action for lost connection
        logging.warn('Lost visibility with %s' % PING_HOST)
    time.sleep(10)  # sleep 10 seconds
This works because ping returns a return code of
0 for success. All other return codes signify an error.
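As a side note (my own sketch, not from the original answer): if the ping chatter on the console is unwanted, the same idea can be expressed with subprocess and suppressed output. Note that -c is the Linux flag; Windows uses -n:

import subprocess

def vpn_alive(host='10.10.10.10'):
    # return code 0 means at least one ICMP reply came back
    return subprocess.call(
        ['ping', '-c', '1', host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL) == 0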
|
https://codedump.io/share/E2ft8ZMlSRhw/1/continuous-check-for-vpn-connectivity---python
|
CC-MAIN-2017-34
|
refinedweb
| 147 | 67.96 |
CFD Online Discussion Forums - OpenFOAM Installation - OpenFOAM Cygwin port updated to 13
brooksmoses
April 10, 2006 22:51
I've now updated Petr Vita's patch to work with OpenFOAM-1.3, and have produced source and binary distribution files for it. Those are now available at
.
In addition, for those still using OpenFOAM-1.2, I've incorporated the latest bugfixes for the bugs that Philippe Straet reported, and there are updated packages for that on the site as well.
These are not quite complete ports; we haven't yet tried to port FoamX or ParaFoam/ParaView. Also, Petr is still working on getting the parallel code to work; hopefully that will be working soon. However, everything else should be working.
...
There are a few changes to the Cygwin port that are new with OpenFOAM-1.3. First, I've rearranged how the dependency loop between dummy/libPstream and libOpenFOAM is resolved; on Cygwin, we now compile libOpenFOAM first, and link dummy/libPstream to it. This simplifies the relevant Make/files files quite a bit, and should be vastly easier for the OpenFOAM team to keep up to date if Hrv incorporates these changes into the main tree.
As with the OpenFOAM-1.2 port, I've compiled a set of patch files that contain the various changes; those are also linked and described at the above page. I've tried to separate the "bulk" changes (e.g., the changes to dozens of Make/options files) from the "interesting" changes, so as to make the patch files readable. This time around, it turns out that all the renaming that I did to make things work on a case-insensitive filesystem does not affect any of the other changes; thus, one should be able to apply patches 0.2 through 0.5 to an unmodified OpenFOAM-1.3 source tree.
In addition, most of the changes have been wrapped with "#ifdef cygwin" or such if they seem likely to affect compilation on other platforms, and the source tree from this port (either with the renamed files, or generated via patch files as above) should compile cleanly on any supported platform. I'm actually not sure if we need to wrap the Make/options changes in "#ifdef cygwin", but that's how Petr did it, so I'm keeping it that way for simplicity.
As always, feel free to report bugs, suggestions, or questions in this thread.
niklas
April 11, 2006 02:44
Just one important thing.
My managed mount was set to /manage, with this name
I will hit a filename length limit for one file (its one character too long)
So it is really important to keep the name as short as possible, setting it to /m instead allows me
to unpack it as usual.
N
brooksmoses
April 11, 2006 03:10
Ah, right -- I forgot to mention this! The packages that I've created have all the relevant source files renamed so that they don't require usage of a managed mount at all, for anything. They work fine in a case-insensitive normal Windows filesystem.
Of course, if you're using the official OpenFOAM source distribution and just applying the patches, then you will need a managed mount. If you do that, don't apply patch 0.1 (it's changing all the #includes because of the renamed files), and please let me know how it works in the end, because I haven't tried it yet!
pvita
April 12, 2006 12:00
Hi Brooks, Niklas and the rest of the OpenFOAM world!
I tested OpenFOAM distribution from Brooks (based on
OpenFOAM-1.3.cygwin-src-0.5.tar.gz
,
gcc-4.1.0.cygwin-OpenFOAM.tar.gz
and
OpenFOAM-1.3.Docs.tar.gz
) under my Windows 2000 Professional and can confirm that it works with small changes. I am attaching all these changes in a patch form that can be applied on any Windows platform and that solves following problems:
Cygwin environment detection
: fixed a detection of Cygwin environment to provide an unified and general way.
OpenFOAM-1.3/Allwmake
: Added somehow dropped code to compile sources of wmake system, library and applications.
thermophysicalModel/NSRDSfunctions
: Fixed an unknown problem that emerges at least on my rig. The fix is very general and should work on all Cygwin platforms.
The patch uses actual Brooks nomenclature and should applied as the last one in the line.
OpenFOAM-1.3.cygwin-src-0.6.diff
PV
brooksmoses
April 12, 2006 15:29
Petr -
Thanks for the patch! The fix for the Cygwin environment detection should have been included already; I'm not sure how it slipped out.
I have no clue what's going on with the NSRDSfunctions, since it doesn't do that on any of my test machines, but since it seems to reliably do that on yours, I can include the change. Are you doing this on a managed mount or an unmanaged one? Perhaps this is getting into the filename length limit Niklas mentioned?
I also admit to being somewhat confused by what's going on in the Allwmake patch. Your patch doesn't add the code to compile the wmake system, library, or applications -- it simply removes whitespace from those three lines (which do already exist), and moves them around. And it's incorporating code that was in the doc/Allwmake file -- it looks like this change is for a case where you've moved your Doxygen directory from the doc directory to the root OpenFOAM-1.3 directory?
I'll get these into the packaged distribution shortly!
pvita
April 13, 2006 03:40
Greetings!
In that version from
OpenFOAM-1.3.cygwin-src-0.5.tar.gz
was the fix just partialy implemented and still the old detection through CYGWIN_NT-5.0 or CYGWIN_NT-5.1 or etc. was used at least in
OpenFOAM-1.3/.OpenFOAM-1.3/bashrc
and
OpenFOAM-1.3/.OpenFOAM-1.3/cshrc
.
Well, in the case of
Allwmake
something got screwed as I see. I checked it once again and you are right. However I have no idea how I got on my installation contents I was repairing. There was some modification of Doxygen directory rights and fully missing compilation applications and OpenFOAM libraries. *scratches on the head* This part is somehow strange... Throw it out, please.
Problem with NSRDSfunctions is a very strange one to me as well. I am using non-managed mounts now, but I do not think it is bound to the length of filenames either, as normally just a limited part of the functions gets compiled and part simply not, without reason. It could be somebody will replicate it. But who knows.
PV
pvita
April 13, 2006 03:54
I posted that I do not know where it is comming from those changes in
Allwmake
I had to repair. Well, I know it now. :-) It is contained in your documents distribution
OpenFOAM-1.3.Docs.tar.gz
. I have unpacked the sources first, then documents distribution and it replaced original
Allwmake
. You should probably correct it removing
Allwmake
from documents distribution completly.
PV
brooksmoses
April 13, 2006 04:13
Oh, I see what I did! When I created the Docs distribution, I got it mixed up so it puts everything into the OpenFOAM-1.3 root directory instead of OpenFOAM-1.3/doc -- and thus what should be OpenFOAM-1.3/doc/Allwmake overwrites OpenFOAM-1.3/Allwmake from the source distribution. So it's a complete mess, and that's only one of the symptoms.
Anyhow, it should be fixed now.
On the NSRDS functions: what are the symptoms of what happens with them if you don't use this patch? Does it fail at compilation, or only when you try to use them (and, if so, with which testcase)?
niklas
April 19, 2006 02:12
I've tried out Brooks stuff and it works great.
There is one point I'd like to see improved though,
to better handle cross platform development,
and that is the way includes have been modified.
#include "vector.H" -> #include "vector.hh"
would be nice if instead it was
#ifdef cygwin
#include "vector.hh"
#else
#include "vector.H"
#endif
I have no objections to renaming the files,
but its messy sitting writing/testing code on a windows PC and running it on another.
Great work though Brooks.
N
pvita
April 19, 2006 03:41
Niklas --
Nice to see you again between those who tries to get Cygwin port working!
Brooks --
I have noticed that there is a OpenFOAM Wiki running. Well, it is my shame ignoring it so hard as it is loved child of Bernhard, my collegue. Whatever! There is a section know as
Main HowTos
that contain already some HOWTOs inside. One of them is named
Compiling OpenFOAM under Unix
. Maybe we should put content of your website there to centralize it... What you think about it?
PV
brooksmoses
April 19, 2006 04:08
Niklas: That sort of #ifdef construct is definitely how I do things in my own user applications outside the OpenFOAM source tree, yes, for exactly that reason. I'm not quite sure how it's useful to do it in the OpenFOAM tree itself, though -- are you editing the OpenFOAM files and transferring them back and forth? If so, don't you already have problems with some of the files having different names? I was thinking that it made more sense to have the modified tree be self-consistent, so that if someone unpacked it into a non-Cygwin machine it would still compile properly. (And it does; I tested it a couple of days ago.)
Petr: Yes, I think that it would be useful to move at least some of the content of my site to the Wiki. I probably won't have a chance to think about it for a week or two, though, but I'll make a note to work on it when I have time.
niklas
April 19, 2006 04:19
Petr:
Hehe,
This is probably going to upset a few and come out wrong, but Im gonna go ahead and say it anyway so
that you know where I stand.
I'm deliberately keeping a very low profile for this windows port thingy since
windows users in general are clicky-this-mesh, clicky-that-run, clicky-produce-nice-pics users who know very little (I might be wrong, but that is my experience) and
I dont like supporting them because they seldom
understand the complexity of CFD and/or OpenFOAM,
know very little about computers and
in general require, and often demand!!!, alooot of time to support.
Time I do not have.
Brooks:
Yup, havent got that far yet, but the amount of fiddling will be less
N
gschaider
April 19, 2006 04:50
Niklas comment about the Windows-users: I understand (and partially share) your sentiments, but I don't think it will be so bad for one reason: AFAIK nobody plans to port FoamX to cygwin (don't know if this is even possible). So if a disclaimer "...but the GUI is only available on Linux." is prepended to the Cygwin-README the pointAndClick-crowd will either go away or move to Linux (and will want support for their Linux-problems ;) ).
pvita
April 19, 2006 06:32
Niklas --
Well, we need testers, nobody asks you to support anybody if you have no time. That is your decision. I do not really understood message of your statement. And as Bernhard said, AFAIK there is no really plan for FoamX port.
PV
niklas
April 19, 2006 07:01
Well, we need testers, nobody asks you to support anybody if you have no time.
That is nice to hear.
Anyway, my point (although I have to admit not entirely clear) was that I never went away -
as your previous post 'nice to see you again' suggested.
I just haven't posted my solutions here, hoping that you would solve it all and get all the questions
I think you're both doing a great job.
N
brooksmoses
April 19, 2006 14:01
Thanks for the compliments, Niklas! And, yes, that's part of the reason that I've tried to make it very clear on my file-distribution page that this is an
unofficial
port -- I very much want to make it clear that it's not something the OpenFOAM team provides support for.
look
May 3, 2006 05:49
Hi,
When I compiled OpenFOAM with ./Allwmake. I have a problem like this:
/home/user/OpenFOAM/OpenFOAM-1.3/wmake/wmake: line 140: make: command not found
/home/user/OpenFOAM/OpenFOAM-1.3/wmake/wmake: cannot make, file Make/cygwinGcc4DPOpt/objectFiles was not created successfully
Could someone teach me what is going on and how to solve it?
Thank you very much.
pvita
May 3, 2006 06:13
Look --
Have a look once again at Brook's website where you downloaded your files from and read section
Dependencies
. There are written down some packages you need to install in your Cygwin to able to compile OpenFOAM 1.3 at all. Check if all packages are installed. Your error message says on its first line
...make: command not found
informing you that
make
is not provided.
PV
look
May 3, 2006 06:27
Dear Petr Vita:
I really neglect this.
Thank for your help.
Look
look
May 4, 2006 05:39
Hi,
When I use wmake to compiled icoFoam,I have a problem like this:
g++: error trying to exec 'as': execvp: No such file or directory
make: *** [Make/cygwinGcc4DPOpt/icoFoam.o] Error 1
I don't know what is wrong.
Can someone help me ?
Thank you.
Look
|
https://www.cfd-online.com/Forums/openfoam-installation/57590-openfoam-cygwin-port-updated-13-a-print.html
|
CC-MAIN-2017-47
|
refinedweb
| 2,451 | 72.26 |
On Wed, 2011-11-30 at 02:22 +0000, Al Viro wrote:
> On Tue, Nov 29, 2011 at 06:14:14PM -0800, Joe Perches wrote:
> > Fix a few style things.
> > $ ./scripts/checkpatch.pl -f --terse --nosummary fs/namespace.c | \
> >   cut -f3- -d":" | sort | uniq -c
> >   1 ERROR: do not initialise statics to 0 or NULL
> >   2 ERROR: do not use assignment in if condition
> >   1 ERROR: "foo * bar" should be "foo *bar"
> >   1 ERROR: need consistent spacing around '|' (ctx:VxW)
> >   1 WARNING: braces {} are not necessary for single statement blocks
> >   3 WARNING: EXPORT_SYMBOL(foo); should immediately follow its function/variable
> >   4 WARNING: line over 80 characters
> >   9 WARNING: please, no space before tabs
> >   1 WARNING: Use #include <linux/uaccess.h> instead of <asm/uaccess.h>
> >   1 WARNING: Use #include <linux/unistd.h> instead of <asm/unistd.h>
>
> NAK. First of all, wanking it certainly is; moreover, it creates shitloads
> of conflicts with patches in vfs.git#vfsmount-guts

No worries.

I think patches 1 thru 14 are reasonable though
and do apply with a few offsets to vfsmount-guts.
|
http://lkml.org/lkml/2011/11/29/572
|
CC-MAIN-2016-22
|
refinedweb
| 177 | 52.19 |
#include <stdio.h>
void swap(void *v[], int i, int j)
{
void *tmp;
tmp = v[i];
v[i] = v[j];
v[j] = tmp;
}
int main(void)
{
char *s[] = {"one", "two"};
printf("%s, %s\n", s[0], s[1]);
swap(s, 0, 1);
printf("%s, %s\n", s[0], s[1]);
return 0;
}
one, two
two, one
The compiler complains about incompatible pointer casting: the parameter needs void **, but the argument is char **.
No, it is not necessarily safe to pass a char ** where a void ** (which is what a void *[] function parameter actually is) is expected. The fact that the compiler makes you perform an explicit cast is a hint about that.
In practice, it is likely to be fine. Strictly speaking, however, you usually have no guarantee that sizeof (T*) == sizeof (U*) for distinct types T and U. (For example, you could imagine a hypothetical system where sizeof (int*) < sizeof (char*) because pointers-to-int are aligned and therefore don't need to store the least significant bits.) Consequently, your swap function might index into the v array using the wrong offsets.
Also see Q4.9 from the comp.lang.c FAQ: Can I give the formal parameter type void **, and do something like this?
To call swap safely, you should do something like:

void* temp[] = { &s[0], &s[1] };
swap(temp, 0, 1);

although that would swap the elements of temp, not of s.
If you're authoring swap, in general you should make such a function take a void * argument (instead of a void ** one) and a size_t argument that specifies the size of each element. Your function then could cast the void * to char * safely and swap individual bytes:
void swap(void* p, size_t elementSize, size_t i, size_t j)
{
    char* item1 = p;
    char* item2 = p;

    item1 += i * elementSize;
    item2 += j * elementSize;

    while (elementSize-- > 0) {
        char temp = *item1;
        *item1 = *item2;
        *item2 = temp;
        item1++;
        item2++;
    }
}
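For completeness, a quick usage sketch of this byte-wise version on the original string array (my own illustration, not part of the answer):

#include <stdio.h>
#include <stddef.h>

/* swap() as defined above */

int main(void)
{
    char *s[] = {"one", "two"};
    swap(s, sizeof s[0], 0, 1);      /* element size: sizeof(char *) */
    printf("%s, %s\n", s[0], s[1]);  /* prints: two, one */
    return 0;
}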
Edit: Also see this StackOverflow answer to a similar question.
|
https://codedump.io/share/bkDiiCH6wKHM/1/is-casting-to-pointers-to-pointers-to-void-always-safe
|
CC-MAIN-2018-09
|
refinedweb
| 321 | 63.43 |
I love all computers. Always have.
I'm a retired computer programmer. My first product was a keyboard macro program (Tempo - Affinity Software, Macintosh) in the late 1980's.
Later I helped in a small way to invent email (QuickMail - CE Software, Macintosh.)
I worked for startups and shutdowns. I moved around the country. It was a blast.
I spent my last 23 years at Microsoft, where I mostly worked on Visual Studio.
I enjoy programming in many languages, on many operating systems. It's a wonderful time to be a geek.
Rop, all the included examples run fine.
I found that the problem is that Button::_hidden is uninitialized. I will open an issue.
Now I can easily add hit and miss sounds to my button testing app. Will update soon.
Thanks for the
checkRotation function!
I've just glanced at the Sound stuff (very nice!) but am still working on Touch.
I wrote a test to determine what the minimal effective touch button size is ().
When I moved it from PlatformIO (for development) to Arduino (for publishing) I moved from your master branch to the M5Sound branch. The app built, but did not run in M5Sound. (It drew one button and stopped. I haven't looked into it yet.)
Are you making touch changes in M5Sound as well?
I wrote a program to see what a reasonable minimum size touch button is:
It's a test you play: it lets you choose what size button and what spacing; it makes a bunch of buttons and asks you to press eight of them. At some point you should stop getting perfect scores.
I'd be interested in hearing what people find a usable minimum to be.
Caution: to use this app, you need RopG's PR for M5Core2. You can download the touch button version of the lib here:.
Rop's latest sample (M5Sound, in this forum) make me wonder what a reasonable, minimum size and spacing should be for TouchButtons. At almost 80 X 60 pixels, the DTMF buttons are usable, but already require some care with only 12 on the screen.
I dug up an old passive capacitive stylus (the kind that looks like a soft pencil eraser) and found it increases speed and accuracy. Maybe a 50 X 50 pixel button would be usable with the passive stylus (doubling the number of buttons available.)
I see that there are active capacitive styluses available with much smaller tips. Does anyone have one? Does it work well with the Core2?
Nice, efficient implementation of the dtmf sample. I see
btn.userData came in handy!
The audio quality is great, completely free of pops and clicks.
I think I'll add
checkRotation() to my system library!
Works like a champ!
If we ever meet, I will buy you a beer.
@felmue That's a very simple change and works great! Thanks a million!
Still on the topic of "Is there a Brain Genius in the House?" I have one more PlatformIO question:
When I compile Core2 programs that allocate lots of memory, they work if built on Arduino but fail at runtime if built on PIO. It seems that Arduino is somehow dynamically aware of the 8MB PSRAM, while PIO is not. I believe PIO uses the board def to select ..\partitions\default_16MB.csv, which seems to limit app0 to 64K. I bet that's it.
So when I execute a statement like
disp.createSprite(320, 240) it works fine if I build with Arduino, but PIO tells the Core2 it doesn't have that much RAM, so it fails. The factory test program likewise won't run if built as an M5Stack-Fire on PIO.
Does anyone have a definition for an M5Stack-Core2 board and an appropriate partition map for it for PIO?
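I have not verified this against the poster's setup, but as a sketch: newer versions of the espressif32 platform ship an m5stack-core2 board definition, and PlatformIO lets you override the partition table per environment. Something along these lines, where the option names are standard PlatformIO ones and the values are assumptions:

[env:m5stack-core2]
platform = espressif32
board = m5stack-core2              ; assumption: present in newer platform versions; otherwise keep m5stack-fire
framework = arduino
monitor_speed = 115200
board_build.partitions = default_16MB.csv   ; or a custom CSV matching the Core2 flash layout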
I'm trying to build M5Stack Core2 projects on PlatformIO (on Windows, if it matters.)
I've been tracking RopG's Fork, so M5Core2 was in my project's lib folder, and things worked fine.
Now I've moved it to the [user].platformio\lib folder, and I have problems.
If the M5Stack library is NOT installed, the project compiles and links and this is the Dependency Graph:
|-- <M5Core2> 0.0.1
|   |-- <Wire> 1.0.1
|   |-- <SPIFFS> 1.0
|   |   |-- <FS> 1.0
|   |-- <FS> 1.0
|   |-- <SPI> 1.0
|   |-- <HTTPClient> 1.2
|   |   |-- <WiFi> 1.0
|   |   |-- <WiFiClientSecure> 1.0
|   |   |   |-- <WiFi> 1.0
|   |-- <SD(esp32)> 1.0.5
|   |   |-- <FS> 1.0
|   |   |-- <SPI> 1.0
However, if both M5Stack and M5Core2 libraries are installed in [user].platformio\lib, I get this Dependency Graph:
|-- <M5Core2> 0.0.1
|   |-- <Wire> 1.0.1
|   |-- <SPIFFS> 1.0
|   |   |-- <FS> 1.0
|   |-- <FS> 1.0
|   |-- <SPI> 1.0
|   |-- <HTTPClient> 1.2
|   |   |-- <WiFi> 1.0
|   |   |-- <WiFiClientSecure> 1.0
|   |   |   |-- <WiFi> 1.0
|   |-- <M5Stack> 0.3.0
|   |   |-- <FS> 1.0
|   |   |-- <SPIFFS> 1.0
|   |   |   |-- <FS> 1.0
|   |   |-- <SPI> 1.0
|   |   |-- <HTTPClient> 1.2
|   |   |   |-- <WiFi> 1.0
|   |   |   |-- <WiFiClientSecure> 1.0
|   |   |   |   |-- <WiFi> 1.0
|   |   |-- <Wire> 1.0.1
|   |   |-- <SD(esp32)> 1.0.5
|   |   |   |-- <FS> 1.0
|   |   |   |-- <SPI> 1.0
|   |-- <SD(esp32)> 1.0.5
|   |   |-- <FS> 1.0
|   |   |-- <SPI> 1.0
Everything compiles, but I get innumerable linking errors like:
.pio\build\m5stack-fire\libb51\libM5Core2.a(M5Core2.cpp.o):(.bss.M5+0x0): multiple definition of `M5'
.pio\build\m5stack-fire\lib3c8\libM5Stack.a(M5Stack.cpp.o):(.bss.M5+0x0): first defined here
c:/users/van/.platformio/packages/toolchain-xtensa32/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld.exe: Warning: size of symbol `M5' changed from 332 in .pio\build\m5stack-fire\lib3c8\libM5Stack.a(M5Stack.cpp.o) to 1856 in .pio\build\m5stack-fire\libb51\libM5Core2.a(M5Core2.cpp.o)
My platformio.ini is plain vanilla:
[env:m5stack-fire]
platform = espressif32
board = m5stack-fire
framework = arduino
monitor_speed = 115200
I've tried it with incredibly trivial projects, it's always the same.
Any suggestions? Is anyone successful at this?
Works like a champ! Thanks, Felix--sorry it took me so long to circle back and try it out.
Rop has reved two more times; I'm on the latest and this seems to work fine.
Sorry, I wasn't clear, but it sounds like we're on to it.
Keep the M5StickC board selected, but change:
#include <M5StickC.h>
to
#include <M5StickCPlus.h>
|
https://forum.m5stack.com/user/vkichline
|
CC-MAIN-2020-45
|
refinedweb
| 1,089 | 80.38 |
James shows how to add a simple WMI provider to a service so that you can monitor it, and make changes to it, remotely across the network
If you are writing an application, such as a service, that you want to be able to monitor and configure remotely then you'll want to ensure that your application integrates smoothly with Windows Management Instrumentation (WMI). WMI is the Microsoft implementation of an industry standard interface called Web Based Enterprise Management. It allows a user to access, and change, management information from a wide variety of devices across the enterprise. The classes that support WMI in the .NET Framework reside in the system.management namespace, within the framework’s class library.
In this article, we will first create a Windows Service that accepts a TCP connection and echoes back any text typed into a telnet connection. We will then add some WMI code to our service in order to enable it to publish information about the number of characters that have been echoed back. In other words, we will turn our Windows Service into a WMI Provider.
Although this is a fairly simple project, it does highlight the key steps that are required to WMI-enable your application.
Create a new Windows Service project in Visual Studio and call it TestService. In Solution Explorer, open up the file called Service1.cs in the code editor.
Add a new member variable called m_engineThread of type Thread:
Thread m_engineThread;
We then want to kick it off this new listening thread during service startup:
protected override void OnStart(string[] args)
{
    m_engineThread = new Thread(new ThreadStart(ThreadMain));
    m_engineThread.Start();
}
And make sure we terminate it when the service is stopped:
protected override void OnStop()
{
    try
    {
        m_engineThread.Abort();
    }
    catch (Exception) { ; }
}
The code for ThreadMain is fairly simple; it just sets up a TCPListner and accepts connections. It then prints out any line entered via the TCP connection, until a single "." is received on a line by itself:
public void ThreadMain()
{
    // Setup the TCP Listener to bind to 127.0.0.1:50009
    IPAddress localAddr = IPAddress.Parse("127.0.0.1");
    TcpListener tlistener = new TcpListener(localAddr, 50009);

    try
    {
        // Start listening
        tlistener.Start();
        String data = null;

        // Enter processing loop
        while (true)
        {
            // Block until we get a connection
            TcpClient client = tlistener.AcceptTcpClient();
            data = null;

            // Get a stream object and
            // then create a StreamReader for convenience
            NetworkStream stream = client.GetStream();
            StreamReader sr = new StreamReader(stream);

            // Read a line from the client at a time.
            while ((data = sr.ReadLine()) != null)
            {
                if (data == ".")
                {
                    break;
                }
                byte[] msg = System.Text.Encoding.ASCII.GetBytes(data);
                stream.Write(msg, 0, msg.Length);
                stream.WriteByte((byte)'\r');
                stream.WriteByte((byte)'\n');
            }

            // Shutdown and end connection
            client.Close();
        }
    }
    catch (SocketException e)
    {
        ;
    }
    finally
    {
        // Stop listening for new clients.
        tlistener.Stop();
    }
}
Finally, we need to get the service to install itself. To do this, add a project reference to System.Configuration.Install.dll and then add a new class to your project, called MyInstaller. This class should derive from Installer and be attributed with the RunInstallerAttribute:
[System.ComponentModel.RunInstaller(true)]
public class MyInstaller : Installer
{
    …..
In the constructor of the MyInstaller class, we need the following code to install the service:
public MyInstaller()
{
    ServiceProcessInstaller procInstaller = new ServiceProcessInstaller();
    ServiceInstaller sInstaller = new ServiceInstaller();

    procInstaller.Account = ServiceAccount.LocalSystem;
    sInstaller.StartType = ServiceStartMode.Automatic;
    sInstaller.ServiceName = "Simple-Talk Test Service";

    Installers.Add(sInstaller);
    Installers.Add(procInstaller);
}
All this does is to ensure that the service installs correctly and appears in the services.msc control panel.
Let's give it a go. Hit F6 to build the project and then run InstallUtil.exe on the resulting binary. In my case, this is:
C:\Simple-Talk>InstallUtil.exe TestService.exe
You will see a large amount of text output. Once this has completed, hit start->Run and type services.msc. This will bring up the services control panel; scroll down until you find Simple-Talk Test Service and start it.
Having started the service, we can now try it out. Hit start->run again and type: telnet 127.0.0.1 50009. This will open up a telnet window; anything you type will be echoed back to you when you hit enter.
To close the connection, enter a "." on a line on its own.
We now need to stop the service, which you can do using the services control panel.
We now want to add the WMI support to the service. As an example, we will publish the number of characters which have been echoed back since the service started.
To WMI-enable our service, include a reference to System.Management.Dll and then add a new class to the project called EchoInfoClass. Attribute this class with the InstrumentationClass attribute, with its parameter as InstrumentationType.Instance. Then, add a public field called CharsEchoed of type int:
[InstrumentationClass(InstrumentationType.Instance)]
public class EchoInfoClass
{
    public int CharsEchoed;
}
The InstrumentationClass attribute specifies that the class provides WMI data; this WMI Data can either be an instance of a class, or a class used during a WMI event notification. In this case, we want to provide an instance of a class. Next, in order to WMI-enable our project, we need to modify the installer class we wrote earlier so that it registers our WMI object with the underlying WMI framework.
For safety, first run InstallUtil.exe /u against the binary we built before to uninstall the service.
Now, we can change the installer class so that it registers our WMI object correctly with the underlying WMI subsystem. Luckily, the .NET Framework architects made this easy for us. There is a class called DefaultManagementProjectInstaller in the framework that provides the default installation code to register classes attributed with InstrumentationClass. To take advantage of this we simply change the class MyInstaller to derive from DefaultManagementProjectInstaller rather than Installer.
[System.ComponentModel.RunInstaller(true)]
public class MyInstaller : DefaultManagementProjectInstaller
{
    …
We need to create and register an instance of this service class on service startup. To do this, first add a member variable to the service class:
EchoInfoClass m_informationClass;
Then, add the following code to your OnStart override:
protected override void OnStart(string[] args)
{
    m_informationClass = new EchoInfoClass();
    m_informationClass.CharsEchoed = 0;
    Instrumentation.Publish(m_informationClass);

    m_engineThread = new Thread(new ThreadStart(ThreadMain));
    m_engineThread.Start();
}
This creates the class instance and registers it with the WMI framework so that it is accessible via WMI. Once that is done we just use the class as normal.
We have now told the WMI Framework about our class (via the installer) and published an instance of it (in our OnStart method). Now, we just need to update the information we are publishing via WMI. To do this we increment the m_informationClass.CharsEchoed field whenever we echo a character back to the client. To do this add the following line to ThreadMain:
while ((data = sr.ReadLine()) != null)
{
    if (data == ".")
    {
        break;
    }
    byte[] msg = System.Text.Encoding.ASCII.GetBytes(data);
    stream.Write(msg, 0, msg.Length);
    stream.WriteByte((byte)'\r');
    stream.WriteByte((byte)'\n');

    m_informationClass.CharsEchoed += msg.Length;
}
We are now ready to give it a go and see if it all works! Build your application by hitting F6 and then run InstallUtil again:
Then just start the service and try it out:
C:\Simple-Talk>net start "Simple-talk test service"
C:\Simple-Talk>telnet 127.0.0.1 50009
The telnet command opens up a blank screen, waiting for you to type something in. I typed in "simple-talk" and hit enter and the service duly echo'd back "simple-talk" to the screen.
So, the service returned 11 characters and, hopefully, our WMI provider worked correctly and recorded that. Microsoft provides a WMI information browser called wbemtest – it's fairly ropey, but it will do for now so open that up:
C:\Simple-Talk>wbemtest
Click connect, leave all the settings at their default value, and click OK:
Next click the Query… button and enter the following:
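(The query itself is only visible in the original screenshot; given the class published above, it would presumably be something like the following WQL:)

SELECT * FROM EchoInfoClass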
WQL returns instances of the classes we requested, rather than rows, and presents us with the following screen:
NOTE: WQL is very similar to SQL – in fact it is a subset of SQL and allows you to query management information in a very similar way to an RDBMS. WQL generally returns “instances” rather than rows. However, these can be thought of as analogous.
Double click on the instance of the class:
The CharsEchoed property shows us that 11 characters have been sent back from the service.
WMI is a wide ranging infrastructure on Windows (and other platforms) for managing machines and programs across an enterprise. Although our example is fairly simple it should give you enough information to be able to include WMI integration next time you are writing a service or website that requires remote monitoring and management.
It is equally easy to consume WMI information from .NET; however, that topic can wait for another article.
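As a brief taste of that, here is a minimal, hedged C# sketch of reading the counter back with the System.Management classes. This is my illustration, not code from the article; .NET instrumentation publishes to the root\default namespace by default, but your installer settings may differ:

using System;
using System.Management;

class ReadEchoInfo
{
    static void Main()
    {
        // Instrumented classes land in root\default unless configured otherwise
        ManagementScope scope = new ManagementScope(@"\\.\root\default");
        ObjectQuery query = new ObjectQuery("SELECT * FROM EchoInfoClass");

        using (ManagementObjectSearcher searcher =
                   new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject obj in searcher.Get())
            {
                Console.WriteLine("CharsEchoed = {0}", obj["CharsEchoed"]);
            }
        }
    }
}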
Author profile: James Moore
James Moore is a developer at Red Gate Software and currently runs the DBA tools division.
|
http://www.simple-talk.com/dotnet/.net-framework/integrating-with-wmi/
|
crawl-002
|
refinedweb
| 1,500 | 56.86 |
On Thursday 16 July 2009, Gregory Haskins wrote:
> Arnd Bergmann wrote:
> > Your approach allows passing the vmid from a process that does
> > not own the kvm context. This looks like an intentional feature,
> > but I can't see what this gains us.
>
> This work is towards the implementation of lockless-shared-memory
> subsystems, which includes ring constructs such as virtio-ring,
> VJ-netchannels, and vbus-ioq. I find that these designs perform
> optimally when you allow two distinct contexts (producer + consumer) to
> process the ring concurrently, which implies a disparate context from
> the guest in question. Note that the infrastructure we are discussing
> does not impose a requirement for the contexts to be unique: it will
> work equally well from the same or a different process.
>
> For an example of this "producer/consumer" dynamic over shared memory in
> action, please refer to my previous posting re: "vbus"
>
> I am working on v4 now, and this patch is part of the required support.

Ok. I can see how your approach gives you more flexibility in this
regard, but it does not seem critical.

> But to your point, I suppose the dependency lifetime thing is not a huge
> deal. I could therefore modify the patch to simply link xinterface.o
> into kvm.ko and still achieve the primary objective by retaining ops->owner.

Right. And even if it's a separate module, holding an extra reference
on kvm.ko will not cause any harm.

> > Can't you simply provide a function call to lookup the kvm context
> > pointer from the file descriptor to achieve the same functionality?
>
> You mean so have: struct kvm_xinterface *kvm_xinterface_find(int fd)
> (instead of creating our own vmid namespace) ?
>
> Or are you suggesting using fget() instead of kvm_xinterface_find()?

I guess they are roughly equivalent. Either you pass a fd to
kvm_xinterface_find, or pass the struct file pointer you get
from fget. The latter is probably more convenient because it
allows you to pass around the struct file in kernel contexts
that don't have that file descriptor open.

> > To take that thought further, maybe the dependency can be turned
> > around: If every user (pci-uio, virtio-net, ...) exposes a file
> > descriptor based interface to user space, you can have a kvm
> > ioctl to register the object behind that file descriptor with
> > an existing kvm context to associate it with a guest.
>
> FWIW: We do that already for the signaling path (see irqfd and ioeventfd
> in kvm.git). Each side exposes interfaces that accept eventfds, and the
> fds are passed around that way.
>
> However, for the functions we are talking about now, I don't think it
> really works well to go the other way. I could be misunderstanding what
> you mean, though. What I mean is that it's KVM that is providing a
> service to the other modules (in this case, translating memory
> pointers), so what would an inverse interface look like for that? And
> even if you came up with one, it seems to me that its just "6 of one,
> half-dozen of the other" kind of thing.

I mean something like

int kvm_ioctl_register_service(struct file *filp, unsigned long arg)
{
	struct file *service = fget(arg);
	struct kvm *kvm = filp->private_data;

	if (!service->f_ops->new_xinterface_register)
		return -EINVAL;

	return service->f_ops->new_xinterface_register(service,
					(void *)kvm, &kvm_xinterface_ops);
}

This would assume that we define a new file_operation specifically for this,
which would simplify the code, but there are other ways to achieve the same.
It would even mean that you don't need any static code as an interface layer.
Arnd <><
|
https://lkml.org/lkml/2009/7/16/335
|
CC-MAIN-2015-35
|
refinedweb
| 586 | 61.36 |
I'm trying to serialize and deserialize JSON data using DataContractJsonSerializer. Everything works perfectly until .NET/Mono decides to add a __type field when it writes the object to a JSON string. When it then tries to read that JSON string back, it throws a System.TypeInitializationException. I would assume that since Mono is the one adding that extra field, it would be able to read it, but no.
I then tried to stop Mono from generating the __type field by using DataContractJsonSerializerSettings.EmitTypeInformation, setting it to EmitTypeInformation.Never, but that doesn't do anything. Mono is still adding the __type field.
Is there a way to get around this? All I want is to be able to write and read the object using DataContractJsonSerializer.
This is what my object looks like:
[DataContract]
public class UserObj
{
    [DataMember]
    public string email { get; set; }

    [DataMember]
    public string deviceId { get; set; }

    [DataMember]
    public List<StoreObj> stores { get; set; }
}
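For reference, this is roughly how one would wire up the settings described above (a sketch of the poster's setup, not code from the thread; StoreObj is assumed to be another [DataContract] type):

using System.IO;
using System.Runtime.Serialization.Json;

var settings = new DataContractJsonSerializerSettings
{
    // The option the poster set, which Mono reportedly ignores
    EmitTypeInformation = System.Runtime.Serialization.EmitTypeInformation.Never
};
var serializer = new DataContractJsonSerializer(typeof(UserObj), settings);

using (var stream = new MemoryStream())
{
    serializer.WriteObject(stream, new UserObj { email = "a@b.c" });
    stream.Position = 0;
    var roundTripped = (UserObj)serializer.ReadObject(stream);
}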
Have you found a resolution for this problem? I'm running into the exact same thing. This was working when we last shipped our app maybe a year ago and is now broken. It seems like it was also fixed and then maybe broken again as there is a resolved bugzilla issue ().
I'm having the same problem here. EmitTypeInformation.Never is ignored.
|
https://forums.xamarin.com/discussion/comment/144886
|
CC-MAIN-2020-40
|
refinedweb
| 213 | 59.4 |
Hi! I’m new to Gradle and so far liking what I’m seeing.
I’m attempting to port an ANT build over to Gradle. Right now I’m trying to get the flow of the build correct, then I’m going to tackle getting the build to actually do something (this is a pretty complex build that requires interaction with an IBM iSeries system and a lot of shared ANT code with other builds). The first problem I’m running into is switching some properties around for QA builds vs. Production (PDN) builds.
The way our ANT build handles this is when it’s time for a QA build, we call two targets
ant qua all
This sets some properties differently, swaps in a special property file and then continues to the “all” target which does the build.
What I’m trying to do is to figure out how to make this switch. I’ve tried defining a task for my QA build and running it (similar to the way we do it with ANT), I’ve tried passing in either a project property (
-P) or a Java system property (
-D).
Either way, I can only get Gradle to adopt one or the other, meaning I can’t switch from run-to-run which variation of the build I do.
Here’s my current (very simple) build.gradle:
def buildProfile = 'build-pdn.props'
def isQUABuild

task all {
    doLast {
        println 'Inside the all task'
    }
}

task qua {
    buildProfile = "build-qua.props"
    isQUABuild = true
}

task pdn {
    buildProfile = "build-pdn.props"
    isQUABuild = false
}

task init {
    doLast {
        println 'Inside the init task'
        println isQUABuild
        if (isQUABuild) {
            println "This is a QA build!"
        } else {
            println "This is a PDN build!"
        }
        println buildProfile
    }
}

task setProps {
    doLast {
        println 'Inside the setProps task!'
    }
}

task fetchRelMod {
    doLast {
        println 'Inside the fetchRelMod task!'
    }
}
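For context, and as a hedged aside (this is a common Gradle idiom, not part of the original post): since the configuration blocks of all declared tasks run on every invocation, one way to make such a switch stick is to key it off a project property at configuration time, e.g.:

// gradle build -Pqua   -> QA profile
// gradle build          -> PDN profile (default)
def isQUABuild = project.hasProperty('qua')
def buildProfile = isQUABuild ? 'build-qua.props' : 'build-pdn.props'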
Any help or guidance would be appreciated. Thanks in advance for the help.
Allen
|
https://discuss.gradle.org/t/conditional-compile-steps-properties-issues/21124
|
CC-MAIN-2018-51
|
refinedweb
| 317 | 81.12 |
When Windows 10 IoT was first announced, there was great hope for a Windows RT-like experience. Being able to run real Windows applications on a Raspberry Pi would be a killer feature, and putting Skype on a Pi would mean real Jetsons-style video phones appearing in short order.
Windows 10 IoT core isn’t so much an operating system, as it is a device that will run apps written with Windows APIs: there is no shell. If you want to control dozens or hundreds of devices, each running a program written in Visual Basic, JavaScript, C#, or Python, this is for you.
The majority of interaction with Windows 10 IoT Core is over the web. After booting and pointing a browser to the Pi, you're presented with a rather complete web-based interface. Here, you can check out what devices are connected to the Pi, look at the running processes, and run new apps. Think of this feature as a web-based Windows control panel.
Installing
Officially, the only way to install Windows 10 IoT Core is with a computer running Windows 10. One way around this is the ffu2img project on GitHub. This Python script takes the special Microsoft .FFU image file format and turns it into an .IMG file that can be used with dd under *nix and Win32DiskImager on Windows.
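A hedged sketch of that workflow (the exact ffu2img arguments may differ by version, and /dev/sdX stands in for your SD card device):

python ffu2img.py flash.ffu flash.img      # convert the FFU to a raw image (assumed invocation)
sudo dd if=flash.img of=/dev/sdX bs=4M     # then write it with dd as with any other distro image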
Yes, Windows 10 is free for everyone with a relatively modern Windows box, but since the only requirement for running Windows 10 IoT core is putting an image on an SD card and monitoring a swarm of IoT Core devices, there is no reason why this OS can’t be supplied in an .IMG file.
After putting the image on an SD card, installing Windows 10 IoT Core is as simple as any other Raspi distro: shove the card in the Pi, connect an Ethernet cable, and give it some power.
You do get a few options for language and network settings, and there are a few tutorials and examples – connecting to Visual Studio and blinking an LED – but that’s it. The base user experience of Windows 10 IoT Core is just network information, a device name, and a picture of a Raspberry Pi.
There are a few shortcomings of the Windows 10 IoT core for the Raspberry Pi. Officially, the only supported WiFi module is the official Raspberry Pi WiFi module with a BCM43143 chipset. By far, the most popular WiFi module used for the Raspberry Pi (and something you should always carry around in your go-bag) is the Edimax EW-7811Un, a tiny WiFi module that uses a Realtek chipset. Odds are, if you have a Raspberry Pi 2, that WiFi module you picked up won't work. Common sense would dictate that you could install the Windows driver for the Realtek chipset, but this is not the case; no Windows driver will ever work with Windows 10 IoT core. Even devices from the Raspberry Pi foundation, like the Raspberry Pi camera, are not supported by IoT Core.
If you’ve ever wanted clearer evidence the Windows 10 IoT core is not meant to be an extensible system like every other Linux-based single board computer, you need only look a little deeper. Digital audio is completely ignored, and pins 8 and 10 – normally reserved for a 3.3V UART on every other Raspberry Pi distribution – are reserved pins. Microsoft managed to make a single board computer without a hardware UART.
Fortunately, some of these problems are temporary. A representative from the Windows On Devices team told us more WiFi dongles will be supported in the future; the only driver they were able to bring up in time is the official dongle from the Raspberry Pi foundation. A similar situation of engineering tradeoffs is the reason for the lack of UART support.
Who is this for, exactly?
The idea that Microsoft would put out a non-operating system without support for the de facto standard WiFi adapter, a hardware UART, or drivers for the majority of peripherals is one thing. Selling this to the ‘maker movement’ strains credulity. There is another explanation.
Let’s go over once again what Windows 10 IoT Core actually is. By design, you can write programs in Visual Studio and upload them to one or many devices running IoT core. These programs can have a familiar-looking GUI, and are actually pretty easy to build given 20+ years of Windows framework development. This is not a device for makers, this is a device for point of sale terminals and ATMs. Windows XP – the operating system that is still deployed on a frighting number of ATMs – is going away soon, and this is Microsoft’s attempt to save their share of that market. IoT Core isn’t for you, it isn’t for me, and it isn’t for the 9-year-old that wants to blink an LED. This is an OS for companies that need to replace thousands of systems still running XP Embedded and need Windows APIs in kiosks and terminals.
Save your SD card
For anyone with a Raspberry Pi 2 and an SD card, the only investment you’ll make in trying out Windows 10 IoT Core is your time. It’s not worth it.
While Windows 10 IoT Core is great for any company that has a lot of Visual Basic and other engineering debt, it’s not meant for hackers, makers, or anyone building something new. For that, there are dozens of choices if you want an Internet-connected box that can be programmed and updated remotely. The Cloud9 IDE for the Pi and BeagleBone lets you write code on single board computers without forcing you to install Visual Studio, and Linux is king for managing dozens or hundreds of boxes over the Internet.
This is not an OS that replaces everything out there. A Linux system will almost always have better hardware support, and this is especially true on embedded devices. Windows 10 IoT Core is a beginning, and should be viewed as such. It’s there for those who want it, but for everyone else any one of a dozen Linux distributions will be better.
222 thoughts on “Raspberry Pi And Windows 10 IoT Core: A Huge Letdown”
As a .NET developer, having seen some interesting stuff at Techdays in The Hague, I look at this solution a little differently.
The main thing Microsoft wants to promote is “Windows 10 on everything”, and IoT is just one piece of that.
For somebody coming from a Microsoft world, this might be a lower threshold for getting into tinkering than installing Linux, getting started with Python, etc.
I think the product does have its target audience, even though it might be a little light for your application.
I agree with this. One way to look at this is that Microsoft is trying to support more devices. Having the option to use Raspberry Pi and other embedded systems (or SoC) is a good step. If there’s widespread adoption I assume they’ll add to the feature set.
Very much to the point. Plus, they need to see money knocking at their door before putting more effort into support. This review helped me avoid wasting a weekend. I like desktop Windows because of the abundance of programs and an interface that has been familiar for 20 years. I am learning Linux, but I don’t like the multi-user aspect of it at all when it comes to programming it, especially on a Raspberry Pi. Either you run code from the admin account (fed up with access issues, you just followed a blog post or book recommendation), or you have to know everything needed to set up the right groups and permissions (the right route, only taken by Linux geeks). I was hoping Win 10 would give the RPi a small, cozy, retro Windows desktop and programming experience.
This is an extremely ignorant comment. All computers have multiple users, and all OSes, unless you use MS-DOS or OS/2 or something. Saying that having multiple users is a bad thing is ignorant, especially from someone claiming to use a computer primarily for programming. Why do you think Windows is so easy to compromise? Because everything runs as root. If you can’t tell what needs root privileges, you absolutely shouldn’t be programming anything.
Correct. *nixes are “deny then allow”, vs. MS’s “allow then deny” method. At least newer versions of server-grade Windows have switched to the more secure “deny then allow” model I learned 20+ years ago. It’s really not much of a leap from checking boxes (bits) to add/remove rights to understanding attrib to change a bit to add/remove rights. The RIGHT route is taken by anybody, geek or not, who doesn’t want to do a half-assed job. If I ever saw a Windows geek doing this to a server or desktop under my watch, that would be an issue. So the OS doesn’t matter; learn it, don’t blame it.
While I do agree with the points you make in this comment, I gotta say – man, what an unnecessarily rude comment!
Not to get off topic, but I would argue that an iPad is a computer and Apple still hasn’t figured out how to have multiple users on it.
I know this is old…
But you say:
“All computers have multiple users, and all OSes”
Then go on to say:
“Why do you think windows is so easy to compromise? Because everything runs as root.”
That was his point exactly. I’m the only user of my Raspberry Pi. It’s easier to be root and not have to mess about for a day to make something work.
If you have a Raspberry Pi 2, you should try it for yourself. You will (just like I did) likely find out that it is MUCH easier to learn Linux and C++ than to do C# software development on Raspberry Pi IoT.
You miss the whole point of Win 10 IoT – the minute you need to learn a new development stack you’ve lost the purpose of Win 10 IoT. The point of Win 10 IoT is for EXISTING .NET developers to bring their experience to the burgeoning IoT market.
A current .net developer would find the migration to Linux/C++ to be about as hard as a current Linux/C++ developer (like you) would find the migration to .net.
If learning C++ and Linux is hard for a C# developer, how would you expect that C# developer to be able to learn how to program Win 10 IoT? It is not like programming for Windows. For Windows, you install Visual Studio, create a C# project from a template, then push F5 and start it – done, you are good to go adding your code. Win 10 IoT on the Raspberry is closer to setting up Unix on some weird hardware some 30 years ago. If a C# software engineer cannot learn Linux, how do you expect him to figure out Win 10 IoT? I am a Windows C# software engineer, I have a Raspberry Pi 2, I was able to figure out Raspbian software development in C++, but gave up on Win 10 IoT.
You’ve obviously not used Win 10 IoT because you literally have no idea what you’re talking about. You remotely develop, deploy, and debug your app from your PC but you can also SSH or Powershell into the Raspberry Pi 2 as needed.
Setup:
1) Install Visual Studio 2015 with Universal App Support.
2) Create Win 10 IoT Image and copy to microSD card.
3) Boot the Raspberry Pi 2 off the microSD card.
4) Create Project
5) Change build target from “Local Machine” to “Remote Machine” and provide the Raspberry Pi’s address and authentication credentials (output via HDMI on the Pi)
Development:
1) Develop hardware on Pi as you normally would.
2) Develop your App as you would normally in Visual Studio.
3) Press F5 in Visual Studio to deploy code and begin interactive debugging.
Here’s a pretty idiot-proof guide from Microsoft for blinking an LED on a Raspberry Pi 2.
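For reference, the core of that blink example fits in a handful of C# lines. A minimal sketch, assuming the UWP Windows.Devices.Gpio API and an LED wired to GPIO 5 (the pin number is an arbitrary choice, not from Microsoft’s guide):

```csharp
using System;
using Windows.Devices.Gpio;        // GPIO API exposed by Windows 10 IoT Core
using Windows.System.Threading;

public sealed class Blinker
{
    private GpioPin pin;
    private GpioPinValue value = GpioPinValue.High;

    public void Start()
    {
        // GetDefault() returns null on hardware without GPIO (e.g. a desktop PC)
        GpioController controller = GpioController.GetDefault();
        pin = controller.OpenPin(5);
        pin.Write(value);
        pin.SetDriveMode(GpioPinDriveMode.Output);

        // Toggle the LED every 500 ms on a thread-pool timer
        ThreadPoolTimer.CreatePeriodicTimer(_ =>
        {
            value = (value == GpioPinValue.High) ? GpioPinValue.Low : GpioPinValue.High;
            pin.Write(value);
        }, TimeSpan.FromMilliseconds(500));
    }
}
```

Call Start() from your app’s startup code and press F5 – the deploy/debug loop is exactly the “Remote Machine” flow described above.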
Petr, I was able to install Win 10 IoT on the Raspberry, but I had problems building and deploying the project from VS to the Pi. Maybe I should give it another try.
However, I was able to install Mono and MonoDevelop on Raspbian and blink the LED from C# – that was easy!
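For comparison, the Mono route needs no IoT-specific API at all – on Raspbian you can drive the (since-deprecated) sysfs GPIO interface from plain C#. A minimal sketch; the BCM pin number is an arbitrary choice, and the program needs root (or gpio group) permissions:

```csharp
using System;
using System.IO;
using System.Threading;

class SysfsBlink
{
    static void Main()
    {
        const string gpio = "/sys/class/gpio";
        // Export BCM pin 17 and configure it as an output
        File.WriteAllText(gpio + "/export", "17");
        Thread.Sleep(100);                            // give the kernel a moment
        File.WriteAllText(gpio + "/gpio17/direction", "out");

        for (int i = 0; i < 10; i++)
        {
            File.WriteAllText(gpio + "/gpio17/value", i % 2 == 0 ? "1" : "0");
            Thread.Sleep(500);
        }
        File.WriteAllText(gpio + "/unexport", "17");  // clean up
    }
}
```

Compile with mcs and run with mono – no Visual Studio required.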
Well, I would have to disagree, Ken. I have a background in software engineering and have written complete hardware drivers in C++ (and I have a bachelor’s degree in computer science, with a major in embedded software). I have also been running Linux systems since 1997.
I would never advise learning to program with a language that requires you to be that strict about memory allocation, deallocation, working with pointers, etc.
But instead of getting into a flame war about which programming language/OS you have to use, my main point was that this article comes off a bit negative about something that is a first step in the right direction.
As an example, this device supports AllJoyn out of the box, a protocol for home automation.
You can run Universal apps on it, which means that an app written for the desktop can also run on the Xbox, Microsoft Band, Windows Phone, and the Raspberry Pi 2.
You can do remote debugging out of the box, using tools that are IMHO part of one of the best IDEs available.
My point was that Win 10 IoT was created to leverage existing knowledge (.net) onto a new platform (Raspberry Pi), not to convert folks using Linux and asst. languages on Raspberry Pi to Win 10 IoT.
I agree.
As a (mainly) Microsoft programmer I was blown away when I first ran Win 10 on my Pi.
I instantly saw a world of inexpensive, easy-to-create possibilities.
I had a simple app built in minutes to get weather data from a web site and display it – see the sketch after this comment.
On the other hand, doing the same using another tool stack would be a huge learning curve for me.
… We all have our preferences and should not criticise others for having theirs.
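Something like the following is all that weather app needs. A minimal sketch, assuming a UWP page whose XAML defines a TextBlock named WeatherText; the URL is a placeholder, not a real weather service:

```csharp
using System;
using System.Net.Http;
using Windows.UI.Xaml.Controls;

public sealed partial class MainPage : Page
{
    public MainPage()
    {
        InitializeComponent();
        LoadWeather();
    }

    private async void LoadWeather()
    {
        using (var client = new HttpClient())
        {
            // example.com stands in for whatever endpoint the weather site exposes
            string body = await client.GetStringAsync("http://example.com/weather");
            WeatherText.Text = body;   // TextBlock defined in MainPage.xaml
        }
    }
}
```

The same code runs unchanged on the desktop, which is the whole Universal-app pitch.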
Your excitement parallels that of Linux users when the API was first announced – a powerful tool that leverages my current knowledge…
A leopard never changes its spots. If you are smart you can handle any program on any OS; intelligence is the ability to adapt to any environment, and if you can’t, you are not.
I have Apple and Android, and I have fun upsetting Apple supporters with Android and Android supporters with Apple.
My plan is to do the same with Linux and IoT.
So, you are coming from the Windows world and you say it was easier to learn an entirely new platform (Linux) than to leverage off your deep Windows knowledge and develop in the Windows IoT environment?
Odd.
I agree too. I hate programming for Linux and don’t care for Python, Ruby, or C++. I use .NET/C# at work, and I like programming in it.
I think the reviewer is a bit biased by a general dislike of MS.
Too many Windows programmers aren’t willing to try anything else. I bet you’re trained for it and good at it and all your routine challenges can be solved with it. But you could be better. It’s like language: you speak more than English? You can talk with more people. I have a friend who’s a good Windows programmer. I wish he could try an Arduino sometime soon.
This is BS. Not willing to try anything else… I have had Android, iOS, BlackBerry, and Windows Phone. I have had an Acorn Atom, Commodore VIC-20, Commodore 64, Commodore Amigas, Apples, a Sun UltraSPARC, an SGI Indy, and even Windows computers.
I have PIC programmers, an AVR Butterfly, an Arduino, and a Raspberry Pi, so for me trying something different clearly isn’t an issue.
And roger21 is exactly the target audience I am talking about: developers who already know .NET and want to get into tinkering. Learning a new programming language & OS & IDE is not the right way if there is an alternative that requires little more effort than switching the target of the application.
I’ve been programming for 30 years. I am more productive with C#, the full .NET stack, XAML, EF, Workflows, etc. It takes me about 10 times as long to get the same thing done in C++, and I programmed in C for 20 years before I started into C#. Java is like a bad version of C#: slightly less power, fewer features, longer-winded. The Android SDK is non-intuitive, like most of Google’s SDKs. MonoDevelop works pretty well on Raspbian, but it does not have the GUI capabilities that XAML provides, which is far superior (time saved, capabilities, flexibility, etc.) to any other existing GUI platform out there. Having the full .NET framework would come in handy in a kiosk-style application, and Win 10 IoT would be the perfect fit. But if the GUI is not important, Mono works just fine on Raspbian. That combination, if you spend the time to get proficient, would be the best way to go. Even embedded real-time apps would save a lot of development time with a .NET Micro layer or a Java layer (second-best choice), at least as an interface; only the core code, a minimal set, should be in C++. Reflection can do a lot of heavy lifting and checking, especially in communications – it cuts down development time more than anything else in client/server apps, and even in GUI or database apps. C++ just can’t compete in most areas. I’ve converted C++ to C#: the code becomes about 10 times smaller and the bugs just aren’t there.
You can develop C# on Linux. The MonoDevelop IDE is quite good these days.
Yes, this. I’ve been doing C# in Mono on my Pi since the original Pi B. It was really painless. I want to try Win10 IoT when I get some time. There are plenty of universal apps I want to test out on there.
Are you writing GUI apps and if so what API are you using for that?
I personally wrote a WinForms app and got it working on Mono… but that doesn’t work if you want it to run on Android too, unfortunately. :/
I’m not a big fan of GTK either… too much trying to shoehorn objects into a language not designed to support them.
It’s not disliking MS; it’s more disliking the clickie-clickie kids who think Windows is the only thing around and aren’t even willing to look at something else.
“Clickie Clickie kids” Thanks…. almost 40 here, and as you can see in earlier remarks clearly willing to try other stuff.
Instead of going into a spasm when seeing Microsoft – or as you most likely write it, Micro$oft – just see it for what it is: a device for an audience that previously might have seen getting started with IoT as too high a hurdle.
If I asked you to choose between two devices – one for which you know the IDE, the language, and the OS, and another that requires a new IDE, a new programming language, and a new OS – which one would you use if you just wanted to switch on the house lights using your smartphone?
Pssst…. Take a look at Lazarus:
Very powerful and easy to learn. 100% open source, and of course it already runs on the RasPi.
Born as a clone of Delphi, now it’s fully integrated into the Linux system. Also true native multiplatform, no VMs or emulation: the IDE itself and the generated code are 100% native; forget Eclipse and Java sluggishness.
And before anyone bitches about it being Pascal, please update your knowledge of the language: Pascal allowed pointers to arrays of structures of pointers to functions returning… etc. before most people here were even born. Just forget the Pascal you were taught at school.
Anyway it’s not about the language (in fact I would go for C if I could) but how the IDE is designed. It’s easy to grasp, almost self-explanatory, at least to start using it.
Give it a look, you won’t regret it.
Why should I forget the Pascal I was taught at school (university, really)? It was good; it had pointers and advanced data structures. At that time (1993) OOP was just starting to become known to advanced programmers, and universities HERE didn’t teach it, but Pascal was really good. Moreover, I didn’t have to learn a new language when I wanted to do more than a couple of blinky lights with a PIC (16F628) – I just started writing code in JAL (which is a kind of Pascal derivative). I’m really proud: my 41Hz.com AMP6 is still kicking, commanded by my PIC and code that understands my (old) TV remote, so I can turn it on, change volume, or change the input channel. All the codes were recorded to the EEPROM. Sadly I only used the RC5 lib, which also needed some debugging when used with the SPI lib (for the digital pots). Anyway, it was fun and productive. I’ll look at Lazarus – thanks for the heads up, and sorry for my English.
Well, I meant that for some reason there are people out there thinking that Pascal does not have pointers or cannot do pointer arithmetic and the like. Years ago when I was using Delphi extensively at work, some colleagues believed the above, even those who were taught Turbo Pascal at school. It was fun to show them how one can shoot himself in the foot with Pascal the same way he would using C :^)
Pascal is still alive?? Man, I wrote some cool software with Pascal back in the day – Borland Turbo Pascal for DOS and Pascal Lite for Mac. I wrote a pager app similar to T9 (before T9) for converting phone numbers into words, back when pagers were a thing. If only I had patented it, I could have been the one to develop T9 texting for phones.
Anyway, I’ll have to try this Lazarus. I kind of miss how Pascal wouldn’t let you do the stupid things C lets you do.
It’s alive and well. Maybe not the most beautiful or elegant, but it works. Back in the day I wrote network protocol translators on pure binary unmarshalled data coming from different architectures (XML? What XML?!? If you need serious speed just stay binary and pay attention to architecture endianness), so it’s very well suited for low-level stuff.
I wish they could extend the IDE and library to other languages too as Borland did back in the day with C++ Builder, but that would probably be asking too much from a free community project like Lazarus.
A lot of .NET programmers moved from Delphi or VB, so Lazarus is probably a good option for Windows guys who want to make the switch to writing GUIs for embedded Linux, ARM/RPi, and Windows CE while still supporting Windows 10 and staying in a familiar environment.
I have been writing a manufacturing system for Windows 7/10 for the past 2 months and found version 1.4 of Lazarus a very professional and stable package to write in. As a first for me, the touchscreen parts of the system will be hosted on Embedded Linux rather than Windows XP Embedded, certainly not encountering the problems I expected to have :-) Go Linux!
I use .NET/C# at work, and I like “programming” in it.
fixed it for you
According to your LinkedIn account, Mr Beragg, you have over time been paid by ATOS to develop programs that assess people’s claims for disability benefits here in the UK.
It is well known that the actions and conduct of your paymasters led to people committing suicide.
Perhaps you were just following orders, a bit like the one from Microsoft where they directed you to go forth and defend IoT!
Pay the piper, call the tune… Otto
Feeling creepy?
I hate AtoS as much as the next guy, but…
Don’t bring politics into a tech discussion.
Just don’t.
+1
Puts me right off Mr Beragg though. Same way as if someone’s CV mentioned database consulting work for Satan.
Yep. This article was written by someone with a specific preconceived expectation who feels like the victim of a bait-and-switch, but who isn’t savvy about ecosystems beyond their own. The article could be better. Go watch Hanselman’s robot demo on YouTube. It’s a good fit for making things, and a good first release.
I have a robot running on this using a Raspberry Pi and Windows 10 IoT Core – I did not waste my time.
More WiFi drivers would be great, but a $10 compatible WiFi dongle from eBay was not a major sacrifice either…
Who said anything about using Visual Basic? C# works fine by me, and I can control the robot from the internet plus view it on a 2MP old web cam that was lying in my drawer…
I wish I had seen this article sooner! I wasted an hour of my time setting it up and figuring out how to set up the f**ing WiFi – while in Raspbian it is just fully automated. I second this honest article: Windows IoT is crap – run, run fast.
Windows IoT is not for running desktop apps; it’s for running your own custom-written & compiled code.
Ken, N2VIP
For those of us who got used to making things work using the original Raspberry Pi, or even the Rasp2 running Linux, then yes, certainly the whole idea was dead on arrival. Ask anyone who still runs the command line on machines running the OS the original Windows replaced, and you’ll face the same problematic responses. For my part the jury is still out. It did come up on the boards named for a shuttlecraft and a philosopher; however, the response there was one of a hung jury as well.
Really, is this news to you? Windows 10 IoT is exactly what you say it is, but that has been known for a long time.
“While Windows 10 IoT Core is great for any company that has a lot of Visual Basic and other engineering debt, it’s not meant for hackers, makers, or anyone building something new.”
Or if you have a lot of Windows programmers.
I think you are missing the point of Windows 10 IoT.
It allows you to reuse a lot of your Windows development resources for embedded projects. You are also using the term technical debt when you should not; technical debt is when you fix bugs or even have to port to a new platform. Windows IoT allows you to use existing resources – programmers who know the Windows API and, frankly, Visual Studio.
I am a big fan of Linux, but this all seems to come down to saying that Windows 10 IoT is not Windows RT, which we all should have known; that Windows 10 IoT is not a desktop OS, again a given; that Windows 10 IoT is not embedded Linux. And finally, that Windows 10 IoT’s strength is that it leverages knowledge of the Windows API, Windows network management tools, and Windows development tools in an embedded environment.
Shocking….
I just hope the anti-Windows bias in the maker community doesn’t push out developers who decide to work with Windows IoT. Windows 7, Windows 8 (minus the Metro UI), and Windows 10 don’t suck anymore, so get over it.
BTW
I use a MacBook running OS X at home, an Android phone, and Windows at work writing code for Windows, Linux (running in a VM for an ARM-based embedded device), and a Cortex-M4 embedded system. I like all tech and use most of it.
Wow dude, have you ever even run any kind of Windows? Or seen the countless issues W10 (desktop) has, as mentioned by users on many forums as they seek solutions? And have you read people who specialize in Windows and really embrace it with all their heart advise people to wait to update to it until the bugs are out?
Hell, initially with that NVIDIA forced buggy driver update issue, people could not even boot their systems, but I’m sure ‘getting over it’ will be just as good as having a working computer.
And as far as I can tell, Windows 10 is just your good old UI-tweaked W7 with forced assimilation into MS internet services – almost a ‘Windows Borg™’.
You should probably read more.
They wrote a new OS more or less from scratch. It’s been out officially for, what, 3 weeks, and you expect it to be perfect? Did you ever program something that complex? I think not. Don’t get me wrong, I updated my machine and now have to suffer the consequences like everybody else, but going from a few thousand beta testers to a full-fledged multimillion customer base with almost infinite hardware and software combinations is not an easy feat, so some problems are to be expected. Same goes for W10 IoT: it’s new, it’s fresh off the press, and Microsoft has been focused on desktop for years, so it’s something a bit new for them. For my part I’m gonna stick with W10 and see how everything develops…
MS always claims they ‘wrote it from scratch’ and ‘worked on it for 10 years’ and it’s always a very transparent lie.
And a big company should test things before release. Although it has to be said that it’s not MS’s fault that a million people sign up to get the test version and maybe 5 of them will actually report bugs, so it’s fighting an uphill battle really.
Anyway, it’s a complex system, bugs are to be expected. But the issue I have is people online spouting it’s flawless.
We are talking about Win 10 IoT, not Win 10 desktop, right?
The only complaining I hear about Win 10 IoT is about its limited feature set, not its functionality…
Actually no, we veered toward Windows 10 desktop, which I specified was what my reply was about.
But it’s a good point that we should stay on topic.
Right, because no one has ever released a buggy version of Linux or iOS.
I don’t think anyone has EVER released a bug-free version of Linux, tbh.
No software is bug free. Not even Windows.
Wtf are you on about? We’re not in 2001 ME times anymore.
I’m a windows user is what.
That is not true. Just today I installed Windows 10 on a Compaq machine with just 2GB of memory, and it not only detected everything, including biometrics and pen, but everything runs perfectly fine. This was the 16th machine I upgraded, and I have not experienced anything like what you are saying. Just FYI, there are already 50 million installations in just 2 weeks. I can understand you are anti-Microsoft, but stop spreading the FUD just like most anti-Microsoft sites are doing.
Cannot agree more… free flaming is not what we are used to seeing here, and the write-up seems a bit slanted!
Windows and Microsoft have changed their ways recently, and that may not have been noticed by everyone; they are now more open and transparent, porting some things into the open source domain, which should in itself be an argument to consider their potential value in embedded and open source…
Add to this that .NET 5 is in the works, and that you could cover every device just by changing the view – this is a nice thing to keep an eye on!
Microsoft is trying to get its foot in the door as far as the maker crowd is concerned, but I would not call this “open source.”
They are putting themselves on open source platforms, kind of like releasing the stripped-down, memory-limited “Starter” version of Windows 7 to compete with Linux on netbooks that they and Intel intentionally hobbled.
You will never see any source code for Windows IoT, and unlike with open source OSes, until Microsoft adds support for the rest of the hardware, no one else will be able to.
I think Windows 10 IoT will be great for working with MS technologies and embedded devices. It is common in industry, and now anyone can afford to try it.
They are certainly experimenting with new business models and paradigms, sure, but they are not suddenly not MS anymore. And the core of W10 is many previous versions of Windows.
And a lot of the new stuff they are trying is not to my taste, but that’s just me.
You sounded very sensible – until you said Windows 8 does not still suck.
I agree with you about Win 10 IoT. All of the article’s complaints are complaints about the nature of embedded devices and applications. The article does bring up good points about hardware support. Obviously Microsoft, unfortunately as usual, is late to the game with something pretty interesting yet not quite done. I am still looking forward to working with it in any case. I work and teach in electronics/networking/industrial automation, and .NET is the usual way we interface industrial controllers with PCs and networks (mainly because of manufacturer support).
Yes, most embedded Linux does not use a desktop GUI either; when a box isn’t being used as a general purpose computer, why waste resources and power?
Windows 8.1 does not suck except for the Metro UI on a desktop. The OS under the UI is actually pretty nice. Notice that Windows 10 drops that dumb Metro on devices without touch screens, like… most PCs…
I do hope we will see a DDK for Windows IoT sooner rather than later. The lack of the UART is pretty bad as well, but it is really new.
It is a new “free as in beer” platform with a great set of development tools. The negativity I see aimed at it smacks of bias IMHO. I doubt I will do anything with it since I am not a .NET guy, but that does not make it worthless.
“While Windows 10 IoT Core is great for any company that has a lot of Visual Basic and other engineering debt, it’s not meant for hackers, makers, or anyone building something new.”
I guess assembler is the best choice.
Not sure if this is sarcasm or not, could you clarify?
Most of my IoT applications are implemented mainly in ASM, at least until they hit a web server somewhere. It never seems like a big deal, although I sometimes get funny looks from people.
I’m just happy when people make new things. ATMs and POS terminals are not new things. Even if I’m not a huge fan of the Microsoft ecology, I hope it gets subverted for something fun.
I see the opportunity for discarded POS terminals of the future to become a source of useful hardware, for example.
Unfortunately, old POS terminals running CE or embedded Windows probably won’t be useful for hacking. Look at all of the used thin client hardware out there – just a lot of junk, as the manufacturers have zero interest in anyone ever using the hardware for running anything other than their customized images. The hardware looks really interesting, but with so much computing power available for cheap with boards like the Pi, BeagleBone, etc., they just do not seem worth messing with.
People who make fun of assembly just do not understand that applications still require it. I know an engineer who is an expert on embedded motion, and for many aerospace embedded hardware projects they will not trust/use anything else. C++ of course for higher level projects, but for low level brushless motor drives and such assembly is used. You use the best tool for the application, sometimes not the favorite one.
especially if the new versions are locked up with UEFI, which they likely will be.
Yes and no.
Discarding an M$ thing just because it’s M$ is childish. Last week I was playing with VS 2015 to write a simple WP8.1 application. This was much easier than developing Android software. Additionally, I was able to deploy it to desktop Windows without any code change. What’s more, I guess the same code might be used in an IoT environment and a phone application to control Things.
IoT is a catchy topic, but I see a huge problem: who cares about trust and privacy? In my opinion Win10 IoT may give us a framework which will allow us to simplify such things. With the Cloud9 IDE you can only write a simple blinking-LED program.
From my point of view, a true “hardware hacker” has to know not only how to use assembler but also how to understand a platform where a lot of unpredictable external events may come from outside.
And one more thing: each project has to meet certain requirements, and these requirements force us to use specific technology.
Good writeup, Brian – very clear (and not anti-Windows, actually) but realistic about what you get, and even making room for possible future expansion/updates.
It’s good to have users be informed so they know what to expect and can use it where it’s usable.
P.S. It’s just a pity it’s about Windows, so you get the insane as well as the paid-for nonsense comments taking over. But sometimes it’s not about the comments, it’s about getting an article out.
I’m confused about the HDMI connector… I understand there’s no “shell”… but can you create apps on a PC, put them on the Pi, and use the Pi’s HDMI that way? Or does the Pi’s HDMI only ever show the network config?
It shows the interface for your application. Good for purpose-built installations (avoids the embarrassing ATM/vending machine showing a desktop issue). This article shows an example.
Well, that’s pretty cool, then. Thank you.
… doesn’t quite seem like you could do an ATM, but everyone says it’s great for ATMs.
Go read about what W10 IoT can do. It’s an arbitrary display, but it’s a single application arbitrary display.
Think of it less as a full OS and more as a framework for running an application. The default output for HDMI is to display some debug information, but that is more of a ‘hello world’ demonstration, like a sketch baked onto an Arduino, than anything else.
So, you would write an ATM application that has a GUI, and when running it would display that GUI to the user. And nothing else. Something hard faults? Reboot the whole thing and launch back into the app. It’d be great for what it was designed for: single-application, embedded, connected-to-the-internet things.
Windows 10 Embedded would be great for an ATM, Windows 10 IoT would be great for your dishwasher, Windows 10 Desktop would be great for general purpose computing…
Those are three distinct versions of Windows 10.
Besides the UART thing I can’t see anything terribly wrong with it. It’s a different approach. This is not hardware programming like you would expect for an embedded device, but more PC-like programming with an abstraction layer in between. I’m not sure about the performance, but I think it’s worth having a look.
I would like to read a review about what the framework supports and how well applications (e.g. in C#) perform.
And please avoid sentences like:
“the only investment you’ll make in trying out Windows 10 IoT Core is your time. It’s not worth it.”
Because it’s our decision if it’s worth it ;)
He’s not holding a gun to your head, and when someone says something is not worth it, it’s always assumed that is the author’s view.
As for your ‘only the UART thing’ – how about the lack of drivers for WiFi dongles? The lack of audio? Those seem like good ways to use a Raspi.
The use is limited, the article outlines the limitations but leaves the decision to use it to you..
Think of what people use the Raspi for right now, and then you realize 99% of those uses would not be possible with this W10 IoT.
But perhaps it opens it up to things not yet thought of, though. Like in photography when you only use a single fixed wide-angle lens: a limitation can force people into new creativity.
It is a very new version so I HOPE that there will be more drivers etc. I’m with you, the current state is… suboptimal but I think this will change.
It also underlines the reason why Linux is massively superior to any other OS for embedded devices (what IoT used to be called before the marketing people got involved).
I have zero driver issues with hardware under Linux. Windows CE/Embedded and QNX… very little supported hardware.
I’m glad I’m finally able to create apps for a cheap embedded platform using mature, fully functional tools (like VS).
I’ll take being able to remotely deploy and debug embedded apps to a no-setup embedded device over having to fiddle around with Linux, SSH, Python and using print statements to debug any day…
+1
I’d rather not confuse the moon with the pointing hand. Look forward to seeing what you build and share though!
To each his own I suppose.
There is also NETMF which has been out for about 5 years.
Windows lost me a long, long time ago when they “forgot” C++ & GUI…
Use wxWidgets + Boost + Code::Blocks and you will easily be able to build your apps on both Windows and Linux
(Raspberry… Parallella… whatever…)
Frankly, I think it’s not as bad as this article paints it. The way I see it, MS just put out a first glimpse of the basic IoT core, not a finished turnkey solution. I’m pretty sure the driver and low-level hardware support will follow soon. They probably just wanted to get some public feedback to shift the focus of upcoming development away from internal roadmaps to the clients’/customers’ most pressing needs. And that was not a bad move IMHO, as long as the feedback stays constructive and isn’t just an occasion for a new MS bash.
Ahhh… Did you stop and ask WHY the pins are reserved? Personally, I can’t find an answer, but the other two reserved pins are reserved specifically because they cannot be used for GPIO (according to the release notes): they are to be used for ID_SC and ID_SD only, to support HATs. The same may be true of the UART pins.
It’s called “Windows 10 IoT” where IoT means ‘Internet of Things’… Things like toasters, thermostats, smart sprinkler controllers, etc.
It’s a pity so many folks (the author of this article included) forgot or never knew what IoT actually meant and expected a full desktop/end-user experience, presumably similar to Windows 10 desktop or phone versions.
That a Realtek x86 WiFi driver won’t work is also hardly surprising, since the Raspberry Pi uses a non-x86 CPU.
MS never promised you a desktop-like OS for your Raspberry Pi, it was uninformed internet bloggers that failed to understand what MS was offering and set expectations too high.
It makes no sense to get mad at MS for delivering EXACTLY what they said they would, get mad at the ‘journalists’ that acted like they knew what they were talking about.
I completely agree, and I also wonder if this review wasn’t premature anyway. I mean, the new tendency in software releases is unfortunately to ship barely functioning software and fix/improve/enhance it afterwards. Who knows what future releases will bring?
+1
Although very annoying because of the many updates, it can drive the product to be exactly what the customers wanted (if a suitable feedback channel is in place).
I think the review is way too negative and does not understand what Microsoft is doing here.
Microsoft is not turning the Raspberry Pi 2 into a Windows PC and that is a good thing because we already have Windows PCs that work fine and better than a Raspberry Pi can possibly work without a lot more RAM, a disk interface and so on.
What Microsoft is doing here is to turn the Raspberry Pi 2 into an Arduino but without the memory and CPU limits that would entail. And that is exactly what you want if you are going to do embedded device development.
My hero-size Dalek prop has 7 motors internally moving the dome, eyestalk, arms (2×2), and plunger. I want to have the whole device under Ethernet control. So with this kit and an RPi2 I can very easily develop the necessary code in Visual Studio and C#.
I do not want to develop code on the RPi2 and I don’t particularly want a lot of ‘device drivers’. In fact the less going on in the kernel, the better.
MS has been doing embedded for decades – Windows CE? Supporting most of the Win32 API, it was a breeze to port to. I love it, though, and don’t understand the hate it seems to get…
Seconded. I’ve been using Windows CE in various projects; one is still running 3.0 today in production machines we still sell. It’s never BSOD’d, crashed, or failed. It has been 100% reliable and simple to use – so reliable we have never bothered to upgrade the OS. It runs on SH4 hardware too, just for fun, and boots in 3 seconds to the app splash screen from a cold start. And that was from back in 2002. The same app recompiles for x86 Win32, x86 WinCE, ARM, etc. with only a few #ifdefs in the main header. It’s a shame all most people see of WinCE is grossly underpowered sat-nav units with even more craptastic application software.
This was always going to be a replacement for the embedded platforms – Windows CE and Windows Mobile – rather than a desktop and Windows Phone replacement. To me it looks like a fantastic step forward, as those platforms are a little creaky these days, to say the least.
I don’t think Win 10 IoT replaces embedded Windows (like CE); I think it augments it. There was a Win 8.1 embedded edition, and I suspect there is/will be one for Win 10 as well – it is used for POS terminals, kiosks, and other applications.
Happy to wait and see. Would love the industrial handhelds and similar embedded systems to move over to something like this.
This article is somewhat biased, but it’s worth noting that MS is also putting a lot of effort into the .NET Micro Framework and Gadgeteer, which is incredibly good and perfect for the maker world. Especially if you a) want to debug what’s running on the STM32 live, b) are a .NET developer, c) like Visual Studio, or d) are just getting into embedded systems, etc. Also, the Community versions of VS2013 and 2015 are free too, which is phenomenal. All this anti-MS BS is just a pissing contest. Every OS has its problems; none of them is the best.
Sounds good to me, if I were my Windows programmer friend. There are lots of Windows/non-Windows programmers out there, and most know nothing about hacking or electronics beyond a digital electronics course or two back in college, and they are apprehensive about the concept of having to wire something in the actual physical world. This platform, if it is as good as you envision it to be, could open doors to lots of THEM, so soon enough they can join the broader community and contribute.
I am a Visual Studio developer and was waiting for something like this to start making cool hardware. I don’t want to learn new software to make the home automation stuff I need. Hoping this is the real thing.
You don’t have to learn new software, you can start using NETMF now.
In my experience every developer has a language of choice, and every multi-lingual person I know has a primary language that they think in. From my perspective all c syntax languages feel like the Latin languages. I can read them, but I trip over things like string handling in pure c that I don’t even think about in c#.
I don’t really enjoy learning new c subsets or derivatives because for every “Hey cool!” feature there’s a “now how the hell do I…” question. Same feeling as I get while I’m trying to beat Italian into my French-speaking brain. I prefer to learn things that look totally different (as much as anything really does these days) so that my instincts and habits don’t get confused. Windows IoT is fine, but it is an initial offering. Saying “It will never support” anything is pretty foolish, just look at how much the .Net framework has changed from 1.0 to 4.0.
I can’t blame MS for wanting to test the market before throwing billions at something. For every Xbox there’s the possibility of a Zune
Ha! I think ZunePi would be a great application to build based on Win10 IoT. And I say this without any sarcasm.
Zune was really nice imho.
Nice write-up. I did something similar and have put my thoughts here:
But to summarise. Better than the insider preview but still not as good as Raspbian.
Microsoft was not aiming to compete with Raspbian – you might as well have said ‘not as good as OS X or Windows 7’… They never intended to release an end-user desktop OS equivalent; they released an OS for automating ‘things’ to make, you know, an ‘Internet of Things’, one that builds on the tens of millions of Windows programmers’ current knowledge of the Windows development stack and languages.
(Why would they spend money to create a free OS to compete with another free OS?)
“It’s not worth your time unless you have a burning desire to write apps for Windows, and even then you could do a better job with less effort with any Linux distro.”
Brian you are making less and less sense. This whole post could really use some re-write after you drink your coffee or get a better sleep.
“You do get a few options for language and network settings, and there are a few tutorials and examples – connecting to Visual Studio and blinking an LED – but that’s it.”
There are several samples; here is a link:
“No, you don’t need a keyboard or mouse; there’s very little you can actually do with the Pi.”
There is no shell because you are supposed to write your own interface. I suppose this is to address things like the Automatic Updates window appearing on your vending machine or ATM. According to MS,
“For devices with screens, Windows 10 IoT Core does not have a Windows shell experience; instead you can write a Universal Windows app that is the interface and “personality” for your device.”
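In practice, that means the app you deploy is the shell. A minimal sketch of the UWP entry point – essentially the standard Visual Studio template code, nothing IoT-specific:

```csharp
using Windows.ApplicationModel.Activation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

sealed partial class App : Application
{
    protected override void OnLaunched(LaunchActivatedEventArgs e)
    {
        // With no Windows shell underneath, this Frame *is* the whole screen:
        // whatever page it shows becomes the device's entire "personality".
        var rootFrame = new Frame();
        rootFrame.Navigate(typeof(MainPage), e.Arguments);
        Window.Current.Content = rootFrame;
        Window.Current.Activate();
    }
}
```

Everything the desktop shell normally provides – recovery, chrome, a place to fall back to – is your app’s problem.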
Windows why ever…
OS/2 is Da Bomb!! Only seconded by BeOS… the rest is shit!! #obvioustrollisobvious
Yet another anti-Windows/anti-Microsoft article. Windows 10 IoT Core is first generation, and yes, it doesn’t have all of the Windows features. The author also overlooked the other I/O supported: Windows 10 IoT Core on the MinnowBoard Max supports I2C, UART, SPI, and GPIO, and the Edimax WiFi works fine with the MinnowBoard Max. I2C, SPI, and GPIO are available on the RPi2. There are several examples available to get started, so there is more than just blinking lights and HDMI. As with the other responses, the author needs to do better homework before writing a product-bashing article.
I don’t think the author knows what you’re talking about here, since you mentioned I2C, UART, SPI, and GPIO…
;)
“A Linux system will almost always have better hardware support, and this is especially true on embedded devices.”
I had to stop and marvel at that sentence. It wasn’t that long ago that hardware compatibility was a strong reason to not install Linux, and now it’s just assumed it’ll run on any computer you’d care to build. We’ve come a long way, and it’s nice to see that hardware compatibility is actually a selling point for Linux over Windows now (at least some of the time).
Over Windows 10 IoT, not Desktop Windows – Windows 10 on the Desktop has better hardware support than Linux.
Supported architectures on Linux:
Supported on Windows: x86, x86_64, (now) RPi/ARM.
When people typically discuss ‘hardware compatibility’ in Windows vs. Linux, they are discussing peripherals (video, RAID, networking, and so on), not system architectures (Intel x86, PowerPC, Alpha, SPARC, ARM, etc.).
I was dumbfounded when I didn’t have any problems whatsoever with a USB-serial driver on Linux, when I needed admin access to install it on Windows… normally it’s the other way around, lol.
Way to go, Brian. This is another brain-damaged locked software system, which MS can drop any time it feels like it (XP). And then many hands will be busy rewriting software for the newest and greatest OS from MS, to blink a few LEDs. Instead, I can use Linux, BSD, DOS… I can still use QBasic (dropped by MS) programs from the 1980s to test stepper motors on the parallel port, while it is difficult to run even VB5 programs on new Windows. It is planned obsolescence, resulting in hundreds of millions of PCs going to landfill because they “can’t run Firefox any more”. You are not fooling me.
Mate, XP came out in 2001, get a grip. You can’t support obsolete software for ever.
MS released Vista, Windows 7, and Windows 8 before ‘killing off’ Win XP…
Does Apple still support a 14-year-old OS?
Debian? Red Hat? Ubuntu?
Win XP programs still work on Win 10, AFAIK.
I rarely use XP, except to run software written by MS aficionados that “does not work any more”, and a few old games. Same with the OS 9/10 crap – thousands of dollars had to be spent to recreate essentially the same testing system. Meanwhile, old QBasic programs still run fine in DOS, and old C programs run fine on Linux. Old Python seems OK as well, but I dislike the v3 fork that is being rammed down v2 throats.
“Common sense would dictate that you could install the Windows driver for the Realtek chipset, but this is not the case; no Windows driver will ever work with Windows 10 IoT core.”
This is a partially incorrect statement. Existing Windows drivers may not be compatible with the new architecture, but anything developed for “Windows OneCore” will work across all Windows 10 devices, including IoT.
Information on driver development.
Seriously? POS terminals and ATMs don’t run stock Windows XP, they use the POSReady version which has support all the way through 2019. Companies aren’t using desktop Windows XP, good god, do your fucking homework.
Thank you for telling me what is good for me. That way I don’t have to think for myself.
Just as a side note, the EW-7811Un may be easy to use on the Pi, but it doesn’t support monitor mode. Had I known this, I’d have bought a different adapter.
So it’s a bit like ye olde MS-DOS 1.0 circa 1981. You boot up and all there is, is this >_
You have no clue what to do next. If you do know what to do, there’s very little DOS on its own can do. You must install some programs that do things.
Here’s Win 10 IoT for the Pi2. You boot up and all you get is this information screen. Well duh, you have to install and run a program that does something, just like good old DOS. Like in the earliest days of MS-DOS there are very few programs for Win 10 IoT for the Pi2.
Give it time, people will write all kinds of programs for it.
People WILL write all kinds of programs … and then the platform WILL be obsoleted because it does not make enough money for M$ any more. It always happens that way. And then we will be chasing another MS wonder?
Right, they should support unprofitable/unpopular platforms forever, just like Apple and the Linux community does.
Why won’t apple support my eMate?
Why won’t the latest Linux run on my SPARC-based Sun workstation?
Such a well thought out, and carefully constructed article.
Here is one example.
“A similar situation of engineering tradeoffs is the reason for the lack of UART support.”
I don’t know what that sentence means. However, it does seem to have all the erudition you’d expect from someone wiping drool and Cheeto droppings off their keyboard while writing it.
Please.
Now say it with me.
Please.
+1
The fact that this bucket of slop is being sold to makers is a joke.
This thing…
…does not qualify as an operating system (not one capable of hosting its own development at least, or capable of being useful by itself).
…is a shame to even be named after a “real” operating system like 10 (and for me the jury’s still out on whether 10 is a *real* OS) or Windows in general.
…is exactly what you pay for (utterly useless as filler for a Raspi’s SD card; guess there’s no such thing as a free lunch).
Honestly, from a practical standpoint, this thing is not much better than a USB adapter (missing critical things like sound, networking, and UART) that I plug into my computer over the network. It is an extension of my computer, not a computer in its own right. The fact that I can remotely load processes into it to untether it from my computer hardly makes up for that, as at that point it’s almost like an Arduino with more processing power but shitty I/O.
Microsoft could have made another killer app for Win10, as well as another strong platform (besides WinPhone, Xbox, and PC), but I guess they gave us exactly what we’re expected to pay for it: nothing.
Microsoft, you dun fckd up,
-M
(Dammit, and I was really hyped for plugging in a monitor and a few peripherals and getting an at least Windows-like experience. It was a main justification for finally getting around to picking up a Pi.)
“This thing…does not qualify as an operating system (not one capable of hosting it’s own development at least, or capable of being useful by itself)”
Is that how we measure ’embedded’ operating systems? If they can host their own development stack?
Define ‘useful by itself’ – does that mean include a browser? Includes a copy of ‘solitaire’?
Kids these days…
Yeah, developing on it is a requirement… how about a shell? Worthless. Windows just needs too much memory to be useful on these larger embedded systems.
Wow.
An OS designed for an appliance with a few buttons and a temperature display *needs* to support development on the embedded controller AND support a user shell, or it isn’t a true ’embedded operating system’?!
In other words, I must be able to shell into my thermostat and be able recompile the code for that thermostat *on* the thermostat, or it’s not a ‘real’ embedded OS?
We’re just going to have to agree to disagree on that one.
Goddamnit, Ken. If this is the best MS can do, then what’s the point? Go back and stay back in the markets MS dominates, and stop trying to go mobile and embedded. I like running large toolchains, generally best supported by MS, to program microcontrollers with flash and every peripheral you’d ever need. I said LARGE embedded; real embedded doesn’t even run an RTOS and is just a looping program waiting for input and toggling lines. On just Raspbian, I can run from the command line or launch a GUI, and have access to audio, graphics, Ethernet, USB, and multiple pins of assorted GPIO – that’s the embedded part. With shell, browser, text editor… so much, and it’s free, customizable, and works quite well. Take some hints, MS (look at Atmel, TI, Freescale less, and Microchip even less, getting into the hobbyist market; if some random project takes off and everyone wants one, those companies can sit back and sell a bunch of those chips).
Raspbian is a desktop OS that runs on a desktop replacement system that hackers choose to embed inside a hardware project.
(Remember how the Raspberry Pi was going to ‘revolutionize’ education by replacing computers that cost several hundred dollars with Pis that ran Linux and ‘only’ needed a Hi-Def display, keyboard, mouse, power supply and possibly a wifi adapter? That’s why I call it a desktop replacement system.)
You can argue that the Raspberry Pi is an embeddable device, but you can’t argue Raspbian is an embedded OS in any classic sense of the term… How many office automation apps and games does it include?
The main limitation of IoT I’ve heard is that it doesn’t satisfy needs it was never designed to address.
Wow, the fanbois are out in force.
From all sides :)
Not surprising about the lack of hardware/driver support. Existing desktop Windows drivers were never designed to be ported to a very different architecture. They’re starting from zero.
Eventually I bet Microsoft is going to push for a different driver system across all their platforms, with most drivers written in .NET.
This is their model that covers all devices: PC, phone, IoT, etc.
C’mon guys, we all knew Windows on a Pi would absolutely blow. This is no surprise, even if the writer is massively biased. The real letdown is Ubuntu 14.04 Trusty on the Pi. Two days and I still can’t get the frigging WiFi to work!
Lol, I couldn’t get my weird WiFi dongle to work on a BeagleBone with Ångström; I feel your pain. Raspbian on the Pi, though, piece of cake. Love it.
Two things.
Firstly, has nobody realised that this is Windows on ARM? It’s not like you can drop Microsoft Word on here and expect it to run. Windows Embedded has long been a corporate offering, so there is little to no community involvement, and pretty much all programs written for Windows are compiled for x86. Linux, on the other hand, has a flourishing community around ARM and nearly the full package repository.
There is no ecosystem of ARM binaries for Windows, so the need to compile your own should have been expected. I’m not being a Linux fanboy, just pointing out that Microsoft has only just embarked on this particular journey and they have a long way to go yet.
Secondly, point one aside, I feel like Microsoft has arrived at the party (late) and is this big awkward dude who’s very happy to be there but not exactly sure what to do with himself. And he has no friends.
What’s in Windows phones? Aren’t they ARM? I’m positive they aren’t Intel x86 compatible.
I think it’s safe to say that on HaD we are all completely aware it’s Windows for ARM, and we already knew that in advance.
The issue is that when they announced W10 for the Raspi, although we deduced it had its limitations, we were still overly optimistic about how much it could do on the Raspi.
And how about the missing System.Drawing namespace? You can’t run anything that uses Image or Bitmap objects, so no image processing, no image generation! I asked them about it several days ago and no response.
More than this, I found they lied by posting a project that can’t be completed because of this missing namespace! Still no response!
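For what it’s worth, the UWP stack does expose raw pixel access through WriteableBitmap instead of System.Drawing – whether that covers a given project’s image processing needs is another question. A minimal sketch that fills a tiny bitmap with opaque red, assuming the standard BGRA8 pixel layout:

```csharp
using System.IO;
using System.Runtime.InteropServices.WindowsRuntime; // for the AsStream() extension
using Windows.UI.Xaml.Media.Imaging;

public static class PixelDemo
{
    public static WriteableBitmap MakeRedSquare()
    {
        var bitmap = new WriteableBitmap(2, 2);
        using (Stream pixels = bitmap.PixelBuffer.AsStream())
        {
            for (int i = 0; i < bitmap.PixelWidth * bitmap.PixelHeight; i++)
            {
                // One pixel = 4 bytes: Blue, Green, Red, Alpha
                pixels.Write(new byte[] { 0x00, 0x00, 0xFF, 0xFF }, 0, 4);
            }
        }
        bitmap.Invalidate(); // tell the framework the pixel buffer changed
        return bitmap;
    }
}
```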
Microsoft Windows 10 with NSA Inside technology…
Windows 10 on a Raspberry Pi really cripples it compared with running Linux. I have listened to the arguments about .NET and it being easier for a Windows person to start doing things, but I think it is the duty of the community to encourage those less fortunate Windows people to learn something (a lot) that will no doubt benefit them in the future. .NET vs. C++ – whatever; Linux can run .NET code via Mono if need be. They would also learn about the wonderful world of FOSS, and that is perhaps the thing that would really open their eyes, having sat through Microsoft training and suffered the perpetual evangelisation which has brainwashed most of them.
Bottom line is that the Windows guys can now run something Microsoft on a Raspberry Pi, but the truth is that when they explore its capabilities running Windows 10 compared with running Linux, they are always disappointed.
Yes, a limited-function OS (Windows 10 IoT) offers users less functionality than a full-function desktop OS (Raspbian)…
Thank you Captain Obvious!
For those who have followed the earlier versions of Windows, the term “Core” is used for the GUI-less versions of the OS. I’m also not surprised by the lack of hardware support, as all the other drivers will have been written for Intel, not ARM, so they will require some porting. Having spent a long time coding C# using Visual Studio, I’m all up for a platform that will support that, but I’d agree with Brian that the Raspberry Pi is not really the right platform for flashing an LED.
According to many here, doing anything ‘embedded’ requires a microcontroller and OS that supports HD user displays and can run the entire development stack locally… Anything less is not an ‘Operating System’ worthy of even turning an LED on or off.
Basically, all I got out of this article is that Win10 IoT is a huge letdown to the writer because they expected the product to be what they wanted it to be, and not what Microsoft intended and indicated it would be. That’s really it!
+1
It’s funny how many folks here judge an OS by its ability to host its development stack.
Just curious, is iOS an ‘operating system’? I ask because my iPad can’t host its own development stack…
I think Brian is a fair and unbiased writer, but I think what most people here are not familiar with is that this is not the first Microsoft product for makers. They released the .NET Micro Framework and Gadgeteer roughly 5 years ago. It gave me, a software guy, a very easy entry point into embedded devices and hardware. And since then I have been using Arduino, Raspberry Pi, etc. Instead of hating a product because of the company producing it, let’s think about the many software developers that will now be introduced to the maker community and embedded devices. I would imagine that most will be like me and venture into new technologies because of this entry point.
+1
Not to worry; anybody who keeps informed doesn’t bother hating MS, since they are just the same as all other companies – bad, but no more than the rest, really.
But hating financed comments and delusional fanbois; that’s another matter.
I have read all the way down here to find this comment (and I am a linux fan boy).
+1
“All I wanted was a Windows 10 version of ChromeOS, but what I got was firmware for a light switch.”
Which part of ‘Internet of THINGS’ (emphasis added) made you think they were releasing a ‘Windows 10 version of ChromeOS’?
IoT IS not a desktop OS, no one said it was, and there is no reason to compare it to other desktop OSes, like Linux/Raspbian.
And yes, Raspbian *is* a desktop OS - here's a tip: if it ships with end-user games, a browser, and office-suite applications, it is NOT an embedded OS. It may be a desktop OS that provides a great level of control over the hardware, but it is not an embedded OS.
+1
I do not have a degree in CS, but I do have 50 hours in computing in addition to my BBA degree. I developed Windows applications in Visual Basic for a major retailer where I worked for five years. We do not get any hardware unless it can run an open OS. I have also run Linux since the 1990s; I started with Novell and have managed Windows OSes up through Win2k3 systems. I dislike developing software for Windows: the code is not as portable to me as on BSD or Linux. I might experiment with W10 only because of this article, but it will probably be replaced fairly quickly. Give me the freedom of open source.
“Yes, Microsoft is finally moving away from the desktop”
I can’t imagine how many times I read that line since this article went up, but it just struck me how ignorant that statement is…
If taken literally, it leads one to believe MS is abandoning the desktop – that is clearly not the case.
Giving the author a bit of room for interpretation, it implies that MS is expanding its offerings to include platforms other than the 'desktop' - and that is also clearly not the case: MS sells server OSes & applications, they have an interesting sideline in console gaming, and they have offered Windows on phones for years.
What Microsoft is doing is expanding the number of platforms that a Windows developer can develop Windows applications for… It isn't 'moving away' from anything.
In my view, for embedded systems, the UART isn't all that necessary. I2C and SPI are clearly required, because the Raspberry Pi can't do any kind of 'embedded' IO by itself: no analog input, only a single PWM output, and so on. The interface to the real hardware (motors, rotors and dynamos) will be by way of external chips connected to either SPI or I2C.
Further, Linux is a poor choice for a real-time OS. Yes, there's a hack around somewhere, but if you want µs response you had better be dealing directly with the IO. Not like Arduino, because most of the Atmel-based boards are too slow, but more like the mbed board.
I’m going to play with IoT just to see what it does. The fact that it doesn’t have a UART doesn’t bother me at all.
Okay let me jump in as well.
I have 15 years' experience as a .NET developer, and for the last 3 years I've been involved in R&D doing firmware development using pure C for ARM Cortex devices - M0, M3 and M4 (and I love it! better than .NET for obvious reasons). This is a bold step from Microsoft; however, once again they got it wrong!
Firstly, this operating system is nothing new: it's a skimmed version of the Windows Phone OS running the "failed" Micro Framework, which never took off the way it was intended to. They are bundling two failed products under the Windows 10 marketing umbrella.
Another point is this: how does a developer that comes from a garbage-collected .NET world write software for the Internet of Things, or modify Windows 10 IoT "C" drivers? Say a typical .NET developer now has to modify the protocol piping data from a Bluetooth receiver; most .NET developers were isolated from this for years, and now they're faced with the realities of the real world - memory management, pointers ;-/ Linux guys are naturally at home with this.
Seriously, this is still embedded electronics, where C code is running sensors, RF transceivers are communicating using some seriously optimised code, and sensors have internal DSPs - to most .NET developers this is voodoo.
IOT = “energy optimisation”, “memory optimisation”, “flexibility” this is NOT a freaking enterprise.
If anyone is serious enough to build a REAL IOT server, they would want to deploy their apps using a range of stacks - for example Node.js, Python scripts (to perform data processing), a Mongo DB or SQLite for storage, and a small web server with HTML5/JavaScript capabilities for an interactive user dashpanel.
Windows IOT does not offer this. Ubuntu MATE or some other Linux distro can easily perform these tasks on a Pi.
Also, Windows 10 IOT is no different from a novel RTOS and offers nothing "greatly different".
Windows IOT is nice for flashing LEDs, switching relays… you know, that kind of thing. Seriously MS, leave IOT to the big boys.
It uses .Net Core + .Net Native, not the Micro framework.
You can write in C, C++, Asm, WETFYL if you bothered to look.
+1
My point has absolutely nothing to do with language support.
So, when the marketing department says a Raspberry Pi will run Linux, it’s essentially true. But when the marketing department says that it will run Windows 10, it’s essentially false.
Got it..
I'll keep running Kodi, Raspbian, Matebuntu, etc. on my RasPis.
Windows 10 supports 3 platforms:
Windows10 (desktop)
Windows 10 (phone)
And Windows 10 (IoT).
With certain limitations, the same source code can be targeted at all three platforms, and experience with one platform quickly translates to the others.
Does the Ubuntu Phone match feature for feature Ubuntu Desktop?
As I understand it, yes it does. But I’d still rather use Android because the form factor suits it.
Really? I can host a hypervisor on an Ubuntu phone?
This article screams ignorance. RUN!
I could not agree more. I've played with MCUs back before they had embedded flash - remember the 8052AH-BASIC? Those were the days: a 232 chip, a 74x373 and a RAM chip. Feel the burn. For $50 I bought a board and a wifi adapter, hooked it up to a TV, a USB keyboard/mouse and a cell phone charger - BAM, start to finish 1 hour, I was blinking an LED from my laptop.
For those who are accustomed to C# - and who get paid very well for it - I ask, why should I learn a different language? Sure, some will say C or Mono under Linux; cool, go do that. But if I don't have to learn something new - WOW, there it is - no learning curve, and up and programming in minutes on a completely new platform? YEP, Win 10 on a Raspberry Pi, programming in C#; yep, gunna be around for a very long time.
Jeez, I have no clue what this guy is thinking. Run, as fast as you can. What does he want it to do, replace a tablet? Connect an HDMI monitor and watch movies? All I know is that learning Linux for embedded is no small feat. I gave this a go, and within minutes I wrote a C# app, downloaded it and ran it. Simple as cake. Terrible article! Yes, they need to support more WiFi & GPIO and other things, but HELLO???? It just released. Think outside the box.
Ok, I just ported an Android screencaster app over to the Raspberry Pi running Windows 10 IoT, and it is pretty sweet. It was my first foray into all the .NET 4.5 async stuff, and that is some cool stuff. In general the debugging environment worked great, although I still can't get the emulator to open up to incoming net traffic, so I have just been debugging it on the device itself. I still think Unix on these small boxes is way more powerful: just editing a file on the image to make the resolution right requires removing the SD card and mounting it on a PC. That's just silly. I wouldn't be surprised if there is some hacky way to make PowerShell fetch and deliver, but it just seems crazy that the local image is so sparse on utilities. You just have to get used to a very different style of usage. Everything is done locally and pushed out to the remote box.
As an aside, in theory the Android Bridge support should work on IoT Core within some constraints, as the bridge targets UWP, which would be quite neat from a code-reuse point of view; same for the iOS bridge.
Log in using PuTTY over SSH and you will be able to do things on the hardware itself.
You can get to the files on the Win10 IoT device by doing a: [ net use x:\ \C$] can then open files with whatever editor/tool you want from your dev PC. This assumes that you have networked your Win10 IoT device.
The above should read: net use x: \\<IoT device IP address>\C$. This is executed from a normal command prompt on your dev PC that is networked to the IoT device.
(For some reason the comments function omitted some of my text. Perhaps the author of this article should spend some time checking & fixing the functionality of the HaD site rather than posting poorly researched articles about products that are clearly not to his liking, as opposed to being factual and informed.)
Or open the WindowsIotWatcher app, select your Pi from the detected list, right-click and select browse.
Regarding changing the resolution, connect to it via SSH or Powershell and run the SetDisplayResolution command.
|
https://hackaday.com/2015/08/13/raspberry-pi-and-windows-10-iot-core-a-huge-letdown/?replytocom=2677519
|
CC-MAIN-2019-43
|
refinedweb
| 13,113 | 70.53 |
Computer vision imparts human intelligence and instincts to a computer. This field of computer science works on enabling computers to see, identify and process images the same way the human eye does. OpenCV is a great tool to accelerate computer vision in commercial products.
Let us say you and your family went on a vacation and you uploaded a few pictures on Facebook. However, since it takes time to tag the names in each picture, Facebook is intelligent enough to do that for you. How do you think this auto-tag feature works? Well, it works through computer vision.
What is OpenCV?
OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. This library has more than 2500 algorithms used to detect and recognise human faces, identify images, track moving objects, and extract 3D models of objects.
Installation of OpenCV
To install OpenCV for Python, use the following code in the terminal:
$ python3 -m pip install opencv-python
$ python3 -m pip install opencv-contrib-python
How does a computer read an image?
One look at Figure 2, and we can see that it is a picture of the Times Square in New York. But computers cannot analyse that, since they don’t have any intelligence.
For any image, there are three primary colours - red, green and blue. A matrix is formed for every primary colour and, later, these matrices combine to provide a pixel value for the individual R, G and B colours. Each element of these matrices provides data pertaining to the intensity of brightness of a pixel. The computer reads any image as a range of values between 0 and 255.
How are videos and images captured through a camera?
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow("Capturing", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
As seen in the above piece of code, we need to import the OpenCV library, cv2. cv2.VideoCapture() triggers the camera, and cv2.imshow shows what the camera is capturing by opening a new window. cv2.waitKey polls for a key press once per frame; here, the loop exits when the user presses 'q'.
Basic functions of OpenCV
To load an image using OpenCV and convert it into grayscale, type:
import numpy as np
import cv2

img = cv2.imread('Image123.png', 0)   # write the name of an image
cv2.imshow('image', img)
k = cv2.waitKey(0) & 0xFF
if k == 27:            # wait for ESC key to exit
    cv2.destroyAllWindows()
elif k == ord('s'):    # wait for 's' key to save and exit
    cv2.imwrite('Firefox_wallpapergray.png', img)
    cv2.destroyAllWindows()
cv2.imread reads the selected image, and the 0 parameter turns the image into grayscale; cv2.imshow shows the converted image.
Drawing and writing on an image can be done as follows:
import numpy as np
import cv2

img = cv2.imread('black.jpg', cv2.IMREAD_COLOR)
cv2.line(img, (0, 0), (511, 511), (255, 0, 0), 5)
cv2.rectangle(img, (384, 0), (510, 128), (0, 255, 0), 3)
cv2.circle(img, (447, 63), 63, (0, 0, 255), -1)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img, 'OpenCV', (10, 500), font, 4, (255, 255, 255), 2, cv2.LINE_AA)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.line draws the line with the given coordinates on the image, cv2.rectangle and cv2.circle draw a rectangle and circle respectively, and cv2.putText writes the given text. Here, we have used the Hershey Simplex font. The output is shown in Figure 4.
Feature and template matching
Template matching is basically finding the part of one image that matches another image. The code for this is as follows:
import cv2
import numpy as np

img_bgr = cv2.imread('sc1.png')
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
template = cv2.imread('sc2.png', 0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold)
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_bgr, pt, (pt[0] + w, pt[1] + h), (0, 255, 255), 2)
cv2.imshow('detected', img_bgr)
cv2.waitKey()
cv2.imread reads the selected image, cv2.cvtColor converts the image into grayscale, the w and h variables hold the width and height of the template, cv2.matchTemplate helps to match the common area of the two images with a threshold of 80 per cent, and cv2.rectangle marks the matching area in the image.
The output is shown in Figures 5 and 6.
We can see that the cropped image ‘black ops’ is matched with the portion of the full image, and the area is outlined by a yellow rectangle.
Gradients and edge detection
For detecting gradients and edges, type:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    _, frame = cap.read()
    a = cv2.Laplacian(frame, cv2.CV_64F)
    x = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=5)
    y = cv2.Sobel(frame, cv2.CV_64F, 0, 1, ksize=5)
    edge = cv2.Canny(frame, 100, 200)
    cv2.imshow('original', frame)
    cv2.imshow('laplacian', a)
    cv2.imshow('sobelx', x)
    cv2.imshow('sobely', y)
    cv2.imshow('edge', edge)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break
cv2.destroyAllWindows()
cap.release()
cv2.Laplacian converts the image into its gradient, while cv2.Sobel computes the horizontal and vertical gradients separately. We use cv2.CV_64F as the output depth (data type).
The output is shown in Figure 7.
Edge detection is pervasive in many applications such as fingerprint matching, medical diagnoses and licence plate detection. These applications basically highlight the areas where image intensity changes drastically, and ignore everything else.
Edge detection is also used in self-driving cars for lane detection.
Other features of OpenCV include motion detection, intrusion detection, homography, corner detection, colour filtering, thresholding, image arithmetic, etc.
Statistical machine learning libraries used by OpenCV include Naive Bayes classifier, K-nearest neighbour algorithm, decision-tree learning, meta algorithm, random forest, support vector machine, and deep and convolutional neural networks.
Some popular applications of OpenCV are:
Driver drowsiness detection (using a camera in a car) by alerting the car driver with a buzz or alarm.
Counting vehicles on highways (can be segregated into buses, cars, trucks) along with their speeds.
Anomaly detection in the manufacturing process (the odd defective product).
Automatic number plate recognition (ANPR) to trace vehicles and count the number of passengers.
OpenCV is also used in medical imaging to provide better diagnosis and treatment for a range of diseases.
|
https://www.opensourceforu.com/2020/10/opencv-an-excellent-tool-for-computer-vision/
|
CC-MAIN-2021-49
|
refinedweb
| 1,069 | 59.5 |
Back to: ASP.NET MVC Tutorial For Beginners and Professionals
Role-Based Authentication in ASP.NET MVC
In this article, I am going to discuss how to implement Role-Based Authentication in an ASP.NET MVC application. I strongly recommend reading our previous article before proceeding, as this is a continuation of it. In our previous article, we discussed how to implement Forms Authentication in ASP.NET MVC, and we also created the required database tables. As part of this article, we are going to discuss the following in detail.
- What are the Roles?
- What is the need for Role-Based Authentication?
- How to implement Role-Based Authentication?
What are the Roles?
Roles are nothing but the permissions given to a particular user to access certain resources. In other words, once a user is authenticated, the resources that user can access are determined by his or her roles. A single user can have multiple roles, and roles play an important part in providing security to the system. For example: Admin, Customer, Accountant, etc.
SQL Script:
In order to understand roles, let's add some data to the tables. Please use the SQL script below to insert some test data into the Employee, Users, RoleMaster, and UserRolesMapping tables.
-- Inserting data into Employee table
INSERT INTO Employee VALUES('Anurag', 'Software Engineer', 10000)
INSERT INTO Employee VALUES('Preety', 'Tester', 20000)
INSERT INTO Employee VALUES('Priyanka', 'Software Engineer', 20000)
INSERT INTO Employee VALUES('Ramesh', 'Team Lead', 10000)
INSERT INTO Employee VALUES('Santosh', 'Tester', 15000)

-- Inserting data into Users table
INSERT INTO Users VALUES('Admin','admin')
INSERT INTO Users VALUES('User','user')
INSERT INTO Users VALUES('Customer','customer')

-- Inserting data into RoleMaster table
INSERT INTO RoleMaster VALUES('Admin')
INSERT INTO RoleMaster VALUES('User')
INSERT INTO RoleMaster VALUES('Customer')

-- Inserting data into UserRolesMapping table
INSERT INTO UserRolesMapping VALUES(1, 1, 1)
INSERT INTO UserRolesMapping VALUES(2, 1, 2)
INSERT INTO UserRolesMapping VALUES(3, 1, 3)
INSERT INTO UserRolesMapping VALUES(4, 2, 2)
INSERT INTO UserRolesMapping VALUES(5, 3, 3)
As you can see, the user with ID 1 has three roles, while the users with IDs 2 and 3 have only one role each.
Creating the Role Provider:
Create a class file with the name UsersRoleProvider within the Models folder, and then copy and paste the following code. This class inherits from the RoleProvider class. If you go to the definition of RoleProvider, you can see that it is an abstract class, so we need to implement all of its abstract methods. The RoleProvider class belongs to the System.Web.Security namespace.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Security;

namespace SecurityDemoMVC.Models
{
    public class UsersRoleProvider : RoleProvider
    {
        public override string[] GetRolesForUser(string username)
        {
            using (EmployeeDBContext context = new EmployeeDBContext())
            {
                // Join Users -> UserRolesMapping -> RoleMaster to collect
                // all role names assigned to the given user name
                var userRoles = (from user in context.Users
                                 join roleMapping in context.UserRolesMappings
                                     on user.ID equals roleMapping.UserID
                                 join role in context.RoleMasters
                                     on roleMapping.RoleID equals role.ID
                                 where user.UserName == username
                                 select role.RollName).ToArray();
                return userRoles;
            }
        }

        public override string[] GetAllRoles()
        {
            throw new NotImplementedException();
        }

        // The remaining RoleProvider overrides (AddUsersToRoles, CreateRole,
        // DeleteRole, FindUsersInRole, GetUsersInRole, IsUserInRole,
        // RemoveUsersFromRoles, RoleExists and the ApplicationName property)
        // simply throw NotImplementedException, since this example only
        // needs GetRolesForUser.
    }
}
In the above class, we only provide a real implementation for the GetRolesForUser method. This method takes the username as an input parameter and, based on that username, fetches the user's roles as an array and returns it.
Configuring Role Provider in the web.config file:
Add the following code within the system.web section of your web.config file.
<roleManager defaultProvider="usersRoleProvider" enabled="true">
  <providers>
    <clear/>
    <add name="usersRoleProvider" type="SecurityDemoMVC.Models.UsersRoleProvider"/>
  </providers>
</roleManager>
Basically, here we are registering our role provider. Before adding it, we first clear any existing providers. The name can be anything you like, but the type value must be the fully qualified name of your role provider, i.e. including the namespace. You can add any number of role providers here; the one used by default is named in the defaultProvider attribute of roleManager, and you enable the feature by setting its enabled attribute to true.
Modifying the Employees Controller:
Please modify the Authorize attribute to include Roles as shown below.
First, we remove the Authorize attribute from the controller level and apply it at the action-method level instead. You can pass multiple roles separated by commas. Set the roles as per your business requirements and test it yourself; a sketch follows below.
In the next article, I am going to discuss how to implement Role-Based Menus in MVC applications. Here, in this article, I tried to explain Role-Based Authentication in an ASP.NET MVC application. I hope you now understand what Role-Based Authentication is and how to implement it.
4 thoughts on “Role-Based Authentication in ASP.NET MVC”
how to use GetRolesForUser() Method
What is the table structure
Refer to previous article
role-based not working
|
https://dotnettutorials.net/lesson/role-based-authentication-in-mvc/
|
CC-MAIN-2022-27
|
refinedweb
| 834 | 50.23 |
User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); nb-NO; rv:1.9.2.12) Gecko/20101027 Fedora/3.6.12-1.fc14 Firefox/3.6.12
Build Identifier: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b7) Gecko/20100101 Firefox/4.0b7

On Mac OS X machines connected to networks with IPv6 router advertisements that have no auto-configuration of IPv6 addresses (i.e. they carry no prefix information), Firefox fails to open dual-stacked web sites; that is, they will only load after a considerable timeout. In these situations the operating system has an IPv6 default route, but no globally usable IPv6 address assigned (only link-local unicast addresses from fe80::/10). Use of IPv4 must in these situations be preferred to IPv6, because IPv6 cannot possibly work.

Reproducible: Always

Steps to Reproduce:
1. Connect a Mac OS X host to a network with IPv6 RAs but no prefix information
2. Attempt to open a dual-stacked site using Firefox, e.g.
3.

Actual Results:
It sits there without loading anything for a very long time; the tab header says «Connecting...». The site appears to be down, until things finally start happening well over a minute later. You get the same kind of timeout for every dual-stacked element included on the page too, which makes the total page load time extremely long. All but the most patient users would give up.

Expected Results:
The page should have loaded at the same speed as when the machine has no IPv6 default route and IPv4 is tried right away.

This was tested on Mac OS X 10.6.5. Safari (5.0.3), Chrome (7.0.517.44) and Opera (10.63) have no problems; they all try IPv4 directly. The problem occurs with both Firefox 4.0b7 and 3.6.12.

I'm not familiar with the Mozilla source code, but one thing worth checking out is whether it calls getaddrinfo() without the AI_ADDRCONFIG flag when resolving host names. This might make the resolver skip the AAAA lookups if there are only link-local IPv6 addresses. Not sure, though.
Network configuration on my test host (with IPv4 addresses anonymised):

osx:~ tore$ ifconfig
lo0: ...
	inet6 fdc4:446b:4566:5ba1:21f:5bff:fec2:b845 prefixlen 128
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en1: flags=8823<UP,BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500
	ether 00:1f:5b:c2:b8:45
	media: autoselect (<unknown type>)
	status: inactive
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 00:1f:5b:f7:71:d0
	inet6 fe80::21f:5bff:fef7:71d0%en0 prefixlen 64 scopeid 0x5
	inet 192.0.2.59 netmask 0xffffffc0 broadcast 192.0.2.63
	media: autoselect (1000baseT <full-duplex,flow-control>)
	status: active
fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
	lladdr 00:1f:f3:ff:fe:34:c8:a8
	media: autoselect <full-duplex>
	status: inactive
vboxnet0: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 0a:00:27:00:00:00
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1500
	inet6 fe80::21f:5bff:fec2:b845%utun0 prefixlen 64 scopeid 0x8
	inet6 fd00:6587:52d7:87:21f:5bff:fec2:b845 prefixlen 64

osx:~ tore$ netstat -rn
Routing tables

Internet:
Destination        Gateway            Flags    Refs      Use  Netif Expire
default            192.0.2.1          UGSc       62        0    en0
192.0.2/26         link#5             UCS         8        0    en0
192.0.2.1          0:11:43:e6:f5:77   UHLWI      62        0    en0   1165
192.0.2.2          0:14:22:12:99:d9   UHLWI       0      860    en0    775
192.0.2.3          0:14:22:17:64:4    UHLWI       0       22    en0   1179
192.0.2.9          0:25:11:59:cc:93   UHLWI       0      495    en0   1104
192.0.2.11         0:14:4f:1:8a:28    UHLWI       0        0    en0   1185
192.0.2.42         0:1d:60:48:f5:9e   UHLWI       2   411272    en0    233
192.0.2.44         0:18:f3:4:79:1f    UHLWI       1     3198    en0    751
192.0.2.59         127.0.0.1          UHS         0        0    lo0
192.0.2.63         ff:ff:ff:ff:ff:ff  UHLWbI      0        2    en0
127                127.0.0.1          UCS         0        0    lo0
127.0.0.1          127.0.0.1          UH          0        0    lo0
169.254            link#5             UCS         0        0    en0

Internet6:
Destination                             Gateway                                Flags  Netif Expire
default                                 fe80::211:43ff:fee6:f577%en0           UGc    en0
::1                                     ::1                                    UH     lo0
fd00:6587:52d7::/52                     fd00:6587:52d7:87:21f:5bff:fec2:b845   UGCS   utun0
fd00:6587:52d7:87::/64                  fe80::21f:5bff:fec2:b845%utun0         Uc     utun0
fd00:6587:52d7:87:21f:5bff:fec2:b845    link#8                                 UHL    lo0
fdc4:446b:4566:5ba1:21f:5bff:fec2:b845  link#1                                 UHL    lo0
fe80::%lo0/64                           fe80::1%lo0                            Uc     lo0
fe80::1%lo0                             link#1                                 UHL    lo0
fe80::%en0/64                           link#5                                 UC     en0
fe80::211:43ff:fee6:f577%en0            0:11:43:e6:f5:77                       UHLW   en0
fe80::21f:5bff:fef7:71d0%en0            0:1f:5b:f7:71:d0                       UHL    lo0
fe80::%utun0/64                         fe80::21f:5bff:fec2:b845%utun0         Uc     utun0
fe80::21f:5bff:fec2:b845%utun0          link#8                                 UHL    lo0
ff01::/32                               ::1                                    Um     lo0
ff02::/32                               ::1                                    UmC    lo0
ff02::/32                               link#5                                 UmC    en0
ff02::/32                               fe80::21f:5bff:fec2:b845%utun0         UmC    utun0

osx:~ tore$ ndp -rn
fe80::211:43ff:fee6:f577%en0 if=en0, flags=, pref=medium, expire=8h28m34s
> but one thing worth checking out is if it calls getaddrinfo() without the
> AI_ADDRCONFIG flag

The relevant code is in nsHostResolver::ThreadFunc:

884     PRIntn flags = PR_AI_ADDRCONFIG;
885     if (!(rec->flags & RES_CANON_NAME))
886         flags |= PR_AI_NOCANONNAME;
887
888     ai = PR_GetAddrInfoByName(rec->host, rec->af, flags);

But then NSPR ignores that flag in PR_GetAddrInfoByName. Why, exactly?
And confirming, too.
I looked into that issue before, and I couldn't find the reason in the CVS history. That code was developed on a branch with terse checkin comments. I think it's better to call getaddrinfo with AI_ADDRCONFIG. Based on the experience with Chromium, the only problem is that AI_ADDRCONFIG applies the existence of an outgoing network interface to IP addresses of the loopback interface, due to a strict interpretation of the specification. For example, if a computer does not have any outgoing IPv6 network interface, but its loopback network interface supports IPv6, getaddrinfo on "localhost" with AI_ADDRCONFIG won't return the IPv6 loopback address "::1", because getaddrinfo thinks the host cannot connect to any IPv6 destination, ignoring the remote vs. local/loopback distinction. So after passing AI_ADDRCONFIG, you will need to add code to handle the loopback addresses as special cases.
Where "you" is NSPR, right?
I've confirmed for myself that using AI_ADDRCONFIG will indeed save the day here. The output from my test app, shown below, demonstrates that there's a 75-second penalty for every IPv6 address it attempts to connect to first, and without AI_ADDRCONFIG you get all the IPv6 addresses sorted on top. So for a site that has two IPv6 addresses, the user has to endure a 150-second timeout before something starts happening.

The test app does getaddrinfo (without AI_ADDRCONFIG) and connects to the resulting list of addresses in order, then repeats the process (this time with AI_ADDRCONFIG enabled). Let me know if you want the source code for the test app.

The machine has an IPv6 default route, but no globally scoped addresses. Tcpdump shows no IPv6 connection attempts happening on the wire, so it seems the entire timeout is internal to the operating system.

By the way, content providers are seeing this problem in the wild, and it's one of several issues that are causing us to put off deploying IPv6 - we don't want to cut off access to our web sites for our own users, after all - but users with this problem are effectively prohibited from accessing dual-stacked sites. So in the interest of helping the IPv6 transition get underway, I hope you'll prioritise getting this fixed in both the 4.0 and the 3.6 branches in the next version you're pushing out to the auto-update mechanism.

osx:~ tore$ ./toretest -ac
[        0us] begin gai_and_connect()
[+    3790us] getaddinfo() done
[+      20us] dest = 2001:500:4:13::81 (AF_INET6)
[+       8us] about to connect()
[+74593566us] connect() fails: Operation timed out
[+      54us] dest = 2001:500:4:13::80 (AF_INET6)
[+       9us] about to connect()
[+75011715us] connect() fails: Operation timed out
[+      49us] dest = 192.149.252.75 (AF_INET)
[+      10us] about to connect()
[+  128100us] connect() suceeds
[+      48us] dest = 192.149.252.76 (AF_INET)
[+      10us] about to connect()
[+  128853us] connect() suceeds
[        0us] -ac seen, using AI_ADDRCONFIG from now on
[        0us] begin gai_and_connect()
[+     944us] getaddinfo() done
[+      20us] dest = 192.149.252.75 (AF_INET)
[+      10us] about to connect()
[+  128374us] connect() suceeds
[+      47us] dest = 192.149.252.76 (AF_INET)
[+      11us] about to connect()
[+  128738us] connect() suceeds

Tore
Oh, and by the way, with regards to the problem with the loopback interface that was mentioned, it appears it's no longer the case; see how it behaves with IPv6 completely disabled (ip6 -x):

osx:~ tore$ sudo ip6 -x
osx:~ tore$ ./toretest localhost -ac localhost
[    0us] begin gai_and_connect(localhost)
[+ 1219us] getaddinfo(localhost) done
[+   23us] dest = ::1 (AF_INET6)
[+    7us] about to connect()
[+   74us] connect() fails: Connection refused
[+   24us] dest = fe80::1 (AF_INET6)
[+    6us] about to connect()
[+   55us] connect() fails: Connection refused
[+   19us] dest = 127.0.0.1 (AF_INET)
[+    7us] about to connect()
[+   65us] connect() fails: Connection refused
[    0us] -ac seen, using AI_ADDRCONFIG from now on
[    0us] begin gai_and_connect(localhost)
[+  141us] getaddinfo(localhost) done
[+   15us] dest = ::1 (AF_INET6)
[+    5us] about to connect()
[+   62us] connect() fails: Connection refused
[+   40us] dest = fe80::1 (AF_INET6)
[+    8us] about to connect()
[+   47us] connect() fails: Connection refused
[+   18us] dest = 127.0.0.1 (AF_INET)
[+    6us] about to connect()
[+   46us] connect() fails: Connection refused

osx:~ tore$ ./toretest -ac
[      0us] begin gai_and_connect()
[+  2215us] getaddinfo() done
[+    22us] dest = 2001:500:4:13::80 (AF_INET6)
[+     7us] about to connect()
[+    24us] connect() fails: No route to host
[+    21us] dest = 2001:500:4:13::81 (AF_INET6)
[+     6us] about to connect()
[+    11us] connect() fails: No route to host
[+    15us] dest = 192.149.252.75 (AF_INET)
[+     6us] about to connect()
[+128101us] connect() suceeds
[+    45us] dest = 192.149.252.76 (AF_INET)
[+    11us] about to connect()
[+128201us] connect() suceeds
[      0us] -ac seen, using AI_ADDRCONFIG from now on
[      0us] begin gai_and_connect()
[+   777us] getaddinfo() done
[+    20us] dest = 192.149.252.75 (AF_INET)
[+     8us] about to connect()
[+128745us] connect() suceeds
[+    45us] dest = 192.149.252.76 (AF_INET)
[+    10us] about to connect()
[+128349us] connect() suceeds

So for the global destination, AI_ADDRCONFIG masked the IPv6 addresses, but for localhost it did not. So I don't think any special casing of localhost is needed, at least not as of OS X 10.6.5.

Tore
I believe that the "AI_ADDRCONFIG breaks connecting to localhost" behaviour is specific to Windows, which has a particular interpretation of RFC 3484. On OS X (and Linux), the loopback address does not appear to count for the purposes of AI_ADDRCONFIG.

Tore, you once pointed me at:

I see that the code just calls getifaddrs(). I don't have a mac to test, does getifaddrs() just ignore the loopback address?
Lorenzo: Chrome users also reported the "AI_ADDRCONFIG breaks connecting to localhost" behavior on Ubuntu Linux.
I'm happy to test whatever on Mac if someone tells me how to (ideally in the form of a C file I just compile and run).
(In reply to comment #7)
> On OS X (and Linux), the loopback address does not appear to count for the
> purposes of AI_ADDRCONFIG.

For what it's worth, my test results in comment #6 appear to confirm this (for OS X).

> Tore, you once pointed me at:
>
> I see that the code just calls getifaddrs(). I don't have a mac to test, does
> getifaddrs() just ignore the loopback address?

Doesn't look like it:

osx:~ tore$ ./gia-test
name=lo0, addr=::1, UP
name=lo0, addr=fe80::1, UP
name=lo0, addr=127.0.0.1, UP
name=lo0, addr=fd14:aca2:970c:a18e:21f:5bff:fec2:b845, UP
name=en0, addr=192.0.2.59, UP

Will attach test programs shortly.

Tore
Created attachment 494338 [details] getifaddrs() test program
Created attachment 494339 [details] getaddrinfo() test program
What environment is actually needed to test a patch for this? It is not clear to me from the description.
(In reply to comment #13)
> What environment is actually needed to test a patch for this? It is not clear
> to me from the description.

1) Start with a Mac OS X host on an IPv4-only network, with for instance the latest version 10.6.5
2) Add a default IPv6 route, e.g.:
   $ sudo route add -inet6 default fe80::1%en0
3) Attempt to open using Firefox (and maybe try other browsers too while you wait for the page to load)

Let me know if you need more help reproducing the issue.

Tore
Created attachment 497780 [details] [diff] [review] v1 So this patch helps. It's untested on other platforms, after that I will request review.
Maybe instead:

     hints.ai_flags = (flags & PR_AI_NOCANONNAME) ? 0: AI_CANONNAME;
+#if AI_ADDRCONFIG
+    hints.ai_flags |= ((flags & PR_AI_ADDRCONFIG) ? AI_ADDRCONFIG : 0);
+#endif
As this is not a regression from 3.6, it's not going to block, but we will take an appropriate patch.
The IPv6 test day should also apply pressure on the OS vendors to fix getaddrinfo bugs with the addresses of the loopback interface when AI_ADDRCONFIG is specified. Chromium uses AI_ADDRCONFIG and works around these bugs. It sucks if every program that uses AI_ADDRCONFIG has to work around these bugs.

[email protected] worked on those Chromium bugs. He has a spreadsheet for those bugs at
It references five Chromium bugs: 41408, 39830, 49024, 42058, 49025

In those Chromium bug reports you can find vandebo's changelists. Then you can duplicate them in either NSPR or Mozilla proper. Hopefully the IPv6 test day will cause the OS vendors to fix the underlying getaddrinfo bugs, too.

Note: not every OS has these bugs. vandebo's spreadsheet seems to suggest only Linux (or certain Linux distributions) has these bugs. If this is too much disorganized info to digest, you can also add AI_ADDRCONFIG first, and then react to bug reports. (I have to admit I can't parse vandebo's spreadsheet.)
I had problems grokking the spreadsheet as well... I didn't find any open bug about this in the glibc bug tracker at , has it been reported anywhere?

Anyway, I think the Linux guys will love you for adding AI_ADDRCONFIG. A very often-reported bug that leads users to recommend disabling IPv6 outright to each other is - essentially, what happens there is that when you have the following:

1) An IPv4-only computer
2) An application that doesn't use AI_ADDRCONFIG, like Firefox
3) A DNS forwarder/recursor that doesn't handle AAAA queries properly

...«Internet doesn't work». If you add AI_ADDRCONFIG, you'll fix that, instead exposing the localhost glibc bug. That, I think, is the best way to pressure the Linux vendors into fixing that bug, which after all is definitively their responsibility.

Tore
(In reply to comment #17) > As this is not a regression from 3.6, it's not going to block, but we will take > an appropriate patch. Simple: Fix it in 3.6 too, that way it can block, right? ;-) Seriously though, fixing it in 3.6 as well would be very welcome. Not all users upgrade in a timely fashion, especially not when going from one major version to the next. Tore
(In reply to comment #19)
> ?

Tore: yes, that's the bug. It may exist in only certain versions of Linux/glibc or certain Linux distributions. The source of the bug is a strict interpretation of the specification of AI_ADDRCONFIG in RFC 3493. This specification is ambiguous when the addresses of the host name are loopback addresses. I don't remember if I ever reported this bug to glibc.
I forgot to say that I agree with Tore that we should use AI_ADDRCONFIG, and deal with the localhost bug. I remember Windows also has the localhost bug, which is one reason Chromium doesn't use AI_ADDRCONFIG on Windows. Another reason is that a comment in <winsock2.h> says AI_ADDRCONFIG is the default (although it clearly changes the behavior related to localhost). I quoted that comment in
I tested resolving "localhost" on Fedora 14 (glibc 2.12.90) using AI_ADDRCONFIG, with the following results (depending on what kind of non-loopback/linklocal addresses were configured on the system):

1) neither ipv4 nor ipv6 => ::1, 127.0.0.1
2) only ipv4 => ::1, 127.0.0.1
3) only ipv6 => ::1
4) both ipv4 and ipv6 => ::1, 127.0.0.1

So the user would have to be on an IPv6-only machine while running an IPv4-only service on the loopback interface for the bug to actually affect him. That's a fringe case, I think... I believe it's safe to assume that the OS X users affected by this bug and the Linux users that are affected by the Ubuntu #417757 bug are far greater in number. Therefore, simply starting to use AI_ADDRCONFIG (at least on OS X and Linux) should be a net improvement, even if you don't at the same time add any special handling of AI_ADDRCONFIG. However, avoiding AI_ADDRCONFIG when you're looking up localhost can't be very hard either? Suggested patch (untested but obvious):

--- nsprpub/pr/src/misc/prnetdb.c	1 May 2009 23:08:05 -0000	3.59
+++ nsprpub/pr/src/misc/prnetdb.c	16 Dec 2010 19:25:17 -0000
@@ -2028,6 +2028,10 @@
     memset(&hints, 0, sizeof(hints));
     hints.ai_flags = (flags & PR_AI_NOCANONNAME) ? 0: AI_CANONNAME;
+#if defined(AI_ADDRCONFIG)
+    if(strcasecmp(hostname, "localhost"))
+        hints.ai_flags |= (flags & PR_AI_ADDRCONFIG) ? AI_ADDRCONFIG : 0;
+#endif
     hints.ai_family = (af == PR_AF_INET) ? AF_INET : AF_UNSPEC;

     /*

Tore
I'm curious to hear if there's been any progress on this bug lately. Is there some problem with the suggested patches I could potentially help resolve?

BTW: I just learned that the Portuguese ISP SAPO is now shipping DSL routers to their customers that by default emit such prefix-less router advertisements. That means that all of those customers who are also using Mac OS X are unable to use Firefox to access dual-stacked sites.

Tore
Actually, I got stuck on the strcasecmp function, which is not available to prnetdb.c; then this bug lost the blocking status. Also, I'm not sure that doing strcmp on the host name this way is the proper way of recognizing localhost. I am thinking of some kind of fallback here in case we get only '::1' as a result: in that case, drop the flag and retry the query. This would also cover any hosts file entries. But I have not thought very deeply about that yet.
prnetdb.c includes <string.h> and it compiles fine on my Linux host, but maybe it's not available on all platforms? In any case, I just noticed that there's a private reimplementation of strcasecmp in lib/libc/src/strcase.c (PL_strcasecmp). I'll attach an updated patch in a bit; see if that works better?

BTW, that "localhost" is the canonical name for the local host is documented in RFC 2606, so I think the strcasecmp approach should be fine.

Tore
Created attachment 502256 [details] [diff] [review] Use AI_ADDRCONFIG if requested by caller with PR_AI_ADDRCONFIG, except if the host name to be looked up is "localhost"
Honza: just use strcmp for now. We can't use PL_strcasecmp in this file because PL_strcasecmp is defined in another shared library.

Please rewrite the original code and the new code like this:

    hints.ai_flags = 0;
    if (flags & PR_AI_NOCANONNAME)
        hints.ai_flags |= AI_CANONNAME;
#ifdef AI_ADDRCONFIG
    /* A comment that explains the special case for "localhost", etc. */
    if (strcmp(hostname, "localhost") != 0 &&
        strcmp(hostname, "localhost.localdomain") != 0 && ...) {
        if (flags & PR_AI_ADDRCONFIG)
            hints.ai_flags |= AI_ADDRCONFIG;
    }
#endif

Your fallback logic in comment 25 may be more flexible than testing for "localhost", etc. specifically, but it won't be able to handle the corner case of getting nothing as a result (for example, when the computer is not connected to any network).
"&& ...)" what else should be included? Also, should anything ending with ".localhost" be excluded as well?
I have seen "localhost6" mentioned in bug reports.

Another option is to exclude the getaddrinfo implementations that are known to have this AI_ADDRCONFIG/localhost problem:

#ifdef AI_ADDRCONFIG
    /* A comment that explains why these implementations are excluded */
#if !defined(__GLIBC__) && !defined(_WIN32)
    if (flags & PR_AI_ADDRCONFIG)
        hints.ai_flags |= AI_ADDRCONFIG;
#endif
#endif

This approach is worth considering only if Mac OS X's getaddrinfo doesn't have the AI_ADDRCONFIG/localhost problem.
(In reply to comment #30) > #if !defined(__GLIBC__) && !defined(_WIN32) How do we recognize a GLIBC version that doesn't suffer from this bug when it gets fixed?
Created attachment 502660 [details] [diff] [review]
v2

- filtering various localhost names
- Wan-Teh, I have added a citation of your comment 3 in this bug as the comment for the code; it perfectly explains why we do it. Do you agree?

Going to push to try, just to check it builds on all our platforms. Tested only on Mac with
$ route add -inet6 default fe80::1%en1
and page.
Comment on attachment 502660 [details] [diff] [review]
v2

r=wtc.

>-    hints.ai_flags = (flags & PR_AI_NOCANONNAME) ? 0: AI_CANONNAME;
>+    hints.ai_flags = ((flags & PR_AI_NOCANONNAME) ? 0: AI_CANONNAME);

This change is not needed. Alternatively, you can rewrite this as:

    hints.ai_flags = 0;
    if (flags & PR_AI_NOCANONNAME)
        hints.ai_flags |= AI_CANONNAME;

>+    /*
>+    Propagate AI_ADDRCONFIG to GETADDRINFO call if set.
>+
>+    Needs workaround for loopback host addresses: ...
>+    destination, ignoring the remote vs. local/loopback distinction.
>+    */

Please format a multi-line comment as follows:

    /*
     * line 1
     * line 2
     * line 3
     */
Created attachment 503263 [details] [diff] [review] Patch v3, by Honza Bambas [Check in comment 46] Honza, I made the changes I suggested to your patch. Please review and test it.
Hi guys, I just saw the Slashdot story about you gearing up for the release of Firefox 4 next month. As you might already have noticed, in a few months major content providers (Google, Yahoo, Facebook, Limelight, Akamai) will be simultaneously publishing AAAA records for their sites, see <>. In order to make this event as smooth as possible for all users of Firefox, I would strongly urge you to get this patch committed prior to the release of Firefox 4 (and preferably back-ported to the 3.6 series as well). If there's something I can do to help make this happen, please let me know as soon as possible. Apologies for nagging...

Tore
Tore: you can nag the OS vendors or the authors of RFC 3493 about the AI_ADDRCONFIG/loopback address problem I described in comment 18 and comment 21.
Wan-Teh, sure thing: Tore
Comment on attachment 503263 [details] [diff] [review] Patch v3, by Honza Bambas [Check in comment 46] Thanks for an update. Going to land this after it gets a+.
Comment on attachment 503263 [details] [diff] [review]
Patch v3, by Honza Bambas [Check in comment 46]

Patch checked in on the NSPR trunk (NSPR 4.8.8).

Checking in prnetdb.c;
/cvsroot/mozilla/nsprpub/pr/src/misc/prnetdb.c,v  <--  prnetdb.c
new revision: 3.62; previous revision: 3.61
done
Wan-Teh, should this also land in the mozilla tree, or is NSPR expected to merge soon?
Please merge this patch into mozilla-central. I am not following the Firefox 4 schedule closely, so I don't know what the checkin rules are right now.
I just remember that the need for AI_ADDRCONFIG was previously discussed in bug 467497.
To drivers: please decide on approval soon, thanks. This should get into Firefox 4 because of the IPv6 test day. Ideally it should get into a beta first.
FWIW, to state it explicitly: usually, Mozilla is supposed to take only public releases of NSPR. However, this is an edge scenario. Wan-Teh has granted permission for Mozilla to take this individual patch on top of the currently used NSPR snapshot. This is seen as the best route to getting this bug widely tested. Please approve this patch, so it can be tested in the next Firefox 4 beta.
Comment on attachment 503263 [details] [diff] [review] Patch v3, by Honza Bambas [Check in comment 46] Please land ASAP.
Patch has been pushed to mozilla-central for Firefox 4:
(In reply to comment #46)
> Patch has been pushed to mozilla-central for Firefox 4:

Honza and Wan-Teh, great stuff, thanks guys! :-D

The patch applies cleanly to releases/mozilla-1.9.2/nsprpub/pr/src/misc/prnetdb.c (i.e. Firefox 3.6, which is also vulnerable to the bug). Could you please commit it there, too?
-    hints.ai_flags = (flags & PR_AI_NOCANONNAME) ? 0: AI_CANONNAME;
+    if (flags & PR_AI_NOCANONNAME)
+        hints.ai_flags |= AI_CANONNAME;
(In reply to comment #48)

I very much doubt it; this bug is all about the AI_ADDRCONFIG part of the patch. The AI_CANONNAME part is likely just (intended to be) a cosmetic change.

Tore
No, Masatoshi Kimura is right. The NOCANONNAME change is wrong. The if condition is backwards. Honza, Wan-Teh, can you please confirm + fix? Tore, if you think the patch should land for 1.9.2, please request approval for 1.9.2 on the patch?
(In reply to comment #50)
> No, Masatoshi Kimura is right. The NOCANONNAME change is wrong. The if
> condition is backwards. Honza, Wan-Teh, can you please confirm + fix?

Fell into my blind spot. Good catch. Will fix it today.
Created attachment 507842 [details] [diff] [review] AI_CANONNAME bustage fix v1 [Check in comment 55]
Comment on attachment 507842 [details] [diff] [review] AI_CANONNAME bustage fix v1 [Check in comment 55] Preapproving this.
Comment on attachment 507842 [details] [diff] [review] AI_CANONNAME bustage fix v1 [Check in comment 55] r=wtc. emk, thank you for catching my mistake.
Comment on attachment 507842 [details] [diff] [review] AI_CANONNAME bustage fix v1 [Check in comment 55]
Comment on attachment 507842 [details] [diff] [review]
AI_CANONNAME bustage fix v1 [Check in comment 55]

Patch checked in on the NSPR trunk (NSPR 4.8.8).

Checking in prnetdb.c;
/cvsroot/mozilla/nsprpub/pr/src/misc/prnetdb.c,v  <--  prnetdb.c
new revision: 3.64; previous revision: 3.63
done
(In reply to comment #50)
> Tore, if you think the patch should land for 1.9.2, please request approval
> for 1.9.2 on the patch?

Boris, I'd love to; after all, it is users of 3.6.x that I see are the most affected by this problem today. Unfortunately, I cannot figure out how to actually go about doing it. I understand I would have to set an «approval1.9.2?» flag on the two patches, but I have no idea where I would do that. Similarly, I understand I should set the «status1.9.2» field to «wanted», but again, I don't see how. Perhaps my Bugzilla account lacks the necessary privileges? Could you add the necessary flags and statuses for me, do you think?

Tore
We should not backport these patches to mozilla-1.9.2 for Firefox 3.6.x until they have been tested in Firefox 4 for a few weeks. Our workaround for loopback addresses may be insufficient.
> but I have no idea where I would do that Click the "Edit" link on the attachment, and the flags should be there. But yes, it may depend on your bugzilla permissions.... I'll set the flags.
(In reply to comment #7) Lorenzo Colitti wrote:
> Tore, you once pointed me at:

Thank you for that link to Libinfo-330.7, which is in Mac OS X 10.6.6. I studied the Libinfo-330.7 code carefully.

> I see that the code just calls getifaddrs(). I don't have a mac to test, does
> getifaddrs() just ignore the loopback address?

The getifaddrs() call you mentioned is only used by the deprecated getipnodebyname() function. So I'll ignore that.

I found that getaddrinfo checks the AI_ADDRCONFIG flag only when DNS (_mdns_addrinfo) is used. The other two lookup methods, directory service and file (ds_addrinfo and file_addrinfo), ignore the AI_ADDRCONFIG flag. This implies Mac OS X 10.6.6's getaddrinfo doesn't have the AI_ADDRCONFIG loopback address problem, because DNS cannot return loopback addresses. But it has a different problem -- addresses returned by /etc/hosts lookup (file_addrinfo) may be unusable.
Wan-Teh, No problems for me waiting until the patch has proved itself in Firefox 4 for a while. I very much hope that it'll make it into Firefox 3.6.15, though. (Provided no problems show up, that is.) Should I re-open the bug until the patch has landed on the 1.9.2 branch? Boris, Thank you for setting the flags for me. I'm pretty sure I don't have the necessary permissions in Bugzilla, as I still cannot see any «edit» links even now when I know where to look for them. Tore
(In reply to comment #61) > I very much hope that it'll make it into Firefox 3.6.15, though. > (Provided no problems show up, that is.) Should I re-open the bug until the > patch has landed on the 1.9.2 branch? No, we track the status on the branches using flags.
Hi guys, I just realised that Firefox 3.5 is still maintained and receives updates. The patches apply cleanly to the mozilla-1.9.1 branch as well - in fact, except for these two patches, prnetdb.c is identical on the mozilla-1.9.1, -1.9.2, and -central branches. So, for completeness' sake, could you also add the «approval1.9.1?» (or perhaps it is «approval1.9.1.18?») flag to the patches and set status1.9.1 to «wanted»?

Tore
I can confirm that the issue is now solved for Firefox 4.0 beta 11 running on Mac OS X. Fantastic work guys, thanks a lot! :-) Now we just need the patches to appear in Firefox 3.5.18 and 3.6.14 as well. Could they please be checked in to the release branches now, so that we can be completely certain that they won't be accidentally overlooked and forgotten about until 3.5.18 and 3.6.14 are tagged in the Mercurial repository? Tore
Tore, 3.6.14 and 3.5.18 have been frozen for a while. The patches attached are approved to land for 3.6.15. Honza, can you do that? If not, let me know. The patches attached are not yet approved to land for 3.5.anything.
(In reply to comment #65)
> Honza, can you do that? If not, let me know.

I have it prepared in the queue; just qfin and push it. I just don't know if I can land now, before the 3.6.14 final build.
You can. 3.6.14 is on a branch and has been since January 21.
Just make sure to land on the "default" branch in 1.9.2. ;)
(In reply to comment #65)
> Tore, 3.6.14 and 3.5.18 have been frozen for a while.

I'm aware of that - my hopes are for 3.6.15 and 3.5.17.

> The patches attached are not yet approved to land for 3.5.anything.

Can anyone help with that? I would think that approving it for 3.5 would be quite uncontroversial, considering that it's approved for 3.6 already and the file that's being patched is identical in both branches...?

Tore
(In reply to comment #69)
> (In reply to comment #65)
> > Tore, 3.6.14 and 3.5.18 have been frozen for a while.
> I'm aware of that - my hopes are for 3.6.15 and 3.5.17.

Uhm, on second thought, I meant 3.5.18 here. Is 3.5.18 really frozen, already now? It's not tagged in the Mercurial repo at and show that 3.5.17 isn't even released yet... Anyway - I just want the patches to be committed on the tip so that they become part of the next release that is not frozen/tagged right now. :-)

Tore
Ah, I might have confused 3.5.17 and 3.5.18. 3.5.18 is still open. You mistyped in comment 64 if you meant 3.6.15. ;)
Hi, Can somebody with the appropriate Bugzilla permissions please set the «status1.9.1» flag to «wanted»? Thanks in advance, Tore
Comment on attachment 503263 [details] [diff] [review] Patch v3, by Honza Bambas [Check in comment 46] Approved for 1.9.1.18, a=dveditz for release-drivers Upgrading NSS to 3.12.9 is scheduled to go into 3.6.15 and 3.5.18 already, does it require an upgraded NSPR anyway? Seems better to pick up an NSPR release than to start adding the odd patch if these patches are in an upstream.
Comment on attachment 507842 [details] [diff] [review] AI_CANONNAME bustage fix v1 [Check in comment 55] Approved for 1.9.1.18, a=dveditz for release-drivers
The NSS 3.12.9 update on the branches included an update to NSPR 4.8.7 according to kaie, but comment 56 says this was checked into NSPR 4.8.8 -- I guess we need this separate patch on the branches in the interim.
Comment on attachment 503263 [details] [diff] [review] Patch v3, by Honza Bambas [Check in comment 46] wtc may not agree that a month of pre-release beta testing in Firefox 4 is sufficient for this fix, especially given Mac's minority status and the extremely tiny use of IPv6. Moving branch approvals back to requests.
(In reply to comment #77)
> wtc may not agree that a month of pre-release beta testing in Firefox 4 is
> sufficient for this fix, especially given Mac's minority status and the
> extremely tiny use of IPv6. Moving branch approvals back to requests.

Not speaking for wtc, but he said in comment #58 that «a few weeks» of testing in Firefox 4 should do, and it's been in Firefox 4 for more than three weeks now (since Feb 8).

Also, regarding the «extremely tiny use of IPv6», you should check out «World IPv6 Day», scheduled for June 8, where many major sites will simultaneously enable IPv6 - participants include Google/YouTube, Yahoo, Akamai, Limelight, Bing; heck, even Mozilla is in on it. See - and the list is constantly growing.

It is crucially important that all supported releases of Firefox carry this change ahead of that day. The earlier changed versions are released, the better; as you probably know, not all users upgrade their software in a timely manner. Also, de-prioritizing IPv6-related problems due to its lacklustre deployment status just helps cement non-deployment; the reason why Google and others haven't deployed IPv6 so far is *precisely* due to issues such as this one.

Tore
I'd like to echo what Tore said. The company I work for (Cisco) is participating in World IPv6 Day on June 8 as well. We'll have on IPv6, and WebEx, Linksys, and other parts of the company will likely be participating in one form or another as well. We have lots of Macs here, and Firefox is one of the two IT-supported browsers.

Further, my ISP in Paris (free.fr) currently delivers IPv6 to 500,000 "opt-in" users. They just moved to "on by default" with their new Residential Gateway, which is shipping as fast as they can build them. I could see a million from this ISP alone by year end.
I see at <> that there's now a FIREFOX_3_6_15_RELEASE tag present. Does that mean it's too late for these patches to be included in 3.6.15? Tore
The version number 3.6.15 was used for an emergency release. The version originally planned as 3.6.15 has been renamed to 3.6.16.
Comment on attachment 503263 [details] [diff] [review] Patch v3, by Honza Bambas [Check in comment 46] Approved for 1.9.2.16 and 1.9.1.18, a=dveditz for release-drivers
Comment on attachment 507842 [details] [diff] [review] AI_CANONNAME bustage fix v1 [Check in comment 55] Approved for 1.9.2.16 and 1.9.1.18, a=dveditz for release-drivers It appears that, although the approval flags have been renamed for the emergency release, the status flags haven't, so I'm setting "fixed1.9.2.16" and "fixed1.9.1.18" even though we already know the release this is in is *at least* 1.9.2.17/1.9.1.19.
Regression on AIX5, see bug# 650474
|
https://bugzilla.mozilla.org/show_bug.cgi?id=614526
|
CC-MAIN-2017-26
|
refinedweb
| 6,133 | 66.64 |
tag:blogger.com,1999:blog-4119367461890632622015-09-17T01:25:11.170-04:00hello treesa move to the country ...!!!hyd, finally! Though it's still cold (burning fires and wearing sweaters), the snow is slowly melting and the sun seems warmer and happier.<br /><br /.<br /><br /.<br /><br />2013 is shaping up to be a wild ride in itself, but more on that later.<br /><br /.<img src="" height="1" width="1" alt=""/>hyd:)When the heart is full, the words are few.<img src="" height="1" width="1" alt=""/>hyd, baby, biz.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="" width="400" /></a></div>!<br /><br /. <br /><br /.<br /><br />The house is still a construction zone: pails under the kitchen sink, missing drywall, door-less closets, piles of flooring stacked, tools everywhere... all made tolerable by the thought of a nicer home in the end.<br /><br />Prepping for an online store for our store - it's overdue. We have high hopes for it though will be pleased with anything. Speaking of which, time to get back at it. It's nice to code again. I like PHP.<br /><br /><br /><img src="" height="1" width="1" alt=""/>hyd whole lot of nothingDad sent me a comic strip, many years ago, when I was kinda down in the dumps. It's on my bulletin board now and makes me smile. It's a picture of a bumblebee thinking, "Hmm, what's on my list to-do today?"...... "JUST BEE" ..... "bzzz....♥". This is basically my life on maternity leave. A whole lot of nothin but happily being and buzzin.<br /><br />The Arch man is becoming quite the little lad... he's LOVELY. So gentle and happy and clever. This kid has made my life wondrous and I am thankful every day. 7 1/2 months already! Here he is with his aunt:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="" width="400" /></a></div><br />Other things on the go:<br /><ul><li>IKEA Kitchen planner</li><li>Chasing runaway dogs</li><li>Feeling depressed about my fried garden and then saying "oh well"</li><li>Manual removal of milkweed</li><li>Looking out the window at a large pile of wood</li><li>Being surrounded by home renovation materials</li><li>Looking at commercial properties for sale and dreaming</li><li>Celebrating 2 years married to a hilarious and lovable man</li><li>Trying not to think about going back to work</li><li>Writing a short story for a magazine contest</li><li>Store orders </li></ul> Bzzzzz.... ♥<img src="" height="1" width="1" alt=""/>hyd mother.<br /><br /. <br /><br /.<img src="" height="1" width="1" alt=""/>hyd<div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="300" width="400" src="" /></a></div>The song "3am" by pal Jim Guthrie is in my head a lot these days, especially at... 3am. Arch is waking up roughly every 3hrs at night still, which is fine... but looking forward to a longer stretch of sleep in the coming months (fingers crossed). He's usually pretty good to eat and go back to sleep but last night at 1am he was lying in his cradle laughing and laughing - tiring and adorable.<br /><br /. <br /><br />We sell organic seeds at <a href="">our store</a> so I'll have lots to choose from for my own garden. 
I've already started some flowers and herbs, and will continue to start seeds from now until spring. SPRING! Gotta go.

p.s. Got my bro to shoot our chickens... gulp. They were old, not laying too much, and requiring too much effort right now. Thinking we'll get a bunch of chicks in the spring tho. - hyd

icies

[photo] It's sometimes hard to convince myself that A can hang on his own, awake, and be well. Well as in not bored not being stimulated, not somehow being made smarter by interacting with toys and me. But I can also tell it's good for him to chat with his own hands on a pillow in the sun - simple, quiet, reflective. Some baby zen. I'm watching him do that now... His little fists starting to rub at his eyes... he'll nap soon. I'm comforted by his ability to self-soothe, to sleep on his own accord. And these minutes I'm stealing for myself feel vindicated (this time). […] - hyd

days

Some days are tired... watery-eyed, weak, edgy. Hard to keep cheer in the voice, hard to muster enthusiasm, hard to be polite and interesting. Try to stay alert long enough to shower, careful not to let the knees buckle like they want to, to wash away at least one layer of exhaustion. Physically irritated - hair irritating, socks irritating, glasses irritating... on top of the sound of his insistent crying. Back especially sore, and wrists. Then at night, two hours of sleep feels like two seconds - 2.30am already? only?

[photo]

Some days are well rested and fun. Waking only once in the night, and he eats and is changed without opening his eyes. The morning feels like victory and the afternoon is good. Even a fancy breakfast, and some productive day time moments on projects long lingering. Almost want to rouse him from naps to play and cuddle and coo (almost). We sing and dance... and the crying doesn't sound as bad, and stops quickly. Husband happy I'm happy. I'm happy I'm happy. Cozy and thankful... that's mostly how it is. - hyd

arch

He stares out the window at the tree line, not really seeing it but feeling the shadows of trees. The sun is shining behind them, onto the bed. I'm happy to be raising a child in nature because it's a great presence, even unseen. We're surrounded, protected, kept. It's quiet and good.

He's especially happy in the mornings - well rested and cooing, smiling his parents awake, unable to resist the charm of an early grin. Fed and diapered and then one of us will make the coffee, rekindle the fire, let the dogs out... and the other will snuggle.

Not venturing out at all... will change that soon. For now it's fine to keep warm and comfortable and well. Tea, rattles, movies, bouncy chair, good dinners, music, naps, laundry.
The cycles of our winter this year, the quiet and the good.

Vid from awhile ago, off the porch: [video] - hyd

steady string of minutes

The minutes string together to create beautiful spider webs of days. I'm like a spider collecting whatever droplets happen to fall, first admiring their beauty, first securing the lines. Elegantly moving from one end of the house to the other, holding my baby, stepping over pets, going places but not really, just a few steps but it's my whole world, it's my home.

I live within this space every minute, meaning to go for a walk but instead keeping here, inside. Am I stuck or just staying? Maybe staying stuck, happily. Staying in with my baby in my web of a home. Paths mapped out between email and firewood, between diapers and a view out the window, between food and a quick chance to brush my hair. It's all so easy but so full - the days aren't hard (I know what to do all the time), but they're hardly days (I don't do anything).

I love it, wouldn't leave it. Despite the lack of productivity I feel satisfied. I have a beautiful baby that I can and do spend minutes just staring at, just smiling. He coos and the whole day is worthwhile, my little tangle, the knot at the centre of my universe. [photo] - hyd

new year

Good afternoon 2012! We didn't make it to midnight last night (ha!). I fed little man around 11pm, then we all passed out, hubby too. An appropriate beginning to the new year I guess, with a priority on sleep and snuggles. We did have some good food and wine tho, and reflected happily back on the year - a baby!

[photo]

A beautiful, awesome baby. 2011 was also a lovely pregnancy - I enjoyed it for the most part, and stayed well and happy. The whole thing flew by fast. As usual, always interesting to read back on last year's NY post.

The store did well, we did a big trip to visit family in the states, and lots of cottage, city, and family visits. D made a lot of music, but alas I did not. Which brings me to...

Resolutions for 2012 (such a futuristic number)... read! read! and read! Sitting with a nursing or sleeping babe for the majority of my waking hours is prime reading opportunity... must take advantage of these quiet, plentiful times. One can only type emails one-handed, read up on "baby milestones", and research never-to-be-made elaborate recipes so much. I also aim to frequent the local libraries more often and "get into" books... as in, even read about what books to read. I miss them. What's reasonable? Maybe an easy to reach goal of a book every 2 weeks? Surely that's doable.

I also want to get into making dinner again. We've been eating lots of rice and cans of Jyoti, bagels and cream cheese, wraps, etc.

Reading and eating. That sounds good. Oh, and exercising...... at least trying to move more often than not. Music, we'll see.
Maybe music with D.

2012! - hyd

flies!

[photo] Happy holidays all!! Yep I'm still here... just got distracted :)

So much to write but am currently typing one handed with baby snoozing in the other. Just wanted to post a quick somethin' before the year ends. Because, gee... what a year. Okay I've re-situated to give me another hand to type, temporarily. […]

The above pic is A in a Christmas sleeper my brother B wore as a baby. Mom got a charge outta that. - hyd

[photo] Belly belly belly - it's the thing I see, feel, and think about most these days. Still time to enjoy it, I think. Baby still cozy, and I'm still relaxed having the little one with me. Only a couple more days of work. I'm getting tired of sitting all day, trying to stay focused... looking forward to replacing those efforts with organising the nest and afternoon naps. […]

Hard to believe November is right around the corner... we're trimming store hours down for the winter, planting our garlic this weekend, putting wood on the porch, on and on and on. - hyd

We had frost last night and I didn't grab the remaining tomatoes. I think it was a light freeze tho so hopefully some will be salvageable. Hard to believe it's that time of year. The cold moves a few things that were on the back burner to the front - cleaning out the chicken coop, planting garlic, replacing the back door with a storm door, getting wood on the porch, etc.!

[photo]

We've started making use of our new freezer - I grated and froze a whole bunch of scallop summer squash, along with some squash muffins and bread. Doubling recipes will likely become a new norm in these last months of preparing for baby time.

The car is officially broked so figuring out a new ride is in the works. A real hassle but it was inevitable and I suppose now is good time to deal with it. I'm lucky to have such a knowledgeable and helpful dad and brother not too far away.

Five weeks left of work. Amazing. D's finished painting the little one's room so now what's left is "just" flooring and trim. So close! - hyd

the land of idling vehicles

I'm on store duty today while D finishes the last bit of dry walling around the new shelves in baby's room. Sitting here watching the endless stream of long-weekend-visiting vehicles wait their turn at the lights in our tiny town.

I just returned from a walk to the garage to pick up my car - the brakes went funny on me yesterday so I drove it to the shop last night. Today they're working so the garage wasn't able to "fix" anything. Anticipating them going weird again, and eventually a $300 fix for a new master cylinder. Also apparently front left wheel bearing needs replacing. Oh rusty '97 Malibu, how you age.
Eventually a new vehicle will have to be figured out... looking like sooner rather than later. And D's car isn't too far behind needing replacing either. Cha-ching!

Most of the extra cash from the store this summer went right back into it, but I think we're starting to top off inventory-wise and can settle into just replacing stuff vs expanding. We should be mostly set till Christmas. Anticipating profit in the coming years! And a web store... hoping it's something I can work on between mat leave and baby, while waiting for the little one to arrive.

30 weeks pregnant today - the countdown feels real now, with only 10 weeks left to go (eee! aahhh!). Still much to do, but with summer winding down, there's less travel and more free-ish time to spend preparing house and home. Tho the tireds have returned and I'm pretty low energy... but things get done in tiny bubbles of moments, which I've become increasing good at capitalizing on.

[photo] Wacky car, enduring gravel pile, and bratty puppy.

The dawgs are starting to feel a little more high maintenance, with Lune newly enjoying high-speed chases down the driveway to the road (such a scamp). Our road is mostly quiet, but the stress of it is still exhausting so we've taken to chaining her a lot of the time now. The plan is to train her to stay within the electric fencing, like Whisk. Not looking forward to that, but it will make for a happier pup and easier life. - hyd

pursuit of beautiful things

[photo]
*photo and handmades by my amazing sister-in-law maria

[…] What am I doing instead of striving for beauty? Working, lying around, looking at blogs... - hyd

and coast

[photo] Our feet were in the beautiful Massachusetts ocean Tuesday morning, but how easy it is to settle in back at home. Back to the grind of dogs, dishes, phone calls, laundry, etc. Was nice to get away. Thought the long car rides would be tough while pregnant but they zoomed by. I gave D a tour of New Hampshire for his 36th birthday, and mom gave him the tour of Vermont. Montreal's labyrinth of highways was easily navigated, Ottawa slow but steady. A few moments book-ended at the cottage with dad, enjoying the quiet there. […]

[photo] I thought time would feel slower on return (the prep for the trip and getting the house ready for our farm sitter was busy) but we're back and there's more ahead. Good things tho - a baby shower BBQ for the little one on Sunday in the city, and a cottage weekend after that. I think the nesting thing is starting to kick in more - there's a building urgency to prepare, make lists, read books.
A desire to strip out all distractions and focus on baby. Soon time to start planning it all out: the laundering of sleepers and prefolds, making food for the freezer, car seats and stroller and crib, finishing up at work... Enjoying it tho. It's really awesome to make space and get excited about our new arrival. - hyd

summer

[photo] Flipping through a Vesey's catalogue on a Saturday morning with a small cup of coffee. Still have some birthday money from my mother-in-law I think I'll put toward a few new bulbs. May-haps. […]

Friend M has got me into lemon water - I've unearthed a clay pitcher, a wedding present from J&M, and it is my new table companion. A full lemon, some ice, some water. Summer. Summer and a bowl full of lemons. […] - hyd

way

[photo] New favourite garden flower is a hardy geranium called 'patricia' - a surprisingly large plant this year absolutely covered in long-blooming, dark pink flowers. The rest of the perennial garden is coming along... I think in a couple more years it could be glorious. […]

There's also a load of gravel in the driveway (scored from D's work site for $50) and 4 cord of wood coming. Really have to watch the heavy lifting as it's tempting to get to work. […] - hyd

fueled

[photo] The reality of solo-dom disappearing soon hits me almost everyday, and in rushes a profound tender admiration for the woman I was in my 20's. Lost yes, but exploring, free to investigate, stuck, then free again, unleashed into the world. There's so much to see. […] - hyd

domestic eveninggg

[photo] Long day at the office on top of a slow day at the store, home a bit after 5. Husband working late. Pet dogs, sit on porch, breath. Deadhead baskets and pots of porch flowers. Rub belly, smile. Pet dogs again, sit on porch again, breath. Strip bed, throw blankets and pillowcases in the wash, start washing breakfast dishes. Put eggs on to boil, cut up potatoes, put those on to boil. Bring chickens kitchen scraps, clean water, collect eggs. Feed dogs. Pick chives, make potato salad for my hard-working husband and put in fridge. Eat a pickle. Eat some hummus. Eat crackers, drink water. Take butter out of fridge and find a cookie recipe with hopes it'll turn out and be a gift for neighbour's birthday, and excuse to visit. Hang laundry on the line, breath. Make cookies (chewy oatmeal choc chip), bake cookies, smell cookies. Smell lilacs in vase on kitchen table. Call, bribe, chain Luna who seems to wander to the road in the evenings to chase neighbourhood walkers, joggers, rabbits.
Wash potato salad and cookie dishes. Yell at barking dogs. Eat cookie... not bad. Drink water, take vitamin, enjoy breeze through windows, singing frogs. Three hours of barefoot, pregnant, domestic bliss. Blog. - hyd

begins again

[photo] It's been too long since I've written - have to not do that because I so enjoy reading back years later on the happenings of each month. Above is a photo of our deck in action: garden stuff, chairs, leftover wood, dogs. I came home the other day and Luna was chewing on Whisk's electric collar, so now we're tying it to him till we fix it - the sound of the thing warning him of the perimeter is enough to keep him in. He's a timid dude. Lune's generally always chewing/destroying something... looks like a hunk of wood in pic, leaving a trail of wood shards throughout the front lawn. Messy but cute.

We split on a tiller rental with the neighbours this weekend, and D tilled up our big garden this morning, while the rain let up. So nice to have a huge spot of soil ready and waiting. Last week I planted greens, peas, beets, radishes, dill, cilantro. The garlic is up and mulched with chicken straw (the rest of the coop to be cleaned out soon and also dumped on the garden).

The flower beds are in full swing, with spring bulbs in bloom and a few other earlies brightening up the place. I went nutty at the local greenhouse yesterday and bought a flat of pansies, a flat of impatiens, a flat of petunias... and a few other odds and ends (geraniums, potato vine, creeping charley, fuchsia, million bells). I told the town flower-planning guy I'd be creating the hanging baskets for the store myself this year instead of buying from him... and the theme is "hot pink" - we'll see how I do.

Other things are growing too but more on that later...! - hyd

shells

[photo] Just kissed my love goodbye for the day - he's en route to roof. This boss is tight on safety tho (phew) so he'll have a harness on and I will not worry. He probably won't be home by the time I leave for the city. I've been driving in in the evenings lately, now that the evenings are brighter, and I feel more awake then. Gives me a nice start in the morning - a walk through the city's sidewalk commuters and store openers. I like that buzz time. I also arrive to the office less frazzled... less chance of my clothes being inside out and hair unbrushed. […]

I've always liked this quote by Dan Quisenberry, whoever he is: The future is much like the present, only longer.

Update: I wikipedia'd Dan and he's an old baseball pitcher, then poet. RIP. Apparently he pitched "submarine style" which I then proceeded to watch videos of on YouTube. Ah, 2011. - hyd

down and up

[photo] Been feeling more inspired to make music than write, but not doing much of either.
This extendo winter/fake out spring is sort of exhausting. Like, right down into the bones. The sun makes me want to bloom but the cold keeps me frosty and am sorta left feeling fragile. More sensitive to the cold, more sleepy, more impatient and sore. HOWEVER, spring is around the corner so just holding out, staying by the fire, snuggling. Watching the sun set at 8pm, starting seeds, ogling summer dresses, spending a few moments on the porch to hear the birds, opening the window a crack for 10 minutes of fresh, beautiful air. - hyd

monday

[photo] Weeks matter now. They feel like flags of victory, each Friday sounding the horn. A slow steady trot on Monday becomes a canter mid-week, and a full running gallop by the end. Appearance groomed, hoops jumped, ribbons won and lost. Then a weekend to sit and wonder, in awe of the coming spring and so much more.

Photo by my husband, capturing the nice light last night after yet another dump of snow. March, what's up? You flood us then you freeze us! I see warmer temps in the coming forecast tho... - hyd
|
http://feeds.feedburner.com/hellotrees
|
CC-MAIN-2017-22
|
refinedweb
| 4,628 | 73.37 |
04 September 2012 09:29 [Source: ICIS news]
SINGAPORE (ICIS)--Saudi Kayan Petrochemical began start-up activity at its new 300,000 tonne/year low density polyethylene (LDPE) unit at Al Jubail in Saudi Arabia, a company source said.
The company was feeding ethylene into the plant as part of the start up process, the source explained.
Previously, the new LDPE plant was scheduled to start up in July, the source said, without stating any reason for the delay.
“[The plant] is at its starting-up phase now. However, it remains unclear when the commercial start-up will take place,” the source added.
Products from this new unit will be marketed globally, with the main market targeted at […]
Saudi Arabia chemicals major SABIC owns a 35% stake in Saudi Kayan Petrochemical, while Al-Kayan Petrochemical holds a 20% stake. The remaining 45% is held by the public.
|
http://www.icis.com/Articles/2012/09/04/9592386/saudi-kayan-begins-start-up-activity-at-new-ldpe-unit.html
|
CC-MAIN-2014-52
|
refinedweb
| 143 | 60.55 |
23 May 2012 07:12 [Source: ICIS news]
By: Ong Sheau Ling
Indian importers have no interest in procuring fresh cargoes this week, as the Indian rupee (Rs) exchange rate crossed the psychological mark of Rs55 against $1 to stand at Rs55.4 to $1 on 23 May.
The rupee-US dollar exchange rate has depreciated by 11% since early March, market sources said.
“It is too risky to import now. We have no idea what our cost will be when the cargoes arrive later,” a Mumbai-based trader said.
India’s customs authority – the Central Board of Excise and Customs - quoted an exchange rate of Rs53.10 against $1 for April, Rs2 higher than the previous month and a rise for the second straight month. Market players expect the board to quote another increase in May which will weigh down import interest further.
“There is no business at all now. If we can’t sell our stocks, how can we import?” another Mumbai-based trader said.
Some large-based converters are delaying the opening of LCs (letter of credit) because of the volatile rupee and the downward trend in the overseas markets.
“End-users are panicking,” a northeast Asian polyolefins producer said, adding that they are uncertain how much their imported cargoes will cost upon arrival, given the exchange rate.
Cargoes arriving on May are estimated to be about 3-5% more expensive than when they were purchased in March when the exchange rate was at about Rs53 against $1.
“The cost of imports is rising day by day. Distributors are actively liquidating their cargoes, so as to get out of the market as soon as possible,” a Daman-based film converter said.
Some Mumbai-based traders are still holding LDPE and HDPE film stocks priced at close to $1,500/tonne (Rs 83,100/tonne) CFR (cost & freight) Mumbai, about 10% higher than the current spot price level.
In addition, an influx of lower-priced Iranian low density PE (LDPE) film and high density PE (HDPE) film has caused a widespread liquidation of high cost stock in the Indian domestic market by local traders.
“The rampant offers of Iranian [LDPE and HDPE film] cargoes are adding more chaos to the already bearish market. We have to offload our stocks at a loss as a result. We can’t hold on to our high cost stocks anymore,” another Mumbai-based trader said.
Spot import prices of various grades of PE and PP in India fell for five consecutive weeks to $1,370-1,500/tonne CFR Mumbai for the week ended on 18 May, down by $85-125/tonne or 5.6-10.5% (please see graph below).
As local distributors and traders seek to clear their inventories, local converters are trying to keep just enough stock to cover their immediate needs.
“We may just cover any shortfall by buying locally instead to avoid any risk attached to the volatile currency,” a domestic oriented Daman-based converter said.
Export-oriented converters are less affected by the depreciation of the rupee, but buying ideas for imports have fallen because of abundant lower-priced imports offered in the domestic market.
“We can procure imports locally as well at a cheaper price, so we need not import directly from the […]
In the open domestic market, local and imported products were traded at Rs90/kg […]
Market players largely expect PE and PP import prices to continue the downtrend and for Indian producers to experience better sales, while it will be harder for deals to be done on imports.
“Despite lower LLDPE (linear low density PE) prices, we are still selling,” an Indian polyolefins maker said, adding that LLDPE supply is still fundamentally tight, unlike the other PE grades.
Indian producers have decreased their list prices by as much as Rs3.00/kg with effect from 17 May because of the weak market sentiment.
“It remains unclear how long this weak buying interest will continue but certainly, import offers for June shipments will be lower,” a Mangalore-based trader said.
($1 = Rs55.4)
|
http://www.icis.com/Articles/2012/05/23/9562528/import-activity-for-pe-and-pp-halts-in-india-on-record-low-rupee.html
|
CC-MAIN-2014-35
|
refinedweb
| 680 | 59.53 |
Created on 2015-04-18 09:00 by neologix, last changed 2016-02-12 22:56 by neologix. This issue is now closed.
hanger.py
"""
from time import sleep
def hang(i):
sleep(i)
raise ValueError("x" * 1024**2)
"""
The following code will deadlock on pool.close():
"""
from multiprocessing import Pool
from time import sleep
from hanger import hang
with Pool() as pool:
try:
pool.map(hang, [0,1])
finally:
sleep(0.5)
pool.close()
pool.join()
"""
The problem is that when one of the tasks comprising a map result fails with an exception, the corresponding MapResult is removed from the result cache:
def _set(self, i, success_result):
success, result = success_result
if success:
[snip]
else:
self._success = False
self._value = result
if self._error_callback:
self._error_callback(self._value)
<===
del self._cache[self._job]
self._event.set()
===>
Which means that when the pool is closed, the result handler thread terminates right away, because it doesn't see any task left to wait for.
Which means that it doesn't drain the result queue, and if some worker process is trying to write a large result to it (hence the large valuerrror to fill the socket/pipe buffer), it will hang, and the pool won't shut down (unless you call terminate()).
Although I can see the advantage of fail-fast behavior, I don't think it's correct because it breaks the invariant where results won't be deleted from the cache until they're actually done.
Also, the current fail-fast behavior breaks the semantics that the call only returns when it has completed.
Returning while some jobs part of the map are still running is potentially very bad, e.g. if the user call retries the same call, assuming that all the jobs are done. Retrying jobs that are idempotent but not parallel execution-safe would break with the current code.
The fix is trivial, use the same logic as in case of success to only signal failure when all jobs are done.
I'll provide a patch if it seems sensible :-)
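In outline, the proposed logic looks like the sketch below (an outline only; the committed patch may differ in detail): record the first failure when it happens, but only fire the callback, remove the cache entry, and set the event once _number_left reaches zero:

def _set(self, i, success_result):
    self._number_left -= 1
    success, result = success_result
    if success and self._success:
        self._value[i*self._chunksize:(i + 1)*self._chunksize] = result
        if self._number_left == 0:
            if self._callback:
                self._callback(self._value)
            del self._cache[self._job]
            self._event.set()
    else:
        if not success and self._success:
            # only store the first exception
            self._success = False
            self._value = result
        if self._number_left == 0:
            # only signal failure once all jobs are done
            if self._error_callback:
                self._error_callback(self._value)
            del self._cache[self._job]
            self._event.set()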
This is a nice example demonstrating what I agree is a problem with the current implementation of close.
A practical concern with what I believe is being proposed in your trivial fix: if the workers are engaged in very long-running tasks (and perhaps slowly writing their overly large results to the results queue) then we would have to wait for quite a long time for these other workers to reach their natural completion.
That said, I believe close should in fact behave just that way and have us subsequently wait for the others to be completed. It is not close's job to attempt to address the general concern I bring up.
This change could be felt by people who have written their code to expect the result handler's immediate shutdown if there are no other visible results -- it is difficult to imagine what the impact would be.
This is my long-winded way of saying it seems very sensible and welcome to me if you took the time to prepare a patch.
Patches for 2.7 and default.
Barring any objections, I'll commit within the next few days.
@neologix: Budgeting time this week to have a proper look -- sorry I haven't gotten back to it sooner.
The patches make good sense to me -- I have no comments to add in a review.
I spent more time than I care to admit concerned with the idea that error_callback (exposed by map_async which map sits on top of) should perhaps be called not just once at the end but each time an exception occurs. Motivated by past jobs which failed overall to yield any results because one out of a million of the inputs triggered an error, I thought the idea very appealing and experimented with implementing it (with happy results). Googling for it though, I found plenty of examples of people asking questions about how callback and error_callback are intended to work -- though the documentation is not explicit on this particular point, most of those search results correctly document in the wild that error_callback is called only once at the end just like callback. I think it best to leave that functionality just as you have it now.
Thanks for creating the patch -- looks great to me.
As an aside: issue24948 seems to show there are others who would find the immediate-multiple-error_callback idea attractive.
New changeset 1ba0deb52223 by Charles-François Natali in branch 'default':
Issue #23992: multiprocessing: make MapResult not fail-fast upon exception.
|
https://bugs.python.org/issue23992
|
CC-MAIN-2021-39
|
refinedweb
| 763 | 59.43 |
How to: View Existing Key Bindings
Visual Studio add-ins are deprecated in Visual Studio 2013. You should upgrade your add-ins to VSPackage extensions. For more information about upgrading, see How to: Convert an Add-in to a VSPackage. For more information about keyboard shortcuts, see Keyboard Shortcuts.
Viewing existing key bindings
Create an add-in.
For more information about using the Visual Studio Add-In Wizard, see How to: Create an Add-In.
Add a reference to System.Windows.Forms, and add this namespace to the using (or Imports) statements for the Connect class.
Paste the function below into the Connect class in the code.
To run the add-in, click Add-in Manager on the Tools menu, select the add-in you created, and click OK.
A message box displays a list of all shortcut keys bound to the File.NewFile command.
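The function referenced in step 3 did not survive in this copy of the article. As a rough C# sketch of what gets pasted into the Connect class (assuming the DTE2 field named _applicationObject that the Add-In Wizard generates), it would look something like this:

public void ListKeyBindings()
{
    // Bindings returns an object array of binding strings such as "Global::Ctrl+N".
    object[] bindings = (object[])_applicationObject.Commands
        .Item("File.NewFile", -1).Bindings;

    string msg = "";
    foreach (object binding in bindings)
    {
        msg += (string)binding + "\n";
    }

    System.Windows.Forms.MessageBox.Show(
        "Keys bound to File.NewFile:\n" + msg);
}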
|
http://msdn.microsoft.com/en-us/library/ms228756.aspx
|
CC-MAIN-2013-48
|
refinedweb
| 138 | 67.15 |
CodePlexProject Hosting for Open Source Software
Is it possible to use the _orchardServices.ContentManager.Query function with a where statement that is generated into the SQL query, so I don't retrieve a big array in memory and filter it afterwards?
Sure. Do a join.
How do you mean a join? Because with a join I can't filter, right? I have the following query, but the where statement doesn't work:
var result = _orchardServices.ContentManager.Query(VersionOptions.Published, "CommunityMyMessage")
.Join<CommunityMyMessagePartRecord>().Where(s =>
s.UserPartRecordSender != null &&
s.UserPartRecordSender.Id == _orchardServices.WorkContext.CurrentUser.Id);
If what you want to query on is the CommunityMyMessage part/record, you can use Where without doing a Join if you use the generic overload of the Query method. Note, this overload is an extension method in the Orchard.ContentManagement namespace, so you
need to have that imported. You can do something like this:
var result = _orchardServices.ContentManager.Query<CommunityMyMessagePart,CommunityMyMessagePartRecord>()
.Where(r =>
//r is typed as CommunityMyMessagePartRecord here, so you can do whatever filter you want on that
).List();
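Putting the two together, a hypothetical version of the original query using the generic overload might look like the following. Note the current user id is read into a local variable first; referencing WorkContext inside the Where lambda cannot be translated to SQL, which may be why the original where statement failed:

var currentUserId = _orchardServices.WorkContext.CurrentUser.Id;

var result = _orchardServices.ContentManager
    .Query<CommunityMyMessagePart, CommunityMyMessagePartRecord>(VersionOptions.Published)
    .Where(r => r.UserPartRecordSender != null
             && r.UserPartRecordSender.Id == currentUserId)
    .List();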
Also, look for examples of usage of Join and Where in the source code. For example in ItemController.
|
http://orchard.codeplex.com/discussions/270489
|
CC-MAIN-2017-43
|
refinedweb
| 234 | 59.7 |
I came across the below function:

def some_name = {a, b, c -> a==c?bumpUp(b):b}

and I don't understand these three parts of it:

1. bumpUp(b)
2. :b
3. bumpUp(b):b
The ternary operator, which involves both a ? and a :, is a short and clean equivalent of an if/else statement. It reduces code verbosity and hence makes the code more readable.
For example:
String result = (1==1) ? 'equals' : 'not equals'
Let's explain the line above:
If the condition (1==1) is true, then the result assignment will be 'equals', else the result assignment will be 'not equals'.
The long way you are probably familiar with is as follows:

String result = ""
if (1==1) {
    result = 'equals'
} else {
    result = 'not equals'
}
In order to answer your 3 questions: for the question 2 section, the ternary expands to the following if/else:

if (a==c) {
    some_name = bumpUp(b)
} else {
    some_name = b
}
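To make the pieces concrete, here is a small runnable sketch; bumpUp is hypothetical, since its real implementation wasn't shown in the question:

def bumpUp = { x -> x + 1 }                      // hypothetical stand-in
def some_name = { a, b, c -> a == c ? bumpUp(b) : b }

assert some_name(1, 5, 1) == 6   // a == c, so the bumpUp(b) branch runs
assert some_name(1, 5, 2) == 5   // a != c, so the :b branch returns b as-is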
|
https://codedump.io/share/5LboPMdTbuUk/1/groovy-syntax-explaination
|
CC-MAIN-2018-13
|
refinedweb
| 131 | 71.44 |
The DFS-N test team has completed some extensive Performance and Scalability testing and we wanted to share some of the results.
To start, here is a comparison of the performance of adding links to a standalone namespace. It includes data for Windows Server 2003, Windows Server 2008 and Windows Server 2008 R2:
As you can see, there is a significant improvement between Windows Server 2003 and Windows Server 2008. Windows Server 2003 goes “off the chart” (above 1 second) after around 30,000 links. Windows Server 2008 and Windows Server 2008 R2 perform similarly up to about 300,000 links per namespace. After that, Windows Server 2008 R2 performs better than Windows Server 2008. We ran the tests with up to 500,000 links per namespace.
The next chart looks into the performance of “2000 mode” domain namespace. Keep in mind that the maximum recommended number of links to “2000 mode” namespaces is 5,000 links per namespace due to limitation in which namespace data is stored in Active Directory in “2000 mode”. A subset of the “2008 mode” domain data is provided for comparison (up to 50,000 links so that you can still see the “2000 mode” data in the chart).
You can verify in the chart that the “2008 mode” domain namespace is consistently faster than the “2000 mode” domain namespace. You can also see that Windows Server 2008 and Windows Server 2008 R2 perform similarly.
Next, we look further into “2008 mode” domain namespaces, pushing the limits in terms of number of links per namespace:
You can see how Windows Server 2008 R2 scales better than Windows Server 2008 for which the performance degrades after around 300,000 links per namespace. We kept adding links in Windows Server 2008 R2 until we had 1,300,000 links, without any issues. At this point we could observe a saturation behavior similar to the one observed around 300,000 links for Windows Server 2008.
The last chart below compares the startup performance of the “2008 mode” domain namespace, with up to 200,000 links:
As you can see, Windows Server 2008 R2 consistently performs better than Windows Server 2008.
We will be presenting details on how the tests were performed and more test results in a presentation during the upcoming Storage Developers Conference. For details, check […]
Post by Marcello Hasegawa
|
https://techcommunity.microsoft.com/t5/storage-at-microsoft/windows-server-dfs-namespaces-performance-and-scalability/ba-p/423976
|
CC-MAIN-2020-50
|
refinedweb
| 393 | 59.53 |
The Samba-Bugzilla – Bug 13246
backport Samba VirusFilter
Last modified: 2018-02-07 09:57:45 UTC
This ticket will be used to track the inclusion of Samba VirusFilter in Samba 4.8.0.
Created attachment 13924 [details]
backport for 4.8-test
The backport was a straight cherry-pick. I've only compile-tested this patchset.
Reassigning to Karolin for inclusion in 4.8.
Created attachment 13925 [details]
add release notes entry for vfs_virusfilter
@Trever: Feel free to expand on this if you'd like to add more detail.
Comment on attachment 13925 [details]
add release notes entry for vfs_virusfilter
Minimal but to the point. Let's see what Trever and Jeremy have to say about this.
I think this is good. Thank you.
Please, don't ship this just yet. I just hit a crasher. I am sorry this was missed.
(In reply to Trever Adams from comment #6)
> Please, don't ship this just yet. I just hit a crasher. I am sorry this was
> missed.
Thanks for the heads up, taking this of Karolin's queue for now.
I am wondering if I have a bad compile. I updated the system this was on and recompiled.
[2018/01/24 10:18:30.028695, 10, pid=20989, effective(3000007, 100), real(3000007, 0), class=virusfilter] ../source3/modules/vfs_virusfilter.c:1013(virusfilter_scan)
virusfilter_scan: Searching cache entry: fname: (null)
[2018/01/24 10:18:30.028711, 10, pid=20989, effective(3000007, 100), real(3000007, 0), class=virusfilter] ../source3/modules/vfs_virusfilter.c:1024(virusfilter_scan)
virusfilter_scan: Cache entry not found
fname should never be null.
static virusfilter_result virusfilter_scan(
struct vfs_handle_struct *handle,
struct virusfilter_config *config,
const struct files_struct *fsp)
{
virusfilter_result scan_result;
char *scan_report = NULL;
const char *fname = fsp->fsp_name->base_name;
const char *cwd_fname = fsp->conn->cwd_fname->base_name;
struct virusfilter_cache_entry *scan_cache_e = NULL;
bool is_cache = false;
virusfilter_action file_action = VIRUSFILTER_ACTION_DO_NOTHING;
bool add_scan_cache = true;
bool ok = false;
It is impossible for it to be NULL is it not?
What does the gdb backtrace say ?
To get into gdb, set:
panic action = /bin/sleep 9999999
in the [global] section of your smb.conf. Now when it crashes you'll be able to attach with gdb and fully inspect the smbd.
I have never seen fname be null before. I am sorry for missing this.
I believe the problem is that fname is NULL which, how can it be unless there is corruption somewhere. The problem does NOT happen if the file is infected. Only if it is clean. (Yes, I did test clean files, although possibly not the last two patches, but I think I did.)
#0 0x00007f41c9b887ea in waitpid () from /lib64/libc.so.6
#1 0x00007f41c9af4827 in do_system () from /lib64/libc.so.6
#2 0x00007f41cb676fc7 in smb_panic_s3 (why=0x7f41cd952cbd "internal error")
at ../source3/lib/util.c:817
#3 0x00007f41cd900496 in smb_panic (why=0x7f41cd952cbd "internal error")
at ../lib/util/fault.c:166
#4 0x00007f41cd900182 in fault_report (sig=11) at ../lib/util/fault.c:83
#5 0x00007f41cd900197 in sig_fault (sig=11) at ../lib/util/fault.c:94
#6 <signal handler called>
#7 0x00007f41c9b57286 in __strlen_sse2 () from /lib64/libc.so.6
#8 0x00007f41a64529f8 in virusfilter_clamav_scan (handle=0x55dcd54417b0,
config=0x55dcd5c022c0, fsp=0x55dcd5197b60, reportp=0x7fff795ebb70)
at ../source3/modules/vfs_virusfilter_clamav.c:86
#9 0x00007f41a644ce09 in virusfilter_scan (handle=0x55dcd54417b0,
config=0x55dcd5c022c0, fsp=0x55dcd5197b60)
at ../source3/modules/vfs_virusfilter.c:1038
#10 0x00007f41a644e526 in virusfilter_vfs_close (handle=0x55dcd54417b0,
fsp=0x55dcd5197b60) at ../source3/modules/vfs_virusfilter.c:1393
#11 0x00007f41cd4d1ce7 in smb_vfs_call_close (handle=0x55dcd54417b0,
fsp=0x55dcd5197b60) at ../source3/smbd/vfs.c:1736
#12 0x00007f41cd4ba613 in fd_close (fsp=0x55dcd5197b60)
at ../source3/smbd/open.c:790
#13 0x00007f41cd4c801f in close_normal_file (req=0x55dcd517b030,
---Type <return> to continue, or q <return> to quit---
fsp=0x55dcd5197b60, close_type=NORMAL_CLOSE) at ../source3/smbd/close.c:762
#14 0x00007f41cd4c9756 in close_file (req=0x55dcd517b030, fsp=0x55dcd5197b60,
close_type=NORMAL_CLOSE) at ../source3/smbd/close.c:1234
#15 0x00007f41cd5182de in smbd_smb2_close (req=0x55dcd517a940,
fsp=0x55dcd5197b60, in_flags=0, out_flags=0x55dcd517ae82,
out_creation_ts=0x55dcd517ae88, out_last_access_ts=0x55dcd517ae98,
out_last_write_ts=0x55dcd517aea8, out_change_ts=0x55dcd517aeb8,
out_allocation_size=0x55dcd517aec8, out_end_of_file=0x55dcd517aed0,
out_file_attributes=0x55dcd517aed8) at ../source3/smbd/smb2_close.c:260
#16 0x00007f41cd518569 in smbd_smb2_close_send (mem_ctx=0x55dcd517a940,
ev=0x55dcd4f76c70, smb2req=0x55dcd517a940, in_fsp=0x55dcd5197b60,
in_flags=0) at ../source3/smbd/smb2_close.c:334
#17 0x00007f41cd5179b7 in smbd_smb2_request_process_close (req=0x55dcd517a940)
at ../source3/smbd/smb2_close.c:70
#18 0x00007f41cd506aba in smbd_smb2_request_dispatch (req=0x55dcd517a940)
at ../source3/smbd/smb2_server.c:2627
#19 0x00007f41cd50aa07 in smbd_smb2_io_handler (xconn=0x55dcd50baf30,
fde_flags=1) at ../source3/smbd/smb2_server.c:3914
#20 0x00007f41cd50ab0d in smbd_smb2_connection_handler (ev=0x55dcd4f76c70,
fde=0x55dcd5e10280, flags=1, private_data=0x55dcd50baf30)
at ../source3/smbd/smb2_server.c:3952
#21 0x00007f41c9e9a670 in epoll_event_loop_once () from /lib64/libtevent.so.0
#22 0x00007f41c9e98af7 in std_event_loop_once () from /lib64/libtevent.so.0
---Type <return> to continue, or q <return> to quit---
#23 0x00007f41c9e94f5d in _tevent_loop_once () from /lib64/libtevent.so.0
#24 0x00007f41c9e9517b in tevent_common_loop_wait () from /lib64/libtevent.so.0
#25 0x00007f41c9e98a97 in std_event_loop_wait () from /lib64/libtevent.so.0
#26 0x00007f41cd4ef1c8 in smbd_process (ev_ctx=0x55dcd4f76c70,
msg_ctx=0x55dcd4f75f80, sock_fd=48, interactive=false)
at ../source3/smbd/process.c:4127
#27 0x000055dcd320b9da in smbd_accept_connection (ev=0x55dcd4f76c70,
fde=0x55dcd5e1dd40, flags=1, private_data=0x55dcd5bcbf40)
at ../source3/smbd/server.c:1030
#28 0x00007f41c9e9a670 in epoll_event_loop_once () from /lib64/libtevent.so.0
#29 0x00007f41c9e98af7 in std_event_loop_once () from /lib64/libtevent.so.0
#30 0x00007f41c9e94f5d in _tevent_loop_once () from /lib64/libtevent.so.0
#31 0x00007f41c9e9517b in tevent_common_loop_wait () from /lib64/libtevent.so.0
#32 0x00007f41c9e98a97 in std_event_loop_wait () from /lib64/libtevent.so.0
#33 0x000055dcd320c774 in smbd_parent_loop (ev_ctx=0x55dcd4f76c70,
parent=0x55dcd4f876b0) at ../source3/smbd/server.c:1397
#34 0x000055dcd320e8ff in main (argc=4, argv=0x7fff795ec7f8)
at ../source3/smbd/server.c:2164
As I look at this,
virusfilter_scan: Scan result: Clean: /home/MIDDLEEARTH-data/sv.stupiddog.infected
[2018/01/24 10:30:03.947950, 10, pid=21322, effective(3000007, 100), real(3000007, 0), class=virusfilter] ../source3/modules/vfs_virusfilter.c:1113(virusfilter_scan)
virusfilter_scan: Adding new cache entry: sv.stupiddog.infected, 1
[2018/01/24 10:30:03.971563, 3, pid=21322, effective(3000007, 100), real(3000007, 0), class=virusfilter] ../source3/modules/vfs_virusfilter.c:1380(virusfilter_vfs_close)
virusfilter_vfs_close: Not scanned: File not modified: /home/MIDDLEEARTH-data/(null)
it appears that this would be closing/opening of a directory. Is this accurate? How do I test to make sure it isn't a directory if that is the case?
Sorry, I found out what I was looking for. I have used it before (fsp->is_directory).
I think I found the problem. Somewhere along the line
char *fname = fsp->fsp_name->base_name = NULL;
crept in. Likely one of my iterative cleanups.
I am doing one or two other small cleanups while there and testing. I am not sure why this sometimes caused a problem and others did exactly what it should have including the log messages.
Oh, that's a fault of the review process - both Ralph and I missed it, sorry.
Please send a patch to samba-technical asap.
Ultimately we'll need a test suite for this, probably using a fake virus-scanner backend that looks for files matching a wildcard of *bad* and marks them as infected.
Is the Samba project willing to ship a copy of eicar test virus? I suppose that would still need the test setup to have clamav.
I found where it crept in. It was in a cleanup. I am sorry that I missed correcting it. It appeared after version 14.
I am still waiting for a compile to finish before I can test my changes. (Which should short circuit on directories saving a little processing.)
Sorry, that is version 12.
Post the changes as *two* patches. One that fixes the fsp->base_name = NULL issue, and one that fixes the 'is_directory' check.
Let's keep reviewing more simple this time please :-).
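In outline, the two fixes amount to something like this sketch (illustrative only, not the literal committed patch):

/* Fix 1: read base_name without assigning to it. */
const char *fname = fsp->fsp_name->base_name;

/* Fix 2: skip scanning on directories early in the close path,
 * since backends such as ClamAV won't scan them anyway. */
if (fsp->is_directory) {
        return SMB_VFS_NEXT_CLOSE(handle, fsp);
}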
(In reply to Trever Adams from comment #15)
No, we don't need to have a real antivirus setup to test the code changes. Simply an external process spun up at test that simply marks any file matching a known wildcard as "infected".
(In reply to Trever Adams from comment #15)
echo 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.dat
Voila!
The patches have been sent.
I am creating a python script that fakes being clamav. The protocol is simple.
(In reply to Ralph Böhme from comment #19)
> (In reply to Trever Adams from comment #15)
> echo 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
> > eicar.dat
Looks like it could put the source repo/git.samba.org in danger of getting black listed.
Created attachment 13926 [details]
fake clamav daemon for testing in future.
import re is only needed if you are using the check_bad_dog function.
check_eicar should be removed if it will get Samba in trouble.
If the check_eicar isn't used look at line 57 & 58 to make appropriate change.
In your test use the clamav module, use this script to pretend to be the clamav daemon.
I have only tested this with socat, which I am not sure how to use to send null terminated strings which vfs_virusfilter will send, so I changed the \0 to \r in readcstr for testing.
any file name containing regex "bad.dog" (the period of course being any character) will trip the check_bad_dog version.
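For reference, the core of such a fake scanner is small. A sketch, assuming null-terminated "SCAN <path>" style requests over a unix socket as the description above implies (the socket path and virus name here are made up):

import os
import re
import socket

SOCKET_PATH = "/tmp/fake_clamd.sock"   # point vfs_virusfilter at this socket
BAD = re.compile(r"bad.dog")           # filenames matching this are "infected"

def readcstr(conn):
    """Read one null-terminated request from the client."""
    data = b""
    while not data.endswith(b"\0"):
        chunk = conn.recv(4096)
        if not chunk:
            break
        data += chunk
    return data.rstrip(b"\0").decode("utf-8", "replace")

def main():
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCKET_PATH)
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        req = readcstr(conn)            # e.g. "zSCAN /path/to/file"
        path = req.split(" ", 1)[-1]    # drop the command word, keep the path
        if BAD.search(path):
            reply = "%s: Fake.Dog FOUND" % path
        else:
            reply = "%s: OK" % path
        conn.sendall(reply.encode() + b"\0")
        conn.close()

if __name__ == "__main__":
    main()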
Let's develop the test fake clamav daemon on the samba-technical list rather than on this bug.
Alright. I have put forth a proposal on the list.
As for this bug, things have been more thoroughly tested here. With the actual crasher bug fix committed, the work on back porting to 4.8 can go ahead. I would prefer to also have the optimization as it may be necessary for Sophos and FSAV backends to work properly. (ClamAV won't scan directories using the command used in that backend. Sophos and FSAV may and that may have strange consequences not just poor performance.)
Created attachment 13932 [details]
v2 backport for 4.8-test
This is again a straight cherry-pick from master, with Trever's two latest fixes (c890011a769b497855748e130fa41e998babc305 and e320c4c9b7426be296b3c311861ba2ddeeacdf9f) included. Compile tested only.
Comment on attachment 13932 [details]
v2 backport for 4.8-test
Thanks David, but can you please add the bug urls to the commits? Thanks!
Created attachment 13933 [details]
v3 backport for 4.8-test
Same patchset with Bug URL in commit msgs..
Jeremy.
(In reply to Jeremy Allison from comment #29)
Given that this is a backport patchset, I'd much prefer that we cherry-pick the master commits as-is, rather than squash any changes.
@Karolin, please apply the v3 backport and release notes patches for 4.8.next.
(In reply to David Disseldorp from comment #31)
Done.
Pushed to autobuild-v4-8-test.
(In reply to Karolin Seeger from comment #32)
> (In reply to David Disseldorp from comment #31)
> Done.
> Pushed to autobuild-v4-8-test.
Thanks Karo, though it looks as though the release notes patch "add release notes entry for vfs_virusfilter" didn't make it into autobuild.
(In reply to David Disseldorp from comment #33)
Oh, you are right, sorry!
Pushed to autobuild-v4-8-test now.
Closing... @Trever: please follow up with the selftest functionality on the mailing list. Thanks everyone
|
https://bugzilla.samba.org/show_bug.cgi?id=13246
|
CC-MAIN-2019-51
|
refinedweb
| 1,831 | 60.72 |
Heya.. I located a book called "Hacking - The art of Exploitation" and somewhere into chapter 2 he flips me off because I just can't seem to understand his technique.. Here's his example code:
vuln.c
Code:
#include <string.h>

int main(int argc, char *argv[])
{
    char buffer[500];
    strcpy(buffer, argv[1]); // no bounds check: argv[1] can overflow buffer
    return 0;
}
exploit.c
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* The book defines a shellcode byte array here; it was lost from this
   post. Placeholder only -- substitute real shellcode to actually test. */
char shellcode[] = "...";

long sp(void) // This is just a little function
{ __asm__("movl %esp, %eax");} // used to return the stack pointer
int main(int argc, char *argv[])
{
int i, offset;
long esp, ret, *addr_ptr;
char *buffer, *ptr;
offset = 0; // Use an offset of 0
esp = sp(); // Put the current stack pointer into esp
ret = esp - offset; // We want to overwrite the ret address
printf("Stack pointer (ESP) : 0x%x\n", esp);
printf(" Offset from ESP : 0x%x\n", offset);
printf("Desired Return Addr : 0x%x\n", ret);
// Allocate 600 bytes for buffer (on the heap)
buffer = malloc(600);
// Fill the entire buffer with the desired ret address
ptr = buffer;
addr_ptr = (long *) ptr;
for(i=0; i < 600; i+=4)
{ *(addr_ptr++) = ret; }
// Fill the first 200 bytes of the buffer with NOP instructions
for(i=0; i < 200; i++)
{ buffer[i] = '\x90'; }
// Put the shellcode after the NOP sled
ptr = buffer + 200;
for(i=0; i < strlen(shellcode); i++)
{ *(ptr++) = shellcode[i]; }
// End the string
buffer[600-1] = 0;
// Now call the program ./vuln with our crafted buffer as its argument
execl("./vuln", "vuln", buffer, 0);
// Free the buffer memory
free(buffer);
return 0;
}
One needs to disable the inherent buffer overflow protection of the 2.6 kernel:
Code:
echo 0 > /proc/sys/kernel/randomize_va_space
Provided this is done, simple buffer overflows like this one should work (the protection is circumventable I hear).
Anyway.. A stack representation should be:
ESP
-------------------------------------------- Low memory addresses
buffer
SFP (stack/saved frame pointer)
ret (return address)
argc
argv
-------------------------------------------- High memory addresses
EBP
Ok.. so the stack grows upward towards lower memory addresses.. But his way of finding the correct return
address is to use the ESP/SP and *subtract* an offset from that value (in this case 0)..
Now.. I don't understand why he'd subtract from the ESP meaning he should end up in an even lower memory address which won't be the address of the buffer
... Modifying the vuln.c code to
vuln.c
Code:
#include <string.h>

int main(int argc, char *argv[])
{
    long HugeRoadblock[500];
    char buffer[500];
    strcpy(buffer, argv[1]);
    return 0;
}
Should give a stack of:
ESP
-------------------------------------------- Low memory addresses
buffer
HugeRoadblock
SFP (stack/saved frame pointer)
ret (return address)
argc
argv
-------------------------------------------- High memory addresses
EBP
And results in the code not being able to run
So.. could somebody tell me exactly HOW it makes sense to subtract an offset from the SP to reach the address of the buffer? I would have thought that you needed to *add* bytes.
Please help.. I'm really confused which probably shows in my post
|
http://www.antionline.com/printthread.php?t=273363&pp=10&page=1
|
CC-MAIN-2016-50
|
refinedweb
| 527 | 56.18 |
I recently received a request to incorporate a gauge component in my Design Studio dashboards. The higher-ups had seen an Xcelsius dashboard utilizing gauges, and wanted the same functionality in the dashboards I was building. As we all know, a gauge component is not included out-of-the-box with Design Studio, and searching on SCN turned up nothing but requests from others for the same component. So, with my limited knowledge of HTML and absolutely zero experience with JavaScript, I decided to roll up my sleeves and build one myself. The following is a brief chronicle of the process. This is not meant to be a literal step-by-step tutorial, but to demonstrate the thought process behind the creation of the component and provide direction to those looking to create something similar.
I started with a copy of the clock component provided in the DS 1.3 SDK Samples pack, and followed the steps in section 3.2 of the Design Studio SDK Developer Guide to rename the proper components of the extension.
I removed the hour and minute hands by deleting the appropriate sections of the component.js file. I recolored the outer boarder with a gradient fill and added an inner boarder with the reverse gradient fill to give a 3 dimensional look to the exterior ring. To achieve the correct dimensions, I increased the UNIT variable to 145 (from 100) and moved the tick marks outwards by changing their respective start/end variables.
I wanted the number of tick marks to be dynamic, so I created user inputs in the contribution.xml for the number of major and minor tick marks to create. Using these variables, I changed the for-loops which draw the hour and minute tick marks to create the specified number of tick marks with equidistant spacing across the upper 2/3 of the gauge face.
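Since the component source itself wasn't released, here is an illustrative sketch of that tick-drawing idea on an HTML5 canvas. The function and variable names are mine, not the component's: ctx is the 2D context, cx/cy the gauge centre, and count is assumed to be at least 2.

function drawTicks(ctx, cx, cy, count, rInner, rOuter) {
  var start = (5 / 6) * Math.PI;   // 150 degrees: lower-left end of the scale
  var sweep = (4 / 3) * Math.PI;   // 240 degrees: the upper two thirds
  for (var i = 0; i < count; i++) {
    var a = start + sweep * i / (count - 1);
    ctx.beginPath();
    ctx.moveTo(cx + rInner * Math.cos(a), cy + rInner * Math.sin(a));
    ctx.lineTo(cx + rOuter * Math.cos(a), cy + rOuter * Math.sin(a));
    ctx.stroke();
  }
}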
I then changed the look of the gauge needle by editing the “draw second hand” section of the component.js file. To set where the needle is pointing, I replaced the variable “seconds” with one that the user can input either in the properties pane (in the contribution.xml) or with a setValue method (in the contribution.ztl). I created three triangles which together form the needle. The shading of the needle changes as the needle crosses the direction from which the light appears to come from (top left corner). I also embellished the “boss” by adding some transparency and a linear gradient.
To emulate the Xcelsius color ranges in the gauge component, I created arcs that rest above the ticks that are colored red, orange, yellow, and green. The positions of these ranges should be input by the user, so I created inputs in the contribution.xml for the ends of the red, orange, and yellow ranges (each range begins where the previous one ends — red always begins at the minimum, green always ends at the maximum). I also created a setting that allows the user to select a continuous color distribution across the range instead of the discrete sections.
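Again as an illustrative sketch (not the actual component code), the ranges can be drawn as coloured arc segments whose end points are fractions of the min-max sweep:

function drawRanges(ctx, cx, cy, r, redEnd, orangeEnd, yellowEnd) {
  var start = (5 / 6) * Math.PI;
  var sweep = (4 / 3) * Math.PI;
  var stops = [0, redEnd, orangeEnd, yellowEnd, 1];   // fractions 0..1
  var colors = ["red", "orange", "yellow", "green"];
  ctx.lineWidth = 6;
  for (var i = 0; i < colors.length; i++) {
    ctx.beginPath();
    ctx.strokeStyle = colors[i];
    ctx.arc(cx, cy, r, start + sweep * stops[i], start + sweep * stops[i + 1]);
    ctx.stroke();
  }
}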
I wanted the option to have numbers on the dial, so I again created a user option to display numbers. The numbers are created when the tick marks are drawn by rotating the canvas inside of the “for” loop, placing the number text in the correct location, and rotating the canvas back before the next loop iteration. Correct placement of the numbers was moderately difficult to accomplish, mostly because by this point I had lost track of which direction the canvas was rotated and, subsequently, how to place the labels in the correct position relative to the center of the gauge. I created user inputs for the minimum and maximum of the dial, and allowed the gauge values to scale with the number of major tick marks the user selects (calculated based on the loop counter, the step size, and the min). One side effect of this approach: Since the labels are integers, we get some interesting results if the user selects a number of tick marks which does not evenly divide the range, as seen below.
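A sketch of the label idea, simplified: it translates to each tick position and draws the text upright rather than rotating the whole canvas and rotating back as the component does. The Math.round call is what produces the odd integer labels mentioned above when the tick count doesn't divide the range evenly.

function drawLabels(ctx, cx, cy, count, min, max, r) {
  var start = (5 / 6) * Math.PI;
  var sweep = (4 / 3) * Math.PI;
  var step = (max - min) / (count - 1);
  ctx.textAlign = "center";
  ctx.textBaseline = "middle";
  for (var i = 0; i < count; i++) {
    var a = start + sweep * i / (count - 1);
    ctx.save();
    ctx.translate(cx + r * Math.cos(a), cy + r * Math.sin(a));
    ctx.fillText(Math.round(min + i * step), 0, 0);
    ctx.restore();
  }
}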
The position of the needle is calculated based on the value (input by the user either in the properties pane or with the setValue method) relative to the max-min range (as seen above, if the value is 50, the needle points to 50 regardless of the min and max). As a finishing touch, I created a label in the bottom portion of the gauge to display the value the gauge is currently pointing to, as well as a user definable unit for the value (such as ‘%’ or ‘$’). This label can also be turned on and off.
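The mapping itself is linear; a sketch that also clamps the value so the needle cannot overshoot the dial when the input falls outside min..max:

// Clamp, then map min..max onto the dial's start..start+sweep arc.
var clamped = Math.max(min, Math.min(max, value));
var valueAngle = start + sweep * (clamped - min) / (max - min);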
I decided to add a bit of a “reflection” to the dial to simulate it being covered in glass. Then, last but not least, I let the user choose a “theme”: the default ‘silver’ or a slick ‘black’.
So ends my first adventure using the Design Studio SDK, the creation of my first custom component, and my first contribution to SCN. Your questions, comments, and suggestions are always welcome in the comments section below.
— Nick Stein
Industrious soul you are! Awesome job!
This is terrific. I like the progression you show in the screenshots. The end product looks really slick! Any plans on sharing the source on Github or as a .ZIP? 🙂
Hi Michael,
I’m waiting for permission from my parent company to release the source. As soon as I get the go-ahead, I’ll attach it to my post.
Thanks!
Nicholas:
This is wonderful work. Are there any other components which would require similar creation?
Sincerely,
Dave
Hi Dave,
Any component which is not included out-of-the-box in Design Studio, one of the SDK examples, or available here on SCN from another contributor would require similar treatment. A few that come to mind immediately are a dial component and combination charts (e.g. a chart that shows a column graph of one data set in the same plot as a line graph from another). Of course, the number of components one could build is really only limited by the imagination of the designer (and the client).
-Nick
Hi Nicholas
Thanks for this post.
It is certainly a very time-consuming task, at least for me, to change this clock into a gauge. I spent quite a few hours and decided to give up due to the effort required.
Would you be willing to share your .js source code? If so, thanks in advance! I will add animation to it and share back.
Best regards
Hi Stephen,
It was time consuming for me as well, since I had no previous experience with JavaScript or the HTML canvas element. Are you using only Eclipse? One thing that helped me greatly was the use of a separate HTML IDE for a real-time view of the component as I changed code. It makes the development process much faster as there is no need to launch Design Studio to see what the component looks like (85% of the process is just getting the thing to look right). You can test functionality by creating variables for values you intend the user to input.
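For illustration, a bare-bones harness along these lines is enough; drawGauge and the variables are stand-ins for whatever entry point your component.js exposes:

<!DOCTYPE html>
<html>
  <body>
    <canvas id="gauge" width="300" height="300"></canvas>
    <script src="component.js"></script>
    <script>
      // Stand-ins for values Design Studio would normally inject.
      var value = 50, min = 0, max = 100, majorTicks = 10;
      drawGauge(document.getElementById("gauge"), value, min, max, majorTicks);
    </script>
  </body>
</html>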
As for your question, unfortunately since the component design was done for a client, my parent company retains the rights to the source. At the moment, they are unwilling to release the source publicly. If that changes, I’ll be sure to update my post.
–Nick
That’s okay – thanks for the response. I will make some time to adjust the code.
Thanks for the tip – which HTML IDE do you use?
I use Aptana Studio 3. It’s free and open source, available from the Aptana website. Good luck!
Hi Nicholas
This is great work and I’m hoping it will help me build a custom component I have been tasked with. I need to create a thermometer 🙂
Did you use the Eclipse plugin for Aptana or Aptana Studio standalone?
Thanks
Hi Alex
Perhaps this link will help you – it contains code for a Thermometer.
Good luck.
Thanks Stephen I’ll take a look!
Really like the reflection part.
Really like your work.
Since you have not shared your component yet, I made one myself following your instructions; you can find it in this GitHub repo:
DesignStudioCustomGauge/README.md at master · Antoninjo/DesignStudioCustomGauge · GitHub
I have not implemented the 3D style on the needle or the reflection effect yet.
Feel free to use and modify that!
Antonio, thanks for sharing this component. Would you mind if I added your Gauge to the Utility Pack of community-created add-ons that I maintain? I would give credit where credit is due and not modify the core of your code (aside from a namespace change to get it to work with the other components).
Link: Design Studio 1.2/1.3 SDK – Design Studio Utility Pack
Michael feel free to use it and even improve it if you want.
Thanks!
I’ve added it to the Utility Pack post, which includes a deployable version that non-SDKers can use. I’ve not modified the source aside from where needed to package it.
Antonio,
Fantastic implementation! I really like the options to choose the range colors and the animation you added. Your component will certainly be a valuable resource for all.
Keep up the good work!
— Nick
Antonio, this is brilliant!
Thanks for sharing.
I just implemented it and it works brilliantly.
Hi
I noticed the component’s methods do not fire.
In order to resolve it, I corrected the contribution.ztl as follows: the code was referring to ‘val’, which I replaced with ‘value’.
I am now able to set the ‘properties’ in Design Studio with scripting.
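For anyone making the same fix, a corrected .ztl method looks roughly like this (class and method names are illustrative; the point is that the JavaScript body between {* and *} must use the declared parameter name):

class com.sample.gauge.Gauge extends Component {
  void setNeedle(float value) {*
    this.value = value;   // was 'val', which is undefined here
  *}
}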
Thanks again.
Thanks Stephen. Fixed on github!
Hi Antonio
I’ve downloaded your gauge and have used it in some of our reports.
My concern is using it for a measure bigger than 100. When I have a value bigger than 100, the gauge passes the set max value and stops wherever the gauge thinks the value would be past 100.
This is a simple example: I have a value of 150 assigned. Essentially it should stop at 150, but it passes 200 and stops at 0. Any feedback or help would be greatly appreciated. PS: I’m very new to Design Studio and the SDK.
Kind Regards,
Kevin
Hi Nicholas,
Thanks for sharing this great blog.
Hi all…
I just started to collect all available SDK components. Feel free to participate and send me other ones you know, and I will add them to the list…
List of Design Studio SDK Components
Dear all,
I like this gauge and have embedded the component to my Design Studio 1.3 Installation.
Is there a way to bind this gauge to a data source, like it is possible for a pie or column chart? I cannot find the Data Binding element in the properties.
Any idea would be great.
Thanks and best regards,
Stefanos L.
Hi Stefanos,
What I did is simply add a bit of scripting in the On Select property of the chart from which you want to trigger the action. This is what I did, and it works perfectly:
PERC_A_VALUE = MY_DATASET.getData("00O2TKFI2HZUO5CIQ7SLZLTHA", {"0DATE": MY_CHART.getSelectedMember("0DATE").internalKey}).value;
SPEEDOMETER_PERC_A.setNeedle(PERC_A_VALUE);
SPEEDOMETER_PERC_A.setShowedValue(Convert.floatToString(PERC_A_VALUE/100, "#.## %"));
This is just looking for the selected date value in a line chart, getting the corresponding value in another dataset (using that date as reference) and finally setting this value in the gauge.
Of course you could always trigger from any other component.
I hope this helps.
Regards,
Pablo
Hey Pablo, I know I’m posting a year later, but I have tried to replicate something similar: basically I set the speedo from 0 to 1 million, and I am trying to capture a salary value (just to try to make it work, I’m keeping it simple).
I created a drop-down box which filters results from the data source. Here is a snippet of that:
DS_1.setFilterExt("_n0o3cO1zEeOnHuJd3MXOCA", DROPDOWN_STATE.getSelectedValue()); // Filters the state
Then I created a variable to capture the result as a value:
var myvalue = DS_1.getData("_n07yYO1zEeOnHuJd3MXOCA", {}).value;
And then I applied the following script to alter the state of the needle:
SPEEDOMETER_1.setNeedle(myvalue);
SPEEDOMETER_1.setShowedValue(Convert.floatToString(myvalue));
The problem is, every time I choose a state from the dropdown (with state names as the filter), the needle starts spinning endlessly.
Any ideas?
Thanks
-Stav
Hi Nicholas,
This is absolutely amazing.
I have been using this for some days and found that sometimes, when you play around with it and move it between different containers, it suddenly disappears (yeah, that’s the word).
I placed it into a Grid Layout, put 5px on each margin, saved, and then poof! It became invisible. The object is still there; I can select it and modify it (the HTML code even appears in the browser), but nothing is displayed.
If I recreate it from scratch it works perfectly even in the new container. Have you ever experienced something like this? Do you have any idea of what could be happening?
Again, thanks for your collaboration, I really appreciate it!
Pablo
Hi Pablo, use the version from this repository:
Antoninjo/DesignStudioCustomGauge · GitHub
I fixed the problem of disappearing with “auto” sizing in the last update!
Hi Antonino,
I have also encountered the issue of disappearing SDK components when the margins are set to “auto”. In your case, was there a particular technical reason for this behaviour and how did you resolve it?
Thanks,
Mustafa.
The component did not pick up the auto dimensions; I have fixed it now!
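Roughly, the idea is to size the canvas from the rendered element instead of the width/height properties, which are not pixel numbers when set to “auto”. A sketch, assuming the SDK’s this.$() root-element accessor; redrawGauge is a hypothetical repaint entry point:

this.afterUpdate = function() {
  var root = this.$();                 // component's root element (jQuery)
  var canvas = root.find("canvas")[0];
  canvas.width = root.innerWidth();    // actual pixels, even with "auto"
  canvas.height = root.innerHeight();
  redrawGauge(canvas);
};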
Nice! Do you have the .jar file around there? I don’t have the SDK to build the solution, and I have to reach out to the support team to install it; during that roundtrip I would like to start playing with this. Is it too much to ask? Thanks Antonino!
I mean… I don’t have Eclipse to install the SDK.
OK, I finally got it working 🙂 .
Now even having auto for Width and Height it’s being displayed (BTW, thanks Mustafa for clarifying the issued scenario). Anyway, It’s not resizing in the browser when the container does it (I tried with Chrome 36.0 and IE10) is this the expected behavior?
Thanks!
This component is drawn by JavaScript in a canvas tag; it simply can’t resize after it renders once. So once it is rendered, it cannot resize, sorry.
Gotcha, no worries! I’m sorry, but I’m far from being a web developer, so that’s why I’m asking. I really appreciate your help.
Thanks!
My value in the gauge is showing up right, but my needle rotates almost 350 degrees for a value of 72 🙁
var val = DS_2.getData("00O2SNODRJTRZENZHDITULUC8", {}).value;
GAUGE_1.setIndicatorValue(val);
GAUGE_1.setShowedValue(Convert.floatToString(val));
Hi Jagannadha,
Try the community version; I have moved this code from Antonio into the community package – follow the link Design Studio SDK: Gauge Component.
In case there is some issue, this code is easier to fix and re-deliver.
Karol
How can I import this into my project?
Any link with a tutorial?
Thanks
Hi Luiz,
Try the community version; I have moved this code from Antonio into the community package – follow the link Design Studio SDK: Gauge Component.
you can import the community repository following the link SCN Design Studio SDK Development Community
Karol
Many thanks Karol!
Hey Karol !
What am I doing wrong?
In database:
VL_ENDIVIDAMENTO = 69,0
VL_RAZAO = 3,87
------------------------------- My code -------------------------------
/* Clear the filters */
KPI.clearAllFilters();
/* Apply the filter */
KPI.setFilter("NM_EMPRESA", RADIOBUTTONGROUP_1.getSelectedValue());
KPI_GRAPH.setFilter("NM_EMPRESA", RADIOBUTTONGROUP_1.getSelectedValue());
PRINCIPAIS_CREDORES.setFilter("EMPRESA", RADIOBUTTONGROUP_1.getSelectedValue());
/* KPI 1 */
var valor_KPIT1_original = Convert.replaceAll(KPI.getDataAsString("VL_ENDIVIDAMENTO", {}), ",", ".");
var test = TEXT_13.setText(valor_KPIT1_original); // Result: 69.0
var valor_novo1 = Convert.stringToFloat(valor_KPIT1_original);
GAUGE_1.setIndicatorValue(valor_novo1);
GAUGE_1.setShowedValue(Convert.floatToString(valor_novo1));
……
------------------------------- end my code -------------------------------
ERROR:
)
Hi
The component is great, thanks.
I do have 1 issue with it
When I set the width or height of it to ‘auto’, it doesn’t show up at all.
Any ideas???
Shlomi
Hi, I want to add custom color values which can be chosen by the user, not only 3 colors.
How can we change the font sizes?