Problem Statement: An example of business set up for Aspen Fleet Optimizer
Solution: Aspen Fleet Optimizer Version: V.vxxxx.x
Users/Optimization Groups
o How many users of Optimizer
o How many optimization groups and their locations
o How many super users
Data
o Inventory Only, Sales Only, or Both
Order Entry
o Via customer support, VFM, WFM, SAP?
o Manual entry by dispatchers?
o How many forecasted customers? How many manual customers?
Working Hours
o 24/7, or night and day with dispatch during the day; who performs optimization?
Trucks
o Communications: emails to drivers, cell phones, texting
o How many fill lines?
Key KPIs
o Retains & Runouts
- Controlled Runout % - < 1%
- Controlled Retain % - approx. 2%
- Other runout % to be determined
o Payload maximization
o Revenue per mile
o Cost per mile (monitored by Distribution Manager)
o Splits (reduction of split ratio)
Business Objectives/Strategy
o Revenue center - revenue comes from the freight
o Expansion to different markets
o Maximize payload
o Move towards a trucking company model (diversified operations)
Other
o Road restrictions - some roads are restricted. How?
o High demand during the week, much less during the weekends
o Price regulation - changes allowed only at certain times, which can cause huge differences in sales patterns
o Business Units: Logistics & Distribution; Dispatch Team & Distribution Team; Inventory Cost
Keywords: None
References: None
Problem Statement: How can I model a milk run in Aspen Petroleum Supply Chain Planner?
Solution: The "Milk Run" structure is so named because it resembles the delivery route of a milkman who loads the delivery wagon at the dairy, unloads deliveries along a defined route, and returns to the dairy with an empty wagon ready to load again. The dairy is the only location where supplies are loaded. Each trip can take a different route, deliveries may or may not be made at each stop, the wagon can be on only one route at a time, and the total trip time, including the time to return to the dairy, is accounted for. We can emulate this type of delivery configuration for a ship. Even though the path the ship takes in this example is referred to as a route, the Aspen Petroleum Supply Chain Planner (PSCP) Segment and Routes tables are not used. Milk run implementation requires information to be entered in the Nodes, Modes, Materials, Transport, Distance, CapTransport, and Collector tables. Material groups may also be used, which require information to be entered in the MaterialGroup and MaterialMember tables. An example of an actual milk run route is shown in Figure 1 below. In this example, Terminal1 is the load node and Terminal2, Terminal3, and Terminal4 are potential discharge nodes. As can be seen in this diagram, the milk run concept includes the time the ship needs to return to the loading node. To implement a milk run in Aspen PSCP, each of the transportation arcs must be defined from the load node to each delivery node. This approach, as shown in Figure 2 below, is required so that the vessel can only load material at the load node and must travel to each delivery node (although there is no requirement to unload at every node).
Figure 1 - Real Path
Figure 2 - Model Design
Keywords: milkrun, milk run
References: Instructions to Implement a Milk Run
1. Include the load and delivery nodes in the Nodes table.
2. Define milk run modes in the Modes table.
3. Consider defining a milk run-specific material group (MaterialGroup and MaterialMember tables).
4. Define the transportation arcs for the milk runs in the Transport table using the MilkRun mode. Each arc is from the loading node to a destination node.
5. Define the cumulative travel times in the Distance table for the MilkRun transportation arcs for each milk run mode.
6. Define a milk run capacity in the CapTransport table using the same Material/From/To/Mode information used in the Transport table. Enter "From" in the Apply column for the capacity.
7. Enter the capacity defined in Step 6 in the Collector column in the Collector table for the load node. Enter MipType "MilkRun", enter the total travel time (including the return trip time) in the MipFactor column, and define a MipLink collector for route control.
8. Enter the MipLink collector defined in Step 7 in the Collector column of the Collector table. Enter "span" for the Node, zero for the Min (or leave this cell empty), and 1.0 for the Max.
9. Repeat steps 4-8 for each unique route the same vessel can take as a milk run option. Each milk run route option for the same vessel should use the same MipLink value so that the vessel can be used on only one route at a time.
Example Information used for the detailed instructions
The following information is used in this milk run example.
Nodes: Load node: Terminal1; Delivery nodes: Terminal2, Terminal3, Terminal4
Modes: Ship1Route1, Ship1Route2
Materials: G91, G95, JET, DSL
Material Group: MilkRun1Cargo
MilkRun Mode Capacities: Cap_Ship1Route1, Cap_Ship1Route2
MilkRun Control Link: Ship1All
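The route-exclusivity logic that steps 7-9 encode (capacity applied at the load node, total round-trip time in MipFactor, and a MipLink cap of 1.0 so one vessel runs at most one route at a time) can be sketched outside PSCP. The following is an illustrative Python toy, not PSCP code; the route names mirror the example above, while the capacities, trip times, profits, and planning horizon are invented for demonstration:

```python
# Hypothetical data: two milk-run route options for the same vessel
# (mirroring Ship1Route1/Ship1Route2), each with a load capacity and a
# total round-trip time back to the load node.
routes = {
    "Ship1Route1": {"capacity": 300, "trip_time": 5, "profit_per_unit": 2.0},
    "Ship1Route2": {"capacity": 250, "trip_time": 7, "profit_per_unit": 2.5},
}

def best_plan(routes, horizon=7):
    """Pick at most one route (the MipLink 'Max 1.0' idea) whose round
    trip fits in the horizon, maximizing shipped profit."""
    best = ("none", 0.0)
    for name, r in routes.items():
        if r["trip_time"] > horizon:
            continue  # the full round trip back to the load node must fit
        profit = r["capacity"] * r["profit_per_unit"]
        if profit > best[1]:
            best = (name, profit)
    return best
```

With the numbers above, shortening the horizon below Ship1Route2's round-trip time forces the vessel onto Ship1Route1, which is the behavior the MipFactor travel time produces in the model.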
Problem Statement: Please provide an example of testing Scope for Aspen Fleet Optimizer upgrade
Solution: Type of Testing
Factory Acceptance Testing - FAT
o Executed at AspenTech office
o Customer data used for testing
o Focus on customer (priority) business processes
o Ideal length: 2 calendar weeks on site (executed one week at a time)
o Benefits:
- Customer testing team has direct contact with R&D if any questions/issues arise.
- Any critical issues discovered can be resolved by R&D in a timely manner.
Site Acceptance Testing - SAT
o Executed at Irving office
o Aspen SME & Irving testing team execute pre-defined tests
o Day-in-the-life testing as close to the production environment as possible
o Benefits:
- Additional testing prior to go-live
- AspenTech personnel available to assist users with questions
- Improved quality
Testing Scope
Standard Business Processes:
o Information Collection
o Data Quality Management
o Order Generation
o Holiday/Storm Planning
o Optimizing
o Saving and Exporting
o Order Entry (via services and optimizer)
Customer-specific Business Processes: To be determined
Keywords: None
References: None
Problem Statement: Why must the Data Quality Manager process be run each day?
Solution: Aspen Fleet Optimizer relies on the ability to receive, use, and manage accurate data. This data can be sales and inventory information collected from customers or up-to-date loading and delivery information, for example. Accurate and timely data can contribute to improved dispatch efficiency, accurate demand planning, effective use of resources, and enhanced productivity. In order to ensure that customer, operations, and management data is accurate, fuels management organizations rely on an ongoing process of quality checking or reconciliation. These checks and balances are used to guarantee data quality and ultimately the quality and accuracy of optimization and schedules. Sales and inventory data, for example, must be collected and reconciled to accurately forecast and optimize fuel shipments. Keywords: None References: None
Problem Statement: Is there a good way to apply a blending margin for a specific property on product level?
Solution: The "Property Bonuses" option allows adding a property bonus, which identifies the difference between the blending value and the neat value. Use "Property Bonus" to calculate (predict) blend property values. If utilized, the property bonus is added to the corresponding property before linear blending is performed.
To create a property bonus:
1. Click Model | Property Bonuses to display the Bonus dialog box. The information displayed on this dialog box is split into two sections. The information at the top of the dialog box allows you to select the property to which you want to assign a bonus, for example SUL. The lower portion of the dialog box displays existing bonus information, for example DSL 0.2.
2. Select the property to which you want to assign the bonus. Once you select a property, the associated products are displayed.
3. Select the product to which the bonus should be applied. Once you select a product, the associated components are displayed. If _ALL is selected as the product, the property/component combination will apply to all products. Note: It is important to avoid using _ALL as a product tag.
4. For components, select the component to which you want to assign the bonus.
5. Click the Add Bonus button to display a new bonus row in the lower portion of the dialog box.
6. In the "Bonus" field, enter the value of the bonus.
7. Click the OK button to save your changes in memory and exit the dialog box. Your changes will not appear in the database until you save your model.
The "Applying Property Bonus to All Cases" setting on the Blending tab of the Settings dialog box allows specifying a property bonus that applies to all cases. This reduces the size of the property bonus table (PROPERTY_BONUS) by specifying a single property bonus instead of one per case.
To indicate the property bonus applies to all cases:
1. From the menu bar, click View | Settings to display the Settings dialog box.
2. Click the "Blending" tab.
3. Select "Apply PBONUS to All Case".
4. Click OK.
Recommendations:
- It is recommended that case-specific and non-case-specific property bonuses NOT be in the same database. If they do exist together, this is the expected behavior:
- If the "Apply PBONUS to All Case" state is changed, the PROPERTY_BONUS table is reloaded with the case-specific or the non-case-specific data. The property bonus data of the case does not override the non-case-specific data. No data merging is performed.
- If the option is selected, and "Add Case" or "Delete Case" is selected, no property bonus data is deleted, even if there is property bonus data for the affected case in the PROPERTY_BONUS table.
- If the option is selected, and another case is selected using "Select Case", the PROPERTY_BONUS table retains the non-case-specific data.
- If the option is NOT selected, Add Case, Delete Case, and Select Case work as before.
- If there are changes to the property bonus data and the state of Apply PBONUS to All Case is changed, a dialog box appears asking whether the PROPERTY_BONUS table should be saved.
- In the database, the CASE field is blank for the non-case-specific data.
Keywords: Property Bonus, Blending settings
References: None
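The effect of a property bonus on blend prediction can be sketched in a few lines of Python. This is an illustrative toy, not MBO code; the component names, volumes, sulfur values, and the bonus are invented. It shows the rule stated above: the bonus is added to the component's neat property value before linear (volumetric) blending is performed.

```python
def blend_property(components, bonuses=None):
    """components: list of (name, volume, neat_value).
    bonuses: optional {name: bonus} added to the neat value before
    the volume-weighted (linear) blend is computed."""
    bonuses = bonuses or {}
    total_vol = sum(vol for _, vol, _ in components)
    weighted = sum(vol * (value + bonuses.get(name, 0.0))
                   for name, vol, value in components)
    return weighted / total_vol

# Hypothetical sulfur blend: two components, one carrying a +0.05 bonus.
comps = [("LCO", 40.0, 0.30), ("HDS", 60.0, 0.10)]
neat = blend_property(comps)                       # approx. 0.18
with_bonus = blend_property(comps, {"LCO": 0.05})  # approx. 0.20
```

The bonus shifts only the component it is assigned to, so the predicted blend value moves by the bonus scaled by that component's volume fraction.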
Problem Statement: Why is the menu entry Event / Add Event / Tank Property Change not active using 2006.5?
Solution: The first requirement to have this option available is related to the new event tables. This new event requires ATOrionEvents, ATOrionEventTanks, ATOrionEventComments, and ATOrionEventProps. If the final user is using the old EVENTS table, the Tank Property Change event option will not be available. We recommend the following steps to make the Tank Property Change event available in a model where the normalized event tables are missing:
1. Open your model using MBO v2006.5, where the ATOrionEvent tables do not exist in the database and the old EVENTS structure tables are used instead. The "Tank Property Change" event type is not available.
2. Run the DBUpdate.exe tool (located in the same folder as the MBO application) and validate the structure of the database. Note that all ATOrion normalized tables are missing.
3. Run the Update database option to create the normalized tables.
4. Validate that the NORMALIZED_EVENTS key in the CONFIG table is Y instead of N.
5. Finally, open the Orion model, select Events | Add Event, and the Tank Property Change event type is now available.
Keywords: Tank Property Change Event, Normalized tables
References: None
Problem Statement: What is the suggested authentication for Aspen Refinery Multi-Blend Optimizer (MBO)? In association with this, what are the various roles that are suggested for MBO?
Solution: In MBO, there are accounts to log in to the software; the user name and password are stored in the SQL Server/Oracle database and accessed via a USERS table and a GROUPS table. When Windows authentication is used, the accounts in the USERS table must be the same as the user accounts for network login (no password is necessary, however). We support both methods of connection. There is a Trusted Connection flag in the file DSN which is true if the Windows authentication method is used.
The roles that are created are generally up to the client. All users of Aspen Petroleum Scheduler (APS) and Aspen Refinery Multi-Blend Optimizer need read and write (R/W) access to all database tables if they are to update both the schedule and the model. Database tables are generally broken down into six categories in APS and MBO: ADMIN, Assays, Integration, Schedule, Model, and Results. You can see how these tables have been assigned by going to the OrionDBGen.mdb Access database that ships with the APS and MBO applications. In OrionDBGen.mdb, the DB field in the TableNames table defines how these tables are categorized. In general, all users need R/W access to Client tables. Schedulers need R/W access to all the Schedule and Results tables. If the schedulers are to update assays, they should also have R/W access to the Assay tables. Anyone who will update the model and do maintenance should have R/W access to all tables.
Keywords: Database
References: None
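The access guidance above can be summarized as a small policy sketch. This is illustrative Python, not an AspenTech tool; the role names are hypothetical, while the six table categories come from the TableNames table described above.

```python
# The six table categories defined in OrionDBGen.mdb (TableNames, DB field).
CATEGORIES = {"ADMIN", "Assays", "Integration", "Schedule", "Model", "Results"}

# Hypothetical roles, following the guidance: schedulers get R/W on
# Schedule and Results (plus Assays if they maintain assay data);
# model maintainers get R/W on everything.
ROLE_GRANTS = {
    "scheduler": {"Schedule", "Results", "Assays"},
    "modeler": set(CATEGORIES),
}

def can_write(role, category):
    """True if the role has R/W access to the given table category."""
    assert category in CATEGORIES, f"unknown table category: {category}"
    return category in ROLE_GRANTS.get(role, set())
```

A matrix like this is worth agreeing on with the DBA before granting database permissions, since the actual grants are applied per table in SQL Server or Oracle.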
Problem Statement: This tutorial provides an example to educate users on how to import numeric data from a text file into a table in Aspen SCM. There are no particular prerequisites for reading this document, though prior working experience with Aspen SCM will aid understanding. The version used in this tutorial is Aspen SCM v7.3. The following are the reasons why it is usually better to import data into Aspen SCM rather than entering it directly in tables:
a. For effective optimization, all available data should be fed into Aspen SCM. Hence, there is a lot of data to be transferred.
b. Most of the data is generally available in other software (such as ERP systems).
c. The data might change from time to time, necessitating updates to the data in SCM.
These reasons make data import less time consuming than data entry.
Solution: The following are the steps to accomplish this import:
a. The following table contains the actual demand information to be used in this example. There are a total of 12 rows and 2 columns. The objective is to import these 24 numerical values into a table in Aspen SCM.
      2013   2014
JAN   1800   1500
FEB   1500   1100
MAR   1100    900
APR    900   1100
MAY   1100   1600
JUN   1600   1800
JUL   1800   1500
AUG   1500   1100
SEP   1100    900
OCT    900   1100
NOV   1100   1600
DEC   1600   1800
The Actual Demand Data
b. Create a new file IMPODE in Notepad and save it in C:\ (any directory is fine). Copy the following information into the file; the data is tab delimited, one row per month:
1800	1500
1500	1100
1100	900
900	1100
1100	1600
1600	1800
1800	1500
1500	1100
1100	900
900	1100
1100	1600
1600	1800
Tab-delimited data in the file IMPODE.txt
c. Go to Start | All Programs | AspenTech | Aspen Supply Chain Suite | Aspen SCM. This opens the Aspen SCM application.
d. Go to File | Open | C:\Users\Public\Documents\AspenTech\Aspen CAPS\Plant Scheduler\PS_V7-3-1.cas. Plant Scheduler is used only as an example; any other case could also be used.
e. If you get a CASE DEPENDENT error: in the command line (press F3 for the command line, or go to View | Command Line Toolbar for the command bar to appear), type $TCASES; change the path before all the .cas file names to C:\Program Files (x86)\AspenTech\Aspen CAPs\Shared Libraries for 64-bit machines, or C:\Program Files\AspenTech\Aspen CAPs\Shared Libraries for 32-bit machines.
f. Go to Data | Catalog in the menu bar. Type _TIM in the Name text box and click New. Make sure that Set is selected and click OK. In the Set Attributes window, no changes are required. In the command line: _TIM; enter 0 through 11 in the Code column. The description can be anything.
g. Go to Data | Catalog in the menu bar. Type _YEAR in the Name text box and click New. Make sure that Set is selected and click OK. In the Set Attributes window, no changes are required. In the command line: _YEAR; enter 1 and 2 in the Code column. The description can be anything.
h. Again go to Data | Catalog, enter IMPODE and click New. This time, make sure that you select Table and click OK. In the Table Attributes window: in the Rowset, enter _TIM; in the Colset, enter _YEAR. In this way you make the previously created sets the row and column headers of this table. Please make sure that the format of this table is I8. This is the table into which the data from the text file will be imported.
i. Navigate to Enterprise Synchronization | Data Import.
j. In the command line type: IMPITEMS; add an entry in the list with IMPODE in the Code column and Import Demand in the Description column. After making the changes, press F8 to apply them, or press Yes when the dialog box appears asking whether you want to apply. This action includes IMPODE in the import list.
k. In the command line type: IMPLNG; enter a description in the USEnglish column and IMPODE row. If Aspen SCM is going to be used in other languages, fill in those columns as well.
l. In the command line type: IMPMAP; enter 1 in the ALL column and IMPODE row. This makes IMPODE appear in the All Data category in the GUI. If it is required in any other category, 1 should be entered in the corresponding columns too.
m. In the command line type: IMPCTL; in the IMPITEM column and IMPODE row, enter the name you entered in IMPITEMS: IMPODE. In IMPFLAG, enter Y, since you want this item to be displayed in the data import screen. In IMPMETHOD, enter ASCIID, since the file to be imported is a delimited ASCII file.
n. In the command line type: IMPASCID; go to the IMPODE row. In the File name column, enter IMPODE; this specifies the name of the file from which data should be imported. In the FMTFDEF column, enter IMPORT2; IMPORT2 specifies the path where the delimited ASCII (txt) file is stored. In the Delimiter column, enter TAB, as this text file is delimited by tabs. In the Field 0 name column enter 1; in the Field 1 name column enter 2. This specifies the column names of the IMPODE table.
o. In the command line type: FMTFDEF; in IMPORT2, enter C:\%1.%2; this is the path where the IMPODE text file is stored; %1 will be replaced by the corresponding file name and %2 will be replaced by txt within Aspen SCM.
p. In the GUI: click on any other item in the tree (e.g., Schedule Status) and re-click Data Import. This refreshes the Data Import screen. Select the Import Demand item in the Import Data list and click Import. If the import was successful, it can be seen in the log files.
q. In the command line type: IMPODE. This brings up the IMPODE table along with the data from the text file.
Keywords: None
References: None
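The tab-delimited file from step (b) can also be generated programmatically rather than typed by hand. The following is an optional helper sketch in Python, not part of Aspen SCM; the demand values are the ones from the example table above, and writing to the current directory (rather than C:\ as in step b) is an assumption for portability.

```python
# Demand values from the example table: month -> (2013 value, 2014 value).
demand = {
    "JAN": (1800, 1500), "FEB": (1500, 1100), "MAR": (1100, 900),
    "APR": (900, 1100),  "MAY": (1100, 1600), "JUN": (1600, 1800),
    "JUL": (1800, 1500), "AUG": (1500, 1100), "SEP": (1100, 900),
    "OCT": (900, 1100),  "NOV": (1100, 1600), "DEC": (1600, 1800),
}

def impode_lines(demand):
    """One tab-delimited line per month: <2013 value> TAB <2014 value>."""
    return [f"{y2013}\t{y2014}" for y2013, y2014 in demand.values()]

# Write the file consumed by the IMPASCID/FMTFDEF configuration.
with open("IMPODE.txt", "w") as f:
    f.write("\n".join(impode_lines(demand)))
```

The 12 lines of 2 values match the _TIM (12 codes) by _YEAR (2 codes) dimensions of the IMPODE table, which is why the import maps cleanly.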
Problem Statement: The new Aspen SCM GUI interface is built using XML. This knowledge base article demonstrates how to build custom screens in the new Aspen SCM GUI by programming in XML. This is a follow-up article of the previous
Solution: #135832 "XML Tutorial - How to build or edit a sample User Management Screen". The case file containing the previous solution can be found in the attachment.
Example
Keywords: None
References: None
Problem Statement: What is the recommended list of things that I need to do to prepare for the upgrade project?
Solution:
- Obtain and review the documentation for the new release and determine any new or revised hardware and software requirements for the new release. Secure the necessary hardware and third-party software as needed.
- Create an upgrade project plan and timeline jointly with AspenTech.
- Set up a testing environment with a configuration similar to the existing production environment. You should then use this for testing the upgrade/install process and for review of the new release prior to production installation.
- Set up test environment versions of interface applications that use Aspen Fleet Optimizer data.
- With the existing application in the lab environment, run the Aspen Fleet Optimizer applications and execute the Check Database utility to ensure all database tables replicated from the production system are correct. Resolve errors as required.
- Determine and document your users' current daily dispatch process steps and workflow requirements (Dispatcher Checklist). See solution 106063.
- Create a Dispatcher Checklist based upon the new release to reflect new features and process changes that will apply.
- Make note of any new customized integration, scripts, or manual interfaces, if any.
- Submit the new application software and the updated hardware and software requirements to your appropriate IT support staff to evaluate the implications of use within the existing standard desktop environment. This ensures that any conflicts with shared system files will be identified and resolved prior to go-live.
Keywords: Upgrade Project Checklist
References: None
Problem Statement: What is the Optimization-Modeling Process?
Solution: The world is full of problems that are ripe for optimization; optimization is about choosing or selecting outcomes defined as better. These problems can be solved using mathematical modeling. The practice of optimization is only restricted by the lack of full information and the lack of time to evaluate what information is available. The optimization of a business problem is done using linear programming (LP). Optimization modeling requires appropriate time. The general procedure that can be used in the process cycle of modeling is to: (1) describe the problem, (2) design the model, and (3) obtain the solution. Between these steps are feedback loops helping to further define the model and the solution.
Define the Problem: As soon as you detect a problem, think about and understand it in order to adequately describe the problem in writing. Develop diagrams, flow charts, etc., to help you see the issue visually.
Analysis: Moving from the problem to the mathematical model is known as analysis. During this step you will pull in all details, focusing on the important elements that take place. Sometimes just analyzing the issue can lead to insights into how to approach the problem without having to move to a mathematical model.
Modeling: Develop a mathematical model that represents the problem in mathematical equations. The model will contain variables, constraints, and an objective function. A good mathematical formulation for optimization must be both inclusive (i.e., it includes what belongs to the problem) and exclusive (i.e., it shaves off what does not belong to the problem). The problem formulation must be validated before it is offered a solution.
Data: Scrutinize your data, scrub it, and enter the most accurate data that is available. Remember the old adage: garbage in, garbage out. The solution will be meaningless if the correct data is not utilized.
Find an Optimal Solution: A solution value for the decision variables where all of the constraints are satisfied is called a feasible solution. Most LP software will first find a feasible solution, and then continue to improve upon it, until finally the objective function has reached a maximum or minimum "best value". This result is called an optimal solution.
Verify the Solution: Make sure that the LP software is doing what you thought it would.
Validation: Validation is the process of making sure that the model or solution is appropriate for the real situation. This cycle can repeat itself for several iterations until the "best" solution is found for the problem. Once the solution has been found, the next step is to present the findings to the decision makers.
Solution Presentation to Management: Once the "best" solution is obtained, it is presented to the decision-maker in the decision-maker's language. If the decision-maker is not technical, don't present it in technical terms; present it in terms the decision-maker understands. If he cannot understand the solution, there is no solution.
Post-Solution Analysis: It is essential to periodically update the model. A model that was valid may lose validity due to changing conditions, thus becoming an inaccurate representation of the problem, and could affect the ability of the decision-maker to make good decisions.
Note: Marginal values can give insight into the problem, particularly which constraints are the most limiting. Also, think of the model results more as guidelines than as the exact best solution, since the data is sure to change between model solution and implementation of the plan.
Keywords: None
References: None
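The feasible-versus-optimal distinction described above can be illustrated with a toy search. The objective and constraints below are hypothetical, and the search runs over an integer grid only to keep the example tiny; real LP solvers work on continuous variables with far more efficient algorithms.

```python
# Toy problem: maximize 3x + 5y subject to x + 2y <= 8, x <= 4, x, y >= 0.

def feasible(x, y):
    """A point is feasible if it satisfies every constraint."""
    return x + 2 * y <= 8 and x <= 4 and x >= 0 and y >= 0

def optimize():
    """Scan the grid: keep the best feasible point seen so far,
    mimicking 'find a feasible solution, then improve on it'."""
    best = None  # (objective value, x, y)
    for x in range(0, 5):
        for y in range(0, 5):
            if not feasible(x, y):
                continue  # infeasible: at least one constraint violated
            obj = 3 * x + 5 * y
            if best is None or obj > best[0]:
                best = (obj, x, y)
    return best
```

At the optimum both constraints are tight, which is exactly the situation where the marginal values mentioned in the Note identify the limiting constraints.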
Problem Statement: How do you build a simple model in Aspen Supply Chain Planner?
Solution: Companies use Linear Programming (LP) to maximize or minimize a linear function (the objective function), usually profit or costs, subject to a variety of restrictions (constraints). Others use it to allocate scarce resources, such as facilities, that compete to make different products. LP finds the "best" (optimal) solution that satisfies all the constraints via an algorithm and gives the "best" value for the objective function. Linear Programming has its limitations: an LP model consists of mathematical equations of constraints and variables that are an approximation of the real problem.
The first steps in building a model are to define the problem and the objective, and then formulate the problem as a mathematical model. In defining the problem you will determine what your decision variables, model constraints, and objective function will be. The decision variables are mathematical symbols that represent activities. The model constraints represent the restrictions of the operating system, e.g., only x amount of resources available per day. The objective function is a mathematical representation of the objective of the company, always expressed as either minimizing or maximizing some value.
Aspen Supply Chain Planner is a general model management system which handles matrix generation, optimization, and solution reporting. To build a model to be solved via the Supply Chain Planner application, start by creating sets and tables for the data identified in your problem. Supply Chain Planner comes pre-loaded with sets and tables containing data to run and demonstrate the functionalities of Supply Chain Planner. Define your objective: what are you trying to maximize or minimize, and within what constraints? For our example, a company makes two products; each product yields a different revenue in a given time period, with a given amount of inventory available.
The only set of data needed for this simple example is a set that contains the time periods; we will call that set PER, and it contains Week1, Week2, and Week3. Create the set PER and enter the three time periods: Week1, Week2, Week3.
Next we need to define the sets COL, POL, ROW and the tables COLS, POLI, ROWS, and COEF. The known profit and inventory can be added directly to these tables.
The COL set represents all the variables. For our simple model, enter the two products (our variables) with a code and a description.
Open the COLS table. Define the dimension (FLD1-6) and TABL columns in the COLS table. Each row in the COLS table represents a generic matrix column in the COEF table. The first column, FLD1, defines the type of variable, for example B = Balance, C = Capacity. FLD2 through FLD6 are the sets (domains) the variable is defined by; in our example the only domain we are concerned with is the time period (PER), so PER goes in FLD2. The last column, TABL, contains entries that point to an incidence table or to a policy (POL set entry), or can be blank; in our simple model the entries in TABL point to the POL set.
Now that we have COL and COLS defined, next we need to define POL and POLI. The POL set has two sections, a column section and a row section. These sections contain policy characters used by the matrix generator to identify minimum, maximum, cost, row sense, and right-hand side (RHS) values used during matrix generation. In POL, enter the codes for the policies; these codes are decided by the modeler. The POLI table defines the relationships of the variables, in other words the bounds (upper and lower), cost, row sense, and the RHS. The column section contains the bounds in the MIN and MAX columns and the cost in the CST column. For our example, enter the revenue for each product. In the row section of the POLI table, enter the constraints of the problem. For our simple model, enter the inventory constraints, with "L" for less than or equal to in the SENSE column and the amount of available inventory in the MAX column.
Next we need to define the ROW set and ROWS table. The ROW set represents all constraints. Each entry in the ROW set represents a single generic row of the matrix. Enter the codes that represent your constraints, and for the description enter a generic description of the code; the description column will change once you set up the ROWS table to reflect the domains in ROWS. The ROWS table entries follow the same logic as the COLS table, except that entries in the ROWS TABL column point to POLI table row section entries, which define the row sense and right-hand side (RHS) values. In this example, we entered the inventory constraint with an "L" (less than) and the amount of inventory available in the row section of the POLI table.
The ROWS, COLS, POLI, and coefficient (COEF) tables are the heart of the matrix generator. Now we need to build the COEF table. The COEF table is a picture of the matrix elements. The COLS table controls the generation of matrix columns and determines the column bounds and costs. The ROWS table controls the generation of matrix rows and determines the row senses and the right-hand side values. For our model, enter the raw material requirement for the finished product in the respective column and row. For our simple problem the COEF table holds the entirety of the matrix coefficients, but for most problems it is both inefficient and awkward to represent all matrix elements in a single table.
Now that we have all the tables built, it is time to generate the matrix. Use the GEN command from the command line to generate the matrix based on COL, COLS, ROW, ROWS, COEF, and POLI. Once the matrix has been generated, you can then go on to find the "best" solution. To find the "best" solution using SCM, enter XPRESS in the command line.
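Outside SCM, the same two-product, inventory-constrained problem can be sketched in a few lines of Python to show conceptually what the GEN/XPRESS step computes. This is a toy, not SCM code; the revenues (the CST column), raw-material usage (the COEF entries), and weekly inventories (the POLI row MAX values) are invented for illustration.

```python
def solve_week(revenue, usage, inventory):
    """Maximize sum(revenue[p] * x[p]) subject to
    sum(usage[p] * x[p]) <= inventory, x >= 0.
    With a single constraint, the optimum puts all raw material into
    the product with the best revenue per unit of raw material."""
    best = max(revenue, key=lambda p: revenue[p] / usage[p])
    qty = inventory / usage[best]
    return best, qty, revenue[best] * qty

revenue = {"ProdA": 30.0, "ProdB": 45.0}    # per-unit revenue (CST column)
usage = {"ProdA": 2.0, "ProdB": 4.0}        # raw material per unit (COEF)
inventory = {"Week1": 100, "Week2": 80, "Week3": 120}  # POLI row MAX

# One independent single-constraint problem per time period in PER.
plan = {week: solve_week(revenue, usage, inv)
        for week, inv in inventory.items()}
```

A real SCM model would solve all weeks in one matrix, which matters as soon as inventory carries over between periods; the per-week split here is only valid because the toy periods are independent.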
The "best" solution is stored in raw form in the following SCM tables:
COLX: Contains column solution values and reduced costs.
ROWX: Contains row activities and slack and dual values.
OBJX: Contains the feasibility flag and objective function value.
Keywords: None
References: None
Problem Statement: How do I plan an upgrade project for Aspen Fleet Optimizer?
Solution: As Aspen Fleet Optimizer is a mission and operation critical application, upgrading to a newer version requires careful planning to make the transition smooth and successful. A good project plan for the upgrade can ensure the success. An upgrade project usually consists of the following steps: 1. Study and understand the Release Notes, User Manuals, and other documentations for the new version. The documentations are published on our Website (http://support.aspentech.com). Please consult AspenTech customer support if you have any questions. 2. Understand the new hardware and software requirements (new hardware specs, new operating system, new database version requirements, new third party software version requirements, etc), and acquire the new hardware and software. 3. Understand your current IT infrastructure constraints. Understand how the key functionalities work in your current version and gather a list of the important functionalities you want to test. 4. Set up lab environment for testing and simulation, or any other tinkering activities to fully understand the new functionalities and features as well as the impact of the new version to your business. 5. Extensive parallel testing to make sure the results from the new version are giving you what you expect. Contact AspenTech customers support if you discover inconsistency or anything you suspect to be a defect. 6. Plan and conduct your user training on the new version. Plan this activity with AspenTech so that you can secure training instructor's availability ahead of time. 7. Make sure you include AspenTech Customer Support in your go-live plan so that you can get help when you needed. These activities are an example of a typical upgrade process. Every company has its own set of requirements and the process may differ. The list here serves as a guideline and template to start your project. 
To help you organize your information, a questionnaire and a checklist are attached here for you to download, along with a project plan template. The template can help you organize your project but might not cover every aspect of an upgrade project. Your company might have other planning methods, so please view these documents as guidelines and a starting point that you can tailor further to your specific situation. Keywords: Upgrade Project planning Questionnaire References: None
Problem Statement: When should I use other additions in Aspen Fleet Optimizer Data Quality Manager?
Solution: Other Additions is a way to add or subtract product to help in the reconciliation of an exception. These values include unexpected or undocumented additional shipments of product to the customer. When a retain occurs or a customer receives an unexpected shipment, the quantities of product dropped can be logged here. This value may also be negative if product has been removed from a tank through means other than sales. These values are not stored in the database; therefore, once the exception is cleared, the values are lost forever. It is better to create a new load or modify an existing load rather than using Other Additions. If you modify or create a new load, you can place a note in the load which can be reviewed if questions arise at a later date. Keywords: None References: None
Problem Statement: When should I use the override function in Aspen Fleet Optimizer?
Solution: The override function in Aspen Fleet Optimizer should be used very sparingly, if at all. Our recommendation is that for the majority of users the Override button be disabled in the customized.ini settings under WINOPT. The setting name is showorbutt and we recommend that it be set to zero. When the Override button is used, it overrides all of the concurrency flags that are in the system. This can allow two parts of the application to have the data at the same time. When this occurs, any changes that are made to the data can be overwritten when the second user of the data saves it back to the database. This can cause data contamination inside the database. The most frequent problem seen when clients allow use of the override functionality is the loss or disappearance of manual orders. In these cases, the manual orders will be seen in the interface tables and appear to be processed correctly, but they will not be seen in the core Aspen Fleet Optimizer tables. Keywords: None References: None
Problem Statement: OPC Tunneller software (non-AspenTech, third-party software) can be used to avoid DCOM issues when a firewall exists between an OPC client and OPC server. This article describes the use of OPC Tunneller software in combination with Cim-IO for OPC.
Solution: OPC Tunneller software provides the same benefits as OPC and COM while eliminating the need for complicated DCOM configuration. Typically, Cim-IO for OPC is configured to connect to an OPC server running locally or on a remote system. The remote OPC connection uses DCOM and can be complicated to configure through a firewall. With the tunneller installed, Cim-IO for OPC is configured to make a local connection to the tunneller, which makes the remote OPC connection. Cim-IO for OPC connection through a tunneller across a firewall to an OPC server: Keywords: Tunnel References: None
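To illustrate the general tunnelling idea described above (a local listener that the client connects to, which relays traffic to the remote endpoint), here is a minimal, conceptual TCP relay sketch in plain Python. This is emphatically not the third-party OPC Tunneller product, which also handles OPC/COM marshalling, security, and reconnection; all names in this sketch are illustrative assumptions.

```python
import socket
import threading

def _pump(src, dst):
    # Copy bytes one direction until the source closes.
    while True:
        data = src.recv(4096)
        if not data:
            try:
                dst.shutdown(socket.SHUT_WR)
            except OSError:
                pass
            return
        dst.sendall(data)

def open_tunnel(remote_host, remote_port, listen_port=0):
    # Listen on a local port; each accepted client gets a two-way
    # relay to the remote endpoint. Returns the local listen port.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(5)

    def serve():
        while True:
            client, _ = srv.accept()
            remote = socket.create_connection((remote_host, remote_port))
            threading.Thread(target=_pump, args=(client, remote), daemon=True).start()
            threading.Thread(target=_pump, args=(remote, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]
```

In this sketch the client application makes what looks like a local connection (as Cim-IO for OPC does with the tunneller installed), and only the relay itself crosses the firewall to the remote side.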
Problem Statement: This knowledge base article explains how to add new decision variables as activities to your Material Balance report in Supply Chain Planner in V7.3.1 / V8. The prerequisite for reading this document is a good understanding of Linear Programming and its configuration in Aspen SCM. In the example used here (performed in the standard CAPS SP model): A new decision variable, Firm Demand, has been introduced to replace the existing variable Demand Satisfied. The idea was to add a new constraint to the model stating that Firm Demand is the same as the Demand Satisfied variable. Then this variable, Firm Demand, was added to the Material Balance report.
Solution: This is how Firm Demand was introduced in the standard CAPS Supply Planner model: 1. Created a new decision variable by adding FRMDEM to the COL set. The description can be blank, as the application will rewrite this field as soon as the domains are entered in the COLS table. 2. Since this variable replaces the decision variable D, it was configured to have the same domains in the COLS table. 3. In the COLINC table, entered FD at Column 1 and Row FD, to represent Firm Demand. If this entry is left blank, the application will not include this decision variable in the matrix. 4. Created a new constraint FIRM.DEM in the ROW set. 5. Entered the same domains as COL for this constraint in the ROWS table and entered E in the TABL column so that this constraint will equate Demand Satisfied with Firm Demand. This E is already declared in the POLI table row section; this makes the right-hand side of the constraint 'equal to zero'. 6. In the ROWINC table, entered E at row FD and column 1. 7. Created a new row in the POL set and added a suitable description. 8. Nothing was added in the POLI table, as this variable need not contribute to the objective function. 9. Made changes to the COEF table so that the Firm Demand variable replaces the Demand Satisfied variable in the Balance constraint B and the constraint FD is used to equate Firm Demand to Demand Satisfied. Thus a new variable, Firm Demand, has been introduced into the standard CAPS model. The following are the steps to add this variable as an activity to the Material Balance report: 1. If you look at the DVTABLES set, in row 76 you will find LPSUMX listed in the Code section against Material Balance in the Description section. This indicates that LPSUMX is the table name where the Material Balance report gets generated. 2. In the DVCTL table, you will find the configuration for this report. In the CID column, SUM is listed. 3. 
In the SUM set, you should include the variable you want to add to the Material Balance report. Hence, add a new row to this set with a suitable description. 4. SUMR does not require any changes, as we are adding a new variable which has coincidence with the Balance constraint. If you look at SUMR, the Balance constraint B is already listed there. 5. DOTDYSF does not need any changes, as C2RMAP and DYSFCN will be used. 6. In C2RMAP, mention the function to be used for this new variable against the constraint column. Here, DVABSEXP is used, since Demand Satisfied used it too. 7. The next table to be modified is DOTDYS, which is mentioned in the SCRATT column. This table contains all the detailed activities of DOT. Add FD- as a new entry in the DOT and DOTDYS sets. 8. In the DYSMAP table, at Row FD- and Column 0, add the index number where FD- is located in the DOTDYS table. In this case, it is in the 19th row, so the index number is 18. Also add -1 to column 1. 9. In the Ribbon, click Run Action | Generate Production Schedule. Once the generation step is complete, click Planning | Plan Summary | Material Balance in the Navigation Pane. You can see that FIRM DEMAND has been added to this report. Keywords: None References: None
Problem Statement: Some refineries need to blend some products directly into a large ship. In such a case, the vessel (ship) acts as the destination tank. What is the best practice to implement this kind of logic in Aspen Petroleum Scheduler (Orion)?
Solution: 1.- BLEND COMPONENTS DIRECTLY INTO A SHIP: Aspen Petroleum Scheduler and Aspen Refinery Multi-Blend Optimizer (MBO) allow the user to define a Blend Event where no destination tank is specified. This can be used for the purpose described. You can find more information in the Orion Help file documentation: Index | Creating an Inline Blend Event. If your shipments are read into Aspen Petroleum Scheduler (APS) from a nomination system using the existing integration functionality within APS, this type of Blend Event can be imported via the staging tables. There are two approaches to this: a. If the nomination system knows which events are blends to a ship and which are straight shipments, they can be brought in through the staging tables as blends. A default recipe will not automatically be applied, so one would have to be picked up from the default for that grade from the composition table or some other location. b. Where the nomination system cannot differentiate between shipments from a finished tank and shipments blended directly to a ship, we bring in all shipping nominations as shipment events in APS. In these cases, a utility must be written to convert the appropriate shipments to blend events. 2.- COMPONENT RUNDOWNS GOING DIRECTLY TO A TANK OR SHIP: Inline blending in MBO/APS is handled by a dummy component tank. Assume there is a product MS98 with 8 components. If one of the components (e.g., LAN) does not have a storage tank, it is routed directly to the blend header without a tank. This is called inline blending or rundown blending. But a fundamental rule in MBO is that each component must have a tank. So, in this case, the stream LAN is routed to a dummy tank which has no real inventory capability; in other words, the tank's MIN equals its MAX (or MIN = 0 and MAX = 1, for example). In this way, the stream (LAN) that has no storage is routed to the blend header via the dummy tank. 
This approach holds if the number of streams without storage is small. For example, out of 10 components, if only 1 or 2 streams do not have storage, then the dummy tank helps. But if most of the streams don't have storage (5 or more out of 10), then there is little left to optimize, and MBO does not have the freedom to optimize. Keywords: - Blending References: None
Problem Statement: What is the difference between override and accept in Aspen Fleet Optimizer Data Quality Manager?
Solution: The Accept button is one of the control buttons. It is used to accept sales and inventory information for forecasting after the exception has been addressed and corrected. When an exception appears, the Accept button is grayed out and disabled until the sales, inventory, and delivery information is reconciled. When sales, inventory, and delivery information reconcile, the Accept button lights up, advising the user that it is OK for Aspen Fleet Optimizer to forecast that customer's shipments. The Override button allows the user to override the exception and accept the sales and inventory figures as they are. Before overriding an exception, Aspen Fleet Optimizer displays a dialog box with a message reminding the user to confirm all delivered shipments and update the delivery shift of any shipments coming later than expected. The user should cancel an override if either task has not been completed. Keywords: None References: None
Problem Statement: Deploying Aspen Engineering Suite with Microsoft Application Virtualization 4.6. Brief Overview App-V is a virtualization technology where virtualization happens at the application level. This enables encapsulated Microsoft App-V-enabled applications to run within an isolated environment, called App-V SystemGuard, on the Microsoft App-V client. The attached document outlines the best practices for deploying Aspen Engineering Suite V7.3 with the Microsoft Application Virtualization (App-V) platform, version 4.6.
Solution: Please download and review the attached MS Word document. Keywords: None References: None
Problem Statement: The Domain Security feature in aspenONE Process Explorer enables the search engine in aspenONE Process Explorer (A1PE) to return, in the search results, only the Aspen InfoPlus.21 (IP.21) tags a user has read access to. Domain Security can be enabled or disabled for A1PE by using the aspenONE Credentials utility. However, when A1PE is connecting to an IP.21 server version V8.4 or below, enabling the Domain Security feature can generate some errors. This knowledge base article provides the correct procedure for implementing Domain Security on A1PE when connecting to older Aspen InfoPlus.21 servers.
Solution: In order to secure tags in the Search Engine, Process Data must extract the security IDs for the Active Directory groups and users that have read access for each tag from the source IP.21 server and publish them with the tag metadata to the Search Engine on the A1PE server. This was accomplished in earlier versions using SetCim.dll. However, this feature was extended to InfoPlus_api.dll starting from version V8.5 to enable Process Data to extract the security IDs from remote IP.21 servers. That said, the Domain Security feature cannot be implemented when the source IP.21 server is V8.4 or below. In order to secure tags that do not have read access on a V8.4 or below IP.21 server, follow the steps below: 1. First, turn OFF domain security using the aspenONE Credentials utility before scanning the datasource. Go to Start | All Programs | aspenONE Process Explorer | aspenONE Credentials. Select Domain security and set it to OFF. When domain security is turned OFF using the Credentials utility, Process Data will publish the security ID for the Everyone group for each tag. 2. Scan the datasource using the A1PE Admin page. Once the scan is completed, turn ON domain security using the aspenONE Credentials utility. Since the scan was performed with security turned OFF, tags will remain visible to users from the Everyone group even after security is turned back ON. These tags will be visible in the Search list, but since they are still secured by IP.21, users will not be able to plot them or view any of their attributes or data. If you attempt to trend a tag that is secured for a user, you will receive a 'TagName is Invalid' error. 
If security is not enabled in the right manner, an error 'IP21 security call not supported' will be reported while indexing secured tags from IP.21 server versions prior to V8.4. However, this error will not occur if domain security for the Search Engine is turned OFF in the correct manner using the aspenONE Credentials tool. For scanning Aspen InfoPlus.21 servers V8.5 and above: The Aspen InfoPlus.21 datasource can be scanned using the A1PE Admin page after Domain Security is turned ON in the aspenONE Credentials utility. Once Domain Security is turned ON, tags with no read access will not be available in Tag Search in the A1PE browser. Ensure that a domain user is registered in the AspenSearchSolrSecurity.xml file. A domain user can also be registered using the Domain Connection Details section in the aspenONE Credentials utility. More information is available in KB 140903. Keywords: SOLR Domain Security References: None
Problem Statement: This knowledge base article describes how to output the MBO event number to the CUSTOM DATA section.
Solution: STEP 1: Use the macro BCI_Custom to output AB_BLN_EVENTS.SEQ to a range. STEP 2: Set the Custom Default data cell to refer to that range. In effect, when BCI executes the macro BCI_Custom for each event, the SEQ will be read as the Custom data. A sample that outputs SEQ as Text Custom Data NOTE1 is attached to this solution. Refer to worksheet CustomMap3 and the VBA function BCI_Custom in module BCIBaytownCustomMacroCode in the attached file Baytown_BCI.xls. Keywords: PIMS BCI MBO event number BCI Custom data BCI macro References: None
Problem Statement: How to perform a sensitivity analysis with the cost results calculated by Activated Economics.
Solution: The costing result variables calculated by Aspen Process Economic Analyzer (APEA) are not directly accessible within Aspen Plus (the results are fetched to the Aspen Plus GUI, but the underlying variables are not accessible in Aspen Plus). Therefore, we cannot target them when performing sensitivity analysis or optimization runs. In addition, Activated Economics is not supported for EO blocks. However, we can set up a sensitivity analysis using Aspen Simulation Workbook (ASW) to manipulate process conditions and see the effects on the costing results. In the attached example, we manipulate the reflux rate in a column and report the Total Direct Costs, Operating Costs, and Payout Period (PO) evaluated using Activated Economics. Keywords: ASW, Activated Economics, Sensitivity Analysis. References: None
Problem Statement: How do you model a multiple effect evaporator?
Solution: Attached is an example of a multiple effect evaporator for sugar syrup production. The sugar syrup is evaporated in a 4-effect evaporator. Components: sucrose (C12H22O11), water (H2O). Physical Properties: Properties for the material streams are computed using the UNIFAC activity coefficient model. The ASME Steam Tables are used to compute the properties of steam passing through the heaters. Parameters for the components exist in the Aspen Plus pure component databanks. The structure of sucrose for the UNIFAC model is entered. A user-defined property set is created to find bubble point temperatures. Unit Operation Models: FLASH2 models are used to simulate the evaporators. The evaporators are heated by HEATER models. Process Specifications: The following conditions are specified for this model: feed stream temperatures, pressures, flow rates, and compositions; flash stage pressures; outlet syrup concentration. The model predicts the following: product stream flow rates and compositions; temperature and pressure profiles; heat duties; steam requirement to attain the specified syrup concentration. The simulation results may be seen in the Graphical User Interface on the process flow diagram and stream summary. Comprehensive block results and profiles are located in the Aspen Plus report. Keywords: application sugar sucrose evaporator References: None
Problem Statement: What is the recommended method for customizing column sets using WCF?
Solution: The correct method when using WCF is to modify the <product>.user.display.config file. The WebHelp in V7.3 CP2 includes examples for how to do this. There is a new section called Web Interface Customization and a topic Setting Display Attributes for User-Defined Entries. This describes the correct way to do this. Below is a copy of that topic. Keywords: None References: None
Problem Statement: Aspen Audit & Compliance Manager (AA&CM) includes a utility called Audit & Compliance Database Backup/Restore. The utility allows users to Back up, Archive, Purge, Restore and perform Integrity Checks on their AA&CM database. Many users have tried to use this utility as an upgrade utility from one version to another. However, this utility was mainly designed to back up a particular version of the AA&CM database and restore it to the same version. This utility was not designed to back up a particular version of the AA&CM database and restore it to a different version of the AA&CM database. Having said that, if the version difference between the source database and the target database is small, and the size of the database is relatively small, the Audit & Compliance Database Backup/Restore utility may be able to handle it. If, however, the source and target database versions are far apart and the size of the database is relatively large (several hundred megabytes), the Backup/Restore utility may fail. This Knowledge Base article provides steps to upgrade a large AA&CM database from one version to another.
Solution: When upgrading from one database version (and/or moving from one vendor) to another, use the vendor tools (MS SQL Server or Oracle) and follow these steps: 1. In the AA&CM Administrator console view, expand the Aspen Audit & Compliance Administrator and <Audit & Compliance Server name>. 2. Right-click on Server and select Stop Queue Processing from the context menu. 3. Use native vendor tools (MS SQL Server or Oracle database backup and restore tools) to back up your AA&CM database and restore it to another database server. 4. Run the Aspen Database Wizard on the target machine and use the Upgrade the existing database and database objects option (see screen capture below) to upgrade your AA&CM database to the target version. 5. Start Queue Processing on the target AA&CM server. General Guidelines: To back up on a routine basis - use the vendor tools. To archive and purge a set (time range of events) - use the Audit & Compliance Database Backup/Restore utility to generate archive files. This is recommended since the Audit & Compliance Database Backup/Restore utility will group events that are related (parents & children). Keywords: None References: None
Problem Statement: The prerequisite for reading this KB article is a basic understanding of how XML is used for configuring the User Interface of V8 Aspen SCM. As you might know, one of the most commonly used UI elements is the PropertyView. A PropertyView can be of the type Command, List, Table, String, or Bool. For each of these PropertyViews there is an option to run self-written rules or macros or special commands using the After attribute (an example using this attribute is covered in
Solution: # 136528). Special After commands include: !Save: to save pending changes of all the properties in all the State Groups. !Refresh: to refresh all the properties in all the State Groups without discarding the pending changes. !Reset: to discard all the pending changes in all the State Groups. !ForceRefresh: to forcefully run the Setup rule and refresh all the properties. !ForceReset: to forcefully run the Setup rule and reset all the properties. This article describes how each of these commands differs from the others. Solution Consider the following example screen. The screen _TEST contains a table _SAMPLE which is editable from the GUI. Ideally, the code for this screen would be: <?xml version="1.0" encoding="utf-8"?> <CONFIG> <SAMPLE Header="TEST" ViewModelID=":SAMPLE_VM"> <Views> <TABLE Type="PropertyView" DataSource="TABLESOURCE"/> </Views> <Ribbon ApplyDataChangesCommand="APPLYSCREENDATACHANGES" CancelDataChangesCommand="CANCELSCREENDATACHANGES"/> </SAMPLE> <SAMPLE_VM> <States> <STATE1> <Properties> <TABLESOURCE Type="Table" ValueSource="_SAMPLE"/> <APPLYSCREENDATACHANGES Type="Command" DisableWhenNoChanges="True"> <After A1="!Save"/> </APPLYSCREENDATACHANGES> <CANCELSCREENDATACHANGES Type="Command" DisableWhenNoChanges="True"> <After A1="!Reset"/> </CANCELSCREENDATACHANGES> </Properties> </STATE1> </States> </SAMPLE_VM> </CONFIG> Notice the usage of !Save and !Reset - !Save is connected to the Apply Changes button, while !Reset is connected to the Cancel Changes button in the Changes section of the Data tab. Please refer to Solutions # 137533 and 137534 to learn more about how to connect these two buttons to custom screens. Using a Setup Attribute: Every State Group can have its own Setup attribute connected to a rule or macro. This Setup command is run when the screen is initially started. It is also run when the Refresh button in the Current View section of the View tab is pressed. 
To understand when a Setup rule/macro runs, add the following MSGBOX command to STATE1: <SAMPLE_VM> <States> <STATE1 Setup="MSGBOX XMSGBOX"> <Properties> <TABLESOURCE Type="Table" where XMSGBOX contains: Code Description TITLE Setup Rule MESSAGE Setup Rule is run now BUTTONS OK ICON EXCLAMATION If you open the screen, the Setup rule is run. But when you Save or Cancel Changes, it won't run. It runs again when the Refresh button is pressed manually. Difference between Reset and Refresh: You will have noticed that when you press the Cancel button, the screen discards all the changes made by the user and presents the data present in _SAMPLE. But if you change the After command to !Refresh, as in: <CANCELSCREENDATACHANGES Type="Command" DisableWhenNoChanges="True"> <After A1="!Refresh"/> </CANCELSCREENDATACHANGES> and then follow the same steps, you will notice that your unsaved changes won't be lost - that's the difference between Reset and Refresh. Difference between Reset and ForceReset: In the previous step you will have noticed that the Setup command did not run, even when the Reset command was used. That's because the underlying table has not changed. If there is an external rule/macro that changes the data of _SAMPLE, that would trigger the Setup rule. After opening the screen, go to _SAMPLE through the command window and change one of the values in this table. Now, if you alter a different entry of _SAMPLE through the TEST screen and press the Cancel Changes button, notice that the screen runs the Setup rule, grabs all the altered values from _SAMPLE, and also cancels the changes made through the TEST screen. In some situations, you might always want to run the Setup rule irrespective of the underlying changes to the data. In those cases, use !ForceReset. This command forces the screen to run the Setup rule every time the Cancel Changes button is pressed, in addition to resetting the screen's data. 
Similarly, the difference between the Refresh and ForceRefresh commands is whether the Setup rule is run. Keywords: None References: None
Problem Statement: What are the advantages of using PCWS instead of DCS graphics as the operator interface to a DMCplus controller?
Solution: Production Control Web Server (PCWS) is the AspenTech standard HMI for all APC applications. However, some plants use a custom DCS interface as the HMI for their DMCplus applications; most of those plants had DMCplus applications installed before PCWS became a standard. Here are some of the advantages of using PCWS: - PCWS provides much more information on the controller. Along with all the standard DMCplus context values, PCWS also provides Aspen Watch data, which aids the operators in understanding the DMCplus controller's behavior. - Minimal data is required between the DCS and the APC server, as there is no need to pass limits and statuses to the DCS graphics. Thus there will be less DCS traffic when using PCWS instead of custom DCS graphics. - As DCS graphics are no longer needed, the DCS work required when building new controllers and updating old controllers is reduced (Solution 136172 discusses the DCS work requirements). Keywords: PCWS DCS custom graphics References: None
Problem Statement: This Knowledge Base article (KB) is intended to give readers a basic knowledge of Linear Programming (LP). It will be the first in a series of articles under the topic 'Linear Programming using Aspen Supply Chain Management'. This KB is intended for users who do not have any background in LP or in Aspen Supply Chain Management (SCM) programming, though a very preliminary understanding of mathematics is required to pick up LP concepts easily. The screenshots in this document were created using Aspen SCM version 7.3.1. At the end of this tutorial, users will be able to formulate and solve simple Linear Programming problems in Aspen SCM. Example Problem: The manager of a company producing television sets is in the process of making his decision on tomorrow's production schedule. The company sells both LCD and Plasma television sets. The profit is $100 for the LCD TVs and $150 for the Plasma TVs. Each set must pass through two phases on the shop floor: Manufacturing and Packaging. There are a total of 1600 man-hours available for manufacturing and 400 man-hours available for packaging every day. Each LCD TV requires 0.55 hours to manufacture, whereas each Plasma TV requires 0.50 hours to manufacture. An LCD TV requires 0.10 hours to pack, while a Plasma TV requires 0.20 hours to pack. It is assumed that all the television sets produced will be sold instantaneously and that no defects are possible in the production. How many sets of each type would you suggest to the manager for production tomorrow?
Solution: I. Need for optimization: a. The first step is to analyze the problem and understand the requirement for optimization. There are two models of TV sets: LCD and Plasma. b. In most models like this one, the choice is either to reduce the costs or to increase the profit. Since cost values are not given here, the choice is to increase the profit for the company. c. It is given that LCD TVs have a profit of $100 and Plasma TVs have a profit of $150. It is evident that Plasma yields the maximum profit per unit. d. Time information has also been provided for producing each of these models. There are two phases that each of these models has to go through: Manufacturing and Packaging. e. Each of these phases has limited resources available in terms of man-hours. If the required man-hours for each of these models are added up across the phases, it is clear that LCD requires less time than Plasma. f. The question here is: should the manager concentrate only on Plasma, since it yields more profit, or should he produce only LCD, because he can produce more of these sets? The maximum profit (after including the time limitations) in the first scenario (Plasma only) is $300,000; in the second scenario (LCD only) it is $290,909. Neither of these scenarios will yield the maximum profit. In order to find the optimal solution, mathematical models are necessary. g. The given data is: Profit: LCD $100, Plasma $150. Man-hours: Manufacturing - 1600 available; 0.55 required per LCD, 0.50 per Plasma. Packaging - 400 available; 0.10 required per LCD, 0.20 per Plasma. II. Formulate the problem algebraically: This is a 2-product, 1-period problem. Here are the steps to model it: a. Find the decision variables: The problem is to find the production quantities of LCD and Plasma television sets. The profit of this company is dependent on these two variables. The decision on the values of these variables is left to the computer program; hence they are called the decision variables. 
The decision variables are defined as: L = amount of LCD TVs to produce; P = amount of Plasma TVs to produce. b. Formulate the objective function: The natural tendency of any company is either to improve the profit or reduce the cost. Since the profit information is provided in this case, the manager's production schedule should be such that the overall profit of the company increases. Mathematically, this is achieved by multiplying the profit by the corresponding amounts of TVs sold. The production quantities of LCD TVs and Plasma TVs are L and P respectively; the profit of each LCD TV is $100 and each Plasma TV is $150. Hence, the objective function in this problem is to: MAXIMIZE 100*L + 150*P c. Identify the constraints: Though the objective of the manager is to maximize the profit, there are only limited resources for him to make use of in the Manufacturing and Packaging phases. The mathematical equations representing these limitations are called constraints. i. For the Manufacturing phase: Every LCD TV requires 0.55 hours and every Plasma TV requires 0.50 hours in the Manufacturing phase. The total time required to manufacture both types of television sets should be less than or equal to the total of 1600 man-hours available for the Manufacturing phase. Hence the first constraint is: 0.55*L + 0.50*P <= 1600 ii. Similarly, for the Packaging phase: Every LCD TV requires 0.10 hours and every Plasma TV requires 0.20 hours to complete the packaging task. But the total time available in the Packaging phase is 400 man-hours, so the total time required to pack both types of TVs should be less than or equal to the 400 available man-hours: 0.10*L + 0.20*P <= 400 d. Other common constraints: The production numbers of LCD and Plasma TVs cannot be negative. Normally, this rule would result in two additional constraints: L >= 0 P >= 0 In this formulation, any negative production numbers would contribute negatively to the profit. 
Since the objective function has already been defined to maximize the profit, the program will never turn to negative production numbers in any scenario. Hence these constraints are not necessary in this formulation. The algebraic formulation for this problem is therefore: MAX. 100*L + 150*P SUBJECT TO: 0.55*L + 0.50*P <= 1600 0.10*L + 0.20*P <= 400 Since all the equations are linear in nature, this type of programming is called Linear Programming. III. Formulate the problem using tables: There are two tasks to be accomplished in this step. One is to introduce SCM programming and the other is to convert the algebraic formulation into tabular form, since SCM is programmed to accept only the tabular form of an LP formulation. In this section, the algebraic program developed above will be converted directly into the corresponding tabular program. a. To open Aspen SCM: Go to Start | All Programs | AspenTech | Aspen Supply Chain Suite | Aspen SCM. After Aspen SCM opens, go to File | Open and browse for the 'lpcourse.cas' file. The next steps can be entered directly in the case. b. Pre-defined sets in any case tied to the LP library file: i. COL, ROW and POL are the sets that are pre-defined in the case. ii. COLS, ROWS, POLI, COEF, MATX, RHSX, SENX, POLX, COLX, OBJX, ROWX are the tables that are pre-defined in the case. The explanation of all these sets and tables is provided in the subsequent sections. All the pre-defined sets and tables can be accessed through the command line/window in the case. The command line can be brought up by pressing the F3 key. c. To create sets/tables: During the next steps, new sets/tables might need to be created. To create new sets/tables, the 'Catalog' in the 'Developer' tab should be accessed. The 'New' button will open a popup window, where a new name for the set/table can be entered. d. Set attributes: Since 'Open Attribute Dialog' is checked by default, this opens the Attribute dialog box. Make sure that the 'Type' 
is selected as `Data? in the Set Dialog Box. The following set is created for illustration purposes. e. Table Attributes: In the table dialog box, the format should be specified as required. In this example, it is specified as I8; I represents the type of entries i.e. Integers and 8 represents the length of the entries. If characters need to be input into the table, C should be specified with the length. For floating point numbers, F should be specified along with the length of the number and the decimal point separated by a period in between; E.g.: F5.2 . The other important fields in a table are the Rowset and the Colset. The Rowset specifies the set that should be referenced in the Row of the table; Colset specifies the set referenced in the Column of the table. If it is just going to be 1 row/column it is best to leave the corresponding text box as 1. The following table is created for illustration purposes. If the Rows/Tables are created correctly, they should appear in the Catalog. Next steps will define how to fill out the pre-defined sets and tables to solve the LP problem in hand: f. COL Set: COL set contains the decision variables. As described in the algebraic formulation, LCD TV?s (L) and Plasma TV?s (P) production quantities are the decision variables. In the tabloid programming, similar types of decision variables should be categorized together. Since both are production quantity variables, they are categorized under PROD. So PROD is entered in the Code section of the COL set. The description section is just to provide additional information about the set entries. g. COLS Table: COLS Table contains the domains for the decision variables specified in the COL set. The above categorized PROD decision variable is expanded through these domains. PROD should have TV as a domain, since separate production decision variables are required for LCD (L) and Plasma (P). Algebraic formulation?s objective function: MAX. 
100*L + 150*P a PROD Hence a new set called TV containing two entries, LCD and Plasma, should be created and that should be listed in FLD2. FLD1 is reserved for declaring the decision variable (it is declared as P, here). The TABL column is used to declare subsets, if required in the formulation. In this case, it is not. Hence, the decision variable P can be entered in there. h. ROW Set: ROW set contains all the constraints. From the algebraic formulation, it is known that there are two constraints required. CAPBALM and CAPBALP refer to the capacity constraints of Manufacturing and Packaging respectively. Algebraic formulation?s constraints: 0.55*L + 0.50*P <= 1600 a CAPBALM 0.10*L + 0.20*P <= 400 a CAPBALP i. POL Set: POL set has two sections: i. Column Section: In the Column Section, the decision variables are declared (here it is P, not to be confused with algebraic formulation?s P for production of Plasma TVs). ii. Row Section: In the Row Section, the Right Hand Side (RHS) of the constraints are declared. In this example, there are two constraints (as per algebraic formulation). The first constraint requires the available capacity in manufacturing to be on the RHS, which is marked as LCM; the second constraint requires the available capacity in shipping, marked as LCP. Algebraic formulation?s constraints: 0.55*L + 0.50*P <= 1600 a LCM 0.10*L + 0.20*P <= 400 a LCP j. POLI Table: POLI Table has two sections: i. Column Section: In the Column section, the minimum and maximum values of the declared decision variables are specified. The CST column will specify the coefficient that has to be multiplied with the decision variables in the objective function. Here, there are no MIN/MAX values for P. For the CST, a new table called REV is created with TV as the Row Set and 1 as the Column Set (new tables can be created using `Table Selection? / `Data Catalog?). The profit values of the TV $100 and $150 for LCD and Plasma respectively are populated in this table. 
Algebraic formulation's objective function: MAX. 100*L + 150*P -> P

ii. Row section: In the Row section, against every entry, the SENSE and RHS values should be entered. The SENSE is LE for both constraints, since the algebraic formulation specifies less than or equal to. For LCM, the value 1600 should be entered; for LCP, 400. These values directly correspond to the RHS of the two constraints described in the algebraic formulation.

Algebraic formulation's constraints:
0.55*L + 0.50*P <= 1600 -> LCM
0.10*L + 0.20*P <= 400 -> LCP

k. ROWS Table: Similar to the COLS table, the ROWS table specifies the domains of the constraints. FLD1 specifies the name of the constraint. The constraints can be named A and B. No domains are required in this case. In the TABL column, the corresponding RHS entries specified in the POL set are entered.

Algebraic formulation's constraints:
0.55*L + 0.50*P <= 1600 -> A
0.10*L + 0.20*P <= 400 -> B

l. COEF Table: The COEF table contains all the coefficients for the constraints. Tables TIMEM and TIMEP should be added to this COEF table against the PRODUCTION column.
i. The TIMEM table, with Rowset TV and Colset 1, should contain the time required to produce each type of TV in the Manufacturing phase.
Algebraic formulation's 1st constraint: 0.55*L + 0.50*P <= 1600 -> TIMEM
ii. The TIMEP table, with Rowset TV and Colset 1, should contain the time required to pack each type of TV in the Packaging phase.
Algebraic formulation's 2nd constraint: 0.10*L + 0.20*P <= 400 -> TIMEP

IV. Generation and Solution: After the model is formulated, the next step is to generate the model and find the solution.

a. Generation: The Generation step enumerates all the decision variables across the corresponding domains. It is executed by typing GEN in the command line. As a result of GEN, an information dialog box opens and a variety of tables are generated. This dialog box is the place to look for errors, if any.
Detailed messages on errors can be found by typing ERROR in the command line. The tables generated by the GEN command can be checked for consistency with the formulation:

i. MATX Table: This table helps to confirm the coefficients of the constraints (A and B) in the tabular formulation against the algebraic formulation.
ii. RHSX Table: This table can be used to verify the right-hand side of the corresponding constraints.
iii. SENX Table: This table defines the sense of both constraints.

With the above three tables, the tabular constraints can be verified against the algebraic constraints:
0.55*L + 0.50*P <= 1600 -> A
0.10*L + 0.20*P <= 400 -> B

iv. POLX Table: This table can be used to verify the coefficients of the objective function of the tabular formulation (CST column) against the algebraic formulation. Since there are no values specified in the POLI table as MIN and MAX for the decision variable P, zeros are displayed in the POLX table for both LCD and PLASMA.

Algebraic formulation's objective function: MAX. 100*P(LCD) [L] + 150*P(PLASMA) [P]

b. Solution: To solve a model, there are two solvers available within SCM: CPLEX and XPRESS. These solvers can be called through the OPT and XPRESS commands respectively. To specify maximization or minimization of the objective function, the CCPLEX or CXPRESS control table is opened and the MNMX value is changed accordingly; here it is MAX, as discussed in the algebraic formulation. Once the solve is complete, a variety of tables are generated. These can be checked for the solution:

i. COLX Table: The X column represents the optimal value of the two decision variables. The XCST column specifies the amount that each of these decision variables contributes to the objective function.
ii.
OBJX Table: The OBJECTIVEFUNCTION column provides the value of the objective function, i.e. the total profit earned by selling the optimal amounts of LCD TVs and Plasma TVs. This is the maximum possible profit the company could earn from these two products under the given constraints.
iii. ROWX Table: The ROW SLACK column gives the difference between the left-hand side and the right-hand side of every constraint. Multiplying out 0.55*2000 + 0.50*1000 (the time required to complete 2000 LCD TVs and 1000 Plasma TVs) gives exactly 1600, the total available man-hours; the difference between LHS and RHS is zero, and that is reported as ROW SLACK. The same applies to constraint B.
Keywords: None
References: None
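For readers who want to sanity-check the tabular model outside Aspen SCM, the same two-variable LP can be solved by brute-force enumeration of constraint-intersection vertices. This is an illustrative, self-contained Python sketch and is not part of the SCM workflow:

```python
from itertools import combinations

# Constraints in the form a*L + b*P <= rhs, taken from the formulation above.
constraints = [
    (0.55, 0.50, 1600.0),  # manufacturing man-hours
    (0.10, 0.20, 400.0),   # packaging man-hours
    (-1.0, 0.0, 0.0),      # L >= 0
    (0.0, -1.0, 0.0),      # P >= 0
]

def profit(L, P):
    return 100.0 * L + 150.0 * P

def solve():
    best = None
    # An optimal LP solution lies at a vertex of the feasible region,
    # so intersect every pair of constraint boundary lines.
    for (a1, b1, r1), (a2, b2, r2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel lines, no unique intersection
        L = (r1 * b2 - r2 * b1) / det
        P = (a1 * r2 - a2 * r1) / det
        # Keep the vertex only if it satisfies every constraint.
        if all(a * L + b * P <= r + 1e-9 for a, b, r in constraints):
            if best is None or profit(L, P) > profit(*best):
                best = (L, P)
    return best

L, P = solve()
# The optimum lands at L = 2000 LCD and P = 1000 Plasma TVs (profit $350,000),
# matching the COLX/OBJX values reported by Aspen SCM for this problem.
```

Vertex enumeration is only practical for toy problems like this one; for anything larger, the CPLEX/XPRESS solvers invoked by OPT/XPRESS do the same job with the simplex method.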
Problem Statement: How do I upgrade the BCI (Blend Control Interface) model from 2004.1 to a higher version?
Solution: How do I upgrade a BCI model (Honeywell BPC - MBO format) from version 2004.1 to a higher version?
1. Add the BCI table [Components] to the model tree.
2. Open the BCI table [CustomMap].
3. If the CustomMap table does not have the column [Product], add the column [Product] to it.

Which tables are mandatory for a BCI model (Honeywell BPC - MBO format) in version 2006.5? The tables required for a BCI model (Honeywell BPC - MBO) to work correctly in version 2006.5 and higher depend on the presence of the BLEND.CFG files.

1. If the user provides one or more BLEND.CFG files under the model directory:

BCI Table      Mandatory
Components     Yes
ProdMap        Yes
MaxComp        Yes
Tanks          Yes
Specs          Yes
GlobalSpecs    Yes
CustomMap      Yes
Properties     Yes
Additive       No
BMSCustomMap   No
SpareValues    No
BlendValues    No

2. If the model does not contain a BLEND.CFG file under the model directory:

BCI Table      Mandatory
Blenders       Yes
Components     Yes
Properties     Yes
Tanks          Yes
ProdMap        Yes
MaxComp        Yes
Specs          Yes
GlobalSpecs    Yes
CustomMap      Yes
AddiMap        No
Additive       No
BlendValues    No
SpareValues    No
BMSCustomMap   No

Notes: If a BCI table is indicated as mandatory, the user must add the spreadsheet table to the model tree, but the table should be left empty if it will not be used. For example, [GlobalSpecs] is a mandatory table for the Honeywell - MBO Recipes model; if this table is not needed, an empty table containing only the header row would be added:

Quality  MinSpec  MaxSpec  OverrideMin  OverrideMax
*

Keywords: BCI model upgrade, Blend Controller interface
References: None
Problem Statement: What is AspenTech's recommended list of things that I need to do to test Aspen Fleet Optimizer?
Solution: Create a test database. The database should be a snapshot of the production data from the point at which the dispatch plans have been created and finalized for a given day and new sales and inventory data has been received for the next day.
Install the new application.
Perform a complete backup of the test database prior to executing the database upgrade. Measure the time required to create the dump.
Upgrade the test database by executing the appropriate upgrade scripts provided by AspenTech. Make note of any unexpected messages or warnings, and again keep track of the time needed to fully execute all the database upgrade steps.
Execute the Check Database utility to ensure all database tables are correct. Resolve any issues as needed.
Replicate new interface table data from production into the lab environment to enable complete process and workflow testing.
Test your updated interface code, triggers, and scripts (if any) for the touch-points between Aspen Fleet Optimizer and your other systems. Run the NT services as part of this testing to ensure data is fully processed without errors.
Using the Dispatcher Checklist (see Solution 106063), perform step-wise testing of the new application in the lab as if you were in production on a typical day.
Compare your actual dispatch-schedule output from production with the results from the new application in the lab environment to ensure the results are as expected. This should include: re-optimizing a previous schedule, re-forecasting a number of sites, and printing selected reports that are used on a regular basis. In each case, review the differences and/or expected changes.
Revise the Dispatcher Checklist as needed and make special note of changes that impact the end-users.
Plan your go-live time line with AspenTech (see Solution 127518).
Develop a detailed time line for the production upgrade.
This should indicate the start date and time of the go-live, the milestones, and the validation steps leading to completion of the upgrade and enabling access for end-users.
Create contingency milestones with plans to abort or otherwise terminate the upgrade and ensure end-users are restored to their original level of service should problems arise during the process.
Consider performing one or more additional dry runs of the upgrade using this time line.
Arrange the schedule with your appropriate support staff to perform the go-live.
Keywords: Testing, Upgrade, Go Live
References: None
Problem Statement: When the same database is used for Aspen Petroleum Scheduler (APS) and Aspen Refinery Multi-Blend Optimizer (MBO), it would be better (as a best practice) if there were an option in the CONFIG table or settings to select which tank inventory table the end user prefers MBO to use: the TNKINV or the TNKINV_MBO table.
Solution: In V7.3.1 a new option, called Tank Inventory Table, has been added to the Model Settings tab of the Settings dialog box. It allows you to specify which table to use when retrieving and storing beginning tank inventory data: either the TNKINV_MBO or the TNKINV table. Additionally, a new keyword, TANKINV_CHOICE, has been added to the CONFIG table that can also be set to indicate the table to use when storing tank inventory data. There are three options for choosing the tank inventory table in the new drop-down list (Tank Inventory Table) on the Settings | Model Settings tab. When the user changes this option, the model should be closed and reopened for the option to take effect.
Keywords: Model Settings, CONFIG setting
References: None
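As a sketch of the CONFIG-table route, an entry might look like the fragment below. Only the TANKINV_CHOICE keyword itself comes from the text above; the layout and the value spelling are placeholders, so check the documentation for your version:

```
* CONFIG table entry (hypothetical layout and value)
TANKINV_CHOICE    TNKINV_MBO    * store/retrieve beginning inventories in TNKINV_MBO
```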
Problem Statement: When you create new custom screens, you might want to add them to your Navigation Pane. The following article provides an example with detailed steps on how to create a new node in the Navigation Pane and how to connect it to the created XML screen.
Solution: All the items in your Navigation Pane are created using XML code. The XML code is usually written in a table called CNAVBAR. This table is referenced in the CGLOBAL set, which is the primary set responsible for all User Interface elements. Both the CGLOBAL and CNAVBAR tables are part of the standard CAPS library and are not editable. Instead, a table called CNAVOVER is used to make overrides to the XML code in CNAVBAR. Consider the following example: This is the example that was used in the Knowledge Base article 'XML Tutorial - How to build or edit a sample User Management Screen' (please read Solution # 135832 to learn how to build this screen). The case files are attached here. The above screen was built using XML code in _TEST with the View name TEST_USER. As a tip: without adding it to the Navigation Pane, you can still view this screen by entering SHOWSCRN CAPS_SCREEN(_TEST:TEST_USER) in the command window, where CAPS_SCREEN is the type of screen, _TEST is the name of the set containing the XML code and TEST_USER is the name of the View section. This command can be used to view the screen, but the more convenient way is to add the screen to the Navigation Pane. As explained earlier, CNAVOVER is the table to be used to enter the XML code required to add this entry to the Navigation Pane. The CNAVOVER table has the Rowset CNAVOVRR and Colset 1. You need to edit the Rowset to enter more lines in this table. To edit the Rowset, enter the name CNAVOVRR in the Command Window and type in as many numbers as you want in the Rowset. If you want to insert lines in between, you can right-click inside the window and click Insert, or use the keyboard shortcut Ctrl + Insert. If the above example has to be added to the Data Management section with the caption User Management, these are the steps to be followed:
1. Enter RBNAVRS in the command line.
This set contains all the default elements of the Navigation Pane and Action Menu. The Description column contains the names that appear in the Navigation Pane. Search for Data Management in this column and find the corresponding name in the Code column, since the aim is to add the User Management screen to the Data Management section.
2. NAVBAR is the prefix used to address the items in the Navigation Bar. DATAMGMT is the name used to address the Data Management section in the Navigation Pane.
3. Enter CNAVOVER in the command line. By default, CNAVOVER will contain line 0, the opening line for any XML set/table; line 1, the opening tag of NAVBAR with the Uses keyword (the Uses keyword specifies the default set from which XML code is used to generate the Navigation Pane); and line 3, the closing tag of NAVBAR.
4. In line 2, enter the opening tag of Data Management that you found in Step 1 from RBNAVRS. With a space in between, enter the keyword AccessControlDefault and set it to "True"; this keyword is used to make sure that all the children of Data Management are visible. If this keyword is not entered, you will not be able to see your new screen in the Navigation Pane, even though everything else is right. The code in line 2 should look like: <DATAMGMT AccessControlDefault="True">
5. Now insert new rows, or cut and paste the closing tag of NAVBAR to line 7 in the table.
6. Now the opening tag of Children should be entered in line 3. This indicates that the following items will be placed under the Data Management category. The code in line 3 should look like: <Children>
7. In line 4, enter a suitable keyword for your new node, for example USER, and open the tag. With a space after that, enter the keyword Caption and enter a suitable caption inside quotation marks ("User Management"); this will be displayed in the Navigation Pane.
The next keyword that you should enter is the Type keyword, which specifies the type of screen that you have created; it can either be a CAPS screen or a Planning Board. In this case it is a CAPS screen and should be entered as CAPS_SCREEN. The final keyword is the Config keyword, which should point to the set/table where the screen's code has been defined (_TEST) and specify the name of the View section inside this set/table (TEST_USER), with a colon in between. You should use a forward slash in front of the closing angled bracket to mark the end of the USER tag on this line itself. The code in line 4 should look like: <USER Caption="User Management" Type="CAPS_SCREEN" Config="_TEST:TEST_USER"/>
8. In line 5, close the Children tag that was opened in line 3. The code should look like: </Children>
9. In line 6, close the DATAMGMT tag that was opened in line 2. It should appear as: </DATAMGMT>
10. In line 7, the closing tag of NAVBAR should be present: </NAVBAR>
The completed CNAVOVER table now contains lines 0 through 7 as described above.
11. Now in the Command Window, enter: VMACTION CGLOBAL:GLOBAL REFRESH
12. This command will refresh the entire User Interface, including the Navigation Bar. In the Data Management section, you should be able to see the User Management node added to the end of the list.
If the Navigation Pane disappears after the REFRESH command, there is a syntax error in your XML code; please revisit the code in the CNAVOVER table. If you want to create a new section instead of using Data Management, you can just change the name in line 2 and close it correspondingly in line 6. Make sure to include the keyword AccessControlDefault="True" in the NAVBAR section, otherwise you may not be able to see the changes in the Navigation Pane. You can add the keywords Caption and Icon to line 2 to make things complete. The disadvantage of using CNAVOVER to override CNAVBAR entries is that the entries in CNAVBAR take precedence over CNAVOVER.
Hence any entries in CNAVOVER will always be added after the existing list.
References: None
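Assembled from the numbered steps above, the eight rows (0-7) of the CNAVOVER table read as follows. The value of the Uses attribute is an assumption here, since the article does not show it; the indentation is cosmetic, as each row is a single table line:

```xml
<?xml version="1.0" encoding="utf-8"?>
<NAVBAR Uses="CNAVBAR">                   <!-- row 1: Uses names the default nav-bar set (assumed value) -->
  <DATAMGMT AccessControlDefault="True">  <!-- row 2: make all children of Data Management visible -->
    <Children>                            <!-- row 3 -->
      <USER Caption="User Management" Type="CAPS_SCREEN" Config="_TEST:TEST_USER"/>  <!-- row 4 -->
    </Children>                           <!-- row 5 -->
  </DATAMGMT>                             <!-- row 6 -->
</NAVBAR>                                 <!-- row 7 -->
```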
Problem Statement: What is the outside air velocity used in a detailed heat flux model of the depressuring utility?
Solution: Currently, by default, the outside heat transfer calculation is based on natural convection, i.e. wind speed = 0. This means that outside air is, by default, assumed to be in laminar flow.
Keywords: depressuring, detailed heat flux, air velocity
References: None
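Purely to give a feel for the magnitude of a still-air film coefficient, here is a sketch using the textbook Churchill-Chu correlation for natural convection from a horizontal cylinder. Both the correlation and the air property values are assumptions for illustration; they are not necessarily what the depressuring utility implements internally:

```python
import math

# Churchill-Chu correlation for natural convection from a horizontal
# cylinder (valid up to Ra ~ 1e12). Property values are rough figures
# for ambient air and are assumptions for illustration only.
def outside_film_coefficient(d, t_wall, t_air):
    """Estimate the still-air film coefficient, W/m2.K (temperatures in C)."""
    g = 9.81          # m/s^2, gravitational acceleration
    k = 0.026         # W/m.K, air thermal conductivity
    visc = 1.6e-5     # m^2/s, kinematic viscosity of air
    alpha = 2.2e-5    # m^2/s, thermal diffusivity of air
    pr = visc / alpha
    # Ideal-gas expansivity evaluated at the film temperature.
    beta = 1.0 / ((t_wall + t_air) / 2.0 + 273.15)
    ra = g * beta * abs(t_wall - t_air) * d**3 / (visc * alpha)
    nusselt = (0.60 + 0.387 * ra**(1.0 / 6.0)
               / (1.0 + (0.559 / pr)**(9.0 / 16.0))**(8.0 / 27.0))**2
    return nusselt * k / d

# A 1 m vessel at 75 C in 25 C still air gives h of roughly 5 W/m2.K,
# an order of magnitude below typical forced-convection values.
h = outside_film_coefficient(d=1.0, t_wall=75.0, t_air=25.0)
```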
Problem Statement: The new Aspen SCM GUI interface is built using XML. This knowledge base article demonstrates how to build custom screens in the new Aspen SCM GUI by programming in XML. This is a follow-up to Solution #136528, 'XML Tutorial 3 - How to build or edit a sample User Management Screen'.
Solution: The case file containing the previous Solution can be found in the attachment.
Example
Keywords: None
References: None
Problem Statement: How is the NRTL-SAC model used in Aspen Plus?
Solution: NRTL-SAC is a segment contribution activity coefficient model, derived from the Polymer NRTL model. NRTL-SAC can be used for fast, qualitative estimation of the solubility of complex organic compounds in common solvents. Conceptually, the model treats the liquid non-ideality of mixtures containing complex organic molecules (solute) and small molecules (solvent) in terms of interactions between three pairwise interacting conceptual segments: a hydrophobic segment, a hydrophilic segment, and a polar segment. In practice, these conceptual segments become the molecular descriptors used to represent the molecular surface characteristics of each solute or solvent molecule. Hexane, water, and acetonitrile are selected as the reference molecules for the hydrophobic, hydrophilic, and polar segments, respectively. The molecular parameters for all other solvents can be determined by regression of available VLE or LLE data for binary systems of the solvent and the reference molecules or their substitutes. The treatment results in four component-specific molecular parameters: hydrophobicity X, hydrophilicity Z, and polarity Y- and Y+. Two types of polar segments, Y- and Y+, are used to reflect the wide variations of interactions between polar molecules and water. The conceptual segment contribution approach in NRTL-SAC represents a practical alternative to the UNIFAC functional group contribution approach. This approach is suitable for the industrial practice of carrying out measurements for a few selected solvents and then using NRTL-SAC to quickly predict other solvents or solvent mixtures and to generate a list of suitable solvent systems.

Implementation Parameters
NRTL-SAC is implemented in the aspenONE 2004 release as a system liquid activity coefficient model. The model name is NRTLSAC. For each component, it has four molecular parameters (X, Y-, Y+, and Z), although only one or two of these molecular parameters are needed for most solvents in practice.
Since conceptual segments apply to all molecules, these four molecular parameters are implemented together as a binary parameter, NRTLXY(I, m), where I represents a component (molecule) index and m represents a conceptual segment index. In addition, the Flory-Huggins size parameter, FHSIZE, is used in NRTL-SAC to calculate the effective component size parameter. The Flory-Huggins combinatorial term can be turned off by setting the corresponding parameter for each component in mixtures.

How to Specify NRTL-SAC in Aspen Plus
1) From the Data Browser, double-click Properties.
2) From the Properties folder, click Specifications.
3) On the Specifications sheet, specify an activity coefficient property method as the Base method, for instance NRTL.
4) From the Properties folder, click Property Methods.
5) From the Object manager, click New.
6) In the Create New ID box, type a name for NRTL-SAC, say NRTLSAC.
7) In the Base property method drop-down list, select NRTL.
8) Click Models.
9) Change the Model name for GAMMA from GMRENON to NRTLSAC.

Conceptual Segment Definition
In order to use NRTL-SAC, all components have to be defined as oligomers. Four conceptual segments also have to be defined. Then, from Components->Polymers->Oligomers, enter a number for at least one of the conceptual segments for each oligomer component, as required by the definition of an oligomer. Any number entered here is required by the oligomer definition but will not be used in the simulation with NRTL-SAC.
NRTL Binary Parameters for Conceptual Segments
From Properties->Parameters->Binary Interaction->NRTL-1, the user has to enter the binary parameters between conceptual segments from this table (assuming the four conceptual segments are defined as X, Y-, Y+, and Z, respectively):

Segment 1:  X      X       Y-      Y+     X
Segment 2:  Y-     Z       Z       Z      Y+
AIJ:        1.643  6.547   -2.000  2.000  1.643
AJI:        1.834  10.949  1.787   1.787  1.834
CIJ:        0.2    0.2     0.3     0.3    0.2

NRTL-SAC Molecular Parameters for Components
From Properties->Parameters->Binary Interaction->NRTLXY-1, the user has to input, for each component, a non-zero value for at least one of the four molecular parameters. Here is an example of entering the parameters for acetone.

A Template to Use the NRTL-SAC Model
The attached bkp file is a template for using NRTL-SAC. In the file, the four conceptual segments are defined as X, Y-, Y+, and Z, respectively. 62 common solvents defined as oligomers are also included. In addition, ASPIRIN is used as a complex organic compound. This example uses NRTL-SAC to calculate the solubility of ASPIRIN in various solvents.
Keywords: NRTL-SAC
References: C.-C. Chen and Y. Song, "Solubility Modeling with a Nonrandom Two-Liquid Segment Activity Coefficient Model," Ind. Eng. Chem. Res. 43, 8354 (2004).
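Once NRTL-SAC supplies an activity coefficient for the solute, solid solubility is commonly obtained from the standard ideal-solubility (van't Hoff) relation, ln(x*gamma) = -(dHfus/R)(1/T - 1/Tm). The sketch below shows that last step only; the solute properties and the gamma value are hypothetical, not regressed NRTL-SAC output:

```python
import math

R = 8.314  # J/mol.K, universal gas constant

def solubility_mole_fraction(dh_fus, t_m, t, gamma):
    """Solve ln(x) = -(dHfus/R)(1/T - 1/Tm) - ln(gamma) for x.

    dh_fus: heat of fusion, J/mol; t_m: melting point, K;
    t: temperature, K; gamma: solute activity coefficient in the solvent
    (this is the quantity a model such as NRTL-SAC would provide).
    """
    ln_x_ideal = -(dh_fus / R) * (1.0 / t - 1.0 / t_m)
    return math.exp(ln_x_ideal) / gamma

# Hypothetical solute: dHfus = 25 kJ/mol, Tm = 410 K, evaluated at 298 K.
x_ideal = solubility_mole_fraction(25000.0, 410.0, 298.0, gamma=1.0)
x_real = solubility_mole_fraction(25000.0, 410.0, 298.0, gamma=5.0)
```

The gamma = 1 case is the ideal solubility; a gamma of 5 from the activity model cuts the predicted mole fraction by a factor of five, which is exactly the leverage a solvent-screening calculation exploits.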
Problem Statement: How can I use the ChargeBal block in electrolyte simulations?
Solution: The description in the help is a bit misleading. You should place the ChargeBal block in the flowsheet so that its feed is the tear stream of the convergence loop you want to maintain in charge balance. The following diagram should help to clarify. We can see that the ChargeBal block will be the first block in the convergence loop sequence. You can control the selection of the tear stream in various ways:
Provide initial estimates in the stream you want Aspen Plus to select as a tear stream
Select the stream in the Convergence, Tear folder
Create a convergence block and select the tear stream
Providing reasonable initial estimates is always recommended to improve the convergence of your sequential modular simulations.
Keywords: charge, electrolytes, balance
References: None
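The role of the tear stream and its initial estimate can be seen in a toy direct-substitution loop, the simplest scheme a sequential-modular flowsheet uses (Aspen Plus's default accelerator is Wegstein, and this sketch is illustrative only; it is not the ChargeBal block itself):

```python
# Direct-substitution convergence of a single tear variable: repeatedly
# evaluate one pass through the downstream units until the tear value
# stops changing.
def converge_tear(flowsheet_pass, guess, tol=1e-8, max_iter=200):
    x = guess
    for _ in range(max_iter):
        x_new = flowsheet_pass(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("tear stream did not converge")

# Toy recycle: downstream units return half the tear flow plus fresh feed,
# so the converged recycle flow is the fixed point x = 0.5*x + 10.
solution = converge_tear(lambda x: 0.5 * x + 10.0, guess=0.0)
```

A guess far from the fixed point still converges here because the loop is strongly contracting; in a real electrolyte flowsheet the mapping is much stiffer, which is why a good initial estimate on the tear stream matters.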
Problem Statement: The new Aspen SCM GUI interface is built using XML. This knowledge base article demonstrates how to build custom screens in the new Aspen SCM GUI by programming in XML. This is a follow-up to Solution #136585, 'XML Tutorial 5 - How to build or edit a sample User Management Screen'.
Solution: Example
Keywords: None
References: None
Problem Statement: What Aspen Fleet Optimizer databases need the Bind Number As Float workaround set in the ODBC drivers?
Solution: In Aspen Fleet Optimizer it is very important to check the Bind Number As Float workaround in the ODBC driver if you are using an Oracle database. If you are using a SQL database you do not need to set up this workaround. If you are using an Oracle database and do not have this checked at the Oracle driver level, you can experience extreme slowdowns in the performance of the Aspen Fleet Optimizer system. To make sure that this workaround is properly checked for Oracle systems, do the following. First, go to your ODBC driver administrator and select the driver that is being used to connect to the Aspen Fleet Optimizer database. Next, press the Configure button and look for the Workarounds tab. Finally, check the radio button next to Bind Number As Float. The driver will now use this workaround to increase performance. Please see the attachments on this Solution for step-by-step screen shots and additional information provided by Microsoft on this issue.
Keywords: None
References: None
Problem Statement: Basic XML knowledge in Aspen SCM is a prerequisite for reading this document.
Solution: Solutions # 135832, 135995, 136528, 136585, 137107 and 138030 are recommended if you are looking to gain basic XML knowledge in Aspen SCM. In previous versions of Aspen SCM, dialogs were used to build prompts. Though dialogs can still be used in V8, XML is a more elegant way to build prompts. This article explains how to build a prompt using XML.

All containers, properties and attributes used to build an XML screen can be used to build a prompt too. For example, the XML configuration for a sample prompt to display the user ID can be:

<?xml version="1.0" encoding="utf-8"?>
<CONFIG>
  <SAMPLE Header="USERID" ViewModelID=":SAMPLE_VM">
    <View Type="LayoutGrid" Rows="auto,auto" Columns="70,100">
      <Views>
        <STRING1 Type="PropertyView" Row="0" Column="0" Editor="Label" Margin="5,5,0,20" DataSource="STRING1S"/>
        <STRING2 Type="PropertyView" Row="0" Column="1" Editor="Label" Margin="5,5,0,20" DataSource="STRING2S"/>
        <BUTTON2 Type="DialogButton" Row="3" Column="1" Margin="5,5,0,20" Caption="OK"/>
      </Views>
    </View>
  </SAMPLE>
  <SAMPLE_VM>
    <States>
      <STATE1 Setup=">_RTEST">
        <Properties>
          <STRING1S Type="String" Value="UserID:"/>
          <STRING2S Type="String" ValueSource="_USERID(1,1)"/>
        </Properties>
      </STATE1>
    </States>
  </SAMPLE_VM>
</CONFIG>

From the previous articles, you might already know that an XML screen can be launched from the command line through: SHOWSCRN CAPS_SCREEN([Name of XML Set/Table]:[Name of View Model]). This example's XML configuration was stored in the _SAMPLE set, so this prompt can be launched through: SHOWSCRN CAPS_SCREEN(_SAMPLE:SAMPLE). Similar to this command, the command for launching a prompt is: SHOWSCRN CAPS_PROMPT([Name of XML Set/Table]:[Name of View Model]). This screen is modal in nature (it does not allow you to interact outside the prompt), and if there are rules that execute after launching this prompt, they are blocked until the prompt is closed.
The command SHOWSCRN CAPS_DIALOG([Name of XML Set/Table]:[Name of View Model]) does not block the execution of subsequent rules, but it is also modal in nature. The command SHOWSCRN CAPS_WINDOW([Name of XML Set/Table]:[Name of View Model]) is non-modal in nature (it allows you to interact outside the prompt) and also does not block the execution of subsequent rules.

There are some special attributes that can be used only in an XML prompt's header:
MinWidth - Minimum width of the prompt window
MinHeight - Minimum height of the prompt window
CanResize - If true, allows resizing of the prompt window
IsToolWindow - If true, allows minimal headers in the prompt window

There is also a special attribute for an XML prompt's button:
IsCancel - If true, closes the prompt window

All these special attributes have been added to the example:

<?xml version="1.0" encoding="utf-8"?>
<CONFIG>
  <SAMPLE Header="USERID" CanResize="False" MinHeight="500" MinWidth="500" IsToolWindow="True" ViewModelID=":SAMPLE_VM">
    <View Type="LayoutGrid" Rows="auto,auto" Columns="70,100">
      <Views>
        <STRING1 Type="PropertyView" Row="0" Column="0" Editor="Label" Margin="5,5,0,20" DataSource="STRING1S"/>
        <STRING2 Type="PropertyView" Row="0" Column="1" Editor="Label" Margin="5,5,0,20" DataSource="STRING2S"/>
        <BUTTON2 Type="DialogButton" Row="3" Column="1" Margin="5,5,0,20" IsCancel="TRUE" Caption="OK"/>
      </Views>
    </View>
  </SAMPLE>
  <SAMPLE_VM>
    <States>
      <STATE1 Setup=">_RTEST">
        <Properties>
          <STRING1S Type="String" Value="UserID:"/>
          <STRING2S Type="String" ValueSource="_USERID(1,1)"/>
        </Properties>
      </STATE1>
    </States>
  </SAMPLE_VM>
</CONFIG>

Simple message boxes can be displayed through the MSGBOX command. This command can be used to display simple one-line messages and prompts that require a single user response. A set needs to be filled in for this to work. That set should contain TITLE, MESSAGE, BUTTONS, ICON and RESULT in the Code column, and the Description column should contain the respective values.
For example, a set called DMSGBOX can contain the following values:

TITLE     Information
MESSAGE   This is an informational message
BUTTONS   OK
ICON      INFORMATION
RESULT

This message box can be called through the command: MSGBOX DMSGBOX

Detailed configuration information for all these controls can be found at:
http://support.aspentech.com/CustomerSupport/Documentation/V8.0/Planning%20and%20Scheduling%20Chemicals/AspenSCM/AspenSCM_UI_Config.pdf

Keywords: Prompt, XML, SCM
References: None
Problem Statement: The new Aspen SCM GUI interface is built using XML. This knowledge base article demonstrates how to build custom screens in the new Aspen SCM GUI by programming in XML. It is a follow-up to the previous
Solution: #135995, 'XML Tutorial 2 - How to build or edit a sample User Management Screen'. The case file containing the previous Solution can be found in the attachment. Example Keywords: None References: None
Problem Statement: This Knowledge Base article (KB) is the second in a series of articles under the topic 'Linear Programming using Aspen Supply Chain Management'. This series is intended for users who do not have any background in LP or in Aspen Supply Chain Management (SCM) programming; the prerequisite for reading this KB is 'Linear Programming using Aspen Supply Chain Management: THE BASICS', stored in KB
Solution: 135232. The screenshots in this document were created using Aspen SCM version 7.3.1. At the end of this tutorial, users will be able to formulate and solve simple Linear Programming problems in Aspen SCM. Example Problem: The problem statement is the same as in the previous KB in the series, except that there is an additional constraint to the original problem. The original problem can be found in KB Solution 135232. The new constraint is: the sales team has identified a new market trend, and based on it management has decided that at least 75% of the TV sets must be Plasmas. Solution The case file related to the 1st KB in this series can be found as an attachment: 'lpcourse2.cas'. This new constraint requires changes to the previous case, which are discussed in the following sections: I. Algebraic Formulation: The objective function and the previous constraints remain the same. Since a minimum production quantity is required for Plasmas (P), an appropriate constraint is added to the existing set of constraints. The LCDs (L) do not have any minimum production constraint.
P / (L + P) >= 0.75
That can be rewritten as:
P >= 0.75L + 0.75P
(1 - 0.75)P >= 0.75L
0.25P >= 0.75L
P >= 3L
Therefore the production constraint is: 3L - P <= 0. So the algebraic formulation for the original problem with this additional constraint is:
MAX. 100*L + 150*P
SUBJECT TO:
0.55*L + 0.50*P <= 1600
0.10*L + 0.20*P <= 400
3L - P <= 0
II. Tabloid Formulation: This section contains the changes that need to be made for the case to include the new constraint. The final screenshots are copied for each step, so you can verify that everything is right. Open 'lpcourse2.cas' and go to File | Save As; browse to the location where you want to save the new file and give it a suitable name. a. COL Set: The COL Set remains the same. b. COLS Table: The COLS Table and the TV Set remain the same. c.
ROW Set: After CAPBALM and CAPBALP, add LCDCONST, which translates the new constraint into the Tabloid LP formulation. The logical description will be 'MIN PRODUCTION LCD'. d. POL Set: The new constraint's sense and right-hand side are 'less than or equal to zero', so a new item should be included in the row section of the POL Set. The code used here is LEZ, to signify Less than or Equal to Zero: 3L - P <= 0. e. POLI Table: i. Column Section: No changes are required here. ii. Row Section: The Row section also remains the same. f. ROWS Table: With the new entry LCDCONST added to the ROW set and LEZ added to the POL set, they should be matched in the ROWS table. In the MIN PRODUCTION LCD row, add C in the FLD1 column and LEZ in the TABL column. g. COEF Table: The coefficients of the new constraint are 3 for LCDs and -1 for Plasmas. These values are entered in a new table PERCENT, with the Row set as TV and the Column Set as 1. After this table is created, PERCENT is added as the COEF table's final entry: 3L - 1P <= 0. The rest of the tables referenced in the COEF table remain untouched. III. Generation & Solution a. Generation: After all the changes have been made, generate the model using the GEN command. Watch the information table for any ERRORs; if you encounter any, revisit the previously configured tables (the most common errors are typographical). The formulation can be verified from the following tables: i. MATX Table ii. RHSX Table iii. SENX Table iv. POLX Table. If there are any unexpected formulations in any of these tables, revisit the previously configured tables. b. Solution Use either of the solvers to solve the model with the new constraint. Make sure that the objective function is set to MAX in the CCPLEX / CXPRESS table, then solve the model using the OPT or XPRESS command. The solutions can be found in the following tables: i.
COLX Table: In the model without this additional constraint, the solution was 2000 LCDs and 1000 Plasmas. With the addition of this constraint, the production number of Plasmas rises to meet the minimum 75% of the total. ii. OBJX Table: In the previous model, since no minimum production was required for any product, production was driven purely by maximizing profit (the objective function). The previous profit was $350000; with the addition of this constraint, it is evident why the profit decreases. iii. ROWX Table: Since the production numbers are restricted by the new constraint, the slack in A indicates that there will be some free time available in Manufacturing. Keywords: None References: None
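As a cross-check of the tabloid model outside Aspen SCM, this two-variable LP is small enough to solve by enumerating constraint-intersection vertices in pure Python (the capacity figures are those of the original problem in KB Solution 135232; this is only an illustrative sketch, not part of the SCM workflow):

```python
from itertools import combinations

# Constraints written as a*L + b*P <= c (L >= 0 and P >= 0 included as -L <= 0, -P <= 0).
cons = [
    (0.55, 0.50, 1600.0),  # first capacity constraint
    (0.10, 0.20, 400.0),   # second capacity constraint
    (3.0, -1.0, 0.0),      # 3L - P <= 0, i.e. at least 75% Plasmas
    (-1.0, 0.0, 0.0),      # L >= 0
    (0.0, -1.0, 0.0),      # P >= 0
]

def intersect(c1, c2):
    """Intersection point of two constraint boundary lines, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-7 for a, b, c in cons)

# An LP optimum lies at a vertex of the feasible region: enumerate all of them.
vertices = []
for c1, c2 in combinations(cons, 2):
    p = intersect(c1, c2)
    if p is not None and feasible(p):
        vertices.append(p)
best = max(vertices, key=lambda p: 100 * p[0] + 150 * p[1])
L, P = best
print(round(L, 2), round(P, 2), round(100 * L + 150 * P, 2))  # 571.43 1714.29 314285.71
```

The optimum sits where the second capacity constraint and the 3L - P <= 0 line meet, which is why the profit drops below the unconstrained $350000.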
Problem Statement: How do I read the feed tank inventory of a previously empty tank?
Solution: The example below uses the demo model to demonstrate how to populate an empty tank with feedstocks. First, add a new tank TK5; this tank will be empty. 1. Simulate all. 2. Open the Audit Inventory dialog and select a day that is within the model horizon. 3. Select Refresh and Populate Sim Values. Tank TK5 is the new tank, which doesn't contain any material. Feedstock tanks have two entries in the tank list: one for feedstocks and the other for the 128-composition. Select the entry for the feedstock composition as shown in the image below. 4. Press the small 'Add +' button on the right-hand side. 5. The Select Feedstock dialog will appear; select the desired feedstock. 6. In the Audit Inventory dialog, add the volume of the feedstock. 7. Press Update Baseline. The new composition will be pushed to the beginning inventory. 8. Upon roll forward to 8/2/2006, TK5 will have a volume of 50 ADNC, and the 128-composition will be calculated from the active analysis for ADNC. Keywords: feed tank inventory empty tank previously empty tank read References: None
Problem Statement: The new Aspen SCM GUI interface is built using XML. This knowledge base article demonstrates how to build custom screens in the new Aspen SCM GUI by programming in XML. It will prepare you to build a custom screen or to add new objects to screens that already exist in the new Aspen SCM GUI interface. Prior knowledge of Aspen SCM, Rules coding and basic XML coding is preferred, though not required. A simple example has been provided to help you understand this new language.
Solution: Example Keywords: None References: None
Problem Statement: Basic XML knowledge in Aspen SCM is a prerequisite for reading this document.
Solution: Solutions # 135832, 135995, 136528, 136585, 137107 and 138030 are recommended if you are looking to gain basic XML knowledge in Aspen SCM. In previous versions of Aspen SCM, dialogs were used to build prompts. Though dialogs can still be used in V8, XML is a more elegant way to build prompts; this article explains how to build a prompt using XML. Solution All containers, properties and attributes used to build an XML screen can be used to build a prompt too. For example, the XML configuration for a sample prompt that displays the user ID can be (attribute values are quoted, as XML requires):
<?xml version="1.0" encoding="utf-8"?>
<CONFIG>
  <SAMPLE Header="USERID" ViewModelID=":SAMPLE_VM">
    <View Type="LayoutGrid" Rows="auto, auto" Columns="70,100">
      <Views>
        <STRING1 Type="PropertyView" Row="0" Column="0" Editor="Label" Margin="5,5,0,20" DataSource="STRING1S"/>
        <STRING2 Type="PropertyView" Row="0" Column="1" Editor="Label" Margin="5,5,0,20" DataSource="STRING2S"/>
        <BUTTON2 Type="DialogButton" Row="3" Column="1" Margin="5,5,0,20" Caption="OK"/>
      </Views>
    </View>
  </SAMPLE>
  <SAMPLE_VM>
    <States>
      <STATE1 Setup=">_RTEST">
        <Properties>
          <STRING1S Type="String" Value="UserID:"/>
          <STRING2S Type="String" ValueSource="_USERID(1,1)"/>
        </Properties>
      </STATE1>
    </States>
  </SAMPLE_VM>
</CONFIG>
As you may already know, an XML screen can be launched from the command line through: SHOWSCRN CAPS_SCREEN([Name of XML Set/Table]:[Name of View Model]). This example's XML configuration was stored in the _SAMPLE set, so this prompt can be launched through: SHOWSCRN CAPS_SCREEN(_SAMPLE:SAMPLE). Similarly, the command for launching a prompt is: SHOWSCRN CAPS_PROMPT([Name of XML Set/Table]:[Name of View Model]). This screen is modal (it does not allow you to interact outside the prompt), and any rules executed after launching the prompt are blocked until the prompt is closed. The command SHOWSCRN CAPS_DIALOG([Name of XML Set/Table]:[Name of View Model]) does not block the execution of subsequent rules, but it is still modal in nature.
The command SHOWSCRN CAPS_WINDOW([Name of XML Set/Table]:[Name of View Model]) is non-modal (it allows you to interact outside the prompt) and also does not block the execution of subsequent rules. Some special attributes can be used only in an XML Prompt's header:
MinWidth - minimum width of the prompt window
MinHeight - minimum height of the prompt window
CanResize - if true, allows resizing of the prompt window
IsToolWindow - if true, gives the prompt window a minimal header
There is also a special attribute for an XML Prompt's buttons:
IsCancel - if true, clicking the button closes the prompt window
All of these special attributes have been added to the example (attribute values are quoted, as XML requires):
<?xml version="1.0" encoding="utf-8"?>
<CONFIG>
  <SAMPLE Header="USERID" CanResize="False" MinHeight="500" MinWidth="500" IsToolWindow="True" ViewModelID=":SAMPLE_VM">
    <View Type="LayoutGrid" Rows="auto, auto" Columns="70,100">
      <Views>
        <STRING1 Type="PropertyView" Row="0" Column="0" Editor="Label" Margin="5,5,0,20" DataSource="STRING1S"/>
        <STRING2 Type="PropertyView" Row="0" Column="1" Editor="Label" Margin="5,5,0,20" DataSource="STRING2S"/>
        <BUTTON2 Type="DialogButton" Row="3" Column="1" Margin="5,5,0,20" IsCancel="TRUE" Caption="OK"/>
      </Views>
    </View>
  </SAMPLE>
  <SAMPLE_VM>
    <States>
      <STATE1 Setup=">_RTEST">
        <Properties>
          <STRING1S Type="String" Value="UserID:"/>
          <STRING2S Type="String" ValueSource="_USERID(1,1)"/>
        </Properties>
      </STATE1>
    </States>
  </SAMPLE_VM>
</CONFIG>
Simple message boxes can be displayed through the MSGBOX command. This command can be used to display simple one-line messages and prompts that require a single user response. A set needs to be filled in for this to work: its code column should contain TITLE, MESSAGE, BUTTONS, ICON and RESULT, and its description column should contain the respective values.
For example, a set called DMSGBOX can contain the following values:
TITLE       Information
MESSAGE     This is an informational message
BUTTONS     OK
ICON        INFORMATION
RESULT
This message box can then be displayed through the command: MSGBOX DMSGBOX. Detailed configuration for all of these controls can be found at: http://support.aspentech.com/CustomerSupport/Documentation/V8.0/Planning%20and%20Scheduling%20Chemicals/AspenSCM/AspenSCM_UI_Config.pdf Keywords: None References: None
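Since the prompt configuration is ordinary XML stored as text in a set, it can be sanity-checked for well-formedness outside Aspen SCM with any XML parser before loading. A minimal Python sketch using a pared-down version of the prompt above (note that real XML requires quoted attribute values; the parser will raise ParseError if they are missing):

```python
import xml.etree.ElementTree as ET

# A pared-down prompt configuration, with attribute values quoted as XML requires.
config = """<CONFIG>
  <SAMPLE Header="USERID" ViewModelID=":SAMPLE_VM">
    <View Type="LayoutGrid" Rows="auto, auto" Columns="70,100">
      <Views>
        <STRING1 Type="PropertyView" Row="0" Column="0" Editor="Label" DataSource="STRING1S"/>
        <BUTTON2 Type="DialogButton" Row="3" Column="1" IsCancel="TRUE" Caption="OK"/>
      </Views>
    </View>
  </SAMPLE>
</CONFIG>"""

root = ET.fromstring(config)              # raises xml.etree.ElementTree.ParseError if malformed
print(root.tag)                           # CONFIG
print(root.find("SAMPLE").get("Header"))  # USERID
```

This only checks that the text is valid XML; whether the attribute names and values make sense to Aspen SCM is still governed by the UI configuration guide linked above.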
Problem Statement: What is the best report to run in Aspen Fleet Optimizer to review dispatched loads?
Solution: The Terminal Orders report should be run daily to review which loads the system still expects to be delivered. The report shows all orders in the AFO database, both forecast and order-entry, sorted by the criteria chosen by the user: customer, retain, runout or shift (normally the user would sort by runout). A cut-off date can limit the orders shown up to a certain point in time. This report should be run after the Data Quality Manager has completed; it quickly allows the dispatcher to see whether any loads have not been flagged correctly as delivered. If the dispatcher does find an incorrectly flagged load on this report, they will need to go into Replenishment Planner and manually move the load to delivered orders. Keywords: None References: None
Problem Statement: Can ETs be enabled for Ramp variables?
Solution: ETs are NOT RECOMMENDED for use with Ramp variables. The reason is that when ETs are used with Ramp variables and the ET is active (and between the operating limits), the move plan will use the ET in preference to RAMPSP as its target. An ET on a ramp works as a simple target during the dynamic calculations, similar to other non-ramp CVs. However, unlike for non-ramp CVs, ETs are not used in the steady-state calculations for ramp variables. Furthermore, even the RAMPRT (ramp rate) parameter is ignored when ETs are active on a ramp variable. Keywords: Ramp Setpoint External Target Ramps References: None
Problem Statement: Can you provide us with template source code so we can compile our own UBML.dll ? Are there any general guidelines or generic instructions available to help us with this request?
Solution: UBML (User Blend Model Library) provides the ability to incorporate user-defined blending correlations into your model. UBML supplements ABML, which provides a common set of correlations. A sample Correlations.cpp file is provided by the Aspen Refinery Multi-Blend Optimizer (MBO) / Aspen Petroleum Scheduler (APS) installation, located in the same folder as the MBO/APS executable, usually: C:\Program Files\AspenTech\Aspen Refinery Multi-Blend Optimizer or C:\Program Files (x86)\AspenTech\Aspen Refinery Multi-Blend Optimizer. For general guidelines and instructions on building a UBML, review the topics 'Constructing the UBML.dll' and 'Working with UBML' in the MBO/APS Help files. Attached is a document with some guidelines and generic instructions. Keywords: None References: None
Problem Statement: How can I adjust the rebar-to-concrete ratio in my project?
Solution: To adjust the rebar-to-concrete ratio in your project, modify the Rebar Quantity field in the external civil file. This is done by first creating a new copy of the file in your library, then making your modifications, and finally selecting the modified external civil file for use in your project. Here are the steps: 1.) Go to the Library tab, make a duplicate of the civil file you want to modify, and give it a new name. 2.) Next, choose to MODIFY the new copy. 3.) The rebar quantity is entered as either POUNDS or KG, depending upon whether you are using IP or MET units, and is based upon CY or M3 respectively. Since there are multiple foundation types in the Economic Evaluation products, the REBAR quantity needs to be adjusted for each foundation type. In the example below, we have changed the REBAR quantity to 105 POUNDS for all of our foundations. 4.) Click OK, and then Close to exit from the library. 5.) Open your project, and then go to Project Basis View | Customer External Files | Civil Material. 6.) Click your right mouse button, choose Select, and then choose your modified civil file. It will then appear in the List View. 7.) Your next evaluation will now use the adjusted rebar quantities. Keywords: None References: None
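The arithmetic behind the rebar factor is straightforward: rebar mass = factor x concrete volume, with the factor in lb per CY (IP basis) or kg per M3 (MET basis). A small sketch of that unit handling (the 40 CY foundation volume is hypothetical; 105 lb/CY is the factor used in the example above):

```python
KG_PER_LB = 0.45359237     # exact pound-to-kilogram factor
M3_PER_CY = 0.764554858    # one cubic yard in cubic metres

def rebar_mass_lb(concrete_cy, lb_per_cy=105.0):
    """Rebar mass in pounds for an IP-basis project (factor in lb per cubic yard)."""
    return concrete_cy * lb_per_cy

def lb_per_cy_to_kg_per_m3(lb_per_cy):
    """Convert an IP rebar factor (lb/CY) to its metric equivalent (kg/M3)."""
    return lb_per_cy * KG_PER_LB / M3_PER_CY

print(rebar_mass_lb(40))                      # 4200.0 lb for a hypothetical 40 CY foundation
print(round(lb_per_cy_to_kg_per_m3(105), 1))  # 62.3 kg/M3, the MET equivalent of 105 lb/CY
```

This also shows why the same project gives different-looking rebar numbers on IP versus MET bases: 105 lb/CY and roughly 62.3 kg/M3 describe the same ratio.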
Problem Statement: This document contains examples for the new features implemented in the COMPR model in version 2004. The new features in version 2004 include: 1. Perform turbine/expander calculations using performance curves. 2. Ability to correct the mass flow rate when away from design temperature and pressure. 3. Performance curve data specified in terms of quasi-dimensionless groups.
Solution: Three example files are attached to demonstrate these features: 1. turb_all.bkp includes several turbine blocks using different types of performance curves. In the block Setup sheet, choose 'Use performance curves to determine discharge conditions' and enter the performance curve information in the Performance Curves forms. 2. corr_tp_all-AP2004.bkp uses the correction to flow for performance curves. Set this option in the block's Performance Curves form, Design sheet. For this option, please note that only 'Pressure and Temperature based on inlet' is available. 3. pol-quasi.bkp uses quasi-dimensionless curves, set through the same option in the block's Performance Curves form, Design sheet. Notes: In version 2004.1 (and later), the definition of corrected flow changed from a shift on performance (used in version 2004) to one similar to quasi-dimensionless curves. Keywords: compressor References: None
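For orientation, a common textbook definition of corrected mass flow is W * sqrt(T/Tref) / (P/Pref); Aspen Plus's exact definition is documented in its help and, as noted above, changed in version 2004.1. A small sketch of the textbook form only (all numeric values hypothetical):

```python
import math

def corrected_flow(w, t_in, p_in, t_ref, p_ref):
    """Textbook corrected mass flow: W * sqrt(T/Tref) / (P/Pref).

    Temperatures must be absolute (e.g. K); pressures in any consistent unit.
    This is the generic turbomachinery form, not necessarily Aspen Plus's exact definition.
    """
    return w * math.sqrt(t_in / t_ref) / (p_in / p_ref)

# At design conditions the correction is the identity:
print(corrected_flow(100.0, 300.0, 10.0, 300.0, 10.0))  # 100.0
# A hotter, lower-pressure inlet reads as a higher corrected flow on the curve:
print(round(corrected_flow(100.0, 330.0, 9.0, 300.0, 10.0), 2))  # 116.53
```

The point of the correction is that a curve measured at reference conditions can still be looked up when the machine runs away from design temperature and pressure.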
Problem Statement: How do I regress mixture viscosity data?
Solution: The Aspen Physical Property System has a number of built-in viscosity models. The most common are the Andrade/DIPPR/PPDS/IK-CAPE pure-component liquid viscosity models and the Andrade liquid mixture viscosity model. The model used by a particular property method can be found on the Properties | Property Methods | Models sheet. For mixture data, one of the parameters of the pure-component viscosity model can be regressed, normally MULDIP (DIPPR model) or MULAND (Andrade model). Enter the data on the Properties | Data forms and the parameters to regress on the Properties | Regression forms. See the attached file for an example of regressing the MULDIP parameter from sucrose and water viscosity data (Perry's Handbook, 6th ed., p. 3-254). Keywords: mul References: None
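The Andrade form ln(mu) = A + B/T + C*ln(T) is linear in its coefficients, so as a rough outside-of-Aspen illustration, MULAND-style coefficients can be recovered by ordinary least squares. A pure-Python sketch on synthetic data (the temperatures and coefficient values are made up; Aspen's regression engine is far more rigorous, e.g. maximum-likelihood with measurement error models):

```python
import math

def fit_andrade(temps_K, viscosities):
    """Least-squares fit of ln(mu) = A + B/T + C*ln(T); returns [A, B, C]."""
    rows = [(1.0, 1.0 / T, math.log(T)) for T in temps_K]
    y = [math.log(mu) for mu in viscosities]
    n = 3
    # Normal equations (X^T X) beta = X^T y as an augmented 3x4 matrix.
    M = [[sum(r[i] * r[j] for r in rows) for j in range(n)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(n)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Synthetic data generated from known coefficients, then recovered by the fit:
A, B, C = -10.0, 1500.0, 0.5
T_data = [250.0, 300.0, 350.0, 400.0, 450.0]
mu_data = [math.exp(A + B / T + C * math.log(T)) for T in T_data]
print([round(v, 3) for v in fit_andrade(T_data, mu_data)])  # recovers approximately [-10.0, 1500.0, 0.5]
```

With real mixture data, the composition dependence enters through the mixing rule, which is why Aspen regresses the pure-component parameters against mixture measurements rather than fitting each component in isolation.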
Problem Statement: While using XPRESS, there might be some scenarios when you end up with the same row or column names. Previous versions of XPRESS used in earlier versions of Aspen SCM did not cause any errors or warnings. But, if you do the same in V8, there would be an error message in the log file: 1030 Error: Duplicate row names are not allowed. The following
Solution: details the changes that were made in XPRESS to handle duplicate row entries. Solution Older versions of Xpress did not check for duplicates when names were loaded into the Optimizer, which caused issues when looking up a column or row by name. When FICO (the XPRESS vendor) updated the software, they therefore required that names be distinct. It is OK for a row and a column to share a name, but two rows or two columns must have distinct names. Even if the names fail to load, you will still be able to solve the problem: if duplicates are detected, all of the names provided in the call to XPRSaddnames are discarded, and the Xpress Optimizer instead uses generic names such as C000001, C000002, etc. Keywords: None References: None
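If your model generation can produce clashing row or column names, one workaround is to make them distinct before they are handed to the solver. This is not Aspen or FICO code, just a generic deduplication sketch (the underscore-suffix scheme is an assumption):

```python
from collections import Counter

def make_unique(names):
    """Append _2, _3, ... to repeated names so that every name is distinct."""
    seen = Counter()
    out = []
    for name in names:
        seen[name] += 1
        out.append(name if seen[name] == 1 else f"{name}_{seen[name]}")
    return out

rows = ["DEMBAL", "SUPBAL", "DEMBAL", "DEMBAL"]
print(make_unique(rows))  # ['DEMBAL', 'SUPBAL', 'DEMBAL_2', 'DEMBAL_3']
```

One caveat: a generated suffix could in principle collide with a name that already ends in "_2"; for production use you would loop until the suffixed name is genuinely unused.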
Problem Statement: This Knowledge Base article (KB) is the third in a series of articles under the topic 'Linear Programming using Aspen Supply Chain Management'. This series is intended for users who do not have any background in LP or in Aspen Supply Chain Management (SCM) programming; the prerequisites for reading this KB are 'Linear Programming using Aspen Supply Chain Management: THE BASICS' and 'THE ADDITIONAL CONSTRAINT', stored in
Solution: # 135232 and # 135398. The screenshots in this document were created using Aspen SCM version 7.3.1. At the end of this tutorial, users will be able to formulate and solve simple Linear Programming problems in Aspen SCM. Example Problem: A logistics company is trying to work out the dispatch schedule for LCD television sets. There is demand for LCDs from 3 retailers located at different sites; the demand information is provided in the first table. A limited number of LCDs is available for dispatch at 3 different assembly plants, but more than the total requirement; the supply information is provided in the second table. The cost of transporting from each plant to each retailer is in the third table. Find out which plant should deliver to which retailer so that the total cost of transportation is minimized.
Demand:
Retailers:  A   B   C
Demand:    30  45  20
Supply:
Plants:     1   2   3
Supply:    60  50  10
Transportation Cost (rows = plants, columns = retailers):
        A      B      C
1     $32    $50   $200
2     $40   $104    $80
3    $120   $104    $60
Solution Methodology: I. Algebraic Formulation: a. Find the decision variables: The problem is to find the optimal allocation of quantities of LCD television sets from the 3 assembly plants to the 3 retailer locations such that the total cost of transportation is minimized. The total cost depends on which retailer receives LCDs from which plant, so these 9 quantities are the decision variables, declared as follows:
L[A][1] -> AMOUNT OF LCDS SENT FROM PLANT 1 TO RETAILER A
L[A][2] -> AMOUNT OF LCDS SENT FROM PLANT 2 TO RETAILER A
. .
L[C][3] -> AMOUNT OF LCDS SENT FROM PLANT 3 TO RETAILER C
b. Formulate the objective function: The objective is to minimize the cost of transportation. Multiplying the cost of transporting from a plant to a retailer by the amount of LCDs transported between those locations, and summing over all pairs, gives the total cost of transportation.
Hence, the objective function in this problem is to: MINIMIZE 32*L[A][1] + 40*L[A][2] + 120*L[A][3] + 50*L[B][1] + 104*L[B][2] + 104*L[B][3] + 200*L[C][1] + 80*L[C][2] + 60*L[C][3] c. Identify the constraints: Though the objective is to minimize the cost, there are only limited resources available. For example, it is quite evident that Plant 3 is close to Retailer C but the demand at C is higher than supply at 3. Such constraints are represented through the following equations. i. For Retailer A: The demand should be satisfied through either Plant 1 or 2 or 3 or a combination of these 3. Hence the constraint is: L[A][1] + L[A][2] + L[A][3] = L[A] Under the same concept, the other constraints are: L[B][1] + L[B][2] + L[B][3] = L[B] L[C][1] + L[C][2] + L[C][3] = L[C] ii. For Plant 1: The supply available in Plant 1 can go to either Retailer A or B or C. Since there is a surplus of LCDs available, the total LCDs transported should be less than or equal to the total available at Plant 1. Hence the constraint is: L[A][1] + L[B][1] + L[C][1] <= L[1] Under the same concept, the other constraints are: L[A][2] + L[B][2] + L[C][2] <= L[2] L[A][3] + L[B][3] + L[C][3] <= L[3] d. Other Common Constraints: The amount of LCDs transported from the Assembly plants to the Retailer cannot be negative. Normally, this rule would result in additional constraints: L[A][1] >= 0 L[A][2] >= 0 . . . . L[C][3] >= 0 In this formulation, since the amount transported to every retailer is equal to the corresponding demand, the program will never turn to negative transportation numbers. Hence these constraints are not necessary in this formulation. 
Hence, the algebraic formulation for this problem is:
MINIMIZE 32*L[A][1] + 40*L[A][2] + 120*L[A][3] + 50*L[B][1] + 104*L[B][2] + 104*L[B][3] + 200*L[C][1] + 80*L[C][2] + 60*L[C][3]
SUBJECT TO:
L[A][1] + L[A][2] + L[A][3] = L[A]
L[B][1] + L[B][2] + L[B][3] = L[B]
L[C][1] + L[C][2] + L[C][3] = L[C]
L[A][1] + L[B][1] + L[C][1] <= L[1]
L[A][2] + L[B][2] + L[C][2] <= L[2]
L[A][3] + L[B][3] + L[C][3] <= L[3]
e. Feasibility of the Model: Please note that the demand and supply constraints in this model can only both be satisfied when total demand is less than or equal to total supply. If demand is greater than supply, the model becomes infeasible. II. Formulate the problem using tables: In this section, the algebraic program developed above is converted directly into the corresponding tabloid program. a. Open Aspen SCM: Save the file 'lpcourse.cas' available in the attachment of this Solution. Start Aspen SCM, go to File | Open and point to the location where you saved 'lpcourse.cas'. b. COL Set: The transportation quantity variables are categorized under TRANS, which is entered in the Code section of the COL set. The Description section simply provides additional information about the set entries. c. COLS Table: TRANS should have RETAIL and PLANT as domains, since separate decision variables are required for each retailer and each plant. Algebraic formulation's objective function: MIN. 32*L[A][1] + 40*L[A][2] + 120*L[A][3] + 50*L[B][1] + 104*L[B][2] + 104*L[B][3] + 200*L[C][1] + 80*L[C][2] + 60*L[C][3] -> TRANS Hence two new sets called RETAIL and PLANT, containing the names of the retailers and assembly plants, should be created and listed in the FLD columns. FLD1 is reserved for declaring the decision variable (declared as L here); the decision variable L should also be entered in the TABL column. d. ROW Set: From the algebraic formulation, it is known that two groups of constraints are required.
DEMBAL and SUPBAL refer to the demand and supply balance constraints, respectively. Algebraic formulation's constraints:
L[A][1] + L[A][2] + L[A][3] = L[A] -> DEMBAL
L[B][1] + L[B][2] + L[B][3] = L[B] -> DEMBAL
L[C][1] + L[C][2] + L[C][3] = L[C] -> DEMBAL
L[A][1] + L[B][1] + L[C][1] <= L[1] -> SUPBAL
L[A][2] + L[B][2] + L[C][2] <= L[2] -> SUPBAL
L[A][3] + L[B][3] + L[C][3] <= L[3] -> SUPBAL
e. POL Set: The POL set has two sections: i. Column Section: here the decision variable L is declared. ii. Row Section: here the right-hand sides (RHS) of the constraints are declared, DB for the demand balances and SB for the supply balances above. f. POLI Table: The POLI table has two sections: i. Column Section: the CST column should specify the coefficients that multiply the decision variables in the objective function. A new table called COS is created with PLANT as the row set and RETAIL as the column set, and the transportation cost matrix is populated in this table. Algebraic formulation's objective function: MIN. 32*L[A][1] + 40*L[A][2] + 120*L[A][3] + 50*L[B][1] + 104*L[B][2] + 104*L[B][3] + 200*L[C][1] + 80*L[C][2] + 60*L[C][3] -> TRANS ii. Row Section: the SENSE for each of the constraint groups, EQ and LE, should be entered here. The RHS column should contain sets holding the right-hand sides of all three constraints in the respective constraint groups (DB and SB as above). g.
ROWS Table: The FLD1 column of the ROWS table should specify the name of the constraint, here A (for DEMBAL) and B (for SUPBAL). The domains to be enumerated are specified in the remaining FLD columns. For the DEMBAL group, the constraint should be expanded over the retail stores, with the plants summed together for each particular retail store; hence the RETAIL set is entered in the FLD2 column and, similarly, the PLANT set in the FLD3 column. In the TABL column, the corresponding RHS values specified in the POL table are entered. Algebraic formulation's constraints:
L[A][1] + L[A][2] + L[A][3] = L[A] -> A
L[B][1] + L[B][2] + L[B][3] = L[B] -> A
L[C][1] + L[C][2] + L[C][3] = L[C] -> A
L[A][1] + L[B][1] + L[C][1] <= L[1] -> B
L[A][2] + L[B][2] + L[C][2] <= L[2] -> B
L[A][3] + L[B][3] + L[C][3] <= L[3] -> B
h. COEF Table: The COEF table contains all the coefficients for the constraints. All of these constraints have coefficient 1, so 1 should be entered for both rows in the TRANS column. III. Generation & Solution After the model is formulated, the next step is to generate the model and find the solution. a. Generation: The generation step enumerates all the decision variables across the corresponding domains. It is executed by typing GEN in the command line. As a result of GEN, an information dialog box opens and a variety of tables are generated. This dialog box is the place to look for errors, if any; detailed error messages can be found by typing ERROR in the command line. The tables generated with the GEN command can be checked for consistency in the formulation: i. MATX Table: This table helps to confirm that the coefficients of the constraints in the table formulation match the algebraic formulation.
Algebraic formulation's constraints (for reference against the MATX, RHSX and SENX tables):
L[A][1] + L[A][2] + L[A][3] = L[A] -> A
L[B][1] + L[B][2] + L[B][3] = L[B] -> A
L[C][1] + L[C][2] + L[C][3] = L[C] -> A
L[A][1] + L[B][1] + L[C][1] <= L[1] -> B
L[A][2] + L[B][2] + L[C][2] <= L[2] -> B
L[A][3] + L[B][3] + L[C][3] <= L[3] -> B
ii. RHSX Table: This table can be used to verify the right-hand side of each constraint. iii. SENX Table: This table defines the sense of both constraint groups. iv. POLX Table: This table can be used to verify the coefficients of the objective function of the tabloid formulation (CST column) against the algebraic formulation. Since no MIN and MAX values were specified in the POLI table for the decision variable, zeros are displayed in the POLX table. Algebraic formulation's objective function: MIN. 32*L[A][1] + 40*L[A][2] + 120*L[A][3] + 50*L[B][1] + 104*L[B][2] + 104*L[B][3] + 200*L[C][1] + 80*L[C][2] + 60*L[C][3] b. Solution To solve the model, you can use either of the two solvers available within SCM, CPLEX and XPRESS, called through the OPT and XPRESS commands respectively. To specify maximization or minimization of the objective function, open the CCPLEX or CXPRESS control table and change the MNMX value accordingly; here it is MIN, as discussed in the algebraic formulation. Once solving is complete, a variety of tables are generated and can be checked for solutions: i.
COLX Table: The X column represents the optimal values of the nine decision variables. The XCST column specifies the cost that each decision variable contributes to the objective function. ii. OBJX Table: The OBJECTIVEFUNCTION column provides the value of the objective function, i.e. the total minimum cost of transportation. iii. ROWX Table: The ROW SLACK column gives the difference between the left-hand side and the right-hand side of every constraint. BP2 has a row slack indicating that 25 LCDs will still be available in Plant 2 after the transaction ends. iv. Solution Discussion: Please take a moment to see how this solution was formed by looking at the equations and the COLX table. Comparing the least cost with the second-lowest cost in each column, the third column has the maximum difference, so the algorithm aims to make full use of Plant 3 for Retailer C. But Plant 3 cannot fully satisfy the requirements of Retailer C, so the algorithm looks for the next-lowest cost and finds Plant 2. (If the penalty cost of not choosing Plant 2 for any other retailer were higher than the cost of assigning Plant 1 to Retailer C, the choice would have been Plant 1, despite Plant 1's higher transportation cost to Retailer C.) The next-highest difference between the least and second-lowest costs is for Retailer B; hence the assignment of Plant 1 to Retailer B. Finally, for Retailer A, since Plant 1 cannot satisfy the demand completely, Plant 2 also contributes and fulfills the requirement of Retailer A. Keywords: None References: None
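Aspen SCM's CPLEX/XPRESS solvers handle all of this internally; as an outside cross-check, the same transportation LP can be solved with a pure-Python successive-shortest-path min-cost-flow sketch (the graph construction and names are illustrative, not SCM code), which reproduces both the minimum total cost and the Plant 2 slack discussed above:

```python
def min_cost_transport(supply, demand, cost):
    """Solve the transportation LP as a min-cost flow (successive shortest paths)."""
    plants, retailers = list(supply), list(demand)
    n = 2 + len(plants) + len(retailers)              # node 0 = source, node n-1 = sink
    p_idx = {p: 1 + i for i, p in enumerate(plants)}
    r_idx = {r: 1 + len(plants) + i for i, r in enumerate(retailers)}
    INF = float("inf")
    graph = [[] for _ in range(n)]                    # edges: [to, residual cap, cost, reverse index]

    def add(u, v, cap, c):
        graph[u].append([v, cap, c, len(graph[v])])
        graph[v].append([u, 0, -c, len(graph[u]) - 1])

    for p in plants:
        add(0, p_idx[p], supply[p], 0)                # source -> plant, capped by supply
    for r in retailers:
        add(r_idx[r], n - 1, demand[r], 0)            # retailer -> sink, capped by demand
    for p in plants:
        for r in retailers:
            add(p_idx[p], r_idx[r], INF, cost[p][r])  # shipping arcs carry the unit cost

    total = 0
    while True:
        # Bellman-Ford shortest path on the residual graph (handles negative reverse costs).
        dist, prev = [INF] * n, [None] * n
        dist[0] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, c, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + c < dist[v]:
                        dist[v] = dist[u] + c
                        prev[v] = (u, i)
        if dist[n - 1] == INF:
            break                                     # all demand satisfied
        f, v = INF, n - 1                             # bottleneck capacity along the path
        while v != 0:
            u, i = prev[v]
            f = min(f, graph[u][i][1])
            v = u
        v = n - 1                                     # augment flow along the path
        while v != 0:
            u, i = prev[v]
            graph[u][i][1] -= f
            graph[v][graph[u][i][3]][1] += f
            v = u
        total += f * dist[n - 1]
    return total

supply = {"1": 60, "2": 50, "3": 10}
demand = {"A": 30, "B": 45, "C": 20}
cost = {"1": {"A": 32, "B": 50, "C": 200},
        "2": {"A": 40, "B": 104, "C": 80},
        "3": {"A": 120, "B": 104, "C": 60}}
print(min_cost_transport(supply, demand, cost))  # 4730
```

The residual (reverse) edges are what let the algorithm undo an early greedy assignment, which is exactly the "penalty cost" reasoning in the solution discussion above.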
Problem Statement: Example in Visual Basic: how to handle case studies with inconsistency errors.
Solution: In Aspen HYSYS, if an inconsistency error appears while performing a case study, the whole case study analysis is interrupted. This solution provides an example using VBA Automation that allows you to perform a case study and handle an inconsistency issue. To see the code, press Alt+F11 in Excel. The embedded program will do the following:
· Open the file through a dialog window
· Map the input table to check the number of data rows (you can add more points)
· Read the variable values and input them into HYSYS
· Execute the solver and read the values back to Excel, resolving the inconsistency error if needed for the next case
· Give feedback on the execution and file loading
Keywords: Case study, inconsistency, VBA.
References: None
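The essence of the macro is to trap a per-case failure so the remaining cases still run. The sketch below illustrates only that loop structure in self-contained Python: `run_case` is a stub standing in for the HYSYS solver call (which the real macro makes through COM Automation), and `InconsistencyError` is a hypothetical stand-in for the solver failure.

```python
# Illustrative sketch of the case-study loop: trap a per-case solver
# failure so the remaining cases still run. `run_case` is a stub that
# stands in for the HYSYS COM call made by the VBA macro.

class InconsistencyError(Exception):
    """Hypothetical stand-in for a HYSYS solver inconsistency."""

def run_case(temperature):
    if temperature < 0:          # pretend sub-zero cases are inconsistent
        raise InconsistencyError(f"case at {temperature} C failed")
    return temperature * 1.5     # dummy "result"

def run_case_study(case_values):
    results = []
    for value in case_values:
        try:
            results.append(run_case(value))
        except InconsistencyError:
            results.append(None)  # record the failure, keep going
    return results

print(run_case_study([10, -5, 20]))  # [15.0, None, 30.0]
```

The VBA version does the same thing with `On Error` handling around the solver call, writing a blank row back to the worksheet for the failed case.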
Problem Statement: How do you model a flue gas desulfurization process?
Solution: The Aspen Plus backup (.bkp) file used to model this process is attached:
Filename Description
FGD.bkp Flue Gas Desulfurization Flowsheet
Process Description
Sulfur oxides (SOx) are removed from boiler flue gas using a wet Flue Gas Desulfurization (FGD) process. A slurry containing calcium carbonate is used to remove SOx from the flue gas. The performance of the FGD process can be predicted using a rigorous Aspen Plus model, taking into account the solution chemistry and solids precipitation. The U.S. Clean Air Act revisions have put increased pressure on the chemical process industries to reduce SOx emissions. There are many sources within a typical chemical plant that can emit these compounds, including boilers, process heaters and incinerators. For large-scale SOx control, the flue gas is treated by either wet or dry scrubbing. For wet scrubbing, three of the most common setups employ lime or limestone, sodium alkali, or dual-alkali. In all three cases, a major waste stream is produced and requires appropriate disposal. The figure below shows the flowsheet for a simplified FGD process using limestone. The flue gas is first mixed with an air stream, where all the sulfur dioxide in the flue gas is assumed to be oxidized to sulfur trioxide. A scrubber is then used to facilitate contact between the oxidized flue gas and a slurry containing calcium carbonate. The SOx chemically reacts with the limestone to form calcium sulfate. The gypsum crystals are only slightly soluble in water and precipitate from the solution. The solids in the slurry effluent are removed by either ponding or clarification. The clarified liquid is recycled to the scrubber, while the solids are disposed of as a waste stream. For simplicity, the clarified liquid recycle is treated as a feed stream to the scrubber. Fresh limestone is also added to the scrubber.
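As a back-of-the-envelope check on the chemistry described above (SO2 oxidized to SO3, then captured by limestone as gypsum, CaSO4·2H2O), the stoichiometric limestone demand per unit of SO2 removed can be estimated by hand. This is a rough illustrative calculation, not part of the Aspen Plus model:

```python
# Stoichiometry: CaCO3 + SO3 + 2 H2O -> CaSO4.2H2O + CO2
# (1 mol CaCO3 per mol SO2, since each SO2 is first oxidized to SO3).
MW_SO2 = 64.07      # g/mol
MW_CACO3 = 100.09   # g/mol
MW_GYPSUM = 172.17  # g/mol, CaSO4.2H2O

def limestone_demand(so2_kg_per_h):
    """Stoichiometric CaCO3 feed (kg/h) to capture a given SO2 rate."""
    mol_per_h = so2_kg_per_h / MW_SO2   # kmol/h of SO2
    return mol_per_h * MW_CACO3         # kmol/h * kg/kmol = kg/h

def gypsum_produced(so2_kg_per_h):
    """Stoichiometric gypsum byproduct (kg/h)."""
    return so2_kg_per_h / MW_SO2 * MW_GYPSUM

# Roughly 1.56 kg of limestone per kg of SO2 removed, producing ~2.69 kg gypsum:
print(round(limestone_demand(1.0), 3))  # ~1.562
print(round(gypsum_produced(1.0), 3))   # ~2.687
```

The rigorous model will deviate from these ratios because limestone utilization is never 100% and some sulfite/bisulfite remains in solution.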
Flue Gas Desulfurization Flowsheet Physical Properties The physical property constants for the molecular species, water, carbon dioxide, nitrogen, oxygen, sulfur dioxide, sulfur trioxide, hydrogen chloride, and hydrogen fluoride are obtained from the Aspen Plus pure component data banks. The property parameters of the ionic species are obtained from the Aqueous data bank. Calcium carbonate precipitate (gypsum) is represented as an inert solid. Carbon dioxide, sulfur dioxide, sulfur trioxide, nitrogen, oxygen and hydrogen fluoride, are considered supercritical components. Henry's law is applied for their property calculations. Keywords: environmental started kit References: : R. McInnes and R. Van Royen, Desulfurizing, Chemical Engineering, Sept. 1990, pp. 124-127
Problem Statement: How do I adjust the Stress Relief calculations without affecting the other activities involved with CS Pipe Erection?
Solution: Stress Relief is reported under COA 317, which is CS Pipe Erection. You can use indexing to adjust the entire COA 317, but this will also affect other activities reported under COA 317 besides the Stress Relief numbers. A more direct way to adjust Stress Relief numbers is to use the Exceptions field in the COA Manager. To do this, first create a new COA 318 and define it as Stress Relief. This is done in the COA Manager under Definitions. According to the Icarus Reference Manual, Chapter 35-49, there is an Exception of 3 which can be used for Stress Relief. Using that information, we can now use the COA Manager and set up an allocation exception for COA 317, which will take any exceptions in that COA and map them to our new COA 318. Now any Stress Relief information is reported under COA 318. If you think the numbers generated need to be adjusted up or down, you can index COA 318 and only affect its results, without affecting the other information for COA 317. In the attached example, we have created and allocated Stress Relief to the new COA 318, and then used Manpower Indexing to reduce the original Stress Relief numbers by 50 percent.
Keywords: None
References: Icarus Reference Manual, Chapter 35-49
Problem Statement: How can I get only my ON/OFF valves in their own COA?
Solution: Valves are reported by material type, and each material has multiple valve types, including the ON/OFF valves. There are two basic steps to get the ON/OFF valves into a new COA.
1.) Create the new COA. In the attached COA file we have created a new COA 338 which will contain our ON/OFF ball and slide gate valves, along with our instrument ON/OFF valves. We have set the Definitions accordingly.
2.) Use the exceptions for ON/OFF valves found in the Aspen Icarus Reference Manual. Checking the Icarus Reference Manual, we find (35-30) that 402 is the subtype for the ball ON/OFF valve, (35-31) that 481 is the subtype for the slide gate ON/OFF valve, and (35-36) that 681 is the subtype for the instrument ON/OFF valve. We used this information to set the Allocations. The attached zipped file illustrates these steps.
Keywords: None
References: Icarus Reference Manual, sections 35-30, 35-31, 35-36
Problem Statement: Is it possible to model the penultimate effect in free-radical polymerization?
Solution: The free radical polymerization reaction model in Aspen Polymers assumes that the various reaction rates involving polymer molecules depend only on the concentration of the terminal segment; the concentration of the next-to-last (penultimate) segment is ignored. This type of "terminal model" has been shown to be capable of matching copolymer composition data with appropriate adjustments to the cross-propagation rate constants (Mayo and Lewis, 1944)i. However, further studies have shown that some systems require different cross-propagation rate constants to match conversion and number-average molecular weight data involving different initial monomer feed ratios (Fukuda, Ma, and Inagaki, 1985)ii. Researchers have demonstrated that penultimate effect models (which consider both the terminal and penultimate segments) are better able to match lab data using a consistent set of cross-propagation rate constants (Ono and Teramachi, 1995)iii. This example describes how to customize the Free-Radical kinetics model using the Gel-Effect subroutine feature to simulate the penultimate effect. This example is a modified version of the polystyrene ethyl acrylate (PSEA) batch polymerization sample case delivered with Aspen Plus. Although this example involves a copolymer with two monomers, the methodology described here can be extended to copolymers with multiple monomers. In the terminal model, the propagation rate, rp, is calculated as the sum over all live end segments and monomers:

rp = sum(i) sum(j) k(i,j) * [Si*] * [Mj]

where k(i,j) is the rate constant for the cross propagation of live end segment Si* with monomer Mj. In this example, there are four cross-propagation reactions corresponding to the two monomers, Styrene (STY) and Ethyl acrylate (EA). Note that the Aspen Plus free-radical model identifies the cross propagation rates using the live end segment (Comp 1) and the reacting monomer (Comp 2). The model does not have a designator for the penultimate segment.
For this reason, we leverage another feature, the User Gel Effect Subroutine, to modify the rate expressions as needed to extend this model to consider penultimate effects on reaction rates. The extended model requires the concentration of segment pairs in order to estimate the concentration of penultimate segments. Aspen Polymers includes an optional feature to track the flow rate of segment pairs, known as dyads. This feature is automatically activated by including the DYADFLOW and DYADFRAC component attributes. This can be accomplished from within the property environment, on the polymer characterization form. The DYADFLOW attribute stores the molar flow rate of segment pairs; DYADFRAC stores the mole fraction of these pairs relative to the total flow rate of segments. Each of these attributes is an array with an element corresponding to each possible segment pair (Si, Sj). In this example there are two types of monomer, which correspond to two types of segment (styrene segment and ethyl acrylate segment). The notation in the stream report is based on the segment number (positional location in the list of segments). In this example, STY-SEG is segment number 1 and EA-SEG is segment number 2, so DYADFLOW(1_1) represents the flow rate of styrene-styrene segment pairs, DYADFLOW(1_2) represents styrene-ethyl acrylate segment pairs, and DYADFLOW(2_2) represents ethyl acrylate-ethyl acrylate pairs. The attribute BLOCKN reports the estimated number-average sequence length of continuous runs of each type of segment (i.e., it is a measure of how 'blocky' the copolymer is; a block copolymer will have high BLOCKN measures, while in a random copolymer BLOCKN approaches unity). The dyad flow rates are calculated by the Free-Radical reaction model. Propagation reactions, termination by combination, and various other reactions influence the generation of dyads. The reaction model takes all this into account automatically when the DYADFLOW attribute is present.
Penultimate models need to consider the concentration of the terminal/penultimate segment pairs. The rate of a specific reaction can be written as:

rate(i,j,k) = k(i,j,k) * [~SiSj*] * [Mk]

where [~SiSj*] is the concentration of the live end with terminal segment "j" and penultimate segment "i" reacting with monomer "k". In propagation and termination reactions, the penultimate segment is conserved. The overall rate for the cross-propagation reaction between live terminal segment "j" and monomer "k" can be calculated by summing over all the specific reactions involving penultimate segments "i":

k(j,k) * [Sj*] * [Mk] = sum(i) k(i,j,k) * [~SiSj*] * [Mk]

The free radical polymerization model tracks the concentrations of live terminal segments, [Sj*], in attribute LSEFLOW. The model does not track live terminal dyads directly, so we must make an approximation; specifically, we assume the probability of active terminal segment "j" being attached to penultimate segment "i" can be determined from the known concentration of dyad (i,j) and the sum of the concentrations of all the dyads involving segment j:

[~SiSj*] = [Sj*] * [Dij] / sum(i') [Di'j]

In essence, this approximation is the same as saying the concentration of live terminal dyads is proportional to the overall concentration of dyads. Note that this assumption is reasonable for most types of polymerization, but it may be violated for tapered block copolymers that are generated by successive addition of different monomers in a batch reactor. Consider the styrene - ethyl acrylate system in this example. Each of the four cross-propagation reactions can be further divided into two specific reactions involving different penultimate groups (in a system with three monomers, there would be three specific reactions for each terminal reaction, etc.).
Net Cross-Propagation and Specific reactions:

(4)  Pn[Sty] + [Sty] -> Pn+1[Sty]
(4a) Pn[Sty-Sty] + [Sty] -> Pn+1[Sty]
(4b) Pn[Ea-Sty] + [Sty] -> Pn+1[Sty]

The net rate for reaction (4) can be written as the sum of the two specific reactions involving different penultimate species (here lower case "s" stands for styrene, lower case "e" stands for ethyl acrylate):

k(s,s) * [Ss*] * [Ms] = k(s,s,s) * [~SsSs*] * [Ms] + k(e,s,s) * [~SeSs*] * [Ms]

Applying the approximation for calculating terminal dyad concentrations:

k(s,s) * [Ss*] * [Ms] = [Ss*] * [Ms] * ( k(s,s,s)*[Dss] + k(e,s,s)*[Des] ) / ( [Dss] + [Des] )

Re-arranging, and defining Ress as the reactivity ratio k(e,s,s) / k(s,s,s):

rate(4) = k(s,s,s) * [Ss*] * [Ms] * [ ( [Dss] + Ress*[Des] ) / ( [Dss] + [Des] ) ]

This equation can be divided into two terms. The first term is identical to the normal rate expression calculated by the free-radical model. The second term (inside brackets) is a modification of the standard rate expression. This term can be calculated and applied to the overall rate expression through the gel-effect subroutine. The parameter Ress can be specified using the REAL parameter list of the gel effect subroutine.

Implementation of Penultimate Effect Example
An example model and gel effect subroutine have been created using the methodology described in this document. The propagation rates are each modified using the gel effect term. Gel effect correlations 1 and 2 are built in. Correlations 3 and higher are available for user customization. In this example, correlations 3-6 are applied to the four types of propagation reactions. Use the Gel Effect tab sheet to enter the relative rate parameters (see table below). Select the "sentence ID" to navigate from one gel effect correlation to another. Each gel effect correlation can include any number of real parameters. The current model uses a single parameter for each correlation. This model could be extended to include additional terms to account for differences in activation energies or other effects.
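Given the definition of the reactivity ratio Ress = kess/ksss, the gel-effect correction applied to the styrene-styrene propagation rate takes the form ([Dss] + Ress*[Des]) / ([Dss] + [Des]); this is our reading of the derivation, reconstructed from the definitions above. It is simple enough to check numerically (the dyad values below are made-up illustration numbers):

```python
def penultimate_gel_term(d_ss, d_es, r_ess):
    """Correction factor applied to the terminal-model propagation rate.

    d_ss, d_es : dyad concentrations (e.g. from DYADFLOW) for the
                 styrene-styrene and ethyl acrylate-styrene pairs
    r_ess      : reactivity ratio k(e,s,s) / k(s,s,s)
    """
    return (d_ss + r_ess * d_es) / (d_ss + d_es)

# With the reactivity ratio set to 1.0 the correction vanishes and the
# penultimate model collapses back to the terminal model:
assert penultimate_gel_term(3.0, 1.0, 1.0) == 1.0

# A ratio below 1 slows propagation when the penultimate segment differs:
print(penultimate_gel_term(3.0, 1.0, 0.5))  # 0.875
```

This is the quantity the user gel effect subroutine computes for each of the four propagation reactions.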
Reaction | GE #
Pn[Sty] + [Sty] -> Pn+1[Sty] | 3
Pn[Sty] + [Ea] -> Pn+1[Ea] | 4
Pn[Ea] + [Sty] -> Pn+1[Sty] | 5
Pn[Ea] + [Ea] -> Pn+1[Ea] | 6
(For each correlation, the REAL(1) parameter holds the corresponding reactivity ratio, and the resulting gel effect term modifies that reaction's rate.)
The text box below shows the critical section of the user Gel Effect subroutine (USRGEL.F) which calculates the terms shown in the table above. The current example model uses a pair of calculator blocks, "F-1" and "S-TERM", to set the values of all four of these parameters equal to user parameter #1, which can be manipulated easily in an Aspen Plus sensitivity study. These calculator blocks can be removed from the model or modified if you want to set the four reactivity ratios independently. Setting each reactivity ratio to 1.0 forces this penultimate model to give the same results as the terminal model. Setting a reactivity ratio less than one implies the reaction rate involving different penultimate and terminal segments is lower than the same reaction involving pure homopolymer. Setting a reactivity ratio higher than one implies the reactions are favored when the penultimate segment is different than the terminal segment.
Fragment of User Gel Effect Subroutine USRGEL.F for Penultimate Model:
C
Keywords: None
References: None
Problem Statement: How can I get Economic Evaluator to use a different tube (or shell) piping material than used in my tube (or shell) exchanger design?
Solution: By default, Economic Evaluator will use for the tube (or shell) side piping of an exchanger the same material as the exchanger tube (or shell) side design material (unless a user has specifically specified a piping material type under Pipe Item Details). You can change the default material for the tube (or shell) side piping in the exchanger input form. Go to the drop-down menu under Tube side pipe material or Shell side pipe material and choose the material you would like for the piping to the tube and/or shell side of the exchanger. Economic Evaluator will now use this material for the piping (again, unless a user has specifically specified a piping material type under Pipe Item Details).
Keywords: shell, tube, tube side, shell side, pipe, piping
References: None
Problem Statement: I am developing my Aspen Properties application using the option you suggested: Multiple Property Methods. I can see that I can retrieve the currently active (global) property method OPSET name by calling PPUTL_GOPSET. But, I cannot find a way to (1) Determine how many Property Methods were specified in the application in addition to the global Property Method. (2) Determine the secondary Property Method OPSET names. (3) Determine the values of HENRY, CHEM, ITRUE, FREEW, and ISOLU so that I can call PPUTL_PPSWOP for any of the specified Property Methods.
Solution: There is an undocumented subroutine PPUTL_LSOPST that you can call to determine the number and names of the Property Methods specified in a simulation:

CALL PPUTL_LSOPST(KOPS, NAME, IST, INUM)

VARIABLE  I/O  TYPE  DIMENSION  DESCRIPTION AND RANGE
KOPS      O    I     -          Number of option sets
NAME      O    C     8*KOPS     Option set names (character strings; each option set name is 8 characters long)
IST       I    I     -          First property method; set to 1
INUM      I    I     -          Property method number

An example of how to use PPUTL_LSOPST is in a Calculator block in the attached Aspen backup file. The following is an excerpt from the Fortran entered in that Calculator block:

C retrieve number of property methods
F Call PPUTL_LSOPST (KOPS, OP_Name, 1, 1)
F write(nhstry,30) KOPS
F 30 format(/,'number of option sets KOPS',I4,/)
C
C retrieve property method names
F do i=1,KOPS
F Call PPUTL_LSOPST (KOPS, OP_Name, 1, i)
F write(nhstry,10) i, OP_Name
F 10 format(/,'OP_Name(',I4,') = ',A8/)
F end do

where OP_Name is declared as Character*8. The attached backup file has three property methods specified: H2O-STM, IDEAL, and PR-BM (global). The above Fortran results in the following output in the history file:

NUMBER OF OPTION SETS KOPS 6
OP_NAME( 1) = SYSOP0
OP_NAME( 2) = SYSOP12
OP_NAME( 3) = STEAM-TA
OP_NAME( 4) = H2O-STM
OP_NAME( 5) = PR-BM
OP_NAME( 6) = IDEAL

Note that three additional property methods (SYSOP0, SYSOP12, and STEAM-TA) were added by Aspen Plus. Also note that the global (base) property method (in this case PR-BM) is not necessarily the first user-specified property method. To switch between property methods using PPUTL_PPSWOP, you must input the values of:

HENRY  Character*8
CHEM   Character*8
ITRUE  Integer
FREEW  Character*8
ISOLU  Integer

These five values apply to all property methods, and they are represented in the NBOPST array (elements 2-6) as bead numbers; however, use of PPUTL_PPSWOP requires you to input string variables for HENRY, CHEM, and FREEW, not NBOPST values.
An alternative way to switch between property methods in an application program is to reset the first element of the Property Method array (NBOPST), which is the OPSET name bead. The name bead can be retrieved using the undocumented utility PPUTL_FIOPST:

CALL PPUTL_FIOPST(NAME, NBOPST)

VARIABLE  I/O  TYPE  DIMENSION  DESCRIPTION AND RANGE
NAME      I    I     2          Option set ID, 8 characters (2 words) max.
NBOPST    O    I     -          Option set bead number

To change property methods within an application, you must:
(1) Equivalence the Character*8 OP_SET name variable to an Integer array dimensioned 2.
(2) Call PPUTL_FIOPST with the Integer array name for NAME.
(3) Set the first element of the NBOPST array to the returned value of NBOPST from PPUTL_FIOPST.

In the attached Calculator block example, for the global property method PR-BM, PPUTL_GOPSET retrieves:

NBOPST = 100000143 0 0 1 100000097 3

And for the H2O-STM property method, PPUTL_FIOPST retrieves:

NBOPST(1) = 100000128

Replacing the first element of NBOPST with 100000128 will switch the property method from PR-BM to H2O-STM. The attached example file uses a Calculator block to test these calls. These calls can be used in a similar manner in a user subroutine or Aspen Properties application. A Fortran compiler is required to run the example.
Credit: Thanks to Tony Zehnder at Honeywell Specialty Materials for his assistance developing this example.
Keywords: None
References: None
Problem Statement: How do I use a template file to remove the 'hours for the rental equipment operators' from a project when their wages are included in the rental equipment cost?
Solution: The template feature is new in V7.3 and allows you to specify almost anything in the Project Basis View that you would like to use routinely in projects. The attached template is an example of using a template to specify that the rental equipment operators are included in the cost of the rental equipment. This is done in Aspen Capital Cost Estimator by using the Crew Mixes form to map 100 percent of the operator crafts out of the project. In the attached template, craft codes 56, 57, 59, 60, 62, 63, and 64 have been mapped out of the project. If you create a new project using this template, you will see that the hours are 0 in the Manpower Summary report of the CCP for these operator crafts.
Keywords: None
References: None
Problem Statement: How can I compare the binary parameters retrieved from Aspen Properties to the parameters obtain from experimental data?
Solution: The present example shows how to compare two sets of binary parameters by using the Analysis tool in Aspen Properties. This example can be applied not only when we want to compare binary parameters from different sources, but also when we need to specify another set of binary parameters for another range of temperature and pressure. In the attached file, the system is based on water, methanol and benzene, with NRTL as the selected property method. As an example, the binary parameters that will be used for the study are for the pair water - methanol. To continue with the example, please follow the next steps:
1) Once the components and the property method are selected, check the binary parameters obtained from the databanks in Methods | Parameters | Binary Interaction | NRTL-1.
2) In order to create a new set of binary parameters, it is necessary to have experimental data for this purpose. In this example, the source used to obtain the data was NIST (available from the Home ribbon), but of course you can use other sources, such as literature or published research, if available. Once the data is saved in the simulation, it is regressed to obtain the new binary parameters.
3) The next step is to link the new set of binary parameters with a new property method. Otherwise, the new binary parameters obtained by regression will overwrite those retrieved from the databanks. Most of the activity coefficient methods have two property methods specified by default: NRTL, NRTL-2; UNIQUAC, UNIQUAC-2; etc. This is because the activity coefficient models strongly depend on the temperature, so this gives the chance to specify multiple sets of binary parameters. In this example, a new property method is created. To define it, go to Methods | Specifications; in the method name field, scroll down, select New, and enter the method name (do not change the Base method).
Now, you can check that in the Selected Methods form, there are two property methods available.
4) Once a new property method is created and experimental data is available, it is necessary to regress the data to obtain the coefficients of the binary parameters. To make a regression in Aspen Properties, the Regression mode must be selected on the Home ribbon (by default Aspen uses the Analysis mode). When working in the Regression mode, a new Regression folder is available in the Navigation Pane. To create a new regression case, open this folder, click on the New button and, in the Setup tab, specify the data set that will be used to regress the parameters and the method selected. Then, in the Parameters tab, it is necessary to specify the type of the parameter, name, element and component. (Remember that binary parameters are bidirectional; a pair of binary parameters must be calculated for each pair of components.) After running the regression, the results can be checked in the Regression folder in the Results form: the regressed binary parameters will be stored in the NRTL-1 method (source: R-DR-1) and those retrieved from the Aspen Properties databanks will be available in the NRTLEXP (or NRTL-2) method (source: APV86 VLE-RK).
5) Finally, a binary analysis will be performed for each property method to compare the results. In this type of analysis, it is necessary to select the type (TXY, PXY or Gibbs energy of mixing), the components involved, the valid phases (VL, VLL, etc.), the conditions (pressure, temperature) and the property method, one analysis for NRTL-1 and another for NRTLEXP (or NRTL-2). Thus, two graphs will be displayed. Both graphs can be merged into a single plot using the Merge Plots option available in the Design ribbon | Data section.
Keywords: Binary parameters, experimental data, compare
References: None
Problem Statement: How do I specify a certain percentage of pipe supports to be sent to a remote shop?
Solution: In V7.3.2, a new input field - Pipe supports in remote shop (PERCENT) - has been added to Design Basis | Piping Specs | General, with the default being 0 percent. If pipe is being sent to a remote shop, a portion of the total number of pre-fabricated pipe supports (based on the value specified in the new field) will be procured in the remote shop. Thirty percent (30%) of the man-hours for installing those supports will also be estimated in the remote shop. The labor cost for these man-hours will be added to the material cost of the remote shop pipe supports. The rest of the prefabricated pipe supports will be procured in the field. The man-hours to erect all pipe supports will be booked in the field as well. As an example, for a remote shop pipe having 8 pipe supports with 50% of the supports specified in remote shop:

Remote shop cost = (material cost of 4 pipe supports) + (unit man-hours for pipe supports) * (0.3) * (4 supports) * (shop labor rate) * (shop productivity adjustments)
Field material cost = (material cost of 4 supports)
Field labor cost = (unit man-hours for pipe supports) * (0.7) * (4 remote shop supports) + (unit man-hours for pipe supports) * (4 field-procured pipe supports)

These changes apply to installation bulk pipe associated with project components, plant bulk pipe, utility piping and utility stations, and yard pipe. The same code of account (366) will be used for pipe supports procured in the remote shop or the field. The subtype 951 will be specified for pipe supports procured in the remote shop. Users can use this subtype to create a code of account exception and book the remote-shop pipe supports to a new user-defined code of account if needed. In the attached project, there is a plant bulk pipe component 6 inches in diameter and 100 feet in length. When the default percentage (zero) of pipe supports in remote shop is used, there is $782 of material cost (8 prefab pipe supports) and $623 of labor cost (22 hours) in COA 366.
If 50% of pipe supports in remote shop is specified, 4 prefab pipe supports are procured in the remote shop and the other 4 supports are procured in the field. As a result, under COA 366, there is $391 of material cost for the 4 prefab pipe supports in the field. The field labor hours and cost are reduced to 18 hours (22/8 * 4 + 22/8 * 4 * 0.7) and $529 (623/8 * 4 + 623/8 * 4 * 0.7). There is a new remote shop pipe supports material cost of $445, including the material cost of 4 prefab pipe supports and the 30% cost for installing those 4 supports at the shop labor rate and productivity adjustments.
Keywords: Pipe support, remote shop
References: None
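The arithmetic in the worked example can be reproduced directly. A quick Python check (truncating to whole hours/dollars, which appears to be how the report rounds):

```python
# Worked example from the text: 8 prefab supports, 50% sent to remote shop,
# field totals of 22 labor hours and $623 labor cost when everything is
# procured in the field.
total_supports = 8
remote_fraction = 0.5
field_hours_all = 22.0        # hours when 0% is in remote shop
field_labor_cost_all = 623.0  # dollars when 0% is in remote shop

remote_supports = total_supports * remote_fraction  # 4
field_supports = total_supports - remote_supports   # 4

unit_hours = field_hours_all / total_supports       # 2.75 h per support
unit_cost = field_labor_cost_all / total_supports

# Field keeps the full hours for field-procured supports plus 70% of the
# hours for the remote-shop supports (30% moved into the shop):
field_hours = unit_hours * field_supports + unit_hours * remote_supports * 0.7
field_cost = unit_cost * field_supports + unit_cost * remote_supports * 0.7

print(int(field_hours))  # 18 (18.7 truncated, matching the report)
print(int(field_cost))   # 529
```

The remaining 30% of the remote-shop supports' installation hours shows up in the remote shop cost line instead.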
Problem Statement: Is it possible to use the Aspen Properties Excel Add-In to create isotherms ternary diagrams for a system water/ethanol/methanol?
Solution: This example illustrates a ternary diagram for a water/ethanol/methanol system, set up in Aspen Properties. Hence, it is necessary to have the Aspen Properties Add-in available in MS Excel. In order to link Aspen Properties with MS Excel through the Aspen Properties Add-in, follow these steps:
· Open MS Excel.
· Go to File | Options | Add-ins.
· At the bottom of the window, select 'Excel Add-ins' from the Manage drop-down list.
· Click on 'Go…'.
· Click on 'Browse…'.
· Navigate to C:\Program Files (x86)\AspenTech\Aspen Properties VX.X\Engine\Xeq
· Select the file Aspen Properties.xla, then click the OK button.
· Follow the instructions on screen.
· When complete, the Aspen Properties Add-in will appear under the Add-Ins ribbon as Aspen.
How to use the example?
The example file lets you analyse and graph the ternary diagram for a system with water, ethanol and methanol. In order to run the calculations, you need to set the following inputs in Excel:
1) Open the Aspen Properties file directly, or open the MS Excel Ternary.xlsm file and then use the Excel Add-in menu: go to the Aspen menu (Add-ins) and click on Aspen Properties | Select Property Package | New.
- On the Aspen Properties interface, enter the components.
- Then, specify the thermodynamic method and click the Next button until it prompts to run "Generate a load module".
- Run and afterwards close Aspen Properties.
2) Return to Excel. You should see the selected cell in Excel showing the name and path of the Aspen Properties file (e.g. C:\Users\'Machine name'\Desktop\Ternary.aprbkp). In this case, the Aspen Properties file is already defined, as well as the Excel calculation example module. So, once you have installed the Add-in in Excel, it is enough to run the calculations by pressing 'Calculate'. Remember to change the directory where you've saved the Aspen Properties simulation file, since the file location changes from one user to another; otherwise error messages will appear on screen.
To see the isotherms graph generated for the ternary group of components, click on the 'Chart1' tab below. The isotherm ternary diagrams will be shown.
What is it based on?
Each Aspen Properties Excel application obtains its content (components, property methods, model parameters, etc.) from an Aspen Properties file, in this case Ternary.aprbkp. Ternary.aprbkp is a standard Aspen Properties file that uses the NRTL property method. All of the model parameters are retained at default values; this example demonstrates the predictive capability of Aspen Properties. In addition, you can easily add new compounds to the Aspen Properties file. The calculation procedure in the Excel example is as follows:
1. The Aspen Properties program calculates 50 points to be represented in the graph.
2. The Aspen Properties global units set used is SI: Temperature (C), Pressure (bar).
3. The pressure is equal to 1 bar and the vapour fraction equal to 0.
Keywords: Isotherms, ternary diagram, methanol, ethanol.
References: None
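For readers curious how a set of ternary composition points might be laid out before each point is flashed at 1 bar and zero vapour fraction, the sketch below generates an evenly spaced composition grid in pure Python. The grid spacing is illustrative, not the add-in's actual sampling scheme:

```python
def ternary_grid(steps):
    """Mole-fraction triples (x_water, x_ethanol, x_methanol) summing to 1."""
    points = []
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            x1 = i / steps
            x2 = j / steps
            points.append((x1, x2, round(1.0 - x1 - x2, 12)))
    return points

grid = ternary_grid(8)
print(len(grid))  # 45 points for an 8-step grid
assert all(abs(sum(p) - 1.0) < 1e-9 for p in grid)
```

Each triple would then be passed to a bubble-point flash; the resulting temperatures are what the chart draws as isotherms.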
Problem Statement: How do I model an Atmospheric Crude Tower using HYSYS?
Solution: This is a different model of an Atmospheric Crude Tower. Here, three side strippers are used and the tower has one pumparound. For another Crude Tower example, please see R-1.hsc in the Samples directory that is installed with HYSYS.
Keywords: Atmospheric; Crude; Tower; Example
References: None
Problem Statement: In the standard reports, some of my items get reported with the correct COA but under a COA Group that I don't want them to be associated with. If I want to change this, how can I do it?
Solution: Changing the COA group to which a given COA is associated requires some customization of the report database. For example, if you wish to move COA 473 - Building Furnishing (usually reported under 'Bldg - Arch') to 'Other Civil', you need to write a UOD query that will update one record (the QSUMCOA and QSUMCOADES fields) in the QSUM table of the Reports.mdb file. In the QSUM table, the QSUMCOA code 4700 is associated with Bldg - Arch, and the QSUMCOA for Other Civil is 4800. To move Building Furnishing under Other Civil for COA 473, we need to change QSUMCOA to 4800 and QSUMCOADES to 'Other Civil' in the relevant record of the QSUM table (in Reports.mdb). The table Xref_QSumCOA contains each QSumCOA and its description. Below are the steps for writing the 'Update On Demand' query.
1. Add a record to the StoredReports table (of Icarus_User.mdb) as shown below:
ID: 8888
TreeView1: Full Import_COA_Update
Description: Moving COA 473 Building Furnishing to Other Civil
KbaseFlag: 3-Kbase, I-IPM
UserGroupNo: 0,2,3,4,5,6,9
Type: UOD
SubQueries: [UpdateCOA]
Name-Lev1: Full Import_
Name-Lev2: COA_
Name-Lev3: Update
Location of Icarus_User.mdb on XP: C:\Documents and Settings\All Users\Documents\AspenTech\Shared Economic Evaluation V7.2\Reporter
Location of Icarus_User.mdb on Vista\Windows 7: C:\Users\Public\Documents\AspenTech\Shared Economic Evaluation V7.2\Reporter\Database
2. Add a query to the StoredQueries table in Icarus_User.mdb as shown below (note the string values must be quoted):
ID: 8888
Name: UpdateCOA
SQLStr: UPDATE QSUM SET [QSUM]![QSUMCOA] = 4800, [QSUM]![QSUMCOADES] = 'Other Civil' WHERE ((([QSUM]![QSUMCOA]) = 4700) AND (([QSUM]![QSUMCOADES]) = 'Bldg - Arch') AND (([QSUM]![ICACOA]) = 473))
The UOD query will be shown under the 'Update On Demand' report mode as shown below. Note that you need to run this query (for the first time) before invoking any reports, otherwise the old reports will be displayed. The attached file contains the Icarus_User.mdb file with the mentioned changes for reference.
Keywords: UOD query report standard code account
References: None
Problem Statement: Is it possible to use the solid unit operation models with salts resulting from precipitation reactions specified in the chemistry?
Solution: See the attached file for a simple example. The feed is a brine with water and sodium chloride (NaCl, which is fully dissociated to Na+ and Cl-). The EVAP block evaporates the water, concentrating the solution so that salt NaCl(s) precipitates. In the Flash2 block, the particle size distribution (PSD) has been specified. The PSD is not calculated from crystallization kinetics or other mechanisms, but you can use, for example, a calculator block to access the PSD characteristics (the calculator C-1 sets the D50 for the GSS PSD distribution function). If needed, the exported variables may be specified as tear variables (in the Convergence folder) to ensure the proper convergence procedure is used. The SCREEN block is used to remove the coarse particles. The model assumes 5% liquid entrainment (that is, 5% of the mass or mole flow rate of the liquid phase in the feed leaves with the coarse outlet stream; as the composition of the liquid does not change in the block, mole and mass basis are identical here). The design spec LEVEL is used to control the evaporation. It looks a bit counter-intuitive to specify that the water flow rate in should be equal to the water flow rate out, but this is required due to the sequential modular nature of the simulation mode. Keywords: PSD, ELECNRTL, electrolytes, salt, chemistry, SCREEN References: None
Problem Statement: When does equipment get 'field touch-up' paint?
Solution: In order for equipment to get field touch-up paint, the equipment paint option at the project or area level needs to be set to Remote Shop. Field touch-up paint will not be generated unless equipment paint is generated; Solution 131081 explains when equipment will get paint. In the attached file, there are two areas, EQ paint option 1 and EQ paint option 2. Area EQ paint option 1 uses the default setting of the Equipment paint option under Area specs | Area paint, which is Field Paint. The table below shows there is no field touch-up paint generated for any of the components under this area (first three rows in the table). Area EQ paint option 2 uses Remote Shop under Area specs | Area paint. Two of the three components under area EQ paint option 2 get field touch-up paint (last three rows in the table). There is no field touch-up paint for the third one because that component will not get paint at all, according to Solution 131081. The table below is generated by the attached project.
Component name            | Design temperature [F] | User specifies # of primer/final coats | Equipment paint COA 911 | Equipment paint option | Equipment field touch-up paint
VT_MeetInternalCriteria_1 | 120 | No  | Yes | Field  | No
VT_SpecifyNoOfCoat_1      | 650 | Yes | Yes | Field  | No
VT_NotMeetIntCriteria_1   | 650 | No  | No  | Field  | No
VT_MeetInternalCriteria_2 | 120 | No  | Yes | Remote | Yes
VT_SpecifyNoOfCoat_2      | 650 | Yes | Yes | Remote | Yes
VT_NotMeetIntCriteria_2   | 650 | No  | No  | Remote | No
Keywords: Equipment, field touch-up paint References: None
Problem Statement: Contingency is different from what is specified in the project basis when a pipeline job is considered. According to the project basis, contingency is 20 percent, but when you evaluate the project, the Project Summary shows 16.7% while the project data sheet shows 20%.
Solution: First, note that this is a Pipeline job, which is handled as a Prime Contractor job (i.e., no contractors), so the reports are slightly different. The reason is that the 16.7% is the percentage of the total cost, not the percentage applied in the calculation (the percentage of the base cost). The 20% specified in the design basis means that the desired contingency is 20% of the plant costs (indirect and direct costs, excluding special charges and contingency). The 16.7% is the percentage the contingency contributes to the total including contingency.
For example, say the direct costs are $600,000 and the total of the indirects (not including contingency and special indirects) is $400,000, so the total plant cost (not including contingency) is $1,000,000. If we specify contingency as 20%, the calculated contingency is $200,000, and the total project cost (assuming $0 special charges) is then $1,200,000. If you back-calculate the contingency percentage as $200,000/$1,200,000, you get 16.7%.
Algebraically, with P = plant cost, C = % contingency, T = total project cost, and F = % contribution of contingency:
T = P*C + P
F = P*C/T
so F = P*C/(P*C + P), or F = C/(C + 1)
Note that the heading of the column in the report is "% of Total Cost", and the Total Cost % ends up at 100% by definition. If you run a non-pipeline job, which is a contractor job, these tables are slightly different: the heading is PERCENT OF BASE TOTAL on the Project Summary report and PERCENT OF CONTRACTOR TOTAL on the Contractor Summary report. The percentage in those reports is calculated on the plant cost (total excluding escalation, contingency, and special charges), not on the total cost, so the Total Cost % ends up greater than 100%. Keywords: None References: None
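The F = C/(C + 1) relationship and the worked example above can be checked numerically. This is a minimal sketch in plain Python; the function name is ours, not from the product:

```python
def contingency_pct_of_total(c):
    """Given contingency C as a fraction of plant cost, return the
    fraction it contributes to the total project cost: F = C/(C + 1)."""
    return c / (c + 1.0)

# Worked example from the text: $1,000,000 plant cost, 20% contingency.
plant_cost = 1_000_000
contingency = plant_cost * 0.20        # $200,000
total = plant_cost + contingency       # $1,200,000 (no special charges)

print(round(contingency / total * 100, 1))             # 16.7 (% of total)
print(round(contingency_pct_of_total(0.20) * 100, 1))  # 16.7
```

Both prints give 16.7, matching the Project Summary figure.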
Problem Statement: How can I create a Standard Basis for a Pipeline area?
Solution: To create a new Standard Basis for a Pipeline area, you must start with the existing Pipeline Standard Basis and make a new copy, which you can then modify.
1.) From Library mode, go to Library | Basis for Capital Costs | Inch-Pound and click on Pipeline.
2.) Right-click and choose Duplicate.
3.) On the Duplicate screen, give the new Pipeline Standard Basis the name you want to call it.
4.) Click OK.
5.) Now right-click on the new Pipeline Standard Basis file and choose Modify.
6.) You can now make the modifications you would like to your new Pipeline Standard Basis file.
Keywords: References: None
Problem Statement: How can I delete Field Office Costs COA 81-85 from reporting in my project?
Solution: To remove your Field Office Costs, which are COA 81, 82, and 85, you can use the Engineering Workforce adjustment By Phase. Go to: Project Basis View | Engineering Workforce | By Phase. Once there, add three columns and adjust Home Office Construction Services, Construction Management, and Field Office Supervision to show the percent of hours as zero. Keywords: None References: None
Problem Statement: How do I model and optimize a granulation or coating process in Aspen Plus?
Solution: The attached Aspen Plus V8.2 demo will show you how an industrial urea granulation process can be modeled in Aspen Plus V8 and how the process can be optimized with regards to throughput. There is an associated PDF to guide you through the steps. This example will cover · Basic description of the granulation/coating model · Simulation of a process that uses a granulator and a recycle loop with a screen and crusher unit to produce a granule product · Demonstrating the Aspen Plus sensitivity capability to determine the optimal energy input to the mill to increase the product flow rate by 60%. Keywords: Solids Capabilities, Unit Operations, Granulation, Granulator Block, Particle Growth, Coating References: None
Problem Statement: Are there any examples that demonstrate how the project component graphical model (P&ID) can be modified, or customized, in Aspen Capital Cost Estimator?
Solution: All process equipment, and some plant bulks, come with installation bulks when they are added in the project scenario and the cost is evaluated. These installation bulks can either be the default internal volumetric models, which are written into the program, or external volumetric models, which are saved externally in the P&ID library directory. Both the internal and external default models yield very similar results in terms of design and cost. While the internal model can only be modified from within a project scenario, i.e., in the process equipment installation bulks form, the external model can be modified both inside and outside the project scenario. The file attached to this solution provides the following information: 1) An overview of the Aspen Capital Cost Estimator P&ID 2) How to modify a system P&ID in the Libraries Keywords: P&ID Libraries, external volumetric models, user-guide References: None
Problem Statement: When does a vessel get paint?
Solution: Vessels do not get paint unless one of the following conditions is satisfied: 1. The user enters a number of final or primer coats on the Installation bulk | Paint form for the equipment, or 2. The equipment meets the internal criteria for getting paint. For vessels, the criteria are: a) the material of construction is CS, and b) the design temperature is between 50 F (10 C) and 150 F (65 C). In the attached file, there are three vertical vessels with the same specifications except for the ones listed in the table below. Whether equipment paint is generated is also tabulated in the last column of the table.
Component name            | Design temperature [F] | User specifies # of primer/final coats | Equipment paint COA 911
VT_MeetInternalCriteria   | 120 | No  | Yes
VT_NotMeetInteralCriteria | 650 | No  | No
VT_SpecifyNoOfCoat        | 650 | Yes | Yes
Keywords: Vessel, paint References: None
Problem Statement: How do you model a vacuum pump in Aspen Plus?
Solution: You can model a vacuum pump using a compressor model. Aspen Plus has both a multistage compressor model (MCOMPR) and a single-stage polytropic or isentropic compressor (COMPR). The attached simulation file uses both MCOMPR and COMPR for a simple vacuum pump application. In the COMPR block, the exit temperature is very high due to the high compression ratio; the MCOMPR with intercoolers is more reasonable. Alternatively, a HEATER model with a specified temperature and pressure could be used if the power calculation is not important. Keywords: Vacuum pump, MCOMPR, COMPR, HEATER References: None
Problem Statement: How can you have an Economic Evaluation product estimate the area size if it forces you to put in the area length and width?
Solution: The Economic Evaluation products have the ability to estimate an area size based upon a loose arrangement of equipment in the area. To be able to do this, there must be equipment in the area. When you first create an area there is no equipment, and since it is possible in the Economic Evaluation products to have areas with nothing in them, the area requires size parameters like length and width. But once you place a component in the area, you can go back and remove the size parameters (i.e., length and width). When you do a Project Level evaluation, the Economic Evaluation product will size your area based upon the equipment in the area. In the attached project, the area length and width are left blank. Upon evaluation, the length and width based upon the components are 134 feet by 74 feet:
AREA DATA
Area type                  GRADE GRADE
Area length                134.0 FEET
Area width                 74.00 FEET
Slab thickness             0.0 INCHES
Low ambient temperature    0.0 DEG F
High ambient temperature   86.00 DEG F
Keywords: area, size References: None
Problem Statement: What is the default length of pipe per fitting going to remote shop when pipe fabrication type is RMT?
Solution: The default length of pipe per fitting going to the remote shop is 10 ft when the pipe fabrication type is RMT. Prior to V7.3.2, this value could not be changed. In V7.3.2, a new input field, Length/fitting to remote shop, was added to the Design basis | Piping Specs | General form, which allows users to change this parameter; the default value is still 10 feet. This value is used to determine the length of pipe to be procured and fabricated in the remote shop as follows. Let:
Lfit = length per fitting to remote shop specified in the newly added field;
Ltotal = total length of pipe (specified by the user on the pipe input form);
N = number of fittings (N > 0).
IF (Ltotal - N*Lfit) >= Lfit, the length of pipe procured/fabricated in the remote shop = N*Lfit;
ELSE the length of pipe procured/fabricated in the remote shop = Ltotal.
If there are no fittings in the pipe and the pipe fabrication type is RMT, the pipe will be procured in the field. In the attached file, there is a 100-foot-long plant bulk pipe component with 5 elbows. With the default length per fitting (10 feet), the estimation report shows 50 feet of pipe sent to the remote shop. If the length per fitting is changed to 8 feet, the report shows 40 feet of pipe sent to the shop. Keywords: Pipe fabrication References: None
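The rule above can be expressed as a short function. This is a sketch of the stated logic only (the names are ours), not the actual Economic Evaluation code:

```python
def remote_shop_length(l_total, n_fittings, l_fit=10.0):
    """Length of pipe procured/fabricated in the remote shop for a pipe
    with fabrication type RMT. l_fit defaults to 10 ft, the default
    'Length/fitting to remote shop' value."""
    if n_fittings <= 0:
        return 0.0  # no fittings: pipe is procured in the field
    shop = n_fittings * l_fit
    return shop if (l_total - shop) >= l_fit else l_total

# Example from the text: 100 ft of pipe with 5 elbows.
print(remote_shop_length(100, 5, 10))  # 50 ft to the shop
print(remote_shop_length(100, 5, 8))   # 40 ft with 8 ft per fitting
```

These match the 50 ft and 40 ft figures reported for the attached file.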
Problem Statement: Pump error: Specified head is out of range. My pump is set up as 60 Hz with 1800 rpm and 3600 feet of head, but when I do an evaluation, I get the Specified head is out of range error; what do I need to do to remove the error?
Solution: The solution is either to remove the 1800 rpm speed or to specify 3600 rpm for the 60 Hz pump. If you specify 1800 rpm (30 Hz) as the pump speed and 3600 feet as the head, you will get the "Specified head is out of range" error: at 1800 rpm we can only design up to 1000 ft of head, so 3600 feet is out of range in this case. Either remove the 1800 rpm entry (in which case the engine will estimate the pump at 3600 rpm) or provide a speed input of 3600 rpm, and the error will go away. This can be seen in the attached file, which contains two pumps. One shows the error because the speed was set to 1800 rpm while asking for 3662 feet of head; the other shows no error because no speed was specified, so Economic Evaluation assumes 3600 rpm to achieve that head. With 1800 rpm (30 Hz) we can achieve a maximum of 1000 feet of head. Keywords: pump error, rpm References: None
Problem Statement: How do I find the project total I/O Count?
Solution: To find the total I/O count of a project: 1. Go to Project basis view | Design basis for capital cost | Systems | Process control, open the input form for each control center in the project, and enter 1000 for Software cost per I/O count. 2. Evaluate the project. 3. In the CAP_REP.ccp report, navigate to the bulk summary report (search for COA 669, Software total cost) and divide the software total cost by $1,000 to get the total I/O count of the project. The I/O count for each individual control center can be found in a similar way by going to Detailed Bulk By Report Group section of the CAP_REP.ccp report. The attached project has 47 I/O counts, which can be found in line # 8647 in the CAP_REP.ccp report. Keywords: I/O count References: None
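The trick above works because the reported software cost is simply the I/O count multiplied by the dollar value you entered. A trivial check (the function name is ours):

```python
def total_io_count(software_total_cost, cost_per_io=1000.0):
    """Back out the I/O count from the 'Software total cost' line
    (COA 669) after entering 1000 for Software cost per I/O count."""
    return software_total_cost / cost_per_io

# The attached project reports $47,000 of software cost -> 47 I/O counts.
print(total_io_count(47_000))  # 47.0
```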
Problem Statement: How do you regress heat of mixing (HLXS) data without sacrificing VLE results?
Solution: In order to do the regression properly, VLE data (TPXY, TXY, or PXY) need to be regressed along with the HLXS data. In the attached example, if only the heat of mixing data are regressed for an ethanol-water system (as in case R-1), the VLE results are very poor. First use Tools\Analysis\Binary to generate a P-xy diagram, and note its shape and the azeotrope. If you then run case R-1 and generate a new P-xy diagram, it will look very different; in fact, Aspen Plus predicts that ethanol-water forms two liquid phases over most concentrations with the newly regressed parameters. These results are incorrect. Since Aspen Plus has binary parameters in its databanks regressed from VLE data, it is possible to use these parameters to generate data to regress together with the HLXS data (as in case R-2). To generate data in Aspen Plus:
1. Create a new mixture data set.
2. On the Setup sheet, select Data type = TPXY, TXY, or PXY, and select the components.
3. On the Data sheet, click on the Generate button.
4. On the Generate dialog box, select the Property Method, click on the Generate button, and click on the Close button.
5. On the Setup sheet, enter the temperature or the pressure.
Note that the Property Method used to generate the data can NOT be the same as the Property Method being used in the regression. In this case, NRTL is used for the regression, and NRTL-2, which uses a second data set, is used to generate the data. Keywords: DRS vapor-liquid equilibrium References: None
Problem Statement: The attached example demonstrates the Aspen Polymers capability of modeling a melamine polymerization process.
Solution: Melamine is a phenol-formaldehyde polymer/resin. Phenolic resins are produced through the reaction of formaldehyde with phenol and/or one or more substituted phenol derivatives such as cresol or xylenol. The resulting products can be linear, branched, or network polymers depending on the relative amounts of the various monomers. Formaldehyde reacts with phenol compounds at the ortho- and para-positions (see attached). Phenol itself is trifunctional; some derivatives of phenol are bifunctional by virtue of substituent groups occupying one of the normally reactive sites. The phenol-formaldehyde polymerization process is best described as a step-growth process. The attached examples were developed using thermodynamic and transport data from open literature sources. Keywords: Melamine, Phenol, Cresol References: None
Problem Statement: How to model a kettle reboiler using the shortcut method
Solution: With the shortcut calculation method you can simulate a heat exchanger block with the minimum amount of required input. The shortcut calculation does not require exchanger configuration or geometry data. Although you cannot model a kettle reboiler with the shortcut method, you can use a transfer block to transfer the heat duty calculated for the reboiler by the HeatX block to the Flash2 to mimic the kettle reboiler. A file created with Aspen Plus V8.4 is attached as an example. Keywords: Kettle reboiler, HeatX, shortcut method. References: None
Problem Statement: When importing a simulator file and mapping the components in Aspen Process Economic Analyzer, utility streams are generated for the appropriate equipment, and the cost of the utility streams can be viewed in the utility summary. How can I create new utility streams and use them to replace the ones connected to certain equipment?
Solution: In the sample file (ETOH.izp) installed on your computer, utility streams ICUST-IN and ICUST-EX have been created based on the existing utility Steam@100psi defined under Project Basis | Process Design | Utility Specifications and connected to equipment HEAT after importing and mapping. (The sample is under C:\Documents and Settings\All Users\Documents\AspenTech\Shared Economic Evaluation V7.1\Archives_Econ_Process\Sample Projects.) If the user wants to use a different utility to replace the current one, a new utility stream can be created manually and the related equipment remapped using the following steps. The file ETOH_NewUtil.izp has the updated utility.
1. Create a new utility in the Project Basis window (Project Basis | Process Design | Utility Specifications). Right-click on Utility Specification | Edit. In the new window, choose Create and give the new utility a name, say test. Then select the fluid class, for example Steam. Click on the Create button and a form will pop up. Enter all the required data and click OK to close the form.
2. You should now see the newly created stream test under the Modify Existing Utility Stream field after switching to the Modify option in the Develop Utility Specifications window. Un-check all the existing utility streams except for test, the one just created. The purpose of this step is to make sure this utility stream will be used when remapping equipment HEAT later.
3. Delete the mapping for equipment HEAT. In the Process View window, right-click HEAT, then Delete Mappings.
4. Right-click on HEAT again, then Map, to remap the equipment. After the Interactive Sizing window shows up, select the new utility (test-IPE UTILITY) for the Hot Inlet Stream. Click OK to apply the change and close the window. You should see that equipment HEAT is remapped with the newly created utility, with the names ICUtest-IN and ICUtest-EX.
Keywords: Utility streams, create References: None
Problem Statement: Example of DRS of UNIFAC-PSRK parameters from VLE data
Solution: This example aims to guide the user through regressing UNIFAC-PSRK parameters from VLE data. The experimental data are for the propane/hydrogen sulfide system at 298.15 K (Figure 1). See the attached files.
Figure 1. Pxy diagram for propane-hydrogen sulfide at 298.15 K.
The UNIFAC-PSRK groups for the components are summarized in Table 1.
Table 1. UNIFAC-PSRK groups of the propane/hydrogen sulfide system.
Component        | UNIFAC-PSRK Group | Group Number | Number of Occurrences
Propane          | CH3               | 1505         | 1
Propane          | CH2               | 1010         | 1
Hydrogen sulfide | H2S               | 3860         | 1
The PSRK (Predictive Soave-Redlich-Kwong) equation of state is based on the Soave-Redlich-Kwong equation. It uses the UNIFAC method to calculate the mixture parameter a and includes all existing UNIFAC parameters and the same groups as UNIFAC, with added groups for light gases. In the UNIFAC-PSRK formulation, functional groups are divided into main groups and groups (see Aspen Plus Help, Table 3.12, UNIFAC Method Functional Groups, for a complete list of the PSRK-UNIFAC functional groups). The UNIFAC-PSRK parameters are defined for the interaction between main groups. The binary parameters are the same for the interaction between the groups in one main group and the groups in any other main group. If the interaction for one group is regressed, all other groups within that main group are set equal to that value. This applies to both existing UNIFAC-PSRK groups and new user-defined groups. In this example, the group binary parameters are regressed for the groups CH2-H2S and H2S-CH2. Since the groups CH2 and CH3 belong to the same main group CHn, the parameters for CH2-H2S and H2S-CH2 must be the same as for CH3-H2S and H2S-CH3, respectively. DRS regresses only the binary parameters for CH2-H2S and H2S-CH2 and then automatically sets the other parameters equal to the regressed parameters. The regressed UNIFAC-PSRK parameters are copied into the UNIFPS-1 folder (Properties/Parameters/UNIFAC Group Binary/UNIFPS-1) (Figure 2).
Figure 2. UNIFAC-PSRK regressed parameters.
Hint: The user should define the UNIFAC group IDs in Components/UNIFAC Groups (Figure 3).
Figure 3. UNIFAC group IDs. Keywords: DRS, UNIFAC-PSRK groups. References: None
Problem Statement: Is there an example showing how to use the Steel Library file in Aspen Capital Cost Estimator?
Solution: The attached file is a simple model of a double-diameter tower where we used a Steel Library file to override the default price that Aspen Capital Cost Estimator used for the Ladders and Platforms in the project. Initially, based upon the 2009 Cost Database, the file showed a price of $7,953.85/ton ($3.98/lb) for the Ladders (COA 512) and $5,599.64/ton ($2.80/lb) for the Platforms (COA 513). (The report referenced below is the Excel Capital Cost Reports | Direct Costs | Item Summaries | Steel report.) We know that we are currently paying about $4.25/lb for steel used for ladders and about $2.95/lb for platform steel. To use this, we first close the project, then visit the Library section, Customer External Files, Steel Material, right-click, and choose New. Give it a name; we will call it Steel Ladder and Platform. Next we choose Specifications and Modify. We specifically want to adjust our price for the Ladders with cage ($4.25/lb) and Platforms ($2.95/lb), so we enter those values, then hit OK and close. We next open our project again and visit Project Basis View | Basis for Capital Costs | Customer External Files, choose Steel Material, and right-click. At this point you will have a choice to Select a file. Do that, and you will get a window giving you a chance to click on your new Steel_Ladder_And_Platform file. Do this and click OK. Now if you evaluate the file and visit the Steel Item Report again, you will see the system is using our new steel costs for the Ladders and Platforms of $8,498/ton ($4.25/lb) and $5,890/ton ($2.95/lb). Please note, the steel in this file was limited to platforms and ladders; if you use the Customer External Steel file, any steel being used in the project will require cost data to be entered. Keywords: None References: None
Problem Statement: How does the use of the DIAG_LEVEL parameter change the diagnostic messages from OOMF?
Solution: We have an Aspen Plus bkp file and an OOMF script (attached) with the DIAG_LEVEL parameter set to a value of 7. The default value of DIAG_LEVEL is 4. Please follow these steps:
Step 1: Open the attached bkp file and run it.
Step 2: In the EO command line in the Control Panel, enter: Invoke test_obj 3. You will notice that non-default messages are also printed to the Control Panel.
Step 3: Open test_obj.ebs and change the DIAG_LEVEL parameter value back to the default value of 4.
Step 4: Enter Invoke test_obj in the command line again and press Enter. You will notice that only default messages get written to the Control Panel.
Keywords: EO, OOMF References: None
Problem Statement: Jump Start: Assay Management in Aspen HYSYS Petroleum Refining
Solution: This document is intended as a “getting started” guide. It will cover the process of adding an assay to an Aspen HYSYS Petroleum Refining model, characterizing it, and integrating it with the complete process simulation. It will also show you how to generate plots and extract information about these assays for review. Keywords: Assay Management, V8.4, Aspen HYSYS Petroleum Refining References: None
Problem Statement: How do you model a Triple-Effect Evaporator in Aspen Plus?
Solution: Feed-forward multi-effect evaporators improve efficiency by using the vapor from each stage as the heat source for the subsequent stage. Pressures (and temperatures) must decrease from one stage to the next. The modeling challenge is to determine the steam flow required to meet the desired concentration. In this example, a triple-effect evaporator concentrating an NaCl solution is modeled. The attached file (triple effect evaporator V9.bkp) will run in V9 and higher. To specify a stage, specify the stage pressure; specify a vapor fraction of zero to assume the steam condenses completely. The heat stream indicates the direction of information flow: the coil model sets the duty and writes it to the flash block. In the model, pressures are specified, and flow rates and temperatures are calculated. To simulate the control action needed to meet the target concentration, use a design specification that varies the steam flow to reach the target salt content. Keywords: multiple effect evaporator sugar salt References: None
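The design-spec action (vary steam flow until the concentrate hits the target salt content) can be illustrated with a collapsed overall mass balance. This sketch is ours, not the Aspen Plus model: the whole evaporator train is reduced to a single balance on a nonvolatile salt, and the steam economy figure is an assumed placeholder:

```python
def product_salt_fraction(feed, x_feed, evaporated):
    """Nonvolatile salt: all salt leaves in the concentrate, so
    x_out = feed * x_feed / (feed - evaporated)."""
    return feed * x_feed / (feed - evaporated)

def steam_for_target(feed, x_feed, x_target, economy=2.5):
    """Evaporation needed to reach x_target, converted to steam with an
    assumed economy (kg water evaporated per kg steam; 2.5 is a
    placeholder typical of a multi-effect train, not from the text)."""
    evaporated = feed * (1.0 - x_feed / x_target)
    return evaporated / economy

# 100 kg/h of 5 wt% brine concentrated to 25 wt% salt:
evap = 100 * (1 - 0.05 / 0.25)                 # 80 kg/h evaporated
print(product_salt_fraction(100, 0.05, evap))  # 0.25
print(steam_for_target(100, 0.05, 0.25))       # 32.0 kg/h steam
```

In Aspen Plus the design specification does the equivalent search numerically against the full rigorous stage models.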
Problem Statement: The Sensitivity analysis and/or Design Spec block does not take the vary/defined variables into account and shows no results after I run the simulation, even though it is specified properly.
Solution: When the user specifies a sensitivity analysis, it is important to remember that only variables in active streams and blocks are included in the calculation. If you deactivate the blocks or streams (and block variables) which are included in the sensitivity analysis input specification, Aspen Plus will not take these into account and no results will appear in the sensitivity folder. The user should activate the streams and blocks which contain the variables entered on the sensitivity Vary/Define tabs; Aspen Plus will then do the calculations and show the results properly. A similar issue can be observed with the Design Specification block: when you include variables from a deactivated block or stream on the Define/Vary tabs of the Design Spec input specification, Aspen Plus will not show any results. In the attachment you will find two example files which show how the sensitivity analysis tool and the Design Spec block behave when a value from a deactivated block is used. To check that the results will be displayed, right-click on the Heater block, select Activate, and run the simulation. Note: The example was prepared based on the cumene.bkp example from the AspenTech examples directory: C:\Program Files (x86)\AspenTech\Aspen Plus V8.0\GUI\Examples Keywords: Sensitivity, Design Spec, deactivate References: None
Problem Statement: How to use data reconciliation with ACM?
Solution: The attached example shows how to use data reconciliation with ACM. Data reconciliation is simply a special case of estimation and is available for steady-state simulation only. The example is a heat exchanger where the temperatures and flow rates of the inlet streams have been measured; the temperatures of the outlet streams have been measured as well. We can use the overall thermal balance to get estimates for the temperatures and flow rates. For a data reconciliation, the fixed variables which have been measured have to be selected both on the Estimated Variables sheet and among the measured variables of the experiment. In this example, let's say we have the following measurements: To set up the data reconciliation, follow these steps:
1. Open the file htx-dr-start.acmf.
2. Change the run mode to Estimation.
3. Open the flowsheet table Data.
4. Go to Tools, Estimation.
5. Only for version 12.1 and below: drag and drop the fixed variables from the Data table onto the Estimated Variables sheet of the Estimation window (for version 2004 and above, skip this step).
6. Click on the Steady State experiment sheet.
7. Press the <New> button to create a new experiment (accept the default settings).
8. Drag and drop all variables from the table Data onto the Measured Variables of the experiment sheet.
9. Enter the measured values on the experiment sheet.
You can now run the simulation. The results are displayed in the simulation messages window.
Observed versus Predicted:
Steady state experiment SteadyStateExp_1:
Observed      Predicted     %Error   Standardized Residual   Absolute Residual   Variable
6.9000e+001   6.9210e+001   -0.3     1.0000e+000             2.0999e-001         S4.p_in.T
2.1000e+003   2.1000e+003   0.0002   1.0000e+000             4.2247e-003         S1.p_in.F
6.2000e+002   6.2001e+002   0.0024   1.0000e+000             1.4945e-002         S4.p_in.F
3.5000e+001   3.4018e+001   2.9      1.0000e+000             9.8192e-001         S2.p_in.T
2.7000e+001   2.6716e+001   1.1      1.0000e+000             2.8449e-001         S5.p_in.T
2.4000e+001   2.5056e+001   -4.2     1.0000e+000             1.0564e+000         S1.p_in.T
Note: the Experiment sheet should display the predicted values, but there is a defect in version 11.1 which causes ACM to display zero instead of the actual values. The correct values are also displayed in the table. You can define only one experiment for a steady-state reconciliation. Keywords: References: None
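The %Error and absolute-residual columns of the report can be reproduced directly from each observed/predicted pair. A quick check in plain Python (the function name is ours):

```python
def residual_stats(observed, predicted):
    """Percent error and absolute residual as printed in the
    'Observed versus Predicted' report (the standard deviations are
    1.0 here, so scaled and raw residuals coincide)."""
    pct_error = (observed - predicted) / observed * 100.0
    return pct_error, abs(observed - predicted)

# First row of the report: S4.p_in.T observed 69.0, predicted 69.21.
pct, res = residual_stats(69.0, 69.21)
print(round(pct, 1), round(res, 5))  # -0.3 0.21
```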
Problem Statement: Is it possible to vary the Selector block's feed stream using a sensitivity analysis?
Solution: If you want to use a sensitivity analysis to vary the feed stream to a block, you first need to set up a Selector block with your possible feed streams. The variable that you will vary is:
Variable: name of the variable
Type: Block-Var
Block: Selector block name
Variable: Stream
Sentence: Param
The specified limits should be 1 for the lower limit and the total number of streams connected to the selector for the upper limit. In the example we have three possible feeds, so the parameter should vary from 1 to 3; the increment should be 1 to run all possible streams. One important point when setting up a sensitivity block that varies the feed stream to a block: it is recommended to reinitialize the blocks and streams before each case, which can be set on the Optional tab of the sensitivity analysis. By setting the cases to reinitialize you can avoid convergence problems caused by using the previous results of another feed stream. Attached you can find an example that uses a selector to feed a mixer with one of three streams, each one with a different composition. A sensitivity analysis is used to change the outlet stream specification in the selector block. Keywords: Aspen Plus, Sensitivity Analysis, Stream selector, Feed Stream References: None
Problem Statement: Example of modeling liquid-liquid equilibrium in an electrolyte system using Apparent Components in a Flash3 block and True Species in an RGibbs block.
Solution: Liquid-liquid (LLE) systems can be modeled with ElecNRTL, but there are several important steps that must be taken; see Solution 4402 for details. Only the Apparent Component approach can be used. Starting in 2006, RGibbs allows chemistry and electrolytes calculations using true components. With this enhancement, it is possible to model vapor-liquid-liquid (or liquid-liquid) equilibrium with electrolytes using true species. It is also possible to simulate more robustly superheated electrolyte systems with vapor and salts, with no liquid present. Only liquid-phase reactions are allowed; the method cannot be used for additional reactions in the vapor or solid phase. In 2006.5, this method is also available for flash calculations. In the attached file, both an RGibbs block using the true approach and a Flash3 block using the apparent approach are used to calculate the liquid-liquid equilibrium for a water (H2O), carbon tetrachloride (CCl4), sodium hydroxide (NaOH) system. Keywords: lle, electrolytes, caustic References: None
Problem Statement: How do I toggle between two streams? How do I choose one of two streams? Examples in steady-state and dynamic modes.
Solution: Toggling between two streams can be performed with the help of a Spreadsheet and a dummy Mixer operation. If you can use both streams as feeds to the unit operation, you do not need the dummy mixer. Example in steady-state mode: In the attached Toggle btn Two Streams SS.hsc, if the temperature of stream 1 is greater than 50 F, we want stream 1 active; otherwise stream 2. The toggle condition is tested in cell B4 of the spreadsheet. The flow rates of Stream1 and Stream2 are calculated using the value in B4 and exported to the corresponding streams. Please note that the toggle condition should not be a variable that depends on the manipulated variable; in that case you would have a circular, unsolvable situation. Example in dynamic mode: In the attached Toggle btn Two Streams Dyn 1.hsc file, stream Ethane is the backup stream for Methane. If stream Methane fails to achieve a reactor outlet temperature >= 5000 F, then we want the Methane stream turned off and the Ethane stream turned on. The methane composition is adequate to attain the temperature. If you change the Methane stream's composition (molar) to 10% methane and 90% oxygen, the temperature will fall and the spreadsheet will turn Valve2 on and Valve1 off. If the composition of methane in the Methane stream is changed back to a higher value capable of achieving 5000 F and we want the Methane stream active again, then this file is not adequate: once the Ethane stream is active, the flow rate of the Methane stream becomes 0. To test whether a new composition is adequate to achieve the desired temperature, we need a dummy reactor. The attached Toggle btn Two Streams Dyn 2.hsc file has this dummy reactor, and the spreadsheet uses the temperature of the dummy reactor's outlet. In this file, whenever the Methane stream is adequate to achieve the desired temperature, the Methane stream is turned on; otherwise, the Ethane stream is turned on.
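The steady-state spreadsheet logic can be sketched in ordinary code. This is a minimal illustration only: the 50 F threshold and the two-stream layout come from the example, while the function name and the total flow value are arbitrary.

```python
# Minimal sketch of the steady-state spreadsheet toggle: exactly one of
# the two streams carries the flow, selected by the temperature test
# (the role played by cell B4 in the example).

def toggle_flows(t_stream1_f, total_flow):
    """Return (flow_stream1, flow_stream2); exactly one stream is active."""
    stream1_active = 1 if t_stream1_f > 50.0 else 0  # the toggle condition
    return total_flow * stream1_active, total_flow * (1 - stream1_active)

print(toggle_flows(75.0, 100.0))  # stream 1 active -> (100.0, 0.0)
print(toggle_flows(25.0, 100.0))  # stream 2 active -> (0.0, 100.0)
```

The same pattern extends to the dynamic examples, where the test variable is the reactor (or dummy reactor) outlet temperature and the exported values drive the valves instead of the stream flows directly.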
Please note that the examples are based on entirely arbitrary conditions. Their sole purpose is to demonstrate that toggling between two streams can be performed with the help of a spreadsheet. Keywords: Choose, toggle, sample, example, stream References: None
Problem Statement: How do I model a batch dryer in Aspen Plus?
Solution: The attached Aspen Plus V8.2 demo shows how a batch dryer can be simulated in Aspen Plus using the clock method. There is an associated PDF to guide you through the steps. This example covers:
· Application of the clock method using calculator, transfer and sensitivity blocks to model a batch process within Aspen Plus
· How, for a batch dryer, temperature, moisture content and the drying rate change as a function of time
Keywords: Solids Capabilities, Unit Operations, Drying, Batch Drying, Clock Method References: None
Problem Statement: For feed streams (streams not coming from any other block) we can easily set a different value for the temperature of the substreams. How can we achieve the same for a stream coming from another block? One such scenario is to work around the limitation that RPLUG allows only one feed stream, but for special cases this block does allow mixed and solids to be at different temperatures. This is challenging if the feed is actually the result of mixing different streams.
Solution: See the attached example for Aspen Plus V7.3 and higher. Note that you must have a Fortran compiler installed to use this feature. The first calculator, C-1, placed right after the HEATER, does all the setting: it contains the code to set the mixed substream temperature to TMIXED and the cisolid substream temperature to TSOLID. There are a few comments in the code (please note that in the Fortran declarations we include some common blocks to access the plex IB(...) and the number of components, NCOMP_NC). The structure of the stream and substreams is documented in the Aspen Plus User Models reference guide. When the first block after the setting is about to be executed, the stream gets initialized; you can see this in the history file. The stream summary will show the same values of T and H for streams 1 and 2 (and 3, since DUPL does not flash). Also note the NOFLASH flag in the FLASH-SPECS of stream 2 in the calculator blocks. The second calculator, C-2, is not needed; it is included just to demonstrate that the stream still has the changed values. Keywords: References: None
Problem Statement: There is no PENALTY column available for T.TRANSFER in MPIMS or XPIMS. How can I implement user-defined penalties for stream transfers?
Solution: Please refer to the example below. Stream LNX is transferred from plant B to C via mode R. It has a MIN limit of 2.5 and a MAX limit of 5. I removed these limits from T.TRANSFER, as I am going to implement them by control rows:

* TABLE TRANSFER
* Inter-Plant Transfers
         TEXT           MIN  MAX  FIX  COST
* !Product !Source !Destination !Mode
*
LNXBCR   Lt SR Naphtha                 0.20000

I implemented the MIN and MAX constraints of the transfer using L and G rows in T.ROWS. Additionally, I used two columns whose intersections with Epenrow allow PIMS to violate these constraints for a certain penalty:

* TABLE ROWS
* User Defined Rows
USER       TLNXBCR.   UONEONE.   UDNPEN1.   UUPPEN1.
*
LLNXBCR.   1.00000    -1.00000   2.50000    -1.00000
GLNXBCR.   1.00000    -1.00000   5.00000    1.00000
*
Epenrow                          -10.00000  -10.00000

Row Epenrow works fine in XLP, as PIMS generates a global Epenrow row for each period automatically. However, in DR there needs to be an Epenrow with one plant identifier, like Epenrow.A. That assigns the transfer penalty to one particular plant, which works fine as the penalties are collected globally. Please note that the activity of column UONEONE needs to be fixed to 1 via T.BOUNDS:

* TABLE BOUNDS
* User Defined Bounds
         TEXT  MIN  MAX  FIX
*
UONEONE                   1.00000

Keywords: transfer penalty mpims xpims References: None
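The idea behind the penalty columns can be illustrated outside PIMS as a generic "soft bound": the transfer may leave the [MIN, MAX] band, but any violation is charged to the objective at the Epenrow rate. This sketch shows the concept only, not the exact PIMS matrix coefficients; the function name and numbers are illustrative.

```python
# Conceptual sketch of a soft MIN/MAX bound implemented with penalty
# slack variables, as done above with UDNPEN1/UUPPEN1 against Epenrow.

def soft_bound_penalty(activity, lo, hi, penalty_rate):
    """Objective penalty incurred when 'activity' leaves the [lo, hi] band."""
    down_slack = max(lo - activity, 0.0)   # plays the role of UDNPEN1
    up_slack = max(activity - hi, 0.0)     # plays the role of UUPPEN1
    return penalty_rate * (down_slack + up_slack)

print(soft_bound_penalty(3.0, 2.5, 5.0, 10.0))  # inside the band -> 0.0
print(soft_bound_penalty(6.0, 2.5, 5.0, 10.0))  # 1.0 above MAX  -> 10.0
```

In the LP, the optimizer trades this penalty cost against the economics of violating the bound, which is exactly what the Epenrow intersections achieve.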
Problem Statement: Example of power distribution of a two-stage compressor linked with a common shaft
Solution: The attached dynamic case is an example of a two-stage variable-speed compressor linked by a common shaft. The compressors are driven by a gas turbine whose power is transmitted to the shaft. The turbine power and the gear ratios of the compressors are specified in a spreadsheet. The linker power is the total power transmitted to the shaft (refer to knowledge base Solution ID 126383). If the power is entered in kW, it should be converted to kJ/h, kcal/h or Btu/h before exporting the value to the compressor. The gear ratio ensures that the linked compressors have the following speed relationship: Speed of compressor 2 = Speed of compressor 1 * Gear Ratio. The dynamic specifications should include the Linker Power loss and the compressor characteristic curves. When the compressors are linked, a power balance is performed across the common shaft (see the attached case). Keywords: Linker Power, Gear Ratio, Compressors References: None
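The two numeric relationships stated above, the kW to kJ/h conversion needed before exporting the power and the gear-ratio speed link, can be sketched as follows (variable names are illustrative):

```python
# Sketch of the unit conversion and the speed relationship described above.

KJ_PER_H_PER_KW = 3600.0  # 1 kW = 3600 kJ/h

def turbine_power_kj_per_h(power_kw):
    """Convert the spreadsheet's turbine power from kW to kJ/h."""
    return power_kw * KJ_PER_H_PER_KW

def compressor2_speed(speed1_rpm, gear_ratio):
    """Linked compressors: Speed2 = Speed1 * Gear Ratio."""
    return speed1_rpm * gear_ratio

print(turbine_power_kj_per_h(1000.0))  # -> 3600000.0
print(compressor2_speed(3000.0, 1.5))  # -> 4500.0
```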
Problem Statement: How do I validate the Fisher Equation for Liquid Flow through a Valve in HYSYS Dynamics?
Solution: It is possible to validate the Fisher valve equation for liquid flow through a spreadsheet using HYSYS Dynamics. The Fisher equation for liquid flow through the valve (equation 1) is documented in the Online Help Guide. In the attached HYSYS simulation, equation 1 is implemented in the HYSYS spreadsheet for the Equal%, Quick Opening% and Linear valve opening characteristics. A valve position ramp is introduced by altering the mass flow rate specified in stream 3. The simulation shows three strip charts, one each for the Linear, Quick Opening and Equal Percentage opening characteristics. Each strip chart shows the spreadsheet calculation overlapping the HYSYS valve calculation. The screenshot below shows this overlap for the valve position ramp with the opening characteristic set to Quick Opening%. Red: Valve Opening %; Green: Mass Flow Rate, HYSYS Calculation; Blue: Mass Flow Rate, Spreadsheet Calculation. Keywords: HYSYS Dynamics, Valve, Liquid References: None
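As a rough cross-check outside HYSYS, the textbook liquid valve sizing relation Q = Cv_max · f(x) · sqrt(dP/SG), where x is the fractional opening and f(x) the opening characteristic, can be sketched as below. This is an assumption-laden sketch: the exact Fisher form used by HYSYS is the one in the Online Help, and the rangeability R and the square-root quick-opening form are textbook conventions, not taken from this article.

```python
import math

# Assumed rangeability for the equal-percentage characteristic (textbook value).
R = 50.0

def characteristic(x, kind):
    """Fraction of maximum Cv at fractional opening x (0..1)."""
    if kind == "linear":
        return x
    if kind == "equal_percentage":
        return R ** (x - 1.0)
    if kind == "quick_opening":
        return math.sqrt(x)
    raise ValueError(kind)

def liquid_flow_usgpm(cv_max, x, dp_psi, sg, kind):
    """Textbook liquid sizing: Q = Cv_max * f(x) * sqrt(dP / SG)."""
    return cv_max * characteristic(x, kind) * math.sqrt(dp_psi / sg)

# At full opening all three characteristics coincide: f(1) = 1
print(liquid_flow_usgpm(100.0, 1.0, 25.0, 1.0, "linear"))  # -> 500.0
```

Ramping x from 0 to 1 with each characteristic reproduces the qualitative shapes seen in the three strip charts.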
Problem Statement: Do you have an example of modeling a heated pipe in Aspen Plus Dynamics?
Solution: Refer to the attached example that considers the following scenarios: 1. Considering heat transfer with environment 2. Including uniform heat flux 3. Pipe initially heated at elevated temperature Keywords: pipe, heat flux, environment References: None
Problem Statement: How do I automate a login process in a workspace?
Solution: Please follow the steps below to use the code shown at the end of this Solution:
Open a blank Excel workbook.
Open the VB editor for Excel.
Add a reference to AZClientTools Version 16.2 from Tools | References.
Paste the code below into the VB editor for the ThisWorkbook object.
Save the file (as a *.xlsm file for the Excel 2007 version).
Now, whenever you open this Excel file, it will automatically connect to the workspace named AZ162. This is just a sample to demonstrate how it works with Excel. Please test it thoroughly in your test environment before applying it to a live environment.

Option Explicit
Option Base 0

Public WithEvents g_objWorkspace As AZClientTools.AZWorkspace

Private Sub Workbook_Open()
    Set g_objWorkspace = New AZClientTools.AZWorkspace
    Dim colObjectIDs As New Collection
    'The line below assigns the name of the workspace to be opened
    g_objWorkspace.Workspace = "AZ162"
    'The line below connects to the AZ162 workspace
    g_objWorkspace.Connect
    MsgBox "Workspace " & g_objWorkspace.Workspace & " is connected"
End Sub

Keywords: VBA References: None