Problem Statement: What happens to my session of HYSYS if I lose connection to my network license server? | Solution: The following series of steps will show what occurs if your client machine running HYSYS loses connection to your network license server:
1) After the machine becomes disconnected from the network license server, two license error messages will pop up on the screen after 5 minutes without a connection.
2) The HYSYS application will not allow you to do any further work and forces you to save your work and exit the program. An error message will also appear explaining this.
3) If the program is reopened, you will continue to get the error message shown in step 2 until a connection to the license server is made again.
Keywords: License error
Lose license connection
License connection
HYSYS license
References: None |
Problem Statement: Why must property indices like RVI and PPI be entered directly into Table MICROCUTS? | Solution: When using Table ASSAYS, property indices like RVI, PPI and others can be defined by a calculation that resides in Property Calculation Formulas or through entries in Table INDEX. PIMS will find RVP or POR information in the Assay table and use the provided relationship to calculate the corresponding index values like RVI and PPI. Unfortunately, neither of these is compatible with the use of ShortCut Distillation and Table MICROCUTS. Therefore, all property index values must be entered directly into Table MICROCUTS.
Note: As of V11, Table MICROCUTS is compatible with Table INDEX and Property Calculation Formula entries, so there is no longer a need to enter all the indices directly in Table MICROCUTS.
Keywords: None
References: None |
Problem Statement: How can I create an input solution file from a previous run? | Solution: In Aspen PIMS V11, there is a new feature that creates an input solution file from a previous run that is saved in the results database. To use this feature, follow these steps:
Select Run | Create Input Solution File (red box)
Select the desired solution run from which you’d like the input solution file created (green box)
Select the specific case from which you’d like the input solution file created (blue box)
Designate the desired file extension for the created file (purple box)
Keywords: None
References: None |
Problem Statement: How can I verify that aspenONE Process Explorer was installed successfully? | Solution: The following steps validate that aspenONE Process Explorer can launch and trend historical data successfully.
1) Launch aspenONE Process Explorer by opening the browser and navigating to http://<servername>/ProcessExplorer/.
2) Click on the Process Explorer icon to launch the application.
3) Click on the large “+” to create a new trend.
4) From the “Basic” group, select “Trend”.
5) If you know the tag names, you can start typing them in the search box, and the look-ahead will list tags matching the search string. You can use the * wildcard to find all tags matching a pattern, e.g. fv_s60*.
6) To select multiple tags, click on the magnifying glass where you can use search filters and select multiple tags to add to the plot. Click on “Add”, then click on “Close”.
7) The result will be a trend showing the values of all three selected tags.
Keywords: validate A1PE trend
A1PE install
References: None |
Problem Statement: When you list the Batch Handles using the Aspen Production Record Manager Excel Add-in, you will see that the Batch Handles are sorted in descending order. This is by design and cannot be changed when the most recent batches option is used as the condition.
This Knowledge Base article shows how to change the order of the Batch Handles in the legacy APRM Add-in if the batch handle queries use a filter time condition. | Solution: To change the order in which the Batch Handles are displayed in the Excel Add-in when using a filter time condition, you can edit Production Record Manager.Profile.XML, located in the folder c:\programdata\aspentech\production record manager\
If the BatchAscFilterbyTime flag is set to 0 or is not set in the profile XML file, the query will output batch handles in descending order. The default value is 1, which gives ascending order.
Open the XML and check whether the value exists; if not, add it as shown in the (abbreviated) example below:
<?xml version="1.0"?>
<AspenSystemProfiles sharedDirectory="" currentProfile="Profile1" product="Production Record Manager">
  <AspenSystemProfile name="Profile1">
    <Server>
      <Asynchronous>
        <Path value="C:\ProgramData\AspenTech\Production Record Manager\Async\"/>
      </Asynchronous>
      ...
      <BatchAscFilterbyTime value="1"/>
    </Server>
    ...
  </AspenSystemProfile>
</AspenSystemProfiles>
Once you change the value, make sure to 'Save' the XML, then close and reopen Excel.
Keywords: Aspen Production Execution Manager
Batch.21
Batch handles
Excel add-in
References: None |
Problem Statement: While upgrading the Aspen InfoPlus.21 server, do I need to upgrade other MES applications? | Solution: If the Manufacturing Execution Systems (MES) suite products are installed on a single machine, then you must upgrade all of the products when upgrading the Aspen InfoPlus.21 server. If the MES suite products are installed on multiple machines, then you may choose which products to upgrade based on criticality.
Note: AspenTech recommends upgrading all Manufacturing Execution Systems suite products, such as Aspen InfoPlus.21 Server, Aspen Cim-IO Server, Aspen Data Source Architecture Server, Aspen Framework Server and Aspen Production Record Manager, to the same version.
Manufacturing Execution Systems suite products support interoperability between multiple versions. Refer to the Manufacturing Execution Systems interoperability document for version compatibility between applications. Based on product interoperability, customers can decide which products to upgrade.
Keywords: Product compatibility
MES Products
IP21 version compatibility
IP21 Upgrade
References: None |
Problem Statement: What are my choices for modeling crude unit cut points? | Solution: In Aspen PIMS there are 3 ways to model crude cutpoints. Of course, customized structure can be made to handle additional options, but these are the 3 standard ways to model crude unit cut points with PIMS.
Classic PIMS crude structure
Main cuts are designated and characterized
Small swing cuts are configured and characterized
Swing cut distribution (up/down) is optimized by PIMS
Properties of the streams above and below are adjusted for the swing amount.
All portions of the swing stream have the same properties
Swing Cut Gradients
Requires PIMS Advanced Optimization (PIMS-AO)
Same configuration as classic PIMS crude structure
Check option box to turn this feature on
When this feature is activated, properties for the up and down portions of the swing cut are adjusted to account for expected gradients of the properties. For example, if half the swing cut goes up and half goes down, the sulfur value for the down portion would be higher than the sulfur value for the up portion. These corresponding values would then be used to adjust the destination streams.
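As a toy numerical illustration of this gradient adjustment (all numbers below are hypothetical, not PIMS values):

```python
# Hypothetical swing-cut sulfur gradient illustration (not PIMS data).
swing_sulfur_avg = 1.00   # wt% sulfur of the whole swing cut
gradient = 0.20           # assumed top-to-bottom sulfur spread across the cut

up_frac = 0.5                                    # half goes up, half goes down
sulfur_up = swing_sulfur_avg - gradient / 2.0    # lighter (upper) portion: lower sulfur
sulfur_down = swing_sulfur_avg + gradient / 2.0  # heavier (lower) portion: higher sulfur

# The mass-weighted blend of the two portions still matches the whole-cut value.
blend = up_frac * sulfur_up + (1.0 - up_frac) * sulfur_down
assert abs(blend - swing_sulfur_avg) < 1e-12
```

The point is only that the gradient redistributes the property between the two destinations while conserving the overall cut average.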
Cut Point Optimization (Short Cut Distillation)
Requires PIMS Advanced Optimization (PIMS-AO)
Completely different crude configuration tables are used
Crude cut information is generated for many very small cuts
Crude towers are modeled with SI factors
Cut points are optimized as crude selection is optimized by PIMS
You can learn more about each of these options in the PIMS Help.
Keywords: None
References: None |
Problem Statement: After adding a new Aspen Production Record Manager (APRM) Area, and configuring the triggers and characteristics, batches are not being recorded. In the <area>.errors log file, the following message is displayed:
1/24/02 6:32:01 PM : Failed while looking up the batch. B21BCU-55072: Failed while looking up a batch by its designators: B21BAI-60150: Additional designator values required
Where <area> is the name of the newly added Batch.21 Area. | Solution: The error message is about the action that the BCU is performing. Before creating a new batch, it tries to look up inside the APRM database whether a batch with this designator value already exists. In this case, it cannot find these previous designator values.
Under these circumstances, the APRM database needs a re-initialization.
Stop and delete the BCU-script from the scheduler in the BCU Administrator.
Delete all recent cached batches (under the Server menu).
Select update batch configuration (under the Server menu).
Stop and start the BCU server - in some cases this is the only step needed, so try this first.
Select update batch configuration (under the menu Server).
Verify and install the BCU-script.
Keywords:
References: None |
Problem Statement: Is it possible to regress Henry parameters for light gases in a polymer system? | Solution: Yes, it is possible to regress Henry parameters for a polymer system, though the polymer needs to be specified as an Oligomer in the regression.
An example of fitting the solubility of Nitrogen in polyethylene is attached (Estimate Henry parameters for N2 in polyethyleneV9.bkp). This file will run in Aspen Plus V9 and higher. More details and screen shots are in the file Fitting Solubility of Light Gases in Polymers.docx.
Component Definition
Start with the Aspen Plus Polymer with Metric Units template. Enter components associated with the light gas, each type of polymer segment, and each type of polymer. If a polymer is not in the database, the generic polymer component “POLYMER” can be used. Be sure to set the segment component type to “Segment” and the polymer component type to “Oligomer” (the “Polymer” component type is not supported in property data regression or property analysis; the oligomer representation is used to characterize the polymer).
Go to the Polymers form, enter the segment type (repeat unit), then specify the number of segments of each type for each polymer. The specified number of segments should correspond to the number-average molecular weight of the polymer (for a co-polymer, the sum of the segment masses should add up to the expected number-average molecular weight). In many journals, the average molecular weight of the polymer is not reported; in this situation use a relatively high chain length such as 1000 units. The influence of the chain length on phase equilibrium levels out after a chain length of about 100, so this assumption will have little impact on the simulation results for phase equilibrium. Viscosity is the one exception to this rule, as solution viscosity is very sensitive to chain length over a wide range. When fitting viscosity, you can enter the expected polydispersity index (ratio of weight-to-number average molecular weights) using the unary property parameter “POLPDI”.
Defining Experimental Data
The data used in this example are based on the work of Sato et al, Fluid Phase Equilibria 162 1999 261–276. The paper can be found at:
http://www-eng.lbl.gov/~shuman/NEXT/MATERIALS&COMPONENTS/Xe_damage/solubility_diffusion_N2_CO2_PP_PS_PE.pdf
The data in the paper report temperature, pressure, and gas solubility (in g gas / g polymer). The data have been converted in Excel to mass fractions for entry into Aspen Plus. In the Properties environment, go to the Data object manager, click “New”, and select type “Mixture”
In this example, we are dealing with gas solubility in polymer. Select the light gas and polymer components and data type “TPXY”. When working with polymers, always use mass fractions to avoid confusion between “true” molecular basis and the “reference” basis used in Aspen Plus (typically the molecular weight of a single repeat unit is used as the reference molecular weight of the polymer, but in practice any value can be used; Aspen Plus converts the data to the true molecular weight basis internally, using the number-average molecular weight of the polymer component).
Enter the experimental data as shown below. Since the polymer is not volatile, specify the vapor phase composition as 100% light gas. Since the vapor phase is pure, you must also remove the polymer from the list of constraints on the Constraints tab.
Fitting Henry Law Coefficients
When using an activity-coefficient method such as POLYNRTL, you may choose to represent light gases such as nitrogen using the Henry-Component approach. Create a Henry list, reference it in the global property method, and include the light gases of interest in the Henry List.
Next, go to Parameters, Binary Interaction, HENRY and enter an initial value of the Henry coefficient for the light gas in the polymer. The initial value will be replaced with a fitted value.
Create a new data regression case, reference the TPXY data set created earlier.
Go to the “Parameters” tab sheet and enter the list of HENRY parameters. The Henry coefficient equation is:
ln(H) = a + b/T + c*ln(T) + d*T + e/T^2, where a–d are Henry parameters 1–4, parameters 5 and 6 set the lower and upper temperature bounds, and parameter 7 is e.
To avoid convergence problems, run one round of regression using only the first Henry term. This gives an approximate fit of the data. The fit can be improved in subsequent rounds by adding additional terms.
With only a single parameter the quality of the agreement will be limited. Expanding the regression to three terms gives a much better match to the data.
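The staged fitting idea can be illustrated outside Aspen Plus with a small linear least-squares sketch (the temperatures and ln H values below are made up for illustration; they are not the Sato data):

```python
import numpy as np

# Hypothetical Henry's-constant data: ln H at three temperatures [K].
T = np.array([433.2, 453.2, 473.2])
lnH = np.array([8.10, 8.35, 8.55])

# Round 1 -- one term only: ln H = a.  The least-squares "a" is just the mean.
a1 = lnH.mean()
resid1 = np.abs(lnH - a1).max()

# Round 2 -- three terms: ln H = a + b/T + c*ln(T), solved by linear least squares.
X = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T)])
coef, *_ = np.linalg.lstsq(X, lnH, rcond=None)
resid3 = np.abs(lnH - X @ coef).max()

# Adding terms can only improve (or match) the fit.
assert resid3 <= resid1 + 1e-12
```

This mirrors the recommended workflow: fit one term first for a stable starting point, then add terms to tighten the match.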
The figures below show the model results (solubility of N2 in HDPE) versus data for the single-parameter and three-parameter data regression cases.
Data Regression Results – Predicted v Measured Solubility of N2 in Polymer
One Parameter Fit
Three Parameter Fit
Keywords: Regression
polymer
Henry
References: Solubility Data of Light Gases in PP, HDPE, and Polystyrene, Y. Sato et al., Fluid Phase Equilibria 162 (1999) 261-276 |
Problem Statement: In our refinery there are multiple modes to operate a unit and we want Aspen PIMS to select the best financial choice. | Solution: The best approach is to use binary variables. In the attached example, component AAA enters reactor RX1. This reactor can operate in 3 modes: mode AA1 gives a yield of 80% BBB, 10% CCC and 10% DDD; mode AA2 gives 10% BBB, 80% CCC and 10% DDD; mode AA3 gives 10% BBB, 10% CCC and 80% DDD. The outputs of the reactor are products that will be sold. Aspen PIMS will decide the mode that gives the highest revenue (OBJFN).
This can be achieved by modeling the RX1 reactor as follows:
*TABLE   RX1      Reactor
*
ROWNAMES   TEXT                        AA1     AA2     AA3
*
*  MATERIAL BALANCE ROWS
VBALAAA    Balance for component AAA     1       1       1
VBALBBB    Balance for component BBB  -0.8    -0.1    -0.1
VBALCCC    Balance for component CCC  -0.1    -0.8    -0.1
VBALDDD    Balance for component DDD  -0.1    -0.1    -0.8
*
*  CAPACITY ROWS
CCAPRX1    RX1 capacity                  1       1       1
In the table above, component AAA enters through the modes AA1, AA2 and AA3, but we must remember that by defining a MIP problem only one of them will have non-zero activity. Note: the names in the matrix for these variables are SRX1AA1, SRX1AA2 and SRX1AA3.
We will achieve this by defining 3 binary variables (VARIAB1, VARIAB2 and VARIAB3) in the MIP table.
*TABLE   MIP
*
*  BIVALENT VARIABLES
ROWNAMES   TEXT                  BV
*
VARIAB1    Variable for mode 1    1
VARIAB2    Variable for mode 2    1
VARIAB3    Variable for mode 3    1
The equations (rows) that will be used to select just one mode are:
SRX1AA1 - 1000*VARIAB1 <= 0    (row 1)
SRX1AA2 - 1000*VARIAB2 <= 0    (row 2)
SRX1AA3 - 1000*VARIAB3 <= 0    (row 3)
VARIAB1 + VARIAB2 + VARIAB3 = 1    (row 4)
From row 4, exactly one binary variable must have an activity of 1, which also constrains rows 1, 2 and 3. For example, if VARIAB1 takes the value 1, then to satisfy rows 2 and 3, SRX1AA2 and SRX1AA3 must have zero activity.
These rows will be defined in table ROWS as follow:
*Table   ROWS     USER DEFINED ROWS
*
           TEXT     SRX1AA1  SRX1AA2  SRX1AA3  VARIAB1  VARIAB2  VARIAB3  FIX  MAX  RHS
rOWMOD1    row 1       1                        -1000                           1
rOWMOD2    row 2                1                        -1000                  1
rOWMOD3    row 3                         1                        -1000         1
SELECTM    row 4                                   1        1        1      1         1
***
By including this structure in your model, you can make Aspen PIMS select one mode over the others.
An alternative is to designate the variables SRX1AA1, SRX1AA2, and SRX1AA3 as part of a Special Order Set in Table MIP instead of making them Bivalent variables. In this case, you would designate a Type 1 SOS which only allows one of the grouped variables to have activity. The entries in Table ROWS are not necessary for this approach.
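A quick way to sanity-check the big-M logic in rows 1-4 is to test candidate activities outside PIMS (a hedged Python sketch; the flow values are arbitrary, and M=1000 matches the coefficient used above):

```python
M = 1000  # big-M coefficient from rows 1-3 above

def feasible(s, y):
    """Rows 1-3: S_i - M*Y_i <= 0 for each mode; row 4: exactly one binary is 1."""
    return all(si - M * yi <= 0 for si, yi in zip(s, y)) and sum(y) == 1

# With VARIAB1 = 1, only mode AA1 may carry feed:
assert feasible((500, 0, 0), (1, 0, 0))
assert not feasible((500, 200, 0), (1, 0, 0))  # mode AA2 active without its binary
assert not feasible((500, 0, 0), (1, 1, 0))    # two modes selected at once
assert not feasible((1500, 0, 0), (1, 0, 0))   # exceeds the big-M bound of 1000
```

Note that the big-M coefficient also acts as an upper bound on mode throughput, so it must be chosen larger than any realistic feed rate.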
Keywords: MIP, modes, unit operation, bivalent
References: None |
Problem Statement: Does the API insert historical data more efficiently than SQLplus? | Solution: It does not matter how the inserted data is written (SQLplus or API); the results will be the same. Any history writer puts history events in the event queue. These events are then de-queued by the archive program and placed in the archive filesets. It is the archive program that is responsible for the construction of the archives, not the history writing programs.
Let's say you use SQLplus to insert large amounts of historical data into InfoPlus.21. Your experience when inserting large amounts of data using SQLplus was that for a given file set the arc.key file tended to be ~1/4 the size of the arc.dat file. So, if the data you inserted created a 200 MB arc.dat file, then the arc.key file was 50 MB. This is not the typical behavior during the normal operation of the historian.
Upon further investigation of the file set summary, you discovered that the history system was creating the smallest size records (1/4 K) in the arc.dat file - instead of using any of the larger sizes. Because the small records were being used, this made the total number of records in arc.dat very large. Consequently arc.key became very large in order to accommodate all the pointers to the records in arc.dat.
Now why is this happening...
The archive program is optimized to handle 'normal' history input. 'Normal' means new history events always have timestamps more recent than the previous history event already in the archive. In other words, normal history is current history. Inserting history is considered a special circumstance. This means there is a lot of special code to handle the situation. Because insertion is considered a special occurrence that does not happen often, the archive program creates small history records in the filesets to hold the inserted data.
When current data is written to history and a new history record is needed, the size of the new record is calculated by looking at the past data rate of the history repeat area and the current fileset's data consumption rate. This 'prediction' cannot be done with inserted data, simply because the archive program cannot predict where data will be inserted.
Here is how you can fix this problem ==>
In the repository record, there are two fields called INSERT_SIZE and CUR_INSERT_SIZE. These fields control the size of the 'insert' records that are created when history is inserted. The values in these fields should be powers of 2 (512, 1024, 2048, etc.) and should not exceed 65536. These fields do not appear on the Property sheet for the repository, so a new value must be entered directly into the field from the Administrator. DO NOT change CUR_INSERT_SIZE. Put the new value to be used into INSERT_SIZE, then restart IP.21. The new value should take effect on the restart. After the restart you should see both fields with the same value.
IMPORTANT NOTES:
Insert size should only be changed on systems where large amounts of data are inserted in a contiguous time range. Otherwise, there is really no need to mess with it.
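Following the guidance above (powers of two, not exceeding 65536 -- the 512 lower bound below is inferred from the listed examples, not a documented limit), a quick validity check for a candidate INSERT_SIZE might look like:

```python
def valid_insert_size(n: int) -> bool:
    """True if n is a power of two in the suggested range (512 .. 65536)."""
    return 512 <= n <= 65536 and (n & (n - 1)) == 0

assert valid_insert_size(512)
assert valid_insert_size(65536)
assert not valid_insert_size(1000)     # not a power of two
assert not valid_insert_size(131072)   # exceeds the 65536 limit
```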
Keywords: API
API calls
References: None |
Problem Statement: How to define the Temperature approach in an REquil block? | Solution: REquil is usually used to model a reactor in which reactions reach or nearly reach equilibrium. In practice, reaction equilibrium is hard to reach because of the limited residence time in the reactor. As a result, the Temperature approach is used to specify how far the real reaction is from the equilibrium state. However, you should pay attention to whether a positive or a negative value is used for the Temperature approach specification. For an exothermic reaction, a positive value will decrease the conversion and a negative value will increase it. Since an increased conversion makes no sense in practice, you should always input a positive temperature approach for an exothermic reaction.
Conversely, it is practical to use a negative temperature approach for an endothermic reaction, which will give a comparatively smaller conversion.
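The sign convention can be checked with a toy van't Hoff calculation (the ΔH and ΔS values below are hypothetical; this is a standalone sketch, not an Aspen Plus calculation):

```python
import math

R = 8.314        # J/(mol*K)
dH = -50_000.0   # J/mol -- hypothetical exothermic reaction (dH < 0)
dS = -100.0      # J/(mol*K) -- hypothetical reaction entropy

def K(T):
    """Equilibrium constant from the Gibbs energy: K = exp(-dG/(R*T))."""
    dG = dH - T * dS
    return math.exp(-dG / (R * T))

T, approach = 500.0, 10.0
# A positive temperature approach evaluates K at T + approach; for an exothermic
# reaction K falls with temperature, so the predicted conversion is reduced.
assert K(T + approach) < K(T)
```

For an endothermic reaction (dH > 0) the inequality flips, which is why a negative approach is the physically meaningful choice there.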
Keywords: REquil, Temperature approach, Conversion, Exothermic, Endothermic
References: None |
Problem Statement: What is the difference between the 'Def Pnlty' and 'Exc Pnlty' columns on the Product screen and 'ExcCost' and 'DefCost' on the Optimize Blend screen? | Solution: On the product screen, the 'Def Pnlty' and 'Exc Pnlty' columns are used to penalize any product property that falls outside its MIN/MAX property specification range. You can think of them as similar to infeasibility breakers.
For example, in the product screen shown above, we have an RVP spec of MAX=9. Any product RVP greater than 9 will be penalized at a cost of 20. Likewise, for the RON MIN spec, any product RON below 87 will be penalized at a cost of 10.
The giveaway costs, by contrast, apply to product properties that meet the spec but are better than required. For example, on the second screen above, property RVP has a giveaway cost of 200. With the MAX spec RVP=9, any product RVP less than 9 will incur the giveaway cost of 200.
Note: the property giveaway cost values are entered through the database table as shown below.
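To make the distinction concrete, here is a toy sketch with hypothetical linear per-unit costs (the actual PIMS cost treatment may differ):

```python
RVP_MAX = 9.0
EXC_PNLTY = 20.0       # cost per unit of RVP above the MAX spec (infeasibility breaker)
GIVEAWAY_COST = 200.0  # cost per unit of RVP below the MAX spec (quality given away)

def rvp_cost(rvp: float) -> float:
    if rvp > RVP_MAX:
        return (rvp - RVP_MAX) * EXC_PNLTY   # spec violated: excess penalty applies
    return (RVP_MAX - rvp) * GIVEAWAY_COST   # spec met: giveaway cost applies

assert rvp_cost(9.0) == 0.0                  # exactly on spec: no cost
```

The penalty discourages infeasible blends, while the giveaway cost discourages over-delivering quality that the customer does not pay for.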
Keywords: None
References: None |
Problem Statement: Why does my Aspen Mtell View page not load completely?
Or you may see the error message below:
HTTP Error 404.7 - Not Found
The request filtering module is configured to deny the file extension. | Solution: This article explains how to address the Aspen Mtell View page loading issue. Users will experience this issue if certain file extensions are blocked or denied on the Aspen Mtell Server machine.
1. Login to Aspen Mtell Server
2. Launch Internet Information Services (IIS) Manager
3. Expand Server -> Default Web Site -> AspenTech -> AspenMtell -> MtellView
4. Select Request Filtering within MtellView
5. Click Edit Feature Settings...
6. Select Allow unlisted file name extensions and Click Ok
7. Launch Aspen Mtell View once again and now it should load the page successfully.
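Steps 4-6 above correspond to the `requestFiltering` section of the site's web.config. A minimal sketch of the equivalent setting (standard IIS schema, shown here for reference only) is:

```xml
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- Equivalent of "Allow unlisted file name extensions" in step 6 -->
        <fileExtensions allowUnlisted="true" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```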
Keywords:
References: None |
Problem Statement: This Knowledge Base article describes in detail how Tokens are consumed by APC Desktop (client) applications. | Solution: Starting in V7.1, under Token-based agreements, all APC Desktop (client) applications use a single license key. This key (SLM_APC_Builder) is checked out both at install and run time. These APC Desktop applications include:
Product                                Sub Program
Aspen Process Control                  APC Builder
Aspen DMCplus                          Model.exe
Aspen DMCplus                          Simulate.exe
Aspen Nonlinear Controller             Apollo_mod.exe
Aspen Inferential Qualities            IQModel.exe
Aspen Inferential Qualities            IQConfig.exe
Aspen Process Statistical Analyzer     IQPowertools.exe
Aspen Watch Performance Monitor        AWMaker.exe
Aspen Watch Performance Monitor        PIDWatch.exe
Aspen Watch Performance Monitor        SmartAudit.exe
Aspen RTO Watch Performance Monitor    RTOMaker.exe
Aspen Process Sequencer                RecipeExplorer.exe
In order for this functionality to be enabled, in addition to a Token-based license file being in place, each client must be configured using the APC License Scheme Selector tool. Choose either the APC or APC (Olefins) schemes:
Important: Once the scheme is selected, a single user can open up to four of the applications listed above and only 4 tokens will be consumed. This behavior can be observed by using the WLMAdmin Tool to monitor the SLM Server. In the example below, we can see a client using four APC applications while consuming only 1 license:
Client PC: 4 APC Desktop Applications: Aspen Nonlinear Controller Model, Aspen IQModel, Aspen DMCPlus Simulate and Aspen DMCPlus.
SLM Server - SLM_APC_Builder key shows 4 Applications, 1 License being consumed.
SLM Server - SLM_Pool key shows 1 user and 4 Tokens being consumed.
Keywords: APC Tokens
SLM_APC_Builder
APC License Scheme Selector
References: None |
Problem Statement: For an immiscible system, is it possible to create a TXXY plot? | Solution: Starting in V11, Binary analysis includes the new types Txx, Txxy, and Pxxy, which compute liquid-liquid equilibrium in addition to or instead of vapor-liquid equilibrium. With the new types you can generate Txy and Pxy plots which include the liquid-liquid behavior as well as the vapor-liquid behavior.
TXXY is a plot of Temperature (T) versus vapor-liquid (y and x) and liquid-liquid (x1 and x2) compositions at given pressures for two-liquid systems. PXXY is a plot of Pressure (P) versus vapor-liquid (y and x) and liquid-liquid (x1 and x2) compositions at given temperatures for two-liquid systems.
For Txx and Txxy analysis, specify a lower temperature limit for the liquid-liquid equilibrium calculations. For Txx analysis, also specify an upper temperature limit for the liquid-liquid equilibrium calculations. (For Txxy analysis, the calculation extends to the top of the two-liquid region.)
For example, for a Butanol-Water mixture:
You may need to use a higher number of temperature intervals to get closer to the upper consolute temperature point.
The point where the curves inflect probably indicates the solid phase region. You can use the lower temperature limit to avoid this transition.
Keywords: TXXY, PXXY
References: None |
Problem Statement: Can you specify different binary parameters for groups in the same UNIFAC main group? | Solution: UNIFAC functional groups are categorized into main groups and subgroups. A given main group can have more than one subgroup. Each subgroup has a unique group number assigned to it for identification purposes. Non-ideality is described by the group-group binary interaction parameters, which are defined for main groups, not for subgroups. Because of this, it only makes sense to specify the same parameters for all subgroups of a main group when paired with another functional group.
However, Aspen Plus or Aspen Properties don't produce any warning messages if you use different binary parameters for subgroups in the same main group. It means that users should pay more attention to this and double check the values you are using.
Keywords: UNIFAC, Binary parameters, Main group, Subgroup
References: None |
Problem Statement: Flowsheet Analysis may send a WARNING to the Control Panel regarding INFORMATION TEAR variables (see the example below). What are INFORMATION TEAR variables, and how can they be converged?
* WARNING DURING FLOWSHEET ANALYSIS
BECAUSE TEAR-VAR=NO IN CONV-OPTIONS, AN INFORMATION TEAR:
STREAM $WRV482 FROM C-2 TO PUMP-1
WILL NOT BE CONVERGED BY A CONVERGENCE BLOCK. THIS MAY CAUSE
CONVERGENCE PROBLEMS OR MISLEADING RESULTS. IF THIS STREAM IS
A TEAR, SET TEAR-VAR=YES IN CONV-OPTIONS TO CAUSE IT TO BE
CONVERGED. OTHERWISE, PLEASE CHECK THE LOCATION OF FORTRAN
BLOCK C-2 IN THE SEQUENCE AND CONSIDER MOVING
IT BY USING THE 'EXECUTE' KEYWORD OR OTHER METHODS. | Solution: Calculator blocks were originally designed as a feed forward controller for an Aspen Plus simulation. However, you are not prohibited from using a Calculator block in feedback mode instead of feed forward mode. The Calculator variables that are used in feedback mode are the INFORMATION TEAR variables. One example of a variable used in this way would be when a Calculator block is used to set the Make-up flow for a recycle.
Consequently, a simulation could have completed normally and be out of mass balance. However, the sequencing algorithm can detect the presence of INFORMATION TEAR variables.
In the example WARNING above, write variable $WRV482 is being used in feedback mode by Calculator block C-2 to set an input to block PUMP-1.
By default, INFORMATION TEAR variables are not converged. To converge INFORMATION TEAR variables, follow the procedure below:
1. On the Convergence | Conv Options | Defaults form on the Sequence tab, check the Tear calculator export variables box.
2. On the Calculator | Sequence form for each Calculator block, list the Import and Export variables to sequence the Calculator block. Solution 102299 has an example that needs to tear a Calculator write variable.
Keywords: Write Variables
WRV
References: None |
Problem Statement: How to use the SFE Assistant in Aspen Plus? | Solution: Aspen Plus has 2 kinds of solid components: Conventional Inert Solids (CI Solids) and Nonconventional Solids (NC Solids). CI Solids can be used to represent the solid phase of pure components (for example, ice is the solid phase of water). However, CI Solids are inert to phase equilibrium. Aspen Plus uses Reaction Chemistry to model solid-fluid equilibrium (i.e., it handles the formation of the solid as a salt precipitation reaction with an appropriate K-Salt). Conventional components are defined to be fluids (liquid or gas only), and components of type Solid (CI Solid) are used to model solid versions of these components. In a Chemistry block, Salt reactions containing just 1 mole of the conventional component changing to 1 mole of the solid version are used to model the phase change. In earlier versions, users needed to manually define the solid components and the Chemistry block. From V10, a new tool (SFE Assistant) has been added which allows users to quickly set up chemistry for solid-fluid equilibrium.
Open a new case in Aspen Plus and define water in Components|Specifications table.
Click SFE Assistant button below the grid.
Select water and move it to the right column with the arrow button.
Click Next button, you can see that a Solid type component is created. In the meantime, the program creates a reaction in Chemistry folder. You can either change the ID of the new Chemistry object or add the reaction into an existing Chemistry with the options in this window.
If you have the equilibrium constants, you can input them in Chemistry|Object ID|Equilibrium Constants tab. If no equilibrium constants are specified, the program will use the reaction Gibbs free energy to determine the equilibrium point.
Create a flowsheet to check the results.
Keywords: SFE, CI solid, phase equilibrium
References: None |
Problem Statement: How to restore the Aspen Properties Enterprise Database (APED) when using SQLLocalDB.
If you have installed Microsoft SQL Server Express then the information contained in this knowledge base article does not apply.
Please refer to KB Article 126674 for Aspen Properties Enterprise Databases being used with SQL Express or SQL Server.
In most cases, the reason Aspen Plus or Aspen HYSYS does not connect to the database is because the user's APED profile is corrupted or APED has not been restored with administrative rights. | Solution: Important: Always run the Aspen Properties Database Tester (with run as administrator) first to know the database status connection.
Located at Start | All Programs | AspenTech | Aspen Properties
If the connection status fails, follow the instructions below to restore the databases.
Method 1: Database Tester Restore
1. Go to C:\ProgramData\AspenTech\APED Vx.x
Note: You may need to unhide the ProgramData folder via the Organize | Folder and Search Options menu.
2. Delete the user profile folder experiencing the issue
3. Launch the Aspen Properties Database Tester
4. Click Restore Databases Directly
5. Run the Database Tester again and click Start to verify the connection status
Method 2: Command line Manual Restore
How to manually restore the database using the following .bat files.
These .bat files are located at C:\ProgramData\AspenTech\APED Vx.x\DeleteDBInstance.bat and
C:\ProgramData\AspenTech\APED Vx.x\DBRestore.bat
1. Open a DOS window (with run as administrator)
2. Change the root to C:\ProgramData\AspenTech\APED Vx.x
Type: cd C:\ProgramData\AspenTech\APED Vx.x (change Vx.x to your aspen version, i.e V10.0)
3. Type: DeleteDBInstance.bat <Enter>
4. Type: DBRestore.bat <Enter>
Aspen Properties Database Configuration window will appear and should show the restore results.
5. Run the Database Tester - Aspen Properties Vx.x to verify a successful connection
If the restore steps above do not resolve the issue, please email [email protected] to open a case and a consultant will assist you.
Keywords: Database Connection, DeleteDBInstance.bat, DBRestore.bat, and Aspen Properties Database Configuration Tester
References: None |
Problem Statement: If you have installed products like Aspen InfoPlus.21 or Aspen Process Explorer in version V7.2 or above with licenses provided under an Old Commercial Model (OCM) perpetual or legacy token term contract (you had to use the Token Media), then you do not have to reinstall the software after you have set up your new token term license, but you can use the Token Conversion utility instead. | Solution: A conversion tool called Token Conversion Utility has been developed which CS&T or your token administrator can send you to avoid reinstallation with the Token Media.
For Windows 7 and 2008 server, the utility must be run by an administrator user, so the executable has been created so that it does this automatically. Some customers may have IS policies that do not let them write to the registry, but that can be addressed by Support at the time that it happens.
The Utility will run on the machine and inform the user what their current license model is:
1. The user has V7.1 or earlier: the utility can't do anything.
2. The user has OCM and the license is not a valid token license: the utility can't do anything.
3. The user has OCM and the license is a valid token license: the utility will allow you to switch to the NCM.
Depending on the case, the utility will present the appropriate message in a popup window.
In the case in which you can use the utility, the following message will be shown:
Simply select the first radio button (Use aspenONE Licensing Model) and click Apply, then OK. The utility switches the installation from OCM to NCM. The success of the operation is confirmed by the following message:
Note: The Refresh button does the equivalent of closing and opening the utility. This is for the case where the user changes the SLM Configuration Utility parameters.
Keywords: aspenONE MSC Licensing Model
Token licenses
Perpetual licenses
Legacy Aspen Manufacturing and Supply Chain licensing
References: None |
Problem Statement: For a highly immiscible system, why is the vapor phase line in a Txy Analysis plot for a binary mixture a straight line with no points? This does not capture the vapor bubble point line.
For example for a CS2-Water mixture: | Solution: The problem is that for a highly immiscible vapor-liquid-liquid (VLLE) system, the liquid compositions are close to 0 or 1 in each phase. In order to get a range of vapor compositions, vary the mass fraction on a logarithmic scale from 1E-6 to 1 to obtain temperatures spanning the vapor composition range.
In V11, Txxy and Pxxy Analysis plots can be generated to extend the temperature range of the plot for the liquid-liquid equilibrium calculations.
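As a quick illustration (this is not part of Aspen Plus itself), a log-spaced list of feed mass fractions such as the sweep described above can be generated in a few lines. The endpoint exponents and point density below are arbitrary assumptions:

```python
def log_spaced(lo_exp=-6, hi_exp=0, points_per_decade=3):
    """Return mass fractions spaced logarithmically from 10**lo_exp to 10**hi_exp."""
    n = (hi_exp - lo_exp) * points_per_decade
    return [10 ** (lo_exp + i * (hi_exp - lo_exp) / n) for i in range(n + 1)]

fracs = log_spaced()
print(fracs[0], fracs[-1])  # spans 1e-6 up to 1
```

Each value in the list would be used as the feed mass fraction for one point of the analysis.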
Keywords: TXY, PXY, TXXY, PXXY
References: None |
Problem Statement: Why can I not see the Aspen Calc Schedule objects that I migrated from another server? | Solution: When migrating the calculations and libraries from one server to another, you need only to copy the .atc files associated to the calculations and libraries from the old server locations and paste them into the associated folders on the new server. Unlike the migration of the calculations and libraries for Aspen Calc, when migrating the Aspen Calc Schedule groups you should:
Stop the AspenTech Calculator Engine service via the Windows Services applet,
Copy the files Schedules.atc and Schedules_Backup.atc located in C:\ProgramData\Aspentech\Aspen Calc\Bin from the old server to the new server, and
Then restart the AspenTech Calculator Engine service via the Windows Services applet.
Without doing this the Aspen Calculator Engine sees what was in memory when the schedule information was invoked and overrides what is in that folder.
Keywords: Schedules
Aspen Tech Calculator Engine service
References: None |
Problem Statement: How to train a Machine Learning Agent using the Background Server service? | Solution: The user's machine system resources will be used when training the Machine Learning agent with the Background (Client) option. AspenTech recommends using the Background (Server) option when training a Machine Learning agent with Aspen Agent Builder on the user's machine.
Launch Aspen Agent Builder
Select Machine Learning tab
Right click the Machine Learning agent and select Train Agent
Expand Processing and select Background (Server)
Click Start Training
Training Service on the Aspen Mtell Server will train the agent.
Keywords: Training service
Train agent
Anomaly agent training
Hidden agent training
Failure agent training
References: How to horizontally scale Training Service to distribute Agent’s training load? |
Problem Statement: Are there any tips to resolve memory errors for a large simulation with many components?
This error occurs:
SYSTEM ERROR
UNABLE TO ALLOCATE (large number) BYTES OF CONTIGUOUS MEMORY.
REFER TO TROUBLESHOOTING HELP. | Solution: This can happen with large simulations that have complex operations and many components. The engine requires a chunk of contiguous memory for any one block; for example, a RadFrac distillation column with a large number of stages and components needs a correspondingly large allocation, and the engine checks whether enough resources are available.
There are a few solutions that may help.
Make more memory available to Aspen Plus by closing other large applications.
Increasing the virtual memory under the System Control Panel can help indirectly, by making it possible to free memory in use by other applications, but only if doing so allows a contiguous section of memory to be freed. If your hard disk is full, in order to increase virtual memory you may need to clear some disk space for virtual memory and/or put the page file on a different drive. But when the number is a significant fraction of the total system memory, it becomes unlikely that a large enough contiguous section can be allocated, regardless of the virtual memory size.
Reduce the complexity of the problem by splitting it into multiple simulations to run separately, removing unnecessary operations or unnecessary stages in distillation columns, or removing unnecessary components. Pseudocomponent generation by assay analysis can lead to particularly large numbers of components.
Within RadFrac columns that contain only a small subset of the specified components, specify the Max. No. of active components on the RadFrac | Convergence | Algorithm sheet to the maximum number of nonzero components expected on any stage of the column. Be generous in this number if there is any doubt, since RadFrac may crash if you set it too low. This reduces the memory used by RadFrac.
For complicated flowsheets, you can reduce memory usage by clearing the box for Dynamic update of calculation results in Run Settings.
Keywords: None
References: None |
Problem Statement: Where do I find help for automating Aspen Plus in Microsoft Excel or other applications? | Solution: The Aspen Plus Windows user interface is an ActiveX Automation Server. The ActiveX technology (also called OLE Automation) enables an external Windows application to interact with Aspen Plus through a programming interface using a language such as Microsoft's Visual Basic. The server exposes objects through the COM object model.
With the Automation interface, you can:
Connect both the inputs and the results of Aspen Plus simulations to other applications such as design programs or databases.
Write your own user interface to an Aspen Plus plant model. You can use this interface to distribute your plant model to others who can run the Aspen Plus model without learning to use the Aspen Plus user interface.
In order to use the Aspen Plus Automation Server, you must:
Have Aspen Plus installed on your PC
Be licensed to use Aspen Plus
Aspen Plus and Aspen Properties now share the same type library, happ.tlb, which is located in the APrSystem GUI\xeq directory.
The out-of-process server is AspenPlus.exe.
Before you can access the Aspen Plus type library from Visual Basic, in the Visual Basic Project References dialog box, you must check the Aspen Plus GUI Type Library box.
Before you can access the Aspen Plus type library from Excel VBA, in the Excel Tools | References dialog box, you must check the Aspen Plus GUI Type Library box.
If Aspen Plus GUI Type Library does not exist in the list, click Browse and find happ.tlb in the directory listed above.
More help on this interface is found under Simulation and Analysis Tools -> Custom Models -> Using Aspen Plus via Automation
Keywords: None
References: None |
Problem Statement: Calculations may fail if an Aspen InfoPlus.21 (IP21) server is renamed or IP21 is moved to another system. Calculations that missed an execution or were scheduled to activate around the restart time will activate when the AspenTech Calculator Engine service is started (or soon after), and fail because the original server is no longer available. This could make it difficult to change and validate the new IP21 server. | Solution: ORIGINAL IP21 SERVER STILL RUNNING:
If the original IP21 server is still RUNNING then the calculation bindings can be changed by using the Change InfoPlus.21 Server dialog. Right click on the Aspen Calc server name and select Change InfoPlus.21 Server to change the bindings. It's also a good idea to validate the bindings by right clicking on the Aspen Calc server name and selecting the Validate InfoPlus.21 dialog.
ORIGINAL IP21 SERVER NOT RUNNING: (three methods for changing the IP21 server)
1) To prevent the calculations from failing when the AspenTech Calculator Engine service starts, Edit the Schedule Group and change the Next Run Time to a future time. Then stop and restart the AspenTech Calculator Engine service and the Aspen Calc GUI. This prevents the calculations from executing and allows time to change the IP21 server and validate the calculations as noted above.
2) Calculations can also be prevented from executing by removing them from the Schedule Group and restarting the AspenTech Calculator Engine service and the Aspen Calc GUI. Then change the IP21 server and validate the calculations as noted above.
3) If there are still problems changing or validating the InfoPlus.21 server that cannot be resolved with the methods above, it may become necessary to edit the Calculation XML file. To do this, right click on the Calculation server and Export the Calculation XML. The bindings can be changed by editing the XML file. Stop the AspenTech Calculator Engine service and the Aspen Calc GUI. Make sure that reliable backups of the calculations exist. Move or delete the calculation files *.atc in the C:\ProgramData\AspenTech\Aspen Calc\Calc folder. Also move or delete any calculation folder created under …\Calc. Do not delete the Edit folder. Then restart the AspenTech Calculator Engine service and the Aspen Calc GUI and Import the modified Calculation XML. It's also a good idea to validate the calculations as noted above.
Keywords:
References: None |
Problem Statement: This knowledge base article outlines the steps required to move the various Schedule Groups from one Aspen Calc Server to another, while maintaining the calculation associations of each group. | Solution: In order to move the schedule groups from one Aspen Calc server to another, you must move the schedules.atc file. The default location of this file is:
C:\ProgramData\AspenTech\Aspen Calc\Bin\schedules.atc
Before moving the file, you must close the Aspen Calc application window and stop the AspenTech Calculator Engine service on the destination node. Restart the service and the AspenCalc application window after moving the file.
Important: A backup copy of the schedules also exists as a file named schedules_backup.atc. If the schedules.atc file is not readable when it is copied to the new system then Aspen Calc will read the backup file upon restart.
Keywords: None
References: None |
Problem Statement: Many customers use Aspen SQLplus to query the DiskHistoryDef definition records for information about their filesets. However, the scripts that used to work prior to InfoPlus.21 v6.x won't return any information on filesets which are numbered greater than 113. Why? | Solution: As of InfoPlus.21 v6.0, when the number of filesets was increased beyond the previous limit of 113, the information about them is now stored in a shared memory space and cannot be accessed by querying DiskHistoryDef records. Instead, a new InfoPlus.21 component, AtIP21HistAdmin.dll, now implements COM methods that allow other applications to programmatically configure history repositories and access history configuration information. So, for example, SQLplus, Visual Basic, and Windows scripts can be used to automate configuration that is normally done via the InfoPlus.21 Administrator.
The new component implements more than one hundred methods that allow you to create/delete repositories, add/remove archive filesets, get/set repository properties, get/set archive fileset properties, and control historian behavior.
Note: For information on the History Administration Scripting Interface, see InfoPlus.21 Administration Help for details.
Keywords:
References: None |
Problem Statement: What are the various prediction error parameters in Aspen DMCplus and what do they mean? | Solution: PREDER & ACPRER: The cycle-to-cycle prediction error (PREDER) is of little use because it tends to be noisy and random. What we look at is the accumulated prediction error (ACPRER) over time and its drift up or down. Ideally, the ACPRER should be around zero; a drift up or down indicates a model mismatch.
When this value is trending upward, it indicates a disturbance (or model error) is decreasing the actual dependent variable value.
When this value is trending downward, it means a disturbance is increasing the actual dependent variable value. There are no rules of thumb for the actual magnitude of the errors because over time it could be large.
The ACPRER is reset to zero when the controller is initialized or when it is greater than 9000.
CV Model Error: Keep in mind that the CV model error is the accumulated error of all the models of that CV with respect to the independents. Once it is determined that the CV has an unacceptable model error, we still have to isolate the model error to a particular MV/CV pair. We can look back in history for periods when only a couple of MVs are moving and go from there, or we can set up a test and move one MV at a time to determine which model is causing the error.
Prediction CV filter: This is used to 'smooth' out the prediction, but it adds lag to the controller. It is not recommended to filter heavily in the DMC engine. If the signal is noisy, consider filtering in the DCS because the sample times are much faster in the DCS. In the DMC engine, there are 3 types of filtering.
Dependent variable prediction error filtering type:
0 (DMC) Traditional DMC prediction error filtering
1 (FIRST ORDER) Apply first order filter to the prediction error
2 (MOVING AVG.) Apply moving average filter to the prediction error
PRERTAU, PRERHORIZ: Controller actions are based on the prediction of dependent variable values (PDEP). The prediction error filter type governs how the controller updates the prediction based on feedback received from the plant. DMCplus filtering takes the difference between the current measurement and the current prediction to calculate a bias that is applied to each element of the prediction array. The first order filter option allows a fraction of the bias to be applied to the prediction array. The fraction is calculated based on the value of the prediction error filter time constant PRERTAU. The moving average filter uses an average of past values of the bias to determine the final bias applied to the prediction array. The number of past values used is specified by the prediction error filter time horizon PRERHORIZ. In addition, an exponential filter will be applied to the error. If PRERTAU is set to zero then a value equal to one-tenth of the PRERHORIZ will be used.
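The first-order and moving-average options can be illustrated with a small sketch. This is a conceptual illustration only, not the DMCplus implementation; the filter fraction and window length below are arbitrary assumptions:

```python
from collections import deque

def first_order_bias(raw_bias, prev_bias, alpha=0.2):
    """Apply only a fraction (alpha) of the new prediction-error bias each cycle."""
    return prev_bias + alpha * (raw_bias - prev_bias)

def moving_average_bias(raw_bias, window):
    """Average the last N raw biases (window is a deque with maxlen=N)."""
    window.append(raw_bias)
    return sum(window) / len(window)

# A step disturbance of 1.0 on the raw bias: the filtered bias approaches
# 1.0 but lags behind it, which is the "smoothing adds lag" trade-off.
bias = 0.0
for raw in [1.0, 1.0, 1.0, 1.0, 1.0]:
    bias = first_order_bias(raw, bias)
print(round(bias, 4))
```

In both cases the filtered bias, rather than the raw cycle-to-cycle error, is what would be applied to the prediction array.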
AVPRER: Average prediction error (AVPRER) is another parameter we can look at to determine how well the CV is predicting. It is heavily filtered because it is the average of the absolute value of the prediction error on every cycle. It is reset to zero when the controller is initialized.
One caveat worth mentioning is that any prediction error in DMCplus is assumed to be model error, and thus it eventually makes it into the prediction. We have various methods of filtering to dampen the effect. Other MPC technologies, such as state-space and nonlinear controllers, have the capability of attributing prediction errors to a combination of measurement noise and unmeasured disturbances (including model errors). This allows for tighter control even in a less-than-ideal environment.
Keywords: Prediction error
PREDER
ACPRER
AVPRER
References: None |
Problem Statement: After we install APS/MBO on our machine and we go into ORIONDBGen.mdb file to see the database structure, do we need to modify anything in that file? | Solution: The answer is no. Please keep the file as it is. The database structure is already pre-defined by Aspen and you do not need to check any checkboxes. For example, if you noticed a table EV_PARAMS (which sounds familiar) wasn't checked for both APS and MBO, you don't need to check it manually either. If the table is not checked, it means it is a legacy table that is not going to be used in the version you installed. So the idea is to keep the file as it is without moving it.
Keywords: None
References: None |
Problem Statement: How are pipe flow regimes calculated in Aspen Plus? | Solution: The Pipe and Pipeline models predict flow patterns, often called flow regimes, for two-phase horizontal and upward flow.
For horizontal flow (-20 to +20 degrees), the method of Taitel and Dukler, 1976 is used.
For upward flow, the method of Taitel, Bornea, and Dukler, 1980 is used.
For horizontal flow with inclinations between -20 and +20 degrees, the Taitel and Dukler chart below is used for flow regime prediction. The step-by-step procedure is:
Calculate X for the conditions in your pipe. X is the square root of the ratio of the liquid pressure drop to the vapor pressure drop, both pressure drops being evaluated for single-phase flow within the same pipe.
Then calculate F = sqrt(rhoG/(rhoL-rhoG)) * UGS / sqrt(D g cos α), where rhoG and rhoL are the gas and liquid densities, UGS is the superficial velocity of the gas, D is the pipe diameter, g is gravitational acceleration, and α is the angle of elevation.
Plot this point using the F axis on the right and the X axis on the bottom, and compare it to the A line. If it is below the line, the flow is stratified; if above, it is some other regime.
If it is below the A line, calculate K, which contains the same terms plus the liquid superficial velocity and the liquid kinematic viscosity nu_L. Plot K vs. X (using the K axis on the left) and compare against the C line to distinguish stratified wavy from stratified smooth flow.
If it is above the A line, the B line is simply at X = 1.6. Left of that line the flow is annular dispersed (mist).
If you are above A and to the right of B, calculate T, which uses quantities already defined, and plot T vs. X using the right axis for T. Above the D line the flow is dispersed bubbly (bubble flow); below it is intermittent (slug flow).
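The F parameter used in the procedure above can be computed directly from its definition. The sketch below assumes SI units and illustrative air/water property values; it is not a full Taitel-Dukler map implementation (the transition-curve lookups are omitted):

```python
import math

def taitel_dukler_F(rho_g, rho_l, u_gs, diameter, g=9.81, alpha_deg=0.0):
    """F = sqrt(rhoG/(rhoL - rhoG)) * U_GS / sqrt(D * g * cos(alpha))."""
    alpha = math.radians(alpha_deg)
    return (math.sqrt(rho_g / (rho_l - rho_g)) * u_gs
            / math.sqrt(diameter * g * math.cos(alpha)))

# Illustrative values: air/water in a 0.1 m horizontal pipe
F = taitel_dukler_F(rho_g=1.2, rho_l=998.0, u_gs=5.0, diameter=0.1)
print(round(F, 3))
```

The (X, F) pair would then be compared against the A-line of the chart to decide between stratified and non-stratified flow.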
The reference article, "A Model for Predicting Flow Regime Transitions in Horizontal and Near Horizontal Gas-Liquid Flow" by Yemada Taitel and A.E. Dukler, is attached.
Keywords: Flow Regimes, Taitel and Dukler, 1976, Taitel, Bornea, and Dukler, 1980, horizontal flow, upward flow
References: None |
Problem Statement: Which variable should be used for the interfacial area factor in a Rate-based column? I see CA-AREA-FACT, PR-AREAFACT, and TR-AREAFACT. Which one should be used? | Solution: The interfacial area factor is the scaling factor for interfacial area. The area predicted by the correlation is multiplied by this factor. The variable names differ between the new Column Analysis and the legacy tray and packing rating.
CA-AREA-FACT is the Scale factor of interfacial area used in rate-based calculations for column analysis. Area calculated from correlations is multiplied by CA-AREA-FACT. Column internals name is in ID1, and the section name is in ID2.
PR-AREAFACT is the scale factor of interfacial area used in rate-based calculations for legacy packing rating. Area calculated from correlations is multiplied by PR-AREA-FACT. Section number is in ID1.
TR-AREAFACT is the scale factor of interfacial area used in rate-based calculations for legacy tray rating. Area calculated from correlations is multiplied by TR-AREA-FACT. Section number is in ID1.
Keywords: None
References: VSTS 22136 and 23217 |
Problem Statement: What is the difference between Allocated Days and Total Days in the Equipment Rental Summary Report? | Solution: The Allocated Days are the days that are directly allocated to a particular task in the project (e.g. backhoe digging of a foundation). The Total Days are the allocated days rounded up to the next standard rental period, or in some cases, they may represent equipment that is required on the job but not directly used in the field (e.g. office trailers for the duration of the project).
Note: The Rental Cost basis is indicated in the *.CCP report:
(From above, the rental cost of item 388 is calculated as: (570 / 22) * 1,002 = 25,961)
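Based on the report excerpt above, the rental cost follows (rate per rental period / days per period) * total days. The sketch below reproduces that arithmetic; the rounding of allocated days up to the next whole rental period is an assumption about how Total Days is derived:

```python
import math

def total_days(allocated_days, period_days):
    """Round allocated days up to the next whole rental period (assumed behavior)."""
    return math.ceil(allocated_days / period_days) * period_days

def rental_cost(rate_per_period, period_days, days):
    """Prorated daily rate applied over the total (rounded-up) days."""
    return rate_per_period / period_days * days

# Reproduces the item 388 figure from the report: (570 / 22) * 1002
print(round(rental_cost(570, 22, 1002)))
```

With these illustrative numbers the result matches the 25,961 shown in the *.CCP report.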
Keywords: Allocated Days, Total Days, Allocated, Total, Rental, Equipment Rental Summary
References: None |
Problem Statement: Why can't I find UNIFAC group 3920 in Components | Molecular Structure | Functional Group? | Solution: Groups 3815, 4005, and 4110 through 4210 (including 3920) are only used with the PSRK equation of state as normal components. The Aspen Property Constant Estimation System does not support using PSRK as an estimation method, so these groups cannot be entered on the Components | Molecular Structure | Functional Groups sheet.
The Aspen Physical Property System does not support entering or regressing parameters for these groups, so you cannot enter them on the Component | UNIFAC Groups form.
Keywords: UNIFAC group, 3920
References: None |
Problem Statement: Is it possible to manually specify the entrance and exit loss for the Pipe block? | Solution: This example shows how to specify the entrance and exit loss for the Aspen Plus PIPE unit operation model based on velocity head using a design specification. User-specified correlations to represent valves, orifices or fittings not supported by the PIPE model may be implemented using this method.
If the equivalent length for an unsupported fitting or valve is known, it can be specified directly on the Pipe Settings | Fittings1 sheet using the Misc-L/D field or using the FITTINGS MISC-L-D= input language sentence.
This example sets the pipe length to an equivalent length containing the entrance and exit loss based on the velocity and pressure drop calculated by PIPE for a straight pipe section. For user-specified correlations that are not based on pipe results, a Calculator block could be used to set the pipe length before the PIPE model is executed. The EXECUTE BEFORE sequencing specification should be made for Calculator blocks of this type.
For reference, valves and fittings supported by Aspen Plus along with the correlations that are used to estimate the equivalent length are tabulated below:
Equivalent Length= fac * 3.82 * (1 + d) * (d**0.1975)
where d is diameter in inches, equivalent length in feet.
Fitting Fac
------- ---
Elbow, Flanged 0.18
Flanged Straight T 0.10
Flanged Branched T 0.45
Screwed Elbow 0.45
Screwed Straight T 0.15
Screwed Branched T 0.70
Gate Valve 0.10
Butterfly Valve 0.80
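The tabulated correlation can be evaluated directly. This sketch simply implements Equivalent Length = fac * 3.82 * (1 + d) * d**0.1975, with d in inches and the result in feet, using the fac values from the table above:

```python
FITTING_FAC = {
    "elbow_flanged": 0.18,
    "flanged_straight_t": 0.10,
    "flanged_branched_t": 0.45,
    "screwed_elbow": 0.45,
    "screwed_straight_t": 0.15,
    "screwed_branched_t": 0.70,
    "gate_valve": 0.10,
    "butterfly_valve": 0.80,
}

def equivalent_length_ft(fitting, diameter_in):
    """Equivalent length (ft) = fac * 3.82 * (1 + d) * d**0.1975, d in inches."""
    fac = FITTING_FAC[fitting]
    return fac * 3.82 * (1 + diameter_in) * diameter_in ** 0.1975

print(round(equivalent_length_ft("gate_valve", 4.0), 2))
```

For a fitting not in this table, the known L/D value can instead be entered directly in the Misc-L/D field as described above.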
The length specification in the PIPE1 block is a dummy value. The straight length should be specified in the FINDL design specification that follows. No other equivalent lengths should be specified in this block.
After the PIPE executes, the CALC Calculator calculates the total equivalent length. This length contains an entrance and exit loss that is based on a specified number of velocity heads. This variable and the intermediate variables XKENT, XKEXIT, DF, ENTLD, and EXITLD are defined as Parameter variables 1 to 6 so that they can be accessed in the design specification and the report Calculator block PIPE-REP.
C SPECIFY THE LENGTH OF STRAIGHT PIPE, SLEN, IN FEET.
F SLEN=10.
C SPECIFY THE NUMBER OF VELOCITY HEADS FOR THE ENTRANCE AND
C EXIT LOSS, XKENT AND XKEXIT, RESPECTIVELY.
F XKENT= 0.78
F XKEXIT= 1.00
C BACK CALCULATE THE DARCY FRICTION FACTOR, DF,
C THE PRESSURE DROP MUST BE CONVERTED TO LBF/FT2,
C GC IS 32.174 LBM-FT/SEC2-LBF.
F DF= XDP*144./DENS/(XV**2.)*2.*32.174/EQLEN*XID
C CALCULATE THE TOTAL EQUIVALENT LENGTH (L/D IN FEET).
F ENTLD= XKENT/DF
F EXITLD= XKEXIT/DF
C THE TOTAL EQUIVALENT LENGTH = STRAIGHT + EQUIVALENT LENGTH.
F TEQLEN= SLEN + ENTLD + EXITLD
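The back-calculation in the Calculator block can also be sketched outside Aspen Plus. This is not part of the Aspen Plus input above; it is an illustrative Python restatement, with made-up numbers, of the Darcy relation in the Calculator's comments (f = 2·gc·D·ΔP / (ρ·v²·L), US units) together with the standard velocity-head relation Le/D = K/f:

```python
def darcy_f(dp_psi, rho_lb_ft3, v_ft_s, length_ft, d_ft, gc=32.174):
    """Back-calculate the Darcy friction factor from single-phase pipe results."""
    return dp_psi * 144.0 / rho_lb_ft3 / v_ft_s ** 2 * 2.0 * gc / length_ft * d_ft

def velocity_head_length(k, f, d_ft):
    """Equivalent length (ft) of a loss of k velocity heads: Le = k * D / f."""
    return k * d_ft / f

# Made-up single-phase results for a 0.5 ft diameter, 100 ft straight run
f = darcy_f(dp_psi=0.5, rho_lb_ft3=62.4, v_ft_s=6.0, length_ft=100.0, d_ft=0.5)
print(round(f, 4), round(velocity_head_length(0.78, f, 0.5), 1))
```

In the actual example these quantities are computed inside the Calculator block from the PIPE results and fed to the FINDL design specification.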
The Pipe length is varied in design specification FINDL to match the total equivalent length.
Revision: 9/96, developed by M. Jarvis
9/96, reviewed by A. Forouchi
8/19, updated by L. Roth
Keywords: None
References: None |
Problem Statement: What is the PPR78 Property method? | Solution: In V11, there is a new property method, PPR78, for the Predictive Peng-Robinson equation of state. This property method can be used to predict VLE from molecular structure using the Peng-Robinson equation of state with the kij binary interaction parameters estimated from molecular structure. It combines the 1978 Peng-Robinson model with classical Van der Waals mixing rules involving a temperature-dependent binary interaction parameter predicted by PPR78 from the chemical structures of the molecules in the mixture.
The PPR78 method can represent the phase behavior of any fluid containing alkanes, alkenes, aromatic compounds, cycloalkanes, permanent gases (CO2, N2, H2S, H2), mercaptans, and water. You can expect reasonable results at all temperatures and pressures. The PPR78 property method is consistent in the critical region. Therefore, it does not exhibit anomalous behavior, unlike the activity coefficient property methods. Results are least accurate in the region near the mixture critical point.
This model (model ESPR78 for mixtures, ESPR780 for pure components), developed by Jaubert and Mutelet [1] and extended by them and others [2-9], is based on the Peng-Robinson equation of state as published in 1978 [10]. The 1978 version of Peng-Robinson consists of Standard Peng-Robinson using the HYSYS alpha function for Peng-Robinson models, which is the same as the standard Peng-Robinson alpha function when ω < 0.49. It is used in the PPR78 method.
The modifications from 1978 Peng-Robinson implemented by Predictive Peng-Robinson are:
A group contribution method is used to estimate the kij.
A volume-translation feature, similar to that described in Volume-Translated Peng-Robinson. The equation incorporating the volume translation term there applies to Predictive Peng-Robinson.
The model uses option codes to determine whether the Peneloux liquid molar volume correction is used and to determine the root-finding method.
Parameters
Predictive Peng-Robinson uses the following parameters:
Parameter Name/Element Symbol Default MDS Lower Limit Upper Limit Units
P78TC Tci TC x 5.0 2000.0 TEMPERATURE
P78PC pci PC x 105 108 PRESSURE
P78OMG ωi OMEGA x -0.5 2.0 —
P78C ci † — — — MOLE-VOLUME
P78GRP (k,νk, m,νm, ...) — — — — —
P78GBP Akl, Bkl — — — — PRESSURE
P78KIJ kij See below x — — —
† When P78C is missing, ci is calculated from:
Groups
If P78KIJ is provided, it is used as a constant (not temperature-dependent) kij and the group contribution method is not used.
If P78KIJ is not provided, the group contribution method is used to calculate a temperature-dependent kij as described below. If a compound includes unsupported functional groups, P78KIJ defaults to 0 for all pairs involving that compound.
Parameter P78GRP contains the group counts for each component. It has 24 elements, which alternate between functional group numbers and the number of occurrences of the group. For pseudocomponents, the method of Xu et al. [8] is used to estimate group counts. If P78GRP is missing but the structure is provided, the Aspen Physical Property System will generate P78GRP from the structure.
Parameter P78GBP contains the binary group parameters for each pair of groups. These are used [1] to calculate kij which is used to compute a as used in Standard Peng-Robinson:
Where:
ai = from Peng-Robinson
bi = from Peng-Robinson
θik = The fraction of molecule i occupied by group k, based on the number of functional groups present (P78GRP)
T = Temperature in Kelvin
Akl, Bkl = Group interaction parameters (P78GBP), where k and l represent individual groups.
Akl=Alk, Bkl=Blk, Akk=Bkk=0
N = The number of groups in the model
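The equation image referred to above is missing from this text. As a best-effort reconstruction from the symbols just defined, the published PPR78 expression for the temperature-dependent binary interaction parameter (Jaubert and Mutelet, 2004, reference [1]) is:

```latex
k_{ij}(T) = \frac{-\dfrac{1}{2}\displaystyle\sum_{k=1}^{N}\sum_{l=1}^{N}
\left(\theta_{ik}-\theta_{jk}\right)\left(\theta_{il}-\theta_{jl}\right)
A_{kl}\left(\frac{298.15}{T}\right)^{\left(\frac{B_{kl}}{A_{kl}}-1\right)}
-\left(\dfrac{\sqrt{a_i(T)}}{b_i}-\dfrac{\sqrt{a_j(T)}}{b_j}\right)^{2}}
{2\,\dfrac{\sqrt{a_i(T)\,a_j(T)}}{b_i\,b_j}}
```

The resulting kij enters the classical Van der Waals mixing rule for a as used in Standard Peng-Robinson.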
The group parameters are based on the version of the model by Xu et al., 2015 [9]. The defined groups and their numbers are listed in Table 3.21 in Physical Property Data.
Keywords: None
References:
“VLE predictions with the Peng-Robinson equation of state and temperature dependent Kij calculated through a group contribution method”, J.N. Jaubert, F. Mutelet, Fluid Phase Equilibria 224 (2004) 285-304.
“Extension of the PPR78 model (predictive 1978, Peng-Robinson EOS with temperature dependent Kij calculated through a group contribution method) to systems containing aromatic compounds”, J.N. Jaubert, S. Vitu, F. Mutelet, J.P. Corriou, Fluid Phase Equilibria, 237 (2005) 193-221.
“Extension of the PPR78 model (predictive 1978, Peng-Robinson EOS with temperature dependent Kij calculated through a group contribution method) to systems containing naphthenic compounds”, S. Vitu, J.N. Jaubert, F. Mutelet, Fluid Phase Equilibria, 243 (2006) 9-28.
“Predicting the phase equilibria of CO2+hydrocarbon systems with the PPR78 model (PR EOS and Kij calculated through a group contribution method)”, S. Vitu, R. Privat, J.N. Jaubert, F. Mutelet, J. of Supercritical Fluids, 45 (2008) 1-26.
“Addition of the Nitrogen Group to the PPR78 Model (Predictive 1978, Peng-Robinson EOS with Temperature-Dependent Kij Calculated through a Group Contribution Method)”, R. Privat, J.N. Jaubert, F. Mutelet, Ind. Eng. Chem. Res. 2008, 47, 2033-2048.
“Addition of the Hydrogen Sulfide Group to the PPR78 Model (Predictive 1978, Peng-Robinson EOS with Temperature-Dependent Kij Calculated through a Group Contribution Method)”, R. Privat, J.N. Jaubert, F. Mutelet, Ind. Eng. Chem. Res. 2008, 47, 10041-10052.
“Extension of the E-PPR78 equation of state to predict fluid phase equilibria of natural gases containing carbon monoxide, helium-4 and argon”, V. Plee, I.N. Jaubert, R. Privat, P. Arpentinier, Journal of Petroleum Science and Engineering, 133 (2015) 744-770.
“Predicting Binary-Interaction Parameters of Cubic Equations of State for Petroleum Fluids Containing Pseudo-components”, X. Xu, J. N. Jaubert, R. Privat, P.D. Suchaux, F. B. Mulero, Ind. Eng. Chem. Res. 2015, 54, 2816-2824.
“Addition of the Sulfur Dioxide Group (SO2), the Oxygen Group (O2), and the Nitric Oxide Group (NO) to the E-PPR78 Model”, X. Xu, J. N. Jaubert, R. Privat, Ind. Eng. Chem. Res. 2015, 54, 9494-9504.
“The characterization of the heptanes and heavier fractions for the GPA Peng-Robinson programs”, Gas Processors Association, Research Report RR-28, 1978. |
Problem Statement: How do I configure the Aspen Mtell Log Manager? | Solution: This solution describes the workflow for configuring the Aspen Mtell Log Manager, which lets you view diagnostic information about the Aspen Mtell system from a web browser. The Log Manager:
Sends email reports at scheduled intervals to those administering the system about errors and/or warnings that have occurred in any connected Aspen Mtell software.
Sends email reports at scheduled intervals to AspenTech support (if this service has been purchased) about errors and/or warnings that have occurred in any connected Aspen Mtell software.
Provides a self-managing database of log histories, so logs do not build up over time and require additional administrative oversight.
Create MtellLM database
1. Launch SQL Server Management Studio
2. Go to the C:\inetpub\wwwroot\AspenTech\AspenMtell\LogManager\SQL Scripts\ folder and copy the MtellLM.sql script
3. Create the MtellLM database by running MtellLM.sql
Modify Configuration Files
1. Go to C:\inetpub\wwwroot\AspenTech\AspenMtell\LogManager\LogMonitor\
2. Modify the Mtelligence.LogMonitor.exe.config file, replacing Hostname with the fully qualified hostname:
<setting name="ConnectionString" serializeAs="String">
    <value>Data Source=Hostname; Initial Catalog=MtellLM; User ID=MtelligenceUser; Password=Mt3ll1g3nc3U$3r1;</value>
</setting>
3. Save the file
4. Go to C:\inetpub\wwwroot\AspenTech\AspenMtell\LogManager\LogService\
5. Modify the web.config file, replacing hostname with the fully qualified hostname:
<connectionStrings>
    <add name="MtellLM" connectionString="Data Source=hostname;Initial Catalog=MtellLM;User ID=MtelligenceUser;Password=Mt3ll1g3nc3U$3r1;" providerName="System.Data.SqlClient" />
</connectionStrings>
6. Save the file
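After editing the configuration files, a quick sanity check is easy to script. The sketch below is a hypothetical helper (not part of the product) that parses a SQL Server-style connection string into key/value pairs so you can confirm the Hostname placeholder was actually replaced:

```python
# Sanity-check a SQL Server connection string after editing the config files.
# The string below mirrors the one in the .config files; "myserver.example.com"
# stands in for your fully qualified hostname.
def parse_conn_string(s):
    """Split 'Key=Value;...' into a dict (keys normalized to lower case)."""
    pairs = (p for p in s.split(";") if p.strip())
    return {k.strip().lower(): v.strip() for k, v in (p.split("=", 1) for p in pairs)}

conn = "Data Source=myserver.example.com; Initial Catalog=MtellLM; User ID=MtelligenceUser; Password=Mt3ll1g3nc3U$3r1;"
parts = parse_conn_string(conn)
assert parts["data source"] != "Hostname", "hostname placeholder was not replaced"
assert parts["initial catalog"] == "MtellLM"
print(parts["data source"])  # → myserver.example.com
```

The same check applies to the connectionString attribute in web.config.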
Update Aspen Mtell System Manager
1. Launch Aspen Mtell System Manager
2. Click the Configuration tab and select Settings
3. Click General option, under System Settings and check Write Logs to Log Manager option
4. Click Save
Update Aspen Mtell Log Manager
1. Click Start and launch Aspen Mtell Log Manager
2. Click Settings option
3. Update Email Settings (Server Address, Username, Password, Domain, Port, From)
4. Click Test Email
5. Update Notification Settings
Note: we recommend configuring daily notification for all errors, check Daily and Include Errors option
6. Click Save Settings
Configure the EAM adapter to write logs to Log Manager
1. Go to C:\inetpub\wwwroot\AspenTech\AspenMtell\Adapter\Maximo\bin\
Note: The location will change as per your EAM Adapter
2. Launch Mtelligence.Maximo.Services.Configuration.exe
3. Click Adapter Configuration
4. Check Log Errors to Database and Log Errors to Log Manager
5. Update the hostname of your Log Manager Server
6. Click Save
Keywords: Aspen Mtell Log Manager configuration
References: None |
Problem Statement: What are the variables that are available for checking the status flags of various Aspen Plus features from Variable Explorer? | Solution: After a run is complete, in order to check for the run-status of Aspen Plus calculation through VBA Automation, following status flags can be called from “Root -> Data -> Results Summary -> Output” of the Variable Explorer window:
CSSTAT – provides status of Case Studies
CVSTAT – provides status of Convergence blocks
FORSTAT – provides status of user-subroutine
PCESSTAT – provides status of property estimation
PPSTAT – provides status of property table (property analysis)
PROPSTAT – provides status of property calculation
RSTAT – provides status of calculator and transfer blocks
SENSSTAT – provides status of sensitivity blocks
Note: Ensure the model has been run at least once.
In general, a value of 0 for one of these flags means there was no error of the indicated type. 1 means an error. 2 means a warning. A missing value (RMISS or NaN, or 2 for cases where there cannot be warnings) means there was no occurrence of this feature in the run.
PCESSTAT is more complicated because estimation can be run standalone or in combination with a simulation. For an estimation run type, the value will be 0, 1, or 2 indicating success, error, or warning as above. If estimation is run in conjunction with a simulation, it can have other values. In such cases, 10 indicates a warning, 11 indicates the simulation succeeded, and other two-digit values with the last digit not zero also indicate success of the estimation. An error in this case results in 4 digits, in which case the first digit represents the severity of the error (1 = severe error), and the other digits indicate details about the type of error. If a simulation did not contain estimation, the value is NaN.
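The single-digit codes above are easy to decode programmatically. The helper below is an illustrative sketch (not an Aspen Plus API); note that for features that cannot produce warnings, a value of 2 actually means the feature was absent, and PCESSTAT's multi-digit codes need the special handling described above:

```python
# Decode the common Aspen Plus run-status flag values (CSSTAT, CVSTAT, ...).
import math

def decode_status(value):
    """Map a run-status flag to a human-readable description."""
    if value is None or (isinstance(value, float) and math.isnan(value)):
        return "feature not present in run"  # RMISS / NaN
    return {0: "no error", 1: "error", 2: "warning"}.get(
        int(value), "multi-digit code (see PCESSTAT rules)")

print(decode_status(0))  # → no error
print(decode_status(1))  # → error
```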
Keywords: Variable Explorer, CSSTAT, CVSTAT, FORSTAT, PCESSTAT, PPSTAT, PROPSTAT, RSTAT, SENSSTAT, VB Automation
References: None |
Problem Statement: Are there any tips to resolve memory errors for a large simulation with many components?
This error occurs:
SYSTEM ERROR
UNABLE TO ALLOCATE (large number) BYTES OF CONTIGUOUS MEMORY.
REFER TO TROUBLESHOOTING HELP. | Solution: This can happen with large simulations with complex operations and many components. The engine requires a chunk of continuous memory for any one block. For example, if a RadFrac distillation column has a big number of stage and components and the engine has a check to see if enough resources are available,
There are a fewSolutions that may help.
Make more memory available to Aspen Plus by closing other large applications.
Increasing the virtual memory under the System Control Panel can help indirectly, by making it possible to free memory in use by other applications, but only if doing so allows a contiguous section of memory to be freed. If your hard disk is full, in order to increase virtual memory you may need to clear some disk space for virtual memory and/or put the page file on a different drive. But when the number is a significant fraction of the total system memory, it becomes unlikely that a large enough contiguous section can be allocated, regardless of the virtual memory size.
Reduce the complexity of the problem by splitting it into multiple simulations to run separately, removing unnecessary operations or unnecessary stages in distillation columns, or removing unnecessary components. Pseudocomponent generation by assay analysis can lead to particularly large numbers of components.
Within RadFrac columns that contain only a small subset of the specified components, specify the Max. No. of active components on the RadFrac | Convergence | Algorithm sheet to the maximum number of nonzero components expected on any stage of the column. Be generous in this number if there is any doubt, since RadFrac may crash if you set it too low. This reduces the memory used by RadFrac.
For complicated flowsheets, you can reduce memory usage by clearing the box for Dynamic update of calculation results in Run Settings.
Keywords: None
References: None |
Problem Statement: The wait for async message in Cim-IO transfer-records indicates the CimIO client is waiting for
an asynchronous reply from Store & Forward (communication-acknowledge).
The problem persists after a clean start (see Solution #103176). | Solution:
The message may clear itself after a short time. If it does not clear then here are a few items to check.
Check the prerequisites of Store & Forward.
1. Make sure Asynchronous or Unsolicited communication works fine WITHOUT store & forward enabled.
o switch off Store & Forward in the device record
o re-initialize the device record by switching off and on the io_device_processing
o re-initialize the transfer record by switching off and on the io_record_processing
o activate the transfer record manually, by switching IO_Activate to yes
o observe whether data comes in (if data won't come in with S&F disabled, it will also not come in with Store & Forward enabled)
2. Make sure the Store & Forward processes (scanner, forward and store) are running and linked correctly.
o The 3 processes need to be started with the correct DLGP name, as defined in the 'cimio_logical_devices.def' file at the Cim-IO server.
o Every Store & Forward process needs a TCP/IP service name, defined in the services file. The maximum length of the service name is 15 characters. In general they look like:
dlgpname_SC for the scanner process
dlgpname_ST for the store process
dlgpname_FW for the forward process
o Verify the correct DLGP service and Store & Forward service names are defined.
o The TCP service numbers need to be unique
o The last entry of the services file needs to end with a carriage return (<CR>), i.e. the last line of the services file needs to be an empty one.
3. Make sure that the transfer records are NOT scheduled or activated automatically. With Store & Forward enabled, the Cim-IO server is designed to work autonomously. The data frequency can be specified with the IO_Frequency field in the transfer record. Upon enabling of Store & Forward this frequency is sent to the Cim-IO server.
4. In the InfoPlus.21 Manager, confirm that the executable field for the asynchronous task (for example, TSK_A_OPC) points to %SETCIMCODE%\cimio_c_async.exe. The wait for async error may occur if the asynchronous task points to cimio_c_client executable.
5. If the message still persists after checking the items above, perform a clean restart of the Aspen Cim-IO interface following the instructions in the knowledge base article titled How do I perform a clean restart of an Aspen Cim-IO Interface with Store and Forward?
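As an illustration of the service-name conventions in item 2, the services file entries for a DLGP named MYDLGP might look like the following (the names and port numbers here are examples only — use the DLGP name defined in cimio_logical_devices.def and unique, unused port numbers):

```
MYDLGP       5001/tcp    # DLGP service
MYDLGP_SC    5002/tcp    # Store & Forward scanner
MYDLGP_ST    5003/tcp    # Store & Forward store
MYDLGP_FW    5004/tcp    # Store & Forward forward
```

Remember that the file must still end with an empty line.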
Keywords: wait for async
Store & Forward
S&F
References: None |
Problem Statement: How can I model bonus price and/or penalty for a blend specification constraint? | Solution: 1) Bonus price may be awarded to products with blended quality exceeding specification minimum
2) Conversely, penalty may be levied on products with blended quality below a pre-determined target minimum
Example
*TABLE    ROWS
*            TEXT                      SLACK    NSPGDSL
*
NSPGDSL      SPG MIN SPEC FOR DSL      1
UBALDSB      BONUS FOR DSL SPG                  -1
*
*TABLE    UTILSEL
*            TEXT                      PRICE
*
DSB          DSL SPG BONUS, BBL*SPG    $0.20
- Column SLACK in Table ROWS is used to access the slack in a blended product specification.
- The row name in Table ROWS is a blended product specification row name.
- An entry of 1 in column SLACK causes Aspen PIMS to:
  - Add a matrix column with the same name as the row (in this example, NSPGDSL)
  - Change the specification row to an equality
- These two changes result in the activity of the new column (NSPGDSL) being the same as the difference between the specification and the actual quality (in prop*bbls).
- The UBAL entry in Table ROWS transfers the activity of the slack to a utility balance row.
- The UTILSEL table causes Aspen PIMS to add a price coefficient of 0.20 for variable SELLDSB in the objective function, thereby establishing the value of the bonus in units of prop*bbl. If the utility DSB is also entered into the UTILBUY table, then it can model either a bonus (via UTILSEL) or a penalty (via UTILBUY) for the deviation from specification.
- If it is desired for the slack to allow the blending specification to be violated at the cost of a penalty, then the slack column (NSPGDSL) must also be designated as FREE in Table BOUNDS so that it can have a negative activity.
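In LP terms, the construction above can be sketched as follows (using this article's names; the exact sign convention depends on how PIMS writes the N-row):

```latex
% Specification row NSPGDSL becomes an equality with slack column s:
\sum_i \left(\mathrm{SPG}_i - \mathrm{SPG}_{\min}\right) x_i \;-\; s_{\mathrm{NSPGDSL}} = 0
% Utility balance row UBALDSB ties the slack to the sale variable:
s_{\mathrm{NSPGDSL}} \;-\; \mathrm{SELLDSB} = 0
% The objective function gains the bonus term (in prop*bbl units):
\max \;\ldots\; + \; 0.20\,\mathrm{SELLDSB}
```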
Keywords: PIMS bonus price
Target specification penalty
Blend specification
References: None |
Problem Statement: How are the individual parametric cases initialized? | Solution: When a parametric run is made, the base case is solved first. The solution of this is used as a warm start for all the parametric scenarios. Because of this, it does not matter in which order the parametric variables are listed in the PARAOBJ table – they will all be initialized identically.
Keywords: None
References: None |
Problem Statement: How do you report Stream Results in different units of measure from the inputs? | Solution: By default, the Stream Summary and Global Data displayed on the flowsheet use the Global Units of Measure defined in the Units area of the Home Ribbon.
However, a different Unit Set can be selected on the Flowsheet Options dialog, found by going to File | Options | Flowsheet, or by clicking the down arrow in the Stream Results section of the Modify ribbon available when viewing the flowsheet.
Keywords: None
References: VSTS 453004
Problem Statement: What is the correct way to move existing calculations from one Aspen Calc server to another? | Solution: Here is the correct way to move existing calculations from one Aspen Calc server to another.
Older Aspen Calc calculation and formula files are directly compatible with the most recent Aspen Calc versions. You can just copy all the .atc files from the bin, calc and lib sub-folders under the C:\ProgramData\AspenTech\Aspen Calc folder to the new system.
The fact that the InfoPlus.21 server has also changed is significant. Aspen Calc calculations contain the InfoPlus.21 server names that were specified when they were created. Fortunately, there is an Aspen Calc menu option to deal with this. In the Aspen Calc windows user interface, select the Calculation view, right click the Aspen Calc server (the computer icon) and select Change InfoPlus.21 Server, you can then select the old server name and type the new name. Aspen Calc will then update all the calculations to use the new server.
Calculation folders use a Windows folder under the C:\ProgramData\AspenTech\Aspen Calc\calc directory but also a folder under C:\ProgramData\AspenTech\Aspen Calc\calc\edit. This second folder is usually empty and is only used when editing calculations. When copying calculation folders from one machine to another, you should copy both folders. If the folder under calc\edit is not copied (or created), you will not be able to edit the calculations in the folder.
For example, if you have a calculation folder called folder1 , then the following folders should exist:
c:\ProgramData\AspenTech\Aspen Calc\calc\folder1
c:\ProgramData\AspenTech\Aspen Calc\calc\edit\folder1
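Copying both members of the folder pair can be scripted. The sketch below is a hypothetical helper (assuming the default install paths quoted above) that copies calc\folder and calc\edit\folder together, creating the edit folder when the source never had one:

```python
# Copy an Aspen Calc calculation folder together with its 'edit' counterpart,
# per the folder-pair rule above. Paths mirror the defaults in this article.
import os
import shutil
import tempfile

def copy_calc_folder(src_root, dst_root, folder):
    """Copy calc\\<folder> and calc\\edit\\<folder> from one Calc tree to another."""
    for sub in (folder, os.path.join("edit", folder)):
        src = os.path.join(src_root, "calc", sub)
        dst = os.path.join(dst_root, "calc", sub)
        if os.path.isdir(src):
            shutil.copytree(src, dst, dirs_exist_ok=True)
        else:
            # The edit folder is usually empty and may not exist on the source;
            # create it so the calculations remain editable on the new machine.
            os.makedirs(dst, exist_ok=True)

# Demo on a throwaway tree (stands in for the real ProgramData paths):
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
os.makedirs(os.path.join(src, "calc", "folder1"))
open(os.path.join(src, "calc", "folder1", "demo.atc"), "w").close()
copy_calc_folder(src, dst, "folder1")
print(os.path.isfile(os.path.join(dst, "calc", "folder1", "demo.atc")))  # → True
print(os.path.isdir(os.path.join(dst, "calc", "edit", "folder1")))       # → True
```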
Keywords: Calc
Aspen Calc
Element Not Found
References: None |
Problem Statement: The CIMIO-for-PI interface always reads C:\Program Files (x86)\PIPC\dat\pilogin.ini [Defaults]
[Defaults]
PIServer=PIServer1
However, PILOGIN.INI can only contain one default server. | Solution: Workaround: update the pilogin.ini PIServer=XXXX entry each time a PI DLGP starts.
Steps are as following:
1).Create pilogin_PI1.ini in which [Defaults] PIServer=PIServer1
Create pilogin_PI2.ini in which [Defaults] PIServer=PIServer2
2).Modify cimio_pi_start.bat as below.
…
COPY C:\Progra~2\PIPC\dat\pilogin_pi1.ini C:\Progra~2\PIPC\dat\pilogin.ini /Y
cd /d C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api
start cimio_pi_dlgp %1 /B /MIN cimio_pi_dlgp %1 > cimio_pi_dlgp%1.out
start cimio_pi_diop_rw %1 /B /MIN cimio_pi_diop_read_write %1 > cimio_pi_diop_read_write%1.out
start cimio_pi_diop_u %1 /B /MIN cimio_pi_diop_unsol %1 > cimio_pi_diop_unsol%1.out
start cimio_pi_dlgp_h %1 /B /MIN cimio_pi_dlgp_hist %1 > cimio_pi_dlgp_hist%1.out
Modify cimio_pi_start2.bat as below.
…
COPY C:\Progra~2\PIPC\dat\pilogin_pi2.ini C:\Progra~2\PIPC\dat\pilogin.ini /Y
cd /d C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api
start cimio_pi_dlgp %1 /B /MIN cimio_pi_dlgp %1 > cimio_pi_dlgp%1.out
start cimio_pi_diop_rw %1 /B /MIN cimio_pi_diop_read_write %1 > cimio_pi_diop_read_write%1.out
start cimio_pi_diop_u %1 /B /MIN cimio_pi_diop_unsol %1 > cimio_pi_diop_unsol%1.out
start cimio_pi_dlgp_h %1 /B /MIN cimio_pi_dlgp_hist %1 > cimio_pi_dlgp_hist%1.out
3).Modify C:\Program Files (x86)\AspenTech\CIM-IO\commands\cimio_autostart.bat to start 2 DLGP Service.
If it need S&F, start S&F processes.
CALL C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api\cimio_pi_start PI
ping -n 10 127.0.0.1>nul
CALL C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api\cimio_pi_start2 PI2
Modify C:\Program Files (x86)\AspenTech\CIM-IO\commands\cimio_autostop.bat to stop processes.
CALL C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api\cimio_pi_stop.bat
CALL C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api\cimio_pi_stop2.bat
4).Revise C:\Program Files (x86)\AspenTech\CIM-IO\io\cio_pih_api\cimio_pi_stop.bat and cimio_pi_stop2.bat as below.
cimio_pi_stop.bat
C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api\cimio_pi_shutdown PI
cimio_pi_stop2.bat
C:\Progra~2\AspenTech\CIM-IO\io\cio_pih_api\cimio_pi_shutdown PI2
5). Start CIMIO Manager service and check processes
6).Test_API
Keywords: PI,CIMIO for PI,pilogin.ini
References: None |
Problem Statement: What is AspenTech's policy regarding Microsoft updates with the aspenOne Manufacturing Suite products? | Solution: Microsoft frequently releases individual updates for the Windows operating systems to fix defects and to patch security loopholes. This knowledge base article describes AspenTech's policy towards application of Microsoft updates on server and client computers which use the Manufacturing Suite products.
AspenTech tests new versions of the Manufacturing Suite with the latest Microsoft service packs and Microsoft updates which are publicly available from Microsoft when the testing cycle begins for a new release. After product testing is completed and the new Manufacturing Suite version is officially released, no additional testing is retroactively performed with subsequently released Microsoft updates.
AspenTech will address incompatibility issues which are caused by Microsoft updates. If any incompatibilities are verified between a Microsoft update and a Manufacturing Suite product, AspenTech will publish a knowledge base article to alert the user community.
Note: If any problems are encountered with a particular Microsoft update, it can always be uninstalled by following this procedure:
Stop the AspenTech applications which are running.
In Control Panel | Add or Remove Programs, select the newly applied Microsoft update then click on the Remove button.
Restart the AspenTech applications
Keywords: MES
MS
update
hot-fix
References: None |
Problem Statement: This script will return the last known good value of a tag historized in Aspen InfoPlus.21 | Solution: You can use SET MAX_ROWS=1 so that you get just one row. A query such as SELECT ip_trend_value FROM atcai WHERE ip_trend_qlevel='good' reads data from the most recent time backwards. So, the complete query would be:
SET MAX_ROWS=1;
SELECT ip_trend_value FROM atcai WHERE ip_trend_qlevel='good';
Keywords: SQLPlus script
SQL+
Value Status
Last Value
References: None |
Problem Statement: This knowledge base article outlines the reasons that can cause the Aspen Audit & Compliance Manager overflow queue to fill up quickly. | Solution: Usually when there is a problem in which the overflow queue fills up very quickly the problem is associated with the relational database connection. The following are the possible causes for the overflow queue to fill quickly.
- Aspen Audit & Compliance Manager can't connect to the relational database
- The relational database has exceeded its pre-defined size quota
- The disk on which the relational database resides is full
- Too many events are being written to the database. The system cannot keep up with the workload.
- System resources are low on the relational database server
A better understanding of the cause of the problem can be obtained by examining the log files associated with Aspen Audit & Compliance Manager. These log files are located in this folder:
C:\ProgramData\AspenTech\DiagnosticLogs\AuditAndComplianceManager
Events will still be stored in the queue files even if the Aspen Audit & Compliance Manager service is stopped. Therefore, if you need to temporarily disable event generation this must be done through the event-generating application itself (for example, in Aspen InfoPlus.21 remove the applications listed in the AuditPropertyDef record.)
Keywords: Rapid
Fast
Full
Fill
Large
Big
References: None |
Problem Statement: How is it possible to enter a non-databank ion and a non-databank volatile electrolyte? | Solution: An example of how to enter a non-databank ion and a non-databank volatile electrolyte such as an amine is attached.
The new amine's parameters are estimated with the Property Constant Estimation System (PCES) from its molecular structure. The molecular structure is entered on the Properties | Components | Molecular Structure | General sheet.
Atom 1 Number   Atom 1 Type   Atom 2 Number   Atom 2 Type   Bond Type
1               C             2               C             Single
2               C             3               C             Double
3               C             4               C             Single
4               C             5               C             Double
5               C             6               N             Single
6               N             1               C             Double
The formula, C5H5N, is entered on the Properties | Components | Molecular Structure | Formula sheet.
The properties estimated are as follows:
Property Name            Parameter
MOLECULAR WEIGHT MW
NORMAL BOILING POINT TB
CRITICAL TEMPERATURE TC
CRITICAL PRESSURE PC
CRITICAL VOLUME VC
CRITICAL COMPRES.FAC ZC
STD. HT.OF FORMATION DHFORM
STD.FREE ENERGY FORM DGFORM
ACENTRIC FACTOR OMEGA
HEAT OF VAP AT TB DHVLB
LIQUID MOL VOL AT TB VB
PARACHOR PARC
IDEAL GAS HEAT CAPACITY CPIG
VAPOR PRESSURE PLXANT
HEAT OF VAPORIZATION DHVLWT
MOLAR VOLUME RKTZRA
VAPOR VISCOSITY MUVDIP
LIQUID VISCOSITY MULAND
LIQ THERM CONDUCTIVITY KLDIP
LIQUID SURFACE TENSION SIGDIP
The formula for the ion, C5H6N, is entered on the Properties | Components | Molecular Structure | Formula sheet. For negative ions, there is an atom type of E-.
The parameters for the ion are entered directly on the Property/Parameters form.
The parameters entered are as follows:
Parameter Value
CHARGE 1.0
MW 80.108330
IONTYP 1.0 (cation)
DHAQFM 1 (BTU/lbmol) - dummy value
PLXANT/1 -1e20 (non-volatile component, see Solution 104189 for more information)
The chemistry involving the electrolyte and ion is added to the Chemistry Global.
AMINE + H2O <--> AMINE+ + OH-
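For reference, the ionic reaction above has the usual activity-based equilibrium form (a generic sketch; Aspen Plus evaluates the equilibrium constant from the aqueous free energies of formation or user-supplied coefficients):

```latex
K = \frac{a_{\mathrm{AMINE^{+}}}\; a_{\mathrm{OH^{-}}}}{a_{\mathrm{AMINE}}\; a_{\mathrm{H_2O}}},
\qquad a_i = \gamma_i\, x_i
```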
Keywords: ion, amine
References: None |
Problem Statement: How to configure Aspen Mtell Adapter for OSIsoft PI AF to connect OSIsoft PI Data Archive Server | Solution: To set up the Aspen Mtell Adapter to connect to an OSIsoft PI Data Archive Server, the user will need to install the PI AF Client application on the Aspen Mtell server.
Install PI AF Client application
1. Aspen Mtell V11 supports OSIsoft PI Data Server 2016 and 2017.
2. Install PI-AF-Client_2017-R2_.exe for Aspen Mtell V11
Validate that the PI AF Client tools were installed:
1. Go to C:\Program Files (x86)\PIPC and make sure the expected folders are present
2. Go to C:\Program Files\PIPC and make sure the expected folders are present
3. Go to Programs and Features and confirm it shows PI AF Client (x64) 2017 R2
Verify Pi System Explorer can connect to PI Data Server:
1. Launch Pi System Explorer
2. Click File and select Connections...
3. Click Add Data Server and connect to Data Server
4. Click Search and select Tag Search and make sure PI System Explorer is able to show tag values.
Configure Aspen Mtell Adapter for OSIsoft PI AF using Pi Authentication:
1. Go to C:\inetpub\wwwroot\AspenTech\AspenMtell\Adapter\Pi\bin\ folder
2. Launch Mtell.Sensor.OsiSoft.Pi.Configuration.exe
3. Select the PI Server
4. Select PI Authentication, AspenTech recommends using PI Authentication when trying to connect with PI Data Server.
5. Type the ID\Password and Click Test.
6. Click Save
Configure Aspen Mtell Adapter for OSIsoft PI AF using Windows Authentication:
1. Go to C:\inetpub\wwwroot\AspenTech\AspenMtell\Adapter\Pi\bin\ folder
2. Launch Mtell.Sensor.OsiSoft.Pi.Configuration.exe
3. Select the PI Server
4. Select Windows Authentication
5. Click Test.
6. Click Save
7. Launch IIS Manager
8. Expand the server and select Application Pool
9. Select AspenMtell_Pi Application Pool
10. Select Advanced Settings
11. Click on the browse button within Identity
12. Select Custom account and click Set
13. Enter Group Managed Service Account that has read access to the PI Data Server. This account password should never expire.
Caution: While IIS application pools do encrypt the authentication information you enter, local administrators are still able to decrypt the information. It is recommended that you set up a Group Managed Service Account to use with the application pool; such an account will be able to be used over the network and Windows will manage the password as it does for other service accounts, so no password is stored in the application pool configuration.
14. Launch CMD
15. Type: IISRESET
Configure the Data Source within System Manager:
1. Launch System Manager
2. Click the Configuration tab and select Settings
3. Select Sensor Data Source
4. Click Add Data Source
5. Select Source as Plant Historian
6. Select OSIsoft PI as Historian
7. Click Test.
Keywords: How to connect Pi
Pi configuration
Connect Pi
References: None |
Problem Statement: How can I create reports of my Aspen Plus simulation results? | Solution: There are multiple options you can employ to display and produce results from an Aspen Plus simulation. This document will list and summarize the most common ways.
1) Stream Results
Standard stream results can be viewed at multiple locations within the Data Browser. Each stream within the Streams folder, and each block within the Blocks folder has its own stream results table. You can add additional streams to these tables using the empty column on the right. Further, if you navigate to Results Summary | Streams you can view a results table of all streams in the flowsheet, organized alphabetically.
You can customize the results shown. You can choose from all available physical properties that Aspen Plus can calculate. In addition you can apply properties to a certain phase, component, or other qualifier. Other options include specification of the units of measure, choosing which material streams to include, and much, much more.
2) Flowsheet-based Results
Simulation results can be added to the flowsheet window as well. Standard stream results tables can be placed on the flowsheet as a picture item. Simply click the Stream Table button at the top of any stream results table, and that table will be copied onto the flowsheet. These tables can be added and removed as needed.
Another option for flowsheet-based results is to use the Global Data option. First, navigate to the View menu and ensure Global Data has a check mark next to it. Then, go to Tools > Options and select the Results View tab. You can select any results you wish to include on the flowsheet. The values will appear in distinct icons attached to each stream.
3) History File
The Aspen Plus History file can be generated by going to File > Export, and then selecting the .his file option. The history file can then be opened by any text editor. Information included in the History file is typically all simulation inputs; i.e. components, property method data, stream/block inputs, etc. The file will also define the flowsheet connectivity.
Also included with the history file is a detailed listing of all convergence-related results. Information on convergence blocks, iterations, err/tol values, and more is included - much like in the Control Panel. And like the Control Panel, you may increase or decrease the level of information printed out in the history file. In Aspen Plus navigate to Setup | Specifications and click the Diagnostics tab. From here you may set the desired level of detail in both the Control Panel and in the History File.
History files are useful as a record of simulation inputs and run-time messages. Since they can be opened separately in a text editor, they can be easily circulated and interpreted by Aspen Plus users and non-users alike.
4) Report File
Like the History File, the Report File is an alternate file type that can be exported from an Aspen Plus simulation. To create the Report File, ensure the simulation has run and has been saved, then go to File > Export and select the Report File (.rep) option. The Report File will be a text-document style object - just like the History File.
Similar to the History File, the Report File contains all simulation inputs and run-time messages. But it also includes all calculated results for the streams and blocks in the model. Report Files also grant full flexibility to Aspen Plus users and non-users alike as they can be readily viewed and interpreted by all.
5) Calculator Block
Aside from the built-in reporting methods, you have the ability to create customized input and result summaries using the Calculator block functionality in Aspen Plus. Calculator blocks allow you to access specified and calculated variables within your simulation. Once accessed, you can display these variables in a Microsoft Excel calculator block. Any type of formatting or presentation may be applied to the Excel Calculator block - giving you the ultimate oversight on which variables are presented. For further information on the creation of Excel Calculator blocks, please look up Solution ID 103803 in the Knowledge Base.
Keywords: results; report
References: None |
Problem Statement: Hundreds of thousands of cimio_c_changeover.exe messages are being received daily from Aspen InfoPlus.21 into the Aspen Audit and Compliance Manager. A typical message reads as follows:
*TSK_DETECT*,*3 IO_SECONDARY_STATUS* changed from *ON* to *ON*.
This Knowledge Base article shows how to stop the above message from being logged by AACM. | Solution: In order to reduce the number of undesired audit messages generated by cimio_c_changeover.exe, please execute the InfoPlus.21 utility ChangeAuditAttribute.exe, as described in Aspen KB article 111028, and change the current value of the attribute AUDIT_PROPERTY to Never for the record IOEXTERNALFTDEF and the fields IO_PRIMARY_STATUS and IO_SECONDARY_STATUS.
In addition, make the following changes to records and field names listed below:
RECORD            FIELD_NAME            AUDIT_PROPERTY
IoGetDef          IO_ACTIVATE?          Never
IoPutDef          IO_ACTIVATE?          Never
IoPutOnCosDef     IO_ACTIVATE?          Never
IoUnsolDef        IO_ACTIVATE?          Never
IoGetHistDef      IO_ACTIVATE?          Never
IoLongTagGetDef   IO_ACTIVATE?          Never
IoLongTagUnsDef   IO_ACTIVATE?          Never
IoLongTagPutDef   IO_ACTIVATE?          Never
IoLongTagPOCDef   IO_ACTIVATE?          Never
IoLLTagGetDef     IO_ACTIVATE?          Never
IoLLTagPOCDef     IO_ACTIVATE?          Never
IoLLTagPutDef     IO_ACTIVATE?          Never
IoLLTagUnsDef     IO_ACTIVATE?          Never
IoExternalFTDef   IO_ACTIVE_DEVICE      Never
IoExternalFTDef   IO_PRIMARY_STATUS     Never
IoExternalFTDef   IO_SECONDARY_STATUS   Never
Please note that in future versions of Aspen InfoPlus.21, the AUDIT_PROPERTY attribute in the above-mentioned fields will be set to Never by default.
Keywords: TSK_DETECT
cimio_c_changeover.exe
changeover.exe
changeover
References: None |
Problem Statement: How do I change the services account on Aspen Mtell Server? | Solution: During installation and configuration it's recommended to use services account. The services account and its password used during installation and configuration can be changed, if required.
As the configuration varies for every customer, update the account or password wherever it is applicable. The procedure below covers all the locations where the account or password may need to be changed.
Update the account and password in Aspen Mtell System Manager
EAM Adapter Update:
a. Depending on the EAM adapter in use, update the account or password.
b. Select the EAM Adapter and change the account or password.
Sensor Data Sources Update:
a. Depending on the sensor data source in use, update the account and password.
b. Select the Sensor Data Sources Adapter and change the account or password.
Agent Services Update:
a. Select the Agent Services and change the account or password.
Training Services Update:
a. Select the Training Services and change the account or password.
Security Settings Update:
Select the Security Settings and change the account or password.
Update the account and password in Internet Information Services (IIS) Manager
Click Start and type inetmgr to launch Information Services (IIS) Manager
Expand Server Name and select Application Pools
Select the Application Pools and right click on AspenMtell_Pi and select Advanced Settings
Under Process Model in the Advanced Settings screen select Identity and then click browse button.
Select Custom Account and then click Set…
Enter Domain\Username and password, then click OK.
h. Update all the Application Pool accounts that need to be changed.
i. After updating IIS, an IIS reset is required.
j. Click Start and type CMD, right click and Run As Administrator
k. Type IISRESET
Update the account and password in Windows Services
a. Click Start and type services.msc
b. Scroll down and select the services whose Log On As account needs to be changed.
c. Right click the service and select properties
d. Click Logon tab and type the ID\Password
e. Click Apply and Ok
Keywords: Services account
Domain name
Username
Password
References: None |
Problem Statement: What is the pre-Release 8 algorithm option for Regression? | Solution: The pre-Release 8 algorithm is an option provided for compatibility with an old algorithm based on the original Britt-Luecke algorithm (used before 1990, release 8). This option enables users to reproduce the results from this old algorithm. If Aspen Plus is not used on DOS or Unix (very old versions), then this option should be deactivated.
Keywords: Regression, Algorithm, Pre-Release 8
References: None |
Problem Statement: How to rate Multiple Downcomer (MD) Trays under Column Internal section in Aspen Plus? | Solution: RadFrac with Column Analysis now supports trays with lattice downcomers. This modern style of tray allows greater liquid load than conventional trays, and up to 12 downcomers per tray. In these trays, the downcomers on each tray are rotated 90 degrees from the downcomers on adjacent trays, so the overall grid of downcomers looks like a lattice. Only portions of the bottom of each downcomer are open, allowing them to distribute the liquid as well as avoiding dropping liquid too close to the downcomers on the tray below.
Limitations:
All downcomers on lattice trays must have the same width and the same height.
Lattice trays are only supported for sieve trays in rating calculations. The sizing mode is not supported, nor are valve or bubble cap trays.
The following mandatory inputs are required for rating Multiple Downcomer (MD) trays:
You can find a distillation column rated using MD Tray with detailed comparison against the conventional Sieve tray hydraulics in KB 48923.
Keywords: Multiple Downcomer (MD) Trays, Rating, Design, Downcomer, Tray Spacing, Lattice downcomers, Column Analysis, ECMD Trays, High Performance Trays
References: None |
Problem Statement: How to model Vapor Compression Heat Pump Assisted Distillation (HPAD) Columns with Multi Downcomer (MD) Trays? | Solution: Progressive depletion of conventional fossil fuels with increasing energy demand and federal laws on environmental emissions have stimulated intensive research in improving energy efficiency of the existing fractionation units. In this context, the vapor compression Heat Pump Assisted Distillation (HPAD) scheme has emerged as an attractive separation technology with great potential for energy saving in fractionating close boiling mixtures.
With a Vapor Compression HPAD column, top vapors from the distillation column are compressed and the latent heat of this compressed vapor is utilized in re-boiling the column bottoms; the condensate is returned to the column as liquid reflux. These schemes are particularly helpful in fractionating close boiling mixtures like Propane-Propylene / Ethane-Ethylene mixtures.
A typical 150 tons/hr FCC cracked LPG stream at 22 barg/58C from de-ethaniser bottoms, containing 65% propylene and 35% propane (mole purity), is considered for separating propylene product at 99.9% mole purity. The bottom propane flow rate is adjusted subject to the limitation that propylene loss does not exceed 5% mole purity.
As the relative volatility of these components is near 1, a greater number of trays is required to separate this mixture. MD trays find their advantage in this system, reducing both the column pressure drop and the column height.
Column is rated with 189 MD Trays in equilibrium calculation type (with 6 lattice down-comers) with a tray spacing of 0.45m and for a diameter of 7.5m. When a conventional sieve/bubble tray is employed (for the same tray spacing / column height), it would require a diameter of 13.2m against 7.5m (with MD tray), and pressure drop across the column would be up to 1.04 bar against 0.75bar (with MD Tray). Reflux ratio required to achieve the separation is as high as 14.
Column is operated at a pressure of 7barg and compressed to a pressure of 16barg, a small part of the vapor is condensed/sub cooled to 10C, returned to the compressor flash suction drum. The other part of compressed vapor is used in re-boiling the column bottoms, flashed above column pressure & returned as hot reflux flow.
A part of the pumped top liquid product (collected in the compressor suction drum) is returned as cold reflux to attain the desired purity, and the remaining product is removed as top product. The reboiler is modelled separately in this example for simplicity. There is a close match to the heat duty of this external reboiler.
Sr. No Property UOM Sieve Tray MD Tray
1 Feed Flow tons/hr 150 150
2 No of Stages - 190 190
3 Feed Tray Location - 135 135
4 Propylene Product Flow tons/hr 95 95
5 Propane Product Flow tons/hr 55 55
6 Propylene mole purity in Propylene stream % 99.87 99.86
7 Propylene mole purity in Propane stream % 1.92 1.93
8 Hot Reflux Flow tons/hr 1286 1260
9 Cold Reflux Flow tons/hr 84 84
10 Reflux Ratio - 14.42 14.15
11 Reboiler Heat Duty Gcal/hr 85.4 85.6
12 Tray Spacing m 0.45 0.45
13 Pressure Drop bar 1.04 0.75
14 Hole Area / Active Area - 0.12 0.12
15 Column Pack height m 85.05 85.05
16 Column Diameter m 13.2 7.5
17 % Approach to flood % 80 73
HPAD Distillation column rated with MD Trays and designed with Sieve tray are attached for reference and comparison (Prepared with Aspen Plus V11 build).
Keywords: C3 Splitter, HPC, MD Tray, Vapor Compression Heat Pump Assisted Distillation, HPAD, Sieve Tray
References: None |
Problem Statement: How to delete a Petroleum assay created in Aspen HYSYS V11 if you want to save your simulation file in .xml format to use it in a lower HYSYS version such as V10.0 or V9.0? | Solution: Open the HYSYS simulation in Aspen HYSYS V11
Click on Properties environment
Press CTRL + SHIFT + P at the same time to open the Petroleum Assay backdoor window.
Delete the Petroleum assays
Aspen HYSYS will now allow you to save your simulation as a HYSYS .xml file, which can be opened in HYSYS versions lower than V11.
Keywords: Xml, Petroleum assay
References: None |
Problem Statement: Why did I not receive an email notification when the alert was triggered?
System Health Message:
Live agent [agent name] for asset [asset name] triggered but no notifications were sent because there is an existing alert. | Solution: For two different emails to be sent for two different agents on the same asset, it is necessary to deploy those agents in different Live Agent Groups. If an alert is already active for an agent in the Default Live Agent Group, alerts from other agents on the same asset will be suppressed.
To check Live Agent Group for deployed live agents, follow below steps:
1. Launch Aspen Mtell System Manager
2. Select Equipment Tab
3. Expand and select the Asset where you deployed your live agent
4. Click Agents within Health Monitoring section
5. Select the Agent; in the right-hand section, you will find the Live Agent Group.
If both agents are deployed in the same Live Agent Group, the alert will be suppressed and no email alerts will be sent while there is already an agent in the alert state.
Note: If you set the Live Agent Group option to None, alerts will be triggered for all live agents and will not be suppressed.
Keywords: Live agent groups
Email
Alert
no email
no alert
no notification
References: How to create Live Agent Groups? |
Problem Statement: How to configure Aspen Watch Dog Service to alert when critical services are down? | Solution: Using Aspen Mtell System Manager you can configure the Aspen Watch Dog to get notifications whenever the Asset Sync, Work Sync, Agent, or Training services are not working. This article will help you configure the Aspen Watch Dog service and set all critical services to auto-start on failure.
Configure Aspen Watch Dog to receive notifications when the Asset Sync, Work Sync, Agent, or Training services are not working
1. Launch Aspen Mtell System Manager
2. Click Configuration and Select Watch Dog
3. Enable the following check boxes
Enable Watch Dog Notifications
Monitor Asset Sync Service
Monitor Work Sync Service
Monitor Agent Services
Monitor Training Services
Report Undelivered Notifications
Always send notification even if no issues.
4. Enter the recipient email address and Click Save.
Configure the Windows services to auto start if ever failed
1. Click Start and launch Services
2. Scroll down and right click on any Aspen Mtell Services and select Properties.
3. Click Recovery Tab and Select Restart the Service for First failure, Second failure and Subsequent failure.
4. Click Apply and Ok.
5. Repeat steps 2 to 4 until all the Aspen Mtell services are configured to restart automatically.
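As an alternative to clicking through the Services console for every service, the same recovery settings can be applied with the standard Windows sc.exe utility. The sketch below only assembles and runs the command; the function names and the example service name are hypothetical (check services.msc for the real Aspen Mtell service names), and it must be run from an elevated (Run as Administrator) prompt:

```python
import subprocess

def sc_failure_command(service_name: str, restart_delay_ms: int = 60000) -> list:
    """Build an sc.exe command that restarts the service on the first, second,
    and subsequent failures, resetting the failure count after one day."""
    actions = "/".join(["restart", str(restart_delay_ms)] * 3)
    return ["sc.exe", "failure", service_name,
            "reset=", "86400", "actions=", actions]

def apply_recovery(service_name: str) -> None:
    # Requires an elevated prompt; raises CalledProcessError on failure.
    subprocess.run(sc_failure_command(service_name), check=True)

# Example with a hypothetical service name:
# apply_recovery("AspenMtellAgentService")
```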
Keywords: agent service failure
service not running
service crashed
References: None |
Problem Statement: How to delete an Agent, Data Set, Equipment Set, Asset, Location and Site in Aspen Mtell? | Solution: This article will help the user to delete Agents, Data Set, Equipment Set, Asset, Location and Site using Aspen Mtell System Manager and Aspen Mtell Agent Builder.
Procedure to delete Live Agent in Aspen System Manager
1. Launch Aspen Mtell System Manager
2. Select an Asset within Equipment tab
3. Select Agents within Health Monitoring
4. Select one by one all the Agents and click Delete
5. Click Yes to delete the agent
Procedure to delete Agents in Aspen Agent Builder
1. Launch Aspen Mtell Agent Builder
2. Select Agent within Machine Learning Tab
3. Click Delete and select all the Agents which you want to delete and click OK
4. Click Yes to delete all the selected Agents.
Procedure to delete Data Set
1. Launch Aspen Mtell Agent Builder
2. Select Data Set within Machine Learning Tab
3. Click Delete then Yes to delete the selected data set
Procedure to delete Equipment Set Profile
1. Launch Aspen Mtell Agent Builder
2. Select Equipment Set Profile within Machine Learning tab
3. Click Yes to delete selected Equipment Set Profile.
Procedure to delete Asset
1. Launch Aspen Mtell System Manager
2. Select the Asset within Equipment tab
3. Right click the Asset and select Delete
Procedure to delete Location
1. Launch Aspen Mtell System Manager
2. Select the Location within Equipment tab
3. Right click the Location and select Delete
Procedure to delete Site
1. Launch Aspen Mtell System Manager
2. Select the Site within Equipment tab
3. Right click the Site and select Delete
Note: Deleting an Asset or Data Set using the GUI does not delete it from the database. The following do not get deleted from the database:
1. Asset Hierarchy
2. Data Set raw data
3. Logs
Keywords: Delete agents
Delete Asset
Delete Equipment set profile
References: None |
Problem Statement: Old models created in Aspen Plus 9.x with Fortran blocks or models with Calculator/Fortran blocks give the following error message in Aspen Plus 10:
*** SEVERE ERROR
COULD NOT RESOLVE USER OR IN-LINE FORTRAN SUBROUTINE(S): SUBROUTINE ZZFORT IS MISSING
If a Compaq (or formerly, Digital) Visual Fortran compiler is not available, can this problem be resolved? | Solution: Yes, under some circumstances.
Aspen Plus has a built-in Fortran interpreter that can handle many Fortran commands. When the built-in interpreter encounters a command or format outside of its scope, it issues the above error message. This error is a request to make an external Fortran compiler available to handle the interpreter's unsupported Fortran commands.
PC Versions of Aspen Plus 9.x did not have this problem because the product was shipped with a DOS based Fortran compiler. No Fortran compiler is shipped with version 10.x and later.
If Fortran needs to be compiled in Aspen Plus 10 and later, then you must install a Fortran compiler. Externally purchased Fortran compilers are needed for all platforms where the Aspen Plus 10 Simulation Engine is installed. For Windows and NT platforms, Digital Visual Fortran is needed. See Solution 4335 for information about where to purchase this product.
Sometimes, the in-line or Calculator Block Fortran code can be slightly modified to avoid these unsupported extensions of the interpreter. Below are the most common causes for the above 'ZZFORT' error message:
1. Variable names longer than 6 characters need to be compiled
2. Unformatted (list-directed) write statements need compilation: write(*,*) flow,temp
Note: Use formatted write statements instead (see attached simulation file):
write(*,100) flow,temp
100 format(F10.2,5X,F6.1)
3. The following Fortran commands require compilation:
CALL
CHARACTER
COMMON
COMPLEX
DATA
ENTRY
EQUIVALENCE
IMPLICIT
LOGICAL
PARAMETER
PRINT
RETURN
READ
STOP
SUBROUTINE
The following Fortran statements are interpreted:
Some Declaration statements (entered on the Declaration sheet)
REAL
INTEGER
DOUBLE PRECISION
DIMENSION
Arithmetic expressions and assignment statements
IF
GOTO (except assigned GOTO)
WRITE (with formatted text)
FORMAT
CONTINUE
DO loops
Calls to some built-in Fortran functions
DABS
DACOS
DASIN
DATAN
DATAN2
DCOS
DCOSH
DCOTAN
DERF
DEXP
DFLOAT
DGAMMA
DLGAMMA
DLOG
DLOG10
DMAX1
DMIN1
DMOD
DSIN
DSINH
DSQRT
DTAN
DTANH
IABS
IDINT
MAX0
MIN0
MOD
You can use the equivalent single precision or generic function names in your Fortran statements. However, Aspen Plus always performs double precision calculations, and using the other names WILL require compilation.
See the Aspen Plus Help for more information. Help Simulation and Analysis Tools -> Sequential-Modular Flowsheeting Tools -> Calculator Blocks and In-Line Fortran -> Using Fortran in Aspen Plus -> About the Interpreter.
Keywords: ZZFORT, FORTRAN, CALCULATOR BLOCK, COMPILER Fortran
Compile
Compiler
Interprete
Interpreted
References: None |
Problem Statement: When Should I Change my Feasibility Objective Factor? | Solution: In PIMS-AO, you may see a pattern in the execution log where there are intermittent small infeasibilities. The case solves to a feasible solution; however, these small infeasibilities are listed for many iterations (in the last column of the iteration list in the execution log). This can be a sign that the Feasibility Objective Factor is too low in comparison to your model's overall economics. In this case the value should be increased – generally from the default value of 10,000 to 100,000. This option is found on the XSLP Settings Advanced1 tab.
Keywords: None
References: None |
Problem Statement: Multiple AspenTech applications, such as aspenONE Process Explorer (A1PE) and Aspen Production Record Manager (APRM), include the ability to send e-mail messages to users. In order to send e-mails, however, SMTP must be properly functioning on the specified computer. This |Solution: provides a couple of options for verifying the proper functioning of an SMTP Server.
Solution
Testing the SMTP Service
A quick telnet session can confirm if the SMTP server is running and accessible. To test a specific SMTP server (either locally on the SMTP server or from a remote client machine), open a command prompt and type the following:
telnet <node name> 25
If the SMTP server is running, the results of this command will indicate that it is ready and include a current timestamp. For example:
If you receive an error that the connection failed:
1. Ensure that the Simple Mail Transfer Protocol service is running on the SMTP server.
2. If the service is running, refer to the Windows services file (C:\WINDOWS\system32\drivers\etc\services) to verify the port configured for SMTP use. If the port specified in the services file is different than the default of 25/tcp, try the telnet command again using the correct port number.
3. Ensure that the specified SMTP port is not being blocked by a firewall.
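If telnet is unavailable on the client machine, a similar reachability check can be scripted. The sketch below is illustrative only (the function names are ours, and the host name is a placeholder); a healthy SMTP server responds with a 220 greeting banner:

```python
import socket

def smtp_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    """Connect to an SMTP server and return its greeting banner.
    Raises OSError if the connection fails (service down, wrong port, firewall)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(1024).decode("ascii", errors="replace").strip()

def looks_ready(banner: str) -> bool:
    """A ready SMTP server greets with reply code 220 (RFC 5321)."""
    return banner.startswith("220")

# Example (replace with your SMTP server's node name and port):
# print(smtp_banner("mailserver.example.com", 25))
```

Only the banner parsing is exercised here; run smtp_banner against your own server to reproduce the telnet test.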
Testing SMTP Using the Pickup Directory
You can compose a simple e-mail text file based on the SMTP specifications (RFC 822). Here is the content of a sample text file typed in Notepad:
From: [email protected]
To: [email protected]
Subject: Test IIS Pickup
Date: Thu, 26 Sep 2019 11:17:35 -0400
Message-ID: <200504281117350@InfoPlus21>
Testing SMTP Pickup and Delivery.
Simply copy or move the text file into the pickup directory where SMTP was installed. (The default path should be \Inetpub\mailroot\Pickup but if you have Exchange installed then the path will be \Program Files\Exchsrvr\Mailroot\Vsi 1\Pickup.)
The SMTP service periodically checks the pickup directory and will attempt to deliver any of the messages found in the directory. Verify if you can receive the test message from the destination mailbox.
NOTE: A file with the above sample text is also attached to this Solution. Simply modify the email addresses specified in the text file to your own personal account, specify a timestamp that is within the last five minutes of when you intend to test the delivery, and place the file in your SMTP server's pickup folder.
If the SMTP server is unable to send the text email message, try changing the from address in case you don't have sufficient permissions to mail messages from the specified domain. A good test of this would be to change the from address to the SMTP server's name, for example, [email protected] instead of [email protected]. (SMTP doesn't require that the from email account exist on an exchange server, but could have permission problems using an existing exchange server account.)
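For illustration, composing the test file and dropping it into the pickup directory can also be scripted. This is a sketch under stated assumptions: the function names and the .eml file name are ours, and the pickup path must be adjusted for your installation (default IIS path versus Exchange, as noted above):

```python
from datetime import datetime, timezone
from pathlib import Path

def compose_message(sender: str, to: str, subject: str, body: str) -> str:
    """Build a minimal RFC 822 style message for the SMTP pickup directory."""
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S +0000")
    return (
        f"From: {sender}\n"
        f"To: {to}\n"
        f"Subject: {subject}\n"
        f"Date: {date}\n"
        "\n"
        f"{body}\n"
    )

def drop_in_pickup(pickup_dir: str, message: str) -> Path:
    """Write the message into the pickup folder; the SMTP service
    periodically scans this folder and attempts delivery."""
    path = Path(pickup_dir) / "smtp_test.eml"
    path.write_text(message)
    return path

# Example (default IIS pickup path; addresses are placeholders):
# drop_in_pickup(r"C:\Inetpub\mailroot\Pickup",
#                compose_message("[email protected]", "[email protected]",
#                                "Test IIS Pickup", "Testing SMTP Pickup and Delivery."))
```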
Keywords: EM alerts
mail notification failure
could not open connection to the host
connect failed error
References: None |
Problem Statement: We have some IP_TextDef records that we want to insert data into for the past time. But before we do this, we need an estimate of how much disk space we will need for them. How much disk space is needed to insert a historical value for an IP_TextDef record? | Solution: The IP_TextDef record details in the Aspen InfoPlus.21 Definition Editor shows the IP_#_OF_TREND_VALUES Repeat Area Summary table.
From this table, we could find the field data types for the four fields included in one historical value.
IP_TREND_TIME: XTimestamp -- 8 bytes
IP_TREND_QLEVEL: 5-bit Unsigned Integer -- 0.625 byte
IP_TREND_QSTATUS: 16-bit Signed Integer -- 2 bytes
IP_TREND_VALUE: 80-Byte Character -- 80 bytes
Therefore, for an IP_TextDef record, the total space needed for one historical value is 90.625 bytes (8+0.625+2+80=90.625 bytes).
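As an illustration, this arithmetic can be scripted to estimate the space for any number of historical values. The sketch below is ours (the function name is not part of any Aspen API) and simply uses the field sizes from the table above:

```python
# Field sizes in bytes, from the IP_#_OF_TREND_VALUES repeat area summary:
IP_TREND_TIME = 8        # XTimestamp
IP_TREND_QLEVEL = 0.625  # 5-bit unsigned integer
IP_TREND_QSTATUS = 2     # 16-bit signed integer
IP_TREND_VALUE = 80      # 80-byte character

BYTES_PER_VALUE = IP_TREND_TIME + IP_TREND_QLEVEL + IP_TREND_QSTATUS + IP_TREND_VALUE

def estimate_bytes(num_values: int) -> float:
    """Approximate disk space, in bytes, for num_values IP_TextDef history entries."""
    return num_values * BYTES_PER_VALUE

# Example: one value per minute for 30 days
values = 60 * 24 * 30
print(f"{estimate_bytes(values) / 1024**2:.1f} MiB")  # roughly 3.7 MiB
```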
Keywords: IP_TextDef
Disk space
Historical value
Bytes
References: None |
Problem Statement: Why is hourly TDS showing minute granularity when using a CSV Datasource?
TDS is set to hourly, but data can be seen coming through every minute | Solution: Users will experience this issue if the Interpolation mode is set to None within the CSV Datasource.
1. Launch Aspen Mtell System Manager
2. Click Configuration tab and select Sensor Data Sources
3. Select CSV Sensor Data Source
4. Select either Linear or Stair Step Interpolation Mode to interpolate data
5. Click Save.
Keywords: Processed data
TDS
Granularity
References: None |
Problem Statement: How to establish Aspen InfoPlus.21 for ProMV connection with Cim-IO? | Solution: Once you have Installed and configured a Cim-IO interface, you can now establish a Cim-IO IP.21 connection.
1. Start Cim-IO IP.21 Connection Manager.
2. To begin configuring a connection, click Create a new one.
3. The Cim-IO IP.21 Connection Wizard will appear. Enter a device name that represents the connection between the client and the Cim-IO server computer in the Logical device name field. Click Next.
4. On the Configure Connection screen, enter the name of the Cim-IO server Computer. Then click the Discover button to populate a list of services on that computer. Select the Cim-IO service you want to establish a connection to. If you would like IP.21 to establish a connection to a second, redundant Cim-IO server provided the interface is of the same type and has the same Cim-IO service name, check the Enable Cim-IO redundancy checkbox and enter the Redundant server computer name. Click Next.
5. On the Summary screen, review the information. Click Next to create and start the connection.
6. Click Finish. You will be returned to the Cim-IO IP.21 Connection Manager. Here, you can see the Cim-IO IP.21 connection that you created has started running.
Editing Cim-IO Global Connection Settings
Cim-IO global connection settings apply to all Cim-IO connections.
1. Select Cim-IO Connections from the left-hand panel.
2. Here you can edit the global connection settings. On the Variables tab edit the following settings:
Enable changeover standby Cleanup – Selecting this checkbox causes the Changeover task to tell the node that becomes secondary to clean up its scan and store lists, and to clear its store file. Only the store file from the primary node will be accepted and processed. Clearing this checkbox causes both store files to be forwarded and processed. Duplicate and older values will be rejected.
Enable client boxcar deadbanding – Selecting this checkbox causes the Cim-IO client to monitor the gap between points. If the gap between the current point time (Tcurrent), and the previous point time (Tprev) is greater than twice the update time (2*Tupdate), the Cim-IO client will insert in history an artificial point with timestamp Tcurrent – Tupdate and value Vlastknown, where Vlastknown is the last known value of the point before the present value. Then it will insert in history the point’s present value. Note that the deadband used in this algorithm is defined on a point-by-point basis in the corresponding transfer record. Clearing this checkbox causes the Cim-IO client to simply join the current value to the previous as indicated by the dashed line. This is the default behavior.
Inactive PUT when IO_DEVICE_PROCESSING is set to ‘ON’ – Selecting this checkbox causes the Put record to be sent when the record is activated. Clearing this checkbox causes the following behavior: When an IoPutDef record’s parent IoDeviceRecDef has its IO_DEVICE_PROCESSING set to ON, the Cim-IO client is notified and the device’s Put records are activated.
Update points if status changes or the deadband is exceeded – Selecting this checkbox will cause the point to be updated whenever its status changes or its deadband is exceeded. Clearing this checkbox will cause the Cim-IO clients to ignore data value changes when the data quality remains BAD.
Reject old data from Store and Forward – Selecting this checkbox causes the Cim-IO client to skip any previously added store files. As soon as the client tasks detects that while processing forwarded data for a tag, a sample is detected with a timestamp older than the timestamp of the most recent value inserted, the whole file will be rejected. Clearing this check box causes the Cim-IO client to process all the files being recovered.
Use scan off – Selecting this checkbox causes the Cim-IO clients to check any error codes returned by points against those in the comma-separated list of integers in CimIODeviceScanOff. If the return code is in this list, the point will be set to “Scan Off” to minimize the amount of points that will return errors. Clearing this checkbox causes the list in CimIODeviceScanOff to be ignored.
Changeover timeout – This redundancy specific variable takes the following values. The scenario where the variable is applied is as follows: When performing a Changeover, the Cim-IO client side will attempt to terminate its TCP/IP connection to the failed server, in case the server is still running in some partially functional state. For every transfer record, the Cim-IO client side will issue a stop get, stop put, or cancel request waiting this amount of time for a response, before closing the TCP connection and establishing a connection to the secondary server
1 tenth of a second
1/2 second
1 second
10 seconds
30 seconds
1 minute
5 minutes
10 minutes
30 minutes
1 hour
Dual failure delay – This redundancy specific variable takes the following values. The scenario where the variable is applied is as follows: If the network connection between the InfoPlus.21 Server and both redundant Cim-IO Server nodes goes down but the two redundant nodes remain active gathering data and storing it. This variable regulates the behavior of the Changeover task in InfoPlus.21, in the event that communication with the secondary node comes up first, to decide whether it should become the active one by allowing reasonable time for the primary to restart. If the primary system does not come up during the timeout, then the secondary node becomes the active system. In this way, Cim-IO ensures that the primary’s store file will be the one to be recovered.
1 tenth of a second
1/2 second
1 second
10 seconds
30 seconds
1 minute
5 minutes
10 minutes
1 hour
Scan off return code – CIMIODeviceScanOff is a comma-separated list of integers each representing a status code returned by the Cim-IO Server that the site has chosen to avoid by taking the points reporting them off scan. If DisableCIMIODeviceScan is set to “Use Scan Off”, then any point returning any error code in this list will be set to SCAN OFF to minimize reads of bad points. This is generally used to eliminate points that return an “invalid tag name” code. Because this setting is used in conjunction with DisableCIMIODeviceScan, setting this value to a non-empty string will set DisableCIMIODeviceScan to 0 (“Use Scan Off”). Setting this value to an empty string will set DisableCIMIODeviceScan to 1.
Max log size(Mb) – The maximum size that Cim-IO log files can reach. This max applies to the standard log files CIMIO_MSG.LOG and its backup.
Ping frequency(sec) – This is the rate at which Cim-IO connections will be tested for “aliveness” with a Cim-IO Ping.
Max ping failure – This is the number of Cim-IO Ping calls that must fail in order for Cim-IO to consider that a connection is down.
Cim-IO on unavailable flag – When a point becomes unavailable, its status is set to bad. Since this is an event, not a data transmission, there is no timestamp associated with it. This variable regulates how to treat the timestamp on that circumstance. Update timestamp to current time uses the current time for this event. Update timestamp to most recent time + 0.1 seconds causes Cim-IO to use the point’s last timestamp plus 0.1 seconds.
Cim-IO rescan logical devices – This variable indicates to Cim-IO client tasks whether redundancy is enabled or not. When Enabled, the client tasks follow device reconfigurations made by the Changeover task as a result of a switch, thus the name of the variable. If Classic: Cim-IO runs with no redundancy enabled is selected, then no redundant processing or verification is done by the clients, even if all devices in the system have been configured to use redundancy. If Redundant: Cim-IO changeover used for a redundant setup is selected, the client and the Changeover task creates a special file in the CIM-IO\io folder with an .ldv extension. This file is used for coordinating the Changeover process and therefore should not be modified.
Cim-IO send cleanup cancels – The scenario where this variable applies is as follows: When one of the redundant nodes has failed, the Changeover task must send a cleanup request to this node to cleanup all files and the connection. By default the task also sends requests to cancel all declared unsolicited tags. If the switch happened as a result of a network failure, there is no communications with the node and therefore every attempt to cancel tags will be timed out. If the number of unsolicited tags and transfer records is significant, the cancel requests could delay considerably the overall Changeover operation. For cases like this, this variable provides an option to skip canceling unsolicited tags as part of the cleanup. Send CANCELs when cleaning up causes Changeover to cancel unsolicited tag declarations as part of the cleanup. Send only a DISCONNECT when cleaning up eliminates the cancellation of unsolicited tag declarations.
3. Click Diagnostics Logging to edit the following settings:
Diagnostic Logging – Turn diagnostic logging on, off, or disable the feature.
Maximum log size(Mb) – The size that the log file is allowed to grow to.
Reload interval(Sec) – Specifies how often the cimio_diag.cfg file will be checked for changes and it will be reloaded if the file has been changed.
Thread lock timeout(Sec) – In multithreaded applications, this controls how long one thread will wait for another to update the diagnostics control structures before timing out. If a timeout occurs, an error will be logged to cimio_msg.log and the diagnostics will be disabled to prevent program crashes.
Logfile lock timeout(Sec) – The time a program will wait to obtain a lock on a diagnostic output file before timing out. If the program cannot get a lock, it will still log the diagnostic.
Default log – All diagnostics that have not been configured to go to a specific file will go to this file. The default is cimio_diag.log.
4. Click Checksum to edit the following settings:
Enable checksum – Select or clear this checkbox, depending on whether you want a checksum performed on each message.
Excluded nodes – When Enable checksum is selected, you will have to exclude older Cim-IO versions that do not have the checksum functionality. To exclude a node, enter the node, then click Add. To remove the exclusion, select the node and click Remove.
5. Click Manager Parameters to edit the following settings:
Connect timeout (Sec) – The time, in seconds, that the Cim-IO Manager will wait on a response from a Cim-IO server process.
Health check frequency (Sec) – The frequency, in seconds, in which the Cim-IO Manager will check for the existence of a Cim-IO server’s processes.
6. Click Save Configuration when finished.
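The client boxcar deadbanding behavior described in step 2 can be illustrated with a simplified sketch. This is our own illustration, not Cim-IO code: the function name and the (timestamp, value) point representation are assumptions, and the per-tag deadband configured in the transfer record is omitted. When the gap between consecutive samples exceeds twice the update time, an artificial point with timestamp Tcurrent - Tupdate and the last known value is inserted:

```python
def boxcar_fill(points, t_update):
    """points: list of (timestamp, value) pairs in ascending time order.
    Inserts an artificial (t_cur - t_update, v_lastknown) point whenever the
    gap between consecutive samples exceeds 2 * t_update."""
    out = []
    for t_cur, v_cur in points:
        if out:
            t_prev, v_prev = out[-1]
            if t_cur - t_prev > 2 * t_update:
                # Hold the last known value until one update period before t_cur
                out.append((t_cur - t_update, v_prev))
        out.append((t_cur, v_cur))
    return out

# A 30-second gap with a 10-second update time gets one artificial point:
print(boxcar_fill([(0, 10.0), (30, 20.0)], t_update=10))
# -> [(0, 10.0), (20, 10.0), (30, 20.0)]
```

Clearing the checkbox corresponds to skipping the fill and simply joining the two real samples.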
Keywords: None
References: None |
Problem Statement: CLP kicked off when one of the participating DMCplus controllers was turned off | Solution: Use the table below with the info from the controllers and use the rules of thumb below to determine appropriate timing. Also refer to the DMCplus Composite Users Guide, Solution #105590.
Controller Tier Critical CTOFF CLPTIME
FURNADMC 1 0 0 45
FURNBDMC 1 0 0 45
FURNCDMC 1 0 5 45
FURNDDMC 1 0 10 45
FURNEDMC 1 0 15 45
FURNFDMC 1 0 20 45
FURNGDMC 1 0 25 45
FURNHDMC 1 0 30 45
FBDMC 2 1 35 50
If the GCTIM is too large relative to the CLPTIMEs of the individual controllers, it will cause these sorts of errors. I use the following rule of thumb to set up my initial timings.
GCTIM = MAX(CTOFF) - MIN(CTOFF) + 5
CLPTIME for each controller = GCTIM + 5* tier number
The 5 seconds added to the CTOFF delta is to give the last controller that much time to read and validate its data. The 5 second delay in the CLPTIME is to give the CLP time to solve and transmit its plan back to the controllers.
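As an illustration only, the rule of thumb can be expressed as a short sketch (the function names are ours; the CTOFF and tier values come from the table above, and the results reproduce the CLPTIME column):

```python
def gctim(ctoffs):
    """GCTIM = MAX(CTOFF) - MIN(CTOFF) + 5 seconds."""
    return max(ctoffs) - min(ctoffs) + 5

def clptime(gctim_value, tier):
    """CLPTIME for each controller = GCTIM + 5 * tier number."""
    return gctim_value + 5 * tier

# CTOFF values from the composite table above (eight tier 1 furnaces plus FBDMC):
ctoffs = [0, 0, 5, 10, 15, 20, 25, 30, 35]
g = gctim(ctoffs)
print(g)              # 40 seconds, the GCTIM cited in the text below
print(clptime(g, 1))  # 45, matching the tier 1 controllers
print(clptime(g, 2))  # 50, matching the tier 2 FBDMC controller
```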
Whenever a controller leaves the CLP, the CLP waits the full GCTIM before solving; in this case, 40 seconds. What I suspect is that one or more of the controllers have CLPTIMEs less than or equal to 45 seconds. If this is the case and the CLP takes more than 5 seconds to remap and solve, it will issue a 15154 error message and leave the CLP, causing the shedding. Please try adjusting the timing and let us know what happens.
Keywords:
References: None |
Problem Statement: If a tag (data record) in Aspen InfoPlus.21 receives a value (perhaps a positive value indicates that a machine is running and a 0 (zero) indicates that it is not) and we want to test that value, report on it, and cause another different record to receive that value (but only if it is a positive value), how might that be coded in Aspen SQLplus? | Solution: This example uses tags defined against either IP_AnalogDef or IP_DiscreteDef. The record we read from is called 'RECSource' and the one receiving the values is called 'RECTarget'. Please substitute appropriate record names for RECSource and RECTarget when using the SQLplus code and in the QueryDef record that are relevant to your environment.
Note - the statements in green after the double dashes ( -- ) are just comments and do NOT have to be included in the record. One version with and one version without the comments is included below.
--- WITH comments ---
local yy; -- This is a local variable
yy = (select ip_input_value from RECSource); -- The variable holds the data from the source record
write 'The current time is: '||getdbtime; -- Displaying the current time
write ''; -- A blank line
write 'The value that we are measuring is: '||yy; -- Writing the value which has been retrieved
write '';
CASE -- Here we will check on the value being retrieved
when yy = 0 -- If the value is 0 then perform actions below
then write 'The machine is off.'; -- Indicate that the machine is not running
when yy > 0 -- If the value is greater than 0 then perform actions below
then write 'The machine is on.'; -- Indicate that the machine is running
update RECTarget set ip_input_value = yy; -- Update the target record with the (positive) value
end; -- The end of checking on the value (and the end of the query)
--- WITHOUT comments ---
local yy;
yy = (select ip_input_value from RECSource);
write 'The current time is: '||getdbtime;
write '';
write 'The value that we are measuring is: '||yy;
write '';
CASE
when yy = 0
then write 'The machine is off.';
when yy > 0
then write 'The machine is on.';
update RECTarget set ip_input_value = yy;
end;
Enter the code above in the Aspen SQLplus Query Writer and save it as a QueryDef record. Once it is saved, expand the #WAIT_FOR_COS_FIELDS repeat area in that record to 1 and set the following values:
WAIT_FOR_COS_FIELD = RECSource IP_INPUT_VALUE
COS_RECOGNITION = all
SET_OPTION = none
The above actions will cause the SQLplus code to be run any time there is a new value in the IP_INPUT_VALUE field of the RECSource record.
Here are some examples of what the output would look like:
The current time is: 20-JAN-20 18:24:42.8
The value that we are measuring is: 0
The machine is off.
The current time is: 20-JAN-20 18:28:18.2
The value that we are measuring is: 1
The machine is on.
1 row updated.
Keywords: None
References: None |
Problem Statement: How to import asset hierarchy information in Aspen Mtell System Manager? | Solution: 1. Open Aspen Mtell System Manager
2. Select the Equipment Tab
3. Select the Import Objects button in the toolbar above
4. Select Enterprises
5. Navigate to the file where the equipment sets are stored
6. Make sure all import fields are processed with no error messages and then click ok
7. Click on the import objects button and select Sites
8. Navigate to the file where the equipment sets are stored
9. Make sure all import fields are processed with no error messages and then click ok
10. Click on the import objects button and select Locations
11. Navigate to the file where the equipment sets are stored
12. Make sure all import fields are processed with no error messages and then click ok
13. Click on the import objects button and select Assets
14. Navigate to the file where the equipment sets are stored
15. Make sure all import fields are processed with no error messages and then click ok
16. Click on the Refresh Equipment button on the left next to the asset hierarchy
17. You will now be able to see all the imported asset hierarchies.
Keywords: Asset Hierarchy
import
References: None |
Problem Statement: Mtell System Manager prompts the following error upon Login (Enabled Mtell Security)
Error Message:-
(Default Exception Handler) Application Exception: Number must be either non-negative and less than or equal to Int32.MaxValue or -1. Parameter name: dueTime | Solution: This error message appears when the Aspen Mtell Security Timeout is configured above 35791 minutes.
1. Launch Aspen Mtell System Manager
2. Click Configuration tab and select Settings and click Security Settings
3. Set the Timeout After Value below 35791 Minutes
4. Click Save
The changes will take effect after restarting Aspen Mtell System Manager.
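The 35791-minute limit follows from the error text itself: the underlying .NET timer dueTime is a signed 32-bit number, and assuming the timeout is converted from minutes to milliseconds (which the error message suggests, but is not confirmed by AspenTech documentation), the largest whole-minute value that fits is:

```python
# Int32.MaxValue from the error message, interpreted as milliseconds.
INT32_MAX = 2**31 - 1                     # 2147483647
max_minutes = INT32_MAX // (60 * 1000)    # 60000 milliseconds per minute
print(max_minutes)                        # 35791
```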
Keywords: Application Exception
Mtell Security
Timeout After
Int32.MaxValue
dueTime
References: None |
Problem Statement: Why do I sometimes get the message that the molecular weight (MW) for a component is different than the molecular weight calculated from the formula? Which one is used?
The message is similar to the following:
MW AVAILABLE FOR COMPONENT GE IS DIFFERENT FROM MW CALCULATED
FROM FORMULA (ATOMNO/NOATOM). CALCULATED VALUE IS USED.
AVAILABLE MW = 72.64000 CALCULATED MW = 72.61000 | Solution: By default, the molecular weight is calculated from the atomic formula. Molecular weight is available in all Aspen Properties databanks; however, the databank MW value may not contain enough significant figures for certain applications for which atomic balance is important, such as reactor modeling. Calculating the molecular weight from the atomic formula makes the molecular weights consistent for all reactions. The calculated MW is more accurate than the databank MW.
Do not use this option with components like HE-3 which represent specific isotopes.
If you do not want Aspen Properties to calculate the molecular weight but instead use databank values, then clear the Calculate Component Molecular Weight From Atomic Formula check box.
The atomic mass of the atoms can be found in Knowledgebase document 95561.
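The formula-based calculation is simply a sum of atomic masses weighted by the ATOMNO/NOATOM atom counts. A minimal sketch (the atomic masses here are illustrative rounded values, not the ones in the Aspen Properties databank):

```python
# Illustrative atomic masses in g/mol; Aspen Properties uses its own table.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def mw_from_formula(formula):
    """Sum atomic_mass * atom_count over a component's atomic formula."""
    return sum(ATOMIC_MASS[atom] * count for atom, count in formula.items())

# Example: water, H2O -> 2 * 1.008 + 15.999
print(round(mw_from_formula({"H": 2, "O": 1}), 3))  # 18.015
```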
Keywords: None
References: : VSTS 488967 |
Problem Statement: Getting Vectors from an Excel Spreadsheet into Aspen DMCplus Model | Solution: Getting vectors from an Excel spreadsheet into Aspen DMCplus Model:
1. Save the file from Excel as a tab-delimited text file.
2. Start Aspen IQModel, create a new project, and click Specify Data; accept the default choice by clicking the OK button.
3. In the Select a file with input data dialog, select the file and click the Open button to import it.
4. In the File Format dialog, select or create a format that matches your dataset file, and click the OK button.
5. In the Data Specification dialog, select a variable as the Dependent variable and click the OK button to confirm.
6. Back at the process chart, click on the arrow between Specify Data and Condition Data, and select Input data.
7. Click the diskette symbol on the top toolbar, set the Save As type to *.clc, provide a new file name, and click Save.
8. Use the CLC file saved from Aspen IQModel as input to Aspen DMCplus Model.
Keywords: DMCplus, Model, Vector, Excel, Spreadsheet
References: None |
Problem Statement: Is it possible to use an Extract block with a petroleum assay? | Solution: The Extract block is used to model the rigorous counter-current extraction of a liquid with a solvent.
The NRTL and UNIQUAC binary parameters for water and pseudocomponents are intended for use in LLE calculations, as water and hydrocarbons tend to form two liquid phases. These interaction parameters are estimated from mutual solubility data. See knowledge document 81223 for more information.
Attached is an example file, based on the properties in that example, which will run in V9 and higher.
Note that in order to select a pseudocomponent as one of the key components as shown below,
The pseudocomponent IDs need to be generated for the assay as shown below:
Keywords: Pseudocomponent, NRTL, UNIQUAC, Extract
References: None |
Problem Statement: How to include Vol.% Curves or Wt.% Curves in stream results? | Solution: After a successful run, Vol.% Curves / Wt.% curves can be analyzed for the specific stream from Stream Analysis (available under home tab) and by selecting the Distillate curve.
If this distillate / petroleum curve analysis must be performed for all available streams during every run and viewed under stream results, the following workaround is needed:
Enabling Property Sets for the desired Distillate / Petroleum curve
Create a new Property Set
Include the desired distillate curve (example – ASTM D86 curve) from the search window
Save the Property Set
Adding the prop-set in the stream result template
Include the newly created Prop-Set in the stream result by navigating to Setup -> Report Options -> Stream -> Property Sets -> Property Sets window
Enabling the Property Set in Stream Results template
Navigate to Stream Summary tab -> Select Properties
In the Edit Stream Summary Template window, select Add Report Prop-Set button available under Select Properties
Run the model to view the stream analysis results under the Vol.% Curve
Keywords: Vol.% Curves, Wt.% Curves, ASTM D86, ASTM D1160, Petroleum Curves, Distillate Analysis
References: None |
Problem Statement: My Learning Performance Metrics don't match the Agent Performance Metrics | Solution: The Sample Learning Performance Metrics are calculated at the end of training on the entire data set date range, and are based entirely on the default settings for an agent (minimum alarm duration, probability threshold, etc.).
This is an example of the Learning Performance Metrics straight after training
The Agent Performance Metrics are calculated based on the selection when populating Agent Probability Trend. The metrics will be based on the Date Range, Filter Granularity, Minimum Alarm Duration, Threshold etc. The Metrics will change in correlation with any settings changes and any new data from your historians.
This is an example of the Agent Performance Metrics when launching the Agent Probability Trend. You will notice that the metrics do not match.
It is unlikely that these two tables will match unless it is observed immediately after training the agent - any changes to the agent will cause changes in the Metrics.
Keywords: Mtell
Agent
Performance Metrics
Probability Trend
References: None |
Problem Statement: How do I register a CAPE OPEN property package (cota) file? | Solution: The CAPE OPEN property packages exported from Aspen Plus are automatically registered on the computer from which they have been exported. For CAPE OPEN property packages exported on another computer, you need to register them. In V8.8 and earlier, the CAPE OPEN property package manager cotappm.exe can be used to register the property package.
This program is installed in \aspentech\APRSystem xx.x\Engine\Xeq (adapt the path according to the version of Aspen Plus)
Double click on cotappm.exe program.
In the File menu, browse to locate your .cota file.
Again in the File menu, select the Save command.
If you need to do this frequently, you can associate the cota files with the cotappm.exe application.
Starting in V9, CAPE-OPEN Property Packages using the version 1.1 standard can be imported and exported without registering them with cotappm.exe. Users without administrative privileges on their computer can now use CAPE-OPEN Property Packages. Exporting and registering CAPE-OPEN Property Packages no longer uses cotappm.exe; exported packages are automatically written to the %LOCALAPPDATA%\AspenTech\CAPE-OPEN Property Packages V11 folder. %LOCALAPPDATA% is normally C:\Users\(username)\AppData\Local, but it could be elsewhere depending on the Windows installation. To install CAPE-OPEN Property Packages from old versions or other computers, add them to this folder.
Keywords: registry
CAPE-OPEN
References: None |
Problem Statement: How to estimate distillation column relief load for reflux failure scenario? | Solution: Relief calculation is one of the most discussed aspects of chemical engineering design. Licensors and American Petroleum Institute (API) specify the broad boundaries of “dos” and “don’ts” for relief system analysis and sizing. Still, much is left for engineering judgement to define the optimum safe design.
The detailed reflux scenario is defined by the following considerations, driven by guidelines in API Standard 521:
All pumps and compressors driven by electric motors are assumed offline.
Where both electric motor and steam turbine drivers are available for a given service, the turbine driver is assumed to be in service
Electric fans on air-cooled heat exchangers are assumed offline.
No credit is taken for any favorable instrument response from automatic control valves during the relieving period to mitigate relief.
The upset conditions in the upstream (reaction) section of the unit affect the downstream (stripping and fractionation) sections and must be accounted for.
Unbalanced Heat (UBH) approach:
The conventional approach of tower relief load calculation, especially for grassroots units, is to balance the unbalanced heat across the tower during an upset scenario. Although the unbalanced heat method has its limitations, it is one of the most trusted methods for relief load estimation for a distillation column. One of the basic assumptions for the method is an unlimited supply of liquid to the top tray, and the liquid is considered to vaporize from the top tray during a relieving scenario. This results in a conservative (high) relief load owing to the low latent heat of vaporization of top-tray liquid.
However, it is important to recognize that top-tray liquid is lighter and demonstrates a lower relieving temperature, risking incorrect material selection and design of the column overhead, relief valve and downstream system, in many cases. The effect on relieving temperature is more pronounced in a column that has a wide range of boiling temperatures between the top and bottom trays. The total inventory of the system, including the diameter of the column and the number of side draws and side strippers, is also critical in the scenario in question. If the re-boiling/stripping is continued for a relief scenario, then the likelihood of column overheads being exposed to higher boiling fluids during that relief scenario is more realistic for a small-diameter column with no or a limited number of side draws, rendering the design overhead system vulnerable to high temperature exposure.
The method employs a heat and material balance around the column envelope (see Figure 2) at relieving conditions in order to estimate excess heat input. The relief rate (WA) is calculated from the excess heat divided by the latent heat of the relieving material:
WA = (QI - QO + Σ(WF HF) - Σ(WP HP)) / LA
where
WA = Accumulation rate, kg/h
WF = Summation of rates of all feed streams, kg/h
WP = Summation of rates of all product streams, kg/h
LA = Latent heat of vaporization of the accumulation, kcal/kg
QI = Summation of all heat inputs, kcal/h
QO = Summation of all heat removed, kcal/h
WF HF = Summation of the products of each feed rate times its enthalpy, kcal/h
WP HP = Summation of the products of each product rate times its enthalpy, kcal/h
The excess heat calculation considers the enthalpy of each stream at relieving pressure assuming all product stream compositions remain constant. An endless supply of relieving material is assumed available (typically represented by the top tray liquid of the column). Normally, no credit is taken for the following mitigating factors:
Compositional changes including depletion of light components
Accumulation of mass within the system volume as pressure increases
Hydraulic limitations
Overhead cooling before the reflux drum is flooded.
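As a numerical sketch of this balance, the accumulation rate WA can be computed directly from the terms defined above; all stream data in this example are hypothetical, chosen only to illustrate the arithmetic:

```python
def ubh_relief_rate(feeds, products, q_in, q_out, latent_heat):
    """Unbalanced-heat relief (accumulation) rate WA in kg/h.

    feeds, products: lists of (rate kg/h, enthalpy kcal/kg) pairs.
    q_in, q_out: total heat input / heat removed, kcal/h.
    latent_heat: LA of the relieving (top-tray) liquid, kcal/kg.
    """
    wf_hf = sum(w * h for w, h in feeds)      # sum of WF * HF
    wp_hp = sum(w * h for w, h in products)   # sum of WP * HP
    unbalanced_heat = q_in - q_out + wf_hf - wp_hp
    return unbalanced_heat / latent_heat

# Hypothetical column at relieving conditions: one feed, two products,
# reboiler duty only (condenser duty lost in the upset, so QO = 0).
wa = ubh_relief_rate(
    feeds=[(100000.0, 150.0)],
    products=[(40000.0, 120.0), (60000.0, 180.0)],
    q_in=5.0e6,
    q_out=0.0,
    latent_heat=60.0,
)
print(round(wa))  # 73333 kg/h
```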
The UBH method is simple, effective, conservative and it is also subject to the following key limitations:
The UBH method is unreliable for scenarios where the upset leads to a significant compositional change within the column envelope. A typical example is a blow-through of vapor from an upstream high-pressure section to the low-pressure column via a failed open control valve.
UBH can over- or underestimate relief loads for systems where the column energy balance is sensitive to minor compositional changes.
UBH is not normally suitable for complex systems such as:
Reactive distillation columns
Columns with relief occurring near the critical region
Scenarios with significant transient effects such as a major upset upstream affecting feed condition.
Steady State Approach:
Steady state modelling is recommended as an alternative approach for column relief calculations where the UBH method is not applicable. It is most commonly employed in scenarios where compositional changes affect the relief load (vapor blow-through case), as well as systems that are highly sensitive to product composition (strippers or stabilizers). The upset cases are directly simulated in a steady state model at relief conditions along with any feed changes, and the resulting product stream compositions are predicted by thermodynamics.
Steady state modelling is also useful for relief cases where conditions in the column are near critical temperature and pressure. Because the liquid and vapor densities approach one another in these cases, the reduction in liquid volume through boiling is significant compared to the generation of vapor volume. Taking credit for this volume exchange can reduce the estimated relief load significantly.
While steady state modelling is valuable for the scenarios discussed above, for most systems it offers only incremental advantages over the UBH method. The transient effects of an upset case are still ignored by assuming constant stream conditions, and no credit is taken for system volume or hydraulic effects. Additionally, steady state models fail to converge for some upset cases where the column trays run dry due to loss of liquid loading (as a result of reflux failure or loss of feed), limiting the application of this method.
Aspen Safety Analysis:
Aspen Safety Analysis only allows the user to manually input the vapor rate from the column, with a safety multiplier factor, for the reflux failure scenario type. There is no rule of thumb or set of consideration factors for the usage of this multiplier.
Keywords: Reflux failure scenario, Relief load calculation, UBH approach, Un Balanced Heat, Steady state simulation, API 521, Safety Multiplier
References: None |
Problem Statement: Why a site name will not appear in System Manager but will appear in Agent Builder?
When a site with the same ID as its enterprise is imported into Aspen Mtell System Manager via csv file, and the csv file does not contain a Site UID, the new site will not appear in System Manager even though it appears in Aspen Mtell Agent Builder.
In the example below both the Enterprise ID and Site ID are 'TestEnt', and the Site UID column is empty.
csv file:
After the csv file is imported:
System Manager, does not show any Site name under TestEnt Enterprise
However, Agent Builder does show the site TestEnt under the TestEnt enterprise | Solution: An enterprise and the sites under it should not share the same ID. The Enterprise ID and Site ID should match your EAM system, so please check your EAM system to confirm whether the IDs are the same.
Note: Creating a new Site with the same ID as its enterprise is not allowed when entering the asset hierarchy manually.
Keywords: Enterprise ID
Site ID
Asset hierarchy
References: None |
Problem Statement: How to create Live Agent Groups? | Solution: When a live agent is deployed, it is grouped by Live Agent Group. The Live Agent Group decides how agents behave when an alert is triggered. If an alert is triggered for an agent that is part of the Default Live Agent Group, then other agents for the same asset will suppress their alerts.
How to create a live agent group
Launch Aspen Mtell System Manager
Select Configuration tab from the ribbon
Select Settings options and Click Live Agent Groups
In the ribbon select Create new agent group
Type the name of the new live agent group
Decide whether this group should be the default and tick the box accordingly
Press the save button in the ribbon
Selecting Live Agent Group when deploying live agent
Launch Aspen Mtell Agent builder navigate to Machine Learning tab
Right click on the desired agent
Select deploy live
In the third page of the wizard in the drop-down menu select the name of the newly created Live agent group
Go through the wizard and deploy the agent
Going back to the Live Agent Groups section in Aspen Mtell System Manager, it will show the agents deployed in the group.
Note: If you select the Live Agent Group option None, then alerts will be triggered for all live agents and will not be suppressed.
Keywords: Live agent groups
alert suppressed
why no alert
References: None |
Problem Statement: Crystallizer Model for PET, PBT process in Aspen Plus | Solution: The crystallizer model in Aspen Plus is intended for conventional chemicals that crystallize from solution due to super-saturation conditions. It considers nucleation and growth of the particles. This model is not appropriate for modeling solidification from the melt phase or for polymers in general.
Polymers exhibit two state transitions: they have a melting point and a glass transition temperature below which the material becomes brittle. Most polymers can only crystallize to about 40-50 weight %. Crystallites form inside the melt phase and expand outward. Crystal growth is limited by chain entanglement. The initial crystallization is very fast (minutes-hours), reaching conversions of around 40%, then there is a much slower “annealing” process that occurs over longer time spans (10-30 hours).
A simple quick model of polymer solidification can be developed using RSTOIC like in the attached example. In this example I have defined the polymer as an “oligomer” component (to use property methods to calculate the properties). I needed to add the segments corresponding to terephthalic acid and butane diol. You can represent crystallization in one of two ways:
PBT (polybutylene terephthalate)
1. Define a reaction PBT (mixed) -> PBT (CISOLID)
2. Define a reaction PBT -> PBT (CRY)
We used approach (1) in the attached example. By convention, polymer in the CISOLID phase is treated as a pure crystalline polymer.
Approach (2) could avoid the need for the CISOLID substream in Aspen Plus. When using this approach, set the unary property parameter POLCRY to 1.0 for the PBT(CRY) component. The POLCRY property is the weight fraction crystallinity of a polymer component.
Aspen Polymers can estimate the melt transition and glass transition temperatures of the polymer components. You can include property set “TMELT” or “TGLASS” to see the results of this calculation.
Keywords: Crystallizer, PET, PBT
References: None |
Problem Statement: Where are aspenONE Process Explorer plotting comments stored? | Solution: aspenONE Process Explorer stores all comments in a record named Comments defined by IP_CommentDef.
Keywords: Comments
InfoPlus.21
References: None |
Problem Statement: When you attempt to connect Aspen Simulation Workbook (ASW) to an Aspen Plus model, you get the error message failed to create COM object Apwn.Document.xx.0. The problem occurs only when Excel is not running as a local administrator. | Solution: Excel does not normally need to be a local administrator to launch simulation models from ASW.
One possible cause is that some program such as the Appsense Application Manager can interfere with privileges for starting applications.
The solution is to disable Appsense and connect to the model from ASW. Once this has been done, you may be able to restart Appsense without it interfering with launching models.
Keywords: None
References: : VSTS 513912 |
Problem Statement: How to install and configure Remote Simulation service on a Remote Server for remote ASW applications? | Solution: Aspen Simulation Workbook allows you to run simulations remotely over the network to take advantage of faster computers or applications which you do not have installed locally.
A remote model runs under the user profile configured to run the Aspen Remote Simulation Service when the service is installed on the server; restricted (non-administrator) access on the client is sufficient. ASW places the files for the remote model in a temporary folder under this user profile on the server while the model is run.
Aspen Simulation Workbook must be installed on the client (end-user) computer. A separate installation package (V11 only) is available for download from this link: https://esupport.aspentech.com/apex/S_Article?id=000048224
For previous version installation, custom installation of Aspen Simulation Workbook can be done.
Aspen Remote Simulation Service (available under Server Product and Tools) and the simulation application must be installed on the server computer through custom installation. Server OS is required for installation of Aspen Remote Simulation service.
After successful installation, open the Aspen Remote Simulation Service application from the Aspen Engineering Tools folder and start the service. Ensure the firewall on the server is turned off; otherwise, add an exception for “C:\Program Files (x86)\AspenTech\Aspen Remote Simulation Service V#\AspenTech.AspenCXS.RemotingSvc.exe” to the firewall application.
On the end-user (client) computer, which has an excel file with / without embedded simulation, enable Execute case on remote server check box found in Aspen Simulation Workbook | Organizer | Configuration | Simulation. Provide the remote server host name (server name) and remote port number (9011 for V11, 9010 for V10). Optionally the connection can be tested.
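Reachability of the remote simulation port can also be verified from the client with a short script; the host name below is a placeholder, and the port is 9011 for V11 or 9010 for V10:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "my-remote-server" is a placeholder for your Remote Simulation Service host.
print(port_open("my-remote-server", 9011))
```

A True result only confirms the port is reachable through the firewall; the Test connection button in ASW still verifies the service itself.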
When the embedded / loaded simulation file is connected, it is temporarily copied to the remote server and ASW actions are carried out per the user inputs. When execution is enabled on the remote server, simulation file visibility is disabled.
After successful start of the simulation on the remote server, same can be verified with the logs on the Aspen Remote Simulation Service application
Keywords: Aspen Simulation Workbook, Aspen Remote Simulation Service, Remote Server, Server Product and Tools
References: None |
Problem Statement: The Foxboro I/A OPC server for AW nodes requires the AW node name to be used in the addresses listed in the IO_TAGNAME fields of Aspen Cim-IO transfer records. When TSK_DETECT switches scanning from one node to a redundant node, the node name embedded in the IO_TAGNAME fields must change. This article explains how to change the AW node names in IOGetDef records when using Aspen Cim-IO Redundancy. | Solution: This solution assumes you have already defined a redundant Cim-IO logical device for the Foxboro I/A OPC Server.
Download the query ChngAWNodes.txt attached to this solution and open it in the Aspen SQLplus query writer. Set the macro variable logicaldevname to the Cim-IO logical device name for the Foxboro I/A OPC server, the macro variable primarynode to the physical node name of the primary Cim-IO server, and secondarynode to the physical node name of the secondary Cim-IO server.
Save the query as a CompQueryDef record named ChngAWNodes.
After refreshing the Aspen InfoPlus.21 Administrator, use the Aspen InfoPlus.21 Administrator to expand the field IO_#TAGS in the record TSK_DETECT to determine the occurrence number associated with the logical device.
Next open the record ChngAWNodes and set the field #WAIT_FOR_COS_FIELDS to 1. Expand this repeat area and set the field WAIT_FOR_COS_FIELD to TSK_DETECT n IO_ACTIVE_DEVICE where n is the occurrence number you found in the previous paragraph. Also, change the field COS_RECOGNITION to ALL.
Aspen InfoPlus.21 activates the query ChngAWNodes when the field IO_ACTIVE_DEVICE in TSK_SAVE changes from Primary to Secondary or from Secondary to Primary. The query first turns off device processing for the logical device. Then, the query substitutes the name of one AW node for the other in the IO_TAGNAME fields in all the IOGetDef records associated with the logical device that have the field IO_RECORD_PROCESSING set to ON. After that, the query turns on device processing for the logical device and activates all the IOGetDef records associated with the logical device that are turned ON.
If you use transfer records not defined by IOGetDef, then you may substitute the definition record name of the transfer record you are using for IOGetDef.
Keywords: redundancy
redundant
Fox
Foxboro
I/A
AW
node
References: None |
Problem Statement: How do I switch from one SQL database to another SQL database using Aspen Mtell System Manager? | Solution: Launch Aspen Mtell System manager.
Select Configuration tab
Select Database Connection from the Menu
In Data Source select the Name of your Source and in Database Name enter the name of the new Database you wish to switch to.
Click the Test Connection icon denoted by a blue stack and green check mark.
Verify that the test results indicate that Database connection test successful and click ok.
Close Mtell System Manager and open the program again.
Keywords: Switch database
move from one database to another
connect database
References: None |
Problem Statement: A step by step guide on how to reset Mtell View authentication if your access is denied with the error 'Unable to load trend data' (as shown in the image below). | Solution: 1. Navigate to C:\inetpub\wwwroot\AspenTech\AspenMtell\MtellView\
2. Open Web.config with Notepad
3. 'Allow users' default will be equal to * as seen in the following image
4. We need to set 'Allow users' to windows. Change the * to windows as seen in the following image
5. Save the Web.config file
6. Restart system to sync the configuration with Windows Active Directory
7. Mtell View should work normally on restart.
Keywords: Allow users, Windows, Config, Mtell View
References: None |
Problem Statement: How to set up the Process Explorer Publish remote path in Aspen Process Graphic Studio? And what is the firewall port requirement for Aspen Process Graphic Studio to remotely publish the graphic project to aspenOne Process Explorer? | Solution: After the graphic project is saved, click Tools | Options | ProcessExplorer Publish tab | Enter http://[server name] into the Remote Path | Click OK
Click File | Publish Project | To aspenOne Process Explorer. The graphic project will be remotely published to the aspenOne Process Explorer server, and we will be able to see the graphics in aspenOne Process Explorer.
We could use the Command Prompt to track the port usage.
Before the graphic project was remotely published, the port status is:
After the graphic project was remotely published, the port status is:
The port 80 was used for remote publish. As a result, in order for Aspen Process Graphic Studio to remotely publish the graphic project to aspenOne Process Explorer, the port 80 needs to be opened in the firewall
Keywords: Aspen Process Graphic Studio
Remote Publish
aspenOne Process Explorer
Firewall port requirement
References: None |
Problem Statement: What is the XTRGCST term in my objective function? | Solution: In PPIMS models using PIMS-AO there is a setting called “Calculate Holding Costs based on the Closing Inventory”, located in the XNLP settings on the Advanced tab. When this setting is active, holding costs for the target amount of inventory are included in the objective function. This setting is off by default, and that behavior is consistent with how a DR model is configured.
Keywords: None
References: None |
Problem Statement: What firewall exceptions are required for PIMS-AO and why? | Solution: In Aspen PIMS-AO, there are some firewall exceptions that are required for the case parallel processing and multi-start parallel processing to function properly. Without these exceptions, the firewall will not allow the network communication with MPI and parallel processing will not be available.
The specific firewall exceptions are:
CaseParallel.exe
MultistartParallel.exe
PIMSWIN.exe
Smpd.exe
MPIexec.exe
Please work with your IT group to configure these firewall exceptions.
Keywords: None
References: None |
Problem Statement: Why is it that when I try to open an Excel file from the PIMS model tree nothing happens? | Solution: Normally if you double-click an Excel file attached to the PIMS model tree, the file will open in Excel and can be reviewed and/or edited. However sometimes there is no response when double-clicking the file on the tree. This can happen if there are orphaned Excel processes active. To see if this is the case, close all your Excel files and sessions. Then go to Task Manager and look for any Excel sessions that are still active. If you find any, use the “End Task” option to close them. Once all orphaned Excel sessions are closed, the PIMS behavior should return to normal.
Keywords: None
References: None |
Problem Statement: How to configure Aspen Mtell System Health Monitoring? | Solution: Using Aspen Mtell you can get early, accurate warnings of equipment failures to avoid unplanned downtime, and prescriptive guidance to mitigate or solve problems. Therefore it's very important to monitor Aspen Mtell System Health.
To monitor any errors or warning in Aspen Mtell System Health, you can configure Aspen Log Manager.
It sends email reports at scheduled intervals to those administering the system about errors and/or warnings that have occurred in any connected Aspen Mtell software.
It also provides a self-managing database of log histories, so logs do not build up over time and require additional administrative oversight.
Refer to our KB article How do I configure the Aspen Mtell Log Manager?
To monitor machine resources such as high CPU usage, high memory consumption and low disk space, you can create a Rule Policy Agent.
Refer to our KB article How to create Rule Policy Agent to monitor Aspen Mtell Server Health?
To monitor critical Aspen Mtell services like Aspen Mtell Agent Service, Aspen Mtell Asset Sync, Aspen Mtell Work Sync and Aspen Mtell Training Services, you can enable Aspen Watch Dog Services to watch these processes.
Refer to our KB article How to configure Aspen Watch Dog Service to alert when critical services are down?
Keywords:
References: None |
Problem Statement: How to create Rule Policy Agent to monitor Aspen Mtell Server Health? | Solution: Using Aspen Mtell you can also monitor the Aspen Mtell server itself and raise alerts when the available memory, CPU or hard drive disk space on the server runs low. This article explains how to create a Rule Policy Agent to monitor your Aspen Mtell server.
Creating Simulated Historian
1. Launch Aspen Mtell System Manager
2. Click Configuration and Select Sensor Data Sources
3. Click Add Data Source and Select Simulated Historian
4. Click Save
Identify the server sensor to monitor
1. Launch Aspen Mtell System Manager
2. Click Configuration and Select Sensor Data Sources
3. Click Map Sensors and click Refresh
4. Find the Sensor names which you want to monitor
We recommend using the AvailablePhysicalMemory, Drive_C_DiskAvailableSpace and CPU Usage - Overall sensors
Create a Rule Agent to monitor system health
1. Launch Aspen Mtell System Manager
2. Select Equipment and an Asset (you can create an Asset for your server if you do not want to use an existing asset)
3. Select Agents and Click Add Agents
4. Select Rule Policy Agent Wizard
5. Give a name to your Agent and click Next
6. Select Use Minimum Alert Duration and choose a duration of 1 hour
This setting makes the agent wait for an hour before triggering an alert
7. Click Insert Sensor
8. Select the sensor which you have identified in previous steps.
9. Enter a condition for which you want the agent to trigger an alert.
For Example: SimulatedHistorian::AvailablePhysicalMemory < 1000000000 (This will trigger an alert if the available memory goes below 1 GB)
10. Validate the condition and Click Ok, then Next.
11. Click Next
12. Select Send Email Notification and an Email Template
13. Click Finish.
14. Create similar Rule Policy Agents to monitor CPU and Disk Space.
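The rule condition entered in step 9 is essentially a threshold comparison against a sensor reading. Below is a minimal illustrative sketch of that logic; the sensor names and readings are made up for the example, not pulled from a real Simulated Historian.

```python
# Illustrative only: evaluate threshold-style rule conditions like the one in
# step 9 against a dictionary of simulated sensor readings. Sensor names and
# values here are hypothetical, not read from a real Simulated Historian.
import operator

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le, ">=": operator.ge}

def evaluate_rule(readings, sensor, op, threshold):
    """Return True when the reading violates the threshold (alert condition)."""
    return OPS[op](readings[sensor], threshold)

readings = {
    "AvailablePhysicalMemory": 800_000_000,        # bytes (about 0.8 GB)
    "Drive_C_DiskAvailableSpace": 50_000_000_000,  # bytes
    "CPU Usage - Overall": 35.0,                   # percent
}

# Mirrors: SimulatedHistorian::AvailablePhysicalMemory < 1000000000
alert = evaluate_rule(readings, "AvailablePhysicalMemory", "<", 1_000_000_000)
print(alert)  # True -> memory is below 1 GB, so the agent would alert
```

In Aspen Mtell itself the condition is entered as text in the wizard and evaluated by the agent engine; this sketch only shows the comparison the condition expresses.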
Keywords: Machine health
system health
References: None |
Problem Statement: Which property set will give me the ideal gas ratio of specific heats Cp/Cv?
This would be Ideal Gas Cp/Cv in Aspen Hysys
I can find the Ideal Gas Cp (CPIGMX-M) and the normal Cp/Cv (CPCVMX), but not an Ideal Gas Cv (so I can calculate the ratio myself) nor an already-calculated Ideal Gas Cp/Cv ratio. | Solution: In Aspen Plus V11, there are new property set properties that provide several variations of the heat capacity ratio CP/CV for pure components and mixtures, respectively.
CPCV -- The heat capacity ratio (CP/CV) for a pure component.
CPCVMX -- The heat capacity ratio (CP/CV) for a mixture.
CPCP-R -- The heat capacity ratio CP/(CP-R) for a pure component.
CPCP-RMX -- The heat capacity ratio CP/(CP-R) for a mixture.
CPCVIG -- The ideal gas heat capacity ratio (CP/CV) for a pure component.
CPCVIGMX -- The ideal gas heat capacity ratio (CP/CV) for a mixture.
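Since an ideal gas satisfies Cv = Cp - R, the CPCP-R/CPCP-RMX ratio CP/(CP-R) coincides with the ideal-gas CP/CV. A quick numerical sketch of that relationship follows; the Cp value is illustrative (roughly air near ambient conditions), not taken from a simulation.

```python
# Sketch: for an ideal gas, Cv = Cp - R, so the ratio CP/(CP - R) is the
# ideal-gas heat capacity ratio (what CPCVIG/CPCVIGMX report). The Cp value
# below is illustrative (roughly air at ambient conditions).
R = 8.314  # J/(mol*K), universal gas constant

def ideal_gas_gamma(cp):
    """Ideal-gas Cp/Cv computed as Cp/(Cp - R)."""
    return cp / (cp - R)

cp_air = 29.1  # J/(mol*K), illustrative ideal-gas Cp
print(round(ideal_gas_gamma(cp_air), 3))  # about 1.4, as expected for air
```

This is why requesting CPCP-RMX on an ideal-gas basis and CPCVIGMX should give matching values.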
Keywords: None
References: : VSTS 86535 |
Problem Statement: Aspen Excel Add-in on a client machine is receiving the following error message: Time out occurred before data was retrieved.
This knowledge base article shows how to resolve this error. | Solution: User may encounter the Time out occurred before data was retrieved error message when using Aspen Process Data Excel Add-in with many tags over a long period of time. To avoid the timeout problem, user can remove the Aspen Process Data service from the data source in ADSA.
The Aspen Process Data service is only used by Excel Add-in and A1PE and it is using Windows Communication Foundation (WCF) which is where the timeout happens.
User may consider creating a new data source in ADSA without the Process Data service if the same data source is used for both Aspen Process Data Excel Add-in and aspenONE Process Explorer (A1PE).
Another way to avoid the timeout problem in Aspen Process Data Excel Add-in is to split the number of tags into two or three small formulae rather than using one big formula.
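Splitting one large formula into smaller ones amounts to batching the tag list. A small illustrative helper is sketched below; the tag names are hypothetical placeholders for your own historian tags.

```python
# Sketch: split a long tag list into smaller batches so that each Aspen
# Process Data Excel Add-in formula requests fewer tags. Tag names below
# are hypothetical placeholders.
def chunk_tags(tags, batch_size):
    """Yield successive batches of at most batch_size tags."""
    for i in range(0, len(tags), batch_size):
        yield tags[i:i + batch_size]

tags = [f"TAG{n:03d}" for n in range(1, 101)]  # 100 hypothetical tags
batches = list(chunk_tags(tags, 40))
print([len(b) for b in batches])  # [40, 40, 20] -> three smaller formulas
```

Each batch would then become its own Add-in formula, keeping any single request small enough to finish before the WCF timeout.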
Keywords: MES Addin
References: None |
Problem Statement: How to debug Aspen Properties user model in HYSYS? | Solution: Copy and paste all the Fortran files from your project to a new folder (for debugging).
Launch the Customize Aspen Plus V# command line utility from the Start menu.
Change command line directory to the new folder (cd <directory path>).
Compile all user routines as debug option using aspcomp *.f dbg.
Debug and Release are just labels for differentSolution configurations, which each consist of project configurations (again, just a label). In general, you use “Debug” when you want your project to be built with the optimizer turned off, and when you want full debugging/symbol information included in your build (in the .PDB file, usually). Microsoft. “Set debug and release configurations – Visual Studio.”
Link user routine objects as debug option (asplink debug <name>) to build *.dll, again using the custom command line utility for Aspen Plus.
Copy *.dll, *.lib and *.pdb files to …\AspenTech\AprSystem <version>\Engine\Xeq
Note: common command line tools like e.g. robocopy can be used, which can be included in a batch file to speed-up frequent file transfers.
Open Aspen HYSYS (new case or existing case)
Open MS VS click on Attach to Process
Choose Native code and then select AspenHysys.exe from available processes before clicking Attach.
Click on “Open” to open user routine *.f file (like e.g. esu.f)
Set break point(s)>
Note: Microsoft. Use Breakpoints in the Debugger - Visual Studio.
Run HYSYS case and execution will stop at selected break point(s) during execution.
Keywords: None
References: None |
Problem Statement: Users configuring Cim-IO for OPC may receive the following message while testing communication via the Cim-IO Test Utility (cimio_t_api.exe):
CIMIO_USR_GET_CONNECT, Error connecting to device
CIMIO_MSG_CONN_SOCK_CREATE, Error creating an outbound socket
CIMIO_SOCK_OUT_CONN_FAIL, Error connecting to the server
WNT Error=10060 A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
What can one do to fix the problem? | Solution: Ensure the following things are correct:
- Verify that the contents of the %cimiodef%\cimio_logical_devices.def file are the same on the Cim-IO server machine and the Cim-IO client (Aspen InfoPlus.21) machine in terms of the logical device in question (the files may contain entries for multiple logical devices).
- Verify that the DLGP service name entry (and if using Store and Forward, the entries for the scan process, the store process, and the forward process) in the %windir%\system32\drivers\etc\services file are the same on both the Cim-IO server and Cim-IO client machines.
- Verify that the Cim-IO client machine can ping the Cim-IO server machine and vice-versa.
- Verify that the Aspen CIM-IO Manager service is running on the server machine (you may want to restart it even if it's running)
In addition, make sure that any firewalls that exist on the machines or between the machines are either configured to allow communication through the ports specified in the services file or are turned off.
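Beyond ping, it can help to confirm that the DLGP port from the services file actually accepts TCP connections from the client machine. Below is a small cross-platform sketch; the host name and port number are placeholders for your own Cim-IO server and service entry.

```python
# Sketch: a quick TCP reachability check for the DLGP port listed in the
# services file, to supplement ping when diagnosing CIMIO_SOCK_OUT_CONN_FAIL.
# The host and port below are placeholders for your Cim-IO server and port.
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(port_reachable("cimio-server", 7777))  # placeholder host/port
```

If ping succeeds but this check fails, the problem is usually a firewall blocking the port or a services-file mismatch rather than basic network connectivity.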
Keywords: None
References: None |
Problem Statement: For the CIM-IO Store&Forward process, the default location for the CIM-IO store file is C:\Program Files (x86)\AspenTech\CIM-IO\io folder. How to customize the CIM-IO store file location? | Solution: Here is the procedure to customize the CIM-IO store file location:
1. Stop the OPC interface on CIM-IO Interface Manager
2. Modify the Store file path from Default to the customized folder, then save the configuration
3. Start the OPC interface, and the CIM-IO store file will be generated in the customized folder
Keywords: CIM-IO Store&Forward
CIM-IO store file
Customize location
References: None |
Problem Statement: When using a Shared On Demand Calculation - what can be done if the message 'Error Number -4004' shows up in place of the value (in the legend of A1PE, for example)? | Solution: Stop and restart TSK_APEX_SERVER via the Aspen InfoPlus.21 Manager on the IP.21 server (detailed procedure below):
1. On the IP.21 Server find TSK_APEX_SERVER in the list of Running Tasks in the lower left part of the IP.21 Manager and click on it:
2. Click the 'STOP TASK' button which is in the lower left corner of the Manager and confirm that you want to stop the task. It should disappear from the list of Running Tasks.
3. Locate TSK_APEX_SERVER in the UPPER LEFT corner of the Manager in the Defined Tasks list and select it:
4. Click the 'RUN TASK' button beneath the Defined Tasks list in the middle left part of the Manager to cause TSK_APEX_SERVER to start. TSK_APEX_SERVER should now have a check mark to the left of it to indicate that it is running and it should also reappear in the Running Tasks list.
This should eliminate the 'Error Number -4004' message and fix the problem.
Keywords: None
References: None |
Problem Statement: How to export Economic Analysis variables in Aspen Simulation Workbook? | Solution: Ensure Economic Analysis is completed (after mapping and sizing) and the results are saved without any errors / warning messages.
Open Economic Analysis results and select Send to Excel / ASW option
Select the tables to be exported, choose a table formatting template, select “Export Variable link information with tables to enable live links using Aspen Simulation Workbook” and click on Export tables to Excel. Variables can be exported even to an existing workbook
After Export, open the Excel spreadsheet, and enable Aspen Simulation Workbook
Create automatic links and attach the existing Aspen Plus model in Aspen Simulation Workbook in the upcoming windows
Once these economic analysis variables are linked successfully, the results will vary dynamically as per the model input changes.
If the values do not update after the simulation runs, use the Refresh button.
Keywords: Aspen Simulation Workbook, ASW, Economic Analysis, Aspen Process Economic Analyzer (APEA)
References: None |