Problem Statement: How to solve the warning ERROR: Keyword <word> is invalid. in Aspen OptiPlant 3D Layout. | Solution: When the Input Generation process is executed, users can encounter the warning Keyword <word> is invalid.
This error occurs when a line in the line list has an invalid start type or end type, or when a line does not have a pre-defined start type or end type.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: How do I change the Rho V2 constraint in Aspen Flare System Analyzer (AFSA)? | Solution: To change the Rho V2 constraint, select Scenarios from the Build section of the Home ribbon. Choose the scenario(s) for which you want to change the limit and click on Edit.
In the Scenario Editor pop-up select the Constraints tab where you can edit the Rho V2 limits for both Headers and Tailpipes.
The changes will be reflected in the Scenario Input table.
Keywords: Rho V2, limit, constraint, multiple scenario Edit.
References: None |
Problem Statement: How to solve the warning ERROR: Start/End Point could not be adjusted in Aspen OptiPlant 3D Layout. | Solution: When the Input Generation process is executed, users can encounter the warning Start/End Point could not be adjusted.
This error occurs when more than one line is connected to the same equipment with the same start or end type. For example, a pump can have only one suction and one discharge; if more than one line takes suction or discharge from the same pump, this error message may occur.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: How do I change the Noise constraint in Aspen Flare System Analyzer (AFSA)? | Solution: To change the Noise constraint, select Scenarios from the Build section of the Home ribbon. Choose the scenario(s) for which you want to change the limit and click on Edit.
In the Scenario Editor pop-up select the Constraints tab where you can edit the Noise limits for both Headers and Tailpipes.
The changes will be reflected in the Scenario Input table.
Keywords: Noise, limit, constraint, multiple scenario Edit.
References: None |
Problem Statement: How do I change the Vapour and Liquid velocity constraint in Aspen Flare System Analyzer (AFSA)? | Solution: To change the Vapour and Liquid velocity constraint, select Scenarios from the Build section of the Home ribbon. Choose the scenario(s) for which you want to change the limit and click on Edit.
In the Scenario Editor pop-up select the Constraints tab where you can edit the Vapour and Liquid velocity limits for both Headers and Tailpipes.
The changes will be reflected in the Scenario Input table.
Keywords: Vapour and Liquid velocity, number, limit, constraint, multiple scenario Edit.
References: None |
Problem Statement: How to find the lean approach to equilibrium for acid gas cleaning process modelling in Aspen HYSYS? | Solution: Lean approach to equilibrium is the acid gas content of overhead gas that is in equilibrium with the lean amine. This will normally set the reboiler duty (solvent regeneration).
If the Acid Gas property package is used, HYSYS will calculate the equilibrium approach (or efficiencies) for acid gas components on each stage based on tray dimensions. To calculate the total approach to equilibrium, the user will need a spreadsheet to do the calculation manually.
A few simple calculations in Aspen HYSYS are required to obtain the equilibrium approach. To calculate the lean approach, import the lean solvent composition and temperature into a stream and specify the stream's vapor fraction as 0; Aspen HYSYS will automatically calculate the equilibrium pressure and vapor-phase composition of the stream. That information can be used to estimate the equilibrium fugacity of the acid gas, and the lean approach is then calculated from the equilibrium acid gas fugacity and the acid gas fugacity of the sweet gas.
Lean Approach = equilibrium CO2 partial pressure of lean amine / CO2 partial pressure of the sweet gas
1) Create two streams to duplicate the lean amine and rich amine streams
2) Delete the pressures of the streams and set the vapor fraction to 0
3) Calculate the equilibrium CO2 partial pressure of the lean amine and rich amine (the engine does this automatically once the pressure is deleted and the vapor fraction is specified)
4) Calculate lean approach in the spreadsheet.
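As a rough illustration, the step-4 spreadsheet calculation reduces to a single ratio. The Python sketch below uses made-up partial pressures; in practice the values come from the vapor-fraction-0 flash of the lean amine stream and from the sweet gas stream in Aspen HYSYS:

# Minimal sketch of the lean-approach calculation. The pressures are
# illustrative placeholders, not results from an actual HYSYS case.
P_CO2_eq_lean = 0.0008   # equilibrium CO2 partial pressure of lean amine, bar (assumed)
P_CO2_sweet = 0.02       # CO2 partial pressure of the sweet gas, bar (assumed)

lean_approach = P_CO2_eq_lean / P_CO2_sweet
print(f"Lean approach to equilibrium: {lean_approach:.3f}")   # 0.040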
Keywords: Lean Approach, Equilibrium, Spreadsheet, Acid Gas
References: https://esupport.aspentech.com/S_Article?id=000055664 |
Problem Statement: Aspen Air Cooled Exchanger – Can we get a setting plan for an A or V frame air cooled exchanger? | Solution: New feature – in Aspen Air Cooled Exchanger V14, drawing the setting plan for an A or V frame air cooled exchanger is now available.
You can now view, copy, print, etc. the setting plan for an A or V frame air cooled exchanger. Below is an example of the Unit View setting plan for an A frame air cooled exchanger.
Below is an example of the Unit View setting plan for a V frame air cooled exchanger.
This article replaces the article “Why is the Setting Plan missing for the Air Cooler?” (https://esupport.aspentech.com/S_Article?id=000063126), which states that the setting plan is not available for A and V frames. That article remains valid for Air Cooled Exchanger versions below V14.
If a user is unable to see the A or V frame setting plan in V14 and above, make sure the tube side flow direction is selected correctly.
Keywords: A or V frame air coolers, setting plan air cooler
References: None |
Problem Statement: How do we find the Omega values for two-phase volume in Aspen Plus? | Solution: The v0 and v9 Omega two-phase volume values are available in the Safety environment on the “Fluid Properties” tab of the scenario page. The user can also find these values in the Omega method sizing report in the PSV datasheets.
To calculate the Omega values, the user needs to select the “Relieving phase method - Omega sat. or two phase”, which is allowed only for two-phase applications.
Keywords: Omega method, Omega two phase volume – v0, v9
References: None |
Problem Statement: What are the applications of the “Limit checks” option available under Fired Heater? | Solution: Aspen Fired Heater has an option called Limit checks, which can be very useful for monitoring the maximum limits of important parameters.
For a number of key parameters relating to the performance of a fired heater, you can specify your own limit value, which the program will check. If you enter such a limit, the program generates a warning when the limit is exceeded; otherwise it produces a note indicating that the limit is not exceeded (a small illustrative sketch of this behavior follows the parameter list below).
This facility saves you the need to find the relevant parameter within the results and reduces the risk of missing a potential problem area. The parameters for which you can request limit checks are:
Peak firebox tube wall temperature
Peak firebox tube heat flux
Maximum firebox process fluid outlet temperature
Maximum tube wall temperature in any convection bank
Maximum gas velocity (mass flux, based on minimum flow area) in any convection bank
Maximum pressure drop across convection banks (all banks combined)
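Conceptually, the check behaves like the short Python sketch below. This is a hypothetical illustration of the warning/note logic only; the parameter names and values are made up, and the real checks are performed inside the Fired Heater program:

# Hypothetical limit-check logic: warn when a result exceeds its limit,
# otherwise note that it does not. Names and numbers are illustrative.
limits = {"Peak firebox tube wall temperature, C": 450.0,
          "Peak firebox tube heat flux, kW/m2": 40.0}
results = {"Peak firebox tube wall temperature, C": 462.3,
           "Peak firebox tube heat flux, kW/m2": 35.1}

for name, limit in limits.items():
    value = results[name]
    if value > limit:
        print(f"WARNING: {name} = {value} exceeds the limit of {limit}")
    else:
        print(f"Note: {name} = {value} does not exceed the limit of {limit}")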
Keywords: None
References: None |
Problem Statement: Where can I find the maximum film temperature in fired heaters for the firebox and convection banks? | Solution: The film temperature, or maximum film temperature, is one of the important criteria in any design, used to make sure the fluid temperature stays well below the maximum allowable temperature.
Aspen Fired Heaters can provide these values, as the maximum temperature is a key simulation parameter for fired heaters handling high-temperature applications.
The user can find the maximum film temperature (tube side) in the API Sheet or under Firebox & Bank Performance | Peak Temperatures and Fluxes.
The snapshot below shows the details of the maximum stream fouling layer temperature:
The Peak Temperatures and Fluxes tab gives information, both for the overall firebox and for each tube group, on peak heat fluxes and maximum tube inside and outside temperatures. Both the circumferential mean and the circumferential maximum temperatures are shown. The initial radiation calculation assumes a tube temperature which varies along the tube length but which, at any point, is uniform around the circumference; this is the circumferential mean, for which the maximum value along the tube is shown. An API correction is then applied, which estimates the variation of heat flux around the tube circumference at any point. From this, the “circumferential max” temperatures can be calculated. These temperature values should be treated with some caution, as there are physical features, such as circumferential conduction around the tube wall and higher local heat transfer coefficients at higher temperatures, which are not accounted for.
Keywords: Film temperatures, fouling layer temperature, maximum tube side temperatures, fired heater maximum temperature
References: None |
Problem Statement: Does Aspen Air Cooled Exchanger calculate tube side film temperatures like the option available under Fired Heater? | Solution: Aspen Air Cooled Exchanger does not have a tube side film temperature option like Fired Heater. The Stream temperature and Metal surface temperature profiles can be used to track the temperatures; the tube side film temperature lies between them at each Distance location.
Keywords: Tube side film temperature, stream temperatures, metal surface temperature
References: None |
Problem Statement: How to calculate the missing free energy of formation (DGAQFM/DGAQHG) for component MDEAH+ while running a simulation in Aspen Plus? | Solution: MDEAH+ is a component used in CO2 carbon capture process modelling in Aspen Plus. Sometimes, while modelling such a complex process and running the simulation, we get warnings for missing properties such as DGAQFM/DGAQHG.
These missing-property errors prevent the simulation from converging and generating results.
To resolve this issue, we need to generate or provide the missing property parameter for the component in the Properties environment.
This can be done by following the steps below:
1. Go to the Properties environment.
2. Expand the Methods tab.
3. Under the Methods tab, expand the Pure Components tab.
4. Scroll down the pure components options and you will find a parameter called REVIEW-1.
5. Double-click on this parameter to view the details as below:
6. Select the component for which the missing property parameter error is seen; if the value is known, enter it in the column.
7. If the value of the parameter is not known, use the Retrieve Parameters option on the ribbon of the Properties environment to calculate the missing parameters.
8. Once the missing property value is entered or calculated, go to the Simulation environment.
9. Reset and run the simulation model again to remove the missing property error for the component.
Keywords: Free energy of formation, DGAQFM, DGAQHG, Carbon Capture
References: None |
Problem Statement: How to connect the DMC3 controller tags to existing tags inside the IP21 database (in this case IP21 will simulate the OPC/DCS system)? | Solution: This solution frames a short procedure for making that connection.
The controller used in this example is a controller from the DMC3 example library. The controller name is COL5X3Local and it contains 5 MVs and 3 CVs. This controller can be found under the following path on the Online Server:
C:\ProgramData\AspenTech\APC\Online\app\col5x3
Consider that this is originally a CCF controller and needs to be imported into DMC3 Builder.
The procedure is the following:
1.- Go to Deployment on the controller. You will see the connection grid, where all controller variables and their required parameters are listed. In this case we are going to connect only the Measurement parameter of MV FIC-2001.SP, which represents the reading from the field instrument.
In the IP21 Administrator we have already created a tag named FIC-2001.SP. (In the case of OPC, the parameter should be created as well in order to have a connection; otherwise, when testing the connection, DMC3 Builder will fail, showing that the target tag does not exist.)
2.- Select the IO device that will be used for the connection; in this example I will use device IOIP21.
3.- On the connection grid, enter the information that each field requires. In this case, for the Measurement parameter of the variable FIC-2001.SP, we will use the following information:
IO Source: IOIP21
IO Tag: FIC-2001.SP
IO Datatype: Double
4.- If the IO source and the tag name are correct, we can proceed to do a Test Connection. This feature can be found on the top ribbon of the Deployment node (refer to the previous picture, highlighted in yellow).
5.- If the test connection is successful, it will bring back a value (whether numeric or string) in the Test Value field of the grid and mark the connection with a green marker. In this particular case we can observe that the value is 2.5; this is because DMC3 Builder is reading the default value from the IP21 database, which is 2.5.
6.- Finally, we can deploy the controller and make one more change to the IP21 value, just to verify that the online controller is taking the correct values from the IP21 database (OPC).
Keywords: Test Connect, DMC3 Builder, IP21
References: None |
Problem Statement: Model Switching is a utility in DMC3 equivalent to CCF Switch in DMCplus controllers. It can be used to implement relationships and file replacements that enable applications to switch models and/or tuning sets while the DMC3 application is operating Online, Offline or in Simulation Mode. | Solution: Requirements for using Model Switching:
1.- To use Model Switching it is necessary to have Engineer permission.
2.- It can only be used with the FIR model type.
3.- It is necessary to have the .mdl or .mdl3 files (the latter if the current project is DMC3) and/or the .tuningset file already exported and located in a known directory.
4.- Changing the active tuning set must be done either manually or by an output calculation. An input calculation cannot be used to switch tuning sets because, during the controller cycle, the tuning switch occurs prior to input calculations.
NOTE: The mdl3 files can be exported from a DMC3 Builder project by performing the following instructions:
On the open project, go to File and select the option Export File
Once it is open, select Export Model; this will open a window where you will have to specify the location to save the exported file. In this window, change the file type from .dmc3model to .mdl3
Select the location and save the file. You can then go to the location and verify that the model file has been saved as .mdl3
Example on the use of Model Switching
To trigger Model Switching, a calculation has to be prepared to use this utility. In the attached PDF, please find an example of how this can be prepared and used.
Keywords: Model, DMC3, ModelSwitching
References: None |
Problem Statement: There is a problem reported in V11 and previous versions when trying to set some parameters for FFWD variables on Deployment using an auto-generated custom template. This tech tip mentions a couple of solutions that can help with that problem. | Solution:
On Deployment there are basically three kinds of variables that can be selected: General, Input and Output. The general variables are mostly general controller parameters such as On/Off Request, Watchdog, etc.; for this example we will also consider in this group structure parameters such as Sub-controller and Test Group parameters.
Input variables are mainly variables that were mapped in the Model as model inputs, essentially MVs and FFWs. Although in DMC3 Builder we can mark a variable as FFW in the Model or Optimization view, when the controller is created all independent variables are classified as Inputs.
Output variables are essentially CV variables.
The problem resides in this definition. As FFW and MV variables are both classified as Inputs, there is no way to distinguish between these two kinds of variables in the tag template. Take the following example: I want to auto-generate the Steady State Value entry for MVs but not for FFWs. If you start creating the template, you will notice that you can only define a “filtering” for inputs, outputs and general variables. So any change defined for inputs will affect both FFW and MV; thus, when you generate the tags, Steady State Value will be created for both FFW and MV.
Please be aware that this is not an actual bug but, most likely, expected behavior.
Nevertheless, there are two ways to solve the problem; in both cases they have to be executed manually:
1.- On Deployment, select the FFW variable to be modified. Then, on the top ribbon, click Customize and uncheck the boxes for the parameters that you do not want to see in the tag mapping. Then select the radio option “apply the same changes to selected variables of the same type”. This will modify only the selected FFW, and you have to repeat the same procedure for the rest of the FFW variables.
2.- On the Deployment top ribbon, select Export. This will create a .csv file with your current IO mapping that can be manually modified in Excel. After you make the changes in Excel, you can import the file back using the Import option on the Deployment top ribbon. (A small scripted alternative follows.)
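For option 2, the exported .csv file can also be edited programmatically instead of in Excel. The following is a hedged Python sketch; the file name and the column headers ("Variable Type", "Steady State Value") are assumptions for illustration, so verify them against the file actually produced by Deployment > Export:

import csv

# Sketch: blank out the Steady State Value mapping for FFW rows in an
# exported IO-mapping CSV. Column names below are assumed, not documented.
with open("io_mapping.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    if row.get("Variable Type") == "FFW":        # assumed column and value
        row["Steady State Value"] = ""           # assumed column

with open("io_mapping_modified.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)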
Keywords: DMC3 Builder, Deployment, FFWD
References: None |
Problem Statement: It is unclear what the numbers next to FUTMOV mean; are they minutes or cycles? How is the total number of values determined? Why does it skip a lot of values? | Solution: When you set a DMC3 controller to output a debug print file for a cycle, either during simulation or running online, one of the most useful sections is the future moves (FUTMOV) for the manipulated variables: it shows how the controller is planning to move the independent variables over the next few cycles.
What you will see on this section depends on how many future moves you configure for your controller and the time to steady state. The future move configuration is done on the Simulation section of DMC3 Builder under “Move Settings” (default of 14, can be up to 64):
Here is an example of the future moves section of a demo controller debug print:
And this is the time to steady state and cycle for the same controller:
The value that you see in the first column next to FUTMOV represents future cycles; the future moves calculated will roughly cover ½ of the TTSS of the controller. In the first example, since each cycle is one minute, the numbers shown are both cycles and minutes. But let's look at a different example:
Here the controller has a 4-minute TTSS and an 8-second cycle, so the 14 future moves shown roughly cover ½ the TTSS of the controller (14 x 8 seconds = 112 s ≈ 120 s).
The reason why it is “skipping” values is that it is trying to cover that half of the TTSS while showing only where the most significant movement of the variables is; it usually shows most of the first few cycles and then spaces the later ones out.
Also, when you have a very long TTSS and/or a very small number of future moves, it may even skip the first cycle and show you cycle 2, then 4, 6, etc. What is shown in the future moves is just a plan; it does not mean that the controller will behave exactly that way. On each cycle the DMC3 engine performs dynamic calculations and, depending on the response of the feedforward and controlled variables, adjusts the manipulated variables accordingly.
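The horizon arithmetic from the second example can be reproduced in a few lines of Python. This is only an illustrative check of the rule of thumb; the engine's actual move-spacing algorithm is not reproduced here:

# How much of the TTSS horizon do the configured future moves cover?
ttss_s = 4 * 60       # time to steady state: 4 minutes, in seconds
cycle_s = 8           # controller cycle, in seconds
future_moves = 14     # "Move Settings" value in DMC3 Builder

coverage_s = future_moves * cycle_s
print(f"Coverage: {coverage_s} s vs. TTSS/2 = {ttss_s / 2:.0f} s")   # 112 s vs 120 s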
Keywords: DMC3, APC, debug print, debug, future moves, FUTMOV
References: None |
Problem Statement: AspenTech (very strongly) recommends using TSK_HBAK for backing up Aspen InfoPlus.21 History Filesets. This video demonstrates how to back up filesets to limit the impact in cases such as a catastrophic disk failure on the production system. | Solution: AspenTech distributes for free a special executable/external task, as well as a database record and a few Aspen InfoPlus.21 Administrator settings, specifically designed for performing extremely safe backups of History Related files on a user-defined scheduled basis. Once the backup has been performed the user has the option to start another application which could for example copy/move the backups to some external device for safe keeping - or any other procedure or application for that matter.
This video attempts to explain how to set up and use the above-described procedure called TSK_HBAK (also known as just HBAK).
Keywords: None
References: None |
Problem Statement: A deployed DMC3 application fails to turn on, and the PCWS page shows Setpoint Validation errors such as: “Invalid Setpoint”, “Setpoint Validation: Setpoint change exceeds the operator high limit.”, “Setpoint Validation: Value change exceeds the maximum move constraint.”, etc. This can also affect the application during simulation of the controller. | Solution:
In V10 CP1 a new feature called Setpoint Extended Validation was added. This feature was meant to replicate the Last Check (LSTCHK) entry in DMCplus. It may be enabled in DMC3 Builder at the Deployment view by clicking on Online Settings.
When Setpoint Extended Validation is enabled and the controller is permitted to write setpoints, the controller will perform additional checks to ensure that the setpoint change calculated by the controller or by user calculations, does not violate maximum movement constraints or exceed operator, engineering or validity limits.
Additionally, it examines the setpoint entry for each MV. In a controller configured to write setpoints through another entry (for example User Defined entries) extended validation will not be performed on those alternate entries.
By default in V10 CP1, Setpoint Extended Validation is set to YES, meaning that the feature is enabled; this can lead to the Setpoint errors mentioned in the previous section that prevent the controller from turning on. To remove the errors, change Setpoint Extended Validation from YES to NO, save the changes and redeploy the controller. The Setpoint Validation errors should stop, and the controller will be allowed to turn on, assuming no other errors are detected.
NOTES:
This solution also applies when controller simulation is used.
If this validation is turned off the controller will still adhere to all setpoint constraints, for example, max move limits, operator limits, etc.
If an MV is enabled with a current value outside operator limits, the controller will behave in the same manner as if the validation is turned on.
If the tracking flag is set, the value is validated and tracked. If tracking is not turned on, the value will be validated and determined to be outside of limits.
Keywords: DMC3 Builder, Setpoint Validation Errors.
References: None |
Problem Statement: When searching for Aspen Unified patches on the https://esupport.aspentech.com/ site, whether for Refining Planning & Scheduling (Aspen Unified PIMS, Aspen Unified Scheduling), Aspen Dynamic Optimization (Aspen Unified GDOT Builder), or Manufacturing Execution Systems (Aspen Unified Reconciliation and Accounting), not all of the patches show up; some seem to be skipped when you filter by Family and Version. | Solution: As an example, this is what shows up when you search for Aspen Unified GDOT Builder patches while filtering by Family and Version:
As you can notice, only some patches are showing up; what happened to CP3 EP1, EP4, EP5…?
The reason is that, as of the time of writing this article (February 2023), the Aspen Unified tool encompasses many different suites (RPS, ADO, MES), and when a patch is uploaded to the esupport site it must be uploaded with only one primary product (even though in most cases the patch contains fixes for multiple products). The family that this primary product belongs to is what shows up when you filter by Family on the esupport site. The Aspen Unified patches have many related products:
But when filtering by Family, a patch will only show up if its primary product is part of the selected family. A workaround for this is to filter by Product rather than Family:
This will show all patches that have the specific software among their related products (even if it is not the primary product), but it may not be as convenient in some situations and could cause some confusion.
At this moment it is unclear whether this product structure or the esupport site filtering will change in the future; this article is an explanation of why the patches may not show up depending on the filtering.
Keywords: esupport, patch, Aspen Unified, PIMS, Scheduling, GDOT, CP, EP
References: None |
Problem Statement: How to get rid of the error message Class does not support Automation or does not support expected interface when trying to use HSR with Aspen HYSYS to extract data into Excel? | Solution: To get rid of this error message:
Make sure that the HSR sheet is unlocked in the properties, as shown below:
Keywords: Error 430, HSR, Aspen HYSYS
References: None |
Problem Statement: How to set up constraints on Diesel production from a Crude unit in Aspen PIMS? | Solution: The user might want to control the Diesel production from the Crude Unit for various reasons, such as:
Market demand
Upcoming downstream Diesel Units shutdown
Limitations on downstream Product blending
In case of these scenarios, we can setup constraints on Diesel production from a Crude unit in Aspen PIMS by the following steps:
Introduce a CCAP row in table Assays as shown in the screenshot below:
Now we need to introduce a corresponding entry in T.CAPS to control the activity of this CCAP row:
PIMS will now optimize the yield of cut DS1 as per the constraints entered in the T.CAPS:
Keywords: Diesel Production, Cut yield, capacity, T.ASSAYS, T.CAPS
References: None |
Problem Statement: What are the various purposes of Pool segregation via T.CRDCUTS in Aspen PIMS? | Solution: The user is provided with the feature of pool segregation via T.CRDCUTS for various purposes:
If your refinery processes the cuts from different crude units independently in the downstream units, the cuts need to be grouped into different pools. PIMS will then generate different tags for them based on the Logical Crude Unit and the pool number specified by the user. For example, the Kerosene cut from CD1 will be tagged as KE1, the one from CD2 as KE2, and so on.
The same crude cuts from different Logical Crude Units might have varied properties because of differences in the crude assays being used. In this case, pool segregation is highly necessary so that we can recurse on the properties of the cuts.
Keywords: T.CRDCUTS, Pool Segregation, Property Recursion, Crude Assays
References: None |
Problem Statement: How do the utility consumption rows entered in T.ASSAYS and T.CRDDISTL vary from each other? | Solution: When the utility consumption rows (UATMXXX / UVACXXX) are entered in T.ASSAYS, the utility consumption is calculated per unit flow of the individual crudes treated.
* TABLE ASSAYS Table of Contents
TEXT ANS AHV ARL BAC
*
ICONLV1 ConCarbon, WT% 0.00000 0.00000 0.00000 0.00000
ICONHV1 ConCarbon, WT% 2.00000 1.25000 3.36000 8.40000
ICONVR1 ConCarbon, WT% 19.40000 24.20000 19.20000 24.30000
*
IVANVR1 Vanadium, WPPM 151.00000 205.00000 116.00000 888.00000
IVANCOK Vanadium, WPPM 130.00000 406.00000 230.00000 3000.00000
*
UATMSTM ATM Steam, M LBS 0.00853 0.00850 0.00815 0.00906
UVACSTM VAC Steam, M LBS 0.02275 0.02266 0.02173 0.02417
*
PXXXREM Ave API, Crude Mix:
PAPIAV1 CDU1, Mode1 26.38814 27.00075 33.79958 17.15457
***
On the other hand, when it is entered in T.CRDDISTL (ATMXXX / VACXXX), the utility consumption is calculated per unit flow of the total feed to the logical crude unit.
* TABLE CRDDISTL Table of Contents
* Crude Distillation Map
TEXT CD1 CD2 CD3 ***
* ATMOS & VAC TOWER MAP:
ATMTWR Physical Atm Tower 1.00000 2.00000 2.00000
VACTWR Physical Vac Tower 1.00000 2.00000 1.00000
*
* ESTIMATED CRUDE CHARGE:
* SWEET CRUDE MIX:
ESTBAC Bachequero 20.00000
ESTKUW Kuwait Export 20.00000
ESTMX1 Arabian Mix 0.00000
*
* UTILITIES (PER BBL):
ATMFUL Atm Twr FUL, MMBTU 0.04160 0.04070 0.04070
ATMKWH Atm Twr KWH, KWH 0.79020 0.77240 0.77420
ATMSTM Atm Twr STM, MLBS 0.00630 0.00620 0.00620
*
VACFUL Vac Twr FUL, MMBTU 0.00330 0.00330 0.00330
VACKWH Vac Twr KWH, KWH 0.58090 0.56780 0.56910
VACSTM Vac Twr STM, MLBS 0.01980 0.01940 0.01940
***
However, when the user enters utility rows in both T.ASSAYS and T.CRDDISTL, the entries in T.ASSAYS take priority.
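The difference between the two bases is easiest to see numerically. The Python sketch below uses the UATMSTM factors from the T.ASSAYS table and the CD1 ATMSTM factor from T.CRDDISTL above; the crude rates are assumed values for illustration, not PIMS output:

# Compare the two utility-consumption bases with assumed crude rates (MBBL).
crude_rates = {"ANS": 30.0, "AHV": 20.0, "ARL": 25.0, "BAC": 25.0}   # assumed
uatmstm = {"ANS": 0.00853, "AHV": 0.00850, "ARL": 0.00815, "BAC": 0.00906}

# T.ASSAYS basis: per unit flow of each individual crude treated
steam_assays = sum(crude_rates[c] * uatmstm[c] for c in crude_rates)

# T.CRDDISTL basis: per unit flow of the total feed to the logical crude unit
atmstm_cd1 = 0.00630
steam_crddistl = sum(crude_rates.values()) * atmstm_cd1

print(f"T.ASSAYS basis:   {steam_assays:.4f} M LBS")
print(f"T.CRDDISTL basis: {steam_crddistl:.4f} M LBS")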
Keywords: Utilities, T.ASSAYS, T.CRDDISTL
References: None |
Problem Statement: Is it mandatory to have T.ASSAYLIB to trigger Crude Architecture in my PIMS model? | Solution: No, T.ASSAYLIB is an optional table used when the user would like to attach multiple assay tables to the PIMS Model Tree for different logical crude units. The user can enter the names of the assay tables to be included in the model and the corresponding logical crude units to which they apply.
*TABLE ASSAYLIB
* Library of Assays
TEXT CD1 CD2 CD3 ***
*
ASSAYS CDU1, Mode1 1.00000
ASSAY2 CDU2, Mode2 1.00000
ASSAY3 CDU2, Mode3 1.00000
***
Keywords: Assays, Logical Crude Units, T.ASSAYS
References: None |
Problem Statement: This query shows how to send a high-importance e-mail message with an attachment using the SMTP protocol. | Solution: Important: For this to work, first go into the SQLplus Query Writer, choose View | References from the menu bar, and select Microsoft CDO for Windows 2000 Library. Note that it may not necessarily be at or near the top of the list that appears in the box (screenshot below).
With the above reference selected, use the example code below to send the e-mail:
-- Beginning of code --
local msg;
msg = CreateObject('CDO.Message');
msg.Configuration.Fields.Item('http://schemas.microsoft.com/cdo/configuration/sendusing') = 2;
msg.Configuration.Fields.Item('http://schemas.microsoft.com/cdo/configuration/smtpserver') = '[email protected]'; --Please substitute the name of an appropriate e-mail server which has SMTP enabled
msg.Configuration.Fields.Item('http://schemas.microsoft.com/cdo/configuration/smtpserverport') = 25;
msg.Configuration.Fields.Update;
msg.from = '[email protected]';
msg.to = '[email protected]';
msg.subject = 'SQLplus test message 2023-03-24'; --Place appropriate subject line here
msg.textbody = 'I am sending you this important message from SQLplus.'; --Message for body of e-mail
msg.Fields.Item('urn:schemas:mailheader:importance').Value ='high';
msg.AddAttachment('c:\CIMIO_MSG.LOG');
msg.send;
-- End of code --
The above code (without the comment lines in green) is also included as an attachment to this solution.
Keywords: e-mail, attachment, sample query, 132567-2
References: None |
Problem Statement: aspenONE™ V14 Validation Certificate | Solution: aspenONE™ V14 Validation Certificate, November 2022. aspenONE V14 has been validated and is in compliance with our internal Quality Assurance procedures. It is hereby certified for general release. This document describes the testing procedures adopted for the aspenONE V14 release. Product-specific validation information is included, summarizing the testing executed for individual products. The tests were run as part of both automated and manual suites of tests.
Coexistence Testing: Coexistence is the ability for multiple versions of aspenONE applications to be installed on the same desktop computer. For product-specific details, please refer to the Platform Support document posted at https://www.aspentech.com/en/platform-support
Virus Checking: aspenONE Gold Media was scanned between Nov 8 and Nov 10, 2022, using CrowdStrike Version 6.46.16010.0.
Please refer to the attached validation certificate PDF for details.
Keywords: Validation certificate
References: None |
Problem Statement: If there is a large number of applications, the Aspen Watch records can quickly fill up the default disk location (C:\ usually), causing old history to be lost, archives to be overwritten, and possibly other disk problems. A clear guide is needed to determine the easiest procedure to move these history files to a different drive on the machine. | Solution: The majority of Aspen Watch files are saved in just 4 default locations (with their respective network share):
IP21g200 C:\ProgramData\AspenTech\InfoPlus.21\db21\group200
IP21g200AggHis C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\agghis
IP21g200EvtHis C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\evthis
IP21g200His C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\arcs
The first entry corresponds to the location where the snapshots and other InfoPlus.21 information are saved; the other three correspond to the repositories AW_AGGH, AW_EVTH and TSK_DHIS respectively. When looking for information, InfoPlus.21 just looks at the network share and Windows redirects it to the correct location. So, in theory, to move the history files we simply need to move the contents of these folders to a location on a different disk (let's use a D:\ drive as the example in this article) and recreate the shares to point to the new folders on that D:\ drive.
NOTE 1: If this is a new install, complete all the steps of the post-install configuration guide as normal to create the Aspen Watch repositories on the default location and then follow this procedure to move them.
NOTE 2: There is a repository called TSK_DHIS_AGGR that uses the folder location instead of a network share, but this repository does not collect significant amounts of data (if any), so it is okay to leave that on the default location.
NOTE 3: In Windows File Explorer, on the View tab, make sure the “Hidden items” checkbox is checked so that all files in a given folder are shown.
Here is a step-by-step guide on how to move the history files:
Stop the InfoPlus.21 database from the InfoPlus.21 Manager.
Open a Command Prompt window as administrator and type the following command to see the information of the first share, write down the “Permission” section (you only need to do this once):
net share IP21g200
After you write down the accounts and permissions, delete the first share with the following command:
net share IP21g200 /DELETE
Create a folder on the D:\ drive where you will store the data for this share (for example create D:\IP21g200).
Move all the contents from the default location to the new location (example: C:\ProgramData\AspenTech\InfoPlus.21\db21\group200 to D:\IP21g200).
Re-create the share that you deleted on step 3 now pointing to the new location on the D:\ drive, granting the permissions that you wrote down on step 2 (on my example, APC\Student is the account that has permissions to fully access this share):
net share IP21g200=D:\IP21g200 /GRANT:Everyone,READ /GRANT:APC\Student,Full
Repeat steps 3-6 with the other 3 shares, deleting the default shares, creating new folders on the D:\ drive, moving the contents, then recreating the new shares. What is highlighted in blue is what you need to change on each iteration, what is highlighted in yellow is specific to the account that has the permission set on your system.
After you are done with all 4 shares, restart the InfoPlus.21 database.
To verify that the procedure was successful, open the InfoPlus.21 Administrator and go to InfoPlus.21 -> <ServerName> -> Historian and check that all 4 repositories are showing green.
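Because the same delete/move/recreate pattern of steps 3-7 repeats for all four shares, a small script can print the exact commands so nothing is mistyped. This is a Python sketch under assumptions: it uses the default paths listed above, a D:\ target drive, and the example APC\Student account from this article, and it only prints the commands rather than executing them:

# Print the commands for steps 3-7 for each Aspen Watch share.
# Adjust the target drive and the /GRANT accounts (from step 2) before use.
shares = {
    "IP21g200":       r"C:\ProgramData\AspenTech\InfoPlus.21\db21\group200",
    "IP21g200AggHis": r"C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\agghis",
    "IP21g200EvtHis": r"C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\evthis",
    "IP21g200His":    r"C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\arcs",
}
for name, old_path in shares.items():
    new_path = rf"D:\{name}"
    print(f"net share {name} /DELETE")
    print(f'robocopy "{old_path}" "{new_path}" /E /MOVE')
    print(f"net share {name}={new_path} /GRANT:Everyone,READ "
          f"/GRANT:APC\\Student,Full")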
Keywords: Aspen Watch, InfoPlus.21, drive, disk, move, migrate, arcs, repositories
References: None |
Problem Statement: How to efficiently view the 2D / 3D DWG files created on the deliverables folder in Aspen OptiPlant 3D Layout? | Solution: In Aspen OptiPlant, we can model equipment and structures by 2D or 3D DXF import option. The created models will be available in the deliverables folder inside the Aspen OptiPlant project folder. The OptiPlant Configurator provides extended interface capability in the form of Data Exchange Format (DXF), which can be read as input for major third-party software. This enables the objects created in OptiPlant to be used in other CAD packages like AutoCAD, MicroStation, and Navisworks, etc. The output in DXF format is an intelligent output which carries Line IDs, Equipment IDs and colors as well.
Autodesk DWG TrueView is a free multimedia software that allows users to view AutoCAD and other DWG files. Because DWG TrueView is just a viewer, you cannot use it to alter a drawing. You can, however, measure and print your drawings and convert DWG files between AutoCAD formats.
2D drawing opened inside DWG TrueView
3D output opened inside DWG TrueView
Keywords: AUTOCAD, Deliverables, DXF, DWG, 3D Output
References: https://esupport.aspentech.com/S_Article?id=000099067
https://esupport.aspentech.com/S_Article?id=000100674
https://esupport.aspentech.com/S_Article?id=000098975 |
Problem Statement: In the background a record defined by AlertUserDef will be created in the InfoPlus.21 database when a user subscribes to alerts for a tag for the first time. The record is used to keep track of settings related to a user and their alert subscriptions. What account is used to create the child record from the AlertUserDef definition? | Solution: The account for the user who is logged in to A1PE is the one that is used to make the AlertUserDef record. That user must have rights to create and modify records in IP.21 (specifically against AlertUserDef).
Additional Details: The actions for creating or editing an AlertUserDef record are handled by the PD REST service. While PD REST is hosted in IIS, the logic runs in the context of the remote a1PE web user due to the Windows Authentication impersonation feature. This means that the remote a1PE web user must have rights in IP.21 to create an AlertUserDef record (if it is new) and must have rights to modify the record that gets created.
Keywords: None
References: None |
Problem Statement: Users can write supplemental code for their Aspen Fidelis models in VSTA. The Watch Window can be used while debugging code in VSTA to view variables of interest. Users may see the following error while trying to view a variable in the Watch Window. | Solution: When this error appears, it is usually because the debug settings need to be changed.
Open Visual Studio
Click Debug -> Options
Go to Debugging -> General
Check the box for “Use Managed Compatibility Mode”
Exit Visual Studio
Next time you try debugging code, the error should be gone, and you should be able to see the value of variables
Keywords: Debugging VSTA
Debugging Fidelis
Write Key Routines
VSTA Watch Window
References: None |
Problem Statement: ProMV Online agents are color coded based on their state. A purple agent indicates that there is a data disruption, a calculation error, or no license.
If you navigate to the agent dashboard, you will see a Result delay message.
This article shows how to check if the result delay is related to your execution frequency, and if so, how to change the execution frequency so the result is no longer delayed. | Solution: One possible reason for a delayed result is that your historian and ProMV are not able to communicate fast enough to exchange data in time for ProMV to execute the agent. The steps below show how to check if this is the case.
From the ProMV Online Continuous or ProMV Online Batch page, click on the wrench icon in the top right corner.
Select Logging Options.
Click the download icon for the agent with a delayed result.
Unzip the logs file.
Open the most recent file beginning with online.
Search for messages similar to these, with a recent timestamp:
(6-17112) 01/25/23 09:12:50.123 [Error] Executor.cs::LogPerformance ReadData took 698,7694786 seconds to finish which may result into result delayed.
(6-17112) 01/25/23 09:12:50.235 [Error] Executor.cs::ExecuteOnline Performance issue detected: it took too much time (698916 milliseconds) to finish current execution with execution frequency 240000 milliseconds.
This message indicates your agent is configured to execute faster than it can acquire data. You can try to solve this issue in two ways: first, by increasing the communication speed between ProMV and your historian, or second, by reducing the execution frequency. The following steps show how to reduce your agent's execution frequency.
Calculate a new execution frequency. Check the log file and find the longest it has taken your agent to acquire data and execute. Take the number in the parentheses and convert it to minutes; here, 698916 milliseconds = 699 seconds = 11.6 minutes. Your new execution frequency should be longer than this, so in this example we could select 12 minutes. (A short conversion sketch follows these steps.)
Go to the ProMV Online Continuous or ProMV Online Batch Agent Summary page. Find your delayed agent, and click on the 3 dots in the bottom right corner.
Choose Edit & Deploy Agent to change the execution frequency.
Select Deploy to skip to the section where you will change the execution frequency.
Under the Detection Dashboard Parameters, update the Execution interval to the number you determined in step 8.
If data acquisition is taking a long time, it is also recommended to set Start calculations to From current time.
In the bottom right of the page, click Deploy. Your agent should now be executing at the updated interval, and after it finishes acquiring data, you should no longer see the Result delay state. You may have to wait for one execution cycle to see the change.
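The conversion in the first step above can be scripted as a quick check. This is illustrative Python only; substitute the longest execution time found in your own log file:

import math

# Convert the longest logged execution time to a safe execution interval.
longest_ms = 698916                      # value taken from the log line above
minutes = longest_ms / 1000 / 60         # about 11.6 minutes
new_interval_min = math.ceil(minutes)    # round up, giving 12 minutes here
print(f"{minutes:.1f} min measured; set the execution interval to {new_interval_min} min")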
Keywords: Result delay
Purple ProMV agent
Agent not updated
Agent delayed
References: None |
Problem Statement: If you view a ProMV Online Agent Dashboard, you may see a red caution icon next to the Dashboard header. If you click on the icon, it has a message with the warnings NoDataXTags and MissingXVariables.
These messages appear when ProMV has not received updated data for a tag within a certain window of time. The section below shows how to set this window to match the expected frequency that the tag updates with new data. | Solution: First, we will check the frequency of the tag, and then we will update the stale time out window.
Return to the ProMV Online Batch or ProMV Online Continuous Agent Summary page.
Click the wrench icon in the top right and select aspenOne Process Explorer.
Close the Quick Reference if it appears.
Select Process Explorer.
In the search bar, type the name of the tag indicated in the NoDataXTags message.
Select it in the list of results that appears.
Check the box next to the tag. If the pencil tool bar is not expanded, click to expand it.
Choose the table icon to display the values of the tag in a table.
Use the timestamps to determine how often your tag is updated. This tag is updated every 15 minutes. Your new stale time out window for this tag should be longer than the interval between measurements.
If your tag is not updated frequently enough, you may have to increase your search window. Try switching between time periods using the pictured buttons. If your table does not update, try deleting and re-adding the tag.
Return to the ProMV Online Batch or ProMV Online Continuous Agent Summary page. For the agent with the missing tag data message, click on the 3 dots in the bottom right corner.
The Stale Time Out can be updated either through Update Live Agent or Edit & Deploy Agent, so choose one of those options.
Click on the tag icon to go to the Input Mapping page. Your view will be slightly different depending on which option was chosen in the last step.
Increase the Stale Time Out to the interval you determined in step 9.
Use the navigation buttons at the bottom right of the screen to finish updating or deploying your agent.
You should no longer see the missing data warning. You may have to wait for the next execution cycle to complete before the warning disappears.
Keywords: ProMV agent missing data, ProMV agent stale data, NoDataXTags, MissingXVariables
References: None |
Problem Statement: When encountering an issue, Aspen eSupport consultants may ask for the Process Pulse support package, which contains useful diagnostic information. This article describes how to generate the support package, which can then be relayed to Aspen eSupport. | Solution: Open the Aspen Process Pulse Dashboard and click on the i (information) icon on the right side of the top ribbon, then click on the SUPPORT PACKAGE icon
You may type in any additional comments about the package and then click OK
A Windows dialog box will then appear. Save the file to a location on your system, then email the support package to Aspen eSupport.
Keywords: Process Pulse
support package
logs
References: None |
Problem Statement: When connecting to a Mtell server from a remote desktop, a variety of network errors may occur, such as:
MIS web service error... Unable to connect to the remote server. A related KB article (https://esupport.aspentech.com/S_Article?id=000095090) can be used to resolve the error on the Mtell server itself, but it does not apply to remote desktops
Training and agent services are unable to find and connect to the Mtell server | Solution: These connection issues are commonly caused by network or firewall settings. We can use Windows commands (ping, netstat) as a starting point to help determine the source of the issue:
Open Command Prompt by typing cmd in Windows search
Execute ping Mtell server
Example: ping 127.0.0.1
If the request timed out or there is 100% packet loss, then the remote desktop is likely being blocked by a firewall or there may be internet connectivity issues. The relevant IT team should be contacted for configuring the firewall to allow access
If ping request completes successfully with no dropped packets, move on to the next step
Execute netstat | findstr /c::Mtell service port
Example: netstat | findstr /c::4505
The ports used for agent services can be located through System Manager > Configuration > Agent Services > Service (right-side) > Port
If the TCP connection for the training/agent services does not display ESTABLISHED and is stuck on SYN_SENT instead, this indicates that the client was able to send a message to the server but did not receive a response. This is likely caused by the server-side port (50097 in the above case) blocking incoming messages. In this case the IT team also needs to be notified about opening the blocked port.
If the output of netstat is normal and all Mtell service connections are established, move on to the next step
Whitelist Mtell folders from antivirus monitoring
Follow the KB article here: https://esupport.aspentech.com/S_Article?id=000075106
If the connection issues remain after these steps, the root cause is potentially not network-related and further troubleshooting is required. For a quick programmatic check of a specific service port, see the sketch below.
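As an alternative to netstat, a direct TCP connection attempt can distinguish a blocked port from a reachable one. This is a generic Python sketch, not part of the Mtell toolset; the host and port are placeholders (4505 is the example port used above):

import socket

# Try to open a TCP connection to the Mtell service port. A timeout or a
# "connection refused" error points to a firewall block or a stopped service.
host, port = "127.0.0.1", 4505   # placeholders: use your Mtell server and port
try:
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} established")
except OSError as exc:
    print(f"Could not connect to {host}:{port}: {exc}")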
Keywords: Mtell
agent services
training services
not connected
netstat
ping
References: None |
Problem Statement: This problem has two parts. After upgrading from V12 to V14, when machine learning agents are triggered and emails are sent out, the received emails display a configuration error message, and the sensor trends are completely missing from the email. The error message is shown below: | Solution:
To resolve the configuration error in emails
The cause of the error is a web.config file which was not properly updated and references an older DevExpress version. There are two solutions: (1) replace the config file with one from a proper V14 installation; (2) if a compatible config file is not available, uninstall Mtell, delete the C:\inetpub\wwwroot\AspenTech\AspenMtell\APM directory, and reinstall Mtell.
The instructions for replacing the config file are as follows:
Replace web.config file
Open File Explorer
Go to this location (C:\inetpub\wwwroot\AspenTech\AspenMtell\APM)
Replace web.config file with one from another machine with a clean V14 installation (you can check this by seeing if the replacement web.config file references DevExpress.Web.v22.1 instead of DevExpress.Web.v13.2)
Restart IIS services
To resolve missing sensor trends in emails
Open Internet Information Services (IIS) Manager
Expand [Server] > Sites > Default Web Site > AspenTech > AspenMtell > APM
Click on APM, then double-click Authentication under IIS
Check that Anonymous Authentication is set to Enabled, if not, right-click > Enable
Right-click Anonymous Authentication and click Edit..., check that the Specific user is enabled and set to IUSR or a user who has elevated privileges on the server
Reset IIS, and the alert emails should include sensor trends
Keywords: configuration error
email
dev express
missing trends
References: None |
Problem Statement: The alert emails for tags subscribed in the aspenONE Alerts application, accessed from aspenONE Process Explorer, are received as plain text (as seen in the first screenshot below)
instead of a formatted message (as demonstrated in the second screenshot below). | Solution: Verify that the file EmailConfig.txt exists in the C:\Program Files\AspenTech\InfoPlus.21\db21\code folder. If not, copy the EmailConfig.txt file attached to this solution into the C:\Program Files\AspenTech\InfoPlus.21\db21\code folder.
Keywords: alerts
email
e-mail
text
References: None |
Problem Statement: Can't upload an image to Aspen Graphic Studios when following these steps:
Open Aspen Graphic Studios.
Go to File, Open, Project/Graphic.
Go to Draw, Static Object, Image.
Select the image you'd like to insert and click Open. | Solution: If the name of the image contains a “.”, you won't be able to upload it. Delete the “.” from the name; Aspen Graphic Studios should now recognize the image.
Keywords: Graphic Studios, Image, Upload
References: None |
Problem Statement: This knowledge base article describes how to enable debug logging for the TSK_KPI task. | Solution: Note: please take a backup of the Aspen InfoPlus.21 (IP.21) database snapshot from the IP.21 Administrator first.
1- Load the attached DebugTask.RLD to IP.21 database in IP.21 Administrator. Please refer to KB000077937 - How do I load a Recload file (.RLD) into an Aspen InfoPlus.21 database?
2- Stop and start the TSK_KPI task in IP.21 Manager for the changes to take effect after loading DebugTask.RLD.
3- When the error condition is detected in TSK_KPI task, stop TSK_KPI task from IP.21 Manager and please share both TSK_KPI.OUT and TSK_KPI.ERR to [email protected] for further investigation.
4- Open Aspen SQLplus and run the attached CleanupDebugTask.SQL to disable debug logging.
5- Start TSK_KPI task from IP.21 Manager.
Keywords: KPI, TSK_KPI, Debug, Logging
References: None |
Problem Statement: How to create more than one Unit Operation Event using Edit in Excel? | Solution: To edit, add or delete event data using the multi-event editor template:
On the Event interface, select the events you wish to edit. You can either use SHIFT+click to select individual events or click and drag around the desired events to select multiple events. Your selected events are highlighted. Right-click on any highlighted event and click Edit in Excel from the shortcut menu.
For the demo model, a default template with the following columns and a row for each selected event will appear.
In Excel, modify, add, or delete data as desired. If you add more rows, new Unit Operations will show up in the Users Interface when you save and close.
Save your data and exit Excel.
Should you want to modify additional information for a new Unit Operation Event, you can create a new template.
To create a multi-event editor template:
While on the Event Interface click Events | Edit in Excel Templates. The Multi-Event Editor Template wizard appears.
If you are creating a new template, click the Select Template Type drop-down list to display and select the type of event template to create; in this case, Unit Operation Events.
Select the Open as Default option if you wish the selected template to always be the default for the selected template type. The value is written as the EVENT_DEFAULT_TEMPLATE_<eventType> CONFIG keyword value.
The Available column lists all available data that you can select to add to the spreadsheet. Greyed items are required data.
The Selected column lists the data that will be displayed. Grayed options are standard data that will always be displayed.
Select an item from the Available list and click > to move the item to the Selected list.
Clicking >> moves all items to the Selected list.
Click Next > to continue. The Edit Report Layout and Save to Database screen appears.
Click to select the desired data and use the Move Up and Move Down buttons to shift it to the desired position. The order in the Selected Data list box indicates the order in which the data is displayed in Excel; top to bottom in the list corresponds to left to right in Excel. Repeat until the data is in the desired order.
Enter a template name in the Template Name field. If you are editing an existing template, this name appears in this field.
Click Create Template to save your template changes.
You will get a confirmation Dialog Box
Now if you click on Edit in Excel, you will see the updated Template
Keywords: Edit in Excel, template, APS, Aspen Petroleum Scheduler, unit operations
References: None |
Problem Statement: How does the Find Object tool work in Aspen Plus? | Solution: Activate it by right-clicking on the flowsheet and choosing Find Object (or typing Ctrl+F), or select it from the View ribbon
Very useful for navigating large, complicated flowsheets
Works in both the Properties and Simulation environments
Keywords: Aspen Plus, Find, Object
References: None |
Problem Statement: How does the flowsheet Copy/Paste functionality work in Aspen Plus? | Solution: You can copy and paste flowsheet objects while maintaining the integrity of the input data within the input forms for the selected objects:
Highlight block(s) and/or stream(s)
Choose Copy from the Home ribbon, or right-click and choose Copy, to make a copy of the objects within the selection rectangle
Or, choose Copy Special to include any other input data, such as Components, Properties, Design Specs, Calculators, etc., in the copy function
Use the Cut functionality on the right-click menu to copy the object to the clipboard and delete it, in a single step
Select Paste to paste the selected objects
The Resolve ID Conflicts dialog box appears when two or more objects share the same name. You must fix the names before pasting the objects in the current flowsheet
In addition to using menu selections, you can also use Ctrl-C for Copy, Ctrl-X for Cut, and Ctrl-V for Paste
You can copy these same objects from one simulation into another
Keywords: Aspen Plus, Copy, Paste
References: None |
Problem Statement: What is the difference in use between Pipe and Pipeline Models in Aspen Plus? | Solution: Both models calculate the pressure drop and heat transfer changes due to acceleration, friction, and elevation
Pipe block models a single pipe segment
Pipeline models a multiple-segment pipe
Calculation options:
If the inlet pressure is known, Pipe or Pipeline calculates the outlet pressure
If the outlet pressure is known, Pipe or Pipeline calculates the inlet pressure and updates the state variables of the inlet stream
For Pipe, entrance effects can be specified on the Fittings2 sheet
Keywords: Aspen Plus, Pipe, Pipeline
References: None |
Problem Statement: What is the difference in use between Compr and MCompr Models in Aspen Plus? | Solution: Compr block can be used to simulate:
Polytropic centrifugal compressor
Polytropic positive displacement compressor
Isentropic compressor
Isentropic turbine
For multi-stage compressors, use MCompr
MCompr can have intercoolers and knockout streams between stages
Keywords: Aspen Plus, Compr, MCompr
References: None |
Problem Statement: How does Aspen Plus report the volume flowrate Nm3/hr vs Sm3/hr, and what is the best way to make this information available through stream <add properties>? | Solution: 1. For Aspen Plus, the volume flow unit Nm3/hr is equal to Sm3/hr.
2. For volume flowrate, there are three different sets of units with the word standard in their names, used in various industries and available in Aspen Plus, each with different reference conditions:
• Standard vapor volume in standard cubic feet (ideal gas at 14.696 psia and 60°F).
• Standard vapor volume in standard cubic meters (ideal gas at 1 atm and 0°C), also called normal cubic meters.
• Standard liquid volume in various volume units (approximately 60ºF and 1 atm, but the exact conversion is determined by the property VLSTD for each component, which provides an equivalent mass of the component).
3. Through Add Properties, Standard Volume Flowrate can be added to a stream.
Click Add Properties to expand the Edit Stream Summary Template window and search Volume flow, check the desired Standard volume flow options, for vapor phase or for liquid phase, for mixture or for components.
In addition, properties added to the stream can be saved by saving the stream summary as a new template: click the Save as New button to name it, and the template can then be applied to the whole model.
Keywords: Aspen Plus, Volume, Flowrate
References: None |
Problem Statement: How to view the plate efficiency when using rate-based modelling in Aspen Plus? | Solution: To show the plate efficiency in a column, from Blocks | Column name (e.g. C-101) | Rate-Based Modeling | Rate-Based Report | Efficiency Options, check the “Include tray efficiencies” box.
After that, re-run the column model and view the rate-based report on the Blocks| Column name|Rate-Based Modeling| Efficiencies and HETP form.
These efficiencies are calculated from the hydraulics of the column, and the results will not change whether the option is enabled or not, because Aspen Plus always calculates them; when the box is unchecked, the calculation simply runs in the background and the efficiencies are not displayed.
Keywords: Aspen Plus, Plate, Efficiency
References: None |
Problem Statement: What physical property model does HYSYS BLOWDOWN use? | Solution: Unlike HYSYS Depressuring, HYSYS BLOWDOWN uses a unique physical property method called PREPROP. It is a computer package specially developed for BLOWDOWN by Imperial College, which calculates the thermo-physical properties of multi-component mixtures by an extended principle of corresponding states (i.e. common Equation of State functions). The extension is used to fix the molecule shape factor for non-spherical molecules. The basic idea is to relate the mixture properties to those of a single reference substance; methane is used as the reference substance.
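Schematically, the corresponding-states mapping can be written as below. This is the textbook shape-factor form, shown only to illustrate the principle; it is not the exact PREPROP formulation:
Z_mix(T, V) = Z_ref(T/f, V/h), with f = (Tc / Tc,ref) x θ and h = (Vc / Vc,ref) x φ
where θ and φ are the shape factors that correct for non-spherical molecules, and the reference fluid (methane) is evaluated at the scaled conditions T/f and V/h.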
The key to an accurate depressurization result is good simultaneous prediction of phase equilibrium, enthalpy, and density. The depressuring process often takes place near critical conditions and requires accurate heat transfer calculation not only for the process fluid, but also at the interfaces between fluid phases and between fluid and metal. Most EOS models give very reasonable phase equilibrium results but are less successful in enthalpy and density prediction, because an EOS accounts mainly for the physical (attractive and repulsive) forces and association (hydrogen bonding) between molecules, and less for molecule shape and size. In contrast, PREPROP considers all of the above-mentioned aspects and is thus more rigorous and accurate.
For example, a volatile liquid such as condensate depends critically on following the correct thermodynamic trajectory in phase space, with correct enthalpy calculations at the phase boundaries. PREPROP can estimate the associated uncertainties, while other EOS models cannot.
Another advantage of HYSYS BLOWDOWN regarding physical properties is that it can rigorously calculate three fluid phases in a vessel, including their heat transfer with the metal and through the choke.
For more information about HYSYS BLOWDOWN theory, please refer to the article BLOWDOWN of Pressure Vessels by M.A. Haque of the Department of Chemical Engineering, Imperial College, London.
Keywords: BLOWDOWN, Depressuring
References: None |
Problem Statement: The routing process reported no errors when the Auto Force Through Rack option was ON. However, certain branch pipes have unnecessary bends, and in some instances, pipes cross one another. | Solution: The answer to this issue is to manually remove the Volume Rack ID from the .lls file (the Line List file).
Let's examine the identical sample model and determine the rack volume.
The only way to remove the Rack Volume IDs assigned by Auto Force Pipes in the current version of OptiPlant V14.0 is to manually delete each branch's Rack Volume ID by opening the .lls file in a text editor:
Because all pipes with a Battery type are required to have a Volume Rack ID by the algorithm, the Rack Volume IDs cannot be removed using the Details window in Line List. If the user does not supply this information, the algorithm will automatically add it. This is the reason why the Rack Volume ID cannot be deleted via the UI in this version. The next OptiPlant release ought to include a patch for this problem.
After manually removing the branches' Rack Volume IDs, the routing process shows correct results:
This time, the route has been properly planned out without any unnecessary bends or pipe crossings.
Keywords: Auto Force Through Rack, Line List, Rack Volume.
References: None |
Problem Statement: How does OptiPlant calculate Foundation? | Solution: The method for calculating foundation in OptiPlant is described below.
Foundation Overview
OptiPlant automatically calculates the required foundation for all equipment and steel placed at, above, or below grade elevation, based on various soil properties and site conditions.
User can select either Shallow Foundation type or Deep Foundation (Piling) type
The automatically calculated foundation will be displayed in the 3D model with a tag, can be modified by the user and MTOs can be extracted for them.
Below is the typical configuration picture for shallow & deep foundation to assist understanding the terms used in detailing FoundationData.csv data file.
Pre-requisites per recommended work-process
Weight of equipment – to be provided through the parametric form
Weight of structures – standard member sizes for pipe-racks and other structures to be assigned through the parametric form for OptiPlant to automatically calculate weight of steel.
SteelProp.csv file for Dead & Live Loads – Foundation calculation reads SteelProp.csv file for various Dead & Live loads. This includes –
All Design margins
Design factors for Cable tray and Walkway
Pipe fitting consideration
Pipes and cable tray routing - to be done in order to account for their weights as well. These are optional. However, foundation calculation can also be done solely for the steel and their equipment placed on its floors or above the pipe racks. It is recommended to route the pipes and cable trays (if any) for the foundation calculation to consider their weights as well.
Grade elevation - to be provided through the plot-plan properties form.
FoundationData.csv file
All parametric values listed in FoundationData.csv file can be configured by user according to specific project requirement. Commonly used default values have been taken to run the calculation. Below are the descriptions for the various parameters listed in FoundationData.csv file in the same sequence as given in the data file.
Project Soil Type – Some commonly found soil classes with their soil bearing capacities are provided in Table 1. Copying any of these classes from Table 1 as the Project soil type will cause it to be used to calculate the bearing surface area of the footing.
Specified concrete strength – Default concrete strength has been taken as 3000psi. It is used to calculate Allowable shear stress in concrete. User can change this value for any specific concrete mixture.
Factor of safety – User can decide it based on the requirement. Factor of Safety is a common design principle used by civil/structural engineers to define the actual load-bearing capacity of a structure or component, or the required margin of safety for a structure or component according to code, or design requirements. It’s usually max stress/designed stress. Common numbers used are 1.3 – 1.5 for reliable designs and could be higher for unknown or stressful conditions.
Maximum elevation for the foundation calculation with respect to grade – This elevation decides whether to build foundations for objects which are not directly placed on the grade but need foundations. For such objects, the pedestal length will be extended above the grade up to the base of the column. The pictures above show this case; the “h” values show the pedestal extended from grade level.
Table 1 – This table lists various soil types with their respective soil bearing capacities. The user can update it with any specific kind of soil; to use that soil in the calculation, copy it as the “Project Soil Type”. The user can also directly update the “Project Soil Type” without updating Table 1.
Table 2 – Values given in this table controls foundation extension from the Tank/Tower periphery.
In the first run, OptiPlant calculates the foundation diameter based on the weight input provided by the user. If the calculated foundation diameter does not fulfil the minimum extension value given from the object periphery, then it will recalculate the foundation diameter per the following formula and will display in 3D model: Object diameter + 2x Min extension value
In the first run, if the calculated foundation diameter exceeds the maximum extension value given, then it will recalculate the foundation diameter per the following formula and will display in 3D model: Object diameter + 2x Max extension value
In the first run, if the calculated foundation diameter lies in between the above said formulae then it will adopt the same and will display the same in 3D model.
Table 3 – Values given in this table controls foundation extension from the rectangular flat bottom object’s periphery.
In the first run, OptiPlant calculates the foundation length/width based on the weight input provided by the user. If the calculated foundation length/width does not fulfil the minimum extension value given from the object periphery, then it will recalculate the foundation length/width per the following formula and will display in 3D model: Object length/width + 2x Min extension value
In the first run, if the calculated foundation length/width exceeds the maximum extension value given, then it will recalculate the foundation length/width per the following formula and will display in 3D model: Object length/width + 2x Max extension value
In the first run, if the calculated foundation length/width lies in between the above said formulae then it will adopt the same and will display the same in 3D model.
Table 4 – This table is used for the embedment depth (Min D). When a user adds weight to any equipment in OptiPlant, that weight is checked against this table and the corresponding minimum “D” value is used to build the embedment depth. (Embedment depth “D” is shown in the figure under the section Foundation Overview.)
Table 5 – Objects placed above pipe racks and structure frame buildings can be excluded or included for foundation calculation through this table. Objects placed above the pipe rack/structure will be included if they fall within the elevation range given in the data file, and excluded if they do not.
Depth of bearing stratum – The depth of bearing stratum is to be provided by the user per the site conditions. OptiPlant will then calculate the required length of pile per the following formula:
Length of pile = (Depth of bearing stratum with respect to grade elevation) – (Embedment depth “D”) – (height of footing “d”).
It is assumed that no length of pile is penetrated into the bearing stratum.
Diameter of Solid Concrete Pile – User can select various pile diameters to check the numbers of pile required for the same sizes and then conclude which size to be finalized for Pile.
Solid friction angle (deg) – Table 9 lists various solid friction angles versus respective bearing capacity factors. Copying any of the solid friction angles here will fetch the respective bearing capacity factor, which is used to calculate the load-carrying capacity of the pile point.
Sand Unit Weight – Its default value has been taken as 17 KN/m3. This value is also used to calculate load-carrying capacity of the pile point. User can change this value for any specific sand unit weight.
Table 8 – This table lists “depth below ground surface” versus “penetration resistance values (N60)”. It is used to calculate the average penetration resistance value (N60) from the depth of bearing stratum provided by the user. This average N60 is then used to calculate the average unit frictional resistance, from which the frictional resistance and, finally, the ultimate load-carrying capacity of the pile are calculated.
Table 9 – This table lists various solid friction angles versus respective bearing capacity factors.
Types of Foundations
As elaborated above, FoundationData.csv file can be configured based on the project requirement. User will then be able to calculate foundation which will be based on the project specific inputs provided in the data file. User can also run the foundation calculation using the default values provided in data file.
Based on the requirement, two types of foundations can be built in OptiPlant:
Shallow Foundation: Below are the steps to automatically calculate the Shallow foundation:
Go to the menu Build >> Civil/Foundation, select calculate foundation, which will open up a ‘Calculate foundation’ window.
Select Type of foundation as “Shallow”.
Click on “Calculate Foundation” will build Shallow foundation for the objects in the model.
Deep Foundation: Below are the steps to automatically calculate the Deep foundation:
Go to the menu Build >> Civil/Foundation, select calculate foundation, which will open up a ‘Calculate foundation’ window.
Select Type of foundation as “Deep”.
Click on “Calculate Foundation” will build Deep foundation for the objects in the model.
Foundation Calculation’s Formulas:
Some basic formulas used for the foundation calculation are provided below (a worked numerical sketch follows the list):
Shallow foundation: Following formulae are being used in shallow foundation calculation -
Footing dimension (B): B = Sqrt (Q / (number of columns x Bearing capacity of soil))
Footing height “d”: d²x(4Vc + q0/4) + d(4x0.85√fc + q0/2)w - (B² - w²) x q0/4 = 0
Embedment depth “D” – to be taken from Table 4.
Pedestal height “h” – it will be calculated based upon the elevation of the column/legs/saddle from the grade elevation.
Deep Foundation: Following formulae are being used in deep foundation calculation-
Footing dimension (B) = Sqrt (Q / (number of columns x Bearing capacity of soil))
Footing height “d” = d²x(4Vc + q0/4) + d(4x0.85√fc + q0/2)w - (B² - w²) x q0/4 = 0
Embedment depth “D” = to be taken from Table 4.
Pedestal height “h” = calculated based upon the elevation of the column/legs/saddle from the grade elevation.
Ultimate load-carrying capacity “Qu” = Qp + Qs
Load-carrying capacity of the pile point “Qp” = Ap x qp = Ap x q′ x Nq* = Ap x (0.5 x pa x Nq* x tan φ′)
Qp can also be calculated by the formula given below:
Qp = Ap x γ x L x Nq*
Least value of Qp is chosen from the above two formulae.
Frictional resistance “Qs” = p x L x fav
Average unit frictional resistance “fav” = 0.02 x pa x avg(N60)
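As a quick numerical illustration of the relations above, here is a minimal Python sketch; the symbols follow the formulas listed above, and all numeric inputs are hypothetical example values, not OptiPlant defaults.
import math

def footing_width(Q_kN, n_columns, bearing_kPa):
    # B = Sqrt (Q / (number of columns x Bearing capacity of soil))
    return math.sqrt(Q_kN / (n_columns * bearing_kPa))

def pile_length(stratum_depth_m, embed_D_m, footing_d_m):
    # No length of pile is penetrated into the bearing stratum
    return stratum_depth_m - embed_D_m - footing_d_m

def pile_point_capacity(Ap_m2, pa_kPa, Nq_star, phi_deg, gamma_kN_m3, L_m):
    # The least value of the two Qp expressions above is chosen
    qp1 = Ap_m2 * 0.5 * pa_kPa * Nq_star * math.tan(math.radians(phi_deg))
    qp2 = Ap_m2 * gamma_kN_m3 * L_m * Nq_star
    return min(qp1, qp2)

def frictional_resistance(perimeter_m, L_m, pa_kPa, avg_N60):
    fav = 0.02 * pa_kPa * avg_N60   # average unit frictional resistance
    return perimeter_m * L_m * fav  # Qs = p x L x fav

print("B =", round(footing_width(Q_kN=500.0, n_columns=4, bearing_kPa=150.0), 2), "m")
L = pile_length(stratum_depth_m=20.0, embed_D_m=1.5, footing_d_m=0.6)
Qp = pile_point_capacity(Ap_m2=0.07, pa_kPa=100.0, Nq_star=60.0,
                         phi_deg=35.0, gamma_kN_m3=17.0, L_m=L)
Qs = frictional_resistance(perimeter_m=0.94, L_m=L, pa_kPa=100.0, avg_N60=15.0)
print("Pile length =", L, "m;  Qu = Qp + Qs =", round(Qp + Qs, 1), "kN")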
Keywords: None
References: None |
Problem Statement: Why am I unable to see the calculation for the foundation? | Solution: You must review and adjust your Plot Plan Information and Parametric Forms to ensure they are on the same Grade Elevation/Elevation (Z), because OptiPlant automatically calculates the required foundation for all equipment and steel placed at grade.
Plot Plan Information can be accessed under Edit ribbon, and then Plot Plan Properties.
For equipment, check the Placement in Z direction.
For structure, check for Elevation.
There will be no foundation calculation, for example, if your plot plan has a grade elevation of 1000 and your equipment elevation is 0.
Keywords: Foundation, Plot Plan
References: None |
Problem Statement: How should vapor-liquid (VLE) or liquid-liquid (LLE) solubility data be regressed in Aspen Plus when the composition of only one phase is available for each temperature? | Solution: It is possible to regress vapor-liquid (TPXY) data with missing phase compositions; however, it is usually not possible to regress liquid-liquid (TPXX) data with missing phase compositions.
Aspen Plus can estimate missing vapor-liquid equilibrium TPXY points, but it can only estimate missing binary liquid-liquid equilibrium TPXX data points in V14 and higher.
Regression with missing TPXX data points will not give accurate results.
Vapor-Liquid Data
It is possible to enter the vapor-liquid (TPXY) composition data on the Properties | Data | Mixture form and simply leave the compositions for a phase empty.
Using Raoult's law, Aspen Plus can estimate the compositions missing in vapor-liquid equilibrium (VLE). It is common to regress Henry parameters from data that only has the liquid phase compositions.
Liquid-Liquid Data
From compositions in one liquid phase, it is difficult to estimate the composition in the other liquid phase for multicomponent mixtures computationally.
It is possible to enter TPXX and TPXXY data on the Properties | Data | Mixture form in V10 and earlier; however, in V11, V12 and V12.1, data which is missing X1 or X2 is no longer permitted, and the forms containing it will be marked incomplete. In V14 and higher, for data for a binary system, if either X1 or X2 is missing for a point, Data Regression either searches for or estimates the missing value by interpolation among all data points for the current dataset, and all binary data involving this binary in the current regression. A warning will be issued.
To complete the regression with liquid-liquid equilibrium (LLE) in Aspen Plus, users will need to estimate the missing points themselves by plotting the data and reading pairs of data points from the graph before performing the regression, to ensure the best results. The estimated points should be given a higher standard deviation than the measured points.
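As an illustration of this estimation step, the Python sketch below uses simple linear interpolation in place of reading values off a graph; all data values are hypothetical.
import numpy as np

# Mutual-solubility points where the second-phase composition X2 was measured
T_known = np.array([20.0, 40.0, 60.0, 80.0])     # degC
x2_known = np.array([0.020, 0.028, 0.041, 0.060])

# Temperatures where only X1 was reported; estimate the missing X2 values
T_missing = np.array([30.0, 50.0, 70.0])
x2_estimated = np.interp(T_missing, T_known, x2_known)
print(dict(zip(T_missing.tolist(), x2_estimated.round(4).tolist())))
# Enter these estimates on the Data form with a larger standard deviation
# than the measured points so they carry less weight in the regression.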
Keywords: DRS
solubility
data regression
missing composition
References: : VSTS 602453, VSTS 718714 |
Problem Statement: What is the best route to use LLE data that is not in mutual solubility pairs in data regression?
My normal procedure is to graph the data in Excel and use a graphical interpolation add-in (such as CharTools) to pull off mutual solubility data. It works but is time consuming. Is there a way to use the data directly in Aspen Properties? | Solution: From compositions in one liquid phase, it is difficult to estimate the composition in the other liquid phase for multicomponent mixtures computationally. Using Raoult's law, Aspen Plus can estimate vapor phase compositions missing in vapor-liquid equilibrium (VLE), but this does not work for liquid-liquid equilibrium (LLE). The missing points should be estimated by hand graphically. The estimated points should be given a higher standard deviation than the measured points.
In V11, V12, and V12.1, when entering data for property regression, TXX, PXX, TPXX, or TPXXY data which is missing X1 or X2 is no longer permitted, and the forms containing them will be marked incomplete. This change was partially reverted starting in V14. For TXX or TPXX data for a binary system, if either X1 or X2 is missing for a point, Data Regression either searches for or estimates the missing value by interpolation among all data points for the current dataset, and all binary data involving this binary in the current regression. A warning will be issued.
Keywords: DRS, regression
References: : VSTS 602453 |
Problem Statement: What is the BIOFEED databank new in V14? Where were the parameters obtained? | Solution: The BIOFEED databank contains parameters for 19 base biomass components, 13 intermediate and product conventional components, and 530 biocomponents used as feeds to biomass conversion processes. It should be used only with one of the methods in the BIOCONV filter as base property method.
The biocomponent feedstocks are categorized by the letters at the start of their aliases, with the remainder of each alias being a distinguishing number:
Category Alias Count
Agricultural waste AWASTE 127
Industrial waste IWASTE 6
Municipal waste MWASTE 23
Micro/macroalgae ALGAE 9
Grasses GRASS 168
Hardwood HWOOD 119
Softwood SWOOD 78
The biocomponents represent biological compounds which may have complex or inexact formulas. You can represent them by atom ratios.
When you add a component of compound class BIOCOMPONENTS to the simulation, either using the Find Compounds window or by typing its component ID, the Type on the Components | Specifications | Selection sheet is set to Biocomponent, and data for it is filled in on the Biocomponents form. In addition, if the biocomponent is from the BIOFEED databank, the Biomass Lookup button becomes available on the sheet. Clicking this adds to the simulation conventional components that any biocomponents in the simulation convert to.
The Biomass Lookup button is used to display all bio-base components used in bio-feedstock components defined on this form. Click OK in the dialog box to add the selected components to this form as type Solid.
You can check the composition of each bio-feedstock component on the Biocomponents | Biofeed sheet.
More details are found in the Help under Aspen Plus -> Physical Property Data Reference Manual -> Databanks -> Available Pure Component Databanks -> BIOFEED Component Databank.
Keywords: None
References:
In addition, there is an example file located in
C:\Program Files\AspenTech\Aspen Plus V14.0\GUI\Examples\Biofuel and Biochemicals\Biomass characterization
The files are
Biofeed converter.apwz
Biomass characterization with BIOFEED.pdf
References
P. E. A. Debiagi et al., “Extractives Extend the Applicability of Multistep Kinetic Scheme of Biomass Pyrolysis,” Energy Fuels, vol. 29, no. 10, pp. 6544–6555, Oct. 2015, doi: 10.1021/acs.energyfuels.5b01753.
A. Aden et al., “Lignocellulosic Biomass to Ethanol Process Design and Economics Utilizing Co-Current Dilute Acid Prehydrolysis and Enzymatic Hydrolysis for Corn Stover,” National Renewable Energy Lab. (NREL), Golden, CO (United States), NREL/TP--510-32438, Jun. 2002. Accessed: Nov. 14, 2015. [Online]. Available: http://www.osti.gov/scitech/biblio/1218326-process-design-report-stoverfeedstock-lignocellulosic-biomass-ethanol-process-design-economics-utilizing-co-current-diluteacid-prehydrolysis-enzymatic-hydrolysis-corn-stover
D. Humbird et al., “Process Design and Economics for Biochemical Conversion of Lignocellulosic Biomass to Ethanol: Dilute-Acid Pretreatment and Enzymatic Hydrolysis of Corn Stover,” National Renewable Energy Laboratory (NREL), Golden, CO., NREL/TP-5100-47764, Mar. 2011. Accessed: Nov. 14, 2015. [Online]. Available: http://www.osti.gov/scitech/biblio/1013269-process-designeconomics-biochemical-conversion-lignocellulosic-biomass-ethanol-dilute-acid-pretreatmentenzymatic-hydrolysis-corn-stover
D. W. Templeton, E. J. Wolfrum, J. H. Yen, and K. E. Sharpless, “Compositional Analysis of Biomass Reference Materials: Results from an Interlaboratory Study,” BioEnergy Res., vol. 9, no. 1, pp. 303–314, Mar. 2016, doi: 10.1007/s12155-015-9675-1.
Reference: VSTS 848722 |
Problem Statement: Is it possible to use multiple sets of NRTL parameters? | Solution: Yes, it is possible to use multiple sets of most thermodynamic and transport parameters. The different data sets need to be associated with different property methods. These different property methods can be used for different flowsheet sections (specify on the Properties | Specifications | Flowsheet Sections sheet) or unit operation blocks (specify on the block's Block Options | Properties sheet).
To change the data set number, go to the Properties | Property Method | Models sheet and change the data set for the parameters in the desired model.
Once the data set number has been changed, it will be possible to enter values for the parameters. The parameters will be indicated by forms called PARAMETERNAME-2. The binary parameter forms such as NRTL-2 are created automatically. Temperature-dependent parameter data sets may appear in Miscellaneous folder rather than the correct category for data sets other than set 1.
Note that the parameter values for Data set n+1 defaults to data set n. This means that if you do not enter all parameters in the second data set, parameters from the first data set may be used where they are missing in the second data set. The reverse is not true. In general, if parameters in a data set are missing, the Aspen Physical Property System will look into lower numbered data sets for the parameters, but not into higher numbered data sets. This is designed for convenience, to allow you to create a second data set which modifies only a few parameters, but you should be aware of this behavior. If you do not want this kind of defaulting, you must specify all parameters in the second data set which are defined in the first data set.
The attached file has three blocks that are set up to use three different sets of NRTL parameters for ethanol-water. The first uses the parameters from the NRTL-IG databank, the second from NRTL-LIT, and the third from NRTL-RK. One thing to notice is that when changing property methods, there is a discontinuity. Notice that each heater does an adiabatic flash, however, the temperature changes because the property calculations have changed.
Keywords: data set 2
dataset
References: None |
Problem Statement: This knowledge base article illustrates how to resolve the issue of DMC3 Builder crashing when switching to the Simulation or Optimization node. | Solution: DMC3 Builder crashes when trying to open the Simulation or Optimization node, and an error message pops up (DMC3Builder has stopped working).
To overcome this issue, follow the workarounds below:
Review the Event viewer (Application, System, and security) for clues.
You may also use the filter option to show only the Critical, Warning, and Error event levels.
The user account should have sufficient permissions to perform a file copy to the ProgramData\AspenTech\APC\V14\Builder\Config directory – you will need to check that you have sufficient permissions there.
Antivirus software may be blocking the copy - in this case we need to double check that the ProgramData\AspenTech\APC folder is on an exclusion list (should also check that other AspenTech folders are on exclusion list, which is generally recommended)
Make sure that
SSCSimulationTemplate file is available in the following directory (C:\Program Files (x86) \AspenTech\APC\V14\Builder) and
SSCSimulationTemplateUser file is available in the following directory C:\ProgramData\AspenTech\APC\V14\Builder\config
If one of the files is not available, make a copy of the existing one and paste it to the other directory (keeping in mind to rename the file as shown above).
Run the DMC3 Builder as administrator.
Keywords: DMC3 Builder, Crash, Simulation, Optimization
References: None |
Problem Statement: How to provide Cut-Off length in Pipe-Rack for a line list template in Aspen OptiPlant 3D layout? | Solution: The second stage of the Aspen OptiPlant 3D layout work-process is input generation which comes after the completion of 3D equipment and structure modeling. After creating a line-list, the user can terminate a pipe running in a pipe-rack or a rack volume by specifying the cut-off length in the Line-List Template for that line.
To provide Cut-Off length in Pipe-Rack:
1. Open the Line-List template and select the line-id for which cut-off length
needs to be assigned.
2. Click Detail to open the second form of Line Details.
3. Select the option for Cut-Off Length to enable the field for adding in the
value.
4. Give the length in millimetres at which you want the pipe to terminate.
The following attributes must be considered while including the cut-off length in pipe-rack:
The value of the cut-off length should always be given in mm.
This option is enabled only for battery lines or for lines ending at a pipe rack or rack volume.
Cut-Off length does not work on lines routing between multiple pipe racks.
Keywords: Cut-off length, line-list, rack volume, routing
References: https://esupport.aspentech.com/S_Article?id=000099950
https://esupport.aspentech.com/S_Article?id=000099949 |
Problem Statement: This knowledge base article illustrates how to resolve the issue of unable to add or edit configure composite participation in DMC3 Builder, where the windows dialog is not editable as below: | Solution: Usually, this problem with the dialogs occurs when a non-standard font size is used.
So, to overcome this issue try using 100% font size on your machine, and that require a reboot to be effective.
The dialog should look like the below:
Note: If you are using VM (Virtual Machine), try rebooting the VM after changing the font size to 100%.
Keywords: DMC3 builder, composite, composite participation, Dialog, blank, add.
References: None |
Problem Statement: This knowledge base article explains how to record who made manual entries into, or made changes to, records in Aspen InfoPlus.21 (IP.21). | Solution: Many customers ask whether there is a way to record or know who made IP.21 manual entries.
Although IP.21 does not track changes within the database, it is possible to record who made manual entries or changes to records through the use of Aspen Audit and Compliance Manager (AACM). AACM needs to be installed and configured. By turning on auditing in IP.21, changes to tag values will be captured in an external relational database such as Microsoft SQL Server.
To turn on the auditing in IP.21:
1. Open Infoplus.21 Administrator.
2. Expand the database icon, right-click it, and click Properties.
3. Click Audit Trail tab.
4. Select the Activate the audit trail generation for InfoPlus.21 check box to enable audit trail generation.
5. Click OK or Apply to confirm the configuration changes
You can refer to knowledge base article titled What events are recorded by AACM? which shows more details about AACM recorded events.
https://esupport.aspentech.com/S_Article?id=000062318
Keywords: IP.21 record, manual entry, AACM, Audit Trail
References: None |
Problem Statement: On Aspen Production Execution Manager MOC, you will need to enter the audit reason for some procedures. It is possible to configure Aspen Production Execution Manager to not require an audit reason or store an audit trail? | Solution: There is a key to stop the MOC from requiring audit reasons.
Open folder C:\Program Files\AspenTech\AeBRS\cfg_source, and use Notepad to edit the file flags.m2r_cfg
Find AUDIT_REASON_MANDATORY and set it to zero:
AUDIT_REASON_MANDATORY = 0
Save the file, and then run the 64-bit codify_all.cmd as administrator. The codify_all.cmd is in the folder C:\Program Files\AspenTech\AeBRS\cfg_source.
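If you manage several such flags, the Notepad edit can also be scripted. Below is a hypothetical Python sketch (not part of the product) that sets the key idempotently; you still need to run codify_all.cmd afterwards, and writing under Program Files requires administrator rights.
from pathlib import Path

cfg = Path(r"C:\Program Files\AspenTech\AeBRS\cfg_source\flags.m2r_cfg")
key, value = "AUDIT_REASON_MANDATORY", "0"

lines = cfg.read_text().splitlines()
for i, line in enumerate(lines):
    if line.split("=")[0].strip() == key:
        lines[i] = f"{key} = {value}"   # update the existing entry
        break
else:
    lines.append(f"{key} = {value}")    # append it if not present
cfg.write_text("\n".join(lines) + "\n")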
Now the Aspen Production Execution Manager does not require an audit reason or store an audit trail.
Keywords: Aspen Production Execution Manager MOC
Audit reason
References: None |
Problem Statement: On Aspen Production Execution Manager MOC, how to remove the requirement to provide the domain name every time you are asked for your user credentials? | Solution: There is a key to remove the requirement to provide the domain name.
Open folder C:\Program Files (x86)\AspenTech\AeBRS\cfg_source, and use Notepad to edit the file config.m2r_cfg
Find the DEFAULT_DOMAIN and set it to your server domain name:
e.g. DEFAULT_DOMAIN = MES
Save the file, and then run the 32-bit codify_all.cmd as administrator. The codify_all.cmd is in the folder C:\Program Files (x86)\AspenTech\AeBRS\cfg_source.
Now the MOC does not require you to provide the domain name every time you are asked for your user credentials.
Keywords: Aspen Production Execution Manager (APEM)
MOC
User credentials
References: None |
Problem Statement: On Aspen Production Execution Manager (APEM), how to edit a Recipe Procedure Logic (RPL) that has a verification order? | Solution: If the RPL is being used in a verification order, then we need to cancel the order first. Open the Orders interface, select the verification order, and click “Cancel order”.
Then click “Library”, and select “RPL Verify”.
The RPL Verification interface will pop up. Select the RPL that we want to modify, and click Edit.
Then click Yes for Switch RPL to editing mode, and click Yes for Test order will be deleted. Are you sure you want to proceed.
As a result, the verification order will be deleted, and the RPL will be converted to editing mode. Now we can edit the RPL in the Load Designer.
Keywords: APEM
verification order
RPL Design
References: None |
Problem Statement: On Aspen Production Execution Manager MOC, you will be automatically logged out after being inactive for 300 sec. How to stop this behavior? | Solution: There are keys to prevent the auto log-out from happening in MOC, Operations, and the web applications.
Open folder C:\Program Files\AspenTech\AeBRS\cfg_source, and use Notepad to edit file path.m2r_cfg
Find INACTIVITY_PERIOD, OPERATION_INACTIVITY_PERIOD, and WEB_INACTIVITY_PERIOD (add them to the end of the file if they do not exist), and make sure the comment character # is removed from the beginning of each line. Set them from the default value 300 to the new value 0:
INACTIVITY_PERIOD = 0
OPERATION_INACTIVITY_PERIOD = 0
WEB_INACTIVITY_PERIOD = 0
Save the file, and then run the 64-bit codify_all.cmd as administrator. The codify_all.cmd is in the folder C:\Program Files\AspenTech\AeBRS\cfg_source.
The change has been made, and the APEM MOC will no longer auto log-out.
Keywords: Aspen Production Execution Manager (APEM)
MOC
Inactivity
References: None |
Problem Statement: On Aspen Production Execution Manager (APEM), is it possible to have a canceled phase go back to active, and how? | Solution: When we execute an order, if we cancel a phase, the phase will show Cancelled and cannot be executed directly from the order tracking or workstation BP.
In order to execute this phase again, we need to open the Orders, then select the Phase tab, and Reactivate the canceled phase.
After the phase is reactivated, the phase will become available for execution again.
Keywords: APEM
Canceled Phase
Reactivate
References: None |
Problem Statement: On Aspen Production Execution Manager (APEM), how to create a new Recipe Procedure Logic (RPL)? | Solution: On the Aspen Production Execution Manager MOC, click the Library, and then select RPL Design.
The Recipe Procedure Logic List interface will pop up, and then click + button to insert a new RPL.
The RPL Management page will pop up, and you can fill out the Name, Description, etc...
Click Confirm. Then add the required Basic Phase Libraries.
Finally, click Load Designer to open the PFC Editor, and you will be able to design your RPL (Unit Procedure, Operation, Phase).
Keywords: Aspen Production Execution Manager (APEM)
Basic Phase Libraries (BPLs)
Recipe Procedure Logic (RPL)
References: None |
Problem Statement: What information is contained in Arc.dat, Arc.key, and Arc.byte? | Solution: Arc.dat, Arc.key, and Arc.byte are located in “C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\arcs\arc#” (or wherever a file set is created).
Arc.dat contains the process data for points (timestamps, values, qualities) between archive start and end time.
Arc.key is a binary tree index into the arc.dat and arc.byte files. The key indexes are the record IDs and timestamps. If deleted, this file is recreated when the repository is started up.
Arc.byte contains BLOB (Binary Large Object) data, meaning data in the History Repeat Area that is greater than 256 bytes per history occurrence. This can be any data defined in the History Repeat Area for an Aspen InfoPlus.21 record.
Keywords: Arc.dat
Arc.key
Arc.byte
References: None |
Problem Statement: If not configured correctly, the Microsoft SQL database can have very aggressive memory utilization, leading to performance issues and possibly an interruption of the GDOT services. | Solution: To limit the amount of memory that Microsoft SQL Server can use, there is a server property called “Maximum server memory”. By default, this parameter is set to 2 petabytes (2,147,483,647 MB); for practical purposes this represents an unlimited amount of memory, so if it is not modified the process memory can grow unrestrictedly, causing the previously mentioned problems.
The recommendation for a GDOT / Aspen Unified server machine is to limit the memory to 2 GB (2,048 MB), this should be enough for the Aspen Tech processes. If you are using Microsoft SQL database for other tasks you could increase the memory limit as deemed necessary, as long as you have enough available memory on the machine.
To modify this limit, you need to follow these steps:
Open Microsoft SQL Server Management Studio.
Connect to the server.
Right click on the server name and select “Properties”.
4. Go to “Memory” and change the “Maximum server memory”.
5. Click OK to save the changes.
After you click OK, the change should take effect immediately; if it does not, a reboot might be required. Just make sure to follow the proper shutdown procedures for your GDOT applications if they are running.
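If you prefer to script the change instead of using Management Studio, the same setting can be applied with T-SQL through sp_configure. The Python/pyodbc sketch below is one way to do that; the driver string and server name are assumptions to adjust for your environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "Trusted_Connection=yes;", autocommit=True)
cur = conn.cursor()
# 'max server memory (MB)' is an advanced option, so expose it first
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;")
# Read the option back to verify config_value and run_value
cur.execute("EXEC sp_configure 'max server memory (MB)';")
print(cur.fetchone())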
Keywords: SQL, SQL Express, memory, consumption, utilization, GDOT, Unified
References: None |
Problem Statement: How to perform a radiation analysis for the Flare Tip in Aspen Flare System Analyzer v14. | Solution: The Radiation tab of the Flare Tip Editor allows you to determine the levels of thermal radiation emitted from flares, which serves as a key factor in facility design. This information is used to site flares and to establish flare stack heights to ensure that workers and equipment are protected.
The American Petroleum Institute Recommended Practice, Section 521 (API, 7th edition, 2020) gives the following equation (Hajek and Ludwig) for calculating the minimum distance from a flare to an object whose exposure must be limited, assuming the flare is a single point of radiation:
D = Sqrt (τ x F x Q / (4 x π x K))
Where:
D = Minimum distance from the midpoint of the flame to the object being considered (in feet)
τ = Fraction of heat intensity transmitted
F = Fraction of heat radiated
Q = Net heat release (lower heating value), in Btu per hour (kilowatts). This heat value is provided by calculations performed by Aspen Flare System Analyzer.
K = Allowable radiation, in British thermal units per hour per square foot (kilowatts per square meter)
To perform a radiation analysis for the Flare Tip:
Select the Radiation tab of the Flare Tip Editor.
You can adjust the following values. All values listed below are constants.
Parameter Default Description
Flare stack base diameter (d) 1 m Specify the diameter of the base of the actual flare in the plant.
Fraction of heat radiated (F) 0.3 Specify a value between 0 and 1. The F factor is a dimensionless number that corrects for the heat that is not passed through radiation. This value generally changes with the component and the design of the flare. The default value is obtained from API 7th Edition (2020). This value represents F in the equation above.
Flare stack height (h) 100 m Specify the flare stack height. This value represents h in the figure shown above.
Fraction of heat transmitted through the atmosphere (τ) 1 Specify a value between 0 and 1 for the fraction of heat transmitted through the atmosphere. The default value of 1, which is obtained from API 7th Edition (2020), indicates a clear day. Factors such as precipitation and humidity result in a lower value. This value represents τ in the equation above.
Wind speed (u) 9 m/s The default value represents an average wind speed.
3. You can click + to evaluate multiple analyses, if desired. You can add a maximum of ten analyses to the table.
4. For each row in the table, you must either:
In the Distance to base (r') field, specify the distance to the base of the flare tip. Aspen Flare System Analyzer calculates the Radiation Level (K) value so you can check if the value is within the limits provided by API. In these calculations, Aspen Flare System Analyzer first calculates the length from the correlation provided by API 521 and the possible tilt by the wind. Aspen Flare System Analyzer assumes the flame is moving towards the point of reference to obtain a conservative estimate.
-or-
Specify the Radiation Level (K) value. Aspen Flare System Analyzer calculates the Distance to base (r').
5. The following values are reported:
Required Protection Level: Reports the necessary protection level.
Max Exposure Time: Reports the maximum exposure time for personnel.
These values are calculated based on the following table:
Radiation Level Required Protection Level Max Exposure Time
> 9.46 Exposure not allowed 0 seconds
9.46 - 6.31 Maximum ~5 seconds
6.31 - 4.73 Moderate <30 seconds
4.73 - 1.58 Mild 2~3 minutes
<1.58 Minimum Continuously
6. Click OK.
The method calculates the direct distance to the core of the flame. At low distances, you may have passed under the center, which means that there are two possible distances for the same radiation level. For example, if you start at 15 m and get closer, the radiation level goes down since you are walking away from the center of the flame. When calculating the distance, the furthest distance is always presented to account for any changes in wind, which could lead to a change in the center of the flame. The diagram in the Flare Tip Editor shows the approximate relationship between K and r'.
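For a rough check of the reported numbers, the point-source equation quoted above can be evaluated directly. The Python sketch below uses only that relation, ignoring the flame-length and wind-tilt corrections that Aspen Flare System Analyzer applies, and the sample inputs are hypothetical.
import math

def distance_from_flame(tau, F, Q_kW, K_kW_m2):
    # D = Sqrt (tau x F x Q / (4 x pi x K)), from API 521 (Hajek and Ludwig)
    return math.sqrt(tau * F * Q_kW / (4.0 * math.pi * K_kW_m2))

def radiation_level(tau, F, Q_kW, D_m):
    # The same relation rearranged for K at a known distance D
    return tau * F * Q_kW / (4.0 * math.pi * D_m ** 2)

D = distance_from_flame(tau=1.0, F=0.3, Q_kW=50_000.0, K_kW_m2=4.73)
print("Distance to flame midpoint for K = 4.73 kW/m2:", round(D, 1), "m")
print("K at 100 m from the flame midpoint:",
      round(radiation_level(1.0, 0.3, 50_000.0, 100.0), 2), "kW/m2")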
Keywords: Radiation analysis, Flare Tip
References: None |
Problem Statement: How do I convert an existing legacy Aspen DMCplus ACO application (MDL/CCF platform) to an Aspen DMC3 application? The user can convert to a DMC3 application by either remaining on the ACO platform (MDL3 and CCF files) or converting to the RTE platform (DMC3 Builder). This article will address how to convert the controller in both ways.
For more information on ACO versus RTE terminology, please refer to this KB article:
APC Terminology: ACO versus RTE and DMCplus versus DMC3 -
https://esupport.aspentech.com/S_Article?id=000099492 | Solution: Please note that the following license is required for conversion to DMC3: SLM_RN_APC_DMC3. You may use the aspenONE SLM License Manager tool > License Profiler to verify that this license key exists. If it does not, please submit a license key upgrade request here: License Request (aspentech.com)
Method 1: MDL to MDL3 Conversion Within ACO Platform
Note: If you do not have the DMCplus Model Project .DPP file, skip to Step 1B below. Only one of either Step 1A or Step 1B are required, not both.
Step 1A - Convert Existing DMCplus Model Project .DPP File to DPP3 File
1. Open the DMCplus Model Project File with extension .DPP, click on File, and select from the menu Convert to DMC3 Project...:
2. Click Yes on the following prompt to continue:
3. Save the converted .DPP3 file with the same name as the original project or a different one.
4. If the above steps were completed, skip down to Step 2 below (no need to do Step 1B).
OR Step 1B: Create New .DPP3 File with Original Model File .MDL
In the case that a user does not have the .DPP file that was used to export the original model file .MDL, the following steps can be taken instead of Step 1A above:
1. Open DMCplus Model, go to File and select New Project.
2. When prompted to select the type of project, select Aspen DMC3:
3. Right-click on Models (left side navigation pane), select Import, and select Models:
4. Use the File Explorer directory that opens up to select the .MDL file to be imported.
Can't remember where the .MDL file is stored?
If the controller was/is loaded online, there would be a copy of the .MDL and .CCF files in this default directory on the APC Online Server (where the DMCplus controller was running online): C:\ProgramData\AspenTech\APC\Online\app
Want the latest copy of the .MDL file?
To get the latest copy of the .MDL file from a running controller, go to the APC Online Server, open the program APCManage (this can also be done from PCWS Online tab > Manage), click on the controller name, and select Save to get a snapshot of the loaded online controller.
Step 2: Export MDL3 File from .DPP3 Project
1. Within the DMCplus Model .DPP3 project, click on Models (left side navigation pane).
2. Select the controller model name in the list on the right side, right-click on the model, select Export <ModelName>:
3. Save the exported .MDL3 file in this folder to be loaded online: C:\ProgramData\AspenTech\APC\Online\app. Close DMCplus Model tool.
4. Open the existing DMCplus Build .CCF File for the controller. Go to Tools menu and select Options... from the dropdown list:
5. Under the General tab, click the Browse button for Model section on top and select the .MDL3 file that was exported in the previous steps:
6. Optional step: if the user would like to configure the Optimization Strategy using Smart Tune, which is included only for Aspen DMC3 controllers, and still remain in the ACO platform, please refer to this KB article:
How to Configure Smart Tune for DMC3 Controller in the ACO Platform (CCF) -
https://esupport.aspentech.com/S_Article?id=000100068
7. Save the .CCF file in the same /app directory and close DMCplus Build.
8. Load and start the ACO application as usual from either APCManage or PCWS | Online | Manage.
Method 2: DMCplus ACO (CCF/MDL) Conversion to DMC3 RTE Controller (DMC3 Builder)
1. Open the program DMC3 Builder, click New (left side pane), and select DMC3:
2. Enter project name (this can be arbitrary; it will not be the controller's name online) and save it in any location (there is no specified file path required for RTE controllers DMC3 Builder projects):
3. Select the Controller option from the bottom left navigation pane, then click Import from the top left tools ribbon, and select Import Application:
4. Import the CCF file. If there is an error on import indicating that the model file is missing, copy the model file to the same directory where the CCF file is located.
5. IMPORTANT NOTE - It is highly recommended to review all the Input and Output calculations in the DMC3 Builder platform to ensure calculations are imported correctly. In Aspen DMC3 Builder, there are many IO flags associated with user-defined entries used in calculations. More information on IO flags is available in the article below:
Setting IO flags for user-defined entries in Aspen DMC3 Builder -
https://esupport.aspentech.com/S_Article?id=000052404
6. The above procedure is enough to migrate a controller. However, if you would like to migrate your full DMCplus Model Project file .DPP to a DMC3 Builder project, please refer to this KB article:
Migrating from a DMCplus Project (*.DPP) to Aspen DMC3 Builder -
https://esupport.aspentech.com/S_Article?id=000058667
7. If this is the first time deploying an RTE controller online on this server, the server needs to be configured using the program called Configure Online Server. This can be opened either from the Windows Start Menu (on the APC Online Server) or from DMC3 Builder | Online view | Servers | Configure Server button on the top left tools ribbon:
In the Configure Online Server program, under Server tab, verify that the checkbox is selected for Enable Server and the Server Status shows running.
Under the IO Tab, add IO Sources for the CIMIO logical devices being used by the RTE controller (this is the equivalent of ACO platform DMCplus Build .CCF file | Tools | Options | Connect tab for adding Cim-IO Logical Devices):
Equivalent ACO settings for reference:
Maximum List Size setting here is the same as the DMCplus entry in the .CCF file under the Configure section, called LISTSZ
Frequency setting would have been set manually in the ACO platform using System Environment Variables, more info in this KB article: https://esupport.aspentech.com/S_Article?id=000074039
Timeout would have been configured the same in ACO as Frequency using Environment Variables
KB reference for more information on Configure Online Server: https://esupport.aspentech.com/S_Article?id=000093523
8. Back in DMC3 Builder | Online view (bottom left) | Servers (top left), click on the Add button to add connection to the Online server, where Server is arbitrary and Host must be the APC Online server host name (set it as localhost if it is the same machine that DMC3 Builder is open in):
9. Once the controller is ready to be deployed, select the “Deployment” section on the left side navigation tree, hit Test Connections to verify it is connected to the IO Source and reading data properly, and then click Deploy to load the application online:
10. The view should change to Online on the bottom left pane; click on the Application Name and Start from the top left tools ribbon. This Start can also be done from the PCWS | Online | Manage view.
On the PCWS web page, the controller should now be displayed with the DMC3 icon:
Keywords: DMC3, Builder, ACO, RTE, migrate, convert, upgrade, conversion, dmcplus, new, platform
References: None |
Problem Statement: Is it possible to use an electrostatic precipitator (ESP) for droplets instead of solid particles?
For example, can you model a coke oven gas (COG) processing unit in Aspen Plus? This unit is mostly used to clean COG by stripping tar and BTX from the raw gas. There is an electrostatic precipitator (ESP) unit where the COG comes in from the bottom and flows upwards in a tube bundle. In these tubes, the tar is charged electrically and pushed by wires that run through the middle of the tubes. The tar is separated and runs downwards where it is separated in an output stream and the cleaned COG leaves through the top.
Is there a way to use ESP block in Aspen Plus for separating gas and tar (viscous liquid) instead of solid? Aspen Plus will also give a warning if liquid is present. | Solution: The electrostatic precipitator (ESP) works on the principle that electric charge builds up on the outside of particles or droplets of fluid. The calculations require particle size distribution information in order to calculate the surface area and surface-to-volume ratio of the particles or droplets as this influences the amount of charge that builds up and ultimately it influences the separation efficiency which is related to the ratio of electrostatic forces (related to charge on the droplets) to momentum (related to particle mass).
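For background, the classic Deutsch–Anderson relation captures this balance between electrostatic force and gas flow for an ESP; the Python sketch below is textbook theory given for orientation only, not necessarily the internal Aspen Plus model, and the inputs are hypothetical.
import math

def esp_collection_efficiency(w_m_s, plate_area_m2, gas_flow_m3_s):
    # Deutsch-Anderson: eta = 1 - exp(-w * A / Q), where the migration
    # velocity w rises with droplet charge and falls with drag/momentum
    return 1.0 - math.exp(-w_m_s * plate_area_m2 / gas_flow_m3_s)

# Hypothetical droplet migration velocity, collecting area, and gas flow
print(f"{esp_collection_efficiency(0.1, 500.0, 10.0):.1%} collected")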
It is common in simulation to treat solids as pseudo-liquids. This is an application where one can treat a liquid as a pseudo-solid. In the attached example, a heavy organic component (C40H80) is defined as tar and declared as a solid component (even though it may be a liquid in real-world conditions). The stream structure of Aspen Plus is modified to add a geometric particle size distribution (PSD) to the Mixed substream on the Setup | Solids | Substream sheet. In this case, the particles are droplets of liquid, which would make this the droplet size distribution.
The feed stream should include the PSD of the tar in the feed gas (assuming 50 micron average size with some variation).
The attached model of an ESP is meant to be a proof of concept and not real equipment. The example file will run in V11 and higher.
Keywords: None
References: None |
Problem Statement: Aspen SQLplus query that provides detailed information on Aspen InfoPlus.21 repositories. | Solution: The attached query provides information on Aspen InfoPlus.21 repositories.
It displays detailed information on history repositories and filesets in an Aspen InfoPlus.21 system using the AtIP21HistAdmin object in the Aspen InfoPlus.21 History Administration Scripting Interface and the Aspen h21qwatch.exe program.
AtIP21HistAdmin implements COM methods that allow other applications to programmatically access history configuration information in repositories.
For details on the History Administration Scripting Interface, see the Aspen InfoPlus.21 Administration Help for details.
The h21qwatch.exe program is a utility provided with Aspen InfoPlus.21 software and can be found in one of these directories:
C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin
C:\Program Files (x86)\AspenTech\InfoPlus.21\c21\h21\bin
Example output from the query has also been attached to this solution.
(This query was previously published as Solution 136659.)
Keywords: history
repository
query
status
References: None |
Problem Statement: In BatchSep, when using an activity coefficient model such as NRTL, is it possible to use vapor-liquid equilibrium binary parameters inside the column and liquid-liquid equilibrium binary parameters in the condenser? | Solution: To use separate NRTL binary parameters for vapor-liquid and liquid-liquid equilibrium in BatchSep, prepare two property methods, NRTL and NRTL-2, assign (for example) the vapor-liquid equilibrium binary parameters to NRTL and the liquid-liquid equilibrium binary parameters to NRTL-2, and switch between them. The setup procedure is described below.
① Set up the two property methods NRTL and NRTL-2 as shown below.
② The binary parameter form for NRTL is NRTL-1; enter the vapor-liquid equilibrium binary parameters for the column there. The binary parameter form for NRTL-2 is NRTL-2; enter the liquid-liquid equilibrium binary parameters for the condenser there.
③ On the BatchSep Block Options | Component Lists sheet, add a new component list (any name will do, e.g. COND) and set its property method to NRTL-2.
④ On the Parameters form, search for the parameter CONDENSER.COMPONENTLIST and change its value from Default to the COND list created in step ③. If there is a reflux drum, change REFLUX(1).COMPONENTLIST in the same way.
Keywords: BatchSep, binary parameters, vapor-liquid equilibrium, liquid-liquid equilibrium
References: None |
Problem Statement: Does Aspen Cim-IO core on Aspen InfoPlus.21 Server connect to an older version of Aspen Cim-IO core on the Aspen Cim-IO Server, or vice versa? | Solution: All client/server versions of the Aspen Cim-IO kernel starting with version 4.8 are backward and forward compatible with one another. This means that a customer can use a newer version of the Aspen Cim-IO kernel on the Aspen Cim-IO server, but not touch the kernel version on the Aspen InfoPlus.21 server. It's also possible to upgrade the Aspen InfoPlus.21 server and not upgrade the Cim-IO Server.
Keywords: Backward/forward compatibility
CIMIO Kernel
References: None |
Problem Statement: On versions V12, V12.1 & V14 there isn’t an option to configure the column sets from within APC Web the way that you can with classic PCWS configuration tab. Is there a workaround for this or is it even possible to customize APC Web like this? | Solution: Even though there isn’t an option to directly customize the column sets from APC Web, the customization that you do from PCWS also affects the look of APC Web.
Let’s say that for example we modify the Operations view to add a user defined entry:
This is how it looks on PCWS:
And how this changes also affects APC Web:
When we modify the Column Sets, all it does is create/modify the C:\ProgramData\AspenTech\APC\Web Server\Products\APC\apc.user.display.config file, mentioned in these articles:
How to add User Defined Entries as PCWS columns https://esupport.aspentech.com/S_Article?id=000099970
PCWS Column Enumeration Example https://esupport.aspentech.com/S_Article?id=000100132
The plan is for future versions of APC Web to have the same configuration sections that PCWS has, but in the previously mentioned versions the Column Sets option has not been added yet, so you can use this workaround of doing the customization in PCWS instead.
Keywords: APC Web, PCWS, column, column sets, customization, customize
References: None |
Problem Statement: How to get rid of the following error message Cannot enter the Safety Environment. Ensure a simulation case is open and active in Aspen HYSYS Safety Environment? | Solution: In order to get rid of the following error message Cannot enter the Safety Environment. Ensure a simulation case is open and active in Aspen HYSYS Safety Environment:
The HYSYS file has to be in .hsc format instead of Template format (.tpl) in the Safety Environment.
Note that there is no direct way to convert a .tpl file back into a .hsc file, since the 'Save as...' window doesn't show the option for .hsc. However, as a workaround, you can open the template file, go to 'Save as…' and manually add the .hsc extension to the file name.
Keywords: Aspen HYSYS, Safety Environment, Error Message
References: None |
Problem Statement: In the DMC3 Builder desktop tool, in the Online tab, in Servers view, there is a button in the top tools ribbon called Configure Intermittent Tags.
If this feature is attempted to be used for an intermittent variable in an FIR DMC3 controller, it will not work and you may see the following symptoms:
After deploying the variable online, it will have a BAD value and it will not change to good despite the controller reading in new values as they become available
The NEWPVInput flag is not automatically getting triggered when a new value for the intermittent tag is read in
If the user manually forces the NEWPVInput flag to 1 (YES) on the web, then the CV will read in a good value but only for one cycle, where the combined status shows NORMAL. The CV then stays in combined status of USE PRED until it times out, and then it turns BAD again
Root Cause
This behavior is as expected for DMC3 FIR controllers because this Configure Intermittent Tags feature is only applicable for MIMO State Space controllers, not FIR DMC3 controllers.
Along with reading in the value for the tag that this feature is connected for, it will also set the timestamp of that tag's value. So when the system reads the value of an entry, it also reads the timestamp of the entry. In MIMO controllers, there is logic that considers that timestamp of the entry and based on its change, it knows if the entry value is new or not. However, in FIR DMC3 controllers, there is no such logic, it just assumes the value is new by default. That is why for FIR DMC3 controllers, there is the NewPVInput flag available for the user to trigger in a calculation that relies on checking for a value change of the tag, as there isn't inherent logic set for this in the controller already. In a MIMO controller you will notice that this NewPVInput flag is not an available entry. | Solution: Based on this information, in the use case of FIR DMC3 controllers, the user should use the traditional calculation method to set the analyzer reading instead of using this feature. The following KB articles can provide additional information on this traditional method:
Configure Intermittent Variables in DMC3 Builder: https://esupport.aspentech.com/S_Article?id=000098792
How to do Analyzer spike and freeze dead band detection using DMC3 Builder?: https://esupport.aspentech.com/S_Article?id=000062063
Keywords: Configure, intermittent, tags, dmc3, mimo, fir, newpv
References: None |
Problem Statement: Some models may need a complex initialization strategy to solve robustly. One approach commonly used is to solve the equations using a simple formulation, then activate a more rigorous formulation (for example by changing a parameter) and solve again. This means the block needs to be solved multiple times. This is something that cannot be done with a VBScript (e.g. PreSolve).
Another initialization method consists of using homotopy.
How can we use those techniques with an ACM model exported to Aspen Plus? | Solution: New in version 2006, the model developer can attach a script to the exported model. The script must be named EBSOLVE and must be written using the Object Oriented Modeling Framework (OOMF) scripting language (not VBScript).
In its simplest form, the EBSOLVE script can consist of the single statement
SOLVE
This will tell OOMF to solve the block. (This is the ExampleEB1 example). Note that you can use the EBSOLVE method with PreSolve and PostSolve VBScripts. In this case, the PreSolve script is invoked each time a SOLVE command is issued from the EBSOLVE script (before actually solving), and the PostSolve script is invoked after solving.
Let's assume that the model has a string parameter named rigorous (a YesNo parameter, for instance). The following EBSOLVE script can be used to first solve the block using the equations active when rigorous is set to No, followed by a second run with the rigorous parameter set to Yes. The test checks whether the block has been previously converged, which means that if the block is in a recycle (convergence loop), for example, this approach will be used only the first time the block is executed.
IF (&Self.SM_STATUS <> Converged) THEN
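// first pass: solve the simplified formulation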
'&self.rigorous' = No
SOLVE
'&self.rigorous' = Yes
ENDIF
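// final (rigorous) solve; the only pass once the block has previously converged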
SOLVE
This is the ExampleEB2 example in the attached file.
For a logical parameter, you can use
'&self.rigorous' = TRUE
'&self.rigorous' = FALSE
The following script shows how the homotopy can be used to ramp the variable a from 0 to the value currently set.
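// HOMO PARAM [variable, final value, starting value]: ramp a from 0 up to its currently specified value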
HOMO PARAM [&self.a, val(&self.a), 0]
SOLVER HOMO_SPARSE
SOLVE
This is the ExampleEB3 example.
You may also want to initialize an array. For example if we have declared an array x in the model (x([1:n]) as realvariable), we can use a while loop:
// initialize the array of x
set i = 1
while (&i <= &self.n) do
'&SELF.x(&i)' = &i
set i = &i + 1
endwhile
To access an array declared using the componentlist (e.g. z(componentlist) as molefraction in the port definition):
// copy the inlet port composition to the outlet port composition
for i in val(&self.Componentlist.components) do
'&self.Out_P.z(&i)' = '&self.In_F.Z(&i)'
endfor
These are in the ExampleEB4.
The OOMF scripting language is documented in the file Aspen OOMF Script Language.pdf, available from the documentation set of Aspen Plus. Visit esupport.aspentech.com to download the documentation file for your version of Aspen Plus.
Keywords: None
References: None |
Problem Statement: When you configure the ADSA with a datasource either manually or using the Aspen Database Wizard and then open any of the Server or Client tools, you are not able to see the datasource or connect. An error may also show 'No datasource available'. | Solution: If the user installing the software or changing the ADSA configuration does not have access to the registry, or UAC is set to High, then the datasource is not entered into the registry.
To verify this you will need to open REGEDIT with Administrator access and locate the following keys depending on what version of Windows you have and what version of Aspen software is installed.
Change the DirectoryHost to your ADSA datasource or ADSA directory server.
For use by 32bit applications:
[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\AspenTech\ADSA\Locator]
DirectoryHost=CHANGE_TO_ADSA_SERVER
For use by 64bit applications:
[HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\ADSA\Locator]
DirectoryHost=CHANGE_TO_ADSA_SERVER
If any of these values or keys are missing from that location, you can add them as shown below:
DisableLookup=dword:00000000
Protocol=dword:00000001 (0=DCOM, 1=Web Service)
Timeout=dword:00003a98 (15000)
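If you need to apply this to many clients, the values can also be written programmatically. Below is a minimal Python sketch, assuming 64-bit Windows and an elevated (Administrator) prompt; the server name is a placeholder you must replace with your own ADSA server:

import winreg

ADSA_SERVER = "MyAdsaServer"  # placeholder - use your ADSA datasource or directory server

# Write both the 64-bit view and the 32-bit (WOW6432Node) view of the registry
for view in (winreg.KEY_WOW64_64KEY, winreg.KEY_WOW64_32KEY):
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\AspenTech\ADSA\Locator",
                             0, winreg.KEY_WRITE | view)
    winreg.SetValueEx(key, "DirectoryHost", 0, winreg.REG_SZ, ADSA_SERVER)
    winreg.SetValueEx(key, "DisableLookup", 0, winreg.REG_DWORD, 0)
    winreg.SetValueEx(key, "Protocol", 0, winreg.REG_DWORD, 1)     # 0 = DCOM, 1 = Web Service
    winreg.SetValueEx(key, "Timeout", 0, winreg.REG_DWORD, 15000)  # 0x3a98
    winreg.CloseKey(key)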
Keywords: ADSA
Datasource
No Datasource available
References: None |
Problem Statement: Aspen Production Execution Manager (APEM) client machines are temporarily losing connection with the server. A symptom of this is that phases fail to auto-start. Another symptom is that the scripts in recipes are not activating - so background task is not happening – and operating screens are not updated. Usually this is seen in workstations logged in to MOC for long periods of time or indefinitely.
The debug logs on the Apache Tomcat server may show lengthy gaps when nothing appears to be happening and the overall APEM applications run extremely slowly. The Apache Tomcat service may even crash.
There may be no apparent pattern to this (it seems quite random) but you may note that it is primarily happening on workstations that are connecting to the network using a wireless (WiFi) adapter. | Solution: A likely cause of these symptoms is an unstable connection between client and server. Clients drop out of the network (through a WiFi connection failure, for example), or the operator turns the machine off without first closing MOC, causing the Apache server to continually try to reconnect and eventually use up all its resources.
Assuming you are not using a mobile device to connect to APEM server and you have multiple network adapters, we recommend that you disable the WiFi adapter and rely on a proven stable ethernet connection.
Additionally, make sure you are up to date on patch level. Several improvements in the handling of connection issues have been made, so the recommended minimum patch level to install on the APEM server is listed at the bottom of this article.
When mobile devices are required, changes to the behavior of APEM server can be made that can further improve matters. First confirm the above minimum patch level installed on the APEM server. Then open Notepad with elevated privileges (Run as Administrator) and open flags.m2r_cfg config file according to the following installed APEM version (assuming default installation folder):
APEM V11* or later: C:\Program Files\AspenTech\AeBRS\cfg_source\flags.m2r_cfg
APEM V11* or earlier: C:\Program Files (x86)\AspenTech\AeBRS\cfg_source\flags.m2r_cfg
*For V11, modify files found in both Program Files AND Program Files (x86)
Add the following configuration key to the end of the file:
NOTIFY_UNSUBSCRIBE_FAILED_CLIENT = 1
This option ensures that a failed MOC client will not continue to be contacted by the APEM Server (note: set the key value to 0 to disable this feature). Not removing a failed client can cause performance issues, since trying to contact a failed client takes time and causes delays.
With NOTIFY_UNSUBSCRIBE_FAILED_CLIENT enabled, the socket for the client notification listener is refreshed by default every 5 minutes (300 seconds). The default interval can be overridden by a complementary configuration key CLIENT_NOTIFICATION_LISTENER_REFRESH_PERIOD which takes a value measured in seconds.
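For example, with the refresh interval shortened to two minutes (the 120-second value below is purely illustrative), the end of flags.m2r_cfg would contain:

NOTIFY_UNSUBSCRIBE_FAILED_CLIENT = 1
CLIENT_NOTIFICATION_LISTENER_REFRESH_PERIOD = 120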
Save the modified file(s), run codify_all (found in each folder containing the modified files), and then restart the Apache Tomcat service in services.msc.
Recommended minimum patch level for each version
V10.0.3 with Aspen_Production_Execution_Manager_V10.0.3_00584079 (March 2021) or later
V11.0.1 with Aspen_Production_Execution_Manager_V11.0_00618423 (May 2021) or later
V12.2, V14.0 or higher
Keywords: unacceptably slow performance due to unhealthy Apache Tomcat
java.net.SocketTimeoutException: connect timed out
java.net.SocketException: Connection reset
Error Code: 0x80004005 - [AspenTech][APRM] Network Failure or incompatible APRM server
Defect 573297 - AutoStart issue
References: None |
Problem Statement: New PipesimLink Extension | Solution: In HYSYS V14, the legacy PIPESIM Link was replaced with a new PipesimLink Extension. Like the previous PIPESIM Link, this operation allows you to use the PIPESIM software package within an Aspen HYSYS framework. PIPESIM 2017 and later versions are supported, allowing you to take advantage of newer functionalities within PIPESIM.
You can use the new PipesimLink Extension in HYSYS V11, V12, and V12.1 by installing the PipesimLinkRegistration.exe file attached to this article. Before installing the new PipesimLink Extension, HYSYS must be installed on the machine. Also, the folder C:\Program Files\AspenTech should be present.
If HYSYS V14 is not installed on the machine, then the PipesimLink extension will be installed under the C:\Program Files\Common Files\AspenTech Shared\PipesimWebApiLink\PipeSimLink folder. If HYSYS V14 is installed on the machine, then the PipesimLink extension will be installed under the C:\Program Files\AspenTech\Aspen Hysys V14.0\extensions\PipesimWebApiLink\PipeSimLink folder.
When the installation is finished, a message will indicate whether the installation was successful or not. If the installation was unsuccessful, then install 7-zip and run the PipesimLinkRegistration.exe installation again. If the installation was successful, then you will see PipesimLink Extension listed as an Extension in HYSYS. You can see the list of extensions by selecting Customize | Register Extensions in HYSYS. The icon for the PipesimLink extension in the Model Palette will not be affected by this installation.
In the future, the PipesimLink extension may be updated for repairs and improvements. Return to this article to find the most recent version of the extension.
Post-Release Updates
Please see attached Patch Notes.
Last updated: June 30, 2023
Keywords: Flow Simulator, Pipeline, Pipe Model, Hydraulics, Schlumberger
References: None |
Problem Statement: When using ABE end-user applications the software communicates with a database on a server, this can either be a local or enterprise server. This server connection can fail for multiple reasons. In this article, you will find a recompilation of troubleshooting suggestions for the server connection. | Solution: Local Server
Make sure that ABE Local Server has been installed. See the following KB article: https://esupport.aspentech.com/S_Article?id=000099840
Add the broker account to the ZyqadAdministrators group in Computer Management | Local Users and Groups | Groups | ZyqadAdministrators | Properties so that it has complete control of the workspace.
Enterprise Server
Test if the connection to the server is possible through a client application (i.e. ABE Explorer) using the following format: http://<machine_name>:82. Google Chrome is the recommended browser.
Test if the connection to the server is possible through Aspen Plus/HYSYS in the Select Server Connection menu using the following format: http://<machine_name>:82.
Check if the AZ380Broker is running in the user computer Services app.
Check if RabbitMQ and AspenTechSystemAgent are running in the computer Services app on the server machine.
Set up the inbound rule to allow TCP/IP ports. See the following KB article: https://esupport.aspentech.com/S_Article?id=000098152
Restart the RabbitMQ service and check if there are login issues. The account used here should have a permanent password and should be the same account on the AspentechServiceSystemAgent in the Services | Properties | Log On. See the following KB article: https://esupport.aspentech.com/S_Article?id=000073387
Check if the aspenONEWebsocket.exe file is in C:\Program Files\AspenTech\aspenONE V12.0\aspenWebsocket
Add the new server URL in the Security | Sites window in the computer's Internet Properties. This will skip the step to log in again after selecting the workspace for the Datasheet button.
Always ensure that the latest Cumulative Patch is installed.
Keywords: Server, workspaces, connection, greyed-out, error
References: None |
Problem Statement: This Knowledge Base article illustrates how to resolve the “Server error in ‘/SQLplus’ application” error when trying to generate a report for a PID loop in PCWS | Solution: Many customers have reported a “Server error in ‘/SQLplus’ application” error when trying to generate reports for PID loops configured in Aspen Watch Maker for performance analysis.
The following KB article explains this behavior:
Why are Aspen Watch reports successful only from Aspen Watch computer?
https://esupport.aspentech.com/S_Article?id=000068443
Also, sometimes to resolve this issue and get rid of the error, you will need to clear the browsing data. To clear your browsing data in Microsoft Edge:
1. Log off from PCWS web page.
2. Clear browser cache.
3. Log in and try again.
Select Settings and more > Settings > Privacy, search, and services. Under Clear browsing data > Clear browsing data now
Keywords: PID report, Aspen Watch, Analysis, SQLplus application error, server error
References: None |
Problem Statement: What can be done to ensure that the folders show up in the tree view pane? | Solution: Assuming that the folder structure in the Aspen InfoPlus.21 database is undamaged, a registry entry will need to be set on any machine which has an Aspen Tag Browser. This is true beginning with version 10 of the Aspen Manufacturing products.
Starting with version 14 of the Tag Browser, click the View | Options menu to access a window where you can set the Maximum tags returned limit:
For earlier versions of the Tag Browser, a new string value (called 'MaxRecords') must be created and updated directly using a registry editing tool (like regedit.exe). The key can be found in this registry branch (on a pre-V14 client machine):
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AspenTech\Aspen Tag Browser
For safety's sake, please back up the registry before making such changes. Navigate to the location above using Registry Editor, then right-click and create a new string value:
Name the value 'MaxRecords'. Once it is created, right-click it, select Modify, and set it to a value:
The value to use for MaxRecords should be the maximum number of tags to retrieve from InfoPlus.21, but the greater this number, the slower the display. In this example, 10000 is used:
The Tag Browser will now use the MaxRecords entry and the folders will be displayed:
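If you need to roll this out to many pre-V14 clients, the value can also be created with a small script. A minimal Python sketch (run elevated; the 10000 limit is just the example used above):

import winreg

key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE,
                         r"SOFTWARE\Wow6432Node\AspenTech\Aspen Tag Browser",
                         0, winreg.KEY_WRITE)
winreg.SetValueEx(key, "MaxRecords", 0, winreg.REG_SZ, "10000")  # string value, per the steps above
winreg.CloseKey(key)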
Note that for version 14, the value is stored elsewhere in the registry (HKEY_CURRENT_USER\Software\AspenTech\Aspen Tag Browser\Options). This means that after a future upgrade (to version 14+), the Aspen Tag Browser will appear to return to a 1000-record limit until the user updates the View | Options window themselves.
Keywords: filter
range
References: None |
Problem Statement: Exporting a Component List or Fluid Package from Aspen Plus may be required at times, either to utilize them in Aspen HYSYS or as a foundation for another Aspen Plus model. In this article, we will discuss the steps involved in exporting such information. | Solution: The following steps outline the process for exporting an Aspen Plus component list:
1. Open Aspen Plus and navigate to the File menu
2. From the navigation pane select Export and then File
3. In the File Export dialog box, choose Aspen Properties Backup Files (.aprbkp) as the file type and provide a name to save it
To import the *.aprbkp file into Aspen HYSYS, navigate to the Fluid Package page and select the Import button.
Keywords: Export, Import, Fluid Package, Component List, Property Package
References: None |
Problem Statement: The Wilke-Chang correlation in Aspen Plus help indicates a 2.26 association factor for water, while Poling's Properties of Gases and Liquids suggests a value of 2.6. | Solution: According to the Hayduk and Laudie paper from AIChE Journal (May 1974, p. 611), the Wilke-Chang correlation with a water association factor of 2.6 resulted in an average error of 6.9%. However, reducing the factor to 2.26 decreased the average error to 0.4%, as stated on the first page of the attached paper.
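For reference, the Wilke-Chang correlation is commonly written as follows (this form follows Poling's The Properties of Gases and Liquids; the exact notation in the Aspen Plus help may differ):

D_{AB} = \frac{7.4 \times 10^{-8} \, (\phi M_B)^{1/2} \, T}{\eta_B \, V_A^{0.6}}

where D_AB is the infinite-dilution diffusion coefficient of solute A in solvent B (cm2/s), phi is the solvent association factor discussed here, M_B is the solvent molecular weight (g/mol), T is the temperature (K), eta_B is the solvent viscosity (cP), and V_A is the molar volume of the solute at its normal boiling point (cm3/mol).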
Keywords: Association factor, Wilke-Chang correlation, Average error
References: None |
Problem Statement: Microsoft recently released a Windows 10 patch, KB5019959, which is impacting AORA communication with the SQL model server.
Microsoft Windows / Security Updates introduced some incompatibility on the AORA Client PCs.
AORA users get the errors SQL Return: SQL_Invalid_Handle Error Code 00000 and SQL Return: SQL_Error Error Code 00000 after installing the offending patch.
Examples of the first 2 SQL Query Errors leading up to the Load Failure of the Model DB:
Aspen Tech Response to Microsoft KB5019959:
The following Critical Defect Record was recently submitted to request that our AspenTech PM, R&D, and QE teams start discussing and looking into what action will be required, either by AspenTech or by Microsoft, in order to address this issue being caused by Microsoft's latest KB5019959 and KB5020613 updates.
VSTS Defect 821898: CRITICAL -- Windows Updates KB5019959 and KB5020613 are preventing Several Key AORA Customers from Accessing their AORA Model Databases.
VSTS Defect 834947: REQUEST FOR OPEN DIALOG WITH MICROSOFT
Workaround:
1. Uninstall the offending MS patch: KB5019959
[or]
2. Reconfigure the ODBC connection with an ODBC driver other than the Microsoft ODBC SQL Server Driver (sqlsrv32.dll).
For example, AORA still works perfectly fine on a Windows 10 22H2 box with the Microsoft updates installed and all AORA patches installed, as long as you have reconfigured your ODBC connections to use the SQL Server Native Client driver, as shown by the ODBC driver underlined in the screenshot below.
If users do not have the option to switch to the SQL Server Native Client driver, installing the client driver should be simple and free for Microsoft customers currently using SQL Server.
Refer to the links below to install the SQL Server Native Client driver:
1. Microsoft ODBC Driver 11 Download for SQL Server - Windows
https://www.microsoft.com/en-us/download/details.aspx?id=36434
2. Installing - SQL Server Native Client - Microsoft Learn
https://learn.microsoft.com/en-us/sql/relational-databases/native-client/applications/installing-sql-server-native-client?view=sql-server-ver16
Customers can use the Microsoft Update feature to access cumulative security patches that resolve errors related to an ODBC component (sqlsrv32.dll) which prevented some customers from accessing AORA/ATOMS applications on their desktop.
Users and administrators can review the 'date modified' attribute of sqlsrv32.dll in C:\Windows\System32 to quickly review whether the patch has been applied.
If the .dll was modified during the period of Nov 10, 2022-Jan 10, 2023 it may require additional support.
If the 'date modified' is more recent, the patch has likely been applied.
This is not a comprehensive solution; please reach out to AspenTech Customer Support if you have additional questions.
The KB articles listed in the table below will provide specific notes related to this issue from Microsoft for each of our supported platforms.
V11.x | V12.x | V14 | OS | KB | Results
x | x | x | Windows 10 Release (version 2015) | 5022297 | N/A, see below [1]
x | x | x | Windows 10 (version 1607) | 5022289 | Fixed
x | x | x | Windows 11 | 5022287 | Fixed
x | | | Windows 2008 R2 SP1 | 5022339 | N/A, see below [2]
x | x | | Windows 2012 R2 | 5022346 | Fixed
x | x | x | Windows 2016 | 5022289 | Fixed
| x | x | Windows 2019 | 5022286 | Fixed
| | x | Windows 2022 | 5022291 | Fixed
1. AORA cannot be installed on the original Windows 10 release due to the lack of .NET 4.7.1. A newer version of Windows 10 was tested.
2. Windows 2008 R2 SP1 is skipped due to end of support from Microsoft, unless the customer purchases the Extended Security Update (ESU).
Keywords: SQL Invalid Handle
Error Code 00000
SQL Return SQL Error
References:
1. https://community.progress.com/s/article/1188
2. https://bytes.com/topic/sql-server/answers/81454-unknown-token-received |
Problem Statement: If the correct keys are not set on the license file, the GDOT Online application, Web Viewer, Model Update or Data Reconciliation will not run. Where can we find the license log to review what is happening? | Solution: Here is an example of a GDOT Online application that will not connect:
By reviewing the log that is created alongside the configuration and model files, we see the following errors:
2/14/2023 3:36:22 PM No license is available. Application is terminating.
2/14/2023 3:36:22 PM See license log for more information.
2/14/2023 3:36:22 PM Aborting initialization
2/14/2023 3:36:22 PM Terminating application
Where can we find this “license log” to troubleshoot the issue? It can be found under C:\ProgramData\AspenTech\GDOT Online\VXX\logs with the name DynOptLicensing.log (where VXX is the version you are currently running).
Here is an example of a situation where we are trying to connect to an application on the GDOT Online Console and it cannot find the necessary key:
Tue Feb 14 13:46:34 2023: [???] SLMWrap() object created. SLM client loaded.
Tue Feb 14 13:46:34 2023: [MD_OPT] SLM_RN_GDOT_ONLAPP license checkout (deferred) initiated successfully; waiting for license validation.
Tue Feb 14 13:46:34 2023: [MD_OPT] SLM_RN_GDOT_ONLAPP license validation completed. License acquisition pending.
Tue Feb 14 13:46:34 2023: [MD_OPT] SLM_RN_GDOT_ONLAPP license validation completed. License not acquired.
Tue Feb 14 13:46:34 2023: [MD_OPT] SLM_RN_GDOT_ONLAPP license check in not required; license state is 'no license'.
And here is an example on how it looks when it does find it:
Thu Feb 16 15:53:08 2023: [???] SLMWrap() object created. SLM client loaded.
Thu Feb 16 15:53:08 2023: [MD_OPT] SLM_RN_GDOT_ONLAPP license checkout (deferred) initiated successfully; waiting for license validation.
Thu Feb 16 15:53:08 2023: [MD_OPT] SLM_RN_GDOT_ONLAPP license validation completed. License acquisition pending.
Thu Feb 16 15:53:08 2023: [MD_OPT] SLM_RN_GDOT_ONLAPP license validation completed. License acquired.
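To quickly check the most recent checkout outcome without reading the whole file, a short Python sketch such as the one below can filter the relevant lines (the V14 folder name is an assumption; substitute your own VXX):

from pathlib import Path

# Default log location; adjust the version folder to match your installation
log = Path(r"C:\ProgramData\AspenTech\GDOT Online\V14\logs\DynOptLicensing.log")
for line in log.read_text().splitlines():
    if "license validation completed" in line or "No license" in line:
        print(line)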
Here is a list of the keys that should be present on a license used to run GDOT:
If GDOT is not able to find the necessary key, and you have already verified that there is a connection to the license server and that the Sentinel RMS License Manager service is running, open the aspenONE SLM License Manager and check that:
The previously shown keys are present.
The expiration date hasn’t been reached yet.
Keywords: GDOT, SLM, license, licensing
References: None |
Problem Statement: How do I configure PIMS/APS in order to pass planning targets (KPI) from Aspen PIMS to APS? | Solution: In order to pass planning targets you must follow the next steps:
APS integration configuration services
1. Open the program as administrator:
2. It will ask for the name of the machine and some ports. If the ports selected are not open you will need to open them using the Windows Firewall:
3. Ports 9999, 9998 and 9995 are the default ports.
4. Select the model where you want to send the data. If it is a dsn file, it will need the user and password to access the server:
5. Test the service connection and click finish:
Aspen PIMS
1. Open your model in PIMS and solve the case for the planning targets that you want to export:
2. Go to Integration/Configuration
3. Enter the name of the machine and the port corresponding to APS End Point discovery (9995 is the default) that was set up in the APS integration configuration services step.
4. Click the test button and the next dialog box should prompt:
5. Close and reopen PIMS; this step is required to send the targets successfully.
6. Go to Integration/Production Targets
7. Select the targets that you want to pass to APS. To do so, select the case and then check the box for the product; this sends the information to the spider graph. Then click the red arrow to pass them to the targets dialog.
Make sure that the Start date and End date match the horizon that you are analyzing in APS
8. Click the send button:
9. The next dialog box should appear:
10. Check in the database that the following tables received new entries:
ORION_MGR_PIMS_KPI_TAG
ORION_MGR_PIMS_KPI_DATA
APS
1. Load the model in APS.
2. Go to Integration/PIMS to APS/Map Planning Targets
3. The next dialog box should appear:
4. Map the required parameters and click OK:
5. Go to Planning targets pane and add a new screen, edit the KPI trends of the screen and add the desired trends:
6. The trends and information should appear on the screen:
7. Simulate all to see the scheduled plots:
Keywords: Planning targets, KPI, configure, APS integration configuration tool
References: None |
Problem Statement: I am getting the error HTTP Error 500.19 - Internal Server Error in Aspen Unified, how can I solve it? | Solution: Go to IIS and double-click on your application server. In this example, the name is PSCV121. Then, double-click on Feature Delegation.
Right-click on Authentication- Anonymous and Authentication-Windows and select Read/Write.
Keywords: HTTP, 500.19, Internal Server Error, AU, AUP, GDOT.
References: None |
Problem Statement: How can I create and read a Trancol Report in PIMS? | Solution: How is it useful?
This report is also known as the substitution report. The term substitution report results from the fact that the coefficients in the body of the table for a limiting (non-basis) variable indicate what will happen to the non-limiting (basis) variables that are rows of the table.
How to create a Trancol Report?
To create a Trancol Report, make sure to have one of the Trancol report options selected in the Reporting section of the Model Settings
Then, go to the REPORT table in your Model Tree. The REPORT table in PIMS accepts rows starting with the letter T that are dedicated to creating the Trancol Report. The second character is either an R for Rows or a C for Columns, and these represent the Rows and Columns of the Trancol Report, not of the matrix. Characters 3 through 6 will be MASK, and the last character will be a number between 1 and 9. You will then add rows:
TRMASKn
TCMASKn
Where n is a number between 1 and 9. Each row must have a unique name.
The Trancol Report will show as columns all the limiting TCMASKn entries, together with what would happen to the non-limiting TRMASKn entries. Add the matrix columns or rows in the TEXT column of the REPORT table to define what you want to see in the Trancol Report. Note that you can use * as a wildcard; you should add an extra * if you are using a periodic model. Each mask can contain 2 entries with wildcarded variable names, or as many as you want if you do not use *s. Separate the tags with a space. For example:
In this example, PIMS will print the capacities of VT2, SF4 and DHT, the minimum and maximum specification blends, and the purchases and sales as columns only if they are hitting a limit, and the purchases, sales and the capacities of AT1 and VT2 as rows only if they are not hitting a constraint.
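As a purely hypothetical illustration of the layout (these matrix tags are placeholders, not the ones from the example above), the Trancol rows of a REPORT table might look like:

* TABLE REPORT
*          TEXT
TCMASK1    CCAP***
TCMASK2    PURC*** SELL***
TRMASK1    PURC*** SELL***
TRMASK2    CCAPAT1 CCAPVT2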
How to read it?
Attached is a Trancol Report generated from the Volume Sample using the previous REPORT table.
In this example, we can see that the purchases of NSF are hitting a constraint, and if we buy 1 more unit of NSF the OBJFN will increase by 1.7389. This DJ value is the same one that you get in the Full Solution Report.
Then, the Trancol Report creates what-if scenarios for the non-limiting matrix variables and rows. If the purchases of NSF are relaxed by 1 unit, then the purchases of AHV will increase by 0.0413 and the purchases of NC4 will decrease by 0.3844, whereas ANS and TJL will remain the same.
The sensitivity (in blue) is defined as the derivative of a dependent (calculated) variable with respect to an independent (fixed) variable based on a given PIMS solution, so keep in mind that these changes will create a daisy-chain reaction in the rest of the matrix; the Trancol Report gives you fast insight into how your next case will behave.
In the attachment you will see that the capacity of VT2 is not reported as a row; that is because it is hitting a constraint, which is the reason you see it as a column.
Bonus
To save storage space by not writing to the DB and to access the Marginal Analysis Viewer, go to Reporting> Outputs and select Create XLPTC File.
Run the model and go to Run>AO Sensitivity Analysis and select the case or base model that you want to analyze.
The Margin Analysis Viewer extends the capabilities of a simple TRANCOL report by allowing you to create a what-if view of the sensitivities. You can do so by choosing one or more fixed variables and treating them as if they were not constrained by the LP basis, pairing them with a corresponding dependent variable one at a time. To calculate the sensitivities for these newly free (independent) variables, a matching number of currently free variables must be fixed (dependent). The variables chosen should have non-zero sensitivities in the solution case to avoid singularities in the what-if case.
Keywords: PIMS, AO, DR, Trancol, Report, read, create
References: None |
Problem Statement: This article explains how to install Cim-IO server . | Solution: If you are installing for AspenONE Advanced Process Control, Cim-IO server will be automatically installed when you install Aspen APC Online or Aspen APC Performance Monitor.
If you are installing for AspenONE Manufacturing Execution System, you can select Aspen Cim-IO Interfaces in the AspenONE installer; the Cim-IO server will be installed with your selections of 'OPC interfaces' and 'All Other Interfaces'.
Keywords: Cim-IO server
installation
References: None |
Problem Statement: With the recent hardening changes that Microsoft implemented on DCOM technology, it is unclear how this affects APC controllers and what patches need to be applied to address this. | Solution: The AspenTech patches that address the DCOM hardening changes can be found on the following Knowledge Base article:
https://esupport.aspentech.com/S_Article?id=000099493
On the APC environment, there are 2 main products that are affected by the Windows DCOM hardening changes, one is Configure Online Server and the other one is Cim-IO.
Configure Online Server is affected when you configure an IO source for DMC3 where it connects directly to the OPC (which is not very common, but some users do have this configuration).
To address this, you need to apply the patches listed in the KB article or a later patch. Since all APC patches are cumulative, the EP6 patches, for example, include all the fixes of EP5, EP4, etc.
(Notice that the patches for V10, V11 and V12.1 are EPx for CPx, so in order to install these patches you need to install the corresponding cumulative patch first. These patches include other fixes besides the DCOM hardening, so even if they are not needed for DCOM it is still advisable to install them. For APC V14, the DCOM hardening changes are addressed from launch, so there is no need to apply any extra patches on that version.)
Now, the other product that is affected more commonly is Cim-IO. The two most common configurations to connect the APC Online server to the OPC are either having the Cim-IO server on the same machine as the APC Online server, or the second option is having the Cim-IO Server on the same machine as the OPC Server, the first one uses DCOM and the second one doesn’t, please refer to the following diagram for a more detailed explanation on when the patches are necessary:
If your setup is the Case 1 (or in general if the Cim-IO server is on a different machine than the OPC Server), then you need to apply the Aspen Cim-IO Core patch listed on the previous KB article or a later patch, the cumulative structure of the patches mentioned for DMC3 patches also applies for these.
(Notice that the V11 patch is V11.0.1_ECR_007, this means that you need to apply the Aspen MES V11.0.1 Cumulative Patch first; the V12 patch since it is an ECR for V12.0.0 this can be installed directly, for V12.2 the Aspen InfoPlus.21 Product Family V12.2.1 Cumulative Patch 1 addresses this, please review which version you have installed by opening the Cim-IO Interface Manager. When you apply these patches on the APC Online server machine you need to stop the ACO Utility Server as well as the AspenTech Production Control RTE Service, on top of the other ones mentioned on the patch release notes.)
For Cim-IO V14, the DCOM hardening changes are addressed from launch, so there is no need to apply any extra patches on that version.
Important Note: Below are only rare configurations where the following products are also affected for APC, a standard APC installation will not have these configurations by default, only if custom changes were made by the user.
Additional products affected: Of the list of affected products, there are another two that are installed with the APC Manufacturing Suite media, these are the Aspen Data Source Administrator and Aspen Calc. On a common APC setup these products are not affected by DCOM hardening changes, a patch application would be necessary on the following situations:
1. If the ADSA protocol is set to DCOM instead of Web Service (the default), this is configured from the ADSA Client Config Tool.
2. If Aspen Calc is connecting to a remote InfoPlus.21 server, this is not common for APC setups, usually the InfoPlus.21 / Aspen Watch / Aspen Calc components are all installed on the same machine.
On these specific situations the patches for these two products should be applied.
Keywords: APC, DMCplus, DMC3, Cim-IO, DCOM, Cim-IO for OPC, hardening
References: None |
Problem Statement: You are trying to train a Maestro agent or view its trend in Agent Builder. You have tested the connection in System Manager and see that it’s successful, but you are getting errors when working in Agent Builder.
Under Log History in System Health, you see the following message.
StatusCode: 404, ReasonPhrase: ‘NOT FOUND’
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
This issue occurs when you have entered one of the Maestro URLs in the wrong field. For example, you entered the Maestro Data Repository URL in the Maestro Model Builder URL field. Depending on which URL is in the wrong place, you may see any of the following errors.
When training Maestro agents:
Failed to get Maestro Model Info: NotFound – NOT FOUND
Could not begin Maestro Model Builder run
When using View Trend or View Probability Trend:
Maestro Connection failed at ‘URL’; Verify that the Maestro Connection is configured and alive
Failed to connect to Maestro scaling-factors API. | Solution: Open Aspen Mtell System Manager, go to the Configuration tab -> Settings -> Maestro
Click the Test button next to each of the URLs. You want to see success messages that match the following screenshot.
The Maestro Calculation Engine URL message should reference “online,” the Maestro Model Builder URL message should reference “builder,” and the Maestro Data Repository URL should reference “repo.”
You will need to correct the URL which does not show the correct message.
If you are not sure what the correct URL should be, the easiest way to identify it is with the maestro_driver.exe tool that comes with Maestro.
On the Maestro server, navigate to maestro_driver.exe. It will most likely be in the following path: C:\ProgramData\AspenTech\Aspen Mtell Maestro
Right click maestro_driver.exe and run it as an administrator
Choose option 3, Automated deployment of containers
When the task is finished, maestro_driver.exe will display the correct URLs for you. The first one is for the Calculation Engine, the second one is for the Model Builder, and the third one is for the Data Repository.
Return to System Manager -> Configuration -> Settings -> Maestro and correct the URL which was wrong. Be sure to save your changes.
Keywords: Maestro
Error training agent
Error viewing trend
References: None |
Problem Statement: The error Could not load file or assembly Proficy.Historian.ClientAccess.API,... or one of its dependencies may appear while configuring the GE Proficy historian adapter or under Agent Services logs. | Solution: For Proficy versions 3.x and 4.x, the GE Proficy adapter needs the GE Proficy Client Tools to be installed on the machine where the adapter is located. For Proficy 5.x and later, the client tools may be installed anywhere, but you must copy the Proficy.Historian.ClientAccess.API.dll assembly to the following folders under the product installation folder (by default C:\Program Files\AspenTech\Aspen Mtell):
Mtell Agent Service\
Suite\Tools\Agent Builder\
Suite\Tools\System Manager\
Training Service\
Then go to Services > Restart Aspen Mtell Agent Service and Aspen Mtell Training Service
Then close and reopen System Manager and try to configure the GE Proficy adapter again
If the error still occurs, then the .dll file may need to be copied to the following locations as well:
Suite\API Service\
Suite\MDM\
Suite\Reporting Service\
Suite\Retry Service\
Suite\Tools\Mimosa Explorer\
Suite\Watch Dog Service\
As well as in these folders in the IIS installation:
C:\inetpub\wwwroot\AspenTech\AspenMtell\APM\bin\
C:\inetpub\wwwroot\AspenTech\AspenMtell\InteropServer\MIMOSA\bin\
C:\inetpub\wwwroot\AspenTech\AspenMtell\MtellView\bin\
Then go to Services > Restart Aspen Mtell Agent Service and Aspen Mtell Training Service
Then close and reopen System Manager and try to configure the GE Proficy adapter again
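If you would rather not copy the assembly by hand, a Python sketch like the one below can automate it. It assumes default installation paths and a staging copy of the DLL in C:\Temp (both are assumptions to adjust for your environment):

import shutil
from pathlib import Path

dll = Path(r"C:\Temp\Proficy.Historian.ClientAccess.API.dll")  # staging copy of the assembly (hypothetical location)
mtell = Path(r"C:\Program Files\AspenTech\Aspen Mtell")
web = Path(r"C:\inetpub\wwwroot\AspenTech\AspenMtell")

# All destination folders listed in the steps above
targets = [
    mtell / "Mtell Agent Service",
    mtell / r"Suite\Tools\Agent Builder",
    mtell / r"Suite\Tools\System Manager",
    mtell / "Training Service",
    mtell / r"Suite\API Service",
    mtell / r"Suite\MDM",
    mtell / r"Suite\Reporting Service",
    mtell / r"Suite\Retry Service",
    mtell / r"Suite\Tools\Mimosa Explorer",
    mtell / r"Suite\Watch Dog Service",
    web / r"APM\bin",
    web / r"InteropServer\MIMOSA\bin",
    web / r"MtellView\bin",
]

for folder in targets:
    if folder.is_dir():
        shutil.copy2(dll, folder)  # refresh the assembly in place
        print("Copied to", folder)
    else:
        print("Skipped (folder not found):", folder)

Remember to restart the Aspen Mtell services afterwards, as described above.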
Keywords: Proficy.Historian.ClientAccess.API
missing dependency
missing assembly
GE Proficy
References: None |
Problem Statement: How to setup site filtering in Aspen Mtell?
It is often required during the setup of a multi-site installation to separate users' access across the different sites. This can be handled with the site filtering options.
Before filtering:
After filtering: | Solution: Open Aspen Mtell system manager
Select the Configuration tab
On the left menu click on the Security Setting option
Check the Security Enabled checkbox
Check the Site Filter Enabled checkbox
Save with the button in the ribbon
Navigate to the Groups submenu within the Security settings options
Select the group we want to make changes to
In the menu in the bottom right, you can select among the different sites configured in the Asset hierarchy. Selecting sites allows the group's users to visualize only the assets within those specific sites; the others will not be visible
Save with the button in the ribbon
Restart System Manager
Keywords: Access
User based security
Restrict
Hide
References: None |
Problem Statement: While using Mtell Alert Manager, the page may not sync or update properly, the probability trend may not be viewable in alerts listed from anomaly and Maestro failure agents, and other general issues may occur. | Solution: This article details how to restart the MAM services to try to resolve general issues with services either being stopped or needing a manual reset.
MAM relies on three services to function correctly:
APMDataCollector
APMDataTransformer
RabbitMQ
Follow these steps (in order!) to restart MAM services:
Right click RabbitMQ > Restart
Right click APMDataCollector > Restart
Right click APMDataTransformer > Restart
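If you restart these services often, the sequence can be scripted. A minimal Python sketch, run from an elevated prompt (the service names are assumed to match the names shown in services.msc):

import subprocess

# Order matters: RabbitMQ first, then the collector, then the transformer
for service in ("RabbitMQ", "APMDataCollector", "APMDataTransformer"):
    subprocess.run(["net", "stop", service], check=False)   # tolerate "service not started"
    subprocess.run(["net", "start", service], check=True)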
Keywords: Mtell Alert Manager (MAM)
Services stopped
Probability trend
alerts
sync issues
References: None |
Problem Statement: What are the requirements to import a 3D DXF file into Aspen OptiPlant 3D Layout? | Solution: OptiPlant allows you to create 3D Plants through modeling of Structures and Equipment. The product has a library of pre-defined equipment and structures which you can select to model them. Another option for modeling equipment and structures is by 2D/3D DXF. You can import one just as a reference to validate the modeled layout, or start modeling the layout using the drawing in the background as a reference.
The OptiPlant Configurator provides extended interface capability in the form of the Data Exchange Format (DXF), which can be read as input by major third-party software. This enables the objects created in OptiPlant to be used in other CAD packages like AutoCAD, MicroStation, Navisworks, etc. The output in DXF format is an intelligent output which carries Line IDs, Equipment IDs and colors as well.
The major requirements to import 3D DXF file into Aspen OptiPlant includes:
The 3D DXF file must have the same co-ordinate system as that of the Aspen OptiPlant plot plan.
The units of the 3D DXF file must be the same as that of the plot plan units in Aspen OptiPlant.
Keywords: DXF, 3D model, Units, Co-ordinates
References: https://esupport.aspentech.com/S_Article?id=000099066 |
Problem Statement: This video outlines how to perform a clean restart of an Aspen Cim-IO Interface with Store and Forward enabled. | Solution:
The following are general guidelines for stopping and starting an Aspen Cim-IO Interface with Store and Forward enabled. The exact procedures may vary depending on the version of the Cim-IO, the interface type and configuration. Here we assume all the tags are being updated by Cim-IO client tasks running on the Aspen InfoPlus.21 server and are connected to a single Cim-IO server hosting a single Cim-IO Interface. You can extend this for your own particular requirements (multiple logical devices and multiple Cim-IO servers etc).
Keywords: None
References: None |
Problem Statement: Detailed description of vulnerability CVE-2021-44548 as advised on the https://nvd.nist.gov/ website:
An Improper Input Validation vulnerability in DataImportHandler of Apache Solr allows an attacker to provide a Windows UNC path resulting in an SMB network call being made from the Solr host to another host on the network. If the attacker has wider access to the network, this may lead to SMB attacks, which may result in:
* The exfiltration of sensitive data such as OS user hashes (NTLM/LM hashes)
* In case of misconfigured systems, SMB Relay Attacks which can lead to user impersonation on SMB Shares or, in a worse-case scenario, Remote Code Execution
This issue affects all Apache Solr versions prior to 8.11.1.
This article describes AspenTech's response to vulnerability CVE-2021-44548. | Solution: We do not deploy with any configured DataImportHandler. Users should, nonetheless, be sure to restrict access to the hosting Solr system, Solr ports and configuration (see Guidance for Securing Solr port 8983 for V12.0 and later versions of A1PE).
Users can confirm that no DataImportHandler is configured by connecting to the Solr Administrator (see below) and verifying there is no definition of the DataImportHandler in solrconfig.xml:
Keywords: aspenONE Process Explorer A1PE
search
tomcat
References: None |
Problem Statement: In the context of this document, the term GET record refers to an IO Transfer record defined by any of the following Definition records:
IoGetDef
IoGetHistDef
IoLLTagGetDef
IoLLTagUnsDef
IoLongTagGetDef
IoLongTagUnsDef
IoUnsolDef
Is it more efficient to have a large number of GET records, each with a minimal number of occurrences, or to have a small number of GET records which contain a large number of occurrences? For example, if I need to scan 1000 tags once per minute, should I use 4 GET records with 250 occurrences or 2 records, each with 500 occurrences? | Solution: In general, it is more efficient to have a small number of large GET records rather than a large number of small GET records. In the example above, it is better to have two records, each with 500 occurrences. Having many GET records requires more time to process as more lists of tags will be sent to the Aspen Cim-IO server and eventually to the device. Using too many GET records at a fast scan rate can cause Cim-IO messages to backlog, potentially causing no message replies to be sent.
Keep in mind that the different GET definition records allow different maximum numbers of tags. For example, IoGetDef allows a maximum of 1234 occurrences, whereas IoLLTagGetDef allows only 558. Also, certain process devices may implement their own size restrictions. For example, RSLinx only allows 400 tags per list. Some OPC DA servers perform better with fewer items in a group (e.g., they can optimize communications to the device), so in these cases large GETs could slow down communication. Thus, it is always good practice to check your specific Interface User's Manual for such limitations.
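As a quick sizing check for the example above (a worked calculation in Python):

import math

tags_to_scan = 1000
max_per_record = 500  # stay within the record type's limit (e.g., 1234 for IoGetDef)
print(math.ceil(tags_to_scan / max_per_record), "GET records needed")  # -> 2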
(This article was previously published as Solution 116595.)
Keywords:
References: None |
Problem Statement: This knowledge base article describes how to obtain locking information while requesting an AspenTech license. | Solution: 1. Download the attached SLM Locking Tool executable to the machine on which you will install the license. This applies to both standalone licenses and network licenses.
2. Run the SLMLockInfo.exe. You will see a screen similar to the one below:
3. Click on the Copy to Clipboard button then copy the content to an email or Microsoft Word document. Send the email to AspenTech for license configuration.
Keywords: SLM, Lock Info
References: None |
Problem Statement: Detailed description of vulnerability CVE-2022-42252 as advised on the https://nvd.nist.gov/ website:
If Apache Tomcat 8.5.0 to 8.5.82, 9.0.0-M1 to 9.0.67, 10.0.0-M1 to 10.0.26 or 10.1.0-M1 to 10.1.0 was configured to ignore invalid HTTP headers via setting rejectIllegalHeader to false (the default for 8.5.x only), Tomcat did not reject a request containing an invalid Content-Length header making a request smuggling attack possible if Tomcat was located behind a reverse proxy that also failed to reject the request with the invalid header.
This article describes how to address vulnerability CVE-2022-42252. | Solution: Update Tomcat Server.xml configuration settings to reject illegal headers. To reject illegal headers in Tomcat do the following:
Stop the Apache Tomcat service in Services.msc
Go to the Tomcat subdirectory named conf
Make a copy of the server.xml file for safe-keeping
Open Notepad (run as administrator), open server.xml
Update the connector for port 8080 to include the attribute rejectIllegalHeaderName="true"
Below is an example of an updated connector definition line from V14.0 in the server.xml file (your connector definition may be different, include keystoreFile etc):
<Connector executor="8080" port="8080" protocol="HTTP/1.1" connectionTimeout="20000"
redirectPort="8443" maxHttpHeaderSize="65536" URIEncoding="UTF-8" rejectIllegalHeaderName="true"/>
Save changes made to server.xml
Start the Apache Tomcat service in Services.msc
Note: Apache Tomcat 8.0.36 is the default version installed on a V10 MES web server; it is not impacted by CVE-2022-42252, since this vulnerability was introduced in Tomcat 8.5.0. If you have upgraded Tomcat by following the knowledge base article Upgrade Tomcat 8.0.36 to Tomcat 8.5.73, then the above solution still applies, since Tomcat 8.5.73 does support the rejectIllegalHeaderName property.
Keywords: IllegalHeader
HTTP error 400
Inconsistent Interpretation of HTTP Requests
did not find a matching property
References: None |