Problem Statement: Is the Specific Enthalpy in EDR (Property Data --> Hot/Cold Stream Properties) the total enthalpy or the enthalpy of a pure component? | Solution: The Specific Enthalpy in EDR under Property Data, Hot & Cold Stream Properties, is the enthalpy of the stream at the specified composition.
For example, if the stream composition is 40% Methanol and 60% Water, the enthalpy is for this mixture at the specified temperature and pressure conditions. (The snapshot above is for pure water.)
The snapshot above, from the online EDR help, states that the specific enthalpy is for the stream.
Keywords: Specific Enthalpy, Hot & Cold Stream properties
References: None |
Problem Statement: How to print a process flowsheet in Aspen Adsorption V12? | Solution: To print a process flowsheet in Aspen Adsorption:
Click the File | Page Setup menu option, available on any flowsheet window, to set up multi-page printing of flowsheets.
Use the Page Setup dialog to specify:
How many sheets to use for the height and width of the flowsheet
Paper size
Orientation
Margin size
Keywords: Print flowsheet, etc
References: None |
Problem Statement: How to setup Chimney trays in Aspen HYSYS column? | Solution: In Aspen HYSYS, Chimney trays can be accessed through the sub-flowsheet of the column.
In the Column sub-flowsheet, double-click the column to open the Tower: Main Tower window.
Select the Ratings tab, Sizing page.
Select Non-Uniform Tray Data.
Scroll through the Internal Type list.
Select Chimney.
Keywords: Chimney tray, Column internals
References: None |
Problem Statement: What causes the "inconsistency in result database" error in Aspen PIMS, and how can a trace log file be generated on the Aspen PIMS machine to help investigate it? | Solution: The "inconsistency in result database" error usually means that there is a problem writing the solution data to the output database. Since clustering is being used to run case executions, each child node needs to be checked to confirm that it can connect and write to SQL Server. This can be done in a couple of ways:
1. Limit the number of child nodes in the cluster configuration
To verify that all child nodes can write to a SQL Server database, reconfigure the PIMS cluster configuration to use a small number of child nodes with a specific case stack. The case stack should contain enough cases that guarantee that all child nodes will be utilized. This can be verified by setting the trace level to low before running a case stack. This trace level will generate a case parallel log file for each processor that was used, and the log file will state what machine the process was running on.
2. Run Aspen PIMS locally on each child node
Open Aspen PIMS on each child node and run a sample model in Aspen PIMS. Make sure that SQL Server has been configured as the output database in the general model settings.
To set the Trace Level to Low, follow these steps:
1. Open the Model Settings->Non-linear Model (XNLP)->XNLP Settings dialog->Trace Level field
2. Set Trace Level field to Low
3. Click OK
4. After a case stack is executed, case log files will be created in the model folder and the names are CaseParallelRankX.log where X is the rank of the log file.
5. The first line will have the machine name on which the case parallel process ran like “Rank 1 on host yyyyyyy” where yyyyyyy would be the machine name.
The above steps will generate a case parallel log file for each processor that was used, and the log file will state which machine the process was running on, which will help in the investigation of the issue.
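As an illustration, the host name on the first line of a CaseParallelRankX.log file can be extracted with a short helper (a sketch; the file name and log-line format follow the description above, and the host name shown is hypothetical):

```python
# Sketch: extract the machine name from the first line of a
# CaseParallelRankX.log file, which reads like "Rank 1 on host yyyyyyy".
def host_from_log_line(first_line: str) -> str:
    # Everything after "on host" is the machine name.
    return first_line.strip().split("on host", 1)[1].strip()

print(host_from_log_line("Rank 1 on host PIMSNODE01"))  # PIMSNODE01
```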
Keywords: Solution file, Aspen PIMS
References: None |
Problem Statement: How to resolve a Matrix Generator error in Aspen Unified PIMS? | Solution: When this error occurs in Aspen Unified PIMS during a model run, the workaround below can be tried:
Under Settings >> General Settings >> Application >> Advanced >> Miscellaneous, uncheck the option Enforce RFG/CARB regulatory bounds.
Keywords: Matrix Generator, Aspen Unified
References: None |
Problem Statement: Two types of license are available: Token and Non-Token (Standard). Similarly, Aspen software is distributed on two separate installation media, one for Token and another for Standard. If the license file is for a Token license, the Token media must be used, and likewise for a Standard license file. The media for the Token license is labeled with the letter T at the end to differentiate it from the Standard media.
If APC software is installed using the Standard license media, one would expect AspenIQ to retrieve the Standard (non-Token) type license key when running online. However, if Token type license keys are also available, IQ retrieves the Token type license key. Although no error messages are displayed, the explanation for this behavior is that IQ online cannot retrieve the non-token type license key.
Root Cause
AspenIQ online applications always check out the Token type license first, even if the software is installed using the non-Token licensing type.
It is possible for a user site to deploy both License file types, Standard and Token. In such cases the Token type license is retrieved when AspenIQ runs online and the Standard type license is ignored. This happens even when the licensing type of the APC server is configured as non-Token. | Solution: To force AspenIQ to use the Non-Token type license, the user must not register their License server containing Token type license keys. This can be done as follows - refer to the screen shot below:
Launch the SLM License Manager from the Windows Start button on the server where AspenIQ is installed
Select Configuration Wizard to open the SLM Configuration Wizard window
If the site uses a network license, there should be two servers listed in the Server Name column, one for each license type
Select the server containing the token license file and delete it.
Ensure that only the server containing the Standard license file remains in the list.
Press Apply Changes to save the changes.
Keywords: AspenIQ
Perpetual license
Token license
Non-Token
Standard
References: None |
Problem Statement: Aspen Online gives the error "Failed to check out a license" | Solution: In Aspen Online, users may face the issue "Failed to check out a license".
The issue can occur when tokens are limited and not available to run the Aspen Online model (in the background, the Aspen Plus or HYSYS model may be continuously closing and opening at certain time intervals).
If the user does not have spare tokens, the schedule should be specified so that the gap between the next run and the previous closing of the model is long enough (e.g., > 3 minutes) for the tokens to be released.
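The scheduling rule above can be sketched as a small check (illustrative only; the 3-minute threshold is the example value from the text, and the timestamps are made up):

```python
from datetime import datetime, timedelta

# Minimum gap (per the text's example) between closing the model and the
# next scheduled run, so that tokens have time to be released.
MIN_GAP = timedelta(minutes=3)

def schedule_gap_ok(prev_close: datetime, next_run: datetime) -> bool:
    """Return True if the schedule leaves enough gap for token release."""
    return next_run - prev_close >= MIN_GAP

print(schedule_gap_ok(datetime(2024, 1, 1, 10, 0),
                      datetime(2024, 1, 1, 10, 5)))  # True
```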
Keywords: Aspen Online, Failed to checkout license
References: None |
Problem Statement: How do I use the “Heat Exchanger” option, available under the Home tab in Aspen Plus, to transfer heating/cooling curve data to EDR? | Solution: This option is very helpful if a user working on a simulation in Aspen Plus wants to study the detailed heat exchanger design separately in EDR (Aspen Exchanger Design & Rating). Use this option to transfer heating/cooling curve data, such as Aspen Properties, flowrates, and other process conditions, from Aspen Plus to EDR.
The “Heat Exchanger” option is simple to use: the user can quickly transfer data to EDR from a HeatX or Heater block, and then run quick tests or complete a detailed exchanger design separately in the EDR program.
With this option, heating/cooling curve data can be transferred very simply.
Steps to generate an EDR file from Aspen Plus:
Make sure either a Heater or HeatX block is selected in Aspen Plus.
Aspen Plus results must be available: click the Run button. Once the Aspen Plus run has completed and results are available, the “Heat Exchanger” option is activated and ready to transfer heating/cooling curves to EDR.
Click on the tab as mentioned in above snap & new window will open for you with all available exchangers in your simulations:
Select the exchanger you would like to design/rate/simulate separately within EDR.
Once an exchanger is selected, the user can review or change the property ranges by selecting the check box “Review or change Property Range for design”. Temperature and pressure information for the selected block is displayed in the Stream Data table. By default, the Stream Data table displays the inlet and outlet temperatures and pressures of the selected block. If the block does not have a pressure drop, a default pressure drop is used, and by default a third, intermediate pressure is added. The temperature and pressure information determines the range of properties data written to the PSF file. These ranges can be edited.
To save the .psf file containing the EDR properties, select the “Advanced” check box. To view the data that will be exported in the PSF file, select Save PSF to generate the PSF files for each block.
If the PSF files are not necessary, the user can click “Size Exchanger” directly without selecting “Advanced”. After clicking Size Exchanger, a new EDR file is created in which the user can modify and obtain a more accurate, detailed heat exchanger design separately.
This makes it easy to transfer the heating/cooling curve data, i.e., the process data and the hot and cold stream properties data, into EDR.
Keywords: Aspen Properties to EDR, Heating/ Cooling curves, Launch Heat Exchanger
References: None |
Problem Statement: What is the significance of the W908 warning message in Aspen PIMS? | Solution: The value in PGUESS can cause a wide bound to be made wider if the current bound (determined by other model data) does not include the PGUESS value. Powers of 10 are used to widen the bound. The SCALE value will always win, but you may still see W908, since that logic is checked before SCALE is applied. If the widest value in the model is 10, but the PGUESS value is 15, we widen to 100. If the widest value in the model is 98, but PGUESS has a value of 100, we widen to 1000.
PIMS will always use the PGUESS value when determining wide bounds. There is a hierarchy to how the table objects are processed to determine the wide bounds. As you said, SCALE wins.
In particular, for PGUESS, which is processed just before SCALE:
MIN is typically reset to 0 unless it was less than 0, then it is reset to -10, -100, or 2*current MIN, depending on the value.
MAX is set to zero if value was less than zero, to 1 if value was zero, and to a power of 10 if greater than zero.
So, the MAX value must have been very small, but not zero, so it is reset to 0.01 using the power of 10.
The W908 PGUESS message is given before we know whether the quality has a SCALE value. So, yes, you may see messages for both, and that is by design. This message lets you know that your guess in PGUESS is outside the wide bounds gathered from the rest of the model, so it is asking you to check that the guess is good, given the rest of the data in your model.
W908 is not redundant. It is informational and was added on purpose. It lets you know that your wide bounds are being widened. It's up to you whether you want to adjust your model data or guesses based on this information. Bounds on qualities are very important.
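The widening rule described above can be sketched as follows (an illustrative reconstruction from the two examples in the text, not PIMS source code):

```python
import math

def widen_to_power_of_ten(widest_in_model: float, pguess: float) -> float:
    """Sketch of the MAX widening rule: if PGUESS falls outside the
    current wide bound, widen to the next power of 10 above PGUESS."""
    if pguess <= widest_in_model:
        return widest_in_model  # PGUESS already inside the bound
    return 10 ** (math.floor(math.log10(pguess)) + 1)

# Examples from the text:
print(widen_to_power_of_ten(10, 15))   # 100
print(widen_to_power_of_ten(98, 100))  # 1000
```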
Keywords: W908, Warnings, Aspen PIMS
References: None |
Problem Statement: When deleting models from AUP, they disappear from the user interface but stay in the Input DB. This can cause storage issues in the long run, so how can I purge deleted models from the Input DB? | Solution: First, go to Microsoft SQL Server Management Studio and connect to the server that stores your Input DB. Expand the Databases section, look for your Input DB, and examine the aumodel.Models table. There is a column named IsDeleted. When a model is deleted from the interface, this column will be True, whereas for a non-deleted model it will be False (you can right-click the table and select “Edit Top 200 Rows” to get this view).
From here, you can create a script that lists the deleted models as follows.
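As an illustration of such a listing query, the following uses an in-memory SQLite stand-in for the aumodel.Models table (the real Input DB is SQL Server and its schema may differ; the table layout and model names here are illustrative):

```python
import sqlite3

# Stand-in for the aumodel.Models table in the Input DB (illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Models (Name TEXT, IsDeleted INTEGER)")
con.executemany("INSERT INTO Models VALUES (?, ?)",
                [("ModelA", 0), ("ModelB", 1), ("ModelC", 1)])

# List models flagged as deleted in the UI but still present in the DB.
deleted = [row[0] for row in
           con.execute("SELECT Name FROM Models WHERE IsDeleted = 1")]
print(deleted)  # ['ModelB', 'ModelC']
```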
Close SQL Server Management Studio before continuing. Now open Command Prompt, paste the following, and press Enter:
cd C:\Program Files\AspenTech\Aspen Unified\Admin\binX64 [ENTER]
This takes you to the path C:\Program Files\AspenTech\Aspen Unified\Admin\binX64. From here you can run PSCAdmin and purge the models by typing the following and pressing Enter:
PSCAdmin purge --modelName “YourModelName” [ENTER]
If you want to see a list of the models you can purge, type the following and press Enter:
PSCAdmin purge [ENTER]
For example, let’s list the models we can purge and add the corresponding lines to purge them. In blue you will see the line that must be typed, in white the information that comes back, and in green the confirmation that the model has been purged. When a model is purged, all of its entries in the Input DB are deleted.
Pay attention to spaces and characters such as quotation marks if you are getting an error. Each model must be deleted individually; you cannot separate the names of the models by commas. You must write something like:
PSCAdmin purge --modelName “YourModel1” [ENTER]
PSCAdmin purge --modelName “YourModel2” [ENTER]
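When many models need purging, the per-model commands above can be generated with a small script (a sketch; the model names are the placeholders from the example above, and PSCAdmin itself must still be run from the Admin binX64 folder):

```python
# Sketch: emit one PSCAdmin purge line per model, since names
# cannot be comma-separated in a single command.
models = ["YourModel1", "YourModel2"]
commands = [f'PSCAdmin purge --modelName "{name}"' for name in models]
for cmd in commands:
    print(cmd)
# PSCAdmin purge --modelName "YourModel1"
# PSCAdmin purge --modelName "YourModel2"
```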
Keywords: Input DB, Purge, PSCAdmin, storage, SQL, server, command prompt
References: None |
Problem Statement: How would you use Aspen Plus to model the Sulfuric Acid Process? | Solution: Attached is an example of modeling the Sulfuric Acid Process with Aspen Plus. To use the example download the .zip file and extract all the files to one folder. Double click on the .bkp file to open the simulation.
Note that this example includes a kinetic subroutine for the conversion reactors in the form of a .dll file. A Fortran compiler is NOT needed to run.
Sulfuric Acid Key Challenges
Moderate to High Cost of Maintenance
Regulated SO2 Emissions
Energy Integration is Important
Acid from Metallurgical Gas
Important Facts
Leading sulfuric acid technology companies and manufacturers use Aspen Plus to design, operate, and troubleshoot their plants.
An integrated model is necessary to properly model the interactions amongst all key variables in a sulfuric acid plant.
Key Variables In Sulfuric Acid Production
Gas Strength
Production Rate
Stack SO2
Converter Catalyst Loading and Temperature Profile
Acid Strength
Steam Production
Gas Pressure Drop
Gas Dew-Point
Figure 1 - Sulfur Burning Double-Absorption Plant Flow Diagram (Riegel's Handbook of Industrial Chemistry, 1983)
Components

ID       Formula   Name
H2O      H2O       Water
H2SO4    H2SO4     Sulfuric acid
SO2      SO2       Sulfur dioxide
CO2      CO2       Carbon dioxide
SO3      SO3       Sulfur trioxide
S        S         Sulfur
N2       N2        Nitrogen
O2       O2        Oxygen
C10H22   C10H22    n-Decane
H3O+     H3O+      Hydronium ion
HSO4-    HSO4-     Bisulfate ion
SO4--    SO4=      Sulfate ion
Property Methods Recommended for Sulfuric Acid

System        Property Method   Comments
Gas System    Ideal             Vapor phase at high temperature.
Acid System   ElecNRTL          Non-ideal electrolyte solutions; Henry's Law is used to calculate gas (SO2, O2, N2 and CO2) solubility in sulfuric acid.
Steam         STEAMNBS          Specific model for pure water.
Gas Reactions and Acid Chemistry

Gas Reactions
1. S + O2 <--> SO2              Fast to Equilibrium
2. SO2 + 0.5 O2 <--> SO3        Kinetic, Catalytically Enhanced

Absorption Reaction
3. SO3 + H2O <--> H2SO4         Reactive Absorption

Acid Chemistry
4. H2SO4 + H2O <--> H3O+ + HSO4-   Important at All Concentrations
5. HSO4- + H2O <--> H3O+ + SO4=    Important in Weak Solutions
Aspen Plus Blocks Used in the Sulfuric Acid Model

Unit Operation                  Aspen Plus Block   Comments and Specifications
Drying and Absorbing Towers     RadFrac            Rigorous absorption; includes the absorption reaction and acid chemistry. Use a pumparound to model acid cooling and recirculation.
Blower                          Comp               Typical pressure rise ~140 in H2O. The COMP block may also be used to model the steam turbine driver.
Sulfur Burner                   RGibbs             Adiabatic Gibbs reactor (free energy minimization).
Converters                      RCSTR              Adiabatic reactors with user reaction kinetics.
Boiler, Superheater, Economizers,
Gas-to-Gas Heat Exchangers      MHeatX             Simplified heat exchanger. Checks crossover.
Design Specifications

Spec           Target              Manipulated Variable
Gas Strength   11.0% SO2           Sulfur Flow to Burner
Acid Conc.     98% H2SO4           Make-Up Process Water
Steam Prod.    Pass 1 Inlet Temp   Boiler Feed Water (BFW) Make-Up
Results

Variable                                  Value
Air Flow Rate, lbmol/hr                   7850
Sulfur Flow Rate, lb/hr                   26905
Sulfur Burner Temperature, F              2011
Converter Catalyst, liter
  PASS1                                   27,000
  PASS2                                   31,000
  PASS3                                   30,000
  PASS4                                   42,000
Converter Temperatures, F (In / Out / Del-T)
  PASS1                                   750 / 1114 / 364
  PASS2                                   824 / 954 / 130
  PASS3                                   810 / 858 / 48
  PASS4                                   759 / 802 / 43
SO2 in Stack, PPM                         283
Sulfuric acid concentration, wt%          98.5%
Sulfuric acid production, STPD            1000
Steam Production, lb/hr                   109,152 (645 psig)
Sulfuric Acid Model Usage
Design, De-bottleneck, and Troubleshoot
Converter Profile Optimization (with EO capabilities)
Rate Present Catalyst Condition and Evaluate Catalyst Purchases
Energy Recovery Analysis
Emulate Gas-to-Gas Hex Leaks
Keywords: sulfuric, electrolyte
References: None |
Problem Statement: How to add RON and MON as properties in a stream result? | Solution: 1. Go to the Properties environment | Customize and type ROC-NO and MOC-NO as user parameters. These are the parameter names for RON and MON, respectively.
2. Go to Components, create a new Assay, and enter the distillation curve data. Users can specify arbitrary temperatures, since these are not used for the octane calculation. The True Octane Number blending is calculated from the components' volume fractions.
3. Specify a property curve for each ROC-NO and MOC-NO parameter; arbitrary values are allowed here too.
4. Go to the Simulation environment, click on the stream results, and add the RON and MON properties (ROC-NO and MOC-NO).
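The volume-fraction blending mentioned in step 2 can be sketched as a simple linear blend (illustrative only, not the exact Aspen Plus implementation; the fractions and octane numbers below are made up):

```python
def blend_octane(vol_fracs, octane_numbers):
    """Sketch of linear volumetric octane blending:
    blended ON = sum(vol_frac_i * ON_i)."""
    assert abs(sum(vol_fracs) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(v * on for v, on in zip(vol_fracs, octane_numbers))

# Two hypothetical components: 40 vol% at RON 95, 60 vol% at RON 88.
print(blend_octane([0.4, 0.6], [95.0, 88.0]))  # ~90.8
```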
Keywords: True Octane number, Blending, Crude assay, Add properties
References: None |
Problem Statement: What stage is the Vapor Flow / Feed Flow (V/F) calculated for ConSep Column? | Solution: V/F is a ratio of vapor flow to feed flow. When a ConSep block solves for a feasible design, it will show the stripping and rectifying V/F results on the Results | Design page. The vapor flow rate for these sections is calculated at the stripping/rectifying boundary i.e. the feed stage. For the rectifying section, it is the vapor flowrate above the feed stage. For the stripping section, it is the vapor flowrate below the feed stage.
Keywords: ConSep, RadFrac, Aspen Plus, Feasible Design, Vapor Flow / Feed Flow, V/F
References: None |
Problem Statement: DMC3 Builder simulation allows you to create customized column settings through the column settings dialog box.
This mechanism relies on .xml files that can be imported.
How do I export my own customized settings for the current column setting I have? | Solution: You can set up your simulation column sets, and they are saved into the SSCSimulationTemplateUser.xml file located in the C:\ProgramData\AspenTech\APC\Vxx\Builder\config directory.
You can then save a copy of this file, move it to a new machine, and import it.
Remember that any time you make changes to column settings, you must close and reopen the simulation session (or even the project) to see the changes applied.
Keywords: DMC3 Builder, Column settings, Simulation
References: None |
Problem Statement: When a new CIM-IO logical device is created then registration of Miscellaneous tags and PID tags for the new device fails in Watch Maker. The following error message is displayed: Tagname not found in server database.
This happens when the logical device is added to the “Cimio_logical_devices.def” file for the first time as shown in the example below. | Solution: If this logical device is newly added, the corresponding InfoPlus.21 task must be restarted from Aspen InfoPlus.21 Manager, before data collection can be started. This may be done as follows.
Please launch InfoPlus.21 Manager from the Windows Start button - refer to the screen shot below.
Find the corresponding task, TSK_M_IODEVn, where n = 1, 2, 3 or 4, listed in the Running Tasks list (bottom left frame). For example, if logical device IODEV3 was added, the corresponding task is TSK_M_IODEV3.
Stop the task by pressing the STOP TASK button and waiting for the task to be removed from the Running Tasks list.
Find the same task in the Defined Tasks list (top left frame) and press the RUN TASK button.
Confirm that the task is running; it should now appear in the Running Tasks list and a confirmation message should appear at the bottom of the screen as shown below.
Then data collection may be started.
NOTE: if no other data collection is running already, an alternative way is to restart the database: press the STOP button, wait for confirmation that the database has been stopped successfully, then press the START button.
Keywords: Miscellaneous tag
PID tag
PID watch
Watch Maker
References: None |
Problem Statement: After upgrading the APC software, we are unable to open a simulation file (.PSM), created before the upgrade, in the DMCplus Simulate application.
Is this usual behavior? | Solution: Unfortunately, even if DMCplus Simulate is newer than the environment where the .PSM file was created, a .PSM file is currently not compatible across different versions of APC.
This applies not only to version differences but also to patch-level differences.
(For example, a .PSM file created in a CP1 environment is not compatible with a CP2 environment.)
A CCF file created in an older version is generally compatible with a newer-version environment.
So if you encounter this situation, please recreate the simulation file from the CCF file.
Keywords: DMCplus Simulate
PSM
Simulation
References: None |
Problem Statement: InfoPlus.21 Obsolete Tasks in V12.0 | Solution: The table below lists tasks that are generally not required by current versions of Aspen InfoPlus.21 systems (version 3.0 and later). Most of them do not appear in the InfoPlus.21 start task list that is distributed with the current versions of Aspen InfoPlus.21. However, they could still show up after an upgrade if you merge the old task list with the new task list. After the upgrade, you can use the InfoPlus.21 Manager to manually remove the obsolete tasks from the task list.
C21_WIN_INIT Remove – No longer required by InfoPlus.21
Loaddb Remove – No longer required by InfoPlus.21
TSK_API_SERVER Remove – No longer required by InfoPlus.21
TSK_B21_INIT Remove – No longer required by Batch21
TSK_BCU_INIT Remove – No longer required by Batch21
TSK_BCU_SCHED Remove – No longer required by Batch21
TSK_BCU_SERVER Remove – No longer required by Batch21
TSK_BCU_START Remove – No longer required by Batch21
s21_backup_logfile Remove – Previously required by obsolete SCAN21
TSK_S21_MKSHM Remove – Previously required by obsolete SCAN21
TSK_S21_NETBUF Remove – Previously required by obsolete SCAN21
TSK_S21_OUTSERV Remove – Previously required by obsolete SCAN21
TSK_S21_SCHED Remove – Previously required by obsolete SCAN21
TSK_S21_INTER Remove – Previously required by obsolete SCAN21
TSK_DDE21 Remove – Previously required by obsolete DDE21
TSK_E21_INIT Remove – Previously required by Event.21
TSK_E21_QINIT Remove – Previously required by Event.21
TSK_E21_QNETBUF Remove – Previously required by Event.21
TSK_H21_ARCCK Remove – No longer required by InfoPlus.21
TSK_H21_INIT Remove – No longer required by InfoPlus.21
TSK_H21_MNTTAB Remove – No longer required by InfoPlus.21
TSK_N21_ROUTER Remove – Previously required by Event.21 and SQLA tags
TSK_N21_TIMER Remove – Previously required by Event.21 and SQLA tags
TSK_N21_SOCKET Remove – Previously required by Event.21 and SQLA tags
TSK_MKOB Remove – Previously required by Event.21 or SQLA tags
TSK_PD_SERVER Remove – Previously required by Event.21 and SQLA tags
Keywords: InfoPlus.21
Obsolete Tasks
References: None |
Problem Statement: Before Version 12, some MES services were installed with an unquoted service path that contains at least one whitespace. A local attacker can gain elevated privileges by inserting an executable file in the path of the affected service.
For example, below are two MES services with untrusted path:
CalculatorServerService : C:\Program Files (x86)\AspenTech\Aspen Calc\Bin\CalcScheduler.exe
AfwSecCliSvc : C:\Program Files (x86)\AspenTech\BPE\AfwSecCliSvc.exe | Solution: The issue has been fixed in MES V12 if the software is a new install rather than an upgrade. If the MES software is below V12 or was upgraded to V12, please use the workaround below to address the issue.
The workaround is to add double quotes around the ImagePath of the vulnerable services in Registry Editor.
Below is the procedure for CalculatorServerService:
1. Launch the Registry Editor and find CalculatorServerService under Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CalculatorServerService.
2. Modify the ImagePath from C:\Program Files (x86)\AspenTech\Aspen Calc\Bin\CalcScheduler.exe to "C:\Program Files (x86)\AspenTech\Aspen Calc\Bin\CalcScheduler.exe" by wrapping the path in double quotes.
3. Repeat a similar procedure for other vulnerable MES services such as AfwSecCliSvc.
AspenTech will release an ECR to fix this issue for other supported versions.
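The quoting fix can be sketched as a small helper (illustrative only; it does not touch the registry, and it assumes the ImagePath value is just the executable path with no trailing arguments):

```python
def quote_service_path(image_path: str) -> str:
    """Sketch: wrap an unquoted Windows service path in double quotes
    when the executable path contains whitespace (the condition that
    makes an unquoted path exploitable)."""
    path = image_path.strip()
    if path.startswith('"') or " " not in path:
        return path  # already quoted, or no whitespace to exploit
    return f'"{path}"'

print(quote_service_path(
    r"C:\Program Files (x86)\AspenTech\Aspen Calc\Bin\CalcScheduler.exe"))
```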
Keywords: Vulnerability
Security
Cybersecurity
CalculatorServerService
AfwSecCliSvc
References: None |
Problem Statement: Does Aspen Cloud Connect need a special license to be activated? | Solution: Yes, and it depends on the destination servers that will be enabled. Aspen Cloud Connect includes three different license models in V12:
Aspen Cloud Connect Express: No license key is required.
Provides the out-of-the-box (OOTB) capability to work with AspenTech products as destinations.
All source servers are enabled OOTB
Only the following AspenTech destination servers are enabled out of the box:
IP21 using Grpc
Aspen Cloud
Aspen Cloud Connect Standard: A license key is required to enable 3rd party destination servers.
Allows customers the ability to use Aspen Cloud Connect to move data into standard destination endpoints that can be used by customer’s inhouse development
All source servers are enabled OOTB
Only the following AspenTech destination servers are enabled out of the box:
IP21 using Grpc
Aspen Cloud
3rd party destination servers enabled:
SQL Servers
CSV servers
PostgreSQL
Aspen Cloud Connect Premium: A license key is required to enable 3rd party destination servers
Allows customers the ability to use Aspen Cloud Connect to move data into premium destination endpoints that can be used by customer’s inhouse development
All source servers are enabled OOTB
Only the following AspenTech destination servers are enabled out of the box:
IP21 using Grpc
Aspen Cloud
3rd party destination servers enabled:
SQL Servers
CSV servers
PostgreSQL
MQTT
RabbitMQ
OSISoft
OPC UA
Azure IoT Hub
AWS S3
Hadoop HDFS
InfluxDB
Kafka
Keywords: Aspen Cloud Connect
License Key
Grpc
ACC
References: None |
Problem Statement: SOAP Exception - Unsupported response content type | Solution: The likely cause is that some other software is writing to port 8888 on this machine, which is why this error/warning message is shown to users.
Verify that no other application is using this port.
Keywords: SOAP
References: None |
Problem Statement: Is it possible to see an aspenONE Process Explorer (A1PE) saved trend in Aspen Process Explorer? | Solution: It is not possible to convert a trend saved in A1PE into an Aspen Process Explorer trend directly.
However, you can open the saved trend in A1PE, select all the tags, and then drag and drop them onto the Aspen Process Explorer document.
Keywords: Trend
References: None |
Problem Statement: In order to add more entries to a Get transfer, the field IO_RECORD_PROCESSING must be turned OFF; data collection then stops until IO_RECORD_PROCESSING is turned back ON. | Solution: This behavior is expected. Turning OFF IO_RECORD_PROCESSING means no TagList is available on the Cim-IO server, and therefore no data can be retrieved from your OPC server or DCS.
Once you turn IO_RECORD_PROCESSING back ON, a new TagList is created and new data will arrive.
To minimize data loss, perform this task with a query in SQLplus.
Using queries to add more entries does not avoid turning OFF IO_RECORD_PROCESSING, as that step is mandatory; however, doing it via a query is faster than doing it manually and therefore reduces data loss.
Keywords: IO_RECORD_PROCESSING
Data Loss
References: None |
Problem Statement: What to do when Apache Tomcat logs take up too much space on the server?
They are located under C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat8.0.36 Bakup\appdata\scheduler\logs | Solution: 1.- The logs can be deleted when required, as they are not needed for Tomcat to keep running.
2.- Unfortunately, there is NO "clean-up" option for the logs, only the ability to change the logging level to reduce the amount of logging that occurs (Apache Tomcat is a third-party tool, so we are unable to add functions to the program itself).
The minimum logging level, which produces the fewest messages and the least log volume, is the "Error" logging option. So, to minimize logging, AspenTech suggests configuring Tomcat to log messages only for errors.
How do I change the logging properties in Tomcat?
Note: If you have another drive where would be possible to save the logs, you can change the Log Path to that location.
3.- Another way to further reduce the number of logs is to open the Configuration tab at http://localhost:8080/AspenCoreSearch/Scheduler and modify the level of the logs.
You can either put all of them to Error, so only those messages get logged, or you can turn all of them to OFF and only enable them when required.
4.- Another change that reduces the number of log files is modifying the file logging.xml (C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat8.0.36 Bakup\appdata\scheduler\config).
The date pattern “-%d{yyyy-MM-dd}” in the rolling file name makes each file name unique, so files accumulate over time. Updating it to “%i” means the rollover index is used instead, so the number of files kept is capped by the rollover strategy.
If you want to do this, you need to stop the Apache Tomcat service, modify the XML, and then start the Apache Tomcat service again.
It would look like:
<RollingFile name="scheduler" fileName="${log4j:configLocation}/../appdata/scheduler/logs/scheduler/AspenScheduler.log"
             filePattern="${log4j:configLocation}/../appdata/scheduler/logs/scheduler/AspenScheduler-%i.log">
  <PatternLayout pattern="%d{ISO8601} [%t] %p %c %x - %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="100 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="5"/>
</RollingFile>
<RollingFile name="tasks" fileName="${log4j:configLocation}/../appdata/scheduler/logs/tasks/tasks.log"
             filePattern="${log4j:configLocation}/../appdata/scheduler/logs/tasks/tasks-%i.log">
  <PatternLayout pattern="%d{ISO8601} [%t] %p %c %x - %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="100 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="5"/>
</RollingFile>
Keywords: Tomcat
Log
References: None |
Problem Statement: How to temporarily disable the switchover between Cim-IO redundant servers? | Solution: To temporarily disable the switchover, you only need to stop TSK_DETECT.
If there are other logical devices for which you do not want to disable the switchover, then instead turn OFF only the occurrence related to the target logical device in the TSK_DETECT record in the Administrator.
Keywords: Cim-IO
Redundancy
References: None |
Problem Statement: Aspen Production Control Web Server (PCWS) is working on the Microsoft IIS(Internet Information Services) platform.
There are multiple log files in the “C:\inetpub\logs\LogFiles\W3SVC1” directory (and other directories at the same level as W3SVC1). They end in .log and are usually named exNNNNNN.log, inNNNNNN.log, or ncNNNNNN.log where the N represents a number (some examples include ex080811.log and ex070731.log).
There may be many of them and they may consume quite a bit of hard disk space
If free disk space drops to zero, the server may become corrupted.
So we need to manage free space on the system drive periodically. | Solution: These files belong to Microsoft's Internet Information Services (IIS), the web server present on Windows Server operating systems. More information can be found in Microsoft's Knowledge Base on their website.
Whether or not to enable logging and how many of the logs to keep will vary depending on the needs of the site. Usually it is a matter of striking a balance between having the information provided by the logs versus keeping an appropriate amount of hard disk space free.
Please periodically monitor and secure free disk space on the server's C: drive by moving or deleting old log files, to avoid server corruption.
There is one option to remove IIS logging from PCWS. (From KB: 000080068)
If you follow this procedure, IIS log files are no longer created.
However, you will then be unable to review the logs later. Please consider this trade-off before disabling logging.
1. Open Internet Information Services Manager (IIS), on the left side tree click on the Web server name, just under Start Page. Look for Logging under IIS and double click on the icon to open the feature.
2. Disable logging on the Actions panel on the right side.
3. Finally, run an IISRESET command in a command prompt instance (Make sure the command prompt is opened as an Administrator) to force the logging change to take effect.
(This will make the PCWS page temporarily unavailable. Afterwards, ensure the Aspen APC Web Provider Data service is running, and start it if it is not. Then connect to PCWS again.)
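As an alternative to disabling logging entirely, old log files can be pruned on a schedule. Below is a minimal sketch (not an AspenTech-supplied tool); the directory path and 30-day retention period are assumptions to adjust for your site, and dry_run=True only lists candidates without deleting anything.

```python
import os
import time

def prune_old_logs(log_dir, max_age_days=30, dry_run=True):
    """Return .log files in log_dir older than max_age_days; delete them when dry_run is False."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if (name.lower().endswith(".log")
                and os.path.isfile(path)
                and os.path.getmtime(path) < cutoff):
            stale.append(path)
            if not dry_run:
                os.remove(path)
    return stale

# Example (the path is the default IIS log directory mentioned above; adjust as needed):
# stale = prune_old_logs(r"C:\inetpub\logs\LogFiles\W3SVC1", max_age_days=30, dry_run=True)
```

Run first with dry_run=True to review which files would be removed before deleting anything.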
Keywords: Aspen APC Web Provider Data Service
DMC3
DMCplus
Aspen APC Web Server
PCWS
References: None |
Problem Statement: After editing a long CCF calculation in Notepad, we pasted it into one CCF calculation.
We then noticed that the end of the pasted calculation was not reflected in the CCF.
Is there a limit on the number of characters for a CCF calculation? | Solution: There is a maximum character limit for a CCF calculation.
The maximum is 1024 characters.
You cannot enter additional characters once the count reaches 1024.
If you encounter this situation, please consider dividing the calculation into two calculations.
Keywords: DMCplus Build
CCF
Calculations
References: None |
Problem Statement: DMC3 Builder shows Failed to transfer application when deploying RTE controller | Solution: The RTE service creates temporary files in C:\Windows\Temp with names such as ~91AE.tmp, tmp9D21.tmp, etc. These temp files should get cleaned up by the RTE service automatically once the operation completes. But there are instances where these files are not cleaned up properly. Once the number of files reaches a maximum limit for unique names for these files, DMC3 Builder cannot create any more files in this folder and it will fail to deploy controllers. The resulting error message is Failed to transfer application.
Detailed messages can be found in the NodeRepositoryErrors.log in the following format:
AspenTech.ACP.RTE.Remoting.AppTransferException: Failed to transfer application. ---> System.IO.IOException: The file exists.
To fix this problem:
Turn OFF and stop all running RTE controllers.
Stop the RTE service and delete all the .tmp files in the C:\Windows\Temp folder.
Restart the RTE service; you should then be able to deploy new controllers.
Keywords: Failed to transfer application
DMC3 Builder
Deployment
RTE controller
References: None |
Problem Statement: Model Switching is a utility on DMC3 equivalent to CCF Switch on DMCplus Controllers. It can be used to implement relationships and file replacements that enable applications to switch models and/or tuning sets while the DMC3 application is operating Online, Offline, or in Simulation Mode. | Solution: Requirements for using Model Switching:
1.- To use Model Switching it is necessary to have Engineer Permission
2.-It is necessary to have the mdl or mdl3 (in case the current project is DMC3) files and/or .tuningset file already exported and located in a known directory
3.- Changing the active tuning set must be done either manually or by an output calculation. An input calculation cannot be used to switch tuning sets because, during the controller cycle, the tuning switch occurs prior to input calculations.
NOTE: The mdl3 files can be exported from a DMC3 Builder project by performing the following instructions:
In the open project, go to File and select the Export File option
Once it is open, select Export Model; this opens a window where you specify the location to save the exported file. In this window, change the file type from .dmc3model to .mdl3
Select the location and save the file. You can then go to that location and verify that the model file has been saved as .mdl3
Example on the use of Model Switching
To Trigger Model Switching a Calculation has to be Prepared to use this utility. In the Attached PDF please find an example of how this can be prepared and used.
Keywords: Model, DMC3, ModelSwitching
References: None |
Problem Statement: Cannot see options menu when clicking any variable in the Production Control Web Server (PCWS). | Solution: Double check the AW_HOSTNAME field in the AW_CTLDef record in InfoPlus.21 Administrator is the same as the host name shown in the PCWS area (top part of the Operations View).
Keywords: Production Control Web Server, PCWS, APC Web Interface, pop up options menu, InfoPlus.21 Administrator, Definition records, AW_CTLDef
References: None |
Problem Statement: The user gets one of the below error messages when trying to debug an Aspen Fidelis model in Visual Studio, and Fidelis shuts down.
System.BadImageFormatException: 'Could not load file or assembly 'Xceed.Chart.GraphicsGL.v4.3, Version=4.3.100.0, Culture=neutral, PublicKeyToken=ba83ff368b7563c6' or one of its dependencies. An attempt was made to load a program with an incorrect format.'
OR
System.BadImageFormatException
HResult=0x8007000B
Message=Could not load file or assembly 'Xceed.Chart.GraphicsGL.v4.3, Version=4.3.100.0, Culture=neutral, PublicKeyToken=ba83ff368b7563c6' or one of its dependencies. An attempt was made to load a program with an incorrect format.
Source=<Cannot evaluate the exception source>
StackTrace:
<Cannot evaluate the exception stack trace> | Solution: This error message is typically caused when Use Managed Compatibility Mode is not selected. Please follow the steps below to select Use Managed Compatibility Mode.
1. Open VstaProjects (you can do this by opening a model in Aspen Fidelis and clicking on Write Key Routines)
2. Click on Tools > Options
3. Click on Debugging in the left panel of the Options screen
4. Find Use Managed Compatibility Mode on the right side of the Options screen and check the box next to it
5. Click OK
6. Restart Aspen Fidelis
Keywords: Debug
Debug issues
Debug issue
Unable to debug
Freezing
Crashing
References: None |
Problem Statement: What is the SLM License Profiler and how do you use it? | Solution: The SLM License Profiler allows you to obtain specific information about the licenses available on an SLM license server or license file. Typically, you will use the SLM License Profiler to verify licenses on a license server or license file and to diagnose license related problems. Non-administrators can also use the SLM License Profiler to retrieve SLM license System Name if needed to verify support entitlement.
The SLM License Profiler consists of a single dialog box that is used to:
Query a specified license server or license file for the system name, company name, and locking information associated with a license.
View available licenses on a specified license server or license file.
Copy license information to other applications, such as Notepad or Microsoft Excel.
View license usage showing who is currently using a given license.
NOTE: The SLM License Profiler is included with the SLM Client Tools that are automatically installed when installing any aspenONE product.
How to use the SLM License Profiler
1. Start the SLM License Profiler by opening the aspenONE SLM License Manager and click License Profiler. The SLM License Profiler dialog box is displayed.
2. Select either a license server or a local license file.
To select a license server, select a server name from the License Server dropdown list or type the name of the server in the License Server box. You may click the Refresh Server Listing button at any time to refresh the list of license servers configured on the local computer. This is useful if the SLM Configuration Wizard was used to change the configuration of one of the servers on the local computer.
To select a local license file, select Local License File in the License Server dropdown list. The License File field is enabled. Click the Browse button to select the license file.
3. Click Load Information to query the license server or license file for the System Name, Company, and Locking Info information.
If an error occurs, verify that the correct server name or license file was specified in the License Server or License File fields. If a problem occurs, an error message is displayed. The error message provides information about the problem and provides suggestions for resolving the problem.
4. Click View Licenses to display all available license information that was found on the license server or in the license file.
License Information such as available license keys, licenses in use, birth date, expiry date, and more are now displayed for you to review.
5. Clicking on the User tab will show you the current license usage and end client information that is using a license.
6. Click Copy to Clipboard or Export to Excel to copy all of the displayed information into another application, such as Notepad or Microsoft Excel.
7. The Diff License Files option will allow you to compare two different license files and highlight the differences.
Keywords: SLM
License File
System Name
SLM License Profiler
References: None |
Problem Statement: How does EDR calculate baffle cut if not specified in Rating/Checking mode? | Solution: For single segmental baffles, it is set so that free flow area in the baffle window is roughly equal to the crossflow area at the exchanger center line. The nominal value is rounded to the nearest 5%.
For both double and triple segmental baffles, the outer baffle cut is determined in the same way as the single segmental baffle cut. If the outer baffle cut calculated this way is too large for that baffle type, it is reset to the maximum reasonable value for that baffle type. Then, the rest of the cuts are calculated to try to give a similar open area for the different baffles.
For example, for double segmental baffles, it is estimated on the basis that the superficial area (ignoring tubes) between the inner cuts of the cap-shaped baffles is the same as the sum of the two cut areas on either side of the band-shaped central baffle.
Even when the baffle cut is provided as an input, the value may be adjusted slightly, if necessary, so that it is sensibly positioned relative to the tube locations that pass through the baffle.
Keywords: Exchanger Design and Rating (EDR), Baffle Cut, Not specified, Rating, Checking
References: None |
Problem Statement: How to see the correct order and the units of the Results for a Front Head Cover in S&T Mech? | Solution: In Aspen Shell & Tube Mechanical, the Rich Text output for the second panel of Front Head Cover Results|CodeCalculations|Cylinders/Covers is misaligned, so some values are associated with the incorrect labels.
To see the results and units in the correct order, run the simulation and go to the same panel (the second panel of Front Head Cover Results|CodeCalculations|Cylinders/Covers); there, the results and units will be aligned in the correct order.
Note: This issue is only in Aspen Shell and Tube Mechanical V12.0 and for this specific panel (the second panel of Front Head Cover Results|CodeCalculations|Cylinders/Covers). It is fixed in Aspen Shell and Tube Mechanical V12.2.
Keywords: Code Calculations, Results, Front Head Cover, units, actual length, material, corrosion.
References: None |
Problem Statement: A compressor is specified with a single curve at reference speed. If the operating speed is changed, then the results are different. How does Aspen Plus calculate these results with only a single curve specified? | Solution: If fan laws are applicable, then Aspen Plus employs fan laws to calculate a new curve at the operating speed. Fan laws are only applicable for Head, Head Coefficient, Power and Efficiency curves.
The fan laws scale each dependent variable by the speed ratio raised to a fan law exponent: Exph for head and head coefficient, Expp for power, and Expe for efficiency, where N is shaft speed. The fan law exponent values are set on the Compressor | Performance Curves | Operating Specs form. These relationships are usually quite close to observed behavior for small changes in speed. For larger changes it is better to obtain multiple performance curves at different speeds covering the entire range of operation and interpolate as needed.
If fan laws are not applicable, then Aspen Plus employs the performance curve at reference speed and, taking inlet volumetric flow as a basis, it estimates a new volumetric flow for the specified operating speed. This is calculated as:
Q = Qref (N/Nref)
where
Q = new volumetric flow at specified operating speed
Qref = specified volumetric flow at performance curve reference speed
N = specified operating speed
Nref = performance curve reference speed
The subsequent calculations are developed using this new volumetric flow. By spline interpolation, the value of discharge pressure at this volumetric flow is estimated from the reference-speed performance curve provided. All the calculations are still done using the single performance curve provided.
It is important to note that if fan laws are not applied for the first dependent variable (e.g. discharge pressure), they are not applied to the second dependent variable (e.g. efficiency) either.
These relationships are usually quite close to observed behavior for small changes in speed. For larger changes it is better to obtain multiple performance curves at different speeds covering the entire range of operation and interpolate as needed.
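As an illustrative sketch (not Aspen Plus source code), the fan-law scaling described above can be written as follows. The exponent values 2 for head and 3 for power are common fan-law defaults but are assumptions here, since the actual Exph, Expp, and Expe values come from the Operating Specs form:

```python
def scale_curve_point(q_ref, head_ref, power_ref, n, n_ref, exp_h=2.0, exp_p=3.0):
    """Scale one reference-speed performance-curve point to operating speed n.

    Volumetric flow scales linearly with speed (Q = Qref * N/Nref); head and
    power scale with the fan-law exponents exp_h and exp_p (illustrative values).
    """
    ratio = n / n_ref
    q = q_ref * ratio
    head = head_ref * ratio ** exp_h
    power = power_ref * ratio ** exp_p
    return q, head, power

# Doubling the speed doubles the flow, quadruples the head, and
# multiplies the power by eight (with exponents 2 and 3):
print(scale_curve_point(100.0, 50.0, 10.0, n=3600.0, n_ref=1800.0))
# → (200.0, 200.0, 80.0)
```

This also illustrates why fan laws are trusted only for small speed changes: the error of a fixed-exponent power law grows quickly as N/Nref moves away from 1.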
Keywords: Compressor, Single Performance Curve, Multiple Performance Curves, Fan Laws, Interpolation, Head, Discharge Pressure
References: None |
Problem Statement: What is the role of the NIST Thermal Data Engine (TDE) in Aspen Plus? | Solution: The NIST Thermal Data Engine estimates pure and binary property parameters based on molecular structure.
Molecular structures can be drawn with the drawing tool in the Home ribbon Tools group.
TDE can also retrieve experimental data for pure components or binary mixtures of pure components.
It can be run from the Home ribbon Data Source group.
Keywords: Aspen Plus, NIST, TDE
References: None |
Problem Statement: What are the utility options available that Aspen Plus can calculate? | Solution: A Utility is an option in Aspen Plus that can be used to calculate:
Energy usage
Utility usage (for example, pounds of High Pressure steam/hr)
Energy/utility cost
You can assign a utility to any block where Duty or Power is either specified or calculated (except MHeatX)
To calculate the required utility flow for a given process:
From the Block specification form, select the Utility sheet
Choose the <New> from the Utility ID dropdown list
Enter a name for the Utility (you can select a predefined utility)
Click the Next button to go to the Utilities folder
Choose the Utility type from the eight selections provided
For utility cost calculations, enter either the Purchase price or Energy price
Set the calculation option as Specify heating/cooling value (default), or Specify inlet/outlet conditions (set values on Inlet/Outlet sheet) and supply related parameters
Choose to calculate CO2 emissions if appropriate on the Carbon Tracking sheet (optional)
Keywords: Aspen Plus, Utilities, Calculation
References: None |
Problem Statement: It is possible to export the master model of a DMC3 controller as a .gdm or .gda file using DMC3 Builder to then import the model into Aspen GDOT or Aspen Unified GDOT Builder. | Solution: To export the Master Model of your controller you need to go to DMC3 Builder, click on the Master Model and under the Model Operations ribbon click on Conversion.
This will prompt up the Convert Model window, here is where you can export models from a DMC3 controller to a GDOT model file or vice versa; in this case we will export from DMC3 to GDOT. You need to click on Convert to generate the new curves and then Export to save the model file.
NOTE: The converted curves may not match exactly with the DMC3 model curves, this is because DMC3 works with coefficient matrices while GDOT works with transfer functions, so when you click convert the program approximates a second order transfer function with deadtime out of the coefficient matrix.
After you have the GDOT model file, the next steps are different for GDOT Excel-based applications and Unified GDOT Builder, we will go through each one of them individually.
NOTE: DMC3 Builder V11 & V12 will export a .gdm file while V12.1 and later will export a .gda file, both are GDOT model files with the difference that the .gda file is encrypted and cannot be modified by non-Aspen applications. The GDOT Excel add-in can only read .gdm files while Unified GDOT Builder can load both .gdm and .gda model files.
GDOT Excel-Based Projects
To import the GDOT model file using the GDOT Excel Add-Ins you simply need to open the Workbook that you are working with, go to GDOT and then click on Read Model File to select your .gdm file.
After you do this the Dynamics and the Gain Matrix sheets will be filled out with the model information that you imported from DMC3 Builder.
Unified GDOT Builder
For a Unified GDOT model, the steps to import the model file are: right-click the DMC Model item on your Model Flowsheet, select Details, and then select “Go to DMC3 Matrix View”:
Once you are on the Gain Matrix View click on the Import button on the top-right section of the screen and click Import Model File to upload the .gdm or the .gda file from your computer.
NOTE: Unified GDOT Builder only accepts variable names with letters, numbers, periods, and underscores. For example, if you have a variable named FIC-2001, as in the previous screenshots, and you try to import the file, you would get an error message; in that case you need to rename it to either FIC2001 or FIC_2001 to be able to import it into Unified GDOT Builder.
After you complete this the Gain Matrix View and the Curves Matrix View will be filled out with the model that you imported.
Keywords: model, DMC3, import, export, gdm, gda, GDOT, Unified GDOT
References: None |
Problem Statement: How do the WEEK_NUMBER and ISO_WEEK_NUMBER functions work? | Solution: There are three main systems for calculating the week numbers of the year:
ISO system.
The first week of the year is the first week that contains Thursday. This is also sometimes stated as the first week having at least 4 days or the week containing January 4th.
USA system.
The first week of the year is the week containing Jan 1.
Other.
The first week of the year is the first complete week.
Your wall calendar could be based on any of these systems and may not match the results of running the WEEK_NUMBER function in Aspen SQLplus.
The Aspen SQLplus function WEEK_NUMBER uses the same system as the WEEKNUM function in Microsoft Excel. This system is similar to the USA definition in that week 1 always starts on January 1st. December 31st is always the last day of the last week of the year. Microsoft Excel requires an indicator of 1 or 2 setting the first day of the week to Sunday or Monday respectively. The Aspen SQLplus WEEK_NUMBER function allows the indicator to set the starting day of the week to any day of the week where 1=Sunday and 7= Saturday.
According to Excel and Aspen SQLplus, this effectively means that there are 53 weeks every year, and the last week of one year and the first week of the next are short weeks that together span 7 days. The only exception is when January 1st falls on the day defined as the starting day of the week.
For Example, given that the starting date is set to 1 (Sunday):
Week 53 of 2021 contains Sun Dec 26, Mon 27, Tue 28, Wed 29, Thu 30 & Fri 31
Week 1 of 2022 only consists of Sat Jan 1
Week 2 of 2022 will run from Sun Jan 2 -to- Sat Jan 8
ISO_WEEK_NUMBER returns the week number as defined by the ISO8601 standard. ISO 8601 1988 (E) paragraph 3.17: week, calendar: A seven day period within a calendar year, starting on a Monday and identified by its ordinal number within the year; the first calendar week of the year is the one that includes the first Thursday of that year. In the Gregorian calendar, this is equivalent to the week which includes 4 January.
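The two schemes can be illustrated in Python (a sketch of the behavior described above, not the Aspen SQLplus implementation). Here week_number mimics the Excel-style counting where week 1 always starts on January 1st, with a configurable starting day (1=Sunday … 7=Saturday), while Python's built-in datetime.isocalendar() gives the ISO week directly:

```python
from datetime import date

def week_number(d, start_day=1):
    """Excel/SQLplus-style week number: week 1 always starts on January 1st.

    start_day: 1=Sunday ... 7=Saturday (the SQLplus-style indicator).
    """
    # Python's weekday(): Monday=0 ... Sunday=6; convert start_day to that scale.
    py_start = (start_day - 2) % 7
    jan1 = date(d.year, 1, 1)
    # Days by which Jan 1 falls after the most recent week-start day.
    jan1_offset = (jan1.weekday() - py_start) % 7
    return ((d - jan1).days + jan1_offset) // 7 + 1

# Reproducing the example above (week starts on Sunday):
print(week_number(date(2021, 12, 26)))    # 53
print(week_number(date(2022, 1, 1)))      # 1
print(week_number(date(2022, 1, 2)))      # 2
print(date(2022, 1, 1).isocalendar()[1])  # ISO week 52 (of ISO year 2021)
```

Note how the same calendar date, 2022-01-01, lands in week 1 of 2022 under the Excel-style scheme but in ISO week 52 of 2021, which is exactly why a wall calendar may disagree with WEEK_NUMBER.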
Keywords: week
calendar
References: None |
Problem Statement: What do Equipment Symbol Shape and Equipment Footprint mean and what are they used for? | Solution: Equipment Symbol Shape and Equipment Footprint are fields that are shown on the Spreadsheet Import/Export report generated from Aspen Capital Cost Estimator.
Equipment Symbol Shape
ACCE’s process equipment is divided into different categories. Each category is assigned an item symbol for identification purposes; for example, CP is the Symbol Shape for Centrifugal Pumps. This Equipment Symbol Shape is also used by Aspen OptiPlant to map the equipment from ACCE into an equipment from OptiPlant.
Equipment Footprint
ACCE’s engine currently calculates the equipment footprint area for all process equipment, including pumps and compressors. For estimation purposes, it is assumed that the pump or compressor base plate area is equal to the equipment footprint area. This data is shared with Aspen OptiPlant to size the piece of equipment.
Keywords: Import / Export, plotplan results, product integration, transfer, share, reconcile coordinates, 3D model.
References: None |
Problem Statement: How can I share equipment data from Aspen Capital Cost Estimator into Aspen OptiPlant? | Solution: Both Equipment Shape and Equipment Footprint data are important pieces of information to transfer from ACCE into OptiPlant.
ACCE’s Spreadsheet Import/Export Report, namely the API Report, has the capability of printing Equipment Shape and Equipment Footprint when the Print equipment coordinates option is set to Y.
Users can find this option at the bottom of the Equipment Specs form under Project Basis View.
To write this information in the Excel Report, go to File > Spreadsheet Import/Export, and select Export data + plotplan results option in the Window menu. Note that the Components by Area option needs to be checked in the Select section to correctly create the file.
Once the report is generated users will be able to find data for Equipment Footprint and Equipment Symbol Shape information for the different pieces of equipment present in the ACCE Project.
For more information on Equipment Symbol Shape and Equipment Footprint, refer to KB no. 000099258.
Keywords: Connect, product integration, 3D model.
References: None |
Problem Statement: When Importing the Spreadsheet Import/Export Report (or API Report) from ACCE, an error about the Equipment not having a valid “Equipment Symbol Shape” in the ACCE Worksheet shows up:
Equipment ID: ‘XXX’ does not have a valid Equipment Symbol Shape in the ‘XXX XX’ worksheet of ACCE file.
Why does this happen and what can users do to make it go away? | Solution: The reason is that the spreadsheet was imported without an Equipment Symbol Shape, which is used by Aspen OptiPlant to map the equipment from ACCE into an equipment from OptiPlant.
If an estimator generates the ACCE Import/Export Report (or API Report) WITHOUT selecting the Export data + plotplan results option in the report window, this key data will be missing, causing this error in OptiPlant.
Once the plotplan report is generated, users will be able to find data for Equipment Footprint and Equipment Symbol Shape information for the different pieces of equipment present in the ACCE Project.
This report can now be imported to OptiPlant.
In case the 'Export data + plotplan results' button is greyed out, refer to KB no. 000099261 on how to activate it.
For more information on Equipment Symbol Shape and Equipment Footprint, refer to KB no. 000099258.
Keywords: 3D model, fail to import, product integration, Aspen Capital Cost Estimator.
References: None |
Problem Statement: What is the server URL format for connecting to an ABE server through Aspen Plus/HYSYS Datasheets or a web browser? | Solution: To use the datasheet functionality in Aspen Plus and Aspen HYSYS, you will need to either connect to an ABE enterprise server or have the ABE local server installed on your machine. If you do not know which configuration is being used, contact your ABE administrator or IT.
For an ABE local server configuration, you will choose Use Personal Workspace in Aspen Plus and Aspen HYSYS. On a web browser, you will use the following server URL.
http://localhost:10025 (10024 for V11)
To connect to an ABE enterprise server, you will choose Join a Project Team in Aspen Plus and Aspen HYSYS and type in the server URL. The server URL should have the same format as below, and can be used on a web browser.
http://’ServerMachineName’:82 (81 for V11)
The ‘ServerMachineName’ will be specific to the company’s ABE server. The server machine name will be displayed next to the workspace name in the Administration tool. Contact your ABE administrator or IT to determine your server machine name.
Keywords: ABE, Datasheets, Host URL Selection, Server URL, Local Host, Join a Project Team
References: None |
Problem Statement: The out-of-the-box engineering data model was updated in ABE V12. Due to changes made to the data model, users who are moving from ABE V9 to ABE V12 are advised to run this migration script after restoring their workspace data. | Solution: We provide the attached migration script to facilitate the data model migration clean-up so that minimal user interaction is needed. Please refer to the guide in the attachments for steps on how to run this rule.
Note that the attached migration script should only be used after users have already completed the migration of their workspace library and workspace. The script is written only for users moving from V9 to V12.
After running the script, a log file that documents all changes is created in the Workspace folder C:\AspenZyqadServer\Basic Engineering38.0\Workspaces\. If this is not the file path for your workspace folders, then change the file path on line 34 of the .azkbs file.
Keywords: Migration, Backup, Restore, OOTB, Datamodel, KB Script
References: None |
Problem Statement: To publish a Process Unit simulation from Aspen Unified PIMS to GDOT, you first need to configure the simulation from the Flowsheet page so that it can be published using a site catalog and added as a component in Aspen Unified GDOT. | Solution: As an example, suppose we want to publish the process unit “Hydrocracker” used in a Planning Model. For a newly created site catalog, if we go to Model Data -> Catalog -> Site Catalog, we will not have the option to add it until the simulation is configured:
To configure the unit, first we need to go to the main flowsheet, right-click on the unit we want to publish and select the “Details” option (or alternatively, click on the unit to highlight it and then press Ctrl + I).
Click the “Configure” button on the bottom-right corner of the screen, then again on the bottom-right corner click on “Simulations” to expand the drop-down list and select “AUP Submodel Simulation”.
It will display a message saying that there is no simulation configuration for the submodel you are working on. Click “OK” and then name the submodel to create it.
After you create the simulation, you will be able to go to Model Data -> Catalog -> Site Catalog and add this simulation to then configure the rest of the sections and publish for use on Aspen Unified GDOT.
Keywords: Unified, PIMS, GDOT, simulation, submodel, catalog, publish
References: None |
Problem Statement: When opening the Configure Online Server functionality in the APC Online server, an error message shows up with the next message:
“This Windows user account does not have privilege to update the ‘cimio_logical_devices.def’ file.
You cannot configure, enable, or disable this server.” | Solution: When you open the Configure Online Server functionality, the software attempts to open the computer's cimio_logical_devices.def file. If the file is already open in a text editor (such as Notepad) or held open in the background by other AspenTech software, this error message appears and prevents the configuration window from opening.
Other AspenTech software that edits or uses the cimio_logical_devices.def file is:
Cim-IO Test API
Cim-IO Interface Manager
Cim-IO IP.21 Connection Manager
Make sure to close all of them and try to open Configure Online Server again.
Keywords: Configure Online Server
References: None |
Problem Statement: It is a known issue that sometimes, when trying to open DMC3 Builder, the title screen displays and the DMC3Builder.exe process starts, but after a few seconds the task ends. There are some troubleshooting steps that can be followed to solve this. | Solution: Corruption of the files that DMC3 Builder uses at startup can cause the program to show the title screen and start the DMC3Builder.exe task, which then crashes.
In the Event Viewer, under Windows Logs -> Applications, we can find error messages stating that DMC3Builder.exe was closed due to an “Unhandled Exception”. This may seem like a big problem, but luckily there are a few easy troubleshooting steps we can follow to solve the issue.
1. Go to C:\Users\{User}\AppData\Local\Aspen_Technology,_Inc and delete all the files on that folder
2. Go to C:\Users\{User}\AppData\Roaming\AspenTech\APC and rename the file APCDesktop.UserPreferences to something different.
3. Go to C:\Users\{User}\AppData\Local\Temp\2 and delete all the .tmp files that you find.
After completing these steps, you should be able to run DMC3 Builder. Note that when you start working on a new project the APCDesktop.UserPreferences.dat file will be created again.
Additional note: deleting these files will reset the server configuration for DMC3 Builder. This can be fixed by going to Online -> Servers and adding back the server that you had; the online applications will auto-populate.
Keywords: DMC3 Builder, crash, dmc3builder.exe, title screen, open
References: None |
Problem Statement: What versions of the compiler are supported? How do I troubleshoot problems setting up the compiler configuration? | Solution: Aspen Plus supports a variety of Intel Fortran compiler versions along with a variety of Visual Studio linker versions.
By default, the current versions of Visual Studio do NOT install Visual C++ (and associated tools). These are needed with Aspen Plus; hence, users must select the Custom installation option and then check Visual C++ (under Programming Languages).
To select the compilers, go to Start | Aspen Plus | Select Compiler for Vx.x and select your compiler and exit. The options are defined in the Compilers.cfg (for V10 and earlier) or the Compilers64.cfg (for V11 and higher) files. One compiler can be set up to be used with multiple versions of Aspen Plus, both 32-bit and 64-bit.
When new versions of the compiler are released or problems with the configurations are discovered, the .cfg files are updated and can be obtained from the following links:
V10 and earlier: https://esupport.aspentech.com/apex/S_SoftwareDeliveryDetail?id=a0e0B00000BbrKcQAJ
V11 and higher: https://esupport.aspentech.com/apex/S_SoftwareDeliveryDetail?id=a0e4P00000RYJkvQAH
All versions of Aspen Plus and Aspen Properties require a Fortran compiler AND linker tools for user and system generated routines. Various versions of Microsoft Visual C++ are supported as indicated in the Compilers64.cfg and compilers.cfg files. Note that the State of the configuration must be OK before selecting the option. If the option for your setup is ERROR, then either it is not a valid configuration or the software was not installed in a way the Set Compiler program can recognize.
The compiler selection actually used is determined by the first setting on this list which exists:
1. HKEY_CURRENT_USER\Software\AspenTech\Aspen Plus\??.?\aplus
in the registry, where ??.? is the version number
2. HKEY_LOCAL_MACHINE\Software\AspenTech\Aspen Plus\??.?\aplus
in the registry, where ??.? is the version number
3. CompilerSection in this file, which is found in %aprsystem%\xeq
You can use ApSetComp to set the first registry setting, and if you are an administrator, also to set the second registry setting. You will be warned if you pick a section which ApSetComp detects as an ERROR.
Troubleshooting
From the Customize Aspen Plus window, the following commands can be used to generate a diagnostics file (diag.txt in this example) to help diagnose issues in the configuration:
apsetcomp -list > diag.txt
chkpath path include lib >> diag.txt
ApSetComp -outenv -debug=2 >nul 2>>diag.txt
ApSetComp
Type ApSetComp (in a Customize Aspen Plus window) by itself to get the command syntax and usages.
When you run ApSetComp, it checks (using the registry and paths required) for the existence of each compiler/linker combination. If the registry lookup fails or an error is detected, ApSetComp marks the section with ERROR in the State column. Since you probably don't have most of these versions installed, you will probably see many errors and one or a few sections with OK listed.
Note that some versions of compilers and linkers with similar names actually require different configurations. For example, Intel Fortran 2013 SP1 is different from Intel Fortran 2013. The ERROR/OK status can help you if you don't know which version you have installed.
To get a quick report of registry lookup errors, using the currently configured compiler section and a local Compilers.cfg file, use:
ApSetComp -outenv -cfg=Compilers.cfg -debug=2 >nul
Or to test a specific compiler section (IVF14_VS12):
ApSetComp -outenv -sect=IVF14_VS12 -debug=2 >nul
To get a detailed report of the expansion of instruction lines in a compiler section use the following command:
ApSetComp -outenv -cfg=Compilers.cfg -debug=3 >SetEnv.bat 2>SetEnv.log
SetEnv.bat contains DOS commands to set INCLUDE, LIB, and PATH env variables.
SetEnv.log contains instruction line expansions and error status.
The information in these files and the RegEdit.exe tool will help you to debug compiler issues for Aspen Plus.
Structure of compilers.cfg and compilers64.cfg Files
Each section (From Begin line to End line) below describes one supported combination of a Fortran compiler and linker you might have installed. The first word after Begin (such as Intel_VS71) is the ID of the section.
ApSetComp uses this file to provide compiler support for Aspen Plus and Aspen Properties.
You can set up a new compiler set by adding a new section with a new ID and the necessary environment variables.
Syntax of compilers.cfg and compilers64.cfg Files
This file is not case sensitive, but is blank sensitive. You should edit this file with notepad or similar editors but not Word. Also you should not break long lines.
The original ApSetComp supports:
Registry lookup such as HKLM(mypath)
Env variable substitution such as $(SDKDir)
\.. for parent directory
Comment lines that begin with # (but not #! - see below)
The argument to registry functions accept # as a single digit which matches the latest version. If you need a specific version, replace the # by a specific number in the section of interest or create a new section.
The new ApSetComp (used since V9) also supports the following features:
Nested expansion such as HKLM($(mypath)\aplus)
Additional functions: GetVer(), Exist()
Alternatives: HKLM(path_1)||HKLM(path_2)||Exist(path_3)
A line that requires new ApSetComp can be prefixed with #! so it is ignored by old ApSetComp. When the new ApSetComp encounters a line beginning with #! it uses the line and skips the next non-comment line (which is processed by old ApSetComp). The new line is usually more robust but is not backward compatible.
Keywords: None
References: None |
Problem Statement: New V12.1 CP1 DMC3 Builder feature allows to override the deployed targets and operator limits | Solution: Starting V12.1 CP1 the user can change the values of the external targets and operator limits connected to DCS tags at redeployment.
Consider the following scenario: a running DMC3 controller is connected to interface points for the operator limits, and the limits are wrong after a prolonged shutdown or an interface initialization. With this feature, instead of manually changing the values on the DCS, you can use the values stored in the DMC3 Builder application.
Below is a small example of this feature. The FRAC DMC3 controller has a value of 2 for the operator limits of MV FIC-2001.SP:
The application limits are 1.5 and 3.5; this can be confirmed in the simulation view:
During redeployment select the option to override the deployed targets and operator limits
After the controller runs for a few cycles, do another test connection; this time the values show 1.5 and 3.5.
Keywords: DMC3 Builder
Override limits
References: None |
Problem Statement: What is the meaning of “Device Unit” or simply “Unit” when configuring a Cim-IO connection to collect data either through Aspen Watch or collect.exe? | Solution: When configuring IO connections to collect data we can encounter the term “Device Unit”, “Unit”, or “Unit Number” for Cim-IO connections, what does this mean?
If we refer to the Cim-IO user guide, we can find a thorough explanation:
IO_DEVICE_UNIT
“Specifies the device unit number through which the values are to be read or written. In some hardware configurations the Cim-IO device-specific server DLGP task may be required to read data from, and write data to, several different hardware units.
For instance, the DLGP task can communicate with four different DCS systems using network communications. There may be 10 different PLC devices connected to RS-232 serial ports. This field in the record identifies which of the four gateways or which of the 10 PLCs, the values are to be read from, or written to.”
What this means is that for specific architectures the Device Logical Gateway Program (DLGP) task can be set up to read and write data from several different DCS systems or PLC devices. This hardware requirement is not very common, so unless it is explicitly necessary, we can simply use the default unit number of 1.
Keywords: Cim-IO, Aspen Watch, collect, unit, device unit
References: None |
Problem Statement: Sometimes, depending on the complexity and structure of a simulation, it can become difficult to solve and obtain results. Equation Oriented mode is a useful way to converge simulations and offers additional benefits in Aspen Plus. | Solution: The video attached to this animated tutorial explains what Equation Oriented (EO) is and how it is different from the default Sequential Modular (SM) mode. It is explained how to change a simulation from SM to EO mode and some basic characteristics and advantages of this run mode.
Main topics covered in this video:
Difference between SM and EO run modes.
Convenience of using EO and some examples.
Which are the different run modes in EO?
How to set up an EO Simulation?
EO Variables.
EO in the Control Panel.
Variable Specification and variable swapping.
Keywords: EO, convergence, variables
References: None |
Problem Statement: In real life, controllers have small delays between the read values of the Process Variables and the actual number in the equipment at that point in time. | Solution: In the following video, it is shown how to use a transfer function to create a delay between the PV values and the controller readings:
Basic steps:
1.- Set up a simulation with working PID controller(s)
2.- Add a transfer function, set up the PV ranges and check the Delay box.
3.- Set the controller PV source as the Transfer Function's OP.
4.- Re-specify the controller's ranges.
5.- Run the simulation, observe the difference between the actual PV value and the controller reading.
Keywords: Aspen HYSYS Dynamics, Controller, Delay, Transfer Function, PID, PV.
References: None |
Problem Statement: How to create a custom group of variables for APC applications on PCWS, additional to the default “All Variables”, “Independents”, and “Dependents”. | Solution: On a situation where we have an APC controller with too many variables and we want to “filter out” the most important variables for better monitoring, we can create a custom group additional to the default “All Variables”, “Independents”, and “Dependents” on the Aspen APC Web Interface.
Let’s say we have the controller DEMOCOL12_DMC3 and we want to create a group so only the pressure and pressure drop variables are displayed:
The way to do this is by going to Configuration -> Applications -> Group Definitions and selecting the application that you want to work with, then click on “New”.
Now, you set up a name for the new group and you choose the variables that you want to include by double clicking on them to add them to the Selected list.
Additionally, you have the option to add a Separator between MVs or CVs, on my example I want to separate COLDP & CONDDP from OHVALVP.
Once you are finished you click Apply.
After the changes are applied, if you navigate back to the Online Apps and click the dropdown menu for the application that you worked with you will be able to see the new group with the selected variables.
This works for both ACO and RTE applications.
Keywords: PCWS, Web Interface, custom, group definitions, variables, group
References: None |
Problem Statement: Is it possible to model a solid-liquid-liquid-vapor electrolyte system? | Solution: Liquid-liquid (LLE) systems can be modelled with ElecNRTL, but there are several important steps that must be taken. See Solution 4402 for details. These systems can include vapor and/or liquid.
The attached file, which can be opened in Aspen Plus V10 and higher, uses the True Component approach to model a water (H2O), 1,2-dichloropropane (C3H6Cl2), sodium chloride (NaCl), and nitrogen (N2) system. This system should have a vapor phase, two liquid phases, and a solid salt phase. Some parameters such as the electrolyte binary pair parameters (GMELCC) are estimated based on the recommendation in Solution 4402 of using 5 for solvent with ions and 5 for ions with solvent; therefore, the results should not be used quantitatively.
These parameters are entered on the Properties Methods | Parameters | Electrolyte Pair | GMELCC form.
When running the simulation, the salt will generally appear in the aqueous liquid stream out of a Flash3 block.
Keywords: lle, electrolytes, NaCl
References: None |
Problem Statement: Configuration details for using the RDBMS data source component in conjunction with an Oracle database aren't clear. This Solution provides details for setting up this connection. | Solution:
On the Process Explorer client machine, check the Oracle client connection to the relational database:
Start | Programs | Oracle - OraHome92 | Configuration and Migration Tools | Net Configuration Assistant
On the Net Configuration Assistant: Welcome screen, select the radio button:
Local Net Service Name configuration - then click the Next button
On the Oracle Net Configuration Assistant: Net Service Name Configuration screen,
Select the Test radio button, then click the Next button.
On the next screen, use the Drop-down menu to select the net service name of interest. This will be the link to the Oracle database where the RDBMS tag data will reside. Click the Next button.
If the test does not succeed, try changing the Login information used. This test must be successful in order for RDBMS tags to work properly.
The next step involves creation of a translation table within Oracle to map the various tag attributes to their column names within the table containing all the RDBMS tag data. This translation table is referred to as the Tags Table.
For detailed information regarding the Tags Table, see the Aspen Process Explorer Installation Guide - Section 4-28 (Aspen Process Data for RDBMS).
After the Tags Table has been created, a new ADSA Data Source must be configured for accessing the RDBMS data. Using the ADSA Client Config Tool, add a new data source. Configure the new data source using the Aspen Process Data (RDBMS) component.
Click the Setup button to configure the ADO Connection String. The Data Link Properties dialog is displayed. On the Provider tab, select Microsoft OLE DB Provider for Oracle. On the Connection tab, specify the server name using the Oracle net service name used when testing the Oracle client connection. Fill in the User name and Password fields appropriately, and check the option to Allow saving password. Use the Test Connection button to verify the configuration. Click OK.
Finally, fill in the Tags Table field using the name of the Oracle table created for the translation discussed above.
At this point, Process Explorer can be used to trend data from the new RDBMS data source.
Keywords:
References: None |
Problem Statement: Is there a relatively quick and easy way to determine how many tags (as well as which ones) are in a specific Aspen InfoPlus.21 (IP.21) file set? | Solution: Yes! Open the Aspen InfoPlus.21 Administrator and do the following steps:
1. Select a specific History repository:
2. Right click a specific file set and choose 'Summary...':
3. On the Summary dialog, put a very large number (bigger than the expected number of tags, like '99999') in the No. of Tags to Summarize: field:
Note that at this point the number of tags in this specific file set is listed in the 'Tags:' field to the left of the 'Refresh' button (2034 in the example above).
4. Click Enter or use the Refresh button in the upper right and the summary will list all of the tags in the file set (and the value will decrease and match the Tags: value in the upper right).
Note: In addition to showing the tag names that are in the fileset, the total number of history values is listed as well in the '# of Events' column. This can be verified with an SQLplus query as shown in the screenshot below:
The difference between the count(ip_trend_value) and # of Events is due to the difference in multiplying by 1000 versus multiplying by 1024, per AspenTech Development.
Keywords: File Set
Fileset
References: None |
Problem Statement: I have a directory of templates (.apt) files. Is it possible for Aspen Plus to use this directory as the default?
The default directory is in a system folder:
C:\Program Files (x86)\AspenTech\Aspen Plus V8.x\GUI\Templates
This directory is not really suited because users need administrator rights to add user templates there. | Solution: If you store your template files in this folder:
C:\ProgramData\AspenTech\Aspen Plus <version>\Templates
Then they will be available when you click My Templates when starting a new run from a template. This default location cannot be changed. If you want to point to some other network directory, use a shortcut folder to the desired location in the ProgramData default location.
In V12.0 and higher, click on the New from User Template button to take you to this directory.
Note: The C:\ProgramData folder is hidden by default in Windows. If you cannot see it:
In Windows 7:
1. Open Windows Explorer.
2. Click Organize | Folder and search options.
3. Click the View tab.
4. Under Hidden files and folders, click Show hidden files, folders, and drives.
5. Click OK.
In Windows 8 and 10:
1. Open Windows Explorer.
2. Click the View tab.
3. Check the box for Hidden items.
This design is more flexible than the V7.3 design, and more secure. Companies can choose to install files or templates in the protected space (under Program Files) by scripting the installation. These files are protected under normal user accounts. Users can add their own templates at will through the My Templates folder. They can change these templates at will too since no special administrator rights are required.
Keywords: None
References: None |
Problem Statement: In Aspen HYSYS it is possible to enter the efficiencies of the column stages as individual inputs. However, it is also possible to create groups of stages to simplify the input of values. This is particularly useful for columns with many stages. | Solution: Go to the Column Parameters and select the Efficiencies section. Select Efficiency Type as “Overall” as this will allow to enter the efficiency of the stage as a whole and not of particular components. Then select “Grouped” on Efficiency Values. This will make a new section appear in the window specified as Efficiency Tuning by Group.
Then, select the Group Definition button and choose the number of efficiency groups to be created. Each group will have specific stages assigned with the same specified value.
As an example, three efficiency groups are created here: one group assigns an efficiency to the condenser and reboiler together, and the other two groups cover tray efficiencies in the column.
After the groups have been created and assigned with stages, a value for each group can be specified and all the tray efficiencies will be populated.
Keywords: Efficiencies, distillation, efficiency values, grouped definition
References: None |
Problem Statement: How to download the Surface Jet Pump Unit Operation Extension from Aspen Hysys V12.0? | Solution: For downloading the Surface Jet Pump Unit Operation Extension, inside Hysys V12.0 and following versions follow the next steps:
1. Inside Aspen Hysys go to Resources and click on the Aspen Knowledge icon.
2. An internet browser window will open; on that page, look for the Surface Jet Pump Unit Operation Extension.
3. Click on the link.
4. The new window contains information on the Surface Jet Pump Unit Operation Extension; on the left side, find the link for downloading the extension.
5. After downloading the extension, it must be registered; review this article for more details about the registration:
Surface Jet Pump Unit Operation Extension
Keywords: Caltec, extension, jet pump, Aspen Knowledge, ejector.
References: None |
Problem Statement: How to modify the True Boiling Point (TBP) values shown in the Stream Properties to be shown in a different basis? | Solution: The TBP values available in the stream properties are shown by default in a volume basis and at specific percentage cuts, but this information can be adjusted to the user´s needs.
This can be done in the Correlation Manager, which can be launched by clicking the little arrow at the bottom right corner of the Simulation section in the Home ribbon. Expand the Petroleum properties to find the TBPs.
Select the TBPs to be adjusted and then choose the desired composition basis.
Keywords: Properties, petroleum, distillation, edit
References: None |
Problem Statement: How to use Aspen Custom Modeler model inside Aspen Plus? | Solution: The Aspen Custom Modeler (ACM) files need to be added to Aspen Plus in each computer that will use the Aspen Plus file. For adding the ACM models follow the next steps:
Open a new Aspen Plus file, in the Simulation environment go to Customize ribbon, select Manage ACM Models.
This should open the ACM Models tab. To import an exported custom model, click Import.
In the file navigator, look for the ACM model and select it, the type of the file is *.ATMLZ.
After these steps, close the new file and open your simulation file.
Note: Users must use a new Aspen Plus file to add the ACM models in steps 1, 2, and 3.
Keywords: Model, add new model, ACM, *.ATMLZ, Aspen Plus.
References: None |
Problem Statement: How to modify the time communication in Aspen Plus Dynamics? | Solution: For modifying the communication time in Aspen Plus Dynamics follow the next steps:
1. Go to the Run menu.
2. Select Run Options...
3. In the pop-up window, under Time control/Communication, enter the desired communication interval in the Communication field. This value uses the time units selected under Select the time units that correspond to the units used in your models.
4. Click on OK.
In this same pop-up window you can also change the time units used by the models (under Select the time units that correspond to the units used in your models) and the time units shown in plots (under Select the time units in which the user interface should display time). Click OK when finished.
Keywords: Time, modify, communication, axis units, reported units, time units.
References: None |
Problem Statement: How to add a correct name to a variable for declaring it in Aspen Plus Dynamics? | Solution: For declaring a variable in Aspen Plus Dynamics it needs to have a specific structure for the name, this structure contains the next rules:
1. Any name starting with a letter.
2. Up to 27 letters or numbers (it is not case sensitive).
3. Variable type is one of the predefined variable types, which gives a default value, bounds, and unit of measurement. The list of predefined variable types can be seen in the Dynamics library, under Variable Types.
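As a quick illustration, rules 1 and 2 can be expressed as a simple check. The regular expression below is our own sketch (interpreting the limit as 27 characters in total), not an AspenTech API:

```python
import re

# Starts with a letter, followed by letters or digits only,
# 27 characters at most; both upper and lower case are accepted
# since the names are not case sensitive.
_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{0,26}$")

def is_valid_variable_name(name):
    return bool(_NAME_RE.match(name))
```

For example, "FeedTemp1" passes the check, while "1FeedTemp" (starts with a digit) and "Feed Temp" (contains a space) do not.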
Keywords: Variable, name, declared, valid name capital letter, character.
References: None |
Problem Statement: It is possible to find wrong results in the viscosities calculated in HYSYS for an oil manager composition attached to a stream. There is a possible workaround for this issue. | Solution: Depending on the fluid package selected in the Properties Environment of HYSYS, it is possible to select the viscosity calculation method in the Fluid Package Options. The default method that is used is HYSYS viscosity. If the viscosity results obtained when using oil manager are far from what is expected, it is possible to modify the viscosity calculation method to Indexed Viscosity. There are scenarios that have proved to make better viscosity calculations with this method.
Keywords: Oil, Petroleum, mixture, wrong viscosity, temperature
References: None |
Problem Statement: How to solve the error message "The CAPEOPEN 1.1 Link Extension is not registered. Please register it to gain access to this operation." shown by HYSYS? | Solution: An extension needs to be registered to solve this issue.
The file to be registered can be found in the location C:\Program Files\Common Files\Hyprotech\CAPE-OPEN
The dll file that needs to be registered is found in this folder with the name of ActiveXtender11.dll
To find more details on how to register an extension see KB 89678
Keywords: Extension, HYSYS, Aspen Custom Modeler.
References: None |
Problem Statement: An error is observed when attempting a test connection between the OPC HDA server and Aspen Mtell server through the Test button under Configuration --> Settings --> Sensor Data Sources on Mtell System Manager.
The following error is also observed when loading the HoneywellPHD adapter URL on the internet browser. (Application can be browsed through Internet Information Services (IIS) Manager --> localhost server --> Sites --> AspenTech --> AspenMtell --> Adapter --> right click HoneywellPHD --> Manage Application --> Browse)
HTTP Error 500.0 – Internal Server Error
Calling LoadLibraryEx on ISAPI filter “C:\Windows\Microsoft.NET\Framework\v4.0.30319\\aspnet_filter.dll” failed
Go to the Internet Information Services (IIS) Manager, click on the server and double click on ISAPI Filters.
Look in the Executable column for any executable path containing two backslashes. Right click the entry, select Edit, and locate the double backslashes.
Delete one of the backslashes and then click OK.
Reload the HoneywellPHD adapter URL on the internet browser to confirm that error has now resolved.
Go to Mtell System Manager and click on the Test button again to confirm that the connection has now been established.
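The manual edit above simply collapses the doubled backslash in the filter path. For illustration only, the same normalization in code (a sketch that assumes a plain local path, not a UNC path that legitimately starts with two backslashes):

```python
def fix_double_backslash(path):
    # Replace any doubled backslash with a single one, mirroring
    # the manual edit made in the IIS ISAPI Filters dialog.
    while "\\\\" in path:
        path = path.replace("\\\\", "\\")
    return path
```

Applied to the failing path from the error message, it returns the corrected single-backslash path and leaves already-correct paths unchanged.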
Keywords: Test Connection Failed
ISAPI filter
HTTP Error 500.0
References: None |
Problem Statement: When editing a datasheet in Excel Datasheet Editor or the web Datasheet Editor (in HYSYS, Plus, or web Explorer) there is an Aspentech and ABC corp logos but there is no option to change them. | Solution: It is possible to easily customize a PSV datasheet with no experience with Aspen Basic Engineering (ABE) i.e. without using the ABE Datasheet Definer
The solution is to export the datasheet to Excel. This detaches the file from the ABE server, making it totally independent and ready to print. There are two paths:
1- From the Web Explorer (in HYSYS, Plus, or a web browser), click the Options button while the PSV datasheet is displayed, then select the Export to Excel option; the file is automatically exported to your Downloads folder.
2- From the Excel Datasheet Editor, go to the Aspen Datasheet tab and, within the Document section, click the Detach button; this opens a window that lets you choose the directory where you would like to save the file.
In the detached file you can now delete the default logo and place your own.
Keywords: ABE PSV Datasheet, Excel, Customize, logo, ABCorp.
References: None |
Problem Statement: Are there any third-party products that need to be installed for creating custom reports in the New Reporter? | Solution: SQL Server Management Studio is used to create and modify custom reports. 64-bit Microsoft SQL Server Version 2014 SP2 is the recommended choice to modify the database instance (Icarus_User120) installed by Economic Evaluation.
Microsoft SQL Server 2014 Service Pack 2 (SP2) Express can be downloaded from the following link:
https://www.microsoft.com/en-us/download/details.aspx?id=53167
Choose to download SQLManagementStudio_x64_ENU.exe installation media.
To properly install SQL Server Management Studio, it is also a pre-requisite to have Microsoft SQL Server 2012 Native Client and Microsoft ODBC Driver 11 for SQL Server. These 2 components can be separately installed from the following locations.
Microsoft SQL Server 2012 Native Client:
https://www.microsoft.com/en-us/download/details.aspx?id=50402
Microsoft ODBC Driver 11 for SQL Server:
https://www.microsoft.com/en-us/download/details.aspx?id=36434
Important Note: If the Icarus_User120 database is modified with a higher version of SQL Server or LocalDB, then the internal file format and structure will be upgraded to that higher version. Custom reports can be created with higher versions, however, the database will no longer be usable with lower versions of SQL Server or LocalDB.
Keywords: Custom reports, SQL, Icarus_User120, SQL Server, custom, reports, New reporter
References: None |
Problem Statement: Aspen Watch records have a different repeat-area structure than InfoPlus.21 analog records. When extracting information from the aggregates with a SQLplus query, you therefore need to specify the field you are looking for on Aspen Watch records, unlike InfoPlus.21 records where you don't need to. | Solution: In the InfoPlus.21 analog record repeat areas, the main type of numerical data stored is IP_TREND_VALUE.
If we wanted to extract some basic information from the aggregates (let’s say the maximum and minimum values every hour for the last couple of days), the simplest SQLplus query that we can write would be:
Select ts, max, min from Aggregates
Where name like '1-FC01.OP'
and ts between '22-MAR-22 00:00' and '24-MAR-22 00:00'
and period = 01:00;
However, if we tried using this exact same code on an Aspen Watch record, we would get the message “No rows selected”.
This is because on the Aspen Watch record repeat areas there are a lot of different types of data that are being stored.
If the previous code is used as-is, the query does not know where to look; that is why we need to add an extra AND condition that defines the field, written as:
AND FIELD_ID=FT('%datatype%')
With this condition added, the query knows what type of data you are looking for. Say we want to search for the maximums and minimums of the calculated steady-state values of a dependent variable (AW_SSDEP_H) over the past couple of days. The new query would be:
Select ts, max, min from Aggregates
Where name like 'C01D_COLDP'
AND FIELD_ID=FT('AW_SSDEP_H')
and ts between '22-MAR-22 00:00' and '24-MAR-22 00:00'
and period = 01:00;
Now that we specified the FIELD_ID we are actually able to retrieve information.
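For convenience, the query text can also be assembled programmatically. The helper below is just an illustration that builds the SQLplus statement shown above; the record and field names used here are the examples from this article:

```python
def aggregates_query(record, field, start, end, period="01:00"):
    # FIELD_ID=FT('...') is the extra condition required for
    # Aspen Watch records; everything else matches the plain
    # InfoPlus.21 form of the query.
    return (
        "Select ts, max, min from Aggregates\n"
        f"Where name like '{record}'\n"
        f"AND FIELD_ID=FT('{field}')\n"
        f"and ts between '{start}' and '{end}'\n"
        f"and period = {period};"
    )

query = aggregates_query("C01D_COLDP", "AW_SSDEP_H",
                         "22-MAR-22 00:00", "24-MAR-22 00:00")
```

The resulting string can be pasted into SQLplus, or the field argument swapped for any other data type stored in the record's repeat area.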
Keywords: InfoPlus.21, Aspen Watch, record, SQLplus, query, aggregates
References: None |
Problem Statement: Is it possible to define the unit material cost for a component using different currencies? | Solution: It is possible to set up multiple currency conversions for a project so that this can be used to enter the material cost of a component. To do so please follow the steps below:
In the Project Basis View go to the Currency form
In the External Currency window that will display, open the Specifications option
In the Procurement Currency Specification window specify the different currencies needed for the project and then click OK
Note: Item 1 must be the original currency base from the project, do not modify this item
In the Project View navigate to the component of interest and expand the Currency unit for material cost drop-down list; the currencies defined before will be listed
Keywords: Currency, conversion, country, USD, PS, EUR, KY, SAR, MXN
References: None |
Problem Statement: How are the Mach number and Rho V2 calculated in a valve? There are no results on the upstream side of the summary tab of a valve. | Solution: In Aspen Flare System Analyzer for a relief valve and a control valve the Mach Number and the Rho V2 are reported in the summary tab of each valve and calculated as follows:
Mach number: Includes columns for both Upstream and Downstream. The Mach number is calculated as the ratio of the fluid velocity to the sonic velocity:
Mach = velocity / sonic velocity
The Mach number calculations use the outlet velocity of the fluid because it considers the velocity after the pressure drop. The sonic velocity and density are calculated using the upstream and downstream thermodynamic conditions.
Please refer to this Solution for more details: https://esupport.aspentech.com/S_Article?id=000077802
Rho V2: Includes columns for both Upstream and Downstream. The Rho V2 value is calculated as the product of the fluid density and the square of the velocity:
Rho V2 = density × velocity²
The Rho V2 calculations use the outlet velocity of the fluid because it considers the velocity after the pressure drop. The density is calculated using the upstream and downstream thermodynamic conditions.
In relief and control valves, the Mach Number and Rho V2 use the difference between the properties upstream and downstream of the valve and the downstream velocity, therefore results are only going to be shown for the downstream side.
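In code form, the two quantities reduce to the standard definitions below. This is a generic sketch (variable names are our own, and the inputs must be in consistent units, e.g. velocities in m/s and density in kg/m3), not the Flare System Analyzer implementation itself:

```python
def mach_number(velocity, sonic_velocity):
    # Ratio of the fluid velocity to the sonic velocity
    # evaluated at the same thermodynamic conditions.
    return velocity / sonic_velocity

def rho_v2(density, velocity):
    # Momentum-flux term: density times velocity squared.
    return density * velocity ** 2
```

For the downstream side of a valve, both functions would be evaluated with the outlet velocity, as described above.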
Keywords: Valve, Pressure Drop, PSV, PRD, Sonic Velocity
References: None |
Problem Statement: Each time I try to open an example file from Plus Dynamics I get the following error message:
Cannot load problem XXX, working directory is not writeable. If you are trying to load from a read-only location, try checking 'Allow setting of working folder location' on the Preferences tab of Tools/Settings.
How can I troubleshoot this? | Solution: Close the error message dialog, in the main menu go to Tools > Settings > Preferences tag and check the “Allow setting of working folder location” checkbox. Click Apply to save changes and then OK to close the window.
You should now be able to open any Plus Dynamics built-in example file.
Keywords: Example file, open, not-writeable, read-only, load, directory.
References: None |
Problem Statement: How to create an input file for an Aspen Plus simulation? | Solution: Aspen Plus input files are compact summaries of the specifications for a flowsheet simulation. An input file can:
Be used as the input file for a stand-alone Aspen Plus engine run
Provide a compact summary of the input specifications for a simulation (for example, to be included in a report)
Provide the documentation of record for a simulation study (for example, as part of the archives for a design project)
Help expert users diagnose problems
As their name implies, input files contain only input specifications; no results are saved. If you want to save results, reconcile inputs first or use a different save format. Input files also contain no information about embedded objects such as spreadsheets, so do not use this format if your simulation uses features such as Excel Calculator blocks, which rely on embedded objects.
To generate an input file for an Aspen Plus simulation file follow the steps below:
1. Click on the Input button from the Summary group in the Home ribbon.
2. A notepad file will open. Save it as a *.txt file.
To open the input file generated with Aspen Plus follow the steps below:
1. Browse for the *.txt file saved and manually change its extension to *.inp. A message will display asking if you are sure to change the file extension, click Yes.
2. Open the *.inp file with Aspen Plus
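The manual rename in step 1 can also be scripted. The sketch below (Python, with a hypothetical file path) copies the saved summary to a *.inp file that Aspen Plus can open, without touching the original *.txt file:

```python
import shutil

def make_inp(txt_path):
    # Copy the saved input summary (*.txt) to a *.inp file that
    # Aspen Plus can open, leaving the original untouched.
    inp_path = txt_path.rsplit(".", 1)[0] + ".inp"
    shutil.copyfile(txt_path, inp_path)
    return inp_path

# Example (hypothetical path):
# make_inp(r"C:\Work\flowsheet.txt")   # creates C:\Work\flowsheet.inp
```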
Note: Restoring BatchSep models into the user interface from input files may result in incomplete simulations. In particular, streams may not be connected to the block or may be connected to the wrong ports. After loading the input file, check and correct all stream connections for BatchSep blocks. The engine runs input files containing BatchSep blocks without such issues.
Keywords: Input file, summary, import, batch
References: None |
Problem Statement: How to display Nozzle in OptiPlant Model. | Solution: Procedures
Go to View | Show/Hide | Nozzles.
When the option is selected, it shows the nozzles on all the equipment having pipe connections.
The nozzles show up on the equipment even for those pipes that have failed to route.
To hide the nozzles, select the option Nozzles again to toggle the selection off.
Nozzle Properties
As a nozzle is selected, it displays its tag in the status bar. The tag of a selected nozzle is the same as that of the rule-based nozzle selected in the line-list for that line. Example:
A pump suction line will show nozzle tags as SUC
A line connecting to the tube side UP nozzle on a heat exchanger will show the nozzle tag as TUP
You can select any nozzle and relocate it per the design requirement to change the pipe route. As soon as a nozzle is moved, its tag gets updated by getting a suffix “UD” appended to it indicating that particular nozzle has been moved therefore, it’s no longer a rule-based nozzle location but a user-defined nozzle location. Example: HZ1N_UD, S1UP_UD
As soon as a nozzle is relocated, the line-list also gets updated automatically with the new tag as HZ1N_UD, S1UP_UD etc for only the nozzles that gets moved.
Keywords: Display Nozzle
References: None |
Problem Statement: Why is OptiPlant throwing a C++ Runtime Error upon exporting a 3D DXF file? | Solution: This error message appears if there are any spaces or special characters in either the project folder name or the path leading to it. To resolve the issue, make sure the project folder name and path contain no spaces or special characters.
Keywords: 3D DXF, C++ Runtime, DXF export
References: None |
Problem Statement: OptiPlant to AVEVA E3D/PDMS interface - On E3D/PDMS machine | Solution: OptiPlant shares a bi-directional interface with AVEVA E3D/PDMS. This document showcases the procedure to transfer Equipment, Structure, and Piping from OptiPlant to E3D/PDMS once you have all the files from OptiPlant.
On E3D/PDMS Machine:
This section highlights the pre-requisites and steps to import the OptiPlant generated deliverables for Equipment, Structures and Piping.
Pre-Requisites:
Setting The PROJASD Environment variable:
The PROJASD variable must be set in the systems environment variables and must be pointing to the OptiPlant working project folder to Import the piping. Please follow the below mentioned steps to set up this variable:
Copy your OptiPlant project folder to your E3D/PDMS machine.
Launch System Properties (Control Panel | System | Advanced system settings).
On the Advanced tab, click the Environment Variables button
4. The Environment Variables window will appear. Click the New button under the System variables table
5. In the appearing window, name the variable PROJASD. In the Variable value field, provide the OptiPlant project folder location.
6. Click OK to add the variable
7. Click OK on the Environment Variables window to save the changes and close the window
8. Restart your machine.
Configuration of Vars File:
Go to \\AVEVA\E3D12.1\E3Dui\des\admin and open the “VARS” file in notepad or WordPad to edit.
Edit the “VARS” file by adding the following five lines BEFORE “CHOOSE AUTOC OFF” as shown in the image below.
$* ASD Auto Router Setup
$G ROUTE = $M/%PROJASD%/
$U ROUTE
$S CALLUR = $:ROUTE$:$S1
$U CALLUR
Configuration of LEXICON Module:
In order to invoke the macros present in your project folder during the import process, we need to set up some User Defined Attributes (UDAs) in the Lexicon module. So, enter the LEXICON module and perform the following steps:
In the command window, at the prompt, type: $M/%PROJASD%\ASD_UDA
Then type: compile
It will Add ASD_UDA to your Lexicon module.
After this you are all set to import the Deliverables.
Process to Import OptiPlant Deliverables:
Equipment:
To import the Datal file from OptiPlant, please follow the given steps:
Launch E3D/PDMS and select equipment Zone
Now, simply drag and drop the Datal file to the command window of E3D/PDMS
Your Equipment(s) will get imported without any issues.
Structures:
To import structures, the E3D machine must have an active license for AVEVA AutoSteel. If you have this, please perform the following steps.
Launch E3D
Go to Tools >> AutoSteel (or the IFC button) >> SDNF, and then select the SDNF file to import.
Piping:
Launch E3D and then, command Window.
Select Piping Zone
Now in the command window type CALLUR LOADMAC ASDAutoALL.Bat 1.
Note: Please note that ASDAutoALL.bat is the batch file name you selected while generating the PML file. By default, when you route pipes in OptiPlant, it creates this batch file with all the pipes sequenced in it. Users can also create their own batch file, so the batch file name in this command may differ.
A confirmation dialog will appear. Click YES.
Your pipes will be loaded into E3D
Keywords: E3d, PDMS, Interface, Aveva
References: None |
Problem Statement: OptiPlant to AVEVA E3D/PDMS interface - On OptiPlant Machine | Solution: OptiPlant shares a bi-directional interface with AVEVA E3D/PDMS. This document showcases the procedure to transfer Equipment, Structure, and Piping from OptiPlant to E3D/PDMS.
On OptiPlant Machine:
There are a few pre-requisites that the user must follow prior to use this interface. The pre-requisites are as follows:
Pre-Requisites:
Perl:
The first step is to download and install Strawberry Perl using an admin account. It can be downloaded free of cost from https://strawberryperl.com/.
Once it is Installed, Go to the installation directory of Strawberry Perl and copy “Perl” folder to C:\Program Files (x86)\AspenTech\Aspen OptiPlant V12.1 folder.
Go to the environment variables and update the ‘Path’ system variable with the path of the Perl exe if the installation does not update it automatically. You may ask your IT team for assistance with this.
Reboot the machine
OptiPlant Project folder:
So, once you configure your OptiPlant machine for this interface, the next step would be to get your project folder ready to run AVEVA interface. Below are a few steps that need to be followed:
Go to C:\Program Files (x86)\AspenTech\Aspen OptiPlant V12.1\Bin folder and copy Aig2Pdms.pl file to your working OptiPlant project folder.
2. Go to C:\Program Files (x86)\AspenTech\Aspen OptiPlant V12.1\Data folder and copy A150.pl file to your working project folder
Note: Please note that A150.pl is a spec file. The user would need to write and rename the same spec file individually for all the spec’s that have been used in OptiPlant model. The procedure to write this .PL file can be found in OptiPlant help manual.
3. Copy all the macros from the C:\Program Files (x86)\AspenTech\Aspen OptiPlant V12.1\aroute folder and paste them into your project folder.
Generating Deliverables from OptiPlant for AVEVA Interface:
Once all the pre-requisites have been fulfilled on OptiPlant machine. Now we are ready to generate the Deliverables for Equipment, Structures and Piping.
Export OptiPlant Equipment(s):
Go to Deliverables >> E3D/PDMS >> Equipment
b. This will open a window to select Equipment, from this window click on Select all to select all the equipment for Datal file generation.
c. Now click on Generate Datal button.
d. The datal file will be saved inside the Deliverable sub-folder of your working project folder.
Export OptiPlant Structures:
For structures, OptiPlant generates a Steel Detailing Neutral File, also known as SDNF, which can be imported into PDMS/E3D. The procedure to export an SDNF file is as follows:
Go to Deliverables >> E3D/PDMS >> Structures
A window will Open up, from that window click on Select All to include all the structures for SDNF generation.
Finally, click on Generate SDN file. The SDNF file will be saved inside the Deliverables sub-folder of your working project folder.
Export OptiPlant Piping:
For Piping, OptiPlant converts piping from Brd format to PML format, so that it can be loaded into E3D/PDMS easily. Below are the steps that needs to be followed to convert OptiPlant Piping to PML format:
Go to Deliverables >> E3D/PMDS >> Piping
It will open a Window and ask to select the batch file.
Click on the Select Batch button. Once you click on the select batch button, a new window will open, from that window select the ASDAutoAll.bat file and hit on the Open button.
After selecting the batch file, click on the Generate PML file button. It will save all the pipes to PML format.
These files will be saved inside the PML folder within your working project folder
Keywords: E3D, PDMS, AVEVA, Interface
References: None |
Problem Statement: How can I access HYSYS special parameters via VBA e.g. cricondenbar? | Solution: The following VBA code shows how to access the cricondenbar pressure/temperature, the cricondentherm pressure/temperature, and the solid/ice formation temperature via the Envelope utility.
Sub GetCricondenValues()
Dim hyApp As HYSYS.Application
Set hyApp = CreateObject("HYSYS.Application.V12.0")
hyApp.Visible = True
Dim hyCase As SimulationCase
Set hyCase = hyApp.SimulationCases.Open("C:\Program Files\AspenTech\Aspen HYSYS V12.0\Samples\Atmospheric Crude Tower.hsc") ''full path to the file
hyCase.Activate
Dim bdRC As BackDoor
Set bdRC = hyCase.Flowsheet.MaterialStreams("Raw Crude")
bdRC.SendBackDoorMessage ("CreateUtility EnvelopeUtilityObject")
''The back door message "CreateUtility EnvelopeUtilityObject" creates an envelope utility
''titled "Envelope-<stream name>"
''in this case the Envelope object is called "Envelope-Raw Crude"
Dim CricondenbarValue As Double
Dim CricondenthemValue As Double
Dim utilityEnvelope As EnvelopeUtility
Set utilityEnvelope = hyCase.UtilityObjects("Envelope-Raw Crude")
CricondenbarValue = utilityEnvelope.CricondenbarValue
CricondenthemValue = utilityEnvelope.CricondenthemValue
Dim bdHC As BackDoor
Set bdHC = hyCase.Flowsheet.MaterialStreams("Hot Crude")
bdHC.SendBackDoorMessage ("CreateUtility EnvelopeUtilityObject")
''Again, this creates an envelope utility titled "Envelope-<stream name>"
Dim utilityEnvelopeHC As EnvelopeUtility
Set utilityEnvelopeHC = hyCase.UtilityObjects("Envelope-Hot Crude") ''in this case the Envelope object is called "Envelope-Hot Crude"
CricondenbarValue = utilityEnvelopeHC.Cricondenbar.GetValue("psia")
CricondenthemValue = utilityEnvelopeHC.Cricondenthem.GetValue("F")
Dim bdC As BackDoor
Set bdC = hyCase.UtilityObjects("Envelope-Hot Crude")
Dim selectionTD As Object
Set selectionTD = bdC.BackDoorVariable("Selection.502").Variable
''To find the cricondenbar temperature, locate the cricondenbar pressure in the pressure table
Dim objectPressureTable As Object
Set objectPressureTable = bdC.BackDoorVariable("Pressure.500.0.[]").Variable
Dim opt() As Double
opt = objectPressureTable.GetValues("psia")
Dim counterTable As Integer
counterTable = IndexArray(opt, CricondenbarValue) ''IndexArray is a user-written helper returning the array index of the matching value; define it separately
Dim objectTemperatureTable As RealFlexVariable
Set objectTemperatureTable = bdC.BackDoorVariable("Temperature.500.0.[]").Variable ''the table Temperature.500.0 belongs to the bubble point table
Dim ott() As Double
ott = objectTemperatureTable.GetValues("F")
Dim criconderbarValueTemperature As Double
criconderbarValueTemperature = ott(counterTable)
''To find the cricondentherm pressure, locate the cricondentherm temperature in the temperature table
bdC.BackDoorVariable("Selection.502").Variable.Value = 1 ''this changes the view on the UI to the dew point table
Set objectTemperatureTable = bdC.BackDoorVariable("Temperature.500.1.[]").Variable ''the table 500.1 belongs to the dew point table
ott = objectTemperatureTable.GetValues("F")
counterTable = IndexArray(ott, CricondenthemValue)
Set objectPressureTable = bdC.BackDoorVariable("Pressure.500.1.[]").Variable
opt = objectPressureTable.GetValues("psia")
Dim criconderthemValuePressure As Double
criconderthemValuePressure = opt(counterTable)
''''''''''''''''''''''''''''''''' find ice point
Dim icePoint As Double
Dim atg As Double
atg = 5 * 2 ''just for debugging; set a breakpoint here to inspect variable values
End Sub
Keywords: VBA, automation, Cricondenbar Pressure, Cricondenbar Temperature, Cricondentherm Pressure, Cricondentherm Temperature, Solid formation temperature, Ice formation temperature
References: None |
Problem Statement: Why are the mole flows specified for a stream different from the ones appearing in results for my electrolyte stream? | Solution: It could be confusing at first sight to observe different values of the components' molar flow rates in the input and results sections. This happens when using the true approach with Chemistry. The definitions of the reactions present are in the Chemistry folder of the Properties environment. Usually, the apparent components are specified in a stream, and the equilibrium, dissociation, and salt chemistry result in a molar expansion or contraction across the different true species. To verify that Aspen Plus is calculating the flows correctly, you can add the apparent component molar flow rate and total apparent molar flow rate properties to the stream report, which will report the values that were input.
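As a hand-calculation illustration of this molar expansion (not Aspen Plus output), consider complete dissociation of NaCl in water:

```python
# Apparent basis: what the user specifies in the stream input (mol)
apparent = {"NaCl": 1.0, "H2O": 10.0}

# True basis after full dissociation NaCl -> Na+ + Cl-
true = {
    "Na+": apparent["NaCl"],
    "Cl-": apparent["NaCl"],
    "H2O": apparent["H2O"],
}

print(sum(apparent.values()))  # 11.0 apparent moles
print(sum(true.values()))      # 12.0 true moles -- a molar expansion
```

The stream results report the true-species flows (12 mol here), while the apparent molar flow properties recover the 11 mol that were specified.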
Keywords: Different molar flows, material balance, inconsistency, reactive, components, chemistry
References: None |
Problem Statement: What is AspenTech’s policy about mixed versions and side by side installations of APC software? | Solution: Side-by-side installations of APC software (two different versions of the same product, like APC Desktop, installed on one server) are not supported.
Mixed versions of APC products (where the web server is a different version than the online server, for example) are allowed under certain conditions. However, for best feature compatibility, we always recommend installing one consistent version across all APC servers.
If mixed version installations exist (where one server is at v10 and another at v12 for example), here are some points to consider.
We require that the APC Web Server and Aspen Watch Server be the highest version in the product servers when you have a mixed version environment. This is because sometimes (not every release) there is a need to update the data service architecture to support new features. When a change like this happens, older web servers and watch servers will not be able to transfer the application data from the online server(s) due to incompatibilities.
Desktop software is not backward compatible. Once you open a DMC3 Builder project in a higher version, it is not guaranteed to function or open using the older original version. So it is always good to keep a backup of projects before upgrading or opening them in a newer version.
We recommend always running supported versions.
Keywords: side-by-side installation, APC, DMC3 Builder, Aspen Watch, PCWS.
References: None |
Problem Statement: How to set up Cim-IO for IP.21 (Cim-IO for Set-Cim) to read tags from the IP.21 database into Aspen Watch as inputs for Custom Calculations? | Solution: Aspen Watch provides a default method of creating records in the InfoPlus.21 database and reading/writing from them via Miscellaneous Tags. By creating a Miscellaneous tag in Aspen Watch using “None” as the IO Get Record Type, there is a record automatically created by the same name under the Definition Record AW_MSCDef and the field name AW_VALUE is updated in Aspen Watch in real time. Select this “None” option if the Miscellaneous tag obtains a value from a source other than Aspen Cim-IO. An example might be a custom calculation result that is placed in a database record by Aspen Calc. Another use case would be if the user wants to go into IP.21 Administrator and manually change the AW_VALUE for that Miscellaneous definition record and those changes will be reflected in Aspen Watch. If using Miscellaneous definition records is sufficient for your case, you can skip down to Step 4 (points 1-2) below as configuring the Cim-IO interface is not necessary. All you need to do is create Miscellaneous tags in Aspen Watch. Instructions for this are also provided in the Watch Maker Help files.
If the user would like to read/write data from the IP.21 database for records that are not under the Miscellaneous Definition Records, AW_MSCDef, they can configure a Cim-IO interface to read/write directly from the database for any records.
There are two types of Cim-IO interfaces we can set up for clients (Aspen Watch in this case) to read from the IP.21 database, using Cim-IO Interface Manager:
Cim-IO for OPC interface (Aspen.InfoPlus21_DA.1)
KB on how to set up Cim-IO for OPC DA interface: https://esupport.aspentech.com/S_Article?id=000086893
Note: syntax required when reading record fields from IP.21:
“<IP.21 Definition Record Name>”.<IP.21 Field Name>
(the IP.21 record/tag name between double quote marks, followed by a period, and then the IP.21 field name)
Example using Miscellaneous tag record under AW_MSCDef: “TI1001”.AW_VALUE
Cim-IO for IP.21 interface (aka Cim-IO for Set-Cim)
Note: The syntax for reading/writing using the interface Cim-IO for Set-Cim is:
<IP.21 Definition Record> <IP.21 Field Name>
(the IP.21 definition record name, followed by a space, then the IP.21 field name)
Example using Miscellaneous tag record under AW_MSCDef: TI1001 AW_VALUE
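The two address syntaxes differ only in how the record and field names are joined. The helper below (a Python sketch for illustration only, not part of any AspenTech API) builds both forms:

```python
def opc_address(record, field):
    # Cim-IO for OPC DA syntax: "RECORD".FIELD
    return '"{0}".{1}'.format(record, field)

def setcim_address(record, field):
    # Cim-IO for Set-Cim syntax: RECORD FIELD (space-separated)
    return "{0} {1}".format(record, field)

print(opc_address("TI1001", "AW_VALUE"))     # "TI1001".AW_VALUE
print(setcim_address("TI1001", "AW_VALUE"))  # TI1001 AW_VALUE
```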
This article will focus on the second approach, using the Cim-IO for IP.21 interface:
Steps 1-3: set up Cim-IO for IP.21
Step 4: create Miscellaneous tags in Aspen Watch, map their IO address to the IP.21 record fields, use the miscellaneous tags as inputs in custom calculations.
Note: this information can also be found in the APC Configuration Guide found here on APC servers: C:\Program Files (x86)\Common Files\AspenTech Shared\APCConfigurationGuide
Step 1 – Create Cim-IO for IP.21 Interface on the Aspen Watch / IP.21 Server
Using Cim-IO Interface Manager, on the IP.21 server, create a new interface of type Cim-IO for IP.21:
After clicking Next, you will see the Interface name is automatically populated as CIMIOSETCIM_200. The Interface description is arbitrary. Select the “Start at boot” option to have it automatically start when the Cim-IO Manager service is started. Also, leave the Allow entry of username and password check box cleared. (In most circumstances, the administrator's account specified during installation of Aspen InfoPlus.21 and Aspen Cim-IO should have sufficient permissions to enable Aspen Cim-IO services.)
Finish the steps in the wizard to create the interface. Then stop the interface, uncheck the Enable Store&Forward option (under Description), save configuration and start the interface again. Aspen Watch does not require this Store & Forward feature.
The final configuration should look like this:
There are two services running for the Cim-IO for Set-Cim:
CIMIOSETCIM_200 – this is the DLGP service, default port 60017
CIMIOSETCIMH_200 – this is the history DLG service, default port 60018
Step 2 – Configure the Services file on the Watch / IP.21 server
The ports for these DLGP services should be defined in the Services file.
Open to edit the Services file in a text editor, which is usually located here: C:\Windows\System32\drivers\etc
Verify these two entries are here:
CIMIOSETCIM_200 60017/tcp # InfoPlus.21 Cim-IO Server DLGP service
CIMIOSETCIMH_200 60018/tcp # InfoPlus.21 Cim-IO Server History DLGP service
Copy this information to the Services file on any remote server that will be reading data from this Cim-IO for Set-Cim interface and make sure the ports are not blocked by a firewall. For example, you may want to add this configuration to the APC Online servers for running applications reading data directly from IP.21 database.
If it is required to change the default port numbers, they can be edited in the Services file. The port number can be any unique number higher than 5000. Lower numbers are reserved for standard services and system use. See KB article for instructions:
How to change port numbers for logical device entries, in Aspen CIMIO interface manager? - https://esupport.aspentech.com/S_Article?id=000094184
Step 3 – Configure the cimio_logical_devices.def file on the Watch / IP.21 server
Cim-IO clients use the cimio_logical_devices.def file to read data from the Cim-IO server. In the case of Miscellaneous tags for Aspen Watch that are reading tags using Cim-IO, which we will configure later, they require the use of Logical Devices specifically named IODEVx. In the Watch server, configure a Logical Device named IODEVx using Cim-IO for Set-Cim we just created and use the host as the Watch / IP.21 server itself.
Open to edit the cimio_logical_devices.def file using a text editor in the Watch Server found here: C:\Program Files (x86)\AspenTech\CIM-IO\etc
Add the following entry:
IODEV1 WatchServerName CIMIOSETCIM_200 CIMIOSETCIMH_200
where IODEV1 is the logical device name, the WatchServerName is the host machine for IP.21 / Aspen Watch, the CIMIOSETCIM_200 is the DLGP service name and CIMIOSETCIMH_200 is the DLGP history service name.
Optional: if you are not planning to create Miscellaneous tags in Aspen Watch and are only looking to read tags from IP.21 that are part of other records, the logical device name does not need to be IODEVx. Instead, you can use any arbitrary name, the common one being IOIP21. The DLGP and History DLGP services will still be using Cim-IO for Set-CIM to read from the IP.21 database, with the host being the Watch/IP.21 server:
IOIP21 WatchServerName CIMIOSETCIM_200 CIMIOSETCIMH_200
Optional: If you are planning to add Miscellaneous tags to be read directly from the DCS/OPC server and have already configured a Cim-IO for OPC interface, you can also add an entry like this:
IODEV2 DCSServerName CIMIODCS1 CIMIODCS1_his
Where IODEVx is the logical device name required for Miscellaneous tags, DCSServerName is the DCS/OPC host, CIMIODCS1 is the DLGP service name for the Cim-IO for OPC interface and CIMIODCS1_his is the history DLGP service name.
Copy these entries for the cimio_logical_devices.def files on any remote servers that will be reading from these logical devices. For example, you may want to add this configuration to the APC Online servers for running applications that are reading tags from IP.21 directly.
This would be a good time to use Cim-IO Test API to check if the configuration was done properly and you can read data from IP.21 database using this logical device. The syntax for reading/writing using the interface Cim-IO for Set-Cim is:
<IP.21 Definition Record> <IP.21 Field Name>
(the IP.21 definition record name, followed by a space, then the IP.21 field name)
Example using Miscellaneous tag record under AW_MSCDef: TI1001 AW_VALUE
Step 4 – Create Miscellaneous Tags and Use in AW Custom Calculations
In order to use a tag as an input to a custom calculation in Aspen Watch, you will need to use Variable Binding when creating the calculation. You will notice when trying to add variable bindings, the source of the tag can either be:
APC controllers or IQ applications
Miscellaneous Tags
PID Loop Tags
If the tag that you would like to read from IP.21 is not associated with any of the existing/automatically created entries, we will need to create a Miscellaneous tag that links to the IP.21 record, to be able to bind it to the calculation.
Detailed instructions on how to create a single Miscellaneous tag or multiple tags, see Aspen Watch Maker Help File.
Open Watch Maker and navigate to Tools > Tag Maintenance > Miscellaneous tab and click Add Tag.
When selecting the IO Get Record Type, if you choose “None”, this will create a Miscellaneous tag with no link to any Cim-IO devices, it will only create a record in IP.21 under AW_MSCDef and store its data which can be read or written to. If you only want to read/write from the AW_VALUE field from Miscellaneous Definition Records, AW_MSCDef, then this will be sufficient and configuring Cim-IO for IP.21 interface is not necessary. Once you create this tag in Aspen Watch, the record will be created in the IP.21 database (as seen in Infoplus.21 Administrator) and changes made to the record field AW_VALUE in IP.21 Administrator will be reflected in Aspen Watch Miscellaneous tags.
For example, creating Test miscellaneous tag with IO Get Record Type “None”:
The record is created under AW_MSCDef, the value for AW_VALUE can be manually changed by the user in IP.21 Administrator and Aspen Watch will read this value in real-time:
In order to read/write from the Cim-IO interfaces we created in the previous steps for records not under AW_MSCDef, you will need to create a Miscellaneous tag with a Cim-IO Configuration (IO Get Record Type other than “None”). Choose the logical device name IODEVx for Cim-IO Device Name that we created in Step 3 above, as defined in the cimio_logical_devices.def file for CIMIOSETCIM DLGP service. As noted in Step 3, for the IO address, the syntax will be:
<IP.21 Definition Record> <IP.21 Field Name>
Here’s an example of creating a miscellaneous tag that reads the IP_INPUT_VALUE field from the IP_AnalogDef record called AAA_Test.
This is the record in IP.21 Administrator:
This is the configuration for adding the single Miscellaneous tag, note the Cim-IO Device Name is IODEV1 and the IO Address is AAA_Test IP_INPUT_VALUE
This is the result of the tag being read in real-time from the database in Aspen Watch:
Now that the Miscellaneous tag has been created and is linked to the IP.21 record field, via Cim-IO, it can be used for Variable Binding in an Aspen Watch custom calculation:
Other
Keywords: aspen watch, aw, ip21, miscellaneous, tags, read, write, cimio
References: s:
How to collect (historize) Inferential Qualities (IQ) apps using Watch Maker? - https://esupport.aspentech.com/S_Article?id=000074433
How to configure the interface for multiple IP.21 servers - https://esupport.aspentech.com/S_Article?id=000076894 |
Problem Statement: Sometimes the startup of Aspen InfoPlus.21 fails with the indication of problems with one or more of the history filesets.
The normal procedure for solving the problem would be for the user to try to run the executable called h21arcckwizard.exe which is located in the directory...
drive:\.....\AspenTech\InfoPlus.21\C21\H21\bin
Note: The executable can also be accessed via the Aspen InfoPlus.21 Manager by clicking on the menu selection Actions | Repair Archive
However, with the database not running, some history initialization needs to take place first. | Solution: The solution is to open a Windows OS Command Prompt (formerly called the DOS Prompt).
Change Directory to
drive:\.....\AspenTech\InfoPlus.21\C21\H21\bin
Click on Enter after typing
h21init.exe
The cursor will just be blinking.
Leave the command prompt window open in that state.
Now open the Repair Archive Wizard either from the Aspen InfoPlus.21 Manager or else from Windows Explorer.
Once the required Filesets have been repaired, go back to the command prompt window and do a CTRL-C to close the h21init executable.
You can now retry starting the database.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional info: If, when running the h21arcckwizard.exe wizard, the filesets do not show up in the list after selecting the repository, open another Windows OS Command prompt using 'Run as Administrator' and issue a command like this:
h21arcck -r TSK_DHIS -a1 -o -d -t0
The -r designates the repository and the -a specifies a file set number (do this command for any file set that does not show up).
Please see thisSolution for an explanation of the switches (parameters) for h21arcck.exe (and other history programs):
What are the Aspen InfoPlus.21 history utility programs and their functions?
https://esupport.aspentech.com/S_Article?id=000078793
Keywords: None
References: None |
Problem Statement: If you are upgrading your Aspen InfoPlus.21 database and at the same time migrating to a new server then you will undoubtedly need to use the h21chgpaths.exe utility to alter the history fileset paths to reflect the new location of the database.
If you do not adjust the fileset paths, you may be unable to start the Aspen InfoPlus.21 database due to the following error that appears in the TSK_DBCLOCK.out file:
Cannot Mount Read-Only Active Archive > #
Below is a description of the h21chgpaths command line utility. Also, please review | Solution: #000062839 for more information on how to properly move or upgrade an InfoPlus.21 database.Solution
!!!! SAVE A COPY OF CONFIG.DAT BEFORE USING THIS UTILITY !!!!
This file can be found in the following directory C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\dat
The utility is called 'h21chgpaths.exe' and it is located in C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin directory.
The 'h21chgpaths' must be run from DOS as an Administrator.
Usage syntax:
h21chgpaths old_string new_string
The program will search for 'old_string' in all the paths in your config.dat and replace it with 'new_string'. If either string contains a space in it, enclose the string in double quotes. Be careful, because ALL 'old_strings' are replaced, not just the first one found in the path. So if you have a path like this...
\\system1\archives_for_system1\TSK_DHIS\arc1
and you run h21chgpaths like this...
h21chgpaths system1 system2_new
you will get this...
\\system2_new\archives_for_system2_new\TSK_DHIS\arc1
If you really only wanted the first one replaced, run h21chgpath like this...
h21chgpaths \\system1 \\system2_new
You can always check the results by running H21MON (in DOS) and looking at all your paths. Both of these tools (h21chgpaths and H21MON) should only be used while your database is NOT running. You can use H21MON to adjust any mistakes you find, or you can start with a fresh copy of config.dat and try using h21chgpaths again.
Example:
View the Repository tab in the Properties for the repository, and check the Current path value, for example \\system1\ip21g200his\. This is the path to the share where the file sets for the repository point, and it is the oldstring argument for h21chgpaths.
Then decide where you are going to put the file sets with data in the new system, system2; create a share for that folder, such as ip21g200his_new, and use it as the newstring for the h21chgpaths command. The newstring argument would then look like: \\system2\ip21g200his_new\.
The command can be built like:
h21chgpaths \\system1\ip21g200his\ \\system2\ip21g200his_new\
Keywords: KB 106366
Change paths
H21mon
InfoPlus.21
References: None |
Problem Statement: To fully secure an Aspen InfoPlus.21 database, it is desirable to give most end-users limited access to the records held within it. Nevertheless, if such users also need to make full use of features such as Statistical Process Control (SPC), Key Performance Indicator (KPI) or Golden Batch Profile (GBP), some configuration changes are required to open the database up sufficiently.
This article gives guidance as to what configuration changes will be required to allow full access to these listed features whilst maintaining only limited access rights to all the other records. | Solution: Given that we want end-users to have limited privileges within the database but open access to some records, it does mean that record level security must be used.
Consequently, the security role that these users belong to should be given (Read, Write, Create, Delete) Database level permissions.
To do this, see How to allow writes operations on specific tags for selected users section in following article:
KB 146027-2
A practical example for setting up Aspen Infoplus.21 database security and record level security
The article says "For Writers_Role select Read and Write permission", but note that we also want to add Create and Delete.
It is then necessary to set Default Record permission = Read, Delete, Activate (i.e. no Write) - See Setting Default Record Permissions in the Aspen InfoPlus.21 Administrator help file.
With this configuration, end-users can create records but cannot modify them until specific record permissions are applied; and since they have limited privileges, they won't be able to do this themselves.
Our advice is that record creation could be done initially by someone who can also grant security permissions to the new records. They can grant the WriteGeneral permission to the role for the created records. For example this following SQLplus statement could be saved as a QueryDef record in the database, and would grant permission for all the GBP/KPI/SPC records to Writers_Role when activated:
Grant Read, WriteGeneral On (
SELECT NAME FROM ALL_RECORDS WHERE
DEFINITION IN ('BatchProfileDef', 'KPIDef', 'BatchKPIDef', 'KPIScheduleDef',
'Q_XBARCDef', 'Q_XBar21Def', 'Q_XBarCDef', 'Q_XBarCSDef', 'Q_XBARDef',
'Q_XBarS21Def', 'Q_XBARSDef'))
TO Writers_Role;
The problem we are trying to solve here is that a limited privileges end-user is able to create a new record (according to the Default Record permission), but they cannot modify any part of the record until it is given sufficient WriteGeneral permission for the role that person is a member of.
Note, this query (or something similar) could be saved in the database and scheduled to run on a regular interval to remove the administrative overhead, in which case the limited privileges user need only wait for that period after creating a record for it then to be granted permission to be modified by the same person.
Keywords: aspen framework
References: None |
Problem Statement: The capabilities of using SQLplus queries for a standard IP.21 database are exactly the same as with using them for an Aspen Watch IP.21 database. However, Aspen Watch adds additional records that are applicable specifically for APC applications, which may have different names for areas where data is stored in the IP.21 database. Therefore, SQLplus queries that were created for standard IP.21 databases may not necessarily work for those records in an Aspen Watch IP.21 database and would need to be modified based on the different record structure of the Aspen Watch IP.21 database.
This article aims to provide tips on how the user can understand the Aspen Watch IP.21 database structure, navigate around it and construct SQLplus queries based on this information.
Disclaimer: Aspen Watch uses custom records and stored procedures in IP.21 to store and extract APC information and present it in a user-friendly way such as KPI plots, History Plots, Inspector, Reports, etc. The record structures and stored procedures used by Aspen Watch are subject to change as enhancements are made to the product. Therefore, any custom queries that access this information could cease to function if breaking changes occur in the product. This article will provide tips for users that choose to use SQLplus queries to retrieve the data being collected, but this is not a recommended practice. | Solution: Finding DMCplus / DMC3 Entries Records and Data in IP.21
The DMCplus and DMC3 Entry Dictionaries (part of Help Files) include the Aspen Watch record information to help the user locate where that value gets stored in Infoplus.21 Administrator. This information can then be used to construct SQLplus queries to extract the data.
The entry dictionary can be accessed directly from the PCWS web page by clicking on any parameter in the column header that is underlined (or from the APC desktop tools):
Here is an example of the DMC3 entry for Steady State Target (or CurrentTarget), note the section called “Aspen Watch Information”:
The following information is provided here, which can then be used to locate information in InfoPlus.21 Administrator:
Definition Record – This is the main definition record name under the node “Definition Records” and usually begins with “AW_” to identify it as an Aspen Watch record. Under this record name node, you may find more records that are for the specific variable or controller.
In the example of CurrentTarget, the Definition Record is AW_INDDef, and under this node you will find more nodes for each variable that has a CurrentTarget entry, like the variable FEED.
Note: the variable name in the record may not be the same as the tag name used in the controller, but may include a prefix. For example, this variable in the controller is called FEED, but the record name includes a prefix identifying the controller, so the name to be used in SQLplus queries is C01I_FEED:
Field Name – When a record name is selected in IP.21 Administrator, the fields are displayed on the right side for each parameter associated with the record. The field name noted in the Entry Dictionary is for the current value and the field name in brackets is the historical value associated with a time stamp. For example, the field name for CurrentTarget is AW_SSMAN, which you can see on the right side here with its current value of 37.921:
Map Record – A database record that correlates a record field to a piece of data that can be requested by Process Explorer and displayed. The Map Record can be found under Definition Records > AtMapDef and is useful for SQLplus queries because it identifies where data is stored for that record. For example, it tells you the definition record name, history area and time stamp area so you can easily use them in a query rather than navigating to the record itself to find where it is stored:
Aggregate Fields – if aggregate values exist for an entry, such as running averages, then it will be stored in the AW_#_IN_MEMORY_x history repeat areas.
However, aggregate fields will not be applicable for all entries so the user may request information from aggregates at their own risk. One example of a definition record that does have some aggregate fields is AW_PIDDef, such as averages (ex. AW_AVG_PV_HR_H) and standard deviations (AW_STD_ERR_HR_H), but this will not be the case for all Aspen Watch records.
It is also important to note that since the aggregate fields were designed to simplify data calculation and extraction for Aspen Watch Reports, the data is generated at the start of the timestamp, i.e., either the top of the hour, the top of the day, or top of the month. Therefore, the user is limited to the timestamps of data that can be requested directly from the Aggregate Fields.
A better approach to generating aggregate data in SQLplus is to use a regular IP.21 Aggregates pseudo-table query on the raw data itself, rather than trying to read directly from the Aggregates fields. Using the pseudo-table approach gives the user more flexibility over the exact time range they want to get data for when calculating averages and standard deviations. See the end of the "Specifying Duration for History Repeat Area" section below for an example of an Aggregates pseudo-table query that reads from the raw history data (not the Aggregates fields) and calculates averages for a specified time range.
Requesting Historical Data from History Repeat Area
The historical data can be found in the repeat areas under the green nodes AW_#_IN_MEMORY_x. The parameter name to be used to extract the data is found in the Column Header. For example, when requesting data for SS Target, the name from the column header is AW_SSMAN_H. With each value, there is a timestamp under the column AW_H_TIME_1, which can be used to specify the time range:
Note: as shown in the screenshot above, there are four history repeat areas. The Aspen Watch entries have been split among these 4 areas based on how frequently the variables are expected to change. For example, AW_#_IN_MEMORY_1 would have entries that change every control cycle like the Steady State Target. In AW_#_IN_MEMORY_2, entries that change periodically can be found such as limits and service status. In AW_#_IN_MEMORY_3, there are entries that change rarely like tuning parameters.
This is done to conserve disk space. However, it is important to note this structure when creating SQLplus queries because with each Memory repeat area, there is a unique timestamp field associated with the historical data. So, in AW_#_IN_MEMORY_1, the timestamp associated with historical data entries (like AW_SSMAN_H) is AW_H_TIME_1. However, when creating a query to read data for entries found in AW_#_IN_MEMORY_2 (like variable status AW_INDSTA_H), the timestamp associated will be AW_H_TIME_2. You can find the associated time stamp name by either navigating to the repeat area and searching for the entry or, more efficiently, check the Map Record instead as it will have this information too.
Here is an example of a query that can be used to extract historical data for the SS Target of independent variable FEED:
SELECT NAME, AW_H_TIME_1, AW_SSMAN_H FROM AW_INDDEF WHERE NAME= 'C01I_FEED'
AND AW_H_TIME_1 BETWEEN '12-JUL-21 16:00' AND '12-JUL-21 17:00'
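Because the timestamp column name depends on which repeat area holds the history field, query text like the example above can be generated programmatically. The following Python sketch is purely illustrative (the helper function is hypothetical, not part of Aspen Watch); it pairs a history field with the timestamp column of its repeat area (AW_H_TIME_1 for AW_#_IN_MEMORY_1, AW_H_TIME_2 for AW_#_IN_MEMORY_2, and so on):

```python
# Hypothetical helper (not part of Aspen Watch) that builds SQLplus query
# text for an Aspen Watch history field, pairing the field with the
# timestamp column of its repeat area.
def build_history_query(defrec, name, field, area, start, end):
    ts = f"AW_H_TIME_{area}"
    return (f"SELECT NAME, {ts}, {field} FROM {defrec} "
            f"WHERE NAME = '{name}' "
            f"AND {ts} BETWEEN '{start}' AND '{end}'")

# Reproduces the query text from the example above:
q = build_history_query("AW_INDDEF", "C01I_FEED", "AW_SSMAN_H", 1,
                        "12-JUL-21 16:00", "12-JUL-21 17:00")
print(q)
```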
Specifying Duration for History Repeat Area
When requesting data from the history repeat area for a specified time range, you can select the timestamp field that is in the same repeat area as the history value field. This is described in the example above using AW_H_TIME_1.
Alternatively, the following query example shows how to generate vector timestamp and value data from the SQLplus History pseudo-table without having to know the timestamp field name:
local rec char(24);
local fld char(24);
local StartTime timestamp;
local EndTime timestamp;
local SampleTime integer;
StartTime = '15-JUL-21 11:00';
EndTime = '15-JUL-21 12:00';
--- sample time is the frequency of data timestamp, i.e. every x seconds of data from start to end time
SampleTime = 15;
--- rec is name of record
rec = 'C01I_FEED';
--- fld is name of history repeat area entry
fld = 'AW_SSMAN_H';
select ts, (case trim(VALUE) when '' then PAD('-9999.0' TO 12)
else PAD(VALUE TO 12) end) as value
from HISTORY
where REQUEST = 6 and STEPPED = 1
and NAME = rec
and FIELD_ID = FT(fld)
and PERIOD = SampleTime * 10
and TS between StartTime and EndTime;
The example above is requesting data from the SQLplus History Pseudo Table. This is common practice to simplify SQLplus queries. One thing to note when using this method, is that it is important to identify the field that the data is being requested from by adding the following line, where “fld” can be replaced with the history field name, such as ‘AW_SSMAN_H’:
and FIELD_ID = FT(fld)
If the field is not identified in the query, it may result in either the query using a default field or you may see a message indicating “no rows selected”.
Another common practice is to use a SQLplus Aggregates Pseudo Table to extract raw data and perform automatic calculations on it, such as averages and standard deviations. Similar to the query above, here is an example of an Aggregates pseudo-table to calculate averages:
SELECT name, ts, avg FROM aggregates
WHERE name LIKE 'C01I_FEED'
AND FIELD_ID=FT('AW_SSMAN_H')
AND ts between '01-JUL-21 00:00' AND '01-JUL-21 06:00'
AND period = 0:30
As noted above as well, it is important to specify the FIELD_ID in the query, otherwise it will gather information for a default field or none at all. It is also important that you specify a field that is raw historical data and not aggregates data. If you pick an aggregates field such as an hourly average, then run an aggregates pseudo-table query on it, you will essentially be calculating the average of the hourly average data. So, this type of query should only be used on raw data.
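The bucketing that an Aggregates pseudo-table query performs on raw data can be pictured with a small sketch. The Python below is illustrative only (the function and sample data are made up, not part of SQLplus); it groups raw timestamped samples into fixed-width periods and averages each bucket, which is also why running an aggregates query on already-averaged fields would double-average the data:

```python
from collections import defaultdict

# Illustrative sketch of period averaging: raw (seconds_offset, value)
# samples are grouped into fixed-width buckets and averaged per bucket.
def period_averages(samples, period_s):
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[t // period_s].append(v)
    return {b * period_s: sum(vs) / len(vs) for b, vs in sorted(buckets.items())}

# Made-up raw samples every 10 minutes, averaged over 30-minute periods:
raw = [(0, 10.0), (600, 12.0), (1200, 14.0), (1800, 20.0), (2400, 22.0), (3000, 24.0)]
print(period_averages(raw, 1800))  # {0: 12.0, 1800: 22.0}
```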
Keywords: sql, sqlplus, aw, aspen watch, ip21, infoplus21, database, query
References: None |
Problem Statement: In some scenarios, Aspen Recipe Explorer clients and MS SQL Server all run on servers that are not in a domain but in a workgroup. Recipe Explorer requires Windows Authentication to log in to the database, which requires that the interactive account exist on both the SQL Server and the Recipe Explorer clients, with the same password.
If the account does not exist in MS SQL Server, or the password is not the same, you will get the error 'Invalid database server' when connecting to the database in Recipe Explorer. | Solution: The steps to resolve this problem:
1. Create an account with the same name in Recipe Explorer and MS SQL Server, and give both the same password. On the Recipe Explorer client, this account can be in the Users group or another group that has permission to log on interactively and invoke Recipe Explorer; on the MS SQL Server, this account can be in the Guest group.
2. Open SQL Server Management Studio, add this account as Windows authentication and assign to public and sysadmin Server Roles.
3. Open Recipe Explorer on the client and confirm that you are able to connect to the database on the MS SQL Server.
Keywords: Process Recipe, SQL Server, workgroup
References: None |
Problem Statement: How do I resolve access denied errors to Aspen Production Control Web Server with domain-based user groups? | Solution: If this is a role-based access configured for application, it will be mandatory to ensure the below:
- Make sure to Add or Select/Update the appropriate Host name in the top section of the Security page (Configuration Tab) in PCWS.
- When configuring role permissions, the Host name should be the name of the DMC3/DMCplus/IQ Online server where the application is running and NOT the host name of the PCWS web server.
If problem persists, verify that the AFW Security Client Service is running with an account that has permission to access the domain accounts. The Logon As column in the Services panel will indicate which account the service is using. This account needs access to the active directory or domain server to look up domain user accounts.
Then, in the AFW Security Client Service Administrator, click Refresh Cache and then click Advanced Options.
Under Test Security Functions, click Refresh Cache and then click the Get Roles option.
You should be able to view the user here:
And in AFW Security Manager
Once you verify that the user has the roles, you should be able to access PCWS with this user.
Keywords: Domain
AFW
PCWS
References: None |
Problem Statement: Is it possible to make unique Solution IDs in Aspen PIMS (merge two result.mdb databases)? How can we identify the different solutions in a multi-user environment using a common database?
It was thought that the Solution ID is unique, but it was found:
So different users on different machines can create the same Solution ID, but storing them in the same database can produce mixed reports.
Is there a method to make it unique regardless of the user?
Solution
For the Solution IDs stored in SQL Server, the Solution IDs will only be unique within a single PIMS database and not among several SQL Server databases. The Solution ID is retrieved through a stored procedure in the database and is determined by a counter or identity in the SQL Server database.
No, there is no known method or automated method to merge several PIMS SQL Server databases together while making sure the Solution IDs are all unique. This would likely involve a manual process of identifying the duplicate Solution IDs and then manually creating queries to update the duplicate Solution IDs to unique IDs.
The Solution IDs for Access should mostly be unique, but could collide, since Access Solution IDs are created based on the current time. So, if several case executions were run at the same time and written to different Access databases, it is possible to have the same Solution ID in different databases.
Instead of merging SQL Server databases together, it is better that users just write to the same SQL Server database. SQL Server can handle multiple users writing to the same database.
Keywords: Solution ID
Database
SQL Server
References: None |
Problem Statement: You cloned a failure agent, but you cannot train the clone. When editing the cloned agent in Aspen Mtell Agent Builder, the failure the original agent was trained on does not appear as an option under Failure Work Items.
Example:
Original Agent
Cloned Agent | Solution: It is likely that this work order is no longer marked as a failure in the Failure Library. If this is the case, you should see circular arrows next to the original agent icon.
To train the cloned agent on this failure, you will need to remark it as a failure in the Failure Library.
In Agent Builder, navigate to Failure Library -> Failure Editor. In the Equipment Tree, click on the asset/location this failure belongs to.
Locate the desired failure. The Is Failure box will not be checked.
Check the Is Failure box.
If you get an Agents Out of Sync pop up, make sure none of the agents are selected. This will allow you to keep those agents without having to retrain them.
Next time you try to train the cloned agent, this failure will appear as an option.
Keywords: Cloned agent
Missing failure
Missing work order
References: None |
Problem Statement: Trouble shooting problems with the Production Control Web Server | Solution: Utilize the following trouble shooting guide:
After upgrading PCWS, some web browsers show script errors or do not display the Navigation Bar
Solution: Access the web page and dismiss any errors, then press Ctrl-F5 to force the browser to fetch all web page content from the server and refresh the browser cache.
PCWS Trouble Shooting Guide
This guide is intended as a tool for trouble shooting commonly reported problems with the Production Control Web Server, and contains information from a number of earlier knowledge base articles.
Correcting problems with PCWS crashing, hanging, disconnecting or no longer receiving updates
PCWS stops working with Service Unavailable Message (IIS 6.0)
Configuring DAIS Trader
Trouble Shooting the DAIS Trader
Correcting problems with PCWS crashing, hanging, disconnecting or no longer receiving updates
Back to Top
PCWS can lose its connection with the online controllers, or simply fail to operate correctly, if improper Network Interface Card (NIC) configurations or power management settings are used.
In most cases, modifying a few settings on all related PCs will allow PCWS to operate normally without interruption. These settings must be made on the PCWS server itself and also on all PCs that have DAIS AtAcoViewNode offers listed in the Dais Trader Manager (trman.exe). This typically means all DMCplus, SmartStep and Apollo Online PCs on the network (don't forget notebook PCs being used for plant testing also).
ALSO NOTE: Installation of Microsoft updates (Windows Update) has been known to re-set some of these settings. Therefore you should re-check these periodically.
Network Interface Card (NIC) settings
Desktop settings for PCWS server and all Online servers
Multiple Network Interface Cards (NICs)
Virtual Network Adapters (Virtual PC & VMware)
Use IIOP protocol instead of TCP or UDP for Dais
Network Interface Card (NIC) settings
Back to Top
Select the Properties for your Network connection(s) and click on the Configure button.
Never use the Auto Detect option for the link speed and duplex. Use the correct values for the network. This prevents re-negotiations from happening and disrupting CORBA communication. Also check the switch(es) that the NIC cables are connected to and make sure they use the same settings.
Never allow the computer to turn off the device.
Desktop settings for PCWS server and all Online servers
Back to Top
Right-Click on the Desktop and choose Properties.
Disable the screen saver or use one that uses little or no CPU when active.
Then click the Power... button (if available).
Never allow the system to enter standby mode or turn off the hard disks.
Never allow the system to hibernate. Even one of the Online PCs going into hibernation mode can disrupt PCWS communication to the point where the web server hangs.
Multiple Network Interface Cards (NICs)
Back to Top
If your Web server or Online PCs have multiple NICs, you must make sure that the primary card is the one that DAIS Trader is associated with. If you have them configured to Team, then you must make sure the teaming software uses a virtual NIC configuration with a single MAC address and IP address. DAIS will not work in any other configuration where multiple NICs are present.
Select the Advanced Settings... menu option from Network Connections. This may have to be done inside the teaming software provided by your NIC vendor, but it should provide a way of changing the adapter and binding order.
Make sure the network connection that is used for the Dais traffic is listed FIRST in the Connections list.
Use IIOP protocol instead of TCP or UDP for Dais
Back to Top
After all the above items have been checked, if you still have problems, it may be that a network hardware issue is causing partial transmission of packet data between the CORBA servers and the Data Provider. A typical example of this is a bad network cable or bad port in a network switch that causes periodic transfer rate re-negotiations or re-transmissions of data. Using the IIOP protocol adds additional validation checks for the packet data being passed between the CORBA servers and the Data Provider and can provide more robust error correction for these situations.
You can force Dais to use the IIOP protocol instead of TCP or UDP via the Dais ntconf.exe program. You must do this on the PCWS server as follows:
1) Navigate to the \Program Files\Common Files\AspenTech Shared\Dais\bin folder.
2) Double-click on ntconf.exe
3) Click on the General button.
4) Change the Protocol: option from "TCP UDP IIOP" to just "IIOP" (without the quotes).
5) Click OK to exit the General dialog.
6) Click the Clean button to clean all offers.
7) Click the Master Trader button to check where the Master Trader is running.
8) Restart the Dais_Trader Service on the Master Trader machine determined from step 7.
9) Make sure each dmcp_viewsrv.exe, smartstep_viewsrv.exe and apollo_viewsrv.exe process is restarted on all online machines that use the same master trader determined in step 7. Use the shutdown_dmcpview then startup_dmcpview commands from a command prompt (replace smartstepview or apolloview in place of dmcpview for all commands as needed) to restart those CORBA view servers so they all use the IIOP protocol. This procedure prevents you from having to stop any running controllers on these machines.
10) Finally, restart the Aspen ACO View Data Provider Service on the PCWS machine to complete the process. All web sessions will be logged out and will simply need to reconnect.
Configuring DAIS Trader
Back to Top
The Dais Trader is installed automatically with the Aspen Production Control Web Server (PCWS), Aspen DMCplus, Aspen SmartStep, Aspen Apollo and Aspen IQ online applications. It is used to facilitate the connection of the controller and Aspen IQ online applications to the PCWS and IQView respectively.
The Dais Trader configuration consists of designating a Master and a local Trader and then configuring them on each host where online applications or the PCWS is running.
The Master Trader is an application that allows our client/server applications to find one another. Aspen DMCplus, Aspen SmartStep, Aspen Apollo and Aspen IQ each have a view server that provides data to client applications. The process names of these servers are dmcp_viewsrv.exe, smartstep_viewsrv.exe, Apollo_viewsrv.exe and iqlinksrv.exe. As each view server is initialized, it locates the Master Trader and posts an offer that identifies the type of data that it is serving and the hostname and port number that it can be contacted on. When the clients initialize, they contact the Master Trader and examine the offers to determine if there are any servers offering the type of data they are designed to consume. If the clients find a compatible offer, they contact the server using the hostname and port information. The client process names are acoviewdp.exe and iqview.exe.
The ACO Utility Server, that controls our online applications, is configured with a dependency on the DAIS Trader Service. If the DAIS Trader must be restarted, it will also restart the ACO Utility Server, which will restart all online applications. For this reason, Aspen does not recommend placing the Master Trader on the same machine as the online controllers. In multi-host environments we recommend using the PCWS machine as the Master Trader node.
To configure DAIS, you must log into each online server and the PCWS in turn starting with the Master Trader node. The process for each host is the same.
Use the Services Manager (Control Panel | Administrative Tools | Services) to stop the DAIS Trader service on the local machine.
1. Use Windows Explorer to execute the ntconf.exe program;
Program Files\Common Files\Aspentech Shared\Dais\bin\ntconf.exe
Press the Master Trader button.
Provide the hostname of the Master Trader node.
Set the net id to aspentech
Set the port number 11002.
Uncheck the Backward Compatibility box.
Then click on OK.
2. Press the Local Trader button, and configure it exactly like the Master Trader.
3. Press the General button, and check the protocol list. By default this will be TCP UDP IIOP. In locations that have experienced network issues the list may be reordered with IIOP placed first in the list. All the machines that share the same Master Trader should have matching protocol list order.
4. Press the Clean button.
5. Use the Services Manager to restart the DAIS Trader.
6. Check the configuration by running the trader manager program;
Program Files\Common Files\Aspentech Shared\Dais\bin\trman.exe
If the configuration is correct, the manager should connect to the Master Trader and open a dialog. The dialog's title box will display the node name.
Trouble Shooting the DAIS Trader
Back to Top
If the applications that depend on the Dais Trader will not run, then there is a good possibility that they cannot contact the Master Trader. If you encounter this situation then use the Trader Manager to determine if it can access the Master Trader. Use Windows Explorer and launch;
Program Files\Common Files\Aspentech Shared\Dais\bin\trman.exe
If the Trader Manager is unable to contact the Master Trader do the following;
confirm that the machine is connected to the network
confirm that the log file path is correct and that the file exists.
make sure you are not using localhost as the local or master trader node on the Online machines. You must use a real machine name or IP address.
confirm that the protocol list, as defined in the Dais Trader / NTCONF under the general tab, are the same on all machines. This would include not only the same list, but in the same order. The IIOP protocol is specifically for the Dais Trader.
Verify network connectivity between the problem machine and the Master Trader. Open a cmd prompt and enter Ping <Trader hostname> or tracert <Trader hostname>.
If the Trader Manager is able to connect with the Master Trader check the following;
Verify that the hostname displayed in the dialog title area is the name you expect.
- Try to use the IP address of the network card in the NTCONF / Dais Trader configuration rather than the machine name.
Check that the problem PC and the Master Dais Trader machine can both do a reverse DNS lookup on themselves and on each other.
o Start a cmd prompt,
o Enter ipconfig /all and note the hostname and IP address of the local machine
o Enter nslookup hostname and verify that the correct IP address is returned
o Enter nslookup IP address and verify that the correct hostname is returned
Do this on BOTH PCs. Have the network engineers investigate any discrepancies in the DNS tables.
If you are unable to get the DNS tables fixed, you can also update the host tables on all participating machines so they all have the other machine names identified for the connections.
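The forward and reverse lookup checks above can also be scripted. The following Python sketch is illustrative only, using "localhost" as a placeholder; substitute the actual hostname of the problem PC or the Master Trader node:

```python
import socket

# Sketch of the forward/reverse DNS consistency check described above.
# "localhost" is a placeholder - substitute the hostname of the problem
# PC or the Master Trader node.
host = "localhost"
ip = socket.gethostbyname(host)           # forward lookup (nslookup hostname)
print(f"{host} -> {ip}")
try:
    rev, _, _ = socket.gethostbyaddr(ip)  # reverse lookup (nslookup IP)
    print(f"{ip} -> {rev}")
except socket.herror:
    print(f"{ip} has no reverse DNS entry - investigate the DNS tables")
```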
PCWS stops working with Service Unavailable Message IIS 6.0
Back to Top
After some time, Internet Explorer stops updating and forcing a refresh of the PCWS web page results in a blank page with the message: Service Unavailable. This is caused by the Worker Process Recycling mechanism that is enabled by default in IIS 6.
To prevent this problem, disable the Recycling mechanism for the DefaultAppPool of IIS on the Aspen production Control Web Server machine as follows:
1. On the PCWS server, choose Internet Information Services (IIS) Manager from the Administrative Tools menu.
2. Expand the (local computer) and Application Pools nodes.
3. Right-click on DefaultAppPool and choose Properties.
4. On the Recycle Tab, make sure all checkboxes are un-checked, then click OK.
This will prevent IIS from recycling the PCWS web application and causing the Service Unavailable error.
Aspen Framework Security also uses IIS and if this application recycles when the PCWS has an outstanding request, it can cause the PCWS to hang. For this reason recycling should also be disabled for the AspenAppPool of IIS.
1. On the PCWS server, choose Internet Information Services (IIS) Manager from the Administrative Tools menu.
2. Expand the (local computer) and Application Pools nodes.
3. Right-click on AspenAppPool and choose Properties.
4. On the Recycle Tab, make sure all checkboxes are un-checked, then click OK.
It should be noted however that IIS processes can become fragmented or grow in memory after long periods without recycling. This can lead to slow web server performance. However, in recent testing the aspenONE 2006 version of PCWS seems to cause minimal problems of this nature. It is good practice to monitor the web server memory usage for the following processes related to PCWS: w3wp.exe, acoviewdp.exe and AfwSecCliSvc.exe.
Considerations for the new WCF connect with V7.2 and later:
On the Configuration page of the PCWS, configure the server to point to the server where the online applications are running, ie. the DMCplus server.
Use the default port number provided when the application is selected in the Configuration page.
Confirm in Control Panel / Date and Time Display, that each server, the PCWS and the online server are set to the same time and timezone.
Confirm the Aspen APC Data Service is running for the application on the online server. These are listed in the Configuration Tasks section of the Advanced Manufacturing Suite Advanced Process Control Installation Guide.
Keywords:
References: None |
Problem Statement: Cannot see the Aspen Process Controller (APC) application on the Aspen Production Control Web Server (PCWS). | Solution: The Aspen Process Controller uses the RTE process to communicate with the Aspen Production Control Web Server.
If the APC application is running, then the RTE service is running.
Go to the Configuration page of the PCWS and make sure the RTE application is checked and that the server name of the APC is listed.
The combination of these items should provide for the APC application to be visible from the PCWS.
If role-based access has been configured for applications:
- Make sure to Add or Select/Update the appropriate Host name in the top section of the Security page (Configuration Tab) in PCWS.
- When configuring role permissions, the Host name should be the name of the DMC3/DMCplus/IQ Online server where the application is running and NOT the host name of the PCWS web server.
Keywords: PCWS, APC
References: None |
Problem Statement: When Aspen InfoPlus.21 (IP.21) server(s) and aspenONE Process Explorer (A1PE) web server are installed in different geographical locations (on the Wide Area Network aka WAN), there is a chance that process trends and graphics may be significantly slower than if the servers resided on the Local Area Network (LAN) within the same plant or facility.
This Knowledge Base article provides the steps for how to improve the A1PE performance when the data source servers (IP.21 servers) and the web server are located on the Wide Area Network as opposed to being located on the Local Area Network within the same plant or facility. | Solution: Please follow these steps:
Set <UseProcessDataService> in the AtProcessDataREST.config file to True; the AtProcessDataREST.config file is located on the web server in the C:\inetpub\wwwroot\AspenTech\ProcessData directory;
Add 'Aspen Process Data Service' to ADSA and make sure port number 52007 is open in the firewall;
Open the Windows Services snap-in on the participating servers and verify that the 'Aspen Process Data Service' service is running; the service should be set to start automatically on server startup;
Hard code A1PE server IP address in the Hosts file on the IP.21 server(s) and vice versa;
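As an illustration, the hosts-file entry on the IP.21 server might look like the fragment below; a matching entry for the IP.21 server goes in the hosts file on the A1PE web server. The IP address and host name shown here are hypothetical placeholders.

```
# C:\Windows\System32\drivers\etc\hosts  (on the IP.21 server)
# Hypothetical address and host name for the A1PE web server
10.20.30.40    a1pe-webserver
```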
Test the performance of trends and graphics – you will notice a dramatic improvement.
If, after implementing the above suggestions, there are still performance issues, check the AspenTech Support Website for any patches that may improve system performance.
Also, make sure your servers have enough processing power and amount of RAM to handle the workload.
Keywords: lag
slow
load
References: None |
Problem Statement: When writing a user-defined routine for Aspen Plus (user block, user kinetics...), it might be necessary to call numerical routines provided by third parties as LIB files (for example, the IMSL libraries). These files should be linked with Aspen Plus before the Aspen Plus run is executed. | Solution: To link the LIB files, edit the asplink.prl file as described below. Any LIB file can be included. In the following example, we want to include IMSL.LIB, IMSLS_ERR.LIB and IMSLMPISTUB.LIB from the IMSL libraries.
In Aspen Plus 10 and earlier:
The asplink.prl file is located by default in
C:\Program Files (x86)\AspenTech\APrSystem Vx.x\Engine\xeq
Look for the two following lines in asplink.prl:
# Define MS Fortran system dlls
@ms_ftn_dlls = ( dfordll, msvcrt);
And replace the second one by:
@ms_ftn_dlls = ( dfordll, msvcrt , imsl, imsls_err, imslmpistub);
In Aspen Plus 11 and higher:
The asplink.prl file is located by default in
C:\Program Files\AspenTech\APRSYSTEM Vx.x\Engine\xeq
Look for the two following lines in asplink.prl:
# Define MS Fortran system dlls
@ms_ftn_dlls = ( dformd, msvcrt);
And replace the second one by:
@ms_ftn_dlls = ( dformd, msvcrt , imsl, imsls_err, imslmpistub);
Notes:
The library name should be entered without the .LIB extension. This ensures the IMSL libraries IMSL, IMSLS_ERR and IMSLMPISTUB are included when the Aspen Plus simulation is linked to the user models. Before running the simulation, any reference to the IMSL libraries should be removed from the DLOPT file.
Modifying the asplink.prl will affect all simulations run on the computer where the file is located, even if the IMSL libraries are not needed. This might result in slightly longer linking time while the simulation is prepared to be run.
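For illustration, after the change above a DLOPT file would list only the user-model object file(s), with the imsl.lib, imsls_err.lib and imslmpistub.lib lines removed. The object file name below is a hypothetical example, not a file shipped with Aspen Plus.

```
usrkin.obj
```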
Keywords: link
fortran
References: None |
Problem Statement: How to configure a remote connection to PostgreSQL for Aspen Multi-Case? | Solution: There are three steps to configure a remote connection to PostgreSQL for Aspen Multi-Case:
Test and troubleshoot connection to remote machine
Configure PostgreSQL for remote connection
Create aspenuser in host DB machine
Please see attached PDF for details on how to perform each of these steps.
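At a high level, the three steps typically involve edits like the following on a default PostgreSQL installation. This is only a sketch: the subnet, authentication method, and password are placeholders, and the attached PDF remains the authoritative procedure.

```
# postgresql.conf (on the DB host): accept remote connections
listen_addresses = '*'

# pg_hba.conf: allow aspenuser from the Multi-Case client subnet (placeholder)
host    all    aspenuser    10.0.0.0/24    scram-sha-256

# create the login role (run in psql on the DB host)
#   CREATE ROLE aspenuser WITH LOGIN PASSWORD '...';
```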
Keywords: Aspen Multi-Case, Multicase, Remote Connection, PostgreSQL
References: None |
Problem Statement: How can we fix the error message below that occurs during an Aspen PIMS model run? | Solution: This error appears when the model is run in Aspen PIMS. Because Aspen PIMS is a 32-bit application, the 32-bit Microsoft Access database driver might be missing and needs to be installed.
Here are the simple steps to follow to install the 32-bit Access driver and fix this error:
1. Search for "Microsoft Access Database Engine" in Google.
2. Open the Microsoft Download Center link (highlighted in yellow in the snapshot above).
3. Once the page opens, click Download.
4. Two Access drivers will be offered: 32-bit and 64-bit.
5. Select the first .exe, which is the 32-bit driver, and download it.
6. Double-click the downloaded .exe and proceed through the installation by clicking Next.
7. Launch Aspen PIMS again and run the model; the error will go away and the model will run successfully.
Keywords: ODBC, Drivers, Access
References: None |
Problem Statement: Which functional groups does Aspen Plus use to fit a molecule's structure during a Property Estimation (PCES) run? Can the Graphical Structure on the Components | Molecular Structure | Structure and Functional Group sheet be used to determine the functional groups used in estimation? | Solution: Aspen Plus attempts to fit your entered molecular structure information using the available functional group methods. The functional groups are documented in the Help under Aspen Plus -> Physical Property Data Reference Manual -> Group Contribution Method Functional Groups.
The groups are determined at run time during the estimation calculations from the general structure defining the molecule by its connectivity, which is specified on the Components | Molecular Structure | General sheet. The molecular connectivity can either be entered on the General sheet or generated by clicking the Calculate Bonds button on the Structure and Functional Group sheet after drawing or importing a 2D .mol file of the molecule. Many components in the NIST-TDE databank only have the 2D graphical structure; users must click the Calculate Bonds button for such a molecule in order to estimate parameters if that option is specified on the Estimation | Input form.
The property data format file (.PRD file) produced during an Estimation run indicates which functional groups fit the molecule during the estimation calculations. To create a .PRD file, go to the Setup | Report Options | Property sheet and check Property Data format file (.PRD file). Then, after executing an estimation or data regression run, go to File | Export and select the report option. A file with the extension .PRD will be created along with the .rep report file.
In the .PRD file, you will see the groups used for each molecule that has the molecular connectivity specified on the Components | Molecular Structure | General sheet.
E.g.
STRUCTURES
UNIFAC NEWCOMP 1200 1 /
1015 1 / 1010 1
UNIF-LBY NEWCOMP 1200 1 /
1015 1 / 1010 1
UNIF-DMD NEWCOMP 1200 1 /
1015 1 / 1010 1
UNIF-R4 NEWCOMP 1200 1 /
1015 1 / 1010 1
JOBACK THIAZOLE 113 3 /
136 1 / 140 1
BONDI THIAZOLE 127 1 /
118 1 / 105 3
PARACHOR THIAZOLE 147 1 /
125 1 / 144 1 / 105 3
REICHENB THIAZOLE 125 1 /
112 3 / 127 1
JOBACK NEWCOMP 119 1 /
100 1 / 101 1
BENSON NEWCOMP 189 1 /
100 1 / 211 1
BONDI NEWCOMP 123 1 /
101 1 / 100 1
PARACHOR NEWCOMP 114 1 /
101 1 / 100 1
REICHENB NEWCOMP 100 1 /
101 1 / 117 1
ORRICK-E NEWCOMP 113 1 /
100 2
RUZICKA NEWCOMP 189 1 /
100 1 / 211 1
Also, UNIFAC groups are special since there are parameters for them in the databanks. If you export a Project data file (.APPRJ file) or report All physical property parameters (in SI units), the generated groups as the parameters UFGRPx will be reported in the .rep file. Both of these options are also specified on the Setup | Report Options | Property sheet.
E.g.
PROP-LIST UFGRP 1
PVAL C2H6O 1200.00 1.00000 1015.00 1.00000
1010.00 1.00000 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
PROP-LIST UFGRPD 1
PVAL C2H6O 1200.00 1.00000 1015.00 1.00000
1010.00 1.00000 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
PROP-LIST UFGRPL 1
PVAL C2H6O 1200.00 1.00000 1015.00 1.00000
1010.00 1.00000 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
0.100000E+36 0.100000E+36 0.100000E+36 0.100000E+36
Keywords: None
References: ScopusID 3035
Problem Statement: If you are modeling a convection section without a firebox Fire Heater, EDR gives you the option to enter the total mass flow of the gas at inlet to the convection section, in the Problem Definition | Process Data | Flue gas.
NOTE: You will also need to specify the flue gas temperature at inlet. | Solution: To enable this input you need to set your exchanger as follows:
Go to Problem Definition | Application Options | Application Options, select YES for Exclude firebox from calculation, and for Flue gas properties select Provide as input from the options.
Move to Fuel + Oxidant | Fuel | Fuel, and for the Number of fuels specification enter a value of Zero (0).
These steps will deactivate the fuel input chart and enable the Total Mass Flow into convection section input field.
Keywords: Fire Heater, Total Mass Flow into Convection Section
References: None |
Problem Statement: How are the initial and final boiling points used in the assay curve calculations? The end points specified in the assay are different than what is reported for the stream.
The final boiling points (TBP, D86 and other curves) generated by Aspen Plus for the bottom product and the feed differ by up to 70 C. I would expect the final boiling points to be close together because the streams contain about the same amount of heavies. | Solution: When distillation curves are calculated for assays, the Aspen Physical Property System extrapolates from the first and last points in the assay to get the values for the 0% and 100% points. Options on the Petro Characterization | Analysis Options | Assay Procedures sheet affect this extrapolation:
Extrapolation method determines the form of the extrapolated curve. The Probability method is based on the Hazen probability approach, which assumes a normal distribution of boiling points and uses the last point provided to extrapolate to each end of the curve. The Quadratic method fits a quadratic curve to the last three points on the curve.
Initial boiling point (IBP) and Final boiling point (FBP) determine the temperatures reported for the 0% and 100% points. The temperatures at these points in the extrapolation (0.5% and 99% by default) are reported as the 0% and 100% temperatures. Some software uses other values for these points; you may wish to adjust these values to match other calculations.
To illustrate the calculation, if the first point in the assay is at 10% and the last at 90%, Aspen Plus extrapolates between 0 - 10% and 90 - 100% using two possible methods: Probabilistic and Quadratic. The default is Probabilistic, which assumes a normal distribution of boiling points and uses the last point provided to extrapolate to the initial and end point.
By default Aspen Plus use 0.5% and 99% for the initial and final boiling points, respectively. Other programs such as ProII use the 98% point as the final boiling point (FBP) and the 2% as the initial boiling point (IBP) by default.
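To illustrate the Quadratic option, the sketch below fits a quadratic through the last three points of a made-up assay tail and evaluates it at the 99% reporting point. This is only an illustration of the idea; it is not Aspen Plus's internal implementation, and the curve values are hypothetical.

```python
def quadratic_extrapolate(fracs, temps, target):
    # Lagrange quadratic through the last three (fraction, temperature) points,
    # evaluated at the reporting percentage (99% is the Aspen Plus default FBP).
    (x0, x1, x2), (y0, y1, y2) = fracs[-3:], temps[-3:]
    return (y0 * (target - x1) * (target - x2) / ((x0 - x1) * (x0 - x2))
          + y1 * (target - x0) * (target - x2) / ((x1 - x0) * (x1 - x2))
          + y2 * (target - x0) * (target - x1) / ((x2 - x0) * (x2 - x1)))

# Hypothetical tail of a distillation curve: percent distilled vs. temperature, C
fracs = [70.0, 80.0, 90.0]
temps = [420.0, 455.0, 500.0]
print(quadratic_extrapolate(fracs, temps, 99.0))  # about 549 C for this tail
```

Because the fitted curve is extrapolated beyond the last data point, the reported end point can exceed the boiling temperature of the heaviest component in the mixture.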
The initial and final boiling points may be adjusted to match end points. These values can be set on the Components | Petro Characterization | Analysis Options | Assay Procedures sheet.
Many users think that the initial and end points should be corresponding to the boiling points of the lightest and the heaviest component or pseudocomponent in the assay. That is NOT true. As a matter of fact, TBPs of an assay are a function of component distribution. For two streams containing the same components but with different distribution, their TBP curves will differ.
TBPs are defined by the cumulative mid-point mass fractions and the boiling temperature of components (pure or pseudo) in the mixture. The cumulative mid-point mass fraction is the sum of all the mass fractions of the components lighter than the component plus 1/2 of the mass fraction of the component.
Example:
Pseudo    Feed Frac   Residue Frac   Feed Cum Frac   Residue Cum Frac   Tb,C
PC242C 0.004045 4.97E-06 0.002023 2.48E-06 242
PC253C 0.007435 1.08E-05 0.007763 1.04E-05 253
PC267C 0.008231 1.5E-05 0.015596 2.33E-05 267
PC281C 0.00921 2.12E-05 0.024316 4.14E-05 281
PC295C 0.010555 3.11E-05 0.034199 6.76E-05 295
PC309C 0.012675 4.83E-05 0.045813 0.000107 309
PC323C 0.018611 8.98E-05 0.061456 0.000176 323
PC336C 0.023724 0.000148 0.082624 0.000295 336
PC351C 0.025983 0.000215 0.107478 0.000477 351
PC365C 0.036273 0.0004 0.138605 0.000784 365
PC379C 0.057014 0.000845 0.185249 0.001406 379
PC392C 0.067484 0.001328 0.247497 0.002493 392
PC406C 0.058821 0.001595 0.31065 0.003954 406
PC420C 0.067442 0.002511 0.373781 0.006008 420
PC440C 0.134319 0.008157 0.474662 0.011342 440
PC468C 0.125365 0.014943 0.604504 0.022892 468
PC496C 0.087523 0.021416 0.710947 0.041071 496
PC524C 0.085013 0.044097 0.797215 0.073828 524
PC548C 0.059187 0.058576 0.869315 0.125164 548
PC579C 0.016588 0.037357 0.907203 0.173131 579
PC607C 0.015729 0.068538 0.923362 0.226078 607
PC635C 0.01623 0.118288 0.939341 0.319491 635
PC677C 0.033627 0.375979 0.964269 0.566625 677
PC720C 0.018917 0.245386 0.990541 0.877307 720
If Tb vs cumulative fraction is plotted for the two streams, the curves will look different.
Notice that each curve ends at the cumulative mid-point mass fraction of the heaviest component: 0.99 for the Feed and 0.877 for the Residue. This means that points above 88 wt% (90%, 95% and the end point) for the Residue have to be extrapolated. The extrapolation may well generate an end point higher than the boiling temperature of the heaviest component. The fact that the highest cumulative mass fraction for the Feed is 99% explains why its True Boiling Point Curve (TBPCRV) end point is much closer to the boiling temperature of the heaviest component.
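The cumulative mid-point calculation described above is easy to verify with a few lines of code. The sketch below (illustrative only) reproduces the first entries of the Feed cumulative column from the table:

```python
def cumulative_midpoint(mass_fracs):
    # Cumulative mid-point mass fraction: the sum of all lighter components'
    # fractions plus half of the component's own fraction.
    out, lighter = [], 0.0
    for f in mass_fracs:
        out.append(lighter + 0.5 * f)
        lighter += f
    return out

# First three Feed pseudocomponents from the table (PC242C, PC253C, PC267C)
feed = [0.004045, 0.007435, 0.008231]
print(cumulative_midpoint(feed))
# compare with the Feed cumulative column: 0.002023, 0.007763, 0.015596
```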
The extent of extrapolation is controlled by the Assay Procedure initial and final boiling point values. The specified values determine at what percentage the 0% and 100% points are reported. To improve end point calculation:
Increase the number of cuts.
Change the initial/final boiling points settings.
Use a different extrapolation method.
Keywords:
References: None |
Problem Statement: In view of the overlaying of products, the steps to upgrade Aspen Watch to a new version on a new computer can be confusing. The attached pdf file should provide a guide through the process.
Though it will try to give hints along the way, users should also be able to determine which steps can be omitted if an upgrade is not taking place, i.e., if they are just moving to a newer computer. | Solution: NOTE: For this KB Article, please Download and Refer to the PDF File Attachment Named,
Aspen Watch Upgrade Procedures for a new Computer v3.pdf
Keywords: AspenWatch, Upgrade, Migration
References: None |
Problem Statement: After opening a file in a new version and updating the databanks, the parameters still seem to come from the old databank when looking at the forms.
For example, after upgrading to V12.1, the databank should be PURE39, as seen on the Components | Specifications | Enterprise Database sheet.
However, on the Methods | Parameters forms, the Source is still DB-PURE32. | Solution: These parameters are not USER entered parameters, but are the results from retrieving parameters in a previous version. The forms need to be refreshed after upgrading to a new version. Click on Retrieve Parameters in the Tools section of the Home ribbon to see the source and values of the parameters in the new release.
If the parameters are written to the forms, the Source will be updated.
Keywords: None
References: None |
Problem Statement: How can I use Aspen SQLplus to look at Aspen Calc information, such as calculation names, comments, inputs and outputs? | Solution: Knowing that Calculations, Formulae & Schedules, etc. are not stored in the database but in files, it would seem that SQLplus would not be the right tool for the task but SQLplus does allow access to COM objects. The COM object support in SQLplus is like COM object support in VBScript. This permits SQLplus to have access to Aspen Calc, Aspen Production Record Manager and several other applications.
An example of a query that lists the parameters for each calculation, would be:
local CalcCmd, list, i int, calc, j int, param;
CalcCmd = createobject('CalcScheduler.CalcCommands');
list = CalcCmd.GetCalculationList();
if isarray(list) then
  for i = lbound(list) to ubound(list) do
    write 'Calculation Name:'||list[i];
    calc = CalcCmd.GetCalculationObject(list[i]);
    write 'Description:'||calc.description;
    write 'Parameters:';
    for j = 1 to calc.parameters.count do
      param = calc.parameters(j);
      write ' '||pad(param.name to 20)||param.source;
    end
  end
end
SPECIAL NOTE>>>
If you want to run the above query on a system that is remote to the Aspen Calc Server, you would need to define that server in your createobject statement.
For example if your Aspen Calc Server was a machine called Houston1 you would modify your code as follows:
CalcCmd = createobject('CalcScheduler.CalcCommands','Houston1');
However, note that to do this you must also have Aspen Calc installed on the machine from which the query is executing.
Another useful property that you can acquire via SQL would be the TEXT associated with the Calculation. To do this the code would read:
Write 'Script: '||calc.scriptText;
To get the names of other properties of the calculation and parameter objects, you can use the SQL Object Browser.
Via the SQLplus Query Writer, go to View => References and select the relevant type libraries.
Then go to View => Object Browser.
From there, by clicking on the '+' buttons you can view methods and properties.
The Aspen SQLplus Online Help files contain several examples of other queries related to Aspen Calc, how to use the References and Object Browser, and an overview of COM object access.
Keywords: None
References: None