Problem Statement: Acid Gas Cleaning in Aspen HYSYS V8.3 | Solution: This article contains an overview of the new feature in Aspen HYSYS V8.3 for Acid Gas Cleaning. Along with the overview, you will find a set of instructions to perform simple tasks in the feature to learn more about its capabilities, supporting HYSYS.hsc files for the same purpose, and a video recording of the demo tasks being performed. For any questions please contact: [email protected].
Keywords: Acid Gas, Aspen HYSYS 8.3
References: None |
Problem Statement: Is it possible to use Aspen Simulation Workbook (ASW) in Excel to deploy an Aspen HYSYS model? | Solution: Attached is a set of files showing multiple ways to deploy an Acid Gas Cleaning model to an end user using Aspen Simulation Workbook. This example shows how to send the HYSYS model summary grid or the HYSYS Process Data Table to Excel via Aspen Simulation Workbook. Additionally, it includes an example of a Scenario table in ASW linked to an Acid Gas model, which lets a user run a what-if study on the effects of changing reboiler duty and recirculation rate on the H2S and CO2 composition in the Sweet Gas stream in this flowsheet.
The .zip file includes the HYSYS and Excel example files. The Word document gives a detailed description of the workflows described above.
Keywords: Aspen Simulation Workbook, ASW, Deployment, Acid Gas Cleaning, Process Data Table, Model Summary Grid
References: None |
Problem Statement: This KB describes the best practice to install .NET Core updates without affecting GDOT components. | Solution: Updating .NET Core is not required by GDOT. The GDOT V11 media automatically sets up .NET Core 2.1.5 as part of the installation process. The GDOT Web Viewer can run without issues using this version.
However, if there is an IT or Cybersecurity policy which requires an update to .NET Core 2.1 on the GDOT Web Viewer server, the update must be applied using the ASP.NET Core Hosting Bundle installer. The Hosting Bundle installer includes the .NET Core runtime, ASP.NET Core runtime and the ASP.NET Core IIS Module, all of which are required by the GDOT Web Viewer.
This is the recommended option by Microsoft for Windows Servers that deploy .NET Core applications. The GDOT Web viewer server deploys a .NET Core application – the GDOT Web Viewer web page – and it requires the runtime portion and IIS support.
Note that .NET Core updates are not typically distributed via Windows Update, unlike .NET Framework updates. .NET Core updates are only available from https://dotnet.microsoft.com/download/dotnet-core/ and must be installed manually, or deployed using silent packages (customized by local IT resources).
Possible scenario
It has been reported that the GDOT Web Viewer may fail to start with an HTTP 502.5 error after installing the .NET Core 2.1.19 update.
This issue occurs when the .NET Core Runtime installer is used instead of the Hosting Bundle installer. To fix the issue, install the ASP.NET Core Hosting Bundle.
Tech Tip
To confirm the .NET Core version installed in the server, open a PowerShell instance and issue the following command:
dotnet --info
.NET Core version installed on a GDOT V11 Web Viewer server (2.1.5)
.NET Core version after installing the ASP.NET Core 2.1.19 Hosting Bundle on a GDOT Web Viewer server
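If you need to check versions programmatically (for example, across several servers), the `dotnet --info` output can be parsed with a few lines of Python. The sample text below is an assumption for illustration, not actual output captured from a GDOT server:

```python
import re

def installed_runtimes(info_text):
    """Pull version numbers from `dotnet --info`-style output lines
    such as 'Microsoft.AspNetCore.App 2.1.19'."""
    return re.findall(r"\b(\d+\.\d+\.\d+)\b", info_text)

# Hypothetical excerpt of `dotnet --info` output:
sample = "Microsoft.NETCore.App 2.1.5\nMicrosoft.AspNetCore.App 2.1.19\n"
print(installed_runtimes(sample))  # ['2.1.5', '2.1.19']
```

In practice you would feed the function the real output of `dotnet --info` run in PowerShell.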
Keywords: .NET Core, GDOT Web Viewer, HTTP Error 502.5, GDOT, .NET Core Runtime, .NET Core Hosting Bundle
References: None |
Problem Statement: When calculating a blend from a single assay, one would expect the resulting blend to be identical to the assay. However, when viewing the component breakdown compositions of the blend and the assay, they show some small differences. What is the reason for this inconsistency? | Solution: When you blend two or more assays, they need to be on the same cut range. If they are not, the Aspen Physical Property System first converts the assays to the same cut range. This conversion uses points spaced at 1% intervals from 0% to 100% distilled. If the distillation curves in the constituent assays contain very closely spaced points, some information may be lost. The same conversion is also applied to a single assay as a step in capturing its distillation curve, so points spaced more closely than the 1% granularity cannot be captured. The data loss in this process causes the inconsistency between the resulting blend and the original assay.
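To see why 1% regridding can lose closely spaced points, here is a small Python illustration. The curve data is hypothetical, and simple linear interpolation stands in for the property system's actual conversion:

```python
def lerp(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at point x (xs ascending)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical distillation curve: % distilled vs. temperature (deg C).
# Two closely spaced points (50.2, 50.6) fall inside a single 1% interval.
pct = [0.0, 10.0, 50.0, 50.2, 50.6, 60.0, 100.0]
temp = [40.0, 80.0, 200.0, 202.0, 210.0, 230.0, 350.0]

# Regrid to 1% spacing, as the property system does when normalizing assays.
grid = [float(p) for p in range(0, 101)]
regridded = [lerp(p, pct, temp) for p in grid]

# Reading the original points back off the 1% grid no longer reproduces them:
errors = [abs(lerp(p, grid, regridded) - t) for p, t in zip(pct, temp)]
print(max(errors) > 0)  # True -> information was lost around 50.2 and 50.6
```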
Keywords: Petroleum characterization, Aspen Plus, Aspen Properties, Oil characterization
References: VSTS 512596 |
Problem Statement: How can I view the log of changes made to an existing Aspen ONLINE project? | Solution: Aspen ONLINE can track changes made to the online system. In many cases, a problem may be the result of a seemingly minor change to the system. By determining when the online system stopped working and looking for changes made at or just before that time, the cause of the problem may become apparent.
The following types of changes are tracked:
All changes to tags and variables made through the GUI or through database load programs
Starting and stopping of the online programs
Up to 2000 changes are tracked by default; this limit can be modified with the Change Log Size option in Project Configuration | Specifications
To view the change log, on the Home tab of the ribbon, in the Diagnostics group, click View Changes. This option can be useful in determining why the online system is no longer functioning correctly.
Sample Change Log report:
The same log is also available inside your project folder in the file “ChangeLogs.txt”
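Because the log is a plain text file, it can also be inspected programmatically. A minimal Python sketch for viewing the most recent entries follows; the project path shown is an assumption and should be replaced with your actual Aspen ONLINE project folder:

```python
from pathlib import Path

# Hypothetical project location; adjust to your Aspen ONLINE project folder.
log_path = Path(r"C:\AspenOnlineProjects\MyProject\ChangeLogs.txt")

def tail_change_log(path, n=20):
    """Return the n most recent entries from the change log file."""
    if not path.exists():
        return []
    lines = path.read_text(errors="replace").splitlines()
    return lines[-n:]

for entry in tail_change_log(log_path):
    print(entry)
```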
Keywords: Change Logs, View Changes, Change Log Size, ChangeLogs.txt, Diagnostics, Aspen ONLINE
References: None |
Problem Statement: How to create an initialization script in ACM? | Solution: Scripts are VB-based programs that can be used to perform an automated sequence of events, much like a Task.
Tasks are often preferred for automating sequences of events in dynamic mode because of the ACM language used in developing them.
One important use of a script is to assign initial values to model variables.
After developing and successfully running a model in ACM, we often need to provide good initial values for variables and save them for future use.
Right-click a model that has been run successfully, and from the menu that pops up select the option “Create Model Initialization Script” to create this script.
The created script can be found in the model folder. Right-click the PreSolve script and select the Invoke option to run it.
The created script has the contents shown in the figure, but further variable values can be added if required.
Keywords: Creating Initialization script
References: None |
Problem Statement: How to display a specified value based on tag quality in a graphic.
For example: you may want to display -1 if the tag quality is Bad, and 9999 if the tag quality is Bad Tag. | Solution: You can use an ad-hoc calculation with an IF statement to achieve this.
For example:
=IF{tag01 IP_INPUT_QUALITY} =-6 THEN -1 ELSE IF{tag01 IP_INPUT_QUALITY} =-13 THEN 9999 ELSE tag01
IP_INPUT_QUALITY is maintained by the record QUALITY-STATUSES.
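The conditional logic of the ad-hoc calculation above can be expressed in a few lines of Python, which may help when prototyping or documenting the substitution rules (the -6 and -13 codes are the ones used in the calculation; the authoritative mapping lives in the QUALITY-STATUSES record):

```python
def display_value(value, quality_status):
    """Return the substitute display value for bad-quality statuses,
    mirroring the ad-hoc IF calculation above."""
    if quality_status == -6:      # Bad
        return -1
    elif quality_status == -13:   # Bad Tag
        return 9999
    return value                  # any other status: show the raw value

print(display_value(42.0, 0))    # good quality -> 42.0
print(display_value(42.0, -6))   # Bad -> -1
print(display_value(42.0, -13))  # Bad Tag -> 9999
```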
Keywords: Else IF, Ad-hoc, Calculation, Bad, Bad Tag
References: None |
Problem Statement: Which aspenONE Supply Chain Management applications are supported on Azure and what are the best practices for deployment? | Solution: The following aspenONE Supply Chain Management applications have been certified on an Azure virtual machine (VM):
Aspen Collaborative Demand Manager™
Aspen Supply Chain Planner™
Aspen Plant Scheduler™ Family
Aspen Supply Chain Connect™
Best Practices
To deploy Supply Chain Management applications in Azure, follow the same instructions as for an on-premise installation. See the Aspen SCM V10.0 Installation Guide on the support site.
To use the Supply Chain Management applications, you will need to configure the software license manager (SLM) client to access the SLM server.
The SLM server manages license keys and tokens and can be running in the cloud or you can use an on-premise license server shared by applications running on premise.
To access your on-site SLM server you will need a VPN tunnel connecting your corporate network with your Azure network. Microsoft provides guidance on how to Extend Active Directory with VPN or Express Route connection to Azure. This will also simplify file transfer for your simulation files that are stored locally.
Known Issues
Problem: Opening and saving SCM files on your local machine can increase latency and cause time-outs.
Workaround: Open and save files in storage as close to the cloud VM as possible. Avoid using local storage.
Keywords: SCM, Azure, Cloud, Supply Chain Management
References: None |
Problem Statement: The following symptoms may indicate this issue:
When trying to delete or update an application in Aspen Watch Maker, it fails with this message: The requested task was not performed. Error creating record: Current license state of IP.21 server does not allow this operation
The Run Status in Aspen Watch Maker shows License Lost: See Events
When opening the History Plots from the PCWS, the Value column in aspenONE Process Explorer shows Server is unable to acquire SLM_InfoPlus21_Embed license.
Root Cause
One or all of these symptoms may indicate that the InfoPlus.21 database is running in an unlicensed state. This can be due to changes on the license server or a reboot of the IP.21/Watch server. To check whether this is the case, open InfoPlus.21 Administrator, expand the node for InfoPlus.21, right-click your server name and select Properties. Then go to the License Status tab. If this is the cause of the issue, it will show License Denied instead of the normal status License Granted: | Solution: If the above root cause is confirmed, the database will need to be restarted when it is safe to do so. Please follow these steps to resolve the issue:
Make a backup copy of the snapshot file just in case. You can do so by opening InfoPlus.21 Administrator, navigate to the server name and right-click on it, and select Save Snapshot.
Go to Aspen Watch Maker and stop data collection for all controllers.
Open InfoPlus.21 Manager and click STOP InfoPlus.21 to stop the database.
Open Services and restart the service Aspen InfoPlus.21 Task Service. This should automatically start the database but verify this in InfoPlus.21 Manager to make sure the tasks are running again.
Check InfoPlus.21 Administrator Properties (as mentioned in Root Cause above) to see if it now says License Granted.
Start data collection in Watch Maker.
If the above steps worked to change the License Status to License Granted but the History Plots issue is still not resolved, use command prompt as administrator to run an IISRESET on the Aspen APC Web Server.
If restarting the database did not change the status to License Granted or it did but the issue is still not resolved, please contact [email protected] for further assistance. Additional resource on the License Denied issue: KB Article 000081425
Keywords: Watch maker, run status, license, IP.21, infoplus.21, history plots, pcws, a1pe, SLM_InfoPlus21_Embed
References: None |
Problem Statement: How can I review constraints and penalties in Aspen Unified PIMS V12? | Solution: This video shows how to use the Data Inspector view and the Constraints and Penalties functionality in Aspen Unified PIMS V12.
Keywords: None
References: None |
Problem Statement: What is the calculated block duty of a convective dryer? Why doesn't it change when I change feed flowrates or temperatures? | Solution: The energy balance of a convective dryer can be described with the following diagram and formulas.
Details are also explained in article 00005005.
There are 2 types of heat transfer in a Convective Dryer:
Direct heat transfer: heat transfer from gas directly to solid, which is Q in the diagram
Indirect heat transfer: heat transfer through the surface of equipment, which are Qind.G and Qind.S in the diagram.
Unit operations in Aspen Plus usually report indirect heat transfer as the block duty result, because it is useful for sizing equipment (or determining heat transfer area).
Similarly, a convective dryer reports the indirect heat transfer rate as its duty result. However, only the heat transferred to the gas (Qind.G) is taken into account. This is because heat transferred to the gas goes through the equipment surface, while heat transferred to the solid (Qind.S) does not, since solids are normally on a conveyor belt and do not touch the dryer equipment.
Normally, in a convective dryer, the hot gas provides all of the heat that evaporates the moisture in the solid and indirect heat transfer is not used, so the default values of Qind.G and Qind.S are zero. As a result, the block duty of a convective dryer is typically zero.
However, when there is indirect heat transfer, users could specify the values after checking the Consider indirect heating box in Mass/Heat Transfer tab of a dryer block. In that case, the heat input to gas is treated as the block duty.
In the attached example, the specified heat input to gas is 7 cal/sec.
Qind.G = 10 × (1 - 0.3) = 7 cal/sec
The unit operation has the same duty as this value.
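The arithmetic behind that result can be sketched in a few lines of Python, interpreting the 10 cal/sec as the total specified indirect heat input and 0.3 as the fraction going to the solid (as in the attached example):

```python
# Worked duty check for the example above.
q_indirect_total = 10.0   # cal/sec, total specified indirect heat input
frac_to_solid = 0.3       # fraction transferred to the solid (Qind.S share)

# Only the portion transferred to the gas is reported as the block duty.
q_ind_gas = q_indirect_total * (1.0 - frac_to_solid)  # Qind.G
print(q_ind_gas)  # 7.0 cal/sec -> matches the reported block duty
```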
Keywords: Convective Dryer, Heat duty, Aspen Plus, Dryer
References: None |
Problem Statement: Is it possible to reassign the port number used by the ADSA Web Service protocol? | Solution: The ADSA Web Service uses the same port as that specified for the default web site on the ADSA server (80, by default). If the port number for the default web site is changed, ADSA clients can also be modified to connect using the new port.
The following example assumes that the user would like to change the port used by the default web site on the ADSA server. To allow a Web Service connection to the ADSA server, changes must also be made on client machines.
On the Server
- Open the Internet Information Services (IIS) Manager (Start | Run | inetmgr)
- Right-click Default Web Site and select Properties
- Type the new port number in the TCP Port field
- Click OK
- Restart IIS (Start | Run | iisreset)
On the Client
- Open the ADSA Client Config Tool (Start | Programs | AspenTech | Common Utilities | ADSA Client Config Tool)
- After the ADSA server name in the Directory Server field, type a colon (:) followed by the port number (for example, ADSAServer:8080)
- Test the connection to the ADSA server
Keywords: ADSA
web
service
port
References: None |
Problem Statement: In the simulation import process of an Aspen Plus file, streams or certain data may not be transferred to ABE. What do I do to make this data available? | Solution: The reason for this missing data is that the necessary properties/property set has not been set up in Aspen Plus.
A property set is a collection of properties that can be used in stream reports. Selecting the property set will enable its list of properties to be reported in the Aspen Plus bkp file and thus be available for import to ABE.
The user can either
set up his/her own property set(s) or
import the attached template file (ZYD-STR prop set.apt) into a new or existing Aspen Plus simulation. This adds a property set (ZYD-STR) to the simulation that contains all the stream properties imported by Zyqad.
To set up a new Property Set:
1. Go to the Stream Summary report under Results Summary > Streams (or, alternatively, to a Stream Result page)
2. Add or delete any properties to format the desired report output
3. Save the new format as a new template. To do so, go to the Stream Summary tab and press the Save as New button. Define a name and description as in the image below:
4. The new template should now be available on all summary report pages.
To use Template:
Load simulation (*.bkp file)
Go to File -> Import Template
Select attached ZYD-STR prop set.apt
Go to Property Sets form and see that the ZYD-STR property set is in list.
To use the Property Sets, go to Setup in Navigation Pane. Select Report Options.
Go to Streams tab and click Property Sets.
A Property Sets window will pop up. Select ZYD-STR.
Keywords: Aspen Plus, Property Sets, simulation import, mapper
References: None |
Problem Statement: You see the following error message in the CIMIO_MSG.LOG file when starting the Aspen Cim-IO for OPC-UA server (no Cim-IO client involved) with an interface to an OPC-UA server defined:
“Error converting device to service”
This Knowledge Base article shows how to resolve this error message. | Solution: The fix to the error “Error converting device to service” is to define the OPC-UA server as a logical device in the CIMIO_LOGICAL_DEVICES.DEF file.
Keywords: None
References: None |
Problem Statement: Aspen Mtell Maestro V11.1.2 quick deployment guide for when there is no internet connection available on the host machine. | Solution: This article explains how to deploy Aspen Mtell Maestro V11.1.2 when there is no internet connection available on the host machine.
Prerequisites:
Aspen Mtell Maestro host machine must meet the corresponding platform specifications (see V11.1 Containers in https://www.aspentech.com/en/platform-support).
Aspen Mtell Maestro server should have Windows Server 2019 OS version 1809 or higher.
Disable automatic Windows updates
Install Docker Engine Enterprise and Docker Compose on Aspen Mtell Maestro server (refer to KB: 97772 ).
A second machine with internet connection to download attached files.
Phase 1: Downloading required files to a machine with internet connection
Download credentials file acrcreds.key from AspenTech Download Center.
Download file Aspen-V11.1.2-APM-Mtell-Maestro-Win.zip from AspenTech Support Center. To do this, click on Show Patches and then click on Aspen Mtell Maestro V11.1.2.
Download attached maestro_driver.exe
Download attached maestro_v11_1_2.tar.gz
Phase 2: Moving files to Aspen Mtell Maestro server
On Aspen Mtell Maestro server, create path C:/ProgramData/AspenTech/Aspen Mtell Maestro if missing.
Move all downloaded files to C:/ProgramData/AspenTech/Aspen Mtell Maestro/ folder
Extract the Aspen-V11.1.2-APM-Mtell-Maestro-Win.zip file
Move maestro_driver.exe to C:/ProgramData/AspenTech/Aspen Mtell Maestro/Aspen-V11.1.2-APM-Mtell-Maestro-Win/ folder.
Phase 3: Setting up Aspen Mtell Maestro container images
Right click and run maestro_driver.exe as Administrator.
Type 2 (i.e., “Load container images from tar.gz file”). You will be prompted to type the full path and name of the corresponding tar.gz file (include tar.gz extension).
Note: On average, this step will take between 8 and 15 minutes. Once the container images are setup, the option menu will appear again.
Phase 4: Starting Aspen Mtell Maestro containers
Right click and run maestro_driver.exe as Administrator (if it is not running already).
Type 3 (“Automated deployment of containers”).
If Docker Swarm is not active, you will be prompted to provide a valid IP address of the Aspen Mtell Maestro server. Pick a valid IP address from the list that will appear. In this example, 10.0.0.4 was used.
Note: Remote connection might be lost for a few seconds during the execution of this step.
You will be prompted to type a name for the stack to deploy. Use alphanumeric characters; you can also use the underscore symbol (spaces are not allowed). In this example, v11_1_2_prod was used.
Containers will be deployed in Swarm mode, and the executable will test the connectivity to the containers.
If test fails, the following message will appear TEST FAILED: Containers will be stopped and restarted in Compose mode. Containers in Swarm mode will automatically be stopped. Then, the containers will automatically be restarted in Compose mode and the test will be executed one more time. If test fails again, you need to check that the required ports are open through the firewall of the Maestro host.
If test passes, you will see a message similar to the following:
Phase 5: Testing Aspen Mtell Maestro containers from Aspen Mtell server
Copy maestro_driver.exe file to Aspen Mtell server (if it is in a different machine).
Right click and run maestro_driver.exe as Administrator
Choose option Test containers
Type Aspen Mtell Maestro URLs as
http://<Maestro host IP address>:<Maestro service port>/api/
For example:
If test passes, you will see a message similar to the following:
If test fails, you need to check that the required ports are open through the firewall.
Troubleshooting
Ports
Make sure the ports specified in docker-compose.yml are open through the Firewall.
Note: Ports are specified in docker-compose.yml file as a mapping “HOST_PORT:CONTAINER_PORT”. The ports that need to be open through the Firewall are HOST_PORT ports.
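Extracting the HOST_PORT side of those mappings is a one-liner; the sketch below uses sample mappings (assumed for illustration — check your own docker-compose.yml, noting that 9001-9003 are the default Maestro host ports mentioned later in this collection):

```python
def host_ports(mappings):
    """Return the HOST_PORT part of 'HOST_PORT:CONTAINER_PORT' mappings,
    i.e. the ports that must be opened through the firewall."""
    return [int(m.split(":")[0]) for m in mappings]

# Hypothetical ports: entries from a docker-compose.yml file.
sample = ["9001:9001", "9002:8080", "9003:8080"]
print(host_ports(sample))  # [9001, 9002, 9003]
```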
Aspen Maestro host IP address
Sometimes, Maestro host IP address might not be automatically identified by maestro_driver.exe. If necessary, specify the IP address, or host name, to use in a file called “test_ip_address.txt”, place this file in the same folder as maestro_driver.exe, and repeat the automated deployment.
Swarm mode
In addition to the ports specified in docker-compose.yml, make sure the following ports are open through the Firewall:
TCP port 2376, inbound
TCP port 2377, inbound
TCP port 7946, inbound and outbound
UDP port 7946, inbound and outbound
UDP port 4789, inbound and outbound
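A quick way to probe the TCP ports above from another machine is a short Python sketch. The host address is a placeholder, and note that the UDP ports (7946, 4789) cannot be checked with a plain TCP connect:

```python
import socket

# Hypothetical host; replace with your Maestro server address.
MAESTRO_HOST = "10.0.0.4"

# TCP ports Swarm mode needs open (UDP 7946/4789 need a different check).
SWARM_TCP_PORTS = [2376, 2377, 7946]

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in SWARM_TCP_PORTS:
    state = "open" if tcp_port_open(MAESTRO_HOST, port) else "closed/filtered"
    print(port, state)
```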
If multiple valid IP addresses exist, try starting Docker Swarm with a different one.
Check whether the latest Microsoft Security Update installed on the Aspen Mtell Maestro server was released between May and September 2020 (i.e., KB4551853 to KB4570333). These updates cause connectivity issues to containers deployed in Swarm mode. If the last installed security update corresponds to any of those, install Microsoft Update KB4577069.
VMware server
VMware Tools versions V11.0.1-V11.0.5 cause connectivity issues to containers in both Swarm and Compose mode (connectivity is lost 10-40 minutes after deployment). Upgrade VMware Tools to V11.0.6.
Do not include Maestro host in High-Availability and/or VMotion. This will impact communication with Maestro containers since networking elements change.
VM snapshots: Either the generation of the snapshot fails, or the communication with Docker containers is affected during the process to generate the snapshot.
Keywords: Maestro install
Xray install
Docker
Container
with out internet
no internet
offline
References: Docker Engine Enterprise and Docker Compose quick installation and upgrade guide (without internet connection / offline)
Aspen Mtell Maestro quick deployment guide
Docker Engine Enterprise and Docker Compose quick installation guide for Aspen Mtell Maestro |
Problem Statement: Aspen Mtell Maestro quick installation and start up guide. | Solution: This article will explain how to install Aspen Mtell Maestro.
Note: For detailed information, please refer to Aspen Mtell Maestro V11.1.1 Installation Guide.
Prerequisites Installation:
1. Aspen Mtell Maestro host machine must meet the corresponding platform specifications (see V11.1 Containers in https://www.aspentech.com/en/platform-support).
2. Operating System should be Windows Server 2019 version 1809 or higher.
3. Windows OS automatic updates must be disabled.
4. Install Docker Enterprise and Docker Compose, refer to Docker Engine Enterprise and Docker Compose quick installation guide for Aspen Mtell Maestro
Aspen Mtell Maestro Installation:
1. Create Aspen Mtell Maestro folder in “C:/ProgramData/AspenTech/Aspen Mtell Maestro” location in host machine.
2. Download Aspen Mtell Maestro Credentials file acrcreds.key from AspenTech Download Center, and save it in C:/ProgramData/AspenTech/Aspen Mtell Maestro.
3. Download and extract the file Aspen-V11.1.2-APM-Mtell-Maestro-Win.zip from AspenTech Support Center and save it in C:/ProgramData/AspenTech/Aspen Mtell Maestro.
4. Open Command Prompt (run as Administrator).
5. Login to AspenTech’s Azure Container Registry.
a. Open acrcreds.key using notepad and then copy the docker command specified for Windows.
b. Paste the docker command in Command Prompt and execute it.
6. Change the path corresponding to the folder with the V11.1.2 docker-compose.yml file by executing command:
cd C:/ProgramData/AspenTech/Aspen Mtell Maestro/Aspen-V11.1.2-APM-Mtell-Maestro-Win
7. Execute the following command to download the container images:
docker-compose pull
a. Expected result:
8. Once all container images are downloaded successfully, this is the expected output in Command Prompt:
Note: The first time you download Aspen Mtell Maestro, the command “docker-compose pull” could take around 15 minutes. Download times highly depend on your internet network download speed.
Starting Aspen Mtell Maestro:
Open Command Prompt (run as Administrator).
Make the host machine part of a Docker Swarm.
Determine the host machine IP address (one option is to run command ipconfig).
Execute command:
docker swarm init --advertise-addr <host machine IP address>
Note: If the machine is already part of a Docker Swarm, you will see a message similar to the one in the next screenshot. Continue to Step 4.
If you have a different version of Aspen Mtell Maestro running on the host machine, stop it.
See steps for “Stopping Aspen Mtell Maestro” below.
Start Aspen Mtell Maestro.
Choose a name for the stack. We recommend Maestro_V11_1_2.
Execute command:
docker stack deploy -c <yml file full path and name> <stack name>
Check that Aspen Mtell Maestro is running.
Execute command:
docker stats
Note: Aspen Mtell Maestro is ready when all corresponding stack services appear.
To exit docker stats view, press Ctrl+C.
By default, the ports used by Aspen Mtell Maestro are 9001, 9002, and 9003. Check the ports being used by the stack services by executing the following command:
docker stack services <stack name>
Test the connection from Mtell to Maestro services in Aspen Mtell System Manager. Maestro URLs must be written as
http://<Maestro server IP address>:<Maestro service port>/api/
By default, the ports must be written in the following sequence: 9001, 9002, and 9003.
If Mtell cannot connect to Maestro services, check that the required ports are open through the Firewall on the server running Aspen Mtell Maestro.
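Before digging into firewall rules, it can help to confirm from the Mtell server that the Maestro URL answers HTTP at all. A minimal Python sketch follows; the address and port are placeholders for your Maestro host:

```python
import urllib.request
import urllib.error

# Hypothetical address/port; substitute your Maestro host and service port.
MAESTRO_URL = "http://10.0.0.4:9001/api/"

def reachable(url, timeout=3):
    """Return True if the service answers HTTP at all (any status code)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # no route, connection refused, or timed out

print(MAESTRO_URL, "reachable" if reachable(MAESTRO_URL) else "unreachable")
```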
Stopping Aspen Mtell Maestro:
Open Command Prompt (run as Administrator).
Identify the name of the stack you want to stop. You can check which Docker stacks are active by executing command:
docker stack ls
Execute command:
docker stack rm <stack name>
Example:
Keywords: Maestro install
Xray install
Docker
Container
References:
Docker Engine Enterprise and Docker Compose quick installation guide for Aspen Mtell Maestro
Aspen Mtell Maestro quick deployment guide (without internet connection / offline)
Docker Engine Enterprise and Docker Compose quick installation and upgrade guide (without internet connection / offline) |
Problem Statement: What does the Interpolate using past data only checkbox do when training an Agent? | Solution: When training an Agent, the user has an option to interpolate the data for missing data points within the training date range.
The Interpolate using past data only option interpolates each missing data point using the past value for the particular sensor; it uses stair-step interpolation when interpolating from the past data.
For example, if your sensor has missing values for a few data points before or within the training period, those missing points are populated with the previous value.
Data points as imported from the Historian (Training Period: 1/1/2019 04:00:00 - 1/1/2019 10:00:00):
Time (1/1/2019):  01:00  02:00  03:00  04:00  05:00  06:00  07:00  08:00  09:00  10:00
Value:            10     -      -      -      -      12     -      16     15     14
Data points when using the Interpolate using past data only option (Training Period: 1/1/2019 04:00:00 - 1/1/2019 10:00:00):
Time (1/1/2019):  01:00  02:00  03:00  04:00  05:00  06:00  07:00  08:00  09:00  10:00
Value:            10     10     10     10     10     12     12     16     15     14
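Stair-step ("previous value") interpolation is straightforward to sketch in Python, which may clarify exactly what the option does to the data above — each missing point takes the most recent earlier value, with no look-ahead:

```python
def stair_step_fill(values):
    """Fill None entries with the last non-None value seen so far."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Hourly points 01:00-10:00 from the example above (None = missing).
raw = [10, None, None, None, None, 12, None, 16, 15, 14]
print(stair_step_fill(raw))  # [10, 10, 10, 10, 10, 12, 12, 16, 15, 14]
```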
Keywords: Mtell
Interpolate
Check box
Training
interpolating
References: None |
Problem Statement: In the standard workflow of Simulation Importer, users often need to go to the Material Ports tab to map process streams to the relevant equipment material ports so that information from the streams is also available in the equipment objects. For users whose workflow is to create and connect Process Flow Diagram (PFD) objects in the Drawing Editor before importing their simulation, the mapping information between objects is therefore already available. However, this information is not used by default in the Simulation Importer. This article explains how to let ABE use the connection information from the PFD to auto-map streams and equipment material ports in the Simulation Importer, optimizing the existing workflow and saving users time. | Solution: We will use a KB script to create the auto-map. When the simulation stream is mapped to the process stream in Simulation Importer, the stream-to-port mapping is created automatically, and the stream data will be transferred onto the ports after users run the transfer.
The KB module contains a demon that needs to be loaded into the workspace before streams are connected to the relevant equipment ports in the Drawing Editor.
Note: If users have existing PFDs before the KB was loaded, they can apply the KB's effect to those existing objects by running Reset All Demons under Tools menu in the Rules Editor.
This KB is loaded into the workspace using the normal procedure. If you're not familiar with using KB scripts, refer to this article:
How to use KB Scripts in ABE? https://esupport.aspentech.com/S_Article?id=000062079
How to use the Automap:
1. Create PFD drawing in the Drawing Editor.
2. Using the Simulation Importer, import user's simulation.
3. Map Streams and Equipment accordingly. Go to Material Ports tab, see that they have been auto-mapped.
4. Go to Results menu > click Transfer to complete the transfer. Data from the streams are now available in the equipment material ports.
Note for users who have a previous version of Autoport KB
Attached KB is a modified version of 'Autoport.azkbs'. The previous version caused an issue with PFD Symbols on stockpile disappearing (if you pick filter 'By Diagram') after you delete an object without deleting its connections. To correct the issue, this new KB removes the array element instead of clearing it for Remove operation.
For the existing empty mappings created by the previous KB, please run the attached QueryEditor script. It removes those empty mappings. If you have never used Autoport.azkbs before, you can ignore this note and proceed with loading the attached KB.
Keywords: Automatic mapping, data transfer, unit operation ports data, KB script examples, Rules Editor
References: None |
Problem Statement: Some customers may want to display the tag value with commas in a graphic (i.e., 1,000,000 instead of 1000000). | Solution: 1. Go to Aspen SQLplus and open the attached Commas_format.SQL
2. Modify the tag name “atcl101” to your tag name
3. Click on the menu Record | Save As...
4. Type a name such as Commas_format, select QueryDef, and click on Create
Select TSK_IQ1 and click on OK
5. Go to the InfoPlus.21 Administrator, search for the record Commas_format, and set #SCHEDULE_TIMES and #OUTPUT_LINES to 1
6. Double click on #SCHEDULE_TIMES and set SCHEDULE_TIME to a future timestamp, for example two minutes ahead. Set RESCHEDULE_INTERVAL to 1 sec.
After 2 minutes, you should be able to see the formatted tag value in #OUTPUT_LINES
7. Go to the Aspen Process Graphic Editor, add a Data field object, double click on it, go to the Data Source tab, and type Tag Name: Commas_format, Attribute: OUTPUT_LINE. Click on Apply | OK. You will see the formatted value; it updates every 7 seconds by default.
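The formatting the SQL query performs is just a thousands-separator conversion. For reference, the equivalent transformation in Python (illustration only — the article's approach uses SQLplus, not Python):

```python
def with_commas(value):
    """Format a number with comma thousands separators, no decimals."""
    return f"{value:,.0f}"

print(with_commas(1000000))    # 1,000,000
print(with_commas(1234567.8))  # 1,234,568 (rounded to a whole number)
```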
Keywords: Tag Value, Format, Commas, OUTPUT_LINE
References: None |
Problem Statement: Is it possible to trend SQL Server tags without creating SQLA tags in InfoPlus.21?
This KB article provides an example of how to trend SQL Server tags using Aspen Process Data (RDBMS). | Solution: 1. In SQL Server Management Studio, create a database named PD_RDBMS. Accept the defaults for database creation.
2. Run Create_RDBMS_Tables.sql from a database tool in MSSQL to create the required tables in the database; this will populate data for two RDBMS test tags: RAR_RDBMSTag1 and RAR_RDBMSTag2.
3. Create an ADSA data source that uses PD for RDBMS. You will need to supply the RDBMS server name (MSSQL), user name, and password. The TagsTable name should be TagsTable (this is the name of the table that is used to define the structure of an RDBMS tag).
4. To verify that the database tables and data got created correctly, open Aspen Process Explorer, search for all tags under the RDBMS data source, and then plot the two tags RAR_RDBMSTag1 and RAR_RDBMSTag2. You should see about 6 data points for each tag.
5. To plot the tags from aspenONE Process Explorer (A1PE), please perform a tag scan first. (http://localhost/ProcessExplorer/WebControls/PBItemDetails.asp?admin=true)
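As a sanity-check sketch, step 2 can also be verified by querying the new database directly in SQL Server Management Studio. The column layout of TagsTable is defined by the Create_RDBMS_Tables.sql script, so SELECT * is used here to avoid assuming any particular column names:

```sql
-- Run against the PD_RDBMS database created in step 1.
-- Lists the RDBMS tag definitions created by Create_RDBMS_Tables.sql;
-- you should see rows for RAR_RDBMSTag1 and RAR_RDBMSTag2.
USE PD_RDBMS;
SELECT * FROM TagsTable;
```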
Keywords: Trend SQL Server, Process Data, RDBMS
References: None |
Problem Statement: What is the difference between IP_TREND_QSTATUS and IP_TREND_QLEVEL in a tag's repeat area? | Solution: In the Aspen InfoPlus.21 (IP.21) Administrator a selector record titled QUALITY-LEVELS defines three unique descriptors for quality (good, bad and suspect). Another selector record, QUALITY-STATUSES, defines 73 unique descriptors for quality. This same record also maps each of its 73 occurrences to one of the three defined by QUALITY-LEVELS.
The difference between the fields IP_TREND_QSTATUS and IP_TREND_QLEVEL is as follows:
The 73 occurrences in the repeat area of QUALITY-STATUSES comprise all possible values for IP_TREND_QSTATUS and reflect the various statuses that can be retrieved from all of Aspen's interfaces. Since Process Explorer only operates with values of good, bad, and suspect, the IP_TREND_QLEVEL field is the Process Explorer equivalent to IP_TREND_QSTATUS generated by the interface. The mapping in QUALITY-STATUSES record relates the two.
QUALITY-STATUSES record is also used in the fixed area of a tag to define the fields IP_INPUT_QUALITY and IP_VALUE_QUALITY.
Examples:
QUALITY-STATUSES
Numerical Value    IP_TREND_QSTATUS    IP_TREND_QLEVEL
-64                High                Good
-62                ?-High              Suspect
-50                Unavailable         Bad
-6                 Bad                 Bad
-3                 Suspect             Suspect
-2                 No status           Good
-1                 Good                Good
0                  Initial             Good
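Both fields can be inspected side by side with a SQLplus query. This is only an illustrative sketch; atcai is a demo tag name, and the time window is arbitrary:

```sql
-- Lists each history occurrence with the interface status (qstatus)
-- and the good/bad/suspect level (qlevel) used by Process Explorer.
SELECT ip_trend_time, ip_trend_value, ip_trend_qstatus, ip_trend_qlevel
FROM atcai
WHERE ip_trend_time > '30-NOV-99 00:00:00';
```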
Keywords: qlevel
qstatus
QualityStatusDef
Select10Def
References: None |
Problem Statement: The following error is shown in the IO_DATA_STATUS within the transfer record:
Facility:77, Error: 77106
What does this mean?
According to the cimio_opc.def file (OPC errors) located in <drive>:\Program Files (x86)\AspenTech\CIM-IO\etc folder, the reason is unknown:
STATUS_NOSPECIFICREASON,77106,No specific reason is known | Solution: Error 77106 usually implies that something is wrong on the DCS or in the OPC server. This code means that the OPC server sends data to Aspen Cim-IO for OPC in a format that Aspen Cim-IO for OPC does not recognize. As a possibleSolution, try to read the same tag from another OPC client. If it fails, examine these points on the OPC server/DCS. If the test is successful, try performing a clean restart of Aspen Cim-IO Manager and the Aspen Cim-IO for OPC Interface as per the knowledge based article titled How do I perform a clean restart of an Aspen Cim-IO Interface with Store and Forward?.
Keywords: Facility 77
Error 77106
References: None |
Problem Statement: You are not getting data in the InfoPlus.21 database from your Cim-IO interface and there are many messages in the cimio_msg.log file on the Cim-IO server machine like:
CIMIO_DAL_CONNECT_CONNECT, Error connecting to service
CIMIO_MSG_CONN_SOCK_CREATE, Error creating an outbound socket
CIMIO_SOCK_OUT_CONN_FAIL, Error connecting to the server
WNT Error=10061 No connection could be made because the target machine actively refused it.
This knowledge base article explains what the No connection could be made because the target machine actively refused it error means. | Solution: This error generally means that either there is some type of problem with the services file or the DLGP for the Cim-IO Interface is not running. There is an entry in the services file on the Cim-IO server machine, located at
C:\Windows\System32\drivers\etc
for the DLGP Service name, the port number, and network protocol. If using the Store & Forward feature of Cim-IO, the services file contains entries for the scan, store, and forward processes. These entries are duplicated on the Cim-IO client machine's (InfoPlus.21's) services file. The actively refused error could indicate some type of problem with these file entries. Perhaps the spellings of the DLGP service names or the port numbers do not match between Cim-IO server and client or entries are missing.
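For illustration only, matching services file entries might look like the following. The service names and port numbers here are hypothetical; use the names and ports configured for your own interface:

```
CIOOPC          10001/tcp    # DLGP service
CIOOPC_SCAN     10002/tcp    # Store & Forward scanner
CIOOPC_STORE    10003/tcp    # Store & Forward store process
CIOOPC_FORWARD  10004/tcp    # Store & Forward forward process
```

Both the Cim-IO server and the InfoPlus.21 client machine must carry identical entries, with identical spelling and port numbers.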
The other common reason for the actively refused error message is that the DLGP for the Cim-IO interface is not running. The DLGP, the Device Logical Gateway Process, is the executable that communicates with the proprietary API supplied by the DCS or PLC vendor. For some Cim-IO interfaces, the DLGP may direct this communication to be done by read and write DIOPs, Device Input/Output Processes. In the actively refused situations, the DLGP is not running and needs to be restarted.
Keywords: None
References: None |
Problem Statement: How do I resolve Error 407: Proxy Authentication Required when trying to connect to aspenONE Exchange? | Solution: aspenONE Exchange requires your network to be configured to connect to external sites to retrieve information. The following Ports and URL's are required to be open for connectivity:
Ports:
· 443, 80, 9350 to 9354
URLs:
· *.servicebus.windows.net
· kbdocs.connect.aspentech.com
· connect.aspentech.com
· literature.connect.aspentech.com
· esupport.aspentech.com
· *.aspentech.com
Keywords: aspenONE exchange
aspenONE exchange can't connect
aspenONE exchange proxy authentication
aspenONE exchange proxy
aspenONE exchange IP address
aspenONE exchange ports
Error 407 Proxy Authentication required
References: None |
Problem Statement: This article explains how to enable the QP algorithm for DMC3 controller Steady State Optimization. | Solution: When defining the CV Ranks and SS ECE tuning parameters in DMC3 Builder, the controller uses LP optimization by default. To enable the QP algorithm for different CV ranks, tick the Show rank groups checkbox.
Enabling the 'Use QP' column for a CV rank switches the controller to QP optimization for that rank.
The procedure is the same when using Smart Tune; QP can be enabled as below.
Keywords: DMC3 controller
DMC3 Builder
QP
References: None |
Problem Statement: This article describes the procedure to enable Smart Tune for a controller application that is imported from a CCF (legacy based) into the DMC3 Builder platform.
As such, the CCF would be using the traditional optimization method.
After importing into Aspen DMC3 Builder, Smart Tune should be manually enabled and configured in order to utilize this feature. | Solution: The procedure below describes the steps to enable Smart Tune in Aspen DMC3 Builder.
a. Open DMC3 Builder and navigate to Controller selection at bottom left hand side.
b. Once you select Controller, in DMC3 Builder, Import option will be available on top left hand corner.
c. Select Import -> Import application -> Select the recent CCF.
d. Notice the controller details are all available at left hand side.
e. From left side, select Optimization -> Smart tune (Tool bar at top)
f. Strategy section of Smart tune will be available and select the default to go to evaluate strategy.
For V10, Smart tune will automatically take the economic information that was set in CCF.
In case this is a new CCF and no economic information has been put in CCF, then fresh Smart tune configuration should be done.
For more information about the details of Smart Tune configuration, please also refer to the KBs below:
https://esupport.aspentech.com/S_Article?id=000045164
https://esupport.aspentech.com/S_Article?id=000006909
Keywords: Smart Tune
DMC3 builder
CCF to DMC3
ST configuration
References: None |
Problem Statement: Where can I find the messages generated in PCWS? | Solution: There are basically two locations where the messages generated from PCWS are saved.
The first is the eng file created in the folder of each DMCplus application under the path C:\ProgramData\AspenTech\APC\Online\app.
This log shows the events of the controller from the moment the controller starts. If the controller stops, the log stops as well, and it is written again when the controller is back on. However, keep in mind that this log is overwritten when the controller starts again, so if you want to keep it we suggest creating a backup of the file.
The second place to look is the Windows Event Viewer. The Event Viewer saves all the messages from PCWS regardless of whether the controller is started or stopped.
To check this log, open the Event Viewer from the Windows Start menu. Then double click on Applications and Services Logs; this will show all the APC applications. Click on the Aspen DMCplus folder to display the messages of all DMCplus applications that you are currently using.
A major difference between this log and the eng file is that the Event Viewer can display all the messages (current and past) that the controllers have shown, and these can be filtered by date or application. (As an additional note, this last option could take a couple of minutes depending on the quantity of messages per application.)
To check a specific message, filter by application, look at the time around which the message appeared in PCWS, and click on Details to see the specific message.
Keywords: DMCplus, PCWS
References: None |
Problem Statement: How to change the location of a repository currently in use? For example, moving the current repository from the C drive to the E drive. | Solution: IMPORTANT NOTE: This procedure assumes that you do not have any unmounted filesets in your historian (i.e. no filesets with FS_STATUS_FLAG = None). If you do have any, delete or shift into them first.
1. Create a new folder on the E drive (e.g. E:\arcs). This will be the new location of the repository.
2. Share the new folder with a different Share Name.
3. Create a new subfolder within the new repository folder, with the name arcsXX, where XX is 1 plus the total number of filesets you have now (e.g. E:\arcs\arc21 if you have 20 filesets). Naming conventions may differ according to users.
4. In the InfoPlus.21 Administrator, go to nodename-> Historian-> repository_name-> File_Sets. Right-click and Add File Set. Set the fileset path to the new repository subfolder (e.g. E:\arcs\arc21). Click Create and then Close.
5. Stop and restart InfoPlus.21 so the new fileset is added into the Historian.
6. In the InfoPlus.21 Administrator, go to nodename-> Historian-> repository_name-> File_Sets. Right-click and Shift File Set. This should shift archiving to the new fileset in the new location, since it is the only fileset with FS_STATUS_FLAG = None. Note that it may take a few minutes before the fileset status is updated. Look in the fileset summary to confirm that the new fileset is now the active fileset.
7. In the InfoPlus.21 Administrator, go to nodename-> Historian-> repository_name. Right-click and select Stop. This will stop the Historian.
8. Copy all existing filesets to the new repository location (e.g. E:\arcs).
9. In the InfoPlus.21 Administrator, point the Historian to the new location by changing the REPOS_FILE_PATH field.
10. Similarly, point all the current filesets to the new location by changing the FS_FILE_PATH field for each individual fileset. You may wish to use the chgpaths utility if you have many filesets (refer to Solution #106366). Note that you must stop IP.21 first before using the chgpaths utility.
While the Historian is down, data is still being inserted into the queues in memory. If it takes a long time to copy the files to the new location, the queues will eventually become full, and incoming history data will be written to disk - an EVENT.DAT file will be created in the old repository location. To avoid any loss of data, copy this EVENT.DAT file to the new repository location after InfoPlus.21 has been shut down.
11. Stop InfoPlus.21. A SAVE.DAT file will be created in the old location. This file is a saved copy of the memory queues.
12. Copy the EVENT.DAT, CACHE.DAT AND SAVE.DAT files to the new repository location.
13. Restart IP.21.
14. The SAVE.DAT file should disappear immediately, and the EVENT.DAT file should unbuffer after some time.
15. Check the new error.log file for any errors.
16. Check that the CUR_REPOS_FILE_PATH and CUR_FS_FILE_PATH fields are all pointing to the new locations, and ensure that data is being inserted into history.
If you wish to move more than one repository, we recommend that you move them one at a time.
IMPORTANT: You should save a snapshot and backup the config.dat, tune.dat and map.dat files after you have completed the procedure.
Keywords: change location
repository
fileset
References: None |
Problem Statement: What is the difference between using the CCF and the DBG file to run Aspen DMCplus Simulate? | Solution: The CCF file contains a snapshot in time of the controller (current values, tuning, calculation results, etc.) as well as the Cim-IO connection information. It contains no history, and therefore no future predictions, so Initialize Controller is required on the first step. Using Simulate with a CCF can be done without ever having the controller online, and since it has no history, all predictions are flat-lined for the future (no MV movement, therefore CVs are predicted to not move).
You may use the CCF in Simulate for testing or for tuning purposes.
The DBG file contains a picture of the current online controller. Like the CCF, it contains the current values, tuning, calculation results, etc. Unlike the CCF, it does not contain the Cim-IO connection information but it does have a history of Current Values (or more correctly: a future of predicted values). Using Simulate with a DBG file should allow debugging a real-world online controller. It won't have to Initialize Predictions on the first step because it already has a set of good predictions.
Loading the .dbg file would be helpful in troubleshooting a possible issue.
Keywords: None
References: None |
Problem Statement: This knowledge base document provides a short description of the common tasks that are defined in the Aspen InfoPlus.21 Manager. | Solution: Task Name
Description
TSK_C21_WIN_INIT
Initializes/enables inter-process communications. Continues to run after startup.
TSK_H21_INIT
Creates and loads shared memory for the IP.21 historian. Runs and terminates during startup.
TSK_H21_ARCCK
Mounts, checks and verifies every repository and fileset during startup. It will also repair any damaged filesets. Runs and terminates during startup.
TSK_H21_MNTTAB
Applies any modifications to the way archive filesets are mounted. Runs and terminates during startup.
TSK_H21_PRIME
Starts an archive program for each of the repositories. Runs and terminates during startup.
TSK_DBCLOCK
Creates shared memory for the database and updates internal database time. The Command line parameters field accepts two arguments: an optional DOUBLE keyword and the size of shared memory in database words.
Loaddb
Loads a snapshot into shared memory. Name and path of snapshot is defined in the Command line parameters field.
Note 1: All the above tasks MUST be started in that particular sequence to work!
Note 2: As of version 2006.5, all of the tasks shown above (with the exception of TSK_H21_PRIME) have been combined into a single task - TSK_DBCLOCK. A list of the tasks subsumed into TSK_DBCLOCK follows:
TSK_C21_WIN_INIT
TSK_H21_INIT
TSK_H21_ARCCK
TSK_H21_MNTTAB
LOADDB
Furthermore, TSK_H21_PRIME now starts after TSK_DBCLOCK.
Task Name
Description
TSK_ACCESS_SVC
Enables remote access of the database via a WCF interface. This WCF service host is needed to allow the Find operations and the Excel Configuration Tool in IP.21.
TSK_ADMIN_SERVER
Provides remote access for IP.21 Administrator and IP.21 Definition Editor.
TSK_APEX_SERVER
Provides remote access for all Process Explorer clients.
TSK_BATCH21_SRVER
Enables remote access of the database. Provides remote access for all Aspen Production Record Manager/BCU clients.
TSK_BGSNET
A listener process for incoming GCS connection requests. It will create a new process for each incoming connection.
TSK_CIMQ
External task for Q product (a Statistical Process Control based package). Forms subgroups (averages) and generate limits for Q records.
TSK_DBCLOCK
Creates shared memory for the database and updates internal database time. The Command line parameters field accepts three command line arguments; the first one is an optional DOUBLE keyword, the second one is the size of shared-memory in database words, and the third one is the snapshot file name which it will load into the shared-memory (depending on the snapshotlist.config file).
TSK_DEFAULT_SERVER
Enables remote access of the database. This particular RPC-Server provides remote access for all clients that are not connecting to the other four servers.
TSK_EXCEL_SERVER
Provides remote access for clients using Excel add-ins.
TSK_EZTR
An external task to support EZTrend displays for GCS. It is responsible for processing user requests on EZTrend displays.
TSK_H21_PRIME
It starts an archive program for each of the repositories. It runs at startup time and terminates after starting all of the required archive programs.
TSK_H21T
Synchronizes the history configuration parameters between the database and history shared memory.
TSK_HBAK
Processes records defined by HistoryBackupDef.
TSK_HLTH
Executes InfoPlus.21 health tests in response to activations of health test records.
TSK_IQ1
Processes record based SQL+ queries, for example QueryDef and CompQueryDef records.
TSK_KEYS
An external task that processes records defined by SimulatedKeysDef. It will send simulated keys to GCS consoles.
TSK_KPI
External task record that processes activations of records defined by KPIDef, BatchKPIDef, and KPIScheduleDef.
TSK_MVAM
MVAMon.exe allows you to run models, write model data to the Aspen InfoPlus.21 database, and trigger actions, should a statistic value reach an alarm state.
TSK_OPCU
OPC UA client, new in V7.3. Set to skip starting by default.
TSK_OPCUA_SVR
OPC UA server, new in V7.3.
TSK_ORIG_SERVER
Provides remote access for all pre v3.0 clients.
TSK_PLAN
Processes records defined by PlantApDef, ScheduledActDef, CalculationDef and COSActDef.
TSK_REPL_PUB
Acts as the replication publisher when replication is enabled.
TSK_REPL_SUB
Acts as the replication subscriber when replication is enabled.
TSK_SAVE
Processes records defined by DatabaseSaveDef. Saves a snapshot of the database when IP.21 is shutdown.
TSK_SQL_SERVER
A multi-threaded server process, sqlplus_server.exe executes SQLplus queries from the SQLplus Query Writer, Tag Browser, and Desktop ODBC (the SQLplus ODBC driver).
TSK_SQLR
Default external task used by automated reports in SQLplus Reporting, the SQLplus web client.
Other common external tasks:
Task Name
Description
TSK_M_*****
CIMIO main client task. Processes all requests for data and handles synchronous data transfers. Eg. TSK_M_OPC.
TSK_A_*****
CIMIO asynchronous client task. Handles asynchronous data transfers. Eg. TSK_A_OPC.
TSK_U_*****
CIMIO unsolicited client task. Handles unsolicited data transfers. Eg. TSK_U_OPC.
Keywords: IP.21 Manager
task
external task
References: None
Problem Statement: After a hard restart of the online server, the IQ application seems to be running, but the calculated predictions are frozen even though the inputs are changing. Reloading the IQ application and/or deleting the history file does not help. No error files are created. | Solution: Turn ON the debug diagnostics for the IQ application (Debug Print Switch). Please refer to KB 121401 for how to obtain debug files. Once the diagnostic information is generated, examine the iqp1_{IQ application name}.dbg file. If the file shows the following messages
03/10/16 13:03:33: P:XXXX: Application: XXXX : Starting SV Calculation
03/10/16 13:03:33: P:XXXX: Application: XXXX : Starting PR Calculation
03/10/16 13:03:33: P:XXXX: Application: XXXX : Initializing PR Calc
03/10/16 13:03:33: P:XXXX IQ: YYYY ** Prediction Calculation Initialization **
every cycle and there is no other meaningful data in the debug file, it indicates that the Aspen Calc module associated with this IQ application is corrupted.
To fix this problem, the corrupted Aspen Calc module needs to be deleted, which will force IQ to reload a new module from the iqf file. Listed below is the detailed procedure on how to obtain a new Aspen Calc module:
- Stop and unload the affected IQs
- Go to C:\ProgramData\AspenTech\Aspen Calc\Apps\IQonline\Calc
- Delete the .atc files associated with the malfunctioning IQs
- Reload and start the IQ applications
- Confirm new .atc files are created and that the IQ is executing normally.
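As a sketch only, the file-cleanup portion of the procedure could be done from an elevated Windows command prompt. MYIQ below is a placeholder for the name of the affected IQ application; stop and unload the IQs first, as described above:

```
:: Remove the corrupted Aspen Calc module(s); a new .atc file is
:: rebuilt from the iqf file when the IQ application is reloaded.
cd /d "C:\ProgramData\AspenTech\Aspen Calc\Apps\IQonline\Calc"
del MYIQ*.atc
```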
Keywords: Aspen IQ
Aspen Calc
Frozen prediction
References: None |
Problem Statement: AspenTech distributes two different types of Aspen Manufacturing Suite (AMS) patches.
One is what we would call a Cumulative patch, whereas the other would be an Emergency patch.
In particular, the Cumulative patch would typically be an 'msi' patch that can be viewed via the Aspen Update Center, whereas the Emergency patch is distributed as a zip file or a non-msi patch.
This article discusses the situation of downloading a cumulative msi patch, invoking the installation by double-clicking on the downloaded exe file, and then after a few minutes getting the error message:
Could not open cached msi file - 2147467259: open database, database path, open mode
possibly followed soon afterwards by an error referring to atprogbar.html.
Solution:
The error itself was generated because the patch install needs to find the msi file that exists from the original installation.
Those msi files are stored in C:\Windows\Installer
To find the correct msi file, the installation first checks the registry
HKLM\Software\Microsoft\Windows\CurrentVersion\Installer\Userdata\S-1-5-18\products - to find the original installation information (correct msi file)
The error means that it cannot open the specific msi file for some reason (non-existent, corrupt, protected etc..).
A very common cause of the error is the usage of Microsoft's Terminal Server connectivity to install the patch. The standard Terminal Server session blocks access to the required msi files.
The resolution in this case is to use a Console Session for the installation.
It is a documented Microsoft limitation that such a patch should NOT be installed using a standard Terminal Server session.
Unfortunately, AspenTech has no control over this Microsoft restriction.
Keywords: None
References: None |
Problem Statement: If using S&F, the IO_ASYNC field must be set to YES in the Logical Device record and the transfer records. | Solution: The IO_ASYNC? field (located in records defined by IODeviceRecDef and by any of the transfer records such as IOGetDef) indicates if an asynchronous process is allowed. If 'Yes' is selected, the main client task sends data requests to the Cim-IO server tasks but does not wait for a reply; the transfer is purely asynchronous, with no synchronous involvement.
Store and Forward requires that you use asynchronous transfer. If you do not set IO_ASYNC? to 'Yes' then an initial read will be performed but no subsequent read requests will be performed unless the transfer record is activated. The result would be that the timestamp and value of the tag will not be updated after the initial read.
Another question that has been posed in conjunction with the setting of IO_ASYNC? is about what the value should be set for the IO_TIMEOUT_VALUE field. You don't need to configure this field when IO_ASYNC is set to 'Yes'. This field is only for synchronous configuration. The IO_TIMEOUT_VALUE field specifies the time-out value that the main Cim-IO client task is to use during synchronous read operations. Therefore, this field does not apply to asynchronous requests.
Before changing IO_ASYNC? and IO_TIMEOUT_VALUE, please turn IO_RECORD_PROCESSING 'off' first, and after making the changes, turn IO_RECORD_PROCESSING 'on' again.
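These changes are normally made through the InfoPlus.21 Administrator, but as an illustrative sketch they can also be scripted in SQLplus. The transfer record name Get_Device1 below is hypothetical, and field names ending in ? must be double-quoted:

```sql
-- Turn record processing off, change the field, then turn it back on.
UPDATE iogetdef SET io_record_processing = 'OFF' WHERE name = 'Get_Device1';
UPDATE iogetdef SET "IO_ASYNC?" = 'Yes' WHERE name = 'Get_Device1';
UPDATE iogetdef SET io_record_processing = 'ON' WHERE name = 'Get_Device1';
```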
NOTE: Keep in mind there are 5 fields that Store & Forward uses.
IODeviceRecDef: IO_ASYNC?, IO_ASYNC_TASK, and IO_STORE_ENABLE; and for transfer records such as IOGetDef: IO_ASYNC? and IO_FREQUENCY. For additional information on the usage of these fields, please refer to KB 105250 and KB 116627.
Keywords: IO_ASYNC?
IO_TIMEOUT_VALUE
Store and Forward
no value update
References: None |
Problem Statement: How to import assays from an external source in Assay Manager. | Solution: The following steps show how to import assay data from an external source (e.g., the TOTAL website), using an example:
1. Download the Excel file from the external library (in this case the TOTAL website)
2. Open Assay Manager
3. Characterize the existing assays
4. Go to Model Assays tab
5. Go to import file option on top
6. Select Import from file and you will get a dialog box. Select the appropriate assay data format depending on where you are getting the assay data. Add the excel file to the dialog box and click import.
7. Once added, characterize the new assay data. In the description for the crude, add the PIMS code.
8. Now go back to the PIMS TABLE tab and click on cut crudes option. Click on Add button and add the imported crude.
Once this is done, generate cutting results and update PIMS tables. Make sure you update BUY and CRDDISTL tables.
Keywords: External source
Importing assays
References: None |
Problem Statement: How can I set up flange configuration in the Relief Valve Editor? | Solution: You can model a source flange effect in the Relief Valve Editor.
When you specify the Relief Valve flange diameter under the Conditions tab in the Relief Valve editor, Aspen Flare System Analyzer (AFSA) introduces an expander to perform the swage calculations using the settings in the Swage section under the Methods tab for that relief valve. On the contrary, if you leave the Valve flange diameter empty, AFSA assumes it to be the same as the downstream pipe diameter and does not consider a source flange effect.
Alternatively, you can also represent a source flange by adding a dummy pipe with zero length but the same diameter as the flange, then connect it to the tailpipe with a Connector (Refer to KB Article 000032253). To get the same calculation results between two methods, ensure the Methods configuration under Calculations tab in Connector Editor is the same as you set under Methods tab in the Relief Valve Editor.
Note that when you check the Include kinetic energy option in the Calculation Setting Editor, two methods may give slightly different results, but the deviation should be negligible. When you use Rated flow for tailpipe calculation, make sure to check Rated flow for downstream nodes attached to tailpipes option in Calculation Options Editor so that Rated flow can be applied to Connector calculation as well.
Refer to the attached sample case showing that two methods give the same results such as PSV back pressure.
Keywords: Flange, Relief Valve, PSV
References: None |
Problem Statement: How to model columns with multiple overhead condensers. | Solution: The following diagram illustrates how to use RadFrac to model a column with two overhead condensers. The basic idea is to simulate the first condenser using stage 2 with a heat stream.
In the above configuration, the stages are treated as follows:
Stage 1 --- 2nd condenser
Stage 2 --- 1st condenser
Stage 3 --- top tray of the actual column
Install a side heater on stage 2 to model the first condenser. Use a pumparound to return the liquid from the second condenser (stage 1) to the top of the column (stage 3).
Convergence is much faster with this integrated approach than setting up a separate block for the 2nd condenser with a recycle.
In the attached example, the reflux ratio for the double condenser configuration has been set using a design specification where the heat load in the first condenser is adjusted to meet the specification target.
Keywords: RadFrac, double condenser, heat streams.
References: None |
Problem Statement: When trying to start the CIMIO to OPC Interface service, it fails with the pop-up message Error 1067 Process Terminated Unexpectedly. Similar information may also be present in the NT Event Viewer.
The 1067 error may be accompanied by the following message in the CIMIO_MSG.LOG.
Fri Mar 09 14:33:45 2001, Logged by CIMIOOPCMGR on node <node name>: OpenScManager Failed
, error: 5 | Solution: There are two reasons that the error messages may appear.
The first is that the account used to start the interface may not have proper privileges. The following information comes from the CIM-IO for OPC User's Manual.
Before you install the interface, make sure that you determine the username and the password of the account that will be used to start the CIM-IO for OPC interface. Also, the account needs to have administrative privileges.
It is also possible that the name and password are not entered properly for the service (in Settings | Control Panel | Services). Confirm that the log on information is entered correctly for the service.
Keywords: manager
services
References: None |
Problem Statement: How is pipe wall temperature defined in Aspen FLARENET? | Solution: The Wall Temperature reported in the P/F Summary is the internal wall temperature, and the External Temperature is the outside wall temperature. External Temperature in Aspen FLARENET is not simply the metal pipe wall itself but includes the insulation thickness if there is insulation. The reported wall temperature in the P/F summary is an average over the length of the pipe.
To get wall temperatures shown, you need to do the following:
1. Select the Calculations | Options menu item, and make sure the Enable Heat Transfer option is checked (on the General page tab).
2. On each pipe, go to the Heat transfer tab and select Yes for the Heat Transfer Enabled option.
Keywords: temperature, pipe wall, pipe wall temperature, wall, wall temperature, pipe temperature
References: None |
Problem Statement: How do I export data from Aspen FLARENET to Excel? | Solution: There are a couple of different ways to export data from Aspen FLARENET to Excel. The first method is to print the desired data to a file, and open this file in Excel. Follow the steps below to do this:
1. Open your Aspen FLARENET case and from the File menu, select Print.
2. Use the checkboxes to select the data that you would like exported to Excel. Check the Print to File box and then click Print.
3. Select a file name and location and save the file as a .prn file.
4. Start Excel. Open the file that you saved in Step 3 and keep clicking Next. Finally click Finish.
Another way to export data from Aspen FLARENET to Excel is to use the Export Wizard; this tool can also be used to export data in the form of Microsoft Access database (*.mdb) files and XML (*.XML) files. Detailed instructions for using the Export Wizard can be found in the Printing, Importing and Exporting section of the Aspen FLARENET Manual.
Keywords: export, Excel, FLARENET
References: None
Problem Statement: How to delete a stream? | Solution: Select the stream name under the Data | Process Streams form, then press the Delete button.
Keywords: Delete, stream
References: None |
Problem Statement: When trying to specify a variable of assay components, one cannot find the assay component name in the stream variables form. Is it possible to specify assay components as design spec variables? | Solution: Yes. You can access assay components in the Variable Definition dialog box only after specifying the pseudocomponent Generation and Naming Options.
The procedure is as follows:
1. Create a new object under Components | Petro Characterization | Generation.
2. Select assay/blend ID(s) from the drop-down list and enter a weighting factor if needed.
3. Specify cut points in the Cuts form.
4. On the Naming Options sheet, select User defined list and enter cut IDs.
The specified pseudocomponent IDs can then be found in any Variable Definition dialog box accessed from the Define sheet for Design Spec, Sensitivity, etc.
Keywords: petro
petroleum
ada
assay
generation
pseudo
pseudocomponent
characterization
variable definition
design spec
sensitivity
References: None |
Problem Statement: Need to insert old data into an already existing record that contains a history repeat area. The repeat area contains existing history. | Solution: Use the INSERT INTO history method.
For example:
INSERT INTO atcai (ip_trend_value, ip_trend_time, ip_trend_qstatus)
VALUES (4.5, '30-NOV-99 12:30:00', 'GOOD');
When inserting a lot of historical data to many tags, it might be a good idea to use 2 queries where one passes parameters to the other. A text file of some sort (.csv, .txt) would contain the name, value, and timestamp to be loaded.
For example:
-- main.sql
LOCAL rname RECORD;
LOCAL ptime TIMESTAMP;
LOCAL rvalue INT;
-- *** read rname, rvalue and ptime from a .csv file ***
START 'sub.sql',rname,rvalue,ptime
-- sub.sql
INSERT INTO &1 (ip_trend_value, ip_trend_time) VALUES (&2, '&3')
NOTE: For more information on inserting into history, see the SQLplus Users Manual or the online help. Also, see Solutions #100949 and #103040 for additional information about inserting older data into history.
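As an alternative to the two-query approach above, the per-tag INSERT statements can be generated offline from the text file and then executed in SQLplus. A sketch, assuming the same name/value/timestamp CSV layout shown above:

```python
import csv
import io

# Each CSV row: record name, value, timestamp (assumed layout)
csv_text = "ATCAI,4.5,30-NOV-99 12:30:00\nATCAI,4.7,30-NOV-99 12:31:00\n"

statements = []
for name, value, stamp in csv.reader(io.StringIO(csv_text)):
    # Build one INSERT INTO history statement per row.
    statements.append(
        "INSERT INTO %s (ip_trend_value, ip_trend_time) VALUES (%s, '%s');"
        % (name, value, stamp)
    )
```

The generated statements can then be pasted into, or run from, a SQLplus query window.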
Keywords: insert
history
References: None |
Problem Statement: Determine the value of the Universal Gas Constant R in HYSYS | Solution: The value of Universal Gas Constant R can be calculated using the HYSYS Spreadsheet function.
In the attached example file, the IDEAL fluid package is used from the Aspen Properties database. A stream is created to provide the variables required by the equation PV = nRT.
It is also possible to calculate R for any fluid package from the ideal gas density in Properties for any stream.
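The spreadsheet calculation amounts to rearranging PV = nRT for R. As a quick sanity check outside HYSYS (the numbers below are standard SI values for one mole of ideal gas at 0 degC and 1 atm, not values read from the example file):

```python
# Ideal gas: R = P*V / (n*T)
P = 101325.0      # Pa (1 atm)
V = 0.022414      # m^3, molar volume of an ideal gas at 273.15 K and 1 atm
n = 1.0           # mol
T = 273.15        # K

R = P * V / (n * T)   # J/(mol*K), approximately 8.314
```

The same rearrangement is what the HYSYS spreadsheet performs with the stream variables.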
Keywords: Universal Gas Constant R
References: None |
Problem Statement: Unable to view AFW roles when a Domain User is Member of Local Group | Solution: When a Domain User is a member of a Local Group that has been assigned to an AFW Administrator role, logging in with that Domain User account will not fetch AFW Roles in the AFW Security Client Tools.
The gnLocalGroupsLevel variable is configured by default not to allow Local Groups.
The following are the parameters available for gnLocalGroupsLevel variable:
0 : No Local Groups anywhere
1 : Server\Administrators Group, no Client Groups
2 : Server\All_Local_Groups, no Client Groups
3 : Server\All_Local_Groups, Client\All_Local_Groups
Update the gnLocalGroupsLevel variable in the appropriate file, as follows:
1. Verify the configuration of AFW Tool Server Registry URL
http://Hostname/AspenTech/AFW/Security/pfwauthz.aspx or
           http://Hostname/AspenTech/AFW/Security/pfwauthz.asp
2. Go to C:\inetpub\wwwroot\AspenTech\Afw\Security
3. Update pfwauthz.aspx.vb or pfwauthz.asp.vb file depending on your URL.
4. Change the value of gnLocalGroupsLevel variable to 3 to allow all local groups
Example:
Before: Dim gnLocalGroupsLevel As Integer = 0 ‘Default level for local groups
After:    Dim gnLocalGroupsLevel As Integer = 3 ‘Default level for local groups
5. Restart AFW related services for changes to take effect.
Keywords: AFW Roles
Domain
Local
Group
gnLocalGroupsLevel
Security
References: None |
Problem Statement: Upgraded from 5.0 to 5.4 and now gets an error message: USR_GET_CONNECT, ERROR CONNECTING DEV (error connecting to device) | Solution: This error can occur if you do not do a CIM-IO shutdown before upgrading. By running the upgrade without a shutdown, a store file may be built. Immediately after the upgrade, the store file will not be readable by the newer software, and the user will see the above error message. Make sure to check the
C:\Program Files\AspenTech\CIM-IO\io\
directory for an old store file.
Keywords: usr_get_connect
error connecting dev
References: None |
Problem Statement: I logged in with the wrong password and I can't get back into Aspen Retail after using the correct password. | Solution: Once the wrong password is typed, you are unable to connect to the database during the current session. This is normal behavior when you are running MapPoint. Since MapPoint does not shut down when you enter an invalid password for Aspen Retail, you are unable to log back into Aspen Retail again.
The workaround is to end the Aspen Retail process, SUPPLY.EXE, with the task manager.
This issue will be fixed in a future version.
Keywords: Logon
Password
References: None |
Problem Statement: How can I save/print the Aspen FLARENET PFD in PDF format? | Solution: To save the PFD in PDF format, first ensure that the 'Use wire frame icons' check-box is activated on the File | Preferences | PFD tab, as this is the only type of graphical output that can be converted to PDF format.
After verifying the aforementioned check-box is activated, press the 'Save image as windows meta file' button on the PFD Toolbar. In the ensuing dialog box, specify the desired location and name of the file. Prior to clicking the 'SAVE' button, make certain the 'Save as type' option has been set to PDF (*.pdf).
Keywords: PDF, PFD, print, save, preferences
References: None |
Problem Statement: What are the rating, design and debottleneck calculation modes in FLARENET? | Solution: ? Design: Calculate Diameter (D) of pipe, given flow rate (W), temperature (T), and Pressure (P) with constraint of Ps (MABP), Velocity, Mach number and noise
o Note: The pipe diameter will increase in the direction of flow
Rate: Calculates Ps (static pressure), Velocity, Mach, Noise, given W, T, and P with constraint of Diameter for all Pipe.
Debottleneck: Same as in design, but you cannot have the diameter smaller than already specified in the network.
Keywords: design, rating, debottleneck , calculation, calculation mode
References: None |
Problem Statement: I'm getting a PH Flash Failure message when running my Aspen FLARENET case. What does this error mean and how do I correct it? | Solution: The PH Flash Failure message indicates that a flash calculation that Aspen FLARENET is attempting has failed for some reason.
This can occur for various reasons. Some things to check are:
- Ensure that your pipe segments are connected consistently, i.e., all of the connections are blue to red.
- If you have a knockout drum in your system, ensure that there will be a non-zero flow downstream of the knockout drum.
- Ensure that Heat Transfer calculations are enabled/disabled consistently across the flowsheet. Having heat transfer calculations enabled for isolated pipes can sometimes lead to this error message.
This error may appear during intermediate calculations while Aspen FLARENET is solving; in many cases the calculations continue and the flowsheet will converge. To see if this is the case, view the messages under View | Results | Messages | Solver. If the last message before Calculation Time is a PH Flash Failure, then the flowsheet has not converged and the results cannot be trusted. If you are unable to resolve the issue, please contact technical support for assistance at [email protected].
Keywords: unconverged, PH flash, failure
References: None |
Problem Statement: Why do the noise calculations performed by Aspen Flare System Analyzer give different results from the ones obtained per API 521? | Solution: The noise equation included in API 521 is applicable only for a point of discharge to the atmosphere (i.e. a Flare Tip). It calculates the sound pressure level generated by a relief load measured at a fixed distance of the observer from the source of the sound.
On the other hand, the noise equation used by Aspen Flare System Analyzer is a legacy empirical equation which was built to calculate the noise a person on-site will experience at a given distance from the source. This equation is fully applicable to measure the sound pressure level generated by pipes, which are the sound sources taken into account on the expression.
Therefore, the results will differ because the scope of each equation is different.
Keywords: Noise, Sound Pressure Level, SPL, Different
References: None |
Problem Statement: Is it possible to make Aspen Plus solve the flowsheet automatically every time that a change is made? How do I select the auto-run mode in Aspen Plus so that the flowsheet is solving continuously as every block specification is completed? | Solution: Aspen Plus has an auto-run mode so that the flowsheet re-solves automatically as every block specification is completed. It is generally not recommended to use Auto Run because it can lead to issues with the forms updating slowly, as the simulation is always running. The recommended setting is the default one.
To turn on Auto-Run
1. In the Home Ribbon, in the Run tools, click on the bottom arrow to open the Run Settings window
2. In Run Settings | Options / interactive runs, select Auto-Run mode and then click OK
The simulation will now run automatically after every input is complete.
Keywords: Run mode, Auto Run, Run settings
References: None |
Problem Statement: How do you insert a conventional non-databank component in an Aspen Plus simulation? | Solution: If you have searched all available Aspen Plus databanks to find the component you need, and failed to find it, complete the following process:
1. Define the non-databank component. On the Components | Specifications form, define the Component ID and leave the formula and component name blank.
2. Determine which parameters are required for the simulation.
For all simulations, Molecular Weight (MW), Extended Antoine vapor pressure parameters (PLXANT), Ideal gas heat capacity parameters (CPIG or CPIGDP), and Heat of vaporization parameters (DHVLWT or DHVLDP) are needed. This table can be found in the help under Using Aspen Plus | Basic Features | Entering Data for Simulations | Physical Property Parameters and Data | Determining Property Parameter Requirements.
The other parameters required will depend on the property method used. Refer to the Physical Property Methods and Models reference manual. The descriptions of the property methods contain tables of the parameters needed. This information can be found in the help under Aspen Plus | Physical Property Methods and Models | Physical Property Methods | Property Method Descriptions.
3. Obtain these required parameters from literature, experimental sources, property estimation (PCES), or data regression (DRS).
See the help under Using Aspen Plus | The Aspen Physical Property System for more information on Estimating Property Parameters and Regressing Property Data.
4. Enter the required parameters. Enter unary, binary pair, or ternary parameters on the Properties | Parameters forms.
Alternatively, you can use the User-Defined Component Wizard to start to define the properties needed for conventional, solid, and nonconventional components. You can modify the parameters supplied at any time by returning to the User-Defined Component Wizard or by going to the forms where the information is saved.
Use this wizard to define components that are not in any pure component databanks. You can define conventional components, solid components, and nonconventional components. The wizard also helps you enter commonly available data for the components, such as molecular weight, normal boiling point, vapor pressure and heat capacity data.
To open the User-Defined Component Wizard, click the User Defined button on the Components | Specifications | Selection sheet.
Keywords: None
References: None
Problem Statement: How to preserve curve operations when moving a model between DMCplus Model project files (.dpp file)? | Solution: The following options can be used to preserve curve operations when moving a model from one .dpp to another.
1. The simple method to preserve the model curve options is to save the old .dpp file with a different name and edit the copy .dpp file in such a way that the unwanted contents are selectively removed from the project.
2. An alternative is to export the model from the old .dpp as a .dpa model file and then import the .dpa into the new .dpp project file. Please note that the user will need to import all source file (cases and other models) from the old .dpp that are included as references in the model being imported.
3. Selectively import objects from the existing .dpp file into the new .dpp file. This requires the existing project to be not in use or open at the time of import. The following steps can be followed to correctly import an existing .dpp file into a new .dpp file–
a. Open the new .dpp file and click on File | Import | Project. Please note that the Import Project option is only available when the Project Outline Window is active.
b. On clicking Import Project, proceed to select the vectors, cases and models that need to be imported into the new .dpp file. A pop up appears with a message to automatically run the cases that you wish to import.
Click “YES” to re-run the cases and predictions. After completing the import click on cancel to exit out of the import project data toolbox.
c. Please note that the information about curve operations will be retained in old .dpp even if these items have been imported into the new .dpp.
Keywords: Preserve curve operations
Export project
.dpa file
References: None |
Problem Statement: This query displays all the transfer records defined by IOGetDef along with the logical device associated with the transfer record for a particular tag in the Aspen InfoPlus.21 database. | Solution: Local TagName;
TagName = 'ATCAI'; --Change this for the IP.21 tag name you want to search
select io_main_task->io_device as cimio_devName, Name, IO_VALUE_RECORD&&FLD, io_tagname
from iogetdef where IO_VALUE_RECORD&&FLD = TagName||' IP_INPUT_VALUE';
You can modify this query to search using a DCS/OPC address as follows:
Local TagName;
TagName = 'ATCDAI.PV'; --Change this for the DCS/OPC tag name you want to search
select io_main_task->io_device as cimio_devName, Name, IO_VALUE_RECORD&&FLD, io_tagname
from iogetdef where io_tagname = TagName;
Either query produces output similar to the following: in this example, IOSIMUL is the name of the logical device and AspenChem_Get is the name of the Get transfer record.
Both queries search transfer records defined by iogetdef. You can change the queries to search through other transfer records by substituting IoLongTagGetDef, IoLLTagGetDef, IoUnsolDef, IoLongTagUnsDef, IoLLTagUnsDef, etc. for iogetdef.
Keywords: SQLplus
Transfer record
Logical device
query
References: None |
Problem Statement: Is there an Aspen SQLplus query to search for a specific tag (or string) inside a QueryDef record? | Solution: Below is an example query to do this. The query works by using the POSITION function to search for a substring within another string. In this case the tag name is the substring and the whole string is a line in the QueryDef repeat area. The POSITION function returns an integer > 0 if the substring exists.
-- sample procedure to find a string in a QueryDef record
-- User must provide string (up to 24 characters in upper case) and record name;
procedure findchar24 (char24 char(24), qname record)
local i,j,k int;
local c char(80);
-- find number of lines in repeat area
j =(select #QUERY_LINES from QueryDef where name = qname);
-- search thru each line in repeat area
for i=1 to j do
c=(select query_line[i] from QueryDef where name = qname);
c=upper(c);
k=POSITION(upper(char24) IN c);
If k > 0 then
write char24 ||' was found in line of ' || i ||' in '|| qname;
end
end
end
local tname char(24);
FOR (Select name nym from QueryDef) DO
tname=nym;
findchar24 ('atcai',tname);
END
Result:
atcai was found in line of 8 in ATC_Calcs
atcai was found in line of 15 in ATC_Calcs
Keywords:
References: None |
Problem Statement: Where do I set the mass balance checking tolerance so that I get an error message when the block is out of mass balance by more than the convergence tolerance? | Solution: Mass balance checking is performed with a relative tolerance of 1e-4. This tolerance cannot be changed.
The Check mass balance error around blocks option can be enabled on the Setup | Simulation Options | Calculations sheet. If this option is checked, a warning message will be printed if there is an imbalance, along with a suggested cause for the imbalance.
Imbalances can occur for numerous reasons:
improper stoichiometry or yield fraction specifications
loose convergence tolerances
inconsistent user kinetic rates
flows changed by Fortran, Transfer, or Balance blocks
calculations in a User or User2 block
There is also a mass balance tolerance for reaction stoichiometry (STOIC-MB-TOL) that can be specified on the Setup | Simulation Options | Reactions sheet. This tolerance checks the mass balance of stoichiometry based on the stoichiometric coefficient and molecular weight of the components. You can select whether an error or a warning should be given during input processing if mass imbalance occurs.
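The STOIC-MB-TOL check described above amounts to verifying that the stoichiometric coefficients, weighted by molecular weight, sum to zero within tolerance. A sketch of that bookkeeping (the function name is illustrative; the molecular weights are standard values):

```python
def stoich_imbalance(coeffs, mol_weights):
    # Relative mass imbalance of a reaction:
    # |sum(nu_i * MW_i)| / sum(|nu_i| * MW_i), with reactant coefficients negative.
    net = sum(nu * mw for nu, mw in zip(coeffs, mol_weights))
    total = sum(abs(nu) * mw for nu, mw in zip(coeffs, mol_weights))
    return abs(net) / total

# CH4 + 2 O2 -> CO2 + 2 H2O
coeffs = [-1, -2, 1, 2]
mw = [16.043, 31.999, 44.010, 18.015]
imbalance = stoich_imbalance(coeffs, mw)
```

A balanced reaction gives a value near zero; a coefficient typo shows up as a large relative imbalance.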
Keywords: None
References: None |
Problem Statement: I have a simulation file with only one component and the mixture analysis is greyed out. How can I use the mixture analysis with one component? | Solution: The pure component analysis is the typical tool for when users have only one component. It includes a list of typical transport and thermodynamic pure component properties. However, it has limited options compared to the mixture analysis.
If there is only one component in the simulation, the Mixture analysis would be greyed out. In order to activate it, there should be at least three components in the components list. The steps to create the mixture analysis with only one component are:
1. Add two additional components that will not be used in the simulation.
2. Select the appropriate property method for the desired component.
3. Create a property set with the property or group of properties to be reported.
4. Use the qualifiers tab to select the component that will be reported.
5. On the Home Ribbon, select Mixture from the Analysis tool group.
6. Select the components and flow from the Composition box. Then select the properties to report from the list.
7. Select manipulated and parametric variables. The manipulated variable can be specified as either a range or a list of values.
8. Click on run the analysis.
9. Review the results. By default the analysis will open a graph; this window can be closed to review the results as a table.
Keywords: Analysis tool, mixture analysis, pure component, property sets
References: None |
Problem Statement: How do I uninstall Aspen PIMS? | Solution: The Aspen application installation comes with an uninstall program. This is always the first step to try. If this does not work, try the Windows Uninstall function from Control Panel | Programs and Features. If this still does not work, try the Windows Repair first, then the Windows Uninstall; this usually fixes the problem.
Keywords: Error 1702
References: None |
Problem Statement: How do I switch off the loop identification when designing a HX network? | Solution: In Aspen Energy Analyzer, when user tries to modify the exchanger network, the program may hang for a long time to identify the loops and paths.
In order to switch off this automatic loop identification, go to the Topology View and uncheck the Always Run Topology box.
Keywords: Loop identification
References: None |
Problem Statement: Should I heat integrate in the process simulator to get the base model or import it without heat integration? | Solution: It is a good idea to integrate as much you can in the process simulator (Aspen Plus or Aspen HYSYS) and then import it into Aspen Energy Analyzer. Then it becomes easier to analyze the heat exchanger network and make automatic or manual modifications.
Keywords: Heat integration, process simulator, HX-Net.
References: None |
Problem Statement: How is it possible to use liquid heat capacity parameters directly in a simulation? | Solution: By default, Aspen Plus does not use the DIPPR liquid heat capacity polynomial parameter CPLDIP to calculate the liquid enthalpy. Instead, most option sets default to calculate liquid enthalpy using the ideal gas enthalpy and the heat of vaporization.
To calculate liquid enthalpy from liquid heat capacity polynomials, you must change to a liquid reference state. This can be done in a few equivalent ways.
1. Use a property method that uses a liquid reference state.
There are also property methods that use a liquid reference state for enthalpy similar to the DHL09 route by default. These property methods are WILS-LR (Wilson with Ideal gas and specifiable liquid reference state) and WILS-GLR (Wilson with Ideal gas and specifiable ideal-gas/liquid reference state) . These property methods can be used as templates for methods that use other activity coefficient models. To create similar property methods that use NRTL or UNIQUAC models together with the special liquid enthalpy departure routes, check the Modify property models box on the Properties | Specifications | Global sheet and select the desired liquid gamma model.
2. Check the Use liq. reference state enthalpy box
For activity coefficient models, it is possible to check the Use liq. reference-state enthalpy box on the Properties | Specifications | Global sheet. This option is not available for all models.
3. Modify the property route for DHL.
Complete the following steps for this process for the ideal or activity coefficient option sets:
1. Specify the property method on the Properties | Specification | Global form.
2. Select Properties | Property Methods. An Object Manager appears. Click on the property method to modify.
3. Go to the Routes tab. Under Property Route, change Major Property to Subordinate Property. Look for the Property DHL, the liquid enthalpy departure. Modify the route id to now read DHL09.
For property methods other than activity coefficient property methods, the above steps are not sufficient. For equation of state option sets, like RK-SOAVE, you must also change the property route for DHLMX to one that uses the DHL route (for example, DHLMX00).
Note: The heat of vaporization is used to calculate the reference enthalpy for the component at 25 C. This reference state temperature can be changed by specifying the parameter TREFHL.
Keywords: None
References: None |
Problem Statement: Even if the BWR-LS (Benedict-Webb-Rubin-Lee-Starling equation-of-state) property method is not selected, the binary parameters BWRKV (Molecular Size Binary parameter for the BWR-LS equation of state) and BWRKT (Molecular Energy Binary parameter for the BWR-LS equation of state) appear on the Properties\Parameters\Binary Interaction object manager list. Why does this occur? | Solution: Some property methods use the BWR-LS equation of state method to calculate transport properties such as viscosity (MULMX) and thermal conductivity (KMX). These property methods include PSRK, RKSMHV2, PRMHV2, SR-POLAR, RKSWS and PRWS.
These parameters on these forms can be deleted; however, the forms cannot be removed from the Object manager list.
Keywords: eos
equation of state
BWR
References: None |
Problem Statement: How does Aspen Plus calculate property sets for vapor and liquid heat capacity at constant pressure? What is the route or model used for the calculation of real gas or liquid heat capacities?
Also, what is the difference between CP and CPIG Prop-Sets? | Solution: You can report the mixture heat capacity using the Property set CPMX. The pure component property set is CP. The valid phase qualifiers are V, L, L1, L2, S (solid) and T (Total mixture). The units can be mole or mass based.
Property set CP is the real gas or liquid constant pressure heat capacity at the temperature and pressure of the system. Aspen Plus calculates the heat capacity directly by taking the temperature derivative of the enthalpy (at constant pressure). The calculated heat capacities are based on the routes and models used in the enthalpy calculations.
Note that liquid heat capacities can be calculated directly from the DIPPR liquid heat capacity equation by selecting the DHL09 the enthalpy departure route.
Property set CPIG calculates the ideal gas heat capacity of a pure component. CPIG is calculated by evaluating the ideal gas heat capacity equation using the CPIG or CPIGDP parameter. The term ideal gas here means a hypothetical state at system temperature and near zero pressure. For many components, the ideal gas heat capacity is indistinguishable from the vapor heat capacity below the normal boiling point. The CPIG parameter can be estimated from the Benson and Joback structural groups methods if it is not available in the databank.
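The statement above, that CP is obtained by taking the temperature derivative of the enthalpy at constant pressure, can be illustrated numerically. A sketch with a made-up enthalpy function (not an Aspen Plus model):

```python
def cp_from_enthalpy(H, T, dT=1e-3):
    # Constant-pressure heat capacity as the temperature derivative of enthalpy,
    # approximated by a central finite difference.
    return (H(T + dT) - H(T - dT)) / (2.0 * dT)

# Made-up enthalpy model H(T) = a*T + b*T^2, for illustration only
a, b = 30.0, 0.01
H = lambda T: a * T + b * T**2

cp = cp_from_enthalpy(H, 350.0)   # analytic value: a + 2*b*T = 37.0
```

Because the real heat capacity is derived from the enthalpy routes, any change to the enthalpy model (reference state, departure route) changes the reported CP as well.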
Keywords: CPL
CPV
References: None |
Problem Statement: How is flash point estimated? The properties FLPT-PM and FLPT-TAG can be accessed in a property set, but how are they calculated? | Solution: Flash Point is a measure of the volatility and the inflammability of liquid petroleum mixtures. It is the lowest temperature at which a combustible material will give off enough vapor to form a flammable mixture with air.
Aspen Plus provides several properties representing different methods of calculating the flash point. The following methods are available:
FLPT-API, the API method for determining flash point (ASTM-D86)
FLPT-PM, the Pensky-Martens method (ASTM-D93)
FLPT-TAG, the Tag method (ASTM-D56)
FLASHPT and FLASHCRV, user specified assay property data for petroleum mixtures
These methods can be accessed in a property set and reported or used elsewhere in a simulation.
FLPT-API uses the ASTM D86 10% temperature for the petroleum fraction or the normal boiling point for pure components in a procedure based on the API computerized procedure 2B7.1. Linear extrapolation is also performed.
The equation used has the reciprocal form:
1/TFP = a + b/T1 + c ln(T1)
where a, b and c are constants given in API procedure 2B7.1, and:
TFP = flash point of petroleum fraction, in degrees Rankine.
T1 = ASTM 10% temperature for petroleum fractions or normal boiling point for pure compounds, in degrees Rankine.
The other two methods, FLPT-PM and FLPT-TAG, use a modified bubble point calculation described in Seader and Henley. These methods use mole-weight-modified K-values for a bubble point flash in which the value of the summation SUM(KiXi), normally 1, has been replaced with an experimentally determined parameter a. The parameter a has different values for the two methods.
To enter user specified flash point information for a mixture of pseudocomponents for FLASHPT and FLASHCRV, the temperature values for the flash points at several mid-percent distilled points must be entered on the Components | Assay/Blend | Property Curves form. Four such data points are required to define a property curve.
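As an illustration, reciprocal-form flash point correlations of the type used by FLPT-API, 1/TF = a + b/T10 + c*ln(T10), can be evaluated directly. The constants below are taken from Riazi's kelvin-basis version of the correlation and are an assumption for illustration only; they are not quoted from the Aspen Plus documentation, so check API procedure 2B7.1 before relying on them.

```python
import math

def flash_point_K(t10_K):
    # Reciprocal-form flash point correlation: 1/TF = a + b/T10 + c*ln(T10)
    # Constants below: assumed illustrative values (Riazi, kelvin basis).
    a, b, c = -0.024209, 2.84947, 3.4254e-3
    return 1.0 / (a + b / t10_K + c * math.log(t10_K))

tf = flash_point_K(450.0)   # kerosene-like ASTM 10% temperature, in K
```

The correlation predicts a higher flash point for heavier cuts (larger T10), as expected physically.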
Keywords:
References:
J.D. Seader and Ernest J. Henley, Separation Process Principles, p. 281, Wiley and Sons, 1998.
M.R. Riazi, private communication, 1985.
M.R. Riazi, API Databook, 5th Ed., procedure 2B7.1 (1986). |
Problem Statement: I have the experimental data of heat of melting or heat of crystallization; how do I put them into Aspen Plus so that the Crystallizer or RGibbs can use them? | Solution: In Aspen Plus, the heat of melting and heat of crystallization are calculated as the difference in enthalpies between the liquid phase and the solid phase. Thus the experimental data cannot be used directly, since by default Aspen Plus does not use the heat of melting or heat of crystallization for enthalpy calculations.
The enthalpy of a pure solid component is based on its solid heat of formation (DHSFRM) and solid heat capacity (CPS). For a pure liquid component, the enthalpy is calculated from its ideal gas state heat of formation (DHFORM), ideal gas state heat capacity (CPIG), and the liquid enthalpy departure (which involves the heat of vaporization (DHVL) when an activity coefficient model is selected). The enthalpy change when a solid melts is taken as the difference between the liquid state enthalpy and the solid state enthalpy.
The solid component enthalpy can also be calculated based on the heat of sublimation (DHVS) as well as some other user routes and models. In order to employ the data of enthalpy of crystallization, one needs to change the default routes and models in the property method.
A simpler way is to first use Aspen Plus to predict the heat of crystallization using the Crystallizer block to see if the predicted enthalpy change is close to your data. If not, one can adjust the parameters, say, CPS or DHSFRM, to match the data.
Keywords: Crystallizer
Heat of Melting
Heat of Crystallization
Solids Enthalpy
References: None |
Problem Statement: When using RadFrac to simulate a reactive distillation column with electrolytes, does the RadFrac block use Chemistry, or must the Reaction stages in RadFrac be specified? | Solution: When using the TRUE approach, RadFrac will use the Reaction | Chemistry as a source of reactions if the Reaction stages are not specified on the RadFrac | Reactions | Specifications sheet. If the Reaction stages are specified with a Reac-Dist or User reaction, the Chemistry will NOT be used. The code only allows one reaction object (Reac-Dist, User, or Chemistry). If any Reaction stages are specified, the Chemistry will NOT be used at all in the block.
Because the Chemistry will not be used if Reactions are specified with the TRUE approach, the user must normally duplicate all of the reactions in Chemistry as equilibrium reactions in a Reaction | Reactions paragraph of type REAC-DIST. The parameters from the Chemistry can be used for these reactions on the Reaction form using Equilibrium reactions with a Mole gamma basis.
When using the APPARENT approach, RadFrac will use both the Chemistry (everywhere in the block) and the Reactions (on the specified stages). Because Chemistry and Reactions are used simultaneously, reactions that appear on the Chemistry form should not appear on the Reactions form.
The logic is similar when using RateSep, RateFrac or BatchFrac.
Keywords: None
References: None |
Problem Statement: An Aspen SQLplus query displays the error box: | Solution: This error can appear if a query tries to fetch historical information for a tag pointing to a repository that no longer exists. Verify that all tags referenced by the query point to valid repositories.
Keywords: None
References: None |
Problem Statement: What are the tuning parameters for DMCPlus | Solution: There are a lot of parameters for DMCplus controller but these are the tuning parameters.
Steady State Tuning
-Feasibility stage
CV/ET Ranking - determine the order of give up in case of feasibility (Smaller the rank, more importance the limit.
Steady state ECEs - weighting factor to amount of give up within the same rank (smaller the ECE, more importance the limit) in engineering unit.
-Economics optimization
MV or CV cost - drive the controller to the most economics operating points based on economics factor (min cost MVs: positive cost means minimize MV when possible; negative cost means maximize MV when possible. Min move MVs: positive cost only, cost to move thus only move to relief constraints)
Dynamic Tuning:
-Move suppression
Specifies how aggressively an MV should move. It is a unitless number (the smaller the number, the more aggressively the MV moves).
-Dynamics ECEs
Determines how tightly the controller should hold the CV dynamically. It is in engineering units (the smaller the ECE, the tighter that CV is held).
-Transition zones
Provide a smooth transition between different dynamic ECE tunings.
Remember that the controller is solved simultaneously, so changing one tuning parameter will affect how the rest of the controller reacts.
Do not use MAXMOV and/or SSSTEP as tuning parameters. They are there as safety precautions. You are taking away the 'trading off' effect of a multivariable controller if you utilize MAXMOV and/or SSSTEP
Keywords: Ranking, ECE, MV cost, CV cost, Move suppression, Transition zones, MAXMOV, SSSTEP
References: None |
Problem Statement: It has been noticed on 64-bit Server installs that the AuthWrapperSvc 1.0 Type Library is missing from the SQLplus references. This is needed to manage AFW users in SQLplus. Can this be added to the system? | Solution: The references can be added to SQLplus by making a registry update on the SQLplus host to add the win64 key for the typelib, similar to the existing one for win32.
Open the registry editor (regedit) and go to key:
Computer\HKEY_CLASSES_ROOT\TypeLib\{EF401451-CF55-11D3-827A-00C04F12B1D3}\1.0\0
Beneath this there should be a key for win32. Add a new key called win64, and for its Default value add:
C:\Program Files (x86)\AspenTech\BPE\AfwSecCliSvc.exe
This should be the same as for the Default value for key win32. It assumes that you have used the default installation directories, or otherwise you need to alter accordingly.
Once this has been done restart SQLplus and you should find the missing references.
As a long-term solution, the missing references will be added to the installation in a future version of SQLplus Server 64-bit.
Keywords: AuthWrapperSvc
SQLplus
References: None |
Problem Statement: How do you delete history points from Aspen InfoPlus.21 using Aspen SQLPlus? | Solution: There is no way to delete history values in InfoPlus.21. This is true using SQLplus or any other tool.
If you have entered data that you wish to remove from the database, the only thing you can do is change the status of the unwanted points to Bad using the UPDATE statement in SQLPlus. This way they won't be used in trending or calculations and won't skew your data.
Examples of the Update statement can be found in the SQLPlus Help files.
Keywords: delete
history
UPDATE
References: None |
Problem Statement: The Aspen Exchanger Design and Rating (EDR) suite of programs has cost information to permit the cost of the exchanger to be determined. Instructions for changing the unit of currency are described below. | Solution: From the main menu, select Tools | Data Maintenance | Units of Measure. Select the Units Maintenance tab, then from "Select Category" choose Currency, and from "Select a Unit" find the appropriate currency.
The default currency in the program is Dollars (US) $, so on the right-hand side set the appropriate conversion rate. Click on the OK button.
To view the costs in Pounds (UK) £, there are a number of options:
· In the appropriate form, select the units field and choose from the drop-down box.
· From the main menu, select Tools | Data Maintenance | Units of Measure and select the Defaults tab. Select Currency from "Select a Physical Quantity" and then, for Set 1 for example, select Pounds as the "Change the Default Units".
From the Units of Measure drop-down, select "Set1", and the entered units will be used.
Keywords: Units of Measure, Currency
References: None |
Problem Statement: I have a flash convergence failure. How can I provide the information to AspenTech support without having to send my complete flowsheet? | Solution: You can add the following debug paragraph to the input file:
DEBUG FLASH-MSG=1
Whenever a FLASH is called, the specifications will be echoed to the history file, including composition, temperature, pressure, flash specs, etc.
If you set FLASH-MSG=2, the inline Fortran for a Calculator block which you can use to reproduce the flash will be written. This Fortran is written only for flash failures.
To use this feature, either export the simulation input file or go to Flowsheeting Options->Add Input->Add Before, and enter the line:
DEBUG FLASH-MSG=2
Then, run the simulation and export the history file (File, File Export, history file (*.his)). Finally, edit the history file to remove the flowsheet information.
Note that the physical property data must still be available, otherwise the flash convergence trace information will be useless.
Keywords: flash
References: None |
Problem Statement: How do I change the Total Licensed Point count that a particular InfoPlus.21 database is licensed for? | Solution: 1. Stop the database.
2. In the IP.21 Administrator, right click on the node name and select Set Point Count item from the context menu.
3. Enter the number of points that the database is licensed for, click OK and restart the database.
In the IP.21 Administrator, right click on the node name and select Properties, Record Utilization tab to confirm that the correct number of Licensed Points is showing in the Total field.
Keywords: SLM_InfoPlus21_Points
InfoPlus21_Points
Licensed points
References: None |
Problem Statement: What is the difference between positive list ID (i.e. LISTID=1) and negative list ID (i.e. LISTID=-1) in the Cimio_T_API program? | Solution: A negative list ID (i.e. LISTID=-1) re-declares the tag list on the Aspen Cim-IO server. In other words, it forces the creation of a new tag list which is recommended when using the Aspen Cim-IO T-API test client.
A positive list ID (i.e. LISTID= 1) assumes there is already a tag list created and uses it.
If using Aspen Cim-IO for OPC, the first tag read with the Aspen Cim-IO T-API test client may result in a Bad status. In this case, a second read is necessary to read the value with the correct status. Depending on the OPC server, the LISTID may be negative or positive on the second read to retrieve a value with a good status.
Keywords: LISTID
LISTID=1
LISTID=-1
References: None |
Problem Statement: This Knowledge Base article explains why a user is getting the following error message
Error 0xC0040007 (3221487623) The item is no longer available in the server address space.
when trying to connect to the Aspen InfoPlus.21 DA Server through an OPC client? | Solution: This error message is common if the tag name (normally called the item ID in OPC terminology) you are using does not exist or is incorrect in the OPC server. (In this case, the Aspen InfoPlus.21 server is an OPC server.)
Also make sure, if the tag name contains special characters, to use double quotes (") around the tag name alone in the item ID of the OPC client. If the OPC client does not add the double quotes by itself, you should add them manually through the Item ID box.
Keywords: addressspace
OPC DA
DA
item
References: None |
Problem Statement: What is the difference between a guaranteed truck and a bill by shipment truck in AFO? | Solution: If Guaranteed is checked, this transport is viewed by Aspen as a unit that will 'cost' the client 'X' hours (whatever hours appear in the transport schedule) multiplied by the Cost per Hour. Proprietary units also need to be guaranteed if the driver is paid for his full shift regardless of workload. Any unit can be guaranteed, and the system will attempt to fill all guaranteed units with work equally. If Bill by Shipment is checked, this transport is viewed as a Call-on-Demand carrier where the client pays only for the loads delivered, on the basis of either a Cost per Volume chart or a Point-to-Point cost.
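The two billing models reduce to different cost formulas. A minimal sketch of the arithmetic (the function names and numbers are illustrative assumptions, not AFO code):

```python
def guaranteed_cost(scheduled_hours, cost_per_hour):
    """Guaranteed unit: the client pays for all scheduled hours,
    regardless of how many loads are actually delivered."""
    return scheduled_hours * cost_per_hour

def bill_by_shipment_cost(delivered_volume, cost_per_volume):
    """Call-on-Demand carrier: the client pays only for delivered volume."""
    return delivered_volume * cost_per_volume

print(guaranteed_cost(12, 85.0))           # 1020.0
print(bill_by_shipment_cost(30000, 0.02))  # 600.0
```

A Point-to-Point cost would replace the per-volume rate with a fixed charge per lane; the principle is the same.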
Keywords: None
References: None |
Problem Statement: This Knowledge Base article explains why Graphics Editor displays *** when viewing BAD data from Aspen InfoPlus.21. | Solution: The Graphics Display uses the QSTATUS to determine whether or not to display a value.
For the Graphics Editor to display *** when viewing BAD data from InfoPlus.21, the Key Level has to be marked as QSTATUS_BAD or the sample has to be a NaN (not a number) value. Additionally the Graphics Display will show *** any time a value of NaN is retrieved from the database.
Keywords: None
References: None |
Problem Statement: The Aspen InfoPlus.21 Administrator allows you to change the date format used by Aspen InfoPlus.21.
How can a query determine the date format used by an InfoPlus.21 server? | Solution: Use the function date_format.
The following example illustrates the use of date_format.
case date_format
when 0
then write 'The date format is DD/MM/YY';
when 1
then write 'The date format is DD-MMM-YY';
when 2
then write 'The date format is MM/DD/YY';
when 3
then write 'The date format is MMM-DD-YY';
when 4
then write 'The date format is YY/MM/DD';
when 5
then write 'The date format is YY-MMM-DD';
end
Keywords: sample query
date_format
date
format
References: None |
Problem Statement: Why do the results obtained from Aspen HYSYS and Aspen FLARENET not match? | Solution: The results obtained for Aspen HYSYS and Aspen FLARENET only match when the kinetic energy term is not taken into account in the energy balance in Aspen FLARENET (set via the Calculations | Options | Energy Balance menu item).
The reason for this is the way the valve model is implemented in Aspen HYSYS: it is based on a simple enthalpy balance, neglecting any contributions related to the kinetic energy of the flowing fluid. Such an assumption is valid if the fluid velocity is much less than the sonic velocity.
If you would like to take into account the kinetic energy term using Aspen HYSYS, you will need to use Aspen Hydraulics option.
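The size of the kinetic-energy contribution can be illustrated with a simple per-unit-mass energy balance. The sketch below is illustrative only; it is not code from either product, and the enthalpy and velocity values are assumed:

```python
# Illustrative steady-state energy balance across a valve (per unit mass).

def outlet_enthalpy_simple(h_in):
    """Enthalpy-only balance (the Aspen HYSYS valve assumption): h_out = h_in."""
    return h_in

def outlet_enthalpy_with_ke(h_in, v_in, v_out):
    """Balance including kinetic energy: h_in + v_in^2/2 = h_out + v_out^2/2."""
    return h_in + 0.5 * (v_in**2 - v_out**2)

h_in = 250000.0            # J/kg, assumed inlet specific enthalpy
v_in, v_out = 20.0, 200.0  # m/s, assumed velocities

h_simple = outlet_enthalpy_simple(h_in)
h_ke = outlet_enthalpy_with_ke(h_in, v_in, v_out)
print(h_simple - h_ke)     # 19800.0 J/kg difference in this example
```

At low velocities the two balances agree closely; as the outlet velocity approaches sonic velocity the kinetic term becomes significant, which is why the two programs diverge.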
Keywords: different, results, kinetic, energy
References: None |
Problem Statement: How are noise calculations performed in Aspen Flare System Analyzer? | Solution: A sample noise calculation can be found in the attached MS Excel file.
Keywords: noise, calculation, equation
References: None |
Problem Statement: What is the Body field on the Calculations page of the Tee Editor ? How does this affect Tee calculations? | Solution: The Body option allows you to define upon which of the pipeline diameters the Tee pressure drop calculations are based. The available options are:
Run - diameter is that of the inlet pipe.
Tail - diameter is that of the outlet pipe.
Branch - diameter is that of the branch pipe on the tee.
Auto - Aspen FLARENET will set the body diameter to be the larger of the inlet and branch pipe diameters.
The user is required to specify the body of the mixing area of the tee to determine the Miller charts K factor that will be used in the pressure drop calculations. The reason a specific size is not entered is because in design mode if resizing of pipes is allowed then the body of the tee will be automatically updated. If all of the pipe sections attached to the tee have the same diameter then the calculated pressure drops will be the same for each of the options in the body field.
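The selection logic for the Body field can be sketched as a small helper function. This is an illustration of the rules above, not Aspen FLARENET code; the function name and arguments are assumptions:

```python
def tee_body_diameter(option, run_d, tail_d, branch_d):
    """Return the diameter used as the body for tee pressure-drop
    calculations. option is 'Run', 'Tail', 'Branch', or 'Auto'."""
    if option == 'Auto':
        # Auto uses the larger of the inlet (run) and branch diameters.
        return max(run_d, branch_d)
    return {'Run': run_d, 'Tail': tail_d, 'Branch': branch_d}[option]

# With equal diameters, every option gives the same body size,
# consistent with the note that the pressure drops are then identical.
print(tee_body_diameter('Auto', run_d=0.3, tail_d=0.3, branch_d=0.2))  # 0.3
```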
Keywords: Tee, Body, Tail, Run, Branch
References: None |
Problem Statement: Using the Aspen InfoPlus.21 Administrator, you can delete file sets one at a time. The attached query allows you to delete a range of file sets at once from a repository. | Solution: Start the Aspen SQLplus query writer and enable Intermediate Output.
Next, download the text file DeleteFileSets.txt attached to this solution, copy the contents of the file to the query writer, and execute the query.
The query asks for a case sensitive repository name.
After receiving the repository name, the query reports the number of file sets in the repository and then prompts for the starting file set number to delete followed by the ending file set number to delete.
The query then confirms your entry. Enter Yes to continue.
You can open the InfoPlus.21 Administrator and verify the query worked properly by looking at the repository. The file sets to be deleted will have an x beside the file set number.
Note: If the InfoPlus.21 Administrator is open, press F5 to refresh the InfoPlus.21 Administrator.
Finally, you must restart Aspen InfoPlus.21.
Note: This query does not delete the Windows folders or contents of the folders containing file set data.
Keywords:
References: None |
Problem Statement: How do you add a new repository using Aspen SQLPlus? | Solution: We have samples of Aspen SQLPlus scripts available on Aspen InfoPlus.21 Server under the folder
...\AspenTech\InfoPlus.21\db21\sample_programs\TestAtIP21HistAdminSQLPlus
The script TestAddRepAndFS.SQL shows how to create repositories and multiple file sets.
Keywords: New repository
new filesets
add
SQLPlus
sample
References: None |
Problem Statement: AspenTech SQLplus ODBC driver (ip21odbc.dll) is not listed in the Windows Data Sources (ODBC) GUI. | Solution: The odbcad32.exe that is referenced in the Windows Control Panel is only for 64-bit drivers.
The default AspenTech SQLplus ODBC driver is a 32-bit driver so you have to run the odbcad32.exe in the c:\windows\syswow64 directory:
1. Go to c:\windows\syswow64
2. Double click on odbcad32.exe
3. It will open the ODBC Data Source Administrator
4. After a few seconds, the Aspen SQLplus driver is accessible as illustrated below.
Please note that the version of the driver will be different, depending on the version of Aspen SQLplus you installed.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Aspen SQLplus V7.3 Engineering Release IP120326Y patch introduced the AspenTech SQLplus ODBC 64-bit driver. (Please see the Release Notes to verify whether its installation applies to your system.)
In V8.0, the AspenTech SQLplus ODBC 64-bit driver is installed through the 64-bit section.
As a comparison:
On a 32-bit Operating System:
32-bit InfoPlus.21 ODBC Driver
  ip21odbc.dll: Windows\System32
  libc21.dll: Program Files \Common Files \AspenTech Shared
  ODBC Data Source Administrator: via Administrative Tools, or odbcad32.exe found in Windows\System32
On a 64-bit Operating System:
32-bit InfoPlus.21 ODBC Driver
  ip21odbc.dll: Windows\SysWOW64
  libc21.dll: Program Files(x86) \Common Files \AspenTech Shared
  ODBC Data Source Administrator: odbcad32.exe found in Windows\SysWOW64
64-bit InfoPlus.21 ODBC Driver
  ip21odbc.dll: Windows\System32
  libc21.dll: Program Files \Common Files \AspenTech Shared
  ODBC Data Source Administrator: via Administrative Tools, or odbcad32.exe found in Windows\System32
Both versions (32-bits and 64-bits) can coexist. Each component has to remain in its own path.
Keywords: ip21odbc.dll
Desktop ODBC
References: None |
Problem Statement: Aspen Shell & Tube Exchanger (Tasc+) can either produce a tube layout from the values entered in the input file, or an interactive technique can be used to draw/customize a tube layout. | Solution: This solution describes how the interactive tube layout feature can be used to customize the tube layout.
The steps are as follows:
1. The Tube Bundle Layout facility is available in Rating/Checking and Simulation modes but not Design mode. The options are directly underneath 'Tube Layout' section on Input | Exchanger Geometry | Geometry Summary | Geometry tab. See the below screen shot
There are 3 options
'Use existing layout' - The program uses the Tube Layout to supply input data needed for the Thermal Calculations. You need to set this option if you want to edit the Layout diagram, as it activates the tab 'Tube Layout' so the existing tube layout drawing can be edited.
'New (optimum) layout' - The program will ignore the existing Tube Layout diagram, and calculate a new tube layout based on layout input items the next time the case is run.
'New layout to match tubecount' - This option will determine the optimum tube count. If this is larger than the tube count specified, the program will remove tubes as far as it sensibly can, until the specified tube count is achieved. If the specified tube count is greater than the calculated tube count, the program will issue a warning, but it will not add additional tubes.
As stated above, 'Use existing layout' should be selected, in order to activate the Input | Exchanger Geometry | Geometry Summary | Tube Layout tab
2. On Tube Layout tab, there are a variety of interactive features that users can utilize to customize the tube layout.
· Right mouse-click on an appropriate tube or tube row, and the row will be highlighted and you will have a drop-down menu of options to select from, including 'Add/Delete Tube Line', 'Add/Delete Tube', 'Convert Tube to/Add Tie Rod', etc. If you right-mouse click outside the tube layout, you will be given options like 'Save as', 'Print', 'Copy', etc. See the screen shot below.
· It is also possible to modify other components associated with the tube layout, such as sealing strips, tie rods, baffle cuts, etc., from this same screen display. Select the component you want to customize from the scroll-down list at the top of the diagram, and you will be given a table at the bottom of the screen showing information such as positions and dimensions of the selected component. By clicking the columns in the table, you highlight the corresponding component in the drawing in red. If necessary, you can also click the cells and manually type in values for the parameters you want to customize. You can also add/delete/restore or move any selected components by using the buttons at the top of the diagram. See the attached screen shot below.
After customizing the tube layout, users should be aware that there might now be an inconsistency between the layout and the main input values. Users can decide how the program will treat these inconsistencies, as described in Solution 125796.
Keywords: Interactive, Tube Layout, Customize, Existing, New, Optimum Layout, Match Tubecount
References: None |
Problem Statement: What are the options for supplying a Molecular Weight Distribution (MWD) for Polyfrac? | Solution: Polyfrac is primarily set up to work with streams that possess a polymer molecular weight distribution (MWD). However, a stream can acquire a MWD in only one of two ways:
It can created by a RCSTR, RPLUG, or RBATCH for models that include free radical or Ziegler-Natta kinetics objects.
It can be put into a feed stream with the user model FeedMWD.
If a user chooses to enter the MWD in one of the reactors and then later removes the reactor from the model, the user also removes the means of creating the distribution. Thus, either the reactor must be returned to the simulation, or the FeedMWD must be used.
There is an example of the FeedMWD user model in:
Program Files/AspenTech/Aspen Plus 11.1/GUI/xmp/Polymers Plus folder
This folder also contains a ReadMe.txt file, which has a description of the FeedMWD user model.
Keywords: Polyfrac, Polymers Plus, MWD
References: None |
Problem Statement: How to get data from Aspen InfoPlus.21 into a C# application using a memory-resident DataSet? | Solution: By using the .NET Framework Data Provider for ODBC, data can be read from Aspen InfoPlus.21 into an ADO.NET DataSet. The System.Data.Odbc namespace will need to be included in a using directive.
In the attached file (IP21DataSetExamples.zip), there are 2 examples, one being a Windows Forms application and another a console application.
Keywords: C#
DataSet
ODBC
References: None |
Problem Statement: How can I set stream composition via a spreadsheet? | Solution: See attached file Composition set via spreadsheet.hsc. Numbers in column B represent non-normalised composition. Data is normalised in column C and exported to a stream.
There are two important points to note:
data must be exported for ALL components in the model
composition must be normalised
Finally, once the spreadsheet is set up and linked to the stream, it may be necessary (just once) to ignore the spreadsheet, then restore it.
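The normalisation done in column C amounts to dividing each raw value by the column total. A minimal Python sketch of the same arithmetic (the raw values are assumed for illustration; in the model this is done with spreadsheet cell formulas):

```python
def normalise(fractions):
    """Scale raw (non-normalised) composition values so they sum to 1.

    Values for ALL components in the model must be included,
    even those with zero flow.
    """
    total = sum(fractions)
    if total <= 0:
        raise ValueError("composition values must sum to a positive number")
    return [x / total for x in fractions]

raw = [2.0, 1.0, 1.0, 0.0]  # assumed non-normalised values (column B)
print(normalise(raw))       # [0.5, 0.25, 0.25, 0.0]
```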
Keywords: Spreadsheet, composition
References: None |
Problem Statement: This sample query shows how to calculate monthly averages for a list of tags. | Solution: For each month, the query determines the amount of time in the month and uses the aggregates table to calculate the averages for each of the tags in the list, in this case ATCL101, ATCL102, and ATCL103 from the demo database. The query uses a temporary table to store the averages for each tag for each month. Finally, the query selects each row from the temporary table while calculating the average monthly average for each tag.
The query produces the output similar to:
Tank 1 Level Tank 2 Level Tank 3 Level
Month Average Average Average
-------- ------------ ------------ ------------
MAR-2015 9683.904 10676.321 10987.700
FEB-2015 9844.802 10384.172 9725.739
JAN-2015 9741.523 10497.375 10731.516
DEC-2014 9691.316 10670.413 10670.216
NOV-2014 9695.959 10542.022 10715.767
OCT-2014 8983.587 11182.053 10511.935
------------ ------------ ------------
AVG 9606.848 10658.726 10557.145
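The final AVG row is the simple mean of the monthly averages held in the temporary table. As a quick check in Python, using the Tank 1 Level values from the sample output above:

```python
# Monthly averages for Tank 1 Level, taken from the sample output above.
tank1_monthly = [9683.904, 9844.802, 9741.523, 9691.316, 9695.959, 8983.587]

overall = sum(tank1_monthly) / len(tank1_monthly)
print(overall)  # approximately 9606.8485, matching the 9606.848 in the AVG row
```

Note that the monthly averages themselves are time-weighted (the query uses the amount of time in each month with the aggregates table); only the final AVG row is an unweighted mean of those monthly values.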
Keywords: aggregates
monthly average
example
sample
References: None |
Problem Statement: How to display time and date independent of the database date format. | Solution: Use the cast function specifying a new format.
For example:
WRITE CAST(CURRENT_TIMESTAMP as CHAR format 'DAY DD MONTH YYYY HH:MI:SS.T');
would produce output in the following format:
WEDNESDAY 08 MARCH 2006 14:21:10.8
Keywords: timestamp
cast
format
References: None |
Problem Statement: Scanning IO addresses multiple times adds network traffic and introduces errors in some Aspen CIM-IO interfaces. For example, Bailey SEMAPI returns the error Fatal 5 (ICI): Block already established as another point when an address appears more than once in an IO transfer list. | Solution: The attached query FindDuplicateAddresses looks through all IOGETDEF records looking for duplicate addresses contained in the field IO_TAGNAME. The query then reports which occurrences in which IOGET records contains duplicate IO addresses.
To remove an occurrence with a duplicate address, first turn off record processing for the CIM-IO transfer record by changing the field IO_RECORD_PROCESSING to OFF. Next, open the repeat area IO_#TAGS and move to the occurrence containing the duplicate address. Right click on the occurrence number and select Delete Occurrence. After removing all the duplicate addresses from the transfer record, resume process scanning by turning on record processing by setting the field IO_RECORD_PROCESSING to ON. See the knowledge base article 119108 (Adding new occurrences to Get or Unsol records without first turning the record OFF can result in Cim-IO problems) for more information concerning the maintenance of CIM-IO transfer records.
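The core duplicate-detection idea behind the attached query can be sketched in Python. This is a generic illustration, not the SQLplus query itself, and the example tag names are assumptions:

```python
from collections import Counter

def find_duplicates(addresses):
    """Return IO addresses that appear more than once, with their counts."""
    counts = Counter(addresses)
    return {addr: n for addr, n in counts.items() if n > 1}

# Assumed example IO_TAGNAME values gathered from several transfer records.
tags = ['FIC101.PV', 'TIC202.PV', 'FIC101.PV', 'PIC303.PV']
print(find_duplicates(tags))  # {'FIC101.PV': 2}
```

The actual query does the equivalent across every IOGETDEF record's IO_TAGNAME field and reports which occurrences hold the duplicates.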
Keywords: Sample Query
Example Query
Duplicate Tags
Duplicate Addresses
References: None |
Problem Statement: The value in column FACTOR of table WSPECS is used to convert a weight-based property to its volume-based equivalent. Below is a description of exactly how it affects the matrix structure. | Solution: Assuming we have one specification blend B+C:
Two components can be blended to B+C:
We have max sulphur content expressed in weight% set to 2.5:
In table BLNPROP we have quality data regarding sulphur content expressed in pounds per barrel (SPB):
If we know correlation between SUL( weight percent) and SPB (pound per barrel) we can set it up in table WSPECS in column FACTOR so PIMS will be able use both in quality control row.
After running such a model XSULB+C quality control row will have coefficients like below:
Where:
BBBB+C is a volume of BBB which will be blended to B+C
BCCCB+C is a volume of CCC which will be blended to B+C
BWBLB+C is a weight of blended B+C
As SUL is controlled on weight base PIMS will convert SPB*FACTOR and insert it to the XSUL equation. The whole calculation looks like below:
-0.253*0.454*BBBBB+C - 0.711*0.454*BCCCB+C + 2.5*BWBLB+C >= 0
In other words:
SPB * FACTOR * VOL = SUL * Weight
Should you need more details please see enclosed model.
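The quality-control arithmetic can be checked with a short Python sketch. The SPB values and the 0.454 FACTOR come from the example above; the volumes and blend weight are assumed for illustration:

```python
FACTOR = 0.454   # WSPECS FACTOR: converts SPB (lb/bbl) to the weight-% basis
MAX_SUL = 2.5    # maximum sulphur content, weight %

# SPB (lb/bbl) values for the two blend components, from BLNPROP above.
spb = {'BBB': 0.253, 'CCC': 0.711}
vol = {'BBB': 100.0, 'CCC': 50.0}  # assumed blended volumes, bbl
blend_weight = 120.0               # assumed weight of blended B+C

# XSUL quality control row: -sum(SPB*FACTOR*vol) + MAX_SUL*weight >= 0
lhs = -sum(spb[c] * FACTOR * vol[c] for c in spb) + MAX_SUL * blend_weight
print(lhs >= 0)  # True means the sulphur specification is satisfied
```

Each blended volume is multiplied by SPB*FACTOR, converting it to a weight of sulphur that can be compared against the weight-based specification, exactly as in the coefficient row shown above.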
Keywords: WSPECS
FACTOR
BLENDING
References: None |
Problem Statement: How do I display the status, number of file sets, and active file set number for each repository? | Solution: The attached query displays the status, number of file sets, and the active file set for each repository in an Aspen InfoPlus.21 server.
Keywords: Active file set
Number of file sets
Repository status
References: None |
Problem Statement: This Knowledge Base article provides examples of Process Explorer Ad-hoc calculations. | Solution: The PDF document attached to this solution provides syntax examples and formula construction rules for Ad-hoc calculations that can be used in Aspen Process Explorer.
For a detailed explanation of the Time-Based Functions, please consult Aspen Calc Help File.
For a list of Aspen Calc functions that can be used in Process Explorer, refer to Solution 115368.
Keywords: Ad-hoc
Process Explorer
Time-Based Functions
References: None |
Problem Statement: When calling GetObject in SQL script for an excel file, receive an error message as below.
''Get object from ''<filepath>\<filename>.xls'' failed: Access is denied at line <number>.'' | Solution: This is generally due to the permission setting of the DCOM object.
1. Click on Start and click on Control Panel
2. Click on Administrative Tools and double click to launch Component Services.
3. Expand the container Component Services.
4. Expand the Computers container
5. Expand the My Computer container
6. Expand the DCOM Config container.
7. Right click on Microsoft Excel Application and click Properties.
8. Select the Security tab
For Windows XP, choose Customize and click on Edit.
Follow these steps under the Launch and Activation Permissions,
1. If the User Group the user belongs to or the user name is not in there, click on Add and type in the User Group or user name and click on OK.
2. Click allow for all the permissions available for the User Group or user name added, i.e. Local Launch, Remote Launch, Local Activation and Remote Activation.
3. Click on OK two times to accept the changes.
Under the Access Permission,
1. If the User Group the user belongs to or the user name is not in there, click on Add and type in the User Group or user name and click on OK.
2. Click allow for all the permissions available for the User Group or user name added, i.e. Local Access and Remote Access.
3. Click on OK two times to accept the changes.
For Windows 2003 with the installation of Service Pack 1, DCOM might not work correctly. This is due to the changes in the default COM permissions in Windows 2003 SP1. Please refer to Microsoft knowledge base article ''Programs that use DCOM do not work correctly after you install Microsoft Windows Server 2003 Service Pack 1''.
http://support.microsoft.com/?kbid=892500
For Windows 2003, choose Customize and click on Edit.
Under the Launch and Activation Permissions,
1. If the User Group Distributed COM Users is not in there, click on Add and type Distributed COM User and click on OK.
2. Click allow for all the permissions available for the Distributed COM Users, i.e. Local Launch, Remote Launch, Local Activation and Remote Activation.
3. Click on OK two times to accept the changes.
Under the Access Permission,
1. If the User Group Distributed COM Users is not in there, click on Add and type Distributed COM User and click on OK.
2. Click allow for all the permissions available for the Distributed COM Users, i.e. Local Access and Remote Access.
3. Click on OK two times to accept the changes.
In addition, administrators must add the domain user to the Distributed COM Users group by the following steps.
1. Click Start, point to Administrative Tools and then click Active Directory Users and Computers.
2. Expand Domain container.
3. Expand the Built-in container.
4. Right-click on Distributed COM Users and click Properties.
5. Select Members tab and click Add and type the User Group or user name and click on OK.
6. Click on OK two times to accept the changes.
Keywords: Excel
Word
SQL
DCOM
References: None |
Problem Statement: Sample Query to check for an existing value in ENG-UNITS | Solution: The following query that can be used to search for an existing value in ENG-UNITS:
SELECT select_description FROM Select8Def WHERE name = 'ENG-UNITS' AND select_description like 'degC';
Keywords:
References: None |
Problem Statement: In aspenONE Process Explorer, comments, annotations, and replies are stored in InfoPlus.21 and cross-referenced to an InfoPlus.21 tag. With this functionality, there is an entry in ADSA that may be added named aspenONE Process Explorer Comments. This article discusses how to use this service. | Solution: When added to ADSA, the aspenONE Process Explorer Comments service has an option for Default Data Source with a checkbox option next to it. There is no impact of adding this service to any data source if the checkbox remains unchecked. If the checkbox is unchecked for all data sources listed in ADSA, then any comments added will first try to add it to an InfoPlus.21 database located on the web server, then it will try to add the comment to the InfoPlus.21 database that contains the tag that is referenced in the comment. If neither of these options are available, it will try to go down the list of InfoPlus.21 servers in ADSA in alphabetical order.
If there is a single data source with the Default Data Source option checked, then the comment will be stored in that database. If there are multiple data sources with the Default Data Source option checked, then the first ADSA entry with the Default Data Source option checked will store all the comments.
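The fallback order described above can be modeled as follows. This is only an illustration of the documented behavior, not the actual implementation; in particular it assumes the "first ADSA entry" with the default checked is the first in alphabetical order:

```python
def pick_comment_server(sources, local_server=None, tag_server=None):
    """Choose the InfoPlus.21 server that will store a comment.

    sources: dict of {server_name: default_data_source_checked}.
    """
    defaults = sorted(name for name, checked in sources.items() if checked)
    if defaults:
        # First data source with Default Data Source checked stores comments.
        return defaults[0]
    # No default checked: try the web server's local database, then the
    # database holding the referenced tag, then the rest alphabetically.
    if local_server in sources:
        return local_server
    if tag_server in sources:
        return tag_server
    return sorted(sources)[0]

print(pick_comment_server({'IP21B': True, 'IP21A': True}))  # 'IP21A'
```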
Keywords: ADSA comments
Default Data Source
References: None |
Problem Statement: What are the different calculation modes: Design, Rating / Checking, Simulation, and Maximum Fouling? | Solution: There are four calculation modes in Shell&Tube / Tasc+
1. Design
2. Rating / Checking
3. Simulation
4. Maximum Fouling.
Design Mode
Design mode identifies one or more exchangers that will perform a thermal duty you specify, subject to limits on the maximum pressure loss you specify as acceptable for each stream.
In Design mode you must provide some basic information about the overall exchanger configuration (shell and header types, baffle type etc) and about the tubes and tube layout used. You can also specify the range of shell sizes, tube lengths etc within which a design should be looked for. The program will then calculate all the other geometric features such as the exchanger size, number of passes, nozzle sizes, baffle cut etc.
The program provides a design based on either cost optimization or on minimum area.
Checking Mode
Checking mode answers the question will this exchanger do this duty?
You have to specify the exchanger geometry and the process information defining the duty. The result of the calculation is expressed as the ratio of actual heat transfer surface area to the required heat transfer surface area. An area ratio above unity implies that the specified duty can be performed.
In the Process Data Input you can specify, for each stream, the flow rate and inlet and outlet conditions (or other information such as heat load from which they may be deduced). In a checking calculation, the heat load implied by these parameters is taken as fixed. The inlet pressure is fixed, but the outlet pressure of each stream is recalculated based on the predicted pressure drop in the exchanger.
Simulation Mode
Simulation mode answers the question what duty will this exchanger achieve?
You have to specify the exchanger geometry and process information defining a first estimate of the duty. You normally fix the exchanger and the inlet conditions and flow rates of the hot and cold streams. The program calculates the stream outlet conditions and hence the duty. The result of the calculation is the ratio of actual to required heat duty.
A Standard Simulation determines the stream outlet conditions. There is also a Generalized Simulation available, in which either the outlet conditions or the inlet conditions or the flow rate of each stream can be revised, as specified by the Revise for Heat Balance in the Process Input.
Note: In a Checking calculation, the three parameters (inlet/outlet/flow rate) are fixed for each stream, and the ratio of the actual surface area ratio to that required is determined. In both Checking and Simulation, the inlet pressure is taken as fixed, and the outlet pressure is calculated.
'Conditions' at inlet and outlet refers to specific enthalpy. Fixed conditions will also mean fixed temperature and quality (vapor mass fraction) as long as the pressure changes are as you have anticipated.
Maximum Fouling Mode
Maximum Fouling Mode answers the question, "What is the maximum fouling resistance at which the specified thermal duty can still be obtained?"
The calculation mode is similar to Checking, but adjusts the fouling resistance(s) to determine, if possible, the maximum values which give an area ratio of unity. You can specify that the fouling resistance is only adjusted on one side (hot or cold), or that both resistances on both sides are scaled or added to.
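For the simplified single-phase case, this idea has a closed form: back-calculate the U that gives an area ratio of exactly one, and the extra resistance relative to the clean coefficient is the fouling margin. All numbers below are illustrative; EDR solves the same balance with its rigorous thermal model.

```python
def max_total_fouling(duty_w, u_clean_w_m2k, lmtd_k, area_m2):
    """Largest total fouling resistance (m2*K/W) that still gives area ratio 1."""
    u_required = duty_w / (area_m2 * lmtd_k)   # U that just meets the duty
    if u_required >= u_clean_w_m2k:
        return 0.0          # even the clean exchanger cannot meet the duty
    return 1.0 / u_required - 1.0 / u_clean_w_m2k

r_foul = max_total_fouling(duty_w=500e3, u_clean_w_m2k=900.0,
                           lmtd_k=69.5, area_m2=12.0)
```

The resulting resistance could then be split between the hot and cold sides, or scaled against the existing fouling values, as described above.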
Keywords: calculation modes, design, rating, simulation, maximum fouling.
References: None |
Problem Statement: Ad hoc calculations for tags that have special characters in the name, such as - or ., fail to work in aspenONE Process Explorer, Aspen Process Explorer, and Aspen Process Graphics Editor.
For example, let's consider an ad hoc calculation that involves the tag Test-ATCAI:
=Test-ATCAI*100.
The above calculation returns an Invalid Tag in Formula error in aspenONE Process Explorer trend plots.
Process Explorer and Process Graphics Editor error out as well, with an Invalid Tag message.
To avoid this error, adopt the following workaround. | Solution: The tag in question needs to be enclosed in curly braces in the ad hoc calculation.
For example:
={Test-ATCAI}*100
Once enclosed in curly braces { }, the calculations function as expected.
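As a small illustration (this helper is not part of any Aspen product or API), a formula builder could apply the braces only when a tag name contains characters outside letters, digits, and underscores:

```python
import re

def safe_tag(tag: str) -> str:
    """Wrap a tag name in curly braces if it contains special characters."""
    if re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", tag):
        return tag           # plain name, no braces needed
    return "{" + tag + "}"   # special characters -> enclose in braces

formula = "=" + safe_tag("Test-ATCAI") + "*100"
print(formula)  # ={Test-ATCAI}*100
```

Tag names without special characters are left untouched, so the helper can be applied uniformly when building formulas.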
Keywords: A1PE
Special Characters
Invalid Tag in Formula
References: None |
Problem Statement: How are LHHW kinetic reactions specified in Aspen Plus? | Solution: A general example of specifying LHHW reactions was added to the Aspen Plus help for version 2006.
Consider a reaction W + X <-> Y + Z.
In the general case, the rate expression is:

rate = k * exp(-E/RT) * (K1*CW*CX - K2*CY*CZ) / (adsorption term)

If the reaction is non-reversible, the kinetic constants can be combined into the main constant k with its temperature dependency specified by E. The reduced rate expression is:

rate = k * exp(-E/RT) * CW*CX / (adsorption term)

where K1 = 1 and K2 = 0.
To specify this in Aspen Plus, you would enter values for k and E on the Kinetic sheet, then click Driving Force. In LHHW reaction sets, for Term 1 enter A = 0 and concentration exponents for the reactants = 1. Then set Enter Term to Term 2, and enter a large negative value for A, so that K2 is essentially zero.
For reversible reactions, the rate expression could be written as:

rate = (Kf*CW*CX - Kr*CY*CZ) / (adsorption term)

To specify this expression, enter k = 1 and E = 0 on the Kinetic sheet, click Driving Force, and specify the concentration exponents = 1 for the reactants in Term 1 and for the products in Term 2. Enter appropriate temperature-dependent expressions for Kf and Kr in Term 1 and Term 2, respectively, in the form ln(K) = A + B/T + C*ln(T) + D*T, or equivalently, K = exp(A + B/T + C*ln(T) + D*T).
Alternatively, the rate expression could be written as:

rate = k * exp(-E/RT) * (CW*CX - K2*CY*CZ) / (adsorption term)

In this case you can enter k and E on the Kinetic sheet, click Driving Force, enter the concentration exponents as above, and specify A = 0 in Term 1 (so that its constant is 1) and an appropriate temperature-dependent expression in Term 2 to represent K2, again using the form ln(K) = A + B/T + C*ln(T) + D*T.
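All of these forms share the shape rate = (kinetic factor) * (driving force) / (adsorption term), with each driving force constant given by ln(K) = A + B/T + C*ln(T) + D*T. A minimal sketch of evaluating such a rate (all species, coefficients, and values below are illustrative, not taken from any specific reactor):

```python
from math import exp, log

R = 8.314  # gas constant, J/(mol*K)

def lhhw_constant(T, A, B=0.0, C=0.0, D=0.0):
    """Driving force constant: ln(K) = A + B/T + C*ln(T) + D*T."""
    return exp(A + B / T + C * log(T) + D * T)

def lhhw_rate(T, k, E, conc, term1, term2, adsorption=1.0):
    """term1/term2 are ((A, B, C, D), {component: exponent}) pairs."""
    kinetic = k * exp(-E / (R * T))

    def conc_product(exponents):
        p = 1.0
        for comp, nu in exponents.items():
            p *= conc[comp] ** nu
        return p

    driving = (lhhw_constant(T, *term1[0]) * conc_product(term1[1])
               - lhhw_constant(T, *term2[0]) * conc_product(term2[1]))
    return kinetic * driving / adsorption

# Irreversible W + X -> Y + Z: A = 0 in Term 1 (constant = 1) and a large
# negative A in Term 2, so its constant is essentially zero.
conc = {"W": 2.0, "X": 1.0, "Y": 0.5, "Z": 0.5}
rate = lhhw_rate(T=500.0, k=1.0e4, E=5.0e4, conc=conc,
                 term1=((0.0,), {"W": 1, "X": 1}),
                 term2=((-50.0,), {"Y": 1, "Z": 1}))
```

With the large negative A in Term 2, the second driving force term vanishes and the rate reduces to the irreversible form.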
The adsorption expression in LHHW reactions depends on the assumed adsorption mechanism. Expressions for various mechanisms can be found in Perry's Chemical Engineers' Handbook, table 7-2. Suppose the mechanism leads to the adsorption term:

(1 + KW*CW + KX*CX + KY*CY + KZ*CZ)^2
To enter this adsorption expression in Aspen Plus, set Adsorption expression exponent to 2 and define five terms. For the concentration exponents, enter:
Component   Term no. 1   Term no. 2   Term no. 3   Term no. 4   Term no. 5
W                0            1            0            0            0
X                0            0            1            0            0
Y                0            0            0            1            0
Z                0            0            0            0            1
For the adsorption constants, enter temperature-dependent expressions in the other terms for the corresponding Ks, again using the form ln(K) = A + B/T + C*ln(T) + D*T. Usually C and D will be 0. For the first term, which is typically 1, enter A = 0 since exp(0) = 1.
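Putting the table together: with the adsorption expression exponent set to 2, the adsorption term is (1 + KW*CW + KX*CX + KY*CY + KZ*CZ)^2. A small sketch (the K values below are purely illustrative):

```python
def adsorption_term(conc, K, exponent=2):
    """Adsorption term: the first term is the constant 1 (A = 0, exp(0) = 1);
    each remaining term is one adsorption constant times one concentration."""
    return (1.0 + sum(K[c] * conc[c] for c in K)) ** exponent

conc = {"W": 2.0, "X": 1.0, "Y": 0.5, "Z": 0.5}
K = {"W": 0.3, "X": 0.2, "Y": 0.1, "Z": 0.1}  # illustrative values
ads = adsorption_term(conc, K)                # (1 + 0.6 + 0.2 + 0.05 + 0.05)**2
```

This value divides the kinetic factor and driving force to give the overall LHHW rate.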
Keywords: Langmuir-Hinshelwood Hougen-Watson
References: R. H. Perry and D. W. Green, eds., Perry's Chemical Engineers' Handbook, 7th ed., McGraw-Hill (1997), pp. 7-11 to 7-13. |