Problem Statement: After upgrading to aspenONE Process Explorer v9 the search facility no longer works. Misconfiguration of Apache Tomcat 8 is suspected, since the problems extend to the Tomcat applications. There are many symptoms, such as:
When you attempt to scan for tags using aspenONE Process Explorer Admin you get warning messages saying Search Engine Status : Not Running -and- A connection with the server could not be established (Error code: 500)
If you attempt to Scan all Data it identifies local ProcessData data sources but you also see errors in the output window. For example:
Failed to publish tag metadata to localhost( Request to AspenCoreSearch failed with HTTP Status Code 404 )
In aspenONE Process Explorer (a1PE) you see numerous errors whenever you attempt to use search:
1. An attempt to add a tag to a plot using the Tag Input Line produces a Search Service error:
Unable to validate server response.
Search Service responded with the following error:
HTTP Status 500 - {msg=SolrCore 'collection1' is not available due to init failure: Could not load config file... solrconfig.xml at...}
2. Attempting to use Search for Everything results in a blank search results page with the wait spinner endlessly rotating, or an error message that says:
Access service error !
1. Text Status: undefined
2. Error Thrown: undefined
You have already checked in services.msc and the Apache Tomcat Windows service is definitely running. How can search be fixed so that it works without error within a1PE? | Solution: There is a range of causes for such problems. Run through the following suggestions to make sure Apache Tomcat at least is correctly configured:
1. Stop Apache Tomcat 8.0 Windows service (tomcat8.exe) in services.msc:
2. Identify if anything else is using the typical port number used by Apache Tomcat (the default is 8080; if that is taken, an increment beyond it such as 8081 is a sensible alternative). You can do this by opening a command prompt window and running the following command:
netstat -a | findstr /r ":808.*"
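For illustration, output of the following form (machine name and addresses are hypothetical) would show that port 8081 is already taken:
TCP    0.0.0.0:8081           HOSTNAME:0             LISTENING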
In the above example, port 8081 is in use - its status is discovered to be LISTENING. In fact, in this example it is being used by the obsolete PlantArea site in Internet Information Services (IIS). If you have upgraded from v7.3 then this is likely to be seen.
As a general rule, it's worth checking in Internet Information Services (IIS) Manager to see what bindings are being used and you should ensure that they are in order:
Take this opportunity to delete the obsolete PlantArea site if this exists.
Any remaining listed port numbers should not then be used for Apache Tomcat.
You can then decide what port to use for Tomcat (we recommend using the default 8080 since many of the Knowledge Base solutions assume this). Check all the configuration files where this port number is specified, making sure the same number is used in all cases.
Refer to Knowledge Base solution 144666 for a complete list of files.
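As a quick consistency check (a sketch only; adjust the path to match your Tomcat installation), you can search the Tomcat configuration folder for the chosen port number from a command prompt:
findstr /s /n "8080" "C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat8.0.21\conf\*.xml"
Every match should show the same port number you decided on.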
3. Make sure you are consistently using the correct version of Java. This is particularly true after upgrading to Tomcat 8 which unlike earlier versions will require Java 8 itself. Also note that you should be using 32-bit versions of Tomcat and Java.
Unless there is a particular need for any version of Java preceding Java 8 then it is recommended (not least by Oracle themselves) to remove old versions:
Why should I uninstall older versions of Java from my system?
https://java.com/en/download/faq/remove_olderversions.xml
You should then (re)install the Java 8 runtime. Note: alternatively, install the Java 8 JDK if you intend to use Aspen IP.21 Browser Graphic Studio. If you do that, you can most likely avoid step 4 below, although it's still worth reading the following section if you continue to have problems.
4. Set Java paths correctly and clear up all inconsistencies that may be apparent when you go through the following tests:
i. Check that you're not running some antiquated version of Java which may be found on the path somewhere. In a command window type: java -version
Anything other than the expected 1.8.x indicates a possible cause of your problems.
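For reference, a healthy result looks like the following (the exact build numbers will vary with the installed update):
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) Client VM (build 25.66-b17, mixed mode)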
ii. Be aware that in the registry you can find the CurrentVersion for the runtime (and JDK) versions of Java:
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\JavaSoft\Java Development Kit]
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\JavaSoft\Java Runtime Environment]
iii. Check the PATH environment variables for obsolete Java paths (in Control Panel click System. Click Advanced system settings. On the Advanced tab, select Environment Variables).
Note, after installing Java 8 the PATH environment variable is likely to lead with folder: C:\ProgramData\Oracle\Java\javapath
If you look in that javapath folder you will see symbolic links to java.exe, javaw.exe and javaws.exe. Make sure these are correct by right clicking each using Windows Explorer and selecting Properties | Shortcut | Open folder location. If this takes you to a Java 6 or 7 folder then again it's best to uninstall the old Java and reinstall Java 8. Alternatively you can manually recreate the symbolic links yourself if Security Policy permits. In this example Administrators can create symbolic links:
Recreate necessary symbolic links by running this script in a command prompt window (modify it to account for the particular version of Java 8 you have installed):
C:
CD C:\ProgramData\Oracle\Java\javapath
DEL "java*.exe" /Q
MKLINK java.exe "C:\Program Files (x86)\Java\jre1.8.0_66\bin\java.exe"
MKLINK javaw.exe "C:\Program Files (x86)\Java\jre1.8.0_66\bin\javaw.exe"
MKLINK javaws.exe "C:\Program Files (x86)\Java\jre1.8.0_66\bin\javaws.exe"
Having done so, make sure the files listed in the javapath folder have .symlink file type:
Double click javaws.exe to test the link - a popup dialog explaining options should appear. If anything looks out of place then again we recommend reinstalling Java 8 rather than manually configuring these.
"java -version" in a command prompt window should now be returning java version "1.8.x" even when run from the root folder.
iv. Check the JAVA_HOME and JRE_HOME environment variables for obsolete Java paths (in Control Panel click System. Click Advanced system settings. On the Advanced tab, select Environment Variables). These do not always exist (which itself is a problem), but if they do then you must make sure they specify Java 8 folders; problems may occur if this is not true.
JAVA_HOME - Should point at your appropriate version Java Development Kit installation.
JRE_HOME - Should point at your appropriate version Java Runtime installation.
The point is that you do not want the batch files used by Tomcat to trip up on the wrong version of Java or non-existence of required environment variable(s). For example, you can see references to the environment variables if you open the following password.bat file in a text editor:
C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat8.0.21\appdata\solr\collection1\conf\password.bat
Password.bat gets used when you attempt to set a user name and new password with the aspenONE Credentials utility program. In fact, if password.bat were not present at all, aspenONE Credentials would crash when you attempt to apply changes!
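The check that produces the error message quoted in step 10 below is of roughly this form (a simplified sketch of the standard Tomcat environment-variable validation, not the file as shipped):
rem Sketch only: verify that a usable Java home is defined
if not "%JAVA_HOME%" == "" goto okHome
if not "%JRE_HOME%" == "" goto okHome
echo Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
echo At least one of these environment variable is needed to run this program
exit /b 1
:okHome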
5. Run C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat8.0.21\bin\tomcat8w.exe
Click on the Java tab and make sure the path for the Java Virtual Machine is correct, e.g.:
C:\Program Files (x86)\Java\jre1.8.0_66\bin\client\jvm.dll
Or even better tick the "Use default" check box. You should see that a Java 8 version of jvm.dll is written into the edit box assuming Java 8 is demonstrably the default Java version on your machine (see item 4).
If you look at the Java JDK folders in the file system you may note that there is a server and client version of jvm.dll. The server and the client VMs are similar although the server VM has been specially tuned to maximize peak operating speed. Either can be used although you may prefer to use the server version.
6. Delete all Tomcat log files - whatever is in them is obsolete after the changes you have made above:
C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat8.0.21\logs
7. Open aspenONE Credentials utility program - please use the Run as Administrator option when you do so. Tick the "Tomcat Basic Authentication for Search Security" check box and set User Name and New Password. By default these will already be User Name = admin, Password = admin. Reapply these defaults as a suggestion - just to get this working. Click Apply button. You should not get errors and should act on messages if they occur. If the program crashes then something is very wrong with the installation, in particular the password.bat file.
8. Restart IIS (iisreset.exe in command prompt window) and start the Apache Tomcat service in services.msc.
9. After a minute or so, Tomcat will have had time to fully deploy its registered applications. You should have a quick look at the catalina log file in the logs folder to make sure no errors or exceptions have occurred. Search the Knowledge Base for help with any worrying messages - particularly regarding security. Also make sure all references to the Java version are as expected in the tomcat8-stderr log file.
10. Log into the Tomcat applications used by Aspen search (URLs are case sensitive, and you should change the port number to the one you have chosen for Tomcat):
i. http://localhost:8080/solr
You will be asked for the credentials that you provided in the aspenONE Credentials utility program. If you cannot log in using your credentials (HTTP Status 401) then it's likely that the server credentials have not been saved correctly in the configuration files. If the environment variables were not configured correctly then you will see an error message (instead of an encrypted password) in the tomcat-users.xml. For example,
<user username="admin" password="Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program" roles="AspenSearch" />
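When the environment variables are configured correctly, the entry contains an encrypted password string instead, for example (hash value hypothetical):
<user username="admin" password="8e67bb26b358e2ed20fe552ed6fb832f" roles="AspenSearch" />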
Note, tomcat-users.xml file is mentioned in the Knowledge Base solution 137131.
The Apache Solr dashboard should indicate Solr Specification Version: 4.7.0
ii. http://localhost:8080/AspenCoreSearch
You should see the following jobs listed:
DataReceiver1
IngestionRequest1
ItemReceiver1
NPEScanData
NPEScanFiles
PeriodicMaintenance
ScanAPRMDataSources
aspenONEAutoScanner
DataReceiver1, IngestionRequest1 and aspenONEAutoScanner are not used by MES and can be switched to Manual by expanding Job, expanding Schedule, and clicking Manual. Setting jobs to Manual will cause them not to run.
If any of the other jobs are missing then you should consider a repair of your Aspen Web Server product using the Setup.exe program on your install media.
11. It is now time to see if search is working again in a1PE. Try to scan for tags using aspenONE Process Explorer Admin. Try to search for a tag in aspenONE Process Explorer. If problems continue then refer to other Knowledge Base solutions. In particular solution 143946 describes how to further clean up Tomcat folders.
Keywords: HTTP Status 404
HTTP Status 500
Apache Tomcat/8.0.21
References: None |
Problem Statement: How do I render all indirect costs to zero in Aspen In-Plant Cost Estimator? | Solution: Indirect costs are modified in the Contractor form and in the Engineering workforce form in Aspen Capital Cost Estimator (KB number: 000044667). However, these forms are not available in Aspen In-Plant Cost Estimator.
Due to this, if a user wants to render all indirect costs to zero, the template attached to this solution should be used.
The steps to create this template are similar to those for ACCE, with a minor difference: instead of zeroing Total indirects cost (USD), the user should zero out each of the following fields individually:
Fringe benefits percent DFL (%) 0
Burdens percent DFL (%) 0
Travel percent DFL (%) 0
Consumable/small tools pct DFL (%) 0
Scaffolding percent DFL (%) 0
Field Services percent DFL (%) 0
Temp constr/utilities pct DFL (%) 0
Miscellaneous expenses pct (%) 0
Equipment rental pct calculated (%) 0
Mobil/demobilization percent DFL (%) 0
After that, user can save the file as a template and use it in In-Plant Cost Estimator.
Note: To save a template file, the user must only modify fields in the Project Basis View tab, and must not add any type of equipment to the project.
Keywords: Indirects, Contractor, Zero, Cost
References: None |
Problem Statement: What are the Cost Basis & Indices for Aspen Economic Evaluation V9.0 & V9.1? | Solution: The Cost Basis for V9.0 and V9.1 is the First Quarter of 2016. The indices are listed below. Indices for each Cost Basis Year can be found in Chapter 33 of the Icarus Reference Manual, which you can access from the Help | Documentation option.
Keywords: Aspen Economic Evaluation, Cost Basis, Indices
References: None |
Problem Statement: When installing Aspen Asset Analytics on a Windows Server 2012 R2 machine, you will need to install Windows Update KB2999226 as a prerequisite if the update has not already been applied.
(It is located at \3rd Party Redistributables\KB2999226\)
However, when you double-click "KB2999226 for Windows 2012 R2-x64.msu", you get the error message below: | Solution: 1- Copy the update installer to C:\win2012 and rename it to 2012.msu
2- Run CMD as administrator
3- Type
Expand -F:* C:\win2012\2012.msu C:\win2012\
You will see that 4 files have been extracted into the win2012 folder
4- Type
DISM.exe /Online /Add-Package /PackagePath:C:\win2012\Windows8.1-KB2999226-x64.cab
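Optionally, you can confirm the package was applied by listing installed packages:
DISM.exe /Online /Get-Packages | findstr /i "KB2999226"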
Keywords: KB2999226
not applicable
Windows update
References: None |
Problem Statement:
I was working with a model where !PDISTxls was hitting the xls size limit. I wanted to change the Output Spreadsheet Extension to Xlsx - after doing so, the model runs but cannot finish. PIMS hangs with the PIMSWIN Error message. Also, Excel doesn't work afterwards and I am forced to restart my machine.
Applicable Version(s):
All versions until version 10. | Solution:
The error happens when writing the !ROWS.xlsx table if it has more than 3200 columns. This is a known issue in PIMS, tracked under CQ00747254 and fixed in version 10; it affects all previous versions. It happens because the column limit in the Excel writer code is set to a maximum of 3200.
Note that you do not have to restart your machine. You just need to go to the task manager and kill all the hanging EXCEL.EXE processes.
If switching to V10 or higher is not possible then the only workaround is to reduce the size of the !ROWS.xlsx or to reduce the size of the original table which indicated the need to switch from Xls to Xlsx as output. This can be achieved by model simplification.
Keywords: None
References: None |
Problem Statement: The Aspen Cim-IO for OPC Interface service can be installed if it does not appear as a service on the system where Cim-IO for OPC is installed. | Solution: Open a command prompt and go to the cio_opc_api directory.
> cd %cimioroot%\io\cio_opc_api
Run the InstallService.exe command with the correct path to the Manager.exe file.
> InstallService "C:\Program Files\AspenTech\Cim-IO\io\cio_opc_api\Manager.exe"
One may be prompted for a user name and a password after pressing 'Enter' on the command above.
This will install the Cim-IO OPC Interface service and add the following key to the registry.
HKLM\System\CurrentControlSet\Services\zzCIMIOOPCMGR
Note, starting with V10, the Manager.exe file itself is no longer distributed with Aspen Cim-IO Interfaces. The supported method to create and manage the Cim-IO Interfaces when using V10 is to use the Aspen Cim-IO Interface Manager. See How to configure an Aspen Cim-IO for OPC DA interface using the Cim-IO Interface Manager
Keywords: cimio opc interface service, cimio for opc
References: None |
Problem Statement: How to create and share a link to saved files in AspenONE Process Explorer? | Solution: This feature was added to AspenONE Process Explorer in version 9.1.
Open a saved file in AspenONE Process Explorer, for example a Trend Plot.
Click on the Copy Link button on the A Bar.
Copy the link provided and paste it into an Email or other means to share it with colleagues.
Keywords:
References: None |
Problem Statement: Graphics can easily be configured so that a user can click a dynamic object and invoke another graphic within the same project, another page within AspenIP.21 Process Browser or another webpage all together. This solution provides 5 examples for scripting such actions, more commonly referred to as hotlinks. | Solution: The below examples can be added to the actions tab of any dynamic object in the graphics studio, although typically a hotlink would be created from the OnClick event of a button object.
Please note that JavaScript is case-sensitive, so be careful to follow the case as well as the syntax.
EXAMPLE #1:
Adding the below script to the actions tab of the dynamic object will replace the current graphic with the one specified:
graphicReplace("DemoMain");
Note: The above example is calling a graphic named DemoMain, which is part of the AspenDemo.prj. The graphicReplace function only works if the graphics exist in the same project. Please see example #4 for instructions on opening a graphic from a different project.
EXAMPLE #2:
Adding the below script to the actions tab of the dynamic object will open a new browser session to Sun Microsystems' Java web site.
window.open('http://java.sun.com');
Note: The calling graphic will still be displayed since this script opens a separate instance of Internet Explorer to display the page.
EXAMPLE #3:
Adding the below script to the actions tab of the dynamic object (typically the button object) will replace the current graphic page with Sun Microsystems' Java web site.
window.open('http://java.sun.com', '_self');
Note: This example is the same as #2, except for the '_self' parameter, which indicates that the specified page should be opened within the same instance of Internet Explorer that it's being called from.
EXAMPLE #4:
Adding the below script to the actions tab of the dynamic object will replace the current graphic with another graphic, either in the same project or in a different project:
window.open('graphics.asp?ModelName=/Web21/Graphics/AspenDemo/DemoScripts&w=800&h=550');
Note: This example opens a graphic named "DemoScripts" which is part of a project named "AspenDemo". In order for this example to work, the graphic project must already be compiled to the web server. The final parameters in this script specify the width and height of the graphic, and can be adjusted to your preference.
EXAMPLE #5:
Adding the below script to the actions tab of the dynamic object will open a saved plot in a new window:
window.open('PEPlot.asp?path=/Web21/Plots/sampleplot.xml');
Note: This script opens a saved plot named "sampleplot.xml" from the directory <drive>:\Inetpub\wwwroot\AspenTech\Web21\Plots. The path to the file must be one recognized on the Web.21 server. The above script will open the file in a new IE window, but if you wish to open it in the same window you can add the _self argument to the command. For example:
window.open('PEPlot.asp?path=/Web21/Plots/sampleplot.xml', '_self');
KeyWords
javascript
onclick event
shortcut
hotlinks
Keywords: None
References: None |
Problem Statement: Two possible domain level changes that could affect the InfoPlus.21 (IP.21) database include:
1. Changing the domain that the IP.21 server computer account resides in
2. Changing the domain in which the IP.21 Administrator account resides
For the first scenario, one is moving the IP.21 server computer account from one domain to another domain. Doing this changes the fully qualified computer name for the IP.21 server. For example, if the IP.21 server hostname is IP21Serv and currently resides in the OLD_DOMAIN domain, the fully qualified name would be something similar to IP21Serv.OLD_DOMAIN.companyname.com. If one moves the server to the NEW_DOMAIN domain, the new fully qualified host name would be IP21Serv.NEW_DOMAIN.companyname.com.
For the second scenario, one is changing the account used to start the IP.21 database from an account in the OLD_DOMAIN to an account in the NEW_DOMAIN. This account name is listed in the Services tool (Start | Settings | Control Panel | Administrative Tools | Services) for the Aspen InfoPlus.21 Task Service. Under the Log On tab for this service, a domain account should be specified as the account used to start up the service. For example, if the user account used to start this service were IP21Admin, the account would be listed in the service as OLD_DOMAIN\IP21Admin. After switching over to account in the new domain, the account would be listed as NEW_DOMAIN\IP21Admin.
The following solution details how the IP.21 database can be affected by these domain changes and what steps one can take to ensure a seamless transition. | Solution: *********** Changing the domain for the IP.21 server computer account ***********
1. If the new domain is separate from the domain that holds the user accounts for the IP.21 users and the IP.21 task service, ensure that the domains have a trust relationship. A two-way trust relationship is required. This should be verified prior to switching the IP.21 server over to the new domain.
2. Verify that the old fully qualified domain name is not referenced by client applications when accessing the IP.21 server. If in a data source configuration tool, such as ADSA or Local Security (see note below), the full computer name was listed rather than the short hostname (i.e. IP21Serv) these references would have to be modified.
Note: One would only need to check the Local Security configuration if Local Security was installed on the same computer as the IP.21 server.
To check the ADSA configuration, open the ADSA Client Config Tool (Start | Programs | AspenTech | Common Utilities) on the ADSA server or on a client computer. Select the data source for the IP.21 server and select the Edit button. Check through the services listed for the data source to verify the hostname is correct.
To check the Local Security configuration, open the AFW Tools application on a client computer. (Start | Programs | AspenTech | AFW Tools). On the Client Registry Entries tab, double click the URL setting. Check the Data field to ensure that the correct server name is listed in the URL path. It should look something similar to "http://IP21Serv/AspenTech/AFW/Security/pfwauthz.asp"
Other client applications may access IP.21 by means other than ADSA. If it is unclear how a particular client application accesses IP.21 and there are concerns that there may be problems after changing the server domain, please contact support.
The general procedure when changing domain for a computer account typically involves rebooting the computer. Once the NT Administrator switches the IP.21 Server to the new domain and the computer is rebooted, IP.21 should start up as normal. Client applications should be able to connect successfully assuming that they are not referencing the old fully qualified domain name.
*********** Changing the domain for the IP.21 Administrator account ***********
Changing the account used to start up the IP.21 services involves more steps. In addition to modifying the account listed for the AspenTech services, the services must be restarted, the IP.21 System Account should be updated (if running IP.21 v5.0 or higher), and the DCOM settings must be updated.
To switch over to the new domain account:
1. Add the new domain user account to the local Administrator group on the IP.21 Server.
2. If Local Security is used, add the domain user account into the Aspen Local Security Administrator role in the AFW Security Manager.
3. Update DCOM security settings allowing for the new domain account to administer IP.21
a. Select Start | Run
b. Type dcomcnfg and click [OK]
c. In the Distributed COM Configuration Properties window select Default Security Tab
d. Click [Edit Default] for the default access permissions
e. In the Registry Value Permissions window Click [Add]
f. In the Add Users and Groups window, select the new domain and click on the [Show Users] button
g. Find and select the user name and click on [Add] to add the user to the Add Names list box.
h. Make sure Allow Access shows in the type of access at the bottom of the window.
i. Click [OK] to return to the Registry Value Permissions window. The selected user will show up in the Names list box.
j. Click [OK] in the Registry Value Permissions and Distributed COM Configuration Properties windows
4. Stop IP.21 from the IP.21 Manager.
5. On the IP.21 server, open the Services Tool. Stop any AspenTech services that start up under a specific account, such as the Aspen InfoPlus.21 Task Service.
Note: Services that require a specific account depend on version and products installed. Some possible services that would require updating include:
AFW Security Client Service
Aspen Audit and Compliance Server
Aspen SQLplus Authorization Server
Aspen Production Record Manager BCU Service
Aspen Production Record Manager Services
Aspen InfoPlus.21 Task Service
AspenTech Calculator Engine
CIM-IO Manager
On each service requiring a domain account, update the account and password fields for the new domain user account. Restart the service after the account is updated. Starting the Aspen InfoPlus.21 Task Service should start up the IP.21 Database.
6. Open the IP.21 Manager. Check to see if the database is starting due to the restart of the Task Service. If not, then manually restart the database using the "Start InfoPlus.21" button in the IP.21 Manager.
7. Update the IP.21 System Account used for auto-starting IP.21 on a reboot:
a. Open the IP.21 Administrator. To do so, right-click on the Aspen InfoPlus.21 Administrator desktop app (or shift-right-click on some older Windows operating systems), select run-as-different-user, and use the OLD service account to run the app. Next, right click on the database name and select [Properties] from the context menu.
b. Select the System Account tab on the Properties window. It should show that the system account is the old domain account and that task service account is the new account.
c. Tick the check box for "Set System Account Equal to Task Service Account".
d. Click [OK] and close the IP.21 Administrator tool.
If local security is used, both the IP.21 Administrator account and the anonymous user account for the AspenTech virtual directory (see note below) must have read access to group membership information from the domain housing the client user accounts. This is required so that the IP.21 security components can access group membership information for user authentication to the IP.21 database.
Note: To see what account is being used as the anonymous user account for the AspenTech Virtual folder:
1. Open the Internet Information Services Manager from Start | Settings | Control Panel | Administrative Tools | Internet Information Services (IIS) Manager
2. Under the Default Web Site, browse to the AspenTech folder and double-click on it.
3. Now, double-click on the Authentication icon in the IIS section in the right pane. Enable Anonymous Authentication.
4. By default, the IUSR_nodename account is used for anonymous access. This is a local computer account that gets created when IIS is installed on a machine. This is the account that needs to have read permission to the domain controller information. In some cases, a domain account needs to be used for the Anonymous User account in place of the IUSR_nodename account, as the IUSR account is not granted read access to the group list information in the domain controller. If needed, replace the Anonymous User account in the AspenTech virtual directory with an account that does have permission to resolve group information from the Domain. Typically, the account used as the IP.21 Administrator account is sufficient. If this account is changed then IIS must be restarted.
One can use the SSTest or the aspenONE diagnostics utilities to help determine whether or not the anonymous user account for the AspenTech virtual directory needs to be replaced. SSTest is located in the C:\Program Files (x86)\AspenTech\BPE folder. The aspenONE Diagnostics can be run from the Start menu. If the account specified as the Anonymous User does not have correct domain controller permissions, the SSTest utility will fail the "Performing Client ADSI Domain Test".
To verify that IP.21 is working correctly once the domain changes have taken effect, test the client applications such as Process Explorer to make sure they can connect to the database. Verify that one can open the IP.21 Administrator and access the IP.21 database. Also, try opening the AFW Security Manager utility from the security server machine. If there was a problem with security, the Security Manager would fail to open. If there are problems with any of these checks please contact support.
Keywords: None
References: None |
Problem Statement: Why do I have liquid on the outlet of my KO drum? | Solution: Inside Aspen Flare System Analyzer KO Drums are used to allow liquid to separate from the feed stream so that it can be removed from the flare system.
This block completely removes the liquid-phase flow entering the separator feed from the upstream elements of the network; this is applied to the outlets of this object.
When liquid is present at the outlet(s), the most likely cause is condensation, which can occur as pressure conditions change in the downstream pipes.
Keywords: Liquid, Outlet, KO Drum, Vapor
References: None |
Problem Statement: This knowledge base article describes the Aspen InfoPlus.21 history utilities and their functions. | Solution: A number of history utility programs are provided to allow you to back up, restore, and repair Aspen InfoPlus.21 history repository filesets. Those utility programs are called from the Windows OS command prompt.
The IP.21 history utility programs are located here:
C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin
Note: These commands may also be used in batch files and the repository names ARE case sensitive.
H21ARCADMIN - This command allows you to perform the following tasks:
* Display a number that indicates which history file set is considered the ACTIVE one
Usage:
C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin>h21arcadmin -A -rTSK_DHIS
39
C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin>
* Create a new file set for a specified time span
* Change the beginning and ending times for a file set
* Display the time span for any file set, even ones that are not currently mounted
Synopsis: h21arcadmin (-A| -c | -d | -m) [-s "start_time" -e "end_time"] [-f filepath]
Note: AFTER USING h21arcadmin ALWAYS USE h21arcck. h21arcck is required to validate the fileset after using h21arcadmin.
H21ARCCK - This command checks a history fileset to determine if it is corrupt, and rebuilds it if it is corrupt.
Note: The functionality in the Command Prompt-based h21arcck is replicated in the Windows-based Repair Archive Wizard.
AspenTech recommends that you stop history storage with h21arcstop before you use this command. However, you can use the -b parameter to check any archive (except the current archive) while the history storage is running.
Assuming that you stop history storage to run h21arcck, be certain to start history storage again afterward. Restart history storage using h21prime.
Synopsis: h21arcck [-rrepname] (-afsnum|-i) [-d] [-o] [-b]
-rrepname The name of the repository for which to display the status of the buffer. You do not need to specify a repository name unless you have multiple Aspen InfoPlus.21 history repositories. Note: The repository name is case sensitive.
-afsnum Specifies the number of the history file set.
-i Initializes history when the history storage software starts up. Do not use this parameter.
-d Displays debugging messages on the screen.
-o Forces a rebuild of the history file set even if it is not corrupt.
-b Allows the check to occur while the archives are running.
TIME REQUIRED FOR REBUILDING. h21arcck rebuilds the arc.key file from the data in the arc.dat and arc.byte files. It may take over an hour for h21arcck to rebuild a very large (>200 MB) file set.
H21ARCPAUSE - This command pauses the history storage program (h21archive) to allow backing up of the active Aspen InfoPlus.21 history repository file set. While paused, tag data is written to the history event buffer for the repository and the Aspen InfoPlus.21 history repository is not available for use by any of the client programs such as the Aspen Process Explorer or Aspen GCS.
The only time you should ever use this command is to prepare for backing up a file set.
Synopsis: h21arcpause -rrepname
-rrepname The name of the history repository to be paused, for example, archive_3. This parameter is only needed for systems with multiple Aspen InfoPlus.21 history repositories. Note: The repository name is case sensitive.
H21ARCPROC - This command resumes the operation of the history storage program (h21archive) after using h21arcpause. When you use h21arcproc, the buffered tag data in the event buffer is written to the active Aspen InfoPlus.21 history repository file set on a first in first out basis.
Synopsis: h21arcproc [-rrepname]
-rrepname The name of the repository on which to mount a history file set. You do not need to specify a repository name unless you have multiple Aspen InfoPlus.21 history repositories. Note: The repository name is case sensitive.
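For example, a typical fileset backup sequence (repository name as in the h21arcadmin example above; fileset and backup paths are hypothetical) brackets the file copy with these two commands:
h21arcpause -rTSK_DHIS
xcopy "C:\Histdata\TSK_DHIS\arc39" "E:\Backup\TSK_DHIS\arc39" /E /I
h21arcproc -rTSK_DHIS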
H21SHIFT - This command forces an immediate shift from the active history file set to the next available file set.
The next available history file set is defined by the history storage software to be the lowest numbered file set that is dismounted, and unused. If there is no file set that meets these requirements, then the next available file set is the mounted, unused file set with the oldest data.
Synopsis: h21shift [-rrepname]
-rrepname The name of the repository in which the history file set shift should occur. You do not need to specify a repository name unless you have multiple Aspen InfoPlus.21 history repositories. Note: The repository name is case sensitive.
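Usage (repository name as in the h21arcadmin example above):
h21shift -rTSK_DHIS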
H21CHGPATHS - This utility allows users to change the paths of the file sets from an old server to the new server. Run the utility in the following manner:
Synopsis: chgpaths old_string new_string
The program will search for old_string in all the paths in the config.dat file and replace it with new_string.
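For example, to update filesets recorded under drive D: to drive E: (paths hypothetical):
chgpaths D:\Histdata E:\Histdata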
NOTE: There are various other historian utilities. Details about these history utility programs can be found in the Aspen InfoPlus.21 Administration Manual or in the Aspen InfoPlus.21 Manager Online Help file. You can find examples and the meaning of each command option (for example, (-afsnum|-i) [-d] [-o] [-b]) in this manual.
(This article was previously published as solution 108428.)
Keywords: Utility, H21arcck, H21arcadmin, h21shift, chgpaths, h21arcproc, h21arcpause
References: None |
Problem Statement: Scheduled calculations using Store & Forward in Aspen Calc can have different states based on the conditions of the inputs and calculation results; the state can easily be identified by the icon to the left of the calculation. Under specific conditions, it sometimes seems that a calculation is paused and that Aspen Calc is not executing it at the scheduled time. This article explains why some calculations can be paused by Aspen Calc, as well as their other states. | Solution: The calculation states are summarized in the Aspen Calc help files under the Icons topic. The details of these states are as follows:
All calculations showing the green tick with a clock icon behind it are calculations that executed properly on the last schedule group execution. These calculations will show the result in the Aspen Calc interface or in one of the output parameters bound to tags in Aspen InfoPlus.21 or other sources.
Calculations showing the pause icon are calculations with an input value that has not been updated since the last execution time; Aspen Calc simply pauses them until an update of all input values is detected at the next scheduled execution. This can happen, for example, when the calculation is executed more quickly than the update rate of the input. For example, consider a calculation (in calc script) like:
A = B * 1
Where A is bound to a record holding the result in the Aspen InfoPlus.21 historian (CalcResult) and B is bound to a tag updating every minute (InputTest), as follows:
If we schedule this calculation as part of the 5-second schedule, we will see that on the first execution the calculation shows the clock icon, and this remains for 5 seconds until the calculation is executed again and finds that the previous values have not been updated (since the tag updates every minute); it then puts the calculation on pause and shows the pause icon. The calculation goes into pause mode, but the execution time keeps updating every 5 seconds, as can be seen in the Aspen Calc interface. The calculation remains paused until the tag updates 1 minute later; the calculation is then executed at the next schedule. This behavior can be followed in the following trend plot:
The red line represents the value of B, while the blue line represents the value of A as the result of the calculation; the plot shows actual values.
Here we can see that the calculation is executed every 5 seconds but does not write values to A until the tag updates after one minute. At that point the calculation is executed for all previous scheduled executions using the previous values, and also executes with the new value before entering pause mode again after five seconds. The calculation will remain paused until the next input value update. For a calculation to be executed while in pause mode, all of its inputs must have been updated. For example, consider the next calculation:
A = B + C
Where A is bound to a record holding the result in the Aspen InfoPlus.21 historian (CalcResult), B is bound to a tag updating every minute (InputTest) and C is bound to a tag updating every two minutes (InputTest), as follows:
We can see on a trend that the calculation is not executed by just the update of variable B:
The red line represents the value of B, while the green line represents the value of C. The blue line represents the value of A as the result of the calculation. The plot shows actual values.
Here we can see that calculation results are not written upon the update of the value in parameter B. We need to wait until the value of parameter C updates:
Finally, calculations showing the forward icon are executed for all previous history data. In this case, if an input value is receiving values faster than the calculation is executed (in most cases as a result of a Cim-IO forward event) and the calculation has the S&F option enabled, Aspen Calc will detect that values prior to the value read at the scheduled execution exist, and will repeat the calculation for these points. This is important when the calculation result is written to IP.21 records and we need a history of the calculation result for each value.
Keywords: Aspen Calc
Calculation state
References: None |
Problem Statement: When an assay that is part of a crude blend has a zero entry for a yield, PIMS appears to entirely ignore the property contributions of that specific stream, even though it has non-zero yields in other components of the crude blend (and therefore needs to be included). In other words, A, B and C are components of blended crude X. A has a zero gasoline yield, so PIMS ignores the properties in gasoline for the entire blended crude, even though B and C have gasoline yields and therefore contribute to its properties.
Here is an example of what the above statement is saying:
Table CRDBLEND & ASSAYS:
After you run the model with this setup, you will see that in the final matrix, the property of the cut contributed from the blended crude is not shown. For example: the property is CLI, and if you search RCLISK1, SCRASLP will not be shown. (SCRA is the logical crude unit) | Solution: This is an issue to be addressed in V12. The issue was that there are two contributors to crude blend SLP that have zero contribution (SL2, HSP), and HSP was missing a value for CLI of SK1, causing PIMS to skip calculating CLI for the blend SLP. We have updated the logic to ensure it skips crude blend components with a contribution of zero. We were also skipping the calculation of the index property for a cut if the yield of that cut for the current crude was zero. We have removed this check so that all indices are calculated, regardless of the yield for the cut.
A good workaround for this issue is to remove the zeros for SL2 and HSP for crude blend SLP from CRDBLND, and also change the "0" in VBALSK1 to a very small coefficient.
Keywords: None
References: None |
Problem Statement: In EDR Shell & Tube Exchanger design & rating, when we change the tube length limits in the "Geometry Limits" Design option, the software finds an optimum tube length. However, on varying the limits again, the estimated optimum tube length is well below the earlier specified tube length, yet the area increases. | Solution: This is a known issue. For example:
When user tries to change the range of tube length in Design Options - Geometry Limits :
In the design options, if the tube length range is 800 to 3000 mm, the software finds an optimum tube length of 2387 mm (area 38 m2).
However, if we change the upper limit to 4000 mm, the tube length becomes 2700 mm (with area 19 m2).
V10 had a few issues in the design logic that are fixed in V11, including a correction to the rho-V-squared definition for operational issues. Design should always start with the longest and thinnest exchanger and then explore other geometry variations. The behaviour is corrected in V11.0.
Keywords: None
References: None |
Problem Statement: The GRAYSON and CHAO-SEA methods use "GMSHXL" as BIPs, but the user CANNOT select this parameter in the Regression Format. | Solution: This is a GUI issue: the GMSHXL parameter is currently not included in the user interface.
The user can use the 'Input' feature to achieve this.
This will be fixed in V11 and above, so the prescription below applies to V10 and earlier.
1. Go Customize | Input
2. Add a statement that replaces the current regression parameters. For example, to use GMSHXL in the first column to regress the BIP between C3 and NC4:
PARAMETERS
BIPARAMETER 1 GMSHXL C3 NC4
3. After the regression executes, the user will find that the parameter for Grayson / Chao-Sea is regressed correctly.
Keywords: Regression, Grayson, Chao-Sea, GMSHXL
References: None |
Problem Statement: Why do the On-demand Aspen Watch (AW) Reports generated using the Web Display (PCWS) display incorrect aggregate values for Standard Deviation of a variable? | Solution: Aspen Watch performs aggregations for monthly values by finding a weighted average of the daily aggregates. While performing this calculation, the AW engine uses the number of data points (corresponding to the amount of time over the specified period that AW was collecting data for the controller) to perform the weighted average. The equation for this average is a point-count-weighted mean of the daily aggregates:
Monthly aggregate = Σ (points_i × daily_i) / Σ points_i, for i = 1 … n
Where "n" represents the number of days with good data collection status.
This monthly data aggregation calculation is performed on the last day of every month at 23:59 hrs. However, these calculations are performed differently when a KPI report for a custom specified time duration is requested. In that case, the monthly average is an unweighted mean of the daily aggregates:
Monthly aggregate = Σ daily_i / n, for i = 1 … n
Where "n" represents the number of days with good data collection status.
Essentially, each day is given equal weight irrespective of the number of data points present in each day. The AW engine uses this procedure to reduce the computational load on the system (a compromise in accuracy in order to achieve reasonable calculation speed). The uncertainty in the calculated result increases if the number of data points collected each day is not consistent.
Additionally, since Standard Deviation is a nonlinear function of both time and the instantaneous values, any aggregation method will likely introduce some approximation.
Keywords: Aggregates
Aspen Watch KPI
Standard deviation
References: None |
Problem Statement: When a user converts an Aspen Plus RadFrac with the simple tray hydraulic model to Aspen Dynamics flow-driven, the conversion fails with the error message below.
5703 : Error : Variable HYD_STEN is undefined. | Solution: This problem arises if the case file was created in the old version V2006.5, because of a compatibility issue between V2006.5 and V8.6 (and above).
If the user opens the file in V8.4 or below, the error message does not appear.
"Flooding Options" in Aspen Plus will only work when "Include hydraulic parameters" is checked.
Go to RadFrac | Analysis | Report | Property Options and check the "Include Hydraulic Parameters" option; the problem is then solved.
Keywords: RadFrac, Flow Driven, HYD_STEN, flooding
References: None |
Problem Statement: The Tulsa unified model is recommended as an improvement over the Beggs and Brill model and supports 2-phase and 3-phase hydraulic calculations; however, the simulation sometimes does not converge when using the Tulsa 3-phase model. | Solution: The problem is that the Tulsa 3-phase model assumes the second liquid is water, and it sometimes fails to converge when the second liquid phase is quite different from water.
This will be fixed in V11, but here is a tip for users on V10 and earlier versions.
If the second liquid phase is not a significant issue, the user can assume a single liquid phase and solve the simulation. In this case, selecting the main component of the fluid will also help convergence.
Keywords: Pipe, Hydraulic, Tulsa
References: None |
Problem Statement: The exponent for critical temperature in the 'a' parameter (attractive term) is 1.5 in the Aspen Properties documentation, which differs from the 2.5 in Redlich-Kwong's original literature. | Solution: This is an error in the documentation only. The Aspen Properties calculation itself follows 2.5, and this is used in NRTL-RK and other activity models associated with the RK model in the vapor phase.
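For reference, the standard Redlich-Kwong attractive parameter has the textbook form
a = 0.42748 * R^2 * Tc^2.5 / Pc
with the attractive term of the equation of state written as a / (sqrt(T) * V * (V + b)), i.e. the critical-temperature exponent is 2.5, consistent with what the calculation uses.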
Fixed in Version
Will be fixed in V11
Keywords: Aspen Plus, Aspen Property, Redlich-Kwong, RK
References: None |
Problem Statement: When a user tries to feed a liquid sulfur stream into the Coalescer / Condenser in Sulsim, the user sometimes observes a negative flow rate for the outlet gas stream, with negative compositions for some of the components. | Solution: The problem is that running the liquid outlet of a condenser through another condenser/coalescer results in the calculated dissolved H2S in liquid sulfur being greater than the total available H2S.
The best workaround in V10 and below would be to replace the affected coalescers with coolers or heaters. This will allow the user to change the temperature of the streams, but will not remove any vapor.
Fixed in Version
Will be fixed in V11
Keywords: HYSYS Sulsim, negative flow rate, coalescer, condenser
References: None |
Problem Statement: Sometimes the user might experience an issue with database maintenance options when writing results to a SQL Server database. Is this a known issue? | Solution: If you are connecting only one PIMS model to one database, there shouldn't be any issues. There are three types of database maintenance options:
Keep existing: Keeps the existing records and adds new records after the run.
Only unique cases: Deletes only those cases from current user and in the current model.
Purge existing: Deletes only solutions for the current model and for the current user.
After you run the model and would like to observe the results in your SQL database, please execute the following query to observe the solution ID, case ID and date and time for each run that you made:
SELECT A.SolutionID, A.CaseID, A.ObjectiveFunction, B.DateTime FROM PrCase AS A
LEFT JOIN PrSolution AS B ON A.SolutionID = B.SolutionID
Based on the option that you have selected, the output table will look different.
If you are connecting multiple PIMS models to one database, it depends. If you have a low number of models, the database maintenance options should also work as expected. The above query will show you the results of your runs.
However, when a large number of models write to one database, there is a risk of all options behaving the same as "Keep existing". This is a known defect, and R&D will fix it in upcoming PIMS patches (for V10 it will be fixed in CP3).
Keywords: None
References: None |
Problem Statement: Is there a way to make a sort or filter apply to a datasheet upon creation? | Solution: In Aspen Basic Engineering, users can create their own sorts or filters through the Knowledge Base (KB); these are customized code written in the Rule Editor. Those sorts and filters can then be used to organize datasheets, especially continuous datasheets. Typically, sorting and filtering is done after users create the datasheet. Is there a way to make a certain sort and filter apply to the datasheet upon creation?
We need an XML file called FilterSort.xml. The attachment is an example file.
In the FilterSort.xml file, we define which sorts or filters should be applied to each datasheet template prior to creation.
The Template ID is the defined name of the datasheet. The FilterId can be found in the AZFilter definition file using the Rule Editor. The Sort ID can be found in the AZsort definition file using the Rule Editor. The default Filter and Sort definition files are located in the KBs folder (default location: C:\AspenZyqadServer\Basic Engineering19.1\WorkspaceLibraries\KBs\ExampleScripts)
We can add more lines for different datasheet templates and save the FilterSort.xml file in the Templates folder (default location: C:\AspenZyqadServer\Basic Engineering19.1\WorkspaceLibraries\Templates)
Important Note: The xml file will not work for existing datasheets when they are reopened. It only applies when a datasheet is newly created.
Keywords: Filter, Sort, Datasheet
References: None |
Problem Statement: In this KB, we simulate the controller operation offline using DMC3 Builder and perform any tuning changes as required. | Solution: The offline simulation was carried out using DMC3 Builder before deploying the controller online.
Figure 1: Offline Simulation – inputs
Figure 2: Offline simulation – outputs
The controller TTSS (time to steady state) is 60 mins, so the response of each CV needs to be less than or equal to 60 mins. To test the response, the offline simulation is used as follows.
Figure 3: Offline simulation plots 1
The simulation was stepped to find out when these CVs reach steady state. We started at 14:06 and continued stepping until SS was reached, as per the following figure.
Figure 4: Offline simulation plots 2
The simulation was stopped at 14:30, which means it took these CVs almost 25 mins to reach steady state; this is good as long as it is less than 60 mins.
Keywords: Aspen DMC3 Controller , dynamics, simulation
References: None |
Problem Statement: Aspen SQLplus web Reporting allows reporting on plant data historized in the Aspen InfoPlus.21 database. These reports are created and managed using the Aspen SQLplus web user interface and can be run on a scheduled or event basis. In addition, it is possible to automatically e-mail and print the reports.
This solution discusses how to send automated e-mails using web-based Aspen SQLplus web Reporting. | Solution: To set up automated e-mail from the Aspen SQLplus Reporter, configuration changes must be made for both ADSA and Aspen SQLplus Reporting.
1. On the ADSA Directory Server, go to Start | Programs | Aspentech | Common Utilities | ADSA Client Configuration Tool.
2. Select the Public Data Source button.
3. Select an existing data source that is configured for Aspen IP.21 Process Browser (formerly Web.21) or add a data source with the following components (at minimum).
a. Aspen DA for IP.21
b. Aspen Process Data (IP.21)
c. Aspen Process Subscription (Generic)
d. Aspen SQLplus service component
Confirm that, when necessary, components contain the correct port number and InfoPlus.21 server name.
4. If an existing ADSA data source is used or if a new data source is configured, add the following component to the list.
Aspen Simple Mail Transfer Protocol (SMTP) Configuration
5. Double click on Aspen Simple Mail Transfer Protocol (SMTP) Configuration to configure it. There are two boxes.
Host Name - This is the name of the appropriate e-mail server. You must specify the name of an e-mail server that allows you to use SMTP protocol.
Permitted e-mail addresses - Adding e-mail addresses to this box restricts the allowable addresses in the dialog box within the Aspen SQLplus Reporting tool (see step 8, below). If you add an e-mail address in the Aspen SQLplus Reporting tool and that address does not match any e-mail address listed here, then you will get an error message: "Invalid Name".
6. Click OK and close the ADSA Client Configuration Tool.
7. Open Microsoft Internet Explorer and type in the address for the SQLplus report server (http://SERVER_NAME/sqlplus).
8. For an existing SQLplus report, open it and click the Automate button. Select the E-mail tab. Add an e-mail address for each intended recipient of the SQLplus report, ensuring this is the correct SMTP e-mail address for the recipient in the mail server (check for typos and e-mail address changes).
9. Click the OK button.
Note: Make sure that the Simple Mail Transfer Protocol (SMTP) service is enabled for your site by your Information Technology team.
(This article was originally published as solution 116385.)
Keywords: reports
web
internet
SQLplus
Web.21
ADSA
e-mail
e mail
References: None |
Problem Statement: Is it possible to use Aspen Simulation Workbook to model a polymer process? | Solution: The example is a simulation workbook that contains an Aspen Polymers (V8.8) simulation model of a gas-phase polyethylene process. The plant (and the model) can produce grades of high density (HDPE) and linear low-density (LLDPE) - the product depends on the feed recipe (LLDPE includes butene, HDPE does not).
Various operating conditions and recipe parameters are exposed for the user. This example is intended to show how ASW can be used to make a model easy to use for operations support purposes. In this case, an engineer can use the model to evaluate conditions required to make different grades of polymer (each grade would have a specific melt flow index, density, and polydispersity).
Keywords: None
References: None |
Problem Statement: How do I cause alarm violation markers to appear on aspenONE Process Explorer trend charts? | Solution: Records defined by IP_AnalogDef, IP_AnalogDBLDef, and IP_DiscreteDef have a history repeat area named IP_#_OF_ALARM_VALUES that records when the tag goes into alarm. To historize alarm violations for a tag, set the field IP_#_OF_ALARM_VALUES to a number greater than 0, enter a repository name into the field IP_ALARM_REPOSITORY, and change the field IP_ALARM_ARCHIVING to ON.
Once alarm violations have been recorded, aspenONE Process Explorer will mark the excursions on trend charts.
Keywords:
References: None |
Problem Statement: How do I plot data from a text file into Process Explorer? | Solution: Create an ADSA Data Source with Aspen Process Data (File) service.
You can do this from within PE by going to Tools => Options and click on the ADSA tab. Or you can do it from Start => Programs => AspenTech => Common Utilities => ADSA Client Config Tool => User Data Source or Public Data Source
Double click on Aspen Process Data (File) service to get to the Aspen Process Data (File) Properties => in the File Name field, browse to the location of the text file on your hard drive and then check the Run Endless Loop option => click OK all the way out to save the ADSA Data Source.
In Process Explorer => enter a tagname which exists in the text file's Name column and select the ADSA Data Source that you just created with Aspen Process Data (File) service.
NOTES:
1) The database file must consist of a header line followed by one or more lines of time-stamped tag data.
The header line must contain tab-separated column names. See the example below:
DATE<tab>TIME<tab>TAGNAME1<tab> TAGNAME2<tab> TAGNAME3<tab>...
The remaining lines in the database file must contain tab-separated date, time, and data values for the specified tags. The format required is shown below:
MM/DD/YY<tab>HH:MM:SS<tab>NNN.NNN<tab>NNN.NNN<tab>NNN.NNN<tab>...
2) Ensure that the Process Explorer client is pointing at the correct timespan, and eliminate excess spaces, tabs, or carriage returns from the end of the text file.
3) Timestamps passed to and from all data servers (File included) are assumed to be in UTC time. Thus timestamps are "converted" to UTC time before sending a history request and "converted" back to local time when plotted in APEx.
If data (value and timestamps) plotted in Process Explorer does not match that in the file, see the patch below.
4) The Aspen Process Data (File) option may be removed from future versions of Aspen Manufacturing software. The recommended choice, instead of the Aspen Process Data (File), is Aspen Process Data (RDBMS). However the component is still included in the V7.2 release, due in 2010, and there are no active plans to retire it at this time.
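Sample file (referenced in note 1 above; the tag names and values are purely illustrative):
DATE<tab>TIME<tab>FILETAG1<tab>FILETAG2
01/15/09<tab>08:00:00<tab>100.500<tab>12.750
01/15/09<tab>08:01:00<tab>101.250<tab>12.800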
KeyWords
text file
process data
csv
ADSA
Keywords: None
References: None |
Problem Statement: Attached to this knowledge base article is a query that deletes all blank occurrences from an Aspen InfoPlus.21 Cim-IO transfer record. A transfer record is a record defined by IOGetDef, IOLongTagGetDef, IOLLTagGetDef, IOUnsolDef, IOLongTagUnsDef, IOLLTagUnsDef, IOGetHistDef, IOPutDef, IOLongTagPutDef, IOLLTagPutDef, IOPutOnCOSDef, IOLongTagPOCDef, or IOLLTagPOCDef. | Solution: The query DeleteBlankOccsFromATransferRecord attached to this article deletes all occurrences where IO_TAGNAME is blank from a Cim-IO transfer record.
The query first prompts for the name of an Aspen InfoPlus.21 Cim-IO transfer record. Then, the query turns IO_RECORD_PROCESSING OFF if it is ON. Next DeleteBlankOccsFromATransferRecord deletes all occurrences in the transfer record where IO_TAGNAME is blank starting from the largest occurrence and working backwards to the smallest one. Finally, the query turns IO_RECORD_PROCESSING ON if it was on before.
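The essential effect can be sketched in Aspen SQLplus along the following lines (a simplified illustration only, using a hypothetical transfer record named MYGET; blank fields read as NULL in SQLplus, and the attached query is more careful because it prompts for the record name, remembers the original processing state, and deletes from the largest occurrence backwards):
UPDATE MYGET SET IO_RECORD_PROCESSING = 'OFF';
DELETE FROM MYGET WHERE IO_TAGNAME IS NULL;
UPDATE MYGET SET IO_RECORD_PROCESSING = 'ON';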
Keywords: Query
transfer record
delete occurrence
blank occurrence
IOGetDef
IOLongTagGetDef
IOLLTagGetDef
IOUnsolDef
IOLongTagUnsDef
IOLLTagUnsDef
IOGetHistDef
IOPutDef
IOLongTagPutDef
IOLLTagPutDef
IOPutOnCOSDef
IOLongTagPOCDef
IOLLTagPOCDef
References: None |
Problem Statement: Platinum comes in two versions, Solo and Server. Platinum Solo is started from traditional PIMS as of V8.2; Platinum Server, however, can run from any machine. In this KB, we will discuss how to configure the environment for the Case Runner. | Solution: There are several configuration steps you need to complete before you start the Platinum Server. The following steps apply to both PIMS Platinum Solo and Server if you want to use the Case Runner.
1. Make sure the two PIMS Case Runner services are running from the Services window
Aspen PIMS Case Runner Services: Log on as ‘Local System’ account
Aspen PIMS Case Runner Web Services: Log on as ‘Network Service’ account
2. Set OS type
Check the Case Runner Service configuration, as shown in the screen below.
Use the command line to switch the OS type to ‘Server’.
Note: if you want to run Platinum Solo, you have to switch back by replacing the word 'Server' with 'Desktop'.
3. Data Source connection. The following example shows a database for Access.
From PIMS General Settings, point the output database to C:\ProgramData\AspenTech\PIMSPlatinum\DataSource
In the PIMS Execution window, make sure this database is ‘Shared’.
4. Set the model path to the correct location; use the pencil icon on the right to configure it.
5. Set Browser Security
Once you start Platinum Server, the browser will identify the login user to make sure this user has privileges. To do this, set the security in the browser from ‘Internet Options’. This change is only allowed when you log in as a Local Administrator.
If you have completed the entire procedure above and you still do not see the pencil icon for the Case Runner, try rebooting your machine.
Keywords: Case Runner
security
Administrator
Server
References: None |
Problem Statement: How can we control the range of a recursed property in an Aspen PIMS model that uses Distributive Recursion? | Solution: A standard Aspen PIMS model using Distributive Recursion determines minimum and maximum values for all properties that are involved in recursion. This step occurs during validation or matrix generation and is accomplished by examining all input tables that contain property data. Such tables include BLNPROP, ASSAYS, BLNSPEC, PGUESS, and submodel tables.
Property ranges are important because the Distributive Recursion property update algorithm will not accept a recursed property value less than the global minimum or greater than the global maximum value for that property.
Most users of Aspen PIMS rely on one of two approaches to control the ranges of recursed properties. Many users add MIN and MAX rows to table BLNPROP to contain the desired minimum and maximum property values. Other users specify the desired minimum and maximum values via the MIN and MAX columns in Table SCALE.
This solution will demonstrate that Table SCALE is the correct way to directly input the minimum and maximum property values.
An Example
Our demonstration is based on the standard VOLSAMP sample model. The property of interest in this example is RON. We start with a model that has no special RON limits in either Table SCALE or Table BLNPROP. The figure below shows the Recursed Property Range Report section of the Model Validation Report.
Please note that the RON property has a minimum value of 59.2489 and a maximum value of 102.0.
We will attempt to present a wider range for this property by using Table BLNPROP. We are trying to set the minimum RON value to 40.0 and the maximum to 105.0.
If we generate the Model Validation Report using this version of Table BLNPROP, the Recursed Property Range Report appears as in the figure below.
Please note that the minimum and maximum values of RON are the values presented in the MIN and MAX rows of Table BLNPROP, which is confirmed by the presence of BLNXXX under the columns labeled Defining Tables.
Next, we will keep the previous values in rows MIN and MAX of Table BLNPROP, but will attempt to present a narrower range for RON in Table SCALE. The figure below shows Table SCALE. We are using the MIN column to set the minimum RON value to 45.0, and the MAX column to set the maximum RON value to 103.5.
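In spreadsheet form, the relevant part of Table SCALE looks roughly like this (an indicative sketch of the referenced figure, showing only the RON row):
TABLE    SCALE
*                 MIN      MAX
RON               45.0     103.5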
After generating the Model Validation Report using this version of Table SCALE and the previous version of Table BLNPROP, the Recursed Property Range Report appears as in the figure below.
Note that the minimum and maximum values of RON are taken from Table SCALE.
This solution has demonstrated that Table SCALE is the correct way to directly set the extreme values of a recursed property. Using rows labeled MIN and MAX in Table BLNPROP is like making a suggestion: if Aspen PIMS can find no values outside of that suggested range elsewhere and if the suggested range is not overridden in Table SCALE, then these values will set the range. Row names MIN and MAX have no special significance in Table BLNPROP. Using the MIN and MAX columns of Table SCALE, on the other hand, will override any other property data when determining the property range.
Keywords: Property
range
MIN
MAX
SCALE
References: None |
Problem Statement: This Knowledge Base article provides a general overview of Aspen Golden Batch Profiler. | Solution: Batch Profiling is a set of tools that allows a user to optimize the quality of batch processes, particularly for the chemical and pharmaceutical industries. It uses Reference Profiles that can be created from a single 'Golden Batch' or from a small group of 'Golden Batches' representing optimal operating conditions. Reference Profiles could also be the result of a theoretical calculation.
Reference Profiles can be created either from the Profile Management tool in Aspen Process Explorer or from the Aspen Production Record Manager Administrator. Profiles can also be generated outside Aspen Production Record Manager and imported. They contain Target values and Upper and Lower limits.
The quality of the current production process can then be monitored against a Reference Profile, or 'Golden Batch', which ensures consistent quality over successive batches.
The following general steps are needed to create a Reference Profile and apply the Profile using the Profile Monitor. For details, please refer to the Aspen Golden Batch Profiling Help Files attached to solution # 132005. This topic is also covered in detail in the APRM Advanced training class (PME-271).
1. Open a new Profile Management Plot in Aspen Process Explorer and select 1 or several batches that will be used to form the Profile.
2. Define a process tag that will be used by the Profile.
3. Calculate an 'Envelope' that contains the Target value and the Upper and Lower Limits.
4. Save the Profile to the APRM (Batch) Database.
5. View the Profile using the Profile Monitoring tool in Aspen Process Explorer.
Profiling can be done at two different levels; it can be applied to an individual batch area or to the batch system as a whole.
This solution shows how to apply a profile to a batch area.
As instructed in step 1 above, open a new Profile Management Plot in Aspen Process Explorer and populate the Batch Legend with your 'Golden Batches'. The batches that you have inserted here will be used to create the new Batch Profile, so typically these will be batches that meet specific production criteria. In other words, these would be your Golden Batches. Note that when the batches are first added to the Batch Legend they are grayed out, as they don't have a profile associated with them.
The next step is to create a new Profile. To do so, right-click in the Profile Legend area, select Insert from the context menu and type in a suitable name for your profile. Decide if you want to apply your Profile to a single batch area or to all batch areas, and whether you want to create a Time Based or Process Value Based profile.
To filter batches based on certain Characteristic criteria a Profile Context can be used. To apply a Context to a Profile, you have to select a Designator characteristic from the Aspen Production Record Manager Administrator tool.
Next, you need to associate a tag with the Profile. From the Profile Properties window, select the Tag tab and enter the details for a tag that the profile will use for monitoring purposes.
At this point in the configuration process, it is necessary to specify the Alignment for this profile. The Alignment defines which (sub)batch Start and End Time characteristics will be associated with the start and end of this profile. The profile will be generated whenever the specified Alignment characteristic gets written to the Aspen Production Record Manager database.
Now you will need to compute the Profile Envelope. After clicking the Compute Profile button on the Envelope tab, the Compute Profile window appears showing the time ranges of the Batches that were added to the Batch Legend in a previous step. Click Compute Profile... to compute profile envelope.
Once a tag has been applied to the Profile and the other properties such as Profile Alignment and Envelope have been assigned, the batches that meet the requirements (and match the context) of the Profile will be automatically enabled in the Batch Legend Area.
When setup has been completed, right click in the Profile Legend area and select Save (your Profile) to Database.
Finally, you can apply a Context to a Profile in the Aspen Production Record Manager Administrator tool by right clicking on the Profiling node and selecting Properties from the menu. Applying or changing the Context to a Profile will allow you to filter batches based on certain Characteristic criteria. To apply a Context to a Profile, you have to select a Designator characteristic from the Aspen Production Record Manager Administrator tool.
Note that the Profiling must be disabled when applying a Context.
Now, in the APRM Administrator enable both Profiling and the new Profile you've just created.
To view the Profile, open a Profile Monitoring Plot in Aspen Process Explorer and insert the tag specified for the particular profile you are going to be viewing. Process Explorer automatically picks up the correct Profile record and the plot will display the associated Profile as soon as the Alignment characteristic is recorded in the Batch database.
The main purpose of a Profile Monitoring plot is to monitor a tag associated with a profile tag that contains the history of the reference profiles that have occurred and the envelope data history for those profiles.
The Profile Monitoring plot sees whether the tag that you entered has an associated profile tag, and if so, plots the reference profile envelope data that it pulls out of the profile tag.
The plot will display the full duration of the reference profile envelope, and track the data against the reference profile envelope.
To see which profiles are being displayed, right-click in the plot area and select Profiles...
Keywords:
References: None
Problem Statement: What is the procedure to obtain a stage profile of a property within the RadFrac block? | Solution: To report a property on column stages, you will need to first define a property set within the Prop-Sets folder and then reference it in the RadFrac block. The procedure is outlined below:
1. Navigate to the Properties Prop-Sets folder within the Data browser. Click the New... button and provide a name, i.e., PS-1.
2. From the Physical properties drop-down list, choose the property you want to report in the column profiles. For easier searching, click on the Search button at the bottom of this form. (If Units are not selected, Aspen Plus will report the property in the default system units.)
Define qualifiers, such as phase or component basis, on the Qualifiers tab.
3. Within the RadFrac block, navigate to the Report form.
On the Properties sheet, add the Prop-set defined above to the Selected property sets area.
On the Profile Options sheet, you can limit the profile to a specific section of the column. For example, to report the selected property on all stages, choose All stages.
4. Run the simulation and review these property results on the Properties sheet of the Profiles form within the RadFrac block. (Below, results are shown for a Liquid Viscosity profile.)
Hint: You can plot these properties along the profile of the column by assigning selected columns to the X- and Y-axis options on the Plot menu. (A sample plot for a Liquid Viscosity profile is shown below.)
Keywords: Column, RadFrac, profiles, Prop-Sets, properties, plot
References: None |
Problem Statement: How to simulate a Thermosyphon Reboiler in RadFrac using Aspen Shell & Tube Exchanger (EDR). | Solution: Starting with version V7.0 it is possible to rigorously simulate a Thermosyphon Reboiler in RadFrac using Aspen Shell & Tube Exchanger (EDR). The following guidelines illustrate the procedure for simulating a Thermosyphon Reboiler in RadFrac using Aspen Shell & Tube Exchanger program (formerly called Aspen Tasc+).
This is a two-step process. First, set up an Aspen Shell & Tube Exchanger *.EDR file with the necessary information. Then, in Aspen Plus, set up the RadFrac column and the Thermosyphon settings.
Setting up Aspen Shell & Tube *.EDR file:
1. Create a new Aspen Shell & Tube exchanger *.EDR file.
2. Next go to Input | Problem Definition | Application options, select the calculation Mode as Simulation and select the cold side application as Vaporization and Vaporizer type as Thermosyphon and thermosyphon circuit calculation as Fixed flow if flow is known otherwise set to Find flow.
3. Next go to Input | Exchanger Geometry | Geometry Summary and enter the exchanger TEMA type and relevant geometry information.
4. Finally go to Input | Exchanger Geometry | Thermosyphon Piping | enter the Thermosyphon piping information, Inlet piping elements and Outlet piping elements information.
5. Save this file.
Note: This file is technically incomplete, i.e., it will initially contain only the geometry data; the process and property data will be filled in once integrated with Aspen Plus.
Setting up RadFrac in Aspen Plus:
6. Open the associated Aspen Plus file and go to Blocks | RadFrac | Setup | Configuration tab and select the reboiler as Thermosyphon and enter all necessary information on this form.
7. Now go to RadFrac | Setup | Thermosyphon Configuration tab | select the thermosyphon configuration that matches your column setup.
8. Then go to Reboiler tab and specify the thermosyphon initial guess values either flow or outlet condition or both.
9. Click the Reboiler Wizard Button and specify the Block ID for thermosyphon HeatX block, Select type as Shell & Tube, mode as Simulation, Circulation type as Fixed.
10. Then browse to the *.EDR file saved in Step 5 and reference the file.
11. Specify the Flash2 block ID
12. Check the Move to Hierarchy Block box to show the whole setup as a single unit; otherwise leave it unchecked. Click OK.
13. Then in the Data Browser go to Blocks | HeatX ID in Step 9 | Setup and select the Hot side (shell side or tube side) and make sure Simulation type and Thermosyphon type are selected.
14. Go to Pressure Drop tab and select the Calculated from Geometry radio button for both cold side and hot side.
15. Click the Next button or in the Data Browser go to Streams | Hot Stream ID for HeatX (Hot feed stream to thermosyphon reboiler, usually steam will be used) and input the hot stream composition, Temperature/ Pressure/Vapor Fraction and Flow.
Now Run the Simulation so that the outlet conditions are determined based on rigorous thermosyphon Reboiler.
Note: In the attached example file (created in version V7.1) the RadFrac block is modeled using a BXM-type Thermosyphon Reboiler. Before running the files on your machine, please change the path of the *.EDR file associated with the thermosyphon reboiler to reflect the location of the file on your machine. This can be done on the EDR Options form of the HeatX block.
Keywords: RadFrac, Reboiler, Thermosyphon, EDR, Aspen Shell & Tube Exchanger,
References: None |
Problem Statement: A typical gas plant in midstream and refining industries usually consists of a number of integrated processes, requiring global optimization solutions to operate at maximum efficiency. Processes that require optimization include, but are not limited to:
• Acid gas treating
• Sulfur recovery
• Tail gas treating
• Flare systems
• Dehydration
• Nitrogen & helium removal
• Fractionating
• LNG compression
• LNG gasification
• and many more
In addition to integrated process modeling, there is a need to optimize the entire plant in safety, mechanical, cost estimation, and pinch analysis disciplines as well. | Solution: The aspenONE Engineering solution for gas plant optimization is the most complete solution in industry to accurately simulate the entire gas plant in one environment, saving you time and money. aspenONE Engineering solutions include:
• HYSYS Acid Gas Cleaning – Rigorous rate-based simulation of amine treating processes
• Sulsim Sulfur Recovery in Aspen HYSYS – Industry-leading technology for simulating sulfur recovery
• Glycol Package – Simulate TEG dehydration using Twu-Sim-Tassone EOS and unique interaction parameters
• HYSYS Properties – Most accurate solution in industry and trusted for decades in gas plant processes such as fractionation, mercury partitioning, methanol partitioning, hydrate formation, nitrogen rejection, LNG, and more
• Exchanger Design & Rating – Leverage heat exchanger rigorous design in NGL recovery and LNG facilities, including Aspen Shell and Tube, and Aspen Plate Fin
• Safety Environment – Including solutions using Flare System Analyzer, BLOWDOWN Technology, and more
• Energy Analyzer – Conduct pinch analysis and identify potential energy savings in the gas plant, directly in Aspen HYSYS
• Aspen Capital Cost Estimator – Reduces the time for decisions by 20-30% and delivers estimates within 5-10% of actuals
• Aspen Simulation Workbook – Deploy models to operations in a friendly Excel-based solution
• Pipeline Hydraulics – Optimize flow networks and prevent equipment damage, including prediction for pigging
• Aspen HYSYS Dynamics – Troubleshoot and prevent operational problems with compressor surge analysis
• Column Analysis – Understand the impact of column internals on hydraulics performance using powerful visuals in both design and rating
To learn how to leverage the power of gas plant optimization in Aspen HYSYS to add value in design and operations, and to apply these learnings to your plant, the following application examples have been made available:
• Leverage HYSYS Acid Gas Cleaning and Column Analysis to optimize the performance of an amine-based acid gas treating process
• Adjust operations to process a sour feed while still meeting product specs and emissions requirements in an integrated petroleum refining & gas plant application example
• Utilize the Aspen HYSYS Glycol Package to model a TEG dehydration system for natural gas, with the goal of optimizing operating costs to meet key sales gas product specs and column performance metrics
Keywords: HYSYS, Acid Gas Cleaning, Column Hydraulics, Column Analysis, rate-based modeling, absorber, regenerator, amines, amine blend, MDEA, efficiency mode, advanced modeling mode, heat stable salts, amine treating, acid gas treating, gas plant, gas processing, tail gas treating, hydraulics
References: None |
Problem Statement: Safety Analysis environment crashes when any report is generated. | Solution: To generate any report in the Safety Analysis environment, Microsoft Report Viewer needs to be installed.
V8.6 is compatible ONLY with Microsoft Report Viewer 2010. V8.8 and V9.0 were updated to not be version specific.
Keywords: crash, report, Safety Analysis
References: CQ00716835
Problem Statement: In Aspen Petroleum Scheduler, after upgrading a model to V8.x using the Dbupdate tool, you face the error below: | Solution: Please follow the steps below to resolve this problem:
1. Log in to APS Database via SQL Enterprise Manager
2. Open the table “DatabaseVersion”.
3. Remove the minus (-) symbol from the value in the DBVERSION column.
4. Save the changes made.
Please refer to the screen shot (with the correction).
After that close the model and then reopen the model via APS.
The problem should be resolved.
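Alternatively, the same correction can be applied with a single T-SQL statement (a sketch only, assuming DBVERSION is stored as a numeric column; back up the database first):
UPDATE DatabaseVersion SET DBVERSION = ABS(DBVERSION) WHERE DBVERSION < 0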
KeyWords
Aspen Petroleum Scheduler, Dbupdate, Database
Keywords: None
References: None |
Problem Statement: AspenOne V9.0 Engineering Installation Guide, Japanese edition | Solution: Please download the attached file.
KeyWords
SLM, License Server, Installation, Japanese, 日本語
Keywords: None
References: None |
Problem Statement: AspenOne V9.0 SLM License Server Installation Guide, Japanese edition | Solution: Please download the attached file.
KeyWords
SLM, License Server, Installation, Japanese, 日本語
Keywords: None
References: None |
Problem Statement: Configuration and use of Aspen Event.21 changed significantly in version 6.0. As of version 6.0, the pd_server.exe program has been replaced with a server-side configuration. | Solution: Configuration of the Event.21 server database is carried out via the Event.21 Configuration tool (\Program Files\AspenTech\InfoPlus.21\c21\e21\config\e21config.exe). The tool is also accessible via the Start button (Start | Programs | Aspen Tech | Aspen Manufacturing Suite | Event.21 Configuration). Use the Event.21 Configuration tool to create and configure the Event.21 database.
The Event.21 Configuration tool allows users to define the database connection, areas, keys, labels, conditions, and networking specifications. The data provider specification in the Database Connection tab must correspond to the type and version of the relational database being used.
Use e21_rec_event to record events. It can be used in MS-DOS scripts or through SQLplus commands. The standard InfoPlus.21 event is any occurrence that is significant to the history of a process. The specifics of an event are defined by parameters specified in the SQLplus or MS-DOS scripts.
Syntax for the e21_rec_event command is defined as follows.
e21_rec_event [area=area] [key=key_value] [tag=tagname] [type= type] [severity=severity] [condition=condition] [value=value] [status=status] [username=username] [text=text] [date=date] [time=time] [comment=comment]
NOTE: When using e21_rec_events in version 6.0 and above, it is necessary to use the e21_rec_events executable found in the \Program Files\AspenTech\InfoPlus.21\c21\e21\server directory.
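For example, a script might record a hypothetical high-temperature event as follows (every parameter value shown is illustrative only):
e21_rec_event area=REACTOR1 tag=TIC101 type=PROCESS severity=2 condition=HIGH value=250.5 username=JSMITH text="Reactor temperature high"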
To access an Event.21 server (version 6.0 and above) from a Process Explorer event plot, you must modify existing event ADSA data sources or create new ones that contain the following service.
Aspen Event.21 Data (.Net Events)
The TCP and HTTP ports in the ADSA service should correspond to port numbers specified in the Networking tab of the Event.21 Configuration tool.
For additional information, see pages 24-28 of the Aspen Process Explorer 6.0 Release Notes.
KeyWords
event
database
Keywords: None
References: None |
Problem Statement: A status of BAD is received the first time a tag value is retrieved using the TEST-API. All subsequent attempts to read values for the same tag result in a status of Good. | Solution: This has been resolved in some instances by disabling the "Perform Initial Synchronous Cache Read" feature of the Cim-IO for OPC application.
To access the application:
Open Windows Explorer
Browse to the following directory \Program Files\AspenTech\CIM-IO\io\cio_opc_api
Open the application OPCProperties.exe
Disable the option by clearing the checkbox
Keywords: 77106;
77117;
Facility 77106;
Out of Service;
Reason unknown;
OPCProperties.exe;
CIM-IO for OPC;
References: None |
Problem Statement: How do I manage the start-up and shutdown of CIM-IO for OPC interfaces? | Solution: There are three distinct mechanisms to manage the start-up and shutdown of the CIM-IO for OPC interface:
1. The user manually configures the CIM-IO device in the client/server computers with only a text editor.
In this case the user should edit:
· The services file in both computers to enter all the device's services (including S&F ones) with matching TCP ports
· The cimio_logical_devices.def file in both computers to enter the device names and nodes associated with the device being defined. If the device is part of a redundant configuration, this file on the client computer also contains, in addition to the regular device name entry, entries with the names of the _PRI and _SEC devices and node names.
· The cimio_autostart.bat and cimio_autostop.bat files in the server computer must be edited to include the command that would start/stop the vendor's (OPC as well) server software, the command that starts/stops Aspen's CIM-IO for OPC dlgp, and the command that starts/stops the S&F processes.
When the CIM-IO Manager service is started or stopped, it will first invoke the autostart.bat and autostop.bat command files to execute every one of the commands, resulting in the start-up/shutdown of the interfaces included there. In this case the site must refrain from using anything other than the CIM-IO Manager service to start/stop the device(s) - nothing else!
2. The user exclusively configures the CIM-IO device using the built-in I/O Configuration Wizard tool in the InfoPlus.21 Administrator
The user only uses the I/O Configuration Wizard tool in the InfoPlus.21 Administrator to configure the device in the IP.21 Server and in all the remote servers where the CIM-IO device will run, including redundant computers. The tool takes care of all the details of matching TCP ports for all the required services and the names of redundant devices, S&F Processes, etc.
The only thing the user must do in every server is to add the commands that start/stop the vendor's server software into the cimio_autostart and cimio_autostop command files.
The CIM-IO Manager service is uniquely responsible for the startup/shutdown of the vendor's servers mentioned in the cimio_autostart and cimio_autostop files and for the startup/shutdown of Aspen's CIM-IO server (CIM-IO for OPC, for example) and S&F processes using the device's .csd file that is stored by the InfoPlus.21 Administrator in the CIM-IO management folder. No other mechanism should be used. Note that this file includes the name of the local/remote server for the CIM-IO for OPC. If the clause -nodename is omitted, it will default to the localhost as the computer where the CIM-IO for OPC will run.
As a final note, it is important to understand that for this option to work correctly, the CIM-IO servers must be ready before running the I/O Configuration Wizard. That is, the CIM-IO software and the vendor's software must have been installed, and the CIM-IO Manager service must be up and running, as this service is used by the InfoPlus.21 Administration tool to create/modify all the necessary files.
3. The user configures the CIM-IO device in the client computer with a text editor, and the CIM-IO for OPC Properties utility in the server computer
In the client computer the user uses a text editor to configure:
· The services file to enter all the device's services (including S&F ones) with matching TCP ports
· The cimio_logical_devices.def file to enter the device names and nodes associated with the device being defined, including the names of the _PRI and _SEC devices and node names if the device is part of a redundant configuration.
In the server computer(s) the user uses a text editor to configure:
· The cimio_autostart.bat and cimio_autostop.bat files in the server computer must be edited to include the command that would start/stop the vendor's (OPC as well) server software and the start/stop of all the S&F processes.
· The CIM-IO for OPC Properties tool is then used to "Configure OPC Servers" that is, to make known to the CIM-IO for OPC Manager service (a service different than the CIM-IO Manager service) the properties of the process AsyncDlgp that will be started/stopped by this service when the service is started or stopped. Every entry in the form to add a new OPC Server will modify or override entries for the device with the same name in the services and logical_device_names.def files. Care should be exercised and the user is required to always double check these files to make sure that they have not been inadvertently modified by simply executing the utility. The utility allows the entry of the node name where the CIM-IO for OPC server will run, which if left blank will default to the localhost. All the Server attributes entered with this utility are kept in the registry hive HKLM\Software\AspenTech\CIM-IO to OPC Interface\AutoStart (or HKLM\Software\Wow6432Node\AspenTech\CIM-IO to OPC Interface\AutoStart on 64bit systems). There will be an entry for each device configured with this utility.
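To review what the utility has configured, you can inspect that registry hive with the standard Windows reg command, for example (the 64-bit path is shown):
reg query "HKLM\Software\Wow6432Node\AspenTech\CIM-IO to OPC Interface\AutoStart" /s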
Now the problem arises when this procedure is used in conjunction with either of the other two. The user may cause conflicts that could make the CIM-IO software behave erratically or even crash.
When the CIM-IO Manager service is started or stopped, it will first invoke the autostart.bat and autostop.bat command files to execute every one of the commands in there, resulting in the start-up/shutdown of the interfaces included there. Then it will process any .csd files encountered in the CIM-IO management folder.
On the other hand, if the CIM-IO for OPC Manager service is started and it has devices configured in the registry, it will proceed to start up the corresponding AsyncDlgp processes without any qualms as to whether they may be already started by the CIM-IO Manager.
This tool, though handy and simple, could cause a significant number of problems. For example, the tool does not allow you to edit a server already defined; it must be deleted, and this in itself could cause trouble. The tool does not allow you to add command line arguments, which may be required in some special circumstances, to the command line that starts the AsyncDlgp process. If configuring a redundant device, the user must exercise extreme caution and ensure the supporting files on all computers involved are reviewed.
Keywords: CIM-IO for OPC
AsyncDlgp
cimio-autostart
References: None |
Problem Statement: The command line utilities for Event.21 have been updated. The updated versions are now in the /server directory under Event.21, not /bin. Using the command line utilities in the /bin directory will return errors like:
ERROR: Cannot connect to server!
ERROR: Status: 2 | Solution: The solution is to path your program to the updated executables in the /server directory.
Keywords:
References: None |
Problem Statement: How do I input a set pressure value less than MAWP? | Solution: Aspen Flare System Analyzer (AFSA) only lets you set the Relieving Pressure and not Set Pressure (Relieving Pressure = Set Pressure + Overpressure).
Consider a fire case where overpressure is 21% of Set pressure. So Relieving Pressure = 121% of Set Pressure (Relieving Pressure = Set Pressure + Overpressure).
AFSA is mainly a tool for sizing the flare network downstream of the relieving device.
Normally for sizing the flare network, the most conservative situation corresponds to the Maximum relieving pressure for Fire sizing, i.e. when Set Pressure = MAWP.
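For example (hypothetical numbers): with Set Pressure = MAWP = 100 psig and a fire-case overpressure of 21%, the Relieving Pressure entered in AFSA would be 100 x 1.21 = 121 psig.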
For Design, you should use auto calculate for Relieving Pressure. This is the value AFSA will use.
For Rating, with a given Relieving Pressure, since MAWP is not used anywhere in calculation directly, you can just set Relieving Pressure to whatever you want and then set the MAWP less than that.
Keywords: MAWP, Set Pressure, Relieving Pressure
References: None |
Problem Statement: Is there any pan option for the Process Flowsheet in Aspen Flare System Analyzer? | Solution: The Pan option is now available in V9: Click pan to switch to the Pan mode.
In Pan mode, you can scroll through the Process Flowsheet diagonally:
- Process Flowsheet tab | Modify section | Pan
Keywords: pan, flowsheet
References: None |
Problem Statement: How do I display the results from my simulation in the PFD using Aspen Flare System Analyzer? | Solution: To display the results in the PFD, simply locate and click on the Process Flowsheet tab on the ribbon.
In this section the user will find a drop down list that is most likely set to “None” if no information is being displayed in the PFD.
Clicking on this will show a list of the available variables to display in the PFD. Simply select the variable that should be displayed in the Flowsheet.
Note: In version V8.8 and previous versions, multiple options cannot be selected at once; displaying multiple variables is restricted to the pre-defined combinations shown in this same list. This is no longer a limitation in Aspen Flare System Analyzer V9.0. For more information refer to solution 146674.
The results will be displayed as shown below:
Keywords: PFD, Results, Flowsheet, Show
References: None |
Problem Statement: How to import Blowdown Analysis information from Aspen HYSYS to Aspen Flare System Analyzer in V9 | Solution: The Blowdown Analysis in Aspen HYSYS V9 provides Peak flow conditions that can be easily transferred to Aspen Flare System Analyzer ( AFSA).
You can use the Import Sources feature in AFSA to extract this information.
1. Go to File | Import Sources | HYSYS BLOWDOWN / Depressuring Sources.... or use the Import All Sources tab located in the Import/Export section under Home tab to select the option HYSYS BLOWDOWN / Depressuring Sources....
or
2. In Import BLOWDOWN / HYSYS Depressuring Utility Sources View, browse for the file that contains your BLOWDOWN Analysis and click on Open.
3. The information that will be transferred is shown below. Select the type of valve in which you want to locate your data and the Source's name. Finally click OK.
The Result Summary | Orifice Results in Aspen HYSYS contains the temperature, pressure and mass flow you are transferring to AFSA.
You will see these details in your valve as well as the composition.
Keywords: Blowdown, Import, HYSYS, Source, Peak Flow
References: None |
Problem Statement: In pipeline gathering networks, slug formation can present issues in the operation of pipelines and downstream equipment. To increase pipeline efficiency and prevent damage to equipment, action needs to be taken to prevent slug formation. | Solution: Aspen HYSYS Upstream provides a solution for modeling pipeline networks in steady-state and dynamics mode that can be used to predict the evolution of flow in a pipeline, including terrain-induced slugging.
This white paper will focus on terrain-induced slugging, the challenges in predicting this type of slugging, the solutions available for addressing these challenges and the benefits of using this solution. An accompanying demo file is also available.
Keywords: Slugging, Pipeline Hydraulics, Multiphase Flow, Flow Assurance
References: None |
Problem Statement: As the process contact engineer at your refinery, you are working with the refinery planner to evaluate whether or not the site can run a cheaper (-$10/BBL) heavy sour crude in the crude slate at a fraction of 10% of total feed, while still meeting key specifications on product sulfur, product quality, emissions, etc.
Based on analysis of the initial site constraints, the unit engineer expects:
• Heavy sour feed will decrease conversion given same reactor severity & temperature
– Expect distillate sulfur content to rise above 15 ppm specification
– Increase conversion via HCR temperature in first reactor (incl. guard bed) to meet sulfur spec
– Increased HCR severity will crack more feed, potentially decreasing yield of high-margin diesel
• Heavy sour feed will increase flow rate and H2S content to the acid gas plant
– Expect sales gas H2S content to rise above spec of 4 ppm
– Increase reboiler duty or increase solvent rate to absorber to achieve base case 2.9 ppm
• Sulfur recovery and tail gas plant operations will need to be re-optimized to achieve similar flare targets as base case
– Re-optimize air demand to furnace, catalytic converter temperatures, RGG fuel gas flow rate and incinerator fuel flow rate | Solution: Use the integrated HYSYS model to:
1. Solve a real-world problem using HYSYS by making operational adjustments to accommodate a sour (high sulfur) feed
2. Leverage an integrated HYSYS model for global optimization of refining and gas plant operations
3. Identify margin improvement of +$4M/yr by running 10% of the sour feed in the refinery crude slate
4. Evaluate a case that maximized diesel production by eliminating the kero side draw for +$14M/yr
Aspen HYSYS Petroleum Refining comes equipped with assay management, refinery reactor models, and petroleum properties. Aspen HYSYS Petroleum Refining enables engineering workflows, such as process unit simulation, multi-unit analysis, refinery-wide modeling, and PIMS & APS support.
Acid Gas Cleaning in Aspen HYSYS is a rate-based solution that accounts for both mass-transfer and kinetic effects in the absorber and regenerator columns. Acid Gas Cleaning property packages support a range of amines and amine blends, heavy hydrocarbons, mercaptans, acid gas components, and other key components. Reaction chemistries are automatically generated and “Efficiency” and “Advanced Modeling” modes are supported for increased accuracy or performance.
Sulsim Sulfur Recovery in Aspen HYSYS is the industry’s most accurate tool for modeling the modified-Claus process for sulfur removal. Sulsim Sulfur Recovery utilizes empirical models validated over hundreds of commercial configurations to accurately predict thermal, catalytic and tail gas stages using 33 unit operations, specific sulfur properties, and validated conversion models.
Keywords: HYSYS, Acid Gas Cleaning, Column Hydraulics, Column Analysis, rate-based modeling, absorber, regenerator, amines, amine blend, MDEA, efficiency mode, advanced modeling mode, heat stable salts, amine treating, acid gas treating, gas plant, gas processing, tail gas treating, hydraulics, flooding, weeping, base case, max rating, turndown, example, reboiler duty, solvent, raschig, packing, trays, column, adjust, sales gas, sweet gas, rich amine, lean amine, H2S, CO2, HHV, Sulsim, Sulfur Recovery, Sulphur Recovery, Sulfur, Sulphur, COS, CS2, SO2, furnace, WHE, catalytic converter, tail gas, Hydrogenation Bed, HBED, air demand, incinerator, flare, modified-Claus, Claus, SRU, Petroleum Refining, Hydrocracker, HCR, HDC, SOR, EOR, catalyst, conversion, diesel, naphtha, distillate, kero, kerosene, bottoms, sales gas, sweet gas, acid gas, octane, cetane, RVP, specs, specifications, yields, short-cut column, distillation, fractionator, margins, margin analysis, cost, OPEX, sour, crude, acid gas removal, hydrotreater, hydroprocessing, refinery, refining, ASW, aspen simulation workbook, workbook, Excel, simulation workbook
References: None |
Problem Statement: How do I create a PIMSEE instance in SQL server? | Solution: In order to create a PIMSEE local database, an instance named 'PIMSEE' must be created on the user's local SQL server.
First, the user needs to install MSDE (Microsoft SQL Server Desktop Engine).
Next use MSDE to create an INSTANCE in SQL, called 'pimsee' with password = 'pimsee'. All 'PIMSEE' local databases are under the instance name 'PIMSEE'.
After MSDE is installed, a directory, such as c:\MSDERelA\, will be created.
Click Start | Run to access the Run dialog box, then in the Open field, enter
c:\MSDERelA\setup SAPWD="pimsee" INSTANCENAME="pimsee"
Note - there must be double quotes around pimsee for SAPWD and INSTANCENAME.
Reboot the computer to start the 'MSSQL$PIMSEE' service.
To verify if 'MSSQL$PIMSEE' service is started, go to Start | Control Panel | Administrative Tools | Services, locate the 'MSSQL$PIMSEE', and check if the text 'Started' appears in the Status column. If it is not running, you can manually start that service by right-clicking it and selecting Start from the right-click menu.
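Alternatively, you can check and start the service from a command prompt using the service name given above:
sc query MSSQL$PIMSEE
net start MSSQL$PIMSEE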
Keywords: Instance
PIMSEE
SQL
Create
References: None |
Problem Statement: When running a PIMS-EE model in PIMS, I got a model conversion error: "Error processing database table XXX: Object reference not set to an instance of an object." How do I resolve this? | Solution: This error could be due to a slow response from the PIMS-EE database. To resolve it, update the statistics on the PIMS-EE database by running the script below, then run PIMS. Note that this action should be performed by a database administrator with sufficient knowledge of managing SQL databases. Use this script at your own risk, and make a backup before modifying any database. The script should run without errors or warnings, and it will be beneficial to save the output from the script.
DECLARE @tableName varchar(80), @schemaName varchar(80), @sqlStmt AS nvarchar(200)
DECLARE c CURSOR FOR
    SELECT t.name, s.name
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
OPEN c
FETCH NEXT FROM c INTO @tableName, @schemaName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sqlStmt = 'UPDATE STATISTICS ' + @schemaName + '.[' + @tableName + '] WITH FULLSCAN '
    PRINT @sqlStmt
    EXEC sp_executesql @statement = @sqlStmt
    FETCH NEXT FROM c INTO @tableName, @schemaName
END
CLOSE c
DEALLOCATE c
Keywords: PIMS-Enterprise Edition, PIMS-EE, Object reference not set to an instance of an object, Error processing database table, Model converting error, Database, script
References: None |
Problem Statement: What happens if I do not assign a product to an event screen in MBO? | Solution: In MBO, for the simulator to work correctly on an event screen, there needs to be at least one blended product assigned to that event screen. Otherwise, the MBO model could face potential issues, such as the beginning inventory not updating correctly.
In the Model >> Product dialog box, we can check the event screens that products are assigned to under the “Screen enabled for this product” section.
Keywords: MBO
Event Screen
References: None |
Problem Statement: Is it possible to import a PIPEFLO simulation into Aspen HYSYS? | Solution: There is no direct way to import a file created in PIPEFLO to Aspen HYSYS, but you can convert the PIPEFLO model to PIPESIM and then import the PIPESIM model to Aspen HYSYS using the PIPESIM Link Extension.
The PIPESIM Link Extension is a unit operation that uses the PIPESIM software package to simulate pipeline systems within the HYSYS framework. To import the model, follow Solution ID 143225.
For more information on how to convert the PIPEFLO model to PIPESIM, contact Schlumberger Support (https://www.software.slb.com/support).
Keywords: PIPEFLO, PIPESIM, Import
References: None |
Problem Statement: Terrain-induced slugs are caused by accumulation and periodic pushing of liquid along the length of the pipeline as it traverses varying elevations. This type of slug is common at the local minima of the pipeline network. Aspen Hydraulics provides dynamic simulation of the slugging phenomena and accurate prediction of the liquid level, pressure, and fluid velocities along the pipe. | Solution: Aspen HYSYS Upstream can be used to simulate terrain-induced slugging for accurate predictions of slug volumes and frequency. As an example, a simple model representing a pipeline connecting the well to the outlet of the riser in the Dynamics mode is available to users. The pipeline consists of several segments of piping with a riser at the end.
Set up in the Aspen Hydraulics sub-flowsheet
Pre-made strip charts allow users to observe the slugging behavior. These strip chart results show the liquid mass flow rate at the outlet of Complex Pipe-100 over time. This corresponds to an axial distance of 5456 feet, at the bottom of the second local minimum. In the plot, users can observe the liquid flow rate oscillating between high liquid flow rates (slugs) and low liquid flow rates (the space between slugs).
A Microsoft Excel interface is available to show how users can monitor the effect of the flowsheet with Aspen Simulation Workbook (ASW). Data charts are available for users to dynamically monitor the liquid holdup and vapor and liquid mass flow over the pipeline. Liquid flows out of the pipeline and the riser are also charted to provide easy view of the terrain-induced slugging phenomena.
Keywords: Slugging, Pipeline Hydraulics, ASW,
References: None |
Problem Statement: What are the boundary conditions in Aspen Hydraulics? | Solution: In Aspen Hydraulics, the pressure and flowrate values of the inlet and outlet streams are considered boundary conditions. These boundary conditions need to be specified correctly in order for the subflowsheet to solve. The total number of boundary specs should be equal to the total number of boundary streams.
For a single pipe in the flowsheet, three boundary conditions are allowed:
For a pipeline network, there are some restrictions the user needs to consider:
- Flowrate specification is only allowed on inlet streams.
- For network with multiple inlets, if one of the streams has both mass flow and pressure specified, then all other inlets must have pressure specifications.
- For network with multiple outlets, all the outlets need pressure specifications.
For Mixer unit, four types of boundary conditions are supported:
For splitter unit, two types of boundary conditions are supported:
There are some built-in examples inside Aspen HYSYS where users can check/study these boundary conditions. To find these files, open Aspen HYSYS | Resources | Examples | AspenHydraulics.
If you need further assistance on a specific case, please contact AspenTech Support.
Keywords: Aspen Hydraulics, Boundary Conditions
References: None |
Problem Statement: Why is "molality" displayed as a dimensionless property in Aspen Plus? | Solution: When molality is defined as the property set property "MTRUE", it is shown as a dimensionless (unitless) property.
Molality is by definition mole/kg water (or solvent), which means its units cannot be changed. In Aspen Plus, MTRUE is shown as dimensionless simply because, in the context of property set properties, having dimensions means allowing a change of units. For molality the unit is fixed, so the units field is shown grayed out.
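For example (a simple worked number): 0.5 mol of NaCl dissolved in 2 kg of water gives a molality of 0.5 / 2 = 0.25 mol/kg.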
Keywords: Molality, MTRUE, Property sets
References: None |
Problem Statement: By default Aspen Custom Modeler and Aspen Dynamics use density to identify the second liquid phase in vapour-liquid-liquid equilibrium. There are, however, cases where the user would prefer to identify the phase by one or more key component(s). | Solution: The capability of using key components is available in version 12.1.7 and later, but it has not yet been documented. It is not available for polymers and it can only be used with local properties calculations. We plan to make it more general in 2004.1 (cumulative hotfix 1), as we explain in the example of usage attached. The examples provided will work with any version 12.1.7 or more recent.
The attached zip file contains a document that explains how to use the key components to identify a second liquid phase and also the example files VLLdensity.acmf and VLLKeyComp.acmf. The physical properties definition file used is H2OBenzene.appdf. The example was generated in version 12.1.7 (cumulative hotfix 7). File H2OBenzene.bkp is included as it is needed to regenerate the properties definition file in version 2004 and following.
We assume that the user is familiar with generating a properties definition file and using it in a simulation in Aspen Custom Modeler and will, therefore, concentrate on the use of the physical properties submodel to calculate the three phase equilibrium and select liquid phase 2 by density or key component. More information on the basics of using Aspen Custom Modeler can be found in the Getting Started Guide (in Documentation CD).
KeyWords
Keywords: None
References: None |
Problem Statement: How can I access the various solver settings and run options from a script? | Solution: The methods and properties are documented in the Automation section of the on-line help.
The attached example file shows how to access the solver settings and run options.
When you invoke one of these scripts, the current settings are printed in the simulation messages window using the same syntax you would use to set them from another script. The settings are reported in the same order as displayed on the graphical user interface windows, so you can also use them to quickly identify the name of a property.
One application of these scripts is to record the current settings (clear the simulation messages window, run the scripts, then copy the printed text into a new script) so that you can apply the same settings in another simulation file or easily reset the settings later. The scripts also show how the solver properties can be accessed programmatically.
The scripts are:
- show_diagnostics: report the settings displayed on Solver Options Diagnostics tab
- show_estimator: report the setting displayed on Solver Options Estimator tab
- show_homotopy: report the setting displayed on Solver Options Homotopy tab
- show_integrator: report the setting displayed on Solver Options Integrator tab
- show_linear: report the setting displayed on Solver Options Linear tab
- show_nonlinear: report the setting displayed on Solver Options Non-Linear tab
- show_optimizer: report the setting displayed on Solver Options Optimizer tab
- show_tearing: report the setting displayed on Solver Options Tearing tab
- show_tolerances: report the setting displayed on Solver Options Tolerances tab
- show_run_options: report the setting displayed on Run Options
- save_settings: this script invokes all show script and creates a new script with the current settings
- MySettings: script created with save_settings (note that the simulation does not use default settings)
Some example scripts are also provided for DMO, LSSQP and SPARSE as non-linear solvers, and DMO as optimizer.
Keywords: solver, options, script, VBA, automation
References: None |
Problem Statement: This Knowledge Base article provides the answer to the following question: Is it possible to set up one monitored tag with two profile tags in the Aspen Golden Batch Profiler?
The scenario leading to the above question is as follows:
• Within a Batch there is a Temperature characteristic in a Reactor called Temp.
• The Temp is examined and has to be within certain limits for LOWLOW-LOW-HIGH-HIGHHIGH.
• Therefore Temp has 2 sets of limit lines (or bands) to be watched for, one for LOW-HIGH and one for LOWLOW-HIGHHIGH. This is because a Batch Profile has only a lower and a higher limit.
• Both sets of limit lines have to use the same Profile tag, Temp.p, because it is not possible to associate Temp with two different Profile tags.
• At this point it is not possible to enable both Reference Profiles or even one of them, which leads to the question above. | Solution: Unfortunately, it is not possible to set up one monitored tag with two profile tags. In the Aspen Batch.21 Administrator GUI, it "looks" like you ought to be able to set up profile definition A with profile tag M containing envelope data to monitor tag X, and profile definition B with profile tag N, monitoring process tag "X", but the property sheet in the Administrator won't let you do it. When you try to create a new profile and associate the second profile tag with the same monitored tag, it wants to force you to apply that change to all profiles that are associated with that monitored tag.
To restate...
"a profile definition" is an envelope that you wish to monitor a tag against.
The profile definition specifies what tag will contain the envelope data at runtime, and specifies what tag you will be monitoring.
So, in the scenario presented above, one would set up his/her High/Low envelope and specify profile tag "M" to contain that data at runtime, monitoring process tag "X". No problem so far. Then they set up their HighHigh/LowLow envelope (B) and specify profile tag "N", monitoring process tag "X" -- no go! The Administrator wants to change all profiles that monitor process tag "X" to have the same profile tag "N".
So, it is not possible to monitor a tag against two different profiles at once at runtime, but even before that, it is not even possible to configure a tag against two different profile tags.
Important note
In Aspen Golden Batch Profiler V7.1 we provide up to three sets of limit lines which would make the above scenario possible with just one reference profile. Prior to that version, only one set of limit lines was available.
Keywords: Profiling
References: None
Problem Statement: In V7.1 and earlier, if the system was offline (for example in a prolonged Store and Forward state), Profiles would fail to record while the system was catching up, even though the trigger characteristic for the Reference Profile was being recorded. | Solution: Starting in V7.2 and later, Profiles are recorded correctly even when the system is in a "catch-up" state.
Keywords: None
References: None
Problem Statement: What factors determine when a Profile is recorded? For example, a customer tries manually recording a characteristic, expecting a Golden Batch Profile instance to be displayed in the Profile Monitoring plot, but it is not recorded. What should be examined to understand why the profile did not record? | Solution: The key thing to keep in mind is that the trigger to record a new profile instance can be compound, much like a BCU trigger. In the case of Aspen's AspenChem demo, the profile is triggered by two elements, the Alignment and the Context. First, under the Alignment tab, the requirement is shown that the REACT subbatch START TIME characteristic must be recorded:
Next, note the Context tab. For the given batch being processed, the characteristic Product Planned must be present with a value of 1, 2 or 3. This demonstrates the ability to share the same alignment condition, but still to differentiate which Profile instance is recorded.
Keywords:
References: None
Problem Statement: What do the following warnings about the total length of enumerations mean in the Standard Library? Will these warnings affect my customized datasheets?
Enumeration Option 914 in eCommonMotorEnclosure
Warning: Total length of option is 914 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 305 in eEntranceConstruction(ExchangerShell)
Warning: Total length of option is 305 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 301 in eExitConstruction(ExchangerShell)
Warning: Total length of option is 301 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 282 in eFinType(ExchangerTubeExternal)
Warning: Total length of option is 282 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option eFluidCode(Pipes) in eFluidCode(Pipes)
Warning: Enumeration option eFluidCode(Pipes) has a duplicate value
Enumeration Option eFluidCode(Pipes) in eFluidCode(Pipes)
Warning: Enumeration option eFluidCode(Pipes) has a duplicate value
Enumeration Option eFluidCode(Pipes) in eFluidCode(Pipes)
Warning: Enumeration option eFluidCode(Pipes) has a duplicate value
Enumeration Option 380 in eHetranTubeMaterial
Warning: Total length of option is 380 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 292 in eIcarusTubeMaterial
Warning: Total length of option is 292 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 463 in eIcarusTubeSheetMaterial
Warning: Total length of option is 463 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 294 in eNFPAReactivityHazardSymbol
Warning: Total length of option is 294 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 824 in eNozzleFunction(Nozzle)
Warning: Total length of option is 824 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 263 in eSealFlushPipingPlan_API610
Warning: Total length of option is 263 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Enumeration Option 323 in eTEMAType(HeatExchanger)
Warning: Total length of option is 323 while only 255 allowed for Excel Datasheet Editor export. Consider shorter names
Library Standard Model
Information: 0 errors found. 14 warnings | Solution: The "255 character limit" is a known Excel limitation that is outside our control. ABE includes an error/warning check while creating the class store, so the user sees a detailed warning about each piece of data that exceeds the length allowed by Excel.
There is nothing to worry about with these warnings; they will not affect your customized datasheets. The messages are for reference only.
Keywords: Class Library Editor, enumeration, total length, Excel Datasheet Editor
References: None |
Problem Statement: Why do I get a message "Session has been forcibly disconnected" when I add a symbol in Drawing Editor? | Solution: You receive this message after rewriting a symbol while an older copy of that symbol, which was not rewritten, is still placed on the drawing.
An inconsistency occurs between the new symbol and the old one.
Remove the old symbol from the PFD and place the new symbol.
If you need to customize the symbol you place on the PFD, save the new symbol under another name.
Keywords: Drawing Editor, Symbol
References: None |
Problem Statement: What versions of AutoCAD are supported in Aspen Basic Engineering? | Solution: AutoCAD 2007 and earlier are supported with all versions of V8.x and V9.
You can import *.dwg files which were created with AutoCAD 2007 and earlier.
Keywords: Aspen Basic Engineering, ABE, AutoCAD
References: None |
Problem Statement: How do I manually register the Excel Datasheet Definer add-in? | Solution: If ABE has been properly installed, the Excel Datasheet Definer should have been properly registered and added to Excel. However, in rare instances, the add-in will be lost and will need to be manually re-registered.
Before proceeding, it is important to make note of whether 32-bit or 64-bit Microsoft Office is installed. ABE only supports 32-bit office. If an ABE installation was performed on a machine with 64-bit Office, the Excel Datasheet Editor add-in will fail to be registered, and there is no workaround but to uninstall 64-bit Office, install 32-bit Office, and reinstall ABE.
1. Make note of where AZDefinerAddin.dll is located.
By default, the AZDefinerAddin.dll should be located in C:\Program Files\AspenTech\Basic Engineering V8.8\UserServices\bin, but it may be different if the user chose a different directory to install ABE in. Also, if using a version other than V8.8, navigate to the folder corresponding to your version.
2. Launch command prompt as an Administrator and enter cd C:\Program Files\AspenTech\Basic Engineering V8.8\UserServices\bin.
If ABE was installed in a different directory, replace the path with the correct one. Also, if using a version other than V8.8, replace Basic Engineering V8.8 with your version.
3. In command prompt, enter regsvr32 AZDefinerAddin.dll.
If successful, you will see this window:
Keywords: Excel Datasheet Definer Add-In Register
References: None |
Problem Statement: How can I transfer the process conditions from the MaterialPorts.Flow attribute to the OperatingConditions.Flow attribute in Aspen Basic Engineering? | Solution: You can use the attached Rule to copy the process data from the Material Ports to the Operating Conditions attributes that will be displayed on the Equipment Datasheet.
Use the following steps to run the rule.
1. Download the attached file "CompressorDataAll.azkbs".
2. Copy the file to your Workspace Library directory
(C:\AspenZyqadServer\Basic Engineering1X.1\WorkspaceLibraries\KBs).
3. Open the StandardLibrarySet.cfg file in the WorkspaceLibraries Directory (C:\AspenZyqadServer\Basic Engineering1X.1\WorkspaceLibraries) and add the line
KBScripts = "CompressorDataAll"
under the line that contains ManagedKBsDirectory= "KBS", then save the .cfg file (see the example configuration after these steps).
4. Reload your Workspace in the Administration application.
5. Open the CompressorDataAll.azkbs file with the Rules Editor application.
6. Select File | Open Workspace and open the Workspace.
7. Select Tools | Compile.
8. Select Tools | Install/Replace Module.
9. You may now run the rule by selecting Run | Rule and selecting CompressorDataAll.
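For reference, after step 3 the relevant portion of StandardLibrarySet.cfg should look something like this (surrounding entries and exact spacing in your file may differ):
ManagedKBsDirectory= "KBS"
KBScripts = "CompressorDataAll"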
Keywords: Rule, KB, Operating Conditions, Centrifugal Compressor, Material Port
References: None |
Problem Statement: How to define a new attribute for Aspen HYSYS so it can be exported into ABE? | Solution: There are 3 levels where we can work inside the Class Library Editor (CLE). The first level is Class views, then we have Composite views, and finally the Classes are at the bottom level.
The next steps are an example using the pump object.
1. Open CLE tool and open the StandardModel.azcl file.
2. Double click on the ObjectMapperPump in the Class Views tab to display the attributes inside this class view.
3. Select an attribute inside this class view and choose the option Open UoPumpBlock to display the composite view level:
4. In case you don't find the attribute you want inside the composite view level, you will need to go down one more level to search for the class link. To do this, right-click an attribute inside the composite view level and select Open UoPump.
5. For instance, if we want to have the CalculatedHead inside our datasheet we must have a CalculatedHead attribute in our class view that is correctly linked to the CalculatedHead class. The first thing to do is create an attribute at both the class view and the composite view levels.
6. Select the CalculatedHead inside the UoPump class and drag and drop this item onto the composite view attribute we have created. When you drop the item you must select Synchronize all to define all the properties of this class attribute inside the composite view (including the route).
7. Repeat the same steps to define the route for the Class view, this time linking the Composite view attribute to the CalculatedHead attribute created at the class view level.
8. When you have made the changes, you will need to create a new class store that will be used for your workspaces. You can overwrite the Standard Model you have so all the workspaces can have these new definitions.
9. Close the CLE tool and open the Administration tool in order to reload the workspace.
Remember to double check the specific CLE model you are using in your workspace so you can be completely sure that this modification will be applied to your ABE project after restarting the broker or reloading the workspace.
Keywords: Class Library Editor, class view, composite view, classes.
References: None |
Problem Statement: The Excel Datasheet Editor and/or Excel Datasheet Definer add-ins are not loading on start. How can I fix this? | Solution: This solution assumes that the Excel Datasheet Editor and Excel Datasheet Definer add-ins have been properly registered. If that is not the case, see solution ID #146028 and #146033 for more information.
Excel add-in load behavior can be changed by modifying registry keys.
1. From the Start Menu, select "Run" and type in "regedit"
2. Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\Excel\Addins
3. For the Excel Datasheet Editor add-in, select the folder AZ181ExcelAddin.DatasheetEditor and make sure that the load behavior is set to 3.
4. For the Excel Datasheet Definer add-in, select the folder AZ171DefinerAddIn.ZyqadAddInDesigner and make sure that the load behavior is set to 9.
You can change the load behavior by double-clicking on the registry key and inputting the value which you want to change it to.
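For administrators who prefer to script this check, below is a minimal Python sketch. It assumes Python 3 on Windows and that the registry value is named LoadBehavior (the usual Office add-in convention); verify the exact value name in regedit before running anything.
import winreg

ADDINS_PATH = r"Software\Microsoft\Office\Excel\Addins"

def set_load_behavior(addin_name, value):
    # Open the add-in's key under HKEY_CURRENT_USER and set its load behavior.
    key_path = ADDINS_PATH + "\\" + addin_name
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "LoadBehavior", 0, winreg.REG_DWORD, value)

# Per steps 3 and 4 above: Editor add-in uses 3, Definer add-in uses 9.
set_load_behavior("AZ181ExcelAddin.DatasheetEditor", 3)
set_load_behavior("AZ171DefinerAddIn.ZyqadAddInDesigner", 9)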
Keywords: Excel Datasheet Editor Definer Add-in load on start behavior
References: None |
Problem Statement: How can I set the case for multiple cells at a time in Datasheet Definer? | Solution: You can select all the fields, for example in a column, and, using Edit | Case from the Datasheets menu, define the case for all the fields at once.
You do not need to go one by one to define the case.
Keywords: Datasheet Definer, Case
References: None |
Problem Statement: A controller with a large list size may not write out to the DCS despite all indications that the writes are successful. This is observed on a Honeywell TPN OPC server running on an eApp. | Solution: The LISTSZ parameter of a controller determines the list size for the PUT and WRITE requests to the CIM-IO server. Each OPC server may react differently to various list sizes. On the new Honeywell eApp, a list size larger than 420 will cause the writes to fail without any error. All CIM-IO logs show normal, successful writes to the OPC server even though the values never make it out to the DCS points.
To fix this error, reduce the LISTSZ until the DCS value matches what the controller is writing out.
Keywords: LISTSZ
Write
TPN server
References: None |
Problem Statement: According to the Aspen Real-Time SPC Analyzer User’s Guide, a Real-Time SPC data record is defined by any of the following records:
Q_BatchXBARDef, Q_XBARDef, Q_XBARSDef, Q_XBARCDef, Q_XBARCSDef, Q_XBAR21Def, Q_XBARS21Def
All such records have a repeat area listing the alarm rules. Each of the rules has a field that holds the current alarm state. Whenever the state changes for any of the rules it is recorded in the Q_ALARM_HIST record. This solution provides a script that you can use to base your own notifications to appropriate operators whenever key rules go into alarm state and are added to the Q_ALARM_HIST log. | Solution: Attached is an SQLplus script that you can configure to be triggered whenever the Q_ALARM_HIST record logs a new alarm Change Of State (COS).
Use Aspen SQLplus to save the script as a CompQueryDef record called AlertSPC. The script will require customization to take account of your own email server settings, etc.
In particular, look for the following variables and configure accordingly: SMTPServer, Sender, Recipient
Also note the use of the following switches: DebuggingEnabled -and- LogToFileEnabled
Use DebuggingEnabled in order to test the script in SQLplus. When set to TRUE the script will process some recent alarm COS occurrences and send logging text to the SQLplus output window. This should be the first thing you do.
If LogToFileEnabled is TRUE then a log file will record the process messages. LogFileName = '%USERPROFILE%\SPCAlerts.log' by default which will result in a log file being generated in the user profile folder of the account that runs the sqlplus_server.exe process.
In order for the query to trigger whenever a new alarm condition occurs, find the new AlertSPC record within Aspen IP.21 Administrator and make the following changes:
#WAIT_FOR_COS_FIELDS = 1
WAIT_FOR_COS_FIELD = Q_ALARM_HIST LOG_SEQUENCE_NUMBER
COS_RECOGNITION = all
Your named recipient(s) should now receive an email when the SPC data records enter the alarm state.
When the script runs it will create a record called AlertSPC_counter defined by IP_DiscreteDef which simply holds the last log sequence number that was processed. This is required to avoid duplicate emails or possibility of skipping any COS occurrences.
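For orientation, the attached script's overall logic can be sketched as follows. This is a deliberately simplified Python illustration only, not the SQLplus script itself; the SMTP server, sender and recipient values are placeholders corresponding to the SMTPServer, Sender and Recipient variables described above.
import smtplib
from email.message import EmailMessage

SMTP_SERVER = "mailhost.example.com"  # placeholder for SMTPServer
SENDER = "ip21@example.com"           # placeholder for Sender
RECIPIENT = "operator@example.com"    # placeholder for Recipient

last_processed = 0  # in the real script this lives in the AlertSPC_counter record

def notify(alarm_rows):
    # Email each alarm change of state newer than the stored counter value.
    global last_processed
    for seq, text in alarm_rows:
        if seq <= last_processed:
            continue  # already handled, so no duplicate emails
        msg = EmailMessage()
        msg["Subject"] = "SPC alarm COS #%d" % seq
        msg["From"] = SENDER
        msg["To"] = RECIPIENT
        msg.set_content(text)
        with smtplib.SMTP(SMTP_SERVER) as server:
            server.send_message(msg)
        last_processed = seq  # persist so no COS is skipped or emailed twice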
DISCLAIMER - The script in this solution should be used as an example of what is possible. We do not claim that it is the most efficient method to notify operators of any Real-Time SPC alarms that are triggered. AspenTech do not provide any warranty or support for it. Use at your own risk.
Keywords: BatchSPC
Q_ALARM_STATE
Q_ALARM_RULE
Q_RULE_STATUS
References: None |
Problem Statement: How to retrieve data to Excel from the Fixed or Repeat Area of an Aspen InfoPlus.21 record using ODBC | Solution: To retrieve data from the Fixed or Repeat Area of an Aspen InfoPlus.21 record:
1. Start up Excel and open a worksheet
2. Make the following selection from the menu bar: Data | Import External Data | New Database Query...
3. From the "Databases" tab, select an existing data source connected to the InfoPlus.21 database, or create a new one by clicking on <New Data Source> and selecting the AspenTech SQLplus driver in the following dialog box
4. The Query Wizard will allow you to select the InfoPlus.21 definition table from which you want to retrieve data (i.e. for IP_AnalogDef records, select IP_Analogdef for Fixed Area fields or select IP_Analogdef_1 for Repeat Area fields) and then the single fields of data (e.g. NAME, IP_TREND_TIME, IP_TREND_VALUE from an IP_AnalogDef record)
5. Once all the fields are selected, the Query Wizard allows you to filter and sort your data
6. You can now choose to either (1) "Return Data to Microsoft Office Excel", (2) "View data or edit query in Microsoft Query" or (3) "Create an OLAP Cube from this query"; click Finish to exit the wizard.
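As an alternative to the Query Wizard, the same data can be retrieved programmatically over the same ODBC driver. Below is a minimal sketch using Python's pyodbc package; the DSN name "IP21" and the tag name "MyTag" are placeholders for your own data source and record names.
import pyodbc

# Connect through a pre-configured DSN that uses the AspenTech SQLplus driver.
conn = pyodbc.connect("DSN=IP21")
cur = conn.cursor()

# Repeat-area history fields for one IP_AnalogDef record.
cur.execute(
    "SELECT NAME, IP_TREND_TIME, IP_TREND_VALUE "
    "FROM IP_AnalogDef_1 WHERE NAME = 'MyTag'")
for name, trend_time, trend_value in cur.fetchall():
    print(name, trend_time, trend_value)

conn.close()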
Keywords: Excel, Office, ODBC, SQLplus
References: None |
Problem Statement: How do I shorten the time needed to generate a Platinum flowsheet from Aspen PIMS? | Solution: Users may face a situation where it takes a very long time to generate a Platinum flowsheet from PIMS for a model with hundreds of cases. The reason is that executing a large number of cases creates a large results database, so Platinum takes longer to read all the data and generate the flowsheet.
If only one or a few selected cases need to be shown in the Platinum flowsheet for analysis, one suggestion is to clean up the model (click the brush button), run only the cases needed, and then generate the Platinum flowsheet. This keeps the results database smaller, so less time is needed to read the data and generate the flowsheet.
Keywords: Generate Platinum Flowsheet
Long Time
Result Database
References: None |
Problem Statement: How to create a new flowsheet in Aspen Platinum Server | Solution: Steps to create a new flowsheet in Aspen Platinum Server:
1. Install and open PIMS Platinum Server.
2. Platinum server interface will be opened in IE or Chrome.
3. Set up Server Roles.
Assign Administrator and Creator roles using the NT Group Selection dialog box. A user can be assigned to a role or an NT group. For groups, all users in that group will have the assigned role.
Role            Responsibility
Administrator   Can configure Administrators and Creators. Server-level permission. Implies all other permissions. NT administrators on the server automatically have the Platinum Administrator rights.
Creator         Can create flowsheet projects and configure designers for that flowsheet project. Server-level permission.
4. Create a new flowsheet.
Click "New Flowsheet" icon and follow the steps in the wizard. .
1) Enter basic information for the new flowsheet.
2) Set up Roles for users who can access the flowsheet.
All roles described below are assigned on a data source level. Each role includes all permissions at the lower levels and a user can have multiple roles.
Role        Description
Designer    Can create data sources in a specific flowsheet project and can associate multiple data sources to projects. Can create asset groups and PIMS case sets. Has full permission to add, replace or amend cases and can set case run permissions for lower levels.
Publisher   Can publish views in a flowsheet project for other users to see. Can create public layouts for a project. Layouts include flowsheets, unit data views and dashboards. Can only add new cases if granted permission by a Designer or above.
Viewer      Can only view a flowsheet project. Can only add new cases if granted permission by a Designer or above.
3) Select Database.
The Access results database needs to be in C:\ProgramData\AspenTech\PIMSPlatinum\DataSources.
4) Case Sets.
5) Group Assets. To generate group assets, please click Auto-Allocate all.
6) Click Apply and Publish.
Now the new flowsheet is created and the roles are set up for different users who are going to access this platinum flowsheet.
5. Share links.
Now we can share the address link of this flowsheet for multiple users.
For a local machine, the address is http://localhost/AspenTech/PIMSPlatinum/aspenONE.html#. If the flowsheet is generated on a server, replace localhost with the server name, and make sure users who need access to this flowsheet also have access to the server.
Keywords: Platinum Server
Flowsheet
References: None |
Problem Statement: How to compare marginal values between two cases in Aspen PIMS | Solution: Using stream comparison wizard in Aspen PIMS Platinum the marginal values between cases can be compared
The steps are as follows:
After making a PIMS run with several cases and generating the flowsheet in PIMS Platinum:
1. Insert | Dashboard
2. Click on Modify
3. Click on the stream comparison wizard
4. From the Cases tab, move all cases to the Selected Cases section
5. From the Equipment tab, select Purchase | Buy
6. From the Streams tab, select the crudes: ANS, ARH, ARL, and TJL
7. From the Properties tab, select Marginal
8. Click on Finish
Keywords: Compare marginal values
Stream comparison wizard
Case comparison
References: None |
Problem Statement: How do I resolve the error, "Model registration failed" when connecting to the Aspen PIMS Platinum server? | Solution: For this example, assume you are using Windows 7 (64-bit), which determines the application directory shown below. First, open a command prompt window as Administrator.
1. Go to the 'PIMS Case Runner Service' directory:
C:\>cd \
C:\>cd "C:\Program Files (x86)\AspenTech\Aspen PIMS\PIMS Case Runner Service"
2. Check the Case Runner Service configuration shown in the screen below.
You can type 'PIMSCoreCaseRunnerService.exe /?' on the command line. It will show you every option and its description.
If you are running Platinum Solo, the OS type should be 'Desktop'. However, if you are running Platinum Server you need to switch it to 'Server'.
The error message above indicates that your OS type is configured as 'Desktop' but you tried to run Platinum Server. You can use the command line to switch, as shown in the next screen.
Now you have changed the OS Type to Server. If you want to run Platinum Solo, you have to switch back by replacing the word 'Server' with 'Desktop'.
Keywords: Service, Case Runner, registration
References: None |
Problem Statement: How to configure IIS for a Platinum server installation? | Solution: If IIS (Internet Information Services) is not configured, you may get the following error message during the installation of Platinum in server mode. NOTE: this only applies to Server mode - IIS is not required for Platinum when it is installed on a PC for a single user.
IIS can be configured by browsing to Control Panel | Programs and Features and then clicking Turn Windows features on or off.
Now, in the resulting window, check all the options under Internet Information Services.
Keywords: Platinum server installation
IIS
Internet information service
Configure IIS
References: None |
Problem Statement: How can I define case sets for PIMS Platinum so I can limit the flowsheet view to just a specific set of cases? | Solution: After generating your flowsheet, in Platinum, go to FILE | Edit Flowsheet and select the flowsheet you want to edit. The Platinum flowsheet wizard will open as shown below.
Change the flowsheet name if desired and click NEXT.
Select the appropriate flowsheet and click NEXT. You will now see the page where you can define your case sets. PIMS will automatically generate case sets based on the available cases. If these are not complete or correct you can modify using the page. Select the desired cases and group them by selecting them on the left and using the middle arrow keys to move them to the appropriate group on the right. When you have finished, click NEXT.
Keywords: None
References: None |
Problem Statement: Aspen PIMS Platinum includes a Standard Refinery Template by default to assist with the allocation of process units when configuring the flowsheet for the first time. Is there a better template available when configuring a flowsheet for an Olefin plant? | Solution: We do have a Standard Olefin Template, but it is not automatically installed with Aspen PIMS Platinum. You can download the olefin template from this article and save it into your local folder under,
C:\Program Files (x86)\AspenTech\PIMS Platinum\bin\templates
When you create a New Flowsheet or Edit an existing one, click 'File', then select 'Edit Flowsheet' and choose the flowsheet you will work on. The 'Aspen PIMS Platinum Flowsheet Wizard' will come up.
Go to the "Group Assets" tab. On the middle right, the 'Auto-Allocate All' pulldown will show 'Standard Olefin template' as one of the options. After selecting it, all the units will be unallocated under the 'Flowsheet' branch. You have to create each model group and manually allocate all the equipment.
The first step is to create model groups. Once you have the Standard Olefins template, in the middle of the wizard screen, the 'Process Unit Icons' pulldown will contain a "Standard Olefins" set of icons.
Minimize the 'Process Unit Icons', drag the unallocated equipment to the tab below.
Expand the 'Process Unit Icons', and drag these icons to the 'Flowsheet' in the 'Process Units with Allocated Equipment' tab to create groups.
Now you can drag all the unallocated equipment to different group as you desire.
Keywords: Platinum, olefin, template, flowsheet, icons, group, allocated, unallocate, allocate, equipment
References: None |
Problem Statement: How do I use Aspen PIMS Platinum to interpret the significance of material balance constraints? | Solution: Material balance constraints are modeled as VBALxxx/WBALxxx rows in a PIMS model, and their structure can be viewed in the matrix analyzer. The process implication associated with a material balance constraint can be interpreted from the column variables and the signs of the matrix coefficients in its equation.
The example used for illustrative purpose in this solution is associated with the gulf coast model that comes with PIMS installation. The VBALLV1, Light Vacuum Gas Oil material balance row in the model is selected for explanation. The equation corresponding to the VBALLV1 row from the matrix analyzer is as shown below
From the above matrix, experienced PIMS users interpret the column variables and coefficients in the equation as shown in the table below:
PIMS Variable   Interpretation of variable        Coefficient   Interpretation of coefficient
BLV1HSF         LV1 to HSF blending               -1            Consumed
BLV1LSF         LV1 to LSF blending               -1            Consumed
SCD1L>1         LV1 from Crude unit               1             Produced
SGFPLV1         LV1 to GO HDT Pool                -1            Consumed
SHCFLV1         LV1 to Hydrocracker Feed Pool     -1            Consumed
With the above understanding, a flow diagram corresponding to this specific equation in the PIMS model can be visualized as shown in the figure below.
As observed from the figure, LV1 is produced from the Crude unit, and then consumed in the Hydrocracker feed pool, Gas Oil Hydrotreater feed pool, Low Sulfur Fuel Oil blender and High Sulfur Fuel Oil blender.
For a new PIMS user, understanding of PIMS variables and sign convention associated with equation is not obvious. With Platinum, the flow sheet visualization is immediately available, which clearly illustrates the stream disposition as shown in the flow sheet generated by platinum. In this example the stream tag LV1 can be searched in the search box available on the left navigation pane
Keywords: Material balance constraint
Interpretation of Material balance constraint
Interpretation of Material balance constraint in platinum
Platinum flowsheet
References: None |
Problem Statement: Why doesn't the Platinum server page open? | Solution: After a successful installation of Platinum in server mode, opening the Platinum server page may display the error message shown in the figure.
This error might be seen if IIS is not configured correctly during the installation process. Make sure all the components under IIS are checked before installing the software. Correcting the configuration (checking the right options) after installing the software will not resolve the problem.
Keywords: Unable to open platinum server page
Platinum server error
Server error in application
Platinum server installation
References: None |
Problem Statement: Warning Message:
"Model Registration Failed! Error reading model files. The creator of this fault did not specify a Reason. | Solution: Users may receive the “Model Registration Failed” warning message when generating Platinum flowsheet. If it did not specify a reason for this fault, please check PimsCaseRunnerCoreService.log in C:\ProgramData\AspenTech\DiagnosticLogs\PIMS Case Runner Service , which will provide more specific message.
For example in one case, the message in PimsCaseRunnerCoreService.log shows “RegisterModel failed. Model file does not exist.\PimsModel.pimx”. It means the auto-generated flowsheet didn't register the location of the .pimx file properly. By disabling the Case Runner and re-enabling it will fix the issue. Steps below can be followed.
1. In Additional Features, delete PIMS Model Connection
2. Add PIMS CASE Execution.
3. Manually add the PimsModel.pimx file.
Keywords: Model Registration Failed
PimsCaseRunnerCoreService.log
Case Runner
References: None |
Problem Statement: I am trying to create an Aspen Platinum flowsheet from the results of an Aspen PIMS run that I made but I keep getting this message instead:
“PIMS database validation failed, more than one model maybe using the same database.”
Applicable Version(s)
Multiple Versions of Aspen PIMS Platinum | Solution: This error in Platinum will occur if you want to create a flowsheet from a database but it contains results of different models. Currently in PIMS, the model name is identified by its folder name. One way this can happen is if you do the following:
Open PIMS and open the volume sample model.
Run a case of the model and output the results to Access.
After case execution is done, close the model.
Open the parent folder which contains the model (e.g.: C:\Users\Public\Documents\AspenTech\Aspen PIMS\Pims) and change the folder name to Volume Sample 1.
Open the model in PIMS again.
Run a case again. In the execution window, select to keep existing results.
Now the results will be output to the same database, but in the PrModel table in the database you can see that there are different model names. If you try to create a flowsheet from this database, the error you observed will pop up, and the flowsheet cannot be created.
To resolve this issue, you can simply change the model names in this table to the same one and save the Access database. For a SQL Server database, you would need to run a script to update the model name.
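As a sketch of such a script, the following assumes an ODBC DSN named "PIMSResults" pointing at the results database, and that the model-name column in PrModel is called Model; both are assumptions, so check the actual DSN and column name in your own database before running anything.
import pyodbc

conn = pyodbc.connect("DSN=PIMSResults")
cur = conn.cursor()

# Unify the two model names so the database passes Platinum's validation.
# Column name "Model" is hypothetical - verify it in your PrModel table.
cur.execute("UPDATE PrModel SET Model = ? WHERE Model = ?",
            "Volume Sample", "Volume Sample 1")
conn.commit()
conn.close()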
Keywords: “PIMS database validation failed, more than one model maybe using the same database.”
Platinum flowsheet generation failure.
References: None |
Problem Statement: Platinum is showing very slow performance for SQL model. How can I improve the performance? | Solution: Sometimes if the database statistics are stale, the performance of Platinum using the SQL model might be affected. Updating these statistics by running the following query can help in a faster and better performance. The query to run in SQL is:
sp_updatestats
It is good practice to update the stats in SQL from time to time to have good performance for the Platinum/Petroleum Scheduler models to work efficiently.
Keywords: SQL query
References: None |
Problem Statement: How do I configure Plan vs. Schedule in PIMS Platinum? | Solution: In order to configure the model you will need your refinery model in PIMS and in APS (SQL database).
PIMS side
1. Open the PIMS model and run the cases needed.
2. After solving, generate the PIMS Platinum Flowsheet.
3. Close the dialog box in order to open Platinum.
4. Wait for the Flowsheet to load.
Platinum Side
5. Once the Flowsheet has opened go to Flowsheet configuration.
6. Edit the Flowsheet generated by PIMS.
7. Go to the Data source section and add new data source.
8. Enter the data of the APS SQL database and click OK.
9. Go to the Group Assets option and allocate the APS unallocated equipment (this time the Auto-Allocate All option was used).
10. Click the publish button.
11. Click on the Flowsheet.
12. The Flowsheet with plan and schedule data will open.
13. In order to compare Planning and Scheduling the crude and products must be mapped. Click compare and check the schedule box.
14. Click on the schedule configuration button.
15. Enter the APS model data and click ok.
16. Click the Plan vs Schedule configuration button.
17. Map the streams needed (crudes bought and products sold) by adding a group name and clicking link (In the example the Alaska NS crude is being mapped).
18. Repeat the last step for the streams needed and click Save.
19. Now you will be able to compare the Plan vs. the Schedule.
Keywords: Platinum, Plan, Schedule, compare, 8.5, vs, configure
References: None |
Problem Statement: When you start Case Runner, in the Execution Query window, you get the message
'Failed to load model. Make sure you have sufficient license tokens to load the model'.
What does this mean? | Solution: This message occurs when the user running Platinum does not have model access rights. A user can test this by running this model from within PIMS. In this case you will get some popup messages generated by PIMS and not be able to complete the run.
To correct, grant the user full access to the model folder and retry the run from within PIMS. If there is no problem, then you can try to run Case Runner again.
Keywords: Query
References: None |
Problem Statement: How is the two phase highest velocity calculated in Aspen TASC? | Solution: In a two-phase flow, the highest velocity is calculated in TASC using the total flow rate and the homogeneous density "RhoH". The homogeneous density is calculated by this equation:
RhoH = 1/(x/RhoG + (1-x)/RhoL)
Where:
x = Vapour Quality
RhoH = Homogeneous density
RhoG = Vapour density
RhoL = Liquid density
Then Velocity = Mass Flow Rate per tube / (RhoH* Cross Sectional Area of one tube)
The largest velocity in a heat exchanger will occur where the quality is highest and the pressure is lowest.
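As a quick numeric illustration of the formula above (the values here are illustrative only and are not taken from the attached case):
# Illustrative values only
x = 0.30       # vapour quality at the point of interest
rho_g = 2.0    # vapour density, lb/ft3
rho_l = 45.0   # liquid density, lb/ft3

rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)  # homogeneous density
# rho_h is about 6.04 lb/ft3

mass_flow_per_tube = 0.5  # mass flow rate per tube, lb/s (illustrative)
tube_area = 0.0015        # cross-sectional area of one tube, ft2 (illustrative)

velocity = mass_flow_per_tube / (rho_h * tube_area)
# about 55 ft/s; the maximum occurs where quality is highest and pressure lowest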
In the attached sample case (E shell Thermosyphon.tai), the highest velocity is at the top of the tubes where vapour quality is highest and calculated by TASC, on Summary output, as 50.5 ft/sec.
The attached Excel sheet shows the same number, 50.5 ft/sec, as well.
Keywords: Velocity, Highest, Density
References: None |
Problem Statement: How do I perform a case comparison in Aspen PIMS Platinum? | Solution: The traditional case comparison feature in Aspen PIMS compares parameters such as Solution Status, Feedstock Purchases, Utility Purchases, Product Sales, Utility Sales, and Capacity Utilization between cases. These parameters are hard coded and there is no flexibility to add additional parameters for comparison. For example, the marginal values of units and streams cannot be compared between cases.
This limitation of the traditional case comparison can be overcome in PIMS Platinum using the stream comparison wizard. This solution illustrates the use of the stream comparison wizard in Platinum to compare the marginal value of crude feedstocks among three cases. The steps described in this solution can be used for any PIMS model; furthermore, in addition to Marginal Value any other parameter can be compared between cases.
In Platinum interface after generating the flow sheet, follow the steps:
1. Insert | Dashboard
2. Click on Modify
3. Click on the stream comparison wizard
4. From the Cases tab, move all cases to the Selected Cases section
5. From the Equipment tab, select Purchase | Buy
6. From the Streams tab, select the crudes: ANS, ARH, ARL, and TJL
7. From the Properties tab, select Marginal; in this case Marginal is selected because we are interested in comparing this parameter among cases. Depending on your needs, other parameters can be selected.
8. Click on Finish
Keywords: Case comparison
Case comparison in Platinum
Stream comparison wizard
References: None |
Problem Statement: Which Dataset Files are supported with Aspen Online Deployment? | Solution: Aspen Online Deployment (AOD) supports three different Dataset files which are,
- APC Dataset Files (*.apcdataset)
- CLC Files (*.clc) which is Aspen DMCplus Collect Dataset
- TXT Files (*.txt) which is a Text File Dataset
Keywords: Dataset
References: None |
Problem Statement: In aspenONE Search, there is a filter available to classify tags based on the "Quantity Type".
The default options available are: Other, Flow, Temperature, Percent, Pressure, Time, Cost
This solution explains how the tags are classified into different categories within Quantity Type. | Solution: The Quantity Type field filters the tags based on the Unit of Measure defined for the tags in Aspen InfoPlus.21 (IP.21) Administrator. The Unit of Measure field is mapped by MAP_Units in each tag's mapping record. For example, for IP_AnalogDef this would be the IP_ENG_UNITS field.
The Units of Measure for tags are categorized based on the Quantity Type Patterns defined in the Aspen Process Data Rest Service configuration file located on the Aspen Web Server:
\inetpub\wwwroot\AspenTech\ProcessData\AtProcessDataRest.config
Supported search patterns are listed as child elements of the <MetaDataOptions> element in this XML formatted document:
<MetaDataOptions>
...
<!-- Patterns used for determining Quantity Type -->
<CostPatterns>US$,EU$,$,€,£,¥</CostPatterns>
<TimePatterns>min,hr,m,sec,s,day,week,month,h</TimePatterns>
<LengthPatterns>ft,cm,m,in</LengthPatterns>
<MassPatterns>g,kg,lb,t</MassPatterns>
<ForcePatterns>lb,N,kg</ForcePatterns>
<VolumePatterns>bbl,gal,l,cc,cm3</VolumePatterns>
<PressurePatterns>atm,bar,psi,pa,hg,h2o,torr</PressurePatterns>
<FlowPatterns>gpm,cfm,cfh,cmh,cms,cfs</FlowPatterns>
<TempPatterns>degrees,F,R,K,C</TempPatterns>
<PercentPatterns>%,percent</PercentPatterns>
<SquarePatterns>2,sq</SquarePatterns>
</MetaDataOptions>
The logic assigns each tag to a Quantity Type by choosing the best pattern match.
For example,
· If the Unit of Measure contains text in the TempPatterns XML element, e.g., K, C, it will be assigned to the Temperature Quantity Type.
· If the Unit of Measure contains text in the FlowPatterns XML element or if the Unit of Measure is a fraction with the numerator containing text from VolumePatterns or MassPatterns and the denominator contains text from TimePatterns, it will be assigned to the Flow Quantity Type.
These patterns can be changed to accommodate different Unit of Measure sets. For example, an additional Unit of Measure "m3" can be added to the VolumePatterns to accommodate for all other flow tags.
However a new category cannot be added in this file to classify a completely new set of units. For example, the tags in IP.21 might be measuring Current(A) or Voltage(volts). In the present configuration, there is no pattern to accurately support such Units of Measure nor can one be added. Hence such tags would end up being classified as "Other".
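To make the matching logic concrete, here is a deliberately simplified Python sketch of the idea. It is not the service's actual code; in particular, the real logic also handles fractional units (for example, a VolumePatterns or MassPatterns numerator over a TimePatterns denominator maps to Flow), which is omitted here.
# Simplified sketch of Quantity Type classification by Unit of Measure.
PATTERNS = {
    "Temperature": ["degrees", "F", "R", "K", "C"],
    "Pressure": ["atm", "bar", "psi", "pa", "hg", "h2o", "torr"],
    "Percent": ["%", "percent"],
    "Time": ["min", "hr", "m", "sec", "s", "day", "week", "month", "h"],
}

def quantity_type(uom):
    # Return the first Quantity Type whose pattern matches, else "Other".
    u = uom.strip().lower()
    for qtype, patterns in PATTERNS.items():
        if any(u == p.lower() for p in patterns):
            return qtype
    return "Other"  # e.g. "A" (Current) or "volts" cannot be classified

print(quantity_type("psi"))    # Pressure
print(quantity_type("volts"))  # Other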
Note, after editing the config file, the IIS application pool for Process Data needs to be restarted to pick up the changes. The tag data sources must then be rescanned to update the quantity types in the search metadata.
Keywords: Quantity Type
Search
Web.21
PME
References: None |
Problem Statement: How Aspen PIMS Platinum allows you to quickly, easily and accurately track complex stream/process unit interactions | Solution: The underlying framework that builds the Aspen PIMS model is a Linear Program (LP) matrix, and with this LP matrix the topography, or flowsheet, of the refinery is hard to visualize. Aspen PIMS Platinum transcends this limitation of PIMS and provides a complete flowsheet view of the refinery planning model, thus helping users easily understand process submodels and stream (flow) pathways.
The following section compares the strategies used to understand a process submodel and track streams in PIMS versus PIMS Platinum.
SIS6 is a C5/C6 isomerization unit in the Gulf Coast model (the sample model that comes with the PIMS installation), which is modeled as shown below. In order to identify the inputs and outputs of this unit (SIS6), the sign of the row activity associated with the material balance rows has to be identified. In this case, the stream LN1 (CD-1 LSR Naphtha) is an input because the VBALLN1 row has a positive coefficient 1 for column LN1; likewise HYL, LN2 and LN3 are identified as inputs, and ISM, FGS and H2S are identified as outputs. This identification process can be difficult for a new PIMS user or for personnel who are changing their role from process engineer to planner in the refinery.
*TABLE SIS6
Columns: TEXT, BAS, SPG, SUL, LN1, LN2, LN3
FREE (Free Vector Flag): 1.0000, 1.0000
Product Yields, vol frac:
VBALLN1 (CD-1 LSR Naphtha): 1.0000
VBALLN2 (CD-2 LSR Naphtha): 1.0000
VBALLN3 (CD-3 LSR Naphtha): 1.0000
VBALHYL (Hydrogen, kSCF): 0.1200, 0.0050, 0.0300
VBALH2S (H2S, KSCF): -0.0006, 0.0000, -0.0006
VBALFGS (Fuel Gas, KSCF): -0.0034, 0.0000, 0.0000
VBALISM (Isomerate): -0.9970, 0.0000, 0.0000
VBALLOS (Volume Losses): 0.9970, 0.0000, 0.0000, -1.0000, -1.0000, -1.0000
PIMS Platinum makes this interpretation and learning process much easier. For the same sub model shown in the above table, Platinum presents a flow sheet representation which is elegant and easy to understand.
Analogous to the previous interpretation, the inputs LN1, HYL, LN2, LN3 and the outputs ISM, FGS, H2S are obvious from the Platinum flowsheet; this does not require any mathematical deduction of the signs associated with material balance constraints (VBAL rows).
Similar to process units, pathways of streams, or stream dispositions, can be easily identified in PIMS Platinum. In the conventional method, dispositions of process streams are identified from the stream disposition map of the full solution report or from the matrix analyzer.
The stream disposition map section of the full solution report is huge, and it is sometimes very difficult to track the streams. For the example below, the stream LN1 (CD-1 LSR Naphtha) was tracked.
The other method that is usually followed is to analyze/track streams with the help of the matrix analyzer. To do this, the material balance row corresponding to LN1 (VBALLN1) in the matrix analyzer is analyzed. From the matrix shown below, the user interprets the disposition of LN1 as follows: LN1 is produced from the Naphtha Splitter (SNSP), as the coefficient corresponding to the column variable SNSPLN1 is negative, and LN1 is consumed in the C5/C6 isomerization unit (SIS6) and the gasoline blender, because the coefficients associated with the column variables SIS6LN1, BLN1CRP, BLN1CRG, BLN1LSR, BLN1RPR and BLN1RRG are negative. Normally this type of interpretation takes time in the PIMS learning curve.
The above analysis is simplified in Platinum. Here, to track stream LN1, the user just needs to hover the mouse over the stream name LN1 in the left navigation pane listing process streams; this highlights the disposition of the stream in blue, from which it is obvious that LN1 is produced in the Naphtha Splitter and consumed by the C5/C6 isomerization unit (SIS6) and the gasoline blender.
Keywords: Platinum visualization
Tracking streams
Tracking Process Submodel
References: None |
Problem Statement: On installing Aspen Web Server, Apache Tomcat is also installed with a pre-configured userID (admin) and password (admin). This password may violate corporate security protocol and needs to be changed. How can this password be changed? | Solution: This knowledge base solution describes the steps how to change the Tomcat Password. The Tomcat password is encrypted and stored within the tomcat-users.xml file.
First, it should be noted that as of V8.7, AspenTech has introduced a new utility called aspenONE Credentials - see KB 141989. As the most convenient solution you should use that to set userID and password.
The following steps describe how an administrator can change the Tomcat password using a manual method that was required before the introduction of aspenONE Credentials:
1. On the web server machine, open a command prompt window (click Start | Run and enter cmd and click OK).
2. Change the directory to the bin directory of the Tomcat installation
(Type cd followed by the path to the Tomcat bin directory and press Enter).
cd C:\Program Files (x86)\Common Files\AspenTech Shared\TomcatXXX\bin (replace XXX with actual Tomcat version)
3. Run the command:
digest -a md5 <password>
where <password> is the new password.
The response is in the form:
<password>:<encrypted password>
For example,
digest -a md5 password
returns
password:5f4dcc3b5aa765d61d8327deb882cf99
(Note: with the introduction of Tomcat 8, the returned string will be different from the example above; it is not static, so running the command multiple times yields different strings, and the string is much longer. A Python cross-check of the older MD5-style example appears after these steps.)
4. Copy the encrypted password, which is the text that appears after the colon (:).
5. Open C:\Program Files (x86)\Common Files\AspenTech Shared\TomcatXXX\conf\tomcat-users.xml in a text editor.
6. Paste the encrypted password into the password field of the admin user, e.g.
<user username="admin" password="5f4dcc3b5aa765d61d8327deb882cf99" roles="manager-gui,AspenSearch"/>
7. Save the file and close it.
8. Restart the Tomcat service (click Start | Administrative Tools | Services, select the Apache Tomcat service and click Restart Service).
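As a cross-check of the plain MD5 example in step 3, the same digest can be reproduced with a couple of lines of Python. Note this applies only to the older, unsalted MD5 form; it will not match the longer, non-static digests produced by Tomcat 8.
import hashlib

# MD5 of the literal password "password"; matches the example in step 3.
print(hashlib.md5(b"password").hexdigest())
# prints 5f4dcc3b5aa765d61d8327deb882cf99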
Keywords: Aspen Search, aspenOne, Tomcat, username and password
References: None |
Problem Statement: What is Aspen InfoPlus.21 Mobile? | Solution: Aspen InfoPlus.21 Mobile is a browser-based application that provides real-time mobile access to Aspen InfoPlus.21 data to manage and improve manufacturing performance from a smart phone. It provides real-time access for visualization and analysis of data for enhanced decision making and faster troubleshooting, which in turn increases profitability, reduces variability and improves asset utilization. In other words, the business can respond faster and more profitably when users have the information they need at their fingertips, anytime and anywhere. For more information on the implementation details of the Aspen InfoPlus.21 Mobile application, please refer to the Implementation Guide.
Release V9 was the last version of the aspenONE® Manufacturing & Supply Chain suite that included Aspen InfoPlus.21 Mobile.
Keywords:
References: None |
Problem Statement: When configuring OPC connectivity to an OPC server, the target server actively refuses the access. Why? | Solution: Solution 128940 covers the connection to AspenOTS framework.
Generally speaking, for the rest of the OPC servers, many of them require Windows account credentials to authorize access.
When one uses a 3rd party interactive OPC client to browse the OPC server, it will typically (and quietly) pass along the credentials of the current Windows user. That is why one can connect to the OPC server from the AOD machine using an interactive OPC utility.
But the AOD server runs under the local SYSTEM account by default, so it needs to impersonate a Windows user when it connects to OPC servers that require credentials other than the local system. AOD defaults to using an anonymous connection, but you can supply a specific userid and password of a Windows domain user that you know can access the OPC server. With AOD's "Configure Online Server" utility one can change the configuration of the IO Source to give it a specific userid and password that is known to work. See the screen shot below.
Keywords: Credentials, impersonate, access, connectivity, OPC
References: None |
Problem Statement: How do I capture a dataset from Aspen Online? | Solution: To capture a dataset for desired time period follow this procedure,
· Open the Aspen Online Deployment (AOD) desktop.
· Make sure it is pointed to the AOD server on which you have deployed the application from which you want to capture a dataset. If you still have a problem, refer to solution ID 135723, which explains in detail how to see deployed applications from the AOD desktop.
· Select "Applications" from the Online menu and select the application from the "Deployed application" list for which you want to capture a dataset.
· Click on the "Get History" button, select an appropriate time range for the dataset, and click the "Get History" button once more in the "Get Application History Data" window.
· Now you can see the dataset under the "Datasets" folder in the left-hand window.
Keywords: capturing dataset
References: None |
Problem Statement: In aspenONE MSC (Manufacturing and Supply Chain) token contracts, Aspen Online Deployment (AOD) consumes tokens depending on the number of users. What do we mean by "user" in this case? | Solution: AOD has two runtime licenses associated with it; an online license and a desktop license. The online license cost is N tokens (we do not indicate the exact number because it may be different in time); the desktop license cost is 0 (zero) tokens.
· Each running AOD online application consumes one AOD online license count, which costs N tokens. If the online application is stopped, no license is consumed; when the application is started, it checks out one count of the AOD online license; when a running application is stopped, it releases the license/tokens it had checked out.
· Each AOD Desktop instance consumes an AOD desktop license count, but no tokens.
· The web viewer consumes no tokens.
Example: Three AOD online applications are deployed, one is stopped and two are running. This situation means that two AOD online licenses are checked out, consuming (2 * N) tokens.
Note: When the AOD online and AOD desktop applications open and execute simulation cases, those simulations check out simulator licenses (e.g. HYSYS licenses, Aspen Plus licenses), just as if a user had launched the simulator directly. By default an AOD online application will keep the simulation case open for as long as the AOD application is running, which means those simulation case licenses (and tokens) are in use as long as the AOD application is running. For steady state cases, there is an option to open-and-close the simulator each time AOD solves the case, which would free up simulator licenses and tokens between solutions, but using that option has some risk associated with it. That is because: a) tokens may not be available when the simulation case needs to be solved, and b) it often takes a long time to open and initialize the simulation and that can negatively impact system and application performance.
Keywords: Token consumption
Users
References: None |
Problem Statement: How to see deployed applications from Aspen Online Deployment (AOD) Desktop? | Solution: To see the deployed applications on any particular AOD server using the AOD desktop, first point to that server and then look in the Online Applications window. In summary, follow the two steps below.
1. Point to AOD server:
Open the AOD Desktop and select the Server option from the Online menu. This will open the Online Servers window. Click the "Add" button to add the name of the AOD server whose deployed applications you want to see, and click OK. If the server name is already added, you can skip this step and move to step 2.
2. Open Online Applications window
Once the Online Server is added, select "Applications" from the Online menu; this opens the "Online Applications" window, which shows all deployed applications on that particular server.
Keywords: deployed applications
References: None |
Problem Statement: What operating platforms are supported for Aspen Online Deployment? | Solution: The server component of Aspen Online Deployment is supported on Windows Server 2003 SP2 and Windows Server 2008.
The desktop component of Aspen Online Deployment is supported on Windows XP Pro SP3, Vista (Business and Ultimate, SP2), and Windows 7.
It is common convention to have desktop software run on server platforms. Therefore, the Aspen Online Deployment Desktop (which is desktop software) will also run on Windows Server 2003 and Windows Server 2008. The Aspen Online Deployment online application server (which is server software) runs on Windows Server 2003 and Windows Server 2008.
Keywords: Platforms, Windows, Compatibility
References: None |
Problem Statement: When configuring an AOD application, one would go to the I/O Connect tab and do Test Connections. It returns an error message literally as
Server ... is not operating properly. Server status is '(The server has been temporarily is not getting or sending data.] suspended'. | Solution: The data source editing dialog within the online server configuration window has the option to configure the identity to "Connect As". However, whatever "Connect As" identity is defined here, the OTS side always treats a connection from the AOD data source side as coming from the system identity. This causes the connection problem shown above if the OTS OPC server is configured to run under the identity of the launching user, since OTS's own launching user identity is usually INTERACTIVE, not SYSTEM. To work around this issue, the OTS OPC server can be configured to launch under a pre-defined user identity, say INTERACTIVE, if both OTS and AOD will be run by the same user sitting in front of the machine. The other possible solution is to pre-define the OTS OPC server to use the SYSTEM identity by installing it as a Windows service after the OTS installation itself.
To switch the OTS OPC server to be a Windows service, simply open a command prompt window and go to the folder where AspenOPCSimServer.exe sits, usually C:\Program Files\AspenTech\Aspen OTS Framework V7.1\Bin
and run
AspenOPCSimServer.exe /service
This will install AspenOPCSimServer.exe as a Windows service; as such it will always run under the single configured identity, and one can use the Windows Services console to manage its startup options.
Configuring the DCOM settings for the OTS OPC server is a bit more involved. The overall steps can be seen on the OPC Foundation web site: go to http://www.opcfoundation.org/Search.aspx and search for the literal "Using OPC via DCOM"; you will find a few links covering DCOM configuration for OPC servers on Windows XP and other platforms.
However, the key steps are to ensure the OPC server does not use the identity information passed along from AOD's OPC data source side. Note that in this context, AOD's OPC data source side acts as an OPC client.
For example, let's use INTERACTIVE as the launch identity for the OPC server. Within the dcomcnfg tool, the two key screens are the DCOM security dialog off the machine's property window, where INTERACTIVE is added with local access checked in both permission groups via the Edit Defaults buttons,
and the properties of the OPC server, where it is switched to run as the interactive user.
With this, the OPC server will always launch under the interactive user identity, disregarding the identity information passed along from OTS itself and from any other OPC clients such as AOD's OPC data source; as long as both OTS and AOD are run by the same interactive user, the interaction between AOD and OTS will be fine.
Keywords: connection, DCOM, suspend
References: None |
Problem Statement: Why is the number of alerts displayed in the status column of the user interface (i.e. HighHighHigh/LowLowLow, HighHigh/LowLow, High/Low) always different from the number of alerts shown in the Alerts tab? | Solution: The number of alerts displayed in the status column of the user interface differs from the number in the Alerts tab because the status column always displays the actual, current status of the tags within each hierarchy, whereas the Alerts tab includes the alerts that were notified and sent to you which are not older than the expiry and which you did not view or dismiss in a previous session.
Keywords: Alerts
InfoPlus.21 Mobile
References: None |