Problem Statement: How can I bypass the login prompt that requests username and password when launching the A1PE web page?
Solution: Follow these steps to change the Internet settings so that the page logs in automatically without prompting for a username and password:
1. Open Control Panel and select Internet Options to open the Internet Properties dialog box.
2. On the Security tab, select Local Intranet and click the Sites button.
3. Click Advanced.
4. Add the A1PE URL to the zone.
5. Back on the main dialog box, click Custom level.
6. Scroll down to User Authentication and select the radio button for "Automatic logon with current user name and password". Click OK and apply the changes.
7. Optionally, repeat steps 2-6 for Trusted Sites under the Security tab for good measure.
Keywords: A1PE, aspenOne Process Explorer, web page, skip, bypass, avoid, without, login, logon, sign in, sign-in, prompt, user, add, trusted, sites, local, intranet References: None
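For reference, the same settings can be sketched as a registry (.reg) fragment. This is a hedged sketch, not an official AspenTech procedure: "yourA1PEserver" is a placeholder host name, and the 1A00 value is the documented User Authentication: Logon setting for the Local Intranet zone (zone 1), where 0 is assumed to mean automatic logon with the current user name and password.

```
Windows Registry Editor Version 5.00

; Map the A1PE host into the Local Intranet zone (dword 1 = Intranet).
; "yourA1PEserver" is a placeholder; substitute your actual A1PE host name.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\yourA1PEserver]
"http"=dword:00000001

; 1A00 = User Authentication: Logon for zone 1 (Local Intranet).
; 0 = Automatic logon with current user name and password (assumption).
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1]
"1A00"=dword:00000000
```

Importing such a fragment per-user has the same effect as the GUI steps above; verify the resulting settings in Internet Options before rolling it out.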
Problem Statement: How do I perform a silent install of the SLM Tools?
Solution: aspenONE family-based product installs allow users to create install scripts for unattended installations. The stand-alone administration tools (such as SLM Tools and SLM Server) cannot be recorded. If necessary, SLM Tools can be silently installed using one of the command lines below (note the quotes around the .msi file names, which contain spaces):

msiexec /qb /i "SLM Tools V12.msi" REBOOT=ReallySuppress ATMODE=Install ADDLOCAL=ALL
msiexec /qb /i "SLM Tools V12 (64bit).msi" REBOOT=ReallySuppress ATMODE=Install ADDLOCAL=ALL

Please note that the 64-bit version of SLM Tools can only be used with 64-bit versions of the software; 32-bit software versions use the 32-bit version of SLM Tools, even when running on a 64-bit computer. NOTE: SLM Tools includes the SLM Configuration Wizard, SLM License Profiler, SLM Commute, and WLMAdmin (Server only).
The V14 SLM media adds a new module (Common Components). To silently install V14 SLM, please refer to the attached XML files: SLM Tools v14 32-bit.xml is for silently installing V14 SLM Tools on a 32-bit system, and SLM Tools v14 64-bit.xml is for a 64-bit system.
Once the XML file (install script) has been generated, it can be opened in a regular text editor such as Notepad and modified to include one of the command lines listed above. When an installation script is created, each product included in the script has a product number associated with it. To add the command line above, first determine the last product number in the script. See the example below: the example script has a total of 16 products to install, so adding the SLM Tools command line increases the product number to 17.
<PRODUCT16>C:\Documents and Settings\Whitakec\My Documents\AspenTech\V12.0\aspenonev8.0dvd1\Aspen PIMS Platinum\Aspen PIMS Platinum.msi</PRODUCT16>
<PARAMETER16>ACTION=INSTALL REBOOT=ReallySuppress ATMODE=Repair ADDLOCAL=Platinum.Standard, ASPENROOT=C:\Program Files\AspenTech\ INSTALLDIR=C:\Program Files\AspenTech\ ASPENWORKINGROOT=C:\Documents and Settings\All Users\Application Data\AspenTech\ ATSERVICEPASSWORD.F4810E23_2486_423D_9D97_E17ABF74B51D=uu66xxzzCCssnnJJ0 ATSERVICEDOMAINNAME.F4810E23_2486_423D_9D97_E17ABF74B51D=0 ATSERVICEUSERNAME.F4810E23_2486_423D_9D97_E17ABF74B51D=0 LM_SERVERLIST.BA3F7B28_13D7_4630_902E_55D684AF3A97=,HOUSLMSRV;default;, LM_LICFILELIST.BA3F7B28_13D7_4630_902E_55D684AF3A97=, SETUPTYPE=</PARAMETER16>
<PRODUCT17>C:\xxx\SLM Tools V12.msi</PRODUCT17>
<PARAMETER17>REBOOT=ReallySuppress ATMODE=Install ADDLOCAL=ALL</PARAMETER17>
<PRODUCT18>C:\xxx\SLM Tools V12 (64bit).msi</PRODUCT18>
<PARAMETER18>REBOOT=ReallySuppress ATMODE=Install ADDLOCAL=ALL</PARAMETER18>
</ProductList>
You will need to include the full path to the SLM Tools .msi file. You can point to it on the installation DVD or copy it to a folder on your computer. It is recommended to add the SLM Tools command line to the end of the script; otherwise you will have to adjust all the product numbers. Keywords: SLM Tools, silent install, package, atrununattended.exe, deployment References: None
Problem Statement: What is the procedure to install SLM Server Silently?
Solution: The aspenONE Installer does not support the record-XML-file option for SLM Servers and Tools. Users who want to set up multiple SLM Servers across regions can follow this article to install SLM Server silently. Please follow the procedure below:
- Download the attached files SLMServer.XML, SilentExecute.bat, aut.mst, and ATRunUnattended.exe to the location from which you would like to execute the installation.
Note: Different product versions need different XML files, so download the one that matches your needs:
SLM Server V14 32-bit.xml: silent install of V14 SLM Server on a 32-bit system.
SLM Server V14 64-bit.xml: silent install of V14 SLM Server on a 64-bit system.
SLM Server.xml: silent install of SLM Server versions prior to V14, on both 32-bit and 64-bit systems.
- Modify the XML and batch file parameters as follows:
Replace the \\MEDIALOCATION parameter with the location of the media/DVD files.
Replace the \\SCRIPTLOCATION parameter with the location of the script from which you will execute the batch files.
The following parameters are used in the XML file:
REBOOT: whether to reboot the PC after the installation
ADDLOCAL: main application to install
ASPENROOT: installation directory
INSTALLDIR: installation directory
ASPENWORKINGROOT: directory where the working files are kept
ATSERVICEPASSWORD: the password entered while recording the XML file (blank in this case)
ATSERVICEDOMAINNAME: the domain name entered while recording the XML file (blank in this case)
ATSERVICEUSERNAME: the user name entered while recording the XML file ("SYSTEM" in this case)
LM_SERVERLIST: the license server list entered while recording the XML file (blank in this case)
LM_LICFILELIST: the standalone license file selected while recording the XML file (blank in this case)
ASPENROOT64: installation directory
ASPENTECH64: installation directory
TRANSFORMS: use the specified transform file
- Open a Command Prompt (cmd.exe) window as an Administrator.
- Call the SilentExecute.bat file through the Command Prompt window.
Keywords: SLM Server, Silent, Unattended, License Server References: None
Problem Statement: What can be done to fix error messages like the following (SETCIM DB error and Data out of range) appearing in these fields?
Solution: Look three fields to the left at IO_VALUE_RECORD&FLD and make sure that it lists an appropriate record and field combination. For an IP_AnalogDef record it would be <tagname> IP_INPUT_VALUE. The error has been seen before when an incorrect field name is accidentally used (like <tagname> IP_INPUT_TIME). Note: IO_VALUE_RECORD&FLD is only changeable when IO_RECORD_PROCESSING (in the fixed area) is set to 'Off'. Keywords: None References: None
Problem Statement: How do I avoid issues when using Aspen Online projects with EDR models? Root Cause: When using Aspen Online projects with EDR models, either in standalone EDR or embedded into Aspen Plus or Aspen HYSYS, output tags may not get updated properly. The likely cause is that the Aspen Online service is not configured to run as Local System.
Solution: Please follow these best practices when using Aspen Online with EDR models: change the log-on account of the AOL Service to Local System, and enable TCP/IP connections for the SQL Express Server via SQL Server Configuration Manager. Keywords: Aspen Online, Best Practice, Aspen EDR, Aspen Shell and Tube Exchanger, Digital Twin, AOL References: None
Problem Statement: How do I enable AFW debug logging?
Solution: This knowledge base article shows how to enable AFW debug logging, which helps to troubleshoot AFW-related issues. Follow the steps below to enable logging on the AFW Security Server:
1. Ensure that you have Administrative Privileges to make changes to the machine's registry.
2. Click Start | Run and type regedit to access the registry. Note: Using the Registry Editor incorrectly can cause serious, system-wide problems that may require you to re-install Windows to correct them. Modifying the Windows Registry should only be performed by experienced Administrators.
3. Locate the key. 32-bit: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AspenTech\AFW. 64-bit: HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\AFW.
4. Right-click in the right panel, select New > DWORD (32-bit) Value, and name the value afwlog.
5. Double-click the afwlog value and enter the value 1.
6. Restart the AFW Security Client Service.
7. A log file called afwlog.txt will be generated in the folder C:\Users\xxx\AppData\Local\Temp (where xxx refers to the Windows login name).
8. Reproduce the issue and share the afwlog.txt file with AspenTech Support for further investigation.
9. Set the value of afwlog in the registry to '0' to disable logging.
Keywords: afwlog References: None
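For convenience, steps 3-5 can also be captured as a registry (.reg) fragment. This is a sketch mirroring the 64-bit key named in step 3; for 32-bit installations, use the Wow6432Node path instead.

```
Windows Registry Editor Version 5.00

; Enable AFW debug logging (set to dword:00000000 to disable again, per step 9).
; 32-bit installs use HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AspenTech\AFW instead.
[HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\AFW]
"afwlog"=dword:00000001
```

Remember to restart the AFW Security Client Service (step 6) after importing the fragment.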
Problem Statement: How do I limit access for different users on the PCWS?
Solution: Use AFW Security roles to assign different access levels to APC applications. In the example below, the student account is part of the ACOWebEngineer role, with access to all the applications. To limit a user or a group of users to specific applications, do the following:
1. Create a role.
2. Assign the user to the role.
3. Add application-specific access.
For step 1, go to the Configuration tab / Security / Users and Roles and create a new role by clicking "Add Role". In this example it is called CRUDEOPER; hit "Apply" to save your changes. For step 2, click Add User and select only the checkbox for the new CRUDEOPER role, then hit "Apply" to save your changes. For step 3, switch to the Permissions tab and add the DMC online host name and application name. We want to add the CRUDEOPER role only to the ATMCRD controller, so click the check boxes to give read/write access to standard and operator entries, and give read-only access to engineer entries. Hit "Apply" to save the changes. When the user then connects to the web page, the security settings allow them to see only one application (ATMCRD). Keywords: APC DMC3 AFW Roles PCWS References: None
Problem Statement: There is no readily available option in Global Display Data for Normal Volumetric Flowrate to show on the flowsheet.
Solution: Users can turn on the Volumetric Flowrate property in the Global Display Data option, but this volumetric flowrate reports the current total-phase volumetric flow rate, calculated at the current process temperature and pressure. To display Normal Volumetric Flowrate on the flowsheet through the Global Display Data option, follow the steps below:
1. Create a new property set and select VOLFLMX (Volumetric Flowrate for Mixture).
2. Select the required unit; for example, m3/hr if you would like to report ncmh.
3. Under Qualifiers, select Total for Phase.
4. Uncheck both Temperature and Pressure under system conditions. Input 25 °C for temperature and 1 bar for pressure to calculate the Normal condition (adjust to your required Normal condition).
5. Go to Global Flowsheet Display Options, enable the custom display option, and select the property set created above.
6. Click "Apply" to get the Normal Volumetric Flowrate displayed on the flowsheet.
Keywords: None References: None
Problem Statement: When navigating to the A1PE home page and the icons do not appear (the area where they normally appear is blank / empty) what sort of things may be checked in order to alleviate the problem? The suggestions in this document assume that the icons DID appear previously and that the page was working correctly at some point.
Solution: If traditional remedies, such as trying a different web browser (Google Chrome, Mozilla Firefox, Microsoft Edge, etc.), emptying the browser's cache, or trying the page from different computers, do not solve the problem, here are some suggestions:
1. Have the affected users log out of A1PE and close the browser windows. On the A1PE server, navigate to the users' private folders and delete the aspenone.workspace file. The file WILL be recreated when the user navigates to the page and logs in again. The file will usually be located here: C:\ProgramData\AspenTech\A1PE\Files\Private\<username>
2. If the prior suggestion does not fix the problem, replace the WorkspaceDefault.json file with a clean / default one. One for version 11 of A1PE is attached to this Solution. Please contact AspenTech Software Support if one from a different version is needed. The file is typically located here (on the A1PE server): C:\inetpub\wwwroot\AspenTech\ProcessExplorer
After replacing the file, open a Windows OS command prompt (formerly known as the DOS prompt) using right-click and 'Run as Administrator', and issue this command: iisreset
Wait for the command to complete. Then restart the Apache Tomcat * service. Once the service is restarted, have users log in to A1PE and see if the problem is fixed. * The name may be 'Apache Tomcat 8.5 Tomcat8' or something similar.
3. The blank A1PE homepage can also be encountered concurrently with a Process Data Error (Error Code: 503) in the A1PE administrator page. If you see both these errors, please review the following KB article: #000098375.
4. It may be useful to check the settings in IIS for the accounts used to start these App Pools: AspenProcessDataAppPool and AspenProcessDataAppPoolx64. The account should normally be the same account used to start the Aspen InfoPlus.21 Task Service (see below), but perhaps consider changing it to Local System (and restarting the App Pools). If the problem goes away, it would be worthwhile to investigate any potential problems with the previously listed account.
5. In at least one instance the problem was caused because the ADSA settings on the web server were different between the User settings and the System settings (and one may have been incorrect). Check the ADSA settings on the web server itself and make sure they are correct and the same (as per this screenshot).
UPDATE: A V12 WorkspaceDefault.json file has been added to the attachments. Please remove the "V12" from the front of the file name before using it. Keywords: None References: None
Problem Statement: Every attempt to start a MOC session fails with a single Error dialog reporting the message: java.lang.ExceptionInInitializerError The latest MOC debug file (found in C:\ProgramData\AspenTech\AeBRS\MOC\debug folder) may have the following set of messages logged: hh:mm:ss TID: 1: Exception java.lang.ExceptionInInitializerError at Aspentech.JdbcOdbc.JOConnection.<init>(JOConnection.java:46) at Aspentech.JdbcOdbc.JdbcOdbcDriver.connect(JdbcOdbcDriver.java:63) at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677) at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228) at m2r.DataModel.m2rDatabaseConnection.connect(m2rDatabaseConnection.java:276) at m2r.DataModel.m2rDatabaseConnection.init(m2rDatabaseConnection.java:231) at m2r.DataModel.m2rDatabaseConnection.<init>(m2rDatabaseConnection.java:221) at Util.dbConnection.dbConnection$ExternalDBConnection.<init>(dbConnection.java:76) at Util.dbConnection.dbConnection.connect(dbConnection.java:41) at Util.dbConnection.dbConnection.<init>(dbConnection.java:28) at Util.dbConnection.dbManagedConnection.<init>(dbManagedConnection.java:25) at Util.dbConnection.DefaultConnection.<init>(DefaultConnection.java:17) at Util.dbConnection.DefaultConnectionPool.createConnection(DefaultConnectionPool.java:25) at Util.dbConnection.DefaultConnectionPool.initialize(DefaultConnectionPool.java:21) at Util.dbConnection.dbManagedConnectionPool.<init>(dbManagedConnectionPool.java:38) at Util.dbConnection.DefaultConnectionPool.<init>(DefaultConnectionPool.java:15) at Util.dbConnection.DefaultConnectionManagement.create(DefaultConnectionManagement.java:31) at Util.dbConnection.ConnectionManagement.createConnection(ConnectionManagement.java:130) at Util.dbConnection.ConnectionManagement.createForImpl(ConnectionManagement.java:98) at Util.dbConnection.ConnectionManagement.createFor(ConnectionManagement.java:86) at 
Util.dbConnection.DefaultConnectionManagement.createFor(DefaultConnectionManagement.java:93) at Util.dbConnection.DefaultConnectionManagement.loadFromConfig(DefaultConnectionManagement.java:85) at library.symbol.chkSQLLib.initExecImpl(chkSQLLib.java:319) at library.symbol.chkSQLLib.initExec(chkSQLLib.java:291) at runtime.vm.chkLibraryManager.initExec(chkLibraryManager.java:168) at runtime.vm.chkVMRuntime.initLibs(chkVMRuntime.java:128) at Util.chkVMUtil.initLibs(chkVMUtil.java:72) at runtime.vm.chkVMRuntimeFactory.createVMRuntimeForLibrary(chkVMRuntimeFactory.java:52) at runtime.vm.chkVMRuntimeFactory.createVMRuntimeForLibrary(chkVMRuntimeFactory.java:44) at runtime.vm.chkVMRuntimeFactory.createVMRuntime(chkVMRuntimeFactory.java:39) at Notifier.watchdogClient$evaluationHandler.<init>(watchdogClient.java:346) at Notifier.watchdogClient.<init>(watchdogClient.java:96) at MOC.mocApp.initializeInterClientComm(mocApp.java:966) at MOC.mocApp.showWindow(mocApp.java:233) at Moc.main(Moc.java:22) Caused by: java.lang.NullPointerException at Aspentech.JdbcOdbc.JODebug.error(JODebug.java:107) at Aspentech.JdbcOdbc.JOSupport.<clinit>(JOSupport.java:67) ... 
35 more The latest APIApp debug file (found in C:\ProgramData\AspenTech\AeBRS\APIServer\debug folder) may have the following set of messages logged: hh:mm:ss: Exception java.lang.reflect.InvocationTargetException at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at runtime.Code.chkCallNativeCode.executeOnce(chkCallNativeCode.java:184) at runtime.Code.chkCallNativeCode.executeMethod(chkCallNativeCode.java:152) at runtime.Code.chkCallNativeCode.execute(chkCallNativeCode.java:94) at runtime.vm.chkVMRuntime.executeExp(chkVMRuntime.java:517) at runtime.vm.chkVMRuntime.executeStmts(chkVMRuntime.java:463) at runtime.vm.chkVMRuntime.executeStmts(chkVMRuntime.java:394) at runtime.vm.chkVMRuntime.execMethodOnCurrentThread(chkVMRuntime.java:320) at runtime.vm.chkVMRuntime.execMethodOnCurrentThread(chkVMRuntime.java:317) at runtime.vm.chkVMRuntimeRequirementEvaluation.evaluate(chkVMRuntimeRequirementEvaluation.java:76) at runtime.vm.chkVMRuntimeConditionEvaluation.evaluate(chkVMRuntimeConditionEvaluation.java:153) at runtime.vm.chkVMRuntimeConditionEvaluation.access$000(chkVMRuntimeConditionEvaluation.java:54) at runtime.vm.chkVMRuntimeConditionEvaluation$1.evaluateMethod(chkVMRuntimeConditionEvaluation.java:127) at runtime.recipe.phaseParamCompilationEngine.evaluateMethod(phaseParamCompilationEngine.java:252) at runtime.recipe.phaseParamCompilationEngine.evaluate(phaseParamCompilationEngine.java:96) at runtime.recipe.phaseParamEvaluationEngine.evaluate(phaseParamEvaluationEngine.java:94) at runtime.vm.chkVMRuntimeConditionEvaluation.evaluateCondition(chkVMRuntimeConditionEvaluation.java:136) at runtime.vm.chkVMRuntimeConditionEvaluation.evaluate(chkVMRuntimeConditionEvaluation.java:102) at Server.Notifier.ConditionEvaluationThread.evaluateCondition(ConditionEvaluationThread.java:151) at 
Server.Notifier.ConditionEvaluationThread.run(ConditionEvaluationThread.java:86) Caused by: java.lang.NoClassDefFoundError: Could not initialize class Aspentech.JdbcOdbc.JOSupport at Aspentech.JdbcOdbc.JOConnection.<init>(JOConnection.java:46) at Aspentech.JdbcOdbc.JdbcOdbcDriver.connect(JdbcOdbcDriver.java:63) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:247) at m2r.DataModel.m2rDatabaseConnection.connect(m2rDatabaseConnection.java:201) at Util.dbConnection.dbConnection$ExternalDBConnection.<init>(dbConnection.java:71) at Util.dbConnection.dbConnection.connect(dbConnection.java:33) at Util.dbConnection.ConnectionManagement.connectImpl(ConnectionManagement.java:79) at Util.dbConnection.ConnectionManagement.createForImpl(ConnectionManagement.java:51) at Util.dbConnection.ConnectionManagement.createFor(ConnectionManagement.java:41) at MOC.common.ebrDbConnections$ebrConnectionMgmt.createConnectionFor(ebrDbConnections.java:162) at MOC.common.ebrDbConnections$ebrConnectionMgmt.newConnection(ebrDbConnections.java:154) at MOC.common.ebrDbConnections$ebrConnectionMgmt.getConnFor(ebrDbConnections.java:149) at MOC.common.ebrDbConnections.getAPRMConn(ebrDbConnections.java:74) at MOC.common.ebrDbConnections.getAPRMUtil(ebrDbConnections.java:82) at Util.aprm.AprmUtil.getSQLUtil(AprmUtil.java:278) at Util.aprm.AprmUtil.selectSQL(AprmUtil.java:280) at Util.aprm.AprmRecordSet.populate(AprmRecordSet.java:61) at Util.aprm.AprmRecordSet.populateAndProcess(AprmRecordSet.java:76) at Util.aprm.AprmRecordSet.populateAndProcess(AprmRecordSet.java:73) at Util.aprm.AprmRead.readBatch(AprmRead.java:57) at library.symbol.chkAPRMLib.batchRecordRead(chkAPRMLib.java:100) ... 22 more
Solution: This problem occurs when the application is not able to write to the JdbcOdbc debug folder. By default, Everyone is given full control to this folder, but we have seen administrators perceive this as a security risk and inadvertently make the folder, at best, read-only. We recommend taking this opportunity to ensure ALL the sub-folders under C:\ProgramData\AspenTech\AeBRS\ have Modify access granted to Everyone. This can be achieved by opening a command prompt (Admin) window and running the following command:
icacls C:\ProgramData\AspenTech\AeBRS /grant everyone:(OI)(CI)M /T
On a similar theme, licensing issues could become apparent in MOC if the sibling SLM folder is not writable; this can be resolved using a similar approach:
icacls C:\ProgramData\AspenTech\SLM /grant everyone:(OI)(CI)M /T
Keywords: m2rDialog_OK Access is denied No Order can be executed Unable to acquire SLM_Aspen_eBRS_Main license. Application will stop References: None
Problem Statement: What is the login credential for opening Aspen Operator Training (AOT) user interface?
Solution: AOT uses a common log-in credential for all users to access the interface. For V11, V12, and V12.1, these credentials can be used: Username: ISAdmin Password: AspenTechAOT In future releases this credential may change; please contact AspenTech Support for updated credentials. Keywords: AOT, Log-in, User Interface References: None
Problem Statement: What is Aspen Knowledge In-Context?
Solution: Aspen Knowledge In-Context delivers curated, featured content that is seamlessly integrated within the flowsheet. This tool allows you to access relevant Aspen Knowledge material, providing relevant information that reflects your specific flowsheet topology and interactions with the process model. For example, content regarding Aspen Exchanger Design & Rating appears when you are working with an Activated Heat Exchanger. As a result, you can easily obtain the information needed to complete your current workflow. You can access targeted information from our database, including literature, training, eLearning content, Knowledge Base articles, content from the HTFS Research Network, and videos. Aspen Knowledge In-Context provides the following benefits: Aids you in solving complex asset optimization challenges in an easy-to-use interface. Facilitates improved search and discovery and successful knowledge delivery. Makes it easier for you to locate the necessary information to troubleshoot convergence or modeling errors. Provides best practices guidance and model building assistance. Provides convenient access to eLearning content. Allows you to share feedback with AspenTech to facilitate improved content delivery. The icon is used to indicate that Aspen Knowledge In-Context recommendations are available for the current form. Aspen Knowledge In-Context recommendations are available for forms with context-sensitive help within Aspen HYSYS and Aspen Plus V12 and later versions. Keywords: AspenONE Exchange, Knowledge, Articles, Help Guide, Search, Resources, Documentation References: None
Problem Statement: Contingency calculation in Economics projects
Solution: Contingency is calculated based on the Total Investment Cost (TIC) of the project minus the Contingency itself. The calculation breaks down as follows: Contingency = (Total Field Costs + Total Non-Field Costs (except contingency)) * Contingency Percentage Note: The Contingency Percentage is defined in the Contingency and Misc. Project Costs form in the Project Basis View. Attached is a project with a 15% Contingency defined and an Account Basis report with a manual calculation of the contingency for better understanding. Keywords: Contingency, Calculation, TIC References: None
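The calculation above can be sketched in a few lines; this is a minimal illustration of the stated formula, and the cost figures used in the example call are hypothetical (only the 15% rate comes from the attached project):

```python
def contingency(total_field_costs, total_non_field_costs_excl_contingency, contingency_pct):
    """Contingency = (Total Field Costs + Total Non-Field Costs excluding
    contingency) * Contingency Percentage, per the rule above."""
    return (total_field_costs + total_non_field_costs_excl_contingency) * contingency_pct

# Hypothetical figures, with the 15% contingency rate from the attached project:
print(contingency(1_000_000.0, 200_000.0, 0.15))  # 180000.0
```

Note that both cost terms exclude the contingency itself, which is why the result is not simply 15% of the final TIC.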
Problem Statement: What are the differences between blowdown valves and safety valves?
Solution: Both Blowdown Valves (BDVs) and Pressure Safety Valves (PSVs) are safety valves; however, there are some key differences in their applicability.
What are PSVs? The primary purpose of a PSV is the protection of life, property, and the environment. These valves are designed to open automatically to relieve excess pressure from a piece of equipment, and to reclose to prevent further release of fluid once normal conditions have been restored. In many cases, PSVs are the last line of protection. A safety valve is not a process valve or pressure regulator and should not be misused as such; it should operate for one purpose only: overpressure protection.
What are BDVs? The primary purpose of a BDV is to control a continuous flow under high differential pressure. These valves are designed to be opened manually to depressurize a piece of equipment, rapidly relieving stress before conditions become abnormal. The outstanding feature of this type of valve is that it can maintain fluid-tightness and is easily operated without the help of any wedging action.
Main differences between BDVs and PSVs: Blowdown valves do not open automatically at a set pressure, whereas PSVs open automatically at a fixed set pressure. Blowdown valves are operated by pneumatic action, whereas PSVs are operated by mechanical action; thus, PSVs are independent of system failure. Blowdown valves depressurize in a regulated (slow) way; when circumstances are abnormal, a PSV would be used instead. Keywords: Blowdown, safety, protection, depressuring, overpressure References: None
Problem Statement: How can I decipher Project Level Error messages, and where do I go in the Project Basis to fix them?
Solution: Here is a general list of the types of ERROR codes that you may see, and where to look in the Project Basis to fix the problem.
Project Level Error Messages
'A1 - 0': refers to a problem with the Code of Account Definitions
'A2 - 0': refers to a problem with the Code of Account Allocations
'A1 - 1': refers to a problem with Escalation and/or Material Indexing and/or Man-hour Indexing
'CS - 1': refers to a problem with Contract - Scope (Engineering)
'CS - 2': refers to a problem with Contract - Scope (Purchase Materials)
'CS - 3': refers to a problem with Contract - Scope (Installation)
'CS - 4': refers to a problem with Contract - Scope (Exceptions)
'CD - 1' ENTER EITHER COST OR PERCENT: refers to the Contractor definition in the Project Basis (Contracts - Contractor). Check each major heading in the contract definition to make sure that you have not entered both a cost and a percent; you can enter a cost or a percent, but not both.
'DR - 1', 'DR - 2', etc.: refers to a problem with Engineering Workforce - Drawing Types or Engineering Workforce - Drawing Count
'EN - 1', 'EN - 2', etc.: refers to a problem with Engineering Workforce - By Phase or Engineering Workforce - By Discipline
'ER - 1': refers to a problem with Equipment Rental
'EXT- 1': can refer to one of two items: an external file being used by the system (such as Civil, Building, Instrumentation Assemblies, Instrumentation Components, or Insulation), or a problem with Contracts - Contractors
'G - 1' through 'G - 11': refers to a problem in one of these items: Equipment Specs, General Piping Specs, Material Piping Specs, Custom Piping Specs, Civil/Steel Specs, Instrumentation, Electrical, Insulation, Paint
'GP - 3': refers to a problem with the PIPELINE area (Kbase v12.2 and above only). Typically, users forget that the total length specified for each pipeline area must equal exactly the sum of all the pipeline segments placed under that area.
'PC - 1' through 'PC - 4': refers to a problem with Process Control
'PD - 1' or 'PD - 2': refers to a problem with Power Distribution
'PS - 1' through 'PS - 2': refers to a problem with an item under the Project Execution Schedule Settings selection (Adjust Schedule and Bar Charts, Equipment Class Delivery Times, Equipment Item Delivery Times, or Add Barchart Items)
'T - 4': refers to a problem with Contingency and Misc Project Costs
'W1 - 1': refers to a problem with the Construction Workforce - General Rates
'W2 - 1': refers to a problem with the Construction Workforce - Craft Rates
'W3 - 1': refers to a problem with the Construction Workforce - Crew Mixes
'W4 - 1': refers to a problem with the Construction Workforce - Craft Names
'X2 - 1': refers to a problem with the Indexing - Material
'X3 - 1': refers to a problem with the Indexing - Man-hour
If you still cannot fix the problem, please send an e-mail to AspenTech Customer Support ([email protected]). Include the version of the software being used, along with the project file (*.IZP). Keywords: INFO, Information, WARN, Warning, ERROR, FATAL, Capital, Estimator, Scan, Messages, Project, Level, Economics References: None
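The mapping above lends itself to a small lookup table. The sketch below is purely a convenience derived from the list in this article (the code strings and area names come from the list; the helper function itself is not an AspenTech API):

```python
# Map of Project Level Error code prefixes to the Project Basis area to check.
# Derived from the list in this article; illustrative only, not an official API.
ERROR_AREAS = {
    "A1 - 0": "Code of Account Definitions",
    "A2 - 0": "Code of Account Allocations",
    "A1 - 1": "Escalation / Material Indexing / Man-hour Indexing",
    "CS": "Contract - Scope",
    "CD - 1": "Contracts - Contractor (enter either cost or percent, not both)",
    "DR": "Engineering Workforce - Drawing Types / Drawing Count",
    "EN": "Engineering Workforce - By Phase / By Discipline",
    "ER - 1": "Equipment Rental",
    "EXT- 1": "External file, or Contracts - Contractors",
    "G": "Equipment / Piping / Civil-Steel / Instrumentation / Electrical / Insulation / Paint specs",
    "GP - 3": "PIPELINE area (segment lengths must sum to the area total)",
    "PC": "Process Control",
    "PD": "Power Distribution",
    "PS": "Project Execution Schedule Settings",
    "T - 4": "Contingency and Misc Project Costs",
    "W1 - 1": "Construction Workforce - General Rates",
    "W2 - 1": "Construction Workforce - Craft Rates",
    "W3 - 1": "Construction Workforce - Crew Mixes",
    "W4 - 1": "Construction Workforce - Craft Names",
    "X2 - 1": "Indexing - Material",
    "X3 - 1": "Indexing - Man-hour",
}

def area_for(code):
    """Return the Project Basis area for an error code, trying the full
    code first and then its leading prefix (e.g. 'CS - 2' -> 'CS')."""
    return ERROR_AREAS.get(code) or ERROR_AREAS.get(code.split(" - ")[0], "Unknown")

print(area_for("W2 - 1"))  # Construction Workforce - Craft Rates
print(area_for("CS - 2"))  # Contract - Scope
```

If the lookup returns "Unknown", fall back to the full list above or contact AspenTech Customer Support as described.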
Problem Statement: Commonly Asked Petroleum Application Questions About Aspen Plus
Solution: 1. Aspen Plus does not generate distillation curves for a stream containing 4 pseudocomponents. Why? To generate a distillation curve, a stream must contain at least 5 pseudocomponents of non-zero flow, in order to generate distinctive data points at 10%, 30%, 50%, 70%, and 90%.
2. For streams with a significant amount of light components, the calculated Reid vapor pressure is usually off. Why is that? Are there any guidelines for using Reid vapor pressure? Reid vapor pressure is the absolute pressure exerted by a mixture (in pounds per square inch), determined at 100 F and at a vapor-to-liquid volume ratio of 4 (ASTM Method D 323). RVP is intended for characterizing the volatility of gasoline and crude oil, with a typical range of 1 to 20 psia; outside this range the accuracy may be poor. Therefore, RVP should not be applied to very light or very heavy streams.
3. How is the Reid vapor pressure calculated in Aspen Plus? The Reid vapor pressure is the vapor pressure of the liquid at 100 F, as measured according to ASTM D-323 procedures. Aspen Plus simulates these procedures with a series of flashes, as follows:
- Check if N2 or O2 is present; if so, determine their index values.
- Set up the ideal gas option set (SYSOP0).
- Calculate the volume for AIR at 32 and 100 degrees F, 1 atm.
- Determine the bubble point pressure of the liquid stream at 100 F.
- Saturate the liquid with air at 32 degrees F.
- Mix the liquid with a 4 vol% equivalent of air and flash at 100 F under constant volume.
- If the calculated Reid vapor pressure is greater than 26 psi, repeat without air saturation.
The Reid vapor pressure as measured by ASTM D-323 differs from the true vapor pressure of the sample due to some small sample vaporization and the presence of water vapor and air. Reid vapor pressure is often used to determine the appropriate type of storage tank (cone roof or floating roof) for petroleum stocks with undefined components.
4. What is the difference between Prop-Set REIDVP, RVP-ASTM, and RVP?
The Prop-sets REIDVP and RVP-ASTM are identical. Both are kept for upward compatibility, and can be requested like any other Prop-set. RVP, however, is available only if you define a petroleum property curve for the Reid vapor pressure on the ASSAY.PROP-Curve form, by providing a table of mid-percent distilled vs. Reid vapor pressure values.

5. The Aspen Plus calculated API gravity is quite different from that of PRO/II in some cases. What is the method used in Aspen Plus, and what are the assumptions/limitations?

The API Liquid Volume model implemented in Aspen Plus uses the following equation:
Vm = Xp Vp + Xr Vr
where
V = liquid molar volume
X = liquid mole fraction
m = mixture
p = pseudocomponents
r = real components
Vp (for the pseudocomponent liquid mixture) is calculated using a correlation based on API Figure 6A3.5 (API Technical Data Book, 4th edition). Vr (for the real component liquid mixture) is calculated by the mixture Rackett model. The variations in petroleum liquid density results are often caused by the number of cuts generated. Increasing the number of cuts or reducing the cut temperature intervals may improve the accuracy. Refer to Solution 103736 for more details. When multiple assays are present, the way they are blended can also affect the liquid density calculation. The choices include generating:
one common pseudocomponent set for all assays
one pseudocomponent set for each assay
some combination of assays and blends
Refer to Solution 103921 for more about one versus multiple pseudocomponent sets.

6. How is an assay broken into pseudo-components?

An assay is broken into pseudo-components based on the number of cuts on the True Boiling Point (TBP) curve. The middle point of each cut is used as the boiling point of that cut. By default, Aspen Plus generates 40 pseudo-components using the following cut temperatures:

TBP Range (F)   No. of Cuts   Increments (F)
100 - 800       28            25
800 - 1200      8             50
1200 - 1600     4             100

Users can change the default settings under Components, ADA Characterization, Generation.

7. Can users access Aspen Plus generated pseudo-components like real components? Users would like to access pseudo-component properties, such as Tc, Pc, Vc, API gravity, SG, and MW. Currently they are listed in the external report file.

No. Only a limited number of pseudo-component property parameters are reported as results in the GUI, and users cannot alter what is reported. To access and change pseudo-component property parameters, use user property model subroutines.

8. How does Aspen Plus handle petroleum properties among pseudo-components? For example, if only the bulk sulfur content is given, how does Aspen Plus distribute it to the pseudo-components?

Petroleum properties are treated as component attributes and attached to pseudo-components. When a property curve is given, the distribution of the property is based on the curve. When only a bulk property is given, it is evenly distributed among all pseudo-components.

9. How does Aspen Plus calculate motor and research octane number?

Octane number is calculated from the Octane curve entered with the assay. There are four (4) property-sets for octane number:
MOC-NO - Motor octane number
MOCNCRC - Motor octane number curve
ROC-NO - Research octane number
ROCNCRV - Research octane number curve

10. What is the difference between match and not-match light ends?

Light-ends (gases) are typically analyzed separately from the liquid fractions. The distillation curves from the lab normally exclude the light-ends. To generate a distillation curve reflecting the full distillation range of an assay, you need to use Match Light-ends. Match light-ends uses the boiling points of the light-ends components to determine the curve in the range from 0 to lt%, where lt% is the percentage of the light-ends in the assay. The default is to not match light-ends.

11. How does Aspen Plus match light ends?
When Match Light-ends is selected, the TBP curve, from the light-end fraction and below, is represented by the boiling points and concentrations of the light-end components. For example, given a light-end fraction = 0.05, a boiling point of the heaviest light-end = 64 F, and an original TBP curve value at 0.05 of 68 F: after matching light-ends, the final TBP curve will be 64 F at 0.05, and from 0 to 0.05 the curve will be calculated from the light-ends. The original TBP curve in the range from 0 to 0.05 is not used.

12. When using match light ends, sometimes I receive a warning message saying the temperature difference is too large. Under what conditions will Aspen Plus not perform matching light ends?

Match light-ends works only when the boiling point of the heaviest component in the light-ends falls within 10 F of the TBP curve at the light-end fraction. In the above example, if the original TBP curve at 0.05 is below 54 F or above 74 F, Aspen Plus will give an error message and not perform matching light-ends. To avoid this error, the user has to make sure that the light-ends analysis and the fraction of light-ends in the assay are accurate. To force matching light-ends when the temperature difference is > 10 F, you can:
Add or remove the heavy components in the light-end analysis.
Change the light-end fraction.

13. Can one enter viscosity data for a stream? For heavy petroleum fractions, the API methods do not cope well. If two viscosity points are available, 2800 cP at 275 F and 600 cP at 325 F, can they be used in the simulation?

You cannot enter the data directly in either the Assay input or the stream input. The current procedure is to substitute MUL2USR for the mixture viscosity model and write a Fortran subroutine that interpolates based on these two points. The subroutine fits a model of the type:
ln(mulmx) = aa + bb/T

14. How is pseudo-component specific gravity calculated?

Liquid molar volume is based on the Rackett or Cavett model. The default is Rackett.
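As a rough illustration of the standard Rackett form for saturated liquid molar volume (a sketch only, not Aspen Plus's internal implementation; the critical parameters below are made-up example values for a hypothetical pseudo-component):

```python
# Standard Rackett equation for saturated liquid molar volume:
#   V = (R * Tc / Pc) * ZRA ** (1 + (1 - T/Tc) ** (2/7))
# Tc in K, Pc in Pa; ZRA is the Rackett parameter (RKTZRA in Aspen Plus).

R = 8314.0  # gas constant, J/(kmol*K)

def rackett_volume(T, Tc, Pc, ZRA):
    """Saturated liquid molar volume in m3/kmol at temperature T (K)."""
    return (R * Tc / Pc) * ZRA ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))

# Made-up parameters for a hypothetical pseudo-component:
V = rackett_volume(T=311.0, Tc=650.0, Pc=2.0e6, ZRA=0.26)  # ~0.23 m3/kmol
```

Specific gravity then follows from the molar volume and the molecular weight of the cut.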
Refer to the Aspen Plus on-line help.

15. How is pseudo-component MW calculated?

There are nine (9) models for calculating pseudo-component molecular weight. Refer to the Aspen Plus on-line help.

16. How is the gross/net heating value calculated for a petroleum stream? Is the method the same for pure components and pseudo-components?

Heating value is also called heat of combustion. The heat of combustion of a substance is the change in enthalpy when that substance is converted to its final oxidation products by means of molecular oxygen. The beginning and ending states are:
Standard heat of combustion: 77 F and 1 atm
Gross heat of combustion: 60 F and 1 atm
The normal state for the water formed by the reaction is liquid in both cases. Since the sensible heat of water from 60 to 77 F is usually negligible in comparison with the heat of combustion, the gross and standard heats of combustion are approximately equal. The net heat of combustion is the heat evolved in combustion beginning and ending at 60 F with the product water in the gaseous phase. Therefore, the net heat of combustion is less than the gross heat of combustion by the heat of vaporization of the water product. The net/gross heating value can be reported on a dry or wet basis for a stream:
Dry basis - excludes water already present in the stream before combustion.
Wet basis - includes water already present in the stream before combustion.
The methods for calculating pure component and petroleum fraction heating values are different.
Petroleum fractions: The method is based on API Procedure 14A1.3, 4th Edition (1983). The heating value is a function of API gravity corrected for impurity concentrations of H2O, S and other inerts.
Pure components: Net Heating Value = -HCOM from the pure component databank.

17. How does Aspen Plus extrapolate values between 0% and the first distillation point, and between the end point and the 100% point, for the True Boiling Point curve? Suppose that the first point is at 10% and the last at 90%.
Aspen Plus extrapolates between 0 - 10% and 90 - 100% using two methods: Probabilistic and Quadratic. The default is Probabilistic, which assumes a normal distribution of boiling points and uses the last point provided to extrapolate to the initial and end points. Quadratic was introduced in Aspen Plus Release 9.1-3.

18. What is the difference between the Probabilistic and Quadratic methods?

When extrapolating the True Boiling Point curve in Assay Data Analysis, the default extrapolation method is Probabilistic. Probabilistic extrapolation uses the last provided point for calculating the values at the extreme ends of the curve, and assumes a normal distribution of boiling points. Aspen Plus can extrapolate to the 99% point to meet the light-ends analysis, or to the 0.5% point if no light-ends analysis is provided. Aspen Plus Release 9.1-3 (and higher) includes an option for quadratic extrapolation. You can find this option on the Components ADA/PCS.ADA-Setup form. If the upper (99%) limit or the lower (0.5%) limit is not an adequate bound for your purposes, you can obtain the value you need using the Components ADA/PCS.ADA-Setup form.

19. How do the initial (default = 0.5%) and final (default = 99%) boiling point settings affect extrapolation?

The setting determines at what percentage the end points are reported. For example, with the final point set at 99%, the temperature corresponding to 99% in the extrapolation is reported as the 100% temperature. These settings may be adjusted to match end points.

20. For viscosity, the API formula is limited to temperatures below 400 C (750 F) and a component MW of not greater than 7000. How does the program handle very heavy crudes or residues beyond these limits?

The procedure uses linear extrapolation for Watson K and API, based on chart 11-31 of the API Data Book, Fourth Edition.

21. How can Aspen Plus cope with downstream refinery products that are higher in olefinic components than the original crude?
For flowsheets with reactors, there should be two sets of pseudo-components: one set for the streams before the reactor block and another set after the reactor. Each set of pseudo-components should have its own assay data characterization. The reactor model will need to determine the flows of each pseudo-component for the reactor effluent.

22. How do I use a SEP block to separate pseudo-components?

A SEP block can only access pseudo-components entered on the Components.Main form or generated with Naming Option = LIST. It cannot access pseudo-components generated with the default Naming Option (NBP). You can set the Naming Option on the pseudo-component Generation form to LIST. The steps are:
Run the simulation once to obtain the pseudo-component break-down.
Go to the pseudo-component Generation (PC-Calc) form and change the naming option from NBP to LIST.
Enter the names of all the pseudo-components in the LIST fields.
Now the pseudo-components become accessible in the SEP block.

23. What is the procedure for using pseudo-components in a reactor model (e.g., RYIELD)?

To do this, it is necessary to associate pseudo-components that are generated during an ADA/PCS run with components on the Components.Main form. These components can then be used in a reactor model. Steps:
Perform an ADA/PCS run.
Create a component ID for each ADA/PCS fraction that you want to include in the reactor: go to the Components.Main form and enter a user-specified Comp ID of type 'Pseudo' for each component.
Enter the required properties for each of the above components.
The component IDs can now be accessed in the reactor model.

24. What is the difference between the five Naming Options in Pseudo-Component Generation?
NBP - use the normal boiling point to name each cut
LIST - use the IDs in the ID-LIST fields to name the cuts
NUMBERED - use integer numbers to name the cuts
ROUND-UP - use the upper temperature of the cut as its name
ROUND-DOWN - use the lower temperature of the cut as its name
For example, if a cut has an average T = 215.4 F and the cut temperature specification is 200, 250, ... F, the cut will be named as follows:

Naming Option   Cut Name (ID)
NBP             PC215F
ROUND-DOWN      PC200F
ROUND-UP        PC250F

25. Can I generate cuts at specified normal boiling temperatures?

No. You cannot specify a set of normal boiling temperatures (NBP) to generate cuts. What you can specify is the cut temperatures, such as 200, 225, 250, 275, 300, ... Aspen Plus will generate cuts at these temperatures and calculate the normal boiling point for each cut. With Naming Option = NBP, the cut names in the results or report file will not match the cut temperatures in the specification, although the actual cuts are generated at the temperatures specified by the user. Cut temperature and cut name are not to be confused: the specified cut temperatures are used to generate cuts at specific temperature points, and the cut name serves as the component ID for a pseudo-component. On the ADA/PCS.PC-Calc form, you can specify both:
Cut Temperatures - used to generate the cuts.
Naming Option - used to name the cuts.
The specified cut temperatures overwrite the default values (see the online help). There are five ways to name the cuts: NBP, LIST, NUMBERED, ROUND-UP and ROUND-DOWN.

26. How is the Pour Point calculated in Aspen Plus?

When a liquid petroleum product is cooled, a point can be reached at which the oil ceases to flow in a standard test. The pour point is defined as the temperature 5 F above that point. The user can input a pour point curve by supplying temperature values for the pour point at different mid-percent distilled points. Four such data points are required to define a property curve.
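A petroleum property curve is simply a set of (mid-percent distilled, property value) pairs. As a hedged illustration only (simple linear interpolation over hypothetical data; not the blending rule Aspen Plus uses internally), reading a pour point off such a curve might look like:

```python
# Hypothetical pour point curve: (mid-percent distilled, pour point in F).
# Four data points, as required to define a property curve.
curve = [(10.0, -40.0), (30.0, -10.0), (50.0, 20.0), (90.0, 80.0)]

def interpolate_curve(curve, pct):
    """Linear interpolation of a property curve at a mid-percent point."""
    pts = sorted(curve)
    if pct <= pts[0][0]:
        return pts[0][1]
    if pct >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= pct <= x1:
            return y0 + (y1 - y0) * (pct - x0) / (x1 - x0)

# Pour point at the 40% mid-percent point (halfway between -10 and 20 F):
pp = interpolate_curve(curve, 40.0)  # 5.0 F
```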
The value of the pour point may be accessed by two different prop-set properties. Prop-set property POURPT calculates the pour point of a stream based on the pour point property curve entered with the assay. Prop-set property PRPT-API calculates the pour point based on API procedure 2B8.1, as a function of molecular weight, specific gravity and kinematic viscosity.

27. Does Aspen Plus estimate DHFORM and DGFORM for pseudo-components?

Yes. Both are estimated by the Edmister method. Refer to the online help.

28. What are the limitations of the COSTALD method for calculating mole-volume? Can it be applied to pseudo-components of high MW? How does it compare to API or Rackett?

COSTALD is an empirical correlation that computes mole-volume from Tb, MW and SG. For very heavy components, the calculated liquid density may be abnormally high, so this method should not be used for pseudo-components of high MW. For example, set up a system that has one pseudo-component with MW = 980, GRAV = 0.894 and NBP = 750:
COSTALD: density = 2790 kg/m3
API or Rackett: density = 890 kg/m3

29. For a single-component stream, the pure-component and mixture densities differ significantly. The RHO prop-set uses DNLDIP (the DIPPR model) and RHOMX uses the Rackett model, even if the ThermoSwitch is set to use DIPPR. Why?

Aspen Plus uses the DIPPR model for pure components and the Rackett model for mixtures. VL2RKT (the mixture model) does not calculate mixture volume as the mole-fraction average of pure component volumes. It is a corresponding-states method in which the parameters are mixed (there are mixing rules for TC, RKTZRA, etc.). The pure-component model, on the other hand, allows both the Rackett and the DIPPR model.

30. Why is the end point of a D86 curve higher than the boiling point of the heaviest component in a mixture?

The end point (100%) is extrapolated from the last percentage point (such as 95%). Therefore, it can be higher than the boiling point of the heaviest component.

31.
Why does the distillation curve reported for an assay sometimes differ from the input curve?

This may be due to the presence of light-ends or curve fitting.

32. What value does Aspen Plus use for the end point and IBP of an assay?

SimSci uses the 98% point as the end point and the 2% point as the IBP, by default. By default, Aspen Plus uses 0.5% and 99% for the initial and end points, respectively. The setting can be modified by the user.

33. How can I change the number of pseudo-components generated?

This is under Components, Petro Characterization, Generation, Cuts.

34. Should I enter my light-end analysis in the stream input form or in the assay input form? Which is better?

In general, the light-end analysis is entered with the assay in the assay input form. On that form, you can also enter the specific gravity and molecular weight for each component. To enter the light-end analysis in the stream input form, the flow rate of each light-end component must be entered according to its concentration in the assay feed.

35. How many pseudo-components should I generate for a given assay? The Getting Started Guide shows how to do this but does not explain how to set the numbers.

As a rule of thumb, you should generate smaller (more) cuts at lower temperatures and larger (fewer) cuts at higher temperatures. The idea is to generate more cuts in the temperature range of high interest and fewer cuts in the temperature range of low interest. Cut intervals smaller than 5 F will likely not have much effect, and intervals larger than 25 F should be used with caution. The default cut setting is good for most applications.

36. How is the Weight Factor used in pseudo-component generation (PC-Calc)?

The Weight Factor determines how pseudo-component parameters (Tc, Pc, ...) are linearly averaged over the assays/blends. The default is 1.0. For example, given a cut of 100 - 120 C:

                Assay-1   Assay-2
Weight-factor   0.4       0.6
Tc, C           500       550

Average Tc = 0.4 x 500 + 0.6 x 550 = 530 C

37.
How are pseudo-components generated when multiple assays/blends are entered?

Generation under Components, Petro Characterization (PC-Calc in R9) controls pseudo-component set generation. When Generation is not specified (the default), Aspen Plus will generate one common set of pseudo-components for all assays and blends, averaged with Weight Factor = 1.0. All assays/blends will be accessible in the feed stream input form. When Generation is specified, Aspen Plus will generate one set of pseudo-components for each ID created under Generation, where one ID may contain several assays, blends or a combination of both. In this case only the assays/blends included in Generation will show in the feed stream form. Those not included will be treated as not used in the simulation and, therefore, become inaccessible in the feed input form. For example, suppose there are four assays A1, A2, A3 and A4, and under Generation two IDs are created:
G-1, containing A1 and A2 with Weight Factor = 1.0
G-2, containing A3 only
Aspen Plus will generate the first set of pseudo-components for G-1 and the second set for G-2. A1, A2 and A3 will show in the feed input. No pseudo-components will be generated for A4, and it will not show in the feed input form.

38. Is there a correlation for calculating Cloudpt?

No.

39. There are a number of different distillation curve conversion methods. Which one should I use? The question applies to both the ASTM D2887 and ASTM D86 conversions.

API94 is the latest and is recommended. The default is Edmister for D86 and API87 for D2887.

40. When should I change the Blend Options for a property?

If you have an in-house blending correlation and you know that it gives better results.

41. Which option of SOLU-WATER should be used?

Option 3 is recommended for most applications. Option 2 is the default for petroleum applications when Free-Water = YES.

42. What is the plan for future versions with the crude library (update or expansion)?

Currently there is no plan to update/expand the assay library.
Aspen Plus does have an interface to the Phillips Petroleum Assay Library, which contains up to 500 assays.

43. How can we use an in-house correlation for properties like assay viscosity?

Substitute user subroutines for the assay parameter models under Components, Petro Characterization, Property.

44. Are there plans to improve petroleum properties such as REIDVP and hydrate formation temperature and pressure?

No. There is no such plan.

45. What does Apply cracking correction do when the distillation curve type is ASTM D86?

ASTM D86 distillation is carried out at atmospheric pressure. When heated sufficiently, heavy fractions undergo thermal cracking before vaporization. The amount and severity of thermal cracking increase with increasing boiling point, contact time, pressure and temperature. Early editions of the API Data Book included a correction for cracking for observed ASTM D86 temperatures above 475 F. No correction for cracking is now recommended.

46. Is D2887 on a volume or weight basis?

D2887 is always on a weight basis.

47. The final boiling points (TBP, D86 and other curves) generated by Aspen Plus for the bottom product and the feed differ by up to 70 C. I would expect the final boiling points to be close together because both streams contain about the same amount of heavies.

The discrepancy is caused by end point extrapolation. Many users think that the initial and end points should correspond to the boiling points of the lightest and heaviest component or pseudocomponent in the assay. That is NOT true. As a matter of fact, the TBPs of an assay are a function of the component distribution: for two streams containing the same components but with different distributions, the TBP curves will differ. TBPs are defined by the cumulative mid-point mass fractions and the boiling temperatures of the components (pure or pseudo) in the mixture.
The cumulative mid-point mass fraction of a component is the sum of the mass fractions of all components lighter than it, plus 1/2 of its own mass fraction. Example:

          ---- Fraction ----    -- Cumulative Frac --
Pseudo    Feed       Residue    Feed       Residue     Tb, C
PC242C    0.004045   4.97E-06   0.002023   2.48E-06    242
PC253C    0.007435   1.08E-05   0.007763   1.04E-05    253
PC267C    0.008231   1.5E-05    0.015596   2.33E-05    267
PC281C    0.00921    2.12E-05   0.024316   4.14E-05    281
PC295C    0.010555   3.11E-05   0.034199   6.76E-05    295
PC309C    0.012675   4.83E-05   0.045813   0.000107    309
PC323C    0.018611   8.98E-05   0.061456   0.000176    323
PC336C    0.023724   0.000148   0.082624   0.000295    336
PC351C    0.025983   0.000215   0.107478   0.000477    351
PC365C    0.036273   0.0004     0.138605   0.000784    365
PC379C    0.057014   0.000845   0.185249   0.001406    379
PC392C    0.067484   0.001328   0.247497   0.002493    392
PC406C    0.058821   0.001595   0.31065    0.003954    406
PC420C    0.067442   0.002511   0.373781   0.006008    420
PC440C    0.134319   0.008157   0.474662   0.011342    440
PC468C    0.125365   0.014943   0.604504   0.022892    468
PC496C    0.087523   0.021416   0.710947   0.041071    496
PC524C    0.085013   0.044097   0.797215   0.073828    524
PC548C    0.059187   0.058576   0.869315   0.125164    548
PC579C    0.016588   0.037357   0.907203   0.173131    579
PC607C    0.015729   0.068538   0.923362   0.226078    607
PC635C    0.01623    0.118288   0.939341   0.319491    635
PC677C    0.033627   0.375979   0.964269   0.566625    677
PC720C    0.018917   0.245386   0.990541   0.877307    720

If Tb vs. cumulative fraction is plotted for the two streams, the curves will look different. Notice where the curves end: the cumulative mid-point mass fraction of the heaviest component is 0.99 for Feed and 0.877 for Residue. This means that points above 88 wt% (90%, 95% and the end point) for Residue have to be extrapolated. The extrapolation may well generate an end point higher than the boiling temperature of the heaviest component. The fact that the highest cumulative mass fraction for Feed is 99% explains why its TBPCRV end point is much closer to the boiling temperature of the heaviest component.
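The cumulative mid-point mass fractions in the table can be reproduced from the component mass fractions alone; a minimal sketch:

```python
def cumulative_midpoint(fractions):
    """Cumulative mid-point mass fraction for each component, taken in
    boiling-point order: the sum of all lighter fractions plus half of
    the component's own fraction."""
    out, lighter = [], 0.0
    for f in fractions:
        out.append(lighter + 0.5 * f)
        lighter += f
    return out

# First three Feed mass fractions from the table above:
feed = [0.004045, 0.007435, 0.008231]
cum = cumulative_midpoint(feed)
# cum[0] = 0.0020225 (table: 0.002023), cum[1] = 0.0077625 (table: 0.007763)
```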
The extent of extrapolation is controlled by the Assay Procedure (in R10). The specified values determine at what percentage the 0% and 100% points are reported. To improve end point calculation:
Increase the number of cuts.
Change the initial/final boiling point settings.
Use a different extrapolation method.

48. How do I apply a user correlation for assay physical property parameter (MW, Tc, Pc, ...) calculation in Aspen Plus?

Aspen has a suite of user routines for these property parameters. Currently they are not documented, but they can be obtained from Aspen Customer Support on an as-needed basis and used as templates for writing user models. Aspen plans to document and deliver these user models in a future product release.

49. How is a D86 curve converted to a TBP curve? Aspen Plus and PRO/II give different end points.

There are three (3) procedures for converting D86 data to TBP:
Edmister
Edmister-Okamoto
API Procedure 3A1.1, Vol. 1, 1994
As for the end-point difference between Aspen Plus and PRO/II, it has to do with differences in the curve-fitting technique, which is standardized by API. Test results from Release 9.3 are given below for the conversion of D86 to TBP, with the D86 data taken from the API 5th Edition (1992):

%Dist   D86 (F)       API94 (F)     ED-OK (F)     EDMISTER (F)
0       303.404785    241.655090    248.901810    236.724243
5       336.359467    295.528229    302.408966    290.206329
10      350.000000    316.537140    325.531799    313.357819
30      380.000000    372.577423    376.742188    365.252502
50      404.000000    411.190308    415.087708    404.005188
70      433.000000    451.185425    456.328430    445.771851
90      469.000000    496.695404    501.794098    491.966125
95      486.264587    511.385498    521.353027    510.775635
100     503.529144    538.973938    540.911987    529.585205

Conversion by API94 exactly matches the example in the API 5th Edition (1992).

50. How can I obtain the results of pumparound flows and side-stripper stage flows in PetroFrac?

In PetroFrac, pumparounds and side-strippers have their own pseudostream forms.
You can attach pseudo-streams to pumparounds and side-strippers. For pumparounds, you can specify whether the pseudo-stream is connected to the inlet or outlet. For side-strippers, you can specify the stage and phase of the pseudo-stream.

51. MultiFrac and PetroFrac report the liquid flow rates around the condenser with their own convention (different from RadFrac), which may cause confusion. For instance, the column profile seems to indicate that the liquid flow coming off the top stage is higher than the column liquid product rate, and the reported reflux ratio seems to be inconsistent with the reported flow rates. What is the convention?

The source of the confusion is the way that the liquid flow rates around the top of the tower are reported:
The liquid product rate reported in the column profiles includes the free water (it is wet).
The top stage liquid flow rate is water-free.
The subcooled liquid flow rate includes the hydrocarbon liquid product.
The following definitions should resolve the confusion:
Distillate liquid product (DL) in the stream report: DL = total liquid product - water decant
Stage-1 liquid flow rate (L1) in the column profile: L1 = vapor from stage-2 (V2) - vapor product (V1)
Liquid return to the column (LR): LR = stage-1 liquid flow (L1) - DL
Reflux Ratio (RR): RR = LR / (DL + DV)

52. How is the Flash Point calculated in Aspen Plus?

Flash point is a measure of the volatility and flammability of liquid petroleum mixtures. It is the lowest temperature at which a liquid will give off enough vapor to form a flammable mixture with air. The value of the flash point may be accessed by the following prop-set properties:
FLPT-API, the API method for determining flash point (ASTM-D86)
FLPT-PM, the Pensky-Martens method (ASTM-D93)
FLPT-TAG, the Tag method (ASTM-D56)
FLASHPT and FLASHCRV, user-specified assay property data for petroleum mixtures
For more information see Solution 115183.

53.
If a stream contains a significant amount of light components (vapor fraction), distillation curves, such as the D86 curve, will not be generated. What is the upper limit of vapor fraction above which distillation curves are not generated?

D86T is not calculated if the related stream contains fewer than four (4) components of significant mole fraction (i.e., a mole fraction greater than 1.D-2). D86T is also not calculated for streams that contain a lot of light ends (>0.8) or are hydrogen rich (>0.01). These limits were set to ensure the quality of simulation results.

54. What is free-water?

Free water is a term used in 3-phase (VLL) separation, referring to a liquid phase that contains mainly water and little hydrocarbon. In such a case, the amount of hydrocarbon in the phase is so low that it is insignificant to the simulation. The free-water assumption is often used in petroleum separation.

55. How is water solubility calculated, and how does one select water solubility option codes in 3-phase calculations?

There are several models for calculating water solubility in the organic phase in a 3-phase (VLL) equilibrium system. The water solubility option codes (0, 1, 2 and 3) determine which model to use for the simulation. The defaults are: Solu-water = 2, Free-water = YES for petroleum applications; Solu-water = 3, Free-water = NO for all other applications. When Free-water = NO, Solu-water = 3 is internally selected, and what is entered in the GUI is ignored when Valid Phases = Vapor-Liquid-Liquid; Solu-water is not ignored when Valid Phases = Vapor-Liquid. Choosing a water solubility option code is similar to selecting a property option set: it depends on the system. In the past, codes 0 and 1 were widely used in refining applications with Free-water = YES. However, code 3 is the most rigorous approach for dealing with 3-phase systems; code 2 is somewhere between 1 and 3.
Aspen Plus calculates the water k-value as follows:
k = gamma * (water fugacity coeff in organic phase) / (water fugacity coeff in vapor phase)
where:
Water fugacity coeff in organic phase: from the free-water option set (Solu-water = 0, 1, 2, 3)
Water fugacity coeff in vapor phase: from the primary option set (Solu-water = 1, 2, 3) or the free-water option set (Solu-water = 0)
Gamma: 1/(mole fraction of water saturated in organic) for Solu-water = 0, 1; ln(gamma) = G(1 - x)^2 for Solu-water = 2; from the primary option set for Solu-water = 3
The mole fraction of water saturated in the organic phase is calculated from the databank parameters A, B, C: ln(x) = A + B/T + CT. G is determined from gamma = 1/(mole fraction of water saturated in organic). Solu-water = 0 and 1 work fine when free water is ALWAYS present; when water is not saturated in the organic phase, the results will be off. Solu-water = 2 and 3 give more accurate results. Solu-water = 3 is rigorous and recommended. Water solubility increases exponentially with temperature, so a system that is saturated at normal temperatures may well become unsaturated at higher temperatures.

56. I have the distillation curves of the products but not the curve of the feed. How can I set up my simulation?

Enter the product distillation curves as assay input and the gas products as light-ends. Blend them to create the feed.

57. What is the definition of mid-percent distilled in assay property input?

Mid-percent distilled refers to a cut. A cut between 5% and 10% has a mid-percent distilled point of 7.5%. The property entered should be the property of the cut, not the cumulative property of the distillate or of the heavies left in the pot.

58. What are the furnace feed conventions in PetroFrac?

Stage duty on feed stage: similar to the on-stage feed convention; the specified furnace duty is added to the feed stage.
Single stage flash: similar to the above-stage feed convention.
Single stage flash with liquid runback: similar to the above-stage feed convention, with the liquid runback from the stage above the feed stage sent to the furnace instead of the feed stage.

59.
What is liquid runback in PetroFrac?

Liquid runback is the liquid flow from one stage to the stage below. Runback differs from the stage liquid flow, which includes side-draws (products or pumparounds). The Runback Spec is usually used to set column liquid flow rates and prevent dry stages. Keywords: Q&A, Petroleum Application, Aspen Plus References: None
Problem Statement: Is it possible to model an Ammonia Production process using natural gas as a feedstock?
Solution: AspenTech has developed an application example for this type of process, included with V7.0 and higher. The package is our proprietary work; consulting may be needed for technology transfer and further model development and validation. Attached is an example of this process, which will run in Aspen Plus 2006.5 and higher. To use the package, put all of the simulation files in one directory. The Fortran files have been compiled and linked into ammonia.dll; therefore, a Fortran compiler is not needed to use the basic package. These Fortran files are AspenTech's intellectual property and are not promised as a product deliverable. This model simulates an Ammonia Production process using natural gas as a feedstock and includes the following features:
A set of chemical species and property parameters for this process.
Typical process areas, including Desulfurization, Reforming Unit, Carbon Monoxide Conversion, Carbon Dioxide Removal, Methanation Unit, Synthesis Unit and Refrigeration, and the main streams connecting these units.
Usability features, such as an Excel file which allows the user to collect simulation results from the synthesis reactor.
Definition of property model parameters with user data.
A separate example file for the same process is also attached that will run in 64-bit Aspen Plus (V12 and higher). Keywords: None References: None
Problem Statement: History trend plots will not work from the PCWS using either Web.21 or aspenONE Process Explorer (A1PE) as the trend option under Preferences tab. The following error messages can be seen: Error message when opening using Web.21: CreateChart: (TypeError): Unable to get property 'ph' of undefined or null reference description: Unable to get property 'ph' of undefined or null reference number: -2146823281 Error message when opening using aspenONE process explorer, A1PE: Please use aspenONE with Internet Explorer 10, Google Chrome or Chrome Frame. Contact your administrator for assistance.
Solution: Root cause: First check that the web browser you are using is supported, refer to Platform Specifications: Platform Support | AspenTech This error can be seen even when using the supported web browser Internet Explorer (IE) Version 11. However, the root cause of the issue was that IE 11 was emulating unsupported IE version 5 via compatibility view settings. We can verify this is the case by opening the plots on the browser and hitting F12 key to see the Developer Console. If this is the issue, it will show a number 5 (or other unsupported version number) on the black tool bar right hand side by the monitor icon: Solution To fix this issue, within IE, click on the gear icon on the top right-hand side for settings and select Compatibility View settings. Within this dialog window, two actions must be performed: 1. Remove the PCWS URL that may have been explicitly added in the list of websites you've added to compatibility view 2. Clear the checkbox that says display intranet sites in Compatibility View. (You can verify whether the PCWS URL was added to intranet sites by selecting the gear icon for settings > Internet Options > Security tab > Local Intranet > Sites > Advanced and see the list under websites) After these two actions, close the browser and open it again, the history trend plots should show properly without the error message observed before. Compatibility View Settings dialog: Keywords: history, trend, plots, error, CreateChart, property 'ph' References: None
Problem Statement: By default, the MOC client will create a debug file, which can take up hard disk space on server or client machines. There can also be a concern with too many API log files being generated on the server.
Solution: To limit the number of log files, there is the debug key DEBUG_PURGE_PERIOD=xx from KB 000082220, which is applicable to both MOC and Apache. However, files are deleted at startup, which normally happens daily for MOC but not for Apache. Since V11, APEM keeps only the last 7 days of API Server logs. If too many files are still being generated, the key DEBUG_PURGE_CYCLE=i can be added to the flag files to keep files only up to a certain number of days. The steps are as follows: You can purge the debug files by adding a flag/key in the config.m2r_cfg file located in the C:\Program Files (x86)\AspenTech\AeBRS\ folder. Open config.m2r_cfg in Notepad and add the key DEBUG_PURGE_CYCLE=i, where i is the number of days of debug files to keep, with i being an integer value such as 5. Save the file and make sure you run the Codify All batch file in the same folder to process the configuration files. This key removes debug files on a cyclic basis. This setting applies to both MOC and Apache. Files are deleted on a cyclic basis, so a restart is not required. Keywords: DEBUG_PURGE_PERIOD Log file API Log Apache APRM Aspen Production Execution Manager Debug MOC References: None
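What a cyclic purge amounts to can be sketched in Python. This is only an illustration of the *effect* of DEBUG_PURGE_CYCLE=i (deleting debug files older than i days); it is not how AeBRS implements it internally:

```python
import os
import time

def purge_debug_files(folder, keep_days):
    """Delete files in `folder` whose last modification time is older than
    keep_days days - the effect the DEBUG_PURGE_CYCLE=i key is meant to have."""
    cutoff = time.time() - keep_days * 86400  # seconds per day
    removed = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

For example, `purge_debug_files(r"C:\Program Files (x86)\AspenTech\AeBRS\debug", 5)` would mimic DEBUG_PURGE_CYCLE=5 by keeping only the last 5 days of files (the folder path here is hypothetical).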
Problem Statement: Does Aspen HYSYS have a Header unit operation?
Solution: In Aspen HYSYS, there is no dedicated block for a header. However, it can be simulated with the Pipe Segment block or with the Separator block; in both you can specify the HOLDUP of the equipment. If you have many inlet streams, you can use a Mixer to combine them (the Mixer acts instantaneously in dynamics) and then connect it to the Pipe Segment. Keywords: Aspen HYSYS, Header, Unit Operation References: None
Problem Statement: A live alert was investigated, and the root cause was found to be a malfunctioning sensor that needs to be replaced. In some cases, when the faulty sensor cannot be fixed or replaced immediately, it is sensible to temporarily remove this sensor from its sensor group and deploy the agent back live. This KB article demonstrates the best practice for properly removing this sensor from its sensor group until the faulty sensor is fixed.
Solution: Disable Affected Sensor Step 1: From Mtell View, search for the triggered alert and select 'Acknowledge'. Note: do not close the alert here, so that it stays in the open state. Closing the alert would only trigger the same alert again, since the faulty sensor has not been fixed yet. Step 2: Clone the agent that triggered the alert. Step 3: Click 'Clone Sensor Group' and remove the affected sensor from that group. Step 4: Right-click the cloned agent and select 'Edit Agent'. Go to step 3 of the wizard and click the Sensor Group drop-down. Select the cloned sensor group, click Next, and finish the wizard. Step 5: Train the cloned agent and deploy it live. Re-enable Affected Sensor Step 6: After the faulty sensor is fixed, we can proceed to revert to the original settings. First, delete the cloned live agent in System Manager. Note that disabling the live agent from Agent Builder will not completely delete the live agent. Step 7: Delete the cloned sensor group. Step 8: Close the original alert in Mtell View. Your original agent is now back online and will send alerts when deviation from baseline normal is detected. Keywords: Live agent Remove sensor Mtell View Mtell System Manager Mtell Agent Builder References: None
Problem Statement: There are many reasons that a user might want to uninstall and re-install all Aspen products on the server, or just the APC software, including:
An installation issue the first time is causing problems in software functionality
An account without administrative privileges was used to install it
There was an issue with applying a CP, and since you cannot uninstall a CP, you have to uninstall and re-install the software
Upgrading to a newer version of APC software on the same server
Sometimes when running the uninstallation from the utility, some files may still be left over from the previous installation. Therefore, it is important to check and delete files in certain directories that may still be there, so that they do not interfere with the new installation. Before going through this procedure to fix a suspected installation issue, please contact AspenTech Support to first troubleshoot whether the cause is in fact the installation, especially if you're not sure. Important Note: this procedure includes steps taken to uninstall all AspenTech software. If you only want to uninstall APC products, make sure to only delete the files and folders associated with APC.
Solution: Bonus Tip: if you are using virtual machines, it would be a good idea to take a “checkpoint” of the machine before the uninstall and before the re-install, in case you need to come back to that point again. Before uninstalling, save the following files as a back-up:
For the APC Online Server:
C:\Program Files (x86)\AspenTech\CIM-IO\etc\cimio_logical_devices.def
C:\Program Files (x86)\AspenTech\Local Security\Access97\afw.mdb (in case you have some customized AFW roles defined)
Folder: C:\ProgramData\AspenTech\RTE\Vxx\Config (in case you need to refer to some old configuration information – you can re-configure things manually with Configure Online Server, but this folder will contain the results of those original settings if needed for reference)
C:\ProgramData\AspenTech\APC\Online\App\ folders (these folders contain the DMCplus and IQ application configuration files and models)
C:\ProgramData\AspenTech\APC\Online\cfg\*.user.message.config (message configuration file for suppressing or latching certain controller messages)
For the APC Web Server:
C:\ProgramData\AspenTech\APC\Web Server\Apps\...\*.usergroups.config (custom group definitions for specific controllers, if any have been created)
C:\ProgramData\AspenTech\APC\Web Server\Apps\...\*.user.display.config (custom display definitions for specific controllers, if any have been created)
C:\ProgramData\AspenTech\APC\Web Server\Products\...\*.user.display.config (custom display definitions for specific products, if any have been created)
C:\ProgramData\AspenTech\APC\Web Server\Flowsheet\uploads\schematics\Default\*.* (only for V12.1 and later: this contains APC Viewer flowsheet images)
C:\ProgramData\AspenTech\APC\Web Server\Flowsheet\DataFiles\flowsheet.db (only for V12.1 and later: this contains APC Viewer flowsheet definition information)
For the Aspen Watch Server:
See the Aspen Watch migration instructions for which files to back up: KB 000075689 - Procedure to move Aspen Watch data between computers while upgrading to
a newer version.
IMPORTANT: If IP.21 is installed (Aspen Watch Server), make sure the Startup at Boot box is unchecked in the InfoPlus.21 Manager before rebooting the server.
For All APC Servers
For all servers, it would be a good idea to open Services and check the “Log On As” account information. If you’re going to use the same one, make sure you have the password for it, because the re-install will require this information. This account needs to be a member of the local Administrators group and, if possible, a domain account, but it does not need to be a domain administrator account.
Uninstall:
Log on as the administrative user that you performed the original install with.
Run the Uninstall AspenTech Software (AspenTech Uninstaller) tool and select the checkboxes to remove Aspen products, either all or just APC as per your requirement.
Reboot the machine.
Clean up:
Log on as the same user you performed the uninstall with.
For a full uninstall, open “Internet Information Services (IIS) Manager” and remove all remaining Application Pools that begin with the word “Aspen”. Also remove all nodes under the Default Web Site that begin with: ADSA…, Aspen…, AT…, BPC…, IP21…, ProcessData…, SQLplus, VisualizationNavigation, Web.21 (DO NOT remove: aspnet_client or webctrl_client if those exist, or any others that don’t match this list of names).
Delete the following folders (if not already removed) for all Aspen software, or just the APC subfolders inside, as per your requirement:
C:\inetpub\wwwroot\AspenTech
C:\ProgramData\AspenTech\RTE (*** be sure to back up the files in C:\ProgramData\AspenTech\RTE\Vxx\Config for your reference if needed ***)
DO NOT delete other C:\ProgramData\AspenTech\ folders, as some of those folders contain IP.21 data files. The exception here would be when you truly want to wipe the system clean and don’t care about losing other Aspen-related data files. In that case, remove the entire C:\ProgramData\AspenTech folder.
C:\Program Files\AspenTech
C:\Program Files\Common Files\AspenTech Shared
C:\Program Files (x86)\AspenTech
C:\Program Files (x86)\Common Files\AspenTech Shared
C:\Windows\Microsoft.NET\assembly\GAC_32\AspenTech.ACP.Core.Temporal
C:\Windows\Microsoft.NET\assembly\GAC_32\AspenTech.ACP.Core.TypeEvolution
C:\Windows\Microsoft.NET\assembly\GAC_32\AspenTech.ACP.RTE.Remoting
C:\Windows\Microsoft.NET\assembly\GAC_32\AspenTech.ACP.RTE.Types
C:\Windows\Microsoft.NET\assembly\GAC_32\AspenTech.ACP.Services.Logging
Make sure there are no AspenTech services listed in the Services panel. If there are, you will need to clean these up in regedit.exe (search for regedit in the Start menu and select the command to open the registry). Go to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services node, look for the service name or description, and remove its corresponding node.
For a full uninstall, run Regedit (search in the Start menu and the program will show up) and delete the following registry nodes if they exist:
HKEY_CURRENT_USER\SOFTWARE\AspenTech
HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\AspenTech
Open Event Viewer, right-click any Aspen-related folders in the left navigation tree, and select Clear Log.
Reboot the machine.
Re-install:
Apply all Microsoft Windows Updates that are available. This may take a couple of reboots and re-checks.
Log on as the administrative install account.
Run the installer. Be sure to enter the appropriate Service account username and password when prompted (this account needs to be a member of the local Administrators group and, if possible, a domain account, but does not need to be a domain administrator account).
Install the software and reboot when prompted.
Log on as the same user again and wait for all post-installation command windows and processing to finish.
Install any patches and reboot if required.
Note: you must reboot after a Cumulative Patch (CP) installation for it to complete, before applying any subsequent Emergency Patches (EPs).
Log on as the same user after rebooting.
Proceed to configure licensing with the SLM Configuration Wizard (disable broadcasting under Advanced Settings).
Configure your Cim-IO logical devices. Check the cimio_logical_devices.def file to make sure it was not changed. If it was, you can replace it with the backed-up one if needed.
For RTE applications, run Configure Online Server and set up the IO Source(s). Check the Enable Server box and apply.
Verify that the PCWS is working by opening the web page. You may need to manually restart the Aspen APC Web Provider Data Service one time after installation is complete.
Deploy your applications.
Check the Windows Event Viewer for any recurring errors.
Keywords: apc, uninstall, reinstall References: None
Problem Statement: When upgrading Aspen APC products to a higher version, what is the recommended order for each server involved?
Solution: Before proceeding with an upgrade, you should have a step-by-step plan for the order in which you want to upgrade your APC servers. It is best practice to complete the upgrade for all of them in one go, rather than upgrading one and waiting long periods of time for the others, since the servers are not backwards compatible. Check out KB 000098125 for more info on AspenTech’s policy about mixed versions and side-by-side installations of APC software. Below is a recommended plan for the individual servers: 1. The SLM License Server must be upgraded before anything else because it has to be equal or higher in version than the SLM clients (this includes APC applications). KB 000081689 - What is the compatibility policy for Software License Manager (SLM) Server and Client? Upgrading the SLM License Server first is mandatory. The rest of the order below is recommended practice, especially for first-time users, and can be changed based on user preference. 2. The APC Web Server (PCWS) automatically installs Aspen Local Security (ALS) with it, which includes the roles and permissions required for users to perform tasks such as deploying controllers online and starting data collection in Aspen Watch. Therefore, the Web Server should be upgraded before the Online and Watch servers so they can acquire permissions. More background info on ALS is in KB 000099212: How to copy Aspen Local Security configuration from one server to another. Note that the PCWS is not backwards compatible, so you will not be able to view applications running on an older version of the APC Online Server. Once the Online Server is also upgraded and applications are deployed, you can view them on the upgraded Web Server. 3. Next you have the option to do either the Cim-IO Server or the APC Online Server. The order doesn’t matter, but if you upgrade the Cim-IO server first, you can then finish the APC Online server upgrade and establish the connection to Cim-IO right after. 4.
Lastly, you want to upgrade the Aspen Watch Server. It is better to wait to upgrade the Watch Server until the end because it can take the longest to do, especially if migrating the history to a new machine. Some users prefer to upgrade the Watch Server first so that the online applications can continue running on the old machines until it is necessary to upgrade them, so it is up to the user's preference. Similar to the Web Server, the Watch Server is not backwards compatible for the most part. When trying to collect data for ACO applications on an older version, it might work, as that data is being read from Cim-IO (via the DMCplus Context Server service). However, an upgraded Aspen Watch server will not be able to collect data for an RTE controller on an older version of the Online server, as RTE applications communicate using a WCF data contract, and the changes in .NET Framework versions across multiple APC versions can also change the data contracts. 5. Once the servers are upgraded one by one, be sure to follow the post-installation configuration steps in the APC Configuration Guide found on any APC server under this directory: C:\Program Files (x86)\Common Files\AspenTech Shared\APCConfigurationGuide. You may need to go back to a server that was upgraded first to complete some configuration steps that required another server to be installed. For example, the APC Web Server requires a connection via ADSA to be able to read historical data from the Watch Server. Keywords: apc, servers, order, steps, upgrade, install References: None
Problem Statement: The Aspen Advanced Process Control product suite offers two user interface platforms (ACO and RTE) to build and configure two types of APC controllers (DMCplus and DMC3). This article will explain the difference between them and how to check which ones you may be running on your APC Online Server.
Solution: Abstract: ACO (Advanced Control and Optimization) and RTE (Real-Time Environment) are the two platforms (i.e. groups of products) within the APC suite in which users can build models, configure applications, and then deploy online. ACO uses legacy tools like DMCplus Model and DMCplus Build to build the controller and then requires the corresponding *.MDL/*.MDL3 and *.CCF files to deploy online using Manage. RTE is the newer platform that combined all these functions, from building to deploying, into one tool called DMC3 Builder. In both platforms, users have the option to create either a DMCplus or DMC3 controller. DMCplus has all the basic features of the traditional linear control solution. DMC3 includes the same solution, with more advanced features added such as the SmartStep Automated Tester, Calibrate, and Adaptive Modeling products. In addition, when Aspen DMC3 is deployed through the RTE platform, Smart Tune and Robustness analysis are also included in the bundle. In the ACO platform, when you open a new DMCplus Model project, it will ask you to choose between a DMCplus or DMC3 application. In the RTE platform, when creating a new DMC3 Builder project, it will ask if you want a DMC3 or APC project (an APC project includes the option for creating a DMCplus controller). ACO (left) and RTE (right): A more detailed comparison can be found below. ACO versus RTE The ACO-based platform is the legacy platform that includes the following products for APC controllers: DMCplus Model, DMCplus Build, DMCplus Simulate, APCManage. Note that the ACO platform includes all legacy tools, so it is also used to configure and deploy Inferential Qualities (IQ) applications, using an IQF file loaded in APCManage. RTE is the newer platform that combined all the same functions into one product, DMC3 Builder. So, the main difference between these two is the way the controller is built and deployed.
ACO uses the legacy products and requires a *.CCF file (made using DMCplus Build) and *.MDL/*.MDL3 files (exported from a DMCplus Model project) to deploy the controller online. The new RTE platform uses DMC3 Builder to build and deploy the controller directly from the desktop application to the APC Online server.
ACO vs. RTE Products Used to Build and Deploy APC Controllers:
Build the Model: ACO uses DMCplus Model (the project file is *.DPP/*.DPP3; export a *.MDL/*.MDL3 file for online use); RTE uses DMC3 Builder.
Configure Controller: ACO uses DMCplus Build (*.CCF); RTE uses DMC3 Builder.
Simulate Controller: ACO uses DMCplus Simulate (*.PSM); RTE uses DMC3 Builder.
Deploy the Controller to the Online Server: in ACO, copy the *.CCF and *.MDL/*.MDL3 files to the appropriate folder on the Online Server (C:\ProgramData\AspenTech\APC\Online\app\<controller_name>) and use PCWS or APCManage to load the controller; in RTE, connect to the Online Server from DMC3 Builder via the TCP/IP port and deploy.
Start the Controller Process: in ACO, use PCWS or APCManage to start/run the controller; in RTE, use PCWS or DMC3 Builder.
Enable Aspen Watch to Collect Controller Data: in ACO, copy the *.CCF and *.MDL/*.MDL3 files to the appropriate folder on the AW server (C:\ProgramData\AspenTech\APC\Performance Monitor\app\<controller_name>), use AW Maker to add the controller, and use PCWS or AW Maker to enable monitoring; in RTE, use PCWS or AW Maker to start data collection.
The architecture for each platform is also different in the way data transfer is processed, as the ACO platform uses a Context and RTE uses a Cloud:
ACO-Based (*.CCF & *.MDL/*.MDL3) Platform Architecture:
RTE-Based (DMC3 Builder) Platform Architecture:
DMCplus vs. DMC3 Controllers
As mentioned above, DMCplus controllers have the traditional linear control solution and DMC3 has added features: In both platforms, when creating a DMCplus controller, the model file exported will have a *.MDL file extension, and a DMC3 controller will have a *.MDL3 file extension.
In the ACO platform, when creating a DMCplus controller in DMCplus Build, the project file extension will be *.DPP and when creating a DMC3 controller, the project will be *.DPP3 file. In the RTE platform, DMCplus controllers are created in an APC Project and DMC3 controllers are created in a DMC3 Project. Although it is not common to create a DMCplus controller in the RTE platform, here are the steps to do so: In DMC3 Builder, click on File > New Project to choose either a DMC3 or APC Project. As shown below, this APC project can include DMCplus controllers (using an *.MDL file instead of *.MDL3), as well as State-Space and Nonlinear controllers: After importing your dataset, a model is created by choosing the type of model – DMCplus FIR, Linear State Space, or Nonlinear: After running identification on the case and updating the Master Model, the DMCplus model may be exported either as *.MDL or *.APCMODEL: Similarly, if a DMCplus *.MDL model has been generated, it may be imported in Builder > Master Model > Cases, either from a file or from a different application in the project: How to Check Which Controllers You Have Running: If you know that the controllers were deployed using *.CCF and *.MDL/*.MDL3 files, then you have an ACO-based controller and if it is an *.MDL file it is DMCplus and *.MDL3 file it is DMC3. If you know you deployed using DMC3 Builder, you have an RTE-based controller. You can also check this on your web viewer PCWS > Online tab > Overview and look at the header under which your controller’s name is. It will say Aspen APC (RTE) or Aspen APC (ACO). To check which application you have, the controller’s name will have a little green box beside it that says “DMC3” if it is a DMC3 application. If it doesn't have DMC3 written beside it, then you have a DMCplus application. You can also open PCWS > Online tab > Manage view (or the program APCManage on the APC Online Server) to view the configuration. 
Example - Highlighted in YELLOW below are the platforms RTE and ACO. See annotations 1, 2, and 3 below the screenshot for an explanation. This is a DMC3 RTE Controller - notice the DMC3 written in the green box beside its name. This controller uses DMC3 Builder to perform all functions including create a model, configure, and deploy. This is a DMC3 ACO Controller - controller model file will be in *.MDL3 format, which can be seen in APCManage, and is deployed using APCManage. This is a DMCplus ACO Controller - controller model file will be in *.MDL format, seen in APCManage, and it uses all legacy tools to create the controller including DMCplus Build, DMCplus Model, DMCplus Simulate, and APCManage to deploy. Keywords: ACO, RTE, dmcplus, dmc3, meaning, difference, versus, explain References: None
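The file-extension rules described in this article (*.MDL means DMCplus, *.MDL3 means DMC3) can be condensed into a small helper. This is purely illustrative, not an AspenTech tool:

```python
def controller_type(model_file):
    """Classify an APC controller model file by its extension,
    following the naming rules described above."""
    name = model_file.lower()
    if name.endswith(".mdl3"):
        return "DMC3"
    if name.endswith(".mdl"):
        return "DMCplus"
    raise ValueError(f"Not a recognised APC model file: {model_file}")

print(controller_type("FCCU.mdl3"))  # -> DMC3
print(controller_type("crude.MDL"))  # -> DMCplus
```

Remember that the extension only tells you the controller type; whether it is ACO- or RTE-based depends on how it was deployed (*.CCF via APCManage vs. DMC3 Builder), as explained above.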
Problem Statement: Which version of OLI Engine is compatible with Aspen Plus V12 & V12.1?
Solution: Aspen Plus V12 & V12.1 require OLI Engine Build 10.0.2.1, or Build 11.0.1.3 Beta and higher. There is a 64-bit download of OLI for Aspen Plus V11 and higher. Keywords: OLI Engine, compatibility References: None
Problem Statement: The default time unit in an Aspen Plus Dynamics plot is hours; the user may want to change these units.
Solution: To modify the time units in an Aspen Plus Dynamics plot, follow these steps: 1. Go to the Run tab. 2. Select Run options. 3. Make sure that the Dynamic option is selected for Change simulation run mode. 4. In the Time units area, select the units to use for the plot. Keywords: Time, units, run, plot, dynamic, hours References: None
Problem Statement: Electrolyte NRTL is not available in the Rate-Based Distillation column in Aspen HYSYS V10.0.
Solution: Using Electrolyte NRTL in the Rate-Based Distillation column is not available in Aspen HYSYS V10.0, but from V11.0 onward it is available. Aspen HYSYS V10.0: Aspen HYSYS V11.0: Note: Use Aspen Properties as the Fluid Package type. Keywords: Distillation, rate-based, Electrolyte NRTL, fluid package, Aspen Properties References: None
Problem Statement: In Aspen HYSYS BLOWDOWN, convergence problems and wrong solutions may appear if there are components defined with a zero composition.
Solution: When more than one component is defined and you want to model a pure component, you must ensure that trace amounts of the other components are added to the pure component. If this is not done, the results may not be accurate, because flash problems with the BLOWDOWN thermodynamic package might occur. By applying this recommendation, the BLOWDOWN property package will not have flash convergence issues because of the described situation. Keywords: Depressuring, pressurization, wrong solutions References: None
Problem Statement: OptiPlant crashes when the user closes the Parameters form using the X in the upper right corner or by pressing Esc.
Solution: The root cause of the problem is that in V12.1, the event raised when the user closes the Parameters form using the top-right X or the Esc key is not handled. Hence, the data of the selected object (such as an air cooler) becomes corrupted, which causes the application to crash when it saves the project to disk. As a workaround for V12.1, the user must always choose the ACCEPT or CANCEL option to close the Parameters form. Fixed in Version 752034: The problem has been resolved in V14. Keywords: Crash References: None
Problem Statement: What is the application of the component Attributes form available under RCSTR Reactors while working with Polymers?
Solution: Use this sheet to specify how RCSTR is to determine values for component attributes in the outlet stream, when these components are created or changed by reactions in the reactor. Before using this sheet, define the attribute IDs and the constituent elements for each component with attributes on the following sheets: Components | Component Attributes | Selection, for conventional components; Methods | NC-Props | Property Methods, for nonconventional components. For each component whose attributes change in the reactor, you can select (in order) Substream ID, Component ID, and Attribute ID, and specify the values for the elements. For each selected Attribute ID, specify Value for at least one Element. Alternatively, you can choose to have RCSTR calculate the component attribute values in the outlet stream based on rates supplied by a user kinetics subroutine. To use this option, you must select a Reaction Set of type USER on the Reactions sheet. The user kinetics subroutine must also provide the rate of change of component attributes as described in Aspen Plus User Models. When you select this option, you can provide estimates for component attributes in the outlet stream by specifying Value, as described above. It is also possible to specify estimates for class 0 and class 2 polymer component attributes on this sheet. You can click Generate Estimates to fill in estimates for these attributes from a completed run. The component attributes on the reactor component attribute forms are initialization values only. In an ideal case the model should work with or without an initial estimate; sometimes it fails without one. If it works either way, it is expected to give the same result, since there is only one valid solution. There are real situations where multiple steady states are possible, and the initial estimates are used to steer the model to the expected steady-state condition. This is sometimes the case for exothermic reactions with duty-specified reactors.
Please note that there is a priority order when calculating attributes for outlet streams. 1. If there are no kinetics models in the RCSTR, users can use this table to specify the attributes of the outlet stream. 2. When there is a kinetics model being used in the reactor, the outlet attributes are determined by the calculated reaction rates. The specifications are treated as initial estimates. Keywords: Component Attributes form, RCSTR, Polymer Attributes References: None
Problem Statement: Which case types are supported by Aspen Simulation Workbook?
Solution: Aspen Simulation Workbook supports the following case types: Aspen Plus and the layered products based on these platforms (e.g., Aspen Polymers Plus) Aspen HYSYS, including HYSYS Upstream and HYSYS Refining Note: Only *.hsc files are supported; *.hscz files are not supported in this release. Aspen Exchanger Design and Rating Aspen Custom Modeler Aspen Plus Dynamics Aspen Chromatography Aspen Adsorption Aspen Model Runner The current version of the Aspen Simulation Workbook supports all run modes for these products. Initialization, steady-state, and dynamic runs are fully supported. Although estimation and optimization run modes are supported, the estimation data and results and optimization results variables are not exposed in the current ACM adapter. Keywords: ASW, supported case types, References: None
Problem Statement: Why do I not see the option for Sump or Chimney on my column internals?
Solution: Normally, you can specify a sump or a chimney tray on the column’s Rating | Sizing | Non-Uniform Tray Data form. Review KB article 000098828 for more information on how to access this form. For rate-based columns, such as the Acid Gas packages or ENRTL-RK Aspen Properties package, the sump and chimney tray internals are not supported. As a workaround, you may specify these internals as a separator external from the column. KB article 000058378 has an example of this workaround for Aspen Plus. Keywords: Sump, Chimney, Internals, Column Environment, Rate-Based, Tray, Packing References: None
Problem Statement: This knowledge base article explains why the end user is not shown a textbox to enter a SQL query when browsing to the Aspen SQLplus Web Service (http://<server_name>/SQLPlusWebService/SQLplusWebService.asmx) and clicking on the ExecuteSQL link.
Solution: The Aspen SQLplus Web Service form is not configured to be accessed remotely through the web browser. To make the form available for access from a remote machine, configure the web server as follows. 1. Launch a command prompt using Run as administrator. 2. Stop Internet Information Services (IIS) with the command below. iisreset /stop 3. Browse to C:\inetpub\wwwroot\AspenTech\SQLplusWebService in Windows Explorer. 4. Open Web.config in Notepad. 5. Add the following lines after <system.web>. <webServices> <protocols> <add name="HttpGet" /> <add name="HttpPost" /> </protocols> </webServices> 6. Save the file. 7. Start Internet Information Services (IIS) with the command below. iisreset /start Keywords: References: None
Problem Statement: How can I tell which AspenTech products I have installed on my machine?
Solution: Go to the Windows Start menu to see the list of applications, and under the folder Aspen Configuration you will see a program called Uninstall AspenTech Software (or you can just search for it in the Start menu). Don't worry, we're not actually uninstalling any products. This program opens the AspenTech Uninstaller and lists all of the aspenONE products that are currently installed on your machine. It also includes the aspenONE Version (this is the media release version) and the Product Version (this is the internal version tracker). The internal Product Version may differ between product suites, but it also tells you the latest cumulative patch that was applied to the product, as indicated by the number after the second decimal point. For example, using the screenshot below, Aspen APC Online is on V12.1 (the media release version), or internally the Product Version number is 20.1.1.0. The 20.1 part indicates V12.1, and the 1 after that indicates Cumulative Patch 1 was applied. So 20.1.1.x means V12.1 CP1 for this APC product. Keywords: installed, products, aspenone, check, find, version, cumulative, patch References: None
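As a quick illustration of the numbering scheme described above, the hypothetical helper below splits an internal Product Version string into its parts. Note that the 20.x-to-V12.x offset is inferred only from the APC example in this article and may not hold for other product suites.

```python
def describe_apc_version(product_version):
    """Split an internal Product Version string such as '20.1.1.0' into a
    release label and a cumulative-patch number.  The 20.x -> V12.x offset
    below is inferred from the APC example only; other product suites use
    different internal numbering."""
    major, minor, patch = (int(p) for p in product_version.split(".")[:3])
    release = "V{}.{}".format(major - 8, minor)  # e.g. 20.1 -> V12.1 (APC)
    return release, patch

release, cp = describe_apc_version("20.1.1.0")
# release is "V12.1" and cp is 1, i.e. V12.1 CP1 as in the article
```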
Problem Statement: I have some plant data (time, value) that I wish to use to set the value of a variable in a dynamic simulation. Can you suggest a way to implement this?
Solution: One solution is to use a model with the code shown below. The model lets you specify the values of the variable (x) and the time intervals between the data points (t). It uses a task to set the variable value (with assignments). Note that you can only use this with version 2004.1 and cumulative patch 4, which addressed some issues with for loops in tasks.
Model data
  n as integerparameter;
  x([1:n]) as realvariable (fixed);
  t([1:n]) as time_ (fixed);
  output_ as output realvariable;
  dummy as realvariable;
  dummy = sigma(x) + sigma(t) + output_;
  task doit runs at 0
    for i in [1:n] do
      output_ : x(i);
      wait t(i);
    endfor
  end
End
The attached file contains an example. Keywords: Task References: None
Problem Statement: Which databank is best for high temperature combustion and incineration? What is the Combustion databank?
Solution: The COMBUST databank is a special databank for high temperature, gas-only phase calculations. It contains parameters for 59 components typically found in combustion products, including free radicals. The CPIG parameters were determined from data in the JANAF tables for temperatures up to 6000 K (JANAF Thermochemical Tables, Dow Chemical Company, Midland, Michigan, 1979). Calculations using parameters in ASPENPCD and PURECOMP are generally not accurate above 1500 K. You may use the COMBUST databank only for ideal gas calculations (IDEAL option set) and only in the following unit operation models: MIXER, FSPLIT, SEP, SEP2, HEATER, HEATX, MHEATX, RSTOIC, RYIELD, REQUIL, RGIBBS, RCSTR, RPLUG, RBATCH, COMPR, MCOMPR, DUPL and MULT. You must enter phase=vapor (NPHASE=1) for each unit operation block for which it is applicable, and for each stream. The databank is not a way of simulating a burner or combustion process; RSTOIC and other unit operations can be used to do that. COMBUST contains the physical properties used in the unit operations. The only parameters available in the COMBUST databank are:
Parameter - Description
CPIG - Ideal gas heat capacity coefficients
DGFORM - Standard free energy of formation
DHFORM - Standard enthalpy of formation
MW - Molecular weight
NATOM - Vector containing numbers of C, H, O, N, S, F, Cl, Br, I, Ar and He atoms
In general, the COMBUST databank provides excellent agreement with literature for the ideal gas heat capacity. Both COMBUST and INORGANIC are comparable regarding flammability calculations, with INORGANIC having a larger set of combustion products.
For example:
INORGANIC Databank (uses up to 3 ranges; parameters CPIXP1, CPIXP2 and CPIXP3):
O2 - 3 ranges: 298-1000, 1000-3000 and 3000-5000 K
H2S - 2 ranges: 298-1000 and 1000-2500 K
HCL - 1 range: 298-3000 K
Cl2 - 1 range: 298-3000 K
H2 - 3 ranges: 298-1000, 1000-3000 and 3000-5000 K
CO2 - 2 ranges: 298-1000 and 1000-3000 K
H2O - 3 ranges: 298-1200, 1200-2500 and 2500-5000 K
COMBUST Databank (there are 2 ranges built into CPIG):
O2 - below 300 K and 300-6000 K
H2S - below 300 K and 300-6000 K
HCL - below 500 K and 500-6000 K
Cl2 - below 300 K and 300-6000 K
H2 - below 300 K and 300-6000 K
CO2 - below 300 K and 300-6000 K
H2O - below 400 K and 400-6000 K
Note: There are no liquid or solid properties of these components in either databank. Keywords: References: None
Problem Statement: How do I resolve SLM Server persistently getting Error code 26 after running the SLMClean Utility?
Solution: This knowledge base article lists the folders that you need to exclude from the antivirus application. According to Gemalto, the issue is caused by the antivirus application: the SLM Server auto-generates temporary files in the folders below and will report Error code 26 when the antivirus application scans and removes those temporary files. Kindly refer to kb000075106 for details on standard Aspen folder exclusions.
.\Windows\System32
.\Windows\SysWOW64
.\ProgramData\SafeNet Sentinel\Sentinel RMS Development Kit\System
Keywords: Error code 26 References: None
Problem Statement: Mtell Help Files are located in the path C:\ProgramData\AspenTech\Aspen Mtell\Suite\Help by default. It may be desired to move these files to a different path. A common reason for this is if the C drive is limited to Operating System files and other applications are stored on a different drive.
Solution: Note: Using the Registry Editor incorrectly can cause serious, system-wide problems that may require you to re-install Windows to correct them. Modifying the Windows Registry should only be performed by experienced Administrators. To move the Aspen Mtell Help files to a different folder: On the machine where Mtell is installed, open the File Explorer and navigate to the following path C:\ProgramData\AspenTech\Aspen Mtell\Suite\ Copy the Help folder Paste this folder in the new desired location Note the new file path, including the Help folder For example, D:\ProgramData\AspenTech\Aspen Mtell\Suite\Help\ In the Windows search bar type regedit, right click on Registry Editor, and select Run As Administrator Navigate to the following path: HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\Aspen Mtell\Suite\Configuration Right click on HtmlHelpExtractor and select Modify… Replace the C:\ProgramData\AspenTech\Aspen Mtell\Suite\Help\ portion of Value data with the new file path Before After Click OK Repeat steps 7-9 for HtmlHelpRoot and InstallPath Return to C:\ProgramData\AspenTech\Aspen Mtell\Suite\ and delete the Help folder You will need to close and reopen any instances of Mtell for the changes to take effect To confirm the change was successful: Open Aspen Mtell System Manager Click the Help icon in the top right corner If the change was successful, the Help window will populate If Mtell is unable to find the Help files at the path saved in the registry, you will see the following error. Reopen the Registry Editor and confirm the value you entered matches the location of the help files. Keywords: Mtell help files Move help files Help file location References: None
Problem Statement: How can the 'Aspen Process Data Service' service, which normally will be present on the Aspen InfoPlus.21 system, be added to the Windows Services applet if it is not present?
Solution: Open a Windows OS command prompt (formerly known as the DOS prompt) and issue this command: C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /i AspenTech.PME.ProcessData.WindowsService.exe Once you try it please make sure the service appears in the Windows Services. If it is not starting with a named account that is a member of the local administrators group please add that account (should be same one that is used to start the Aspen InfoPlus.21 Task Service). Note: In one instance the command prompt needed to be changed to the directory where the service executable is located before running the install command, so in that case it was “C:\Program Files(x86)\AspenTech\ProcessData” (even though it’s a 64-bit machine). Keywords: None References: None
Problem Statement: Microsoft patch KB5005568 (Sept 2021) impacts DCOM settings and raises the minimum authentication level for DCOM communication. Microsoft introduced hardening changes in DCOM that could affect CimIO for OPC communications, with a new OPC error message: 10036 - Please raise the activation authentication level at least to RPC_C_AUTHN_LEVEL_PKT_INTEGRITY in client application.
Solution: You must install Microsoft updates released June 8, 2021 or later and enable the following registry key on both server and client:
Path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Ole\AppCompat
Value Name: RequireIntegrityActivationAuthenticationLevel
Type: dword
Value Data: default = 0x00000000 means disabled; 0x00000001 means enabled. If this value is not defined, it will default to enabled. Use Value Data in hexadecimal format.
Note that you must restart the server and client in order for the registry key to take effect.
Microsoft support link: https://support.microsoft.com/en-us/topic/kb5004442-manage-changes-for-windows-dcom-server-security-feature-bypass-cve-2021-26414-f1400b52-c141-43d2-941e-37ed901c769c
Acknowledgment: Thank you, Chris Betts from Valero, for researching and reporting this issue. Keywords: None References: None
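If you prefer to script the change rather than edit the key by hand, a registry file with the following contents (matching the path, value name, and type listed above) can be imported. This is a sketch in the standard .reg file format, not AspenTech-supplied content.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Ole\AppCompat]
"RequireIntegrityActivationAuthenticationLevel"=dword:00000001
```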
Problem Statement: Video: Installing and Configuring V12.1 Aspen Unified
Solution: This video will walk you through the installation and configuration steps for V12.1 Aspen Unified. You can also find attached a pdf guide with the corresponding steps to follow. Keywords: None References: None
Problem Statement: Older versions of Aspen InfoPlus.21 could only have 113 file sets per repository. This was due to the fact that the DiskHistoryDef record (like any record) is limited to a maximum memory allocation which applies to all individual Aspen InfoPlus.21 records. As of Aspen InfoPlus.21 version 6.x, the 113 file set per repository limitation has been removed, does this imply that the maximum record size limit has also been removed?
Solution: There remains a 128 kilobyte (64*1024 word) record size limit in Aspen InfoPlus.21 (note, this was smaller before version 2006). In order to have the ability to have more than 113 file sets per repository, the file set information had to be removed from the DiskHistoryDef record in the database. Therefore, the archive information is no longer stored within the record but rather in shared memory. When the database is stopped, the archive parameters are now stored in the config.dat file. This is how it is possible to have more than 113 file sets, but each record still has a maximum size limit of 128KB. Keywords: Historian Parameters References: None
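The figures quoted above are consistent with a 2-byte word, which is an inference from the quoted numbers rather than a documented fact. A trivial arithmetic check:

```python
# 64*1024 words at 2 bytes per word gives the 128 KB record size limit
# quoted in the article (the 2-byte word size is inferred, not documented).
words = 64 * 1024
bytes_per_word = 2
limit_kb = words * bytes_per_word // 1024
```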
Problem Statement: How to create an ODBC connection through Excel
Solution:
1. Go to Administrative Tools and open the ODBC Data Source Administrator to create a new data source.
2. Go to the System DSN tab and click the Add button to bring up the Create New Data Source window.
3. Select the AspenTech SQLplus ODBC driver and click Finish.
4. On the SQLplus Setup window, type a name for your new ODBC data source and select an Aspen Data Source from the drop-down list. Then click the Test button to make sure the new data source is properly configured.
5. Click OK twice to go back to the initial window, where you will see the new ODBC data source (TEST for this example), and click OK to close this window.
6. Now open Excel and go to the Get External Data toolbar located under the Data tab.
7. Click the From Other Sources icon and select the From Microsoft Query option to open the Choose Data Source window.
8. In the Choose Data Source window, select the ODBC data source created previously (TEST) and click OK.
9. In the next window (Query Wizard - Choose Columns), select tables and columns to build your query and click Next.
10. Specify your filter criteria and click Next.
11. Specify how you want your data sorted and click Next.
12. Finally, select whether you want to return the data to Microsoft Excel or view/edit the query in Microsoft Query, and click Finish.
Keywords: ODBC SQLplus Driver Connection Excel References: None
Problem Statement: Setting up transfer functions to be used in Aspen HYSYS Dynamics requires special attention to some details, inputs and configuration of the solver to see the effect reflected.
Solution: You may follow the steps shown in the video below: Keywords: Aspen HYSYS Dynamics, Controller, Ramp, Transfer Function, PID, SP, Setpoint References: None
Problem Statement: When logging into Aspen Production Execution Manager (APEM) Mobile on the mobile web server itself, the error message below is encountered even though the web server itself has been added as a workstation.
Solution: This is due to browsing to the Aspen Production Execution Manager Mobile website using localhost in the URL instead of the server name. The issue can be resolved by either of the following methods:
1. Instead of browsing to the site using http://localhost/ApemMobile, use http://<server_name>/ApemMobile.
2. Add 0:0:0:0:0:0:0:1 as a workstation in the Config module.
Keywords: APEM Mobile Error loading workstation: 0:0:0:0:0:0:0:1 References: None
Problem Statement: Sometimes, depending on the complexity and structure of a simulation, it can become difficult to solve and obtain results. Equation Oriented mode is a useful way to converge simulations in Aspen HYSYS, and it offers additional benefits such as faster calculations.
Solution: The attached video explains what Equation Oriented (EO) mode is and how it differs from the default Sequential Modular (SM) mode. It explains how to create an EO subflowsheet, how to change a simulation from SM to EO mode, and some basic characteristics and advantages of this run mode. Main topics covered in this video:
Difference between SM and EO run modes in Aspen HYSYS
How to create an EO subflowsheet
Change from SM to EO mode
EO Variables list
Variable specification
Which are the different run modes in EO?
Variable Swapping
Keywords: EO, convergence, variables References: None
Problem Statement: Knowledge Center Diagnostics tool - User Interface
Solution: The attached tool is designed to help customers diagnose Knowledge Center issues they may face at their end. The tool performs various connectivity tests and provides information that enables customers to identify these issues, or to be copied & sent to the AspenTech Support team for further investigation and assistance. Please find the tool's usage information and instructions provided in the attachment. For V12.0/V12.1 download DiagnosticTool.zip For V12.2 download DiagnosticTool_V122.zip Keywords: Aspen Knowledge, Diagnostics tool References: None
Problem Statement: In V12.1, the APC Web Viewer can display Model Curves, Status, Curve Overrides, and Gains in the Model view of a controller. However, for large controllers, we cannot see the model curves even when the “Model curves” option is enabled.
Solution: To view the model curves for a large controller like this, zoom in on the model curves by dragging the target area over the curves you want to see. After zooming in, the model curves are visible as shown below. Keywords: New Web interface DMC3 V12.1 Model curve References: None
Problem Statement: In the History tab of the PCWS, we can see each controller’s KPI plot as indicated in the screenshot below. But this is only for a single controller, and the format is fixed. How do we create custom KPI Groups that contain KPIs for several controllers?
Solution: We can create KPI Groups from the PCWS. To create a new KPI Group, go to PCWS > History > AW Maker > Tag Group Manager. Edit the KPI Group as in the example below, adding the KPIs that you want in the new KPI plot. You can add any controller's KPIs. You can also add a header using the plus button on the right-hand side. Once completed, click the Save button. Now you can see the created KPI Group as shown below. Keywords: AspenWatch KPI Group PCWS Custom View References: None
Problem Statement: SPYRO6 is a 32-bit program that previously worked well in integration with Aspen Plus V10 and older. After upgrading to Aspen Plus V11 and newer, which are 64-bit programs, SPYRO6 with the existing setup no longer works.
Solution: To make SPYRO6 continue to work with 64-bit Aspen Plus (V11 and newer), some changes are required to the relevant files.
1. Modify the rtopt.opt file
The default rtopt.opt file in the Model folder consists of 2 lines: one line is the path pointing to zespyro.dll and the other is the path pointing to usrkti.dll in the Aspen Plus installation directory. Remove the line with the path pointing to usrkti.dll. Modify the remaining line so that the path points to zespyro.dll in the installation directory of the Aspen Plus version that you want SPYRO6 to work with. The modified rtopt.opt file shall look like the screenshot below:
2. Modify the CFG file
You need to add a new line under SPYRO Subr Name. The new line is the path pointing to usrkti.dll. The new line should look like the one below if your usrkti.dll and pyrotec.ini are inside a folder named “SPYRO_File” that is in the same directory as the CFG and Model folders.
SPYRO Subr Path = '..\SPYRO_File'
If the SPYRO model does not work using the above configuration, copy the usrkti.dll and pyrotec.ini files to the directory below and point SPYRO Subr Path to the same directory:
C:\Program Files\AspenTech\Aspen Plus V12.1\Engine\ComSpyro6
SPYRO Subr Path = 'C:\Program Files\AspenTech\Aspen Plus V12.1\Engine\ComSpyro6'
When using the above Subr Path, please note that the whole line length should not exceed 80 characters. You can delete unused spaces (between Path and the = sign) to shorten the overall length. The CFG file shall look like the example below when using step ii.
3. If you have more than one SPYRO model in the same flowsheet using the same CFG file, duplicate the CFG file so that each SPYRO model uses its own CFG file as well as its own KTI file. For example, you can have XXX-01.cfg for the first model and XXX-02.cfg for the second model; duplicate the KTI file and rename the copies to XXX01.kti and XXX02.kti. Do not forget to update each CFG file to point to the right KTI file, and do not forget to update each SPYRO model | Parameter | Run Control form to point to the right CFG file. Keywords: None References: None
Problem Statement: How to model an ice bath for cooling?
Solution: Attached is a simple simulation file created using a HeatX heat exchanger block with methane (CH4), water (H2O), solid ICE, and nitrogen (N2) in version V7.1. The electrolyte wizard was used to set up Chemistry reactions to model ice formation. The light gases are specified as Henry's components (for this example, CH4 and nitrogen). A small amount of nitrogen is added to make the flash calculations work and avoid singularities. Please refer to the attached example file for more details. Keywords: ice bath References: None
Problem Statement: When writing a subroutine inside Aspen Custom Modeler, shortcuts/hotkeys can be used; the list of the different shortcuts/hotkeys can be long and hard to remember.
Solution: When writing a subroutine inside Aspen Custom Modeler, the shortcuts/hotkeys show the lists of available parameters, variables, ports, models, structures, streams, and procedures. The following table shows the shortcut/hotkey for each item. Keywords: Shortcut, subroutine, variable, port, model, Alt, hotkey. References: None
Problem Statement: How to change the material for the heat exchanger components inside Aspen Shell and Tube Mechanical and Aspen Shell and Tube Exchanger V12.2?
Solution: In order to change the material for the heat exchanger components inside Aspen Shell and Tube Mechanical and Aspen Shell and Tube Exchanger V12.2, follow these steps:
1. Go to the Material tab in both applications. The path for Aspen Shell and Tube Mechanical is Input/Material/Main Materials/Material Specifications. The path for Aspen Shell and Tube Exchanger is Input/Construction Specifications/Materials of construction.
2. Click on Search Databank (Aspen Shell and Tube Mechanical) or Databank Search (Aspen Shell and Tube Exchanger).
3. In the new pop-up window, in the second table, select the component whose material you will change, for example, Flange at front head cover – material.
4. In the first table, look for the material, for example, SA-285 K02801, select it and click Set.
5. Click OK. You can now see the selected material for the selected component.
Keywords: Material, component, databank, select material, change material. References: None
Problem Statement: It is crucial to select the right Pressure Drop Calculation Option when creating a Fired Heater in EDR, as this will impact the way the inputs are processed to calculate the results.
Solution: The pressure drop calculation option can be selected in two different locations: Input| Problem Definition| Process Data| Streams Input| Program Options| Pressure Drop| Process Streams The three Pressure Drop Calculation Options for a Fired Heater in EDR are: Predict Outlet Pressure This will allow the program to calculate the outlet pressure of the process stream keeping the inlet pressure fixed. The value of the pressure for the outlet stream will be updated in the results section according to the calculations even if a value for the outlet pressure was provided in the Process Data Form. Predict Inlet Pressure Following the same logic of the previous option, the program will calculate the inlet pressure of the process stream keeping the outlet pressure fixed. The value of the pressure for the inlet stream will be updated in the results section according to the calculations even if a value for the inlet pressure was provided in the Process Data Form. Checking (in+out fixed) This option will allow doing a pressure rating scenario in which both the inlet and outlet pressure will be used for the calculations and will remain as specified in the results section. Keywords: Pressure changes, different than specified, modified, operation mode References: None
Problem Statement: This article briefly describes the steps to perform Principal Component Analysis (PCA) using Aspen Unscrambler. PCA can be used to reveal the hidden structure within large data sets. It provides a visual representation of the relationships between samples and variables.
Solution: Step 1: When a data matrix is available in the Project Navigator, access the menu for analysis by PCA from Tasks – Analyze – Principal Component Analysis. The PCA dialog box has the following tabs: Model Inputs, Weights, Validation, and Algorithm.
Step 2: Go to the Model Inputs tab. In the Model Inputs tab, select a matrix to be analyzed in the Data frame. Select pre-defined row and column ranges in the Rows and Cols boxes, or click the Define button to perform the selection manually in the Define Range dialog. Once the data to be used in modeling are defined, choose the number of Principal Components (PCs) to calculate in the Maximum Components box. The Mean center data check box allows a user to subtract the column means from every variable before analysis. The Identify outliers check box allows a user to identify potential outliers based on parameters set up in the Outlier limits tab.
Step 3: Go to the Weights tab. The Weights tab is used to weight the individual variables relative to each other. This can be used to give process or sensory variables equal weight in the analysis, or to down-weight variables you expect not to be important. Select the weights from the options below:
A/(SDev + B): This is a standard deviation weighting process where the parameters A and B can be defined. The default is A = 1 and B = 0. The Pareto check box performs Pareto scaling, which divides by the square root of the standard deviation instead.
Constant: This allows the weighting of selected variables by predefined constant values.
Down weight: This allows the multiplication of selected variables by a very small number, such that the variables do not participate in the model calculation, but their correlation structure can still be observed in the scores and loadings plots and, in particular, the correlation loadings plot.
Block weighting: This option is useful for weighting various blocks of variables prior to analysis so that they have the same weight in the model. Check the Divide by SDev box to weight the variables with the standard deviation in addition to the block weighting.
Step 4: Go to the Validation tab. Choose the validation type from the available validation types, and use the Cross validation setup for the available cross-validation options.
Step 5: Go to the Algorithm tab. The Algorithm tab provides a choice between the PCA algorithms NIPALS and Singular Value Decomposition (SVD). Keywords: Principal Component Analysis, PCA, Unscrambler References: None
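The A/(SDev + B) weighting from Step 3 can be sketched in plain Python. This illustrates the formula only (and assumes the sample, n-1, standard deviation); it is not Unscrambler's internal implementation.

```python
def sd_weights(columns, a=1.0, b=0.0):
    """One weight per variable (column): A / (SDev + B).  The sample (n-1)
    standard deviation is assumed here; Unscrambler's exact convention may
    differ."""
    weights = []
    for col in columns:
        n = len(col)
        mean = sum(col) / n
        sdev = (sum((x - mean) ** 2 for x in col) / (n - 1)) ** 0.5
        weights.append(a / (sdev + b))
    return weights

# Two variables on very different scales get comparable influence after
# weighting: multiply each column by its weight before running the PCA.
cols = [[1.0, 2.0, 3.0], [100.0, 200.0, 300.0]]
w = sd_weights(cols)
scaled = [[x * wi for x in col] for col, wi in zip(cols, w)]
```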
Problem Statement: This article briefly describes the steps to perform a projection onto a latent space model in Aspen Unscrambler.
Solution: Once a latent space model has been created on a set of samples, the samples comprising the model are projected into the new space and can be visualized in the scores plot. New samples can be projected onto the same scores space (i.e., the model). This is done by matrix multiplication of the new data and the loading vectors. This method is applicable to Principal Component Analysis (PCA), Principal Component Regression (PCR) and Partial Least Squares Regression (PLSR) techniques.
Step 1: To access the projection functionality, use the Tasks – Predict – Projection menu option. The Project to Latent Space dialog box will open.
Step 2: To run a projection, open the project containing a valid PCA, PCR or PLSR model. In the case of PCR or PLSR, only entire or full prediction models in the project will be available for selection.
Step 3: Enter data in the following fields:
Components: Allows the user to choose the number of components to use for projection. The set number of components for the model will be displayed and used by default.
Pretreatment: Specify which pretreatments to apply automatically before prediction. Only pretreatments saved with the selected model can be used.
Outlier Limits: Specify the warning limits. By default, the same limits as saved with the calibration model are used. Active only when the Identify Outliers option is checked. Details are given in the Set Outlier limits section.
Identify Outliers: This option enables automatic identification of outliers based on predefined criteria.
Data: Matrix: Here the matrix with the projection data is selected. Use the Rows and Cols drop-down lists to define the input range of the projection data matrix. Use the Define button to select the input range interactively.
Step 4: Click OK to perform the projection. Keywords: Projection, Unscrambler References: None
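The matrix multiplication described above can be sketched as follows. This is a plain-Python stand-in for the projection step, assuming a mean-centered model (so the calibration means are subtracted from new samples first); it is not Unscrambler's internal code.

```python
def project(new_rows, means, loadings):
    """Project new samples onto an existing latent-space model.
    loadings[j][c] is the loading of variable j on component c."""
    n_comp = len(loadings[0])
    scores = []
    for row in new_rows:
        # Subtract the calibration means, then multiply by the loadings.
        centered = [x - m for x, m in zip(row, means)]
        scores.append([
            sum(centered[j] * loadings[j][c] for j in range(len(centered)))
            for c in range(n_comp)
        ])
    return scores

# With identity loadings the scores are simply the centered data:
scores = project([[3.0, 5.0]], means=[1.0, 1.0],
                 loadings=[[1.0, 0.0], [0.0, 1.0]])
```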
Problem Statement: Which fluid package is recommended for a mixture of gas, oil and water in Aspen HYSYS?
Solution: For a mixture of gas, oil and water with components such as CO2, N2, hydrocarbons, benzene, toluene, xylenes, H2S and water we recommend using the CPA fluid package. We either have the water kijs for most binaries or an estimation formula for higher molecular weight hypos. Key Words: Aspen HYSYS, fluid package, gas/oil/water mixture Keywords: None References: None
Problem Statement: Has AspenTech incorporated the data published by Katz-Firoozabadi in estimating properties of hydrocarbons?
Solution: Yes, Aspen HYSYS does have the Katz-Firoozabadi methods for estimating liquid density and molecular weights. This can be setup from the hypotheticals manager. Key Words: Aspen HYSYS, Katz-Firoozabadi, hypos Keywords: None References: None
Problem Statement: Why is Aspen Online requiring access to an ENG license for Aspen Plus when I set it up to use MSC tokens?
Solution: Users can specify whether Aspen OnLine should draw licenses and tokens for Aspen Plus RTO models from the Engineering license pool or the MSC license pool. However, users should note that we have added a new feature since v12 where if the inp/appdf files provided in the Offline folder are older than the bkp file, Aspen Plus GUI (aspenplus.exe) will be opened to generate them again during the offline-to-online process which will require access to ENG tokens (SLM_AspenPlus). If users do not want this to happen, please make sure that both the inp and appdf files added are newer than the bkp. Key Words: Aspen Plus, Aspen Online (AOL), ENG tokens Keywords: None References: None
Problem Statement: How does the lead-lag calculation execute in Aspen Calc when using OnDemand calculations?
Solution: LeadLag is not a built-in function, so it can't be used in Calcscript. LeadLag can only be used as an on-demand calculation. Create an on-demand calculation with LeadLag, and then make it a shared on-demand calculation if you want to use it with IP21. After saving it as shared, a tag will be created in IP_CalcDef: Key words: LeadLag, Save as shared Keywords: None References: None
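For reference, a generic discrete first-order lead-lag filter (a backward-Euler discretization of (1 + T_lead*s)/(1 + T_lag*s)) looks like the sketch below. This only illustrates the kind of dynamic calculation a LeadLag block performs on demand; it is not necessarily the exact algorithm Aspen Calc's LeadLag uses.

```python
def lead_lag_step(u, u_prev, y_prev, t_lead, t_lag, dt):
    """One step of a unity-gain lead-lag filter, backward-Euler form:
    y*(dt + t_lag) = dt*u + t_lead*(u - u_prev) + t_lag*y_prev."""
    return (dt * u + t_lead * (u - u_prev) + t_lag * y_prev) / (dt + t_lag)

# Step response: an initial jump from the lead term, then a first-order
# settle to the input value (steady-state gain of 1).
y, u_prev = 0.0, 0.0
for _ in range(200):
    y = lead_lag_step(1.0, u_prev, y, t_lead=5.0, t_lag=20.0, dt=1.0)
    u_prev = 1.0
```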
Problem Statement: A1PE does not obtain a license, giving a 'license denied' error on the A1PE page.
Solution: The A1PE webpage can get license denied errors if AspenProcessDataAppPoolX64 crashes. To troubleshoot the issue, check the following settings in IIS.
Check the Advanced Settings in IIS for AspenProcessDataAppPoolX64 as below:
- Maximum Worker Processes should be set to 1.
- The Regular Time Interval value of 1740 is the number of minutes (29 hours) before IIS automatically recycles the app pool. This causes AtProcessDataREST.dll to be unloaded, and all the cache and licenses it has built up are released. This is a performance hit, and the value needs to be set to zero.
Perform an IIS reset after changing the settings.
Monitor from Task Manager that w3wp.exe for AspenProcessDataAppPoolX64 is not crashing.
If crashing is observed, apply the patches below for V11 and V12:
Aspen_ProcessData_V11.0.2_ECR_00720819 https://esupport.aspentech.com/S_SoftwareDeliveryDetail?id=a0e4P00000RnT8aQAF
Aspen_ProcessData_V12.0.0.5_ECR_00742714 https://esupport.aspentech.com/S_SoftwareDeliveryDetail?id=a0e4P00000S7FNXQA3
If the above solution does not work, then please contact [email protected].
Key words: AspenProcessDataAppPoolX64, A1PE, License denied Keywords: None References: None
Problem Statement: How to install the Aspen Process Data Add-in without installing additional AspenTech products?
Solution: Go to the AspenTech Web Server http://webservername/Web21/DownloadAddin.asp Click the Download and install ExcelAddinSetup.exe link to download a file that has the webservername reference as part of its filename. Run the downloaded installation file. Please note: • To install this Excel Add-in, the target machine needs to have Microsoft .NET Framework 4.0 installed. • If your machine has UAC turned on, then you need to run your browser 'As Administrator' and your login account must have permission to update/write to the Windows Registry. • webservername is an important part of the downloaded filename since this suffix is used during the installation of the configuration files - it must be the same name as the web server. Keywords: ProcessData Excel Addin References: None
Problem Statement: When launching the Aspen Production Record Manager ODBC test application, the error message below is encountered.
Solution: This is due to a DLL not being registered. To resolve the error, perform the following steps:
1. Launch a command prompt using Run as administrator.
2. Browse to C:\Program Files\AspenTech\MES\ADSA.
3. Register AtDsaLocator.dll with the command below:
regsvr32 AtDsaLocator.dll
Keywords: APRM ODBC Test (x64) APRM_ODBC_TestApp.exe References: None
Problem Statement: A simulation with electrolytes is converging the tear streams and design specs. However, at the very end of the execution of the simulation, the following error is raised: ** ERROR WHILE GENERATING REPORT FOR STREAM: xxx FLASH CALCULATIONS BYPASSED DUE TO UNREASONABLE SPECIFICATIONS. SPECIFIED TEMPERATURE (MISSING) IS HIGHER THAN THE UPPER LIMIT (1.0000D+04). PROPERTIES WILL NOT BE CALCULATED DUE TO FLASH FAILURE. Note the stream xxx is a tear stream. The effect is that the stream temperature and other properties are not available in the stream results. What is surprising is that the source block worked fine and the outlet temperature can actually be seen in the block results. This error appears more or less randomly, based on changes made in the simulation.
Solution: In this case, the convergence loop flushed the tear stream and set its values to missing in the final iteration. To avoid this problem, you need to uncheck the box for affected block logic. Tear streams with electrolytes do not get flashed, which is generally not a problem since the block in the loop will typically calculate its outlet(s). The tear stream symptom occurs when the checkbox is checked and an outer loop did not converge to a user-tightened tolerance. As the outer loop runs more iterations to reach the user tolerance, it skips the blocks involved with the incident tear stream if those blocks are determined to be unaffected. This causes the tear stream results to be flushed but not recalculated. Whether this happens depends on how the last iteration solved, which is why it appears random. Unchecking the checkbox forces consistency and resolves the problem. Keywords: None References: None
Problem Statement: During a CSV historian sensor data import, it takes more than two minutes to upload into Aspen Mtell. The upload can still be completed, but it will take an enormous amount of time if you have hundreds of CSV files to import. The import for a single CSV file should not take longer than five seconds.
Solution: Mtell does not have the capacity to process time series data with a granularity of more than one data point per second. Here is an example of data that is too granular: Look back at your sensor data and make sure that the data granularity is not more than one data point per second to ensure your CSV files get imported smoothly. Keywords: V11 Data Import System Manager References: None
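As a sanity check before importing, the spacing between consecutive timestamps in a CSV can be verified with a short script. This is an illustrative sketch, not part of Aspen Mtell; the function name and the ISO timestamp format are assumptions:

```python
from datetime import datetime

def check_granularity(timestamps, min_seconds=1.0):
    """Return True if consecutive samples are spaced at least
    `min_seconds` apart (the practical lower limit described above)."""
    parsed = [datetime.fromisoformat(t) for t in timestamps]
    deltas = [(b - a).total_seconds() for a, b in zip(parsed, parsed[1:])]
    return all(d >= min_seconds for d in deltas)

# Sub-second spacing: too granular to import smoothly
too_fine = ["2021-01-01 00:00:00.000", "2021-01-01 00:00:00.500"]
# One point per minute: fine
ok = ["2021-01-01 00:00:00", "2021-01-01 00:01:00"]
```

Running a check like this over each file before a bulk import can flag the too-granular sensors up front instead of waiting minutes per file.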
Problem Statement: How to configure SMTP server to send email for Aspen Mtell? During Aspen Mtell configuration it is required to configure SMTP server setting to allow emails to be sent for alerts generated by the system and watchdog emails. Prerequisites: Create an account on SMTP server that will be used as sender for Alerts and Watchdog emails Make sure this account has permission to send emails Make sure port 25 is open between Mtell Application Server and SMTP server
Solution: Open Aspen Mtell System Manager. Select the Configuration tab. Select Email Servers from the menu on the left under the Email options. Select Create Server from the ribbon. On the right you will now have the option to fill in the server details: Profile name is the friendly name the server is referred to by within Aspen Mtell. Host name or IP Address is the address of the SMTP server. Port should be left at the default unless a custom port is used by the server. Send Email timeout should be left at the default. Enable SSL should be left checked as the default unless otherwise specified. Use as Default should be used if the user wishes Aspen Mtell to default to this server when sending emails; this can be useful when multiple SMTP servers are defined. From Email Account should be filled in with the email address to be shown as the sender when email notifications are sent from Aspen Mtell. User name should be an account that has permission to log onto the SMTP server and is allowed to be used to generate emails. Password is the password of the above account. Domain is the domain that both this account and the SMTP server belong to. In the ribbon, click the Save button to save the server details. Click on the Test Server button in the ribbon. A box will pop up confirming the server is correctly configured. Keywords: Emails Notifications Alerts Messages References: None
Problem Statement: When upgrading an Aspen Mtell database, I get an Operation error “The service has encountered an error processing your request. Please try again. Error code 701.” Error code 701 is a SQL error which means “There is insufficient memory to run this query.” Most Mtell database upgrades are small enough that they will not cause this error. This error has been observed before during Update 480. Update 480 corresponds to V12 CP4 and will be executed when upgrading a database from a version prior to CP4 to CP4 or later.
Solution: If the SQL Server is a physical or virtual machine: Solution 1 On the SQL server, open SSMS (Microsoft SQL Server Management Studio) and connect Right click on the server, and select Properties Go to the Memory page Check the Maximum server memory. It may be too low. If so, increase the memory and try running the upgrade again. Solution 2 On the SQL server, open Task Manager On the Mtell server, trigger the database update again. Monitor the memory on the SQL server in Task Manager. If the memory is consumed during the process, you will need to free up some memory on the SQL server. Try terminating any unnecessary applications or processes. If using an Azure managed instance of SQL: The SQL resources may need to be increased. Try increasing the resources to the next tier (S0 to S1 for example). Run the database upgrade again. If it succeeds, you can decrease the resources back to the original level. Keywords: Database upgrade Update-480 Error code 701 References: None
Problem Statement: This article describes the compatibility between Online Server and PCWS across different versions.
Solution: We always recommend, as a best practice, having all APC servers running on the same version. However, there are scenarios in which you could have different versions of the software. In those cases, where different versions need to coexist, there are a couple of things to consider: 1.- Forward compatibility is not supported by APC software. This basically means that you cannot use recent file versions in old software versions, for example, trying to open a V12.1 DMC3 project using V11 DMC3 Builder. This does not only apply to desktop applications such as DMC3 Builder or DMCplus Model, but also to server applications. In the specific case of PCWS, if the PCWS version is equal to or greater than the controllers' version, PCWS will be able to display them. For example, if PCWS is V12.1 and all your APC applications (IQ, DMC3, DMCplus, etc.) are V10, PCWS will be able to show them. If not (for example, PCWS V10 and APC applications V11), PCWS will not be able to display the applications. 2.- Following the logic of the previous point, the order of the upgrade matters for an APC system. Take the example of an upgrade that will be performed in phases. In this case, as the software will not be upgraded all at the same time, you have to plan a strategy that covers the needs as the upgrade goes through. For example, you can upgrade PCWS and the Aspen Watch server first, since they will support the use of the previous version of the APC applications; then, in a second phase, you can upgrade the DMC server, knowing there won't be any issue related to the supported version. If you upgrade the DMC server first and try to deploy applications before upgrading PCWS, then PCWS will not be able to display the DMC applications. Keywords: DMC3, PCWS, Compatibility References: None
Problem Statement: This article explains how DMC3 Builder determines how many correlation coefficients to plot after running a case.
Solution: The MV Cross-Correlation plot reflects the correlation between two MVs over a time horizon window. A maximum value (between 0 and 1.0) at time t=0 shows the strongest correlation between the two MVs. By default, the scale of the correlation plots cannot be changed, and the Y-axis shows a scale between -1 and 1. For this kind of plot, DMC3 uses an equation to calculate the default number of correlation coefficients: Number of Corr-Coeffs = TTSS x Samples/min. For example: (1) Sampling = 15 sec, TTSS = 30 min, then NCC = 30 x 4 = 120 (for one side), and the display window will be -120 to +120; (2) Sampling = 60 sec, TTSS = 90 min, then NCC = 90 x 1 = 90, and the correlation plot window will be -90 to +90. Keywords: DMC3, Plot Correlations References: None
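The default-coefficient formula above can be expressed as a small helper to predict the plot window before running a case. This is an illustrative sketch; the function name is an assumption:

```python
def correlation_coeffs(sampling_period_s, ttss_min):
    """Default number of correlation coefficients (one side):
    NCC = TTSS [min] x samples per minute."""
    samples_per_min = 60 / sampling_period_s
    return int(ttss_min * samples_per_min)
```

With the examples from the article: a 15-second sample period and TTSS of 30 minutes gives 120 coefficients (window -120 to +120), while a 60-second period with TTSS of 90 minutes gives 90.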
Problem Statement: External Targets are common requests for APC users. This is a quick guide on how External Target can be set up on DMC3 Builder and DMCplus build.
Solution: Setting External Targets in DMCplus: To enable External Targets in a CCF file, select the Tools menu and then select Options. Then select the General tab, where the External Targets option can be found. This option enables External Targets for the controller and allows selecting between three options: Not used - ETs are not taken into consideration. Full RTO - Real Time Optimization values. Limited Use (IRV) - Ideal Resting Values. Please refer to Solution https://esupport.aspentech.com/S_Article?id=000015581 “How do I pick between Full Use (RTO) and Limited Use (IRV) type when enabling External Targets?” for more detailed information about RTO and IRV. In addition, this window allows enabling the ET in the online controller. The next step is to select the variable(s) for which the External Target is to be enabled. After selecting the variable, you will notice that the External Target flag is now active in the top ribbon. Check this flag, and new parameters will be displayed for the selected variable. Finally, choose the ETCV parameter and double-click on it. Change the default value to 1 and, in Tag name, write the tag that will be written/read by the external software. In the Keyword field, you can also change what kind of action the external software will have on that tag. You also need to specify the CIM-IO device and source. Setting External Targets in DMC3: In the DMC3 file, you can set the External Target from the Optimization tab in the controller tree. In the Optimization tab, go to the top ribbon and click on Configure Optimizer, then select the ET option in the Target type selector on the Case Actions tab. Then select which variable will have the External Target; this can be done by selecting the External Target type in the Target option for each variable. On the Simulation tab, click on the variable which has the External Target and find the TARGET attribute.
Also, you will notice that the combined status is changed to Target. In the case of DMC3, the TARGET attribute is the equivalent of the ETCV entry described above for the CCF file. Go to the Deployment tab on the controller tree. In this tab, select the variable that has the ET; you will notice that the TARGET attribute is shown in the variable detail panel. However, if it is not shown, this parameter can be enabled from the Customize option on the top ribbon. Click on Customize, find the TARGET option, and check it. Finally, the IO Source, IO Tag, and IO Database information should be filled in as it was done in the case of a CCF file. Keywords: DMCplus, DMC3, External Target References: None
Problem Statement: From Configure Online Server, the automatic triggering of snapshots from the controllers can be configured. However, this has a limitation of 1 hour (by default, the interval cannot be less than this period). This KB describes an alternative way to trigger snapshots on a defined interval or condition.
Solution: The workaround for this problem is to create an input or output calculation that detects the condition on which to create the snapshot. This can be done in either DMCplus or DMC3. For the purpose of demonstration, this article shows the solution on a DMC3 controller. The DMC3 controller can manually trigger a snapshot using the switch entry TriggerSnapshot, which can be accessed in the General section of the controller in PCWS (click on the controller's name in the Operations section). The goal of the calculation, then, is to create a condition that changes this switch from NO to YES. This is a description of the entry extracted from the DMC3 help file: If the TriggerSnapshot entry value is Yes (1) at the end of a controller cycle, then the controller will create a new snapshot of the online application. The entry value can be changed on demand from PCWS, a User Calculation, or an IO Tag. The RTE scheduler performs snapshots at the end of the application cycle based on certain conditions such as whether a snapshot was requested from DMC3 Builder, or an automatic snapshot is scheduled, or the controller is stopping. For V10 and later, the RTE scheduler will also examine the value of TriggerSnapshot to decide whether to create an application snapshot. If a snapshot is produced, then the TriggerSnapshot entry will be reset to No. If TriggerSnapshot is changed from Yes to No, this will also output an operator message stating that a snapshot was triggered. The following calculation is an example of the use of the entry, based on the condition of triggering a new snapshot on a five-minute interval. Every five minutes a new snapshot will be created. IMPORTANT NOTE: The script shown is limited to being an example; it may contain failures that need to be fixed and may not adapt to all systems. The purpose of the example is just to show the use of the TriggerSnapshot entry.
mmnt = minute(lastrun) 'THIS SECTION IDENTIFIES THE NUMERICAL VALUE OF THE MINUTES FIELD OF THE TIMESTAMP
if mmnt = 0 or mmnt = 5 or mmnt = 10 or mmnt = 15 or mmnt = 20 or mmnt = 25 or mmnt = 30 or mmnt = 35 or mmnt = 40 or mmnt = 45 or mmnt = 50 or mmnt = 55 then 'THE CONDITION IS BASED ON TRIGGERING A SNAPSHOT ON A 5 MIN BASIS
    trigger = 1
else
    trigger = 0
end if
In this example, the controller was running from 1:55 PM to 2:00 PM on a minute cycle period. Keywords: DMC3, Snapshot, Calculations References: None
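The 5-minute condition in the calculation above can be expressed more compactly with a modulo test. This is an illustrative Python sketch of the logic only; the actual calculation must be written in the controller's calculation language, and `trigger_snapshot` is a hypothetical name:

```python
def trigger_snapshot(last_run_minute):
    """Return 1 (Yes) when the cycle lands on a 5-minute boundary,
    mirroring the long if/else chain in the calculation script,
    otherwise 0 (No)."""
    return 1 if last_run_minute % 5 == 0 else 0
```

The same `minute % N == 0` pattern generalizes to any interval N that divides 60, so the snapshot period can be changed without rewriting the condition list.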
Problem Statement: Importing Datasets and Cases information does not contain Local Slice information for the vectors, but it can import Global Slices. This article talks about working with local slices from DMCplus Model.
Solution: Please follow the next steps: 1.- In this example, there is a local slice, an interpolated slice, and a global slice for a data set. 2.- Select all vectors and export them as .dpv files. 3.- Instead of creating a clc file with all vectors, this splits the data set into individual vector files, but they can be group-imported into DMC3 Builder. 4.- Additionally, export the slice information as a .dls file. 5.- Open DMC3 Builder and import the vector files. 6.- You will notice the local slice information is there. 7.- Import the .dls file; by going to the slice information edit box, you will notice that all slicing information is there. Keywords: DMC3 Builder, DMCplus, Slices References: None
Problem Statement: This article frames how to import a non-linear Apollo controller into DMC3 Builder.
Solution: Please follow the next steps: 1.- Open DMC3 Builder and create an APC project. 2.- Go to Controller and then, on the top ribbon, select Import Application. 3.- A pop-up window should appear to select the .run application from Apollo. IMPORTANT NOTE: make sure that the folder from which you import the .run file contains the .xml file too. 4.- At this point, the model should be successfully imported and you should be able to access the controller. 5.- Non-linear controllers can also be exported and imported from the APC project. These files will be saved as xxxx.apcapplication, which can be imported into DMC3 Builder in the same way that xxxx.dmc3application files are. The main condition for an .apcapplication file to be imported is that the project type is an APC project. Keywords: DMC3 Builder, Apollo, APC project References: None
Problem Statement: This article describes what can be checked in case shadow prices are not being calculated by the DMC controller.
Solution: In the DMC controller, shadow prices are typically calculated and will display a non-zero value as long as one of the MVs is an active constraint. However, there are situations where, even if the condition of having active constraints is met, shadow prices display zero values for all MVs. The main reason for this is usually the Steady State Solution Option being used on the controller. This option is controlled by the entry EPSMVPMX (Steady State Solution Option), a general entry on the controller that can be found under General in DMCplus or Simulation > Application Details in DMC3 Builder. The following are the different options that can be selected for EPSMVPMX: 1 (obsolete) now functions the same as option 2 2 Legacy interior point QP algorithm 3 Legacy interior point QP algorithm and generate a debug file at each cycle - don't use for long as this will fill up your disk 4 Active set method - very robust but can be slow for Composite-size problems. Shadow prices are calculated for both LP and QP subproblems. 5 New interior point method for both LP and QP subproblems. No shadow prices are calculated. 6 New interior point method - switch to active set for the objective function rank. The motivation for type 6 is that this enables shadow prices. 7 Not used 8 Use interior point for the objective function rank if the number of variables is greater than 300. Otherwise use active set. This is recommended for Composite controllers only. To solve the problem of shadow prices not being calculated, change the Steady State Solution Option to 4 (EPSMVPMX = 4). This option always allows shadow prices to be calculated. Keywords: DMC, Shadow Prices, EPSMVPMX References: None
Problem Statement: HTP plots can only be saved into the directory C:\inetpub\wwwroot\AspenTech\Web21\Plots when launching the browser with an administrator account; otherwise, the following message appears: “Permission denied - No account was found: Launch browser using ‘Run As Administrator’ or contact your administrator.”
Solution: The way to avoid this message is to allow everyone to save plots in the mentioned directory. Take the following steps: 1.- Go to the directory C:\inetpub\wwwroot\AspenTech\Web21, right-click on the Plots folder, and select Properties. 2.- In the Plots Properties window, go to Security and click Edit. This pops up a window called “Permissions for Plots”. 3.- Click on Add; this will pop up a window called Select Users or Groups. In the “Enter the object names to select” box, write Everyone, then click on Check Names and click OK. 4.- Back in the “Permissions for Plots” window, click Apply and then OK. Click OK for the rest of the windows. This should allow everyone to save plots without having an administrator account. If required, the permission can be restricted to just a few users by giving the user names instead. Keywords: HTP Plots, PCWS References: None
Problem Statement: This article describes the editing error that may appear when working with text files to import into Aspen Watch Maker for Misc or PID collection. The following is an example of one of the errors: Errors in input file: D:\Temp\pidtags.txt Line 36 Column 1: TagName (20-LIC-101/_) has invalid character (/,&,<,>,,' or space). Line 36: 20-LIC-101/_ 20-LIC-101 PID 1000 0 1 1 ns=6;s=0:OS_OWSSERVER1::20-LIC-101/_
Solution: Aspen Watch Maker supports importing txt files for the Misc and PID configuration. This import action requires a specific format in order to work. An example of this can be found below:
# TagName Description EngUnits PlotHigh PlotLow TaskID TagType Address
#
AW_5FIC34 Htr Pass 1 T/H 400 100 1 0 5FIC34.PV
AW_5FIC35 Htr Pass 2 T/H 400 100 1 0 5FIC35.PV
AW_5FIC36 Htr Pass 3 T/H 400 100 1 0 5FIC36.PV
AW_5FIC37 Htr Pass 4 T/H 400 100 1 0 5FIC37.PV
#
# The following are 3 PID Tags
#
# Use DCS Type: TDC3000 or TDC3000_OPC
#
# TagName Description EngUnits PlotHigh PlotLow TaskID TagType Address
#
AW_FIC107 Ovhd Prod KBPD 30 0 1 1 FIC107
AW_FIC201 Top Reflux KBPD 50 0 1 1 FIC201
AW_FIC210 Side Draw KBPD 60 0 1 1 FIC210
Notice that TagName and Address are different entries in the file. The TagName is a unique name that will be registered as a record in the IP.21 database, while the Address is the tag to be collected from the OPC server. Good practice dictates that it is a good idea to use the same Address and TagName. However, in some situations the Address may contain special characters (/,&,<,>,,' or space) in it (for example, some Foxbridge tags) and, unfortunately, AW Maker does not accept those kinds of characters in the name for the collection. As a result, trying to use those characters in the name will prompt the mentioned error when importing the file. Moreover, IP.21 and AW Maker cannot provide a direct solution to enter special characters in the name; the suggestion/solution is to manually edit or modify the text before import. Remember that what is being validated is not the I/O point on the OPC server but the tag name that would be saved in the AW database. Here are some further notes that can be found in the file example on the Aspen Watch server.
C:\ProgramData\AspenTech\APC\Performance Monitor\Tools
# TagName - (max 16 characters) must be unique within all records in the Aspen Watch database
# Description - (max 32 characters)
# EngUnits - (max 16 characters)
# PlotHigh, PlotLow - real values used for plotting only
# TaskID - task number to distribute processing load: TSK_PIDx (where x is 1, 2 or 3 - ignored for Misc tags)
# TagType - 0 for Misc Tag, 1 for PID Tag
# Address - (up to 40 characters) this is the DCS-specific Cim-IO tag name. For PID tags, it is the base address w/o parameter
# NOTE: Leave Address blank to create a Misc tag with no I/O connection (for local calculations, etc.)
Keywords: PID, Misc Tags, AspenWatch References: None
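Before importing, a file's tag names can be pre-checked against the character and length rules quoted above, avoiding the manual edit-retry cycle. This is an illustrative sketch; the function names and the underscore replacement policy are assumptions, not part of AW Maker:

```python
# Characters AW Maker rejects in a TagName, per the error message:
# / & < > , ' and space
INVALID = set("/&<>,' ")

def valid_tagname(name):
    """Check the AW Maker TagName rules described above:
    max 16 characters and none of the rejected special characters."""
    return len(name) <= 16 and not (set(name) & INVALID)

def sanitize(name, replacement="_"):
    """One possible manual edit: replace each rejected character
    before import (the Address can keep its original characters)."""
    return "".join(replacement if c in INVALID else c for c in name)
```

Applied over the TagName column of an import file, this flags entries like 20-LIC-101/_ before AW Maker rejects them, while the Address column is left untouched.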
Problem Statement: When trying to apply a patch to Aspen Mtell, you receive the following error: Product Prefix not found This error will appear as the aspenONE Update Agent tries to validate each product to be upgraded.
Solution: Before installing a patch the aspenONE Update Agent checks to confirm that a valid build version is currently installed. The above error message will be generated if an incorrect base version is installed or if there was a problem with the installation of the correct build version. Follow the steps below to resolve the issue. 1. Determine what versions of Mtell this patch is compatible with: a. Browse for patches on the AspenTech Support Site (https://esupport.aspentech.com/s_homepage#productPatches) b. Select the product Family, Product and base Version from the drop down lists and click Go c. Click on the name of the patch you are applying to take you to its information page d. Find the versions of Aspen Mtell this patch can be applied to in the Product and Version section 2. Open Aspen Mtell System Manager and navigate to Configuration -> Settings -> Agent Services, view the current build version in the Build field The version in this picture is V12.0.3 3. If the build version does not match any of the versions listed in step 1 you will need to install a compatible version before applying this patch, you will find installation media in the AspenTech Download Center . If your build version is listed under the supported versions for this patch, proceed to step 4. 4. Locate the installer for the base version of Mtell that needs to be repaired. The folder will be named Aspen-VXX.X-APM-Mtell and the file to run the installer is Setup.exe, if you no longer have the installer, you can download it at the Download Center 5. Right click on Setup.exe and choose Run as administrator 6. Click INSTALL NOW 7. Select Repair 8. Read and accept the terms of the agreement 9. Specify licensing, if it is already filled out click Next 10. Select Install Now 11. Reboot the machine 12. You should now be able to install the patch Keywords: Mtell Patch aspenONE Update Installer Product Prefix References: None
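The version comparison in steps 1 and 2 can be automated with a trivial check. This is an illustrative sketch; the list of supported versions below is hypothetical and must be taken from the patch's information page:

```python
# Hypothetical supported-version list from a patch's information page
SUPPORTED = {"12.0.1", "12.0.2", "12.0.3"}

def patch_applies(build_version, supported=SUPPORTED):
    """Compare the Build field from System Manager (e.g. 'V12.0.3')
    against the versions listed on the patch page."""
    return build_version.lstrip("Vv") in supported
```

If the check fails, the base version must be installed or repaired (steps 3 onward) before the aspenONE Update Agent will accept the patch.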
Problem Statement: This error was detected on V12 and V12.1 of CIMIO and APC V12.1. The problem can happen when CIMIO Diagnostic is enabled to track Reads and Write failures and OPC requests. As part of the symptoms once the CIMIO Diag log is enabled the APC controller will start logging messages such as Write Failure from Application XXXX. This is a general error that affects the entire APC controller and will not show any particular variable, moreover, it can show Time Outs as another problem.
Solution: The problem is related to the time that CIMIO and APC take to process all read and write requests. Once CIMIO diagnostics is disabled, the APC controller should be able to send the information to the OPC server, since the resources for the diagnostics log will no longer be used. If the CIMIO diagnostics logs need to be used, another workaround is to increase the timeout of the I/O source. To access the I/O source, open the Configure Online Server application on the DMC/Online server. Follow the next steps to modify the timeout value: 1.- From the Windows Start menu, open Configure Online Server. 2.- Once it is open, go to the IO tab and select the I/O source that the controller is using. 3.- On the right-hand side panel, select Edit Source. 4.- Change the value of Timeout (as a general observation, around 120 seconds will work; as a rule of thumb, use double the controller cycle). 5.- Finally, click OK, then on the main interface click Apply and OK to accept all changes. 6.- Turn on the controller with the CIMIO diagnostics log running. Keywords: DMC3, CIMIO, Configure Online Server References: None
Problem Statement: This article frames a workaround for when importing a CLC file fails with the error message: Operation failed: atclc_readsampsect() failed with error code 20 Error reading sample time from sample data line
Solution: 1.- Import the clc file into DMCplus Model as a DMCplus project. 2.- Go to the Vector List node and select the only vector list that should appear under that node, then right-click and select Export #name.clc. 3.- This will pop up an Export Vector List window. Save the dataset on your desktop, change the name, and change the extension to clc. 4.- Open a new DMC3 Builder project and try to import the new clc file. Notes: It is highly possible that the original clc file is corrupted; in this case, DMCplus Model has more tolerance for opening these files than DMC3 Builder has, and even if the mentioned workaround can bypass the problem, it is very possible that the imported file contains less data than the original. In any case, it is good practice to verify what could have caused the corruption of the clc file and to check the data inside it. Keywords: DMC3 Builder, DMCplus Model, CLC file References: None
Problem Statement: This article describes the differences between two simulation options for APC controllers that can be used in PCWS.
Solution: Online Simulation and What-If Simulation are different options for running changes and analysis on the controller without having to send information to the OPC server, while still having the PCWS display of the APC controller. Online Simulation can be deployed directly from DMC3 Builder with no further configuration, while What-If Simulation requires Aspen Watch to be used. The attached PDF document provides a more detailed overview of the use and display of these tools. Keywords: PCWS, Online Simulation, What-If Simulation References: None
Problem Statement: This article described what can be done in case Configure Online Server fails to open and return an error about account privileges
Solution: The problem might be related to CIMIO or AFW problems. In the case of CIMIO problems, the common cause is that another instance is using the same CIMIO interface and does not allow modifications. In this case, close applications such as: CIMIO Interfaces Manager, CIMIO Test API, Aspen DMC3 (in case deployment is being tested). This will allow Configure Online Server to launch. In case it is an AFW problem, you can try the following solutions. Solution 1: Change the account that AFW is currently using to the Local System account. For the changes to take place, a restart of the service is required. Please note that when AFW is restarted, the RTE and Web Data Provider services will be restarted as well (RTE and Web Data Provider are dependencies of AFW), and this can cause the controller to stop and disappear from PCWS, or the web page to become unavailable for a while. It is highly recommended to back up, turn off, and stop the controllers to avoid malfunctioning, as well as to notify users that PCWS will be down while the service is restored. Solution 2: Clean up the AFW cache folder. This can be done following the next steps: 1.- Shut down the RTE service and the Web Data Provider service (when you shut down RTE it will stop the controller and disconnect the Online Server, so please make sure to manually turn off and stop the controllers before stopping RTE). 2.- Shut down the AFW Security Client service. 3.- Go to C:\ProgramData\AspenTech\AFW and make a copy of all files inside that folder in a desktop folder as a backup; these are mostly AFW cache files, and sometimes they can get corrupted: Aclcache, afwcache, applcache, rolecache. 4.- Restart the AFW Security Client service, RTE service, and Web Data Provider, then go to the path mentioned in step 3 and make sure new cache files were created. 5.- Try again to open Configure Online Server (remember to run as admin). Keywords: Configure Online Server, CIMIO, AFW References: None
Problem Statement: This article provides guidance on the firewall port configuration for the communication between GDOT Web Viewer and GDOT Online.
Solution: The communication between the GDOT V11.0 Web Viewer and the GDOT V11.0 online server uses HTTP/2. The as-installed port is 8000, but it is suggested to change it to something else, for example a port between 1025 and 48000. To avoid using a port that is already reserved or in use on the GDOT online server, review the C:\Windows\System32\drivers\etc\services file; also run (in a command window) the command “netstat -a -n” and review the Local Address column for ports in use. Also, consider that the port has to be specified in two different config files: On the GDOT online server: C:\ProgramData\AspenTech\GDOT Online\V12.0\WebBackEnd\GDOTOnlineWebCoreConfig.txt On the GDOT Web Viewer server: C:\ProgramData\AspenTech\GDOT Online\V12.0\WebFrontEnd\GDOTOnlineWebViewerConfig.json In the “GDOTOnlineWebCoreConfig.txt” installed on the GDOT online server, the port also has to be changed, but make sure that the IP address does not change. The file will look something like this, where the highlighted text is the port to be changed: setCredentials Server=APEXOPT\SQLEXPRESS; Database=GDOTOnlineHistory; Trusted_Connection=True; MultipleActiveResultSets=True; serverstart 127.0.0.1 8000 appstart GDOT.REF_OPT.1 Messages OFF Restart the GDOT Core Service after changing the file. In the “GDOTOnlineWebViewerConfig.json” installed on the GDOT Web Viewer, the port also has to be changed, and in this case the IP address should match the GDOT online server. The file will look something like this, where the highlighted text is the port and IP address to be changed: { ipAddress: 127.0.0.1, port: 8000, GdotDCVersion: 2.2.x, DiagramDataFile: gdv-default.db, DiagramImageFolder: Default } Keywords: GDOT Online, GDOT Web Viewer, Ports References: None
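Keeping the Web Viewer config's IP address and port in sync with the online server can be scripted. This is an illustrative sketch that assumes the file parses as standard JSON (the shipped file may use relaxed syntax with unquoted keys, in which case it should be edited by hand); the function name is an assumption:

```python
import json

def set_viewer_port(config_text, ip, port):
    """Update ipAddress/port in GDOTOnlineWebViewerConfig.json text
    so they match the GDOT online server, leaving other keys intact."""
    cfg = json.loads(config_text)
    cfg["ipAddress"] = ip
    cfg["port"] = port
    return json.dumps(cfg, indent=2)
```

Remember that the same port must also be set in GDOTOnlineWebCoreConfig.txt on the online server, and the GDOT Core Service restarted, for the two sides to connect.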
Problem Statement: This article frames what could trigger the message General Hard Move Resolution in all MVs for the Ramp in Control mode.
Solution: This error occurs when the engine decides to overwrite the MOVRES set by the user. This is a consideration that applies to ramp variables in any controller mode (Control / SmartStep / Calibrate). Here is an extract on how MOVRES acts on ramp variables (this can also be consulted in the DMC3/DMCplus help file): Special consideration for ramp handle manipulated variables This feature is not recommended for MVs that are primary ramp handles. The ramp will not be controlled very well. The dynamic move plan will often make changes to MVs to move the ramps around (even if there is no change in SS target for the MV). Be aware that this feature will frequently repress or accelerate the moves used to control the ramp and result in poor ramp control performance. To protect a ramp variable from drifting away due to a non-zero move resolution being used in ramp handle MVs, the engine may have to override the user-supplied MOVRES value in some circumstances. The user can prevent the engine from overriding the MOVRES value by setting STCORRECT to -1. To access the STCORRECT entry, the user must add a general User Defined Entry (e.g. MYCORRECT) and then connect it to STCORRECT using an input calculation in the CCF (STCORRECT = MYCORRECT). Then set the user-defined entry to -1 or 0 depending on the preference for this behavior. Here is how move resolution is handled in the engine (applicable to Control/SmartStep/Calibrate modes): 1. If no CV violation occurs or STCORRECT = -1, the user-supplied MOVRES will be honored. 2. When in Control mode, if a ramp CV violates its constraint, the non-zero move resolution of relevant ramp handle MVs will be reset to 0 if STCORRECT <> -1; otherwise, a warning message will be issued. 3.
When in SmartStep or Calibrate mode, the engine will calculate a MOVRES based on the CV Test Margin and the model gain matrix in such a way that the MV will be able to make a move should the relevant CV be one Test Margin outside the limit. The engine will then use the smaller of the user-entered value and the calculated value if STCORRECT <> -1. Keywords: Controller Mode, Move Resolution References: None
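The SmartStep/Calibrate behavior described in item 3 above can be illustrated with a minimal sketch for a single MV-CV pair. This is not AspenTech's actual implementation; the function name, the use of a scalar gain instead of the full gain matrix, and the exact formula are assumptions made only to show the min(user, calculated) logic:

```python
def effective_movres(user_movres, test_margin, gain, stcorrect=0):
    """Sketch of how a move resolution might be selected in
    SmartStep/Calibrate mode, per the description above.

    The calculated value is the MV move needed to correct a CV that is
    one Test Margin outside its limit (scalar gain assumed); the engine
    then honors the smaller of the user value and this calculated value,
    unless the user opted out with STCORRECT = -1.
    """
    if stcorrect == -1:
        return user_movres  # user value always honored
    calculated = abs(test_margin / gain)
    return min(user_movres, calculated)

# A user MOVRES of 2.0 is overridden when the margin/gain ratio is smaller
print(effective_movres(2.0, test_margin=0.5, gain=1.0))                 # 0.5
print(effective_movres(2.0, test_margin=0.5, gain=1.0, stcorrect=-1))   # 2.0
```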
Problem Statement: This KB article explains how to check the Emergency Patch (EP) level on the APC system.
Solution: The best way to check the EP level is to check the version of one of the functional areas affected by that patch. On the support page for the EP, there is a list of the functional areas affected and their versions. Every EP installation outputs an APC_ECR.log file with more details on the functional areas affected; this is useful as well to determine the current EP level. Here is a case for the EP2 for V10 CP2, which is applied on the AW and Web servers. The observations are: Check the output APC_ECR.log to see which files were affected and where they are located. For example, the version of WebDataProviderSvc.exe in C:\Program Files (x86)\AspenTech\Web server\bin. On the other hand, emergency patches applied to the Online and desktop servers affect files in: C:\Program Files (x86)\AspenTech\APC\Vxx\Builder C:\Program Files (x86)\AspenTech\RTE\Vxx where Vxx is the software version. The software always renames the previous versions of the affected files; this is another quick way to determine the current EP level. In this case we have applied the EP5 for V10 CP2. Keywords: Emergency patch, DMC 3 Builder, Aspen Watch and Web Servers. References: None
Problem Statement: The issue has been observed where all Aspen Watch features on the PCWS web page are working properly; however, when trying to run a custom report from the web, it shows the following message: 'Error generating report <Report Name>, section CSS_PART Failed to connect to server'
Solution: The root cause of this issue is likely that the ADSA data source name does not match between the Web and Watch servers. To resolve this issue, change the data source name on the Web server to match the data source name on the Watch server. First, go to your Aspen Watch server and open the program called ADSA Client Config Tool, which opens a dialog box titled ADSA Properties. Then go to the Configuration tab on top and select Public Data Sources. There you should see the data source name; note it down. Now perform the same steps on the APC Web server: open the ADSA Client Config Tool and verify whether the data source name used here is the same as the one used on the Watch server. If it is not, change the data source name on the Web server to be the same as on the Watch server, hit Apply, then close and re-open the web browser to check whether the issue is resolved. The data source name in this example screenshot is APCV121: Keywords: None References: None
Problem Statement: After a new installation of V12 MES suite, Aspen Process Explorer does not open.
Solution: This is a known issue that can be addressed by adding a registry key to the V12 installation. Open Registry Editor on the machine that is having the issue. Go to HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\AspenTech. Add a new String named “DoNotRegister” and set the data value to 1. (See example below.) Open Aspen Process Explorer again; you should now be able to open it. Fixed in Version: Issue detected in V12 and V12.1. The issue will be fixed in a future release. Keywords: Aspen Process Explorer Aspen splash screen V12 MES References: None
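The registry change for the Aspen Process Explorer issue above can also be captured in a .reg file so it can be applied consistently on affected machines (key path and value name taken from the steps above; double-check them on your system before merging):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\AspenTech]
"DoNotRegister"="1"
```

Save the text as a .reg file and double-click it (with administrator rights) to merge it into the registry.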
Problem Statement: When running predictions for raw variables, the scaling of measurements and predictions corresponds with the measurement scaling shown in the dataset. This KB explains why the scale shown for predictions and measurements is different when running predictions for ramp variables, and gives an explanation of how this processing works.
Solution: Model ID considers ramps to be at steady state when their rate of change is constant; in fact, identification for ramps works with differenced values. Therefore, what is shown in the predictions is not the raw measurement but the measurement of the ramp variable differenced and then filtered (the ramp delta). It is possible to see the processing behind the scenes with these steps (second plot in the previous image):
1. Go to the Datasets section, click Add Calc, and create a new vector using the Difference formula, setting the vector equal to a ramp variable, maxmove = 0, and default_value = 0, then click OK.
2. Click Add Calc to create another new vector using the Filter - Low Pass Exp formula, setting the vector equal to the difference vector created in step 1; use a factor = 0.99.
3. Add your new filtered vector to your case as a stable variable (not a ramp) and run a model identification. The ramp variable will generate a ramp curve while the differenced and filtered ramp will generate a steady-state curve; to see the comparison, look at the predictions for both your ramp variable and the filtered vector using Compare Predictions.
You should see that, though they may not look exactly the same, they look fairly similar. In effect, this is what you are seeing when you look at the ramp delta measurements and predictions: the measurement for the ramp variable is the differenced and filtered ramp measurement, and so are the predictions. If you take a snapshot of the predictions by clicking Save on Compare Predictions, it will generate a dataset with these processed values instead of the raw values. You can change the time constant for the ramp filter; the default is 100 minutes. This is what determines how much filtering is done.
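The differencing and exponential filtering applied to the ramp measurement can be approximated outside the tool. The sketch below follows the calc names used above, but the exact numerics of the built-in Difference and Filter - Low Pass Exp calcs are assumptions:

```python
def difference(series, max_move=0.0, default_value=0.0):
    """First difference of a measurement vector. The first sample has no
    predecessor, so it takes default_value; max_move=0 means moves are
    not clipped (mirroring the settings in the steps above)."""
    out = [default_value]
    for prev, cur in zip(series, series[1:]):
        delta = cur - prev
        if max_move > 0:
            delta = max(-max_move, min(max_move, delta))
        out.append(delta)
    return out

def low_pass_exp(series, factor=0.99):
    """First-order exponential low-pass filter: each output blends the
    previous filtered value with the new sample."""
    out = []
    y = series[0]
    for x in series:
        y = factor * y + (1.0 - factor) * x
        out.append(y)
    return out

ramp = [0.0, 1.0, 2.1, 3.0, 4.2]          # a steadily ramping measurement
ramp_delta = low_pass_exp(difference(ramp))  # roughly constant, i.e. steady state
```

With a constant ramp rate, the differenced series is roughly constant, which is why the filtered vector identifies like a stable (steady-state) variable.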
Go to File > Options > Case Management > Time constant for ramp filter in minutes.
Visualizing the actual prediction for ramps: go to File > Options > Case Management > Use differences for predictions of ramps. This determines whether the ramp delta is used for visualization; in other words, this option will visualize the actual measurement of the ramp in the prediction (first plot in the next image). This is only for visualization and does not affect the ID results. Keywords: Predictions, Ramp variable, Scaling References: None
Problem Statement: CIM IO is the main AspenTech application that connects to the OPC server. There might be cases in which an AspenTech application that requests a large number of tags from Delta V will not work and marks the tags invalid. For example, Collect.exe can request a large number of tags at the same time, but in this situation IDB_ST_ERRORs are output for tags previously validated with Test API. Changing the CIM IO list size in the collect file does not seem to work, nor does creating an environment variable (to perform cache reads) according to KB https://esupport.aspentech.com/S_Article?id=000074039. This error is difficult to diagnose with Test API GET, since Test API requests one tag at a time. The procedure in KB https://esupport.aspentech.com/S_Article?id=000064839 has proved useful to diagnose this kind of error, since it will flag as bad blocks of data that read as good with Test API.
Solution: Make sure the customer has enough license points on the Delta V historian; check KB https://esupport.aspentech.com/S_Article?id=000087408 for the error '0xC004080B: Exceeded OPC Server license limit. Item not added.' on the OPC machine. After adding new licenses, a restart of the OPC machine might be needed to broadcast the additional points. Keywords: OPC, CIM IO, Delta V, collect References: None
Problem Statement: This KB article explains how to use the Get Transforms feature inside DMC3 Builder. This is useful when updating master model curves from imported cases without losing the already specified transform configuration.
Solution: This feature is available by right-clicking on an input or output variable inside a specific case. The software will show a message with the current transform to be copied from the master model into the case. Remember that the transform will affect the ID; therefore, you will need to run Identify again to see the new response. Keywords: Transform, DMC3 Builder, Master Model References: None
Problem Statement: How to change plant capacity by Individual project area in Aspen Capital Cost Estimator?
Solution: Aspen Capital Cost Estimator lets you evaluate alternate plant capacities. When you change plant capacity, Aspen Capital Cost Estimator re-sizes each project component to a desired plant capacity. User can change the plant capacity for the whole project using the Decision Analyzer. The capacity can be changed for individual areas. To change plant capacity: Open your baseline project and save it under a new scenario name that reflects the new capacity. This will ensure that your baseline project remains intact, separate, and apart from your about-to-be scaled project. On the Run menu, click Decision Analyzer or click the “A” button on the toolbar. The Decision Analyzer dialog box appears. Select the Scale by Area check box, and then click the Select Areas button. The Scale by Areas dialog box appears. On the Scale by Areas dialog box, select the check boxes for the areas where you want to adjust the scale. In the Scaling Factor column, edit the scale as desired for each selected area. Click OK to return to the Decision Analyzer form. Keywords: Plant capacity, Scale by Area References: None
Problem Statement: How to change plant capacity in Aspen Capital Cost Estimator?
Solution: Aspen Capital Cost Estimator lets you evaluate alternate plant capacities. When you change plant capacity, Aspen Capital Cost Estimator re-sizes each project component to a desired plant capacity. Unique expert system rules, based on engineering principles, provide the basis for revising the size of every project component in the process facility that is implicated in stream flows, as well as the size of other plant facility components in the plant layout, including process and utility components inside battery limits (ISBL) and outside battery limits (OSBL), associated installation bulks, piping, cable runs, buildings, structures, pipe racks, and site improvements. To change plant capacity: Open your baseline project and save it under a new scenario name that reflects the new capacity. This will ensure that your baseline project remains intact, separate, and apart from your about-to-be scaled project. On the Run menu, click Decision Analyzer or click the “A” button on the toolbar. The Decision Analyzer dialog box appears. Select the Change Plant Capacity by (5-600%) check box to change the plant capacity for all areas. If you opted to change plant capacity, type the desired percentage adjustment or select it using the Up/Down arrow buttons. If you need to revise the capacity beyond 600%, for example to 700%, scale your project twice. For this, the Evaluate Project check box should be cleared. Then you can split the desired 700% into two parts: first use 350%, and on completion, scale it again at 200%. Click OK to initiate the Analyzer Scale-up Module. Upon completion, save the scaled project. Keywords: Plant capacity References: None
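The two-pass workaround above relies on scale factors compounding multiplicatively: scaling at 350% and then at 200% yields the desired 700% overall. A quick arithmetic check:

```python
# Scaling twice multiplies the factors: 350% then 200% gives 700% overall.
first_pass = 3.50    # 350% applied in the first Decision Analyzer run
second_pass = 2.00   # 200% applied in the second run
overall = first_pass * second_pass
print(f"{overall:.0%}")  # 700%
```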
Problem Statement: How to add Ductile iron pipe in Aspen Economic Evaluation Tools?
Solution: Ductile iron (DI) pipe is commonly used for water and wastewater applications. The plant bulk model “BPIPDI PIPE” is available under the list of available piping plant bulks in all three EE products. The model is cataloged under “Plant Bulks > Piping > Ductile Iron Pipe” in the component palette and in the “Add component” dialog. Keywords: Ductile iron References: None
Problem Statement: How to select round fiber formed foundation type for steel pipe racks in ACCE?
Solution: Round fiber formed pier foundations are occasionally used to support steel pipe racks instead of spread footings or deep pile foundations. A foundation type field on the pipe rack form lets the user select between spread footing foundations and round tube formed pier foundations (TYP vs. PIER). Keywords: Round fiber formed foundation, pipe racks References: None
Problem Statement: What are different types of Excel report options in ACCE?
Solution: User can select different type of Excel report options in ACCE. To modify the type of report that is displayed, open the Tools | Options | Preferences menu. On the Reporting tab, under Excel report, there are three possible options. Always overwrite previously run Excel reports will reset the existing workbook with the selected report as the only worksheet; any previously created worksheets will be cleared. Append to the existing Excel reports will add the report as another worksheet in the existing workbook; previously created worksheets will be retained. Prompt for selection dialog will launch a dialog upon running reports asking you to choose to either Overwrite previously run Excel reports or Append to the existing Excel reports. Keywords: Excel report References: None
Problem Statement: How to change the default location to save the report definition in Aspen Flare System Analyzer?
Solution: User can specify the directories in which to save the report definition for each of the entries in the Report column. This allows the user to maintain a range of alternative report definitions for each type of report. On the File | Preferences | Reports tab, user can change the default location to save the report definition. The Save Report Format Paths with Model check box allows the model to be tied to a particular set of report formats that might be specific to the model rather than to the report formats. Keywords: Report definition References: None
Problem Statement: How to set default units set for simulation files in Aspen Flare System Analyzer?
Solution: User can set the default units set to be used for the simulation in Aspen Flare System Analyzer. The available unit sets are Metric, British, Metric_g (for gauge) and British_g (for gauge). On the File | Preferences | General tab, user can set the default units set. Keywords: Units Set References: None
Problem Statement: How to select default pipe material for new pipes in Aspen Flare System Analyzer?
Solution: User can set the default pipe material in Aspen Flare System Analyzer. The two materials available for selection are Carbon Steel and Stainless Steel. On the File | Preferences | Defaults tab, user can change the default material for new pipes only. For existing pipes the preference is saved with the case and will remain the same. Keywords: Pipe material References: None
Problem Statement: How to set default Tee types in Aspen Flare System Analyzer?
Solution: User can select the tee type to be set as a default for all the tees in the model. The available tee types are 90°, 60°, 45°, and 30° tees. On the File | Preferences | Defaults tab, user can change the default tee type for new cases only. For older cases the preference is saved with the case and will remain the same. Keywords: Tee Type References: None
Problem Statement: How to select default Composition basis option for each of the relief sources in Aspen Flare System Analyzer?
Solution: User can select a default Composition basis option for each of the relief sources. The available composition basis options are Molecular Weight and Mole/Mass Fractions. Molecular Weight - The molecular weight of the fluid is given; mole fractions are estimated based upon the list of installed components. Mole/Mass Fractions - A full component-by-component composition must be given for the fluid. On the File | Preferences | Defaults tab, user can change the default Composition basis option for new cases only. For older cases the preference is saved with the case and will remain the same. Keywords: Composition basis, Molecular Weight, Mole/Mass Fractions References: None
Problem Statement: Aspen Utility - How to use “LookUp Table” for HRSG to provide Steam flow vs. Efficiency data?
Solution: For V11:
Right-click on the HRSG icon in the flowsheet.
Select Forms -> All Variables.
Look for EffMethod.
Select LookUpTable if the user has Steam flow vs. Efficiency data available.
Five cells for EffTab and FlowTab will be created in the form by default. The EffTab needs values for the efficiency of the HRSG; the FlowTab refers to the steam flow generated in the HRSG. It is important to point out that the efficiency is the dependent variable and the steam flow is the independent variable. If the user wants to modify the number of points in the table, this value can be changed in the cell named NEffPoints. The default value is 5.
For V12: The same steps as above for V11 can be followed, but it is also possible to add the Eff. Table data quickly using the new option under the All Variables table in V12. The user first needs to select, from the All Variables table, the option EffMethod -> LookUpTable. Then:
Right-click on the HRSG icon.
Click on Forms -> Eff_Tab.
Change the number of points if necessary and fill the cells with the available information of SteamFlow vs. Efficiency. Keywords: HRSG, Efficiency, LookUp Table References: None
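How Aspen Utilities evaluates the table between entered points is not documented here, but lookup tables of this kind are typically evaluated by linear interpolation between the tabulated points, clamped at the end points. A sketch under that assumption, with made-up FlowTab/EffTab values:

```python
import bisect

def lookup_efficiency(flow, flow_tab, eff_tab):
    """Linearly interpolate efficiency at a given steam flow.
    flow_tab must be sorted ascending; flows outside the table
    are clamped to the end-point efficiencies."""
    if flow <= flow_tab[0]:
        return eff_tab[0]
    if flow >= flow_tab[-1]:
        return eff_tab[-1]
    i = bisect.bisect_right(flow_tab, flow)
    x0, x1 = flow_tab[i - 1], flow_tab[i]
    y0, y1 = eff_tab[i - 1], eff_tab[i]
    return y0 + (y1 - y0) * (flow - x0) / (x1 - x0)

# Hypothetical 5-point table (NEffPoints = 5): steam flow vs. efficiency
flow_tab = [20.0, 40.0, 60.0, 80.0, 100.0]
eff_tab  = [0.70, 0.78, 0.83, 0.85, 0.86]
print(lookup_efficiency(50.0, flow_tab, eff_tab))  # midway between 0.78 and 0.83
```

This illustrates why the steam flow is the independent variable (the lookup key) and the efficiency the dependent one.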